OMG QIP=PSPACE!

Today a new paper of great significance to quantum computational complexity appeared on the arXiv: arXiv:0907.4737 (vote for it on SciRate here)

Title: QIP = PSPACE
Authors: Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay, John Watrous

We prove that the complexity class QIP, which consists of all problems having quantum interactive proof systems, is contained in PSPACE. This containment is proved by applying a parallelized form of the matrix multiplicative weights update method to a class of semidefinite programs that captures the computational power of quantum interactive proofs. As the containment of PSPACE in QIP follows immediately from the well-known equality IP = PSPACE, the equality QIP = PSPACE follows.
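For those curious about the technical engine named in the abstract: the matrix multiplicative weights update method maintains a density matrix proportional to the matrix exponential of the negated accumulated losses. Below is a minimal, generic sketch of that update rule in Python. To be clear, this is only the textbook rule, not the parallelized SDP solver the authors actually construct, and the random loss matrices in the demo are made up purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu(loss_matrices, eps=0.1):
    """Generic matrix multiplicative weights update: keep a density matrix
    rho_t proportional to exp(-eps * sum of the loss matrices seen so far).
    Each loss matrix is assumed Hermitian with eigenvalues in [0, 1]."""
    d = loss_matrices[0].shape[0]
    cumulative = np.zeros((d, d))
    total_loss = 0.0
    for M in loss_matrices:
        W = expm(-eps * cumulative)             # exponentiate the negated losses
        rho = W / np.trace(W)                   # normalize to trace one
        total_loss += float(np.trace(rho @ M))  # loss incurred this round
        cumulative += M
    return rho, total_loss

# Tiny demo with made-up Hermitian losses scaled to have eigenvalues in [0, 1].
rng = np.random.default_rng(0)
losses = []
for _ in range(50):
    A = rng.standard_normal((4, 4))
    H = (A + A.T) / 2
    H = H - np.linalg.eigvalsh(H).min() * np.eye(4)  # shift to positive semidefinite
    losses.append(H / np.linalg.eigvalsh(H).max())   # scale eigenvalues into [0, 1]
print(matrix_mwu(losses))
```

Roughly speaking, the accumulated loss of the iterates rho_t is not much worse than that of the best fixed density matrix in hindsight, and a regret guarantee of that flavor is what lets one approximately solve the semidefinite programs that capture quantum interactive proofs.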

This solves a long-standing open problem (we can say "long-standing" in quantum computing when we mean less than nine years, because our field is so young). It has always been known that PSPACE is in QIP, but prior to this result (assuming it's correct: I just downloaded the paper and am starting to read it now) the best known inclusion in the other direction was that QIP is in EXP (a result due to Kitaev and Watrous). But wait a second, what the hell do all these letters represent?

PSPACE is the class of decision problems decidable by a Turing machine using only a polynomial amount of memory (polynomial in the problem size) and an unlimited amount of time. This should be contrasted with P, which is the class of decision problems decidable by a deterministic Turing machine using a polynomial amount of time (and hence a polynomial amount of memory). While things in P are problems that can reasonably be called efficiently solvable, things in PSPACE are not: PSPACE is thought to contain problems which are very much intractable. This, of course, confounds most physicists, who upon hearing this result mumble something about space and time and spacetime. Of course these physicists are wrong and have read way too much into the popular expression that "Einstein put space and time on equal footing." Or maybe they are correct and there is a revolution waiting to happen (one would note, for instance, that if one allows closed timelike curves, then BPP with closed timelike curves equals PSPACE, as proven by Aaronson and Watrous here).
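To make the "polynomial memory, unlimited time" trade-off concrete, here is a toy evaluator for totally quantified Boolean formulas, the canonical PSPACE-complete problem. This is a purely illustrative Python sketch, not anything from the paper: the recursion uses memory polynomial in the number of variables but takes exponential time.

```python
def eval_qbf(quantifiers, matrix, assignment=()):
    """Evaluate a fully quantified Boolean formula by brute-force recursion.
    quantifiers: a list of 'A' (for all) / 'E' (exists), one per variable.
    matrix: a function from a tuple of booleans to True/False.
    Memory use is polynomial (the recursion stack); time is exponential."""
    if len(assignment) == len(quantifiers):
        return matrix(assignment)
    q = quantifiers[len(assignment)]
    branches = (eval_qbf(quantifiers, matrix, assignment + (b,)) for b in (False, True))
    return all(branches) if q == 'A' else any(branches)

# Example: "for all x there exists y such that x != y" -- true.
print(eval_qbf(['A', 'E'], lambda a: a[0] != a[1]))
```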

Okay, PSPACE seems like a reasonable complexity class to study. Sure, it's got probably intractable problems, but hey, we should still study it! Now what about QIP? Well, to be down with QIP you need to first be down with IP. IP is the complexity class of "interactive proof systems." In an interactive proof system there are two parties, a prover and a verifier. The verifier is assumed to be computationally limited, i.e. he can only use resources (time, space) polynomial in the instance size (he is given access to randomness so that he can do things which are reasonably classically tractable). The prover, on the other hand, is given god-like powers and allowed unlimited computational power. These two parties interact over a polynomial number of rounds of communication. The idea here is that the prover is trying to prove that the problem instance is in the language and the verifier wants to make sure this is true. Thus on instances that are in the language the verifier should accept with greater than (say) 2/3 probability (this is called completeness), and on instances that are not in the language the verifier should accept with less than (say) 1/3 probability (this is called soundness). Completeness tells us that the prover can convince us with high probability that the instance is in the language, and soundness tells us that false statements cannot be proved (we can't be fooled). IP is the class of decision problems that admit an interactive proof system.
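A toy example may help: the textbook interactive proof for graph non-isomorphism, sketched below in Python. This is a standard classroom example, not anything from today's paper, and the brute-force isomorphism test simply stands in for the all-powerful prover. If the two graphs are not isomorphic the prover always identifies the verifier's hidden coin; if they are isomorphic the prover can do no better than guessing, so repeating rounds drives down the chance of being fooled.

```python
import random
from itertools import permutations

def relabel(edges, perm):
    """Apply a vertex permutation to an edge set."""
    return frozenset(frozenset(perm[v] for v in e) for e in edges)

def isomorphic(g, h, n):
    """Brute-force isomorphism test: the stand-in for the all-powerful prover."""
    return any(relabel(g, p) == h for p in permutations(range(n)))

def gni_round(g0, g1, n):
    """One round of the classic interactive proof for graph non-isomorphism."""
    bit = random.randrange(2)                         # verifier's secret coin
    challenge = relabel((g0, g1)[bit], random.sample(range(n), n))
    guess = 0 if isomorphic(g0, challenge, n) else 1  # prover's answer
    return guess == bit                               # verifier accepts iff correct

# Two non-isomorphic 3-vertex graphs: a path versus a triangle.
path = frozenset({frozenset({0, 1}), frozenset({1, 2})})
triangle = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
print(all(gni_round(path, triangle, 3) for _ in range(20)))  # the prover always wins
```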

Back to the story at hand. One of the major results in computational complexity theory is that PSPACE, that beast which uses limited memory but unlimited time, is actually equal to IP. Proofs of this fact involve some cool tricks, among them arithmetization: basically a method for transforming boolean formulas into multivariate polynomials over some finite field. Reasoning about this latter object then facilitates the proof that PSPACE is contained in IP (the other direction, that IP is contained in PSPACE, is a bit easier). I believe this proof is also important because it doesn't relativize. (A note on attribution: I sometimes see two references attributed to this proof and sometimes only one. I don't know the history behind this, but the papers are one by Lund, Fortnow, Karloff, and Nisan and the other by Shamir.)
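For flavor, here is what arithmetization looks like on a tiny formula, as a Python sketch over a finite field. This is only the first step of the argument, not the full sum-check machinery, and the modulus is chosen arbitrarily for illustration.

```python
# Boolean connectives become polynomial operations over a finite field F_p,
# so the formula evaluates to 1 on satisfying assignments and 0 otherwise.
P = 2**31 - 1  # an arbitrary prime, chosen for illustration only

def NOT(a):    return (1 - a) % P
def AND(a, b): return (a * b) % P
def OR(a, b):  return (a + b - a * b) % P

def formula(x, y, z):
    # (x OR y) AND (NOT z), now a multivariate polynomial in x, y, z over F_p.
    return AND(OR(x, y), NOT(z))

# On 0/1 inputs the polynomial agrees with the boolean formula...
print([formula(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)])
# ...but it can also be evaluated at arbitrary field elements, which is the
# step that the sum-check style arguments exploit.
print(formula(12345, 67890, 31415))
```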

Okay, so what is the claim of the paper published today? Well, QIP is like IP but now the verifier is assumed to be running a polynomial-sized quantum algorithm and the messages exchanged between the parties are allowed to be quantum. QIP was first studied in detail by Watrous and then by Kitaev and Watrous. The classic papers are:

J. Watrous. PSPACE has constant-round quantum interactive proof systems. In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, pages 112-119, 1999. arXiv:cs/9901015
A. Kitaev and J. Watrous. Parallelization, amplification, and exponential time simulation of quantum interactive proof systems. In Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, pages 608-617, 2000. pdf here

As I mentioned above, these papers showed that QIP is contained in EXP (the class of problems solvable by a deterministic Turing machine in exponential time). PSPACE is in QIP because we could always just not use the quantum part of the interactive proof and make it classical. But a longstanding question was whether one could tighten the gap between PSPACE and EXP. Today's paper shows that indeed QIP = PSPACE.

What does this say about the power of quantum computers? Well, on the one hand, I'm not sure it tells us too much (ask Scott, damnit!), since these complexity classes are, off the top, most interesting for the intractable problems they represent. On the other hand, there is an impressive result about QIP. One can talk about the number of rounds in a QIP protocol. It turns out (this was proved by Watrous in the first paper listed above) that a QIP protocol with only three rounds has the power of all of QIP (i.e. with a polynomial number of rounds). Such a result for IP does not hold unless the polynomial hierarchy collapses to AM (which, in complexity-theorist talk, is quite blasphemous). In other words, one can be convinced quantumly of whether an instance of a PSPACE problem is in the language using only three rounds of discussion, as opposed to the classical world, where the discussion can go on and on and on and on... Let it never be said that the quantum world is the blabbermouth in comparison to the classical world.

Well now it's time to seal myself away and actually read the proof. Congrats to the authors and here's hoping that the proof is correct and yet another major open problem has been put to its grave.

Update 6/29/09: See the Computational Complexity blog for an outline of the proof written by Steve Fenner.

This is a wonderful result, and a beautifully written article too.

It was interesting to me that for these authors, the key mathematical aspect of QM is its vector space nature ... they don't use QM's symplectic, Riemannian, or thermodynamic structure at all. Whereas the QM simulationist point-of-view is exactly the opposite.

It seems to me that this differing emphasis is perfectly reasonable ... complexity theorists tend to focus on structures that can't be efficiently computed (so that a proof of P ne NP is the apotheosis of complexity theory) ... whereas simulation theorists focus on structures that can be efficiently computed (so that a whole-biome survey is the apotheosis of quantum simulation science).

The result seems to reflect a de facto schism in academic attitudes towards QIS/QIT ... with the vector-embracing quantum complexity theorists on one side ... and the Kähler-embracing quantum simulationists on the other side ... and (fortunately) plenty of mathematical elegance on both sides! :)

This is indeed a very cool result. For me the interesting "philosophical" consequence is the collapse QIP=IP: polynomial-round quantum interactive proof systems are no better than classical. It would be nicer still if there were an intuitive direct simulation result that, given a quantum protocol, simulated it efficiently classically...

If there were a direct simulation result it would, presumably, have to blow up the number of rounds.

To me the result QIP=IP is more like showing that what is computable on a classical Turing machine is the same as what is computable on a quantum Turing machine. True, but this has nothing to do with the complexity of what each can do, and in terms of rounds in the interactive proof, QIP seems to have an infinite advantage over IP.

I wonder how many readers of your blog saw the headline and wondered "What does 'OMG' mean?"

Nice article, but I'm waiting to hear from Geordie how this proves they were right all along!

Mike, I don't think this is relevant at all to what we're doing. My view of complexity theory arguments is closely aligned with RJ Lipton's. See for example this post: http://rjlipton.wordpress.com/2009/07/03/is-pnp-an-ill-posed-problem/.

The asymptotic scaling of worst case problem instances is a highly questionable way to think of the performance of an algorithm when applied to real world instance classes. Here's another Lipton post on this issue as applied to SAT: http://rjlipton.wordpress.com/2009/07/13/sat-solvers-is-sat-hard-or-eas….

Luca: ROFL!

Oh, and I meant to say exponential advantage, if that is what you call going from polynomial number of rounds to...3.

It will be interesting if these guys present at QIP 2010 - discussing QIP at QIP! OMG, that's bad. OK, that means it's time for me to go to sleep...

Geordie: sorry, should be fixed. Stupid blog doesn't email me when things get marked as spam. Come on computers, you've got so many transistors, why can't you get this right!

I would think that Lipton's view would actually be bad for Dwave in that it would imply that it's already true that the instances that occur in practice aren't hard. (Crap, I think something about polynomials is coming next...)

Thank you, Geordie, for those excellent Lipton links. He asks a good question with "The theory community is sure that P{\neq}NP, and so SAT is hard. The solver community is sure that they can solve most, if not all, of the problems that they get from real applications, and so SAT is easy. What is going on here?"

Doesn't the same apply equally to simulating quantum systems? "The QIT/QIS theory community is sure that P{\neq}BQP, and so quantum simulation is hard (as Nielsen and Chuang's textbook asserts, for example). The quantum simulation community is sure that they can solve most, if not all, of the problems that they get from real-world applications, and so simulating quantum systems is generically easy. What is going on here?"

Lipton deserves plenty of credit for emphasizing that from a mathematical point of view, it is perfectly plausible---even likely---that everyone is right. As for the quantum chemists and the molecular modeling folks, many of them regard it as already demonstrated (in real-world applications) that Lipton's ideas are right.

Hi John

We are currently trying to simulate the systems we're trying to build. These are N-spin quantum Ising models. The reason for doing this is to extract the minimum value of the energy gap during the course of an adiabatic algorithm on a statistically significant number of problem instances at any given N. See http://arxiv.org/abs/0803.3971 for the basic idea.

We're using a parallel tempering quantum Monte Carlo approach. We are able to simulate up to about N=240 before the times to acceptably bound errors get too big.

We use a distributed computing approach (see AQUA@home http://aqua.dwavesys.com/) that accesses about 5,000 cores ~ 15 teraflop/s.

What we'd like to do is extend these computations to much larger sizes (say N=2,048 qubits). However, even doing N~300 is not reasonable (it would take ~1 year), even with this rather large computing system and highly optimized code.
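For concreteness, the quantity in question is the minimum spectral gap of the interpolating Hamiltonian H(s) = (1-s) H_driver + s H_problem. Here is a brute-force toy version in Python (exact diagonalization of a tiny random transverse-field Ising chain); it is purely illustrative, uses a generic Ising form rather than the exact models in the arXiv paper above, and is nothing like the parallel tempering QMC code that AQUA@home actually runs -- which is the whole point, since brute force dies long before N=240.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])   # Pauli x
sz = np.array([[1., 0.], [0., -1.]])  # Pauli z
I2 = np.eye(2)

def op_on(site_op, site, n):
    """Embed a single-spin operator at position `site` in an n-spin chain."""
    out = np.array([[1.]])
    for i in range(n):
        out = np.kron(out, site_op if i == site else I2)
    return out

def min_gap(n, J, h, steps=101):
    """Minimum gap of H(s) = (1-s)*H_driver + s*H_problem for s in [0, 1],
    by exact diagonalization (only feasible for very small n)."""
    H_driver = -sum(op_on(sx, i, n) for i in range(n))
    H_problem = -sum(J[i] * op_on(sz, i, n) @ op_on(sz, i + 1, n) for i in range(n - 1))
    H_problem = H_problem - sum(h[i] * op_on(sz, i, n) for i in range(n))
    gaps = []
    for s in np.linspace(0.0, 1.0, steps):
        evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
        gaps.append(evals[1] - evals[0])
    return min(gaps)

rng = np.random.default_rng(1)
n = 6  # a 2^6 = 64 dimensional toy; real instances are astronomically larger
print(min_gap(n, rng.normal(size=n - 1), rng.normal(size=n)))
```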

Isn't this a counter-example to your claim that quantum simulation is a solved problem?

I do not believe "the quantum simulation community is sure that they can solve most, if not all, of the problems that they get from real-world applications" for any reasonable definition of the quantum simulation community.

Dave (Bacon), my own attitude toward simulation has been changing, as I spend more time over at the Dave (Baker) lab, working with their Rosetta Code!

As Dave (Baker) said from the FOMMS podium "It has always been heresy to suggest that [biomolecular] structures derived from simulation might be more accurate than structures derived from x-ray crystallography. Well, I have to say, that I am starting to believe this heresy."

Further comments from the FOMMS panel included: "Anything we can image, we can simulate" ... and there was *no* dissent from the audience.

It's true that confidence in simulation capabilities (both classical and quantum) has greatly increased in the last five years ... and this confidence has solid mathematical foundations, as I discussed on Lipton's blog (link here).

A terrific (recent) book on this topic is Frenkel and Smit, Understanding Molecular Simulation: From Algorithms to Applications; this book builds on the symplectic dynamical foundations set forth in another terrific---though not particularly easy---book, Vladimir Arnol'd's Mathematical Methods of Classical Mechanics.

Definitely, we are now in a world in which the physics community should not underestimate the mathematical sophistication of the biologists.

Geordie asks: Isn't [quantum spin Ising model ground states] a counter-example to your claim that quantum simulation is a solved problem?

Hmmmm ... I think you'll find that assertions about growing confidence in simulation capability (whether from me, or from other people) have ample wiggle-room to exclude this case!

That magic phrase "real-world" needs to be defined with care. For example, in the (hot, wet, noisy) world of biological systems, finding the ground state of a spin glass system is *not* a real-world problem ... because "hot, wet, noisy" dynamical systems spend none of their time in this ground state.

How likely is it that anywhere in the universe, there is a 1000-spin spin-glass system that is in its exact ground state? Isn't that probability exponentially small? Doesn't nature always content herself with approximate ground states of glass-type, whether of spin states or of protein states? Isn't this in fact a good working definition of a glass-type system?

It is thrilling (IMHO) to reflect that exact ground states of glass-type dynamical systems will come into physical being, for the first time, when the first quantum computers are constructed ... which is an excellent reason to construct them! :)

My own framework for thinking about quantum simulation is very similar to Dick Lipton's framework for thinking about SAT ... a central element of which is recognizing that the question "Is SAT/simulation generically hard?" depends crucially on the meaning of the word "generically", which varies greatly among communities.

When the dust settles---say a decade from now---it seems to me that the statement "Simulating quantum systems with classical resources is easy" will be regarded as generically true by chemists, biologists, engineers, and geometers ... regarded as generically false by information theorists and algebraists ... and the physics community (as usual) will play on both sides of the street. :)

John, I think the ground state argument is a red herring. Sampling from the Gibbs ensemble for a suitably complex spin model is also known to be computationally intractable.

But you are right that the real question is what the complexity is in the real world, and far too often we get stuck with models that are intractable because they are not what is around in the real world. So, for example, whether you need a quantum computer or not for simulation depends crucially on whether quantum effects are important or not in the observed results. That's the billion dollar question, but the fact that chemists still can't do things like accurately predict the magnetic moments of large molecular magnets gives me pause.

Dave asserts: "The fact that chemists still can't do things like accurately predict the magnetic moments of large molecular magnets gives me pause."

Gosh, is this really true? How hard have people tried to model these magnets? Because at FOMMS this year, there was pretty much no limit to the hubris of the quantum chemists!

On the other hand, we have the sobering reality that more than 30,000 theoretical articles on high-Tc superconductors have not yielded a consensus as to how these materials work. Which leads some folks to think that several different mechanisms may be acting simultaneously ... such that large-scale simulation may prove to be the only path that leads to a predictive understanding of these systems.

That's why it is regrettable that the present mathematical understanding of Grassmannian/Kählerian tensor network manifolds (to describe the state-space of fermionic condensed matter systems in the language of algebraic geometry) is still in an embryonic state of development (AFAICT), and consequently, the available simulation codes for fermionic condensed matter are (for the present) pretty primitive too.

Hubris rarely convinces me. Mostly it convinces me I don't want to work in the field.

Said by someone with way too much hubris :)

Any state space worth being called a state space is huge. I call small state spaces unphysical. But this logic is circular, I think.

Hi John, assuming quantum computers are going to end up being useful for something, there will always exist very difficult or intractable quantum simulation problems.

Here is another counter-example to the claim that modern quantum chemistry techniques are "good enough in practice": imagine a quantum computer large enough to run Shor's algorithm on modern-sized biprimes. Now simulate the system and give me a factor.

You might say that the reason the best modern quantum chemistry techniques fail here (assuming they do, which I don't know for sure) is that this is in some sense a pathological case, in that you need to keep the full configuration interaction picture and can't do smart approximations for some fundamental reason. But I would argue that, for any physical system, it is dangerous to assume a priori that you can truncate quantum mechanics to make calculations tractable. It may be the case that you can be very smart about a particular type of calculation and build a good approximation. But these approaches will not generalize gracefully and can be extremely misleading.

As for quantum simulation in the real world, I think as we get better at designing and building systems that can be highly quantum mechanical, we'll need tools that give as complete a picture as possible of the expected behavior of the thing. A real-life example is simulating a set of rf-SQUIDs at millikelvin temperatures (i.e. the stuff we're trying to build). It seems to just be freakin hard.

Geordie, no one would say that Shor's algorithm is in any sense "a pathological case" -- it is a beautiful algorithm!

It is true, though, that viewed as a physical process, Shor's algorithm---like all quantum algorithms---definitely *is* intolerant of observation and/or noise, in the sense that neither we nor the environment are allowed to monitor the internal workings of the algorithm.

In consequence, precisely to the extent that efficient simulation codes exploit the concentrating dynamics of Lindblad-Choi drifts---which (directly or indirectly) existing codes do exploit---these same codes are unable to realistically simulate Shor's algorithm, or any other quantum process that is simultaneously noise-free, measurement-free, and symmetry-free.

To put it another way, we can efficiently simulate Shor's algorithm, provided the noise levels are high enough that the computation fails. :)

The converse of this principle is physically reasonable: if a process is noisy, observed, or symmetric, then (generically) we expect that it can be simulated with classical resources, via concentration-type simulation algorithms that (in effect) exploit our knowledge that the system is not carrying out a quantum computation.

There is a very good article by Buhrman and collaborators, New Limits on Fault-Tolerant Quantum Computation, that exploits the above principle in a very ingenious way, to establish rigorous bounds on tolerable noise levels in quantum error-correction.

All of these ideas are reviewed at length (with many numerical examples) in our QSE Group's recent (online) NJP article Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems. And the same physical/mathematical ideas, now translated into the computationally efficient language of symplectic molecular simulation and rendered into simulation code, will be presented at the August Kavli Conference on Molecular Imaging at Cornell.

Needless to say, these ideas are perfectly consistent with orthodox QIT, via essentially the same line of reasoning by which Dick Lipton's views are consistent with P ne NP.

Well, just to cap this thread, it seems to me that all of the themes discussed on this very interesting topic---thank you, Dave!---namely, classical and quantum complexity classes, classical and quantum simulation complexity, and hard versus easy problems in NP ... nicely respect the mathematical dictums of Jacobi and van Vleck: "Man muss immer umkehren" and "Man muss immer generalizieren" ("We must always invert and generalize" ... BibTeX appended for folks who like full attribution).

Watrous and colleagues inverted PSPACE ⊆ QIP to obtain QIP ⊆ PSPACE, thus proving PSPACE = QIP ... by methods that go far toward erasing the separation between the methods of classical complexity and the methods of quantum complexity.

Dick Lipton's essay points toward a similar strategy of inverting the (exceedingly well-established) hypothesis "NP problems are generically hard" to an (as yet vague) statement "real-world NP problems are generically easy" ... what Lipton explicitly asks for is better definitions and tools to complete the inversion.

Nowadays the classical and quantum simulationists are following Lipton's strategy, with the word "simulation" replacing "problem". The advantage of the simulationist approach is that Lindblad-Choi theorems of QIT/QIS provide us with dynamical concentration theorems, which allow us to "invert and generalize" the algebraic ideas of vector-space quantum mechanics to the (essentially) geometric ideas of Kähler-space quantum mechanics.

By embracing these "inverted and generalized" state-spaces---in both classical and quantum simulation---we can begin to glimpse a mathematical paradise in which "simulation is generically hard" and "simulation is generically easy" are both true statements.

------

@article{VanVleck1916,
  author  = {Van Vleck, Edward B.},
  title   = {Current tendencies of mathematical research},
  journal = {Bull. Amer. Math. Soc.},
  volume  = {23},
  number  = {1},
  pages   = {1--13},
  year    = {1916},
  note    = {page 3, Jacobi ``Man muss immer umkehren'' and van Vleck ``Man muss immer generalizieren''.}
}

Hope that nobody minds that I quoted John Sidles quoting Jacobi and van Vleck on Facebook, and hotlinked to this thread... And earlier I posted a query to Scott Aaronson from an earlier Facebook comment that still had (I forgot to cut it out) a "read more" substring in the middle.