The Optimizer has gotten tired of everyone asking him about D-wave and gone and written a tirade about the subject. Like all of the Optimizer’s stuff, it’s a fun read. But, and of course I’m about to get tomatoes thrown on me for saying this, I have to say that I disagree with Scott’s assessment of the situation. (**Ducks** Mmm, tomato goo.) Further, while I agree that people should stop bothering Scott about D-wave (I mean, the dude’s an assistant professor at an institution known for devouring these beasts for breakfast), I personally think the question of whether or not D-wave will succeed is one of the most important and interesting questions in quantum computing. The fact that we interface with this black box of a company via press releases, an occasional paper, and blog posts at rose blog, for me, makes it all the funner! Plus my father was a lawyer, so if you can’t argue the other side of the argument, well, you’re not having any fun! So, in defense of D-wave…
The Optimizer begins with a list of questions from the skeptic:
Skeptic: Let me see if I understand correctly. After three years, you still haven’t demonstrated two-qubit entanglement in a superconducting device (as the group at Yale appears to have done recently)?
Um, well, actually, Optimizer, entanglement has been demonstrated before the Yale group in a superconducting qubit device. In phase qubits, I believe the Martinis group created entanglement between two qubits in 2006 (Science paper, or if you want Bell inequality violations, see this Nature paper.) As far as I know, no one has conclusively demonstrated entangled quantum states in flux qubits, which is what D-wave is using (the transmon qubits at Yale are charge superconducting qubits, right?) Okay, so your facts are a little off, Optimizer! But of course the real reason you bring this up is because you know that (for pure states) without entanglement there will be no quantum speedup. Actually I think one has to be very careful here as well. For example, in Richard Jozsa’s wonderful article on simulating non-entangled systems, it’s not clear to me that these results can be used to rule out polynomial speedups for non-entangled states (update after Scott’s comment below: damnit, here I meant to say slightly entangled states. The relevance being that for these states it may be difficult to detect their entanglement even though they are useful for quantum computing.) And of course the question for mixed states (a.k.a. the real world) is still open. So I would say that the “entanglement” question is not settled. And I might even argue that the reason you need to worry about this is in quantum computing’s very history: it was well known that linear optics alone could not be used to quantum compute, but then, WHAM, KLM showed that if you had single photons and could detect single photons, you could build a quantum computer. Are we really that confident that quantum systems living somewhere just on the other side of entangled are not a useful resource? Of course my intuition is that for exponential speedups, yes, entanglement is necessary. But polynomial speedups?
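To pin down what “slightly entangled” buys you, here’s a rough statement of the simulation result behind Jozsa’s article (this is my paraphrase of the Vidal/Jozsa–Linden style bound, so treat the exact form as a sketch rather than a quotation):

```latex
% Sketch: classical simulation of slightly entangled pure-state computations.
% Let chi be the largest Schmidt rank any intermediate state attains across
% any bipartition of the n qubits. Then the whole computation can be
% simulated classically with overhead polynomial in n and chi:
\[
  T_{\mathrm{classical}} \;=\; \mathrm{poly}(n)\cdot\mathrm{poly}(\chi),
  \qquad
  \chi \;=\; \max_{t,\,\text{cuts}} \operatorname{rank}_{\mathrm{Schmidt}}\bigl(|\psi_t\rangle\bigr).
\]
% Consequence: an exponential quantum speedup (for pure states) requires chi
% to grow superpolynomially in n -- but a merely polynomial speedup is not
% obviously excluded by this bound, which is exactly the loophole above.
```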
You still haven’t explained how your “quantum computer” demos actually exploit any quantum effects?
Please define “quantum effects.” Also please read arXiv:0909.4321. That’s an interesting paper, and while I definitely agree that it doesn’t demonstrate what I would call “quantum effects,” it shows pretty clearly that the quantum description of what is going on in their flux qubits seems correct. And if you’re going to build an adiabatic quantum computer, what you really care about is that you have well characterized your Hamiltonian and understand the physics of that system.
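For context on why Hamiltonian characterization is the thing to care about: the textbook adiabatic scheme interpolates between a driver Hamiltonian whose ground state you can prepare and a problem Hamiltonian encoding your instance (this is the standard formulation, not anything D-wave-specific):

```latex
% Standard adiabatic interpolation between a driver H_B and a problem H_P:
\[
  H(s) \;=\; (1-s)\,H_B \;+\; s\,H_P,
  \qquad s = t/T \in [0,1].
\]
% The adiabatic theorem says the system stays near the instantaneous ground
% state provided the total runtime T is large compared to the inverse square
% of the minimum spectral gap g_min along the path:
\[
  T \;\gg\; \frac{\max_s \lVert \partial_s H(s) \rVert}{g_{\min}^2},
  \qquad
  g_{\min} \;=\; \min_{s}\bigl(E_1(s) - E_0(s)\bigr).
\]
% So if you know H_B, H_P, and the gap, you know (in principle) whether the
% machine should work -- which is why characterizing the Hamiltonian matters
% more here than hitting gate-model benchmarks.
```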
While some of your employees are authoring or coauthoring perfectly-reasonable papers on various QC topics, those papers still bear essentially zero relation to your marketing hype?
I hate the term “reasonable papers.” Sorry. It sounds like the quantum computing gestapo to me. But beyond that, what hype are you talking about in their press releases? Their news section has absolutely zero about their latest NIPS demo (which is apparently what set you off, Dr. Optimizer.) If anything, I think your beef has to be with the science journalists who are producing articles on the recent paper, or with Hartmut Neven, whose blog post on the Google research blog has more meat to argue about (the last lines are classic.)
The academic physicists working on superconducting QC–who have no interest in being scooped–still pay almost no attention to you?
Argument by authority? Really?
So, what exactly has changed since the last ten iterations?
Actually if you read the NIPS demo paper you would see that there is some interesting new stuff. In particular you would note that they believe they have 52 of their 128 “qubits” functioning. Independent of whether this thing quantum computes or represents a viable technology, getting 52 such flux qubits to operate in a controllable manner such that they can read out the ground state of the combinatorial problem at all is, in my opinion, an impressive feat. The fact that they thought they would be at 128 qubits about a year ago is also a warning to me that this shit is hard. Also the paper gives a nice list of the “problems” they are encountering. In particular they acknowledge here the difficulties arising due to finite temperature and to parameter variability. You’d also read that their classifier doesn’t outperform the one they compare against on false positives (and the real issue with that paper is that comparison, plus there’s no comparison of running times!) So yes, there is something new here, and yes, it is interesting, and yes, it still makes me skeptical of D-wave’s chances!
Why are we still talking?
Good question! I hope you will forgive me, Optimizer, for I have sinned.
Okay, well now that I’ve got that out of my system. Whew. The rest of the Optimizer’s discussion of D-wave rings partially true. Though his detour into criticizing their AQUA@home seems silly to me (who cares, really? Have you really met someone who makes the argument presented? Was he or she surrounded by Dorothy and the Tin Man?) Certainly the fact that they are working with Google doesn’t convince me of much (sorry, Google.) But I will stand by my criticism that just naming “quantum coherence” and “entanglement witnesses” as the things that must be demonstrated for D-wave to make an interesting device is wrong. Indeed, I’d probably argue that the reason quantum computing folks have made slow progress is that they themselves are hung up on this approach to building a quantum computer. It sure appeals to the scientist in everyone to validate every stage of everything you do, but technology development is different from science. For D-wave, validating that their final and initial Hamiltonians are working as they think is important, but beyond that do they really care about whether they create entanglement? Of course, it’s my own opinion that their system will fail (finite temperature and problems with parameter controls in the middle of the computation), but holding them to the quantum gate standards doesn’t do it for me (though every time I see their slogan I choke… quantum computing company? How about quantum technology company, guys?) And the fact that we get our information second hand makes this whole argument rather academic: we don’t really know what is going on behind the walls of their Burnaby, B.C. offices (cue conspiracy theories.)
Like my pa said: if you can’t defend both sides, you’re not having fun 🙂 (Hey, I said tomato throwing, not watermelons or knives! **Ducks**)
Oh, and peoples, stop pestering Scott about D-wave; he’s just shown a major result in quantum computing and should be out celebrating (the jalapeno burger and beer are good. I think I owe him one next time I’m in Boston.)