You know, QEC07 participant, you're supposed to be watching the talks and not reading this blog! But if you are reading this blog, you might as well not just lurk and instead comment. That's right: it's a QEC07 open thread.
To start things off, would anyone care to comment on Robert Alicki's final slide during his talk yesterday? For those not attending QEC07, Alicki and coworkers have been looking at the properties of two- and four-dimensional Kitaev phases. For the four-dimensional phase, it seems that Alicki can show that it serves as a good quantum memory, but he claimed that it is algorithmically hard (I'm not sure what that means) to "encode" information into the system. I didn't quite understand this: the problem of decoding a generic quantum error-correcting code is certainly very hard, and the encoding problem seems very similar, but my understanding of the four-dimensional toric code was that encoding should be easily achievable by a local algorithm. Anyone have any ideas about what he could have meant?
I was also confused by his last statement in the talk. I may have misunderstood and fixated on decoding instead of encoding. (There is a nice analog of Toom's rule for local decoding of the 4D toric code [quant-ph/0110143], and Charlene Ahn and I worked out an analog of Gács's construction for local decoding of the 2D toric code, albeit with a lower threshold. If global classical processing is allowed, then minimum-weight perfect matching is a polynomial-time algorithm that can be used in decoding.)
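For anyone who hasn't seen the matching decoder before, here is a toy sketch of the idea in Python. The lattice size, defect coordinates, and the use of networkx are my own illustrative choices, not anything from the talks; a real decoder would also apply the correction along the matched paths and handle measurement errors.

```python
# A toy sketch of minimum-weight perfect-matching decoding for the 2D toric
# code (my own illustrative construction, not code from any talk). Syndrome
# defects come in pairs, so a perfect matching always exists; matching them
# with minimum total path length gives a likely correction.
import itertools
import networkx as nx

def toric_distance(a, b, L):
    """Shortest Manhattan distance between two defects on an L x L torus."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, L - dx) + min(dy, L - dy)

def match_defects(defects, L):
    """Pair up defects so the total correction length is minimized.

    networkx maximizes the matching weight, so we negate the distances.
    """
    G = nx.Graph()
    for u, v in itertools.combinations(range(len(defects)), 2):
        G.add_edge(u, v, weight=-toric_distance(defects[u], defects[v], L))
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return [(defects[u], defects[v]) for u, v in matching]

# Example: four defects on a 5 x 5 torus pair up with their nearest partners.
print(match_defects([(0, 0), (0, 1), (3, 3), (4, 3)], L=5))
```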
Looking on the arXiv, I only see discussion of the statistical mechanics analysis of Kitaev's 2D surface codes in quant-ph/0702102, which concludes by saying that the 4D case "will be dealt with in a forthcoming paper." I guess the argument for encoding being difficult rests somehow on working with initial states in thermal equilibrium instead of the true ground state.
I had no idea that IBM was below the fault-tolerance threshold. Panos's talk really opened my eyes on that.
Joe, I think that the error rates for the IBM SC qubit were estimates only.
Estimates or not, it's still impressive. I thought he quoted a 10% error margin on the numbers.
Can anyone explain in a simple, snappy paragraph how those cool diagrams that John Preskill used to explain all those nasty perturbation calculations work?
Ashley, were those estimates based on what they have for 2 qubits?
I'm not sure, but I do remember a slide in the presentation where he mentioned Johnson noise and pulse timing errors and other noise sources that were included in the calculations.
Joe comments on his blog
Come on, people! Mick just told me your comments are boring him, and the goal of this comment thread is mostly just to entertain Mick!
That's such a lie! For once I was like the fourth commenter...
I was diligently paying attention to the talks till Dave's posts started filling up my RSS reader.
Wanna know something that I think has been cool about this conference? There's actual physics in some of these talks - you know, perturbation theory, Magnus expansions, and experimental talks with squiggly looking graphs and everything!
squiggly looking graphs
Mmmm, experimental data fit to exponential decays. Makes me drool too.
So, here's a question:
How many people think that the FT threshold's high-frequency cutoff dependence (for the more general error models that John Preskill discussed this morning) is a harmless artifact of the math, and how many think it is something nastier?
Mick, can you elaborate on that "high-frequency cutoff dependence" for those of us not at the conference?
cheers,
sean
Dear all,
Greetings from Jerusalem! In the end I could not make it to QEC07, which was quite disappointing for me, since I was looking forward to many talks, to touching base with much exciting research, to getting a little more of the background that I am still lacking, and to meeting people, many of whom I only know by name or through email correspondence. (Including, of course, you, Dave.) If there is a place with links to the various presentations, I will be happy to know. My own presentation is here. And, as always, remarks are very welcome.
Best wishes, Gil Kalai
I put my slides here:
http://www.theory.caltech.edu/~preskill/talks/preskill-usc-qec07.pdf
-- John
In my talk (see the slides) I listed error estimates based on theoretical work by Frederico Brito and David DiVincenzo. These estimates are derived using a theoretical model for the IBM qubit described in arXiv:0709.1478, and they include contributions from several understood sources of noise for superconducting flux qubits: low-frequency magnetic fluctuations, thermal noise, inaccuracies in the pulse generators, and Johnson noise in the resistances. Although these noise sources reflect the best understanding of IBM physicists about decoherence in their devices, experiments today are limited by additional sources of noise that are currently not understood. Present fidelities are many orders of magnitude below the estimates in my talk, and experimental work is focused on "debugging" the IBM qubit and increasing coherence times.
Ah, well, that makes more sense to me. Thanks.
Mick posted: So, here's a question: How many people think that the FT threshold's high-frequency cutoff dependence (for the more general error models that John Preskill discussed this morning) is a harmless artifact of the math, and how many think it is something nastier?
Here's a partial answer to Mick's question. The fluctuation-dissipation theorem can be readily extended to a fluctuation-dissipation-entanglement theorem, in which the same matrix elements that control fluctuation and dissipation also control the entanglement of the qubit ground-state with the environment.
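For readers without the references at hand, the standard quantum fluctuation-dissipation theorem that this extension builds on reads (in my notation, which need not match any particular reference):

```latex
% Quantum fluctuation-dissipation theorem (standard form; my notation).
% S_x(\omega): symmetrized noise spectral density of the bath observable x
% \chi''(\omega): imaginary (dissipative) part of the susceptibility
\[
  S_x(\omega) \;=\; \hbar \coth\!\left(\frac{\hbar\omega}{2 k_B T}\right)\,
  \chi''(\omega)
\]
% The "entanglement" extension described above adds the statement that the
% same matrix elements entering \chi'' also fix the entanglement of the
% dressed qubit ground state with the environment.
```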
Physically speaking, the qubit states are "dressed" by the reservoir, in precisely the way that electrons are "dressed" by the vacuum.
In the limit that the reservoir damping has an infinite high-frequency cutoff, the renormalized "dressing" of the qubit states diverges (just as, e.g., the electron mass renormalization diverges in quantum electrodynamics).
Is the qubit "dressing" an engineering problem for quantum computers? Well, it all depends on the cutoff, which, in turn, depends on the nature of the thermal reservoir.
These effects are definitely both physical and calculable; that much is clear. Calculating the practical consequences would be tough, but feasible, IMHO.
I'm inclined to believe (as the other John may have been suggesting) that the primary effect of the very-high-frequency fluctuations of the bath is to "dress" the qubits rather than to drive decoherence. In my talk, I described attempts to estimate the accuracy threshold from bounds on a resummation of perturbation theory to all orders, for a model in which qubits couple to a bath of harmonic oscillators. For the case of (for example) a zero-temperature Ohmic bath, my threshold estimate actually diverges as the ultraviolet cutoff goes to infinity (though it is less sensitive to the cutoff than previous estimates by Terhal and Burkard, and in my papers with Aliferis and Gottesman and with Aharonov and Kitaev). That is, for this model of decoherence, the rigorous argument fails to establish that fault-tolerant quantum computing will work, unless the high-frequency fluctuations of the bath are sufficiently suppressed.
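To make "Ohmic bath with a cutoff" concrete for readers who weren't at the talk, here is one conventional parametrization of the bath spectral density (the exponential cutoff is a standard textbook choice, not necessarily the regularization used in my estimates):

```latex
% Ohmic bath spectral density with high-frequency cutoff \Lambda
% (a conventional parametrization; not necessarily the one in the talk).
\[
  J(\omega) \;=\; \eta\, \omega\, e^{-\omega/\Lambda}
\]
% \eta sets the dissipation strength. The cutoff dependence discussed above
% is the statement that bounds on the threshold can blow up as
% \Lambda \to \infty.
```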
When I said that this might be a mathematical technicality, what I really meant is that a more clever resummation of perturbation theory might yield a better result (and in fact for the special case where only dephasing noise occurs and all the gates are diagonal in the computational basis, I can do a tighter estimate and I do get a much better result).
You might argue that once we have a few-qubit system in the lab for which experiments indicate that decoherence is weak, then we already know that the very-high-frequency noise does not have catastrophic effects. That sounds reasonable, but there is at least a logical possibility that over the course of a long computation the environment could be driven to a state far from its initial state, threatening scalability. We'd like to exclude this kind of nightmare with convincing general arguments, but this does not (yet) work as well as I would like.
Thanks John and John for your detailed answers, they've clarified my own thinking on this quite a bit.
An engineering-level discussion of the fluctuation-dissipation-entanglement relation appears in Appendix III of the article referenced below. For reasons that aren't quite clear to us, this relation is expressed in terms of Stieltjes transforms.
Caveat: the article is focused on decoherence in quantum spin microscopy rather than in quantum computing. Many of the physical issues are the same, however.
@article{Sidles:03,
  author  = {J. A. Sidles and J. L. Garbini and W. M. Dougherty and S.-H. Chao},
  title   = {The Classical and Quantum Theory of Thermal Magnetic Noise, with Applications in Spintronics and Quantum Microscopy},
  journal = {Proceedings of the {IEEE}},
  volume  = {91},
  number  = {5},
  pages   = {799--816},
  year    = {2003},
}
As a further remark, one of the many virtues of the independent-oscillator heat-bath model is that the dynamical effects of renormalization can be solved exactly in terms of the dissipative kernel (the details are given in Appendices III.C and II.D of the above reference).
The trick is to work backwards from the "dressed" equations of motion to the "bare" equations of motion; this backwards computational strategy turns out to be very natural for oscillator heat-bath models.
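For concreteness, here is the textbook form of the model and the quantum Langevin equation it generates (my notation, in the Ford-Lewis-O'Connell tradition, which need not match the cited appendices):

```latex
% Independent-oscillator heat-bath model (textbook form; my notation).
% A particle of mass m in potential V couples to bath oscillators (q_j, p_j):
\[
  H \;=\; \frac{p^2}{2m} + V(x)
  \;+\; \sum_j \left[ \frac{p_j^2}{2 m_j}
  + \frac{m_j \omega_j^2}{2}\,\bigl(q_j - x\bigr)^2 \right]
\]
% Eliminating the bath exactly yields the quantum Langevin equation, with
% all renormalization ("dressing") encoded in the memory kernel \mu:
\[
  m\,\ddot{x}(t) \;+\; \int_{-\infty}^{t}\! dt'\,\mu(t-t')\,\dot{x}(t')
  \;+\; V'(x) \;=\; F(t),
  \qquad
  \mu(t) \;=\; \sum_j m_j \omega_j^2 \cos(\omega_j t)
\]
% F(t) is the operator-valued random force; its correlations are fixed by
% the fluctuation-dissipation theorem quoted earlier in the thread.
```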
For me, the fact that heat-bath renormalization effects can be calculated "backwards and forwards" helped clear up a lot of conceptual issues that troubled me when I was a graduate student. Those features of renormalization that seemed mysterious when they were "calculated forwards" became transparent when they were "calculated backwards" and vice versa.
That's my experience, anyway, and for me, a major reason why there's fun in heat-bath renormalization physics.
Hi Dave! The above post by "geciktirici" seems to be the product of a clever spam robot -- as judged by where I was taken when I clicked the author's link.
Either that, or the spam robots have achieved sapience and are taking an interest in QIT? :)
Dear John, all,
Is it possible to tell how the "very-high-frequency fluctuations of the bath" in the more realistic physical descriptions of noise from John Preskill's talk translate back to the more abstract qubits/gates/errors description?
Dear Gil
I will tell you how I think about qubit renormalization, with the caveat that there are no startling revelations in what I am about to review.
The first point is to recognize that even a two-state atom in an idealized ion trap is in contact with an infinite zero-temperature reservoir---the vacuum. And from decades of work in QED, we know the vacuum reservoir alters the mass and magnetic moment of the electron (the charge is not renormalized, but that is a technicality). The reservoir also induces dynamical damping, since the excited state of the atom can (and does) decay by emission into the vacuum.
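For concreteness, the standard one-loop QED result for the electron's cutoff-dependent mass shift (leading logarithm only; this is textbook material, not anything specific to the talks):

```latex
% One-loop electron mass renormalization in QED with ultraviolet cutoff
% \Lambda (leading logarithm; standard textbook result).
\[
  \delta m \;=\; \frac{3\alpha}{2\pi}\, m \,\ln\!\frac{\Lambda}{m}
  \;+\; \text{(finite terms)}
\]
% The logarithmic divergence as \Lambda \to \infty is the QED analog of the
% divergent qubit "dressing" discussed earlier in the thread.
```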
From a quantum computing point of view, these effects are indeed important -- qubit gates must be designed using the physical mass and magnetic moment of the atom, and the dynamical damping is a source of error that must be error-corrected.
Now, what happens when we (a) introduce one (or more) additional qubits into the vacuum, and (b) give the vacuum a more complex structure (like physical cavity walls)?
Well, all kinds of complicated things happen! All of them are of practical importance in quantum computing. A good place to read about this is Lowell Brown's Reviews of Modern Physics article (BibTeX appended).
For example, if there are *many* qubits in the excited state, then potentially we've built not only a quantum computer but a laser too ... and the qubit errors will become strongly correlated.
As another example, two interacting qubits see not only each other, but also "image qubits" that appear in the electrodes.
Just to be clear, I *don't* regard these effects as show-stoppers ... but there are a lot of them ... and it will take extraordinarily sophisticated engineering to recognize them and design around them.
Definitely, much good physics is here.
@article{Brown:86,
  author  = {L. S. Brown and G. Gabrielse},
  title   = {Geonium theory: physics of a single electron or ion in a {P}enning trap},
  journal = {Reviews of Modern Physics},
  year    = {1986},
  volume  = {58},
  number  = {1},
  pages   = {233--313},
}
Dear Gil
After writing the above, I looked around and found your QEC 2007 talk on-line. May I say that, IMHO, it outlines a very important and interesting research program, one that might be viewed as reviewing and extending existing renormalization theory to encompass issues relating to quantum entanglement.
That's a wonderfully interesting program IMHO. For those interested, many of the QEC 2007 talks are on-line here; they are excellent.
Thanks a lot, John!
There is a fascinating winter workshop here at the Hebrew University on Condensed Matter Physics and Quantum Information with Cold Atoms. It gives another demonstration that, regarding the "winning the battle" point Dave mentioned in an earlier post, the concepts and insights of quantum information and quantum computation are "winning" in many areas of physics. This is a well-deserved victory. (It is not the same as the issue of computationally superior quantum computers, but it was certainly boosted by that possibility.)
Quantum circuits (even unprotected ones) give much wider possibilities for modeling and simulating quantum systems. Insights regarding computation (for example, the fact that describing the ground state can be computationally infeasible) are also getting through.
Here are three further "soft" remarks regarding QEC.
1) John Preskill wrote: "there is at least a logical possibility that over the course of a long computation the environment could be driven to a state far from its initial state."
Another logical possibility (or just another way to think about the possibility raised by John) worth mentioning is this:
When we wish to prescribe an evolution of a quantum system (and, say, we assume we follow the prescription up to a small amount of error), the errors in the course of the computation can reflect not only the past but also the future. (Perhaps there is even a symmetry between the roles of the initial and the terminal states.) For example, when we have two photons and we prescribe them to interact in the future, following the prescription may already drive the environment to a different state at present.
2) Another remark is that we can think about two possible "worlds":
The Feynman-Deutsch (FD) world: In this world we have full scale quantum computation.
The Unruh (U) world: Realistic evolutions of quantum systems behave like unprotected quantum computers.
In both worlds we can suppose that every evolution of a quantum system in nature can efficiently be simulated by a quantum computer.
Look at the beautiful picture drawn by Daniel Gottesman in the second slide of his QEC07 lecture: it is a picture of Feynman-Deutsch islands protected by QEC and FT "walls," living in a large U-world, or, as Daniel calls it, "the desert of decoherence."
(This picture already raises some difficulty regarding the need for full-scale QC for simulating the evolution of realistic quantum processes - if these realistic quantum processes are themselves unprotected. I am not sure if there are convincing examples of nature-made "QEC or FTQC walls," but there are a few very interesting suggestions regarding possible examples.)
Regardless of the possibility of FD-islands, the U-world is interesting to study. I suspect it is rich enough to describe known quantum phenomena, and studying it may lead to general principles on errors and decoherence.
Understanding decoherence in the U-world can be related to the "walls" needed for these FD-islands, if they exist - maybe even to an understanding that no walls will do the job.
3) Regarding the connection with perturbation methods, renormalization, QED, etc., that John Sidles mentioned: unfortunately, I know too little about these matters. If it is possible to translate the arguments about realistic scenarios that John Preskill talked about back into the qubits/gates model, to demonstrate the perturbation/renormalization methods, and to explain what the "ultraviolet explosion" is in the abstract setting of qubits/gates/errors, that would be very interesting. (And another triumph for Deutsch.)
Thanks John Sidles: definitely that was spam. Didn't notice it on first read.
Thanks a lot