One of the coauthors of the paper which I claimed was shoddy has written a comment on the original post. Which merits more commenting! But why reply in the comment section when you can write a whole blog post in response?
The paper in question is arXiv:0804.3076, and the commenter is George Viamontes:
Dave, this is a complete mis-characterization of the paper. Before I start the rebuttal, I’ll add the disclaimer that I am the second author of the paper, and would be more than happy to clarify this work to anyone.
We are absolutely well aware of the threshold theorem and we even cite one of Preskill’s post-1996 papers that directly discusses the threshold result of Knill and Laflamme.
Aware, yes, but did you read Knill, Laflamme, Zurek, Aharonov, Kitaev, Preskill, Gottesman, Aliferis, Reichardt, Shor, DiVincenzo, and all of the other papers which construct the theorem? The paper clearly disregards how these results work. But putting that aside:
The problem with the threshold theorem for QECCs is that it only applies to errors that are *orthogonal* to the code basis. This is the whole point of all of the QECCs I’ve seen based on codewords, and it is explained quite clearly in Preskill’s paper.
Of course QECC does not perfectly fix all errors. That isn't the point of error correction. The point of error correction is that you can reduce the error rate in a way that scales exponentially better as you use larger and larger codes. This is also true for classical codes, of course: if I independently flip each bit of a three-bit repetition code with probability p, then even given perfect encoding and decoding, there is a probability p cubed error which acts on the logical code space (worse, there is a probability p squared uncorrectable error). The whole point of error correction is that you can use good codes, or concatenate, and if your original error rate is p for a bare qubit or bit, then it is possible to make the failure rate some q raised to the nth power, where n is the number of qubits. If you use concatenation this is easy to see: by encoding twice, for example, your error rate goes not from p to p squared, but from p to p to the fourth power. So for the errors which concern you, the point is that if you concatenate your seven-qubit code, or use better codes, and your overrotation is below a certain threshold, the error correction failure rate will be exponentially suppressed in the number of qubits.
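To make that scaling concrete, here is a little sketch (my own toy calculation, not anything from the paper) of the exact failure probability of the classical three-bit repetition code under independent bit flips with probability p, and of concatenating the code once:

```python
def rep3_failure(p):
    # A 3-bit repetition code with majority-vote decoding fails when
    # 2 or 3 of the bits flip: 3*p^2*(1-p) + p^3 = 3p^2 - 2p^3.
    return 3 * p**2 * (1 - p) + p**3

p = 0.01
level1 = rep3_failure(p)        # ~3p^2: quadratic suppression, ~2.98e-4
level2 = rep3_failure(level1)   # one level of concatenation: ~p^4 scaling, ~2.7e-7

print(p, level1, level2)
```

Each concatenation level squares (up to a constant) the failure rate, which is the doubly-exponential suppression the threshold theorem exploits.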
It seems that you didn’t read the first three formulas at the top of page 4 of our paper or the description of the error model we use on pages 11 and 12.
Actually I did, BTW
We model gate imprecision, which adds a random epsilon of rotation error to each gate (in lay terms you add a small random epsilon error value to the sin/cos parameters of the matrix representation of the operators). Forgetting simulations for the moment, what happens with the gate imprecision model is that two types of error are created, and only one of these is corrected by a QECC. Indeed, some non-zero amplitudes that are associated with invalid codeword states will manifest when applying an imprecise gate. These errors will be caught by a QECC since this portion of the state is orthogonal to the codeword basis. However, another error occurs which over- or under-rotates the part of the state associated with valid codewords. This is exactly what is shown in the first three formulas on page 4 of our paper. These errors will not be detected by a QECC, and these are precisely the errors we are talking about.
You seem to think that the point of QECC is perfect correction. This is obviously impossible (classically as well as quantumly). The entire point of error correction is to reduce the rate of errors in a manner which scales nicely. See above. This is a basic and essential point of nearly all ideas for fault-tolerant quantum computing. Indeed it is a basic idea of error correction, period.
For example, suppose we say that |L0> and |L1> are our logical codewords for 0 and 1 (let’s say they are 7-qubit CSS code registers initialized appropriately). Further suppose we start off with a valid error free state a|L0> + b|L1>. Now suppose we transversally apply a logical X gate to this state, well then we’d expect to get a|L1> + b|L0>. However, if there is gate imprecision error in each of the physical X operators applied transversally (exact precision is physically impossible), then you will get something like c|L1> + d|L0> + f|E>, where |E> is an invalid codeword basis vector. Certainly f|E> will be eliminated by a QECC if the magnitude of that error is large enough. However, even after correcting this error, the amplitudes for the |L1> and |L0> portions of the state will be slightly off from a and b because part of the action of an imprecise gate is to under- or over-rotate those basis vectors.
Of course. The real question is how this uncorrectable error scales as you do something which nearly everyone does, like concatenate codes, or use codes designed to correct more errors.
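As a back-of-the-envelope illustration (mine, not the paper's), here is what a single over-rotated X gate does to a bare qubit. The per-gate error probability goes like epsilon squared; if such errors hit the qubits of a distance-3 code independently, the leading uncorrectable contribution goes like epsilon to the fourth. The independence assumption is exactly what careful threshold analyses scrutinize, so treat this as a sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

def overrotated_x(eps):
    # exp(-i*(pi+eps)*X/2): an X gate with a small rotation error eps.
    theta = np.pi + eps
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

eps = 0.01
out = overrotated_x(eps) @ np.array([1, 0], dtype=complex)  # act on |0>
ideal = np.array([0, 1], dtype=complex)                     # ideal X|0> = |1>

fidelity = abs(np.vdot(ideal, out))**2   # overlap with the intended output
p_err = 1 - fidelity                     # per-gate error: sin^2(eps/2) ~ eps^2/4

# Leading-order uncorrectable rate for a code correcting one error,
# if (big if) the per-qubit errors act independently: ~3*(eps^2/4)^2.
p_logical = 3 * p_err**2
print(p_err, p_logical)
```

The point being argued: error correction trades an epsilon-squared error for an epsilon-to-the-fourth one, and concatenation keeps squaring it.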
In other words, in the case of a logical X operator, this is a lot like applying “too much” or “too little” legitimate X rotation, which essentially amounts to a circuit that is fault-free according to a QECC but functionally slightly different from the real fault-free circuit with no under- or over-rotation. So yes, part of the error is eliminated, but not the part that affects the valid codeword basis vectors.
Again, no argument there. But you're missing the point of error correction.
To address your point about our simulations, the diagrams you are referring to are merely there to show how the codewords are functionally created and how error correction is done. As noted in the text, we merely point out that regardless of how you want to implement the codeword initialization or syndrome measurement, all the physical operators involved will be imprecise.
Of course. But why do you need to point this out? This has been a staple of people studying quantum computing for over a decade now. The line in your paper in which you say that decoherence has been the main concern of the community is, in my opinion, a really, really, really misinformed statement.
More importantly, at the bottom of page 12, as we try to explain in the text it’s really the transversal X gates we apply that carry the gate imprecision error. As X gates are transversal, these are being applied fault-tolerantly, albeit with operator imprecision on each X operator. We’ve run the simulation both ways assuming a clean initialization of a logical 0 and even clean error correction machinery and assuming faulty versions of each using the simple functional circuit which indeed does not incorporate extra fault-tolerance. I think you are assuming we only did the latter, but both versions exhibit the same linear trend in the probability of error increasing, where in both cases the probability of error is calculated by directly comparing to the amplitudes of fault-free operation.
Again, it is not surprising to find these errors. But your paper makes absolutely no mention of the fact that you did not actually use the circuits you constructed. And again, this makes no difference to the fact that you completely ignore how quantum error correction is used in the threshold theorem.
Indeed, we can be more clear about the fact that even with fault-free state initialization and fault-free syndrome measurement/recovery, the trend is the same. This is just a first draft posted on arXiv after all, not a camera-ready copy.
Fine, but do realize that, for many, posting on the arXiv is a de facto publication.
The results of merely using faulty transversal X gates and assuming everything else is error free are particularly telling because it seems to show that the under- and over-rotations on the valid codeword states do accumulate, and they are spread across all the physical qubits since they are not orthogonal to the codewords. This coupled with the fact that in *all* cases we provide pristine ancillary 0 qubits for each syndrome measurement shows that this experiment clearly gives the benefit of the doubt to the QECC being simulated.
Again: even if quantum error correction is done correctly, it can never be perfect. The point of error correction is that it can lower error rates. This is a fundamental, and basic, observation that you have totally overlooked.
This work is also merely a first step in that we intend to simulate other circuits and other QECCs with many different parameters. The problem is that this subset of the errors not caught by a QECC seem difficult to quantify, which spurred us to propose simulation to analyze this problem.
Well now here you are completely ignoring the fact that these errors have been considered. Indeed, there are even beautiful papers where rigorous, provable thresholds follow for handling these errors (see for example Aliferis, Gottesman, and Preskill quant-ph/0703264, or the thesis of Aliferis quant-ph/0703230 for the latest and greatest). I know that these are not the most accessible papers, but even a cursory reading of them would inform you that they are considering exactly the type of error which you wrongly claim dooms quantum computing.
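Schematically, the threshold theorem says that for a code correcting a single error, L levels of concatenation take the error rate from p to p_th*(p/p_th)^(2^L), provided p is below the threshold p_th. A tiny sketch of the bookkeeping (the numbers below are illustrative, not actual threshold values from those papers):

```python
def logical_error(p, p_th, L):
    # Schematic concatenation bound from the threshold theorem for a
    # code correcting a single error per block.
    return p_th * (p / p_th) ** (2 ** L)

def levels_needed(p, p_th, target):
    # Smallest concatenation level L whose logical error rate is at
    # or below target; assumes p < p_th, otherwise this never halts.
    L = 0
    while logical_error(p, p_th, L) > target:
        L += 1
    return L

p, p_th = 1e-4, 1e-3   # illustrative physical error rate and threshold
print(levels_needed(p, p_th, 1e-15))  # doubly-exponential suppression: few levels suffice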
In every case we expect the non-orthogonal errors to accumulate just as they do in this simulation even when assuming error-free QECC machinery. We are definitely open to constructive feedback about this topic, and we posted this paper with the intention of getting such feedback.
However the claim that this work is bogus in any way is clearly ridiculous, and no offense intended, but it seems to have stemmed from not reading through the draft very closely.
Dude, I’m tearing apart your paper on a blog. Of course I’m not offended. You’re the one who has every right to be offended. And indeed my first reading saw the errors you were considering (which have been considered before), your circuit diagrams (which are not fault-tolerant and are highly misleading if you didn’t actually use them), and your conclusion (that logical operations with over-rotations still cause errors: of course they do; the point is how error correction reduces these errors as the number of qubits in your error-correcting code grows), and I knew immediately that you were missing the point of what had previously been done. I did focus on your not using fault-tolerant circuits, because that was the most obvious thing you did wrong, but really you’re also missing a very basic point of how the entire edifice of fault-tolerant quantum computation works. The fact that you state right up front things like “In fact quantum error correction is only able to reliably suppress environmental and operator imprecision errors that affect only one or a few physical qubits in a Quantum error-correcting code (QECC) code block” tells me that you haven’t gone carefully through the literature on fault tolerance.
We certainly should clarify those diagrams and the fact that we don’t introduce errors into the QECC machinery in the actual simulations, since that makes it clear that we are not hiding errors in some way to sabotage the CSS scheme. We are working on a subsequent version to clarify this part of the discussion, but the ultimate results are the same.
While it is fine to say “this is just a preprint, who cares,” you have to realize that posting things to the arXiv does have an effect beyond just putting your idea out to others. In particular, as I noted originally, papers like this immediately get picked up by those who haven’t even done the work you’ve done and learned quantum error correction. I mean, if I were going to write a paper saying that an entire subject area which well over a thousand people have been working on for over a decade is fundamentally broken, I might, before I post the paper to the arXiv, run it by a few people who know the field. Me, I’m grumpy and old and mean, but the rest of the quantum computing field is full of really nice people, like Preskill, DiVincenzo, Shor, Knill, Laflamme, Zurek, Gottesman, Aliferis, etc., who I’m sure would give you great feedback.