Self-Correcting Quantum Computers, Part IV

Quantum error correction and quantum hard drives in four dimensions. Part IV of my attempt to explain one of my main research interests in quantum computing.
Prior parts: Part I, Part II, Part III.

Quantum Error Correction

Classical error correction worked by encoding classical information across multiple systems and thus protecting the information better than if it were stored in a single system. Fault-tolerant techniques extend these results to the building of actual robust classical computers. Given that quantum theory seems to be quite different from classical theory, an important question to ask is whether the same can be achieved for information encoded in a quantum manner. The answer to this question, of whether quantum information can be successfully protected even when the quantum system being used is exposed to unwanted evolutions, is one of the great discoveries of quantum computing. In 1995, Peter Shor and Andrew Steane showed that, with some clever tricks, one can perform quantum error correction and therefore preserve quantum information by suitably encoding it across multiple independently erring systems. This was a remarkable result, and it was the beginning of a series of important discoveries about quantum information which showed how a reliable quantum computer was possible, in spite of the seemingly odd nature of quantum information.
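
For concreteness, here is a minimal numerical sketch of the simplest instance of this idea, the three-qubit bit-flip code (a toy that protects only against bit flips, not the full codes that Shor and Steane constructed):

    # Toy three-qubit bit-flip code: encode, inject an error, measure the
    # parity-check syndrome, and correct. Protects against X errors only.
    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def kron(*ops):
        out = np.array([[1.0 + 0j]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # Encode a|0> + b|1> as a|000> + b|111>
    a, b = 0.6, 0.8
    psi = np.zeros(8, dtype=complex)
    psi[0b000] = a
    psi[0b111] = b

    # An unwanted bit flip on the middle qubit
    psi = kron(I2, X, I2) @ psi

    # Syndrome: parity checks Z1Z2 and Z2Z3 (measured here as expectation
    # values, which suffices because the corrupted state is an eigenstate)
    s1 = np.real(psi.conj() @ kron(Z, Z, I2) @ psi)   # -1 means qubits 1,2 disagree
    s2 = np.real(psi.conj() @ kron(I2, Z, Z) @ psi)   # -1 means qubits 2,3 disagree

    # The syndrome pattern points to the flipped qubit
    if s1 < 0 and s2 < 0:
        correction = kron(I2, X, I2)   # middle qubit flipped
    elif s1 < 0:
        correction = kron(X, I2, I2)   # first qubit flipped
    elif s2 < 0:
        correction = kron(I2, I2, X)   # third qubit flipped
    else:
        correction = kron(I2, I2, I2)  # no bit flip detected
    psi = correction @ psi

    print("recovered amplitudes:", psi[0b000], psi[0b111])  # back to a, b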

[Figure: The threshold theorem for fault-tolerant quantum computing moves the model of quantum computation from totally crazy to fundable.]

The culmination of research in quantum error correction is usually expressed in terms of a result known as the threshold theorem for fault-tolerant quantum computing. This result states that if a quantum system can be controlled with enough precision, and does not interact with its environment too strongly (both below a threshold), then arbitrarily long quantum computations can be enacted with a cost which scales efficiently with the size of the desired computation. The threshold theorem essentially states that the model of quantum computing is emphatically not the model of an analog computer, and that, assuming we understand how quantum theory and physics work, a large scale quantum computer is possible. That being said, the conditions of the threshold theorem are severe. While small scale quantum computers of a few qubits have been successfully demonstrated, none of these systems has been easily scaled up to the large scale needed to test the threshold theorem, nor are all of the demonstrated quantum computers below the threshold values in terms of control of the quantum system.
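
The efficiency claim comes from the standard concatenation argument: below threshold, each level of encoding roughly squares the ratio of the error rate to the threshold. Here is a toy calculation (illustrative numbers only, not tied to any particular code or threshold value):

    # Standard concatenation scaling: below threshold the logical error rate
    # is suppressed doubly-exponentially in the number of levels,
    #   p_k ~ p_th * (p / p_th)^(2^k)
    p_th = 1e-2          # assumed threshold value, purely illustrative

    def logical_error_rate(p, levels):
        """Logical error rate after `levels` rounds of concatenation."""
        p_k = p
        for _ in range(levels):
            p_k = p_k**2 / p_th    # same as p_th * (p_k / p_th)**2
        return p_k

    for p in (5e-3, 2e-2):         # one rate below threshold, one above
        rates = [logical_error_rate(p, k) for k in range(5)]
        print(f"p = {p}: ", ["%.1e" % r for r in rates])

Below threshold the printed rates plummet with each level; above threshold they grow, which is why the threshold condition is the whole game.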

Quantum Hard Drives in Four Dimensions

Given that quantum error correction is possible, a natural question to ask is whether, as with classical computers, there exist, or we can engineer, quantum systems which robustly store quantum information via their own physics. The first to suggest that this might be a viable path toward constructing a quantum computer was Alexei Kitaev. Kitaev suggested that there were certain physical systems, related to topological field theories, where one could encode quantum information into their ground states, and an energy gap would protect this quantum information from certain errors. The models Kitaev considered could be made into robust storage devices, but they were not, by their physics alone, fault-tolerant: while enacting a computation, Kitaev's original models were not robust to error. A way around this, however, was found: if instead of using Kitaev's model in the two spatial dimensions originally considered, one looks at these models in four spatial dimensions, then the resulting physical system is self-correcting and fully fault-tolerant due to the physics of these devices. This model, considered by Dennis, Kitaev, Landahl, and Preskill, was, in essence, a recipe for constructing a four dimensional quantum hard drive. However, unfortunately, we do not live in a four dimensional world, so this model is not realistic.
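
For concreteness, here is a toy construction of the checks of Kitaev's 2D model (the toric code), written from scratch rather than taken from any of the papers above: qubits sit on the edges of a periodic L x L lattice, X-type "star" checks surround each vertex, Z-type "plaquette" checks surround each face, and every star commutes with every plaquette, which is what lets the ground space store quantum information.

    # Build the star and plaquette checks of the 2D toric code and verify
    # that they all commute (each star and plaquette share 0 or 2 edges).
    import numpy as np

    L = 4
    n = 2 * L * L                     # one qubit per edge

    def h_edge(i, j):                 # horizontal edge leaving vertex (i, j)
        return (i % L) * L + (j % L)

    def v_edge(i, j):                 # vertical edge leaving vertex (i, j)
        return L * L + (i % L) * L + (j % L)

    stars, plaquettes = [], []
    for i in range(L):
        for j in range(L):
            # X-type star: the four edges touching vertex (i, j)
            s = np.zeros(n, dtype=int)
            for e in (h_edge(i, j), h_edge(i, j - 1), v_edge(i, j), v_edge(i - 1, j)):
                s[e] = 1
            stars.append(s)
            # Z-type plaquette: the four edges bounding the face with corner (i, j)
            p = np.zeros(n, dtype=int)
            for e in (h_edge(i, j), h_edge(i + 1, j), v_edge(i, j), v_edge(i, j + 1)):
                p[e] = 1
            plaquettes.append(p)

    # An X-type and a Z-type Pauli commute iff they overlap on an even number
    # of qubits, i.e. iff the mod-2 inner product of their supports vanishes.
    overlaps = np.array(stars) @ np.array(plaquettes).T % 2
    print("all stabilizers commute:", not overlaps.any())   # expect True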

Toward Quantum Hard Drives in Real Systems

[Figure: A potential quantum memory.]

So, given that we know that fault-tolerant quantum computation is possible under reasonable assumptions, and that there exist models of physical systems which enact these ideas in a natural setting, an important, and vexing, problem is whether we can engineer realistic physical systems which enact these ideas and which don't have bad properties, like only existing in four spatial dimensions. This is the focus of my own research on "self-correcting" quantum computers: to develop techniques for building quantum computers whose physical dynamics enacts quantum error correction and which therefore don't need an active quantum error correcting control system. One of our proposed systems is the three dimensional system pictured above. For details on this model, see arXiv:quant-ph/0506023.
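
A rough way to see what self-correction buys you is an Arrhenius-style estimate (a back-of-the-envelope sketch with made-up numbers, not a calculation from the model in that paper): the memory lifetime grows roughly like exp(barrier / k_B T), and the point of models like the four dimensional toric code is that the energy barrier grows with the linear system size, whereas in the two dimensional toric code it does not.

    # Arrhenius estimate of memory lifetime: lifetime ~ exp(barrier / T),
    # up to a prefactor. Constant barrier (2D-toric-code-like) versus a
    # barrier growing with linear size L (4D-toric-code-like).
    import math

    gap = 1.0          # energy gap, in units where k_B = 1 (illustrative)
    T = 0.25           # temperature in the same units

    def lifetime(barrier, T):
        return math.exp(barrier / T)

    for L in (4, 8, 16, 32):
        barrier_2d = gap             # constant barrier
        barrier_4d = gap * L         # barrier grows with system size
        print(f"L={L:3d}  2D-like lifetime ~ {lifetime(barrier_2d, T):.1e}   "
              f"4D-like lifetime ~ {lifetime(barrier_4d, T):.1e}")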

So, is it possible that building a quantum computer will be done via self-correcting quantum computers? At this point we don't know the answer. We have a few good examples of such self-correcting systems, but none of them, as of yet, is completely reasonable. But if such systems exist with reasonable interactions and geometries, this might present a different method for building a quantum computer than the path pursued by the majority of the quantum computing community. In other words: high risk, high reward.

Comments

Great series. Everything up to this article describes classical error correcting methods, correct? (Classical meaning the two non-quantum examples you start with.) The comments seem otherwise, and it is confusing me.

All of "classical computing" basically rests on that principal of self organization leading to highly detectable "boundaries" of states that you describe for disks. Kind of the emergent properties so often thrown about. With that we can ignore the physics underneath once we've got it working. So we can talk about domains in discs or flip-flops without really losing anything needed.

Qubits seem to be the high level thing added by quantum computing, analogous to a flip-flop. Now my question about them is: are they more like the domains on a disk (or, say, a dynamic memory cell), or are they more like flip-flops?

That is, reading about QC it seems like the goal is more like: have an array of qubits, write some states into them, then go away, let the system evolve, come back, and read them out. This is more like a dynamic memory cell or disk domain, as opposed to having a layer of logic gates, adders, fp registers, memory, whatever, built of flip-flops.

I guess I am just missing that intermediate layer of organization; it must be there.

Of course, I am biased towards Preskill, given my Caltech degrees, back in the Feynman/Gell-Mann era, and Preskill's great online tutorials.

"a recipe for constructing a four dimensional quantum hard drive. However, unfortunately, we do not live in a four dimensional world, so this model is not realistic."

Ummmm... how many dimensions does the world actually have? With what signature? Are you 100% sure? I kind of like the speculations on closed timelike curves in computation (done seriously by Scott Aaronson and fictionally by Charles Stross).

And who's to say that, after nanotechnology, we won't do Picotechnology, Attotechnology, and somewhere along the way start building computers out of loop quantum gravity thingies, or strings, or pregeometry, as Greg Egan has fictionalized.

I need to justify all the time I'm spending gluing together 4-simplexes (pentatopes) and straining my brain trying to Wick rotate them from Euclidean space to Minkowski space.

How much do we really know about Turing machines whose tapes or heads are Lorentz transformed?

Dave: Following on Jonathan's post, there is a "4 spatial dimensions + 0 time dimensions == 3 spatial dimensions + 1 time dimension" equivalence, obtained by exploiting the equivalence of the evolution operator in 3+1 D quantum mechanics and classical 4+0 D statistical mechanics (i.e. imaginary time is identified with inverse temperature). Maybe you could encode something in 4+0 stat mech and then map back to 3+1 QM?
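
Schematically, the standard identification being invoked is (sketched here just for concreteness):

    U(t) = e^{-iHt/\hbar}
        \;\xrightarrow{\; t \,\to\, -i\hbar\beta \;}\;
    e^{-\beta H},
    \qquad
    Z = \operatorname{Tr} e^{-\beta H}
      = \lim_{N\to\infty} \operatorname{Tr}\left(e^{-\beta H/N}\right)^{N},
    \qquad
    \beta = \frac{1}{k_B T},

and inserting complete sets of states between the N imaginary-time slices is what turns a d-dimensional quantum partition function into a (d+1)-dimensional classical one.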

Also, while I agree 100% with the direction you're going with this (build QCs whose natural evolution passively protects against errors), I think you're going about it the wrong way. If your objective is to actually build a real QC, your starting point has to be the actual devices and architectures you can build in real life, i.e. you have to start with some set of real physical Hamiltonians of actual systems that can be built, including the actual environments they sit in. This then constrains everything that happens after. You aren't really free to suggest a Hamiltonian or method of use that is not in that original set if you want to really build a machine.

Your ideas are exactly in the right direction, in my opinion. But in order to be actual prescriptions for building real machines, you need to further constrain to the set of really achievable Hamiltonians (which of course includes realistic environments).

Dave, I want to add my voice to the (many) folks who think this was an outstanding series of posts. Thank you!

One thing these posts did was sharpen my appreciation that the boundary between classical and quantum computation is becoming markedly less distinct. This unifying trend is becoming apparent simultaneously in the mathematics, the physics, and the engineering of modern memory technologies.

To reflect back the point your posts were making (as I understood it ... with an engineering spin), ordinary everyday computers use *two* strategies for preserving memory integrity.

One strategy is passive: let the ambient thermal environment protect the bits ... this is what disc memory does. The other strategy is active: (1) observe the bits continuously, (2) apply error-correction continuously, and (3) concatenate levels of error-correction to achieve arbitrarily low error rates. This is what SRAM memory does.

QIT teaches us a tremendously powerful insight: the above two strategies are fundamentally identical. The proof is simple: if we describe the passive strategy in terms of a set of Lindblad generators for noise and damping, and we describe the active strategy in terms of a set of Lindblad generators for measurement and (stateless) control, then we find that each set of Lindblad generators can be written in terms of the other set.
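
For concreteness, here is a minimal numerical sketch of what a "Lindblad generator for noise and damping" looks like for a single qubit (it illustrates the machinery only; it does not by itself establish the passive/active equivalence):

    # Single qubit with an amplitude-damping Lindblad generator:
    #   d(rho)/dt = -i[H, rho] + L rho L^dag - (1/2){L^dag L, rho}
    import numpy as np

    H = np.zeros((2, 2), dtype=complex)            # no Hamiltonian, damping only
    gamma = 1.0                                    # damping rate
    L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_minus

    def lindblad_rhs(rho):
        comm = -1j * (H @ rho - rho @ H)
        diss = (L @ rho @ L.conj().T
                - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
        return comm + diss

    # Start in the excited state |1><1| and integrate with small Euler steps
    rho = np.array([[0, 0], [0, 1]], dtype=complex)
    dt, steps = 1e-3, 3000
    for _ in range(steps):
        rho = rho + dt * lindblad_rhs(rho)

    # The population of |1> decays like exp(-gamma * t)
    t = dt * steps
    print("numerical  P(|1>):", rho[1, 1].real)
    print("analytic   P(|1>):", np.exp(-gamma * t))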

So we might as well imagine that each bit on the disc drive is protected by a measurement-and-control circuit ... which the clever disc-drive designers have arranged to let Nature provide for us!

Equally, we might as well conceive of SRAM cells as 6-state quantum circuits whose architecture is inherently error-protected against bit-flips induced by (damped and noisy) Lindbladian contact with a thermal reservoir.

From this point of view, the physics-protected memories that your posts envision exist already, in the form of SRAM memory. Now if only SRAM architecture corrected phase errors in addition to bit errors ... then quantum computers would be good-to-go ... hmmmmmm ...

Nowadays, as memory technology approaches quantum limits, it is not only permissible to think of memory in these quantum ways ... it's mandatory! :)

Hey Geordie (sorry for the slow reply...hard to reply while hiking around Greece :) )

I agree with you that staying close to what is experimentally possible is of the utmost importance. However, if I may put on a theoretician's hat, I would also like to suggest that there are often great benefits to looking at the bigger picture and trying to understand if and when self-correction is possible at all. Right now we only have the toric code in four spatial dimensions as an example. Moving this toward examples which are more reasonable (say, using only two-qubit interactions in your Hamiltonian, and working in at most three spatial dimensions) is, I think, a reasonable approach to making this experimentally feasible. Of course, even having moved the goal post toward "reasonable" Hamiltonians doesn't solve the problem of how to implement this in a real experiment. But it would sure be closer than our current state.

This isn't to say that the reverse approach, of working with the details of the physics to engineer devices which are resistant to decoherence, noise, and lack of control, can't be fruitful. However, on a deep level, I definitely feel that the problem of building noise-resistant quantum computers requires at least some of the major ideas of quantum error correction or quantum control. Thus it seems hard to imagine wandering around with the messy physics of a device and stumbling upon these big ideas.

Dave Bacon says: "It seems hard to imagine wandering around with the messy physics of a device and stumbling upon these big ideas."

Dave, what you say surely is a Great Truth. Which means (according to Bohr's Principle) that its exact opposite is also a Great Truth! :)

The following quote from John Bardeen expresses that complementary Great Truth: "Invention does not occur in a vacuum. [...] Most advances are made in response to a need, so that it is necessary to have some sort of practical goal in mind while the basic research is being done; otherwise it may be of little value. [That is why] there is really no sharp dividing line between basic and applied research."

Now, I know what you're thinking ... if Bardeen's pragmatic principle *really* worked, wouldn't Bardeen have won more Nobel Prizes in physics than just two? :)

If we balance these two Great Truths, we are left with Terence Tao's observation that sometimes progress flows from the concrete to the abstract, other times the opposite.

By the default notion of dimensionality, it was no easier to build geometrically 4-D hard drives earlier in the history of the cosmos.

Consider Buettner et al., "Review of Spectroscopic Determination of Extra Spatial Dimensions in the Early Universe" (16 Dec 2003), http://arxiv.org/PS_cache/astro-ph/pdf/0312/0312425v1.pdf :

"We live in a four dimensional space-time world. This has been checked experimentally with great precision [Zeilinger, Anton and Svozil, Karl, 1985, Phy. Rev. Lett., 54, 2553; Muller, Berndt and Schafer, Andreas, 1986, Phys. Rev. Lett., 56, 1215; 1986, J.Phys. A, 19, 3891.]. Nevertheless, in principle there does not seem to be a reason for this, and in fact the universe might have any number of dimensions. The physics of extra dimensions began with the work of Kaluza and Klein. They proposed uniting Maxwell's theory of electromagnetism and Einstein's theory of gravitation by embedding them into a generally covariant fivedimensional space-time, whose fifth dimension was curled up into a tiny ring which was not experimentally observable. More complicated non-abelian theories can be obtained in much the same way, by starting with more dimensions and compactifying them in various ways.
In recent years the idea of extra dimensions has been resurrected. The main reason is that the leading candidate for providing a framework in which to build a theory which unifies all interactions, superstrings, has been found to be mathematically consistent only if there are six or seven extra spatial dimensions. Otherwise the theory is anomalous...."

"The experimental spectroscopic data from ancient light shows that the dimension of space was 3 (present value) very soon after the Big Bang. The extra dimensions that some theories predict must either occur at very early time or somehow be restricted such that ordinary baryonic matter cannot couple to it."

This kind of analysis is why writing "hard Science Fiction" is hard.

And so is building self-correcting Quantum Computers.

For that matter, it's hard to build these gadgets unless they are very, very small, as stable 4-dimensional and 5-dimensional atoms are not easily obtained, and thus probably not allowed to be shipped by snailmail to labs in the USA.

Mario Rabinowitz, "No stable gravitationally or electrostatically bound atoms in n-space for n > 3"
Authors:
(Submitted on 27 Feb 2003 (v1), last revised 30 Mar 2003 (this version, v3))

Abstract: It is demonstrated in general that stable gravitational or electrostatic orbits are not possible for spatial dimensions n >=4, and in particular atoms cannot be bound by energy constraints in higher dimensions. Furthermore, angular momentum cannot be quantized in the usual manner in 4-space, leading to interesting constraints on mass. Thus Kaluza Klein and string theory may be impacted since it appears that the unfurled higher dimensions of string theory will not permit the existence of energetically stable atoms. This also has bearing on the search for deviations from 1/r^2 of the gravitational force at sub-millimeter distances. The results here imply that such a deviation must occur at less than ~ 10^-8 cm, since atoms would be unstable if the curled up dimensions were larger than this.
