What Do You Need to Make a Quantum Computer?

(This is the second of two background posts for a peer-reviewed research blogging post that has now slipped to tomorrow. I started writing it, but realized that it needed some more background information, which became this post. And now I don't have time to write the originally intended post...)

Making a quantum computer is a tricky business. Quantum computing requires the creation of both superposition states of individual quantum bits (in which the "qubit" is in some mixture of "0" and "1" at the same time) and entangled states of different qubits (states where the state of one qubit depends on the state of another, so that measuring one absolutely determines the value of the other). The only way to do this is to have some sort of controllable interaction between the qubits in your quantum computer. But once they're created, these states are extremely fragile, and the only way to keep them around long enough to finish a calculation is to shield them from interactions with the outside world. And, of course, at the end of the day, you need to be able to get your answer out of the quantum computer, which requires interaction with the world outside the machine...
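
To make those two ingredients concrete, here's a minimal toy sketch of my own in numpy (not tied to any particular hardware or library): it writes down a single-qubit superposition and a two-qubit entangled (Bell) state, then samples measurements to show that the entangled outcomes are individually random but perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit superposition: (|0> + |1>) / sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)
print("P(0), P(1) for the superposition:", np.abs(plus) ** 2)  # [0.5, 0.5]

# Two-qubit entangled Bell state: (|00> + |11>) / sqrt(2)
# Basis ordering is |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2

# Sample many joint measurements: each qubit's result is random on its own,
# but the two qubits always agree -- measuring one determines the other.
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)
agree = np.mean([o[0] == o[1] for o in outcomes])
print("fraction of shots where the two qubits agree:", agree)  # 1.0
```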

Finding the right sort of system, and getting everything to work in the right way, is an incredibly fiddly balancing act. You need just the right amount of interaction between the qubits, and just the right amount of isolation from everything else, and if you're too far off in either direction, the whole thing falls apart.

A few years back, a group from Los Alamos created a "Road Map for Quantum Computation," laying out the state of research and what you need to make a practical quantum computer in very clear language. The detailed notes about research are kind of dated now-- a lot has been accomplished since the 2004 publication of the "Road Map"-- but the list of qualities that a workable system needs to have is still as good a description as I've seen. This takes the form of five "criteria" originally listed by David DiVincenzo of IBM:

  1. A scalable physical system with well-characterized qubits.
  2. The ability to initialize the state of the qubits to a simple fiducial state.
  3. Long (relative) decoherence times, much longer than the gate-operation time.
  4. A universal set of quantum gates.
  5. A qubit-specific measurement capability.

These can probably use a little unpacking...

A scalable physical system with well-characterized qubits. A "qubit" is basically any quantum system that has two distinct states that can be used as the "0" and "1" for computation. These should ideally be long-lived states-- there's not much point to making a computer whose "bits" decay back to zero in a matter of nanoseconds-- and you would like it to be as close to a two-level system as possible. As Bill Phillips was once misquoted, "there are no two-level atoms, and sodium is not one of them," so it's almost impossible to find real two-level systems, but you can find lots of systems where there are two levels that are well away from any other states in the system, and that's good enough.

"Well-characterized" means that you should know what the states are, where they are, and what possible interactions they have. If there's some other state or outside factor that can affect the states, you need to know about it.

"Scalable" means that you need to be able to stick a lot of qubits together. The killer app for a quantum computer is the factoring of very large numbers for code-breaking, and this will require hundreds of qubits even under ideal circumstances (about 330 just to represent a 100-digit number to be factored), and probably thousands of qubits if you want to use error correction techniques to protect your calculation. A candidate quantum computer system that only allows a maximum of 10 qubits isn't much more than a toy.

The ability to initialize the state of the qubits to a simple fiducial state. The nature of quantum measurement means that once you start a computation, you can't just look at the system partway through and make sure it's working properly. All you can do is set the system up, and run through a prescribed series of operations, then look at the end result, and see if that's the answer you wanted.

As a result, it's absolutely critical to be able to "initialize" the quantum computer. You typically imagine setting the bits to "00000000..." but it could be anything at all-- "1101001010111....." is perfectly fine, as long as you know exactly what state you start with, and can get exactly that state every time you start out. If your initialization process doesn't give the same result every time, you'll never be sure whether you actually ran the right calculation.
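In state-vector language, "initialize to a simple fiducial state" just means every run starts from the same, exactly known vector. A toy illustration (my own, not any particular platform's API):

```python
import numpy as np

def all_zeros_state(n):
    """Return the 2^n-entry state vector for the fiducial state |000...0>."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0  # all of the amplitude on the |00...0> basis state
    return state

print(all_zeros_state(3))  # amplitude 1 on |000>, 0 everywhere else
```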

Long (relative) decoherence times, much longer than the gate-operation time. "Decoherence" refers to the tendency of quantum states to fall apart over time due to interactions with the environment. If you create a quantum superposition state consisting of some mixture of "0" and "1," and wait a while, you'll find that it ends up in either "0" or "1," with some probability. You need the superposition states to do quantum computing, so this is a Bad Thing.

In order to be able to complete a computation, you need to do lots and lots of operations, each of which takes some finite time (the "gate-operation time"). The time it takes a superposition of two qubit states to fall apart (the "decoherence time") needs to be longer than the time it takes to run all the operations for the calculation you want to do, otherwise your computer will lose its quantum nature partway through, and then you're screwed.
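The constraint is easy to state as a back-of-the-envelope calculation; the numbers below are made up purely to illustrate the ratio that matters, not taken from any real device.

```python
# Hypothetical numbers, chosen only for illustration.
decoherence_time = 100e-6   # seconds the superposition survives
gate_time = 50e-9           # seconds per gate operation

max_gates = decoherence_time / gate_time
print(f"~{max_gates:.0f} gate operations before coherence is gone")  # ~2000

# If your algorithm needs more gates than this, the computation loses its
# quantum character partway through.
```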

A universal set of quantum gates. "Gates" is a term of art meaning "operations on one or more qubits." A "universal set of gates" is a small collection of operations that can be combined to carry out any computation you want.

There are a couple of ways of doing this, but one example of a "universal set" of gates is a rotation operation (that takes a "0" to a "1" or any arbitrary superposition of "0" and "1") plus a "controlled-NOT" operation (that flips the state of a target bit only if a second, "control" bit is 1). If you have a way to do those two operations on your system, you can do absolutely any operation you want to do, in the same way that you can do any classical computing operation using NAND gates.
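As a concrete (toy) illustration of those two gates, here are their matrices in numpy, along with a check that a rotation on one qubit followed by a controlled-NOT turns "00" into an entangled state. This is just the standard linear algebra, not any specific hardware's gate set.

```python
import numpy as np

def rotation(theta):
    """Single-qubit rotation: takes |0> to cos(theta/2)|0> + sin(theta/2)|1>."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

# Controlled-NOT: flips the target (second) qubit only if the control qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0])   # both qubits start in "0"
I2 = np.eye(2)

# Rotate the control qubit into an even superposition, then apply CNOT:
state = CNOT @ np.kron(rotation(np.pi / 2), I2) @ ket00
print(state)  # [0.707, 0, 0, 0.707] -- the entangled (|00> + |11>)/sqrt(2) state
```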

A qubit-specific measurement capability. Once you've done the computation, you need to read the answer out. In order to do this, you need some way of measuring the state of each and every qubit individually. A computer that doesn't let you find out the result of the computation isn't much good.
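A small sketch of what "measure a specific qubit" means at the state-vector level (again, my own illustration): the probability of reading "1" on qubit k is the total probability of all basis states with a 1 in that position.

```python
import numpy as np

def prob_one(state, k, n):
    """Probability that measuring qubit k (0 = leftmost) of an n-qubit state gives 1."""
    probs = np.abs(state) ** 2
    return sum(p for i, p in enumerate(probs)
               if (i >> (n - 1 - k)) & 1)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(prob_one(bell, 0, 2))  # 0.5 (up to floating-point rounding)
print(prob_one(bell, 1, 2))  # 0.5
```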

There are a bunch of different schemes out there for doing quantum computation, none of which have managed to meet all of these criteria yet (to the best of my knowledge). All of them have different strengths and weaknesses, and some of them have obvious paths forward toward meeting these criteria, while others really have no clear road to progress.

At present, there is still no obvious perfect system for building a quantum computer, but I'll post some peer-reviewed blogging tomorrow talking about a particularly cool idea for a candidate system.

Here's a question about initialization. Let's say that we can initialize to the correct state, say, half the time. And let's say that the problem we're solving is in NP, so we can test if we got the correct answer quickly. (Factoring, for example -- we can test if an answer is correct by multiplication, on a classical computer.) Would this in some sense be "good enough" -- since we could run the factorization, and then it'll output the checkable right answer with probability 1/2, so on average we only have to run our computation twice to get a known right answer?

. . . and probably thousands of qubits if you want to use error correction techniques to protect your calculation.

As far as I know, in most systems currently discussed this is a necessity and not just a nice bonus. Maybe there are some systems with a big enough ratio of coherence times to gate times to actually do real calculations, but it seems like the holy grail is getting your coherence times long enough to satisfy the threshold theorem. The optimistically quoted figure I hear a lot is on the order of 10^5 gate operations within the coherence time; once you get there, error correction gets you the rest of the way.

I think error correction also typically involves making measurements along the way and then rejiggering how you proceed based on what you project to, which of course means you're going to need to be making pretty fast, high-accuracy measurements. In superconducting QC this can be a tricky issue in itself, as the fastest measurements kick you out of the logical basis, and nice projective measurements are frequently pretty slow.

Isabel:

A couple comments.

If you could initialize a single qubit correctly only half the time, you would have a problem. Why? Well, initializing each qubit correctly only half the time means that, over n qubits, you will correctly initialize all n of them only a fraction 1/2^n of the time. Thus your algorithm will only work correctly with an exponentially small probability. This is not good.

That being said, there are ways to "purify" properly initialized qubits out of imprecisely prepared ones. So you could take a bunch of imperfectly prepared qubits and distill out properly prepared qubits.

But more to the point, I think, of what you're asking: if you can solve a problem with only a 50 percent probability and you can verify whether the answer you got is right, is this _okay_? The answer is yes. Why? Because you can run the algorithm many many times. The probability that you will fail all k of those times is 1/2^k (remember you can verify whether the answer is correct or not), so the probability of getting the right answer at least once is 1-1/2^k. For k=20 the failure probability is about 10^{-6}. If you really really want to be sure, just set k=100, which gives you a failure probability of about 8 times 10^{-31}.
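
[A quick check of those numbers, as an illustrative Python one-liner:

```python
# Probability of never getting a verified right answer in k independent runs,
# each of which succeeds with probability 1/2:
for k in (20, 100):
    print(k, 0.5 ** k)
# 20  -> ~9.5e-07 (about one in a million)
# 100 -> ~7.9e-31
```
]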

bcooper:

The optimistic ratios for the threshold are around 10^2, but the schemes that achieve them require huge numbers of qubits and are not practical. More reasonable architectures yield ratios of 10^3-10^4.

Also note that for many systems right now it isn't necessarily the coherence time that is the issue, but the precision and noise induced by the unitary gates. Some implementations, like ion traps, have demonstrated all components of the threshold theorem below threshold, but not at the same time! The latter is the next big step for ion traps. Other systems (like some superconducting qubits) haven't yet reached this for all components.

Also, two points about the slow/fast measurement issue. A nice paper by DiVincenzo and Aliferis in PRL discusses how to quantum compute with slow measurements, and it actually (amazingly) turns out not to affect the threshold much (using some tricks the authors invented). Also, I think fast destructive measurements are fine if you can initialize again quickly.

A point along these lines which is often overlooked is that it is often very helpful to have a way to decohere extremely rapidly as well as a way to not decohere. This is because error correction is mostly about dumping entropy: if you have a way to dump quantum information into a system that sheds entropy fast, then this is very good.