Over at Emergent Chaos I found an article which throws down the gauntlet over quantum computers. And there isn’t anything I cherish more than gauntlets thrown down!

Note: I should preface this by saying that I don’t consider myself an over-the-top hyper of quantum computers in the sense attacked by the author. I find quantum computers fascinating and would really like to see if one can be built, but I find the hyperbole that accompanies any small advance in the field a bit much. However, I also think the article misses a lot of important points (and insinuates that people haven’t actually thought about these points, something I find as annoying as quantum computing hype.)

**Update:** Say it ain’t so Bruce. Say it ain’t so.

But onto the article. It begins with notes about quantum computer hyperbole, and about Scott Aaronson’s Scientific American article, and then jumps into the meat:

The crypto-obviating algorithms in question are Shor’s algorithm for factoring and an algorithm he developed for discrete logs. I was surprised to learn that Shor’s algorithm requires 72k^3 quantum gates to be able to factor a number k bits long. Cubed is a somewhat high power. So I decided to look at a 4096-bit RSA key, which is the largest that most current software supports — the crypto experts all say that if you want something stronger, you should shift to elliptic curve, and the US government is pushing this, too, with their “Suite B” algorithms. To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Let’s just round that up to an even 5 trillion. Five trillion is a big number. We’re only now getting to the point that we can put about that many normal bits on a disk drive. The first thing this tells me is that we aren’t going to wake up one day and find out that someone’s put that many q-gates on something you can buy from Fry’s from a white-box Taiwanese special.
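The quoted arithmetic checks out, for what it’s worth. A two-line sanity check of the 72k^3 figure at k = 4096:

```python
# Check the quoted gate count 72 * k^3 for a k = 4096 bit number.
k = 4096
gates = 72 * k**3
print(gates)  # 4947802324992, i.e. just under 5 trillion
```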

Okay, a couple of points are in order. First of all, the main limiting factor for Shor’s algorithm is modular exponentiation. Modular exponentiation is the procedure that takes whole numbers a, r, and N and computes a^r mod N. Interestingly, it is also the limiting step in RSA encryption. Now, the author of this article says that to factor a k-bit number you need O(k^3) quantum gates. While this is true, it isn’t the important part of the story, because requiring O(k^3) quantum gates (or classical gates, if you are doing RSA encryption) does not imply that Shor’s algorithm requires O(k^3) time steps. In fact, it is fairly easy to trade time and space resources in modular exponentiation. A good resource for some of these tradeoffs is Rod Van Meter’s thesis, quant-ph/0607065. For example, Rod shows algorithms for modular exponentiation on a nearest-neighbor linear architecture (which is very, very restrictive) that require a time of O(k^2 log k). So a naive analysis that just takes O(k^3), calculates it, and notes it is a large number isn’t even half the story. A more complete version of the story can be found in Rod’s thesis; in particular, I would direct you to the plot on page 151. There, for example, you will learn that the real story is that a quantum computer running at 1 MHz could factor a 4096-bit number using millions of qubits in about an hour, even with the severe restriction of the architecture being a linear nearest-neighbor system.
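To make the modular exponentiation point concrete, here is a minimal sketch of the repeated-squaring version of a^r mod N (the function name `modexp` is mine). For a k-bit exponent this does O(k) squarings, each an O(k^2) multiplication with schoolbook arithmetic, which is where the O(k^3) figure comes from; the tradeoffs in Van Meter’s thesis rearrange exactly this computation:

```python
# Modular exponentiation by repeated squaring: the workhorse of both
# RSA encryption and the expensive part of Shor's algorithm.
def modexp(a: int, r: int, N: int) -> int:
    result = 1
    base = a % N
    while r > 0:
        if r & 1:                     # this bit of r contributes a multiply
            result = (result * base) % N
        base = (base * base) % N      # square once per bit of the exponent
        r >>= 1
    return result

print(modexp(7, 4, 15))  # 7^4 mod 15 = 2401 mod 15 = 1
```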

Then the author goes on to say some very strange things:

A complication in my calculations is the relationship between quantum gates and quantum bits. For small numbers of qubits, you get about 200 qugates per qubit. But qubits are rum beasts. There are several major technologies that people are trying to tease qubits out of. There’s the adiabatic technologies that D-Wave is trying. There are photon dots, and who knows how many semiconductor-based methods.

Quantum gates per qubit? I assume the author is referring to problems with quantum error correction, but I have no idea where the 200 quantum gates per qubit number comes from. Anyone? (And “photon dots”? The author probably means quantum dots used to produce single photons for linear optics, but “photon dots”? What the hell are those?)

The confusion continues with

But let’s just assume that they all work as advertised, today. My next observation is that we’re probably looking at billions of q-bits to be able to get trillions of q-gates. My questions to people who know about the relationship between quantum gates and quantum bits yielded that the real experts don’t have a good answer, but that 200:1 ratio is more likely to go down than up. Intel’s two-billion transistor “Tukwila” chip comes out this year. Five trillion is a big number. We are as likely to need 25 billion qbits to factor that number as any other good guess. Wow.

Yeah, certainly a billion qubits is a lot, but the numbers I see are millions to tens of millions.

Finally we get something truly strange:

The factoring that has been done on today’s quantum computers is of a four-bit number, 15. If you pay attention to quantum computing articles, you’ll note they always factor 15. There’s a reason for this. It’s of the form (2^n - 1) * (2^n + 1). In binary, 2^n - 1 is a string of all 1 bits. A number that is 2^n + 1 is a 1 bit followed by a string of 0s, and then a 1 again. These numbers are a special form that is easy to factor, and in the real world not going to occur in a public key.

No, the reason 15 was factored doesn’t have anything to do with the fact that these numbers are easy to factor. It has to do with 15 being the smallest example where Shor’s algorithm is not vacuous. Strawman, meet this paragraph. Paragraph, strawman.
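To see why 15 is the smallest non-vacuous case, here is a sketch of the classical skeleton of Shor’s algorithm (the function name `shor_classical` is mine, and the order-finding step, the only part that needs a quantum computer, is brute-forced here). Shor’s reduction needs N to be odd and not a prime power, and 15 is the smallest such composite:

```python
import math

def shor_classical(N: int, a: int):
    """Factor N given an a coprime to N, using order finding."""
    assert math.gcd(a, N) == 1
    r = 1
    while pow(a, r, N) != 1:          # order of a mod N; the quantum step
        r += 1
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                   # unlucky a; rerun with another one
    half = pow(a, r // 2)
    return math.gcd(half - 1, N), math.gcd(half + 1, N)

# For a = 7: the order of 7 mod 15 is 4, and gcd(7^2 -/+ 1, 15) = 3, 5.
print(shor_classical(15, 7))  # (3, 5)
```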

Next we get some more silliness:

Let’s presume that quantum computers advance in some exponential curve that resembles Moore’s Law. That is to say that there is going to be a doubling of quantum gates periodically, and we’ll call that period a “generation.” Moore’s specific observation about transistors had a generation every eighteen months.

The difference between factoring four bits and factoring 4096 bits is 30 generations. In other words, 72*4^3 * 2^30 = 72*4096^3. If we look at a generation of eighteen months, then quantum computers will be able to factor a 4096-bit number in 45 years, or on the Ides of March, 2053.

Okay, so what is wrong with this argument? Well, first of all, the author takes the four bits of the number that was factored and plugs it into a formula that, by the author’s own account, counts the total number of quantum gates! Doh. The number of quantum gates needed to factor 15 isn’t 72*4^3. But even setting this aside, note that the author is trying to compare apples and oranges. There are two separate issues going on in these scaling arguments: one is the number of qubits, and the other is the depth of the circuits used in Shor’s algorithm. If you make a reasonable division between these two (as is done in Van Meter’s thesis, for example), then I can assure you that the date at which we factor a 4096-bit number, assuming a Moore’s law type scaling, certainly ain’t 2053.
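For the record, the “30 generations” is just the log of the ratio of the two gate counts, so the whole extrapolation inherits the conflation of bits with gates in the baseline:

```python
import math

# The quoted argument: generations = log2 of the ratio of gate counts,
# where the "4" in the baseline is actually bits factored, not gates.
small = 72 * 4**3       # the article's stand-in for today's machines
large = 72 * 4096**3    # gates to factor a 4096-bit number
generations = math.log2(large / small)
print(generations)  # 30.0, since 4096^3 / 4^3 = (2^10)^3 = 2^30
```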

And then finally we have the triumphant “I asked a smart guy” ending:

But I also talked to a bigwig in Quantum Information Theory (that’s quantum computing and more) and gave him a sketch of my argument. I heard him speak about Quantum Information and he gave the usual Oooooo Scary Quantum Computers Are Going to Factor Numbers Which Will Cause The Collapse of All Financial Markets And Then We Will All DIEEEEE — So That’s Why We Need More Research Money boosterism.

He wouldn’t let me attribute anything to him, which I understand completely. We live in a world in which partisanship is necessary and if he were seen putting down the pompoms, he’d be fired. Telling middle-aged technocrats that the math says their grandkids are going to see quantum computers shortly before they retire will cause the research money to dry up, and if that happens then — well, the world won’t end. And then where would we be?

Nonetheless, he said to me sotto voce, “There’s nothing wrong with your math.”

Okay, peoples, fess up. Who was it who didn’t read this closely enough?

So what’s the lesson? Building a real quantum computer for cracking RSA is a huge task. It is not, however, one which, as the author of the above article insinuates, is beyond reason to strive for. It’s also important to point out that there are lots of other reasons we’d like to build a quantum computer, not the least of which is doing things like physical chemistry with one (and these applications, it seems, gain their advantage for much smaller and shorter-running quantum computers.)

Another thing I find fascinating about this whole discussion is that the basic slowdown in Shor’s algorithm is the very step RSA uses in its encryption. What this tells me, at some deep level, is that the quantum computers we are going to build to break modern public key cryptosystems are probably going to have to be comparable in power to the computers they are trying to beat. In other words, I’d be amazed if a TRS-80 quantum computer could break the cryptography on today’s desktop. On the other hand, it’s also important to realize that modern public key cryptosystems are probably restricted, right now, to something not too different from a TRS-80 Color Computer… because they’re running on your cell phone!