Over at Emergent Chaos I found an article which throws down the gauntlet over quantum computers. And there isn't anything I cherish more than gauntlets thrown down!
Note: I should preface this by saying that I don't consider myself an over-the-top hyper of quantum computers in the sense attacked by the author. I find quantum computers fascinating, and would really like to see if one can be built, but I find the hyperbole that accompanies any small advance in the field a bit over the top. However, I also think the article misses a lot of important points (and insinuates that people haven't actually thought about these points, something which I find as annoying as quantum computing hype.)
Update: Say it ain't so Bruce. Say it ain't so.
But onto the article. It begins with notes about quantum computer hyperbole, and about Scott Aaronson's Scientific American article, and then jumps into the meat:
The crypto-obviating algorithms in question are Shor's algorithm for factoring and an algorithm he developed for discrete logs. I was surprised to learn that Shor's algorithm requires 72k^3 quantum gates to be able to factor a number k bits long. Cubed is a somewhat high power. So I decided to look at a 4096-bit RSA key, which is the largest that most current software supports -- the crypto experts all say that if you want something stronger, you should shift to elliptic curve, and the US government is pushing this, too, with their "Suite B" algorithms.
To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Let's just round that up to an even 5 trillion. Five trillion is a big number. We're only now getting to the point that we can put about that many normal bits on a disk drive. The first thing this tells me is that we aren't going to wake up one day and find out that someone's put that many q-gates on something you can buy from Fry's or from a white-box Taiwanese special.
Okay, a couple of points are in order. First of all, the main limiting factor for Shor's algorithm is modular exponentiation. Modular exponentiation is the procedure for taking whole numbers a, r, and N and computing a^r mod N. Interestingly, it is also the limiting step in RSA encryption. Now the author of this article says that to factor a k-bit number you need O(k^3) quantum gates. But, while this is true, it isn't the really important part of the story. This is because while you require O(k^3) quantum gates (or classical gates if you are doing the RSA encryption), this does not imply that Shor's algorithm requires O(k^3) time steps. In fact, it is fairly easy to trade time and space resources in modular exponentiation. A good resource for some of these tradeoffs is Rod Van Meter's thesis, quant-ph/0607065. For example, Rod shows algorithms for modular exponentiation on a nearest-neighbor linear architecture (which is very, very restrictive) which require a time of O(k^2 log k). So this naive analysis, which just takes O(k^3), calculates it, and notes it is a large number, isn't even half the story. A more complete version of the story can be found in Rod's thesis, and in particular I would direct you to the plot on page 151. There, for example, you will learn that the real story is that a quantum computer running at 1 MHz could factor a 4096-bit number using millions of qubits in about an hour, even with the severe restriction of the architecture being a linear nearest-neighbor system.
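Just to make the gates-versus-time distinction concrete, here's a toy back-of-the-envelope in Python. The 72k^3 count and the 1 MHz clock are the numbers quoted above; the parallelism knob is purely an illustrative assumption, not a figure from Van Meter's thesis (which does the real architecture-level analysis).

```python
# Back-of-the-envelope: a big total gate count is not the same as a long run time.
# The 72*k^3 figure is the one quoted above; everything else here is an
# illustrative assumption, not a number from Van Meter's thesis.

def shor_gate_count(k):
    """Naive total gate count for factoring a k-bit number, as quoted above."""
    return 72 * k ** 3

def run_time_hours(k, clock_hz=1e6, gates_per_step=1):
    """Hypothetical run time if gates_per_step gates execute in parallel per tick."""
    return shor_gate_count(k) / gates_per_step / clock_hz / 3600.0

k = 4096
print(f"total gates: {shor_gate_count(k):,}")                         # 4,947,802,324,992
print(f"strictly serial at 1 MHz: {run_time_hours(k):,.0f} h")        # ~1,374 h (~57 days)
print(f"1000-way parallel at 1 MHz: {run_time_hours(k, gates_per_step=1000):.1f} h")  # ~1.4 h
```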
Then the author goes on to say some very strange things:
A complication in my calculations is the relationship between quantum gates and quantum bits. For small numbers of qubits, you get about 200 qugates per qubit. But qubits are rum beasts. There are several major technologies that people are trying to tease qubits out of. There's the adiabatic technologies that D-Wave is trying. There are photon dots, and who knows how many semiconductor-based methods.
Quantum gates per qubit? I assume the author is referring to problems with quantum error correction, but I have no idea where the 200 quantum gates per qubit number comes from. Anyone? (And "photon dots"? The author probably means quantum dots used to produce single photons for linear optics, but "photon dots"? What the hell are those?)
The confusion continues with
But let's just assume that they all work as advertised, today. My next observation is that we're probably looking at billions of q-bits to be able to get trillions of q-gates. My questions to people who know about the relationship between quantum gates and quantum bits yielded that the real experts don't have a good answer, but that 200:1 ratio is more likely to go down than up. Intel's two-billion transistor "Tukwila" chip comes out this year. Five trillion is a big number. We are as likely to need 25 billion qbits to factor that number as any other good guess. Wow.
Yeah, certainly a billion qubits is a lot, but the numbers I see are millions to tens of millions.
Finally we get something truly strange:
The factoring that has been done on today's quantum computers is of a four-bit number, 15. If you pay attention to quantum computing articles, you'll note they always factor 15. There's a reason for this. It's of the form (2^n - 1) * (2^n + 1). In binary, 2^n - 1 is a string of all 1 bits. A number that is 2^n + 1 is a 1 bit followed by a string of 0s, and then a 1 again. These numbers are a special form that is easy to factor, and in the real world not going to occur in a public key.
No the reason 15 was factored doesn't have anything to do with the fact that these numbers are easy to factor. It has to do with this being the smallest example where Shor's algorithm is not vacuous. Strawman, meet this paragraph. Paragraph, strawman.
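If you want a two-minute sanity check of that claim: Shor's algorithm is only non-vacuous for numbers that are odd, composite, and not a prime power, and a quick brute-force search (a throwaway sketch, nothing more) confirms that 15 is the smallest such number.

```python
# Quick check: the smallest odd composite that is not a prime power is 15,
# which is why it shows up in every small-scale Shor demonstration.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_prime_power(n):
    # True if n == p^j for some prime p and j >= 1
    for p in range(2, n + 1):
        if is_prime(p):
            m = p
            while m < n:
                m *= p
            if m == n:
                return True
    return False

def shor_nontrivial(n):
    # Shor's algorithm only does something interesting for odd composites
    # that are not prime powers.
    return n % 2 == 1 and not is_prime(n) and not is_prime_power(n)

print(next(n for n in range(2, 100) if shor_nontrivial(n)))  # prints 15
```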
Next we get some more silliness:
Let's presume that quantum computers advance in some exponential curve that resembles Moore's Law. That is to say that there is going to be a doubling of quantum gates periodically, and we'll call that period a "generation." Moore's specific observation about transistors had a generation every eighteen months.
The difference between factoring four bits and factoring 4096 bits is 30 generations. In other words, 72*4^3 * 2^30 = 72*4096^3. If we look at a generation of eighteen months, then quantum computers will be able to factor a 4096-bit number in 45 years, or on the Ides of March, 2053.
Okay, so what is wrong with this argument? Well, first of all, the author takes the quantum computer of four qubits and converts this into a number which is, as the author himself says, a count of the total number of quantum gates! Doh. The number of quantum gates needed to factor 15 isn't 4. But even granting this, note that the author is trying to compare apples and oranges. There are two separate issues going on in these scaling arguments. One is the number of qubits and the other is the depth of the circuit used in Shor's algorithm. If you make a reasonable division between these two (as is done in Van Meter's thesis, for example), then I can assure you that the date at which we factor a 4096-bit number, assuming a Moore's law type scaling, certainly ain't 2053.
And then finally we have the triumphant "I asked a smart guy" ending:
But I also talked to a bigwig in Quantum Information Theory (that's quantum computing and more) and gave him a sketch of my argument. I heard him speak about Quantum Information and he gave the usual Oooooo Scary Quantum Computers Are Going to Factor Numbers Which Will Cause The Collapse of All Financial Markets And Then We Will All DIEEEEE -- So That's Why We Need More Research Money boosterism.
He wouldn't let me attribute anything to him, which I understand completely. We live in a world in which partisanship is necessary and if he were seen putting down the pompoms, he'd be fired. Telling middle-aged technocrats that the math says their grandkids are going to see quantum computers shortly before they retire will cause the research money to dry up, and if that happens then -- well, the world won't end. And then where would we be?
Nonetheless, he said to me sotto voce, "There's nothing wrong with your math."
Okay, peoples, fess up. Who was it who didn't read this closely enough?
So what's the lesson? Building a real quantum computer for cracking RSA is a huge task. It's not, however, one which, as the author of the above article insinuates, is beyond reason to strive for. It's also important to point out that there are lots of other reasons we'd like to build a quantum computer, not the least of which is doing things like physical chemistry with them (and these kick in their advantage, it seems, for much smaller and shorter-running quantum computers.)
Another thing I find fascinating about this whole discussion is that the basic slowdown in Shor's algorithm is the same slowdown RSA faces in its encryption. What this tells me, at some deep level, is that the quantum computers we are going to build to break modern public key cryptosystems are probably going to have to be comparable in power to the computers they are trying to beat. In other words, I'd be amazed if a TRS-80 quantum computer can break the cryptography on today's desktop. On the other hand, it's also important to realize that modern public key cryptosystems are probably restricted, right now, to something not too different from a TRS-80 Color Computer... because they're running on your cell phone!
It's worth noting that the best algorithm for modular exponentiation does not in fact require O(k^3) gates. The algorithm requires O(k) iterations of the multiplication algorithm. While naively multiplying numbers is O(k^2), there are much better algorithms, such as the Schönhage-Strassen algorithm, which is O(k log k loglog k).
The sad thing is that Shor even mentions this algorithm in his paper, yet people seem to be ignoring it.
Yeah, but I think the Schönhage-Strassen algorithm in practice has horrible constants? Or am I wrong? (So it makes theorists happy, but architects sad.)
Sure, but Toom-Cook and Karatsuba algorithms nicely plug that gap.
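Since modular exponentiation keeps coming up: the reason it takes only O(k) multiplications is plain square-and-multiply, and that structure is the same whether the individual multiplications are schoolbook, Karatsuba, or Schönhage-Strassen. Here's a minimal classical sketch (Python's built-in pow does the same thing):

```python
def modexp(a, r, N):
    """Compute a^r mod N with ~log2(r) squarings/multiplications (square-and-multiply)."""
    result = 1
    a %= N
    while r:
        if r & 1:                # low bit of the exponent set: multiply the result in
            result = (result * a) % N
        a = (a * a) % N          # square
        r >>= 1                  # move to the next bit of the exponent
    return result

# A k-bit exponent costs at most ~2k modular multiplications, however each
# multiplication itself is implemented (schoolbook, Karatsuba, ...).
assert modexp(7, 123456789, 1000000007) == pow(7, 123456789, 1000000007)
print(modexp(3, 15, 17))   # 6, since 3^15 = 14348907 = 17*844053 + 6
```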
What's truly sad about this Emergent Chaos article is that it misleads otherwise knowledgeable and intelligent people like Bruce Schneier. He's obviously competent (he invented Blowfish, for crying out loud), but he isn't a physicist, and thus relies upon others for that information. When such information is so patently misleading, we all hurt for it. Of course, it doesn't seem like the author, mordaxus, is too well-versed in quantum mechanics or in quantum computation himself...
In addition to these problems, another issue is that the essay assumes that we make no significant progress in improving Shor's algorithm or find any other decent quantum factoring algorithms. Considering how new an area quantum computing is, that's an unwise assumption.
(Echoed over at EC, too.)
I freely admit that I'm a mathematician, not a physicist, and that there is much about quantum computation that I do not understand.
Nonetheless, here's the gauntlet: when do you think it is reasonable that there will be a quantum computer that can factor a 4096-bit key in quasi-reasonable time? (Oh, let's say a year of running time.)
Under the assumption that Moore's law continues with semiconductors indefinitely, 2060 is a good guess.
Under my lick-my-finger-and-stick-it-in-the-wind calculations for quantum computers, assuming similar ramps and dead reckoning, it's 2053.
I further observe that those two guesses are interestingly close.
What's your guess? What's your reasoning?
if it makes you feel any better, I predict we get a usable quantum computer before we get a HAL-9000-level AI, or flying cars, or jet packs, or moon bases.
M
I've been wondering about the speed of quantum computers myself. That a 1 MHz quantum computer with no more than millions of qubits could crack 4096-bit RSA within an hour is pretty interesting, as well as slightly worrying. I don't think it's likely that Moore's law would apply to other technologies, but if existing silicon processes can be adapted to making qubits on silicon, we'll see millions of qubits and more in no time.
Another problem is that he seems to be thinking that for every quantum logic gate, you need a separate physical device. That's like saying I need a separate transistor every time I want to XOR two bits. Hint: You can reuse these in a computation.
You need to run O(k) iterations of the multiplication algorithm. That doesn't mean you need O(k^3) (or O(k^2 log k loglog k)) gates. It means you need O(k^2) (or O(k^{1+\epsilon})) gates that you reuse k times. And in fact you can do better than that by compressing the "circuitry" needed for a single multiplication.
Craig: Actually you need far fewer unique gates (4 is quite usual). In quantum computers you tend not to have physical gates etched out, but rather implement them by applying pulses of electromagnetic radiation and allowing free evolution under the natural Hamiltonian of the system. You only need O(k) actual qubits.
Okay, here is a rough Moore's law calculation (for whatever it's worth, which is probably pretty much zero: scaling is likely to leverage existing technology which has been through many doublings of the real Moore's law, and therefore it is doubtful Moore's law will hold for quantum computers: witness even D-Wave's scaling efforts, which will be roughly a factor of 5 in under two years. Of all the implementations, ion traps are probably most amenable to an argument like this, and superconducting qubits are probably the least.)
If you want to just talk about the number of qubits, then using BCDP you can factor a 4096-bit number using 5*4096 qubits (plus 3, I think). This is like 20,000 qubits. We currently have roughly 8 qubits. So this means we have to achieve a factor of 2500 in the number of qubits. This is 11 doublings, so assuming a Moore's law I get 2008 + 11*1.5 = 2024-ish. Of course, using BCDP you would be able to factor a 4096-bit number in about a year, assuming you have a machine running at a MHz. I suspect, however, that this is a big enough threat to keep you from using your 4096-bit key. This also ignores error correction. You can then start playing a game with the number of qubits and the speed of the algorithm. More qubits roughly give you a faster speed.
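For anyone who wants to fiddle with the same back-of-the-envelope, here it is in a few lines of Python; the 5k+3 qubit count, the 8 qubits of today, and the 18-month doubling are just the assumptions from the paragraph above, nothing more rigorous.

```python
import math

# Assumptions taken straight from the paragraph above (nothing rigorous here):
qubits_needed = 5 * 4096 + 3   # BCDP-style circuit for a 4096-bit number
qubits_today = 8               # roughly where experiments are, circa this post
months_per_doubling = 18       # a Moore's-law-style "generation"

doublings = math.log2(qubits_needed / qubits_today)
year = 2008 + doublings * months_per_doubling / 12
print(f"doublings needed: {doublings:.1f}")    # ~11.3
print(f"Moore's-law guess: {year:.0f}")        # ~2025
```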
One thing that often gets lost in such discussions is that there are some people whom I imagine would be interested in decrypting even 50-year old secrets. For example, by unraveling old spy networks one might be able to infer some number of current-day spies. Cracking the 200-year-old Beale cipher might make one a millionaire---or not. Of course, quantum computers can't crack every code, but those that it can, even if the codes are quite old, might be of value to someone.
That said, I think it's an historical accident that we live in a society where so much value is placed on secrets; the true value of quantum computers is surely more than just codebreaking. I agree with Dave that quantum simulation of some kind will turn out to be quantum computing's "killer app." Of course, maybe they will just be better for playing computer games (quant-ph/0702144).
Logic gates in Shor's algorithm are going to have to be pretty accurate (required accuracy increases with increasing problem size) and so I don't think that it is reasonable to neglect error correction and its overhead. The type of error correction used (still an open question) and how much is required (still an open question because the answer depends on how accurate physical logic gates are) will obviously determine (algorithmic) clock speed, number of (physical) qubits, computer size, power consumption, etc. I did some work on this a while ago and the results led me to believe that a quantum computer would not be able to break RSA-2048 unless there was a significant reduction in the error correction resources required (from what is required for concatenated quantum codes). This could be some new way of doing error correction that doesn't have an exponential resource cost (maybe the surface code) or it could simply be not needing so much error correction (maybe by using photonic qubits).
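To put a rough number on the overhead astephens is describing, here is a toy concatenated-code estimate using the standard threshold-theorem bound: after L levels of, say, the 7-qubit Steane code, the logical error per gate is roughly p_th*(p/p_th)^(2^L) at a cost of 7^L physical qubits per logical qubit. The particular inputs below (a 10^-4 threshold, a 10^-5 physical error rate, a target small enough for a ~10^12-gate run) are illustrative guesses, not numbers from any real architecture study.

```python
# Toy concatenated-code estimate. All input numbers are assumed, purely for illustration.
p_th = 1e-4      # assumed fault-tolerance threshold
p = 1e-5         # assumed physical error rate per gate (below threshold)
target = 1e-15   # target logical error per gate, so ~1e12 gates still mostly succeed

level = 0
logical_p = p
while logical_p > target:
    level += 1
    # standard concatenation bound: error shrinks doubly exponentially in the level
    logical_p = p_th * (p / p_th) ** (2 ** level)

print(f"concatenation levels : {level}")                     # 4
print(f"logical error / gate : {logical_p:.1e}")              # 1.0e-20
print(f"physical qubits per logical qubit (Steane): {7 ** level}")  # 2401
```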
astephens,
You mention "photonic quibits" as being one way of increasing the accuracy of physical logic gates. There was an interesing paper that discusses using both stationary qubits (e.g. single photon sources made out of trapped atoms, molecules, ions, quantum dots, or defect centers in solids) and flying qubits (e.g. photons) to design robust quantum computers. It's here:
http://xxx.lanl.gov/abs/quant-ph/0508218
I was wondering if you were familiar with the paper and what your thoughts might be regarding this type of approach? Dave, do you have any views in this regard?
Thanks
I suggested photonic qubits because they are not expected to decohere as quickly as other qubits and because they will be easier to transport than other qubits, not because photonic gates will be more accurate. In fact photonic gates are still nondeterministic, and so I guess that suggestion was made with the expectation that this problem would soon be addressed (at least theoretically).
The real kicker (one of them) for other systems (solid state for example) is that they will be restricted to relatively short-range coupling. Some photon mediated long-range coupling could be useful in these systems (as an alternative to transport) but only if it doesn't require some large overhead and only if it doesn't significantly reduce the fidelity of the logic gates.
That last sentence is pretty obvious: long-range coupling could be useful as long as it isn't rubbish.
Actually that's something our group has done quite a lot of work on. If you have two matter qubits in each location, then you can avoid the non-determinism of entangling measurements (quant-ph/0509209). And with 2-3 qubits you can distill the entanglement generated, and so can produce high fidelity gates even if the entangling operation is very noisy (quant-ph/0606199, quant-ph/0702209, arXiv:0704.1464, arXiv:0710.4352).
Quantum simulation with 1000 qubits (without error correction) will of course be faster than on any supercomputer the size of the universe... because quantum simulation needs n or n^2 steps/gates while a classical computer needs 2^n steps/gates. But quantum simulation isn't proven to be necessary if a quantum computer is never built...
The discussion about Moore's law is silly since no progress has been made at all, so according to Moore's law 0*50 years is 0.
I don't know what quantum theorists think when they read a sentence like this: "We cannot be hopeful for other methods of qubit error correction either, since the difficulty arises out of the analog nature of the error process." http://arxiv.org/pdf/quant-ph/0206144v4
But if a single element of an analog computer (a qubit, in this analogy) has an accuracy of at most 0.001%, then doesn't that mean that the accuracy with 100,000 qubits will be +-100%? If not, doesn't it mean that the probability of getting the right answer with a 1000-qubit QC after 10,000 iterations (more precisely, laser pulses) will be 0.99999^{1000*10000} =~ 0.9^{1000} =~ 10^{-45}? Such a conclusion makes me think that, even with 0.99999 precision in single-qubit manipulation and without any other noise, a quantum computer can only be faster than my calculator. Why are analog computers used by NASA and in jets, and why are they more efficient? Simple. Because they do short tasks, and then those short tasks are analyzed by a pilot or a digital computer.
For sure I can say that with 0.99999 single-qubit measurement precision (the best precision for any analog device) a quantum computer with 10 million qubits is impossible, because 0.99999^10,000,000 = 1/10^44 is the probability of measuring the answer that the quantum computer computes. If factoring a 4096-bit number needs 5 million qubits, then the probability of getting the computed answer (without any other errors) is 0.99999^5,000,000 = 1/10^22. Working at 10 GHz it would need 10^12 seconds or 30,000 years. Anyway, 4096^2 = 16,777,216, and so 0.99999^16,777,216 = 1/10^73 is the probability of getting the desired answer, even assuming the quantum computer's qubit accuracy is 0.00001, though it could be 0.0001 or 0.001...
Possible: if you think that a quantum computer is analog, you need to relearn quantum theory. That's a complete and total misunderstanding of quantum computation. See http://dabacon.org/pontiff/?p=882 for some words on this.
Oh, and the quant-ph you linked to is complete rubbish. But thanks for playing.
Digital computers' error probability is very small, less than 1/10^20 per bit, and digital computers don't use error correction (except servers and NASA spacecraft antiques...). Meanwhile a qubit can be rotated with a maximum precision of 0.00001 (like the precision of any other analog device). The more qubits there are, the more precision is needed, and as I read it, large powerful quantum computation needs an error rate per qubit of 0.0001-0.00001, the limit of analog devices. In digital computers there is a separation barrier, and in a quantum computer such a barrier doesn't exist. So in digital computers the error rate is much smaller than in an ideal quantum computer, where errors occur only from imperfect qubit rotations.

By comparison, a digital computer is a light switch, which can be wrongly switched only by stupidity (say cosmic rays), while in a quantum computer a qubit rotation is like regulating light with a variable resistor (you never rotate the resistor to a precise value of 1 or 0). In a quantum computer imperfect rotations show up as probabilistic errors, which means that errors occur not only from stupidity (which is very rare in a digital computer) but also from imperfect rotations and from decoherence... Even without decoherence, in a fully isolated system, a qubit is much more sensitive to errors than a bit (in a transistor, etc.).

Why am I so optimistic that digital computer CPUs and RAM don't use error correction? Because I read that Microsoft wanted memory (RAM) with error correction (ECC?), and in the past they were told that the operating system produces more errors than the RAM does; some years later MS again suggested using error correction once the OS became more stable, but the plan still wasn't accepted. So digital computers don't use error correction at all, while a quantum computer without error correction, and without any other noise except imperfect qubit rotations with accuracy 0.00001, will be unable to have more than a few hundred qubits or to perform more than a few hundred operations. And this is without decoherence and other noise! So I don't think a quantum computer is very digital, like its real digital counterpart. At best a quantum computer is neither digital nor analog: http://www.tau.ac.il/~tsirel/Courses/QuantInf/lect9.pdf
BTW, a quantum computer's single-qubit rotation accuracy needs to increase polynomially (linearly?) as the number of qubits increases, so with single-qubit (rotation) accuracy <0.00001 a quantum computer can't have arbitrarily many qubits, and the number of qubits is limited to a few thousand even using quantum error correction.
A digital computer isn't limited in its number of bits or anything, only by time, but to fight the time limit you need more advanced error correction. The same goes for the number of transistors: more transistors need more advanced error correction. But since a digital computer's error probability per bit or transistor is very small (1/10^25), error correction isn't needed, because 10^9 transistors * 10^9 Hz = 10^18, and 10^25/10^18 = 10^7 seconds, or 115 days, that a digital computer can work without an error.
P.S. I am not very sure that the digital computer error rate is really that small, or that the error rate (per bit) isn't the same as the quantum computer qubit rotation precision of 0.00001.
Digital computers' error probability is very small, less than 1/10^20 per bit, and digital computers don't use error correction (except servers and NASA spacecraft antiques...).
Digital computers use error correction in their hard drives and if you really think that in the next twenty years digital computers aren't going to have to error correct, you haven't been paying attention.
Meanwhile a qubit can be rotated with a maximum precision of 0.00001 (like the precision of any other analog device). The more qubits there are, the more precision is needed, and as I read it, large powerful quantum computation needs an error rate per qubit of 0.0001-0.00001, the limit of analog devices.
What? This is an idiotic statement. Why is there a limit at 0.0001 for any other analog device? Dude or dudette, you're making absolutely no sense.
So in digital computers the error rate is much smaller than in an ideal quantum computer, where errors occur only from imperfect qubit rotations.
Currently yes. Also note that errors don't just come from imperfect qubit rotations, they also come from decoherence.
By comparison, a digital computer is a light switch, which can be wrongly switched only by stupidity (say cosmic rays), while in a quantum computer a qubit rotation is like regulating light with a variable resistor (you never rotate the resistor to a precise value of 1 or 0).
Digital computers aren't light switches. Qubits aren't variable resistors regulating light.
Why am I so optimistic that digital computer CPUs and RAM don't use error correction? Because I read that Microsoft wanted memory (RAM) with error correction (ECC?), and in the past they were told that the operating system produces more errors than the RAM does; some years later MS again suggested using error correction once the OS became more stable, but the plan still wasn't accepted.
Well, first of all this is silly. Microsoft making memory? Um, they're a software company. Secondly, the people who will be thinking about error correction in digital computers are the people at Intel and AMD.
BTW, a quantum computer's single-qubit rotation accuracy needs to increase polynomially (linearly?) as the number of qubits increases,
Bzzzt. Wrong. Have you ever heard of the threshold theorem?
Possible: did you even read what I linked to? Or even the lecture you linked to?
Of course, digital computers currently work highly reliably. But as Moore's law progresses we see more and more signs that this will be changing. And of course, the requirements for a threshold in quantum computing are fairly severe; we don't yet know how to put all the components of a quantum computer together at the same time and achieve these specs.
But you're missing something very fundamental, which is this "myth" of digital computation. Digitization is robust in classical computers exactly because the physics of these devices enacts error correction. Information in your magnetic media, for example, is protected by a redundancy code along with the energy landscape of an Ising model which, at low enough temperature, performs error correction automatically.
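The classical version of this point is easy to see with a toy repetition code: store each bit three times, decode by majority vote, and a per-bit error rate p drops to roughly 3p^2. A quick Monte Carlo check (the 1% error rate is just a convenient number for the demo):

```python
import random

def noisy_copy(bit, p):
    """Flip a bit with probability p."""
    return bit ^ (random.random() < p)

def majority_vote_error_rate(p, trials=200_000):
    """Empirical error rate of a 3-bit repetition code decoded by majority vote."""
    errors = 0
    for _ in range(trials):
        votes = [noisy_copy(0, p) for _ in range(3)]   # true value is 0
        if sum(votes) >= 2:                            # majority decodes to the wrong value
            errors += 1
    return errors / trials

p = 0.01
print(f"raw error rate      : {p}")
print(f"after majority vote : {majority_vote_error_rate(p):.5f}  (theory ~ {3 * p * p:.5f})")
```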
For example, say qubit rotation accuracy is 0.0001, and say the computation consists of 10^10 operations on 10^5 qubits; then the required precision is 1/(10^10*10^5) = 1/10^15. Then you need accuracy 0.0001^5 = 1/10^20, and you need 7^5 = 16807 Hamming corrections or something (16807 "correction qubits" per data qubit, or similar). 10^20/7^5 = 10^15. So you need about 10,000 error correction qubits per qubit to perform a long enough computation on a big enough number of qubits. So as the computation time and the number of qubits increase, the number of error correction gates also increases. So to factorize a 4096-bit number you need, say, about 10^7 qubits without error correction and about 10^11 qubits with error correction (the area of 100 Intel quad-core processors, or a 10cm*10cm chip, assuming Moore's law is over). The accuracy requirement just grows faster than the number of qubits to correct, which depends on the computation time, or at least on the number of qubits.
In fact(?) hard drives and SSDs (and optical disks and flash?) use error correction, but processors and RAM don't, and thus Microsoft suggests using it for RAM (I just read it in the news).
0.00001 precision is the best case; if that is no big deal, then why do all analog computers, even modern ones, work with such precision (www.cisl.columbia.edu/grads/gcowan/vlsianalog.pdf)? In fact analog computers are limited to about that precision, plus or minus a few orders of magnitude.
And the threshold theorem says that you need an error rate per qubit of at most about 0.001-0.00001 (this is also true for qubit rotation accuracy, because analog errors become probabilistic digital errors, and for accuracy 0.001 that's one error per qubit per 1000 steps, or at least one qubit error per 1000 qubits per step...). So the 10^{-5} accuracy needed for long(?), many-qubit quantum computation matches the analog computer accuracy of >10^{-5}. Coincidence?
"Digitization is robust in classical computer exactly because the physics of these devices enacts error correction."
What is different? Transistors are made in a "smarter" or "more digital" way and thus are much more robust against errors than quantum computers; their "accuracy" is probably 1/10^25 per transistor. If you tried to do digital computation with analog bits like those in a quantum computer, then you would also need about 100,000 bits to correct each data bit, and processors would consist of 10^13 transistors instead of the 10^8-10^9 transistors in current processors. And hard disk accuracy is probably better than "analog rotation" accuracy, probably about 1/10^10, and thus you only need about 7^2 = 49 correction bits per data bit (or maybe per 16 data bits, whatever).
Maybe I am wrong somewhere, but you still need a not-so-small number of gates for error correction. For example, to factorize a 4096-bit number you need 4096^2 * 7^4 / 4 = 10^10 qubits in total for 16,777,216 data qubits. Even though one qubit is not one gate (or one transistor, in the analogy), this means that if the quantum gates were converted into "transistor lithography" you would need about 10^11 of them (just look at how many flux loops, couplers, etc. the D-Wave computer uses per qubit...). 10^11 "transistors" in a quantum computer is a really hard number to imagine; the chip size, by analogy with current chips, would be 30cm*30cm!
Errors don't come only from imperfect qubit rotations, but right now I am talking only about those, not including decoherence or any other errors. And I showed that even with only these errors you would need a 30*30 cm^2 quantum chip to factorize a 4096-bit number.
possible: I think you need to do some reading on fault-tolerant computation. In fault-tolerant quantum computing a discrete set of gates is used. You don't do arbitrary qubit rotations. Conveniently, by combining a number of Hadamard and pi/8 gates you can get arbitrarily close to any single qubit unitary. Couple this with an entangling gate (such as a CNOT), and you have a universal quantum computer.
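To illustrate that a discrete gate set really does get you arbitrarily close to any single-qubit rotation, here's a brute-force (and deliberately un-Solovay-Kitaev) search over short H/T sequences for an approximation to an arbitrary Z rotation; the target angle is just an example, and real compilers do this far more cleverly.

```python
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)           # Hadamard
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])     # pi/8 gate

def dist(U, V):
    """Distance between unitaries up to global phase: 0 means identical."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

target = np.array([[1, 0], [0, np.exp(1j * 0.3)]])       # an arbitrary-angle Z rotation

best = (np.inf, "")
for length in range(1, 13):                               # brute force, no cleverness
    for seq in itertools.product("HT", repeat=length):
        U = np.eye(2)
        for g in seq:
            U = (H if g == "H" else T) @ U
        d = dist(U, target)
        if d < best[0]:
            best = (d, "".join(seq))

print(f"best sequence: {best[1]}  (distance {best[0]:.4f})")
```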
I know that a qubit rotation is to a fixed point, from 1 to 0, but the rotation still creates an imperfection error, which then comes out as a probabilistic error of 0.00001 of getting, say, 0 instead of 1. Those rotations are probably in Pauli-matrix space; I don't understand them well enough, but this doesn't change the situation with qubit rotations.
Also, about Shor's algorithm, I may have been wrong: it needs about 5*4096*7^3 = 7,024,640 qubits to factorize a 4096-bit number, but if the quantum computer runs at a low frequency then maybe you need about 10^10 qubits.
Interesting: how does D-Wave expect to get a good enough approximate answer without error correction? If measurement disturbs a qubit and rotates it a little bit, say the measurement is performed with accuracy 0.1, then for 1000 qubits 0.9^1000 = 10^{-46} is the probability of getting the answer that their "quantum computer" computed...
Anyway, it is possible to create a 10,000-bit number that even a quantum computer with trillions of qubits would be unable to crack.
You know, I learned something today: a quantum computer is digital, but its qubit rotation errors are much bigger than those of a conventional digital computer; still, with error correction it's possible to correct these errors with about 1000 times more qubits and get an advantage over a classical computer.
And I wonder what a quantum computer can do if the error rate is 0.1 or 0.01 per gate instead of 0.0001-0.000001? Could it factorize a 100-digit number in that case (0.1)? 2^100 = 10^30, so a quantum computer would still be faster even with only 100 data qubits in a quantum simulation.
And what can I say about Grover's algorithm? A quantum computer with about 100 data qubits (about 10^5 qubits with error correction), running at 10 MHz, would be the same speed as the top supercomputer and would take about 100 days to perform a search over an unstructured database. For a more difficult task a quantum computer running Grover's algorithm would calculate for hundreds of years (no matter how many qubits you have)...
But it seems there is still a strong reason to build a quantum computer, since it can be much cheaper for certain tasks and incredibly efficient at mystical quantum simulation.
You mentioned Phys Chem as a possible use for QC. Does that mean what I'd call Quantum Chemistry? Calculating molecular properties &c?
And just to go back to the code issue - I take it there isn't an easy way to encode messages with a QC that's hard to decode with a QC? Essentially the same asymmetry that exists with current computers, I mean.
Yes, I mean physical chemistry. See for example: Simulated Quantum Computation of Molecular Energies, Alan Aspuru-Guzik, Anthony Dutoi, Peter J. Love, Martin Head-Gordon, Science, 309, 5741, (2005)
As for post-quantum-computers secure cryptography: this is a very new field. A start (at least) is http://arxiv.org/abs/quant-ph/0701115 by Moore, Russell, and Vazirani.
First of all, let me apologize for my poor English.
Like most people I only had the chance to study "classical physics". I tried a few times to approach quantum mechanics, but it requires mathematical tools which I don't have, and I miserably failed. Still, I'm fascinated by it and also by its possible applications. Quantum computing is no exception.
I am not able to speak about qubits, quantum logic gates and stuff like that, but having a degree in computer science, I'd like to share my opinion on a couple of things.
The classic theory of computation is based on Turing machines and identifies two separate classes of problems: "polynomial problems" and "non-polynomial problems".
Polynomial problems are those that can be solved by performing a number of operations given by some polynomial function of the input size. Polynomial problems are considered "easy to solve", because the computation time stays acceptable as the input grows.
Non-polynomial problems are those that require a number of operations given by some non-polynomial function of the input size (often an exponential function). Non-polynomial problems are considered "hard to solve", because the computation time grows exponentially as the input grows.
Among these problems there are two other important subsets: "NP problems" and "NP-complete problems". For NP problems, nobody has yet found a polynomial algorithm, but such an algorithm might exist. For NP-complete problems, it has been proved that if any one of them has a polynomial algorithm then they all do, and it is widely believed (though not proved) that none does.
Modern public-key cryptography is based on the fact that prime factorization is an NP problem with no known polynomial-time classical algorithm.
The classic theory of computation applies to Turing machines. Quantum computers will not operate as Turing machines, and a new theory of computation must be developed; this is what people like Shor are doing. The importance of Shor's algorithm is that it shows that a problem with no known polynomial-time algorithm on a Turing machine can nonetheless be solved in polynomial time by a quantum computer.
This is the fundamental result of Shor's algorithm!
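To put some rough numbers behind the polynomial-versus-not distinction, here's a toy comparison of the 72k^3 quantum gate count quoted at the top of the post against the heuristic cost of the best known classical factoring algorithm (the general number field sieve); all constants are ignored, so only the shapes of the two curves mean anything.

```python
import math

def shor_gates(k):
    """Quantum gate count quoted in the post: 72 * k^3."""
    return 72 * k ** 3

def gnfs_ops(k):
    """Heuristic GNFS cost ~ exp((64/9 * ln N)^(1/3) * (ln ln N)^(2/3)), constants ignored."""
    ln_n = k * math.log(2)
    return math.exp((64 / 9 * ln_n) ** (1 / 3) * math.log(ln_n) ** (2 / 3))

for k in (512, 1024, 2048, 4096):
    print(f"{k:5d} bits | Shor ~ 10^{math.log10(shor_gates(k)):4.1f} gates "
          f"| GNFS ~ 10^{math.log10(gnfs_ops(k)):5.1f} ops")
```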