The Quantum Pontiff

Why is classical computing possible at all? A silly question, but one which never ceases to amaze me. Part II of my attempt to explain one of my main research interests in quantum computing: “self-correcting quantum computers.” Prior parts: Part I

Last time I discussed how quantum computing is a lot like classical probabilistic computing. Given this, one can ask a question which seems silly at first: how is it possible to compute reliably when what you have is a classical probabilistic computer?

Why Is Classical Computation Possible?

Classical computers are both digital and deterministic. But if you take a microscope to your classical computer, what you will see isn’t anything like these two categories. For example, if you look at the electrons in your silicon-based transistors, not all of the electrons are doing the same thing. Even in today’s ultra-pure systems, real devices have defects (are dirty), and the electrons in them behave in all sorts of strange ways. Of course, the aggregate effect of all of these electrons bouncing around and doing all sorts of crazy things is, when digitized, deterministic. Thus the transistors in your computer are digital and deterministic in spite of the fact that the systems out of which they are constructed are both analog and probabilistic (or worse, analog and quantum!)

How is this possible? How is it possible to take probabilistic (read: noisy) classical analog systems and turn them into deterministic digital computers? The answer to these questions isn’t as obvious as you might at first think. The first part of the answer is how to go from analog to digital. This is done, in most physical systems, by applying a discretization to some analog set of configurations. Of course, any such discretization must map some values which are nearby in the analog space to differing digital values. So there are always precision questions in defining digital configurations. But, and here is an important point, it is often possible to take an analog system and keep it out of these regimes of difficulty.
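Here is a toy sketch of that discretization idea (my own illustration, not anything from the post): an analog voltage is thresholded into a digital bit, and as long as the noise keeps the signal out of the ambiguous region around the threshold, the digitized result is effectively deterministic.

```python
import random

def digitize(voltage, threshold=0.5):
    """Map an analog voltage to a digital bit by thresholding."""
    return 1 if voltage >= threshold else 0

def noisy_readout(nominal, sigma=0.05):
    """An ideal logic level corrupted by Gaussian analog noise."""
    return nominal + random.gauss(0.0, sigma)

random.seed(1)
# A nominal logical 1 (1.0 V) with modest analog noise still digitizes
# correctly essentially every time, because the noise rarely reaches
# the 0.5 V threshold ten standard deviations away.
samples = [digitize(noisy_readout(1.0)) for _ in range(1000)]
print(sum(samples))
```

The engineering content is in the noise margin: the nominal levels (0.0 and 1.0 here) are kept far from the threshold relative to the noise, which is exactly "keeping the system out of the regimes of difficulty."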

Claude Shannon

Okay, so going from analog to digital seems fairly straightforward. But what about going from probabilistic to deterministic? This problem, of how to take systems which change in time according to probabilistic laws and turn them into systems which compute deterministically, has quite a few answers. If we approach it from a simple theoretical perspective, then the answer really comes from the theory of classical error correction, and its further extension to the theory of fault-tolerant classical computation. The former is founded on the groundbreaking work of Claude Shannon. What Shannon and others showed was that if classical information is sent through a probabilistic channel which corrupts it, then repeated use of this channel, along with the ideas of classical error correction, can make the channel behave nearly deterministically (where “nearly” means as close to deterministic as you want, with good scaling in the resources used).

The basic idea of classical error correction is simple. If you are sending information down a line where that information can be distorted, then you can lessen the probability of distortion by encoding your message redundantly. When such an encoding is performed and the result is sent down the noisy line, a suitable decoding can then be applied such that, if the noise is not too strong, the probability of correctly transmitting the information is greater than if the line had been used without the encoding. So, by using classical error correction, one can turn a channel where information evolves probabilistically into one which behaves much less probabilistically. If enough encoding is used, this makes the transmission of information effectively deterministic.
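The simplest instance of this is the three-bit repetition code (a minimal sketch of the idea, not Shannon's actual constructions): send each bit three times through a channel that flips bits with probability p, and decode by majority vote. The unencoded error rate is p; the encoded rate drops to roughly 3p² for small p.

```python
import random

def channel(bit, p):
    """Binary symmetric channel: flip the bit with probability p."""
    return bit ^ (random.random() < p)

def send_encoded(bit, p):
    """Send three independent copies, decode by majority vote."""
    received = [channel(bit, p) for _ in range(3)]
    return 1 if sum(received) >= 2 else 0

random.seed(0)
p, trials = 0.1, 100_000
raw_errors = sum(channel(1, p) != 1 for _ in range(trials))
enc_errors = sum(send_encoded(1, p) != 1 for _ in range(trials))
print(raw_errors / trials)  # close to p = 0.1
print(enc_errors / trials)  # close to 3p^2 - 2p^3 = 0.028
```

Concatenating the code (encoding each of the three bits again, and so on) drives the error rate down doubly exponentially in the number of levels, which is what makes the channel “effectively deterministic” at modest overhead.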

John von Neumann

A refinement of the idea of classical error correction is the idea of fault-tolerant computation. This is the study of what happens when you are not just transmitting information down a noisy/probabilistic channel, but also allow all of the components of your computer to fail. So, for instance, in fault-tolerant computation things like your logic gates can act probabilistically. Even things like preparing a classical system in a well-defined state can be taken into account. Von Neumann was one of the first to consider how to build an entire computer out of components which were probabilistic. His theory showed that, in principle, if the noise is not too strong, it is possible to build a fault-tolerant classical computer out of faulty components.
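A toy sketch in the spirit of von Neumann's result (my illustration, not his actual multiplexing construction): every gate is faulty, giving the wrong output with probability eps, but running three independent copies and taking a majority vote (assumed reliable here, for simplicity) cuts the effective error rate from eps to roughly 3·eps².

```python
import random

def faulty_nand(a, b, eps):
    """A NAND gate that gives the wrong answer with probability eps."""
    correct = 1 - (a & b)
    return correct if random.random() >= eps else 1 - correct

def redundant_nand(a, b, eps):
    """Three faulty NANDs combined by (here idealized) majority vote."""
    votes = sum(faulty_nand(a, b, eps) for _ in range(3))
    return 1 if votes >= 2 else 0

random.seed(0)
eps, trials = 0.05, 100_000
single = sum(faulty_nand(1, 1, eps) != 0 for _ in range(trials)) / trials
voted = sum(redundant_nand(1, 1, eps) != 0 for _ in range(trials)) / trials
print(single)  # close to eps = 0.05
print(voted)   # close to 3*eps^2 - 2*eps^3, about 0.007
```

The hard part von Neumann actually solved is that the voting elements themselves must also be faulty; his multiplexing scheme shows the construction still works below a threshold noise level, at the cost of a polylogarithmic blowup in circuit size.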

Next Time…

In part III, I’ll discuss how the story of why classical computation is possible is actually connected to reality. In other words, how does it relate to how your hard drive operates?

Comments

  1. #1 JohnQPublic
    August 25, 2008

    Is this another manifestation of the collapsing wave function? Doesn’t that very condition give rise to the question of why anything is deterministic at all?

  2. #2 Tyler DiPietro
    August 25, 2008

    “In part III, I’ll discuss how the story of why classical computation is possible is actually connected to reality. In other words, how does it relate to how your hard drive operates?”

    The question of why classical computation is possible seems obviously connected to the broader question of why classical systems behave the way they do when they ultimately operate on the basis of quantum mechanics. Still a wide open question IIRC.

  3. #3 John Sidles
    August 25, 2008

    For me, it is productive to regard classical computers and quantum computers as representing two ends of a continuum.

    Suppose we have a system of qubits, subject to some adjustable noise level. If the noise level is sufficiently high, we can simulate the system’s dynamics classically, because the system we’re simulating might *be* performing a classical computation.

    If the noise level is sufficiently low, then we *can’t* simulate the system efficiently, because the system we’re simulating might *be* performing a quantum computation.

    What about the middle ground? That’s a hard question … maybe even NP-hard?

    To the extent that finding a simulation representation of a quantum trajectory is the same problem as finding an algorithmically compressed representation, this problem most likely *is* NP-hard.

  4. #4 Tyler DiPietro
    August 25, 2008

    Shot in the dark: If the noise-level is formalized as a function, then it sounds like you would be trying to solve an NLP problem, which also probably would be NP-hard.

    One could probably isolate a feasible search space, though, since the breakaway point would probably be where the system approaches the order of Planck’s constant.

  5. #5 pr
    August 26, 2008

    I think nobody really understands how a transistor or even a semiconductor diode works! Of course there are diodes like
    |/
    |\
    or
    |<
    but this logic doesn’t seem very logical… It’s like trying to fill a Coca-Cola bottle with water or milk through a specific filter \/: in one direction the water goes through 100%, and in the backward direction, out of the bottle, the water doesn’t pass well through the filter \/. But I’m not sure how this is connected with electrons traveling in a diode, etc.
    +|<- and why doesn’t the diode transfer current this way:
    -|<+ ? Something like no pressure of electrons? Okay, maybe this type of diode is logical, but the semiconductor diode is a real mystery… Maybe it’s just some random matching of properties?
    Only vacuum-tube computation seems logical and fits with the known laws of physics without doubts…

  6. #6 Jonathan Vos Post
    August 26, 2008

    There seem to be more than 3 types of computers. Our brains, for example. And so we try to model one kind of computer with another.

    New submissions for Tue, 26 Aug 08

    [1] arXiv:0808.3162 [pdf]
    Title: Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states
    Authors: Laszlo B. Kish
    Comments: Submitted for publication
    Subjects: General Physics (physics.gen-ph)

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also nonexistent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase) however that has an extra 1/N type slowdown compared to the noise-based logic system with increasing number of N furthermore it is less robust against time delay effects than the noise-based counterpart.

  7. #7 pr
    August 26, 2008

    There are many types of computers: the atomic computer (computational reactions within an atom), the chemical computer (reactions between atoms), macromolecular computation… Since nobody has evidence about the exact processes of such computation, brain computation is an absolute mystery, and the process of computation in the brain is known only very roughly, because all measurements of neuron signals can be mismeasured depending on the surrounding environment, etc. So there is much room for trolling about this unknown nano- and even micromechanics, and especially about physics and biology…