Majority Gate Reality

The universe doesn't always operate the way we want it to. No, I'm not talking about the stock market (unless you've been short lately), I'm talking about the role of error in deterministic logical systems! When you zoom down far enough into any computing device you'll see that its constituent components don't behave in a completely digital manner, and all sorts of crazy random crap (technical term) is going on. Yet, on a larger scale, digital logic can and does emerge.

Today, heading to work in the early dawn, I was pondering what all of this meant for our notion of reality.

Some of the seminal work on how ordered digital logic can emerge from faulty components was done by John von Neumann in the 1950s and published in 1956. In "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" (available here), von Neumann asked whether it is possible to compute reliably when the constituent gates doing the computing can fail. In particular, von Neumann considered a model in which each individual gate in a digital circuit fails with probability p, independently of every other gate. (For the history buffs: check out the acknowledgements of this paper, where a future Nobel prize winner in physics appears.)
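
To make that error model concrete, here is a minimal sketch (in Python; the choice of NAND is just for illustration, not something singled out in the paper) of what "fails with probability p, independently of everything else" means for a single gate:

```python
import random

def noisy_nand(a, b, p, rng=random):
    """A NAND gate whose output is flipped, independently of everything
    else in the circuit, with probability p (von Neumann's error model)."""
    out = 1 - (a & b)
    if rng.random() < p:
        out ^= 1          # the gate misfires
    return out
```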

What most people know about von Neumann's early work is that he was able to show how to turn a network of failing gates into one which is highly reliable, using some basic ideas from error correction along with some careful choices of circuits (assuming the probability of an individual gate failing is below some threshold, which von Neumann estimated at about 1 percent. Also, technically, von Neumann's construction was a bit incomplete, relying on a random choice of permutations; an explicit construction was later provided by Pippenger.)
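
To get a feel for why this works, here is a toy Monte Carlo sketch of the "restoring organ" idea: replace each logical wire with a bundle of wires and repeatedly take noisy 3-way majority votes across the bundle. This is only a caricature (each output wire samples input wires at random, whereas von Neumann's actual construction wires gates up via carefully chosen permutations, the part Pippenger later made explicit), but it shows errors being beaten back down toward the per-gate failure rate rather than accumulating:

```python
import random

def restoring_stage(bundle, p, rng):
    """One noisy restoring stage: each output wire is a 3-input majority
    vote over randomly chosen wires of the input bundle, and the majority
    gate itself misfires with probability p."""
    out = []
    for _ in range(len(bundle)):
        a, b, c = rng.choice(bundle), rng.choice(bundle), rng.choice(bundle)
        maj = 1 if a + b + c >= 2 else 0
        if rng.random() < p:
            maj ^= 1
        out.append(maj)
    return out

rng = random.Random(0)
p = 0.005                      # per-gate failure probability, below threshold
n = 1000
bundle = [1] * n               # the bundle "should" carry a logical 1...
for i in rng.sample(range(n), 150):
    bundle[i] = 0              # ...but 15 percent of its wires start out wrong

for stage in range(6):
    wrong = 1 - sum(bundle) / n
    print(f"stage {stage}: fraction of wrong wires = {wrong:.3f}")
    bundle = restoring_stage(bundle, p, rng)
```

Run it and the fraction of wrong wires drops from 0.15 to something hovering near p within a few stages; crank p up past this toy circuit's threshold and the same stages instead drive the bundle toward a useless fifty-fifty mix.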

What most people don't know about von Neumann's work is his first observation about computing with faulty components. Suppose you have a circuit with faulty components. Further suppose that at the end of the day you would like your computation to output a single bit. If we require that this single bit be carried on one of the wires of our digital circuit, then we immediately have a problem. No matter what we do, the last circuit element, the one whose output wire carries our answer, fails with probability p. Thus the probability that we get the wrong answer is at least p. In other words, if we depend on individual bits in noisy networks, we are doomed to fail!
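
Here is a quick back-of-the-envelope way to see the floor (a sketch, writing q for the chance that the input to that final gate is already wrong, and assuming a failed gate flips its output):

Pr[wrong output] = q(1 − p) + (1 − q)p = p + q(1 − 2p) ≥ p   whenever p ≤ 1/2,

with equality when q = 0. Error-correcting everything upstream can only drive q down; it can never push the output error below p.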

Of course the computer scientist in you says: well, that's not a big deal, why not just run the computation many times and then take the majority vote of the output bits from these runs? Fine, except that this requires either that a circuit element take the majority (and we run into the same problem) or that you take the majority of the different runs yourself. But you who read this, what makes you think that you are able to reliably perform this majority operation?

The way around the single-bit problem, of course, is to build a reliable wire out of N unreliable wires. Then you can set thresholds on the fraction of zeros and ones which you "interpret" as a zero and a one. In other words, you take N bits and if, say, 70 percent of them are 0s you interpret that as a 0, and if, say, 70 percent of them are 1s you interpret that as a 1. Anything else you can interpret as "ambiguous." Thus digital 0 and digital 1 emerge out of a sort of democracy of bits voting. In practice, when you think about how classical computers achieve robust computing, you can explicitly see this sort of effect going on.
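
As a minimal sketch of that "interpretation" rule (the 70 percent threshold is just the one from the paragraph above, not a canonical value):

```python
def interpret_bundle(bits, threshold=0.70):
    """Read a bundle of N unreliable wires as one logical bit: a
    supermajority of 1s means 1, a supermajority of 0s means 0,
    and anything in between is 'ambiguous'."""
    ones = sum(bits) / len(bits)
    if ones >= threshold:
        return 1
    if ones <= 1 - threshold:
        return 0
    return "ambiguous"

print(interpret_bundle([1] * 93 + [0] * 7))    # -> 1
print(interpret_bundle([1] * 60 + [0] * 40))   # -> 'ambiguous'
```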

But what does this say, exactly, on a deeper level? On the one hand it seems to me that this is a fundamental challenge to those who believe that digital computing is somehow the natural language of the universe. If in the universe digital computation must emerge from the noisy muck, what right do we have to believe that it is the natural language for expressing how the universe works?

In a similar vein, am I the only one who finds von Neumann's arguments strangely disturbing when I think about my own thought process? We like our thoughts to be the fundamental bits out of which we construct reality. But these bits must themselves emerge from the muck of noisy constituents. The reality in our heads emerges unscathed only if we interpret it with a mythical majority vote gate which never fails.

Which is all to say that thinking, while you're heading to work, about majority vote gates and what you're actually thinking is enough to send this poor sap's head into a state of confusion. Or maybe that's just a sign that the bits in my brain are voting "ambiguous" at this very moment?


"What most people don't know about von Neumann's work was his first observation about computing with faulty components."

If Charles Babbage had figured that out, the British Empire, running on steam-powered mechanical mainframes, would never have fallen.

Errors can be good, as long as they don't bring the whole machine to a halt. Genetics is an example: DNA is "digital", as the information is stored in a structured code. If it weren't for replication errors, how would evolution work?

sep332: Hmmmm... Quantum Neural Networks.

Of course, the Hodgkin-Huxley model is a continuous (differential equation) approximation to a discrete (stochastic) process. The model is used to calculate the membrane potential from some initial state. The calculation is based on sodium ion flow, potassium ion flow, and leakage ion flow (which is a proxy for all the insignificant ions which cross the membrane).
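
For reference, the membrane equation being described here has the standard Hodgkin-Huxley form (usual notation; the maximal conductances g_Na, g_K, g_L and reversal potentials E_Na, E_K, E_L are fitted constants, and the gating variables m, h and n obey their own first-order kinetics):

C_m dV/dt = I_ext − g_Na·m³·h·(V − E_Na) − g_K·n⁴·(V − E_K) − g_L·(V − E_L)

where the three conductance terms are the sodium, potassium, and leakage currents mentioned above.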

If there is an F = ma of computational neuroscience, this is it. It earned a Nobel Prize for its authors.

[Hodgkin, A. L. and Huxley, A. F. (1952) "A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve" Journal of Physiology 117: 500-544]

Now, build neural networks out of those components... What is the error-detection error-correction process?

Anyway, in grad school (1973-1977) I lost faith in that paradigm, and openly insisted in writing and lectures that the brain is a quantum computer, with 90% of the information processing being done in the phase space of protein dynamics, not in DNA.

I think that the axons are no more than the Local Area Network connecting the molecular nanocomputing in remote regions of the central nervous system.

Seriously.

Honestly I don't see how von Neumann's ideas are all that different from the idea of building up predictable macroscopic systems from probabilistic microscopic systems, i.e. statistical mechanics. In that vein, I don't think it's all that disturbing.

Figure it this way: most people have a few neurons that don't fire properly, but the effect gets all washed out in the end. People like me, however, have enough mis-firing neurons to warrant medication (which ones misfire will determine whether you have ADD like me or something else).

Or, another way to put it, even the most perfect brain is probably best described as being fault tolerant to some extent.

JVP, you are in good company. I believe at least one Nobel laureate, Mr. Josephson, has been consumed with the same sorts of thoughts these days. Then again, he's a couple radians short of a full two pi...

Honestly I don't see how von Neumann's ideas are all that different from the idea of building up predictable macroscopic systems from probabilistic microscopic systems, i.e. statistical mechanics. In that vein, I don't think it's all that disturbing.

Actually I'd disagree with this, and I think von Neumann would too! In the intro to the paper I link to he says how he feels that the theory of faulty _computing_ should be put on a firm theoretical foundation akin to that of Shannon information theory/statistical mechanics. However, he says his theory doesn't achieve that. And indeed there are distinct differences between the order parameters for memories, like those you find in statmech treatments, and actual fault-tolerant computation, where you have to be concerned with the gates destroying the information. Indeed there are _wrong_ ways to do what von Neumann did, even assuming you do majority voting.

ADD is a creation of modern society. It is neither natural nor reasonable to expect young boys to "sit still" at desks in an "educational" setting. Kids are meant to get out and rough-house and play. The situations we've created in the modern world, 'safety' and sitting still in school, are what's artificial and pathological. I bet nobody would have ADD in a rural society.

Von Neumann's comments about faulty computing don't bother me about my thought processes at all, because the brain is fundamentally different than a computer. AI folks need to admit this, throw in the towel, and start over.

Dave, were you riding, walking, driving, or taking a bus or a train?

By astephens (not verified) on 22 Oct 2008

Brian Josephson, notwithstanding Chi's comment (which was de facto echoed by Cambridge in refusing to allow him to supervise doctoral students and postdocs), continues to have both good and bad ideas. As usual, for any thinker, the problem is in telling one from the other. I have found him a polite, brilliant, and charming person in face-to-face conversation.

Part of my point is that the brain is NEITHER an Analog computer (as traditionally described) NOR a Digital computer (as traditionally described). Whether it is a quantum computer (as Penrose and Josephson and others argue) or something else entirely may well be answered in this century. I hope that my research of 1973-1977 in grad school, as published in many venues, and extended subsequently, is at least partially on the right track, and not entirely on the wrong track.

As usual, for any thinker, the problem is in telling one from the other.

I see a problem with some QC experts (such as Scott Aaronson) who simply do not know enough Chemistry nor Biology nor Neuroanatomy nor Neurophysiology to see obvious truths about the brain. That makes them prey to lunatics and cultists of many flavors, such as the True Believers in Singularity, the tax-dodging hijacked Transhumanists, or the cult of (as opposed to the science of) Nanotechnology.

Thanks for your (?) support, Jonathan. Actually, re Chi's comment "... which was de facto echoed by Cambridge in refusing to allow him to supervise doctoral students and postdocs", it was just a rather silly HoD (excuse my French!) who pronounced that no-one working with me could get a laboratory grant. Actually, since the time I met Jonathan at a conference, one student, David Aragon, found 'an unguarded back door' and thus evaded the usual attempts by the dept. to stop people working with me as research students, and got his Ph.D. under my supervision in a conventional quantum mechanics topic some time back (he had his own funding so was not dependent on the good will of the dept.). The said HoD, when I made him aware of this, claimed not to remember an email he sent me pronouncing judgement that I was an unsuitable person to supervise graduate students!

It is a pity, though, that owing to the dept's blinkered view I never got the chance to develop the pilot project of Osborne

http://cogprints.org/4888/

which explored the relevance of the hyperstructure concept of Baas to developmental processes. Unfortunately, the whole hyperstructure idea was beyond the limited conceptual capacities of key people in the administration!

Brian Josephson

By Brian Josephson (not verified) on 26 Oct 2008

But these bits must emerge again from the muck of noisy constituents.

It would be a fun sidetrack to examine how mental hash functions are formed. They are the program-as-data component of your brain that we don't think of very often. Your meta-mentality, maybe.

It's deep, fun thinking about this though.

David: do we know THAT "mental hash functions are formed"?

Or is that an abuse of analogy?

We do say that data is compressed in the Central Nervous System. We do say that there are "pointers" or "indices" to memories. We do say that there is neural "coding."

But how far should we take the brain/computer and mind/program dynamics analogy?

It's a complete abuse of analogy. Inspired a little bit by reading about Mnemosyne [which I haven't gotten around to using], and training your brain-recall function. It makes me wonder. [I wonder a lot, lately]

Over on the n-Category Café I'd mentioned the Mother of the Muses thus.

Tegmark and Egan; Re: emergent phenomena Re: Michael Polanyi and Personal Knowledge

Other attacks on the hierarchy of sciences are common in the literature.

Tegmark has claimed, as another thread here discussed, that the Physical Universe is IN FACT a Mathematical Structure as such. That changes the first arrow in your diagram.

However, as we've discussed in other threads, Mathematics does not spring from the null set free of charge, nor spontaneously. Nor from the brow of Zeus (as did Athena). And the Muse of Math is Urania, in any case, daughter of Mnemosyne, Urania also being the muse of Astronomy. That's why Cosmology is big in this blog. ;) By the way, Michael Salamon took me and my wife to dinner and tells me that he's been put in charge of ALL cosmology missions at NASA HQ.

Mathematics TO US HUMANS is possible because of evolution that selected for certain neurological and sensory and kinaesthetic capabilities.

Mathematical notation, as with written alphabets, may result from repurposing of visual and neural capabilities for abstracting the graphs of horizon, trails, vertical trees, and the like. The theory has been articulated that Math is based on how we physically manipulate objects and our own bodies.

Other intelligences, with different biology, may have VERY different Mathematics, although it may have natural transformations between a subset of theirs and a subset of ours.

Other universes, with different physical laws, may also allow elegant embeddings of other mathematics.

We don't know with 100% certainty that the same Math applies to all parts of OUR universe. Greg Egan has written brilliantly on this in Luminous and other fictions.

Posted by: Jonathan Vos Post on July 2, 2008 7:33 PM

Sadly, I was remarking about a more mundane Mnemosyne.

I will demur from commenting on your post further, except to note that it fits with my otherwise naive worldview.

David refers to "The Mnemosyne software [which] resembles a traditional flash-card program to help you memorise question/answer pairs, but with an important twist: it uses a sophisticated algorithm to schedule the best time for a card to come up for review. Difficult cards that you tend to forget quickly will be scheduled more often, while Mnemosyne won't waste your time on things you remember well. The software runs on Linux, Windows and Mac OS X."

But the play on words is intentional. The Greek metaphysics disguised as mythos held that [human] Memory is the "mother" of all the Arts and Sciences.

"History records that there is nothing so powerful as a fantasy whose time has come."

-- Historian Tony Judt, Reappraisals

Consider also "... concept of 'story' from Walter Fisher, a communications theorist who argues that humans are essentially storytellers, and that all communication -- history, art, language, science, etc. -- is a form of storytelling. That is to say, the world is a collection of 'stories' -- or 'narrative paradigms,' to use Fisher's terms -- that we constantly examine for coherence and check against our experience as we attempt to create meaningful lives, individually and collectively."

In a similar vein am I the only one who finds von Neumann's arguments strangely disturbing when I think about my own thought process?

Dennett proposes that your conscious thoughts are those most strongly represented among the many concurrent unconscious thoughts in your brain. The stability of conscious thought over second timescales might imply some lower bound on the number of votes separating candidate answers to be derived.

If indeed, as the Amazon.com review paraphrases Daniel Dennett's linked-to book, "the mind is a bubbling congeries of unsupervised parallel processing" then:

(1) This is not much more than a quasi-computer-scientific retelling of the metaphor of philosopher and psychologist William James, when he described the world of the infant as a "blooming, buzzing confusion."

(2) We see this imitated or parodied as "stream of consciousness" where the protagonist's thought processes are depicted as overheard in the mind (or addressed to oneself), as a fictional device. The term "stream of consciousness" was first introduced to the discipline of literary studies from that of psychology by the same William James, in part because of his being brother of the influential writer Henry James.

(3) I'd go a step further in suggesting that "stability of conscious thought over second timescales" means that consciousness is a chaotic attractor in the space of the trajectories of all possible thoughts (that space being what Astronomer Fritz Zwicky called the "ideocosm").

Further, on the question of whether DNA computing is sufficiently different from either classical analog, classical digital, or quantum computers:

The genome is not a computer program
http://scienceblogs.com/pharyngula/2008/02/the_genome_is_not_a_computer…
Category: Creationism - Development - Evolution - Genetics - Science
Posted on: February 24, 2008 11:22 AM, by PZ Myers
The author of All-Too-Common Dissent has found a bizarre creationist on the web; this fellow, Randy Stimpson, isn't at all unusual, but he does represent well some common characteristics of creationists in general: arrogance, ignorance, and projection. He writes software, so he thinks we have to interpret the genome as a big program; he knows nothing about biology; and he thinks his expertise in an unrelated field means he knows better than biologists.... [truncated]

Comments:

#33

Stipulated: Randy Stimpson is a stupid, clueless, pathetic, Creationist idiot.

However, the question that he raises is far more subtle and nuanced than PZ Myers herein allows.

I speak as someone with lesser Biology credentials than PZ (I have only about 25 publications and conference presentations in the field), but enough to be negotiating for a Research Scientist position in Biological Networks at Caltech.

On the other hand, I have 42 years of computer/software experience, significantly more than does PZ.

I strongly feel, and have since before I began my Ph.D. dissertation research in 1975 (in what's now considered Nanotechnology, Artificial Life, Systems Biology, and Metabolomics), that there is a profound relationship between Genome/Proteome/Physiome and source code / interpreted or compiled object code / effected change in embedded system or robot behavior or client-server interaction.

In my dissertation, I sometimes referred to "genocode" versus "phenocode." Several chapters of that dissertation have now been published in refereed venues.

The question "What is the channel capacity of Evolution by Natural Selection?" and the related question "What is the Shannon information in an organism's genome?" are very hard questions, which we have discussed in this blog and elsewhere. I have a draft paper of some 100 pages sitting on a NECSI wiki, triggered by what I took to be a good question from an annoying Intelligent Design troll; said wiki paper draft has been online, thanks to the dedicated work of the admirable Blake Stacy, for about a year, and I have not had a chance to complete it, due to little distractions such as a life-threatening medical condition, 9 days in hospital, and 6 weeks away from the classroom teaching that I love.

I think that there is common ground between the naive "DNA = computer" myth and PZ's very thoughtful description above, which I quite enjoy:

"... the genome is nothing like a program. The hard work of cellular activity is done via the chemistry of molecular interactions in the cytoplasm, and the genome is more like a crudely organized archive of components. It's probably (analogies are always dangerous) better to think of gene products as like small autonomous agents that carry out bits of chemistry in the economy of the cell. There is no central authority, no guiding plan. Order emerges in the interactions of these agents, not by an encoded program within the strands of DNA."

"I'd also add that the situation is very similar in multicellular organisms. Cells are also semi-independent automata that interact through a process called development in the absence of any kind of overriding blueprint. There is nothing in your genome that says anything comparable to 'make 5 fingers': cells tumble through coarsely predictable patterns of interactions during which that pattern emerges. '5-fingeredness' is not a program, it is not explicitly laid out anywhere in the genome, and it cannot be separated from the contingent chain of events involved in limb formation."

I should like to point out that Artificial Intelligence (my M.S. in 1975 was for work on the borderline between AI and Cybernetics), Agent-based software, and Quantum Computing have brought "program" into a new paradigm, as much as genomic and post-genomic research and data have brought DNA/RNA/Protein into such a new paradigm that the very word "gene" is difficult to properly define at any level of education.

Posted by: Jonathan Vos Post | February 24, 2008 1:50 PM

So long as I'm on the Biology channel, let me bring it back towards a main interest of this blog colloquium, namely the use of Physics to compute and communicate.

Here's a free review article from Nature on a subject that overlaps my pioneering research therein (1973-1977).

Nature Nanotechnology 2, 399 - 410 (2007)
doi:10.1038/nnano.2007.188

Subject Categories: Molecular machines and motors | Nanosensors and other devices

Molecular logic and computing

A. Prasanna de Silva & Seiichi Uchiyama
http://www.nature.com/nnano/journal/v2/n7/full/nnano.2007.188.html

Abstract

Molecular substrates can be viewed as computational devices that process physical or chemical 'inputs' to generate 'outputs' based on a set of logical operators. By recognizing this conceptual crossover between chemistry and computation, it can be argued that the success of life itself is founded on a much longer-term revolution in information handling when compared with the modern semiconductor computing industry. Many of the simpler logic operations can be identified within chemical reactions and phenomena, as well as being produced in specifically designed systems. Some degree of integration can also be arranged, leading, in some instances, to arithmetic processing. These molecular logic systems can also lend themselves to convenient reconfiguring. Their clearest application area is in the life sciences, where their small size is a distinct advantage over conventional semiconductor counterparts. Molecular logic designs aid chemical (especially intracellular) sensing, small object recognition and intelligent diagnostics.

[follow hotlink for full article]