"A good metaphor is something even the police should keep an eye on." - G.C. Lichtenberg
Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. Below, I review the most important of these differences (and the consequences for cognitive psychology of failing to recognize them); similar ground is covered in this excellent (though lengthy) lecture.
Difference # 1: Brains are analogue; computers are digital
It's easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital "1's and 0's" belies a wide variety of continuous and non-linear processes that directly influence neuronal processing.
For example, one of the primary mechanisms of information transmission appears to be the rate at which neurons fire - an essentially continuous variable. Similarly, networks of neurons can fire in relative synchrony or in relative disarray; this coherence affects the strength of the signals received by downstream neurons. Finally, inside each and every neuron is a leaky integrator circuit, composed of a variety of ion channels and continuously fluctuating membrane potentials.
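To make the "leaky integrator" point concrete, here is a minimal leaky integrate-and-fire simulation (a standard textbook idealization with made-up parameter values, not a model of any particular neuron): the membrane potential evolves continuously, and the resulting firing rate varies smoothly and nonlinearly with input strength rather than flipping between a clean 1 and 0.

```python
def lif_firing_rate(i_input, tau=0.02, r_m=1.0, v_thresh=1.0, v_reset=0.0,
                    dt=1e-4, t_sim=1.0):
    """Leaky integrate-and-fire neuron: dV/dt = (-V + R*I) / tau.
    The membrane 'leaks' back toward rest, and a spike is emitted
    whenever V crosses threshold. Returns the firing rate in Hz."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (-v + r_m * i_input) / tau   # continuous membrane dynamics
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes += 1
            v = v_reset
    return spikes / t_sim

# Firing rate varies smoothly (and nonlinearly) with input strength:
for current in [0.8, 1.1, 1.5, 2.0, 3.0]:
    print(f"I = {current:.1f} -> {lif_firing_rate(current):.0f} Hz")
```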
Failure to recognize these important subtleties may have contributed to Minsky & Papert's infamous mischaracterization of perceptrons, neural networks without an intermediate layer between input and output. In linear networks, any function computed by a 3-layer network can also be computed by a suitably rearranged 2-layer network. In other words, combinations of multiple linear functions can be modeled precisely by just a single linear function. Since their simple 2-layer networks could not solve many important problems, Minsky & Papert reasoned that larger networks also could not. In contrast, the computations performed by more realistic (i.e., nonlinear) networks are highly dependent on the number of layers - thus, "perceptrons" grossly underestimate the computational power of neural networks.
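A quick numerical sketch of the linearity point (illustrative only, with arbitrary random weights): composing two linear layers is exactly one linear layer, so extra layers buy nothing unless the units are nonlinear - which is what lets multi-layer networks compute functions (XOR being the famous example) that a single-layer perceptron cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))        # a batch of arbitrary inputs
W1 = rng.normal(size=(4, 3))       # "hidden" layer weights
W2 = rng.normal(size=(3, 2))       # output layer weights

# Two purely linear layers collapse into a single linear layer (weights W1 @ W2):
print(np.allclose((x @ W1) @ W2, x @ (W1 @ W2)))        # True

# Insert a nonlinearity between the layers and the collapse no longer holds:
relu = lambda z: np.maximum(z, 0.0)
print(np.allclose(relu(x @ W1) @ W2, x @ (W1 @ W2)))    # False in general
```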
Difference # 2: The brain uses content-addressable memory
In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, such that information can be accessed in memory through "spreading activation" from closely related concepts. For example, thinking of the word "fox" may automatically spread activation to memories related to other clever animals, fox-hunting horseback riders, or attractive members of the opposite sex.
The end result is that your brain has a kind of "built-in Google," in which just a few cues (key words) are enough to cause a full memory to be retrieved. Of course, similar things can be done in computers, mostly by building massive indices of stored data, which then also need to be stored and searched through for the relevant information (incidentally, this is pretty much what Google does, with a few twists).
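As a concrete (if cartoonish) illustration of content-addressable retrieval, here is a minimal Hopfield-style network - one classic toy model, not a claim about actual hippocampal circuitry: a few patterns are stored in a single weight matrix, and a noisy or partial cue settles onto the nearest stored memory.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))      # three stored "memories"

# Hebbian storage: every pattern is written into the same weight matrix
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iteratively settle toward the stored pattern closest to the cue."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Corrupt 20% of the first memory, then retrieve the whole thing from the cue
cue = patterns[0].copy()
flipped = rng.choice(n, size=n // 5, replace=False)
cue[flipped] *= -1
print(np.array_equal(recall(cue), patterns[0]))  # usually True with few stored patterns
```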
Although this may seem like a rather minor difference between computers and brains, it has profound effects on neural computation. For example, a lasting debate in cognitive psychology concerned whether information is lost from memory simply because of decay or because of interference from other information. In retrospect, this debate is partially based on the false assumption that these two possibilities are dissociable, as they can be in computers. Many are now realizing that this debate represents a false dichotomy.
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
An unfortunate legacy of the brain-computer metaphor is the tendency for cognitive psychologists to seek out modularity in the brain. For example, the idea that computers require memory has led some to search for the "memory area," when in fact these distinctions are far more messy. One consequence of this over-simplification is that we are only now learning that "memory" regions (such as the hippocampus) are also important for imagination, the representation of novel goals, spatial navigation, and other diverse functions.
Similarly, one could imagine there being a "language module" in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca's area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca's area may also be involved in other computations (see here for more on this).
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
The speed of neural information processing is subject to a variety of constraints, including the time for electrochemical signals to traverse axons and dendrites, axonal myelination, the diffusion time of neurotransmitters across the synaptic cleft, differences in synaptic efficacy, the coherence of neural firing, the current availability of neurotransmitters, and the prior history of neuronal firing. Although there are individual differences in something psychometricians call "processing speed," this does not reflect a monolithic or unitary construct, and certainly nothing as concrete as the speed of a microprocessor. Instead, psychometric "processing speed" probably indexes a heterogeneous combination of all the speed constraints mentioned above.
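Some back-of-envelope arithmetic makes the point about variable signal speed (the conduction velocities below are round textbook figures, not measurements from any particular pathway):

```python
# Rough conduction delays for a 10 cm (0.1 m) stretch of axon:
for label, velocity_m_per_s in [("unmyelinated, ~1 m/s", 1.0),
                                ("lightly myelinated, ~10 m/s", 10.0),
                                ("heavily myelinated, ~100 m/s", 100.0)]:
    delay_ms = 0.1 / velocity_m_per_s * 1000
    print(f"{label}: {delay_ms:.0f} ms")
# Output: 100 ms, 10 ms, 1 ms -- two orders of magnitude of variation,
# whereas signals inside a CPU are timed in fractions of a nanosecond.
```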
Similarly, there does not appear to be any central clock in the brain, and there is debate as to how clock-like the brain's time-keeping devices actually are. To use just one example, the cerebellum is often thought to calculate information involving precise timing, as required for delicate motor movements; however, recent evidence suggests that time-keeping in the brain bears more similarity to ripples on a pond than to a standard digital clock.
Difference # 5: Short-term memory is not like RAM
Although the apparent similarities between RAM and short-term or "working" memory emboldened many early cognitive psychologists, a closer examination reveals strikingly important differences. Although RAM and short-term memory both seem to require power (sustained neuronal firing in the case of short-term memory, and electricity in the case of RAM), short-term memory seems to hold only "pointers" to long-term memory, whereas RAM holds data that is isomorphic to that being held on the hard disk. (See here for more about "attentional pointers" in short-term memory).
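A toy contrast may make the "pointer" idea clearer (purely illustrative, and not a claim about how working memory is actually implemented): RAM-style storage copies the data, whereas a pointer-style store holds only cues into long-term memory, so what is "in mind" changes when the underlying long-term trace changes.

```python
# Hypothetical long-term store and two styles of "working memory"
long_term = {"fox": "clever animal", "grandmother": "bakes cookies"}

ram_style = [long_term["fox"]]    # a copy of the content itself
pointer_style = ["fox"]           # just a cue/address into long-term memory

long_term["fox"] = "sly, russet-furred canid"   # the long-term trace changes

print(ram_style[0])                  # still the stale copy: "clever animal"
print(long_term[pointer_style[0]])   # follows the updated trace
```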
Unlike RAM, the capacity limit of short-term memory is not fixed; the capacity of short-term memory seems to fluctuate with differences in "processing speed" (see Difference #4) as well as with expertise and familiarity.
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
For years it was tempting to imagine that the brain was the hardware on which a "mind program" or "mind software" is executing. This gave rise to a variety of abstract program-like models of cognition, in which the details of how the brain actually executed those programs were considered irrelevant, in the same way that a Java program can accomplish the same function as a C++ program.
Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain. Any abstract information processing account of cognition will always need to specify how neuronal architecture can implement those processes - otherwise, cognitive modeling is grossly underconstrained. Some blame this misunderstanding for the infamous failure of "symbolic AI."
Difference # 7: Synapses are far more complex than electrical logic gates
Another pernicious feature of the brain-computer metaphor is that it seems to suggest that brains might also operate on the basis of electrical signals (action potentials) traveling along individual logic gates. Unfortunately, this is only half true. The signals which are propagated along axons are actually electrochemical in nature, meaning that they travel much more slowly than electrical signals in a computer, and that they can be modulated in myriad ways. For example, signal transmission is dependent not only on the putative "logic gates" of synaptic architecture but also on the presence of a variety of chemicals in the synaptic cleft, the relative distance between synapse and dendrites, and many other factors. This adds to the complexity of the processing taking place at each synapse - and it is therefore profoundly wrong to think that neurons function merely as transistors.
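To see how far this is from a transistor, compare a deterministic logic gate with a cartoon synapse whose effect depends on stochastic vesicle release, neuromodulator levels, and recent use. Everything below is invented for illustration; real synaptic physiology is far richer still.

```python
import random

def and_gate(a, b):
    """A logic gate: its output is fully determined by its two inputs."""
    return a and b

def toy_synapse(spike, release_p=0.4, modulator_gain=1.0, depression=0.0):
    """A cartoon synapse: whether and how strongly a spike gets through depends
    on stochastic vesicle release, neuromodulator level, and recent use."""
    if not spike:
        return 0.0
    if random.random() > release_p * (1.0 - depression):  # release can simply fail
        return 0.0
    return modulator_gain * (1.0 - depression)            # graded, not binary

random.seed(0)
print([and_gate(1, 1) for _ in range(5)])   # [1, 1, 1, 1, 1] -- always the same
print([toy_synapse(1) for _ in range(5)])   # same spike, variable transmission
print(toy_synapse(1, release_p=1.0, modulator_gain=2.0, depression=0.25))  # graded: 1.5
```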
Difference # 8: Unlike in computers, processing and memory are performed by the same components in the brain
Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain. As neurons process information they are also modifying their synapses - which are themselves the substrate of memory. As a result, retrieval from memory always slightly alters those memories (usually making them stronger, but sometimes making them less accurate - see here for more on this).
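A minimal sketch of that idea, using a toy Hebbian store (illustrative only; the learning rate and noise level are arbitrary): every read-out is noisily re-encoded into the same weights, so the trace gets stronger - and can drift - with each act of remembering.

```python
import numpy as np

rng = np.random.default_rng(2)
memory = rng.normal(size=8)
memory /= np.linalg.norm(memory)         # a stored trace (arbitrary unit vector)
W = np.outer(memory, memory)             # Hebbian storage of that trace

def retrieve_and_restore(W, cue, learning_rate=0.2):
    """Reading the trace out also writes it back: retrieval strengthens the
    memory and, because recall is imperfect, can slowly distort it."""
    recalled = W @ cue + rng.normal(scale=0.05, size=cue.shape)  # noisy read-out
    recalled /= np.linalg.norm(recalled)
    return W + learning_rate * np.outer(recalled, recalled), recalled

print("trace strength before:", round(memory @ W @ memory, 2))
for _ in range(5):
    W, _ = retrieve_and_restore(W, memory)
print("trace strength after: ", round(memory @ W @ memory, 2))
```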
Difference # 9: The brain is a self-organizing system
This point follows naturally from the previous point - experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit - something known as "trauma-induced plasticity" kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).
One consequence of failing to recognize this difference has been in the field of neuropsychology, where the cognitive performance of brain-damaged patients is examined to determine the computational function of the damaged region. Unfortunately, because of the poorly-understood nature of trauma-induced plasticity, the logic cannot be so straightforward. Similar problems underlie work on developmental disorders and the emerging field of "cognitive genetics", in which the consequences of neural self-organization are frequently neglected.
Difference # 10: Brains have bodies
This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is "offloading" its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.
Bonus Difference: The brain is much, much bigger than any [current] computer
Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn't include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion. (See here for more on this.) The brain-computer metaphor obscures this important, though perhaps obvious, difference in raw computational power.
Fantastic post and comments together!
In simple terms, don't forget that computers are still a relatively new thing. Over time, they will be modelled more and more like the human brain.
As human understanding of the brain increases, so too will our ability to model it in future computers and technology.
Whatever is discovered, and whatever is going to be discovered, can't be better than human brains, because those discoveries are products of human brains; so there is no doubt about which is better! Really good information and good arguments!
A valuable and informative article just to bring up the issues which can be discussed. Do any of the arguments prove that computers cannot become just as sentient as humans? Not one bit.
Each of your points is either not true or it just lists an area where the brain is currently at a high complexity level.
There isn't a significant difference between an analog signal and a digital signal of a certain complexity. An MP3 file might only record music at a rate of 44,000 times in a single second. Are you unable to hear music on the radio or using an ipod? What do people buy ipods for then?
This sample rate is the degree to which the digitalized signal matches the analog signal. If you sample twice as often, there is half as much difference between the digital signal and the analog signal. At some point the two signals are indistinguishable in any meaningful way.
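A quick numerical check of that intuition (illustrative; the exact rate of improvement depends on the reconstruction method): sample a 5 Hz sine wave at increasing rates, reconstruct it with a naive sample-and-hold, and watch the worst-case error shrink.

```python
import numpy as np

t = np.linspace(0, 1, 100_000)             # "analog" reference signal on a fine grid
analog = np.sin(2 * np.pi * 5 * t)         # a 5 Hz sine

for rate in [20, 40, 80, 160]:             # samples per second
    sample_times = np.arange(0, 1, 1 / rate)
    samples = np.sin(2 * np.pi * 5 * sample_times)
    # naive sample-and-hold reconstruction back onto the fine grid
    idx = np.minimum((t * rate).astype(int), len(samples) - 1)
    max_error = np.max(np.abs(analog - samples[idx]))
    print(f"{rate:4d} Hz sampling -> worst-case error {max_error:.3f}")
```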
The real crux of your argument is
"Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions"
With the computing-power increases we've seen under Moore's law, there's a doubling every 18 months. It would take only about 87 years of such doublings for chips to increase vastly above the level you are talking about (hundreds of millions of billions).
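Making that arithmetic explicit (assuming a doubling every 18 months and, generously, a starting capacity of one arbitrary unit):

```python
doublings = 87 / 1.5                  # 58 doublings in 87 years
capacity = 2 ** doublings
print(doublings, f"{capacity:.2e}")   # 58 doublings ~ 2.9e17, above 2.25e17
```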
You also are making an assumption ahead of time that every single variable in the brain has millions of important states. There is no evidence proving any of that yet. Only a billionth of those values might be meaningful for the brain.
Cognitive Psychology has already shown us how huge the limitations of the brain are. We have MASSIVELY finite and limited memory and thinking ability. That supports a brain that does NOT have the complexity of 225,000,000,000,000,000 (225 million billion) interactions.
Miller's number says people can normally handle only 7 plus or minus 2 pieces of info at a time. Our brains throw away most all of the information that comes into our ears and eyes right away because of specifically how limited our brains are. It's hard to watch you try and argue that a brain has basically impossible to replicate complexity when it's such a weak device.
ANY number of interactions is possible to mathematically model. If you have 4 factors interacting that's easy to understand. If you have 1000 factors then the formula is going to be a lot larger and more memory-intensive but there is no difference at all as to mathematical possibility.
The only real question will be "how big does the super-computer need to be to model a human brain"
Maybe it will have to be as big as a watch, maybe it will have to be as big as a car. It is only dumb human ego to think that a computer will not be able to write its own music or stories in the future.
I really like the article, it was very informative.
Yes, computers are powerful and great. They are super smart and they are capable of things that humans are not normally capable of. But let us not forget that humans were still the ones who invented them. So if someone says that computers are much better than humans, just remember that an invention would not be that powerful if it weren't for the inventor.
Lisa & Richard both make the very interesting point that metaphorical reasoning appears to be a necessary component for understanding complex things, in particular the brain. Houston Personal Injury Attorneys
For more information see CODIL: The Architecture of an Information Language, The Computer Journal, Vol. 33, No. 2, 1990, pp. 155-163. [Due to the delays in the peer review system this was published over two years after I had decided to throw in the towel!]
Your points about the differences between animal brains and typical modern computers are mostly correct. The implied conclusion is that AI is impossible as a result.
To extend the analogy, I have a slide rule to calculate logarithms. It operates on a very different basis from a modern desktop computer. Therefore, my computer can't calculate logarithms.
At first, that I was doing anything "questionable." Before entering the computer industry I have been involved with several different types of very complex human-based information processing tasks
This article is predicated on a litany of unfounded and false assumptions about computers and the many ways, still being discovered, in which they can be made to work.
Essentially the main mistake here that is made again and again is to assume that your current incomplete and poorly informed understanding of a computer is the only way a computer has ever been and will ever be.
The unfortunate thing is that such misinformation based on ignorance can lead people to persist in their non-thinking about what they are going to have to deal with in the medium to long term future.
The article is even wrong about current computers. You have no idea what is going on.
There is a possibility of replacing the silicon chip with macromolecules that work exactly like neurotransmitters. Nature, 475, 368–372.
But But but what about
Terminator
Colossus
IRobot
and all those other sci-fi movies
Arrrrrrg!
What was that one Peter Sellers was in when we were about to Nuke ourselves to the stone age?
As a behavioral neurobiologist, I am always amazed to see that one of the differences between computers (or, say, Turing machines) and brains (or, say, nervous systems) is always overlooked. There is a difference in essence: Turing machines are input-output machines: if you do not feed the machine with a task, it is not computing anything. Every nervous system, even the smallest and simplest ones - even those that have no brain part at all - is (also? primarily?) an output-input "machine": in the complete absence of input, it produces outputs (spontaneous activity), which will trigger changes in the input, changes that may bear some information (turn your head: your whole visual input changes).
If our brain "has a kind of "built-in Google"", then the internet could be an organism whose neurons would be Google ? That's what I suggest in this text (sorry, in french): http://www.societesdelinformation.net/frontend/index.php?action=getArti…
Most important differences between computer and brain:
• Computers access information in memory by polling a memory address; brains search memories using cues
• The brain is a massively parallel machine; computers are modular and serial
• Processing speed is not fixed in the brain; there is no system clock
• Short-term memory is not like RAM
• Computers are hardware that runs software; there is no "mind software" running on brains
• Synapses are far more complex (electrochemical) than computer logic gates (electrical)
• Computers use processors and memory for different functions; there is no such distinction in the brain
• Computers are designed and built with a fixed architecture; the brain is a self-organizing system
• Computers have no body; brains do
Humans have super intelligence and can carry the globe in their brains.
Bonus difference: brains have bodies. Both brains and computers have bodies; I think that is a similarity, not a difference.
Fantastic overview, something I think I'll refer people to if they start talking about computers being able to go beyond the human brain in the next 10 years
Excellent article and very informative.
--Brett
I have to disagree with you on a number of these points (namely, the first half). While they are all generally true regarding the computers we work with during our daily blogging activities, I don't think they are entirely true when taking into account a more sophisticated understanding of what a "computer" is.
Difference # 1: Brains are analogue; computers are digital
Calling something a computer doesn't necessarily mean that it must have digital circuits, though analog circuits clearly are rare. So this is good to point out, but there is interest in analog circuits, especially in building artificial neural networks.
Also, I fail to see how rate of firing is significant in digital versus analog. If the importance of the continuity of firing is the average signal strength over a given time period then a signal sent digitally in intervals should be equivalent in its ability to cause the next set of neurons to fire. There might be something I'm missing here, but as you stated it I don't see how this necessitates analog circuitry.
Also, I think the Minsky & Papert reference is off the mark. It certainly was a sad day for AI, but again I don't see what this has to do with analog versus digital. What you mentioned demonstrates their fallacious thinking about how one can structure neural networks and what they can compute, but what does that have to do with whether they are analog or digitial?
Alan Turing firmly believed it was possible to simulate the continuous nature of the brain on a digital machine; however I think this is still an open question. At any rate, I think the more important question is, given that one can build a machine using analog circuits and call it a computer, whether the entire brain can be described using math equations.
Difference # 2: The brain uses content-addressable memory
I think this is really only a superficial distinction. If you implement a neural network on a computer, then all of the inner workings of the CPU and its methods of memory allocation become irrelevant. If a neural network can be simulated digitally, then where or how it is implemented is a non-issue, as any implementation will be equivalent. (So the question is then analog versus digital.)
On the other hand, I can see how this architectural difference can be important to point out to people who have no idea how the brain or computers are constructed.
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
In the same way that implementation wasn't an issue before, it isn't here either. Parallel processing can be implemented equivalently on a serial machine. Also, this isn't even true anymore, as most if not all supercomputers used for research have dozens, hundreds, or even thousands of processors. And even consumer level machines are becoming parallel, with dual-core CPUs coming out in the past few years.
The modularity issue I'm intrigued by. Clearly the areas of the brain are not as discrete as those in our computers, but don't we still refer to experience occurring in the neocortex? Although I really don't know enough about this (and I want to know more!) there must be some level of modularity occurring in the brain. My gut instinct is telling me here that a brain based completely on spaghetti wiring just wouldn't work very well...
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
You might call me out for nitpicking here, but CPUs don't require system clocks. Asynchronous processors are being actively researched and produced.
A key advantage is that clockless CPUs don't consume energy when they aren't active. Machines based on synchronous processors, on the other hand, constantly have the "pulse" of the clock traveling through the system (and the frequency at which it "beats" determines the speed of the CPU). The pulse of the clock continuously coursing through all of the circuits also results in "wasted cycles," meaning power is being used when the CPU isn't doing anything, and heat is being dissipated for no reason.
Difference # 5 - Short-term memory is not like RAM
Again, this seems to be a superficial architectural difference. I think that if your intent is to simulate the brain using artificial neural networks, then how the RAM or hard drive works is inconsequential.
I'll admit that it is something worth pointing out to someone who does take the brain/computer analogy too far (which is I guess exactly who you're targeting here) or doesn't know much about computers or brains.
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
This one I completely agree with. I always get the feeling when I read philosophy of AI papers that some of the philosophers take the sentiment "the mind is the program being executed on the machine that is the brain" too far. Consequently, and I feel this is actually a central problem with philosophy of AI, they pay too little attention to how the brain actually operates and try to think about how to implement consciousness on a computer without considering how the activities of the brain relate to the mind.
...
Anyway, I think it would be fair to describe the brain as an asynchronous, analog, and massively parallel computer where the hardware itself is inherently mutable and self-organizing.
Jonathan - thanks for these astute comments!
Minksy & Papert's conclusions were based on the faulty assumption that neural networks are computationally linear; linearity is a very unusual characteristic for analogue systems. Had M&P fully appreciated the analogue nature of the brain, they likely would not have made this faulty assumption.
A consistent thread in your comment is that some differences are merely "implementational" or "architectural" details, and thus are actually unimportant or otherwise superficial. IMO, that attitude is scientifically dangerous (how can you know for sure?) and *very* premature (when we have an artificially intelligent digital computer, I'll be convinced).
It is also the same attitude that pervaded both classic cognitive psychology and GOFAI (good old-fashioned AI). I don't think the track record of either is very good: 20th century advances in statistical theory may be responsible for the few successes in both disciplines (just don't tell Chris at Mixing Memory I said that... ;)
Just adding something to Jonathan's answer to #6: A computer can run entirely in hardware (actually that's a very strange statement, but you know what I mean). And more: if that hardware is an FPGA, it can mutate and self-organize.
I think Chris' arguments would target "today's personal computers" and not computers in general, since "computer" is a very wide term. However, I think that was the true target of the article, just with some differences/mistakes in the use of the terms.
Just wanted to chime in on what a great article this is. Not too complex or technical, and gives a great overview of the significant differences.
Rafael, when you say "I think Chris' arguments would target 'today's personal computers' and not computers in general, since 'computer' is a very wide term", you have a good point, but that is exactly the computer model on which the analogies were (and are) based. Chris's arguments are about the analogies as they were actually drawn, not about computers in general, and not about analogies that might be drawn on the basis of future or experimental computer architectures.
Thanks incze - that is exactly where I was coming from. That said, I am now considering a second post entitled "10 important similarities between brains and Turing machines" haha.
You compare high and low, fundamental things about the brain against implementation details of a specific microprocessor. But Jonathan already said all those things much better than I can.
Rafael, your point about targeting "today's computers" is questionable. There are much more obvious differences between a brain and today's computers: the microprocessor is square, the brain is roughly round. The computer runs on electricity, the brain runs on icky chemical stuff. The computer has a hard disk, the brain does not. The computer stores data sequentially, the brain does not. Etc.
Actually, all of these points are wrong: sometimes trivially, sometimes in a way that completely invalidates the conclusions that accompany them.
But the biggest problem isn't a mistake, but an unspoken assumption: it's being argued (incorrectly, as it happens) that brains aren't like computers, when the arguments being made are actually about the idea that brains aren't like one particular type of computational device. Computers aren't inherently digital, for example, so the analog/digital distinction doesn't mean anything.
I forget where I was reading that before computers, people compared the workings of the brain to the steam engine, and before that, to watches. It seems whatever the most complex and intricate man-made device of the time was, that would be compared to the brain. It sort of makes sense because in some ways the brain is like any complex machine, and also it's interesting to compare the best man-made devices with what we may perceive to be the most important natural "device". I do think the brain has tons of things more in common with a computer than a watch (even a complex one), but maybe that is partly due to the current cultural importance of computers and my relative lack of amazement (and lack of understanding the specifics) of watches.
Great post and comments.
To add something to taking down the idealized picture of computers:
Difference # 4 is perhaps the most superficial. Synchronous electronics is made to simplify architecture, but as noted it is wasteful and also difficult in large "nets" where the clock no longer looks ideal. I haven't looked closely at today's common dual and quad processors, but it would surprise me if one of the advantages were not simply the decoupling of each processor from the others' clock synchrony.
It is anyway rather difficult (read: impossible) to run the same software with the same timing on a more complex machine.
Difference # 6 and # 9: As noted, there are some special systems which allows reconfiguring hardware vs software to suit the task, and they may become more popular because they also save energy. (And in a small manner, that is also what happens when power saving slow down or turn off parts of some modern systems.) Ideally, this plasticity could also work around failing components in future aging systems to increase reliability and save repairs. Who knows, maybe it will become reality.
Difference # 7: Another superficial difference, since signal handling in VLSI components is complex and highly non-linear. Threshold (and local, connected) devices are made to simplify architecture, but are again wasteful and difficult in larger applications. The difference to #4 is that the alternatives are not much developed, and may never be used.
But it is rather impressive that the Blue Brain project apparently needs to use one processor just to emulate one neuron...
Ok, agreed with you all =). I got to an extreme to explain what I thought, but it was wrong. I just think there are some restrictions above the arguments, but Jonathan already said it in details.
This is a great overview and very educational. Thanks for putting this together. I do however caution people against making assumptions as to what the nature/design of hardware and software systems may be ten years from now. That's adequate time for whole new technologies/discoveries to come into being and alter our definition of what a human brain or a computer is.
Forgive me if this seems high-level and uninformed, but entirely new circuit materials and design *might* be right around the corner. Just the devil's advocate in me.
Chris, I agree with all your points, but I can't help thinking 'straw man' as I read this. I mean, that may not be quite the right word for it, but isn't most of this already pretty well assimilated into common thinking about the brain? You have to go back pretty far in time for Minsky & Papert. With the exception of #7 (I do think a lot of writers equate neurons with big wet transistors), I don't think I've read anything that compares brain functions with computer circuitry in any kind of a literal sense. (Of course, I'm excluding certain philosophers when I say that - there's just no accounting for what some of them will argue...)
Now, I will admit that I'm not really that well-read on the subject, so maybe I've just been lucky so far.
As someone whose specialty happens to be computer science, I would have to say that I agree overall with your overview, except for a few points.
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
[pedantic computer science junky mode]
Well, no. You're generalizing a specific architecture (the serial von Neumann machine) as a computer. Parallelism, concurrency and distribution are huge areas of research nowadays, primarily due to the fact that the hardware industry has reached a plateau with the serial architecture.
You could grant that computers are probably not as massively parallel as human brain architecture, but that's really a question of scale and not essence. And as well, there is a good reason that parallelism hasn't been a hugely popular field up until now: parallel machines are notoriously difficult to program. (Even with the comparatively minor levels of multi-threading and task distribution being used with new multi-core processors on the market, software dev schedules are being doubled and sometimes tripled to assimilate the requirements.)
[/pedantic computer science junky mode]
Other than that, I don't have many complaints. But when it comes to fields like A.I., I personally find their value from a purely CS-centric perspective questionable. As technology, A.I. and adaptive computing have been beneficial in many ways, but I don't see them as being especially valuable to actually researching more "natural" computing devices (like the brain).
In the end, I see it as somewhat akin to human flight. Our air-traversing machines are certainly technically different from those produced by nature, but mostly because nature's ad hoc, Rube Goldberg designs didn't prove very useful. Computing is the same way, IMO. The technical value of A.I. should be able to stand on its own.
This post is a healthy and much-needed rebuttal to the weird idea that in 20 or so years machine intelligence may exceed human intelligence (the so-called Singularity). The proponents of this idea seem to be basing their belief largely on extrapolating on Moore's Law.
But if Moore's law holds up for the next 20 years, computers will be only about 4,000 times more capable. (And yes, I'm using the bastardized version of Moore's law that presumes increasing component density [what Moore was really talking about] correlates in a 1:1 way with increased processing power.) But if artificial intelligence required only (or chiefly) increased hardware power, then it would already exist. It would just take 4,000 times longer than we would deem practical.
Before reading this article I would have argued it is primarily a software problem. Now I have to agree that it is also a hardware problem. But who's to say we can't simulate (if not develop) hardware that will work sufficiently similarly to our organic hardware? Then it will still come down to a software problem. And we just don't have a good model of how human "brain software" works. And I don't think we will for a very long time.
Chris, thanks for the post: a very concise presentation of the arguments circling in my own field. I have to say though that the discussion is just as good :-) If I might add a few points...
Concerning point 10: I think this is something which has been remarkably overlooked, and one which, if you were listing your 10 points in order of importance, I would place near the top. The fact that brains (and therefore from point #6, the mind) are embodied (have a body which is localised in the real world) is, I believe, the most important constraint that is on this system. I would agree with your point that 'computers' lack this, however, I should point out that the rapidly growing field of cognitive robotics (of which I see myself as a part) is growing in importance precisely because of this. The view that brains (be they biological or otherwise) need bodies (again, biological or otherwise) is thus not one which has been forgotten.
Concerning point 3: I agree with your point, and I have to say that I slightly disagree with Jonathan's response that "Parallel processing can be implemented equivalently on a serial machine". I believe that, by definition, the best you can hope for in terms of parallel processing is a simulation of parallel processing. There are clever computational algorithms capable of getting very close in certain circumstances (threading, etc), but at the end of the day (and if using a single processor) only one computational instruction can be executed at any given time step, thus imposing a degree of serial processing on what would ideally be parallel. Also, on the point of multiprocessor systems, this is in all likelihood adequate for relatively simple systems; however, I get the feeling (I hasten to add that this is based on limited experience of multiprocessor systems) that for larger systems (if one were to simulate many thousands of neurons with a processor per neuron), the problem would not be one of raw processing power, but one of communication limitations between the processors (bandwidth, speed etc) - although I have come across work which is attempting precisely this (in Switzerland perhaps?). Having said all this, I do believe that pseudo-parallel computation is more than adequate for most modelling purposes, and that the shortfall may be compensated for to a certain extent.
Concerning point 9: if you are referring to hardware, then I completely agree - but not if you were also including software in that. There are many adaptive and self-organising algorithms/computational techniques capable of organisation given constraints and inputs.
More generally, the points you have raised are very important ones - particularly your assertion that the brain is nothing like a standard desktop - maybe your idea of writing another one on the similarities is a good one (on the functional nature of the two rather than the structural nature)! Thanks again though!
The things you say are all roughly correct. To make them more so would bog the article down in details that aren't important to the majority of readers. If all you mean to do is tell people with a rough understanding of how their computer works that their brain doesn't work the same way, then all of your points are valid.
If you want to get into cutting-edge, high-end, or low-market-share technology then the argument requires more support, but is far from invalidated.
Also, the argument that the brain is not like a computer does not mean that the computer can't simulate some aspects of the brain - it means that they don't inherently work the same way.
Before I annoy anyone here's something for Jonathan:
As far as modularity goes, there is some. You can predict what kind of deficits a person will have based on where an injury occurs. The problem comes from assuming that this is where the processing of that particular thing is done. To use computers as an analogy - I can't help it - if we cut the power cord on a computer it stops adding numbers together. Thus addition takes place in the power cord.
To save reading all of this lengthy post, my other responses to Jonathan are summarized:
#2, #3 and #5: We can't accurately describe these phenomena, let alone model them. The distinction between the actual brain and the computer model only disappears when the model is completely accurate. Also, Chris has massively understated the differences between RAM and working memory. The main thing is that you COULD make enough RAM act like working memory, but that is the same as saying you could make a fly look like a raisin. It would be cruel and severely diminish the capability of your fly. For instance, to make capacity vary you could make some of your RAM unavailable sometimes. Why would you do that?
To assume that current simulations of the spreading activation of a neuron - which work with a lookup table, the current method - are accurate is to assume that we know all about how this addressing works. I assure you that we don't. Some of my current work examines the effect of working memory load on inhibition (following from http://www.sciencemag.org/cgi/content/abstract/sci;291/5509/1803, but I should add that I'm not affiliated with them in any way shape or form). Are you trying to tell me that the amount of RAM available will affect how we traverse a neural network lookup table? Because then the difference between working memory (which we don't really understand either) and RAM becomes extremely important.
Thus when Jonathan says "implement a neural network" does he mean a current neural network, in which case it isn't really very much like the brain, and thus not in conflict with this article at all? Or does he mean implement an accurate model of all functional aspects of the brain? Because computers aren't like that now and we have no evidence they ever will be.
The simple fact is that arguing that the brain is analogous to a Turing machine is a dangerous thing to do. Philosophers have created theoretical machines capable of solving the halting problem (for the uninitiated that's a problem computers can't solve). The brain may be a realisation of some super-Turing machine. It is true that any parallel arrangement of Turing machines can be modelled by a single machine, but it is not certain that the brain can be modelled by a collection of parallel Turing machines.
I think that the brain probably can be modelled by a Turing machine. But not yet.
Lastly, I love comment number 10. Go J. J. Gibson.
Jonathan: Unlike Chris, I'm not buying most of your criticisms.
Yeah, we've built "artificial neural networks", but most of those are research simulations! Simulating analog processes on a digital system (or vice versa) tends to pull in huge overheads, worsening the basic order of the computational costs -- and it still isn't "exact".
Simulating massively parallel systems on CPU-based systems is worse, and less reliable. The CPU version fundamentally has time cost at least linear to the number of nodes and connections, whereas a true parallel system does not.
It might well be possible to make something like "content-addressable" memory in the RAM model, but it would be a "bloody hack" with no connection to our usual programming schemes, or to a biological-style memory.
Then too, our ability to "program" neural nets is frankly humbled by the ordinary development of almost any vertebrate's nervous system.
These aren't issues I'm willing to wave off as "implementation details"....
And here is one dimension along which the brain is like a computer.
See O'Reilly, 2006:
"Computer models based on the detailed biology of the brain can help us understand the myriad complexities of human cognition and intelligence. Here, we review models of the higher level aspects of human intelligence, which depend critically on the prefrontal cortex and associated subcortical areas. The picture emerging from a convergence of detailed mechanistic models and more abstract functional models represents a synthesis between analog and digital forms of computation. Specifically, the need for robust active maintenance and rapid updating of information in the prefrontal cortex appears to be satisfied by bistable activation states and dynamic gating mechanisms. These mechanisms are fundamental to digital computers and may be critical for the distinctive aspects of human intelligence."
I think this list is all correct, except for the last item. It seems obvious to me that the chances of anything even *close* to a biologically accurate simulation of the brain being necessary to get the relevant aspects of its functioning are tiny. Much more likely is that the vast majority of the complexity is accidental. This isn't to say that we don't need to understand most everything that goes on before we can get a good model, it's just to say that current estimations of brain computation capacity *aren't* much too low. If Moore's law holds, I don't see why we can't have supercomputers within ten years that could simulate a human brain, given a model that we probably won't have for another 15 years.
Re: 'computer' defined...
It appears inevitable to speak in analogies when discussing AI, so the analogy I'd offer is this: the brain is like what we would call a computer network. It has areas of specialization of function, but it operates overall by combining data, memory and processing to arrive at decisions. While today's networks are still quite primitive, perhaps self-evolving networks using nanomachines to construct their own development/evolution would ultimately become 'intelligent'. Of course, one gets into the philosophy of human intelligence and...
"Contemplate this upon the Tree of Woe."-Conan the Barbarian
I think it's important to note that Chris is not criticizing or "targeting" computers. His target is the erroneous thinking that people working in cognitive science, AI, etc. have committed over the years because of a faulty analogy between "a brain" and "a computer". Or more accurately, "their conceptualization of a brain" and "their conceptualization of a computer."
From what I read, it seems that part of what created said thinking was a mistaken idea of how a computer is built, and so in this article Chris must explain a few points about how typical computers work in order to help dispel the incorrect portions of the analogy. Don't take that to mean that he is making authoritative statements about computers and what they can be or that his article is meant to target shortcomings of computers. Any criticism is targeted at a tendency for some researchers and lay people to think of a brain as being like the computer they sit at every day.
So yes, there are alternate computer architectures that more closely resemble a brain. And certainly computers don't have to work the way Chris describes. But the model of "computer" that he uses very strongly matches that used by people who subscribe to the erroneous analogy he is attempting to debunk.
Your points are well-taken and very relevant when people start talking about resemblances between brains and computers. But it seems a little misguided to compare brains to computers, whose current form was the result of a lot of historical and technical factors: the technology available, standardization decisions taken, and so on. I think computers could have taken other forms -- for instance, if the von Neumann architecture had not proved so influential, isn't it possible that computers today would be massively parallel? -- that would have made them more structurally (although superficially) similar to the brain.
The key difference, it seems to me, is the difference between brain processes and computational processes. What is the role of brain processes in our interaction with the world? And at what level of description are brain processes computational (if at all)? And so on.
Still, great post!
Sorry for the delay in posting all these very thoughtful comments - I have been camping in Utah for a couple of days, and just now got access to the intarweb:)
I'll go through them in more detail soon.
Sorry for the belated reply, but here goes...
As for Minsky & Papert, I'll defer to your knowledge, as I'm shaky on the concept of linearity in neural networks. I understand that they made the mistake of only considering single-layer networks when pouncing on perceptrons; if they had considered multi-layered networks they would have seen that things like XOR are possible. Linearity and analog systems notwithstanding, I can say with the hindsight of a huge generational gap that it just seems silly to me that they didn't consider multi-layered networks.
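To see the XOR point concretely, here is a toy sketch with hand-picked weights - an illustration only, not a reconstruction of Minsky & Papert's actual analysis. No single linear threshold unit can compute XOR, because XOR is not linearly separable, but adding one small hidden layer makes it trivial.

```python
# Toy sketch of the XOR point: hand-chosen weights, purely illustrative.
def unit(inputs, weights, bias):
    """A single linear threshold unit (one 'perceptron'-style node)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_with_hidden_layer(x1, x2):
    # Hidden layer: one unit computes OR, the other computes AND.
    h_or  = unit((x1, x2), (1, 1), -0.5)
    h_and = unit((x1, x2), (1, 1), -1.5)
    # Output unit: OR minus (twice) AND gives XOR.
    return unit((h_or, h_and), (1, -2), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_with_hidden_layer(a, b))
# No choice of weights and bias for a single unit(...) applied directly to
# (x1, x2) can reproduce this truth table.
```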
A consistent thread in your comment is that some differences are merely "implementational" or "architectural" details, and thus are actually unimportant or otherwise superficial. IMO, that attitude is scientifically dangerous (how can you know for sure?) and *very* premature (when we have an artificially intelligent digital computer, I'll be convinced).
We can know for sure because all modern-day digital computers are Turing-equivalent, meaning any program implemented on one can be implemented on another and be computationally equivalent despite differences in system design. Just as the brain only has hardware (as you said, there is no software that is the mind running on top), the only thing that counts when programming a mind is the software. The high-level software need not be concerned with memory registers when representing knowledge, and "pointers" can be implemented on a system that uses only non-volatile memory.
I think the only real problem here is whether or not digital computers can simulate the continuous nature of the brain. If it is the case that a discrete state machine is not hindered by this, then the brain's architecture with all of the intricacies of neuronal activity can be implemented to the fullest extent with no other problem (although we'd of course want to abstract away as much complexity as possible). However, if digital computers cannot simulate continuous structures with sufficient robustness, then I think AI would have to start putting more research into analog circuits. But I don't think we currently have enough evidence yet to make the case for either.
So yes, brains and PCs have different architectures, but that doesn't mean you necessarily cannot implement a mind on a computer.
BTW I'm looking forward to your upcoming article on Turing Machines vs Brains. :D
Thomas & Kurt - I agree with both of your comments, but again my point here was to contrast brains with modern PCs (think of a standard dell laptop). If I am guilty of a straw man fallacy, then this is doubly true of those who take issue with this post for the reason that the brain may *in principle* be the same as a "computer" (i.e., an information processing device). I never argued that the brain is not an information processing device, but for some reason many think that's what this post is about (e.g., Tyler, whose points I completely agree with - b/c he's not criticizing my opinion, but rather one that I've never actually seen anyone endorse. But perhaps we could push shreeharsh in that direction?)
Paul: I've been following your posts on cognitive/epigenetic/developmental robotics for quite a while, and I am also very interested in it. On the other hand, while there is substantial reason to believe that embodiment is important, a lot of the arguments used to support this claim are far too philosophical for my taste (and indeed I am currently collecting experimental evidence against one of the strongest claims for the importance of embodiment in developmental psychology). You'll notice the evidence I present in #10 actually pertains to immersion rather than embodiment (a logical fallacy I permitted myself;). I believe embodiment is important, but I don't think it's actually been proven.
Brian - I saw Randy's Science paper and (probably like yourself) was very surprised. On the other hand, he himself admitted in a recent prosem that the FPGA metaphor for PFC "may or may not" be deeply informative. So I have some difficulty taking that paper's perspective very far.
Lisa & Richard both make the very interesting point that metaphorical reasoning appears to be a necessary component for understanding complex things, in particular the brain.
"Are you trying to tell me that the amount of RAM available will affect how we traverse a neural network lookup table?" --Steve G
I sure don't remember telling you that...I was responding to architectural differences and the non-issue of how RAM works. Other than that I'm not quite clear on what you were saying.
"Jonathan: Unlike Chris, I'm not buying most of your criticisms.
Yeah, we've built "artificial neural networks", but most of those are research simulations! Simulating analog processes on a digital system (or vice versa) tends to pull in huge overheads, worsening the basic order of the computational costs -- and it still isn't "exact"."--David Harmon
If you reread what I wrote then you'll see that I stated my belief that it is an open question whether analog can be simulated by digital.
Also when you take issue with "time cost," you're arguing against computational power. Again, I was arguing against the problem of architectural differences. Yes, clearly a massively parallel system would make for a much nimbler machine. I would never argue against that.
And yes, we obviously haven't built neural networks that reflect the robust behavior of living systems. I was only making the claim that it is *possible* to implement real nervous systems (with some amount of complexity abstracted away). In fact I'm only arguing that it *might* be possible.
Ah you kids, arguing analog vs. digital while I sit here remembering the hybrid system (two PACE analogs and a Scientific Data Systems Sigma 7 digital) that we used for aircraft simulations back in the 1960s. How soon they forget. :(
IIRC, Scott Aaronson argues on his blog and in papers that P != NP could eventually become accepted as an observed fundamental constraint, like the second law of thermodynamics, since it seems impossible to find a proof and/or physical systems that would go against it. Invoking this would lead to a couple of reasonable and simplifying no-go corollaries, like no closed timelike loops.
If I understand correctly, super-Turing machines have pretty much the same basis. In this perspective it would seem really surprising if they existed.
1) It is an "open question" whether any part of reality is actually analog.
2) If reality IS analog, so that human brains are analog, then computers can be constructed in such a way so that they are analog as well. We build computers whose information processing is (approximately) digital because that's convenient for us - but the underlying substrates are the same. Anything we can do, they can do - no matter what speculations we may entertain about brains being beyond Turing capabilities.
I believe embodiment is important, but I don't think it's actually been proven.
If you consider that the human being has output systems - movements, writing, speech, imagination, etc. - it stands to reason that the architectural demands of those systems are likely to have an effect on the design of a resource-constrained brain, as thoughts and plans are to be enacted through them. On the other hand, the little I've heard about embodiment seems to go too far, and the above is not really evidence as much as it is logic. Embodiment would be part of internal context, btw...
Good posts, keep it up.
"Are you trying to tell me that the amount of RAM available will affect how we traverse a neural network lookup table?" --Steve G
I sure don't remember telling you that...I was responding to architectural differences and the non-issue of how RAM works. Other than that I'm not quite clear on what you were saying.` --Jonathan
I'm arguing that the way the RAM works is important, because one doesn't need to have the same limitations in RAM behaviour that one has in the brain. You *can* model the limitations on a sufficiently powerful computer. But like I said before, you can model a raisin with a housefly.*
The key words there are sufficiently powerful. Because unless you were recreating the wetware, an emulation of the brain on any other architecture is going to be sub-optimal. Why would you want to handicap your amazingly powerful computer?
*What do you call a fly with no wings?
A walk.
What do you call a fly with no wings and no legs?
A raisin. :D
Oooh! I can't wait.
History is another essential difference, often overlooked. The organization of computers is explicitly designed by people. The structure of brains is not designed but emerges from contact with the world over evolutionary time, developmental time, and in the immediate moment.
You're right. It would be really strange, but I'm just pointing out that it's still theoretically possible at this point. To say "We can *probably* simulate the brain in a computer, therefore the brain *is* like a computer and it's just a question of getting a proper interface to hardware" is not a valid argument.
Emotionally, I find the idea that the brain isn't a Turing machine only slightly less distasteful and unlikely than violating the physical domain in explaining consciousness. But the idea of Peyton Manning winning a Super Bowl was distasteful and unlikely too, and it happened.
I think that the post addresses a certain, probably ruling metaphor of brain computation. This comes from the Turing model of computers and reflects the understanding of the second half of the past century. In this sense, the post urges a metaphor change, and I fully agree with that.
If you go out to everyday informatics you'll hardly find the low-level structures and mechanisms that are used in the cognitive sciences; even common systems are much better modelled and understood (see Unified Modeling Language, Design Patterns, etc.).
I think the cognitive sciences would gain a lot by recycling the models, patterns, and analogies that are used in designed and partly designed systems (like the Internet - and yes, Google is functionally something like the hippocampus). In these systems, to get them to work (OK, more or less), problems like asynchronous mixed inputs, information relaying, updating and evolving structures, interconnection of heterogeneous environments, growth, continuous operation, bottlenecks, prioritizing activities, etc. must have been resolved in some way. My first candidate is, of course, the Internet - NOT as an analogue of the brain, but as a warehouse of patterns rich enough to reach for better metaphors in the understanding of cognitive processes. Here you have the unique opportunity to study the evolution of solutions: what forces and processes led to change, what goals could be achieved and which could not (and why not), what competing solutions existed, which was the winner and why (not always the "better" one - the Internet "has a body"), etc.
I'm less optimistic about the usability of new hardware architectures or dedicated brain models; they have a couple of design points fixed in place, so they can be good for demonstrating a certain aspect, but not more.
A really great post Chris indeed.
But an "AI-believer" is throwing the gauntlet : A challenge to the AI-deniers.
I cross-linked this posting over there already.
I think the substrate-neutrality requirement of computationalism should be re-examined. Computationalists work hard to maintain a definition of computation that includes both computing devices and people. But what falls out of that definition is not particular enough to talk in a meaningful way about organismic knowing. That human cognition can be modelled in a limited way using hardware and software does not mean that computation is the best analytic framework for understanding what cephalized organisms do and how they go about doing it.
I am not sure the change blindness literature supports the notion that visual memory is sparse. Dan Simons, a leading change blindness researcher, has recently argued against the notion of a sparse representation. Instead Simons and Ambinder (2005) and other researchers have postulated that people have the information, but it is inaccessible. Hollingworth and Henderson (2002) found memory for changed objects when participants failed to report the changes. Similarly, Mitroff, Simons, and Levin (2004) found memory for both the pre-change and post-change objects. Object viewing times in other studies have also shown what appears to be covert change detection (e.g., changed objects had longer second looks than objects that were not changed). Simons, Chabris, Schnur, and Levin (2002) also found memory for changes in a real-world environment when changes were not reported. Conversely, another reason we may not detect the changes is simply that we perceive a coherently richly detailed world with everything perceived simultaneously, when in fact we do not.
Mark, thanks for the very interesting comments about change blindness. You clearly know more about that research niche than I do - but I am confused by your last sentence, which seems to support the idea that visual memory is indeed sparse (the point I was originally trying to make): "another reason we may not detect the changes is simply that we perceive a coherently richly detailed world with everything perceived simultaneously, when in fact we do not"
Perhaps my definition of "sparse" includes temporal aspects, whereas yours does not?
I'd be really interested in further clarification on this or other points from you.
You are correct. I was not including in my definition the exact spatial or temporal aspects of a scene. Instead, we seem to do a good job of quickly grasping the gist of a scene, which I suspect may contribute to failures to report changes. Often, depending on how it is perceived, the gist does not change in change blindness paradigms. Researchers have found that whether a change is detected or not is probably a function of how long an object was looked at and whether the object's image fell on the fovea, which is thought to imply that it was attended. By our perception of a rich, detailed, coherent scene, I mean that when we look at the world, we 'see' color all the way across our visual field, but in the periphery we do not have the cones to detect colors. So we somehow put the color into the part of the scene in the periphery. We also think we have a crisp, clear image in front of us, but once you get away from the region of highest visual acuity, we do not see things clearly (the fovea is only 1 to 1.5 degrees of visual angle). Provided it is not too big, if one has a hole in the photoreceptors, the visual system fills in the hole with the background. So with vision, we often know more than we think we know, and it may be that researchers, like me, have not figured out how to measure change blindness. I think we have a much better system than I could write code for. But I would still like to know how we know to look at something if we have not already looked at it and attended to it.
I put an abstract of this post on my weblog, in Portuguese.
http://www.gluon.com.br/blog/2007/03/31/computador-cerebro/
If you don't like it, I can remove it.
Thanks
Ho.. Cool information,
but please translate it to Spanish...!!
THANKS!
I think the fact that we can even have conversations at this level of detail shows that we are in fact very close to being able to reproduce the brain, or something able to compute like the brain in the near future. A post like this would have been much, much different even 10-15 years ago, a very short time in the grand scheme of things.
Another thing to consider is that maybe the solution isn't so much in recreating the brain in a computer - I like to use flight as an analogy. We've created rockets and jets and shuttles, etc., which achieve flight, but not in the same way that birds do. A simple example, perhaps, but I think if you spoke to scientists 150 or 200 years ago, maybe they would have been focused on all the reasons why flight might be impossible, pointing to all the details of why we can't make a machine that is like a bird.
If I might be so presumptuous, I've recently got round to writing a post on embodiment which I think is quite relevant to the discussion of point #10 (Brains have Bodies). It looks at two fairly extreme views of embodiment, and whether they may lead to "strong AI". It's over at my blog (not including link, as that's just rude! :-) ).
Secondly, if I could respond to John Walters' comment on flight as an analogue. Minor point perhaps, but I think that there is a huge difference between trying to recreate the brain and trying to achieve flight. One of the main points of creating an 'artificial brain' is the lessons that may be learned for medical purposes through experimentation that is simply impossible in a biological system. The implementational detail is thus of paramount importance. In trying to achieve flight, we (the human race) wanted the end product - flight - and the implementational details were completely irrelevant. After all, who's going to learn anything about bird flight by examining a rocket? In terms of the creation of an 'intelligent' artificial entity, though, I agree with you completely - there is no fundamental necessity to tie the artificial system to a neural system (although the human brain does serve as the best example of an 'intelligent' system).
This is a great article; it really shows where we are in regards to computers. I had previously thought that we had made great strides in AI emulation, but I now see we still have a long way to go.
This is a great article!
I would like to add one more difference, though it really overlaps your #9 and maybe #10: Computers are constructed from components, while brains are produced by a developmental process. (Furthermore, that developmental process is not only interactive with the environment, but broadly dependent on various features thereof.)
With respect to #3 (modularity), the upshot is that while there are certainly "critical areas" for various brain functions, these don't represent "modules" in the engineering sense. A "module" is a discrete section with a specified function and interface. The various brain regions instead seem to be "ad hoc" processing sectors, with multipath datastreams all over the place.
Indeed, as our aircraft don't fly like birds, an AI computer built like the brain will have emergent mind-like properties that are quite different. In fact we already have a computer that is brain-like and self-organizing called the Internet, with PC's and servers acting somewhat like big fat neurons.
Wow. That's a new point of view: an Internet model of the brain, with computers as neurons. Very interesting!
Computers are devices made by man. Try to program a computer to write a good book and win a Nobel Prize in Literature. We are talking about a different order of concepts.
"Difference # 7: Synapses are far more complex than electrical logic gates"
Why did evolution choose such a convoluted route? I know evolution is a blind, random process and doesn't strictly 'choose' anything, but the point is, I would think it would take much longer to evolve a full-fledged neuron than to evolve a simple organic equivalent to the transistor and then replicate it in quantity to match the 'computational' power of a neuron.
Moreover, considering just the space and weight penalties, wouldn't something that seems as unnecessarily convoluted as a neuron soon be replaced in evolutionary competition by something as elegantly simple as a network of organic transistors? Yet that never happened in billions of years of evolutionary history, and if there's a good reason for that, it may not happen now, either.
This is a great post. Just bookmarked your site, good stuff.
Great article and a good read.
Although it can be argued (clearly): is the brain not an evolution of the computer? Albeit an incredibly advanced one; but when compared to a computer, the real difference is that we think for ourselves.
Now granted, I may be the only one here without a science degree, and this is more me thinking out loud. But if you take away our ability to think for ourselves - a sort of total-control sodium pentothal - then do we not just become computers that can be controlled, and in turn control a system of devices, etc.?
Along those lines, you have the software side of us, which I read you argued against, but you could look at it in the sense that we do control our brains like a machine. Subconscious body functions are like the hidden tasks, things we don't do ourselves (because we don't know how to, not being intimately knowledgeable of our own brains, kind of like Vista...), while our senses act as inputs.
While I do note that the mind spawns from the brain, would it not seem logical that a complex biological computer like us, which is presumably self-aware before the development of the mind, would actually go and develop a mind that could take full advantage of the body it is in? Rather than standing around somewhere (i.e., a mindless human) to die from starvation, etc., and not increasing its chances of survival by multiplying? (Although the latter sounds far more virus-like than computer-like.)
I'll cut it off here; I'm trying to cut down on my ramblings. Some of my 2c on the issue anyway.
Great overview... this reminded me of the Hyperion/Endymion novels by Dan Simmons, where AI is using human brains to calculate the UI (ultimate intelligence - god) :D
Super interesting post I found from Digg. I'd just like to point out that almost all of this discussion about "computers" is taking place at a very high level of abstraction. At cutting-edge technology processes there is nothing simple about a "simple" electrical logic gate. They are subject to many of the same continuous effects as neurons, and much time and effort is spent trying to restrict their behavior to produce "digital" behavior. In this sense, ALL computing is analog, even the digital stuff. It seems that most of the analysis here is overlooking the essential differences between abstract CS and hard EE.
This article has explained in simple terms, to normal users, that it is impossible to make a device which could perform even 10% of what our brain can do.
I guess it might take around 200 more years... to see some prototype.
Another difference is I can turn my computer off.
There are many obvious differences between computers and brains in the underlying "hardware" and "wetware". However, how the brain works at a macro level, a symbolic level, and ultimately gives rise to self-consciousness, for example, has little to do with the detailed working of neurons and synapses as studied in neuroscience. To state that understanding the differences between hardware and wetware may be crucial to the ultimate creation of artificial intelligence is likely an overstatement. The knowledge gap is our lack of understanding of the "symbolic" system model of the brain, and of the bottom-up model(s) that connect the macro-level model to the neuroscience of neurons and synapses at a micro level (e.g., perception).
For a more in-depth treatise on this subject, see the book "I Am a Strange Loop" by Douglas R. Hofstadter.
Amazing and quite an interesting read. I have thought for quite some time that the next jump in human evolution will be when the human brain is interconnected with a CPU and associated resources. We may become somewhat of a Borg, but our math skills will be incredible. It may not be so much evolution, but the end results would be truly revolutionary.
Difference #11: Many Republicans have computers.
-Crow
Difference #6: it is interesting to see that the brain has a hardware/software coexistence... this is also very similar to an FPGA (field-programmable gate array), which changes its structure depending on the software that it is running.
Regarding #6 -- #6 is the one that interests me, as it seems to be the place where "something else" comes into the picture -- that is "something else" besides the brain.
Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain.
DOES the mind emerge directly from the brain? It seems to me there is something more required -- that is, the brain is part of a living being, whatever that is. Clearly a brain, on its own, without "life", without being part of a whole body, doesn't generate a mind, right?
Where does awareness play into all this? That's the bit that interests me most. I realize that some fiction has postulated that at some point when a computer is big enough or complex enough, it might have awareness. It's an interesting idea, but seems doubtful.
Is awareness a pre-requisite for a mind? Or is awareness generated by a brain/mind? Or both?
I have the distinct idea that having awareness - or consciousness - or even, more specifically, self-consciousness - is a sign of a "being" or "self" being present. That is, I think it is not just a result of having a brain. I think there is "someone" there.
I'm curious if there is any research in this direction, that is, if there are even ways to do research on something I can barely manage to even state!
Great. This still accepts the mechanistic (i.e. human) view of a brain-inclusive mechanism per se. It is still a replica of AI as we know it, with all the differences listed.
The "brain" is a tool of mentality, along with the rest of it (unknown so far).
Atemporal, aspatial. Concentrating on neurons is a stingy selection.
AI is being designed on the pattern of the brain-function mentality discovered so far, so the reverse is untrue.
I did not have the guts to read all those lengthy comments. Sorry.
JM
Great post. I take issue with just one statement: "short-term memory seems to hold only "pointers" to long term memory." This is inconsistent with cases where brain damage made it impossible for victims to create new, long-term memories while short term memory seemed unaffected. I'm open to plausible explanations; it just seems problematic on its face.
Thanks for this post, and thanks to all the commenters!
Are we presuming all transistors are simple gates, or are we presuming multi-gate transistors?
It seems a neuron is really a simple analog computer. Maybe some research into the old tube pentodes and their marvelous integrating circuits might be in order.
It's like forgetting the average citizen in 1776 was literate in a way we're not now.
Your article made the size and quality of the gulf between the brain and computer mimicry even larger than I thought. I posted a comment on Jonah Lehrer's recent article "Blue Brain" in Seed (http://scienceblogs.com/cortex/2008/03/blue_brain.php) to point out some recurring problems with those equating a computational process to consciousness.
Quoting from the article: "There is nothing inherently mysterious about the mind or anything it makes," Markram says. "Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don't know why you wouldn't be able to generate a conscious mind."
Well, if that is true then would a machine processing somewhat less than massive amounts of information fall short of being conscious? Markram is going too far, logically, in asserting that a set of electrical impulses become consciousness simply because a similar set of impulses is observed to coincide with awareness in a human brain. At most, one could say it is a necessary condition that some such process be present for the associated experience of consciousness to occur in a living brain.
In any case, no process can be said to actually be consciousness, any more than a digital piano can be called a musician, even if it has a massive repertoire with complex programs that make it indistinguishable from a human player. Basically, no quantity of observed correlations between a brain, the information it processes, and what that brain's owner experiences is adequate to get us from the world of encoded information to the qualitative world of someone being informed. Consciousness could be an innate property of all living organisms, or of matter in general, or even be independent of matter, operating as a field that is concentrated and finds an interface in neural networks through some as yet unknown principle.
With a machine merely mimicking a brain's logical processes, however numerous or complex, it is truly preposterous to believe that a mind is at work inside the box. We don't even have a scientific model of what a thought is, let alone a measure of consciousness.
I recently read a very interesting book called 'Understanding Thinking' which has a chapter (chapter two) on the differences between the ubiquitous general-purpose computer and the emergent properties of our neural networks. The author's intention, I think, is to try to wake educationalists up to the fact that neuroscience / AI / neural network simulation has discovered new and previously unimaginable principles about the way we perceive the world, trap experiences, generalise, abstract, learn, etc., which have profound implications for education. It also suggests that we need to start developing a new vocabulary to better convey these ideas, as our old ideas about rationality, memory, awareness, consciousness, mind, matter, etc., are rather misleading. It covers some of the same issues as this essay, but from a slightly different perspective, and identifies some other differences as well. You can read that section for free (preview) at www.lulu.com/content/01224917 or you can 'search inside the book' at www.amazon.com (search for 'Understanding Thinking'). Bart
Interesting read. Now I'm more confident that robots won't take over the world in the future.
Thanks for the interesting comparisons
To be honest, I think you'd need a supercomputer just to model one brain cell, and it would still probably fail.
I could make an OR gate with some water tubes and levers... would I call that an 'intelligence'?
We just don't have the technology. In a few hundred years people will laugh at throwing a bunch of little OR and NAND gates together and thinking that could ever have intelligence.
You make some good points here. I still like to use the analogy between RAM and short-term memory with people (though of course they are not the same) because it shows how important it is for processing. People know that if you increase RAM you can do more tasks at once, and short-term memory is similar because it determines how much information you can hold in your head at once. Great article and it was very informative.
There are so many topics to be sorted out here, time is too short.
I would however like to defend Minsky & Papert's work on perceptrons. When that was written (about 30 years ago), perceptrons were being promoted as a realistic paradigm for neural modelling, and in fact were the only neural-modelling paradigm that had been fully worked out. M & P did a superb and devastating job of analysing them better than anyone before them and showing conclusively that they were inadequate; and this was, and remains, a model of intellectual clarity and original thought. M & P did not consider three-layer networks because nobody had considered them at that time, and because they were, after all, talking about perceptrons.
It may be true that their critique had the effect of making neural networks in general such an unpopular topic that the modern analysis of three-layer networks was delayed by a few years, but one can hardly blame M & P for that, or for not developing or presaging an entire new field of research, one that they were both early to acknowledge to be of immense importance. Science advances by bad, but influential, ideas being conclusively refuted, as well as by great ideas coming to the fore.
Each of the ten differences is absent in at least one computer design. Thus, the differences are only valid for common classes of computer, as you cannot build a computer to perform non-computable tasks. (Alan Turing proved that one.) Thus, if none of the ten differences introduce non-computability, the differences are purely implementation decisions. As any Turing-complete device can perform the tasks of any other Turing-complete device, those implementation decisions are largely illusory; they do not alter what can be done, merely how efficiently it can be done.
I really like the article, it was very informative.
Yes, computers are powerful and great. They are super smart and they are capable of things that humans are not normally capable of. But let us not forget that humans were still the ones who invented them. So if someone says that computers are much better than humans, just remember that an invention would not be that powerful if it weren't for the inventor.
Great article!
The computer can be compared to the human brain because of its logical capacities. I would like to think that the computer was modeled after the capability of human thinking. However, the downside of computers is that they are incapable of imagination. That is one of the obvious demarcations between man and computer.
Imipak (#74), you make a very interesting point about Turing equivalence but it relies on your strong premise that these implementational decisions are immaterial to whether the brain is Turing equivalent. In addition, it sidesteps the more important question about whether AI could be achieved without making these same implementational decisions. It reminds me a little of the proof that a sufficiently large 3-layer neural network can approximate any function to arbitrary precision, but that there are not necessarily learning algorithms to determine the necessary weights. Similarly, it may be possible to arrive at some kind of strong AI without making these same implementational decisions, but there's no guarantee that we'd ever be able to do it that way!
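To illustrate the approximation half of that claim, here is a small hedged sketch; the target function, network size, and the use of fixed random hidden weights are all arbitrary choices made for the example. A one-hidden-layer network with a least-squares readout fits a smooth function closely, yet nothing in the construction says how the hidden weights themselves would be learned from data.

```python
# Hedged illustration of the "a 3-layer network can approximate functions" claim.
# The target function and all sizes are arbitrary choices for the example.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
target = np.sin(x) + 0.3 * x          # an arbitrary smooth function to approximate

n_hidden = 50
W1 = rng.normal(size=(1, n_hidden))   # random, *fixed* hidden weights
b1 = rng.normal(size=n_hidden)
hidden = np.tanh(x @ W1 + b1)         # hidden-layer activations

# Only the output weights are fitted here, by ordinary least squares.
w_out, *_ = np.linalg.lstsq(hidden, target, rcond=None)
approx = hidden @ w_out

print("max abs error:", float(np.max(np.abs(approx - target))))
```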
Just off the cuff here, but building on the previous statements regarding Moore's law (which Moore himself said can't continue forever, and which doesn't actually translate into doubled processing power), and assuming it holds true long enough, we could be looking at 225 million billion calculations per second in just over 100 years, if we base it on the 128-gigaflop Fujitsu "Venus" processor.
Hello and thanks for this...
Not a computer scientist, indeed not a scientist at all, but following links from Neuroanthropology because the brain-computer analogy gives me the serious crazy. Thanks for addressing it... but... I can't understand most of it. [e.g., I don't know what 'isomorphic' means, except that it has to do with forecasting the weather. And I don't know how serial differs from parallel, nor what a system clock does in a computer.] I know this is meant to be a science blog and maybe aimed over my head, but I would really love to have some ammunition when I get hit over the head with the 'hard-wiring' stuff. Does anyone know of a link to a less technical discussion of this?
thanks.
Hi Stephen - http://www.thinkquest.org has an under-19 site in its library called The Computer versus The Brain, created in 2000, that is far less technical and covers many of the same points. I use it as a reference for my high school computer literacy class. This posting challenges my more knowledgeable students, as the ThinkQuest site lacks the breadth and depth of this article and of the posts that follow.
Hope that helps.
I think after you study yourself enough, you realize we pretty well are computers; I think the ultimate computer would eventually end up being us. Being able to develop a machine that can have consciousness is a dream for many scientists, but they say it will be possible one day. One way to gather more information and increase your intelligence is to try some brain training; a good blog with a list of what's new in brain training: http://www.allreviews.com/brain-training/
Thanks,
Kris
I love how you ended by stating that brains have "raw computational power", thus debunking all of the above.
OK - some good points, and I will comment on each in turn - but first the most important difference.
The essence of a stored program computer is that in addition to the "processor" and "memory" you need a program that needs to be created - and I would suggest that the intelligence of the creator (i.e. the god-like programmer or team of programmers) always greatly exceeds the effective intelligence of the resulting programs. If you consider an evolutionary approach devoid of "Intelligent Design", the stored program model must be considered totally inappropriate before you consider any other factors.
The problem is that after the war computers took off at an enormous pace - with potential manufacturers falling over one another to try and capture the "clever programmable calculator market". No one - but no one - had time to stop and do any blue sky research to see if there were other ways of organising electronics to build a human-friendly information processor. After 20-30 years of this mad rush to make money and build careers everyone knew that:
     (1) You had to be very clever to program a computer;
     (2) There was a vast establishment of people whose careers and/or income depended on stored program computers; and
     (3) Computers were so wonderful the underlying theory MUST BE RIGHT.
People started to look at the Human-Computer Interface - but this was not fundamental research - it was technology to find better ways of hiding the incomprehensible "black box" in the heart of a stored program computer.
Over 40 years ago, as a "naive novice" in the computer industry, I was looking at a major commercial system (say 250,000 customers, 5,000 product lines, 25,000 transactions a day - and a dynamically changing market). Not knowing any better, I decided that the communication chain involving programmers and systems analysts was a liability - and that one could provide a sales-staff-friendly system which could dynamically change with changing requirements. Of course my boss threw the whole idea into the waste paper basket and I decided to change jobs. I ended up in the Future Large Computer marketing research department of a small but imaginative computer company. I quickly realised (1) that there were many other tasks with similar requirements and (2) that talks with hardware designers showed me it was easy to redesign the central processor if you had a good reason to do so. I concluded it was possible to build a "white box" information processor which could be used by normal humans to help them work on dynamic open-ended tasks. When I mentioned this to my boss, a "TOP SECRET" label was stuck on the idea, patents were taken out, and I was put in charge of a team which, two years later, showed the basic idea was sound.
So why haven't you heard of it? Well, at this point the company was taken over, and as my idea was incompatible with the "stored program computer" approach I was declared redundant. I found what turned out to be a most unfriendly hole in which to try and continue the research, but after 20 years, a family suicide and a bullying head of department, I gave up "fighting the computer establishment" from sheer exhaustion. Selling the idea became harder and harder as even school children were being brainwashed to believe that computers are the ultimate technology and you need to be clever to program them. The idea that it might be possible to build a simple human-friendly processor was deemed ridiculous - as demonstrated by the millions of people whose careers were dependent on the fact that stored program computers worked.
So what I will do is answer your questions in terms of the kind of processor I was researching.
Difference # 1: Brains are analogue; computers are digital
My initial proposals - and all subsequent experiments - involved sets defined by text strings, but in theory all the processor needed was a mechanism to say whether two set elements were (at least approximately) identical. It was not concerned with how these sets were represented. In principle, the system I proposed would be quite happy with the idea of a "cat" being represented as the written word "cat", the sound of someone saying "cat", the sound of purring, visual images of a cat or parts of a cat, the feel of stroking cat fur, etc., or any combination. What is important in understanding the processes involved is not the medium in which the information is stored but the ability to make a comparison in that medium.
Difference # 2: The brain uses content-addressable memory
My proposals were based entirely on content-addressable memory - because that is how people think and what they can understand. In fact, one of the difficulties of my approach was that if you had a task which was best analysed mathematically in terms of a regular grid, such as a chess board, it was at a disadvantage compared with a stored program computer - which, after all, was designed to handle mathematically regular problems. [Comment - how relevant are the precisely predefined and unchanging rules of chess, and the fixed and unchanging dimensions of a chess board, to the kinds of mental activities needed to be a hunter-gatherer?]
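As a rough software illustration of what content addressing means (a toy sketch only, not the mechanism being described here), retrieval can be driven by overlap with whatever cues are currently active, rather than by an address:

```python
# Toy content-addressable store: items are retrieved by feature overlap with a set
# of cues, not by address. Purely illustrative; the items and features are invented.
memory = [
    {"cat", "fur", "purr", "stroke"},
    {"chess", "board", "grid", "rules"},
    {"dog", "fur", "bark", "fetch"},
]

def recall(cues):
    """Return the stored item sharing the most features with the active cues."""
    return max(memory, key=lambda item: len(item & cues))

print(recall({"purr", "fur"}))     # -> the cat item
print(recall({"grid", "rules"}))   # -> the chess item
```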
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
My proposals are based on the idea of having one comparatively simple processor which operates recursively - i.e., it is continually re-using itself. This is an economic way of building the processor if you are working in electronics (or at least the electronics of the 1970s, which was the time the first commercial hardware might have been built if it had got that far). Another way of looking at recursion is that you have one processor which passes subsidiary tasks on to itself. If you had millions of identical processors it could just as easily work by passing subsidiary tasks on to other identical processors to work in parallel. While I never looked at the problem of parallel working seriously, my approach should be equally valid with either serial or parallel processing. [Comment - of course neural nets (as developed by the A.I. community) are parallel - but they are not inherently user-friendly - as my approach tried to be.]
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
Of course the stored program computer has an electronic clock to ensure that all components work as fast as possible in synchronisation. However, I feel that is really saying no more than that a circuit board and a brain use different mechanisms in order to do what they do. In one sense this relates to the difference between serial and parallel processing (#3), in that with serial processing you must have a very robust sequence of operations which can be executed rapidly. With parallel processing you can have many multiple processes going on simultaneously, and it doesn't matter if they are not perfectly synchronised as long as there is some mechanism to bring the combined results together.
Difference # 5 - Short-term memory is not like RAM
An important feature of my system was a "working area" which I called "The Facts" and which I considered to be equivalent to human short-term memory. While there were some practical implementation features, the difference between the Facts and any other information in the knowledge base was that the Facts were the active focus of attention. The Facts were, of course, context addressed.
However, there was a very interesting feature of the Facts - the number of active items in the Facts at any one time was very small, and one of the first observations when I tried out a variety of applications was that the number of active items in the Facts at any one time was often around 6 or 7 [Miller's magic number 7?], and I don't think I ever found an application that genuinely needed as many as a dozen. (In electronics terms there was no reason why I shouldn't have had a system that could work with a thousand or more such items.) The reason for the number being small was that in order to make the approach human-friendly I needed to find a way that would not overwhelm the human with too many active facts. It seemed important that, to be understandable, the number of Facts items should match the number of items in a human short-term memory if the human was doing the same task in their head. In other words, the processing architecture I was proposing could work just as easily with very much more complex problems in terms of the number of Facts being simultaneously considered - but humans would have difficulty following what was going on because of the limited capacity of human short-term memory.
(It should be noted that if the same tasks were implemented using conventional programming techniques they would need a very much larger number of named variables. This is due to the difference between the linear addressing of the stored program computer and an associative system. In an associative system the same things are given the same name wherever they occur, so that the processor can see they are the same. In a conventional programming language the addressing needs to be able to distinguish between related entities by giving each occurrence a different name, because they are held at different addresses.)
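Purely as an illustrative reading of the general idea (not the actual design being described above), a capacity-limited, associatively accessed "Facts" area might be sketched as follows; the 7-item ceiling and all names are assumptions made for the example:

```python
# Reader's toy sketch of a small, name-addressed working area ("Facts").
# The 7-item limit and all identifiers are assumptions for illustration only.
from collections import OrderedDict

class Facts:
    def __init__(self, capacity=7):
        self.capacity = capacity
        self.items = OrderedDict()          # name -> value, in activation order

    def activate(self, name, value):
        """Bring a fact into the working area, displacing the stalest one if full."""
        if name in self.items:
            self.items.move_to_end(name)
        self.items[name] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # oldest fact drops out of focus

    def lookup(self, name):
        """Associative-style access: the same name refers to the same fact wherever used."""
        return self.items.get(name)

facts = Facts()
for i in range(10):
    facts.activate(f"item-{i}", i)
print(list(facts.items))   # only the 7 most recently active facts remain
```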
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
You mis-identify the problem. On one hand you have a cell with information stored within it. On the other hand you have a processor with two different kinds of information - program and data - stored in the memory. The real difference is that the brain does not distinguish between program and data, while the stored program computer does.
My system does not distinguish between program and data - it simply stores information which can be used in subtly different ways. What I called the "Decision Making Unit" simply compared items from the knowledge base (i.e., long-term memory) with the currently activated Facts (the short-term memory), and as a result the Facts might be changed or become part of the knowledge base. That was all the processor could do, if you exclude (1) new information being added to the Facts from the "outside world" or (2) some combinations of Facts triggering an action in the "outside world".
This is a critical distinction. The information in a unit of memory in a stored program computer is meaningless except implicitly, in that it is defined by a series of complex predefined "program" instructions somewhere else in the computer. In the brain, and in my system, the meaning is embedded in the memory unit and so can be used meaningfully without reference to a separate task-specific program.
Difference # 7: Synapses are far more complex than electrical logic gates
We are back to the biological versus electronic hardware situation. The driver mechanism in my system is remarkably simple (in stored program computer terms) and produces very different effects with superficially small changes in context. I suspect that synapses behave differently with minor changes in context - but I would not want to explore the analogy further at this stage.
Difference #8: Unlike computers, processing and memory are performed by the same components in the brain
I doubt that the brain uses exactly the same proteins and other chemicals in exactly the same way for both processing and memory. If you concede this point - and that the brain's cells have different mechanisms operating in the same "box" - the analogy is with a computer on a single chip, which also has everything in the same box.
Difference # 9: The brain is a self-organizing system
At the processor level the stored program computer is not designed to be self-organising - so one would not expect it to be!
My system was designed to model the way the human thought about his problems, and the processing of the facts was completely automatic, in that the decision making unit mechanism organised information in a way which was independent of the task. This is a very important distinction between what I was doing and a stored program computer. The computer requires a precise pre-definition of the task it has to perform before it can do anything. My approach involves the decision making unit scanning and comparing structured sets in a way that solutions happen to fall out of the process [would you consider this to be self-organising?]. The human brain similarly can handle information from a vast number of different contexts without having to be pre-programmed for each particular context that might occur.
The final version of my system had several features which could be called directed self-organisation of the knowledge base - allowing information to become more or less easy to find depending on its usage history. Because I was looking at a system to help humans, I felt it was important that the triggering of such a process should be under human control. I must admit I had never thought of implementing it in a way that reorganised the user's information without the user being in control - if the system worked this way, I suppose you might call that "free will".
Difference # 10: Brains have bodies
This relates to the input and output of stimuli to and from the outside world - and I would not expect there to be close similarities between a biological and an electronic system.
I suspect you may be saying: if your idea was so good, why haven't I heard of it? Rather than go into details, let me suggest, somewhat tongue in cheek, a possible reason.
Reason for Rejection - Incompatible with the "Religion of the Magnificent Computer"
The philosophy of the stored program computer is rather like a religion. A few people in the 1940s had some ideas which took off like a rocket, and everyone wanted to get onto the bandwagon. To successfully program the first machines one needed a somewhat warped, chess-playing type of mind - and the most successful programmers led the stampede to build even more powerful computers in their own image, and this continued for several generations. Within years it was generally accepted that the computer processor was a black box which worked in a way alien to the way humans thought, and needed a "priesthood" of super-intelligent systems analysts and programmers to control it. In addition the "priesthood" started building layers and layers of software to try and disguise the workings of the inner-sanctum black box and make it appear human-friendly. All this extra software used up memory and computer cycles - but not to worry - the hardware engineers were building ever faster processors and bigger memories, so no one ever seriously needed to worry about efficiency. It was boom time all round. While a modern personal computer will do some pretty impressive tasks, there is a vast onion-like structure of programs under the skin, written as the result of millions of man-hours of intellectual effort. However, there is very little mutual understanding between the average human user and what the system does for him - and in many cases the human is only successful in their task because they have programmed their brain to become compatible with the computer package they are using. [Comment: Computers are often only useful because people have changed to do what the computer can understand - rather than the other way round.]
While this was happening, the more academically minded came to the conclusion that, as you needed to be able to think like a chess player, the "artificial intelligence" way forward was to explore highly artificial scenarios (in terms of everyday thought) such as chess playing. Even the psychologists were caught up in the excitement and switched from the earlier telephone exchange models of the brain to stored program computer models. The mathematicians noted that the underlying theory could be linked to the concept of a Universal Machine and interpreted this as "The Universal Machine." They stopped looking for alternatives because the success of the computer industry in earning money, generating jobs, providing exciting new modelling tools, etc., meant that it must be the only possible approach. Educationalists should not be forgotten, and it is now virtually impossible to find a teenager who has not been taught in school that computers are wonderful, and that you have to be very clever to become a "high priest" who writes programs and cashes in on the exciting market.
Because of the rush to capitalise on the success of the stored program computer approach, and the almost universal acceptance of the results, no one has ever stopped to ask if the "black box" approach could be replaced by a "white box" approach - with a fundamentally different type of processor which could interact directly with human beings. While I came up with such a design, this was purely accidental and I had no idea, at first, that I was doing anything "questionable." Before entering the computer industry I had been involved with several different types of very complex human-based information processing tasks. I was therefore very familiar with the frequent problems of "known unknowns" and the less common - but more difficult - problems when "unknown unknowns" revealed themselves. To me it was laughable that anyone faced with such complex information processing problems would attempt to construct a "do it all" pre-defined application program. My "naïve novice" solution to the commercial task mentioned earlier was no more than micro-managing the task as I would have done if I had been doing it manually. My move to a "futures" environment then gave me an overview of how computers were constructed - and it became apparent that by simply reshuffling the architecture of a computer processor you could get a very different kind of information processing engine, as different from the stored program approach as a jet engine is from a piston engine.
The problem was that what I was doing was philosophically incompatible with the "religion" of the Stored Program Computer, and through the 1970s and 1980s I found it impossible to get a university grant, or to get many peer-reviewed papers past the deeply entrenched computer science "priesthood." I'm the kind of person who needs a good mentor to carry on - and every rejection sent me deeper into depression - until I finally threw in the towel in order to remain sane. For all I know there may have been other researchers starting along similar lines who were similarly jumped on for daring to suggest that the foundations that underlie the stored program computer approach may have some cracks in them.
* * * *
For more information see "CODIL: The Architecture of an Information Language", The Computer Journal, Vol. 33, No. 2, 1990, pp. 155-163. [Due to the delays in the peer review system this was published over two years after I had decided to throw in the towel!]
The last difference: A computer can't "feel". Imagine the amount of work our brain goes through for us to "feel" something. And of course, intuition too.
#1 and #7 are not true.
I am pretty sure that McCulloch, Pitts, Minsky and Papert mathematically demonstrated that the computer is an excellent analogy for the brain. Just go read their papers and you'll see.
Computer Science is a very powerful tool for neuroscience and psychology, because CS provides a mathematically rigorous framework for describing and solving problems in neuroscience and psychology. Without these formal methods, psychology wouldn't even be a science.
Btw, I'm not a computer scientist, I'm a molecular biologist.
The usefulness of computers can't be put into words. Nowadays they are used in every sector; there is hardly a sector that does not use them. They cannot quite be compared with humans, but in some places they do more work than a human. That is all for the time being.