The human brain, it turns out, is even more efficient than previous estimates:

Fifty-seven years ago, Nobel laureates Alan Hodgkin and Andrew Huxley came up with a model to calculate the power behind electrochemical currents in neurons--a great step forward in understanding how the brain worked and how it divvied up resources. The only problem was that their subject was not a person, or even a rodent, but a giant squid. Today, researchers announced that they have found a more accurate model for mammal brains, which shows some of their transactions to be three times more efficient than the squid-based equations predicted.
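For readers curious what the Hodgkin-Huxley model actually looks like, here is a minimal, illustrative simulation using the standard 1952 squid-axon parameters and simple forward-Euler integration. This is my own sketch, not code from the new study; the function name and the injected-current value are arbitrary choices for demonstration.

```python
import math

def simulate_hh(i_ext=10.0, dt=0.01, t_max=50.0):
    """Minimal Hodgkin-Huxley simulation (1952 squid giant axon parameters).

    i_ext: injected current (uA/cm^2). Returns the peak membrane voltage (mV).
    """
    # Maximal conductances (mS/cm^2) and reversal potentials (mV)
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    c_m = 1.0  # membrane capacitance, uF/cm^2

    # Approximate resting-state initial conditions
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    v_peak = v

    for _ in range(int(t_max / dt)):
        # Voltage-dependent rate constants (1/ms)
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)

        # Ionic currents (uA/cm^2)
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)

        # Forward-Euler update of voltage and gating variables
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        v_peak = max(v_peak, v)

    return v_peak

peak = simulate_hh()
print(peak)  # with 10 uA/cm^2 injected, the model spikes: peak rises well above 0 mV
```

The point of the new research is that the ionic bookkeeping in equations like these, fitted to squid, overstates the energetic cost of a mammalian action potential.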

This computational efficiency is the single most astonishing fact of the mammalian brain. Here you are, reading these words, daydreaming about lunch, processing the richness of reality, thinking about tomorrow, and your brain requires less energy than a low-wattage lightbulb. Evolution is an impressive engineer.

One way to think about this efficiency is to compare the performance of Deep Blue, the IBM chess supercomputer, to its human opponents. While Deep Blue is capable of analyzing over 200 million possible chess moves per second - it wins through sheer computational force - chess grandmasters like Garry Kasparov can only consciously evaluate about five moves per second. From the perspective of processing speed, humans are at a severe disadvantage. We're an Atari surrounded by Xbox 360s.

But here's the surprising fact: Deep Blue can still only win about half the time. Although their biological computers seem woefully outclassed, grandmasters can still use wit and guile and learned intelligence to beat the silicon mainframe. Even more impressive is the relative energy efficiency of these two machines. Just look at Deep Blue: when the machine is operating at full speed it's a fire hazard, and requires specialized heat-dissipating equipment to keep it cool. Meanwhile, people like Kasparov barely break a sweat.


I love this. The human brain is so beautifully evolved. Not only is it efficient, but it keeps itself thought-ready: by doodling. A recent study suggests that doodling is the brain's way of "running in place" when bored, so as to be in full functioning mode just in case. Just in case a yummy rabbit happens by, or a scary tiger, or a suitable mate, or a dangerous chess move!

One small mistake (in the original quoted article)--H&H did not look at the axon of a giant squid, but the giant axon of the Loligo squid.

The squid giant axon is, well, giant because that is one way to be fast. Another way to be fast is through myelination, which squid giant axons are not, but which at least some of our brain ("white matter") is.

Is that the basis for the discrepancy?

Hm. Squid are aquatic, and thus can rely on water cooling rather than air cooling. I wonder how much necessity (in preventing overheating) was the mother of evolution here.

Deep Blue is circa 1997 - semiconductor development in the 12 years since has continued to follow Moore's law, improving processing power, power usage, and so on by a factor of two every two years.

Three years ago a top-end desktop would be using hundreds of watts; the newer processors strive to use tens of watts or less.

Whether this trajectory will place current computer technology at an intersect with the brain in the near future is another discussion.
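The doubling arithmetic in the comment above is easy to make explicit. This is a back-of-the-envelope sketch; the 12-year span (1997 to roughly when this thread was written) comes from the comment, and everything else is just the stated factor-of-two-every-two-years rule.

```python
def moores_law_factor(years, doubling_period=2.0):
    """Cumulative improvement factor if capability doubles every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Deep Blue (1997) to this comment thread: about 12 years, i.e. 6 doublings
factor = moores_law_factor(12)
print(factor)  # 64.0 -- roughly a 64x improvement under that rule
```

Even a 64x improvement, of course, says nothing by itself about whether the trajectory intersects with the brain's efficiency.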

You posted a perfect comment to follow, Ted. Even following the international technology roadmap, power will be a problem in 2022. Converting to carbon nanotubes or other nanotechnologies will cool us some, but not enough to compete with the biological brain. The massive parallelism of the brain means we can't save power by turning off parts of an electronic brain the way we do with conventional computers. I think a different sort of electronic circuitry would be required. See the NSF Emerging Models of Technology program for some ideas.

I may be wrong, but I think the difference is that while you can program the machine to make deceptive moves, and react to such, you can't program it to "think" deceptively and come up with its own versions of guile. Machines are "honest" practitioners of their programmer's duplicity.

The brain is amazing in many ways. However, I do think that computers will eventually be able to beat human brains in every way imaginable. Neuromorphic computing seeks to copy how the brain functions in order to carry out specific tasks, and researchers are already having some success modeling brain regions. I definitely think it will be possible to improve upon nature and make more energy-efficient and computationally adept artificial brains in the future.

"Evolution is an impressive engineer."

Am I wrong to expect that this statement should produce substantial cognitive dissonance?

Yes, but the human brain's very complexity is its ultimate downfall, isn't it? Maintaining a complex system of hundreds of billions of cells, forming quadrillions of connections, each cell individually composed of billions of precisely interacting molecules, is a losing proposition over the long haul, thanks to the second law of thermodynamics. The game can be played for years, of course, and mechanisms like sleep keep entropy at bay, but no system of such exponentially small relative entropy can maintain itself indefinitely.

The obvious prediction is that any computer system complicated enough to simulate the human brain exactly must be at least as complicated, and so will be in the very same predicament, necessarily having a finite life span. Deep Blue doesn't come close to the human brain's complexity, but it has the advantage of being immortal, given proper cleaning and maintenance. The very complexity that gives the human brain an edge over Deep Blue is also responsible for its mortality. That's kind of depressing, isn't it?

Ironically, part of the inefficiency of silicon computers like Deep Blue comes from how hard it is to write software for massively parallel machines (which is what the brain somewhat resembles).
See the Wikipedia article on 'Parallel computing'.
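To make that comment's point a bit more concrete, here is a minimal sketch (my own illustration, not from the comment) of the easy case of parallelism: an embarrassingly parallel map over independent tasks. The hard cases - and the reason parallel software is hard - are workloads with shared state or ordering dependencies between tasks, which this toy example deliberately avoids.

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # A stand-in for an independent unit of work with no shared state
    return x * x

inputs = list(range(10))

# Serial version
serial = [work(x) for x in inputs]

# Parallel version: trivially correct only because the tasks are independent.
# Real programs with shared state or ordering constraints are far harder to
# parallelize, which is the comment's point about massively parallel machines.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, inputs))

print(parallel == serial)  # True: same results regardless of execution order
```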

...the romanticizing of our brains is just another pathetic fallacy...it's really a kludge, designed over 6-4-2 million years for tiny populations of hominids, and it will probably wipe us out...

....once again MIT World has some of the best stuff...here's a good one http://mitworld.mit.edu/video/150 ...note her comments at the end about the silliness of ever designing a mechanical analog...

...keep that nonsense for TED Talks - the Popular Science magazine of the digital generation..."gee, rocket packs for everyone, next year!!"....

....apparently mammal brains were designed up from the neuronal organization for smell, which is basically a RAM system; the other senses were laid on top with the same design...then topped off with a cortex...and the cherry on top, the brow-prominent, but largely decorative it seems, frontal lobes...

...cute!...don't you think?...we need someplace to wear our headbands!!