Reverse-Engineering

Last week, a team of computer scientists led by Dharmendra S. Modha announced what sounded like an impressive breakthrough for neuroscience-inspired computing:

Using the Dawn Blue Gene/P supercomputer at Lawrence Livermore National Lab, with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips, each with 1 million neurons and 10 billion synapses, and exceeds the scale of the cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real time.

The press coverage was predictably hyperbolic. Here's Popular Mechanics:

Scientists at IBM's Almaden research center have built the biggest artificial brain ever--a cell-by-cell simulation of the human visual cortex: 1.6 billion virtual neurons connected by 9 trillion synapses. This computer simulation, as large as a cat's brain, blows away the previous record--a simulated rat's brain with 55 million neurons--built by the same team two years ago.

"This is a Hubble Telescope of the mind, a linear accelerator of the brain," says Dharmendra Modha, the Almaden computer scientist who will announce the feat at the Supercomputing 2009 conference in Portland, Ore.

I have no idea what a "linear accelerator" of the brain might be, but I think there are a few reasons to be slightly skeptical of such lofty claims. The first problem is that it's incredibly difficult to reverse-engineer the mind, or to construct a "cell-by-cell" simulation of the cortex, for an obvious reason: it's hard to simulate something we don't yet understand. A few years ago, I profiled the Blue Brain project - another IBM-funded attempt to fuse neuroscience and supercomputing - and described their painstaking empirical approach, which mixed cutting-edge computer science with messy wet-lab experiments:

At first glance, the room looks like a generic neuroscience lab. The benches are cluttered with the usual salt solutions and biotech catalogs. There's the familiar odor of agar plates and astringent chemicals. But then I notice, tucked in the corner of the room, a small robot. The machine is about the size of a microwave, and consists of a beige plastic tray filled with a variety of test tubes and a delicate metal claw holding a pipette. The claw is constantly moving back and forth across the tray, taking tiny sips from its buffet of different liquids. I ask Schürmann what the robot is doing. "Right now," he says, "it's recording from a cell. It does this 24 hours a day, seven days a week. It doesn't sleep and it never gets frustrated. It's the perfect postdoc."

The science behind the robotic experiments is straightforward. The Blue Brain team genetically engineers Chinese hamster ovary cells to express a single type of ion channel--the brain contains more than 30 different types of channels--then they subject the cells to a variety of physiological conditions. That's when the robot goes to work. It manages to "patch" a neuron about 50 percent of the time, which means that it can generate hundreds of data points a day, or about 10 times more than an efficient lab technician. Markram refers to the robot as "science on an industrial scale," and is convinced that it's the future of lab work. "So much of what we do in science isn't actually science," he says, "I say let robots do the mindless work so that we can spend more time thinking about our questions."

According to Markram, the patch clamp robot helped the Blue Brain team redo 30 years of research in six months. By analyzing the genetic expression of real rat neurons, the scientists could then start to integrate these details into the model. They were able to construct a precise map of ion channels, figuring out which cell types had which kind of ion channel and in what density. This new knowledge was then plugged into Blue Brain, allowing the supercomputer to accurately simulate any neuron anywhere in the neocortical column. "The simulation is getting to the point," Schürmann says, "where it gives us better results than an actual experiment. We get the same data, but with less noise and human error." The model, in other words, has exceeded its own inputs.

This ion channel data allows the Blue Brain team to construct a simulation of cortical circuits from the bottom up. After all, the first step in reverse-engineering a machine is figuring out how the machine is actually engineered. Here's Henry Markram, the director of the Blue Brain Project:

"There are lots of models out there, but this is the only one that is totally biologically accurate," Markram says. "We began with the most basic facts about the brain and just worked from there."

It's this meticulous reductionist research that distinguishes the Blue Brain project from so many other neural simulators, including the recent paper by Modha et al. For too long, we've pretended that the human brain is a generic information processor, stuffed full of binary transistors. (The only differences were that our microchips used fatty membranes and carbon, not silicon, and that we ran some complex if buggy software.) But that's almost certainly not the case. Instead, the talents of our mind are inseparable from the evolved quirks of its machinery, which suggests that simply crossing some arbitrary computational threshold - such as simulating 1.6 billion ersatz "neurons" - doesn't mean very much if those simulations aren't rooted in biological reality. A neuron isn't just another electrical switch; our cells are much more interesting than that. In a recent email, Markram was rather critical of Modha's new paper, since the simulation depended on "deeply unrealistic models of neuronal action":

1. These are point neurons (missing 99.999% of the brain; no branches; no detailed ion channels; the simplest possible equation you can imagine to simulate a neuron, totally trivial synapses).

2. All these kinds of simulations are trivial and have been around for decades - simply called artificial neural network (ANN) simulations. We even stooped to doing these kinds of simulations as benchmark tests four years ago, with tens of millions of such points, before we bought the Blue Gene/L. If we (or anyone else) wanted to, we could easily do this for a billion "points", but we would certainly not call it a cat-scale simulation. It is really no big deal to simulate a billion points interacting if you have a big enough computer. The only step here is that they have at their disposal a big computer.

3. It is not even an innovation in simulation technology. You don't need any special "C2 simulator"; this is just a hoax and a PR stunt. Most neural network simulators for parallel machines can do this today. Nest, pNeuron, SPIKE, CSIM, etc., etc. - all of them can do this! We could do the same simulation immediately, this very second, by just loading up some network of points on such a machine, but it would just be a complete waste of time.
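To make concrete what Markram means by a "point neuron," here is a minimal sketch in Python: every cell is reduced to a single leaky integrate-and-fire equation, and the synapses are nothing but entries in a weight matrix. The parameters are illustrative placeholders - none of this comes from Modha's C2 simulator - but it shows why simulating more "points" is mostly a matter of allocating bigger arrays:

```python
# A minimal leaky integrate-and-fire network of "point neurons."
# Every parameter is an arbitrary placeholder, chosen only to
# illustrate the structure of such simulations.
import numpy as np

N = 1000            # number of point neurons; scaling up is just a bigger array
dt = 1.0            # timestep (ms)
tau = 20.0          # membrane time constant (ms)
v_thresh = 1.0      # firing threshold (arbitrary units)
v_reset = 0.0       # reset potential after a spike

rng = np.random.default_rng(0)
# "Totally trivial synapses": a random weight matrix with ~1% connectivity.
weights = 0.1 * rng.random((N, N)) * (rng.random((N, N)) < 0.01)

v = rng.random(N)   # membrane potentials, initialized at random
for step in range(100):
    spikes = v >= v_thresh      # which neurons fire this timestep
    v[spikes] = v_reset         # reset the ones that fired
    # Leak toward rest, add spike-driven synaptic input and a little noise.
    v += dt / tau * (-v) + weights @ spikes.astype(float) + 0.05 * rng.random(N)

print(f"{int(spikes.sum())} of {N} neurons spiking at the final step")
```

Nothing in that loop knows anything about dendritic branches, ion channel kinetics, or cell types - which is precisely Markram's complaint.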

In the coming years, there will be many grand announcements about supercomputers that attempt to imitate the machinery inside the skull. One way to distinguish between such claims is to look at their cellular realism: Are these microchips really behaving like neurons? Or has the simulation taken a shortcut, and turned our neurons into dumb little microchips? We sometimes forget that the "mind is like a computer" metaphor is only a metaphor. The mind is really just a piece of meat.


Currently I'm trying to integrate neural networks into a modification of Linux. The basis of this distro is how different applications can interact with each other, and the neural network will hopefully learn the best interactions and perfect them.

This is incredibly limited compared to a fully functional cat brain, and although I am one guy with a few computers, I really don't see how even a team could create a whole brain with today's technology. There's just too much we don't get.

A virtual neuron is a set of inputs, which add together to create a total input. A certain level of input will activate an output to other neurons. This is only a basic model; the complexity lies in the interconnections. With so many mysteries about the brain still unsolved, even the complexities of the interconnections seem to me to be too far beyond our grasp. Their simulation is just a dumb show. A mass of nothing. Some sort of progress, at least...
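That summing-and-thresholding description really is the whole model; a minimal sketch, with weights and a threshold invented purely for illustration:

```python
# The "virtual neuron" described above: weighted inputs are summed and
# compared against a threshold. All the numbers are arbitrary.
def virtual_neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three inputs and three connection strengths; the "cell" itself is one
# line of arithmetic.
print(virtual_neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # 0.6 + 0.5 = 1.1 >= 1.0, so it fires: 1
```

Everything interesting lives in the weights, which is exactly the point about the interconnections.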

The mind is really just a piece of meat.

This last sentence brings to mind one of my favorite short stories of late, They're Made Out of Meat, by Terry Bisson. I was reintroduced to Bisson's story this year in On Being Certain, by Robert A. Burton, MD - incidentally a book which pairs beautifully with How We Decide.

The brain is a piece of meat that is, metaphorically speaking, operationally aware of itself - a self-actuating, choice-making mechanism that can use such awareness to apply and direct energy by its own choice. It can determine its own options or add to its available set through its own feedback assessment system.
Our human brain, in short, has awareness of its own operational purposes. We simply don't have that yet in our computers.
All a computer would arguably be "aware" of are the programs in its memory, and of the results of whatever it is required by those programs to add to that memory. There is no awareness of the meaning of the symbols outside of that memory, no ability to seek out and retrieve sensations that it could use in its calculative processes before converting the input to symbols that mean something to the computer itself, and that would allow it to make choices through an assessment process outside of the restrictions of its programming.
Not that one could never be constructed that could - but only because one can never say never with any degree of certainty.

I totally agree with Lehrer. Putting together billions of neurons and waiting for a miracle is just a waste of time. The primary focus should be on understanding the role of the neuron in the cognitive process, which can be done on just a few neurons as well. We managed to simulate some parts of an ant's behavior (make a path, stay on a path, find sugar, go back) on a dozen neurons. The theory, discussion, and even Java source files are available on our website. Take a look, it could be inspirational.

I agree with you that this Modha result is not as interesting as all the hype would suggest, but your final litmus test

" One way to distinguish between such claims is to look at their cellular realism: Are these microchips really behaving like neurons? Or has the simulation taken a shortcut, and turned our neurons into dumb little microchips? "

has got to be, in some sense, not the right answer. I mean, any simulation has got to take "a shortcut" on one scale or another. Although it seems to me that the shortcuts and simplifications taken by Modha are clearly over some line of realism, where exactly that line sits is far from clear. Maybe Markram has it right, but maybe he needs more detail (like the exact spatial distribution of ion channels along the dendrites, and complex multistate biochemical models of their kinetics), and maybe less (maybe there is some sort of simple two- or three-compartment model, specifically tuned for each cell type, that is sufficient). The really interesting question, at the end of the day, is how you would know.
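For concreteness, here is a minimal sketch of what a "two or three compartment" model might look like: a soma and a lumped dendrite, each with its own voltage, coupled by a conductance. The parameters are invented placeholders, not fitted to any real cell type:

```python
# A toy two-compartment neuron: one somatic and one dendritic voltage,
# coupled by a conductance. All parameters are illustrative placeholders.
dt, tau = 0.1, 10.0      # timestep and membrane time constant (ms)
g_couple = 0.3           # soma-dendrite coupling conductance (arbitrary units)
v_soma, v_dend = 0.0, 0.0

trace = []
for step in range(2000):
    # Inject current into the dendrite for a stretch of the simulation.
    i_dend = 1.5 if 500 <= step < 1500 else 0.0
    # Each compartment leaks toward rest and exchanges current with the other.
    dv_soma = (-v_soma + g_couple * (v_dend - v_soma)) * dt / tau
    dv_dend = (-v_dend + g_couple * (v_soma - v_dend) + i_dend) * dt / tau
    v_soma += dv_soma
    v_dend += dv_dend
    trace.append(v_soma)

print(f"peak somatic response: {max(trace):.3f} (attenuated from the dendritic input)")
```

Even this crude step up from a point neuron captures something a single equation cannot: the dendritic signal arrives at the soma attenuated and delayed. How many such details matter is, as the comment says, the real question.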

It seems, at the core of it, that Markram and Modha are both just riding their intuition. My intuition leads me someplace closer to Markram than Modha, but I don't pretend that I'm really standing on absolutely firm ground when I take that stance.

By Forrest Collman (not verified) on 30 Nov 2009 #permalink

The mind isn't meat. The mind is the changing pattern of data expressed by, in, and through the changing state of the meat. What do you mean "rooted in biological reality"?

I think you really mischaracterize these efforts (however silly it may be to jump ahead to a holistic model) by implying that anyone is assuming that neurons are analogous to simple transistors. Regardless of what they're made of or how they really work, we only need to simulate the actions of neurons to a certain degree of resolution -- and yeah, these guys aren't there yet, but it doesn't mean that "shortcuts" aren't possible.

By pepsicoke (not verified) on 30 Nov 2009 #permalink

Hmm, actually I see now that you probably aren't taking any issue with a functionalist approach - you're saying that because the "resolution" of the Modha neuron simulation is so low, it's absurd to think that it will capture the critical and difficult behavior arising from the brain's deeply messy larger-scale structure. I'd agree with that...

By pepsicoke (not verified) on 30 Nov 2009 #permalink

Just a piece of meat? Just a piece of meat? When computer designers create chips that can grow other chips, extend themselves around damaged circuits to maintain function, and re-wire themselves to create greater efficiencies in information transmission and resource consumption, then and only then can we begin to call the brain "just a piece of meat". It's the most amazing damn piece of meat ever! Muscles that think? Inconceivable!?!

IBM has made a mistake in letting this guy lead such an important project. Time for a head to be chopped? They can replace it by 2012 with a simulated human brain, right?