Last week, a team of computer scientists led by Dharmendra S. Modha announced what sounded like an impressive breakthrough for neuroscience-inspired computing:
Using the Dawn Blue Gene/P supercomputer at Lawrence Livermore National Lab with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips each with 1 million neurons and 10 billion synapses, and exceeds the scale of cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real-time.
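A quick back-of-envelope check (my arithmetic, not the authors', and taking a terabyte as 10^12 bytes) already hints at how stripped-down each simulated element must be to fit:

```python
# Back-of-envelope memory budget for the announced simulation.
# Assumption (generous): all 144 TB of main memory is available
# for synapse state, with nothing reserved for anything else.
neurons = 1_000_000_000           # 1 billion spiking neurons
synapses = 10_000_000_000_000     # 10 trillion learning synapses
memory_bytes = 144 * 10**12      # 144 TB of main memory

bytes_per_synapse = memory_bytes / synapses
synapses_per_neuron = synapses / neurons

print(f"{bytes_per_synapse:.1f} bytes per synapse")      # → 14.4
print(f"{synapses_per_neuron:.0f} synapses per neuron")  # → 10000
```

Fourteen-odd bytes per synapse leaves room for little more than a weight and a delay; it is hard to see how anything biologically detailed could live in that footprint.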
The press coverage was predictably hyperbolic. Here’s Popular Mechanics:
Scientists at IBM’s Almaden research center have built the biggest artificial brain ever–a cell-by-cell simulation of the human visual cortex: 1.6 billion virtual neurons connected by 9 trillion synapses. This computer simulation, as large as a cat’s brain, blows away the previous record–a simulated rat’s brain with 55 million neurons–built by the same team two years ago.
“This is a Hubble Telescope of the mind, a linear accelerator of the brain,” says Dharmendra Modha, the Almaden computer scientist who will announce the feat at the Supercomputing 2009 conference in Portland, Ore.
I have no idea what a “linear accelerator” of the brain might be, but I think there are a few reasons to be slightly skeptical of such lofty claims. The first problem is that it’s incredibly difficult to reverse-engineer the mind, or construct a “cell-by-cell” simulation of the cortex. This is for an obvious reason: it’s hard to simulate something we don’t yet understand. A few years ago, I profiled the Blue Brain project – another IBM-funded attempt to fuse neuroscience and supercomputing – and described their painstaking empirical approach, which mixed cutting-edge computer science with messy wet-lab experiments:
At first glance, the room looks like a generic neuroscience lab. The benches are cluttered with the usual salt solutions and biotech catalogs. There’s the familiar odor of agar plates and astringent chemicals. But then I notice, tucked in the corner of the room, a small robot. The machine is about the size of a microwave, and consists of a beige plastic tray filled with a variety of test tubes and a delicate metal claw holding a pipette. The claw is constantly moving back and forth across the tray, taking tiny sips from its buffet of different liquids. I ask Schürmann what the robot is doing. “Right now,” he says, “it’s recording from a cell. It does this 24 hours a day, seven days a week. It doesn’t sleep and it never gets frustrated. It’s the perfect postdoc.”
The science behind the robotic experiments is straightforward. The Blue Brain team genetically engineers Chinese hamster ovary cells to express a single type of ion channel–the brain contains more than 30 different types of channels–then they subject the cells to a variety of physiological conditions. That’s when the robot goes to work. It manages to “patch” a neuron about 50 percent of the time, which means that it can generate hundreds of data points a day, or about 10 times more than an efficient lab technician. Markram refers to the robot as “science on an industrial scale,” and is convinced that it’s the future of lab work. “So much of what we do in science isn’t actually science,” he says, “I say let robots do the mindless work so that we can spend more time thinking about our questions.”
According to Markram, the patch clamp robot helped the Blue Brain team redo 30 years of research in six months. By analyzing the genetic expression of real rat neurons, the scientists could then start to integrate these details into the model. They were able to construct a precise map of ion channels, figuring out which cell types had which kind of ion channel and in what density. This new knowledge was then plugged into Blue Brain, allowing the supercomputer to accurately simulate any neuron anywhere in the neocortical column. “The simulation is getting to the point,” Schürmann says, “where it gives us better results than an actual experiment. We get the same data, but with less noise and human error.” The model, in other words, has exceeded its own inputs.
This ion channel data allows the Blue Brain team to construct a simulation of cortical circuits from the bottom-up. After all, the first step of reverse-engineering a machine is trying to figure out how the machine is actually engineered. Here’s Henry Markram, the director of the Blue Brain Project:
“There are lots of models out there, but this is the only one that is totally biologically accurate,” Markram says. “We began with the most basic facts about the brain and just worked from there.”
It’s this meticulous reductionist research that distinguishes the Blue Brain project from so many other neural simulators, including the recent paper by Modha et al. For too long, we’ve pretended that the human brain is a generic information processor, stuffed full of binary transistors. (The only difference was that our microchips used fatty membranes and carbon, not silicon. And that we ran some complex if buggy software.) But that’s almost certainly not the case. Instead, the talents of our mind are inseparable from the evolved quirks of its machinery, which suggests that simply crossing some arbitrary computational threshold – such as simulating 1.6 billion ersatz “neurons” – doesn’t mean very much if those simulations aren’t rooted in biological reality. A neuron isn’t just another electrical switch; our cells are much more interesting than that. In a recent email, Markram was rather critical of Modha’s new paper, since the simulation depended on “deeply unrealistic models of neuronal action”:
1. These are point neurons (missing 99.999% of the brain; no branches; no detailed ion channels; the simplest possible equation you can imagine to simulate a neuron, totally trivial synapses).
2. All these kinds of simulations are trivial and have been around for decades – simply called artificial neural network (ANN) simulations. We even stooped to doing these kinds of simulations as benchmark tests 4 years ago with tens of millions of such points before we bought the Blue Gene/L. If we (or anyone else) wanted to we could easily do this for a billion “points”, but we would certainly not call it a cat-scale simulation. It is really no big deal to simulate a billion points interacting if you have a big enough computer. The only step here is that they have at their disposal a big computer.
3. It is not even an innovation in simulation technology. You don’t need any special “C2 simulator”, this is just a hoax and a PR stunt. Most neural network simulators for parallel machines can do this today. NEST, pNEURON, SPIKE, CSIM, etc., etc. – all of them can do this! We could do the same simulation immediately, this very second, by just loading up some network of points on such a machine, but it would just be a complete waste of time.
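To make the first point concrete: a “point neuron” collapses the entire cell into a single voltage variable. Here is a minimal sketch of a leaky integrate-and-fire neuron, the textbook example of such a model – no dendritic branches, no individual ion channels, just one differential equation per cell. The parameter values are illustrative, not drawn from either project’s simulations:

```python
# A leaky integrate-and-fire "point neuron": the simplest standard
# spiking-neuron model. Parameters (in mV, ms, nA, megaohms) are
# illustrative textbook-style values, not from any published model.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Euler integration of tau * dV/dt = -(V - v_rest) + r_m * I."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:    # threshold crossing: register a spike
            spikes.append(step)
            v = v_reset      # instantaneous reset; no after-dynamics
    return spikes

# A constant 2.0 nA drive makes the neuron fire regularly.
spike_times = simulate_lif([2.0] * 1000)
```

That’s the whole neuron: a leak, a threshold, a reset. Everything Markram’s list mentions – the branches, the 30-plus ion channel types, the synaptic machinery – has been thrown away, which is exactly why scaling such points to a billion is a statement about the computer, not the brain.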
In the coming years, there will be many grand announcements about supercomputers that attempt to imitate the machinery inside the skull. One way to distinguish between such claims is to look at their cellular realism: Are these microchips really behaving like neurons? Or has the simulation taken a shortcut, and turned our neurons into dumb little microchips? Because we sometimes forget that the “mind is like a computer” metaphor is only a metaphor. The mind is really just a piece of meat.