Every science goes through several distinct phases. First, there is the dissection phase. The subject is broken apart into its simplest possible elements. (As Plato put it, “nature is cut at the joints, like a good butcher.”) For neuroscience, this involved reducing the brain to a byzantine collection of chemical ingredients, from kinase enzymes to neurotransmitters to sodium ions. (Let’s say this phase began with Ramón y Cajal.) Then, there is the model phase. Scientists begin tentatively trying to figure out how these parts interact. Finally, once the models start to make sense, scientists can use the models to make predictions. They can simulate circuits in a real brain and predict how those circuits will react under a given set of conditions.
One of the reasons I find neuroscience so interesting is that the science is making important progress in all three phases. New synaptic proteins, receptors, enzymes, etc. are still being discovered; the pathways of the brain are still stuffed with obscurities. And yes, there is no shortage of neuroscientific models. But perhaps the most exciting new research in the field concerns prediction, as these models are put to the empirical test. Consider this brand new paper in Science, from Tom Mitchell’s group at Carnegie Mellon. Underneath the passive prose of the abstract, there is some serious ambition:
The question of how the human brain represents conceptual knowledge has been debated in many scientific fields. Brain imaging studies have shown that different spatial patterns of neural activation are associated with thinking about different semantic categories of pictures and words (for example, tools, buildings, and animals). We present a computational model that predicts the functional magnetic resonance imaging (fMRI) neural activation associated with words for which fMRI data are not yet available. This model is trained with a combination of data from a trillion-word text corpus and observed fMRI data associated with viewing several dozen concrete nouns. Once trained, the model predicts fMRI activation for thousands of other concrete nouns in the text corpus, with highly significant accuracies over the 60 nouns for which we currently have fMRI data.
In other words, Mitchell’s group was able to construct a model of how different nouns are processed by networks in the brain, as captured by fMRI. (This involved plugging a trillion-word text corpus into a powerful computer, so that the researchers could measure how each noun is typically used in conjunction with a list of “master verbs.” For instance, they could predict that “celery,” based on its relationship with verbs like “eat,” “taste,” and “fill,” would exhibit a certain pattern of brain activity in an fMRI machine.) The predictions were accurate about 77 percent of the time. What’s even more impressive is that the model could make relatively accurate predictions about words outside of its training set.
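To make the idea concrete, here is a minimal sketch of that kind of model: each voxel’s activation is treated as a weighted sum of a noun’s co-occurrence features with a fixed set of verbs, with the weights fit from the nouns that were actually scanned. All of the nouns, feature values, and voxel numbers below are made up for illustration; they are not the paper’s actual data.

```python
import numpy as np

# Rows are training nouns, columns are hypothetical co-occurrence
# features with the verbs "eat", "taste", "fill". Values are invented.
X_train = np.array([
    [0.84, 0.35, 0.20],  # "celery"
    [0.10, 0.05, 0.90],  # "airplane"
    [0.70, 0.60, 0.10],  # "apple"
])

# Observed fMRI activation at two illustrative voxels for each noun.
Y_train = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.8, 0.3],
])

# Fit one linear regression per voxel by least squares:
# W maps verb-feature space to voxel-activation space.
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Predict activation for a noun that was never scanned, using only
# its text-derived features (again, invented numbers).
x_new = np.array([0.80, 0.40, 0.15])  # e.g. "lettuce"
prediction = x_new @ W
print(prediction.shape)  # one predicted value per voxel
```

The point of the sketch is the division of labor: the text corpus supplies the features, the scanned nouns supply the weights, and any unscanned noun with known features gets a predicted activation pattern for free.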
The paper is extremely impressive, but there are some important caveats. The big one is that the scientists aren’t mind-reading: they are brain-scan reading. There’s a crucial difference between deciphering the actual activity of the cortex and making predictions about what an fMRI image will look like. A crude map of a place (and that’s what an fMRI image is) is not the same thing as the place itself.
Nevertheless, it’s pretty thrilling (and perhaps a little bit scary, in the Orwellian sense) that neuroscience is entering the prediction phase. For more on this genre of brain research, check out my article on the Blue Brain project.
Note: Ed has a nice write-up of the experiment.