The Science of Mind-Reading: SVMs Extract Intentions from Neural Activity (video)

For the basics about multivariate fMRI "mind-reading" techniques, see the video below. Some of it is based on this 2007 Haynes et al paper from Current Biology, described in more detail following the video.

What Haynes et al have done is to ask 8 subjects to freely decide either to add or to subtract two numbers, and then to select, from 4 options, the answer corresponding to the task they had chosen. After many repetitions of this process, the authors ran a pattern classifier on the metabolic activity recorded in the brain.

This pattern classifier was run on the unsmoothed fMRI data - smoothing is normally applied because fMRI is thought to be a relatively noisy recording technique. Critically, a pattern classifier allows the use of unsmoothed data (and in fact requires it), because buried within the noise is a distributed signal reflecting the distributed neural patterns encoding the subject's intention. That information is presumably lost in averaging/smoothing operations.
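To make this concrete, here is a toy sketch in Python (entirely synthetic data, not the authors' recordings) of why a distributed pattern can survive multivariate decoding while vanishing under averaging: half of the informative voxels increase and half decrease, so their spatial mean carries no signal, yet a linear SVM separates the two conditions easily.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Hypothetical distributed "intention" pattern: half the voxels increase,
# half decrease, so the spatial mean (a crude stand-in for smoothing) is flat.
pattern = np.concatenate([np.ones(n_voxels // 2), -np.ones(n_voxels // 2)])
labels = rng.integers(0, 2, n_trials)              # 0 = add, 1 = subtract
data = rng.normal(size=(n_trials, n_voxels))       # measurement noise
data += 0.5 * np.outer(2 * labels - 1, pattern)    # condition-specific pattern

multivariate = cross_val_score(SVC(kernel="linear"), data, labels, cv=8).mean()
averaged = cross_val_score(SVC(kernel="linear"),
                           data.mean(axis=1, keepdims=True), labels, cv=8).mean()
print(f"multivariate accuracy ~{multivariate:.2f}, averaged-signal accuracy ~{averaged:.2f}")
```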

Haynes et al trained their pattern classifier (a linear support vector machine) using a "multivariate searchlight" approach (described here). This means that for every recorded voxel, they fed the classifier information from both that voxel and the voxels surrounding it. The classifier was trained on 87.5% of the data (using 8-fold cross-validation), and maps were produced of the classifier's accuracy at each voxel in the brain. These "accuracy maps" were averaged across subjects to produce the figure shown below.
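For readers who want to see the shape of this analysis, here is a minimal searchlight-decoding sketch (synthetic data and an arbitrary 3x3x3 neighbourhood, not the authors' pipeline or parameters): for each voxel, a linear SVM is trained on that voxel plus its immediate neighbours, and its cross-validated accuracy is written into a map.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
shape = (8, 8, 8)                      # a tiny synthetic "brain"
n_trials = 64
labels = rng.integers(0, 2, n_trials)  # 0 = addition, 1 = subtraction
volumes = rng.normal(size=(n_trials,) + shape)

# Plant an informative patch so some searchlights decode above chance.
volumes[labels == 1, 2:5, 2:5, 2:5] += 0.7

accuracy_map = np.zeros(shape)
for x in range(1, shape[0] - 1):
    for y in range(1, shape[1] - 1):
        for z in range(1, shape[2] - 1):
            # Features = the 3x3x3 neighbourhood centred on this voxel.
            sphere = volumes[:, x-1:x+2, y-1:y+2, z-1:z+2].reshape(n_trials, -1)
            scores = cross_val_score(SVC(kernel="linear"), sphere, labels, cv=8)
            accuracy_map[x, y, z] = scores.mean()

print("peak searchlight accuracy:", accuracy_map.max())
```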

[Figure: group-averaged classifier accuracy maps from Haynes et al., 2007]

As you can see above, the results showed that intention is decodable both prior to and during the intended response in numerous regions of prefrontal cortex. In particular, the anterior and posterior medial prefrontal cortices, as well as the lateral frontopolar cortex, right middle frontal gyrus, and left operculum, contained information that allowed intentions to be decoded at a level significantly above chance. Intentions could also be decoded prior to the response from activity in the temporo-parietal junction, although this is not illustrated in the figure above (see the supporting online material here). Much debate focuses on the precise roles of these regions, but their involvement here would be predicted by the majority of cognitive neuroscientists.
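The group-level step can be sketched in the same hedged way (placeholder data and a deliberately simple, uncorrected threshold, not the authors' statistics): average the per-subject accuracy maps and test each voxel's accuracy against chance (50%).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, shape = 8, (8, 8, 8)

# Stand-in for the per-subject searchlight accuracy maps computed above.
subject_maps = 0.5 + 0.02 * rng.normal(size=(n_subjects,) + shape)
subject_maps[:, 2:5, 2:5, 2:5] += 0.1   # a region that decodes above chance

group_mean = subject_maps.mean(axis=0)
t, p = stats.ttest_1samp(subject_maps, 0.5, axis=0)

# Illustrative uncorrected threshold; a real analysis would correct for
# multiple comparisons across voxels.
above_chance = (group_mean > 0.5) & (p < 0.001)
print("voxels decoding above chance:", int(above_chance.sum()))
```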

Conspicuously absent from these maps is the intraparietal sulcus (IPS), which has been argued to reflect numerical processing. An interesting possibility is that the numerical processing accomplished by this region cannot be distinguished based on the numerical operation (addition vs. subtraction), which would support a process-independent representation of quantity. Note that this conflicts with some theories of numerical processing in the IPS.

What's fairly amazing about this work is that they used a pretty standard scanner (only 3 tesla) with a reasonable sampling time (a TR of just over 2.7 s). Peter Bandettini has suggested that this unsmoothed multivariate approach would benefit from higher-resolution MRI, but Haynes et al have demonstrated surprising success with much more widely available technology.

Related Posts:
Soon we'll be reading your mind!
Attention vs. Intention: Dissociations in Parietal Cortex
Action without Intention: Parietal Damage Alters Intention Awareness


