Suppose that one day your computer's hard drive stops working, but everything else about the machine is fine. Your friend has an identical computer in which the hard drive works fine, but the keyboard suddenly stopped working. Based on this "double dissociation" between the two different problems, can you safely assume that the "hard drive system" and the "keyboard system" rely on distinct underlying mechanisms?
For years, cognitive neuropsychologists have felt safe in making equivalent assumptions about brain damage. If one type of damage leads to difficulty on task A, but not task B, and a different type of damage leads to the opposite pattern of performance, then tasks A and B must rely on distinct neural mechanisms ... Right?
Given what everyone knows about computers, you might think this inference is perfectly valid: the brain is, after all, a computing machine of some sort. Yet debate has recently emerged over the logic of such double dissociations in cognitive science.
In their 2003 editorial, Dunn & Kirsner review several challenges to the kinds of logical inferences that are typically made from double dissociations:
1) If performance on tasks A and B is negatively correlated in healthy populations (suggesting that the two tasks occupy opposite sides of some computational tradeoff, for example the tradeoff between stability and flexibility), then brain damage might instead affect a regulatory function that mediates the balance between them: damage that shifts this regulator in opposite directions in two patients would look exactly like a double dissociation (a toy sketch after this list illustrates the point). Dunn & Kirsner cite other similar difficulties identified in previous work.
2) Even if real double dissociations exist, there's no way to prove it: perhaps performance on the supposedly spared task (say, task B) is also impaired, but the test is simply too insensitive to detect it.
3) Supposing that a true double dissociation is found, does it show that more than one cognitive process contributes to task performance? More than one module? More than one system? What exactly does it tell us about the nature of the underlying system? Maybe nothing, based on evidence that damage to a connectionist model can result in apparent double dissociations, even if the network's architecture has no clear modularity or division of function (a toy lesioning sketch after this list illustrates the idea).
4) If we assume that unique mechanisms or modules underlie double dissociations, then we also have to make several additional problematic assumptions. First, each patient or pattern of brain damage needs to be a "pure case," in which damage affects just one of these hypothetical modules. Second, the hypothetical modules must not interact with one another. If either of these requirements is not met, then the set of valid inferences from double dissociations becomes far less clear. And given what we know about the highly distributed and interactive nature of neural networks, it seems highly unlikely that these requirements could ever be met: many biological systems are highly recursive, and every "part" works through dynamic interactions with other parts. This seems particularly true of the brain (here's a similar argument).
5) Finally, Dunn & Kirsner argue that no one should be surprised if two different tasks can be differentially impaired by two types of brain damage. In their words: "Since any two tasks, different enough to be called different, cannot recruit exactly the same mental functions in exactly the same way, it is inevitable that they will eventually yield a dissociation ... Such fractionations call into question the utility of dissociations as they seem to suggest that we will eventually need as many mental functions or modules or systems as there are tasks for humans to do."
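To make the first challenge concrete, here is a minimal sketch, entirely invented for illustration (the regulatory parameter g and the two performance functions are not taken from Dunn & Kirsner), of how damage to one shared regulator can mimic a double dissociation between two negatively correlated tasks:

```python
# A toy illustration (all numbers and functions hypothetical) of point 1:
# a single "regulatory" parameter g trades off two tasks, so damaging it
# in opposite directions looks like a double dissociation.
import numpy as np

rng = np.random.default_rng(0)

def performance(g):
    """Hypothetical tradeoff: task A benefits from high g, task B from low g."""
    return g, 1.0 - g

# Healthy controls: g varies slightly around a balanced setting, so
# performance on A and B is negatively correlated across individuals.
healthy_g = rng.normal(0.5, 0.05, size=100)
perf_a, perf_b = performance(healthy_g)
print("healthy A-B correlation:", np.corrcoef(perf_a, perf_b)[0, 1])  # ~ -1.0

# Two "patients" whose damage pushes the same regulator in opposite directions.
for label, g in [("patient 1 (regulator shifted down)", 0.15),
                 ("patient 2 (regulator shifted up)", 0.85)]:
    a, b = performance(g)
    print(f"{label}: task A = {a:.2f}, task B = {b:.2f}")
# Patient 1 looks selectively impaired on task A, patient 2 on task B,
# even though only one shared (non-modular) mechanism was damaged.
```

The negative correlation here is perfect by construction; the point is only that opposite shifts of a single shared parameter can masquerade as a double dissociation.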
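Similarly, to make the third challenge concrete, here is a toy lesioning experiment, much simpler than (and not to be confused with) the connectionist work cited above: a single, undifferentiated hidden layer is trained on two arbitrary pattern-association tasks, then random "lesions" of its hidden units are scored by how badly they hurt each task. The architecture, tasks, lesion size, and threshold are all arbitrary choices for illustration.

```python
# A minimal, hypothetical sketch of lesioning a non-modular network:
# one fully connected hidden layer, trained on two arbitrary tasks at once,
# then damaged at random to see whether some lesions selectively hurt
# task A and others selectively hurt task B.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two arbitrary pattern-association "tasks" sharing one input/output space.
n_in, n_hid, n_out, n_pat = 8, 20, 8, 6
X_a = rng.choice([0.0, 1.0], size=(n_pat, n_in))
Y_a = rng.choice([0.0, 1.0], size=(n_pat, n_out))
X_b = rng.choice([0.0, 1.0], size=(n_pat, n_in))
Y_b = rng.choice([0.0, 1.0], size=(n_pat, n_out))
X, Y = np.vstack([X_a, X_b]), np.vstack([Y_a, Y_b])

# One undifferentiated hidden layer; nothing in the wiring is modular.
W1 = rng.normal(0.0, 0.5, size=(n_in, n_hid))
W2 = rng.normal(0.0, 0.5, size=(n_hid, n_out))

for _ in range(8000):                         # plain batch backprop
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    d_out = (O - Y) * O * (1.0 - O)
    d_hid = (d_out @ W2.T) * H * (1.0 - H)
    W2 -= (H.T @ d_out) / len(X)
    W1 -= (X.T @ d_hid) / len(X)

def task_error(Xt, Yt, mask):
    """Mean squared error on one task with a subset of hidden units removed."""
    H = sigmoid(Xt @ W1) * mask
    return np.mean((sigmoid(H @ W2) - Yt) ** 2)

# "Lesion" random subsets of hidden units and compare the damage to each task.
a_worse = b_worse = 0
for _ in range(500):
    mask = (rng.random(n_hid) > 0.4).astype(float)   # remove ~40% of units
    err_a, err_b = task_error(X_a, Y_a, mask), task_error(X_b, Y_b, mask)
    if err_a > 1.5 * err_b:
        a_worse += 1
    elif err_b > 1.5 * err_a:
        b_worse += 1

print(f"lesions mainly hurting task A: {a_worse}, mainly hurting task B: {b_worse}")
# If both counts come out non-zero (as they typically do), random damage to a
# network with no built-in modules has produced the crossed pattern that a
# strict reading of double-dissociation logic would attribute to separate modules.
```

Whether such lesioned networks should themselves count as "effectively modular" is exactly what Gurd & Marshall argue in reply below.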
Still, the logic of double dissociations has its defenders. The strongest points seem to be the following:
1) Alan Baddeley replied that double dissociations are far stronger than merely correlational evidence, which is all too often the standard in clinical neuropsychology.
2) Similarly, Jack Lyons argues that brain damage is most informative with respect to performance that is spared: the clearest interpretation of performance after brain damage concerns functions that the damaged regions could not have been performing.
3) Another reply, by Coltheart and Davies, is even more confrontational: they suggest that not all "different" tasks will necessarily yield dissociations, but only those tasks that rely on different underlying processes. They give the example of reading words with and without the letter 'A': no dissociation between these two types of reading has ever been found, although Dunn & Kirsner's argument might lead one to suspect that it would be possible.
4) Gurd & Marshall argue that dynamical and connectionist models do presume some degree of modularity or division of function insofar as they are capable of generating double dissociations. According to Gurd & Marshall, coaxing these models to behave in that way requires parameters that affect only subsets or particular "modes" of the models, and thus the models are effectively modular or functionally partitioned.
5) Others (including Sternberg and McCloskey) argue that within-task double dissociations are much stronger evidence for functional partitioning of the cognitive system.
In summary, it seems unsafe to make strong inferences about the functional architecture of cognitive processing on the basis of double dissociations alone. Such inferences are bolstered when they rest on within-task double dissociations between measures that are positively correlated in the normal population, and they may be particularly strong when they concern functions that the damaged regions could not have been performing.
At the same time, it will be important to develop increasingly biologically plausible models of double dissociations; the recommendations above may still not be strong enough to usefully constrain the kinds of underlying neural computations that give rise to apparent double dissociations.
I agree with your main argument. It is another example of an over-simplistic approach. Brain function is not that of a simple machine but that of a very intricate system.
For decades, some widely quoted cognitive scientists and some AI people have been making the mistake of assuming that "the brain is a computer" means the brain is an awful lot like the computer on their desktop. But the mathematical definition of a computer covers every possible device up to a particular limit on what sorts of problems it can solve. It imposes no requirements on modularity, on divisions between hardware and software, or on divisions between memory regions and processing regions.
I'm glad to see more of those baseless assumptions unravel - something that seems to have happened a lot in the last 5 years.
Hi there -- I'm an undergrad student in cognitive science in my second year, and before I add my comments, I wanted to let you know that I find your blog a big help! I definitely agree with the criticisms of double dissociation methodology you've posted here. The problem is that this methodology assumes modularity at the outset.
However, one question I have is: what about converging evidence that shows double dissociations from a number of different perspectives, including both behavioral and neuroimaging studies? If converging evidence shows these dissociations, surely that adds weight to the argument. One example I can think of is the dissociation between familiarity and recollection -- I believe there was an excellent review by Yonelinas in the Journal of Memory and Language on this.
On the related subject of modularity, I wanted to post some thoughts on what basis we have for assuming modularity at all. When we look at studies of animal conditioning, one thing we observe is that apparently similar behaviors can arise from different underlying processes, e.g., goal-directed and habitual instrumental behaviors. Similarly, memory appears to come in many different types and categories. Another example is cue combination in Pavlovian conditioning, which can be additive or can generalize across different configurations. So I guess my question here is: why should such fractionation arise at all? Why should the brain employ this modularity and dissociation? Isn't it inefficient to use duplicate strategies like this to produce the same behaviors?
I was thinking that maybe Marr's levels have something to do with this. Perhaps they help us see that the same computational goal can be attained via different algorithms and neural mechanisms, and that the brain therefore inevitably fractionates, giving rise to modularity and, ultimately, to dissociations. Any thoughts? (I know that was a lot of questions! ;-) )
Oh, and one more thing -- regarding this part of your post:
"Maybe nothing, based on evidence that damage to a connectionist model can result in apparent double dissociations, even if the network's architecture has no clear modularity or division of function."
The link doesn't seem to be working any more (presumably it was an article about damage to a connectionist model). Can you point me to that paper/article? Thanks!
sorry, the link was
http://www.mathcs.duq.edu/~juola/papers.d/cogsci98dissociate.ps