Towards Evidence of Absence: Conjunction Analyses in fMRI

An absence of evidence is not itself evidence for the absence of a particular effect. This simple problem - generally known as the problem of null effects - creates many difficulties in cognitive science, making it far easier to parcellate cognitive and neural processes into ever-finer detail than to show that two processes are identical. Recently, this problem has emerged for the wonder child of cognitive science, functional magnetic resonance imaging (fMRI). The problem in this case is determining whether two tasks recruit at least one of the same areas of the brain in the same way.

Friston & Price introduced a method based on the general linear model (thanks, Kevin H.!) for estimating when two tasks activate the same region of the brain. This is widely known as a conjunction analysis. Unlike more traditional subtraction analyses of fMRI data (which rely on the assumption of "pure insertion"), these conjunction analyses retain voxels showing main effects of condition but a "null" interaction effect between conditions: in other words, an absence of differences between conditions.

Friston & Price suggest that a valid conjunction analysis identifies voxels in a brain activation map that meet three criteria: 1) each voxel is significantly activated by two or more tasks, 2) each voxel is not significantly modulated by an interaction effect between tasks, and 3) the estimated relationships between each voxel and each task do not significantly differ.
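In code, criteria 1 and 2 amount to a simple per-voxel filter. Here's a minimal sketch - the function name, t-values, and the uncorrected cutoff of 1.96 are all illustrative, not taken from Friston & Price:

```python
def interaction_mask(t_task1, t_task2, t_interaction, t_crit=1.96):
    """Keep a voxel when each main effect is significant but the
    task-by-task interaction is not. t_crit = 1.96 is an illustrative
    uncorrected two-tailed cutoff, not a realistic fMRI threshold."""
    return [t1 > t_crit and t2 > t_crit and abs(ti) < t_crit
            for t1, t2, ti in zip(t_task1, t_task2, t_interaction)]

# Three voxels: (both tasks active, no interaction), (both active,
# strong interaction), (only task 2 active).
mask = interaction_mask([3.1, 3.0, 0.4], [2.8, 3.2, 3.5], [0.5, 2.7, 0.2])
# → [True, False, False]
```

Only the first voxel survives; the second is rejected by the interaction criterion, the third by the main-effect criterion.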

Subsequently, Nichols et al. have referred to this as "interaction masking," and have noted that this method uses null effects to support a hypothesis. (To put it bluntly, you can't do that.) As Nichols et al. put it, "we cannot assume that there is no interaction if the interaction effect is not significant."

Nichols et al. describe subsequent improvements on this test by Friston. The basic idea: if two tasks, A and B, activate a particular voxel with t-values of 0.8 and 1.6, we can estimate the probability that the smaller of two random draws from the null t-value distribution would be at least 0.8.
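That probability is easy to compute under two simplifying assumptions I'm making here (not in the original papers): the two tests are independent, and we use z- rather than t-statistics:

```python
from statistics import NormalDist

# Under the null for BOTH tasks, the chance that the smaller of two
# independent draws is at least 0.8 is (1 - Phi(0.8))^2.
phi = NormalDist().cdf
p_min = (1 - phi(0.8)) ** 2
print(round(p_min, 4))  # ≈ 0.0449
```

So a minimum statistic of 0.8 - unimpressive for a single test - is already fairly unlikely when the null holds for both tasks at once.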

As Nichols et al. point out, however, this random sampling of the t-value distribution assumes a very particular null hypothesis: that there's no effect for task A AND no effect for task B (the "global null"). Rejection of this null hypothesis thus shows only that there's an effect for task A OR task B OR both, which is obviously not a proper conjunction!

Instead, a proper conjunction analysis would test the null hypothesis that there's no effect for task A OR no effect for task B (the "conjunction null"). Rejection of this null hypothesis would indicate that there's a significant effect for task A AND task B.
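A quick simulation shows why the distinction matters. The setup below is mine, not from Nichols et al. (illustrative effect sizes, independent z-statistics): calibrate the minimum statistic to the global null, then feed it a situation where the conjunction claim is false because only task A has an effect:

```python
import random
from statistics import NormalDist

# Calibrate min(Z_A, Z_B) to the GLOBAL null at alpha = 0.05:
# solve (1 - Phi(u))^2 = alpha for the threshold u.
random.seed(1)
alpha = 0.05
u = NormalDist().inv_cdf(1 - alpha ** 0.5)  # ≈ 0.76

# Make the conjunction claim false: task A has a strong true effect
# (mean 3), task B has none (mean 0).
n = 100_000
hits = sum(min(random.gauss(3, 1), random.gauss(0, 1)) > u
           for _ in range(n))
rate = hits / n
print(rate)  # ~0.22: the "conjunction" fires far more often than 5%
```

A test calibrated to the global null declares a "conjunction" in roughly a fifth of these simulated voxels, even though task B never activates them.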

The issue then becomes what the proper significance level is for this new null hypothesis. Nichols et al. call this the "minimum statistic for the conjunction null" hypothesis, or MS/CN. The same basic method of Friston et al. is employed, but Nichols et al. decrease the alpha level for the minimum t statistic to one that remains valid under the worst-case scenario: all but one task in the conjunction analysis truly (and strongly) activates the region in question.
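In practice that worst case pins the threshold at the ordinary single-test critical value. A sketch of the arithmetic, again under my simplifying assumptions of independent z-statistics:

```python
from statistics import NormalDist

nd = NormalDist()
alpha, k = 0.05, 2
u = nd.inv_cdf(1 - alpha)  # ≈ 1.645, the usual one-sided cutoff

# Worst case under the conjunction null: k-1 tasks have huge true
# effects, so their statistics exceed u almost surely and the error
# rate is driven entirely by the single null task.
worst_case_error = 1 - nd.cdf(u)          # = alpha
# Under the global null (all k tasks null), the same cutoff is
# conservative: the minimum exceeds u with probability alpha**k.
global_null_error = (1 - nd.cdf(u)) ** k  # = alpha**2 = 0.0025
print(worst_case_error, global_null_error)
```

Thresholding the minimum statistic this way controls the conjunction-null error rate at alpha no matter how many of the other tasks are truly active, at the price of being conservative when nothing is active at all.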

As far as I can tell, this is the state of the art for conjunction analyses. One could still claim that we have the problem of null effects, since even with the improved Nichols et al. method, we cannot say that a voxel is not differentially activated by tasks A and B - only that it is significantly activated by both.

Question for the statisticians out there: there are numerous other tests one could run to fortify conclusions about conjunction analyses - and more generally, about two processes being the same - so why aren't they used? For example, the t-values of a voxel identified in a conjunction analysis of two tasks should be more highly correlated across those tasks, within individuals, than the t-values of voxels not identified in the conjunction analysis. While the conjunction analysis will bias that sample towards voxels with high t-values, higher t-values should not necessarily be associated with a higher correlation.
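Here's a toy simulation of that intuition - every effect size and noise level below is made up, and subjects' "t-values" are simulated directly rather than computed from raw data:

```python
import random
import statistics

random.seed(0)
n_subjects = 500

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Conjunction voxel: both tasks tap one shared, subject-specific effect.
shared = [random.gauss(2.0, 1.0) for _ in range(n_subjects)]
t_A_conj = [s + random.gauss(0, 0.5) for s in shared]
t_B_conj = [s + random.gauss(0, 0.5) for s in shared]

# Non-conjunction voxel: independent noise in each task.
t_A_null = [random.gauss(0, 1.0) for _ in range(n_subjects)]
t_B_null = [random.gauss(0, 1.0) for _ in range(n_subjects)]

r_conj = corr(t_A_conj, t_B_conj)  # high: a shared process drives both
r_null = corr(t_A_null, t_B_null)  # near zero: nothing shared
print(r_conj, r_null)
```

If two tasks really recruit one shared process in a voxel, the between-task correlation of subjects' statistics should be well above that of voxels where nothing is shared - which is the pattern the simulation produces.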

A minimum statistic by itself does NOT test for interaction. It is quite literally taking the minimum beta value at each voxel across all conditions in the conjunction. You can have a very large difference between the conditions, and it will not change the results of the conjunction. Using the minimum statistic only ensures that both areas are active, not that they have the same size of activation.

Thanks Kevin for the clarification - I've corrected the text above. Friston & Price didn't introduce the minimum statistic until their '99 paper, at which point they dropped the requirement for a null interaction effect. In the '97 paper (which I incorrectly indicated used the minimum statistic), they were using a vanilla GLM.