I’ve got an interview with Ed Vul, the lead author of the recent paper on “Voodoo Correlations in Social Neuroscience,” over at Scientific American. Since the paper hit the web, it has provoked a flurry of rebuttals and responses. If you’d like a balanced perspective – and it’s worth pointing out that the problem isn’t unique to social neuroscience, but has implications for fMRI research in general – I’d suggest reading the interview and then perusing this eloquent and fiery response by Matthew Lieberman, Elliot Berkman and Tor Wager.
LEHRER: Your paper has prompted a great deal of debate among social neuroscientists, and some of the scientists have issued a rebuttal of your paper. (You have since rebutted this rebuttal.) What do you hope this debate leads to? What methodological changes would you like to see adopted by social neuroscientists using fMRI?
VUL: The debate we have spurred is quite interesting to watch. At first some of the authors whose papers we criticized challenged our statistical point, but–for good reason–that line of argument doesn’t seem to have caught on. Right now, so far as I know, everyone seems to concede that the analysis used in these studies was not kosher, in the sense that it does not yield correlation numbers that can be taken seriously. Instead, we are mostly hearing a couple of other arguments at this point.
One is that the correlation values themselves don’t really matter–it’s just the fact that there is a correlation in a certain spot in the head that matters. I don’t agree with this argument at all; we think many of these papers appeared in such high-profile places precisely because editors were (justifiably) impressed by big effects. If one can account for, say, three quarters of individual differences in something important such as anxiety or empathy–obviously, that’s a real breakthrough, and it tells you not only where future research ought to look, but also where it shouldn’t. On the other hand, if it’s just 3 percent of the variance, that’s a whole lot less impressive, and may reflect much more indirect kinds of associations.
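(To make the arithmetic behind those two figures concrete: variance explained is the square of the correlation coefficient, so “three quarters of individual differences” corresponds to a correlation near r ≈ 0.87, while 3 percent of the variance corresponds to r ≈ 0.17. A quick illustrative check, not from the paper:)

```python
# Variance explained by a correlation is r squared.
# r = 0.87 accounts for roughly three quarters of the variance;
# r = 0.17 accounts for only about 3 percent.
for r in (0.87, 0.17):
    print(f"r = {r:.2f} -> variance explained = {r**2:.0%}")
```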
I have also heard some people complain that even if we are right on the mathematical point, we presented our argument in a rather rough-mannered way–criticizing particular articles, drawing unfavorable outside attention to the field, and using the humorous term “voodoo.” We were as surprised as anyone by how much interest our paper sparked. Evidently it spread sort of “virally”–one neuroscientist we know said he got seven copies sent to him (none of them by us). The good side is that people are thinking harder now about how they do their analyses. The bad side is that all this publicity has left some authors feeling embarrassed and picked on. In our view, the statistical issues of independence and multiple comparisons are full of tricky pitfalls–we do not suggest that these were stupid mistakes people were making, and we regret hurting anyone’s feelings. I don’t think, however, that it would have made sense to write an article that did not “name names,” because if the scientific literature is to guide future research decisions, people have to know which results can be relied upon, and which cannot. (In fact, we suspect we only flagged a small fraction of the papers that have these problems, and some are in other fields, such as neurogenetics and cognitive neuroscience more broadly.)
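(The “independence” pitfall Vul mentions is easy to see in a toy simulation – this is my own illustrative sketch, not code from the paper. If you first select voxels because they correlate strongly with a behavioral measure, and then report the correlation in those same voxels, the estimate is badly inflated – here, even when the true correlation is exactly zero:)

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 10_000

# Pure noise: the true correlation between every voxel and behavior is 0.
behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))

# Pearson correlation of each voxel with the behavioral measure.
b = (behavior - behavior.mean()) / behavior.std()
v = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
r = v @ b / n_subjects

# Non-independent ("circular") analysis: select voxels by their observed
# correlation, then report the correlation within that selected set.
selected = np.abs(r) > 0.7
print(f"voxels passing threshold: {selected.sum()}")
print(f"mean |r| in selected voxels: {np.abs(r[selected]).mean():.2f}")
print(f"mean r across all voxels:    {r.mean():.3f}")
```

With thousands of noise voxels and only 16 subjects, a handful of voxels will clear the threshold by chance, and the correlation reported within them is enormous by construction – which is exactly why such numbers cannot be taken at face value.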