As a graduate student, I observed the nascent field of functional magnetic resonance imaging and thought to myself with some amusement “modern phrenology! Now with big, fancy, expensive equipment!” Count me among those who have never been terribly impressed with fMRI, and certainly not with its applications in what is known as social neuroscience.
Now we have this:
Late last year, Ed Vul, a graduate student at MIT working with neuroscientist Nancy Kanwisher and UCSD psychologist Hal Pashler, prereleased “Voodoo Correlations in Social Neuroscience” on his website. The journal Perspectives in Psychological Science accepted the paper but will not formally publish it until May.
The paper argues that the way many social neuroimaging researchers are analyzing their data is so deeply flawed that it calls into question much of their methodology. Specifically, Vul and his coauthors claim that many, if not most, social neuroscientists commit a nonindependence error in their research, in which the final measure (say, a correlation between behavior and brain activity in a certain region) is not independent of the selection criteria (how the researchers chose which brain region to study), thus allowing noise to inflate their correlation estimates. Further, the researchers found that the methods sections clearing peer review were woefully inadequate, often lacking the basic information about how the data were analyzed that others would need to evaluate the work. (Read Vul et al.’s entire in-press paper here.)
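To see why this nonindependence (or "circular") error inflates correlations, consider a minimal simulation. This sketch is not from Vul et al.'s paper; the subject count, voxel count, and selection threshold are invented for illustration. Behavior and "voxel" activity are generated as pure independent noise, yet selecting voxels by their correlation with behavior and then reporting the correlation of those same voxels on the same data yields an impressively large estimate:

```python
import random
import statistics

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

n_subjects = 16   # hypothetical sample size
n_voxels = 2000   # hypothetical number of brain voxels tested

# Behavioral scores and voxel activations are independent pure noise,
# so the true correlation between behavior and every voxel is zero.
behavior = [random.gauss(0, 1) for _ in range(n_subjects)]
voxels = [[random.gauss(0, 1) for _ in range(n_subjects)]
          for _ in range(n_voxels)]

# Non-independent analysis: select voxels whose observed correlation
# with behavior exceeds a threshold, then report the mean correlation
# of those same voxels, computed on the same data used to select them.
threshold = 0.5
rs = [pearson(v, behavior) for v in voxels]
selected = [r for r in rs if abs(r) > threshold]
mean_abs_r = statistics.mean(abs(r) for r in selected)

print(f"{len(selected)} voxels selected; mean |r| = {mean_abs_r:.2f}")
# The reported mean |r| exceeds 0.5 by construction, even though the
# true correlation is exactly zero for every voxel.
```

With many voxels and few subjects, sampling noise alone guarantees that some voxels clear the threshold, and averaging only those survivors produces a "finding" out of nothing. The fix Vul et al. point toward is independence: select the region with one portion of the data and estimate the correlation with a separate portion.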
There’s been a flurry of response in the blogosphere to this not-yet-published paper. There are many interesting things about it, including the changing way in which scientific debate is being conducted. Several of my Sciblings have posted in response to the controversy. I particularly like Ed Yong’s post; if you weren’t skeptical about fMRI before reading it, you should be afterward. (Side note: Ed Yong is a damn good writer!)
What interests me in all this, however, is the debate over words. Consider this:
Tor Wager, a Columbia University cognitive neuroscientist, whose work was not mentioned in Vul’s paper but who helped prepare one of the rebuttals, says that it was important to respond both publicly and swiftly. “The public and the news media operate on sound bites, and the real scientific issues are quite complex.” His complaints focus not only on the content of Vul’s paper, but also on the authors’ diction – specifically, the title, and its use of “voodoo.”
“When the conversation gets complex – and with statistics it always is – many blog readers will form opinions based on very simple things,” says Wager. “Like words such as ‘voodoo correlations.’ There’s no reason to use such loaded words when making a statistical argument. The argument should be able to stand on its own.”
Should it? Vul notes:
He and Kanwisher had previously written a similar paper discussing the statistical point on its own, and it went largely unnoticed.
Maybe a good argument can’t really stand on its own against established dogma, without a little help. In any case
…the editor of Perspectives, Ed Diener, in conversation with Vul and his rebutters, has decided to strike the word “voodoo” from the paper’s name. Yet for Wager and social neuroscientists, it feels like a hollow victory that’s come too late, and they find themselves wondering why Diener and the reviewers approved the title in the first place. According to Wager, the paper has made grant administration officers more wary, and it has affected the peer review process: “Everyone knows about the voodoo thing now, even though it’s getting taken out of the journal article,” he says. “The idea is out there, and it’s hard to correct.”
Are these bad things? If peer reviewers were letting papers slide through without adequate attention to things like the possibility of nonindependence errors, that would be a bad thing for everyone, right? So increased scrutiny can only improve the quality of published work, no? Isn’t that what we all want? Shouldn’t an argument be able to stand on its own, Mr. Wager? Or do some social neuroscience arguments only “stand on their own” when we don’t pay too close attention to the methods and statistical arguments?
Mr. Wager objects to language like “voodoo” because it creates an “emotional effect” that affects public perception. The implication here is that the type of language Mr. Wager approves of creates NO emotional effect and has NO effect on public perception. And yet this is crazy. Using language that is more traditionally associated with scientific discourse creates its own emotional effect – it creates the impression that the speaker is speaking without emotion, is completely rational and objective, has no vested interests or biases, and can be trusted, even when some or all of those statements are unfounded. The discourse of science most assuredly has an effect on public perception, and we would be crazy to pretend that we did not want to have such an effect. What really upsets Mr. Wager is that a perception has been disrupted: the perception that bucketloads of cash should be poured indiscriminately into social neuroscience because those folks know absolutely what they are talking about and their work is incredibly important.
If we take the definition of voodoo as “characterized by deceptively simple, almost magical, solutions or ideas,” then the use of the word in Vul et al.’s title is not an unreasonable, if somewhat unusual, word choice. It’s a bit disingenuous to focus a critique so heavily on this one word, to complain about how it injects “emotion” into what was (erroneously) supposed to have been an emotion-free space. Ed Vul has exposed the biases running through an entire field of research, and his critic is upset because he used an “emotion”-laden word in his paper title? There’s a lot of emotion present here, for sure, but it isn’t in the word voodoo.