When we see a brain "light up," [most of] our brains shut off

Psychologists often complain that neuroscientists get a disproportionate share of the glory when the mainstream media reports on their studies. It seems to some that an important new psychology study is often neglected or ignored entirely, while neuroscience studies of similar importance are hailed as "groundbreaking." What is it about pictures of brains that is so appealing?

A while back, we were excited to hear of a study that promised to show that people are more impressed by neuroscience explanations of research results than by nonneural psychology explanations. Paul Bloom's article about the then-unpublished research suggested that even experts were more impressed with explanations of psychological phenomena that included irrelevant references to brain activity.

But the study was unpublished, so we didn't report on the results here. Now, finally, the study has been published by a team led by Deena Skolnick Weisberg. However, the results, though still intriguing, were a little different from what Bloom's account promised.

The researchers began by asking 81 non-experts to read descriptions of 18 different psychological phenomena, such as the Attentional Blink and the Curse of Knowledge. Participants then read an explanation of each phenomenon that either included some bit of neuroscience or did not. Here, for example, are two explanations of why the curse of knowledge occurs:

Without neuroscience: The researchers claim that this "curse" happens because subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge onto others.

With neuroscience: Brain scans indicate that this "curse" happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge onto others.

Even though both of these explanations are accurate, the neuroscience explanation in fact offers nothing to help understand the phenomenon. The non-experts also read bad explanations of the phenomena, again both with and without neuroscience:

Without neuroscience: The researchers claim that this "curse" happens because subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.

With neuroscience: Brain scans indicate that this "curse" happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.

Participants were asked to rate each explanation on a seven-point scale from -3 (very unsatisfying) to +3 (very satisfying). Here are the results:

[Figure: non-experts' ratings of good and bad explanations, with and without neuroscience]

While there was no significant difference in ratings of the good explanations, non-experts rated the bad explanations significantly better when they included irrelevant neuroscience. The team repeated the experiment with college undergraduates enrolled in an introductory neuroscience class, and the results were similar at both the beginning and the end of the semester. These students actually rated all the explanations significantly better when neuroscience was included, whether or not the explanations themselves were good.

Finally, the study was repeated on neuroscience experts: people who had completed undergraduate or graduate neuroscience or cognitive psychology degrees. Here are those results:

[Figure: experts' ratings of good and bad explanations, with and without neuroscience]

While there was a slight trend toward rating the bad explanations with neuroscience better, it did not reach significance. The experts actually rated good explanations with neuroscience worse than explanations without neuroscience content. So contrary to Bloom's article, the published study did not find the same effect for experts as for non-experts. Perhaps the preliminary data had been significant, but the effect went away when more participants were tested.

Still, as Weisberg et al. point out, their results are troubling: experts have a different view than non-experts of what makes a good explanation of psychology research results. This could mean that experts are ineffective at explaining their work to the public. Indeed, as we've seen, even after a semester-long neuroscience course, students are still impressed by irrelevant neuroscience explanations of research.

Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., Gray, J.R. (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.


I wonder how much of this effect comes from the irrelevant neuroscience increasing the feeling of how much one has learned. If you already know neuroscience, it would be boring and obviously irrelevant, but if you don't, it would increase the amount of new information and perhaps make the explanation feel better.

It's as if the neurosciency stuff "decorates" the article to cover up its mistakes. For the layperson, it works, but experts aren't going to be tricked by that. That's my reading of these results.

Perhaps people are quicker to criticize in their own fields, since they need to maintain the perception that they are experts, and it is well known that criticizing makes one appear smarter.

Looking at the samples provided, it seems to me that the results could be interpreted very differently. Just read the first few words of each statement:

The researchers claim that ...
versus
Brain scans indicate that ...

If these samples are typical, it is possible that the respondents favor explanations that are backed up by data, rather than explanations based on opinions. If true, this would be a less sensational but more reassuring conclusion.

I usually map knowledge and explanations as high-dimensional data to understand computational benefits. It's more a hobby than a full-fledged scientific model, but I think this phenomenon is quite interesting. As a commenter wrote above, one might be tempted to consider an explanation dressed up with a neuroscience theory better simply because one is not well versed in it. It is common that if you explain an everyday phenomenon mathematically, you get instant recognition or acknowledgement, especially from non-mathematicians. It is not that they agree, but the analysis is so perplexing that it makes one realize there is a domain of knowledge one has no idea about, and hence the most abstract analysis must be the truth.

~
Vaishak Belle
http://vaishakbelle.wordpress.com/2008/03/14/curse-of-dimensionality-an…

I agree that "Researchers claim..." is much less convincing than "Brain scans indicate...". It sounds more like hearsay than conclusions based on analyzed data.

If the two had been more similar in semantics, such as "Research shows..." or something else tied to actual data collection rather than speculation, the psychology side might seem more believable as well. Were these two statements modelled after actual scientific reporting? If so, this seems more like a problem with AP or Reuters than with psychology vs. neuroscience. (No, don't even get me started on the crap the BBC prints on science!)

My bet is that on some level or other, people have this (unconscious) feeling that it's a good thing for psychology to get support from other sciences. After all, it's been 50 years of cognitive science, and the message was always "victory through multidisciplinarity!" (or something like that).

In other words, I guess people find awesome the interaction between "social" and "natural" sciences, a divide that is product of the explanatory gap.

Or maybe we're all just morons who like pretty colored images better than boring reports... LOL

Perhaps we need some brain scan studies on people reading these good and bad explanations so we can really understand what's going on. ;-}

Remis probably has something with the "coloured pictures" explanation; brain scan evidence somehow makes explanations more concrete. Plus, there's an "oo, shiny" reaction that occurs when advanced technology - a big humming machine costing hundreds of thousands of dollars - is involved.

Prediction: look for this effect to diminish as brain scanning becomes more routine. Nobody's impressed these days when a scientist says they used a computer to process their results, because a computer isn't a big humming machine costing hundreds of thousands of dollars any more.

I can understand the perspective that "researchers claim" might seem like hearsay. However, "claim" is the first element of Toulmin's argumentation model. It's not equivalent to hearsay, it's just the statement you are asking the other person to accept. But as djc points out, the "brain scans" version brings in (possibly bogus) data, so people might find that more convincing.

I'm not entirely sure what point is being made by "even after a semester-long neuroscience course, students are still impressed by irrelevant neuroscience explanations of research." Even after a semester-long intro course, those students are still essentially novices in the field!

From a lay person's perspective, evidence seems more reliable if it comes from observation of the brain itself rather than from self-report or a researcher's impressions ("...researchers claim..."), which are more prone to subjectivity and error. Neither statement describes an actual observation, but if you were asked to read and make a quick assessment without knowing the point of the study, I would guess that those not trained in how research is done and reported would simply make assumptions based on the implication. Reporters and marketers often use implication as a tool, sometimes with well-meaning intentions like brevity and sometimes not. My guess is these results could be replicated across many professions.

Supports my belief... Neuroscience is crap. (Explanatory function, obfuscation, all that.)

Good study though!

@ Josh above.

Neuroscience would be "crap" if it relied entirely on fMRI studies to draw its conclusions. This is a common misconception about cognitive neuroscience (which many assume to be the only kind of neuroscience).

Neuroscientists study the brain at every possible level and with every possible approach: animal studies, single-cell recordings, MRI, EEG/ERP (and the related MEG), lesion studies, and the hallmark behavioural tasks (reaction-time-based tasks such as RSVP, attentional blink, etc.). It is the combination of different methods that gives cognitive neuroscience its strength. Relying entirely on fMRI has long been criticised and, in my opinion, is a pretty weak methodology (no field, including neuroscience, relies entirely on correlational methods).