What's more convincing than talking about brains? Pictures of brains!

Not long ago we discussed work led by Deena Skolnick Weisberg showing that most people are more impressed by neuroscience explanations of psychological phenomena than plain-old psychology explanations. Talking about brains, it seems, is more convincing than simply talking about behavior, even when the neuroscience explanation doesn't actually add any substantive details.

Now David McCabe and Alan Castel have taken this work on the acceptance of neuroscience to a new level: now they've got pictures! They asked 156 students at Colorado State University to read three fake newspaper articles about brain-imaging studies. For each student, one article was presented as plain text, one was accompanied by a bar graph of brain-scan results, and one was accompanied by pictures of brains. The articles covered three different topics, and an equal number of students saw each article with text only, the graph, or the brain image.

For example, in one of the fake studies, the claim was made that TV-watching is related to math ability. As evidence, students read a text explanation, or saw one of these two figures:

[Figure: the bar-graph and brain-image versions of the evidence accompanying the TV/arithmetic article]

The [fake] claim was that because the same area of the brain is activated while doing arithmetic and while watching TV, the two activities are related. The students then rated whether the scientific reasoning in the article made sense, on a scale from 1 (strongly disagree) to 4 (strongly agree). Here are the results:

[Figure: mean ratings of the articles' scientific reasoning in the text-only, bar-graph, and brain-image conditions]

Articles accompanied by brain images were rated significantly higher than the other articles, even though the fake claim in each article wasn't actually supported by the fake evidence, in whatever form it was presented.
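(For the statistically inclined, here is a rough sketch of the kind of comparison involved: made-up 1-to-4 ratings for the three presentation formats, compared with a one-way ANOVA like the analyses the authors report, treating format as a between-groups factor for simplicity. The numbers below are invented for illustration and are not the study's data.)

```python
# Made-up ratings for illustration only; not McCabe and Castel's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "the scientific reasoning made sense" ratings,
# 1 = strongly disagree ... 4 = strongly agree, roughly 52 readers per format
text_only = rng.choice([1, 2, 3, 4], size=52, p=[0.10, 0.35, 0.40, 0.15])
bar_graph = rng.choice([1, 2, 3, 4], size=52, p=[0.10, 0.33, 0.42, 0.15])
brain_image = rng.choice([1, 2, 3, 4], size=52, p=[0.05, 0.25, 0.50, 0.20])

for name, ratings in [("text only", text_only),
                      ("bar graph", bar_graph),
                      ("brain image", brain_image)]:
    print(f"{name:>11}: mean rating = {ratings.mean():.2f}")

# One-way ANOVA across the three presentation formats
f_stat, p_value = stats.f_oneway(text_only, bar_graph, brain_image)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```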

But maybe people are simply more impressed by the complexity of the brain image. In a second experiment, the researchers repeated the study, but instead of a bar graph they compared a topographic map of brain activity with a brain scan:

[Figure: the topographic-map and brain-image versions of the evidence]

Once again, the article with the brain image was rated higher for scientific reasoning.

But since these studies feature made-up data, maybe they don't apply to real studies. In a final experiment, McCabe and Castel modified a real write-up of an actual brain-imaging study, which argued that brain imaging can be used as a lie detector. Students read one of two versions of the article: one contained criticism from a brain researcher who wondered whether the technique would work in the real world, while the other omitted the criticism. Each of these groups was divided in two again, with one subgroup seeing a brain image accompanying the article and the other reading the article without any images. Here are the results:

[Figure: agreement with the article's conclusion, with and without a brain image and with and without the critical commentary]

Once again, agreement with the article's conclusion was significantly higher when a brain image was presented, even though the same evidence was presented in the text of the article, making the brain image redundant. Even more striking, while agreement with the conclusion was affected by the presence of the brain image, the presence or absence of substantive criticism had no effect. Criticism did have an effect on whether the students thought the article title ("Brain scans can detect criminals") was appropriate.

Given the power of the brain image to sway opinion, perhaps it will only be a matter of time before we start seeing brain images in advertising.

McCabe, D., & Castel, A. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343-352. doi:10.1016/j.cognition.2007.07.017


They asked 156 students at Colorado State University to read three different newspaper articles about brain imaging studies.

Does the study itemize the majors of the students? I would expect that biology majors would be less swayed by the images. Or psychology majors, folks with training in reading scientific studies. I would also hope that philosophers would be more critical, but that's a personal bias.

(The earlier study you say didn't show any significant difference; I wonder if this is the same.)

The article doesn't say what their majors are, but most participants in this sort of study are taking Intro to Psychology. At many universities, participating in research is a requirement of the class.

This doesn't mean they are psychology majors, though; lots of non-majors take that course.

Off topic: is it ethical in psychology to lie to subjects?

On topic: If these subjects were contaminated (they were expecting to be lied to), they might think brain pictures are harder to fake.

By pushmedia1 on 04 Jun 2008

I was wondering if there were other studies of this sort for other sciences? For example, if you show a picture of cells instead of a bar graph of data. Sometimes the image itself may make it more convincing.

By benedetta on 04 Jun 2008

Benedetta, I think you raise a good point but the average non-scientist is rarely required to examine pictures of cells. Brain images on the other hand are sometimes admissible in court and can have a huge impact on the jury.
For more on this topic see Mobbs D, Lau HC, Jones OD, Frith CD (2007) Law, Responsibility, and the Brain. PLoS Biol 5(4): e103 doi:10.1371/journal.pbio.0050103

In education, we know that students are far more likely to learn when pictures are used. The test should have included some type of graphical representation of the tangible results other than a brain scan. Graphs can only go so far compared to a picture or illustration.

Given the power of the brain image to sway opinion, perhaps it will only be a matter of time before we start seeing brain images in advertising.

"...That last piece of research is particularly worrisome to anti-marketing activists, some of whom are already mobilizing against the nascent field of neuromarketing. Gary Ruskin of Commercial Alert, a non-profit that argues for strict regulations on advertising, says that "a year ago almost nobody had heard of neuromarketing except for Forbes readers." Now, he says, it's everywhere, and over the past year he has waged a campaign against the practice, lobbying Congress and the American Psychological Association (APA) and threatening lawsuits against BrightHouse and other practitioners. Even though he admits the research is still "in the very preliminary stages," he says it could eventually lead to complete corporate manipulation of consumers -- or citizens, with governments using brain scans to create more effective propaganda..."

Neuromarketing: Is it coming to a lab near you?

By Tony Jeremiah on 04 Jun 2008

I also observed that numbers seem to convince more than text. Saying that "71% +/- 10% of the participants support a decision" seems to sound better than "a large majority". Maybe "seeing is believing"?

By Herman Claus on 05 Jun 2008

Interesting. More generally, it seems to me that the general public might think that psychology, up to this point, has been theories and "educated guesses" about human behaviour, but that neuroscience, with its "real pictures" of the brain, may provide the final and correct answer that psychology couldn't confirm. I realize this may not be true, but maybe it's the perception out there?

I once had a great clinical psychology professor (we're still friends) who said that people have this overwhelming desire to "know" things. For instance, a lot of people suffer from OCD, and some get a certain peace of mind (no pun intended) from knowing that it's a problem in the cingulate gyrus, one that might be corrected through a cingulotomy (kind of sort of). It's always more assuring to be able to point to an x-ray or a brain scan and say, "It's there!" even if "there" doesn't really mean anything.

I say all this as someone who firmly believes that careers are built on good visualizations of the data. ;-)

Speaking of using images to convince, the results graphs presented here do not show the full range of the scale on the Y-axis, and thus visually distort the actual results. The differences between the conditions, while statistically reliable, are not that large (about a 5% difference between controls and the brain image in the first experiment). Presenting ratings data without showing the full range of the scale does not give an accurate picture of the results. Tufte would be displeased.

Tulse,

I disagree. The scale only goes from 1 to 4, but the 2-3 range is the critical range in this experiment. A rating of 2 = "disagree" and 3 = "agree". So the difference of .2 in the ratings is quite dramatic, especially when you consider that the only difference in the stories was the graphics.

Tufte would probably prefer something that showed more of the data than the bar graph, but unfortunately I only have the information that the researchers provided in their report: bar graphs.

The scale only goes from 1 to 4, but the 2-3 range is the critical range in this experiment. A rating of 2 = "disagree" and 3 = "agree".

The "critical range" is not the issue, as the data are analyzed as if they are an interval scale from 1 to 4. To use a restricted range distorts the apparent difference, and gives the visual impression of a much larger effect. It is misleading not to present the data using the full range in the Y-axis. One wouldn't only analyze the scores between 2 and 3, so why would one restrict the range of the data presentation?

To put it another way, if they had used a 100-point Likert scale and found a reliable difference between means of 49.5 and 50.5, would you think it appropriate to graph the conditions only using the range 49 to 51?

If you think the best approach would be to compare "agree" versus "disagree", then the data should be treated as nominal, as counts of "agree" and "disagree". But given that the authors did their ANOVAs as if the ratings were interval scale data, then the whole range should be presented.

The middle of the 4-point scale is 2.5. This value represents "no bias" in responding (i.e., research participants neither agreed nor disagreed). This is what we used in the original article for the first two graphs, but we used 2.0 in the third graph (an experiment that was encouraged by reviewers) because one of the means was lower than 2.5. But note that the full scales are rarely used when presenting data. Figures are intended to draw readers' attention to the critical differences in the results. For example, if there were differences in errors on a task, such that one group had error rates of 4% and another 8%, should the authors/editors really use a 100% scale? Note that the effect size in this example might be huge, but it would look like there was no difference between groups if the whole scale were used in presenting the results. All that said, in this study the effect sizes were not large, though they were respectable and were replicated multiple times, and we noted this in the General Discussion of the article.
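To see the visual effect both sides are describing, here is a minimal matplotlib sketch; the 2.7 and 2.9 means are invented placeholders rather than the paper's actual values, chosen only to show how the same 0.2 difference reads on a zoomed axis versus the full 1-4 scale:

```python
# Invented means (2.7 vs 2.9): placeholders to show the effect of axis range, not the paper's data.
import matplotlib.pyplot as plt

conditions = ["No brain image", "Brain image"]
means = [2.7, 2.9]  # ratings on a 1 (strongly disagree) to 4 (strongly agree) scale

fig, (ax_zoom, ax_full) = plt.subplots(1, 2, figsize=(8, 3))

# Left: zoomed axis like the article's figures; the 0.2 gap looks dramatic
ax_zoom.bar(conditions, means)
ax_zoom.set_ylim(2.5, 3.0)
ax_zoom.set_title("Zoomed 2.5-3.0 axis")
ax_zoom.set_ylabel("Mean agreement rating")

# Right: the full rating scale; the same 0.2 gap looks modest
ax_full.bar(conditions, means)
ax_full.set_ylim(1, 4)
ax_full.set_title("Full 1-4 scale")

plt.tight_layout()
plt.show()
```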

Wouldn't error bars on the graph solve this problem simply and quickly? Then the scale wouldn't matter, because the error bars would be scaled too. I'm sure the article made this clear in the results section, but a couple of whiskers on top of each bar would allow the reader to perceive critical differences regardless of scale.
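For what it's worth, adding those whiskers is a one-argument change in most plotting libraries. A minimal sketch, again with invented means and standard errors rather than anything from the paper:

```python
# Invented means and standard errors; purely to illustrate adding error bars to a bar graph.
import matplotlib.pyplot as plt

conditions = ["Text only", "Bar graph", "Brain image"]
means = [2.70, 2.75, 2.90]   # hypothetical mean ratings on the 1-4 scale
sems = [0.05, 0.05, 0.05]    # hypothetical standard errors of the mean

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(conditions, means, yerr=sems, capsize=4)  # yerr draws the whiskers
ax.set_ylim(1, 4)  # full scale; the whiskers now carry the reliability information
ax.set_ylabel("Mean agreement rating (1-4)")
plt.tight_layout()
plt.show()
```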