Stereotype Threat Happens in the Brain

That's it! I'm never reading another imaging paper again, ever. OK, I might read one or two, and I might even post about them, but for now I'm telling myself, for my own sanity, that I'm never, ever, under any circumstances, going to read another imaging study. If you read my last post, or have been hanging around here for a while, you may have realized that I'm not a big fan of cognitive neuroscience. More often than not (I'd argue, always), you can learn the same thing and more by doing behavioral studies, and in most cases it'll cost you several hundred dollars less per participant. For example, I recently saw a talk by a famous memory researcher, who's written a couple popular books on memory and its infelicities. In the talk, he presented research supporting an interesting hypothesis about the relationship between memory and thinking about the future. He had really interesting behavioral and neuropsychological data that, as far as I'm concerned, provided ample support for his hypothesis. But for whatever reason (perhaps because these days some people need to see imaging data to be convinced?), the bulk of his talk was on imaging studies that, aside from being poorly controlled (how do you control for all the aspects of thinking about the past and the future in an imaging study? I mean, really!), didn't tell us anything he hadn't already told us with his behavioral and neuropsychological data. Ugh.

But a paper in the February issue of Psychological Science really takes the cake. The paper, by Krendl et al. (1), is titled "The Negative Consequences of Threat: A Functional Magnetic Resonance Imaging Investigation of the Neural Mechanisms Underlying Women's Underperformance in Math," and as the title suggests, uses fMRI to investigate where in the brain stereotype threat occurs, at least when women are doing math. In case you don't know, stereotype threat is the term used to describe the detrimental effects that awareness of negative stereotypes can have on performance. So, for example, when African American students are aware of stereotypes that associate black students and poor academic performance, they tend to perform worse academically, and when women are made aware of stereotypes suggesting that they're not as good at math as men, they tend to do worse in math.

There is a ton, a ton of behavioral data now demonstrating that stereotype threat is real and can have a significant impact on performance. I've discussed that evidence at length before (here and here, for example), so I won't go into it again in this post. I'll just note that, despite the many demonstrations of the existence of stereotype threat, we still don't have a really good explanation for why it occurs. That is, we don't know the mechanisms underlying the effect. Some recent evidence points to the important role of verbal working memory. Specifically, math problems that create heavy working memory loads are more affected by threat manipulations than problems that are less working memory intensive(2). Regulatory fit may also play a role, perhaps by affecting working memory, but who knows(3)? At this point, though, any study that sheds light on the causes of stereotype threat effects would be valuable.
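To make the working memory point concrete, here's a toy sketch in Python. It's my own illustration, not Beilock et al.'s actual manipulation; the idea is just that one rough proxy for an arithmetic problem's working memory load is whether it forces you to hold a carried digit in mind:

```python
# Toy illustration (mine, not Beilock et al.'s stimuli): addition problems
# that require carrying a digit make you maintain an intermediate result,
# a rough proxy for higher working-memory load.

def requires_carrying(a: int, b: int) -> bool:
    """Return True if computing a + b requires carrying a digit."""
    while a > 0 and b > 0:
        if a % 10 + b % 10 >= 10:
            return True
        a, b = a // 10, b // 10
    return False

for a, b in [(24, 35), (47, 38), (126, 245)]:
    load = "high" if requires_carrying(a, b) else "low"
    print(f"{a} + {b}: {load} working-memory load")
```

On this view, a threat manipulation should hurt problems like 47 + 38 more than problems like 24 + 35, which is the flavor of result Beilock et al. report.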

What wouldn't be valuable, though, would be an imaging study. Especially one like this. In Krendl et al.'s study, they induced threat in women by having them perform an Implicit Association Test (oh, brother!) that forced them to associate gender and math; participants were told in the instructions that the experimenters were using this task because of the existence of gender-math stereotypes. So participants who took the gender-math IAT were in the "threat" condition, while another set of participants took an unrelated version of the IAT and were thus in the "non-threat" condition. Next, the participants got into the fMRI machine and took a math test that involved viewing equations (5 x 2 - 3 = 7) and indicating as fast as they could whether the equations were true or false. All the while, they were having pictures of their brain taken.
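In case it helps to picture the task, here's a minimal sketch of how one might generate equation-verification trials like the ones described above. The function name, number ranges, and 50/50 true/false split are my assumptions, not Krendl et al.'s actual stimulus code:

```python
# Minimal sketch of an equation-verification trial (my assumptions, not
# Krendl et al.'s methods): build an 'a x b - c = d' equation that is
# true on roughly half the trials.
import random

def make_trial(rng: random.Random) -> tuple[str, bool]:
    a, b, c = rng.randint(2, 9), rng.randint(2, 9), rng.randint(1, 9)
    answer = a * b - c
    # On half the trials, show a lure that is off by 1 or 2.
    shown = answer if rng.random() < 0.5 else answer + rng.choice([-2, -1, 1, 2])
    return f"{a} x {b} - {c} = {shown}", shown == answer

rng = random.Random(42)
for _ in range(3):
    equation, is_true = make_trial(rng)
    print(f"{equation}  ({'true' if is_true else 'false'})")
```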

All that's fine, I suppose, but the resulting data suffers from two serious problems. The first, and most important, problem is that they didn't get the stereotype threat effect. Let me repeat that. They didn't get the stereotype threat effect!!! OK, now if you're going to explain an effect, or at least argue that it occurs in certain regions of the brain, using imaging data, it would be a good idea to first get the effect. But no! Here's a graph of their data (from Krendl et al.'s Figure 1, p. 171):

[Krendl et al.'s Figure 1 (p. 171): mean accuracy by condition, before and after the manipulation]

This is the accuracy data, and the first thing to notice is that Krendl et al.'s participants sucked at this task. The highest mean accuracy in any condition was just over 20%. Who knows why this is: perhaps the task was really difficult, perhaps the five seconds they were given to respond wasn't enough time, or perhaps they were anxious about being surrounded by a giant, super-powerful magnet. Who's to say? But that's not really important. What's important is that if you look at the two bars on the right, the bar on the far right is slightly lower than the one to its left. This indicates that participants in the threat condition performed slightly worse after the threat manipulation than they did before it. Unfortunately, however, this difference wasn't quite statistically significant. In other words, the threat effect itself was not statistically significant! So, they're basing their imaging data conclusions on a non-significant behavioral effect. Oops.
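Just to spell out what's at stake statistically, the question is whether the threat group's accuracy dropped reliably from the first test to the second, which is a paired t-test. The numbers below are invented for illustration; they are not Krendl et al.'s data:

```python
# Hypothetical illustration (invented numbers, NOT Krendl et al.'s data):
# a paired t-test on accuracy before vs. after the threat manipulation.
from scipy import stats

pre  = [0.22, 0.18, 0.25, 0.20, 0.17, 0.23, 0.19, 0.21]  # accuracy, first test
post = [0.20, 0.19, 0.22, 0.21, 0.15, 0.24, 0.17, 0.20]  # accuracy, second test

t, p = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.3f}")  # if p > .05, the drop isn't significant
```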

Now to the imaging data. Krendl et al. found that, relative to the non-threat participants, participants in the threat condition showed less activation in the left inferior prefrontal cortex, the left inferior parietal cortex, and the bilateral angular gyrus during the second test (i.e., the one after the threat-inducing manipulation). These areas, it turns out, are associated with doing math. So what they've found, in essence, is that people who do better on math problems (i.e., those in the non-threat condition) show more activity in math-related brain areas than those who don't (in this case, those in the threat condition). That's, well, not exactly ground-breaking. And it doesn't tell us anything about stereotype threat. They did find greater activation for participants in the threat condition, relative to those in the non-threat condition, in the ventral anterior cingulate cortex. This area is associated with negative emotions. So, we know that when people are doing worse on a math test, they have more negative emotions. Again, we haven't learned anything about threat.

So, to summarize, this study didn't get a stereotype threat effect and only showed differences in brain activation associated with, well, the consequences of threat, rather than with the mechanisms by which threat produces them. In short, then, we've learned that stereotype threat happens in the brain, we're just not sure where, or how, or when. I say again, ugh. I wonder how much the imaging data for their 28 subjects cost, because the research was funded by the NSF, and therefore your tax dollars and mine. I'm never reading another imaging study again.


1Krendl, A.C., Richeson, J.A., Kelley, W.M., & Heatherton, T.F. (2008). The negative consequences of threat: A functional magnetic resonance imaging investigation of the neural mechanisms underlying women's underperformance in math. Psychological Science, 19(2), 168-175.
2Beilock, S.L., Rydell, R.J., & McConnell, A.R. (2007). Stereotype threat and working memory: Mechanisms, alleviation, and spillover. Journal of Experimental Psychology: General, 136, 256-276.
3Grimm, L.R., Markman, A.B., Maddox, W.T., & Baldwin, G.C. (Under Review). Stereotype threat reinterpreted as regulatory mismatch. Journal of Personality and Social Psychology.


Yes, there are a lot of lousy fMRI studies, and they probably do get a disproportionate amount of publicity, but I have yet to be convinced that the percentage of bad studies is much different from many other sub-areas of science.
Still, your tarring all of fMRI and most of cognitive neuroscience based on a couple of bad studies is pretty impressive.
I'm not sure I get your critique of cognitive neuroscience. Are you saying that once we know an effect exists, there's no reason to try to understand how that effect manifests itself in brain function? For example, once we identify some aspect of illusory motion, is it irrelevant whether that illusion happens in primary visual areas or higher brain regions? The field of cognitive neuroscience, which uses patient studies, animal work, and many types of imaging methods to better understand how brain function links to behavior and perception, is pointless?
Focusing just on fMRI, I have a general dislike of studies whose sole point is to say "we found the brain region for X."
Though the blog post is mediocre, what do you think of http://scienceblogs.com/notrocketscience/2008/03/the_machine_that_ident…
The fact that some basic rules about the construction of primary visual cortex are sufficient to partially reconstruct viewed novel images from fMRI spatial response patterns is pretty amazing. It shows how much informational complexity is captured by just those few rules, in a way that would have been impossible to demonstrate with psychophysics alone or even neural network modeling.
In general, most of the people I know who were pure psychophysicists have at least partially branched into fMRI. The reason is simple. They can use the old tools to answer certain questions, but imaging opens up entirely new areas of discovery.

I suspect if you want to read fMRI studies that don't raise your blood pressure, pull up the names of psychophysics people you respect and see what some of them are doing with imaging.

That's astonishing. Are those error bars confidence intervals or standard errors?

If it's standard errors, then clearly they're not even close to significance.
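The distinction matters because a 95% confidence interval is roughly twice as wide as a standard-error bar, so "the bars don't overlap" means very different things in the two cases. A quick sketch (the numbers, including n, are made up):

```python
# Made-up numbers: converting a standard-error bar into a 95% CI.
from scipy import stats

mean, se, n = 0.19, 0.015, 14
t_crit = stats.t.ppf(0.975, df=n - 1)     # two-tailed 95% critical value (~2.16)
print(f"SE bars:     {mean:.3f} +/- {se:.3f}")
print(f"95% CI bars: {mean:.3f} +/- {t_crit * se:.3f}")
```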

Dave, hard to tell. They don't actually put that info in the label, or anywhere else that I can find.

Bsci, my critique of imaging studies is pretty simple, but I will confess that I don't include studies of vision (audition is in play, though, because even most psychophysicists I know will admit that studying audition with imaging data is really, really messy). I don't include vision because we know a hell of a lot -- a hell of a lot -- about the visual system, and particularly the early visual system, through single-cell recording data. So you can map the imaging data onto that data and get a pretty good picture of what's going on.

However, when it comes to cognition, the story is very different. We don't have much single-cell recording data related to higher-level processing outside of the visual system or neighboring areas, and while neuropsychological data (lesion studies, e.g.) can tell us a lot about localization, without the systems data and data about how particular regions function (like, say, our knowledge of hyper-columns in V1), imaging data doesn't tell us very much.

These, then, are the main problems with imaging data:

1.) Localization tells us little about mechanism, and co-localization tells us little about common mechanisms, because we don't have the systems and functional data to back it up.
2.) Imaging studies are almost always parasitic on behavioral data. Still, I sometimes think that imaging data can provide a fairly indirect method for hypothesis testing. For example, if two tasks are thought to recruit the same mechanisms, but it turns out that they don't show activation in the same regions, then they probably don't recruit the same mechanisms. However, in order to draw the conclusion from imaging data that they do recruit similar mechanisms, you'd have to conduct the sort of carefully controlled study that it's very difficult to run in an imaging machine, particularly when it comes to higher-order cognition. This means that imaging studies turn out to be pretty good at helping us to reject hypotheses, but terrible at supporting them. In the end, you just have to get more behavioral data.
3.) There are still well documented problems with the sorts of subtraction methods used in imaging research.
4.) Imaging research is very expensive, so the first question any researcher asks should be, "Can I test this hypothesis through behavioral data just as well?" I don't think I've ever seen a cognitive study in which the answer wasn't, "Yes, I can."
5.) I'll say it again: control. Because, with any higher-order cognitive task, there is a ton of stuff going on at once, it's very difficult, even in behavioral studies, to isolate particular processes/mechanisms. It becomes even more difficult inside an imaging machine, where only simple tasks are possible (and there's a giant, loud magnet swirling around your head, say). This means that a lot of cognitive stuff suffers from less than carefully defined tasks. Not everything, mind you, but most things.

Chris, why do you think there are all these imaging studies of late? Is it because they are relatively easy? Because it is at the moment novel and allows the "publish or perish" mentality to flock to it? Do you see the fad dying off any time soon?

Clark, I used to think it was just a fad, and would soon die. Several years later, it's still around, and hiring rates for cog neuro people have gone up, so I'm less convinced it's just a fad. I even know grad students in cog psy who feel like they need to get some cog neuro experience to get a job.

I'm not sure what's caused this, exactly. On the one hand, there is an obvious air of respectability and even scientific legitimacy that comes with neuroscience. Then there's the fact that most of us are at least implicit physicalists, and by and large, reductionists. It's not surprising, I suppose, that a field that embraced connectionism largely because of its purported neural plausibility would embrace imaging because it's actually looking at the brain. But these don't seem sufficient by themselves to cause so much time and money to be put into imaging. So I'm still somewhat confused.

Using fMRI to study stereotype threat is sort of like using mass spectrometry to detect a fly in your soup. Not all cog neuro studies are as pointless as Krendl et al.'s, not by a long shot. However, the ones that are just make me shake my head at the time and money some researchers waste in order to blind their colleagues with the gleam of fancy instruments.

Chris, I'll try to respond to your 5 points.
#1. Localization by itself does tell us about the organization of the brain. For decades, the core of animal neurophysiology was to localize function: figuring out what processes occur in each brain region. Only after localization can one really dig into the mechanisms of how each region works.
One example of this is the dense and overly published area of object categorization (i.e., finding brain regions that respond more strongly to faces/scenes/homes/cars, etc.). At the first level, this is not very useful, and the localization of an infinite number of object categories is slowing down, but now that we have a better concept of how these different objects are localized, we're building larger models of how the brain categorizes more general sets of objects, and even of what goes wrong in various clinical disorders. This research has gotten so rich that even neurophysiologists are looking back into these regions to try to better understand what is happening.

Also, regarding localization as an end in itself: speak to neurosurgery patients and ask whether improved fMRI and MEG localization, done before they have their skulls opened on the table, is something worthwhile.

#2. I really don't get your contrast between behavioral data and fMRI, and your calling the latter parasitic. Yes, good fMRI studies are built off of behavioral data. For that matter, good neurophysiology, EEG, and surgical stimulation mapping studies are built off of behavioral data. When done right, a behavioral study and pilot data are used to figure out the general aspects of a topic, but at a certain point a solely behavioral model cannot be proven or disproven. For some of these questions, imaging can be used to add or decrease support for a model. If an fMRI study refutes a behavioral model, that's an advancement of science and a good thing. This is why many people I know who were trained in psychophysics and neurophysiology do imaging today. They needed a way to push and test their models further than they could with only the older tools.

#3. Subtraction has limitations, but the tool is used in more than fMRI. Behavioral studies regularly have two conditions and try to find significant differences in reaction time, accuracy, and response biases. Is subtraction OK in these cases, but not in fMRI? I know very well that fMRI signal modeling is weak, but most of the weaknesses would make significant findings less likely. Also, not all fMRI studies involve subtraction; more and more are using various forms of connectivity maps.
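To make the parallel concrete, here's a toy sketch: the subtraction logic is identical whether the dependent measure is a reaction time or a voxel's signal. The arrays are random stand-ins, not real data:

```python
# Toy sketch (random stand-in data): the same subtraction logic applied to
# behavioral reaction times and to a voxelwise fMRI contrast.
import numpy as np

rng = np.random.default_rng(0)

# Behavioral subtraction: mean reaction-time difference between conditions.
rt_a = rng.normal(650, 80, 20)                # RTs (ms), condition A
rt_b = rng.normal(600, 80, 20)                # RTs (ms), condition B
print(f"RT contrast (A - B): {rt_a.mean() - rt_b.mean():.1f} ms")

# fMRI subtraction: the same contrast, computed at every voxel at once.
vox_a = rng.normal(1.0, 0.3, (16, 16, 16))    # mean signal per voxel, condition A
vox_b = rng.normal(0.9, 0.3, (16, 16, 16))    # mean signal per voxel, condition B
contrast = vox_a - vox_b                      # the voxelwise "A minus B" map
print(f"Largest voxel difference: {contrast.max():.2f}")
```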

#4. Yes, imaging is expensive, and a lot of this money is wasted on poorly designed studies like the one you comment on here. Still, at the very least, understanding how brain structure links to behavior requires more than behavioral studies. I've also given some other examples above. In addition, it is expensive, but I've yet to see a true head-to-head comparison of fMRI costs to neurophysiology costs. A primate study requires a highly trained person to spend months or years training 2 or 3 monkeys for the sake of 100 data points in a single study. An fMRI study could take a couple of weeks of planning and a couple of weeks to collect data. Is neurophysiology a worthwhile expense, or would you prefer a world with only behavioral data?

#5. Control is an issue with any study. Good studies are designed within the limitations of the method. There is noise, but nothing is swirling around. You can have an extremely well-controlled visual presentation system and collect response times as accurately as outside a scanner. If subjects are performing a task differently than they would outside the scanner, that would be observed and accounted for in good analyses. As for the noise, some people use it to their advantage: there are studies of phoneme and word comprehension in noisy environments where the scanner is used to provide the noise, and the word volume is adjusted to alter the signal-to-noise level.

I can write more on most of these points, but I figure this is long enough for now.

I also noticed an old post of yours:
http://scienceblogs.com/mixingmemory/2006/06/a_lot_of_people_in_white_c…
If you've been reading this blog for a while, you probably know what my attitude towards cognitive neuroscience is: in most cases, it tells us little more than that cognition happens in the brain. For the most part, I see cognitive neuroscience as a fad that I hope will soon die off, or merge into psychology and neuroscience proper.

I'm curious if you still believe this. It seems like your dislike is of cognitive neuroscience as a whole, and less about fMRI in particular. I find this interesting because it definitely doesn't seem to be the direction the field is going. If anything, the borders between psychology, cognitive science, cognitive neuroscience, and animal neuroscience are becoming more fluid. You seem to prefer a time when there was little interaction between the realms of behavior-based cognitive science and brain-focused neuroscience. When fields and departments are merging and communicating better than ever, I wonder why you prefer this separation.

I also wanted to comment on the fad issue. fMRI really is being used more than necessary now, but I think it was even worse 5 years ago. There are more studies now, but the average study is of much higher quality. I suspect at some point funding for the whole area will shrink, but it will still be large. As for the bad studies, I've yet to be convinced that the bottom 20% of behavioral studies are any better than the bottom 20% of fMRI studies. (Those fMRI studies probably get more publicity, though.)

"Subsequent t-tests indicated that the interaction emerged because the performance of control participants improved significantly over time, t(13) 5 2.81, p < .02, whereas the threatened groupâs performance decreased slightly over time, t(13) 51.98, p = .07."

Doesn't this show the control group did better than the threat group? Also, remember that the p-values you're hanging your hat on are a function of the sample size, which, in this study, was fairly small.
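To put numbers on the sample-size point: hold the effect size fixed at what t(13) = 1.98 implies for a paired design (d = t / sqrt(n)), and the same effect crosses the .05 threshold as soon as the sample gets modestly larger. A quick illustration:

```python
# The same effect size at larger samples: the p-value shrinks even though
# the effect itself is unchanged.
from scipy import stats

d = 1.98 / 14 ** 0.5                     # Cohen's d (~0.53) implied by t(13) = 1.98
for n in (14, 30, 60):
    t = d * n ** 0.5                     # t this effect would produce at sample size n
    p = 2 * stats.t.sf(t, df=n - 1)      # two-tailed p-value
    print(f"n = {n:2d}: t = {t:.2f}, p = {p:.3f}")
```

At n = 14 this reproduces the reported p of about .07; by n = 30 the identical effect would be comfortably significant.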