Twelve years ago, Greta and I were awakened by a rattling on the door of our Bronx apartment. It was about three A.M.; our children were asleep in the next room. “What should I do?” Greta whispered to me. She had woken first and was holding the deadbolt on the door locked so the intruder couldn’t get in.
“Call the police,” I whispered, and took hold of the lock. I ventured a peek through our peephole. I could see only the grizzled razor stubble of a man who was clearly shorter than I was. He continued to struggle with the door. He was making progress picking our lock — I had to forcefully resist to keep the lock from turning. As I heard Greta talking with the 911 operator on the phone in the other room, I grew bolder. “Who’s there?” I asked, in as gruff and aggressive a voice as I could manage. As soon as he realized there was someone in the apartment, he was gone.
About 30 seconds later, the police appeared at our door. They had been less than a block away when they received the dispatcher’s call, had already searched the stairwells, and found no one. We told them our story, and they asked for a description. I told them about the man’s height and the razor stubble. “Did you notice anything else?” the officer asked.
“No,” I responded.
“What about race — was he black?”
As I replied “no,” I could see an expression of perplexed astonishment cross the officer’s face. Was it possible that the burglar had walked right past the police, and they had assumed this couldn’t be the guy, since he was white? I’ll never know the answer, because the police didn’t say anything more to me about race. They never caught the intruder.
Let’s suppose the officer actually had walked right by the offender: could we then say that he was racist? After all, he was probably playing the odds — it’s likely that there were more black criminals than white criminals in our area at that time. He never expressed an explicit racial bias to me: he was simply trying to obtain an accurate description of the perpetrator.
A team of researchers at Harvard University has developed another measure of bias, which has been widely reported in the popular press: Project Implicit. As the name of the project suggests, it seeks to measure “implicit attitudes” — biases that people don’t express overtly. The method has been used to measure all sorts of bias: gender, age, even science versus humanities classes. But not surprisingly, racial biases have caught the lion’s share of the attention: a recent article in U.S. News and World Report, for example, suggests ways to counter implicit racial biases. The libertarian blogger “Winterspeak” takes a dim view of implicit bias research, suggesting that such biases are merely the result of rational decisions based on knowledge such as “blacks are more likely to commit crimes.”
But what about when our “knowledge” about racial differences isn’t true? I had a humbling moment a few months ago when I admitted on Cognitive Daily that I was surprised that African American kids are less likely to do drugs or consume alcohol than white kids. How much real knowledge do most people have about racial differences? Could knowledge really be the sole motivator of implicit bias?
Andrew Scott Baron and Mahzarin Banaji recently conducted a study that offers some tentative answers. They gave their racial implicit bias test to white middle-class kids aged 6 and 10, as well as adults. You can try the test for yourself at Project Implicit, but here’s a quick summary of how it works. First, you’re shown pictures of black faces or white faces: the task is to press a button as quickly as possible when you see each face (E for a black face or I for a white face). Next you’re shown a set of words, some good and some bad (love, joy, friend, hate, vomit, bomb), and again, you’re asked to press a designated key for each type. Finally, the tasks are combined: “When you see a black face or a good word, press the E key” and “When you see a white face or a bad word, press the I key.” Then the tasks are reversed, so good words are associated with white faces and bad words are associated with black faces. Reaction times are measured, and when a particular association results in a faster response time, participants are said to have an implicit attitude preferring that association.
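The reaction-time comparison at the heart of the procedure can be sketched in a few lines of code. This is a deliberately simplified illustration, not Project Implicit’s actual scoring algorithm (the real test uses a more elaborate scoring procedure); the reaction times and the function name here are invented:

```python
from statistics import mean

# Simplified sketch of IAT-style scoring: compare mean reaction times
# (in milliseconds) between the two combined-task pairings.
# All reaction times below are invented for illustration.

def iat_effect(rt_white_good, rt_black_good):
    """Positive value = faster responses when white faces share a key with
    good words, read as an implicit preference for white over black faces."""
    return mean(rt_black_good) - mean(rt_white_good)

# Hypothetical reaction times from one participant:
white_good = [620, 580, 605, 590]  # white+good / black+bad pairing
black_good = [710, 690, 725, 700]  # black+good / white+bad pairing

print(f"IAT effect: {iat_effect(white_good, black_good):.1f} ms")
# prints "IAT effect: 107.5 ms"
```

The key idea is simply that the faster pairing is taken to reflect the stronger implicit association.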
In this case, the test was modified for the smallest children so that instead of words appearing on the screen, recorded words were played for them. Here are the results:
For every age group, the association of white faces with good words was stronger than the association of black faces with good words: an implicit bias for white faces over black faces. The bias must have formed before the age of six, and is undiminished in adulthood. To make sure everyone understood the task, a similar test was given to measure preference for insects versus flowers. Everyone except six-year-old boys said they preferred the flowers, but when the preference was measured with the implicit task, even the boys showed an implicit bias for flowers.
But Baron and Banaji didn’t stop with measuring implicit preferences. They also performed an explicit preference task, in which participants were asked overtly whether they preferred a white face or a black face. Here are the results:
Unlike the implicit task, these results do change over time, with each age group differing significantly from the others, and adults showing an equal preference for black and white faces. Though the implicit biases persist into adulthood, explicit biases appear to have been extinguished.
This data certainly is compatible with the idea that people can claim they are “not racist,” when their actions appear to contradict that notion. But what of Winterspeak’s criticism: “The Implicit Project implicitly assumes that any differentiation between blacks and whites is racist”? That’s a difficult notion to defend. Winterspeak offers no data in support of her/his claim, while Project Implicit can demonstrate that people’s actions differ from their words. There’s no mention of racism at all in Baron and Banaji’s report, or in Banaji’s quotes in the U.S. News article.
Worst of all, when people justify racial discrimination based on “knowledge” of racial differences, they are making two assumptions: that their knowledge of the stereotype is correct, and that the individual in question conforms to the stereotype. In three of the four possible cases (inaccurate stereotype, individual doesn’t conform to the stereotype, or both), their judgment is not only incorrect but immoral. In the remaining case — since the person can’t know for certain whether the individual he or she discriminates against conforms to the stereotype — it is merely immoral.
Isn’t it better to accurately know what your implicit biases are, and to try to adjust your behavior accordingly?
Baron, A.S., & Banaji, M.R. (2006). The development of implicit attitudes: Evidence of race evaluations from ages 6 and 10 and adulthood. Psychological Science, 17(1), 53-58.