Neuroscientific Evidence for the Influence of Language on Color Perception

You know, just the other day, on this very blog, I swore I would never read another (cognitive) imaging paper again, but between then and now, I've read 5 or 6, so apparently my oath didn't take. It's sort of like my constantly telling myself, as I ride the bus to campus in the morning, that I'm going to stop drinking coffee. As soon as I get off the bus, I walk 30 or so feet to the little coffee stand where they have my 16 oz. coffee waiting for me, 'cause they know as well as I do that I ain't quittin'. Cognitive neuroscience is like coffee.

Anyway, one of the imaging papers I've read since swearing off cognitive neuroscience altogether was published just last week in the Proceedings of the National Academy of Sciences (PNAS, pronounced like... well, you can guess what it's pronounced like), and is an imaging study on linguistic relativity. For blogging purposes, such a paper is doubly awesome, because it gives me an opportunity to blog about two of my favorite topics: 1.) The influence of language on thought and perception, and 2.) How much cognitive neuroscience sucks. And I can do both by presenting previous studies in contrast to last week's PNAS (pronounce it as you read, it makes this post funnier) paper. So I'll start with research published way back in the year 2006.

I've written a lot about linguistic relativity (a soft version of the Sapir-Whorf hypothesis) in the past (see here, here, and here), so I won't go into it in too much detail here. For now it will do simply to say that linguistic relativity has been a hot topic off and on since the first half of the 20th century, and each time it's become hot again, one of the main focuses has been on the influence of language on color perception. If you can show the influence of language on, say, temporal reasoning, that's interesting, but it's conceptual, and we know that words and concepts are pretty intertwined. However, if you can show that language influences low-level perception, like color perception, then you will have demonstrated something exciting. In the 1960s, there was a bunch of research suggesting that color words do influence color perception, but in the late 60s and early 70s, further research suggested this was not the case. Then, in the 2000s, researchers revisited the question, and again found evidence that color words influence color perception in a variety of different tasks.

At this point, at least until another Eleanor Rosch comes around, the evidence for some sort of interaction between language and color perception is pretty strong. The main problem in interpreting this evidence, and most of the evidence related to linguistic relativity more generally, is that it is difficult, if not impossible, to tease apart linguistic and cultural influences. The key to doing so would be to make some sort of prediction about the interaction of color terms and color perception that relies on our knowledge of the unique properties of language processing. If you can provide support for predictions like that, then you can make a pretty good case that the influence of language is direct, rather than mediated by cultural differences that are correlated with linguistic differences.

This brings us to the neuroscience. The one part of the brain that we know a whole hell of a lot about is the visual system, and the early visual system in particular. Neuroscientists can basically tell you exactly what happens to visual information from the time a photon hits a photoreceptor in the back of the retina to the time it reaches the visual cortex, and beyond (notable exceptions are the amacrine cells, the functions of which are a bit of a mystery). For example, we know that information from the right visual field, gathered by both eyes, crosses over at the optic chiasm and ends up in the left hemisphere's visual cortex. Information from the left visual field goes in the opposite direction.

When it comes to things outside of the visual system, we know considerably less. However, if there's one area that we know more than a little bit about, it's language processing. Most importantly, for our purposes, we know that for right-handers, the left hemisphere is doing the bulk of the language processing work. Knowing this, combined with our knowledge of where visual information from each visual field gets processed, we can make a prediction about how language will affect perception. That is, we can predict that, because information from the right visual field ends up being processed on the left side of the brain, and language is, for the most part, processed on the left side, we should see stronger effects of language on perception for information that comes in through the right visual field. And over the last couple of years, a series of papers has been published presenting studies that test this prediction.

The first paper, by Gilbert et al.(1), used a simple visual search paradigm. This involves putting a target stimulus in an array with a bunch of distractors. In this case, the targets were squares of a particular color, and the distractors were squares of a different color. In some cases, the distractors and target shared the same color label (e.g., "blue"), while in others they had different labels (e.g., "blue" and "green"). Research in a bunch of different domains has shown that it's easier to discriminate members of different categories than members of the same category, even when the perceptual distance between the two is the same, a phenomenon usually called categorical perception. In this case, it should be easier to discriminate "blue" from "green" than "blue" from "blue," even when the difference between the shades of blue is the same as the difference between the "blue" and "green" shades. Previous research using the visual search paradigm has shown that people are faster at finding targets among perceptually similar distractors when target and distractors are from different color categories than when they're from the same category(2). The twist in Gilbert et al.'s study is that half the time, the target appeared in the right visual field (i.e., was initially processed by the left hemisphere), and half the time it appeared in the left. If the labels really are affecting color perception, then we'd expect to find the categorical perception effect much more strongly for targets presented in the right visual field than for those presented in the left.
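
If it helps to see the shape of that prediction laid out explicitly, here's a toy simulation (in Python) of the data pattern the hypothesis predicts. Every number in it is invented; this is the predicted pattern, not Gilbert et al.'s actual data or analysis.

```python
# Toy simulation of the predicted pattern: a between-category search
# advantage in the right visual field (which projects to the
# language-dominant left hemisphere), but little or none in the left.
# All numbers are made up for illustration.
import random

random.seed(0)

# Hypothetical condition means (reaction times in ms).
predicted_means = {
    ("RVF", "between"): 430,  # different labels + left hemisphere: fastest
    ("RVF", "within"):  470,  # same label: no categorical boost
    ("LVF", "between"): 465,  # right hemisphere: labels shouldn't help...
    ("LVF", "within"):  468,  # ...so within and between come out about equal
}

def simulate_rt(field, relation, sd=40.0):
    """One trial's reaction time: condition mean plus Gaussian noise."""
    return random.gauss(predicted_means[(field, relation)], sd)

# 200 simulated trials per cell; the categorical perception (CP) effect
# is the within-category mean minus the between-category mean.
for field in ("RVF", "LVF"):
    means = {}
    for relation in ("between", "within"):
        rts = [simulate_rt(field, relation) for _ in range(200)]
        means[relation] = sum(rts) / len(rts)
    print(f"{field}: CP effect = {means['within'] - means['between']:.0f} ms")
```

Run it and you should get a sizable positive CP effect for the RVF and one near zero for the LVF, which is exactly the interaction Gilbert et al. were looking for.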

Of course, that's what they found. Participants' reaction times were significantly faster for between-category target-distractor searches than for within-category searches when the targets were in the right visual field, but there was no difference between between- and within-category searches for targets presented in the left visual field.

In their second study, Gilbert et al. gave participants a verbal interference task (silently repeating an eight-digit number), and the effect for the right visual field reversed: between-category searches now took longer than within-category searches. The opposite pattern appeared in the left visual field, though there the difference between within- and between-category searches was not significant. This suggests that it really is the category label that is causing the categorical perception effect, because the verbal interference task does just what it says: it interferes with language processing. Since this processing takes place primarily in the left hemisphere, it should only affect targets presented in the right visual field, as it did in Gilbert et al.'s study.

Similar studies by Drivonikou et al.(3), one using a visual search task with more color categories and more distractors, and one asking participants to indicate whether a colored dot differed from a colored background, showed the same visual field effects. Below is a graph from one of their studies (their Figure 2, p. 1099), which clearly illustrates the effect of visual field (RVF = right visual field, LVF = left visual field).

[Image: Drivonikou et al., Figure 2c — reaction times by visual field (RVF vs. LVF).]

In perhaps the coolest of the papers in this line of research, Gilbert et al.(4) conducted another visual search task, but this time they used non-color categories, like animals (e.g., dogs and cats). In this case, there'd be a bunch of cats in a circle, and one dog (see below, from their Figure 2, p. 3), and the task is to indicate which side of the circle the dog is on. As in the previous studies, the dog was either in the right or left visual field, and we would expect that the effect of label (i.e., the faster times for between-category searches) would be stronger in the right visual field than the left.

[Image: Gilbert et al.'s search array (their Figure 2) — a circle of cats with a single dog target.]

As in the color perception studies, the categorical perception effect was significantly stronger in the right visual field than in the left, and it disappeared when participants were given a verbal interference task.

Now, for me, those studies are pretty convincing. In each case, the effect was stronger when perceptual processing took place in the same hemisphere where language is processed, and the effects disappeared when you interfered with language processing. That seems like pretty direct evidence that language is influencing categorical perception in color and other domains. But why be satisfied with convincing evidence when you've got an fMRI machine and twenty thousand dollars, right? Enter Tan et al.(5).

Tan et al.'s task was much simpler than those in the Gilbert et al. and Drivonikou et al. studies. All their participants had to do was decide whether two color squares were of the same or different colors. Granted, the squares were only presented for 100 ms, but still. They used colors with six different names in Mandarin, three of which were easy to name and three of which were difficult to name (based on data from a pilot study). Given that the colors were presented only briefly, the effects of language should only show up for the easily (i.e., quickly) accessed color labels.
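
To make the design concrete, here's a rough sketch of a trial list for a task like this, in the same toy Python style as before. The color values and condition labels are placeholders of my own; they're not Tan et al.'s actual stimuli or the Mandarin color terms they used.

```python
# Sketch of a same/different color discrimination design in the spirit
# of Tan et al.: brief (100 ms) presentation of two color squares, with
# separate easy- and hard-to-name color sets. Placeholders throughout.
import itertools
import random

EASY_TO_NAME = ["red", "green", "blue"]    # quickly labeled colors
HARD_TO_NAME = ["hue_A", "hue_B", "hue_C"] # hypothetical stand-ins

STIMULUS_DURATION_MS = 100  # the presentation time reported in the paper

def make_trials(colors, reps=5):
    """Cross a color set with itself to get same- and different-color trials."""
    trials = []
    for left, right in itertools.product(colors, repeat=2):
        for _ in range(reps):
            trials.append({
                "left": left,
                "right": right,
                "correct_response": "same" if left == right else "different",
                "duration_ms": STIMULUS_DURATION_MS,
            })
    random.shuffle(trials)
    return trials

easy_block = make_trials(EASY_TO_NAME)
hard_block = make_trials(HARD_TO_NAME)
print(len(easy_block), "easy-name trials;", len(hard_block), "hard-name trials")
print("example trial:", easy_block[0])
```

The point of the sketch is just that nothing in the task itself requires a color name: the same/different judgment can, in principle, be made on hue alone.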

Now, they didn't find any behavioral differences between the easy- and hard-to-name conditions. That is, people were equally fast at making the same/different judgment in both conditions. But they did find differences in brain activation. Both conditions produced activation in areas associated with color vision (medial frontal gyrus, mid-inferior prefrontal cortex, insula, right superior temporal cortex, thalamus, and cerebellum). The left superior temporal gyrus, left precuneus, and left postcentral gyrus, all areas associated with language processing, showed more activation in the easy-name condition than in the hard-name condition.

Aside from pretty pictures of the brain, what has the Tan et al. study taught us that the previous studies hadn't? Well, considering the fact that there were no behavioral differences observed, it's hard to know exactly what was going on, but at most, all these data suggest is that when presented quickly, easy-to-name colors prime their labels, while hard-to-name colors do not. Not only is this not interesting in itself, but in the context of linguistic relativity, it doesn't even suggest the right direction of influence. That is, without behavioral differences, the imaging data doesn't suggest that language processing is influencing perception, but instead that the perception is priming particular lexical items. That's just, well, boring. I mean, duh. But again, cool brain pictures. Coffee.

Are you starting to see why I find cognitive neuroscience so frustrating? The first series of studies -- those by Gilbert et al. and Drivonikou et al. -- are excellent lessons in using neuroscience to test hypotheses. They took things we know about the brain (things we knew about the brain long before imaging technology existed), came up with hypotheses based on them, and then developed behavioral predictions from those hypotheses. The Tan et al. study, on the other hand, doesn't really test any hypothesis directly relevant to linguistic relativity. We can't, from their data, make any behavioral predictions, and we can't infer that the increased processing in language areas of the brain that they observed had any influence on the processing in the visual areas that were active. And I guarantee you that the Tan et al. study cost more; in all likelihood, that single study cost more than the eight studies presented in the other three papers combined! A simple cost-benefit analysis of the Tan et al. study therefore gives us a benefit-to-cost ratio of zero: it cost a bunch, and we've learned jack.

I'm never reading another imaging study again, or drinking any more coffee.


1Gilbert, A.L., Regier, T., Kay, P., & Ivry, R.B. (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences, 103(2), 489-494.
2Roberson, D., & Davidoff, J. (2000). The categorical perception of colours and facial expressions: The effect of verbal interference. Memory & Cognition, 28, 977-986.
3Drivonikou, G.V., Kay, P., Regier, T., Ivry, R.B., Gilbert, A.L., Franklin, A., & Davies, I.R.L. (2007). Further evidence that Whorfian effects are stronger in the right visual field than the left. Proceedings of the National Academy of Sciences, 104(3), 1097-1102.
4Gilbert, A.L., Regier, T., Kay, P., & Ivry, R.B. (In press). Support for lateralization of the Whorf effect beyond the realm of color discrimination. Brain and Language.
5Tan, L.H., Chan, A.H.D., Khong, P.L., Yip, L.K.C., & Luke, K.K. (2008). Language affects patterns of brain activation associated with perceptual decision. Proceedings of the National Academy of Sciences, 105(10), 4004-4009.


A minor point: input from the right *visual field*, which includes information from both eyes, goes to left V1, and vice versa.

You're right that many cognitive neuroscience studies are pointless (a collaborator once referred to their authors as 'cognitive paparazzi'), but there are plenty of crappy behavioral studies as well, so I actually hope this doesn't turn you off neuro forever - you just have to seek out the good ones, based on sound theory and explaining something interesting!

I think the flaw here is that you've chosen a superb behavioral study, decided to compare it to a good fMRI study, and seem proud when you come to the conclusion that the behavioral study was better.
It's interesting that the group you are holding up as the ideal happens to be a group of cognitive neuroscientists (see: http://socrates.berkeley.edu/~ivrylab/people/ivry.html ) who actually do use fMRI, TMS, patient populations, and a host of other tools to localize brain function. Like I mentioned in a previous post, the best imaging research is done by people who also do behavioral studies and understand the limits of behavioral studies and what imaging can add beyond that. (In fact, the behavioral and fMRI papers you list even share an author.)

As for this specific study: did any of the behavioral findings show anything beyond the language/categorization link? From your description (I haven't read the article in detail), this shows that the effect is already occurring in language processing areas rather than being due to upstream modulation of the signal. This opens the field to find new ways to understand exactly what is happening in these language areas.

I should also note that in some imaging studies, large task-related effects are not optimal. If you see a huge accuracy variation across conditions, it could mean radically different things are happening, and it is harder to interpret results. For example, if a distractor makes it impossible to even see the target, this might be a good effect for a behavioral study, but if you don't know what a person is seeing in an imaging study, it's harder to make a good comparison.

BTW I responded to your critique of Cog Neuro a couple of posts ago.

Hope you enjoy your coffee tomorrow. :)

bsci, I don't think it is a good fMRI study, though. I understand that you don't always want behavioral differences, but to infer that an effect on perception is being caused by different processing in language areas, you'd need to see some effect on perception, and they don't. Instead, they just see more activation when labels are easy to access, which is exactly what you'd expect if the color activates the label.

And so far, with the visual field stuff, they've only looked at categorical perception. But there are other studies that look at color memory, for example.

Just to be clear: I only skimmed the fMRI article and was working off of your comments. Perhaps in this study a behavioral effect is required, but it is not always the case. If you'll look back on this thread and continue this conversation on Monday, I'll try to find time to give the article a more thorough read and critique. If this discussion will just get dropped, I have things more relevant to my own work (fMRI physiology/methodology) to do and read.

This is an interesting criticism, but I'm not sure it is completely justified. And I'm certainly doubtful that it really supports the claim that "cognitive neuroscience sucks" or that fMRI doesn't tell us interesting things. For example, Gilbert repeated with fMRI could potentially be very interesting. We can make reasonable predictions about what areas should be active if the hypothesis in Gilbert is correct. This gives us another way of testing the matter at hand.

Furthermore, there are some fMRI studies where the fMRI has given us real understanding of a phenomenon. Look at synesthesia for example.

By Joshua Zelinsky on 21 Mar 2008

The first non-science thing that jumps out at me about this article is that Kay, a middle author from a different institution than all the other authors, is the NAS member who contributed the article for publication. Getting a National Academy of Sciences member to contribute your article is a shortcut through the PNAS review process (i.e., if one of our members says it's good, the article doesn't need as rigorous a review). This is common, but having that person tacked on as an author with no clear indication of how that person contributed seems very odd. I'd need to look at more PNAS bylines to see how common this is.

The paper itself does seem good (perhaps not what should be PNAS level, but good). I think you've misread the paper slightly, and the lack of a behavioral effect is actually a vital part of the finding. Part of the misreading seems to be that they are placing their results in the context of the studies you mentioned above, where the task design is very different.

The question I see them asking is, "Even if language has absolutely nothing to do with a task, does your brain process language information?" To answer this question, they chose a color discrimination task. Note that this was not a color naming task, as you say in the original post. Also note that the task with color names was completely separate and was just used to localize the regions that were explicitly related to language, to show that these were the same regions activated in the main experiment.

The point of using the color discrimination task is precisely that it should be just as easy to tell that blue and red are different as to tell that teal and tan are different. As such, there is no behavioral difference. The task has essentially one purpose: to make sure you are paying attention to the colors while doing something that should not require you to access any of the color-name information.

Their results show that when you do a red-blue discrimination, your brain activates language areas more heavily than when you do a teal-tan discrimination. Thus, activation of language regions is inherently part of color viewing.

A weakness of the past tasks, like Aubrey Gilbert's circle/square search task, is that we know color/size/pattern differences make search easier. Showing that the effect of the color difference is greater in the right visual field is a good start, but perhaps more than just language is lateralized.

This weakness could be addressed using another behavioral study modeled off this fMRI task. Gilbert's task would be repeated twice. The first time, the colors would be blue and green. The second time, the colors would be harder-to-name colors separated by an equal amount of hue difference. There would probably still be a right visual field performance improvement with the hard-to-name colors, but it should be smaller than with the commonly named colors. Still, since the reaction time differences are already so small, it might be hard to get a significant result. A quick sketch of the predicted pattern is below.
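
To pin down the prediction, here's a quick sketch (Python, with made-up numbers) of the ordering of effects the proposed design should produce:

```python
# Sketch of the proposed 2x2 follow-up: color nameability (easy vs. hard
# to name) crossed with visual field. The numbers are invented; they
# just encode the predicted ordering of the RVF advantage.
predicted_rvf_advantage_ms = {
    "easy_to_name": 35,  # fast labels: full lateralized Whorfian boost
    "hard_to_name": 15,  # slow labels: smaller (but maybe nonzero) boost
}

for condition, advantage in predicted_rvf_advantage_ms.items():
    print(f"{condition}: predicted RVF between-category advantage ~{advantage} ms")

# If the RVF advantage shrinks when labels are hard to access, with hue
# distances held constant, the lateralized effect looks linguistic rather
# than a generic hemispheric difference in color processing.
```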

I could probably write more here, but I figure this is a good point to see if you have any other comments.

bsci, first, let me say that if I indicated that the task in Tan et al. was a color naming task at any point, that was a mistake. I discussed it as a discrimination task, though.

Before I start replying, let me give you Tan et al.'s conclusion, from the abstract:

This finding suggests that the language-processing areas of the brain are directly involved in visual perceptual decision, thus providing neuroimaging support for the Whorf hypothesis.

This is, to put it mildly, wrong. It shows nothing of the sort. At most, it shows the opposite: that color perception is involved in lexical activation. This is, again putting it mildly, extremely well known. Just look at the Stroop effect. Seeing a color primes its label, and seeing an easy-to-name color will prime its label more easily. Since they only present the colors for 100 ms, it's not surprising that only the easy-to-name colors show activation consistent with lexical activation. This is not only not surprising, it's something we've known for 70 years or so.

In order to show that, instead of the perception leading to lexical activation (which is uninteresting), the lexical activation is influencing perception or that "language-processing areas of the brain are directly involved in visual perceptual decision," you'd have to have some behavioral difference that suggests that the activation in language regions influenced the decision. They don't have any such difference. So their conclusion is not supported by the data at all. And since that conclusion is directly related to the previous research (which they cite, by the way), I see no problem in pointing out that the conclusion they drew is better supported by behavioral data that's already out there.

I'll agree that their abstract conclusion is wrong. They specifically and intentionally do not show a behavioral effect, which is a key aspect of the Whorf hypothesis. As such, this is a flawed paper.

Still, I think this study does advance our understanding of object/language processing beyond the Stroop effect. The Stroop effect shows that language is a distractor for color naming. Part of the Stroop task is to actively pull color/language information. The fact that adding contradictory language information makes the task harder still doesn't remove the language elements of the task.

The question I'm seeing is whether the brain uses an area even when it has no relevance to the conscious task. The authors here show that language processing occurs at a level below both conscious need for language and even unconscious benefits from language. This was not shown in the Gilbert studies or with the Stroop effect. I can't think of any way to replicate this finding with any behavioral study.

I'll stick with my opinion that this is a good paper that really does inform us in a way that would have been impossible without imaging. Their background context was flawed and the paper probably shouldn't have been in a journal as prestigious as PNAS, but it is a relevant paper.