What the structure of the eye tells us about color categories

Color categories, as we pointed out in this post, are remarkably consistent, even across different cultures and languages. "TLTB" pointed out in the comments that for people with color blindness, the standard color categories might not make much sense. It's an excellent point, and one that becomes doubly perplexing when we realize that no two eyes are alike -- indeed, retinal scanning is considered more accurate than fingerprinting for establishing a person's identity.

The distribution of cones and rods across the retina varies substantially. What's more, the macula, a region in the center of the retina, has a pigment that varies from individual to individual, filtering out many of the shorter wavelengths of light before they even reach the photoreceptors -- so those signals never reach the brain. Thus, we "see" different colors in our peripheral vision than in our central vision. Finally, the lens itself has pigments that filter out still more wavelengths of light. So do these individual variations in the eye affect how we categorize colors?

Take a look at this diagram to see how this filtering can affect the light reaching the receptors in your eye:

[Figure: A narrow band of light entering the eye, the filtering functions of the macula and lens, and the light reaching the receptors]

The graph on the left represents the light entering the eye -- notice that it's concentrated around a specific wavelength -- about 480 nm, a slightly greenish blue. The second graph shows the filtering by the macula (long dashed line) and the lens (short and long dashed line). The final graph shows the light actually reaching the receptors -- a substantially smaller amount.

But now consider what happens when we change the bandwidth of the light entering the eye:

[Figure: The same filtering applied to a broad band of light peaking at 480 nm]

In this example, the light entering the eye still peaks at the same point, around 480 nm. But when this light is filtered by the lens and macula, a very different curve reaches the receptors, one skewed towards the green end of the spectrum. So do we see this as the same hue, despite the fact that the light reaching the receptors now peaks at a different point on the spectrum?
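
Here's a quick way to see the size of this effect. The sketch below (in Python) uses a made-up logistic filter in place of the real lens and macula transmission curves -- every number in it is an illustrative assumption, deliberately exaggerated for clarity -- but it shows the key asymmetry: the same pre-receptor filter barely moves a narrow band's peak, while it pushes a broad band's peak well toward green.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 3001)  # visible spectrum, nm

def gaussian_spectrum(peak, sigma):
    """Light whose energy is concentrated around `peak` nm."""
    return np.exp(-((wavelengths - peak) ** 2) / (2 * sigma ** 2))

# Hypothetical combined lens + macula transmission: a logistic ramp that
# strongly attenuates shorter wavelengths (illustrative shape, not real data).
transmission = 1.0 / (1.0 + np.exp(-(wavelengths - 460.0) / 20.0))

for label, sigma in [("narrow", 5.0), ("broad", 50.0)]:
    entering = gaussian_spectrum(480.0, sigma)   # both peak at 480 nm
    at_receptors = entering * transmission       # after pre-receptor filtering
    peak_nm = wavelengths[at_receptors.argmax()]
    print(f"{label} band: entering peak 480.0 nm -> receptor peak {peak_nm:.1f} nm")

# With these made-up numbers, the narrow band's receptor-level peak stays
# near 480 nm, while the broad band's shifts to roughly 497 nm -- toward green.
```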

In 1910, William Abney discovered that when a single wavelength of light is mixed with white light (a mixture of all wavelengths), the perceived hue changes. In this scenario the light still peaks at the same point in the spectrum, so the hue should not change -- but change it does.
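
In spectral terms, Abney's manipulation is simple, and a rough numerical sketch (arbitrary numbers again) makes the puzzle clear: adding a flat "white" background to a narrow spectral line changes the light's purity, but not the wavelength where it peaks.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 3001)  # nm

# A near-monochromatic line at 480 nm, plus a flat "white" background
# with equal energy at all wavelengths (amplitudes chosen arbitrarily).
line = np.exp(-((wavelengths - 480.0) ** 2) / (2 * 2.0 ** 2))
white = np.full_like(wavelengths, 0.3)

for label, spectrum in [("pure line", line), ("line + white", line + white)]:
    print(f"{label}: peak at {wavelengths[spectrum.argmax()]:.1f} nm")

# Both spectra peak at exactly 480 nm; only the proportion of white light
# (the purity) differs. Yet Abney found the perceived hue shifts anyway.
```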

But until recently, no one had tested whether a broad band of light like the one in the second example above would be perceived as a different hue from a single wavelength peaking at the same point. A team led by Yoko Mizokami built an apparatus to test this question (note that it can't be tested with an ordinary computer monitor, which can only produce mixtures of its three fixed primaries).

Observers were asked to adjust the dominant wavelength of a broad and a narrow band of light until each appeared to be the best example of "blue," "green," or "yellow." Here are the results for blue.

[Figure: Narrow- and broad-bandwidth peak wavelengths selected as the best "blue," with dashed lines connecting each observer's choices and darker dashed lines showing the predicted choices]

The dark dots on the left show the peak wavelength of the narrow-bandwidth light (closely approximating a single wavelength) that each observer selected as the best "blue." Note that these values cover a wide range -- from 470 to 500 nm. Each observer had his or her own notion of "blue," but the key test is whether they chose the same "blue" when a broad bandwidth of light was shown. The open dots on the right show the peak wavelengths of the broad-bandwidth lights selected as "blue." Dashed lines connect the choices of each individual observer. The darker dashed lines show what the choices would have been if observers had responded based on the wavelengths of light actually reaching their receptors.
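
That prediction can be read as a straightforward computation: filter each observer's narrow-band choice, then find the broad-band peak whose filtered spectrum peaks at the same place on the retina. The peak-matching shortcut below is my simplification (the paper works from the actual light reaching the receptors), and it reuses the exaggerated made-up filter from the first sketch:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 3001)  # nm
# Same exaggerated, illustrative lens + macula filter as above.
transmission = 1.0 / (1.0 + np.exp(-(wavelengths - 460.0) / 20.0))

def receptor_peak(peak, sigma):
    """Peak wavelength at the receptors, after pre-receptor filtering."""
    spectrum = np.exp(-((wavelengths - peak) ** 2) / (2 * sigma ** 2))
    return wavelengths[(spectrum * transmission).argmax()]

def predicted_broad_choice(narrow_choice, narrow_sigma=5.0, broad_sigma=50.0):
    """Broad-band peak whose *filtered* spectrum peaks where the observer's
    filtered narrow-band choice does -- i.e., a match made at the receptors."""
    target = receptor_peak(narrow_choice, narrow_sigma)
    candidates = np.linspace(420.0, 560.0, 141)  # 1-nm steps
    errors = [abs(receptor_peak(c, broad_sigma) - target) for c in candidates]
    return candidates[int(np.argmin(errors))]

for narrow in (470.0, 480.0, 500.0):  # hypothetical narrow-band "blue" settings
    print(f"narrow {narrow:.0f} nm -> predicted broad "
          f"{predicted_broad_choice(narrow):.0f} nm")

# With these exaggerated numbers, the predicted broad-band settings come out
# tens of nanometers shorter than the narrow-band ones: to land the same
# filtered peak on the retina, the broad band has to start out much bluer.
```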

Clearly, the pattern of actual responses doesn't match the prediction. Mizokami's team argues that the visual system compensates for the filtering of the lens and macula, so that we perceive broad- and narrow-bandwidth lights as the same hue. But why don't we compensate in the same way and eliminate the Abney effect? The team argues that while wide bandwidths are common in nature, a single wavelength mixed with white light is rare, so the visual system can adapt to the former, but not the latter.

So it appears that each individual's visual system adapts to compensate for many of the quirks of his or her eye -- at least for the type of light usually seen in nature. Indeed, as we age, our sensitivity to light changes: we become much less sensitive to shorter wavelengths. Yet our color categories remain the same. This research suggests, then, that color categories are due in large part to environment and culture, not to physiological differences between individual eyes.

Mizokami, Y., Werner, J.S., Crognale, M.A., & Webster, M.A. (2006). Nonlinearities in color coding: Compensating color appearance for the eye's spectral sensitivity. Journal of Vision, 6, 996-1007.

Makes sense. It reminds me of the way our auditory system compensates for variation amongst human voices (and variation amongst individual auditory systems) to 'normalize' speech sounds so that we can usually understand the speech of individuals no matter their tone or pitch.