Can you tell the difference between the images below?
At first, they just look like fuzzy diagonal lines — there doesn’t appear to be a significant difference between them. But if you look closely, you begin to notice that the images at the top of the picture (category A) tend to have single dark bands, while the images toward the bottom have dark bands that come in pairs. The “phase angle” is a parameter of the technique used to generate the images, and it is this angle that divides the images into the two categories.
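As a rough sketch of how such stimuli can be built: a compound grating can be made by summing a sinusoid and its third harmonic, with the “phase angle” being the offset between the two components. The specific parameters below (image size, spatial frequency, amplitude ratio, 45-degree orientation, and the example phase values) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def compound_grating(phase_deg, size=128, freq=4.0, orientation_deg=45.0):
    """Sum of a fundamental sinusoid and its third harmonic.

    phase_deg is the relative phase of the harmonic; shifting it changes
    how the harmonic's dark bands line up with the fundamental's.
    All parameter defaults here are illustrative assumptions.
    """
    phase = np.deg2rad(phase_deg)
    theta = np.deg2rad(orientation_deg)
    y, x = np.mgrid[0:size, 0:size] / size
    # Coordinate measured along the grating's orientation axis.
    u = x * np.cos(theta) + y * np.sin(theta)
    # Fundamental plus a weaker third harmonic at the given relative phase.
    return np.sin(2 * np.pi * freq * u) + 0.5 * np.sin(2 * np.pi * 3 * freq * u + phase)

# Two hypothetical phase values, one per category: different phases
# change the band pattern (e.g., single vs. paired dark bands).
cat_a = compound_grating(phase_deg=0)
cat_b = compound_grating(phase_deg=180)
```

Rotating the stimulus by 90 degrees, as in the experiment, would just mean changing `orientation_deg` — the category-defining phase relationship is untouched, which is what makes the orientation-specific training effect interesting.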
With a lot of work, people can be trained to quickly distinguish between these categories. Training takes place on a computer: the images are shown over and over, and participants must place each one in category A or B. If these trained “experts” at categorizing fuzzy lines (actually called compound gratings) are then tested by showing them two gratings at the same time and asking whether they are the same or different, a curious result emerges: they are much better at telling gratings apart when the two come from different categories than when both come from the same category. But if the images are rotated by 90 degrees (so they slant to the left instead of the right), the training advantage disappears:
These are the results of an experiment conducted by Leslie Notman, Paul Sowden, and Emre Ozgen, and they illustrate a phenomenon called categorical perception. But why are we better at recognizing differences between learned categories only when the pictures are displayed in the trained orientation? After all, at an abstract level — for example, using language to describe the images — the difference between the categories does not depend on the rotation of the images.
To get a better sense of the level of abstraction at which the categories are learned, the team performed a second experiment. As before, the task was to look at two gratings and determine whether they were the same or different. This time, the gratings were rotated in small increments away from the orientation used in training. Here are the results:
Notice that this graph charts the difference between pre- and post-training performance. As before, training led to no improvement when the gratings were in the same category. But when the gratings were in different categories, training produced a significant improvement only within an extremely narrow range — about plus or minus 3 degrees — around the trained orientation.
Other research has shown that the only portion of the visual system sensitive to such small differences in orientation is V1, the primary visual cortex: the lowest level of visual processing that occurs outside of the eye itself. Notman et al. reason that the learning behind categorical perception must be occurring at this low level of processing, before its signals are even a part of conscious thought.
Notman, L. A., Sowden, P. T., & Ozgen, E. (2005). The nature of learned categorical perception effects: A psychophysical approach. Cognition, 95, B1–B14.