The other day, our car wouldn’t start and Jim had to ask a neighbor over to help him jump-start it. There was much rushing in and out of the house looking for flashlights and other tools to help get the job done. After the neighbor left, Jim wanted to drive somewhere and couldn’t find the keys. Clearly he had just had them because he was working on the car. Where could they be? We searched up and down throughout the house, but we couldn’t find them and eventually had to use a spare set.
The next morning, as I was getting ready to leave for our school carpool in our other car, I found them sitting on a workbench in the garage, just a few feet from the car! Why couldn’t we find them when we needed them? Everyone’s experienced a similar problem at some point, whether it’s trying to find the remote for the TV or the olives in the fridge. Why are objects so hard to find sometimes, yet right at our fingertips at other times?
One part of the answer is a phenomenon called “contextual cuing.” Basically, this means we’re good at finding things in places we’ve found them before, but bad at finding them elsewhere. The more often we see an object in a certain place, the quicker we are at finding it there. It doesn’t take long to train people to locate new objects in this way. For example, a researcher might ask people to search for the letter “T” or “L” in an array of letters. Most of the time, the array is completely new, and the T or L is in a different place. But if occasionally the same array appears, people are quick to recognize it and find the letter much faster. It even works if only the portion of the array around the target letter stays the same and the rest of it changes.
So the question then becomes this: what are the key elements of the area you’re searching? If I usually leave my keys on the kitchen counter, will I still be able to find them if Nora has cluttered it up with a baking project? In one contextual cuing study, the researchers showed viewers scenes filled with distractor objects that were either black or white, and asked them to search for a target. After viewers had learned to find the target object quickly among a particular pattern of distractors, the experimenters changed the distractors from black to white (or vice versa). The advantage of contextual cuing disappeared.
Is the color of the surrounding environment really an important part of how we find objects? Krista Ehinger and James Brockmole suspected that in more realistic environments, color may not matter as much. They showed volunteers images like this and asked them to search for a tiny letter T or L:
Can you spot it? How about in this picture?
The colors here are unrealistic, but it’s still obviously a shoreline scene, just like the first. Now try one more:
Again, the colors have changed, but it’s clearly the same scene as the first image. Did you spot the T faster?
Ehinger and Brockmole created 16 different versions of each image, manipulating it to create different unrealistic color combinations, like this:
Viewers searched for letters in blocks of 16 images at a time. Half of the images were ones they had seen before, with the search item in the same place. The other half of the images were brand-new each time. This was repeated 16 times — so eight of the images were seen 16 times each, and 128 images were seen once each.
But the key to the study was this: Viewers were divided into groups. One group saw the identical images repeated consistently. They might have had an odd coloration, but it was consistent each time they saw it. A second group saw the repeated images with variable coloration. The target letter T or L was in the same place each time, but the coloring of the scene changed each time they saw it. Here are the results:
The graph shows the cuing effect as the experiment progressed. The cuing effect is measured by subtracting the reaction time for the repeated image (whether the color of the image was variable or consistent) from the reaction time for the new images. As you can see, the effect got stronger quite quickly for both groups (as well as for a control group not shown here). Most importantly, there was no significant difference between the groups — contextual cuing occurred whether or not the color of the repeated image changed.
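The calculation behind the graph is simple arithmetic: subtract the reaction time for repeated images from the reaction time for new images in each block. Here’s a minimal sketch of that calculation, using made-up reaction times in milliseconds rather than the study’s actual data:

```python
# Hypothetical mean reaction times (ms) per block -- illustrative values
# only, not the data from Ehinger and Brockmole's experiment.
rt_new = [1250, 1210, 1180, 1160]       # brand-new images, blocks 1-4
rt_repeated = [1240, 1130, 1040, 980]   # repeated images, blocks 1-4

# Contextual cuing effect = RT(new) - RT(repeated); a larger value means
# viewers found the target faster in a familiar scene.
cuing_effect = [new - rep for new, rep in zip(rt_new, rt_repeated)]
print(cuing_effect)  # -> [10, 80, 140, 180]
```

With numbers like these, the growing difference across blocks is what the graph’s rising curve represents: familiarity with a scene steadily speeds up the search.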
In a second experiment, the colors were kept consistent until the final block, when all the colors were changed. Once again, there was no difference in the cuing effect for the last two blocks — suggesting that the first group hadn’t simply been trained to ignore colors.
So why does color not affect contextual cuing in this study when it clearly does have an impact in other studies? Ehinger and Brockmole suggest that other aspects of the scene may be more important. In the real world, we must be able to recognize scenes at different times of day and under different weather and lighting conditions, so color changes — even unrealistic ones — don’t faze us. Unfortunately, when we put our keys in a completely unfamiliar location, no amount of coloring or other environmental cues will help us find them efficiently.
Ehinger, K., & Brockmole, J. (2008). The role of color in visual search in real-world scenes: Evidence from contextual cuing. Perception & Psychophysics, 70(7), 1366–1378. DOI: 10.3758/PP.70.7.1366