The impact of words on visual attention and memory

Our visual system is exceptionally good at detecting change -- as long as the change takes place while we're looking. If you glance at a scene, then look away for a moment, your ability to detect a change is substantially impaired. Changes that would be obvious when we're looking can become maddeningly difficult to detect if we're distracted for even a tenth of a second.

Take a look at this quick movie (QuickTime required) -- the picture will flash on and off, alternating with a distractor pattern. Each time the picture flashes, a portion of the picture will change in some way. Can you see what's changing?

Next, take a look at this version, where the distractor pattern has been removed.

A little easier to spot the difference now? Most people probably didn't have enough time to spot the change in the first movie, but spotted it instantly in the second movie. This phenomenon, called "change blindness," has been discussed before on Cognitive Daily. It's likely that it occurs because the visual system simply doesn't maintain a detailed description of a scene -- why should it bother, when the information is right in front of us?

Instead, visual memory probably only includes the key semantic information about the photo -- it's a narrow cobbled street, with cars parked on one side, and lots of potted tropical plants.

Interestingly, if I had given you a hint about the nature of the change -- even just mentioning the word "manhole" -- before showing the first movie, you would have been able to notice the change much faster. This makes sense -- the semantic information from the hint can be compared with the two versions of the photo, and the change observed.

But how exactly does such a hint direct your attention to the change? Elizabeth Walter and Paul Dassonville presented volunteers with dozens of movies like this, preceded by several different types of hints, in order to see which hints diminish change blindness. The hints were actually "primes," because observers were never told that the words were related to the change blindness task. Instead, the words were flashed rapidly and observers were required to say the word (if they could) and then proceed with the change blindness movie, pressing a button as soon as they noticed a change.

Half the time, the prime was explicit: the word appeared for 200 milliseconds and was easy for all viewers to read. Half the time, it flashed for just 33 milliseconds, and viewers were unable to read it. This is an implicit prime. Implicit primes can make a lot of tasks related to the prime much easier. If viewers were later asked to unscramble the letters in the word, for example, they'd be faster than if they hadn't been primed at all.
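
Durations like 33 ms aren't arbitrary: stimuli can only change on a monitor refresh, so presentation times come in whole-frame steps. A minimal sketch (my own illustration, not from the paper; the 60 Hz refresh rate is an assumption) shows why 33 ms and 200 ms are natural choices:

```python
# Illustrative sketch: prime durations are constrained to whole monitor frames.
REFRESH_HZ = 60                  # assumed refresh rate; the actual study may differ
FRAME_MS = 1000 / REFRESH_HZ     # ~16.7 ms per frame at 60 Hz

def frames_for(duration_ms):
    """Whole frames needed to display a stimulus for roughly duration_ms."""
    return round(duration_ms / FRAME_MS)

print(frames_for(33))    # 2 frames -- too brief to read consciously
print(frames_for(200))   # 12 frames -- easily readable
```

At two frames the word is gone before a saccade or conscious read is possible, which is what makes the prime implicit.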

In the experiment, some of the primes were helpful ("manhole"), some were misdirecting ("awning"), and some were irrelevant ("pizza"). How did the primes affect change blindness? Here are their results:

[Chart: time to detect the change, by prime type]

Participants were significantly faster -- they showed less change blindness -- when the prime was helpful compared to when it was irrelevant or misdirecting. There was no significant difference between irrelevant and misdirecting primes. Most importantly, even when the prime was implicit, observers were faster to locate the change with helpful versus non-helpful primes.

Even though observers had no memory of the implicit primes, helpful primes still improved their ability to notice a change in the photos. So visual attention can be guided by semantic information even when viewers are not conscious of that information.

This research may mean that in some ways, the vivid memories we have of visual events such as a romantic sunset or a traumatic accident are no different than if we had only experienced those events through a verbal description.

Walter, E., & Dassonville, P. (2005). Semantic guidance of attention within natural scenes. Visual Cognition, 12(6), 1124-1142.


I tried to see if I could recreate the experiment by blinking my eyes at a pile of dishes in my sink. I'm not sure if I'm doing it wrong, but the dishes just stayed there. They never disappeared.

Interesting read btw.

By joltvolta (not verified) on 27 Feb 2007 #permalink

Very cool! However, I disagree slightly with the characterization of scene representation between displays:

You state: "It's likely that it occurs because the visual system simply doesn't maintain a detailed description of a scene"

There are several examples of the maintenance of scene representations between presentations (from Hollingworth, Henderson, Simons, etc.), but one particular example is striking: In Hollingworth (2003), participants view a scene and, after it is removed, they are presented with a random spatial cue. When the spatial cue correctly indicates the location of a change, change-detection accuracy increases relative to an irrelevant cue (these are one-trial deals, so RT is less important).

This is essentially similar to the result you've posted above, but it would indicate that some detailed representation of the scene WAS stored between presentations, since the cue is entirely spatial and is presented after the first scene is gone. As a side note, the effectiveness of the cue appears to be tied to its timing. If it is presented at (or after) the presentation of the second scene, its effectiveness essentially disappears.

Anyway, I don't think that the result is at odds at all with what you've presented above, but I thought it might be worth noting that to some extent, detailed representations can be shown to persist from one presentation to the next.

Hollingworth (2003) Failures of Retrieval and Comparison Constrain Change Detection in Natural Scenes. JEP:HPP, 29(2), 388-403.

PS: Forgive me if I'm being too nit-picky.

Change blindness is cool, but I don't feel this is a particularly good example. You could just as easily explain the effect by attention in early vision. The single stimulus blinking in the second movie is an isolated change in one feature channel making this amenable to the pop-out effect of bottom-up attention. The first movie, by contrast, is masking the bottom-up saliency of the stimulus thus requiring sequential search to find.

I love Hollingworth, he wrote the book on this stuff.

I disagree with your conclusion, though, Brian. The reason change blindness works is that local changes are masked by global changes (either lots of noise all over or a blank screen), so attention isn't drawn to the manhole the way it is in the second video (where the only change is local). So putting a shape in the same location as the manhole focuses attention on that location, resulting in a more detailed representation of that area. If your eye isn't focused there, you don't encode it at all.

Annnnnd... I just realized Janne just said the same thing. Oh well, I'm posting this anyway!

I'll also mention that my favourite change blindness study involved football experts being shown football plays and noticing changes much quicker than novices. It really shows how differences in "gist" encoding (which is much more specific for experts) can improve resilience to change blindness.

We're in total agreement about how change blindness works.

The study I mentioned was actually the wrong one (similar, but wrong...sorry!). Instead of being Hollingworth (2003), it should have been Becker, Pashler, and Anstis (2000). The description of their study, though, was accurate (batting .500 so far!). They used a modified paradigm (i.e. not the flicker paradigm) that only includes two presentations, which is a key element that separates it from some other CD studies. You study an initial presentation for a while, see a global mask, and then view a final presentation. The idea is that you can't see the first presentation anymore, and will never see it again (because in this paradigm you only see two images, total), but you can still 'use' part of your representation of that first image to detect a change later on.

When participants were presented with cues indicating the potential location of a change, they got that information and a final image of a scene. That's it. They couldn't ever compare it to anything, because the first presentation was gone for good. If they didn't have any representation of the initial scene, they would not have detected changes above chance in this paradigm (or at least, would not have had a non-zero d'). The fact that they were able to do so should indicate that they had some sort of iconic memory trace of the initial image that was useful for performing this kind of task.
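
For readers unfamiliar with d': it measures sensitivity independent of guessing by comparing hit and false-alarm rates on the z-scale. A quick sketch (the rates here are made-up illustrative numbers, not data from Becker et al.):

```python
# d' = z(hit rate) - z(false-alarm rate); d' = 0 means performance is pure guessing,
# so any reliably non-zero d' implies some stored representation of the first image.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 75% hits against 25% false alarms -> clearly above-chance detection
print(round(d_prime(0.75, 0.25), 2))   # 1.35
```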

The only point I'm trying to make is that _some_ information is stored between presentations. Change blindness, by this interpretation, reflects a failure to compare presentations, which does not necessitate a failure to store them.