I really do love illusions of all sorts, in large part because they fit nicely into my narrative about the fallibility of human thought, but illusions are also great as windows into the ordinary workings of our brains. For example, color afterimages provide direct evidence for opponent-process theories of color vision, and when we find aftereffects for a particular class of stimuli, we can be pretty certain that there are neurons, or populations of neurons, that encode that class of stimuli. Speaking of aftereffects, there's a really cool paper in the March issue of Psychological Science that uses motion aftereffects to test an interesting hypothesis about the processing of static images, and I thought I'd tell you about it.
The classic example of the motion aftereffect is the waterfall illusion, an example of which you can see here. Exactly what causes motion aftereffects is still a matter of some debate, but the basic story probably goes something like this. There are populations of cells in your visual cortex that respond to motion in particular directions and orientations (e.g., straight down). These neurons are always competing with cells that respond to motion in the opposite direction. All of these cells fire a little bit all the time, but only when they receive some sort of push (e.g., from the input they happen to respond to) do they fire enough to outpace their competitors and create the perception of their preferred motion. When you stare at motion in a particular direction and orientation for a while, the cells that respond to that sort of motion adapt -- get worn out, in essence -- and when you take the stimulus away, their firing rate drops below its resting level. The competing cells now have the upper hand, and suddenly they cause you to perceive motion in the opposite direction. The effect can be strong enough to make for some really trippy visuals (see this, for example).
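If you like to think in code, the opponent-process story above can be captured in a toy simulation. This is just a sketch of the logic, not a real neural model: the two "units," their baseline rate, and the adaptation and recovery constants are all made-up illustrative assumptions.

```python
# Toy sketch of the opponent-process account of the motion aftereffect.
# Two hypothetical units prefer opposite directions ("down" vs. "up").
# All parameters are illustrative assumptions, not fitted to any data.

BASELINE = 10.0       # resting firing rate (arbitrary units)
DRIVE = 20.0          # extra input while a unit's preferred stimulus is shown
ADAPT_RATE = 0.02     # fraction of a driven unit's rate converted to fatigue per step
RECOVER_RATE = 0.005  # how much fatigue wears off per step

def simulate(adapt_steps=200, test_steps=50):
    fatigue = {"down": 0.0, "up": 0.0}
    percepts = []

    def rates(stimulus):
        # Firing rate = baseline + stimulus drive - accumulated fatigue.
        return {d: max(0.0, BASELINE + (DRIVE if stimulus == d else 0.0) - fatigue[d])
                for d in ("down", "up")}

    # Phase 1: stare at downward motion (the waterfall).
    # The driven "down" unit wears out; the idle "up" unit stays rested.
    for _ in range(adapt_steps):
        r = rates("down")
        fatigue["down"] += ADAPT_RATE * r["down"]
        fatigue["up"] = max(0.0, fatigue["up"] - RECOVER_RATE)

    # Phase 2: stimulus removed. The adapted "down" unit now fires below
    # baseline, so the "up" unit wins the competition: illusory upward motion.
    for _ in range(test_steps):
        r = rates(None)
        percepts.append("up" if r["up"] > r["down"] else "down")
        for d in ("down", "up"):
            fatigue[d] = max(0.0, fatigue[d] - RECOVER_RATE)
    return percepts

print(simulate()[:5])  # → ['up', 'up', 'up', 'up', 'up']
```

After prolonged downward stimulation, the rested "up" unit out-fires the fatigued "down" unit even with no stimulus at all, which is the imbalance the paragraph above describes.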
Anyway, an interesting question that, at first glance, seems to have nothing to do with motion aftereffects is this: how do we infer motion from static images? How does your brain process implied motion? For example, how does your brain figure out not only that the horses in the picture below are moving, but in what direction they're moving?
Presumably all of you can tell that the horses are moving, and more specifically, moving from left to right. There is some evidence from neuroimaging studies that this implied motion is processed in the same brain area, the middle temporal area (or V5, or MT, or if we're being specific, hMT+), that processes actual motion(1). Unfortunately, as is generally the case, the imaging studies don't really tell us what's going on in MT. Most importantly, they don't tell us whether implied and actual motion are processed by the same cells and in the same way. To determine that, you have to use behavioral data, just as you have to use behavioral data to learn just about anything except that something happens somewhere in the brain.
Enter the motion aftereffect. In their Psych Science paper, Winawer et al.(2) presented participants with a series of static images like this one (from their Figure 1a, p. 277):
They then tested them for a motion aftereffect using moving dot displays. When a motion aftereffect is present, it warps people's perception of displays of randomly moving dots. So, if the cells in the brain that respond to actual motion also respond to implied motion, then viewing a bunch of photos, one after the other, that all imply motion in the same direction should cause those cells to adapt, resulting in a motion aftereffect that distorts the participants' perception of the moving dot displays.
Since I'm writing this post, you already know that's what they found. In their first experiment, participants viewed static images implying motion in the same direction for 60 seconds, and immediately afterwards saw the moving dot displays. In their second experiment, participants viewed a 60-second series of displays containing two images that appeared to be moving either toward or away from each other, and thus toward or away from a point between the two implicitly moving objects. In both cases, Winawer et al. observed the motion aftereffect in the random dot displays. In another experiment, they showed participants the series of images for 60 seconds and then placed a three-second delay between the image series and the moving dot display. In this condition, they didn't observe the motion aftereffect, indicating that the aftereffect for implied motion decays quickly, much as the aftereffect for real motion does.
Now, this doesn't tell us exactly how implied motion in static images is processed, but it does tell us that at some point the same cells that process real motion are at work. In the imaging studies mentioned above, there appears to be a delay in the activation of cells in MT after seeing a motion-implying static image. Winawer et al. suggest that this may be because the different cues to motion (e.g., blur and situational cues) have to be processed first, and at different stages, before the implied motion itself can be processed. Future research will undoubtedly explore these issues further, and I'd bet the motion aftereffect will play a big role.
1E.g., Krekelberg, B., Dannenberg, S., Hoffmann, K.P., Bremmer, F., & Ross, J. (2003). Neural correlates of implied motion. Nature, 424, 674-677.
2Winawer, J., Huk, A.C., & Boroditsky, L. (2008). A motion aftereffect from still photographs depicting motion. Psychological Science, 19(3), 276-283.
This 2008 Boroditsky visual system research seems fully consistent with her earlier work on language and metaphor/embodied cognition; it all seems to be about how habitual modes of thought or action determine, in a variety of ways, how cognition proceeds at any given time.
Maybe it's too early and I haven't had enough coffee, but do you have a comment, or a better elaboration of the relationship between the two "separate" bodies of her work?
Michael, yeah, it's something I've thought about as well. I'm not sure exactly how to connect the two. On the one hand, it's interesting that motion information ends up being processed in MT even when there's no real motion. On the other hand, it doesn't seem that surprising. While it does suggest a neat inference process within the perceptual system itself, it doesn't seem to be the sort of perception-action connection that you usually find in the embodiment literature. That is, it's really just the perceptual system doing some induction beyond the information actually in the sensations it's given, but it's not really getting outside of the perceptual system. It would be interesting, though, to see how this could be turned around and shown to affect conceptual processing. That would directly connect it with the embodiment literature, and with the perceptual symbol systems literature in particular.
My guess is that these folks tried to extend their methods to look at how the organization of abstract concepts can also be influenced but it didn't work for whatever reason.