The binding problem is one of the great mysteries of modern neuroscience. Briefly, we know from a variety of studies in humans and other primates that the specific features of the sensory world -- particularly the visual world -- are broken down into separate components by the brain and processed in parallel. This means that information about, say, color and information about, say, orientation are processed separately by the brain.
The benefit of this system is that it allows your brain to take the insane amount of information in the perceptual world and process more of it, more rapidly. The downside is that it raises a very important question: how are all the features of a stimulus brought back together to form one coherent object? This is the binding problem. How are features bound together to form objects?
The prevailing theory to solve the binding problem -- using the example of the visual system -- is that the neurons and systems that process simple features converge later in processing to associate those features with one another. Features are combined combinatorially to form visual objects. However, this is a difficult idea to test, particularly in people, because even if you can show that a particular neuron is active only when two features are combined, does that really represent an object? Is the actual perception of binding related to the activity in that neuron?
Researchers at the Salk Institute have come up with an incredibly cunning way to study the binding problem. Bodelon et al., publishing in the latest issue of the Journal of Neuroscience, used very fast-refreshing LCD screens to show that perceptual binding takes time -- more time than the processing of the individual features. I will talk about the significance of this in a second, but first let me explain what they did.
Participants in the study were asked to fixate on LCD screens displaying colored gratings. The gratings were composed of colored lines at particular orientations, and they moved in a direction perpendicular to their orientation. At low temporal frequencies (read: speeds) of motion, the lines can be differentiated: the participants could see both the colors and the orientations of the lines. As the frequency of the motion increases, however, the lines blur into one another, creating a uniform gray in which you can tell neither the color nor the orientation.
What the percepts looked like is shown in the picture to the right. Note that the researchers made several of these percepts so that when the frequency increased the result would be the same gray regardless of which percept was used (you can do this by balancing the color constituents). The orientation could also be varied.
Here is how the experiment works. You test the subjects at a variety of frequencies for the grating using a forced-choice task: Can you tell me the orientation? Can you tell me the color? Then you graph the probability of getting the correct color, the correct orientation, and both correct (called the conjunction in the study) against frequency.
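The paper actually infers these probabilities by maximum likelihood from error patterns on a four-alternative forced-choice task; a much simpler tabulation over a handful of hypothetical trials (all data here are invented, not from the paper) illustrates the three quantities being estimated:

```python
# Hypothetical forced-choice trials (invented data, not from the paper):
# each record is (true color, true orientation, reported color, reported orientation).
trials = [
    ("red",   "left",  "red",   "left"),
    ("red",   "right", "red",   "left"),
    ("green", "left",  "red",   "left"),
    ("green", "right", "green", "right"),
]

n = len(trials)
p_color = sum(tc == rc for tc, to, rc, ro in trials) / n
p_orientation = sum(to == ro for tc, to, rc, ro in trials) / n
# "Conjunction" = both features reported correctly on the same trial.
p_conjunction = sum(tc == rc and to == ro for tc, to, rc, ro in trials) / n

print(p_color, p_orientation, p_conjunction)  # 0.75 0.75 0.5
```

Repeating this count at each grating frequency gives the three curves plotted in the paper.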
The data are graphed below:
B, Probability of discriminating the colors, orientations, and conjunctions present within a sequence, as a function of grating duration for two observers (top plot for subject MD and bottom plot for subject NJ). Plots show maximum likelihood estimates of the probabilities of discriminating the orientations (green points), the colors (blue points), and the conjunctions (red points) present in a sequence, as inferred from error patterns on the four-alternative forced-choice task. Vertical lines centered on each point indicate 95% confidence intervals on each estimate, as computed using Agresti-Coull interval estimation (Brown et al., 2001). Curves are Weibull function fits (Wichmann and Hill, 2001a) to the estimated probabilities for orientation (green), color (blue), and conjunction (red). The orange line is the product of the probabilities of perceiving the two features, which is the probability of discriminating conjunctions if features were independently discriminated and instantaneously integrated. Asterisks indicate points where p_color * p_orientation > p_conjunction, p < 0.05. Vertical dashed lines indicate the 75% threshold frequencies for orientation, color, and conjunction. Confidence intervals for thresholds were determined by bootstrap (Wichmann and Hill, 2001b) and are indicated by horizontal bars falling on the 0.75 line. C, Threshold grating durations for orientations, colors, and conjunctions, averaged over 12 observers. The average threshold for orientation is 8.4 ± 1.0 ms (mean ± SEM), for color is 22.9 ± 1.4 ms, and for conjunctions is 32.4 ± 1.6 ms.
What you see is very interesting. The green line is the probability of getting an orientation correct at a particular frequency. The blue line is the probability of getting a color correct at a particular frequency.
The orange line is what you would get hypothetically by multiplying the green and blue probabilities together. Basically, if orientation and color perception were carried out totally separately AND the two were bound instantaneously by the brain, this is what the conjunction probability would look like. However, this is not what happened.
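A minimal sketch of that null hypothesis, with made-up feature probabilities standing in for the paper's fitted curves:

```python
# Made-up discrimination probabilities at several grating durations (ms);
# the real values are the fitted psychometric curves in the paper.
durations_ms  = [5, 10, 20, 30, 40]
p_orientation = [0.55, 0.80, 0.95, 0.99, 1.00]
p_color       = [0.30, 0.45, 0.70, 0.90, 0.98]

# If features were discriminated independently and bound instantaneously,
# conjunction accuracy would simply be the product of the two
# feature probabilities -- the orange line in the figure.
p_independent = [po * pc for po, pc in zip(p_orientation, p_color)]

for d, p in zip(durations_ms, p_independent):
    print(f"{d:3d} ms: predicted conjunction = {p:.2f}")
```

The paper's finding is that the measured conjunction probabilities fall below this product, i.e., the red curve sits to the right of the orange one.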
What happened was the red line. The red line shows the probability of getting both features right at a particular frequency -- the probability of getting the conjunction, the perceptual binding, correct. Because this line falls to the right of the orange line, perceptual binding is not instantaneous: the grating has to move more slowly, giving the brain more time, before both features can be reported together.
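The figure caption notes that the curves are Weibull function fits. A minimal sketch of such a psychometric function and its 75% threshold (parameter values here are illustrative, not the paper's fits):

```python
import math

def weibull(x, alpha, beta, gamma=0.25):
    """Weibull psychometric function: accuracy as a function of grating
    duration x, rising from the guess rate gamma (0.25 for a
    four-alternative forced choice) toward 1. alpha sets the scale,
    beta the steepness."""
    return 1.0 - (1.0 - gamma) * math.exp(-((x / alpha) ** beta))

# The 75% threshold can be read off by inverting the function:
# 0.75 = 1 - 0.75 * exp(-(x/alpha)**beta)  =>  x = alpha * ln(3) ** (1/beta)
alpha, beta = 20.0, 2.0  # illustrative values, not the paper's
threshold = alpha * math.log(3) ** (1 / beta)
```

Fitting one such curve to each of the three sets of points is what yields the orientation, color, and conjunction thresholds quoted below.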
You can convert the threshold frequencies into the time the brain needs to perceive something. The graph on the right shows the threshold durations for orientation, color, and the conjunction. You can see that the conjunction takes about 10 ms longer than the slower single feature.
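Using the threshold durations reported in the figure caption (means over 12 observers), the extra time attributable to binding is just the difference between the conjunction threshold and the slower of the two feature thresholds:

```python
# Threshold grating durations from the paper, in ms (mean over 12 observers).
threshold_ms = {"orientation": 8.4, "color": 22.9, "conjunction": 32.4}

# Binding overhead: how much longer the conjunction needs than the
# slowest single feature (here, color).
slowest_feature = max(threshold_ms["orientation"], threshold_ms["color"])
binding_overhead_ms = threshold_ms["conjunction"] - slowest_feature

print(f"binding overhead ~ {binding_overhead_ms:.1f} ms")
```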
That is not a very long time, but it is very important. It is important because now we know that perceptual binding has to take place in some brain area. We know that because it takes time: the information has to be transmitted there and processed. The researchers have shown that brain processing is necessary for the perception of binding and that this process takes time.
In reference to the significance of her work, Bodelon had this to say:
"Nobody knew whether a separate computation step was necessary to integrate individual attributes of objects and, if so, how long it would take," explains Bodelon. "The fact that it takes time to reliably perceive the combination of color and orientation points to the existence of a distinct integration mechanism. We can now start to test different hypotheses about the nature of this mechanism," she adds.
Hat-tip: Eurekalert.
Interesting!
Though I agree with Bodelon's conclusion that "a separate computation step [is] necessary to integrate individual attributes of objects," it does not follow that a separate brain area is necessary for this computation.
One theory of perceptual binding is that it is achieved through synchronization of the spike trains from the neurons representing the features that are bound together. Synchronizing spike trains through reciprocal coupling takes time, but it does not require "some other brain area" beyond those representing the bound features. Rather, it's a dynamic property of those feature areas.
Interesting post, though I think that discussions about the so-called "binding problem" tend in my humble view to miss the point. Here's how I see it:
unbundling the binding problem and opportunity not problem.
By the way, I've added Pure Pedantry to my blogroll on mumbo jumbo, and it would be great if you would return the favour.
cheers
Just wanted to say I love this blog. I had a vague memory of this paper and sure enough, your blog directed me to it... thanks.