People are remarkably bad at switching tasks - and research on this fact has isolated a network of brain regions involved in task-switching (I'll call it the "frontal task network" for short). One of the stranger findings to emerge from this literature is that we're actually worse at switching to a more natural or well-practiced task after having performed a less natural one.
One potential explanation for this "switch cost asymmetry" is that the task network may recognize the potential for errors when performing the unnatural task, and therefore "help it along" through the "biasing" or active maintenance of task-relevant representations. These representations may in turn suppress competing representations (such as those involved in more natural or dominant tasks) through lateral inhibition, an established feature of biological neural networks. When it comes time to perform the dominant task, the frontal task network may not recognize the now-large risk of errors - and will therefore "tone down" its active maintenance of task-relevant representations. The end result is that more dominant tasks suffer a greater switch cost than non-dominant ones, because they don't get as much "help" from the frontal task network.
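To make that mechanism concrete, here is a minimal sketch - my own toy illustration, not a published model - of how top-down bias plus lateral inhibition could produce an asymmetric switch cost. All the parameter values (input strengths, bias amounts, inhibition and leak rates) are invented for illustration:

```python
import numpy as np

# Toy sketch of biased competition: two task representations inhibit each other,
# and the frontal network adds top-down "bias" to whichever task it expects to
# need help. All numbers below are made up for illustration.

def steps_to_respond(drive, start, target, inh=0.6, leak=0.3, dt=0.1, max_steps=400):
    """Iterate two mutually inhibiting units; return steps until the target unit wins."""
    act = np.array(start, dtype=float)
    drive = np.array(drive, dtype=float)
    for step in range(max_steps):
        net = drive - inh * act[::-1] - leak * act   # lateral inhibition + decay
        act = np.clip(act + dt * net, 0.0, 1.0)
        if act[target] > 0.9:
            return step
    return max_steps

# Unit 0 = dominant task (strong input, 0.5); unit 1 = non-dominant task (weak input, 0.3).
# Switching TO the non-dominant task: a strong top-down bias (+0.7) overcomes the
# residually active dominant representation.
fast = steps_to_respond(drive=[0.5, 0.3 + 0.7], start=[0.3, 0.0], target=1)

# Switching TO the dominant task: only a weak bias (+0.15), while the just-used
# non-dominant representation is still active and inhibiting it.
slow = steps_to_respond(drive=[0.5 + 0.15, 0.3], start=[0.0, 0.8], target=0)

print(fast, slow)  # the dominant task takes several times longer -> asymmetric switch cost
```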
To me, this explanation feels awfully post-hoc. Why should the task network be "smart" enough to anticipate errors in one situation, but too "dumb" to anticipate them in the immediately following situation? One answer to this question is presented in Brown & Braver's 2005 Science article, in which they describe how one component of this task network - anterior cingulate cortex - may not be performing online monitoring of errors and response conflict (as previous proposals about that region suggest) so much as comparing the current situation against a learned, historical record of error likelihood in each potential situation.
The basic idea is that dopamine encodes reward prediction error (but see opposing arguments), such that dips in dopamine reflect an over-anticipation of reward. In a clever reversal of previous proposals, Brown & Braver suggest that these dips may themselves act as the training signal for anterior cingulate, causing it to respond more strongly in situations where dips in dopamine have been more likely. This general scheme was implemented in an artificial neural network model with six layers: an error input layer (representing decreases in dopamine) projecting to an anterior cingulate layer, which also received input from two layers representing stimulus features. The ACC layer in turn projected to a control layer, which provided non-specific inhibition at the response layer, producing generally slowed responses. Importantly, the authors employed both Kohonen-like local lateral excitation and global lateral inhibition in the ACC layer.
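A heavily simplified sketch of that training signal (my own simplification - the published model uses a full multilayer network with self-organizing dynamics in the ACC layer): each stimulus context gets an ACC response that is trained, delta-rule style, by whether a dopamine dip followed it. The context names and error rates below are assumptions chosen for illustration:

```python
import random

random.seed(0)
# Assumed task statistics: the "long_delay" context produces errors more often.
error_prob = {"short_delay": 0.1, "long_delay": 0.5}
acc_response = {ctx: 0.0 for ctx in error_prob}   # learned error-likelihood signal
lr = 0.05                                         # learning rate

for trial in range(3000):
    ctx = random.choice(list(error_prob))
    dopamine_dip = 1.0 if random.random() < error_prob[ctx] else 0.0
    # The dip (or its absence) is the teaching signal for the ACC response:
    acc_response[ctx] += lr * (dopamine_dip - acc_response[ctx])

print(acc_response)   # converges toward roughly {'short_delay': 0.1, 'long_delay': 0.5}
```

The point of the reversal is visible here: nothing in this loop monitors ongoing performance; the ACC signal is simply a running estimate of how often each context has ended in a dopamine dip.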
The network's task involved activating one of two output units, depending on whether one or two of the input units in a given input layer had become active. However, for one of the input layers (what I'll call the "long-delay layer"), the second unit would become active only relatively late in the trial - leaving the network little time to activate the correct output. Over the course of training on this simple paradigm, the ACC layer learned that errors were more likely when inputs were active in the long-delay layer, and therefore came to inhibit activation in the output layer, providing more time for a correct response.
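To see why slowing helps, here is a toy race - again my own simplification, not the authors' implementation: the prepotent "one-unit" response fires at a noisy latency, the trial is only correct if the late-arriving second input beats that latency, and ACC-driven slowing buys the needed time. The latencies and delays are invented numbers:

```python
import random

random.seed(1)

def change_trial_accuracy(change_delay, acc_slowing, n=5000):
    """Fraction of trials on which the second input arrives before the prepotent response fires."""
    correct = 0
    for _ in range(n):
        prepotent_rt = random.gauss(10, 2) + acc_slowing   # ACC adds non-specific slowing
        correct += change_delay < prepotent_rt
    return correct / n

print(change_trial_accuracy(change_delay=6,  acc_slowing=0))   # short delay: mostly correct
print(change_trial_accuracy(change_delay=12, acc_slowing=0))   # long delay: frequent errors
print(change_trial_accuracy(change_delay=12, acc_slowing=5))   # ACC-driven slowing restores accuracy
```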
The authors noticed an unanticipated difference between this model and a competing model that measures conflict only as the coactivation of mutually exclusive units at the response layer: the error-likelihood model, but not the conflict model, showed activity in the ACC layer even when no second input unit became active in either layer. In other words, the network showed ACC activity and slowed responding even on trials where it encountered no response conflict at all.
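The two predictions can be stated in a line each (both formulations simplified here): a conflict signal depends on the coactivation of incompatible response units, while the error-likelihood signal depends only on the learned history of the current context, so the two come apart on high-risk trials where only one response ever becomes active:

```python
# Simplified formulations of the two signals (illustrative only):

def conflict_signal(resp_a, resp_b):
    return resp_a * resp_b        # coactivation of mutually exclusive response units

learned_error_likelihood = {"short_delay": 0.1, "long_delay": 0.5}   # assumed learned values

# A long-delay trial on which no second input ever appears: only one response is active.
print(conflict_signal(0.9, 0.0))               # 0.0 -> conflict model predicts no ACC activity
print(learned_error_likelihood["long_delay"])  # 0.5 -> error-likelihood model predicts ACC activity
```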
The authors then tested these differing predictions with fMRI, and demonstrated that two regions of interest in the human anterior cingulate showed exactly the predicted pattern of activity: 1) greater activity during "change trials" than "go trials," as well as greater activity during both 2) change and 3) go trials where errors were more likely relative to those where they were less likely. Furthermore, the error-likelihood model predicted that ACC activity should become stronger over the course of the session, whereas the vanilla conflict-monitoring model did not; this prediction was also confirmed with fMRI in human subjects, whose ACC reactivity gradually increased over the course of several trials.
This new proposal addresses a shortcoming of previous models of anterior cingulate function, in which the region was thought to provide only an online "gauge" of conflict at the response layer. The new version allows conflict-prone scenarios to be learned, and thus actively prepared for, in situations where conflict is likely. Future research will be important in determining whether this proposal is developmentally accurate as well - that is, whether children show a prolonged development of anterior cingulate, as they do in other regions of the frontal task network.
wow... that's interesting. This means we can't see the ACC simply as a performance monitor anymore, but perhaps as a "warning" system that makes the brain more alert to situations carrying a higher risk of error.
Perhaps this new finding can be applied to tasks like the Iowa Gambling Task (IGT) to further determine which parts of the prefrontal cortex support this warning system.
On another note, this suggests that the brain might be wired into one of the most primitive systems seen in other mammals, the "fight or flight" system, since monitoring of past errors would greatly enhance a person's readiness to react under "fight or flight" conditions. This may be the next step in discovering individual differences in how people adapt from past mistakes.
Right - those are exactly the kinds of wider implications this research has! Good observations.