Among nature's most impressive feats of engineering is the remarkably flexible and self-optimizing quality of human cognition. People seem to dynamically determine whether speed or accuracy is of utmost importance in a certain task, or whether they should continue with a current approach or begin anew with another, or whether they should rely on logic or intuition to solve a certain problem. A topic of intense research in cognitive neuroscience is how cognition can be made so flexible.
One possibility, proposed by Brown, Reynolds & Braver, is that cognitive control is multi-faceted, in that different forms of control are engaged in response to different scenarios. For example, if tasks or responses shift unpredictably, then behavior may be generally slowed in order to reduce the chance of responding before stimuli have been adequately processed. Alternatively, even if the task and responses remain the same, new stimuli may appear that disrupt ongoing processing. A different form of control may then engage increased or "tightened" attentional focus, so as to reduce interference from irrelevant or incongruent stimuli.
Brown et al. argue that these two functions fill distinct computational roles: general slowing is not helpful if the conflict arises from incongruent stimuli, and tightened focus on the current task is not helpful if that task has unpredictably switched to something entirely different.
To test whether this design is sufficient to account for cognitive flexibility, the authors implemented these dual control mechanisms in an artificial neural network model. The model includes a few fundamental mechanisms: active maintenance of goals through recurrent connectivity in a "task set" layer; an "incongruency detector" layer, which is activated when multiple conflicting tasks become active in the task layer (and which amplifies the biasing influence of the task layer via "set strengthening"); and finally, a "change detector" layer, which is activated when responses or tasks change across trials (and which slows down all processing by inhibiting tonic excitation of the response pathway).
The model was implemented in RNS++, and consisted of a Stroop-like network (including input layers for the target and for the current task [the "cue" input], a hidden layer, an output layer, and a task layer which could actively maintain the cue and influence processing in the hidden layer) with lateral inhibition between tasks (forcing them to compete with one another for dominance in the network).
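To make this architecture concrete, here is a minimal sketch of the layer structure and connectivity in Python/NumPy. The layer sizes, weight values, and names are illustrative assumptions on my part, not the published RNS++ parameters.

```python
import numpy as np

# Illustrative layer sizes (not the published parameters).
n_tasks, n_features, n_hidden, n_outputs = 2, 4, 4, 2

layers = {
    "cue":    np.zeros(n_tasks),     # externally applied task cue
    "target": np.zeros(n_features),  # externally applied target stimulus
    "task":   np.zeros(n_tasks),     # actively maintains the current task set
    "hidden": np.zeros(n_hidden),
    "output": np.zeros(n_outputs),
}

# Recurrent self-excitation lets the task layer maintain the cue after it is
# removed; lateral inhibition forces the task units to compete for dominance.
W_task_recurrent = 0.8 * np.eye(n_tasks) - 0.5 * (1 - np.eye(n_tasks))

# Feedforward pathway: cue -> task; (target, task) -> hidden -> output.
rng = np.random.default_rng(0)
W_cue_task   = np.eye(n_tasks)
W_target_hid = rng.uniform(0.1, 0.3, (n_hidden, n_features))
W_task_hid   = rng.uniform(0.1, 0.3, (n_hidden, n_tasks))   # top-down task bias
W_hid_out    = rng.uniform(0.1, 0.3, (n_outputs, n_hidden))
```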
A layer of four "incongruency detector" units received input from incompatible units in the task and hidden layers (this was hardwired from the start); if transiently activated, it would then activate a second recurrently-excitatory layer which itself increased activity in the task layer. This has the ultimate effect of upregulating goal-relevant activity throughout the network.
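A rough sketch of that set-strengthening loop might look like the following; the incompatibility pairs, gains, and clipping are my own simplifications for illustration rather than the model's actual equations.

```python
import numpy as np

def set_strengthening_step(task, hidden, strengthener, incompatible_pairs,
                           gain_detect=1.0, gain_recur=0.7, gain_boost=0.3):
    """One update of the incongruency-detector -> set-strengthening loop.

    incompatible_pairs: hardwired (task_unit, hidden_unit) index pairs whose
    coactivation signals conflict. All gains are illustrative placeholders.
    """
    # Detector units respond to coactivation of incompatible task/hidden units.
    detectors = np.array([gain_detect * task[i] * hidden[j]
                          for i, j in incompatible_pairs])
    # The strengthening layer is recurrently excitatory, so a transient detector
    # signal can sustain elevated activity after the conflict has passed.
    strengthener = min(1.0, gain_recur * strengthener + detectors.sum())
    # Its output amplifies whatever is currently active in the task layer
    # ("tightened" top-down focus), reducing interference from incongruent input.
    task = np.clip(task * (1.0 + gain_boost * strengthener), 0.0, 1.0)
    return task, strengthener
```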
Similarly, two "change detector" units each received input from either the task or output layers; if activated, these units ultimately inhibited a unit that was providing a persistent excitatory influence on the response layer, causing a slowdown in the rate of activation change at the output layer. This has the ultimate effect of slowing down responses, by slowing the rate of activity accumulation in those units.
Activation of units was calculated on the basis of excitatory, inhibitory, and leak currents, following the neurobiology of membrane dynamics. Connections between units were changed on the basis of associative/Hebbian learning with a fast decay parameter, such that Hebbian learning essentially served as a priming mechanism (temporarily strengthening the connections between coactivated units).
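A hedged illustration of both ingredients: a point-neuron style activation update driven by excitatory, inhibitory, and leak currents, and a fast-decaying Hebbian increment that acts as priming. The reversal potentials, gains, and decay rate are stand-in values, not the model's.

```python
import numpy as np

def update_activation(v, g_e, g_i, g_l=0.1, E_e=1.0, E_i=0.25, E_l=0.15,
                      dt=0.1, threshold=0.25, gain=10.0):
    """Membrane-style update: each conductance (excitatory, inhibitory, leak)
    drives the potential toward its own reversal potential; the unit's output
    is a thresholded sigmoid of that potential."""
    dv = g_e * (E_e - v) + g_i * (E_i - v) + g_l * (E_l - v)
    v = v + dt * dv
    activity = 1.0 / (1.0 + np.exp(-gain * (v - threshold)))
    return v, activity

def hebbian_priming(W_fast, pre, post, lrate=0.2, decay=0.5):
    """Fast-decaying Hebbian component layered on top of fixed base weights:
    coactivation temporarily strengthens a connection, and the increment decays
    back toward zero over subsequent trials (priming rather than learning)."""
    return (1.0 - decay) * W_fast + lrate * np.outer(post, pre)
```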
On any given trial, the network received a cue input, triggering maintenance of the current task in the task layer through recurrent self-excitation. If this was a new task, the "change detector" layer might become active based on transient activation of mutually exclusive tasks in the task layer, causing slowed response times. Subsequently, a target stimulus was presented to the network: if this target conveyed information that was irrelevant to the currently activated task, the "incongruency detector" units would become active and further increase activity in the task layer. If multiple response units became active, as might occur when a previously inactive response unit was now becoming active, the "change detector" layer would also become active and decrease the level of activation in the response layer. Ultimately, activation due to the task layer and the target would propagate through the hidden layer and activate a unit at the output layer, signifying the network's response.
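Putting the pieces together, a schematic trial loop might look like this. It reuses the illustrative layers, weights, and update functions sketched above (so it assumes those definitions are in scope), and it treats change-detector activity as a fixed placeholder rather than modeling its trial-to-trial dynamics.

```python
# Hardwired incongruency pairs (illustrative); change_activity stands in for
# the change-detector signal, which in the model is driven by task/response
# transients across trials.
incompatible_pairs = [(0, 2), (1, 0)]
change_activity = 0.0

def run_trial(cue_pattern, target_pattern, n_steps=200, resp_threshold=0.8):
    """Present the cue, let the task layer maintain it, then present the target
    and accumulate output activity until one response unit crosses threshold."""
    layers["output"][:] = 0.0
    layers["cue"][:] = cue_pattern
    layers["target"][:] = target_pattern
    strengthener = 0.0
    for t in range(n_steps):
        # Task layer: cue input plus recurrent maintenance and lateral inhibition.
        layers["task"] = np.clip(W_cue_task @ layers["cue"]
                                 + W_task_recurrent @ layers["task"], 0.0, 1.0)
        # Hidden layer: bottom-up target input biased top-down by the task layer.
        layers["hidden"] = np.clip(W_target_hid @ layers["target"]
                                   + W_task_hid @ layers["task"], 0.0, 1.0)
        # Control loops: set strengthening and (placeholder) change detection.
        layers["task"], strengthener = set_strengthening_step(
            layers["task"], layers["hidden"], strengthener, incompatible_pairs)
        layers["output"] = response_update(
            layers["output"], layers["hidden"], W_hid_out, change_activity)
        if layers["output"].max() > resp_threshold:
            return int(np.argmax(layers["output"])), t   # response and RT (steps)
    return None, n_steps   # no response before the deadline
```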
This model was fit to trial-by-trial reaction time data acquired from 16 human subjects in a task-switching paradigm. Despite having 21 free parameters (used as degrees of freedom in statistical tests), the model significantly predicted many of the characteristics of human performance, including slowed reaction time on trials where the task had changed (switch cost), where the response had changed (alternation cost), where task-incongruent stimuli were also presented (incongruency effect), and where previous trials had been either alternation or switch trials.
One difference between the model's data and that acquired from humans is that the model responded too slowly on task-repeat trials, although only one out of 64 modeled data points was significantly different from the human data. The importance of this apparent discrepancy is put into perspective when one considers how accurately the model predicted many detailed interactions, such as the increased switch cost when the previous trial was congruent but the current one incongruent, and the decreased switch cost when both the previous and current trials were incongruent. In addition, the model simulated human errors and speed-accuracy effects, as well as the effects of trials more than three trials back, despite not being fit to these data explicitly.
The authors also damaged the network's change and incongruency detector layers, and then refit the model to the data, to determine whether these additions to the typical task-switching network architecture were warranted. The results showed that, despite the increase in free parameters, the fit was significantly better for the model with these additional cognitive control layers. In particular, these layers seemed important for capturing sequential effects across multiple trials in the reaction time and error data.
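For readers curious how such a comparison can account for the difference in free parameters, here is one generic approach (an AIC comparison on residuals). This is an assumption made for illustration, not necessarily the statistical procedure the authors used.

```python
import numpy as np

def aic_from_residuals(residuals, k):
    """AIC for a least-squares fit with k free parameters, assuming Gaussian
    residuals; lower values indicate a better fit after penalizing parameters."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * k

# Hypothetical usage, comparing the intact model (all 21 parameters) with a
# lesioned model lacking the control layers (and their associated parameters):
#   aic_intact   = aic_from_residuals(human_rt - intact_model_rt, k=21)
#   aic_lesioned = aic_from_residuals(human_rt - lesioned_model_rt, k=k_lesioned)
```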
One of the most interesting aspects of the model is that it predicts a very counter-intuitive finding from the task switching literature: it's actually harder to switch to a frequent task from a less frequent task than vice versa. The model captures this effect because the incongruency detector upregulates maintenance of the current goal most strongly when that goal is the weaker, less frequent task (which faces more interference from the dominant task), and this strengthened task set is then more difficult to overcome when switching to the more frequent task.
The model also captures residual switch cost: despite having a long time to prepare for a task switch, target stimuli may still have a short-term "pairing" with a previous task-set, resulting in slowed RTs despite the adequate updating of the task layer.
In their conclusion the authors argue that anterior cingulate cortex may implement something like the incongruency detector, similar to previous models of conflict detection. In addition, however, they suggest that the supplementary eye fields may implement the "change detector," based on evidence of their increased activity in monkeys performing antisaccade tasks.
This paper is most impressive for the general computational architecture it proposes, which might be helpful in the creation of a system approaching "cognitive flexibility" using only simple mechanisms. Adaptations of this general scheme (including those that might include "infrequency sensitivity" as a mechanism for predicting conflict) will be important avenues for future research.
hey, great post, really interesting find...
Quote:
"One of the most interesting aspects of the model is that it predicts a very counter-intuitive finding from the task switching literature: it's actually harder to switch to a frequent task from a less frequent task than vice versa."
in my opinion, the longer time it takes to switch back to a more frequent task might be interpreted in another way. It takes more time to switch back to frequent tasks from not-so-frequent tasks. Perhaps the time difference is caused by the person accumulating information related to the task he is told to do. This could account for the finding because: (1) the more information that needs to be accumulated, the more time it takes to switch tasks; and (2) if neurons fire together and their information reaches "working memory" together, then more information may take more time to compute. The shorter time taken to switch to less frequent tasks might simply reflect that there is less stored information to work through, compared to what has accumulated for frequent tasks.
Maybe one could test this possibility by comparing experts to novices in the amount of time taken to switch tasks.