Developing Intelligence

A principal insight from computational neuroscience for studies of higher-level cognition is rooted in the recurrent network architecture. Recurrent networks are, very simply, networks of neurons that connect back to themselves, enabling them to learn to maintain information over time that may be important for behavior. Elaborations on this basic framework have incorporated mechanisms for flexibly gating information into and out of these recurrently-connected neurons. Such architectures appear to approximate well the function of prefrontal and basal ganglia circuits, which seem specialized for maintenance and gating, respectively.
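In cartoon form, the maintenance-plus-gating scheme looks something like this (a toy sketch of my own, not any specific published model — the gate variables and their semantics are my assumptions):

```python
# Toy sketch of gated maintenance (not any specific published model).
# A "PFC-like" memory variable holds its value over time; a
# "basal-ganglia-like" input gate decides when new input overwrites it,
# and an output gate decides when the stored value drives behavior.

def step(memory, candidate, gate_in, gate_out):
    # gate_in = 1: load the candidate; gate_in = 0: maintain as-is
    memory = gate_in * candidate + (1.0 - gate_in) * memory
    # gate_out = 1: let the stored value influence downstream processing
    output = gate_out * memory
    return memory, output

m, out = step(0.0, 5.0, gate_in=1.0, gate_out=0.0)  # store 5, keep it private
m, out = step(m, 9.0, gate_in=0.0, gate_out=1.0)    # ignore 9, report the store
print(m, out)  # 5.0 5.0
```

The point of the cartoon is that maintenance (the `memory` term) and gating (the `gate_in`/`gate_out` terms) are separable mechanisms, which is exactly the division of labor attributed to prefrontal and basal ganglia circuits above.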

However, these architectures do not capture some subtleties in the temporal dynamics of prefrontal cortex and other regions showing delay-period activity. Single-unit recording studies show that individual neurons in the prefrontal cortex fire according to complex temporal patterns: some neurons decay in their firing rates over time, others ramp their firing rates up, and yet others show more chaotic trajectories. Furthermore, self-connected neurons (via so-called “autapses”) have not been observed in the prefrontal cortex (as far as I know), though it’s of course possible that neurons are self-connected indirectly (neuron A talks to B, which talks back to A).

And there’s the rub – models of high-level cognition don’t actually implement things this way anymore! Models with subcortical gating mechanisms into PFC – clearly important for high-level cognition – typically “abstract away” from the indirect self-connection scheme that was present in the simple recurrent networks of the ’90s, basically utilizing autapses instead. A fascinating paper in Neuron by Mark Goldman shows why this abstraction may actually be a step in the wrong direction.

Goldman begins by describing how a “functionally feedforward” mechanism (i.e., neurons all talk to one another, but not directly to themselves) can generate persistent delay activity that looks much like what’s observed in PFC. Critically, this includes neurons that fire early, late, in the middle of the delay period, or a mixture of these, but only a very small minority (5%) that responds consistently throughout the delay period. In contrast, such complex temporal dynamics would not be generated by a purely self-recurrent network with autapses (a feature of the neural data that is not often acknowledged in models of this sort).
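To make the idea concrete, here’s a minimal toy simulation of my own (not Goldman’s actual network): a chain of neurons with no self-connections whose activity nonetheless spans a delay period, with different neurons peaking early, in the middle, or late.

```python
import numpy as np

# Minimal sketch (my toy, not Goldman's model) of a "functionally
# feedforward" circuit: neurons talk to one another along a chain,
# and no neuron talks directly to itself (the diagonal of W is zero).
# A brief input still yields delay-period activity, with different
# neurons active at different times.

N, T = 20, 60                 # 20 neurons, 60 time steps (arbitrary)
W = np.zeros((N, N))
for i in range(N - 1):
    W[i + 1, i] = 1.0         # neuron i drives neuron i+1 only

r = np.zeros((T, N))
r[0, 0] = 1.0                 # transient input to the first neuron
for t in range(1, T):
    r[t] = W @ r[t - 1]       # r(t) = W r(t-1): pure propagation

peak_times = r.argmax(axis=0)
print(peak_times)             # 0, 1, 2, ... -- activity tiles the delay
```

No single unit here looks like a classic sustained “memory neuron,” yet at the population level the stimulus is represented throughout the delay — which is the flavor of the data Goldman is pointing to.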

One of the interesting techniques for the analysis of neural data introduced by Goldman is the use of the Schur decomposition, which identifies feedforward as well as feedback patterns in neuronal firing. More traditional eigenvector-based analyses of neural firing patterns do not reveal the former, and can lead to overestimates of neuronal responses by nearly a factor of 3.
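Here’s a toy example of the distinction (my own illustration — the matrix and numbers have nothing to do with the paper’s data), using SciPy’s `schur` routine:

```python
import numpy as np
from scipy.linalg import schur

# Toy illustration (my example, not the paper's data) of why the Schur
# decomposition sees structure that an eigenvector analysis misses.
# W mixes feedback (diagonal) with a strong feedforward term; such a
# matrix is non-normal, so its eigenvectors are far from orthogonal.

W = np.array([[0.9, 4.0],
              [0.0, 0.5]])

# Schur: W = Q T Q^T with Q orthonormal and T upper triangular. The
# diagonal of T holds the eigenvalues (the feedback part); the strictly
# upper-triangular entries expose feedforward coupling between modes.
T, Q = schur(W)
feedback = np.diag(T)
feedforward = T[0, 1]

# The eigenvectors of the same matrix are nearly parallel -- a warning
# sign that eigenvector-based response estimates can be badly inflated.
evals, evecs = np.linalg.eig(W)
overlap = abs(evecs[:, 0] @ evecs[:, 1])  # 0 would mean orthogonal

print(np.sort(feedback))   # the feedback (eigenvalue) part: 0.5 and 0.9
print(abs(feedforward))    # the feedforward part, invisible to a pure
                           # eigenvalue summary
print(overlap)             # close to 1: highly non-orthogonal eigenvectors
```

The factor-of-3 overestimate is a specific result from the paper; the sketch above only shows the qualitative mechanism behind it (non-normality of the connectivity matrix).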

Goldman concludes with a fascinating discussion of how feedforward interactions may underlie many features of delay-period activity previously assumed to depend on recurrently connected or feedback-based neural networks. While feedback-based models are simpler and require fewer neurons, feedforward models of persistent neural activity have a number of other advantages: they can maintain multiple distinct modes of activity across the delay period; they can demonstrate more complex delay-period patterns of firing, enabling the intrinsic representation of temporal duration; and they have “the possible advantages of a built-in reset and clearing of their memory buffer and robustness against runaway growth.”
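The reset and runaway-growth points can be illustrated with a toy comparison of my own (not from the paper): a single autapse-style memory neuron versus a feedforward chain.

```python
import numpy as np

# Toy comparison (my illustration, not the paper's) of the "built-in
# reset" and "runaway growth" claims. An autapse-style memory neuron
# obeys r(t) = w * r(t-1): with w = 1 it holds its value forever, but
# any mistuning of w makes the memory decay or blow up. A feedforward
# chain of N neurons instead holds activity for exactly N steps and
# then clears itself.

def autapse(w, r0=1.0, steps=50):
    r = r0
    for _ in range(steps):
        r = w * r
    return r

def chain_population(N=10, steps=50):
    """Total population activity over time in an N-neuron chain."""
    W = np.diag(np.ones(N - 1), -1)  # neuron i -> neuron i+1, no autapses
    r = np.zeros(N)
    r[0] = 1.0
    totals = []
    for _ in range(steps):
        totals.append(float(r.sum()))
        r = W @ r
    return totals

print(autapse(1.00))  # 1.0 -- perfect tuning holds the memory
print(autapse(1.01))  # ~1.64 -- slight mistuning is already growing
totals = chain_population()
print(totals[:12])    # ten 1.0s, then 0.0: the chain clears itself
```

Note that the chain’s “memory duration” is just its length — which is also exactly the limitation I raise below.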

It’s this last claim that seems a little simplistic to me. Many of the autapses in the brain are inhibitory, such that neurons negatively regulate their own firing. If feedforward networks of the kind advocated here really underlie neural processing, then such negative self-regulation would seem unnecessary. Furthermore, the built-in reset and “memory clearing” just seem like positive spin on what is actually a shortcoming: neural tissue with feedforward functionality of this type cannot by itself explain indefinite delay-period activity without additionally positing a differently-architected source of delay-period activity in some other region of neural tissue.


  1. #1 Ron C. de Weijze
    March 1, 2009

    Do I understand correctly that if there is no recurrent connectivity, impressions must somehow be kept? That would not be working memory but ‘random access’ memory. Not sure about the adequacy of the cpu metaphor though.

  2. #2 D.S. Blank
    April 18, 2009

    Interesting comments on these newer models. Some of us are still using these simple models from a decade ago, so this is quite useful information. Our research group doesn’t think we fully understand what these simple recurrent models do, and haven’t moved on yet. Thanks!