Computational Modeling

Developing Intelligence


When I started this blog back in ’06, new hypotheses were appearing on a possible functional architecture of the lateral prefrontal cortex – a recently-evolved brain area implicated in high-level cognitive functions like planning, analogical reasoning, and cognitive control. Since then, these hypotheses have been refined, and the results replicated numerous times. Today, it’s essentially…

How many times did Pavlov pair the bell with his dogs’ meals before the dogs began to salivate? Surely the number of experiences must make a difference, as anyone who’s trained a dog would attest. As described in a brilliant article by C.R. Gallistel (in Psych. Review; preprint here), this has been thought so self-evident…

Every now and then, I read some science from some other dimension. That is, the methods are so unusual, the relevant theories so fringe, or the conclusions so startling that I feel like the authors must be building on work from a completely separate science, with its own theories and orthodoxy. This can be good…

Most computational models of working memory do not explicitly specify the role of the parietal cortex, despite a growing number of observations that the parietal cortex is particularly important for working memory. A new paper in PNAS by Edin et al. remedies this state of affairs by developing a spiking neural network model that accounts…

One theoretical model of the prefrontal cortex posits that we achieve goal-directed behavior via “biased competition” – that is, representations of our current goals and context are maintained in the prefrontal cortex and exert an influence on downstream areas, ultimately biasing our behavior in a goal-directed and context-appropriate way. In theory, this relatively simple…
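The biased-competition idea can be sketched in a few lines: two stimulus representations compete, and a top-down goal signal adds a small bias that can flip which one wins. This is a toy illustration with made-up numbers, not the model discussed in the post:

```python
def biased_competition(bottom_up, top_down_bias):
    """Winner-take-all competition with an additive top-down bias.

    bottom_up: stimulus-driven activation for each representation.
    top_down_bias: a (hypothetical) prefrontal goal signal added to each.
    Returns the index of the winning representation.
    """
    activations = [b + g for b, g in zip(bottom_up, top_down_bias)]
    return max(range(len(activations)), key=lambda i: activations[i])

# Stimulus 0 is stronger bottom-up, but a goal bias toward stimulus 1
# changes the outcome of the competition.
winner_no_bias = biased_competition([0.6, 0.5], [0.0, 0.0])
winner_biased = biased_competition([0.6, 0.5], [0.0, 0.3])
```

The point of the sketch is that the prefrontal signal never dictates behavior directly; it merely tips an otherwise stimulus-driven competition.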

A principal insight from computational neuroscience for studies of higher-level cognition is rooted in the recurrent network architecture. Recurrent networks, very simply, are those composed of neurons that connect to themselves, enabling them to learn to maintain information over time that may be important for behavior. Elaborations to this basic framework have incorporated mechanisms for…
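That core idea – a neuron whose output feeds back into its own input – can be shown with a toy rate-coded unit. The parameters here are invented for illustration and don’t correspond to any specific published model:

```python
def run_recurrent_unit(inputs, w_self, n_steps):
    """Simulate one rate-coded neuron with a self-connection.

    With a self-connection weight near 1, activity injected by a brief
    input persists after the input ends -- the basic way recurrent
    networks maintain information over time.
    """
    activity = 0.0
    trace = []
    for t in range(n_steps):
        drive = inputs[t] if t < len(inputs) else 0.0
        activity = w_self * activity + drive  # feedback via the self-connection
        trace.append(activity)
    return trace

# A one-step input pulse: a perfect self-connection sustains the memory,
# while a weaker one lets it decay away.
sustained = run_recurrent_unit([1.0], w_self=1.0, n_steps=10)
decayed = run_recurrent_unit([1.0], w_self=0.5, n_steps=10)
```

With `w_self=1.0` the final activity still equals the original input; with `w_self=0.5` it has decayed to nearly zero, so no information about the input survives the delay.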

It’s been said that psychology is a primitive discipline – stuck in the equivalent of pre-Newtonian physics. Supposedly we haven’t discovered the basic principles underlying cognition, and are instead engaged in a kind of stamp collecting: arguing about probabilities that various pseudo-regularities are real, without having any overarching theory. Some of this criticism is deserved,…

Reductionism in the neurosciences has been incredibly productive, but it has been difficult to reconstruct how high-level behaviors emerge from the myriad biological mechanisms discovered with such reductionistic methods. This is most clearly true in the case of the motor system, which has long been studied as the programming of motor actions (at its least…

An astonishing recent discovery in computational neuroscience is the relationship between dopamine and the “temporal differences” reinforcement learning algorithm (which Jake describes wonderfully here, and I’ve described in a little more detail here). The essential principle is that the difference between expected and received reward can be used to drive learning, and that this abstract…
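That essential principle can be written as a one-line update rule. The sketch below is the standard TD(0) value update with arbitrary parameter choices, included only to make the prediction-error idea concrete:

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-differences update.

    delta is the prediction error (received vs. expected reward) --
    the quantity that phasic dopamine is thought to signal. It drives
    the value estimate toward the discounted outcome.
    """
    delta = reward + gamma * next_value - value
    return value + alpha * delta, delta

# A cue reliably followed by reward: its value climbs toward the reward
# magnitude, and the prediction error shrinks as learning proceeds.
v = 0.0
for _ in range(200):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
```

After repeated pairings the value estimate converges on the reward and the error signal vanishes, mirroring the classic finding that dopamine responses transfer from the reward to the predictive cue.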

There’s little evidence that “staging” the training of neural networks on language-like input – feeding them part of the problem space initially, and scaling it up as they learn – confers any consistent benefit for their long-term learning (as reviewed yesterday). To summarize that post, early computational demonstrations of the importance of…