
Developing Intelligence

Category archives for Artificial Intelligence

A principal insight from computational neuroscience for studies of higher-level cognition is rooted in the recurrent network architecture. Recurrent networks, very simply, are those composed of neurons that connect to themselves, enabling them to learn to maintain information over time that may be important for behavior. Elaborations to this basic framework have incorporated mechanisms for…
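To make the self-connection idea concrete, here is a minimal NumPy sketch of an Elman-style recurrent step; the sizes, weights, and inputs are arbitrary illustrations of my own, not anything from the post:

```python
import numpy as np

# Minimal Elman-style recurrent step: hidden units connect back to themselves
# through W_hh, so the hidden state can carry information forward in time
# even after the input that produced it has gone away.

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8                                    # arbitrary sizes

W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)
b_h = np.zeros(n_hidden)

def step(h_prev, x):
    """One time step: the new state depends on the input and on the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Present a brief input, then silence: the recurrent weights let a trace persist.
h = np.zeros(n_hidden)
inputs = [np.ones(n_in)] + [np.zeros(n_in)] * 5
for t, x in enumerate(inputs):
    h = step(h, x)
    print(f"t={t}", np.round(h, 2))
```

Because the hidden state feeds back into itself through W_hh, activity driven by the first input is still visible several silent steps later, which is the property that makes these networks candidates for maintaining behaviorally relevant information over time.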

Reductionism in the neurosciences has been incredibly productive, but it has been difficult to reconstruct how high-level behaviors emerge from the myriad biological mechanisms discovered with such reductionist methods. This is most clearly true in the case of the motor system, which has long been studied as the programming of motor actions (at its least…

There’s little evidence that “staging” the training of neural networks on language-like input – feeding them part of the problem space initially, and scaling that up as they learn – confers any consistent benefit for their long-term learning (as reviewed yesterday). To summarize that post, early computational demonstrations of the importance of…

An early classic in computational neuroscience was a 1993 paper by Elman called “The Importance of Starting Small.” The paper describes how initial limitations in a network’s memory capacity could actually be beneficial to its learning of complex sentences, relative to networks that were “adult-like” from the start. This still seems like a beautiful idea…
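As a rough schematic of the two “starting small” regimes under discussion, the sketch below contrasts staging the input (only simple sentences early on) with limiting the network’s memory (a truncated window that widens with training); the toy corpus, lengths, and schedules are illustrative assumptions of mine, not Elman’s actual simulations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus: "sentences" of varying length, with length standing in for complexity.
corpus = [rng.integers(0, 10, size=length) for length in rng.integers(2, 12, size=200)]

def staged_inputs(corpus, epoch, max_epochs):
    """'Starting small' via the input: early epochs see only short (simple)
    sentences, and the admissible length grows over training."""
    max_len = 2 + int((epoch / max_epochs) * 10)
    return [s for s in corpus if len(s) <= max_len]

def staged_memory(sentence, epoch, max_epochs):
    """The alternative regime: the network sees everything, but its effective
    memory window is truncated early on and widens as training proceeds."""
    window = 2 + int((epoch / max_epochs) * 10)
    return sentence[-window:]

max_epochs = 5
for epoch in range(max_epochs):
    simple_subset = staged_inputs(corpus, epoch, max_epochs)
    truncated = [staged_memory(s, epoch, max_epochs) for s in corpus]
    print(f"epoch {epoch}: {len(simple_subset)} sentences pass the length filter; "
          f"widest memory window used: {max(len(t) for t in truncated)}")
```

The second regime corresponds to the memory-limitation result described above; the first is the input-staging variant discussed in the previous excerpt.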

What if training ourselves on one task yielded improvements in all other tasks we perform? This is the promise of the cognitive training movement, which is increasingly showing that such “far transfer” of training is indeed possible, even if it falls short of “universal transfer.” Interestingly, this phenomenon might be most likely to occur for some of…

Much evidence supports the idea that the parietal cortex is involved in the simple maintenance of information, such as in object permanence paradigms and other tasks. This evidence is part of the justification for the “parieto-frontal integration theory”, which suggests that parietal areas work in concert with prefrontal regions of the brain to accomplish…

To enhance any system, one first needs to identify its capacity-limiting factor(s). Human cognition is a highly complex and multiply constrained system, consisting of both independent and interdependent capacity limitations. These “bottlenecks” in cognition are reviewed below as a coherent framework for understanding the plethora of cognitive training paradigms that are currently associated with enhancements of…

Working memory – the ability to hold information “in mind” in the face of environmental interference – has traditionally been associated with the prefrontal cortices (PFC), based primarily on data from monkeys. High-resolution functional imaging (such as fMRI) has revealed that PFC is just one part of a larger working memory network, notably including…

The organization of the human prefrontal cortex (PFC) is a lasting mystery in cognitive neuroscience, but not for lack of answers – the issue is deciding among them, since all seem to characterize prefrontal function in very different but apparently equally valid ways. If this mystery were resolved, it could revolutionize cognitive neuroscience and neuropsychology as…

Peter Hankins has written an excellent commentary criticizing the “positive comparisons” I make after contrasting brains with computers. Peter says: “… the concept of processing speed has no useful application in the brain other than that it isn’t fixed.” While this statement may intuitively appeal to some philosophers, temporal limitations in neural processing are both…