I’ve been busy writing up a new paper, and expect the reviews back on another soon, so … sorry for the lack of posts. But this should be of interest:
The Dana Foundation has just posted an interview with Terrence Sejnowski about his recent Science paper, “Foundations for a New Science of Learning” (with coauthors Meltzoff, Kuhl & Movellan). Sejnowski is a kind of legendary figure in computational neuroscience: he founded the journal Neural Computation, developed the primary algorithm in independent components analysis (infomax), co-developed contrastive Hebbian learning, and played an early role in linking the mathematical concept of “prediction error” to dopamine function.
One snippet from the interview:
Q: In what ways has the study of how children learn been used to solve engineering problems?
A: Children’s brains are still developing and we need to understand how that helps them to learn. One example is imitation learning, which has been studied by Andrew Meltzoff, Ph.D., at the University of Washington in Seattle, who is trying to understand what makes children such effective learners. Babies and children are really good at imitation. Right out of the womb, babies can imitate facial expressions. If you stick out your tongue, a baby who can barely see will repeat your action. Children have fantastic abilities to mimic actions and behaviors. They learn a lot simply by observing and mimicking, and they will try to repeat not only the action itself – say, reaching out with the arm – but the purpose of the action – say, picking up a ball. This is something humans do much more effectively than any other animal.
Engineers, having seen that imitation is highly effective in humans, combined imitation learning with reinforcement learning to boost the performance of control systems. In apprenticeship learning, for example, a powerful computer tracks the actions of an expert human controlling a complex system, and then programs the reinforcement system to imitate and learn the very complex motor commands that the human makes. Engineers are now able to reproduce human skills that were previously thought beyond the reach of machines. For example, Andrew Ng, Ph.D., at Stanford has used apprenticeship learning with reinforcement to automatically control helicopters that do stunts like flying upside down.
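The core idea of imitation learning described above — record an expert's state-action pairs, then fit a policy that reproduces them — can be illustrated with a toy sketch. Everything here (the gridworld, the hand-coded "expert," the nearest-state fallback) is a hypothetical illustration, not the actual method used in the helicopter work, which combined demonstrations with reinforcement learning on a learned dynamics model.

```python
# A minimal behavioral-cloning sketch of the imitation-learning idea:
# collect expert demonstrations, then imitate them with a supervised policy.

GOAL = (4, 4)

def expert_action(state):
    # Hand-coded "expert": move toward the goal, right first, then down.
    x, y = state
    if x < GOAL[0]:
        return "right"
    if y < GOAL[1]:
        return "down"
    return "stay"

def step(state, action):
    # Toy 5x5 gridworld dynamics.
    x, y = state
    if action == "right":
        x = min(x + 1, 4)
    elif action == "down":
        y = min(y + 1, 4)
    return (x, y)

# 1. Watch the expert: record (state -> action) demonstrations.
demos = {}
for start in [(0, 0), (2, 1), (1, 3)]:
    s = start
    for _ in range(10):
        a = expert_action(s)
        demos[s] = a
        s = step(s, a)

# 2. "Learn" by imitation: memorize the mapping, and generalize to
#    unseen states by copying the nearest demonstrated state's action.
def cloned_policy(state):
    if state in demos:
        return demos[state]
    nearest = min(demos, key=lambda d: abs(d[0] - state[0]) + abs(d[1] - state[1]))
    return demos[nearest]

# 3. Run the cloned policy from a start state the expert never visited.
s = (0, 2)
for _ in range(10):
    s = step(s, cloned_policy(s))
print(s)  # the clone reaches the expert's goal, (4, 4)
```

In practice the lookup table would be replaced by a learned function (e.g. a regression or neural network over continuous states), and — as in the apprenticeship-learning work mentioned above — the imitated policy would serve as a starting point that reinforcement learning then refines.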
Read more of the interview here.