Live from Neuroscience 2007...

Today, 30,000 scientists descended upon San Diego for Neuroscience 2007, the Society for Neuroscience's annual scientific meeting. With more than 16,000 presentations over just five days, the conference is more than any one reporter could possibly cover. But I'm going to do my best, posting daily wrap-ups here and highlighting some of the most interesting, mind-bending (no pun intended) presentations.

The meeting kicked off today with its annual "Dialogues Between Neuroscience and Society" talk, which is traditionally given by someone who's not a neuroscientist. (In previous years, this talk has been given by Frank Gehry and the Dalai Lama.) This year, we heard from Jeff Hawkins, the computer scientist and entrepreneur who founded Palm Computing, Handspring, and, most recently, Numenta.

Hawkins, the author of On Intelligence, spoke about his effort to create intelligent machines by building computers that more closely mimic the workings of the brain. Artificial intelligence is a decades-old goal that has turned out to be even more challenging than the experts once imagined. The root of the difficulty, Hawkins says, is that an intelligent machine must have an extraordinary amount of information about the world. We haven't the foggiest idea how to go about collecting all the necessary information, let alone how to program it into a machine. But humans manage to get all that data into their own heads. And by figuring out how, Hawkins believes, we can train computers to do it, too.

Hawkins's theory is that the central feature of intelligence is the ability to predict. The brain's neocortex, as Hawkins explains it, stores and transmits information using vertical hierarchies—with low-level sensory data converging on higher and higher processing centers. When the highest levels have interpreted the data, they pass that information back down the chain, helping the brain predict what it might sense next. Now, Hawkins and the scientists at Numenta are trying to create computers that process information in this hierarchical way (rather than linearly, as they do now), enabling them to learn, generalize, and predict.
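To make that idea a little more concrete, here is a minimal toy sketch of hierarchical prediction in Python. It is emphatically not Numenta's actual algorithm or software, and every name and structural choice in it is invented for illustration: a low level memorizes short sensory sequences and reports a compressed label upward, while a high level learns transitions between those labels and passes its best guess back down, letting the low level anticipate the next raw input.

```python
# Toy sketch of hierarchical prediction (NOT Numenta's actual HTM algorithm).
# A low level sees raw letters; a high level sees chunk labels. The high
# level's prediction is fed back down to anticipate the next raw input.
from collections import defaultdict

class Level:
    """Learns first-order transitions (previous item -> next item) and predicts."""
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, item):
        if self.prev is not None:
            self.transitions[self.prev][item] += 1
        self.prev = item

    def predict(self):
        if self.prev is None or not self.transitions[self.prev]:
            return None
        counts = self.transitions[self.prev]
        return max(counts, key=counts.get)  # most frequent successor

low, high = Level(), Level()
sequence = list("abcabcabc")
chunk = []

for letter in sequence:
    low.observe(letter)
    chunk.append(letter)
    if len(chunk) == 3:                  # low level groups a recurring pattern...
        high.observe("".join(chunk))     # ...and sends its compressed name upward
        chunk = []

# Feedback: the high level's prediction of the next chunk tells the low
# level which raw input to expect first.
next_chunk = high.predict()
if next_chunk:
    print("high level predicts next chunk:", next_chunk)       # 'abc'
    print("so low level expects next letter:", next_chunk[0])  # 'a'
print("low level's own next-letter guess:", low.predict())     # 'a'
```

The point of the sketch is only the flow of information: raw data converges upward into coarser representations, and expectations flow back down as predictions, which is the loop Hawkins argues intelligent machines will need.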

The details of this effort have been covered widely (see here, here, and here), and Hawkins was at his most interesting when he speculated about the more distant future. If we can build these machines—and Hawkins acknowledges that it is an if—what might we be able to do with them? There's no reason we should limit ourselves to merely trying to duplicate human capabilities, Hawkins says. Once we've created machines that use hierarchical processing, we could design them to be bigger or faster than the human brain. Or we could design machines that become intelligent about sensory data that humans can't even detect. For instance, we could create systems that learn about, interpret, and predict barometric pressure or other weather and climate data. Or scientists could play around with the settings and designs, making the computers better or worse at generalizing, for instance. This sort of tinkering could reveal important new insights about the brain itself. And thus, we come full circle.

Back tomorrow with more from the conference.


Just a quick correction - last year's Dialogues speaker was Frank Gehry, speaking about his thought processes as he designs his architectural wonders. The Dalai Lama spoke at Neuroscience 2005 in Washington, DC.

All three Dialogues speakers have provided very engaging views of neuroscience from a non-neuroscientist's perspective. A press release summarizing the Dalai Lama's speech in 2005 can be found at http://www.sfn.org/index.cfm?pagename=news_111205. An article on Frank Gehry's talk can be found on the AIArchitect site at http://www.aia.org/aiarchitect/thisweek06/1110/1110n_gehry.cfm

Tom Benton
CIO, Society for Neuroscience

Thanks, Tom. Sorry about the mistake. I've fixed the entry.