Apropos of the Chess/AI discussion that’s going on on the front page of ScienceBlogs today (and here at CogDaily), I noticed this little gem in a book I’m currently reading for a review (Sandra and Michael Blakeslee’s The Body Has a Mind of Its Own):
Meaning is rooted in agency (the ability to act and choose), and agency depends on embodiment. In fact, this is a hard-won lesson that the artificial intelligence community has finally begun to grasp after decades of frustration: Nothing truly intelligent is going to develop in a bodiless mainframe. In real life there is no such thing as disembodied consciousness.
That’s a bold assertion. But is it true? Can’t agency occur in a purely virtual, disembodied environment? Isn’t that what Second Life is all about? We can have online discussions, play online games, even make and lose real fortunes, all online. Why couldn’t an “intelligent” computer do the same thing?
The authors do offer some compelling reasons why physical presence is needed for consciousness. Consider this thought experiment:
If you were to carry around a young mammal such as a kitten during its critical early months of brain development, allowing it to see everything in its environment but never permitting it to move around on its own, the unlucky creature would turn out to be effectively blind for life. While it would still be able to perceive levels of light, color, and shadow — the most basic, hardwired abilities of the visual system — its depth perception and object recognition would be abysmal. Its eyes and optic nerves would be perfectly normal and intact, yet its higher visual system would be next to useless.
I’m not sure this assertion is true. The claim is that without exploring the environment on its own, the kitten’s higher visual abilities wouldn’t develop, because without tactile feedback the visual information would be useless. For a kitten, maybe, but for a computer, it seems to me the tactile feedback could be virtualized.
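To make that concrete, here’s a minimal sketch of what I mean by virtualized tactile feedback. Everything in it (the toy one-dimensional room, the `step` and `explore` functions) is my own invention for illustration, not anything from the book: a simulated agent whose “touch” is just another computed signal, yet one that is still contingent on the agent’s own movement commands, which is exactly the coupling the kitten experiment says matters.

```python
# A toy one-dimensional "room": positions 0..9, with a wall at each end.
# Every name here (WORLD_SIZE, step, explore) is my own invention for
# illustration; none of it comes from the Blakeslees' book.
WORLD_SIZE = 10

def step(pos, action):
    """Apply a movement command (-1 or +1) and return the new position
    plus two sensory channels, both computed purely in software:
      visual  -- distance to the nearest wall (a crude stand-in for sight)
      tactile -- True if the move was blocked by a wall (virtual touch)
    """
    new_pos = pos + action
    bumped = not (0 <= new_pos < WORLD_SIZE)
    if bumped:
        new_pos = pos  # the wall stops the agent
    visual = min(new_pos, WORLD_SIZE - 1 - new_pos)
    return new_pos, visual, bumped

def explore(steps=15):
    """Self-directed exploration: the agent issues its own motor commands
    and receives feedback contingent on them -- the sensorimotor coupling
    the carried kitten never gets."""
    pos, heading = WORLD_SIZE // 2, +1
    for _ in range(steps):
        pos, visual, bumped = step(pos, heading)
        print(f"move {heading:+d} -> wall distance {visual}, "
              f"touch: {'bump' if bumped else 'none'}")
        if bumped:
            heading = -heading  # the virtual touch signal redirects behavior

explore()
```

The only point of the sketch is that, inside a simulation, the bump signal is as cheap to compute as the visual one, so the action-feedback loop the authors say requires a body seems, at least in principle, reproducible in software.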
Any comments from CogDaily readers?