Dreaming of Conscious Artificial Intelligence?

A while back I read an essay on Artificial Intelligence at TR by David Gelernter wherein, among other things, he discusses where AI research stands at present (the short answer: nowhere). Like all discussions of AI, it inevitably led to the question of Consciousness. As always, I promptly got confused about it. What is it? Is it an epistemological impossibility? However much we try, will we never be sure whether we have understood even a little of it?

I fear that I'll go diarrheal (verbally, with this blog post, you understand) if I continue thinking about Consciousness. I'll let you read Gelernter's essay and ponder over it rather than rub your nose in my confusion.

It's clear from the essay: Gelernter isn't in the AI camp and has no love for questions that do not lead anywhere. "Artificial Intelligence Is Lost in the Woods," he tells us plainly. In the later part of the essay, Gelernter suggests a useful framework (a pre-scientific theory, he notes) for thinking through what he calls the Cognitive Continuum, a continuous spectrum of mental states with varying focus:

AI--and software in general--can profit from progress on these problems even if it can't build a conscious computer.

These observations lead me to believe that the "cognitive continuum" (or, equally, the consciousness continuum) is the most important and exciting research topic in cognitive science and philosophy today.
...
The cognitive continuum is, arguably, the single most important fact about thought. If we accept its existence, we can explain and can model (say, in software) the dynamics of thought. Thought styles change throughout the day as our focus level changes. (Focus levels depend, in turn, partly on personality and intelligence: some people are capable of higher focus; some are more comfortable in higher-focus states.)

It also seems logical to surmise that cognitive maturing increases the focus level you are able to reach and sustain--and therefore increases your ability and tendency to think abstractly.

Even more important: if we accept the existence of the spectrum, an explanation and model of analogy discovery--thus, of creativity--falls into our laps.

As you move down-spectrum, where you inhabit (not observe) your thoughts, you feel them. In other words, as you move down-spectrum, emotions emerge. Dreaming, at the bottom, is emotional.

Fascinating. I sure hope all this will lead to some consumer products (Quantum Theory has led to iPods, you see). When I am old, all I want from computer scientists is my quantum logic laptop - AI or no AI - that can choose something good from yottabytes of songs and videos, appropriate for the lucid moments of consciousness during my senility.
