The Computer and the Consciousness


Here is another philosophy paper of mine, one I find increasingly relevant. It describes how a computer might soon have a consciousness equivalent to, or surpassing, human consciousness: philosophy with a bit of AI theory and a touch of neuroscience.

When I got the paper back from my philosophy instructor, it had a perfect score and hardly any marks. I balked. (I'm one of those self-critical perfectionist types--it couldn't have been 100% without an editor!) When I approached him about it, he told me it was one of the best arguments he had heard on the subject and that he couldn't find a flaw in it... although he admitted he couldn't agree with me.

Perhaps he didn't want to agree... the idea of an intelligent computer scares the crap out of some folks. I believe these sorts of fears share a common basis with fears of biotechnology, linking the two subjects. That, of course, is another issue for another day.

The Computer and the Consciousness

By Karmen Lee Franklin

One of the most enigmatic goals of scholars has been to grasp the seat of consciousness. Ancient Egyptians believed such a force was located in the heart. This made good sense; the heart, suspended in the center of the body, pumped life-giving blood to every limb and tissue. The lump of gray goo in the head, on the other hand, was discarded during mummification rituals and considered essentially useless. Today, researchers busily scan and study that "useless" lump of goo. Rather than use the knowledge for funerary rituals, however, researchers have more noble goals (at least, by modern standards), such as explaining the debilitating effects of diseases like Alzheimer's and Parkinson's or, as discussed here, reproducing consciousness in a machine. Is it possible to replicate a force as complex and elusive as the human mind? Is a computer as unlikely to have a mind as a mummy is to walk away from a pyramid tomb? Or has technology evolved to a level where consciousness can be mapped and artificial intelligence (AI) is inevitable, as recent advances in neurobiology suggest?

While the distinctions between the body and the mind have been argued over for centuries, the debate gained considerable force with the dawn of the digital age. The uniqueness of the human mind could no longer be explained as simply "a center of reason" when, suddenly, a machine could reason (by performing arithmetic or winning a chess game) at a rate surpassing the above-average human. This became especially evident when the IBM computer Deep Blue defeated the world champion, Garry Kasparov, in a famous chess match (Lawhead 233). Philosophers began to name other criteria for defining the mind.

Traditionally, physicalists on one side of the debate held strictly that the mind is an objective, determined product of the brain interacting with its environment. They often doubted a machine could match the mechanical sophistication of the human mind in areas such as language, reason, and emotion. Dualists, on the other hand, believed the brain and the conscious mind to be distinct and separate entities. They argued that a computer might potentially simulate the analytical brain, but it could never have empathy or be creative, let alone be "spiritually" self-aware like the intangible human mind.

In the late 20th century, a new view arose. Seeking a compromise that could lead to progress toward AI, a new breed of philosophers shifted the focus to the unity of interconnected parts. This view, called functionalism, suggested the essence of the human mind lay in its complexity rather than in any one individual aspect. Jerry Fodor described this view by saying, "in the functionalist view the psychology of a system depends not on the stuff it is made of (living cells, mental or spiritual energy) but on how the stuff is put together" (Fodor 273). Functionalists believed the concept of intelligence to have multiple realizability; in other words, intelligence might be realized in any suitable form. Wine can be found in a bottle, a flute, a flask, a chalice, or even a box lined with Mylar, yet it is still wine; likewise, any vessel may conceivably harbor a mind.
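(A rough software analogy, not part of the original paper: multiple realizability resembles programming against an interface, where a caller cares only about how a thing behaves, never what it is made of. The Python sketch below is purely illustrative; the names Mind, NeuronMind, and SiliconMind are invented.)

```python
# A minimal sketch of "multiple realizability": one functional role,
# many substrates. All names are hypothetical.
from typing import Protocol


class Mind(Protocol):
    """The functional role: anything that maps stimuli to responses."""
    def respond(self, stimulus: str) -> str: ...


class NeuronMind:
    # One realizer of the role: a stand-in for "living cells".
    def respond(self, stimulus: str) -> str:
        return f"biochemical response to {stimulus!r}"


class SiliconMind:
    # Another realizer of the very same role, in different "stuff".
    def respond(self, stimulus: str) -> str:
        return f"computed response to {stimulus!r}"


def converse(mind: Mind, stimulus: str) -> str:
    # Functionalism's point: the caller depends on how the parts are
    # organized, not on the substrate; any realizer of the role will do.
    return mind.respond(stimulus)


print(converse(NeuronMind(), "wine"))
print(converse(SiliconMind(), "wine"))
```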

Has technology advanced to a level at which the entire human mind, rich in complex aspects, can be explained and defined? Some philosophers believe we are close to such answers and, as a result, close to achieving AI. Marvin Minsky is one of the hopeful. "In years to come," he writes, "...we will come to think of learning and thinking and understanding not as mysterious, single, special processes, but as entire worlds of ways to represent and transform ideas. In turn, those new ideas will suggest new machine architectures, and they in turn will further change our ideas about ideas" (Minsky 243). In the tradition of the functionalists, Minsky and his colleagues believe a sufficiently complex computer would be considered to have a mind. This is referred to as the strong AI thesis.

The opponents of strong AI argue that even a complex computer is only a simulation, as John Searle illustrated with his example of the Chinese room (Lawhead 243). Imagine a person locked in a room, given cards containing Chinese ideograms and asked to write out a response. Since he had no previous exposure to the Chinese language, he was forced to rely on a set of instructions for composing sentences in the correct syntax. Eventually, he was able to produce cards that would be perfectly understandable to anyone able to read Chinese. The trouble is, as Searle shows, the man in the room would not be able to understand what he wrote. He was only mimicking the process of language, rather than using it. Similarly, a computer could be capable of processing language well enough to fool a human speaker of that language (a criterion referred to as the Turing Test) and yet still not have a human level of understanding.
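(As an aside: the room's rule-following can be caricatured in a few lines of code. A bare lookup table produces fluent-looking replies while representing no meaning at all. The ideograms and "rule book" below are invented placeholders, not Searle's.)

```python
# A toy caricature of the Chinese room: replies come from pure
# syntactic lookup; nothing in this program represents meaning.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine."
    "今天下雨吗": "今天不下雨",  # "Is it raining today?" -> "It is not."
}


def chinese_room(card: str) -> str:
    # The "man in the room" only matches shapes against instructions.
    return RULE_BOOK.get(card, "请再说一遍")  # fallback: "Please repeat."


print(chinese_room("你好吗"))  # fluent-looking output, zero understanding
```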

Searle's argument seems convincing, but it applies only to relatively simple levels of computation. In the Chinese room example, information is processed in the form of symbols and rules, but those symbols are never tied to any sort of perception, as they are at the human level. If the man in the Chinese room were given a form of sensory perception, such as a window through which the actual Chinese speakers could be seen and heard interpreting his messages, he might become capable of understanding them. Potentially, a complex computer could perform as well. In a similar sense, a computer equipped with sensory perception, processing skills, and the proper tools or media could conceivably create art or music that some human observers would find aesthetically pleasing.
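(Continuing the caricature from above: here is the "window" amendment as code. The same rule-follower now has each symbol paired with a percept it can check, a crude stand-in for grounding symbols in sensation. Again, every name and datum is invented for illustration.)

```python
# A sketch of symbol grounding: symbols are linked to percepts rather
# than only to other symbols. Purely illustrative.
PERCEPTS = {
    "雨": "camera detects falling water",  # the symbol for "rain"
}


def grounded_room(card: str) -> str:
    percept = PERCEPTS.get(card)
    if percept is None:
        # No sensory referent: back to blind rule-lookup.
        return f"{card}: ungrounded symbol, rule-lookup only"
    # The symbol now has a referent the system can verify for itself.
    return f"{card} refers to: {percept}"


print(grounded_room("雨"))  # grounded
print(grounded_room("书"))  # ungrounded ("book")
```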

Many of these abilities, such as those influencing creativity, once thought to be incorporeal, have been identified as activity in certain groups of neurons in distinct regions of the brain. Emotions, for instance, begin as neural signals in the frontal lobe, located behind the forehead (see Figure 1). From the frontal lobe, they are sent to the hypothalamus at the base of the brain, which triggers chemical reactions throughout the body. Visual perception, on the other hand, occurs at the back of the brain, in the occipital lobe (Grubin 1).

Can these sorts of processes explain such ethereal concepts as consciousness? John Searle has been skeptical. In his book Mind, Language, and Society, he suggests that once the chain of reactions behind a mental event has been explained, there is an "irreducible subjective element" left over: consciousness. For instance, he contrasts consciousness with metabolism. "Once you have told the entire story about the enzymes, the rennin, the breakdown of carbohydrates, and so on, there is nothing more to say. There isn't any further property of digestion than that.... But in consciousness, the situation seems to be different," he explains (Searle 55). (He does make the caveat that each process can be reduced to atoms and quarks, however.) He summarizes this view, saying, "The subjectivity of consciousness makes it irreducible to third-person phenomena, according to the standard models of scientific reduction" (Searle 55).

[Figure: brain regions involved in awareness.]

It seems Searle felt it would be impossible to scientifically pinpoint a sense of awareness in an observable manner. If so, an article in the November 2005 issue of Scientific American may have him eating his words. In the article, "The Neurobiology of the Self," science writer Carl Zimmer describes how scientists recently identified the sections of the brain responsible for the sense of self. Essentially, the anterior insula, near the center of the brain, activates when a person is actively thinking of themselves, such as when seeing a picture of their own face (Zimmer, "Neurobiology" 98). These signals are sent to the medial prefrontal cortex, near the forehead. There, they are combined with autobiographical memories retrieved from the precuneus, a region of the parietal lobe tucked between the two hemispheres of the brain. Together, this network defines consciousness, independently of the networks used for memories and thoughts about external things. Zimmer believes it is one of the most distinctively human traits yet discovered. "Humans have evolved a sense of self that is unparalleled in its complexity," he writes (ibid.).
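(A loose computational caricature of the network Zimmer describes, and nothing more: the function names below borrow the region names merely as labels for toy steps, and the data are invented.)

```python
# Toy pipeline loosely modeled on the self-network described above:
# an "anterior insula" stage flags self-relevant input, a "precuneus"
# store supplies autobiographical memory, and a "medial prefrontal
# cortex" stage binds the two. Region names label toy functions only.
AUTOBIOGRAPHICAL_MEMORY = ["learned chess in 1999", "visited Cairo"]


def anterior_insula(percept: str) -> bool:
    # Fires when the percept is self-referential (e.g. one's own face).
    return "my" in percept


def precuneus() -> list[str]:
    # Retrieves stored autobiographical memories.
    return AUTOBIOGRAPHICAL_MEMORY


def medial_prefrontal_cortex(percept: str) -> str:
    # Binds the self-relevance signal to retrieved memories.
    if anterior_insula(percept):
        return f"self-model: {percept}, informed by {precuneus()}"
    return f"external percept: {percept}"


print(medial_prefrontal_cortex("a photo of my face"))
print(medial_prefrontal_cortex("a photo of a stranger"))
```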

Compared alongside the evolution of life, our technology has barely crawled from the ocean onto land (see Figure 2). The evolution of thinking machines has, like that of life, at times undergone dramatic explosions, so that a computer with a mind now seems not only likely but inevitable. The hurdles in front of AI, once impossibly high, are being removed one by one. Neurobiological research on the brain has led to the development of many drugs, ranging from those that can treat disorders such as depression or Alzheimer's to those that can increase logical abilities and raise IQ scores (Gazzaniga 33). Next, scientific and philosophical inquiry may finally provide a functional model that can be used to synthesize an artificial yet complex form of intelligence, encompassing reason, creativity, language, and a sense of self. It may not be long before technology rivals the collective abilities of humanity. Then, perhaps, those machines will question the potential for artificial versions of themselves.

(Figure 2 follows the sources, below. I'm having a little trouble getting the formatting to work right... it might look like garbage while I try to sort it out.)

Sources

"3-D Brain Anatomy." Dir. David Grubin. The Secret Life of the Brain. PBS, February 2002.
DeCarlo, Finkelstein, Rusinkiewicz, and Santella. Suggestive Contour Gallery. Princeton, 2005. (Source for Figure 1; modified and labeled.)
Fodor, Jerry. Quoted in The Philosophical Journey. Ed. William F. Lawhead. New York, NY: McGraw-Hill, 2006. 236-237.
Gazzaniga, Michael S. "Smarter on Drugs." Scientific American Mind, Vol. 16, No. 3, 2005. 33-35.
"History of Computing." The Great Idea Finder, 2005. (Source for Figure 2.)
Lawhead, William F. The Philosophical Journey. New York, NY: McGraw-Hill, 2006. 241-249.
Minsky, Marvin. Quoted in The Philosophical Journey. Ed. William F. Lawhead. New York, NY: McGraw-Hill, 2006. 242-243.
Searle, John R. Mind, Language, and Society. New York, NY: Basic Books, 1998. 55-56.
Zimmer, Carl. "The Neurobiology of the Self." Scientific American, November 2005. 92-101.
Zimmer, Carl. Evolution: The Triumph of an Idea. New York, NY: HarperCollins, 2001. 70-71. (Source for Figure 2.)

Figure 2. The Evolution of Ecology & Computing

Ecology | When | Computers | When
Chemicals necessary for life present | 4.4 BYA | Creatures capable of building computers present | 15,000 years ago
Amino acids formed in oceans | 3.8-3.5 BYA | Abacus developed in Mesopotamia and China | 3,000-2,400 BCE
First multi-celled organisms | 2.7-1.8 BYA | First mechanical adding machine | 1623 AD
Life diversifies (Cambrian explosion: limbs, skeletons, etc.) | 535 MYA | Machines diversify (digital age: calculators, transistors, computers) | 1940s
Complex animals move onto land and cover the earth | 450-360 MYA | Computers move into homes and offices and cover the earth | 1980s-1990s
Land animals complex enough to use tools and language, be self-aware, and disrupt the ecosystem appear (like humans) | 15,000 years ago | Computers complex enough to use tools and language, be self-aware, and disrupt their world (and humans) appear | ?
Comments

Speaking as someone who did his M.S. in Artificial Intelligence and Cybernetics in 1975 (yup, 32 years ago), and TA'd the subject, and published in the field, I consider this to be an outstanding student paper in every way.

I could start to critique it at a deeper level than the professor did, but I'm conscious of some of my acute deadlines right now.

I hope to be back later this month.

Interesting article. Some random thoughts:
I am a believer in the strong A.I. thesis, but I also believe the road to synthetic intelligence will be long and very slow. Not for lack of raw computational horsepower, but for lack of understanding of how minds are put together and function, and because of the sheer complexity and difficulty of the task.

Has technology advanced to a level at which the entire human mind, rich in complex aspects, can be explained and defined?

Probably not; our ability to measure brain activity and structure is still fairly crude, and without studying the brain I do think our theories of the mind will be limited. There may be no satisfactorily simple explanation to find; mind may be a fundamentally complex phenomenon.

... The trouble is, as Searle shows, the man in the room would not be able to understand what he wrote. He was only mimicking the process of language, rather than using it.

How is the man in the room any different from Broca's area of my own brain? If there are homunculi in our brains that give us "real" understanding as opposed to mere simulation thereof, they are remarkably well hidden. It seems to me that Searle confuses the Chinese room for the person in that room; they are not the same thing.

By Andrew Wade (not verified) on 08 Feb 2007 #permalink

John Searle talking about consciousness here:
http://www.abc.net.au/rn/philosopherszone/

(I must admit to never quite being able to grasp what Searle actually thinks consciousness is, although he seems quite good on what it is not.)

Interesting. I must admit I wasn't able to grasp much of that myself. But:

The problem with that is, and if I'm right in defining consciousness in terms of its first person ontology, then you can't reduce it to anything that has a third person ontology for the trivial and obvious reason that you would lose the first person character of consciousness if you did such a reduction.

This seems to deny the possibility of (completely) defining consciousness at all, at least as pertaining to people other than oneself. Saying that an A.I. program doesn't have the right first-person ontology is begging the question.

By Andrew Wade (not verified) on 09 Feb 2007 #permalink

Many years ago I read about the idea of a human brain (without the rest of the human) used to drive a spaceship. So I tried to imagine: suppose this was my brain. And suppose all the life-support systems and all the connections to the various peripherals worked just fine. Would this cyborg still be "me"? What if all my memories were still stored intact in there? What if its anterior insula still provided a "sense of self"? I guess at that time I was still obsessed with Descartes' "I think therefore I am". One way or the other, my intuition was telling me I wouldn't be in there.

Much later I tried to read Hilary Putnam's Reason, Truth, and History but most of it went over my head. After this effort I decided philosophy is not my thing. Recently however, a dear friend, Christopher Ott (master's degree in philosophy), sent me his book Evolution of Perception & the Cosmology of Substance which, although it doesn't have brains in a vat, didn't go over my head. I find his idea that perception is independent of subject and object very welcome. "What but imagination can imagine? What but consciousness can be conscious? What but perception can perceive?" His explanation for the universe relies simply on an evolution of perception.

So now I ask myself the next question: "can this cyborg (or an even more efficient computer that doesn't need my brain in a vat) be the next step of evolution after the human?" Does consciousness really need a better medium than the human form to be self-conscious? I am afraid the answer to this I can only give intuitively: Koyaanisqatsi. And I'm pretty sure it's the right one, although I am open to criticism.