Does an artificial intelligence require a body?

Apropos of the Chess/AI discussion that's going on on the front page of ScienceBlogs today (and here at CogDaily), I noticed this little gem in a book I'm currently reading for a review (Sandra and Matthew Blakeslee's The Body Has a Mind of Its Own):

Meaning is rooted in agency (the ability to act and choose), and agency depends on embodiment. In fact, this is a hard-won lesson that the artificial intelligence community has finally begun to grasp after decades of frustration: Nothing truly intelligent is going to develop in a bodiless mainframe. In real life there is no such thing as disembodied consciousness.

That's a bold assertion. But is it true? Can't agency occur in a disembodied environment? Isn't that what Second Life is all about? We can have online discussions, play online games, even make and lose real fortunes, all online. Why couldn't an "intelligent" computer do the same thing?

The authors do offer some compelling reasons why physical presence is needed for consciousness. Consider this thought experiment:

If you were to carry around a young mammal such as a kitten during its critical early months of brain development, allowing it to see everything in its environment but never permitting it to move around on its own, the unlucky creature would turn out to be effectively blind for life. While it would still be able to perceive levels of light, color, and shadow -- the most basic, hardwired abilities of the visual system -- its depth perception and object recognition would be abysmal. Its eyes and optic nerves would be perfectly normal and intact, yet its higher visual system would be next to useless.

I'm not sure this assertion is true -- the claim is that without exploring the environment on its own, the kitten gets no tactile feedback, the visual information is therefore useless, and these abilities never develop. For a kitten, maybe, but for a computer, it seems to me the tactile feedback could be virtualized.

Any comments from CogDaily readers?


I think recent articles about using virtual reality to trigger out-of-body experiences could add fuel to the arguments against such a supposition. If the body is so integral, why does our sense of self seem so tenuous and easily fooled?

Having a body might be something that makes us "human", or aids in the way we learn and become intelligent, but I think that is just the path we take to becoming intelligent because it happens to be the path we find ourselves on.

The fallacy, of course, is that computers are not kittens and therefore have no need for development that depends on anything related to biology at all. In fact, the authors would be hard pressed to show that an artificial intelligence needs to undergo development at all.

What they are getting at is that intelligence requires interaction, but that is trivially true.

What about an intelligence that uses network packets as its way of interacting with the world? That comes as close to 'bodiless' as you could imagine while still staying away from the kitten analogy.

I think it depends on how you define intelligence.

If you use the simplest definition of intelligence I've ever heard of, that intelligence is the degree to which an organism can respond appropriately (e.g., adaptively) to stimuli, then this statement could be construed to be true in a sense.

This is because not allowing a cat (or a human child trying to learn a second language just from watching a video, which I've also heard won't work) to meaningfully interact with its environment will not allow it to develop appropriate responses to stimuli and thus shape its nervous system in a normal fashion.

However, upon further inspection this really only says that a disembodied machine couldn't have human-like intelligence. It all depends on the type of data being fed into the system that is trying to act intelligently... to get a uniquely human intelligence you'd have to have the body (or input, so perhaps a totally virtual world would count) with which you could be shaped by the kinds of activities and experiences that shape humans.

The difficulty here (and in the computer chess discussion) lies, I think, in our tenuous grasp of what exactly intelligence is. The real challenge of building intelligent machines is not imitating some subset of human behavior but in convincing people that what you've created really is intelligent. This, in turn, is difficult because when we ascertain the intelligence (or lack thereof) of something, we do it by a sort of Turing Test: the question we really ask is "is there something going on in that thing's 'head' that's somehow the same as what's going on inside my head?" It seems that the whole idea of intelligence (and, it seems, consciousness) rests on the assumption that we can meaningfully attribute mental states like our own (which we have first-person access to) to something that we have only "third-person" access to.

This is the "other minds" problem of philosophy, in a nutshell, and it is a real stumbling block for anyone wishing to do science about the mind. Understanding how we attribute minds to other people is in some ways the same problem as understanding how we might somehow come to attribute minds to things that are not people (or are not animals at all).

I'm currently working on symbolic computational models of cognition - we're embedding an existing architecture in a digital human model from the biomechanics/ergonomics community. This "embodies" the cognitive architecture in a virtual human in a virtual environment... the virtual human's intelligence may be limited by the fidelity of the environment, but it does give the "AI" a way to "interact" with "reality." As we've begun to implement models of actual tasks, we're faced with the prospects of implementing more and more sensory systems (tactile, kinesthetic/proprioceptive) to really be able to perform tasks in a human-like manner.

Our goal isn't AI but a tool for evaluation of new products and workspaces in CAD systems -- but embodiment and fuller sensory mechanisms do seem to be helpful in creating a better model that exhibits intelligence - depending on your definition.
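
For readers who like things concrete, here is a rough sketch of the kind of perceive-decide-act loop I mean. It is a toy, not our actual architecture or environment: the one-dimensional "reach" world, the channel names, and the trivial decision rule are all invented for illustration.

```python
# Toy sketch only: a cognitive model driving a virtual "body" in a simulated world
# through several sensory channels. Names and the 1-D world are hypothetical.

class SimWorld:
    """Stand-in for the virtual environment plus the virtual human's body."""
    def __init__(self):
        self.target = 5.0   # where the object sits along a 1-D reach axis
        self.hand = 0.0     # current hand position

    def percept(self):
        # The sensory channels the "body" exposes to the cognitive model.
        return {
            "vision": self.target - self.hand,              # where the target appears
            "tactile": abs(self.target - self.hand) < 0.1,  # contact with the object?
            "proprioception": self.hand,                    # felt hand position
        }

    def act(self, command):
        if command == "reach":
            self.hand += 0.5 if self.target > self.hand else -0.5

def cognitive_step(percept):
    """The 'mind': choose the next motor command from the current percept."""
    return "grasp" if percept["tactile"] else "reach"

world = SimWorld()
for step in range(30):
    p = world.percept()
    command = cognitive_step(p)
    if command == "grasp":
        print(f"grasped after {step} steps, using vision + touch + proprioception")
        break
    world.act(command)
```

The fidelity point shows up directly here: the model can only be as "intelligent" about grasping as the percept dictionary is rich.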

By Daniel Carruth (not verified) on 29 Aug 2007 #permalink

@Daniel Carruth - let me guess; the software is called "Blue-pill"?

Bodiless mainframe? Isn't that a contradiction? The mainframe is the AI's body! I don't see that as a bold assertion at all. For a consciousness to be in the world in a meaningful way, it must have some means of perceiving it, and the body is just the instrument of perception. A chess AI can perceive a tiny universe that consists only of chess moves, and whatever electronic instrumentation provides that perception is logically its body.

I'm not sure that using the idea of virtual reality is a valid point against the idea that "disembodied" consciousness is possible. The example given of the kitten is true to some extent, although a lot of the sensory processing structures and apparatus of the brain develop prenatally. The experiments on blindfolding kittens, which therefore never develop sight, are more about reorganization, atrophy, and lack of cognitive development. Those biological issues are beside the point when we talk about developing AI and interfacing it with senses.

What is required, apparently, for a human-like consciousness and for the development of many of the goals of AI (whether they involve consciousness or not) is a much deeper network for "understanding". For example, the ability to understand language. No matter how many rules, grammars, vocabularies, and neural networks are involved, the "meaning" of language has to be associated with and encoded in memories and patterns of sensations. Can an AI understand the written or spoken word "ball" without associating it with the visual patterns of seeing various balls, the feeling (real or virtual) of holding one, the intuitive physics of throwing, bouncing, and catching one, the concept of games, other spherical objects, etc.?

I believe that computational AI is possible (along with machine consciousness) and that many tasks expected of an AI may be possible with limited sensory connectivity, but some tasks necessary, say, to pass a Turing Test might require a "body" (real or virtual, humanoid or abstract) with a full (or augmented) set of senses in order to understand the depth and nuance of language, not to mention human behavior, art, humor, etc.
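
To make the "ball" point concrete, here is a toy contrast between a symbol-only lexicon and a grounded one. The feature values are made up, and "understanding" is reduced here to cross-modal association, which is of course a big simplification.

```python
# Toy contrast (made-up feature values): a symbol-only lexicon vs. one in which
# the word "ball" is associated with patterns from several modalities.

symbol_only = {"ball": "BALL"}   # a token that merely points at another token

grounded = {
    "ball": {
        "visual":  {"shape": "sphere", "typical_size_cm": (5, 25)},
        "tactile": {"texture": "smooth or fuzzy", "deformable": True},
        "motor":   {"affordances": ["throw", "bounce", "catch", "roll"]},
        "situational": {"contexts": ["game", "sport", "play"]},
    }
}

def can_answer(lexicon, word, modality):
    """Questions about how a thing looks, feels, or is used can only be answered
    from an entry that actually links the word to that kind of pattern."""
    entry = lexicon.get(word)
    return isinstance(entry, dict) and modality in entry

print(can_answer(symbol_only, "ball", "motor"))   # False: pure symbol shuffling
print(can_answer(grounded, "ball", "motor"))      # True: linked to action patterns
```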

Oops, I forgot to actually say what my point is, which is this: Whether or not you need a body to be intelligent, you likely need a body in order for people to attribute intelligence (or consciousness, emotions, etc.) to you, for you to "count" as intelligent to a naive observer. And that average joe on the street is who we're really after in terms of convincing people that cognitive science works. We are a long way off, as a culture, from having a definition of intelligence that we can all agree on, so we have to rely on our folk-psychological, Turing-test based definition of being similar enough to ourselves.

For interesting debate about Turing Testing, its possible limitations, and a stronger version, check out Searle's Chinese Room thought experiment (wikipedia link) and Harnad's writing on machine consciousness (link and link).

For yet more interestingness, I'd like to point out that we as humans are remarkably willing to see ourselves in other things, animate or not. See, for instance, a previous Cog Daily post about soldiers speaking about their bomb-defusing robots as if they were clever dogs or even people. Or check out "The Soul of Mark III Beast" for a story/thought experiment highlighting this issue. Or think about how you (if you are at all like me, which I assume you are) are perfectly willing to ascribe complex mental states or even consciousness to dogs or cats or any other animal, when they don't exhibit many of the things normally associated with intelligence in people.

Consider the following thought experiment: attach a kitten's brain to a computer that perfectly simulates the real world. If the kitten attempts to move a "paw", it sees the virtual paw move in the virtual world. Would that kitten be able to develop consciousness? Would its experiences be different from a kitten that was in the "real" world?

Body is nothing more than the brain's interface to the world, translating perceptions into mental concepts and mental concepts into actions.

As pointed out above, the agency does not have to depend on embodiment since the interface can be simulated.

However, the brain "thinks" by building hypotheses about relationships/causality of mental concepts, based on available mental concepts; and then by experimentally verifying these hypotheses. To verify, some kind of an interface to the world is required. Such an interface can be limited to a monitor and a keyboard, but it will effectively limit AI's brain capacity to learn.

What an interesting question.

I'd say that intelligence (artificial or living) cannot exist without a purpose. The purpose of living intelligence is to support behavior choices that sustain that life. Every computer program has whatever purpose the programmer gave it.

I'd say artificial intelligence implies endowing some decision-making device with an intelligence similar to the examples found in nature in living things - which all have some ability to make autonomous decisions that support their own survival. Even a plant that tilts toward the sun is in that sense making a life support decision.

The purpose of most computer programs is not to support the "life" of the program, although some multi-tasking programs do have anti-crash tasks. In that sense I'd say those programs qualify as artificial intelligence.

Therefore, IMO - an artificial intelligence is a non-living decision-making device programmed to make behavior decisions that support its own survival.

How it is embodied is immaterial. All you need is a non-living "entity" - that makes it artificial - and a decision-making device that can somehow control that entity's behavior along those lines.

By Pelican's Point (not verified) on 29 Aug 2007 #permalink

I suggest that we avoid use of the undefinable term "consciousness" and instead focus our attentions on something more definable such as "intelligence" -- even though that term itself is hellaciously difficult to define.

This question strikes me as either tautological or nonsensical. Either we're talking about human cognitive behavior, in which case it's tautological that we include the body, or we're talking about abstract cognitive behavior, in which case it's nonsense to include a body.

By Chris Crawford (not verified) on 29 Aug 2007 #permalink

"The real challenge of building intelligent machines is not imitating some subset of human behavior but in convincing people that what you've created really is intelligent."

I think the real challenge is to "crack the code"... most gedanken experiments have as a premise that interaction between machines and brains will be possible, but I think we have no idea how perceptual information is processed at the higher levels, or in other words, what "the language of thought" is like. I think that Larry Barsalou's PSS theory is one of the major attempts at this. I do believe that thought is carried in a multi-modal code, not an amodal language-like one.

Indeed, it seems like everyone liked "The Matrix" way too much... how can we talk about mind-computer interaction and simulations of embodied cognition if we can't "read" the mind beyond the coarse level that current technologies allow?

"Body is nothing more than the brain's interface to the world, translating perceptions into mental concepts and mental concepts into actions."

And what's a concept? :-) (That's my main area of interest BTW)

@Remis,

Yep, the "concept" point is a big problem. In some ways, a "body", physical or virtual, at least establishes a vague concept of self, object, space, and time. It could be argued that each of these concepts can only be defined in relation to their opposites, such as self vs. other. When we speak of human concepts and human nature, much of that is rooted in the knowledge or theory that other people have similar mental processes, so as others have pointed out, how can we recognize machine consciousness (and vice versa) if we cannot perceive that it functions in a way similar to how our minds work?

From my understanding of the current theories, a "concept" can't be captured as a set of symbols, but rather as a pattern. As I stated above, no matter how complex, a language processing system, whether implemented by grammar, neural networks, or whatever, that only deals with the processing of symbols can't be any more intelligent than Searle's Chinese Room. In fact, recent studies have shown that while humans are likely "hardwired" for language, that hardwiring can only consist of a facility for a basic grammar of concepts, intents, causes, and effects, plus the basis for quickly picking up the particulars of a language. The actual mechanics of a particular human language, e.g. English, must be learned: each symbol (word) acquired must be fitted into the basic hardwired language processing, modified by the learned facts of English grammar, and then cross-associated with the other concepts that have been acquired through learning via the other senses. It is at that last step, I believe, that true "understanding" can take place.

Of course, in humans, the system is tuned for the acquisition of spoken language, but that doesn't mean that it is absolutely required. Congenitally deaf people forge similar connections from the auditory wiring to the speech centers via the visual centers in order to learn sign language. Understanding written language comes much later and requires dedicated cognitive effort since we aren't "hardwired" to read. The fact that we can read and write relatively easily (compared to, say, learning calculus) means that the abilities are able to piggyback onto our capabilities for spoken language.

Jeff Hawkins's theories on predictive neural networks, which depend more on connections coming down from other areas of the cortex than on sensory inputs coming into the cortex, and which are highly time-sensitive, seem to underlie that idea. A predictive grammar of English might predict that the next word in the sentence "John tossed the..." is 'ball', but it couldn't evaluate that guess on anything but statistics unless many other associative factors are involved.
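
Here is a minimal, statistics-only predictor of that kind, over a tiny made-up corpus. It will happily propose 'ball' after "the" from co-occurrence counts alone, but it has nothing beyond those counts with which to evaluate the guess:

```python
# Minimal statistics-only next-word predictor: a bigram model over a made-up corpus.

from collections import Counter, defaultdict

corpus = (
    "john tossed the ball . mary caught the ball . "
    "john tossed the frisbee . mary dropped the ball ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Most frequent follower of `word` in the corpus, with its relative frequency."""
    followers = counts[word]
    best, n = followers.most_common(1)[0]
    return best, n / sum(followers.values())

print(predict_next("the"))   # ('ball', 0.75) -- a purely statistical guess
```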

But overall, I think we are saying the same thing - a "body" doesn't have to be a physical body (biological or robotic) or a simulation of a human body, but for most AI tasks, the "mind" in question needs both a way to accept some kinds of sensory input (it may be necessary for there to be multiple kinds) and a way to query the environment (whether through language, movement, or some other form of selective attention) in order to test the assumptions and predictions it makes.

I agree (generally) with the authors' contention that physical presence is necessary to produce consciousness. However, here I'm defining consciousness as more abstract intellectual constructs (e.g., imagination, wisdom, creativity) that have a foundation in environmental interactions (e.g., prehistoric man's ability to transform rocks into tools). This position is based on Piaget's widely accepted cognitive development theory, which proposes that abstract reasoning develops via four qualitatively distinct stages (sensorimotor, preoperational, concrete operational, formal operational), with the first stage broken down into 6 substages whose progression depends heavily on environmental interaction. The last stage involves abstract reasoning, and is assumed to be connected to the definition of consciousness presented here. As further support for this position, a substantial amount of evidence exists for the involvement of the cerebellum in higher-order cognitive processes (e.g., http://www.bbsonline.org/Preprints/OldArchive/bbs.neur4.thach.html ), a structure that was once thought only to be involved in coordinating body movements. So concerning humans, environmental interactions (i.e., via the body) and consciousness are intimately linked.

It's less likely that a machine needs physical interaction in order to achieve consciousness (i.e., abstract thinking) consistent with human imagination, wisdom, creativity. Hypothetically, the first three stages required for humans can be bypassed and the fourth stage implemented by 'simply' borrowing what we already know about intellect at advanced stages. For example, one could imagine taking all of the experiences of an individual and interfacing those experiences (via neural engrams) to a virtual brain. In essence, the virtual brain could have such experiences downloaded into its virtual network without needing any environmental interactions involving a body. However, this rests on a large (and probably false) assumption: that things like a keyboard, computer screen, and other hardware associated with machines should not themselves count as a "body" that interacts with the environment. As a somewhat more blunt analogy, is it really possible to determine whether any individual person is conscious without some kind of communication taking place? For instance, what assumptions can we make about the consciousness of a person who is deaf, blind, and unable to either detect physical stimuli or talk? Likewise, what assumptions can we make about a machine if the hardware that it is connected to (e.g., keyboard, screen, etc.) malfunctions?

Of course, the preceding discussion is based on the definition of consciousness in the context of higher-order cognitive processes. Other research with humans, however, seems to be questioning whether one even needs a living body to have consciousness--that is, what appears to be evidence for awareness in the context of non-measurable brain activity ( http://www.world-science.net/exclusives/070520_consciousness.htm ). If true, this goes well beyond any kind of artificial intelligence programming, as one would have to figure out how to achieve environmental awareness in a machine when, for all intents and purposes, the machine is unplugged.

By Tony Jeremiah (not verified) on 29 Aug 2007 #permalink

So, somebody born paraplegic would not - could not - be conscious? I'd like to see some verification on that.

As has been noted above, the inability of a kitten to develop cognitive skills absent an ability to move around in its environment is not shown to be fundamental; more likely, it's an artefact of its cognitive design which implicitly assumes (reasonably so) that a kitten will normally be able to explore its world and thus take serendipitous advantage of this fact.

@John re: kittens connected to a virtual world...

It probably wouldn't be enough just to simulate external sensory input; you would also need to simulate all of the feedback related to proprioception. Not that this negates your thought experiment; it just adds another level of complexity.

Similarly, other systems connected to a disembodied brain or AI (if the goal was to perfectly mimic the real world organization) would need to be simulated, such as stimuli that connect to lower brain functions like hunger, tiredness, pain, pleasure, etc. - there is a lot of complex biochemistry there that isn't purely just sensory processing.

On the non-academic text side, anyone interested in these kinds of things should read "Permutation City" by Greg Egan. The first half of the book explores, in SF novel form, many of the thought experiments about artificial consciousness.

As always, Dave, your post made me write a response longer than is acceptable for comments. :)

My response is over here, exploring the idea that defining intelligence is not only difficult but actually counterproductive. Rather, we should look at why intelligence is present (i.e., evolutionary advantage) and therefore extrapolate the necessary requirements to recreate it. The post also touches on the idea that realities are subjective to the individual experiencing them, which in turn makes intelligence a function of the individual stimuli received. If intelligence is a function of the stimuli, the stimuli themselves need not be human-like to evoke true intelligence. This could easily give rise to intelligent organisms with completely different stimuli and their own unique reality that humans can never truly understand.

Well, I would argue that we already have "artificial intelligence" that matches or exceeds human intelligence. However that intelligence is embodied in a fundamentally different way such that we have very limited grounds on which to interact or communicate.

By CBrachyrhynchos (not verified) on 30 Aug 2007 #permalink

I'll just pose for the sake of argument, that Google is an example of a system that meets or exceeds human capabilities in many ways. However, the problems, challenges, and concerns of the Google database system are very different from my concerns as a person who really needs a lunch break.

The assumption that an artificial intelligence would be interested in "taking over the (human) world" is one of the reasons that I would bet my $1 against a singularity. (That, and the fundamental problem that singularity theory seems to consider technology as an independent entity rather than a product of human culture and economics.)

By CBrachyrhynchos (not verified) on 30 Aug 2007 #permalink

You don't need a 'physical' body, but you need an environment to interact with and to close the feedback loop. I think it would be much easier to create a simulation of the world to teach the program, which would have a body in the simulation, than it is to create a physical body that has all of the information gathering abilities that we have. Here is a demonstration of cutting edge video game AI: http://gamedrift.com/articles.php?a=265

I remember reading about some sort of condition which causes a complete disconnect of the 'mind' from the body, and a description of what it was like from someone it happened to. He said he had to do things like multiplication tables to keep from simply splintering and drifting away.

I would say that you are both right, you need a body and you need an environment. Neither of these things need to be 'real'. This line of thought always brings me to the possibility of being in a simulation already, but that's already been discussed a lot recently.

hmmm -- what if something were both intelligent and self-aware but disembodied: isn't this much parallel to somebody who's fully paralyzed, or even "locked in" their own heads? don't we still grant them full credit as conscious beings, even if they are incapable of action (or even, in the latter case, of communication)?

@@ So, somebody born paraplegic would not - could not - be conscious? I'd like to see some verification on that. As has been noted above, the inability of a kitten to develop cognitive skills absent an ability to move around in its environment is not shown to be fundamental; more likely, it's an artefact of its cognitive design which implicitly assumes (reasonably so) that a kitten will normally be able to explore its world and thus take serendipitous advantage of this fact.

@@ hmmm -- what if something were both intelligent and self-aware but disembodied: isn't this much parallel to somebody who's fully paralyzed, or even "locked in" their own heads? don't we still grant them full credit as conscious beings, even if they are incapable of action (or even, in the latter case, of communication)?

#17, #23: #23, your comment reminded me of Stephen Hawking, the brilliant physicist whose body is paralyzed due to Lou Gehrig's disease. So I have no disagreement concerning the intellectual and conscious status of persons with physical disabilities. My point of contention (and this is also in response to Janne's comment) is that the capacity to label an individual as conscious (or intelligent) is dependent on the ability to communicate/interact--which ultimately depends on some kind of physical mechanism that makes interaction possible.

Consider Stephen Hawking once again. He has technology built into his wheelchair that allows him to communicate his ideas using the limited physical movements he has left (i.e., his eyes). Were it not for this technology, it does not seem possible that we could declare Stephen Hawking a genius (barring what was already known about him prior to his physical incapacitation by ALS). Extending this further, could one say anything about Stephen Hawking's intelligence (and further, level of consciousness) if he slipped into a coma--again, barring what is already known about him (e.g., imagine you just walked into a hospital, had no idea who Stephen Hawking is, and saw only a person lying in a hospital bed in a coma)?

Another example. Consider a chess grandmaster playing a computer program. We are likely to believe that both are intelligent, but less likely to believe that the computer is conscious. Why? I hypothesize it is because we assume that anything that looks human is intelligent and conscious (albeit to varying degrees, as suggested by Piaget's theory). To support this idea, consider again a chess grandmaster, this time playing a computer built to look human. If you watch them play, you are likely to believe that both are conscious and intelligent without further interaction. However, suppose you decide to engage in communication with both--you ask the chess grandmaster how they are feeling about the game, and then ask the same of the computer that looks human (but is programmed only to make physical movements in response to chess moves). What assumptions can one then make about the intelligence and consciousness of the chess grandmaster vs. the computer that looks like a human and can only respond to chess movements?

Most likely, the conclusion one will reach is that the intelligence (well, depending on who wins the match) and consciousness of the chess grandmaster are greater, owing primarily to the interaction taking place. Thus far, I cannot think of any form of communication (except perhaps telepathy) that does not require physical mechanisms.

By Tony Jeremiah (not verified) on 30 Aug 2007 #permalink

The fact about kitten vision is very interesting, but I'm not sure what the moral of the story is. The fact is that the normal development process for a kitten's visual system involves interaction with the world---without this interaction, the visual system does not develop.

However, after the visual system has developed, the kitten's brain is in some particular state. There is nothing inherently impossible about the kitten's brain reaching this state without interaction. Imagine a surgeon taking the blind kitten and altering its brain using microsurgery to put it in the same state as the seeing kitten. It may not be practically possible, but there is nothing in principle impossible about it. The surgically altered kitten could certainly see as well as the normal kitten. Ability to see only depends on the current state of the brain, not on how it got to that state.

Question: So why aren't kittens born in that state? I think that's mostly a matter of what's the most efficient coding of the knowledge a kitten needs. Some knowledge is "built in" and some knowledge is acquired from experience. It's easier to evolve a kitten that is capable of learning than it is to evolve a kitten that already knows, at birth, everything a kitten needs to know.

Question: Why can't a kitten acquire the knowledge it needs from pure passive observation? Why does it need to interact with the environment? I think the answer is that for some kinds of information, particularly cause-and-effect relationships, interaction is a much more effective way to learn the information. I'm not exactly sure why vision, in particular, requires a lot of cause-and-effect knowledge...

Anyway, I think that a distinction can be made between the claim that feedback from the environment is a very efficient way to go about learning information, and the claim that without feedback, understanding of the world is impossible.
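
An invented toy example of why interaction can matter so much for cause-and-effect: suppose a light is controlled only by a hidden switch, and some other creature presses a button exactly when the switch happens to be on. A passive observer sees a perfect button-light correlation; a learner that presses the button itself, regardless of the switch, finds out that the button does nothing.

```python
# Invented toy world illustrating passive observation vs. intervention.

import random
random.seed(0)

def light_is_on(button_pressed, hidden_switch):
    return hidden_switch             # only the hidden switch matters

# Passive observation: the other creature presses exactly when the switch is on.
obs = []
for _ in range(1000):
    switch = random.random() < 0.5
    pressed = switch                 # confounded behaviour
    obs.append((pressed, light_is_on(pressed, switch)))

lit_when_pressed = [lit for p, lit in obs if p]
print("observed P(light | press) =", sum(lit_when_pressed) / len(lit_when_pressed))  # ~1.0

# Intervention: the learner chooses its own presses, ignoring the switch.
exp = []
for _ in range(1000):
    switch = random.random() < 0.5
    pressed = random.random() < 0.5  # the learner's own action
    exp.append((pressed, light_is_on(pressed, switch)))

on_press = [lit for p, lit in exp if p]
no_press = [lit for p, lit in exp if not p]
print("P(light | do press) =", sum(on_press) / len(on_press),
      " vs doing nothing =", sum(no_press) / len(no_press))  # both ~0.5: no effect
```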

Re the kitten.

I am not sure the immobilized kitten being carted around would not see.

I remember reading (ca '80) that kittens brought up in a "vertical stripe" world for the first part of their life could not thereafter see horizontal stripes.

However the immobilized kitten being carted around would still have an "inertial" and "gravity" sensor in its middle ear, and could detect its motion. I believe that those motion sensors would cross correlate with vision & permit the kitten to "see" by associating the changing visual field with its sense of motion. Perhaps if it was immobilized in space & just shown movies, the question might arise...

Re the rest:

Skinner denied that consciousness existed because it could not be defined. In a sense the concept of consciousness is a little bit like the concept of god.

I believe there are two approaches to creating an AI. One is to define what it could do, theorize about circuits that might do that, then create & program a 'puter to do all of those things.

The other is to painstakingly model the neural circuits to the best of your ability, not requiring analytic comprehension of what is being modeled, pray to a likely deity and cross your fingers.

Modelling is not easy. In the cerebellum (one of the simpler structures of the brain) there are about seven different types of cells which appear to form a complex pattern recognition circuit, with most of the cells being inhibitory in a complex of feedback & feedforward circuits. Each of the estimated 10^12 neurons operates on a time cycle of ~1 millisecond; however, some cells (e.g. Purkinje) can integrate the input from ~200,000 parallel cells via multiple synapses. (Those synapses seem to be "plastic", and some theorists believe the synapses are the basic building blocks of memory.)

I consider that a biological mind is a 4D mechanism that by its geometry is hardwired to only apprehend a 3D universe. If we could directly apprehend the 4D universe in which we live, then we would intuitively grasp Riemann 4D space, particles that were waves, and the complexity of human consciousness. Support for that hypothesis begins with the observation that most people cannot, without training, recollect a string of more than eight digits.

For that reason, I do not favour the first approach. OTOH the second approach is probably beyond our current technical ability. Matching the potential computing power of the cerebellum might require more than 10,000,000,000 desktop computers.
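
For what it's worth, a back-of-envelope calculation with the figures above lands in the same ballpark, under two loud assumptions: every neuron integrates like a Purkinje cell (a crude upper bound) at one operation per synaptic input per cycle, and a 2007-era desktop manages roughly 10^10 operations per second.

```python
# Back-of-envelope only, using the figures quoted above plus stated assumptions.

neurons           = 1e12   # neuron count quoted above
inputs_per_neuron = 2e5    # ~200,000 parallel inputs (the Purkinje-cell figure)
cycles_per_second = 1e3    # ~1 millisecond per update
ops_per_input     = 1      # assumption: one multiply-accumulate per input per cycle

cerebellum_ops = neurons * inputs_per_neuron * cycles_per_second * ops_per_input
desktop_ops    = 1e10      # assumption: ~10 GFLOPS per desktop

print(f"{cerebellum_ops:.0e} ops/s ~= {cerebellum_ops / desktop_ops:.0e} desktops")
# -> 2e+20 ops/s ~= 2e+10 desktops, the same order as the figure quoted above
```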

Wow, this is my first read of this interesting science blog. As a researcher in advanced algorithms for space systems and exploring feasibility of autonomous machines- spacecraft, I've always been intrigued about discussions on the mind, soul, consciousness, the Chinese room, and more. Thanks for a great read and interesting observations.

dave,
perhaps it is right to tweak the question a little. Artificial intelligence covers a lot of programs whose capacities and aims vary, but whose internal processes either allow intelligent solutions to arise or embody intelligent solutions. Perhaps the right question to ask is "does artificial life require a body?", in which case the answer would seem self-evident. Jim, Zachary, Tony and Palmer are all correct from differing perspectives. When we refer to artificial life as distinct from artificial intelligence, we mean that such programs have a need to perpetuate or persist themselves in an environment. That environment may be virtual or physical; whatever the environment, the intelligent program should be able to create solutions that aid its persistence. Such a program's behavior and its results can be useful to us.

For instance, a stock exchange program in a trading environment should be able to generate solutions that do not nuke its balance to zero, in which case it dies. You can see that it becomes easy to create artificial environments that copy real-world market data onto themselves in real time.
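
A minimal sketch of that framing, with an invented random-walk market and a deliberately naive strategy: the program's only built-in "goal" is persistence, and it dies the moment its balance reaches zero.

```python
# Invented illustration of a trading "organism" whose only goal is to persist.

import random
random.seed(1)

class TradingOrganism:
    def __init__(self, balance=1000.0):
        self.balance = balance

    @property
    def alive(self):
        return self.balance > 0

    def decide(self, last_change):
        stake = 0.05 * self.balance               # never stake everything at once
        direction = 1 if last_change > 0 else -1  # naively follow the last move
        return stake, direction

def market_feed(ticks):
    for _ in range(ticks):
        yield random.gauss(0, 1)   # stand-in for real-time price changes

organism = TradingOrganism()
last = 0.0
for change in market_feed(500):
    if not organism.alive:
        print("the program 'died': its balance reached zero")
        break
    stake, direction = organism.decide(last)
    organism.balance += stake * direction * change   # win or lose this tick
    last = change

print(f"final balance: {organism.balance:.2f}, alive: {organism.alive}")
```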

You can think of many such programs which model real world interactions and which are under real world pressures. Within such environments, programs that run successfully can give us important pointers to aid our own decision making skills. It is also possible to run a multitude of programs that simulate multiple, nearly similar agents running in that space.

I am sure that we will be seeing public versions of such programs and their APIs in the near future. Moving from virtual environments to physical environments is just one step away.

We can see that, in the limited sense of its environment, each program is conscious and intent on persisting. The body, whether ephemeral or real, is dependent on the environment.

This would be akin to having nitrogen-eating bacteria in volcanic hot spots. The environment determines the life form, though its characteristics of consciousness and intent remain.

Hope this helps...

udayapg@gmail.com