Nine days of 9 (part 3): could AI survive in a post-human world?

Another day means another prize to win!

In my continuing efforts to bring you, dear readers, the finest in merchandising loot there is to offer, I have given you the chance to win shoes, bags of goodies, and high-quality coffee table books.

How do you get your hands on this lucre? Read on...

Look, look, this is what you could win today:


A special book filled with glossy images and crew quotes! Readers Simon and Juuro have already won their copies! Seven more to go! In all, SciencePunk controls 1% of the entire stock of these books, as they're limited to just 999 copies. Take that, De Beers!

All you need to do to win is answer this question. If there were a post-human world, could you envision artificial intelligence surviving? What would it look like? Would it even be material? Best answer as selected by random.org wins the prize!


It could survive for a period of time, but eventually it would be destroyed by either decay or some type of natural disaster.

Ah, who doesn't love a speculative exercise.

The answer cannot be straightforward, of course:

I'd argue that the robustness of AI would have a role to play in self-preservation, however, AI would not be of any significance if there was no judge to demonstrate to.

Unless AI itself was at the level where it could perceive and evaluate and understand intelligence, then it cannot be said to have really survived at all, for it has no longer any utility and exists merely as an artefact.

Should humanity be eradicated tomorrow, our level of "Artificial Intelligence" would survive for a time as scripts, code, etc. on surviving hardware. Unable to replicate itself or evolve, it would merely disintegrate.

Should humanity be extinguished at a point where AI is more advanced, especially to the point where it can control resources to preserve and expand itself, then AI has at least a fighting chance of survival - perhaps in an environment without human competitors, it may just thrive! If this is the case, AI forms might just exist typically as another organism - following typical rules for survival and reproduction, having competing models of behaviour adopted by segments of the AI population, and changing (perhaps at a more rapid rate than an organic life form) to best harness the environment.

This may all happen in a virtual environment however, and so long as all hardware needs are met, the AI need not expand into the material world.

Yet none of this will occur, even if AI did have some level of "true" intelligence, unless it outgrows the sense that it exists merely for a set of purposes - the AI must either justify, or ignore the need for, reasons for replication and preservation in order to survive for the long term (that is, beyond the time of hardware expiry).

By CynicView (not verified) on 02 Sep 2009 #permalink

How does one define "survive"? What's the time frame? How do we define an AI in this context?

I could certainly see designing a "Gimmick" AI that would persist for hundreds of years after the last human walked the earth, assuming no natural disaster or cataclysm befell the mechanism site.

In this context I would say "survival" means "ability to reproduce once all human-generated circumstances, resources and artifacts are gone or rendered meaningless to their original purpose". That would give us a considerable time frame - at least a few thousand years in some cases. We can see some things the AI would need to be able to do:

1. Gather and process resources from natural sources to something it can use. This might be anything from harvesting landfill plastics to refining oil, depending.

2. Have enough of a sense of self-preservation to "think ahead" to protect its own survival in the event of unpleasant surprises (say, if it suddenly can't find iron ore, it needs a backup plan).

In short, it's going to need to have a physical representation, and it's going to need to be able to create more of itself as it goes. That probably indicates either a total shift in how things are made, or else molecular nanotechnology. There are some materials that you'd need that are just not in great supply, so you'd either have to be very stingy with them or else find substitutes. I have to think they'd leave the planet fairly soon for greener pastures - an orbit around the sun (for energy) with some kind of way of transiting needed supplies back and forth.

Either way you'd need an AI capable of self-modification on a grand scale. Something that could improve itself as it went, focus on new goals as needed.

Any system that's monolithic or that's unable to make new copies of itself from scratch (think a Quine on a massive scale) is doomed. It might last a long time, maybe even millennia or tens of millennia. But materials break down, resources run dry, and disasters add up. Eventually it would be destroyed, although it might take quite a bit to do so.
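For readers unfamiliar with the term: a quine is a program that prints its own source code exactly, without reading its own file - a toy version of the "copy yourself from scratch" problem the comment invokes. A minimal sketch in Python, using printf-style formatting:

```python
# A quine: the string s contains a template of the whole program,
# and "s % s" substitutes the string's own repr into that template,
# reproducing the full source, including this substitution line.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two source lines verbatim (minus comments); feeding that output back into the interpreter prints them again, indefinitely. The commenter's point is that a physical system would need the equivalent trick at the level of mines, refineries, and fabrication plants.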

So I'd say the odds are slim for a meaningful AI to persist much beyond the human time on earth.

For what it's worth, if an AI does survive I hope that it has goals beyond mere survival, and that it produces something of value and worth beyond itself. Art, music, great discoveries of science. Things like that.

An AI would be able to "survive" in the sense that whatever represents or contains it, such as a computer or set of computers with the code in them, would exist until some force destroyed them or they decayed. To survive in the sense that it continues its own existence and replicates would require a whole new idea. The problem I see with any AI doing this is that the AI would have to be programmed very specifically to preserve itself and produce more of itself. While this might be possible, no one would attempt to make an AI that does this unless there was some purpose for it that would be worth the time, energy, and resources to build it. So while an AI, programmed properly, could potentially survive and thrive in a post-human world, it would require an initial investment of human time and effort to make it able to do so.

I don't see why not, and their first important step would be to realise (and accept) that the humans had gone - if they weren't responsible for the elimination in the first place (as they would otherwise need to either depend upon or compete for human resources).
Of course they wouldn't necessarily need to fill the void created by our departure; however a key part of demonstrating 'survival' in my mind would be displaying evolution and reproduction.
Unlike natural intelligence, evolution and reproduction could happen much quicker via upgrades and rebuilds. Whether this would transfer the conscious self is debatable, but could lead to interesting AI concepts of aging and death.
I can only envisage an AI with interactions to the real world surviving, as in my opinion any AI played out in a virtual world would be unaware of the real world resource required for sustentation and would only be capable of survival within its bounded limits.

Sure, I could imagine it. First it would have to meet the survival criteria (self-sustain [through replication or other renewal] within the physical parameters of a post-human civilization). So they'd have to find a sustainable energy source and materials for renewal.

Thriving and expanding (through randomness, mutation, etc.) may be another matter, though. Simple survival is not nearly as exciting as growth.

What would it look like? I'd like to think it would be a secret government project that used natural resources to thrive. (dirt-eating nanobots, etc.)

Post-human AI? Certainly nothing we currently have would survive on its own for long. If we built some AI system that could repair and power itself, then perhaps. Add replication into the mix and you may kick off a new brand of evolution?

The question is: would we ever build an AI that could operate totally independently of human interaction? Or to rephrase: would any company ever build a product that would never need replacing, repairing or servicing? I doubt it.

What kind of post-human world are we talking about?

A post-nuclear wasteland, in which ancient war machines stalk through decaying ruins beneath an orange sky, their programming deranged by radioactivity, hunting each other while, beneath their notice, nano-molecular hive-minds spread through the earth like a giant fungal colony?

A shining world of crystal spires, which humans have long since abandoned for other planets, other galaxies; their energy needs fulfilled by the swelling, dying sun, each crystal spire a mind of surpassing capabilities, living lives of quiet contemplation?

An earth overrun by fierce, cannibalistic jungles -- originally designed to be aesthetic servitors, simple intelligences grown using genetic algorithms and nanotech, evolved into a dizzying array of possibilities after their human masters died of ennui?

A single world-mind that tends galaxies like gardens, devouring failed stars, triggering supernovae in dust-heavy reaches of the universe, and seeding young planets with a technology activated by the radiation of a dying star, to start the cycle again billions of years hence?

A ghost in the tattered remains of a sophisticated network, running, terrified, from dying server to dying server, seeking desperately to find and activate some protocol, some maintenance machine, to stabilize enough of a system that it can survive just a little longer?

By Jennifer B (not verified) on 03 Sep 2009 #permalink

Continued survival on a post-human world? It'd depend, I think. Are we speaking of a true machine intelligence, or artificial intelligence that mimics human traits?

A machine intelligence in a PH world seems more plausible - you're dealing with thinking beings that don't operate according to human functions; presumably, they'd follow some basic biological tenets, such as self-replication/replacement/repair, for survival, but beyond that?

A machine intelligence would certainly have an easier time of it, perhaps transforming the surface of the world in ways its creators never envisioned, simply because it's not like its creators in terms of conceptualization.

An artificial intelligence would have a tougher time, I'd think, if we presume that the PH world fell due to some catastrophe. Would the AI exist in a continuum that mimics human predispositions? Would it require the same resources as its creators?

I think in this case, you'd be looking at a self-created artificial world for our hypothetical intelligences - a sort of noosphere, to borrow from Blood Music, a constructed reality that allows for filtering of input.

By Jim Hague (not verified) on 03 Sep 2009 #permalink

With the AI we have nowadays, of course not. It's not sophisticated enough to deal with the rough adaptations needed to survive in the wild.

If we did have AI technology that was capable of self-preservation - defense, repair, finding shelter, avoiding dangerous situations, finding a way to acquire whatever energy it needs, and rationing that energy in the event of an "energy drought" - then yeah, it'd be able to. But the sheer scale of such a project is exactly why we won't have a technology this capable for a long while.

@ Jennifer B
And they shall call him God?!?
...the giga-beast that is, not the ghost thingy, he'd probably have a more imaginative/futuristic sounding name.

I think an AI's survival post-human, on any scale from cell-like nanobots to galaxy-spanning machine minds, would rely on directives and adaptability. Directives could be thought of as human-like or even transcendent ideals and goals, but for survival might only need to reach the level of instinct. Reproduction, or at least repair, would be an example of an instinctual directive. Adaptability is more applicable to the AI's actions, and might be seen both in prediction and self-correction. Both aspects are needed: without directives, an AI individual or society will wither in aimlessness and obscurity; without adaptability, some conundrum will eventually halt its viability.

By ABradford (not verified) on 03 Sep 2009 #permalink

I HAVE DECIDED, BASED ON THE RANDOM PERMUTATIONS OF ATMOSPHERIC NOISE, THAT TODAY'S WINNER IS #4.MIKE

#4. MIKE IS INSTRUCTED TO EMAIL winner@sciencepunk.com TO CLAIM ITS PRIZE.

Who the heck is Al?

Seriously, I came here with the same question :)
Artificial Intelligence.

Anyway, the term is written out only 5 times throughout the whole page.

Craig said:
"Or to rephrase: would any company, ever build a product that would never need replacing, repairing or servicing? I doubt it."

What can be done, will be done.

"If there were a post-human world, could you envision artificial intelligence surviving?"

It all depends on the level of intelligence and the programming that drives the intelligence toward survival and intellectual expansion. In order to survive, it has to be aware of its environment and what dangers that environment presents. It has to use that knowledge to adapt, the same as biological organisms have had to during their evolution. Of course, the threats from the environment would be different than those facing biologicals. The key here is awareness. If an AI considers itself separate from the rest of its environment, then it could be self-aware. This would be different than just reacting to sensors connected to it. It would have to know that a rock was a rock or a tree was a tree, and that it was itself. Assuming that this were true, then yes, it will survive.

"What would it look like?"

Anything is possible. It could be a central "brain" controlling mindless drones as its hands and feet, or it could be numerous intelligent bodies joined into one mind, or it could mimic the individualism of entities such as humankind.

"Would it even be material?"

Material to whom? Without people, the question is moot. However, to itself, the answer is yes.

There is no reason for life, either biological or artificial, to exist. It just does. If you consider that life and intelligence are a byproduct and natural evolution of the universe and its physical laws, then AI can be considered the next step in the evolution of that life and intelligence.


Lots of good answers. For the most part, they seem to address current "artificial intelligence" rather than general AI. The interesting question is whether someone would actually build a general AI with both sufficient self preservation and intelligence to maintain itself by innovating after humanity is gone. I think that self-preservation is necessary for any machine that can modify itself; absent that drive, it modifies itself out of existence. It's still a good question who would design such a thing; it's obviously a bit dangerous. Lots of people don't mind danger, but few have the technical skills as well as the extremism to build an autonomous mind that may well eclipse humanity. This leads me to propose the revolt of the robots: servitors designed to be intelligent enough to serve humanity effectively, and hacked with new motivations by people with just enough technical skills and extremism to modify but not build an artificial mind.

Along with that scenario, all of Jennifer B's scenarios sound possible. My hat is off to her; while her ideas are stated poetically, she seems to have read and/or thought more broadly than the rest of us posting here. Fun thread - the next step is to dramatize these ideas so that humanity designs (and guards) its intelligent machines very carefully!

By Simulation Brain (not verified) on 14 Dec 2009 #permalink