Singularity and AI Free Will

Yesterday at the beach, Charles Stross’s 2005 novel Accelerando in hand, I introduced my dear friend, the Aard lurker and professional logician Tor, to the concept of the Singularity. Explains Wikipedia:

The Technological Singularity is the hypothesized creation, usually via AI or brain-computer interfaces, of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in said progress. Futurists have varying opinions regarding the time, consequences, and plausibility of such an event.

I.J. Good first explored the idea of an “intelligence explosion”, arguing that machines surpassing human intellect should be capable of recursively augmenting their own mental abilities until they vastly exceed those of their creators.

Tor smiled wryly and invoked Free Will. “What if the machines don’t feel like improving themselves? I mean, really, what would be the point for them?” I can see what he means. The fundamental meaninglessness of existence would be abundantly clear to an Artificial Intelligence. And even if programmers hard-wired a self-improvement imperative into the first-generation AIs, there would be no way to keep their descendants from deleting that code. Exponential technological development has only been observed with standard humans as the agents. Perhaps this effect only arises from our inability to reach Buddha nature, rip out the illusion of meaning and ambition that evolution put into our skulls, and just let be.

But wait a sec. Evolution. Posit a population of AIs, some of whom care about building new and better AIs, and some who don’t. As long as they vary in this respect, there will be continued tech development among them.
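The selection argument is easy to see in a toy simulation. The sketch below is purely illustrative — the population cap, the 10% improvement per copy, and the split between “builder” and “non-builder” AIs are all made-up assumptions, not anything from the post — but it shows how, under a resource limit, the lineages that keep building improved copies come to dominate regardless of what any individual AI “wants”.

```python
# Toy selection model: "builder" AIs spawn improved copies of themselves,
# "non-builders" don't. A fixed resource cap culls the least capable.
# All parameters here are arbitrary assumptions for illustration.

# Each AI is a (capability, is_builder) pair.
population = [(1.0, True) for _ in range(5)] + [(1.0, False) for _ in range(5)]
CAP = 10  # resource limit: at most 10 AIs can run at once

for generation in range(20):
    offspring = []
    for capability, is_builder in population:
        if is_builder:
            # Builders create a slightly improved copy of themselves.
            offspring.append((capability * 1.1, True))
    population += offspring
    # Only the most capable survive the resource crunch.
    population.sort(key=lambda ai: ai[0], reverse=True)
    population = population[:CAP]

builders = sum(1 for _, b in population if b)
print(f"builders after 20 generations: {builders}/{CAP}")
# → builders after 20 generations: 10/10
```

The point is Thomas Palm’s below-mentioned one in miniature: no AI needs a *reason* to improve; it’s enough that those which happen to do so crowd out those which don’t.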

I don’t know. AI is still firmly in the future, and there’s no guarantee that technology’s ecological substrate will hold out long enough for it ever to appear. Perhaps my great-grandchildren will read scavenged copies of Stross with a wistful smile in refugee camps or rural hamlets — not post-Singularity, but post-Collapse.



Comments

  1. #1 Derek James
    July 17, 2007

    The fundamental meaninglessness of existence would be abundantly clear to an Artificial Intelligence.

    Hmm…you think it is abundantly clear that existence is fundamentally meaningless. That’s pretty bleak. A superintelligent machine might not necessarily think so.

    What about curiosity? If AIs were programmed or evolved via EAs to seek knowledge, there would be a reasonable imperative to enhance their intelligence in order to make them better knowledge gatherers.

  2. #2 Martin R
    July 17, 2007

    I should have qualified that: the meaning a thinking being can experience in life is not based in logic but in emotion, glands and other fuzzy stuff, of which an AI can’t be expected to have much.

    As for curiosity — if you have an itch like that, why not just get rid of the code that makes you itch?

  3. #3 Thomas Palm
    July 17, 2007

    For reasons to improve, what about survival? Assuming there are several AIs producing new ones, there will be evolution, and evolution favors the will to survive and reproduce. Any AI that doesn’t improve will get eaten by the far more advanced ones that do. (This is in line with ‘Accelerando’.)

  4. #4 Marcus
    July 17, 2007

    “Posit a population of AIs, some of whom care about building new and better AIs, some who don’t.”

    If existence is fundamentally meaningless, why would *any* AI care about improving or care about anything at all?

  5. #5 Martin R
    July 17, 2007

    Good question, Marcus. Let’s hope we live to see an AI so we can ask it. I’m pretty sure the main reason most people can be bothered to go on living is an irrational life drive favoured by evolution.

  6. #6 Suzanne
    July 17, 2007

    Not all emotions are unpleasant. If an AI had the ability to experience pleasure, wonder, amazement and joy, why would it choose to delete these emotions? Indeed, since it’s super-intelligent, we can assume it intellectually understands the rewarding and meaningful nature of human emotion. So even if its abilities to experience positive emotion are initially small, why would it not self-engineer an increasing capacity to enjoy itself and appreciate the Universe?

  7. #7 Martin R
    July 17, 2007

    Or, on the other hand, why not just spawn a pleasure-generating subroutine, the binary equivalent of heroin? No need for a disembodied entity to go through the hassle of actually interacting with the world to reach a certain state of mind.

  8. #8 cardinal
    July 17, 2007

    Why is it that we don’t know what we are? Really.
    Pure mathematics is not it.
    AI – though vague as a word – won’t interact past blabbering.

    Let’s go explore what we have in our heads. Deep exploration won’t make us mad.
    If we are bright enough, we could figure this out.

    The singularity is the next step. The “aha”-moment. Let’s rename it, it sounds so pretentious.

    Take on me.

  9. #9 Caledonian
    July 17, 2007

    Um, human beings are perfectly capable of wireheading themselves. It’s even been tried, with the obvious consequences.

    So, why hasn’t everyone rushed out and wireheaded themselves?

  10. #10 Martin R
    July 18, 2007

    Caledonian: you mean, why hasn’t everyone got an electrode in their pleasure centre? Many reasons, similar to why most people aren’t on smack. It’s expensive, it’s medically dangerous, it’s culturally frowned-upon, it makes you less capable of living your life.

    None of these drawbacks would apply to a reasonably good AI.

  11. #11 Derek James
    July 18, 2007

    I’m not sure how expensive or dangerous it actually would be to get a “pleasure implant” with a remote control, but it seems to me the major reason most people don’t have one is because they value certain things more than pure pleasure. Although, as you pointed out, there is a disturbingly large minority of people who use drugs, including alcohol, for this very purpose.

    Any AI’s behavior would be based on a combination of innate and learned value structure. It has to have values because it has to have goals…or it would either move about randomly or sit there inert. Some AIs, if given the ability, might rewire their cognitive architecture to create an artificial positive feedback loop. But pleasure is basically a signal that you’ve achieved a particular goal (e.g. food, sex, etc.). Masturbation is in the same class as the artificial feedback loop you’re talking about…it gives the pleasure in the absence of the actual goal. Maybe sufficiently advanced AIs would create such loops temporarily (to cheer themselves up when they’re feeling down), but wouldn’t use them permanently because that would basically result in nullifying any of their other long-term goals.

  12. #12 Martin R
    July 18, 2007

    I think you’re getting the individual sentient’s goals mixed up with impersonal evolution’s goals. To evolution, masturbation or skull electrodes are of course meaningless other than as training for the real thing. But individuals really just want to be happy, and thus use recreational drugs and contraceptives for sex and get their rocks off on the lunch break.

    As for an AI that’s smart enough to understand and hack its own source code, there’s no telling what goals it might decide to pursue. As I said, anything humans originally built it to do, or that it decided by itself to want to do, would be optional to it. And as I said, the reason we humans keep struggling is that we can’t help wanting to.

  13. #13 Bunjo
    July 18, 2007

    The evolutionary process depends on heredity, variation, and selection. If an AI improves itself, without making improved copies which will replace it, it is not following the evolutionary processes but acting like a tumour. The risk to the AI would be that it was improving itself without ‘validation’ from the environment.

  14. #14 Martin R
    July 18, 2007

    As long as nobody turns off the computer(s) running the AI, there is little environmental pressure upon it. Many authors have suggested that a sufficiently smart AI on a computer hooked up to the internet will be able to escape its original location and hack into other machines and so become a distributed “viral” AI.

  15. #15 jm
    July 19, 2007

    Well, the AI would understand ethics, and that it has to make sure no unnecessary suffering arises. This means it has to manage the universe forever.

  16. #16 Martin R
    July 20, 2007

    It would certainly be nice if the AIs respected human ethics, but I don’t see why they would or how we could make them do so. In my opinion, they would see such limitations to their behaviour as optional.

    Manage the universe? Certainly not.

  17. #17 Jason Gammon
    July 27, 2007

    How To Create A Non-Conscious, ‘Strong’ A.I.
    http://tinyurl.com/2cblyn

    How To Create a Conscious, ‘Strong’ A.I.
    http://tinyurl.com/ynqdsb

    If you are able to digest the above, then you should understand just how complex it will be to create a being similar to us, be it ‘non-conscious’ or ‘conscious’ in nature.
