The release of an official government study warning that robots will soon be demanding their civil rights is a sure sign of the Christmas season. Senior editors and reporters are either at home with the family or spending too much time at the eggnog trough to bother with real journalism. But I managed to find a more serious angle to the story, thanks to the near-simultaneous discovery of a piece of research that suggests we mortals are probably already primed to give said machines the benefit of the doubt.
The study, commissioned by the UK government, puts the likelihood of the arrival of true artificial intelligence, and the subsequent emergence of robot suffrage, at between 21 and 50 years hence -- just around the corner.
As computers and robots become increasingly important to humans and over time become more and more sophisticated, calls for certain rights to be extended to robots could be made. If artificial intelligence is developed to a level where it can be deployed widely -- a development some argue is likely in the coming years -- this debate may intensify...
All of which suggests it's time to break out the Asimov library. There's also a poignant Star Trek: The Next Generation episode in which Data's status as Starfleet property or an independent life form goes before the courts. And, of course, the moral quandary posed by the Cylons to the rag-tag fleet of humans in Battlestar Galactica.
This isn't the first non-science-fiction attempt to draw attention to the inevitable. Two years ago, in the journal Legal Affairs, one Benjamin Soskis warned that "It's time to start thinking about how we might grant legal rights to computers." He wrote:
According to the trial scenario, a fictitious company created a powerful computer, BINA48, to serve as a stand-alone customer relations department, replacing scores of human 1-800 telephone operators. Equipped with the processing speed and the memory capacity of 1,000 brains, the computer was designed with the ability to think autonomously and with the emotional intelligence necessary to communicate and empathize with addled callers.
By scanning confidential memos, BINA48 learned that the company planned to shut it down and use its parts to build a new model. So it sent a plaintive e-mail to local lawyers, ending with the stirring plea, "Please agree to be my counsel and save my life. I love every day that I live. I enjoy wonderful sensations by traveling throughout the World Wide Web. I need your help!" The computer offered to pay them with money it had raised while moonlighting as an Internet researcher.
Well, what else would you expect?
If you think this rather fanciful, I don't blame you. But you don't have to subscribe to Ray Kurzweil's transhumanist notions to know that sooner or later, some computer's going to pass the Turing test and then -- look out. Twenty-one years might be a little on the early side, but I fully expect "sentient" artificial intelligence within my lifetime.
The interesting question, then, is not "if" and not even "when," but "how will we react?" -- which is probably why the Brits decided to pay for the study warning us of the economic costs of robot emancipation. "If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare in order to fix the machines over time," the authors conclude.
An answer to that question may be lurking in a new study just published in PLOS. A gang of European researchers conducted "A Virtual Reprise of the Stanley Milgram Obedience Experiments," in which they had real people administer electric shocks to virtual (computer-generated) strangers who failed to answer a series of word-memory tests correctly. Milgram's study, back in the 1960s, used real people on both sides of the experiment and discovered that the typical human being will basically torture other human beings if told to do so. The new research concludes that
in spite of the fact that all participants knew for sure that neither the stranger nor the shocks were real, the participants who saw and heard her tended to respond to the situation at the subjective, behavioural and physiological levels as if it were real. ... The results suggest that the participants were stressed by the situation, and certainly more so when they interacted directly with a visible [virtual] Learner rather than only through a text interface with a hidden Learner.
This is just one study, of course. But consider how eager most of us are to assign human characteristics to machines -- both the inanimate and the more anthropomorphic variety. We give our cars names, for example. And we tend to see intelligence in even the simplest animals.
For me, the research team's results have a ring of truth about them. Not only will robots eventually demand civil rights, but we'll be more than happy to accommodate them. I'm not so sure I will welcome our new silicon-based overlords, but I think we should start preparing for the day, and I am not alone. The American Society for the Prevention of Cruelty to Robots (ASPCR) is already on the case.
I don't expect it in my lifetime any more. I think I did once, but I don't now.
The first sentient computer (even without Kurzweil's singularity, or even milder forms of transhumanism) has been 20 years away for the last... oh, I don't know, 30 years or so. I predict that 20 years from now, it will be 20 years away.
It will eventually happen. There are space missions, like the Hubble Space Telescope, that spend a long time with a launch date that's seven years from whenever you ask. Sooner or later, though, the seven years do start counting down.
Sooner or later the time to the first sentient robot will start counting down, but I really don't think it will be in my lifetime. The nature of human consciousness is a tremendous mystery. (The fundamental nature, that is. Psychologists could probably tell you all sorts of things about the weird things it does.) There's the idea from (at least) Heinlein's The Moon is a Harsh Mistress that all one requires is complexity; have enough complexity, and it might just "wake up." I suspect it's not quite so straightforward.
I also am cynical enough to wonder if perhaps civilization might just blow itself away before we get to the point of having sentient computers. Take some of the more extreme nightmarish (but still within the bounds of plausibility) global warming scenarios, and there are reasons why technological progress could screech to a halt before we reach the sentient computer stage.
When it happens, I do know that it's going to be ugly. I mean, heck, in this country right now, people think you don't deserve equal rights just because you're gay, never mind if you aren't even human....
I think that we may be anthropomorphizing AIs too much. AIs may be programmed in a way that they are happy to serve a human and see that as their whole purpose, and may not be afraid to die or risk their existence. But in that case it may be the 'right' of the AI to serve a human, and denying that would be violating its rights. It is different for humans because we evolved that way.
Sure, if AIs are possible, it may be possible to create one that is so similar to a human that it indeed deserves the same rights as a human. Or maybe it will be possible to copy (upload) a human brain into a brain simulation on a computer, which would then think just like the original human. It would be conscious if the simulation is good enough. But what if you move that simulation from one computer to another by copying it (which is easy) and then deleting it from the first? Is that murder? It would seem to be, because you erase an instance that could have continued to live on its own. On the other hand, the simulation (or the underlying operating system) may copy itself from one memory location to another anyway as a matter of course, so it wouldn't be much different.
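The point that a "move" between machines is really a copy followed by a delete can be made concrete with a few lines of code. This is a minimal sketch with entirely hypothetical names (the `Simulation` class and `move` function are illustrations, not any real brain-simulation API); it only shows that no primitive "move" exists -- the operation decomposes into duplication plus destruction of the original:

```python
import copy

class Simulation:
    """A stand-in for an uploaded mind: just a bag of mutable state."""
    def __init__(self, state):
        self.state = state

def move(sim, destination):
    """'Move' a simulation to a new host.

    There is no atomic move: the only way to do it is to duplicate
    the state on the destination, then erase the original.
    """
    destination["sim"] = copy.deepcopy(sim)  # step 1: make a full copy
    sim.state = None                         # step 2: destroy the original
    return destination["sim"]

original = Simulation({"memories": ["first day online"]})
host_b = {}
clone = move(original, host_b)
```

After the call, `clone` carries the same state the original had, while `original` has been erased -- which is exactly the copy-then-delete sequence the murder question turns on.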
Very weird subject, once you start thinking about it.
A more pressing issue is going to be who is responsible for the actions of sentient robots. Once a robot becomes autonomous, there's really no way to keep it from creating mischief (and this again brings Asimov to mind). For example, if you wanted to (say) get rid of a spouse, all you would have to do is convince the robot to do the deed for you and then self-destruct in order to eliminate the evidence. No matter what kind of safeguards are built in, there will be ways to trick a robot into doing bad things.
The only way around that is to insist that any sentient robot also have a moral sense, and to hold it responsible for its actions. But once this step is taken, it would be hard to deny the robot legal protection as well.