Robotics: Where we're at right now

There's been a lot of news about robots lately, so I thought I'd take the opportunity to synthesize what's going on in this field and offer a bit of speculation about where robotics is headed.

First: From Neurodudes comes news of a robotic limb that not only responds to nerve impulses but also has the potential to give feedback to its human host -- as if she were sensing her environment with her own hands. Is this the first cyborg? What's next -- direct mind control of machines?

Actually, a company is working on just such an interface -- a video game controller that works by monitoring brain activity. This isn't just a lab experiment, either, but a device intended for mainstream users. It reminds me of some of the research I read about in Malcolm Gladwell's Blink, where he discussed using similar helmets to help kids with ADHD control their impulses. The controllers Gladwell described, however, operated along only a single dimension; with this new controller, people have been able to perform as many as three activities simultaneously.

Meanwhile, as we get better at linking humans to robots, we're also developing robots that are better at controlling themselves:

While this robot may at first look like it's loaded up on a little too much synthetic schnapps, it's actually learning how to walk. You'll notice in the video that another robot can't push it over -- something that would be much easier to do to a traditional bipedal robot. Neurodudes has much more on this robot's remarkable abilities.
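For readers curious what "learning to walk" can mean in software, here's a toy sketch of trial-and-error gait learning. To be clear, this is not the algorithm this particular robot uses (the researchers' method is far more sophisticated); the simulate_walk stand-in, the three gait parameters, and the numbers are all invented for illustration. The idea is simply to nudge the gait at random, keep nudges that let a simulated walker travel farther before falling, and discard the rest.

```python
import random

def simulate_walk(gait):
    """Toy stand-in for a physics simulation: returns the distance walked
    before falling. A real robot would measure this on hardware or in a
    full simulator."""
    stride, lean, sway = gait
    # A made-up "sweet spot" the learner has to discover by trial and error.
    penalty = (stride - 0.6) ** 2 + (lean - 0.1) ** 2 + (sway - 0.3) ** 2
    return max(0.0, 10.0 - 50.0 * penalty + random.gauss(0, 0.1))

def learn_to_walk(trials=500):
    gait = [random.random() for _ in range(3)]  # start with a random gait
    best_distance = simulate_walk(gait)
    for _ in range(trials):
        # Nudge each parameter a little, like a toddler experimenting.
        candidate = [p + random.gauss(0, 0.05) for p in gait]
        distance = simulate_walk(candidate)
        if distance > best_distance:            # keep only changes that help
            gait, best_distance = candidate, distance
    return gait, best_distance

if __name__ == "__main__":
    gait, distance = learn_to_walk()
    print(f"Learned gait {[round(p, 2) for p in gait]} walks {distance:.1f} m")
```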

It's pretty clear from each of these examples that as robots develop, we're not only going to have to improve our technology, we'll also need to improve our ethical understanding of robots. If robots have the ability to learn, then what's to stop them from "learning" to destroy property, or even to kill humans? South Korea has taken the first steps toward building an ethical code for robots. The question ultimately boils down to this: who's responsible for a robot's behavior when it's acting autonomously? The builder? The owner? The robot itself? I suspect the answer will be much more complex than the famous laws of robotics set forth in Asimov's I, Robot. Still, I'm hopeful that this is a solvable problem: as computers become more sophisticated, it shouldn't be difficult to build such a code into every robot, no matter how complex it becomes.

