Evolving Robotspeak

Loyalty, teamwork, cruel deception: welcome to robot evolution.

Living things communicate all the time. They bark, they glow, they make a stink, they thwack the ground. How their communication evolved is the sort of big question that keeps lots of biologists busy for entire careers. One of the reasons it's so big is that there are many different things that organisms communicate. A frog may sing to attract mates. A plant may give off a chemical to attract parasitoid wasps to attack the bugs chewing its leaves. An ant may lay down pheromone trails to guide other ants to food. Bacteria emit chemical signals to each other so that they can build biofilms that line our lungs and guts.

Communication may work very well in these cases, but scientists also want to know how it evolved in the first place. Roughly speaking, the question goes something like this. Say you're an organism living a solitary life. Sending a signal to another member of your species may cost you more than it brings back in benefits. If you come across some food and suddenly declare, "My, but those are some tasty grubs," you may find yourself besieged by other members of your species all coming to have some for themselves. You might even attract the attention of a predator and become a meal yourself. So why not just shut up?

There are many ways to attack this question. You can go out and listen to birds. You can genetically engineer bacteria to tinker with their communication system and see what happens. Or you can build an army of robots.

Laurent Keller, an expert on social evolution at the University of Lausanne in Switzerland, chose the latter. Working with robotics experts at Lausanne, he constructed simple robots like the ones shown above. Each robot had a pair of wheeled tracks, a 360-degree light-sensing camera, and an infrared sensor underneath. The robots were controlled by a program with a neural network architecture. In a neural network, inputs arrive through various channels, get combined in different ways, and those combinations produce the outgoing signals. In the case of the Swiss robots, the inputs were the signals from the camera and the infrared sensor, and the outputs controlled the tracks.
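To make that architecture concrete, here is a minimal sketch in Python of what such a sensor-to-tracks mapping might look like. The eight-sector camera encoding, the single layer of weights, and all the names here are my own illustration, not the authors' actual controller; the point is just how a handful of sensor numbers become two track speeds.

    import numpy as np

    def controller(weights, camera, ground):
        """Map sensor readings to track speeds through one layer of weights."""
        # camera: 8 light intensities around the robot (illustrative encoding)
        # ground: 1.0 over gray paper, 0.0 over black, from the infrared sensor
        inputs = np.append(camera, [ground, 1.0])   # the final 1.0 is a bias term
        left, right = np.tanh(weights @ inputs)     # two outputs, one per track
        return left, right

    # Randomly wired, as at the start of the experiment
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(2, 10))              # 8 camera sectors + ground + bias
    print(controller(weights, rng.random(8), ground=1.0))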

The scientists then put the robots in a little arena with two glowing red disks. One disk they called the food source; the other was the poison source. The only difference between them was that the food source sat on top of a gray piece of paper, and the poison source sat on top of black paper. A robot could tell the two apart only once it was close enough to use its infrared sensor to see the paper color.

Then the scientists allowed the robots to evolve. The robots--a thousand of them in each trial of the experiment--started out with neural networks that were wired at random. They were placed in groups of ten in arenas with poison and food, and they all wandered in a haze. If a robot happened to reach the food and detected the gray paper, the scientists awarded it a point. If it ended up by the poison source, it lost a point. The scientists observed each robot over the course of ten minutes and added up all their points during that time. (This part of the experiment was run on a computer simulation to save time and to be able to evolve lots of robots at once.)
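In outline, the scoring loop looks something like the sketch below. The robot and arena objects, their methods, and the step count are hypothetical stand-ins of mine; the paper's exact procedure and numbers differ.

    def score_trial(robot, arena, steps=1000):
        """Run one robot for a fixed number of time steps and total its points."""
        points = 0
        for _ in range(steps):
            robot.move(arena)              # hypothetical simulation step
            if arena.at_food(robot):       # infrared sensor sees gray paper
                points += 1
            elif arena.at_poison(robot):   # infrared sensor sees black paper
                points -= 1
        return points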

In the simplest version of the experiment, the scientists selected the top 200 feeders. Not surprisingly, they were all pretty awful, since they had randomly wired neural networks. But they had promise. The scientists "bred" the robots by creating 100 pairs and using parts of each one's program to create a new one. Each new program also had a small chance of spontaneously changing in one part (how strongly it reacted to the red light, for example). After each round of this mating, the new programs were plugged back into the simulated robots, which groped around again for food, and once again the scientists selected the best feeders. They repeated this cycle 500 times in 20 different replicate lines. When they were done, they plugged the programs into real robots and let them loose in a real arena with real food and poison (well, as real as food and poison get for experimental robots). The real robots behaved just like the simulated ones, demonstrating that the simulation had gotten the physics of the real robots right.
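Here's a bare-bones version of that select-and-breed cycle, assuming each program is a flat string of bits. The one-point crossover and the mutation rate are generic genetic-algorithm choices of mine, not necessarily the ones the authors used.

    import random

    GENOME_BITS = 240   # each program is 240 bits, per the article

    def next_generation(population, scores, keep=200, mutation_rate=0.01):
        """Keep the top scorers, recombine them in pairs, and mutate a little."""
        ranked = sorted(zip(scores, population), key=lambda pair: pair[0], reverse=True)
        parents = [genome for _, genome in ranked[:keep]]
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_BITS)   # one-point crossover
            child = a[:cut] + b[cut:]
            # each bit has a small chance of flipping
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        return children

    # 1,000 random genomes to start; scores would come from the feeding trials
    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(1000)]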

The results were impressive, although perhaps not surprising to people who are familiar with experimental evolution in bacteria. From their randomly wired networks, the robots evolved within a few dozen generations until they were scoring about 160 points a trial. That held in all twenty lines. Each program consisted of 240 bits, which means it could take any of 2 to the 240th power (roughly 10^72) configurations. Out of that unimaginable range of possibilities, the robots in each line quickly found an effective solution.

Now the scientists made things more interesting. There's a great deal of evidence to suggest that if individuals are closely related to one another, evolution may lead to less cut-throat competition and more cooperation. (See my post on slime molds for an example of this research.) So the scientists ran the robot evolution over again, but this time the robots got kin. Rather than mixing them indiscriminately, they grouped the robots into colonies. They only bred the best performers with other members of their colonies, and from their offspring they created robot clones for the next round of food and poison.

Kinship had a big effect on the robots. Now they were scoring about 170 points. Part of their success was the result of politeness. The scientists designed the food source so that only eight out of ten robots could fit around it at once. The individualist robots jostled for access, and all of them ended up with fewer points. The robot families, on the other hand, worked together. There was no code of honor in their silicon heads, of course. It's just that they shared the same instructions.

The scientists then added another wrinkle: selection at the level of whole colonies. There's evidence to suggest that in some species natural selection can act not just on individuals but on colonies as well. So the scientists evolved the robots by selecting the best-performing colonies, rather than plucking out individuals. This colony-level selection boosted the robots' performance even more, to an average of 200 points. (A fine point: the scientists also ran the experiment with colony-level selection on unrelated robots. Those robots scored 120 points, good but not as good as the others.)
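Put together, the two treatments differ in who gets copied and what gets ranked. A sketch under the same assumptions as before, reusing the hypothetical score_trial from above; the colony size, the number of colonies kept, and the build_robot constructor are all illustrative inventions of mine.

    def make_kin_colony(genome, size=10):
        """Under the kinship treatment, a colony is ten clones of one genome."""
        return [build_robot(genome) for _ in range(size)]   # build_robot is hypothetical

    def colony_score(colony, arena):
        """Under colony-level selection, fitness is the colony's summed score."""
        return sum(score_trial(robot, arena) for robot in colony)

    def select_colonies(colonies, arena, keep=20):
        """Rank whole colonies, not individuals, and keep the best ones."""
        return sorted(colonies, key=lambda c: colony_score(c, arena), reverse=True)[:keep]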

Here, however, is where the experiment got really intriguing. Each robot wore a kind of belt that could glow, casting a blue light. The scientists now plugged the blue light into the robot circuitry. Its neural network could switch the light on and off, and it could detect blue light from other robots and change course accordingly. The scientists started the experiments all over again, with randomly wired robots that were either related or unrelated and that were selected either as individuals or as colonies.
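Wiring in the light takes only one more output and a second set of camera inputs. Another illustrative sketch in the spirit of the controller above; the sector counts and the 0.5 on-threshold are my assumptions, not the authors' design.

    import numpy as np

    def signaling_controller(weights, red_camera, blue_camera, ground):
        """Like the earlier controller, but the robot also senses blue light
        from other robots and decides whether to switch on its own lamp."""
        inputs = np.concatenate([red_camera, blue_camera, [ground, 1.0]])
        left, right, lamp = np.tanh(weights @ inputs)
        return left, right, lamp > 0.5    # third output drives the blue light

    # 8 red sectors + 8 blue sectors + ground + bias = 18 inputs, 3 outputs
    rng = np.random.default_rng(1)
    weights = rng.normal(size=(3, 18))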

At first the robots just flashed their lights at random. But over time things changed. In the trials with relatives undergoing colony selection, twelve out of the twenty lines began to turn on the blue light when they reached the food. The light attracted the other robots, bringing them quickly to the food. The other eight lines evolved the opposite strategy. They turned blue when they hit the poison, and the other robots responded to the light by heading away.

Two separate communication systems had evolved, each benefiting the entire colony. By communicating, the robots also raised their score by 14%. Here's a movie showing six of these chit-chatting robots finding a meal.

A similar robot language arose in two of the other trials (non-relatives with colony selection and relatives with individual selection), although in those cases it didn't give the robots as big a boost. A truly perverse language sprang up in the individually selected non-relatives. In all twenty trials, the robots tended to emit blue light when they were far away from the food. The other robots were attracted to them anyway, even if it meant abandoning their own food.

The scientists speculate that this deception evolved because the robots initially were turning blue at random. Since the only place where a lot of robots would tend to aggregate would be around the food, a strategy evolved to head for the blue light. But that strategy opened up the opportunity for robots to fool each other. If they switched on their blue light when they were away from the food, they would distract other robots, reducing the competition for access to the food. And without kinship to give them a common genetic destiny, the robots got better at fooling one another. In their individualistic scramble, they ended up performing disastrously. Unlike in the other versions of the experiments, the deceptive robots actually scored worse than they did without the chance to evolve communication.

There are lessons both abstract and practical here. The rules that govern social organisms may apply to man-made machines as well. And if you want to avoid a robot uprising, don't let robots have kids, and don't let them talk to each other.

(Here's the abstract in Current Biology, and the pdf from Keller's web site.)


In their individualistic scramble, they ended up performing disastrously. Unlike in the other versions of the experiments, the deceptive robots actually scored worse than they did without the chance to evolve communication.

This is a classic prisoners-dilemma-type situation (and a classic concept in the study of altruism).

By Nick (Matzke) (not verified) on 23 Feb 2007

The Cylons were created by Man. They evolved. They rebelled. There are many copies. And they have a plan.

By John Hynes (not verified) on 23 Feb 2007

Another fascinating example of intelligence arising from nonintelligent components and forms of selection. Of course the ID crowd will reflexively claim the intelligence is front-loaded, that being their article of faith, despite their inability to identify exactly where this front-loading is located, or how it brought about the results seen.

The IDers are like psychics in this regard. If their hypothesis were true, they should be able to examine the robot programming, and the planned selection criteria, and predict, with great precision, the resulting behavior. Instead, all they can provide is after-the-fact rationalizing, like with Dave Thomas' experiment with evolutionary algorithms and Steiner trees.

Although this may be "a classic prisoners-dilemma-type situation", there is another possibility.

This game may be more consistent with the pursuit-evasion equilibria of chapter 8 in Tamer Basar and Geert Jan Olsder, Dynamic Noncooperative Game Theory (Classics in Applied Mathematics), which includes definitions and proofs.

Much of the work on robotic algorithms is done with pursuit-evasion games.

See Nature editor's summary 25 January 2007 for

News and Views: Mathematical physics: On the right scent, by Dominique Martinez (doi:10.1038/445371a). Searching for the source of a smell is hampered by the absence of pervasive local cues that point the searcher in the right direction; a strategy based on maximal information could show the way.

and

Letter: 'Infotaxis' as a strategy for searching without gradients, by Massimo Vergassola, Emmanuel Villermaux and Boris I. Shraiman (doi:10.1038/nature05464).

[Nature, 25 January 2007, Volume 445, Number 7126, pp. 339-458]
http://www.nature.com/nature/journal...070125-10.html

Fascinating

At the end of the PDF it says,

Supplemental Data include additional Experimental Procedures, two figures, and one movie and are available with this article online at http://www.current-biology.com/cgi/content/full/17/6/---/DC1/.

I wanted to look at supplemental figure S1, but there's nothing at all on that page besides an ad and a menu bar.

Very interesting work. I would like to see how it worked with a richer environment (multiple food sources, predators...). However, some of the terminology in the article could be misinterpreted by IDiots. In particular, there is the notion here of forcing evolution in a particular direction, whereas Darwinian evolution moves a species toward increased fitness for the environment in which it finds itself. It could be possible to interpret this research as evidence of the hand of a designer in the creation of humans.

I found this article, from the field of physics, a useful guide to using scientific terms with lay audiences:

http://www.physicstoday.org/vol-60/iss-1/8_1.html

Ross, your article reference is appropriate. But the ID people should read it also. Dr. Quinn refers to Occam's razor in contending that one should not invent additional assumptions when existing science explains the observations; that reasoning weighs against ID in this experiment. I suggest the experimenters were testing different directions of evolution rather than "forcing" them.

In any event, it is difficult to convey scientific "beliefs" without utilizing the lay or more common use of words.

Carl, again, very nice article.

I, for one, welcome our new robotic overlords.

What I'm wondering is, why in the world did they go to the trouble of using robots? The same thing could have been accomplished much more cheaply and easily in a computer simulation (and they seemed to rely heavily on computer simulation in training their neural networks anyway).

If they did it all in simulation, there wouldn't be a pretty picture to attract attention. :) Or rather, the pretty picture would be computer-generated, so it would look like a game, not science. Simulations like this are pretty common (I've even done food-seeking robots with random neural nets!) but we rarely think of them as worth turning into experiments. The reason this one is interesting isn't the glowing robots. It's that it's a quasi-empirical test of group selection. That's something you can't easily get from nature.

(Isn't everything better with robots?)

By Vertebrat (not verified) on 27 Feb 2007

It would be intriguing to see the outcome of the same experiment if another distinguishable signaling light were added alongside the blue one. Given the apparent benefit of identifying the "poison" as well as the "food", I would suspect that using one light for each would evolve as an even more effective strategy, especially in the related and colony scenarios.
While probably not rising to the level of an analogy to politics, it might be very interesting to see whether more complex deception would arise in various scenarios with the two signals, perhaps investigating such deception (and perhaps cooperative deception) among competing colonies or "kin" groups, etc.

By Mark Morrison (not verified) on 18 Mar 2007