Cooperation without cognition

How does cooperation evolve? It is in an organism's best interest to screw its competitors in order to best convey its genes to the next generation, yet we see a variety of human and animal examples of cooperation. Part of the answer comes from a branch of mathematics and economics called Game Theory. Game Theory examines the behavior of individuals (or software constructs designed to replicate individual behaviors) as they interact. Generally, this interaction takes the form of simple games in which the effects of different strategies on the outcome -- cooperation or competition -- can be determined.

One of the archetypal games is the public goods game. To play it, you would sit four people around a table and give each of them 5 dollars. At the beginning of each round, each player can put as much or as little of their money as they want into the center. That money is then doubled and divided evenly among the four individuals. There are a couple of strategies one can adopt in this game. You can play fair and put all your money in the middle; if everyone does, everyone doubles their money. Or you can choose to put in no money at all and disproportionately benefit from the generosity of the other individuals.
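If you want to see the arithmetic, here is a toy version of a single round in Python. This is my own sketch, not anything from the paper; it just encodes the four-players-with-5-dollars setup described above.

```python
# A toy version of one round of the public goods game described above
# (four players, 5 dollars each, pot doubled and split evenly).

def public_goods_round(contributions, endowment=5.0, multiplier=2.0):
    """Return each player's payoff for one round.

    contributions: how much each player puts in the center (0 up to endowment).
    """
    pot = sum(contributions) * multiplier      # money in the center is doubled
    share = pot / len(contributions)           # then split evenly among everyone
    # payoff = whatever you held back plus your equal share of the pot
    return [(endowment - c) + share for c in contributions]

# Everyone cooperates fully: each player doubles their money.
print(public_goods_round([5, 5, 5, 5]))   # [10.0, 10.0, 10.0, 10.0]

# One player free-rides and comes out ahead of the cooperators.
print(public_goods_round([5, 5, 5, 0]))   # [7.5, 7.5, 7.5, 12.5]
```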

Because such games are repeated, some very interesting behaviors emerge. For example, because people know that there is going to be more than one round, cooperation can emerge in this environment. If you know that screwing all the other people at the table means you will never be able to profit off them again, you are inclined to play nice.

Another example is something called altruistic punishment. Say I add a rule that at the end of every round you can pay 5 dollars in order to take 5 dollars away from someone else -- to punish them. If any of the people at the table are holding back on putting money in the center, what you find is that others will go out of their way to punish them, even though it costs them to do so. This trait -- the desire to punish cheaters -- is almost intrinsic to human cultures. It is one of the ways that we enforce cooperation.
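Sticking with the toy code above, the punishment rule might look something like this. Again, this is just my own illustrative sketch; in particular, the cutoff for deciding who counts as a cheater is an assumption I made for the example.

```python
# Extending the toy game above with the punishment rule: after payoffs are
# handed out, players can pay 5 dollars to take 5 dollars away from someone
# else. The "cheater" threshold (contributing less than half the endowment)
# is my own assumption for the sake of the example.

def apply_punishment(payoffs, contributions, cost=5.0, fine=5.0, threshold=2.5):
    """Every full contributor punishes every player who held back."""
    payoffs = list(payoffs)
    punishers = [i for i, c in enumerate(contributions) if c >= threshold]
    cheaters = [i for i, c in enumerate(contributions) if c < threshold]
    for p in punishers:
        for ch in cheaters:
            payoffs[p] -= cost    # punishing costs the punisher...
            payoffs[ch] -= fine   # ...and hurts the cheater
    return payoffs

# Payoffs from the free-rider example above: [7.5, 7.5, 7.5, 12.5]
print(apply_punishment([7.5, 7.5, 7.5, 12.5], [5, 5, 5, 0]))
# [2.5, 2.5, 2.5, -2.5] -- the free-rider's advantage is wiped out,
# even though each punisher paid for the privilege.
```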

You can see that simple games like this are interesting ways to study how cooperation could emerge from a system that would otherwise tend towards competition.

Which brings us to our paper. Killingback et al. use a computer program to follow the success of actors (software agents that stand in for organisms in the game) using a variety of strategies in the public goods game described above. They use a genetic algorithm whereby many games are going on within a large group of individuals, and the more successful individuals are more likely to transfer their strategy to a new generation of actors.

It had been shown that if you have a lot of individuals playing the game above -- with a variety of strategies distributed uniformly among those individuals -- cooperation does not emerge. Since the ideal strategy is to compete rather than cooperate, all the cooperators are competed out of existence.

However, if you don't make everyone compete with everyone else at the same time, but rather make them interact in small subgroups, something interesting happens. While the cooperators do not do well in mixed groups, they do swimmingly in cooperator-only groups. The cooperator-only groups expand (under the genetic algorithm) and eventually displace the actors that only compete:

Since cooperation cannot evolve in the public goods game in a well-mixed population, it is important to consider the effect of other population structures. In many social situations, individuals do not interact with all members of the population in every generation--rather, in a given generation, individuals only interact socially with a sub-group of the population. Consider now the total population to be composed of m disjoint interaction sub-groups. We assume that each individual in the population obtains a payoff by playing the public goods game with the other individuals in its interaction group. We also assume that individuals compete with all other individuals in the population. Thus, social interactions are local, while competition is global. We implement the assumption of global competition by having individuals reproduce in their group in proportion to their fitness, subject to the condition that the total population size remains constant. To achieve this constraint on the total population size we allow individuals to reproduce in their group (in proportion to their fitness) and then rescale the size of all groups to maintain a constant total population size. During reproduction occasional mutations occur, which change the investment level of the offspring. Finally, a fraction d of the individuals in each group disperses randomly to the other groups in the population. We assume that initially all groups are of equal size, containing n>k individuals, so the public goods game in each group is a social dilemma. We also assume that, if any group consists of only a single individual, then this individual does not play the public goods game, and receives zero payoff.

Despite its simple definition, it is not easy to study this group-structured model analytically. Thus, our investigation is based on extensive evolutionary simulations (source code available in the electronic supplementary material). Our simulations show that the evolution of cooperation in such a group-structured population can be dramatically different from that in a well-mixed population and that with such a population structure substantial cooperative investments can readily evolve from low initial levels and be maintained indefinitely. Typical simulation results are shown in figure 2. The following mechanism is responsible for the evolution of cooperation in the group-structured situation. The combination of reproduction within groups and limited random dispersal among groups results in groups of varying size (although the mean group size remains constant at n). For certain parameter values the variation is such that groups with fewer than k individuals form. In such groups, the public goods game is no longer a social dilemma, in that zero investment is no longer the dominant strategy. Although lower investors always have greater fitness than higher investors, in any given group, it is now possible that Simpson's paradox (Sober & Wilson 1999; Hauert et al. 2002) applies--the fitness of higher investors, when averaged over all groups, will be greater than that of lower investors--and higher investors will increase in frequency. Thus, interaction and reproduction within groups, together with limited dispersal among groups, results in a natural mechanism for the evolution of cooperation.
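To make the model a little more concrete, here is a rough sketch in Python of the kind of simulation the passage describes: local public goods interactions, globally fitness-proportional reproduction with an (approximately) constant total population size, mutation of investment levels, and a small dispersal fraction d. The parameter values and the exact payoff function are my own guesses -- the authors' real source code is in their electronic supplementary material.

```python
# A rough sketch of a group-structured public goods simulation in the spirit
# of the quoted passage. All numbers and the payoff form are assumptions.
import random

M_GROUPS = 20        # m interaction groups
GROUP_SIZE = 8       # initial group size n
ENDOWMENT = 5.0      # maximum investment per individual
MULTIPLIER = 3.0     # pot multiplier (groups smaller than 3 are not a dilemma)
DISPERSAL = 0.05     # fraction d of individuals that migrate each generation
MUT_SD = 0.1         # standard deviation of mutations to the investment level
POP_SIZE = M_GROUPS * GROUP_SIZE

def group_payoffs(investments):
    """Public goods payoffs within one group; a lone individual gets zero."""
    if len(investments) < 2:
        return [0.0] * len(investments)
    share = sum(investments) * MULTIPLIER / len(investments)
    return [(ENDOWMENT - x) + share for x in investments]

def generation(groups):
    # Local interaction: payoffs come only from your own group.
    fitnesses = [group_payoffs(g) for g in groups]
    total_fit = sum(f for fs in fitnesses for f in fs) or 1.0
    new_groups = []
    for g, fs in zip(groups, fitnesses):
        # Global competition: a group's share of the next generation is its
        # share of the total fitness in the whole population.
        n_offspring = round(POP_SIZE * sum(fs) / total_fit) if g else 0
        parents = (random.choices(g, weights=[f + 1e-9 for f in fs], k=n_offspring)
                   if n_offspring else [])
        # Mutation: offspring investment drifts a little from the parent's.
        new_groups.append([min(ENDOWMENT, max(0.0, x + random.gauss(0, MUT_SD)))
                           for x in parents])
    # Dispersal: a fraction d of individuals moves to a randomly chosen group.
    migrants = []
    for g in new_groups:
        movers = [x for x in g if random.random() < DISPERSAL]
        for x in movers:
            g.remove(x)
        migrants.extend(movers)
    for x in migrants:
        random.choice(new_groups).append(x)
    return new_groups

groups = [[0.0] * GROUP_SIZE for _ in range(M_GROUPS)]   # start with zero investment
for t in range(2000):
    groups = generation(groups)
mean_inv = sum(x for g in groups for x in g) / max(1, sum(len(g) for g in groups))
print("mean investment after 2000 generations:", round(mean_inv, 2))
```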

The Figure 2 that they mention is below:

[Figure 2 from Killingback et al.: typical simulation results, showing investment levels rising over the generations.]

You can see that as the number of rounds (generations) increases, the amount of money that each actor puts in the middle increases until all of it is in there. This shows the increase in cooperative behavior over time.
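The Simpson's paradox they invoke is easy to see with a toy calculation of my own (the numbers below are illustrative, not taken from the paper, and use the same payoff rule and pot multiplier as the sketch above). Free-riders beat cooperators inside every single group, yet because most cooperators sit in cooperator-rich groups, the cooperators have the higher fitness when you average over the whole population:

```python
# A hand-rolled illustration of Simpson's paradox in the public goods game.

def payoffs(n_coop, n_defect, endowment=5.0, multiplier=3.0):
    """Per-capita payoff for cooperators (invest all) and defectors (invest none)."""
    n = n_coop + n_defect
    share = multiplier * n_coop * endowment / n
    return share, endowment + share   # (cooperator payoff, defector payoff)

group1 = (1, 7)   # one cooperator among seven defectors
group2 = (7, 1)   # seven cooperators with a single defector

c1, d1 = payoffs(*group1)   # cooperator 1.875  < defector 6.875
c2, d2 = payoffs(*group2)   # cooperator 13.125 < defector 18.125

# Averaged over the whole population, cooperators nevertheless come out ahead:
avg_coop = (1 * c1 + 7 * c2) / 8    # about 11.7
avg_def  = (7 * d1 + 1 * d2) / 8    # about 8.3
print(avg_coop, avg_def)
```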

This is a very interesting result because it doesn't require any of the traits that are generally thought to be necessary for cooperation to develop:

This model does not depend on kin selection, direct or indirect reciprocity, punishment, optional participation or trait-group selection. Since this mechanism depends only on population dynamics and requires no cognitive abilities on the part of the agents concerned, it potentially applies to organisms at all levels of complexity.

This strategy is not cognitive in nature. It doesn't require thinking about who to punish or being able to recognize who your brother is. Most of the other strategies do, and thus would be restricted to higher organisms. This strategy -- just hanging out with other cooperators -- can emerge in species at any level of complexity.

This is really not my area, so I don't know the technical definition of group selection. My understanding, however, is that group selection occurs when a trait becomes more prominent in a group even though it is deleterious to individual members in isolation. They basically say that this is what is happening.

I guess I don't know why they would deny that it is group selection.

I cite from the paper:
"It is also clear that our model is very different from classical group selection models (...) as selection only acts purely at the individual level in our models".

I guess it depends on what is classic and non-classic group selection.

By Julian Garcia (not verified) on 01 Jul 2006 #permalink