Over the last couple of decades there's been a pretty heated debate about which, if any, nonhuman animals possess a "theory of mind," that is, the ability to think about what others are thinking. Much of the research bearing on this debate has used false belief tasks. There are many variants, but the standard false belief task goes something like this. One experimenter puts something interesting (e.g., food or a toy) in one of two boxes while another experimenter and the subject (usually a child or a primate) watch. The observing experimenter then leaves the room, and while that experimenter is gone, the first experimenter takes the object out of the first box and places it in the other one. The first experimenter then asks (if the subject is a human child) or induces the subject to indicate which box the experimenter who left the room will think the object is in. If the subject answers with the box the object is now in, then it's clear he or she doesn't understand that others can have false beliefs; if he or she answers with the box the first experimenter originally placed it in, then he or she does understand that others can have false beliefs. Here's a summary of the logic behind the use of the false belief task, from Bloom and German (1):
Suppose you want to know whether a chimpanzee can reason about the mental states of others. As Dennett (1978) and others pointed out, it is not enough to demonstrate that individual A can predict the actions of individual B. In many cases, A can do so without an understanding of B's mental states, but by simply observing the actual state of the world. (Suppose A knows the chocolate is in the basket and observes B searching for food. A might expect B to look in the basket, not because A is attributing a belief to B, but because the chocolate actually is in the basket.) A more robust test involves predicting the behavior of another animal based on an inferred mental state that differs from reality – a false belief. This would show that the individual understands that it is the mental state, rather than the state of the world, that causes the action. (p. B27)
In short, if an animal understands that others can have false beliefs, then it has to understand that they have beliefs, period, and therefore has some understanding of the relationship between mental states (beliefs) and behavior.
Some nonhuman primates and most human children over the age of 4 can pass most false belief tests (children with autism spectrum disorders tend to fail false belief tests until they are much older). But there are problems with such tests that make it difficult to conclude that an individual (animal or human) has or lacks theory of mind abilities based on false belief task performance alone. Bloom and German, for example, have argued that false belief tasks require more than just theory of mind (e.g., sufficient attentional resources), which means that failing the task may be due to deficits in other areas. Furthermore, some primate species seem to have trouble with standard false-belief tasks but do well on false-belief tasks that involve deception, indicating that, at least for non-humans, the context of the test matters (2). On the other side, Povinelli (3) has argued that nonhuman primates can pass false belief tests (and other "theory of mind" tests) without actually possessing theory of mind. Instead, Povinelli believes that they are just very good at associating actions and states of the world with specific behaviors in others.
[Image: rhesus macaques from the Cayo Santiago population]
These objections highlight the importance of using alternative theory of mind tasks to supplement or replace false-belief tasks. One of the most interesting alternative tasks was used by Santos et al. with Rhesus macaques on Cayo Santiago (the monkeys in the picture above are from the same population)(4). The experiment was quite simple. Apparently, monkeys on the island, which have been used in experiments for several decades, are fond of human food, but are a bit wary of humans. So Santos et al. reasoned that they'd be more likely to take food from humans if they thought they could get away with it without being noticed. To take advantage of this situation, Santos et al. set up a scenario in which individual macaques could choose one of two containers that they'd seen an experimenter place food in (see the figure below, from p. 1177). The containers were visually identical, but one was quite noisy, while the other was noiseless (the experimenter shook them in front of the monkeys so that they'd know which was noisy and which wasn't).
After the experimenter showed the food, he turned around and placed his head between his knees so that it was clear he couldn't see the containers. More than half of the monkeys (14 out of 27) chose one of the containers in the allotted time, and of those, 86% chose the noiseless container. In a second experiment, the experimenter faced the containers the whole time, and this time the monkeys who chose one of the containers (16 out of 21) tended to pick the noisy one (69%).
These results suggest that the monkeys could tell what the experimenter could and could not see, and chose a container accordingly. When they knew the experimenter couldn't see the containers, they chose the noiseless one so that they could get the food without the human noticing, but when they knew the experimenter could see them approaching, they preferred the noisy one. This implies that the macaques understood how both visual and auditory information would affect the experimenter's behavior, and furthermore that auditory information would only affect his behavior if he couldn't see what the monkey was doing. It would be difficult to argue that the monkeys managed this through knowledge of associations between sensory information and behavior alone, particularly since, as Santos et al. note, they had never heard the bell sounds that the noisy container made. Instead, the macaques must have been reasoning about what was going on inside the experimenter's head as they approached the containers. In other words, they must have theory of mind abilities.
1. Bloom, P., & German, T.P. (2000). Two reasons to abandon the false belief task as a test of theory of mind. Cognition, 77, B25-B31.
2. Tomasello, M., & Call, J. (1997). Primate Cognition. Oxford: Oxford University Press.
3. Povinelli, D.J. (2004). Behind the ape's appearance: escaping anthropomorphism in the study of other minds. Daedalus: Journal of the American Academy of Arts and Sciences, Winter, 29-41.
4. Santos, L.R., Nissen, A.G., & Ferrugia, J.A. (2006). Rhesus monkeys, Macaca mulatta, know what others can and cannot hear. Animal Behaviour, 71, 1175-1181.
I think it's anthropomorphic to say that monkeys have theories. Also I'm not convinced that monkeys have a theory or any kind of idea of mind based on this evidence. Santos et al. show that rhesus monkeys associate sensory awareness with a human primate. It is not clear that the monkeys regard the connection between hearing and knowing as taking place in anything like a mind. We don't know that they are "reasoning" about "other minds" or using "representations about what others perceive." Obviously there is some mental activity going on. Probably they are projecting some kind of rhesus monkey self-image onto humans, so that would be a kind of representation, but we don't know that they're not representing the whole person rather than specific attributes or abilities, i.e., they may just be taking it for granted rather than thinking that humans can hear.
Well, over time, the word "theory" in "theory of mind" has come to be used rather loosely, and it's only anthropomorphic to the extent that it compares monkey folk psychology to human folk psychology.
Also, I think each of your alternative explanations is testable, as is the one that argues that they can reason from situations to behaviors and vice versa in a way that can't be done simply by knowledge of associations.