The "Illusion of Explanatory Depth": How Much Do We Know About What We Know?

There's nothing like having a curious child to make you aware of just how little you actually know about the world. Often (more often than I'd like to admit), my son will ask me a question about how something works, or why something happens the way it does, and I'll begin to answer, initially confident in my knowledge, only to discover that I'm entirely clueless. I'm then embarrassed by my ignorance of my own ignorance. This is the illusion of explanatory depth, and it's more common than you or I probably want to admit.

I'll get to the reasons why it's so common in a moment, but first, let's take a look at the research. In several studies, Rozenblit and Keil1 had participants complete a series of tasks designed to show that people's initial confidence in their explanatory knowledge drops significantly once they are asked to actually retrieve that knowledge. First, participants were given a list of objects (e.g., a speedometer and a sewing machine) and asked to rate their confidence in their knowledge of how those items worked. Next, they were asked to give detailed descriptions of how the objects worked, and then to rate their confidence in their knowledge of the objects again. If that wasn't enough to shake their confidence, they were then given several questions about the objects, and rated their knowledge again. Finally, they were given explanations of how the objects worked, and were asked to rate both their initial (i.e., pre-explanation) knowledge and their current (post-explanation) knowledge. Below is a graph presenting the data from two of the studies (Rozenblit and Keil's Figure 3). T1 represents the participants' mean initial ratings of their knowledge, T2-T4 show their mean ratings after completing each of the subsequent tasks (T4 is their final rating of their pre-explanation knowledge), and T5 represents the mean rating of post-explanation knowledge.

[Figure: Rozenblit and Keil's (2002) Figure 3, showing mean self-rated knowledge at T1 through T5 in two studies.]

As you can see, their initial ratings (T1) were significantly higher than their subsequent ratings, and their confidence in their explanatory knowledge rose again only after they had read a detailed explanation (T5). In other words, the participants were as ignorant of their own ignorance as I often am when my son asks me to explain things. Rozenblit and Keil used adult participants, but Mills and Keil showed that you can get similar effects with second and fourth graders, indicating that our lack of awareness of our lack of knowledge develops pretty early2.
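If it helps to see the shape of the paradigm spelled out, here's a toy sketch in Python. The numbers and participant labels are entirely invented for illustration (they are not the study's data); the sketch just makes the five rating points and the drop-then-rebound pattern in the figure concrete.

```python
# Toy sketch of the Rozenblit and Keil rating paradigm.
# All numbers are made up for illustration; they are NOT the study's data.
#
# T1: initial self-rating of knowledge
# T2: re-rating after writing out an explanation
# T3: re-rating after answering diagnostic questions
# T4: final re-rating of one's pre-explanation knowledge
# T5: rating after reading an expert explanation

ratings = {
    "participant_1": {"T1": 5, "T2": 3, "T3": 3, "T4": 2, "T5": 5},
    "participant_2": {"T1": 6, "T2": 4, "T3": 3, "T4": 3, "T5": 6},
    "participant_3": {"T1": 5, "T2": 4, "T3": 3, "T4": 3, "T5": 5},
}

# The signature of the illusion: mean ratings drop from T1 through T4,
# then rebound at T5 once real explanatory knowledge has been supplied.
for t in ("T1", "T2", "T3", "T4", "T5"):
    mean = sum(p[t] for p in ratings.values()) / len(ratings)
    print(f"{t}: mean rating = {mean:.2f}")
```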

After demonstrating the existence of an illusion of explanatory depth in adults and children, Keil and his colleagues set out to determine why it occurs. Using the same paradigm, Rozenblit and Keil tested for illusory knowledge of facts and stories, and found that participants' ratings of their own knowledge remained consistent over time. Mills and Keil found that second and fourth graders remained just as confident in their knowledge of procedures over time as well. Thus, the effect seems to be limited to explanations. Rozenblit and Keil also found that in adults, the illusion of explanatory depth was more pronounced for objects with more visible parts.

These findings led Rozenblit and Keil to suggest three factors that play a role in the illusion of explanatory depth:

  • "Confusing environmental support with representation": Often, when we need to think about how something works, we have it right in front of it, and can observe it. This is what Rozenblit and Keil refer to as "environmental support." They argue that we mistakenly believe the explanatory knowledge is in our heads because we can explain it when the object is right in front of us. Only when we're forced to explain it without the object in front of us do we realize how little we know about it. This factor would explain the finding that the illusion of explanatory depth is greater for objects with many visible parts.
  • "Levels of analysis confusion": Think for a moment about how your toilet works. If asked to explain this, you could just answer "you press down on the flusher, and the water drains, then fills up again." Or you could give an explanation involving the flusher being connected to the flapper, so that when you press the flusher, it lifts the flapper, causing water to flow out of the tank, and so on. Or you could describe the physics of flushing, in which a siphon is created and, once the water has flowed out, is broken, allowing water to flow back into the bowl and the tank. Each of those explanations occurs at a different level. Rozenblit and Keil argue that people tend to have knowledge at one level of explanation (e.g., pressing the flusher causing the water to drain and then fill up again"), and this causes them to mistakenly believe that they have knowledge at the other levels of explanation when they really don't. This explains why they don't exhibit the illusion of depth for facts and stories. Facts and stories generally only involve a few causal relations (some facts might not involve any) that can be described at one level of explanation, and thus it's more difficult to mistakenly believe we have explanatory knowledge that we don't actually have.
  • "Indeterminate end state": Because there are many different levels at which we can explain the functioning of many objects, it can be difficult to know when we have enough knowledge to explain how those objects work. Rozenblit and Keil argue that this may lead people to be overconfident in their knowledge of such objects. This also explains why story knowledge is easier to estimate. Stories have a beginning and an end, making it easier to determine when we know enough to explain them. I suspect, however, that if people were tested on more complex stories with multiple narrative levels, you might find an illusion of depth for stories as well, based on this factor and the previous one.

Keil and his colleagues have spent the last couple of years working out the practical and theoretical implications of these findings in several papers and book chapters. If you're interested, you can read some of them here, here, and here. I'm just going to talk about the implication that I find the most interesting: our reliance on the division of cognitive labor. As the explanation of our confusion of "environmental support" with internal representation implies, one reason we can get away with our illusions of explanatory depth is that when we're confronted with a problem that requires explanatory knowledge, we often have the information we need to form an explanation right in front of us, and therefore don't have to have it in our heads. Keil and his colleagues have also discussed something else that lets us get away with being ignorant of our own ignorance3: we can rely on the knowledge of others. I may think I know a lot about how computers work, but when my computer breaks down, I send it to an expert, because an expert really does know how my computer works (unless that "expert" works for Circuit City, as anyone who's ever sent their computer there probably knows). So my mistaken belief that I know how my computer works doesn't really hurt me.

In order to utilize this division of cognitive labor, we have to know who knows what. Lutz and Keil4 have shown that from a very young age, we're able to make inferences about people's specific knowledge from their general area of expertise. They had young children (3-5 years old) make inferences about the knowledge of two types of familiar experts, doctors and car mechanics. Three-, four-, and five-year-olds were able to make inferences about "stereotypical" knowledge, such as who would know more about fixing a broken arm vs. a flat tire, and four- and five-year-olds were able to make inferences about knowledge of "underlying principles," such as "who would know more about why plants need sunlight to grow?" vs. "who would know more about whether a ladder is strong enough for a person to climb?" (p. 1075). It's probably not a coincidence that we begin to demonstrate knowledge of expertise at about the same time that we begin to display the illusion of explanatory depth, because knowing who's an expert in what allows us to be less aware of what we don't know.

I probably don't need to tell you that this reliance on the division of cognitive labor is increasingly important in science. As our body of scientific knowledge grows, individual scientists are forced to become more and more specialized in their knowledge. Inevitably, the illusion of explanatory depth will plague scientists as well, but hopefully, when specialists arrive at a problem that requires knowledge lying outside their expertise, they will be able to recognize their ignorance and know whom they should consult to help solve it. I wouldn't be surprised, however, if it turns out that the illusion of explanatory depth leads many researchers down the wrong path because they think they understand something that lies outside their expertise when they don't. Thus the illusion of explanatory depth provides yet another reason for scientists to work on increasing interdisciplinary communication.

1Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-562.
2Mills, C.M., & Keil, F.C. (2004). Knowing the limits of one's understanding: The development of an awareness of an illusion of explanatory depth. Journal of Experimental Child Psychology, 87, 1-32.
3E.g., Keil, F.C. (2005). The cradle of categorization: Supporting fragile internal knowledge through commerce with culture and the world. In W.K. Ahn, R.L. Goldstone, B.C. Love, A. Markman, and P. Wolff (Eds.), Categorization Inside and Outside the Laboratory: Essays in Honor of Doug Medin, (pp. 289-302). Washington, D.C.: American Psychological Association.
4Lutz, D.R., & Keil, F.C. (2002). Early understanding of the division of cognitive labor. Child Development, 73, 1073-1084.

I've been thinking about the IED too. Wilson (2004) argues that part of the explanation for this illusion stems from our unthinking reliance upon the division of cognitive labor. Our reliance upon the expertise of others is so extensive and so automatic that we take ourselves to possess the expertise itself. I think it's possible to go slightly further than Wilson. In a sense, I think we do possess the relevant expertise: just as I really can perform five-digit multiplications, though I would be at a loss if I had to rely upon my unadorned brain, so I really do know what causes tides if I am able to rely upon Wikipedia to summon the explanation. Of course, that's not the sense of knowledge that's usually in question when we ask whether someone knows what causes tides, but I think it's a perfectly good sense.

Yeah, the extended mind view. I think that's really where the division of cognitive labor, and the reliance on external support, takes you. We mistake the expertise available to us and external support for internal representation because most of the time, we don't need to distinguish the external from the internal.

The 'illusion of explanatory depth' (and several other illusions and delusions that appear in human cognition) can, to some extent, be ameliorated (though not entirely removed) by applying to our discussions of complex issues a small extension of our conventional 'prose mode' of thought and discourse.

I call this the 'prose + structural graphics' mode of thought and discourse: the structural graphics extension to conventional prose is based on 'mathematical graph theory'. More information about prose + structural graphics is available at http://www.i-sum.com, and a useful prototype software package is available for free download on request (along with a small amount of free guidance). [One does NOT have to become a graph theory expert in order to understand and use 'prose + structural graphics'; simple logic will do.]

--- GSC
