Originally posted on the old blog on 4/3/05.
Self-Perpetuating Paradigms: How Scientists Deal With Unexpected Results
Previously, I discussed Kevin Dunbar's research on the use of analogy. However, Dunbar is better known in cognitive psychology for his in vivo work on scientific cognition. I'll get to the basics of that work, but first I want to say something about unexpected results and paradigms. The connections between all of these will be apparent in a moment.
Over at Universal Acid, Andrew reports on research findings described in Nature that are apparently irreconcilable with the "traditional Mendelian paradigm" in biology. I won't get into the findings themselves, because I know jack about genetics, and Andrew explains them fairly well in his post. What I find interesting are Andrew's observations about the treatment of unexpected results, or in this case, results that don't fit within the accepted paradigm (which is pretty much what "unexpected results" means in science). He writes:
[T]he uncertainties involved in actual experiments are so huge that there is no way you can make sense of anything without a paradigm. For anything beyond the most simple of experiments, you have to take a lot of things for granted. And the truth is that experiments fail so often that it is quite sensible to dismiss experiments that don't work as expected as unexplainable failure rather than a blow to the reigning paradigm, and to pursue more productive lines of research instead.
He then applies these insights to the irreconcilable results from the Nature article (note that he uses an analogy to his own work, which is important), writing:
This is the kind of bizarre finding that, if you are too stuck within a paradigm, you just toss out as some weird error. I used to work in a fruit fly lab, and we constantly worried about contamination of fly lines (i.e., if a stray fly sneaks its way in while you are transferring flies from one bottle to another). If you had a bottle full of fly mutants with white eyes (the wildtype is red), and suddenly one day some red eyes (or orange eyes, which is the heterozygous color) appeared, you'd think the stock got contaminated, and you'd just throw out the red-eyed flies (or, more likely, the entire stock).
Why do scientists handle unexpected results in this way? Why, when they get results that are inconsistent with the "traditional paradigm," are they so quick to assume (and, as Andrew notes, it is usually just an assumption) that experimenter error, or something equally theoretically innocuous, is to blame? The answer to that question is likely to be pretty complicated, touching on several aspects of the interplay between human cognition and scientific contexts. I won't attempt to provide a complete answer here (if I had a complete answer, I'd publish it for real, not in a blog post!), but I will provide a part of the answer, which can be summed up in one word, a word that happens to be one of my favorites: analogy.
But now I'm getting ahead of myself. Before I can say why analogy is a part (and likely a big part) of the answer to the question of why scientists treat unexpected results the way they do, I should first go back to Dunbar's work. Dunbar's in vivo method is really quite impressive. He initially chose molecular biology as the field to study, because, as he put it1,
Many of the brightest and most creative minds in science are attracted to this field, and molecular biology has now taken over the biological and medical sciences as the major way of theorizing and as a set of methodologies. As a consequence, the field of molecular biology is undergoing an immense period of scientific discovery and breakthroughs, making it an ideal domain within which to investigate the scientific discovery process. (p. 119)
After familiarizing himself with much of the literature, consulting experts in the field, and interviewing the members of several labs, he chose four to study extensively (he's since studied more than ten, in the U.S., Canada, and Italy). While most of the data he collected came from lab meetings, where scientists usually present research and discuss it, he also spent an unbelievable amount of time with the members of the labs, interviewing, observing, and just hanging out with them. Using the data he gathers in the in vivo settings, Dunbar develops hypotheses that he then tests using experimental methods, the designs of which are also influenced by his in vivo observations. Given the time and effort it took to do the in vivo studies, it's no wonder most of us lazy cognitive psychologists just spend our time running experiments in our own labs.
The most interesting (to my mind, and since I'm the one writing this, my mind is the only one that really counts) insights gained from Dunbar's research have been in the area of analogy. It's been widely recognized for centuries that analogical reasoning is important, even ubiquitous, in scientific thinking. Kepler famously used several analogies2 to arrive at his laws of planetary motion (for example, his analogy between light and motion3), and Rutherford's atom-solar system analogy has been used in science education (and in cognitive psychologists' papers on analogy) for decades. But until Dunbar came along, no one had systematically studied the use of analogy among scientists. Sure, there have been some "case studies," but scientists' own reports of their use of analogy (in their writings, e.g.) are pretty much worthless, because people are terrible at remembering the analogies they use. So Dunbar's work has been invaluable.
One of the ways in which Dunbar's work has helped to paint a picture of the scientific use of analogies is by showing under what circumstances scientists are likely to use analogies. Not only do scientists use analogies frequently (he coded 99 uses of analogy in 16 lab meetings4), but they tend to use them for the same purposes. Dunbar describes four typical uses of analogy: to formulate hypotheses, design an experiment, fix an experiment, or explain a result5. It's the last of these that I'll talk about here.
Unexpected findings are extremely common in scientific research. Anyone who's ever conducted actual research can tell you that at least as often as not, experiments "don't work," i.e., you don't get the results you were expecting. In fact, in the labs that Dunbar has observed, more than 50% of the experiments have yielded unexpected results6. And in almost every case Dunbar observed, scientists use analogies to try to explain these unexpected findings. The types of analogies that scientists use to explain unexpected findings differ depending on how many unexpected findings there are. In the case of unexpected findings from a single experiment, scientists tend to use "local analogies," or analogies to highly similar experiments. Dunbar writes (Dunbar, 2001):
Following an unexpected finding, the scientists frequently draw analogies to other experiments that have yielded similar results under similar conditions, often with the same organism... Using local analogies is the first type of analogical reasoning that scientists use when they obtain unexpected findings, and is an important part of dealing with such findings. Note that by making the analogy they also find a solution to the problem. For example, the analogy between two experiments with similar bands ["yes in my experiment I got a band that looked like that, I think it might be degradation...I did... and the band went away."]... led one scientist to use the same methods as the other scientist and the problem was solved. (p. 316)
When there is a series of unexpected findings, the types of analogies that scientists use change. The analogies used under these circumstances tend to involve source domains that are quite different from the target domain (the unexpected finding). In other words, scientists stop using local analogies. Dunbar writes (Dunbar, 2001):
In this situation [a series of unexpected results], they drew analogies to different types of mechanisms and models in other organisms rather than making analogies to the same organism. This also involves making analogies to research outside their lab. The scientists switched from using local analogies to more distant analogies, but still within the domain of biology. For example, a scientist working on a novel type of bacterium might say that "IF3 in E. coli works like this, maybe our gene is doing the same thing."
By now, you probably see where I am going with this. When scientists see unexpected results, their first instinct is to go back to experiments in which they received similar results (in most cases, also unexpected), and use the explanations from those experiments to explain the current results. If experimenter error is at play (as in the example that Dunbar gives in the first quoted paragraph above) in the previous experiment, then experimenter error will be used to explain the unexpected results of the current experiment. And even if we obtain a whole series of unexpected results, while our analogies will be to more (conceptually) distant research, our explanations are still likely to be based on the explanations given for previous findings. As is often the case, analogies are serving as schemas, or templates, from which we can derive explanations. And while in most cases this practice is very productive (as Andrew notes, somewhat sarcastically), it can also be pernicious. It can cause us to miss potential alternative explanations for unexpected results. Perhaps experimenter error was not the reason for the unexpected result (in this experiment, or even in the previous experiment that is used as the source in the analogy). Perhaps there's something really important going on, which isn't explained by the current dominant paradigm. But because of the way the human (and that means scientific) mind works, our first recourse is to look for answers in previous research, and previous research is almost always conducted within the dominant paradigm. Thus, our answers, even for unexpected findings, will tend to be consistent with the dominant paradigm.
So once again, we have an example of the schema-driven mind at work. The paradigm itself serves as a schema, determining what is expected (and therefore, what we're likely to find), but it also spawns little schemas, in the form of previous experiments and analogies to those experiments, which determine how we interpret unexpected (and potentially paradigm-inconsistent) results. In this way, paradigms and theories, like all schemas, are self-perpetuating (note that in a way this all sounds like talk of memes, and I'll just briefly mention here, and perhaps explain in more detail later, that I think memes are really just schemas, and explainable using ordinary cognitive mechanisms). Of course, we can eventually get beyond our schemas, and come up with new explanations, new theories, and new paradigms. The example that Andrew discusses in his post demonstrates this. The scientists kept getting their unexpected result, and in all likelihood, none of the analogies they made to previous findings held up under the scrutiny of further experimentation. So they had to step outside of their paradigm, outside of their schema, and come up with a new explanation. But as Andrew's example from his own research shows, this shedding of schemas is not easy to do, and thus in the grand scheme of things, it is quite rare.
1 Dunbar, K. (2001). What scientific thinking reveals about the nature of cognition. In K. Crowley, C.D. Schunn, & T. Okada (eds.), Designing for Science: Implications From Everyday, Classroom, and Professional Settings, pp. 115-140. Mahwah, NJ: Lawrence Erlbaum Associates.
2 Kepler once wrote, "I cherish more than anything else the Analogies, my most trustworthy masters. They know all the secrets of Nature, and they ought to be least neglected in Geometry."
3 "Let us suppose, then, as is highly probable, that motion is dispensed by the Sun in the same proportion as light. Now the ratio in which light spreading out from a center is weakened is stated by the opticians. For the amount of light in a small circle is the same as the amount of light or of the solar rays in the great one. Hence, as it is more concentrated in the small circle, and more thinly spread in the great one, the measure of this thinning out must be sought in the actual ratio of the circles, both for light and for the moving power." From Kepler, J. (1956/1981). Mysterium cosmographicum 1,11 (A. M. Duncan, Trans.; 2nd ed.). New York: Abaris Books. As quoted in Gentner, et al. (1997). Analogical reasoning and conceptual change: A case study of Johannes Kepler. The Journal of the Learning Sciences, 6(1), 3-40.
4 Dunbar, K. (1999). How scientists build models: In vivo science as a window on the scientific mind. In L. Magnani, N. Nersessian, & P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery.
6 Dunbar, K. (2001). The analogical paradox: Why analogy is so easy in naturalistic settings, yet so difficult in the psychological laboratory. In D. Gentner, K.J. Holyoak, & B. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science, pp. 313-334. Cambridge, MA: MIT Press. By the way, this is an excellent collection of essays, and I highly recommend it to anyone who's interested in the cognitive scientific study of analogy.
"Why do scientists handle unexpected results in this way? Why, when they get results that are inconsistent with the 'traditional paradigm,' are they so quick to assume (and, as Andrew notes, it is usually just an assumption) that experimenter error, or something equally theoretically innocuous, is to blame? The answer to that question is likely to be pretty complicated, touching on several aspects of the interplay between human cognition and scientific contexts."
In my own laboratory, I am indeed quick to assume that experimenter error or something else theoretically innocuous is the reason for any single unexpected result. The reason for this is that experimenter error or other "mess-ups" are exceedingly common. Even under the best of circumstances, the vast majority of experiments simply fail to provide interpretable results.
So when a single experimental result is inconsistent with current theoretical paradigms, it is *much* more likely--all else being equal--that the reason is experimental failure than it is that the theoretical paradigm requires revision. It is therefore perfectly rational to make experimental failure one's default assumption.
Good scientists, of course, keep an open mind even in the face of this assumption. It is when multiple experimental results independently seem inconsistent with a theoretical paradigm, that one begins to entertain thoughts of revision.
It is, of course, possible that this is all just irrelevant rationalization.