Over at DrugMonkey, PhysioProf has written a post on the relative merits of “correct” and “interesting”, at least as far as science is concerned. Quoth PhysioProf:
It is essential that one’s experiments be “correct” in the sense that performing the same experiment in the same way leads to the same result no matter when the experiment is performed or who performs it. In other words, the data need to be valid.
But it is not at all important that one’s interpretation of the data–from the standpoint of posing a hypothesis that is consistent with the data–turns out to be correct or not. All that matters is that the hypothesis that is posed be “interesting”, in the sense of pointing the way to further illuminating experiments.
I spend a lot of time with my trainees on this distinction, because some of them tend to be so afraid of being “wrong” in their interpretations that they effectively refuse to interpret their data at all, and their hypotheses are nothing more than restatements of the data themselves. This makes it easy to be “correct”, but impossible to think creatively about where to go next.
Some tend in the opposite direction, going on flights of fancy that are so unmoored from the data as to result in hypotheses that are also useless in leading to further experiments with a reasonable likelihood of yielding interpretable results.
I think this is a really good description of a central feature of scientific activity.
Scientists are trying to build reliable knowledge about the world (or about particular phenomena that are part of the world). To make their accounts “knowledge”, they need to be grounded in empirical data — observations of particular features of the world (either how things were unfolding on their own, or what happened in experimental set-ups). Empirical data are useful because they’re the kind of thing to which other scientists have access — indeed, the kind of thing other scientists can get for themselves, either by following the precise methodology you used (i.e., attempting to reproduce your results), or by conducting some related experiment on the same system.
The empirical data are the publicly accessible facts about the world we share. That other scientists can inspect our empirical data and generate their own gives us some assurance that we really are living in the same world. Living in the same world, we should be able to come to approximately the same empirical facts.
But, any given pile of empirical facts — even ones sufficiently reproducible that we’re happy to call them correct facts — is not sufficient to settle the matter of the precise nature of the world in which those facts were obtained. Theories are underdetermined by the data. Knowing what’s happened so far gives us clues to what might happen next (or to what might have happened instead under slightly different circumstances), but no firm guarantees. (The problem of induction has some real logical teeth.)
What you make of these empirical facts — the picture of the world you start to put together from these clues — can’t be the end of the process, at least not for a scientist. Any such picture, on its own, is your subjective interpretation of what the facts mean. And a scientist wants to pull this interpretation back into the arena where it can be examined objectively by a community of scientists.
In other words, the scientist is looking for an interpretation of the pile of data that both fits the data and could itself be tested against additional data. An “interesting” interpretation of the data will be one that, implicitly or explicitly, makes falsifiable claims and suggests further lines of experimentation (tractable experiments are especially nice) by which the interpretation can be tested. If you like, the connection of interpretations to existing data and to additional data we could go and get is what keeps the discourse “in bounds” for the scientists. Or, from the point of view of Sir Karl:
Popper has this picture of the scientific attitude that involves taking risks: making bold claims, then gathering all the evidence you can think of that might knock them down. If they stand up to your attempts to falsify them, the claims are still in play. But you keep that hard-headed attitude and keep your eyes open for further evidence that could falsify the claims. If you decide not to watch for such evidence — deciding, in effect, that because the claim hasn’t been falsified in however many attempts you’ve made to falsify it, it must be true — you’ve crossed the line to pseudo-science.
Being wrong about what the data mean is not a crime against science. Being unwilling to test your guesses about what the data might mean, however, is shirking your scientific duties. Since any worthwhile interpretation is going to need to be tested — by you and by your fellow scientists — getting a feel for drawing inferences that lend themselves to empirical probing is an important scientific competency. As well, making your peace with having new data blow your interpretation to bits — then picking yourself up and coming up with a new interpretation to test — is a valuable life skill.