What scientific explanation is

Rob Skipper has an excellent post at his blog entitled What Scientific Explanation Isn't. It's a good account of the DN (deductive-nomological) model of explanation offered by Carl Hempel, which has come in for some serious criticism of late.

By coincidence, this was a major issue at the two conferences I recently attended. I want to say some things about explanation based on those discussions, and offer a qualified defence of the DN model.

The standard story I was taught some two decades ago was that an explanation is a deduction, from a law or laws plus a set of initial conditions, of the thing to be explained. This is the DN model. Since then there have been a number of different accounts of explanation, which are well covered by the Stanford Encyclopedia article, but I want to focus on the role not of laws, but of generalisations. There are several objections to the DN model and its inductive-statistical sibling, not least being the symmetry of explanation: if the sun's elevation and the height of a flagpole explain the length of its shadow, then by the same deduction the length of the shadow and the sun's elevation "explain" the height of the flagpole, and this is clearly wrong (an old objection). So you need something more than inference to supply the explanation - you also need a causal relation.
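For concreteness, here is the DN schema in its standard Hempelian form (a textbook presentation, nothing novel):

    L1, L2, ..., Ln    (general laws)
    C1, C2, ..., Ck    (antecedent or initial conditions)
    ____________________________________
    E                  (the explanandum: the thing to be explained)

The explanation just is the sound deduction of E from the laws and conditions; and because deduction is indifferent to causal direction, nothing in the schema stops us swapping E with one of the conditions, which is exactly how the flagpole problem arises.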

But the trouble with causal relations, as any parent of a four year old can attest, is that there's no end to them: one can always ask "but why?". How can one isolate the important causal relations? It seems to me that the recent proposals of the Causal Mechanism crew have some merit: causal relations explain a phenomenon when they are invariant with respect to it. How to tell? Intervene. Ian Hacking, in his famous book Representing and Intervening, hinted at this - the way to test what is and what is not important is to include counterfactual claims. If the things doing the explaining (no, not the Lucys - the explanans) were not there, then the thing being explained (the explanandum) would not have occurred.
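Put schematically (this is my gloss on the interventionist idea, not Woodward's own formal apparatus):

    X figures in an explanation of Y if there is a possible intervention
    that changes the value of X and thereby changes the value of Y (or
    its probability), with the other causes of Y held fixed.

The counterfactual does the work: wiggle the putative cause and see whether the effect wiggles with it.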

This view has been taken up by James Woodward, author of the Stanford article, in a series of papers and a recent book; it is broadly known as the causal account of explanation. I won't go into it in any detail, because it's covered in that article, in Skipper's discussion, and in several papers listed in the references. So what can I add to this much-debated topic? It is this: I think there are times and cases when the DN account of explanation, or something very like it, is correct in science, and times when a causal account is correct, and that the two are obversely related. It all has to do with types and tokens.

One of the early objections to the DN account came from an example of Michael Scriven's: if I knock over an inkwell by hitting the table on which it stands with my knee, the explanation of the spill appears to involve no laws, yet it is chock full of causal chains. The problem is, which causal chains should we track when we give such an explanation? Or, to put it another way, what things do we include among the manipulations with which we might intervene to test the counterfactuals? It is my view that we rely on something very like laws to do this. From what I have read, so too does Woodward.

You ask me why my knee's knocking the table caused the table to move - I give an explanation in terms of the causal processes that make rigid objects move when inertial masses hit them: they impart kinetic energy. One version of the causal account (Salmon's) is that causation imparts a "mark" to the caused objects - such as the transfer of energy. This seems to me to require that we already understand the generalisations governing the types of objects that feature in the explanation. We might not have formal or explicit laws, but we certainly have some type descriptions that function more or less that way. Knees are relatively solid objects with mass, and that type of thing imparts energy to other things of that type when they collide. Ink is a fluid held in an open container by gravitational attraction, and such things are unstable when enough energy is imparted to them, and so on. All of these can be put in the form of an asymmetric, DN-style explanation.

At the ecology conference, the question arose of when mathematical explanations apply, in contrast to the "individual-based models" (IBMs) that track single organisms through entire ecosystems. Mark Colyvan offered the claim that mathematical models have explanatory force. Clearly they are nothing much like causal stories, but equally clearly they don't apply to real systems unless the variables of the model are filled in with types of causal objects. A formal model explains why the variables covary, as in the Ideal Gas Law; but it does not explain any real system. A real-world interpretation, in which the variables represent physical, causal properties, does. Many things - economic systems, for example - might satisfy the dynamics of the Ideal Gas Law, but the explanation only describes how a system whose causal types can be fitted into the model will behave. The real story for that system is about how the traders or molecules, and all the other physical objects that make up the system, behave causally (and about how, when they don't, the model fails to apply: real gas molecules are not the perfectly elastic particles of the model, and traders aren't rational egoists).
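For reference, the Ideal Gas Law relates its variables like so:

    PV = nRT

where P is pressure, V is volume, n is the amount of gas (in moles), R is the gas constant, and T is absolute temperature. The formal model fixes only how these quantities must covary; it is silent on what realises P, V and T, which is why an economy could, in principle, instantiate the same dynamics.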

Likewise, the IBMs will only give an account of that one ecosystem, and only when the other factors either cancel out or fail to change the way the system behaves, so that the causal story is all that matters. To generalise an IBM explanation, of course, you have to generalise the types - which you have already done by excluding all the other physical factors of that system. In short, by limiting yourself to a particular series, domain, and case, you already know many of the generalisations to be applied.

It's my view that we do both of these. To be sure, the classical DN model was overly idealised and lacked causal direction, so we need to be able to apply causal relations to block the symmetry. But equally, to explain things in terms of a particular set of events is to presuppose some of the generalisations that could figure in a modified or limited DN account. And each explanation will be more or less general, and hence use broader or narrower generalisations, in its syllogism of explanation.

Consider evolution by natural selection (EBNS). As a formal model it is an analytic statement: it says nothing about the sorts of objects that can properly be fitted into an EBNS explanation. It is empirically vapid, though its dynamics are pretty unintuitive once you start to work out its implications. But each case of EBNS involves a particular set of causes - the ability to detoxify a fungus used as food, or the coloration of a butterfly's wings fooling a predator. You can give a causal explanation, but you will implicitly appeal to generalisations when you do (such as "birds have such-and-such visual acuity and the ability to perceive such-and-so patterns"). Or you can give a formal explanation, but then you implicitly appeal to some rules of causal interpretation to fit the actual particulars into the equation and its antecedent conditions.
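To see the analyticity, take one minimal textbook form of the selection dynamics (standard population-genetic notation, nothing specific to these cases):

    p' = p(wA / w̄)

where p is the frequency of a type, wA its fitness, and w̄ the mean fitness of the population. The recursion holds of anything that satisfies it - detoxifying enzymes, wing patterns, firms in a market - which is just the empirical vapidity, and the generality, noted above.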

It strikes me this is a general point about scientific explanations (and probably also extends to ordinary explanations). We always have some types into which we can fit the tokens that are the empirical foundation of the explananda, and we always have some tokens that call for an explanation in terms of general types and their properties. The DN intuition is real and valid - explanation involves an appeal to general types. They may be very limited or they may be very general - I believe that in physics the two types of explanation coincide, but in the rest of science, particularly the special sciences like biology, generalisations are always fairly limited in their scope.

And, since I'm well outside my area of competency now, let me go on. The role of explanations is at least partly to identify the things that deviate from the expected dynamics of each system, so that we can go looking for anomalies that call for further explanation. This is an implication of a view of science I hold that many don't - the so-called semantic conception of theories, in which a theory is a formal system together with interpretations of its variables (the semantic element). Some other time, I'll explore this here...

Oops: references:

Hacking, Ian (1983), Representing and intervening: introductory topics in the philosophy of natural science. Cambridge, UK: Cambridge University Press.
Mayes, G. Randolph (2006), Theories of explanation. Internet Encyclopedia of Philosophy.
Salmon, Wesley C. (1984), Scientific explanation and the causal structure of the world. Princeton, NJ: Princeton University Press.
------ (1998), Causality and explanation. New York: Oxford University Press.
Woodward, James (2000), "Explanation and invariance in the special sciences", Br J Philos Sci 51 (2): 197-254.
------ (2003), Making things happen: a theory of causal explanation. New York: Oxford University Press.
------ (2003), Scientific explanation. Stanford Encyclopedia of Philosophy.
------ (draft), Counterfactuals and causal explanation [Microsoft Word].

Comments

One problem with discussions of evolution in particular is that most of the explanations are post facto. Only in the simplest cases, such as antibiotic or pesticide resistance, can future possibilities be enumerated well enough for prediction or experiment. Even the post facto explanations can suffer from the fact that intermediate stages may not be available or obvious. And then of course, some features may be truly accidental -- or, they may represent adaptations to prior circumstances which are not currently present, or which have since been mooted by further developments elsewhere. And, of course, nature is hardly obligated to respect our categories or classifications....

By David Harmon on 09 Jul 2006

I think this is a problem of the complexity of the applicable generalisations and the number of possible antecedent conditions; or, if you want a causal account, of the variance of the various causal chains.

History is like this in general, I think.

By John Wilkins on 09 Jul 2006

I'm currently reading a rather relevant (and interesting) book titled Insights of Genius, by Arthur I. Miller. The subtitle is "Imagery and Creativity in Science and Art". (ISBN 0-387-94671-3, published by Copernicus/Springer-Verlag.) The first few chapters (I'm only on Chapter 4 of 10) discuss the developments and interchanges in physics over the first quarter of the 20th century, with considerable discussion of modes of thought and how social factors affected the development of the various theories. Certainly worth checking out!

By David Harmon on 11 Jul 2006

It seems to me that the counter-examples to the DN criteria for "scientific explanation" show that some things which meet the criteria are not scientific explanations, but do not show the converse. Is one permitted to say that all explanations meet the DN criteria?

My interest is in showing that Intelligent Design is not an explanation. Would it be adequate to that end to show that ID does not meet the DN criteria?