In an interesting post, *Think Gene* poses what they call “the inherent problem” of scientific theories:

The inherent problem of scientific theories is that there exists an infinite equally valid explanations. Why? Because unlike in mathematics, we never have perfect information in science. …

OK, so our world understanding improves as we verify models, like if the Large Hadron Collider finds the Higgs? right? Theoretically, no. An infinite number of theories that are just as “probable” as the others still exist to be tested. All that was done was eliminate some of the theories. Subtracting anything from infinity is still infinity.

This is an interesting question. Here are my conjectures.

I don’t think that the question here has much to do with imperfect information. Even if we had perfect information, there would still be a very large (infinite? I suppose so, if the data can be continuous) number of compatible models. So, how do we make progress?

I think that while we may be left with an infinite, or at least Very Large, number of possible hypotheses, we still make progress. Here is how:

Suppose at *t* we have a set of possible models that are consistent with the data. Then we make a finer measurement (say, of *c*), with a smaller uncertainty. The volume of possible values has been reduced. We now have a more accurate and precise value for the speed of light. How is that not progress? Okay, we may have a *formal* infinity of possible models, but if we take the volume of the metric space of alternatives at *t* to be 1, then at *t*+*n* that volume is reduced. Even if the remaining volume is infinitely divisible, or there are an infinite number of possible models within that metric space, we are less uncertain about the speed of light than we were before.

It doesn’t matter that we can come up with an infinite number of models for the remaining metric volume. We have reduced the volume in which we must search for a solution. And to a certain degree of precision, we have a result. At one time we might have thought light travelled instantaneously. Learning that it has a speed tells us something, even if we cannot accurately measure it at first. Learning that it is around 300,000 km/s tells us something (and something we can *use* in telecommunications). Learning that it is invariant with respect to the speed of the transmitter tells us something, and so on. While this may leave an infinite number of possible metrics at infinite resolution, at ordinary scales it is a definite outcome.
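The interval-narrowing idea can be put as a toy sketch. To be clear, nothing here is anyone’s actual method: the hypothesis space for *c* is crudely modelled as a single interval of candidate values, and the “measurements” below are invented for illustration.

```python
# Toy illustration: finer measurements shrink the volume of hypothesis
# space, even though infinitely many candidate values always remain.
# All intervals here are invented for illustration.

def intersect(a, b):
    """Intersect two closed intervals (lo, hi); assumes they overlap."""
    return (max(a[0], b[0]), min(a[1], b[1]))

def volume(interval):
    """The measure (length) of an interval of candidate values."""
    return interval[1] - interval[0]

# Broad initial range of candidate values for c, in km/s.
hypotheses = (0.0, 1_000_000.0)

# Successive hypothetical measurements, each with smaller uncertainty.
measurements = [(200_000.0, 400_000.0),
                (290_000.0, 310_000.0),
                (299_000.0, 300_500.0)]

for m in measurements:
    hypotheses = intersect(hypotheses, m)
    print(f"remaining interval: {hypotheses}, volume: {volume(hypotheses)}")

# Each step still leaves uncountably many candidate values inside the
# interval, but the volume we must search has strictly decreased.
```

The point of the sketch is that “subtracting from infinity” is the wrong measure: the cardinality of the remaining set never changes, but its volume does, and it is the volume that tracks our uncertainty.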

Explanation is, in my view, still very like the old deductive-nomological account, at least in most sciences. If a model delivers, reliably and consistently and even predictively, results that are close enough to the observed results, depending on the resolution of the assay techniques – that is, if the model hits the right volume of metric space – then we can say the model is explanatory. Yes, there may be an infinite number of alternative models – that is pretty well guaranteed by the mathematics of models – but science, like evolution, doesn’t consider *possible* competitors, only *actual* ones, and that set is greatly restricted by the mere fact that individuals cannot come up with infinite numbers of these models. Science explains relative to competing approaches, not against all possible models.

In a way, this is like trying to count the possible bald men in that doorway; there are an infinite number of possible bald men, but all we can *count* are the actual observed bald men. As assays are precisified (a horrible word, like “incentivised”, needed neither by logic nor English), so too the models that compete must either hit the volume of that smaller metric space or at worst be not inconsistent with it. And that is progress.

So the inherent problem is not so inherent. It is, I think, a trick of the light. If you introduce infinities into any historical process, absurdities result. Yes, logically we can generate an infinite number of slightly different or radically different models that are consistent with the data so far. But logical competitors are not real competitors. If we have several equally empirically adequate models then we seek to find a way to discriminate between them. If one is consistently better, then it gets adopted. This applies also if the model is merely semantic (like “continental plates move around”), awaiting metrication.
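The contrast between logical and actual competitors can be sketched in the same toy spirit: take the handful of models that have actually been proposed, score each against the data at the current assay resolution, and keep the survivors. The model names, data points, and tolerance below are all invented.

```python
# Toy sketch: science discriminates among actual competitors, not all
# possible ones. Models, observations, and tolerance are invented.

observations = [(1.0, 2.05), (2.0, 3.95), (3.0, 6.02)]  # (x, y) pairs

# A few actual competing models, each a named function of x.
models = {
    "linear-2x": lambda x: 2.0 * x,
    "quadratic": lambda x: x * x,
    "affine":    lambda x: 2.0 * x + 0.5,
}

def consistent(model, data, tolerance):
    """A model survives if every prediction lands within tolerance."""
    return all(abs(model(x) - y) <= tolerance for x, y in data)

tolerance = 0.1  # resolution of our hypothetical assay
survivors = [name for name, f in models.items()
             if consistent(f, observations, tolerance)]
print(survivors)
```

Tightening the tolerance plays the role of a precisified assay: a model that survived at coarse resolution may be eliminated at a finer one, and only the finitely many models anyone has actually proposed ever enter the competition.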

The problem known as the *Pessimistic Meta-Induction* (or the Pessimistic Induction – PI) goes like this:

Every scientific theory of the past has been proven wrong. By induction, every scientific theory we know now will also be proven wrong. Therefore we know nothing [Or, therefore we cannot believe our best scientific theories].

Is the PI worrisome, or is there a way around it? The PI was first proposed by Henri Poincaré at the turn of the twentieth century. It has become a major argument against scientific realism – the view that the objects posited by the theories of science can be said to exist. Antirealists hold that if one is a scientific realist, one must hold that our best theories refer to real things. But this means either that past scientific theories failed to refer, in which case there’s no sense in which our theories are getting more accurate about the world, or that they did refer but were wrong about the details, in which case you end up with a “metaphysical zoo” in which, for example, dephlogisticated air refers to oxygen. Either way, the scientific theories of the past were referentially inadequate, and we have no confidence that our present theories are any better.

There’s a lot of literature on this, but I am moved to comment by P. D. Magnus, a blogger who has put up a draft paper on the topic.

To start with, I find arguments about the reality of theories burdensome. If one holds that one knows anything about the world, then what one knows is *real enough* for ordinary purposes. Antirealism and scientific realism are, I think, extremes from an epistemic perspective. But the issue here is whether we *do* know anything from the success of scientific theories. Or, to put it another way, whether the success of science is good enough to argue that the ontology of scientific theories – the things they posit to exist – is correct.

This resolves to a question of the reference of theoretical objects, and whether theories that are eliminated make the remaining players likely to be true. The inherent problem is therefore the same issue as the PI. How can we take cognitive heart from our best theories (for simplicity here I’m equating theories with models)? The answer is that we have *learned* things, things that are true no matter what subsequent models or theories we might come to adopt. No cosmology that requires the pre-Cartesian universe, with the stars as points of light on a sphere, is ever going to be viable again. It’s been ruled out of contention. We *know* this, come what may. We know that inheritance involves molecules rather than an *élan vital*. We know that modern species did not come into existence magically, without prior ancestral species. We know that objects are composed of atoms, and not some undifferentiated goop, and so on. Nothing that contradicts all or most of these knowledge items we now possess is going to be (rationally) adopted by science ever again. As Koestler overstated it, we can add to our knowledge but not subtract from it. [It’s not quite right – we can find mistakes, and we can have periods of loss of knowledge, but it’ll do for this context.]

So I will once again make a plea for Hull’s evolutionary view of science. As theories, along with techniques, protocols, and schools of thought, compete for attention and resources, those that are more empirically adequate will tend to be adopted, reducing the space of possible models. Who cares if it’s still infinitely divisible? We know more than we did.