In response to this blast from the past about Kuhnian scientific revolutions, SteveG has an interesting discussion about the inadequacy of Popperian falsification for understanding paradigm shifts, or, to use Imre Lakatos's phrase, "research programmes" (italics mine):
Imre Lakatos was a student of Popper's who also found certain things about Kuhn's view deeply attractive. He realized the problems with the use of falsifiability of individual hypotheses as a criterion of demarcation for science that arose from Kuhn's insights, but he also saw one of the glaring problems with Kuhn's system. If a paradigm is a worldview and defines the questions, the means of answering them, and what counts as acceptable answers, then all of rationality resides within the paradigm. As such, there can never be good reason to move from one paradigm to another, as reasons only make sense within a paradigm. There is no way to comparison-shop for paradigms, and so a paradigm shift is akin to religious conversion.
Lakatos used Popper to solve this problem in Kuhn. Popper pointed out that falsified propositions could be saved by the use of ad hoc hypotheses, and he ruled such moves out as not allowable. For Kuhn, they are allowable. Lakatos' insight was to reformulate Kuhn so that while they are permitted, they are a liability to theory acceptance. A research programme (as he renamed paradigms) could be saved by tweaking some other part of the theory, but when the tweak limits the theory's relative testability (making it less falsifiable, in a sense), the programme becomes "degenerate." When the research programme is able to explain more and more without ad hoc modifications, it is seen to be progressive. Kuhn is right (and Popper wrong) that you are never forced to rule out any theory; it can always be saved from problematic data and still be scientific. But Popper was right (and Kuhn wrong) that the ad hoc manner of saving it comes at a rational price.
As such, when we look at Intelligent Design and Darwinian evolution, we have two research programmes that can be maintained regardless of the data. But it happens that ID is quite degenerate, requiring all kinds of patches that do not increase its independent testability to account for observable phenomena. Evolutionary theory, on the other hand, is an unbelievably progressive research programme that accounts for a staggering amount of data, ranging from macro-ecological facts to micro-level genetic facts to geological facts. Darwinian evolution is testable in many, many ways, and in the overwhelming majority of them it easily accounts for observations. Are there anomalies? Of course. Every theory has anomalies. Will some of them be resolved with the addition of facts now unknown? Sure. Will others force us to rethink parts of the theory as it is now accepted? No doubt. Are there some that will cause the entire research programme to become degenerate and make it less than rational to cling to? Possible, but I'm a better bet to win the Tour de France next year.
One minor quibble is that I don't think ID requires "all kinds of patches"--it relies on one fundamentally unknowable patch: the mode of action of a Designer. In that sense, ID cannot be addressed within the research programme of science as a whole (as opposed to evolutionary biology).
This allows me to stumble into one of my philosophy-of-biology pet peeves: despite the popular acceptance of Popperian falsification, much of science actually uses a likelihood framework: hypotheses, big or small, are compared to one another to determine which is the most likely. When examined from this perspective, ID still fails: I can't place a likelihood estimate on the actions of an Intelligent Designer. By comparison, if we were to find that, overwhelmingly, most of the diversity observed in natural populations is not due to natural selection (i.e., selection explains only one in thousands of cases studied), then the theory of evolution by natural selection, while not 'falsified', does a very poor job of explaining natural diversity and is an unlikely explanation.
Suffice it to say, natural selection is pretty likely.....
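To make the likelihood framework concrete, here's a minimal sketch with entirely made-up numbers (the counts, the candidate rates, and the binomial setup are all illustrative assumptions, not real data): two hypotheses about how often selection drives observed diversity are compared by how likely they make the same observation.

```python
from math import comb, log

def binomial_likelihood(p, k, n):
    """Likelihood of observing k selection-driven cases out of n, given rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 70 of 100 studied traits show signatures of selection.
k, n = 70, 100

# H1: selection explains most diversity (rate ~0.7).
# H2: selection is vanishingly rare (rate ~0.01, "one in thousands").
L1 = binomial_likelihood(0.70, k, n)
L2 = binomial_likelihood(0.01, k, n)

# Log-likelihood ratio: large positive values strongly favor H1.
llr = log(L1) - log(L2)
print(f"log-likelihood ratio = {llr:.1f}")
```

The point is that this comparison requires each hypothesis to assign a definite probability to the data--which is exactly what can't be done for the actions of a Designer.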
Both the above post and SteveG's were interesting to me. Maybe there is a difference in educational programs that makes the English-speaking world so interested in such questions. Where I come from you don't need to take courses in the history and philosophy of science to become a scientist, more's the pity.
SteveG's account of Kuhn, Popper and Lakatos is excellent. But as usual, my reaction to attempts to explain what Kuhn describes is that it sounds simply like how new theories develop.
Even on small scales, new theories will "bring an entire worldview" with "foundational propositional beliefs", defining questions, methods and answers. I honestly don't see that a qualitative difference has been established.
As testability is important for theories, I feel more at home with Popper, who at least established a model for how it works (falsification). And here I think both SteveG and this post make the mistake of reducing testability to naive falsification, the idea that every statement of a theory must be predictive and testable.
To reject false theories we need at least one prediction, and to replace an old theory the new one must either be more parsimonious or make at least one new prediction. So AFAIU testability is necessary for picking theories but not sufficient; it isn't enough on its own for picking ultimately viable theories.
And with this view of testability, Kuhn and Lakatos become incomprehensible to me. If a theory is rejected (or a parameter range is ruled out) and is then proposed in a modified form, it isn't "saved" but new.
What saves us from bloviating theories is that comparably predictive theories are chosen by parsimony. In analogy with probabilistic modeling, parsimony prevents models from becoming "overtrained" on data: explaining old data well by using many parameters and ad hoc fixes, but crashing catastrophically on new data.
This circuitous argument leads back to the observation that likelihoods are used in (proposing and) evaluating theories. With Bayesian methods one can do that and also compare parsimony, as is done for cosmological models and cladograms.
One can even use Bayesian probabilities to do so. And repeatedly adding new data and arriving at much the same parsimonious model isn't a quantifiable test, but it comes close, IMO.
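One rough way to put numbers on this trade-off between fit and parsimony is the Bayesian Information Criterion, an approximation to Bayesian model comparison that penalizes free parameters. The sketch below uses invented log-likelihoods and parameter counts purely for illustration:

```python
from math import log

def bic(log_likelihood, n_params, n_data):
    """Bayesian Information Criterion: lower is better.
    The n_params * log(n_data) term is the parsimony penalty."""
    return -2 * log_likelihood + n_params * log(n_data)

n_data = 100

# Hypothetical fits: a lean theory vs. one "patched" with many ad hoc parameters.
lean    = bic(log_likelihood=-120.0, n_params=3,  n_data=n_data)
patched = bic(log_likelihood=-118.0, n_params=15, n_data=n_data)

print(f"lean BIC = {lean:.1f}, patched BIC = {patched:.1f}")
# The patched theory fits the old data slightly better, but its extra
# knobs cost more than the improvement in fit buys.
```

This is essentially the "overtrained on data" worry made quantitative: ad hoc parameters buy fit to old observations at the price of expected failure on new ones.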
Einstein once said that the whole of science is just a refinement of everyday thinking.
Torbjörn Larsson wrote: "Maybe there is a difference in educational programs that makes the English-speaking world so interested in such questions. Where I come from you don't need to take courses in the history and philosophy of science to become a scientist, more's the pity."
It's not really a difference in educational programs--very few scientists in the USA know much about history and philosophy of science, either. And it doesn't really hold them back. Intelligent Design people like to get into it a lot precisely because scientists aren't as comfortable discussing such things, and they think they can "score points" by appearing smarter than the scientists about "big picture ideas." But science, like art, exists independently of the theories used to explain its creations after the fact.
Iain wrote: "It's not really a difference in educational programs--very few scientists in the USA know much about history and philosophy of science, either. And it doesn't really hold them back. Intelligent Design people like to get into it a lot precisely because scientists aren't as comfortable discussing such things, and they think they can "score points" by appearing smarter than the scientists about "big picture ideas." But science, like art, exists independently of the theories used to explain its creations after the fact."
My degree is in political science (if it can be called that). That being said, I have had above-average exposure to science classes (I switched from Chemical Engineering) when compared to most of my fellow social scientists. My exposure to Popper, Kuhn and Lakatos comes from having read "What Is This Thing Called Science?" while taking a history and philosophy of science course during undergrad.
I point this out because I think it is important to note that there is a real loss when science majors are not given at least a cursory introduction to the philosophy of science. Governments often look to experts to bridge the gap between scientific theories and public policy. This philosophical knowledge would allow scientists to frame their findings within a rational framework more familiar to those trained in social science, law and economics. This might really help in certain controversial debates, such as what policies to adopt in the face of climate change or the above-mentioned decision whether to teach intelligent design in science classes.
Perhaps it need not be stressed in undergraduate work, but graduate and doctoral candidates ought to be conversant in the philosophical descriptions of what it is that they do. Plus, more knowledge is hardly ever a bad thing. My $0.02.
I don't recall if it was in "Structure" or a later work, but Kuhn discussed two other criteria for deciding between paradigms independent of falsification:
Elegance: in the sense that there are fewer independent "prime causes" (my wording) and
Opportunity: for original research.
As applied to ID, elegance would probably be perceived very differently: proponents see a single cause for it all where scientists see a separate cause for each intervention or design decision.
Steven Pinker has an interesting discussion of the neural primitive(s) of perceived causation (my wording, I think) in his new book The Stuff of Thought. Highly relevant to Kuhnian paradigm analysis, IMO.
My internal model of a Kuhnian paradigm is sort of an (n+1)-dimensional space (with large n), where each of the n dimensions represents a specific question (usually with a few discrete answers, such as yes/no). For each possible point in this n-dimensional space, assign a probability of "truth" (i.e. of correctly representing reality), which is mapped onto the +1 dimension.
The result is a map that sort of resembles the "fitness maps" used in evolutionary theory. A theory (or hypothesis) would represent a single value for one dimension, with perhaps a few tweaked values for related dimensions (questions). If the result corresponds with experiment so as to improve the overall probability of "truth" (relative to an implied or explicit alternative), it can be accepted.
A paradigm shift would involve changing a large number of these values simultaneously, usually along with adding and removing a number of questions/dimensions from the space. You could never get to the new paradigm from the old one question at a time without crossing a very deep "truth probability valley". That's why the paradigm shift implies a revolution.
Note that in this model there's no hard and fast distinction between a new theory and a paradigm shift, new theories can be more or less revolutionary depending on how many related questions have to be "re-answered" simultaneously.
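This landscape picture can be made concrete with a toy simulation (everything here--the six questions, the particular truth-probability values--is an invented illustration, not a claim about any real paradigm): greedy one-question-at-a-time revision gets stuck on the old paradigm's local peak, while the global peak can only be reached by flipping many answers at once.

```python
import itertools

N = 6  # six yes/no questions; a configuration is a tuple of 0/1 answers

def truth_prob(config):
    # Toy landscape: the old paradigm (all 0s) is a modest local peak,
    # the new paradigm (all 1s) is the global peak, and every mixed
    # configuration lies in a deep "truth-probability valley".
    ones = sum(config)
    if ones == 0:
        return 0.6          # old paradigm
    if ones == N:
        return 0.9          # new paradigm
    return 0.1              # the valley between

def greedy_step(config):
    """Re-answer the single question that most improves truth_prob, if any."""
    best, best_p = config, truth_prob(config)
    for i in range(N):
        neighbor = config[:i] + (1 - config[i],) + config[i + 1:]
        if truth_prob(neighbor) > best_p:
            best, best_p = neighbor, truth_prob(neighbor)
    return best

old = (0,) * N
# One-question-at-a-time revision never leaves the old paradigm:
assert greedy_step(old) == old
# The global peak is elsewhere, reachable only by a many-answer "revolution":
best = max(itertools.product((0, 1), repeat=N), key=truth_prob)
print(best)  # (1, 1, 1, 1, 1, 1)
```

The sharper the valley, the more the shift looks like a revolution rather than incremental revision--which matches the observation that how "revolutionary" a new theory is depends on how many answers must change simultaneously.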
As for ID, I'm not very well read on it, but doesn't it sort of assume that the whole thing was started up in a specific state, with the human design (etc.) built into the starting conditions, and then allowed to move forward in a clockwork, predetermined manner?
That's not consistent with uncertainty theory, is it? You would have to assume that all the mutations necessary for selection to produce humans and all their ancestors going back to the first chordates (or eumetazoans, or eukaryotes) have a reasonable probability of happening in the relevant populations at the appropriate times.
But as the advances in Evo-Devo are suggesting, many of these mutations may be homeotic, occurring at a point in the developmental pathway where they affect many features at once. I would suppose the probability that any such mutation would be beneficial would be far lower than in a one-function mutation, while the number of possible mutations that could occur would be higher.
Thus if we "re-ran the tape" there is a high likelihood that different beneficial homeotic mutations would occur at different times in different populations. This would render the whole evolutionary scenario "unreproducible", and thus it couldn't be predetermined.
Wouldn't this falsify "intelligent design"?
I've never quite understood why people see Popper and Kuhn as being in opposition. Popper is essentially addressing the logical basis of scientific discovery, whereas Kuhn does not address why, or even whether, it works--he merely describes how it is done in practice. It remains possible, then, that the success of scientific investigation is attributable to the extent to which the behavior described by Kuhn results, however indirectly and imperfectly, in the outcome required by Popper--rejection of theories whose predictions are disconfirmed by experiment. Likelihood-based decision making certainly provides one mechanism whereby this could occur.
But it has always seemed to me that Popper failed to prove that scientific investigation under his paradigm actually converges on the truth, because he did not show that the universe of possible hypotheses is finite. If it is not, then rejection of any number of hypotheses does not necessarily move us any closer to the truth.