Theories with the fewest assumptions are often preferred to those positing more, a heuristic known as “Occam’s razor.” This kind of argument has been used on both sides of the creationism vs. evolution debate (is natural selection or divine creation the more parsimonious theory?) and in at least one reductio ad absurdum argument against religion. Simple theories have many advantages: they are often falsifiable, they motivate clear predictions, and they can be easily communicated and widely understood.
But there are numerous reasons to suspect that this simple “theory of theories” is itself fundamentally misguided. Nowhere is this more apparent than in physics, the science attempting to uncover the fundamental laws giving rise to reality. The history of physics is like a trip down the rabbit hole: the elegant simplicity of Newtonian physics has been incrementally replaced by more and more complex theories. At the time of writing, this has culminated in M-theory, which posits no fewer than 10 dimensions of space and unobservably small “strings” as the fundamental building blocks of reality. It seems safe to assume that the fundamental laws of reality will be more complex still, if we can discover them at all.
So where did Occam’s Razor go wrong?
Occam’s Razor is actually a vestigial remnant of medieval science. It is literally a historical artifact: William of Ockham employed the principle in his 14th-century work on divine omnipotence and other topics “resistant” to scientific methods. The continuing use of parsimony in modern science is an atavistic practice, equivalent to a cardiologist resorting to bloodletting when heart medication doesn’t work.
And it is in the life sciences where Occam’s razor cuts most sharply in the wrong direction, for at least three reasons.
1) Life itself is a fascinating example of nature’s penchant for complexity. If parsimony applies anywhere, it is not here.
2) Evolution doesn’t design organisms as an engineer might; instead, organisms carry their evolutionary history along with them, advantages and disadvantages alike (your appendix is the price you pay for all your inherited immunity to disease). Life thus appears to result from a cascading “complexifying” process, and an understanding of organisms at the macroscale will be anything but simple.
3) We know that even the simplest rules of life (think of Conway’s Game of Life; a minimal sketch of its rules follows this list) can give rise to intractable complexity. Unless you’re a biophysicist, the mechanisms at your preferred level of analysis are likely to be incredibly heterogeneous and complex, even at their simplest.
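Assuming the “simplest rules of life” above refer to Conway’s Game of Life (the original post embedded an interactive demo), here is a minimal sketch of just how simple those rules are; the `step` function and the starting pattern below are purely illustrative:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation. `live_cells` is a set of (x, y) tuples."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire "theory of life": a cell is alive next generation iff it has
    # exactly 3 live neighbors, or 2 live neighbors and is already alive.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# The R-pentomino: five live cells whose long, chaotic evolution famously
# resisted pencil-and-paper prediction.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
for _ in range(100):
    cells = step(cells)
print(f"Population after 100 generations: {len(cells)}")
```

The point is not the code but the mismatch: a maximally parsimonious rule set (two lines of logic, applied locally and repeatedly) generates behavior rich enough that the Game of Life is Turing-complete, and nothing about that behavior is parsimonious.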
Of course, some disciplines have injured themselves with Occam’s razor more than others. A theoretical cousin of Occam’s razor, maximum parsimony, has been quite useful for understanding evolutionary relatedness. Yet similar methods have led to particularly disastrous results in psychology. For several decades experimental psychology was dominated by an approach known as radical behaviorism, in which concepts related to “thinking” and “mind” were quarantined from mainstream journals.
Likewise, Occam’s Razor cut deep and wide through developmental psychology. How many appropriately complex theories of development were excised in favor of those advocating four or five tidy “stages” of cognitive development? The entire field is lucky to have survived the ridiculous nature-vs-nurture debate, a false dichotomy itself grounded in the pursuit of parsimony.
Thus, the utility of Occam’s Razor is highly questionable. Theories it would soundly eliminate are usually questionable for other reasons, while useful theories may be discarded for a lack of parsimony relative to their over-simplified competitors. The theory that “height determines weight” fits the evidence reasonably well: taller people do tend to weigh more. And it’s highly parsimonious: Ockham would love it! But the theory that “nutrition, exercise, and a collection of more than 100 genes predict both height and weight” is highly unparsimonious, even though we know it is closer to the truth. Statisticians have quantified the appropriate penalty for a theory based on the number of variables it involves (in information criteria such as AIC and BIC), but the more theoretical modes of quantitative science have yet to catch up.
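To make that penalty concrete, here is a minimal sketch of Akaike’s AIC applied to the height-and-weight example above. The data are simulated and the helpers (`aic`, `fit`) are illustrative, not any particular library’s API; AIC simply charges a fixed cost per estimated parameter, which the extra complexity must “pay back” through improved fit:

```python
import numpy as np

def aic(y, y_hat, k):
    """AIC = 2k - 2*ln(L) for a Gaussian error model; lower is better."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    log_likelihood = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * log_likelihood

def fit(predictors, y):
    """Ordinary least squares with an intercept; returns fitted values and
    the number of estimated parameters (coefficients plus error variance)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta, X.shape[1] + 1

# Simulated (made-up) data in which nutrition and exercise genuinely matter.
rng = np.random.default_rng(0)
n = 200
height = rng.normal(170, 10, n)
nutrition = rng.normal(0, 1, n)
exercise = rng.normal(0, 1, n)
weight = 0.9 * height + 4 * nutrition - 3 * exercise + rng.normal(0, 5, n)

simple_fit, k_simple = fit([height], weight)
richer_fit, k_richer = fit([height, nutrition, exercise], weight)

print("AIC, height-only theory:    ", round(aic(weight, simple_fit, k_simple), 1))
print("AIC, three-variable theory: ", round(aic(weight, richer_fit, k_richer), 1))
```

When the extra variables carry real signal, as here, the unparsimonious model wins despite its penalty; when they don’t, the penalty does its job. That is a far more useful arbiter than a blanket preference for fewer assumptions.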
“I hope at least that the next time you’re tempted to consider parsimony as a desirable aspect of whatever you are doing, you’ll give some thought to whether you really want to advocate a simplistic and nonexistent parsimony, rather than an appropriately complicated and meaningful psychology.” – William Battig
4/16 EDIT: I think the following quote sums up my argument even better:
“The aim of science is to seek the simplest explanation of complex facts. We are apt to fall into the error of thinking that the facts are simple because simplicity is the goal of our quest. The guiding motto in the life of every natural philosopher should be ‘Seek simplicity and distrust it.’” – Alfred North Whitehead
Related:
Of Molecules and Memory, Part I
Ockham’s Razor cuts both ways: the uses and abuses of simplicity in scientific theories
Against Parsimony, Again (A Statistician’s View of Parsimony)
How Reductionism Leads Us Astray in Cognitive Neuroscience (using planning as an example research domain)
Reconstructing a Preference for the Complex Out of Science’s Reverence for the Simple (using symbol use as an example research domain)
Complexity From the Simple: Choice RT and Inhibition