Why The Simplest Theory Is Never The Right One: Occam's Razor Has A Double Edge

Theories with the fewest assumptions are generally preferred to those positing more, a heuristic commonly known as "Occam's razor." This kind of argument has been used on both sides of the creationism vs. evolution debate (is natural selection or divine creation the more parsimonious theory?) and in at least one reductio ad absurdum argument against religion. Simple theories have many advantages: they are often falsifiable, they motivate clear predictions, and they can be easily communicated and widely understood.

But there are numerous reasons to suspect that this simple "theory of theories" is itself fundamentally misguided. Nowhere is this more apparent than in physics, the science attempting to uncover the fundamental laws giving rise to reality. The history of physics is like a trip down the rabbit hole: the elegance and simplicity of Newtonian physics have been incrementally replaced by ever more complex theories. At the time of writing, this has culminated in M-Theory, positing no less than 10 dimensions of space and the existence of unobservably small "strings" as the fundamental building block of reality. It seems safe to assume that the fundamental laws of reality will be even more complex, if we can even discover them.

So where did Occam's Razor go wrong?

Occam's Razor is actually a vestigial remnant of medieval science. It is literally a historical artifact: William of Ockham employed the principle in his 14th-century work on divine omnipotence and other topics "resistant" to scientific methods. The continuing use of parsimony in modern science is an atavistic practice, equivalent to a cardiologist resorting to bloodletting when heart medication doesn't work.

And it is in the life sciences where Occam's razor cuts most sharply in the wrong direction, for at least three reasons.

1) First, life itself is a fascinating example of nature's penchant for complexity. If parsimony applies anywhere, it is not here.

2) Second, evolution doesn't design organisms as an engineer might - instead, organisms carry their evolutionary history along with them, advantages and disadvantages alike (your appendix is the price you pay for all your inherited immunity to disease). Thus life appears to result from a cascading "complexifying" process - an understanding of organisms at the macroscale will be anything but simple.

3) Third, we know that even the simplest rules of life - Conway's Game of Life being the canonical example - can give rise to intractable complexity (a minimal sketch follows below). Unless you're a biophysicist, the mechanisms at your preferred level of analysis are likely to be incredibly heterogeneous and complex, even at their simplest.
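To see the point concretely, here is a minimal Python sketch of Conway's Game of Life; the grid size, seed pattern, and generation count are arbitrary choices for illustration, not anything specified in the post.

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrap-around grid.
    Rules: a live cell survives with 2 or 3 live neighbors;
    a dead cell becomes alive with exactly 3 live neighbors."""
    # Count the eight neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    alive = (neighbors == 3) | ((grid == 1) & (neighbors == 2))
    return alive.astype(int)

# Seed with the R-pentomino, a five-cell pattern that behaves chaotically
# for more than a thousand generations despite the two-line rule set above.
grid = np.zeros((64, 64), dtype=int)
grid[31:34, 31:34] = [[0, 1, 1],
                      [1, 1, 0],
                      [0, 1, 0]]

for _ in range(200):
    grid = life_step(grid)
print("live cells after 200 generations:", int(grid.sum()))
```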

Of course, some disciplines have injured themselves with Occam's razor more than others. A theoretical cousin of Occam's razor, maximum parsimony, has been quite useful for understanding evolutionary relatedness. Yet similar methods have led to particularly disastrous results in psychology. For several decades experimental psychology was dominated by an approach known as radical behaviorism, in which concepts related to "thinking" and "mind" were quarantined from mainstream journals.

Likewise, Occam's Razor cut deep and wide through developmental psychology. How many appropriately complex theories of development were excised in favor of those advocating four or five tidy "stages" of cognitive development? The entire field is lucky to have survived the ridiculous nature-vs-nurture debate, a false dichotomy itself grounded in the pursuit of parsimony.

Thus, the utility of Occam's Razor is highly questionable. Theories it would soundly eliminate are usually questionable for other reasons, while useful theories may be discarded for a lack of parsimony relative to their oversimplified competitors. The theory "height determines weight" can marshal a reasonable body of evidence that seems to support it. And it's highly parsimonious - Ockham would love it! But the theory "nutrition, exercise, and a collection of more than 100 genes predict both height and weight" is highly unparsimonious, even though we know it's the better of the two. Statisticians have quantified the appropriate penalty for theories based on the number of variables they involve, but the more verbal, not-yet-mathematized modes of quantitative science have yet to catch up.
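To make the statistical point concrete, here is a minimal Python sketch on synthetic data (the predictors, coefficients, and sample size are all invented for illustration, not taken from any real study): AIC charges the richer model for its extra parameters, but the charge is easily paid when those parameters genuinely explain something.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic "truth", invented for illustration: weight depends on height,
# nutrition, and exercise (units and coefficients are arbitrary).
height = rng.normal(170, 10, n)
nutrition = rng.normal(0, 1, n)
exercise = rng.normal(0, 1, n)
weight = 0.5 * height + 4.0 * nutrition - 3.0 * exercise + rng.normal(0, 5, n)

def aic(y, X):
    """AIC = 2k - 2 log L for an ordinary least-squares fit with Gaussian errors."""
    X = np.column_stack([np.ones(len(y)), X])       # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)                 # ML estimate of the error variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                              # coefficients plus the error variance
    return 2 * k - 2 * loglik

simple = aic(weight, height.reshape(-1, 1))         # "height determines weight"
richer = aic(weight, np.column_stack([height, nutrition, exercise]))
print("AIC, simple model: %.1f" % simple)
print("AIC, richer model: %.1f" % richer)           # lower AIC wins despite the extra parameters
```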

"I hope at least that the next time you're tempted to consider parsimony as a desirable aspect of whatever you are doing, you'll give some thought to whether you really want to advocate a simplistic and nonexistent parsimony, rather than an appropriately complicated and meaningful psychology." - William Battig

4/16 EDIT: I think the following quote sums up my argument even better:

"The aim of science is to seek the simplest explanation of complex facts. We are apt to fall into the error of thinking that the facts are simple because simplicity is the goal of our quest. The guiding motto in the life of every natural philosopher should be ``Seek simplicity and distrust it.'' - Alfred North Whitehead

Related:
Of Molecules and Memory, Part I
Ockham's Razor cuts both ways: the uses and abuses of simplicity in scientific theories
Against Parsimony, Again (A Statistician's View of Parsimony)
How Reductionism Leads Us Astray in Cognitive Neuroscience (using planning as an example research domain)
Reconstructing a Preference for the Complex Out of Science's Reverence for the Simple (using symbol use as an example research domain)
Complexity From the Simple: Choice RT and Inhibition



I think the point most people miss with Occam's razor is that they forget the "all things being equal" part. Most of the time people use Occam's Razor to disregard complex explanations in the face of strong evidence. Occam's Razor is meant to be applied to theories that are both equally plausible given the current evidence.

Furthermore, it is a guideline meant to keep scientists grounded when reviewing results. You can make up all kinds of crazy explanations for results. Take the pre-Galileo star charts, for example, which had the Earth at the center of the Universe. Those worked for a while, but the much simpler answer was that the Earth is not the center of the universe.

So, I think Occam's Razor is valid, but like many things it is misunderstood and misused.

Dude, Occam's Razor says you should accept the simplest explanation that fits the data. If you want to find out whether a more complex solution might be the real one: collect more data.

By Mustafa Mond, FCD (not verified) on 14 May 2007 #permalink

Yeah, I should have addressed the issue about how the razor applies only when two theories are equally supported. But in the real world, how often does that happen?

Maybe in physics, but in psychology there is almost NEVER such "clearly equal" support between two theories. So the razor gets used when someone's too intellectually lazy to apply rigorous logic or do real analysis/modeling instead.

Didn't Ockham suggest the rule for things of equal explanatory power? If there is some advantage to a more complicated model, then they aren't quite equal.

Simplicity for its own sake isn't Occam's Razor, it is simply simplification.

Adjusted R^2, AIC, BIC, and other statistical penalty functions attempt to balance complexity against explanatory value. In the cases where Occam's razor would strictly apply, following it would improve these measures.

Certainly the title of the post is wrong; some of the examples also appear to be. To say that the simplest theory is never the right one is absurd; to say that the simplest theory is always the right one is similarly wrong.

Where you go off the rails in your claim that simple theories don't work is that you ignore an important component. The simplest acceptable theory must also fit all the evidence. Newtonian physics was discarded in favor of relativity because it did not fit all the evidence. "Height determines weight" must similarly be discarded because there are tall stringbeans and short fatsos.

You first state the theory correctly and then go on to explain why something completely different isn't correct, implying that parsimony and "simplest is usually right" are the same thing. This is a fundamental misstatement of Occam's razor. Given two equally plausible explanations for a given phenomenon, the one that makes the fewest assumptions is usually correct. Newton and Einstein don't explain the same things. Both are correct; Einstein happens to include Newton. There is no presumption that the thing being explained is "simple" or that the explanation is "simple". The weight of Occam's razor falls to the explanation that makes the fewest assumptions, no matter how complex that explanation is.

By Henry Culver (not verified) on 14 May 2007 #permalink

Bozo - I attempted to admit in the post itself that statisticians apply the razor in an appropriate way. The problem is for more verbal theories which are not yet mathematically implemented (and which abound in psychology, and I think in other life sciences).

Pat - You caught me ;) I thought I'd be controversial and leave out the word "almost," which was in the story's original title (look at the URL).

I think the best way I've heard Occam's razor explained is that if you have two points, you draw a straight line between them - you don't zigzag it across the space in between. Similarly with three points the line goes through all three in the most direct manner, and the same with four points etc. Explained in that manner I can see no situation where applying Occam's razor would be incorrect. The key point is that it is the most parsimonious explanation that fits the data.
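The straight-line intuition is easy to demonstrate: fit the same handful of points with a line and with a needlessly high-degree polynomial, and the wiggly fit matches the observed points at least as well while extrapolating badly. A minimal Python sketch, with data invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five training points from a noisy straight line (slope, intercept, and noise are invented).
x_train = np.linspace(0, 4, 5)
y_train = 2 * x_train + 1 + rng.normal(0, 0.3, 5)

line = np.polyfit(x_train, y_train, 1)     # parsimonious: two parameters
quartic = np.polyfit(x_train, y_train, 4)  # passes exactly through all five points

# Both "explain" the training points, but only the line extrapolates sensibly.
x_new = 6.0
print("line predicts    %.2f" % np.polyval(line, x_new))
print("quartic predicts %.2f" % np.polyval(quartic, x_new))   # typically far off the mark
print("actual value     %.2f" % (2 * x_new + 1))
```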

Just a thought: Assuming that the world is infinitely complex, or essentially incomprehensible, Occam's razor makes sense in that it helps one to identify and describe regularities in that world. This process would be essential to any development of culture. One uses the razor to craft a tool, as it were.

On second thought, that doesn't sound like the Razor anymore, as it represents more of a pragmatist than an essentialist view.

By Jan-Maarten (not verified) on 14 May 2007 #permalink

You're wrong on behaviorism: behaviorism didn't rule out thinking (see _Verbal Behavior_, 1957), or indeed Skinner, B. F. (1945), "The Operational Analysis of Psychological Terms," Psychological Review, 52, 270-277. Skinner's radical behaviorism treated thinking and feeling as behaviors, subject to scientific inquiry, whereas cognitive science, relying on methodological behaviorism, regards the scientific study of thinking and feeling as unscientific since their existence cannot be observed, only inferred.

On a different note, behaviorism did not rule out the mind, just as atheism does not rule out God: neither exists and, as such, cannot be ruled out, just pointed out as an unnecessary hypothetical construct.

Furthermore, it is difficult to argue that behaviorism was in any way disastrous: cognitive science has, forty years after its inception, failed to deliver any methods substantially better than the behaviorists'.

Thanks Rolf - my caricature of behaviorism was intended to make the larger point that oversimplification is not good, not to alienate people who follow in the "radical behaviorist" tradition.

Nonetheless, computational modeling is definitely a substantial improvement on behaviorism (though I'll admit it is largely rooted in work by pioneering behaviorists).

I wonder if you have bothered to read any Ockham. Or philosophy of science. Or history of science. Special (and later General) Relativity was the simplest model we could come up with that explained all relevant observations. There were *more complex* (in the sense of the Razor) models that preserved aspects of Classical (Newtonian) Physics. When those models became too complex people searched for a simpler result.

The Razor is not a heuristic; it is a basic, fundamental principle underlying all of science. The Razor enabled philosophers to slice theology out of philosophy and so put science on a firm enough ground to grow. Science is, in many ways, a rigorous application of parsimony: scientists refuse to put terms in their models that are not required. Your attempt to suggest that biology fails this test is wrong: biological theories (Evolution included) absolutely are parsimonious. The "baggage" in the genome is evidence of evolution *because* we want a simple explanation. It is simpler to assert imperfect reproduction than to assert individual designers who tinker with the genome along the way.

As for physics, it is exactly the Razor that is causing trouble for string and M theories. There are string theories that reduce to our current physics. The problem with them is that they require a host of special assumptions for initial values. The difficulty is producing a string (or M) theory that produces our physics as a necessary result.

Matt - I think you misunderstood.

My point is that nature/evolution converges on the more complex rather than the more simple solution to any given problem, and so the razor is a poor heuristic for explaining the functioning of these structures (e.g., the brain).

I understand that in the abstract world inhabited by philosophers of science, where everything is perfectly measurable, the razor is ultimately correct because it supports the simplest theory that can explain all evidence. But my post is about the razor as a heuristic in the PRACTICE of science.

In support of the other criticisms...

Another way of understanding Occam's Razor is "never invent complexity."

We've only progressed from Newtonian physics into string theory because there was data unaccounted for in Newtonian physics. Absent that data, nobody would've invented string theory for the hell of it.

Philo wrote: "Another way of understanding Occam's Razor is 'never invent complexity.'"

Unless you're evolution, in which case you do it all the time. If you're trying to understand an evolved structure (e.g., the brain), that's a problem.

This is an absurd straw man argument. As many people have posted before me (why so many? because it's totally obvious), Occam's razor favors the simplest explanation that's consistent with the evidence. To oppose Occam's razor is to say that you favor positing unnecessarily complicated theories that overfit the data.

Your argument that many initially simple theories have later had to be expanded to fit additional evidence seems to suggest that you think we should only posit theories that will fit all current evidence and all evidence that will ever be. I only know of one such theory, and that's called Ineffable Supernatural God Done It. Sadly, the ISGDI theory has historically had extremely poor predictive power. Just ask the worshippers of Baal about that.

Chris wrote: "My point is that nature/evolution converges on the more complex rather than the more simple solution to any given problem."

The crux is that our theories must converge on nature, regardless of nature's complexity.

Complexity beyond the data we have to describe nature is less useful, unless it happens to produce testable predictions which we can use to justify the complexity.

Ouch. Sorry Chris, you tried to apply an incorrect interpretation of Occam, and they're punishing you for it.

It's still an entirely valid principle. Only when people misinterpret it does it seem unhelpful.

Chris: The statistical methods like Adjusted R^2, AIC, and BIC are still sort of arbitrary: they add a penalty and use that to optimize against. It is an easy thing to do with formulas and numerical models. They just try to balance model complexity versus model 'accuracy', and whether the penalty is (n-1)/(n-k-1) (for R^2_adj) or something more complicated, it's still an arbitrary model.

For more verbal theories you could do something similar--Maybe comparing word-counts or sentence-counts. Your formulation of Occam's razor is as a penalty function that seems to rate the complexification as infinitely more expensive than any benefit from increased explanatory power, and that is what I think people are responding to. Maybe you are looking for some tradeoff like the 80/20 Pareto rule?
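For reference, the standard textbook forms of the penalties under discussion, with k fitted parameters, n observations, maximized likelihood L-hat, and coefficient of determination R^2 (a reminder of the definitions, not part of the comment above):

```latex
% Standard penalized-fit criteria (k = number of fitted parameters,
% n = sample size, \hat{L} = maximized likelihood, R^2 = coefficient of determination).
\begin{align*}
  \bar{R}^2 &= 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1} \\
  \mathrm{AIC} &= 2k - 2\ln\hat{L} \\
  \mathrm{BIC} &= k\ln n - 2\ln\hat{L}
\end{align*}
```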

The comment that we have moved on from Newtonian Physics to String Theory is no reason to say that the razor is not true even "almost" all of the time. In fact, many of the multiple dimensions were added to M-Theory because it was necessary to make the mathematics of the model work out. So, rather than find a model that better explained the data, physicists just further complicated the model with added dimensions for which there is no supporting evidence other than the mathematics; certainly nothing in the lab supports the presence of these additional dimensions. And, as elegant as that mathematical solution might be, it does not prove it is correct, as was shown with Newton's beautifully elegant mathematics that began to break down in certain situations outside of "everyday experiences of motion". Even Einstein's Relativity breaks down when dealing with motion in sub-atomic particles. Just because it is complex and calculates to the desired solution does not mean it is correct. Sometimes the simpler answer is the correct answer, which means that at least some of the time the razor cuts true. Thus, it is quite a stretch to even contend that the razor is "almost" never right.

Well, since the original statement of Occam's razor is "entities should not be multiplied beyond necessity", I don't think it has any bearing on observed complexity at all. If you look at an animal and it has 20 parts where you think two would do, it's still necessary to call that 20 parts.

Something like the Game of Life (example 3) is actually a good demonstration of the power of Occam's razor, not its weakness. The observed complexity is enormous, but the rules you have to infer are not very complex at all. Theoretical complexity would be a situation where you had to posit a new rule for every single pattern you see in a game of Life, or even worse, posit rules that had no new consequences.

Evolution has got to be the most dramatic example of Occam's razor in action ever. The claim is that all of life's complexity, all the different species and organs and molecules are the result of a single process: reproduction with variation, natural and sexual selection creating differential rates of reproduction.

By Matthew L. (not verified) on 14 May 2007 #permalink

in the life sciences i like to think that though occam's razor is your friend, you should expect redundancy and multiple causation in most systems. And simple models should be treated as such, as models, which are most likely wrong - though you should certainly take them as a starting point.

Some people really overthink some things and end up completely missing the point. The author's argument is correct, although his examples and delivery could be improved upon. The most obvious example of what the author is talking about is in everyday life. Take a moment and step outside of the laboratory and talk to a typical, even college-educated, person walking down the street. Ask them about everything from politics, nature, and child development to, yes, even physical science, and they will give you the most absurdly simplistic and inaccurate (if not completely wrong) assertions about how and why the world around them operates. Most people cling to the simplest explanation or reasoning behind something like a warm security blanket, but reality is anything but simple, being the result of an almost infinite series of "cause and effect" going back to the very beginning - regardless of whether you are dealing with the motivations behind a war, climate change in a region, or why your 4-year-old child is throwing a tantrum on the floor. The most elementary understanding of Chaos Theory should make that obvious to any academician.

Tbell wrote: "in the life sciences i like to think that though occam's razor is your friend, you should expect redundancy and multiple causation in most systems."

Douglas wrote: "The crux is that our theories must converge on nature, regardless of nature's complexity."

I think these comments show where my argument really goes wrong. In a nutshell: although evolved structures are full of redundancy and multiple causality, our theories about their function need not be - so Occam's razor is not completely misleading.

Bozo wrote: "Your formulation of Occam's razor is as a penalty function that seems to rate the complexification as infinitely more expensive than any benefit from increased explanatory power, and that is what I think people are responding to"

Interesting and well-put - this could be behind the misunderstanding. I actually think the Razor is problematic because there's *any* penalty for complexification, given that nature is consistently more complex than we expect.

Like so many others here, I have difficulties cutting this strawman down to size.

First, Occam's razor is about theories, not about facts. We can observe very complex phenomena without being able to judge them on the razor.

Second, there is a difference between elegance and the razor. We try to make theories elegant, i.e. simple, because it is easier to understand them and get them right, i.e. predictive, if there are fewer laws. We could incorporate some complexity of application models in the main theory, but we abstain.

Third, equally simple theories are judged on the razor to get the parsimonious one. For example, bayesian methods can minimize the number of independent parameters ("entities should not be multiplied beyond necessity") in cosmology or similarly choose candidate phylogenies in cladistics. (Which theories must be tested later of course, I'm not a hardcore bayesian. :-)

One of the reasons the razor gives a better theory or model is that it minimizes reversals, i.e. it gives the least likelihood for later changes. Distinguishing between simplicity ("few laws") and parsimony ("few parameters") is important here.

Natural theories have few parameters with natural values (i.e. ~ 1 in fundamental units), but whether that is expressed in one or more laws is not a matter of naturalness or reversals. Elegance is mostly subjective, as seen above, while the razor can be quantitative.

For example, 'goddidit' is a perfectly simple theory that doesn't predict anything.

"and in at least one reductio ad absurdum argument against religion"

Um, that is a parody of religion, and I can't see the razor here. Perhaps you are thinking about the reductio of atheists as "disbelieving in one more god".

But that isn't the razor, which would be about the number of gods. (Naturally ~ 1, I guess, to minimize later changes. ;-)

That is really simplicity, one rule less, about predictivity. So I guess 'no gods done anything' gives better predictivity, at least compared to 'goddidit'. :-) You know, we may be on to something here.

"The Razor enabled philosophers to slice theology out of philosophy and so put science on a firm enough ground to grow."

I don't know, since I am a lousy student of the history of science. But I would argue that the main reason theology today continues to be kept out is because methodological naturalism was found to work best. For example, we can in fact explain how life evolves.

By Torbjörn Larsson, OM (not verified) on 15 May 2007 #permalink

"I actually think the Razor is problematic because there's *any* penalty for complexification, given that nature is consistently more complex than we expect."

That may be true, but that doesn't mean you can just make up complexities which are not supported by the data, which is what the Razor objects to. If, during the course of investigation, you discover that things are more complex than you initially thought, Occam's Razor does not apply. It only says "don't pull stuff out of thin air".

If you were to propose a modification to evolution that postulated that certain types of mutations were in fact caused by magical pink unicorns, that would be a violation of Occam's Razor. You can't just make up arbitrary complexity and justify it on the basis that the world is complex. You have to show that it's based in observable reality.

I sincerely hope no one thinks I'm advocating the invention of arbitrary complexities. That's a bigger strawman than my original argument! :)

The simplest theory (i.e., the one with the fewest assumptions) that fits the available data is generally an oversimplification of the true state of reality, which is itself always more complex than the available data. Therefore, the simplest theory (i.e., the one with the fewest assumptions) is *almost* never the right one.

"I sincerely hope no one thinks I'm advocating the invention of arbitrary complexities."

No, nobody thinks that. But that is all that Occam's Razor forbids. Occam's Razor says nothing about the simplest theory being the correct one - it just says that you're not allowed to make stuff up without any basis.

"The simplest theory (i.e., the one with the fewest assumptions) that fits the available data is generally an oversimplification of the true state of reality, which is itself always more complex than the available data. Therefore, the simplest theory (i.e., the one with the fewest assumptions) is *almost* never the right one."

But you're playing fast and loose with the term "available data". You're assuming that there is data beyond the available data. That is often the case, but it's not necessarily always the case, and even when it is the case, you can't sensibly take it into account, because the data isn't available. It's all very well assuming that the universe is more complex than it currently appears, but you can't throw Occam's Razor overboard because you don't know anything about the complexity you don't know anything about, by definition. You can't sensibly hypothesise about unknown unknowns.

But yes, you're more-or-less right that without perfect knowledge of the entire universe, our theories are never likely to be perfectly complete. I don't see that as much of a problem, to be honest...

In what way has "computational modeling" contributed to understanding of the brain more than behaviorism (or psychophysics? or intracerebral recording in animals?)? In what way has "computational modeling" contributed to improving public health?

in all seriousness, this is an area that tends to point out the fallacy of the animal righties argument that science can be done "with computer models". we may be learning a lot about how computers can be taught to "think" in much different ways. but the brain-computer information flow seems a one-way ticket to me...

can you point to de novo discoveries about the way the brain functions that came from "computational modeling" that don't depend, in essence, upon empirical study of actual organisms?

I ask this as someone who likes psychology as an ivory tower discipline but finds the federal funding structure (NIMH, specifically) questioning the relevance of such work. How are these fields to justify their continued place at the NIH trough?

Dunc, I think that was the most productive part of the discussion so far.

Drugmonkey wrote: "can you point to de novo discoveries about the way the brain functions that came from "computational modeling" that don't depend, in essence, upon empirical study of actual organisms?"

The temporal difference algorithm, now widely applied as a way of understanding dopamine release, has its origins in chess programs from the 1950's; it was subsequently cited in Marvin Minsky's thesis, and subsequently used as a way of making sense of animal data. So TD learning was originally discovered as an important and necessary component of cognition in a purely computational framework.

Of course there are now new doubts about TD learning so perhaps this comment will be out of date in a few years. But for now I think it's a good example :)
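For readers unfamiliar with it, the heart of temporal-difference learning is a one-line prediction-error update. The Python sketch below uses a toy three-state chain with arbitrary parameters; it is not the dopamine model itself, just the error term ("delta") that the reward-prediction-error interpretation of dopamine maps onto.

```python
# TD(0) value learning on a trivial three-state chain: S0 -> S1 -> S2 (reward at S2).
# The learning rate and discount factor are arbitrary toy values.
alpha, gamma = 0.1, 0.9
V = {0: 0.0, 1: 0.0, 2: 0.0}

def reward(state):
    return 1.0 if state == 2 else 0.0

for episode in range(500):
    s = 0
    while s != 2:
        s_next = s + 1
        r = reward(s_next)
        # The temporal-difference error: how much better or worse things
        # turned out than the current value estimate predicted.
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta
        s = s_next

print(V)  # V[1] approaches 1.0 (reward imminent); V[0] approaches gamma * V[1] = 0.9
```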

Chris wrote: "Therefore, the simplest theory (i.e., the one with the fewest assumptions) is *almost* never the right one."

Perhaps we are abusing the word theory. A hypothesis can be based on data we don't yet have; a theory must always be based on real data. There may be a hypothesis "better" than any current theory, to the extent that an unfounded hypothesis can be better than a tested theory.

In the context of the current data, the simplest theory is *always* the best one, even if it isn't the "right" one. Going from "best" to "right" is a leap of faith. Perhaps someone will have enough faith in that hypothesis to create a new research project, and so replace the best theory with a better one. You can't blame your lack of data on Occam, but we can call "better" progress.

Your example makes my point. This is a post-hoc explanation or structure to understand primary empirical observation. At best, the subfield now does some back and forth to ask how well biology matches our "pure" mathematical/computational algorithms. Your overview of the "new doubts" further reinforces the uni-directional nature of the relationship between empirical biology and predictions made from computational neuroscience.

tri-chromatic psychophysics is perhaps the first alternative i think of. vision psychophysics made a very strong prediction about the biology of the visual system before it was established that we are only sensitive to three colors, biologically speaking.

in the same arena, the ramachandran phantom-limb sensory phenomenon was (in contrast) not a triumph of psychophysics but rather a use of psychophysics to confirm what was predicted by the established sensory map in cortex. some of this is an accident of history of course. had someone observed that touching the cheek of amputees causes a specific "missing limb" sensation and used that to predict what the cortical map should look like in advance, well, that would have been a score for psychophysics.

my point is that to date the computational modeling approach trends more toward the latter example than the former...

Drugmonkey, I don't understand - my example clearly shows that TD was not a post-hoc explanation, but rather a "prophesied" computational requirement subsequently confirmed in the dopamine signal.

In any case I admit that computational neuroscience has yet to prove its worth, and so I begrudgingly agree with your larger point :)

My question, I suppose, is how directly the "prophecy" leads to the body of knowledge and in how many cases the argument is made post-hoc.

Your offhand comment about the value of computational neuroscience vis a vis behaviorism just got me thinking. However, it is an important issue in these times, when the generic argument for basic research is simply untenable. People sitting on study section, program officers, and Institute Directors have to make very tough decisions about the value of research projects. Certainly in retrospect one can see how behaviorism has contributed tremendously to biomedical science in terms of the abstract (making behavior amenable to replicable study and therefore a legitimate area of scientific enquiry) and the practical (just about any study you can think of that relies on behaving organisms has technical roots in the behaviorism tradition). That's even before we get to specific scientific observations.

It is conceivable that in the future we'd look back on these early days of computational modeling with similar gratitude and appreciation. In my view, however, we don't have a lot of clear evidence for the direct contribution of this subfield and I have difficulty seeing where it will contribute in such a fundamental way in the future.

I would just like to bring in the idea that "The simplest theory is never the right one" is paradoxical, in that it is the simplest theory to deal with the fact that simple theories aren't always right.

By TH3_3XPOSITOR (not verified) on 15 May 2007 #permalink

I was trying to come up with a good example illustrating the difference between elegance and parsimony. Perhaps this is it: classical EM as Maxwell's formulation (two vector fields) or as covariant formulation (scalar and vector potential).

The latter is more elegant and generalizable, but is essentially expandable to the former. Quantitative penalty functions would not make a fair comparison - or, comparing unpacked versions, would state that they are essentially the same. And I see now that Bozo has made this point much earlier, describing strict applications of the razor and all.

By Torbjörn Larsson, OM (not verified) on 15 May 2007 #permalink
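For readers who want that contrast spelled out, here is the standard textbook version (SI units; a reference sketch, not anything taken from the comment above): Maxwell's equations as two coupled vector fields, versus the covariant form built from the four-potential (the scalar and vector potentials combined).

```latex
% Maxwell's formulation: two coupled vector fields E and B (eight scalar equations).
\begin{align*}
  \nabla \cdot \mathbf{E} &= \rho/\varepsilon_0, &
  \nabla \times \mathbf{E} &= -\partial_t \mathbf{B}, \\
  \nabla \cdot \mathbf{B} &= 0, &
  \nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0\, \partial_t \mathbf{E}.
\end{align*}
% Covariant formulation: one four-potential A^\mu, with the field tensor defined
% from it; two of the equations above become identities.
\begin{align*}
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
  \qquad
  \partial_\mu F^{\mu\nu} = \mu_0 J^\nu.
\end{align*}
```

The covariant form is more elegant and compact, but unpacked it contains the same physics, which is the distinction between elegance and parsimony being drawn here.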

I wonder: are the difficulties of "hyper"-reductionism not present, or simply not appreciated, outside of the brain sciences?

ROFLMAO...
Maybe you misunderstood the motives for the "vehement response".
It seems to me that it is not so much a matter of defending this or that scientific methodology but rather a purely emotional response to a perceived attack on the cherished (and partly unconscious) beliefs underlying the (would-be) scientist's way of life.
Cannot trust Occam's Razor? Whaaah! Scary!
OTOH the more recent article is "too elaborate" and cannot reach the lizard brain so directly.
And whenever it does, the easiest and preferred way of denial is to reject its relevance to the reader's personal field of interest: it doesn't apply to my turf, so no need to bother and reply.
This cannot be done with a broad general principle like Occam's Razor.

By Kevembuangga (not verified) on 15 May 2007 #permalink

I don't know if this happens often in other sciences, but in the cognitive sciences, the simplest models are generally too powerful. The only time parsimony ever becomes relevant is when you have two models that make the exact same predictions, but one has fewer df. And that almost never happens.

If you want an example where computational neuroscience proved useful, just look up the Jeffress model of binaural hearing. It was a prediction of how the brain would compute a basic mathematical algorithm (cross-correlation) in determining the location of sound sources. There was no physiological data to back it up, but it later turned out to be spot-on for birds. It is not as clear for mammals: mammals definitely do the calculations he predicted, but they may or may not do them in the way he predicted. There were similar predictions for another method of determining sound location, using paired excitation/inhibition, that were made long before the neurons that carried out the calculation were found.

By TheBlackCat (not verified) on 17 May 2007 #permalink
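For the curious, the computational core of the Jeffress idea is a bank of coincidence detectors, which amounts to picking the interaural delay at which the two ears' signals line up best. A minimal Python sketch with a synthetic signal (the sample rate, delay, and noise level are made up for illustration; this is not the physiological model):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 44100                      # sample rate in Hz, arbitrary
true_delay = 12                 # interaural delay in samples, made up for illustration

# A noise burst reaching the left ear, and a delayed, noisier copy at the right ear.
left = rng.normal(0, 1, 2048)
right = np.roll(left, true_delay) + 0.3 * rng.normal(0, 1, 2048)

# Jeffress-style estimate: try each candidate delay ("coincidence detector")
# and pick the one where the two signals correlate most strongly.
candidates = np.arange(-40, 41)
scores = [np.dot(left, np.roll(right, -d)) for d in candidates]
estimated_delay = candidates[int(np.argmax(scores))]

print("true delay:", true_delay, "estimated:", estimated_delay)
print("implied time difference: %.1f microseconds" % (estimated_delay / fs * 1e6))
```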

"are the difficulties of 'hyper'-reductionism not present, or simply not appreciated, outside of the brain sciences?"

That was a large question! To answer somewhat:

First, sciences are layered, so reductionism only gets you so far. We don't try string theory to describe chemistry. So I don't think scientists in general feel concern about "too much reductionism"; but that may be a personal reflection.

Second, basing models on basic elements can make the resulting model more complex (but predictive) compared to a descriptive (postdictive) ad hoc model. So reduction doesn't automatically mean simplifying or less predictive power.

Third, neuroscience, or more generally biological problems, are at times so complex that I am amazed you make headway at all. I don't think there is any easy and general method to balance 'resolution' in models.

Several different types may be needed to cover a system's behavior - see for example computers, from the transistor level through circuit descriptions to different layers of software. Yet optimizing software relies on hardware knowledge, so there is a need for various "mixed models".

By Torbjörn Larsson, OM (not verified) on 17 May 2007 #permalink

In my opinion, as a summary to all this...

Occam's Razor dictates that you should not create a theory out of pure insight, but should use some kind of plausible evidence to support it.

Simple theories in general are very powerful, though: they offer predictions that may often be correct, but only within the context to which they apply.

We're walking a thin line between theory and hypothesis. You can play around with a hypothesis to create a well-founded theory, but you can't afford to play around with a theory, because you cannot create reality itself through theories. Maybe that's what Occam's Razor is trying to tell us.

It has always been the case, though, that theories converge into supertheories and are then simplified again to become a theory that adapts to new data. Perhaps it's time to stop that and make a simple theory with a "mutation factor" that allows for change. Let's hope that the "mutation factor" is absolutely accurate before we ever make a theory with it.

By Moonshadow (not verified) on 04 Jun 2007 #permalink

There are always infinitely many explanations compatible with a given set of evidence. The explanations don't necessarily have implications on the same order of complexity, though. Given two explanations, one of which has many more implications than the other, the more complex one has more opportunities to diverge from reality and is therefore less desirable.

Your argument hinges on a misrepresentation of the Razor.

By Caledonian (not verified) on 15 Jul 2007 #permalink

"At the time of writing, this has culminated in M-Theory, positing no less than 10 dimensions of space and the existence of unobservably small "strings" as the fundamental building block of reality. It seems safe to assume that the fundamental laws of reality will be even more complex, if we can even discover them."

These theories are just that, theories; just because people think up complex explanations does not mean that simpler ones are not possible.

I think all the references to biological complexity defying the Occam's Razor principle miss the point. The existence of complexity does not contradict the simplicity of the principles through which its existence is explained.

The dizzying array of biological complexity can be explained through a fairly simple evolutionary principle: differential reproductive success in the context of genetic diversity. It is a rather parsimonious explanation for complexity, no?

There's a really interesting perspective on Occam's Razor that comes from probability theory (Bayes' theorem, specifically). The Evidence (or marginal likelihood), which is the normalising factor on the bottom of the equation, can be used to calculate the relative likelihood of two competing hypotheses, given the available data. (This is often called "model selection"; BIC, for example, is an approximation to this.)

Now, the Evidence is broadly governed by two things (for a given hypothesis). It will be higher if the data are a better fit to said hypothesis. And it will be lower the more possible outcomes the model can explain (this is to do with the prior and how thinly it's spread out in parameter space).

So, if you have a hypothesis that makes a unique prediction (eg. gravity is exactly an inverse square law in Newton's theory, not distance^-2.1 or distance^1.3), then this factor will be higher. If our hypothesis allows for lots of different values, it would be lower.

(this all drops out of the maths, BTW)

So you end up with the case that the Bayesian Evidence either favours hypotheses that are overwhelmingly favoured by the data or, when two (or more) theories fit similarly well, favours the simpler one (because of the prior).

So, I would argue that one interpretation of Occam's Razor is that it refers to the relative likelihood of different hypotheses, given the available data.

WRT the discussions above, it's perfectly possible in this case for the "true" model to be selected against simply because the data are too poor to show us the fine details of the more complicated model. This happens all the time in science and really just means we have to be alert to our simple theories breaking down.

(also an important point that in all the above, there's no guarantee that the competing hypotheses you've picked are the best ones. There may always be a better theory that no-one's suggested yet!)
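A tiny worked example makes the Evidence argument tangible (a coin rather than cosmology; the numbers are invented). Compare a zero-parameter "fair coin" hypothesis against a one-parameter "unknown bias" hypothesis by their marginal likelihoods: because the flexible model spreads its prior over many possible outcomes, it wins only when the data genuinely demand the extra parameter.

```python
from math import comb

# Marginal likelihood ("Evidence") for observing k heads in n coin flips.
# H1: fair coin, no free parameters.
# H2: unknown bias with a uniform prior -- one free parameter, integrated out.

def evidence_fair(k, n):
    return comb(n, k) * 0.5 ** n

def evidence_biased(k, n):
    # Integrating the binomial likelihood over a uniform prior on the bias
    # gives the Beta-Binomial result 1/(n+1), independent of k.
    return 1.0 / (n + 1)

for k, n in [(52, 100), (80, 100)]:
    bayes_factor = evidence_fair(k, n) / evidence_biased(k, n)
    print(f"{k}/{n} heads -> Bayes factor (fair : biased) = {bayes_factor:.3g}")

# 52/100 heads: the zero-parameter fair-coin model is favoured (the Occam factor at work).
# 80/100 heads: the data overwhelm the prior's penalty and the biased model wins.
```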