One of the hot topics of the moment is the E. O. Wilson op-ed lamenting the way math scares students off from science, and downplaying the need for mathematical skill (this is not news, really– he said more or less the same thing a few years ago, but the Wall Street Journal published it to promote his upcoming book). This has raised a lot of hackles in the more math-y side of the science blogosphere, while some in less math-y fields (mostly closer to Wilson’s home field of evolutionary biology) either applaud him or don’t see what the fuss is about.

The split, I think, comes from the fact that Wilson’s comments are coupled to a larger point that is basically unobjectionable: that math alone is not sufficient for science. Scientists working in a particular field need to have detailed knowledge of that field in order to even know what math to do:

> In the late 1970s, I sat down with the mathematical theorist George Oster to work out the principles of caste and the division of labor in the social insects. I supplied the details of what had been discovered in nature and the lab, and he used theorems and hypotheses from his tool kit to capture these phenomena. Without such information, Mr. Oster might have developed a general theory, but he would not have had any way to deduce which of the possible permutations actually exist on earth.
>
> Over the years, I have co-written many papers with mathematicians and statisticians, so I can offer the following principle with confidence. Call it Wilson’s Principle No. 1: It is far easier for scientists to acquire needed collaboration from mathematicians and statisticians than it is for mathematicians and statisticians to find scientists able to make use of their equations.
>
> This imbalance is especially the case in biology, where factors in a real-life phenomenon are often misunderstood or never noticed in the first place. The annals of theoretical biology are clogged with mathematical models that either can be safely ignored or, when tested, fail. Possibly no more than 10% have any lasting value. Only those linked solidly to knowledge of real living systems have much chance of being used.

That’s absolutely fine, and the same can be said of a lot of physics. The mark of a useful physical theory is that it accurately describes reality, and that requires math to be constrained by empirical observations. To the extent that the much-ballyhooed “crisis” in physics exists, this is the root of the problem: high-energy theorists have not had the data they need to constrain their models, and that has impeded real progress.

What I, and many other physical scientists, object to is the notion that math and science are cleanly separable. That, as Wilson suggests, the mathematical matters can be passed off to independent contractors, while the scientists do the really important thinking. That may be true in his home field (though I’ve also seen a fair number of biologists rolling eyes at this), but for most of science, the separation is not so clean.

As much as I agree with Wilson’s statement about the need for detailed knowledge to constrain math, even in physics, there is also some truth to the reverse version of the statement, which I have often heard from physicists: If you don’t have a mathematical description of something, you don’t really understand it. Observations are all well and good, but without a coherent picture to hold them all together, you don’t really have anything nailed down. Big data alone will not save you, in the absence of a quantitative model.

Of course, that’s physics, which Wilson exempted from his comments at the beginning of the piece, so maybe we’re just oddballs on the boundary where math shades into science. But the close marriage of math and science pops up even in the life sciences. There’s no small irony in the fact that one of the *other* big stories of the week in science is a study showing that many neuroscience studies are woefully underpowered, in a statistical sense. This is a hugely important paper, because it calls into question a lot of recent results, and common practices in the field.

It also shows up the problem with Wilson’s contract-out-the-math-later approach. Because, after all, the problematic studies are doing essentially what he talks about– they’re out in the field, making observations of phenomena, and thinking about mechanisms to explain them. The problem is, many of these observations turn out to be of questionable value, because *they didn’t do the math right*. They didn’t have enough test subjects to reliably test the things they were trying to test. And this has very real negative consequences for the field, as people waste time and resources trying to duplicate results that turn out to be statistical flukes. To say nothing of the career risks for an early-career scientist who plans to build on one of these results, who not only can’t replicate it, but can’t publish the failure to replicate.
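The underpowering problem is easy to see in a quick simulation. The sketch below is purely illustrative (the effect size and sample sizes are assumptions, not figures from the neuroscience paper): it runs many simulated two-group experiments with a real but modest effect, and counts how often a small study versus a larger one detects it. An underpowered design misses a genuinely real effect most of the time, which is exactly why isolated small-sample results are so hard to replicate.

```python
import random
import statistics

def simulate_power(n_per_group, true_effect, n_sims=2000, crit=1.96, seed=42):
    """Estimate the power of a two-sample comparison by simulation.

    true_effect is the group difference in standard-deviation units
    (Cohen's d). Each simulated experiment draws two groups from
    normal distributions and declares a 'detection' when the
    t-like statistic exceeds a z-style critical value (|t| > crit),
    a rough stand-in for a two-sided test at alpha = 0.05.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(true_effect, 1.0) for _ in range(n_per_group)]
        # Standard error of the difference in means (equal group sizes).
        se = ((statistics.variance(a) + statistics.variance(b)) / n_per_group) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) > crit:
            hits += 1
    return hits / n_sims

small = simulate_power(n_per_group=10, true_effect=0.3)
large = simulate_power(n_per_group=100, true_effect=0.3)
print(f"n=10 per group:  estimated power = {small:.2f}")
print(f"n=100 per group: estimated power = {large:.2f}")
```

With these (assumed) numbers, the ten-subject study catches the real effect only a small fraction of the time, while the hundred-subject study does far better– and the rare "significant" hits from the small study will tend to overestimate the effect, which is what sends later replication attempts chasing flukes.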

As a general matter, science and math are just not cleanly separable in the way that Wilson asserts. If there’s an exception here, it’s his field, not the handful he airily waves off as inherently mathematical. You need observations to constrain mathematical models, yes, but you also need math to know what observations you need to make, and to determine the reliability of your results. The notion that the two can be cleanly separated, and the scary math bits farmed out to somebody else, is not just faintly insulting to mathematicians, it’s flat-out wrong. And that’s why people are annoyed.