Right around the time I shut things down for the long holiday weekend, the Washington Post ran this Joel Achenbach piece on mistakes in science. Achenbach’s article was prompted in part by the ongoing discussion of the significance (or lack thereof) of the BICEP2 results, which included probably the most re-shared pieces of last week in the physics blogosphere, a pair of interviews with a BICEP2 researcher and a prominent skeptic. This, in turn, led to a lot of very predictable criticism of the BICEP2 team for over-hyping their results, and a bunch of social-media handwringing about how the whole business is a disaster for Science as a whole, because if one high-profile result is wrong, that will be used to argue that everything is wrong by quacks and cranks and professional climate-change deniers.
This happens with depressing regularity, of course, and it’s pretty ridiculous. The idea that climate-change denial gains materially from something as obscure as the BICEP2 business is just silly. They don’t need real examples, let alone examples of arcane failures that even a lot of professional physicists can’t explain. We had a conversation at lunch a week or so after the initial announcement, and none of the faculty in the department could manage a good explanation of why the polarization pattern they saw would have anything to do with gravitational waves. There was some vague mumbling about how we should see if we can get somebody here to give a talk about this, and that was about it. If tenured faculty in physics and astronomy take a shrug-and-move-on approach to the whole business, it’s not likely to make much of an impression on the general public; certainly not enough to be politically useful.
People profess doubt about climate science not because of any rational evidence about the fallibility of science, but because it’s in their interests to do so. A handful of them are extremely well compensated for this belief, while for many others it’s an expression of a kind of tribal identity that brings other benefits. It’s conceivable, barely, that a “Scientists can’t even properly account for the effects of foreground dust on cosmic microwave background polarization” line might turn up in some grand litany of claims about why you can’t trust the scientific consensus, but it’s going to be wayyyyy down the list. They’re perfectly happy to run with much splashier and far stupider claims that have nothing to do with the physics of the Big Bang. (Which a non-trivial fraction of their supporters probably regard as heretical nonsense, anyway…)
I’m not even sold on the complaints about “hype,” and particularly the notion that somehow the BICEP2 results and possible implications should have been kept away from the public until after the whole peer review process had run its course. For one thing, that’s not remotely possible in the modern media environment. Even if the BICEP2 folks had refrained from talking up their result, posting a preprint to the arXiv (as is standard practice these days) would’ve triggered a huge amount of excitement anyway, because there are people out there who know what these results would mean, and they have blogs, and Twitter accounts. This isn’t something you can just slide under the radar, and if there’s going to be excitement anyway, you might as well ride it as far as it will take you.
(Really, the fact that there’s any market at all for hype about cosmology ought to be viewed as a Good Thing. It means people care enough about the field to be interested in hot-off-the-telescope preliminary results, which isn’t true of every field of science.)
And I don’t think the BICEP2 people have done anything underhanded, or behaved especially like hype-mongers. Confronted with issues concerning their data analysis, they quite properly revised their claims before the final publication. A real fraud or faker would double down at this point, but they’ve behaved in an appropriate manner throughout.
Most importantly, though, as Achenbach notes, science is a human enterprise, and is every bit as prone to error and misinterpretation as anything else hairless plains apes get up to. (In fact, as I argue at book length, this is largely because all of those enterprises use the same basic mental toolkit…)
All those other enterprises, though, seem to have come to terms with the fact that there are going to be missteps along the way, while scientists continue to bemoan every little thing that goes awry. And keep in mind, this is true of fields where mistakes are vastly more consequential than in cosmology. We’re only a week or so into July, so you can still hear echoes of chatter about the various economic reports that come out in late June– quarterly growth numbers, mid-year financial statements, the monthly unemployment report. These are released, and for a few days suck up all the oxygen in discussion of politics and policy, often driving dramatic calls for change in one direction or another.
But here’s the most important thing about those reports: They’re all wrong. Well, nearly all– every now and then, you hit a set of figures that actually hold up, but for the most part, the economic data that are released with a huge splash every month and every quarter are wrong. They’re hastily assembled preliminary numbers, and the next set of numbers will include revisions to the previous several sets. It’s highly flawed provisional data at best, subject to revisions that not infrequently turn out to completely reverse the narrative you might’ve seen imposed on the original numbers.
Somehow, though, the entire Policy-Pundit Complex keeps chugging along. People take this stuff in stride, for the most part, and during periods when we happen to have a functional government, they use these provisional numbers more or less as they’re supposed to be used, which is what has to happen– you can’t wait until you have solid, reliable numbers from an economic perspective, because that takes around a year of revisions and updates, by which time the actual situation has probably changed. What would’ve been an appropriate policy a year ago might be completely wrong by the time the numbers are fully reliable. So if you’re in a position to make economic policy, you work with what you’ve got.
And everyone accepts this. You won’t find (many) economists bemoaning the fact that the constant revising of unemployment reports makes the profession as a whole look bad, or undercuts their reputation with the general public. They know how things work, policy-makers know how things work, and everyone gets on with what they need to do. And, yeah, every report gets some political hype, blasting the President/Congress for failing to do something or other, but every round of these stories will include at least a few comments of the form “Yeah, this looks bad, but these are preliminary numbers, and we’ll see how they look a few months from now.”
So, this is the rare case where scientists need to act more like economists. Mistakes and overhype are an inevitable part of any human-run process, and we need to stop complaining about them and get on with what we need to do. If people still trust economists after umpteen years of shifting forecasts, science will weather BICEP2 just fine.