Jon Stewart on the stolen Climategate emails: [embedded video]
I have two responses to the release of these admittedly unflattering emails. My first response is that they shed virtually no light on the actual climate science. Tyler Cowen says it best:
I see science, including climate science, as very much a decentralized process, based on the collective efforts of thousands of researchers. The evidence for our current understanding of climate change also comes from a wide variety of disciplines, including chemistry, meteorology, oceanography, geography, tree ring studies, ice sheet studies, and a good body of theory, which has held up well. These results all point in broadly similar directions. Call me naive but, with apologies to Robert Sugden, I don’t think many scientific results depend on what comes out of East Anglia, even if you include its emailing affiliates from Penn State and the like. Even very, very simple climate models generate many of the basic results.
I’d add peer-review to the list of decentralized institutions that help ensure the steady progress of scientific knowledge. Scientists complain endlessly about the peer-review process (snarky anonymous reviewers, conservative journals, etc.) but such a system ensures that every paper gets screened by researchers who have no vested interest in the thesis. They want to find its mistakes; those who falsify get the glory.
My second response is that I’m surprised that people seem so surprised that scientists are human beings, stuffed full of all the usual flaws and biases. Here’s a sneak preview of my next Wired article, which examines these cognitive blunders in detail:
Kevin Dunbar is a scientist who studies how scientists study things–how they fail and succeed. In the early 1990s, he began an unprecedented research project: observing four biology labs at Stanford University. Philosophers have long theorized about how science happens, but Dunbar wanted to get beyond theory. He wasn’t satisfied with abstract models of the scientific method–that seven-step process we teach school kids before the science fair–or the dogmatic faith scientists place in logic and objectivity. A scientist himself, Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to think. He suspected that all those philosophers of science–from Aristotle to Karl Popper–had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) And so Dunbar decided to launch an “in vivo” investigation, attempting to learn from the messiness of real experiments.
He ended up spending the next four years staring at post-docs and test tubes: The researchers were his flock, and he was the ornithologist. Dunbar set up tape-recorders in the coffee room and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and conducted countless interviews. “I’m not sure I appreciated what I was getting myself into,” Dunbar says. “I asked for complete access and I got it. But there was just so much to keep track of.”
Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were all using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data, because the data didn’t make sense.” Perhaps they hoped to see a specific protein but it wasn’t there; or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.
Dunbar was a little disturbed by these statistics. The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. (Twentieth-century science philosopher Thomas Kuhn, for instance, defined “normal science” as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”) However, when experiments were looked at up close–and Dunbar interviewed the scientists about even the most trifling details–this idealized version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn’t work and data that couldn’t be replicated and simple studies pockmarked with mistakes. “These weren’t sloppy people,” Dunbar says. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us.”
How did the researchers cope with all this unexpected data? How did they deal with so much failure? Dunbar realized that the majority of people in the lab followed the same basic strategy. First, they would blame the method. The surprising finding was classified as a mere mistake; perhaps a machine malfunctioned or an enzyme had gone stale. “The scientists were trying to explain away what they didn’t understand,” Dunbar says. “It’s as if they didn’t want to believe it.”
The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved. But the weirdness often remained, an anomaly that just wouldn’t go away.
This is when things start to get interesting.
You’ll have to buy the magazine, or at least check the Wired website in a few weeks, to find out what happens next. (And which brain mechanisms are responsible for our irrational attachment to erroneous theories.) The larger point, though, is that the effectiveness of science has never depended on the inhuman objectivity of scientists. Instead, science works – and it really does work – because of the institutions that help correct for our innate imperfections. Scientists don’t have to be rational, because science is. Here’s Richard Rorty:
There is no reason to praise scientists for being more ‘objective’ or ‘logical’ or ‘methodical’ or ‘devoted to truth’ than other people. But there is plenty of reason to praise the institutions that they have developed and within which they work, and to use these as models for the rest of culture. For these institutions give concreteness and detail to the idea of unforced agreement.