Some thoughts on ClimateGate.

It's quite likely, if you've been reading anything on the internets besides this blog for the past few weeks, that you've already gotten your fill of ClimateGate. But maybe you've been stuck in your Cave of Grading and missed the news that a bunch of emails from the Climate Research Unit (CRU) webserver at the University of East Anglia were stolen by hackers (or leaked by an insider, depending on who's telling the story) and widely distributed. Or maybe you're still sorting out what you think about the email messages in question and what they mean for their authors, for the soundness of the scientific consensus on climate change, or for the responsible conduct of science more broadly.

Honestly, I'm still sorting out what I think, but here's where I am at the moment:

Email messages are frequently written for a private audience, rather than the general public.

Some of the reaction to the released CRU emails apparently focuses on the senders' smack-talk about other scientists. As RealClimate describes it:

Since emails are normally intended to be private, people writing them are, shall we say, somewhat freer in expressing themselves than they would in a public statement. For instance, we are sure it comes as no shock to know that many scientists do not hold Steve McIntyre in high regard. Nor that a large group of them thought that the Soon and Baliunas (2003), Douglass et al (2008) or McLean et al (2009) papers were not very good (to say the least) and should not have been published. These sentiments have been made abundantly clear in the literature (though possibly less bluntly).

In my experience, scientists are at least as prone to bluntness (and profanity) in their private communications as anyone else. If this is a problem, perhaps it speaks to the poor judgment of our scientific forebears in not actively recruiting more proper Victorian ladies to the scientific ranks a hundred-odd years ago when they had the chance.

On the other hand, when one's smack-talk about a fellow scientist that was not intended for public consumption ends up being available for public consumption, one might at least have the grace to be apologetic about the style, if not the substance, of the critique.

Indeed, if that critique has to do with shortcomings in the scientific results of the subjects of the smack-talk, there is arguably an obligation to bring the criticism to a public space (where the relevant public includes at least the scientific community to which the smack-talkers and the smack-talkees belong) and to fully explore the substantive reasons why particular scientific data, methods, or conclusions are judged deficient.

If the smack-talk is focused on body odor and annoying mannerisms, the scientific community is probably OK leaving it in private email correspondence.

Expressing unfavorable opinions of others may be mean, but it doesn't necessarily amount to a well-orchestrated conspiracy to disseminate lies.

From RealClimate again:

More interesting is what is not contained in the emails. There is no evidence of any worldwide conspiracy, no mention of George Soros nefariously funding climate research, no grand plan to 'get rid of the MWP', no admission that global warming is a hoax, no evidence of the falsifying of data, and no 'marching orders' from our socialist/communist/vegetarian overlords. The truly paranoid will put this down to the hackers also being in on the plot though.

Instead, there is a peek into how scientists actually interact and the conflicts show that the community is a far cry from the monolith that is sometimes imagined. People working constructively to improve joint publications; scientists who are friendly and agree on many of the big picture issues, disagreeing at times about details and engaging in 'robust' discussions; scientists expressing frustration at the misrepresentation of their work in politicized arenas and complaining when media reports get it wrong; scientists resenting the time they have to take out of their research to deal with over-hyped nonsense. None of this should be shocking.

Some questions have been raised about whether the CRU emails indicate that particular pieces of scientific work from particular scientists were suppressed (or whether concerted efforts were made to suppress them, even if these efforts did not succeed); more on that in the context of responsible conduct of research in a moment. However, on the basis of even the most damning evidence from these emails, many have noted that there is ample reason to believe that the scientific agreement on the reality of anthropogenic global warming is still solid. For example, Chris Mooney writes:

Those of us who think this is all smoke and no fire are starting from the following position: There is a massive body of science, tested and retested and ratified by many leading scientific bodies, showing that global warming is real and human caused. So then we pose the following question: What would it take for "ClimateGate" to significantly weaken this body of evidence in a serious way?

Let's say, just for the sake of argument, that all of the worst and most damning interpretations of these exposed emails are accurate. I don't think this is remotely true, but let's assume it.

Even if this is the case, it does not prove the following:

  1. The scientists whose emails have been revealed are representative of or somehow a proxy for every other climate scientist on the planet.
  2. The studies that have been called into question based on the emails (e.g., that old chestnut the "hockey stick") are somehow the foundations of our concern about global warming, and those concerns stand or fall based on those studies.

Neither one of these is true, which is why I can say confidently that "ClimateGate" is overblown -- and which is why I've never been impressed by systematic attacks on the "hockey stick." Even if that study falls, we still have global warming on our hands, and it's still human caused.

My sense is that the climate skeptic commenters we're seeing aren't actually familiar with the vast body of climate science work out there, and don't realize how most individual studies are little more than a drop in the evidentiary bucket. It is because of the consilience of evidence from multiple studies and fields that we accept that climate change is human caused, and it is because of the vast diversity and number of scientists, and scientific bodies, who find that evidence compelling that we talk of a consensus.

Of course, not being part of a worldwide conspiracy doesn't mean you're not a jerk. Nor, for that matter, does it mean that you're a good scientist. In fact, it strikes me that there are plenty of ways that being a jerk can lead to the erosion of trust within the scientific community, making it harder for individual members of that community to coordinate their efforts, engage with each other's work meaningfully (whether through peer review or post-publication discourses), and so forth. I do get the sense, though, that there may have been some pretty substantial gaps in trust between different camps of climate scientists before these emails ever came to light.

Trying to interfere with peer review is always a bad call.

What makes scientific findings into scientific knowledge is that these findings can -- and do -- stand up to the careful scrutiny of other scientists looking for ways they could be wrong. Being able to prove it (to a reasonable degree of confidence) to an audience of skeptical peers is how you know it.

Derek Lowe explains the problem like this:

You have talk of getting journal editors fired:

This is truly awful. GRL has gone downhill rapidly in recent years.
I think the decline began before Saiers. I have had some unhelpful dealings with him recently with regard to a paper Sarah and I have on glaciers -- it was well received by the referees, and so is in the publication pipeline. However, I got the impression that Saiers was trying to keep it from being published.

Proving bad behavior here is very difficult. If you think that Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU channels to get him ousted. Even this would be difficult.

And of trying to get papers blocked from being referenced:

I can't see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow - even if we have to redefine what the peer-review literature is !

Two questions arise: is this defensible, and does such behavior take place in other scientific disciplines? Personally, I find this sort of thing repugnant. Readers of this site will know that I tend to err on the side of "Publish and be damned", preferring to let the scientific literature sort itself out as ideas are evaluated and experiments are reproduced. I support the idea of peer review, and I don't think that every single crazy idea should be thrown out to waste everyone's time. But I set the "crazy idea" barrier pretty low, myself, remembering that a lot of really big ideas have seemed crazy at first. If a proposal has some connection with reality, and can be tested, I say put it out there, and the more important the consequences, the lower the barrier should be. (The flip side, of course, is that when some oddball idea has been tried and found wanting, its proponents should go away, to return only when they have something sturdier. That part definitely doesn't work as well as it should.)

So this "I won't send my work to a journal that publishes papers that disagree with me" business is, in my view, wrong. The East Anglia people went even farther, though, working to get journal editors and editorial boards changed so that they would be more to their liking, and I think that that's even more wrong. But does this sort of thing go on elsewhere?

It wouldn't surprise me. I hate to say that, and I have to add up front that I've never witnessed anything like this personally, but it still wouldn't surprise me. Scientists often have very easily inflamed egos, and divide into warring camps all too easily. But while it may have happened somewhere else, that does not make it normal (and especially not desirable) scientific behavior. This is not a standard technique by which our sausage is made over here. ...

And that brings up an additional problem with all this journal curating: the CRU people have replied to their critics in the past by saying that more of their own studies have been published in the peer-reviewed literature. This is disingenuous when you're working at the same time to shape the peer-reviewed literature into what you think it should look like.

I tend to agree with this take. Pointing to peer review as an objective measure of scientific quality cannot work if you've got a thumb on the scale.

But, I think there is a related question here which may come up more generally, even if it does not apply here: What are your options if a journal editor seems to be interfering with the peer review process?

Surely just as there are faulty spectrometers, there may also be unfairly biased editors, and they may skew what journals publish. Do you have to submit to them anyway? Can you mount arguments (publicly? privately?) that make the case that a particular editor is biased or isn't playing fair in the exercise of her editorial duties? Can you elect to submit your work instead to journals whose editorial processes seem more fair (or at least more transparent)? Is there something not-quite-ethical about avoiding particular problematic journals either as venues to submit your own work or as sources of work in your field that ought to be taken seriously?

Because I have a feeling that there might be a backstory on some of these journal discussions in the CRU emails.

Still, as Chad puts it:

Back-channel maneuvering to scuttle papers from people you consider kooks is not a good thing, no matter how noble your cause.

The best move, of course, is to let the kooks make their case and give the scientific community the time and evidence to conclude that they are kooks.

If you don't thoroughly document your code, no one but you will have a clear understanding of what it's supposed to do.

Actually, if you don't thoroughly document your code, you yourself, at a later moment in time, might not have a clear understanding of what it's supposed to do. (Then there's the question of whether, when executed, it actually does what it's supposed to do, but as far as I can tell, that's not a central issue in the discussions of ClimateGate.)
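
To make that concrete: here is a minimal sketch, in Python with invented names (the CRU code was not written in Python, and this is not their code), of what documenting the "what" and the "why" of a correction might look like:

    import numpy as np

    def apply_decline_correction(proxy, instrumental, start_index):
        """Replace the tail of a proxy series with instrumental values.

        Why: suppose the proxy is known, from published work, to diverge
        from instrumental measurements after some date; past that point
        the proxy values are considered unreliable, so the instrumental
        values are substituted. Recording the what and the why here means
        a later reader -- including the original author -- can tell what
        the code is supposed to do, and judge whether it is appropriate.
        """
        corrected = np.array(proxy, dtype=float)  # copy; leave input intact
        corrected[start_index:] = instrumental[start_index:]
        return corrected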

Some of this documentation might explain what kind of "trick" or "VERY ARTIFICIAL correction" you are applying in your computations, and more importantly, why applying it is appropriate. Josh Rosenau discusses the reasoning behind the "trick" revealed in the CRU emails:

Part of the fuss arises from a single line in one email which refers to using a "trick" to "hide the decline." Deniers try to claim that the "decline" in question is a decline in global average temperature since 1998, despite the fact that statisticians can find no such decline. In fact, the "decline" discussed in the email is an artifact of certain temperature proxies, which have shown a decline in their estimate of regional temperature compared to instrumental measurements (which is to say, thermometers). Since those data are known to be erroneous, the scientists have determined standard ways to represent the real data and to set aside the bogus data. This is what the scientist is referring to as his "trick."

Below you see the different datasets used to construct the temperature record for the last thousand years, with the green line showing northern hemisphere tree-ring data, the black line showing thermometer measurements, and the other lines representing various other proxy measures. As you can see, some tree-ring data are just wrong in the time since 1960, so the scientists substitute the thermometer record for the bogus tree-ring record in graphing the results. As CRU explains, "CRU has published a number of articles that both illustrate, and discuss the implications of, this recent tree-ring decline." The goal is not to hide the data, but to accurately represent the real state of global temperatures.
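
In code, the substitution Rosenau describes might look something like this sketch (synthetic stand-in data and invented names; an illustration, not the actual CRU code):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    years = np.arange(1000, 2000)
    # Synthetic stand-ins for the real series, purely for illustration.
    proxy = rng.normal(0.0, 0.1, years.size).cumsum() / 10
    instrumental = proxy + rng.normal(0.0, 0.02, years.size)

    # The tree-ring proxy is known (and published) to diverge after 1960,
    # so it is plotted only up to 1960; the instrumental record, trusted
    # from roughly 1850 on, carries the curve forward.
    plt.plot(years[years < 1960], proxy[years < 1960], "g-",
             label="tree-ring proxy (to 1960)")
    plt.plot(years[years >= 1850], instrumental[years >= 1850], "k-",
             label="instrumental record")
    plt.xlabel("year")
    plt.ylabel("temperature anomaly")
    plt.legend()
    plt.show()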

Meanwhile, Tim Lambert has a look at the code that is supposed to be so incriminating and argues that it shows no evidence of deception or even a plausible attempt to deceive. And Phil Plait explains more about what scientists mean when they talk about "tricks":

I am a scientist myself, and I'm familiar with the lingo. When we say we used a "trick" to plot data (as one of the hacked emails says), that doesn't mean we're doing something to fool people. It means we used a method that may not be obvious, or a step that does something specific. Plotting data logarithmically instead of linearly is a "trick", and it's a valid and useful method of displaying data (your senses of sight and hearing are logarithmic, for example, so it's even a natural way to do things).

Of course, an ethical scientist will identify and explain these "tricks" when reporting the measurements or calculations made with them.
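
Plait's log-plot example is easy to make explicit. A hypothetical two-panel sketch: the same data on a linear axis and on a logarithmic one, with the "trick" stated right where it is used:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.arange(1, 11)
    y = 2.0 ** x  # exponential growth

    fig, (lin_ax, log_ax) = plt.subplots(1, 2)
    lin_ax.plot(x, y)
    lin_ax.set_title("linear axis")  # early values flatten into the floor
    log_ax.plot(x, y)
    log_ax.set_yscale("log")         # the "trick": exponential data plots as a line
    log_ax.set_title("log axis")
    plt.show()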

Maybe there's reason to worry that scientists are drawing conclusions from motley data sets from which they've had to excise bogus data and replace them with their best estimates of the real data. Clearly, it would be a happier situation if there were no bogus data in need of correction. But again, if scientists are completely transparent about which data were chucked out and why, and about what sorts of corrected data they used and why these are plausible, then other scientists can exercise their critical faculties in evaluating both the methods and the conclusions drawn from them.

It's a good thing to keep your original data.

If other scientists question your result (or are still trying to figure out how your scantily documented code actually works), being able to give them access to the original data is a much better option than having to say, "Just trust me."

Intentionally destroying original data (something that is alleged to have happened in the CRU case) makes it look like you're hiding something, or at least trying to obstruct your competitors' progress.

Proprietary data makes it much harder for other scientists to do quality control on your work.

If you can't release sought-after data because it's not yours to release, it may leave you in the position of having to tell a fellow scientist who has questions about your results, "You'll just have to trust me" -- at least until the date when you are permitted to release the data publicly.

There may be some larger lesson we want to draw here about the wisdom of proprietary data versus fully open data.

Folks may judge you by your behavior, not just your data.

If those folks include the broader public, and if the science you're doing has recognizable implications for their lives, they might even judge the reliability of your data by your behavior.

This might seem totally unfair, but that's how it goes. (If your apparent behavior is deceptive, it's not even that unfair for folks to question your data.) As Chad notes:

Anybody in the field has had more than ample warning that they need to pick up some people skills, because mass media and public perceptions are going to be hugely important. You can't hide behind "I'm just a nerdy scientist" any more. If you really don't have anybody in your lab with communications skills -- which I highly doubt, by the way -- then hire somebody who does. ...

This could've been avoided if the people involved had half a clue about what they were doing. The belief that science is somehow above (or at least apart from) petty issues of perception and communication leads directly to this sort of catastrophe.

To the extent that getting the science about climate change right matters -- not just to climate scientists but to all earthlings -- scientists need to be on their very best scientific behavior here. There just isn't room for the sloppiness that might otherwise be tempting, nor for the faith that the self-correcting nature of science will catch all the serious problems in the fullness of time. Scientists need to be active instruments of scientific self-correction here, scrutinizing the results of their fellow scientists and their own results. They need to be very transparent about their methods, to share their data as widely as possible within the scientific community and manage that data so it remains usable in future research, and to conduct their scientific disagreements in public spaces so that they can be decided on the firmest scientific grounds available.

Basically, to earn the public's trust here, climate scientists need to be as squeaky clean in their conduct as Tiger Woods was presumed to be until he ran into that fire hydrant. And since scientific knowledge is premised on honesty and transparency, the climate scientists have a steeper climb to reestablish trust than Tiger does.

One thing that annoys me is this sort of relaxing of standards about laypeople.

Why don't we expect laypeople to evaluate us by our data, not our attitude? Why don't we chide them when they argue fallaciously?

People skills are important -- but this sort of message, that it's somehow okay for them to be stupid emotional deciders and that 'oh, we have to capitulate to them!', obscures the fact that a huge part of the problem lies with the public.

It's not entirely our problem.

By Katharine (not verified) on 09 Dec 2009 #permalink

But, I think there is a related question here which may come up more generally, even if it does not apply here: What are your options if a journal editor seems to be interfering with the peer review process?

The frau ran into this problem a few years back when she was a postdoc. When she attempted to publish a significant finding in a notable journal (not top tier, but among the best for her field), the journal editor wanted to override the reviewers' decision (publish as submitted). Why? Because he had an ongoing feud with the last author. She had to go to her society, whose president told the offending editor that his position was dependent on accepting the judgment of the reviewers when it was clear they were making a true best effort, free of bias. (His feud was known in the circles.)
This was not the first such incident, nor shall it be the last. Her work challenges the prevailing paradigm, and her submissions to Nature and Science have been blocked by scientists whose work she "undermines." The ironic part is that the journal in the first instance readily accepted the work and placed it on the front cover. Now that she's a PI, she is a regular reviewer for that journal too.

By Onkel Bob (not verified) on 09 Dec 2009 #permalink

The issue of journal editors being deliberately biased is neither new nor uncommon. Many years ago, when I was a scientist, a colleague remarked that though he would submit to a particular journal, he did not expect his paper to be published because of the known bias of the editor, and had prepared to send it to his second, much less prestigious choice as soon as he got the rejection notice. This was a discussion in the tea-room and was followed by rude remarks about the editor in question from others around the table.

This was normal chat among scientists in the same field, but should my colleague have kept quiet, just in case the room was bugged and his remarks leaked to the public? Your remarks here suggest that he should.

You seem to be suggesting that scientists should not communicate informally among themselves using the jargon ("trick" is just such jargon) and common understanding of the field lest someone, not party to the conversation, misunderstand what is being said.

By Keith Harwood (not verified) on 09 Dec 2009 #permalink

Thank you for a thoughtful post.

From the National Academy of Sciences, here is the crux of ClimateGate (assuming the reader is familiar with the FOIA and peer review aspects, and the "lost" source data aspects of the issue):

Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age. National Academy of Sciences. ISBN: 978-0-309-13684-6.

http://www.nap.edu/html/12615/12615_EXS.pdf

Data Access and Sharing Principle: Research data, methods, and other information integral to publicly reported results should be publicly accessible.

Recommendation 5: All researchers should make research data, methods, and other information integral to their publicly reported results publicly accessible in a timely manner to allow verification of published findings and to enable other researchers to build on published results...

Data Stewardship Principle: Research data should be retained to serve future uses.

Data that may have long-term value should be documented, referenced, and indexed so that others can find and use them accurately and appropriately. Curating data requires documenting, referencing, and indexing the data so that they can be used accurately and appropriately in the future.

Recommendation 9: Researchers should establish data management plans at the beginning of each research project that include appropriate provisions for the stewardship of research data.
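
To make the stewardship principle concrete, even a minimal machine-readable record of provenance kept beside each data file would go a long way toward Recommendation 9 (a hypothetical sketch, not from the report):

    import json

    # Hypothetical metadata sidecar: enough provenance that the series
    # can be found, cited, and re-used appropriately later.
    metadata = {
        "dataset": "station_temperatures.csv",
        "source": "national meteorological agencies (raw station records)",
        "processing": "gridding and homogenization; see published methods",
        "restrictions": "some source records carry no-redistribution terms",
        "retained_by": "original providers; processed copy archived locally",
    }
    with open("station_temperatures.meta.json", "w") as f:
        json.dump(metadata, f, indent=2)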

The solution to this problem -- as with so many others -- is honesty.

At the minimum, the CRU utterly failed in the Data Access and Data Stewardship principles.

Good post.

One point:
"Email messages are frequently written for a private audience, rather than the general public."

Anyone who sends email counting on it remaining private is (at best) incautious. If the subject is of intense public interest, then the sender is downright foolish.

This has nothing to do with the Internet, by the way. When I joined a large multinational IT firm in 1969, I heard about the "mom test" - if you would not be happy to have your mom read it in her local newspaper, don't write it.

By Scott Belyea (not verified) on 09 Dec 2009 #permalink

At the minimum, the CRU utterly failed in the Data Access and Data Stewardship principles.

This is often said in regards to "climategate" but it is a misunderstanding. The CRU gets data from a number of sources. Some of those sources are national meteorological agencies. Some of those agencies sell their data and, because they sell the data, include "no redistribution" clauses with the sale. It is those data, data for which the CRU had no right to redistribute, that the CRU did not redistribute.

In addition, the data the CRU deleted some 20-25 years ago when data storage was very expensive, was a copy of original data, not the original data itself (which was then, and still is now, held by the original data providers).

Let me add to the "good post" chorus. After reading a lot of the heated words on the issue it is refreshing to get a calm and thoughtful perspective.

The peer review interference issue is taken out of context. Essentially the journal, Climate Research, had a lax editor and a rogue associate editor who managed to get at least one very poor (i.e. scientifically incorrect) article through to publication.

More generally, what is a scientist's proper course of action when they see a journal in their field is printing papers that contain bad science, or that perpetuate known errors?

The criticisms about the papers in question were happening in public spaces as well as in the e-mails. They were more polite in public spaces, but the issues with the papers were discussed in places accessible to the general public (including several climate blogs, including RealClimate), at least. (I don't remember whether they were also discussed in comments-and-replies or not. The traditional ways to discuss problems with papers are slow compared to the news cycle, however.) But the discussion of problems with the papers got a lot less attention than the papers did - I think that experience is part of the frustration expressed in the e-mails.

I think the issue of journal editors is complicated because there are very few geoscientists who don't have strong opinions about climate change science, even if they have no expertise in the topic. You want someone to be a fair, neutral referee, but I think that person may be difficult to find.

I'm glad you were civil and fair enough not to mention or criticize the people who committed felonies by breaking into the CRU computers and stealing all of these private emails; and committed felonies by trying to hack into realclimate.org and put these files up on their front page.

Katharine @1:
Why don't we expect laypeople to evaluate us by our data, not our attitude? Why don't we chide them if they act fallacious?

I think it's about the gap between "is" and "ought". Ideally, non-scientists would be hip enough to the rules of scientific engagement to avoid fallacious evaluation (and I've written before about the standards I think non-scientists could live up to). Realistically, it doesn't always happen that way. Modeling above-board engagement with other scientists (rather than crappy behavior toward them) removes a variable from the system -- one that might otherwise be taken for the driver of scientists' decisions.

Keith Harwood @3:
You seem to be suggesting that scientists should not communicate informally among themselves using the jargon ("trick" is just such jargon) and common understanding of the field lest someone, not party to the conversation, misunderstand what is being said.

Not at all. Communication is essential, and jargon is efficient. The point is that if folks who aren't party to the conversation stumble upon it, and if it has something to do with a topic that matters a lot to them -- and one where you'd like them to take account of the knowledge you've been working hard to build -- you may need to take pains to explain it to them. And, if they're not completely up on the customs of your tribe of scientists, it may take some convincing for them to see that your disputes are really about data or methodology rather than personal animus.

Joe @6:
The CRU gets data from a number of sources. Some of those sources are national meteorological agencies. Some of those agencies sell their data and, because they sell the data, include "no redistribution" clauses with the sale. It is those data, data for which the CRU had no right to redistribute, that the CRU did not redistribute.
In addition, the data the CRU deleted some 20-25 years ago when data storage was very expensive, was a copy of original data, not the original data itself (which was then, and still is now, held by the original data providers).

This is precisely why my thoughts here about data management and sharing were about the broad issues -- it seems like there are lots of particulars in this case that haven't been laid out clearly enough for those of us who are not climate scientists. If all the relevant original data still exists somewhere (and if it can be shared), that's much better than losing data.

Kim @8:
The criticisms about the papers in question were happening in public spaces as well as in the e-mails. They were more polite in public spaces, but the issues with the papers were discussed in places accessible to the general public (including several climate blogs, including RealClimate), at least. (I don't remember whether they were also discussed in comments-and-replies or not. The traditional ways to discuss problems with papers are slow compared to the news cycle, however.) But the discussion of problems with the papers got a lot less attention than the papers did - I think that experience is part of the frustration expressed in the e-mails.

If the substantive points get worked through by the scientific community in public, then private trash-talk strikes me as a pretty innocuous mechanism to release pressure. It's interesting, though, that the larger public might largely ignore the polite but serious public disagreements and only start paying attention when the engagement between scientists starts looking palpably nasty. (I'm not going to blame this on reality television ... yet.)

Douglas Watts @9:
I'm glad you were civil and fair enough not to mention or criticize the people who committed felonies by breaking into the CRU computers and stealing all of these private emails; and committed felonies by trying to hack into realclimate.org and put these files up on their front page.

Yes, I completely sidestepped this question, partly because people have raised some reasonable questions (in the general case, if not in this case specifically) about whether theft of such private emails by someone on the inside might count as whistleblowing under certain circumstances, about what information about the scientific "process" (possibly on display in email communications of this sort) the public might be entitled to under freedom of information laws, and so forth.

Generally, I think if you're going to break a law, there needs to be a very compelling reason to do so -- a worse wrong that your release of information will prevent, for example. I've not seen compelling evidence that that's what's going on here. However, that doesn't mean there might not be some cases where we'd judge an illegal release of this sort of information to be a good thing on balance.

Complications like this are why I'm still thinking about it. My silence about the legal status of the hacking is not an endorsement of it, and if it was taken as such, that's my fault for not being clearer.

Taking some bits in turn:

On the other hand, when one's smack-talk about a fellow scientist that was not intended for public consumption ends up being available for public consumption, one might at least have the grace to be apologetic about the style, if not the substance, of the critique.

"At least have the grace"? I disagree, at least in this case. It's one thing if you accidently hit "reply-all" when you meant "reply" and thus made, in public, a comment whose style was, let us say, "robust". Similarly, if you walk out of a talk with a friend and say, "That talk was *shit*" in what you imagine to be sotto voce but, y'know, you still have the mic on. Then you should apologize and you may have an interpersonal problem.

But you are ranting in your hotel bedroom and someone confronts you later because they heard you say, "That talk was shit" because, you know, they bugged your room? I would hope that both you and the target of the rant (if they aren't the bugger) would go to town on that person.

When I was young, I was obsessed with knowing what people said about me when I wasn't there. That is, their "true" opinion. But what people say in private is by no means clearly their "true" opinions, nor is it relevant *unless* it has other effects. Ranting is ranting. Venting is venting. Neither is incompatible with professionalism. Ranting can be a very useful safety valve that, by catharsis, either gets you to a more measured judgment or keeps you from making a bad public decision.

(It's not all good, of course. Other people can take it too seriously, or be turned off by it. It can cement your negative opinion. Etc.)

Indeed, if that critique has to do with shortcomings in the scientific results of the subjects of the smack-talk, there is arguably an obligation to bring the criticism to a public space

There's a lot of weight on the "arguably" there. It really depends. Not every rantworthy complaint is something which can be usefully or effectively "brought to the light". It can be a severe waste of time. It can prevent you from doing other work. It can be directly or indirectly career damaging.

As I said in a comment on the plagiarism post, there's a real cost/benefit analysis to be done in a lot of these cases, where a large chunk of the cost comes out of one's spirit, morale, enthusiasm, and quality of life. I think it's perfectly reasonable to start shunning certain fora as a way of expressing that, in general, those fora's standards are inappropriate. I think it's appropriate to communicate these judgments to friends, colleagues, and students. I don't think it's required that I write all these up. After all, a large chunk of such critiques are unpublishable, at least in my field. If they are about obviously hideous crap, then I expect my peers to recognize the crap directly. I don't expect them to read a paper saying, "Here's a bunch of obvious, hideous crap."

(There are exceptions, of course. Great takedowns can be useful. But lots of takedowns aren't interesting.)

Personally, I find this sort of thing repugnant. Readers of this site will know that I tend to err on the side of "Publish and be damned", preferring to let the scientific literature sort itself out as ideas are evaluated and experiments are reproduced. I support the idea of peer review, and I don't think that every single crazy idea should be thrown out to waste everyone's time. But I set the "crazy idea" barrier pretty low, myself, remembering that a lot of really big ideas have seemed crazy at first.

...

the CRU people have replied to their critics in the past by saying that more of their own studies have been published in the peer-reviewed literature. This is disingenuous when you're working at the same time to shape the peer-reviewed literature into what you think it should look like.

...
I tend to agree with this take. Pointing to peer review as an objective measure of scientific quality cannot work if you've got a thumb on the scale.

I really, really disagree with this. It's one thing to point out cases where control of the process by entrenched figures is both frustrating and prevents good work from getting published. (That happens. I've had it happen to me. I presume everyone knows that some people get published on the strength of their name well after the work stops being good, and that novel work can face a high barrier to getting published (and more so to getting attention). But that's *part* of peer review, not a distortion of it. Not a great part, but a part.) It's part of everyone's job to assess "how things are going", which includes choosing which journals/conferences to submit to, serve as PC for, recommend, cite, etc. Peer review doesn't end with the review of a paper. If someone incompetent in my judgment is made editor or PC chair, I'm going to complain to the relevant people. I'll, of course, marshal evidence for my complaint. But for known kooks the evidence threshold is pretty low. Track record matters. (E.g., when reviewing certain people I pay more attention to places where I know they've screwed up before.)

So, where is the line between "interfering with peer review" and, well, peer review? When is it putting your thumb on the scale, and when is it appropriately calibrating the scale? What do we do when there are systematic attempts to defeat peer review in order to achieve ideological goals (see intelligent design and climate science)?

Finally, I do object to the implication that 1) the climate scientists need to be "squeaky clean" and 2) that they weren't already pretty damn squeaky clean. At least, we can distinguish the prudential from the moral aspects of this. AFAICT, with the possible exception of some email about FOI, the UEA folks are squeaky clean from a moral perspective. Indeed, their public and private faces seem aligned (as people have pointed out). There is no evidence of fraud or other scientific malpractice. (Frankly, there are a number of scenarios of data *loss* that would be unfortunate, but not indicators of problems. Or not even unfortunate. If you have an instrument capable of gathering a certain amount of data but you only gather a tenth of it (since more would overwhelm your collector and be pointless), have you done wrong?) Prudentially, if you have to sanitize your every writing to exclude words like "trick" when used in a perfectly ordinary way, then I'm not sure any sufficient level of cleanliness is possible. People are not acting in good faith. There's only so much one can do to counter that by being "clean". Anyone can be torn down. Anyone.

Consider the issue of Barack Obama's birth certificate. If *that* can't be settled in the public mind by good behavior (e.g., releasing copies, etc. etc. etc.) then how on earth can we hope that "best behavior" is even relevant to countering antiscience?

Btw, there's a very nice page at RealClimate.org listing publicly available climate data and code. The climate science community seems rather robustly transparent. This is another factor to consider when evaluating the data/code sharing requirements on any specific researcher. If their specific data/code aren't essential to replicate their results, the burden of sharing is diminished.

Joe Wrote >>"Some of those agencies sell their data and, because they sell the data, include "no redistribution" clauses with the sale. It is those data, data for which the CRU had no right to redistribute, that the CRU did not redistribute."

However, the Stewardship and Sharing Principles apply not only to data but also to methodology and process, which CRU pointedly resisted sharing.

The principle makes no provision for the difficulty of this effort. In fact, given the stakes, the failure is unforgivable.

Moreover, the "owners" of the data are ultimately those who comissioned the study. I have paid parties for studies and R&D. If they ever failed to retain the data and the permutations thereof, they would have violated the principle that data and methods is leverageable for future use.

These principles are not new. They have been pillars of research and codified since 1934. Whether the data was on tape, disks, paper binders, or stone tablets is irrelevant.

"Some of those agencies sell their data and, because they sell the data, include "no redistribution" clauses with the sale. It is those data, data for which the CRU had no right to redistribute, that the CRU did not redistribute."

Doesn't work. They've refused to say which data they are using, preventing people from going to the original source.

Also, they have refused to release the code used, and that is completely under their control.

Since they have broken criminal law in the UK on FOI requests, they deserve their comeuppance.

@jallen, if the principle makes no provision for the difficulty of this effort *and* the expected utility, then I'm pretty sure it's a silly principle. Your hyperbolic invocation of the stakes notwithstanding, it's pretty unclear that there is any problematic behavior wrt data handling or sharing. How have CRU pointedly resisted sharing methodology and process? Isn't it pretty clear from the literature what their methodology and process are? Other climate scientists (the first-line judges, a.k.a. peers) seem quite comfortable with the methodology/process sharing going on. Indeed, we need to distinguish between sharing the level of description that is, in fact, required, for which purposes it's required, and how it should be properly enforced. How to construct a temperature record of the sort CRU has is neither unknown nor poorly understood. There are several other such data sets, some of which are publicly available. So there is no mystery here. The datasets are extensively discussed (afaict) in the literature and results of analyses on independent datasets are strongly congruent. (I speak, of course, as a climate layperson who has been following discussions reasonably casually. If I have made a factual error, that could change my conclusions.)

Thus, what's the problem? What's the burden on them to do more? We need to be very concrete about that burden since it's just not the case that they have an arbitrary burden. It's a nonstarter to put superhuman, or even unreasonable, demands on anyone. If we do that, then we will have no science done. Any principle that ensures that no science is done, esp. good (and ethical!*) science, is a (prima facie) bad principle.

So, what scientific wrong are we worried about? It's pretty clearly not what's the facts of the matter, as the general results are confirmed by multiple independent sources (and, indeed, as some people e.g., on RealClimate have pointed out, not sharing the exact data encourages independent replication, which produces better confirmed results). But then, what?

Fraud? There's no credible evidence of fraud that I know of. Consider different sorts: They could have stolen some other dataset, munged it slightly, and passed it off as their own. I believe *history* refutes that (i.e., the historical appearance of different datasets). Furthermore, public access isn't necessary to investigate that.

Gross error? There's no credible evidence of that either. While we can imagine a more detailed audit trail (let's do full recording, including video, of everything!) I'm hard pressed to imagine that preserving or analyzing such an audit trail is remotely useful. To anyone. And it doesn't seem necessary. Given independent development, confirmation by prediction and retrodiction, testimony by individuals, the existing audit trail, etc. etc. there is no reason to believe that it's grossly, or even minorly, erroneous (except for known bits like not including Arctic bits, which makes it the coolest of the major datasets; known and discussed).

Violation of the public trust? No. Just because they are funded with public money does not mean that they or their university must release all their data. Their obligations depend on the particular terms of the contracts. For example, most (I think) EPSRC funding allows the university to retain rights to the IP they generate on EPSRC-funded grants. (In the States, SBIRs are designed to fund IP owned by companies! They are *subsidies*.)

Being bad or unhelpful scientists? Impeding progress on climate science? Where exactly is the evidence of that? As far as I can tell, the CRU bunch are held in the highest esteem by their peers and colleagues. I suppose they could all be deluded or colluding, but that should be a *conclusion* that is reached on the basis of quite a lot of evidence. Which has not yet been presented, afaict.

Bad PR? Well, maybe. I'm unconvinced that there was a substantively better position for them to have taken. (Yes yes, if they had all been ubertemperate and circumspect in their personal email, some wedges would be harder to exploit. But you then have to ignore the harassment they were under. And if we have to forgo normal words like "trick", then we're screwed. Just screwed.)

Violation of FOI law? @Nick, actually, there is no evidence, afaik, that actual law breaking occurred. I don't believe there's a criminal investigation of the scientists. If they did actually break the law, then that should be dealt with.

Obviously, if they committed fraud, that should be dealt with! But, the evidence is pretty thin to nonexistent.

@jallen, you really cherry picked from that document. Right after the Data Access and Sharing principle, we have:

Although this principle applies throughout research, in some cases the open dissemination of research data may not be possible or advisable. Granting access to research data prior to reporting results based on those data can undermine the incentives for generating the data. There might also be technical barriers, such as the sheer size of datasets, that make sharing problematic, or legal restrictions on sharing as discussed in Chapter 3.

Indeed, the whole document is quite good. It affirms the ideals, while talking sensibly about how to balance them. It points to the historical changes we face and the difficulty in adjusting to them.

Here's an interesting quote (from after Recommendation 6):

If researchers are to make data accessible, they need to work in an environment that promotes data sharing and openness.

While this was in the context of encouraging institutions to appropriately reward data sharing, I think it applies to making the environment less hostile. Part of what drives the climate researchers nuts is that the data requests are transparently in bad faith. They go through the effort to release stuff and none of the requesters use it. If they don't package things up in the most user accessible way possible, they are trashed. They don't just have to prepare stuff for wide consumption, but they have to harden it against distortion. How does that serve any legitimate interest?
I think your reading is harsh and unwarranted by the document itself.

*Other values, such as respect for persons, constrain what kind of science we can do. E.g., there are loads of clinical trials that are simply forbidden, and rightly so. These don't have to be monstrous! If we have moral certainty that one treatment is better we're obliged (with all the right caveats) to give it to the control group as well even if that means we don't get our statistical significance. That's a totally different ballgame, however. Here we're talking about data sharing as either an intrinsic good or where there is no conflict with extra-science progress or practice values, other than possible public ownership of the results.

Let's just ignore the emails, the code, the enhanced data, and just take the final finding at face value. I mean, why actually verify anything, when a self-selected peer review of the unreproducible study has already been performed.

Stupid to even think about it, really.

@Bijan

Bijan -- Thank you for your thoughts. I did not cherry-pick; I cited the link to the entire document in my post. The quotations were for general information, in the interests of brevity on a comment blog.

Speaking of brevity, my time precludes a more lengthy response, although I thoroughly enjoy the discussion. At the risk of being accused of further "cherry-picking" ;) allow me to address some of the issues you raised.

First, Jones at CRU admitted deleting data accidentally. The data is not voluminous, and similar data is readily available for FTP download. Retention, therefore, is not difficult. I will wager that the UEA investigation will find fault with CRU's data retention.

Regarding methodology and process, I disagree with your contention. Slow-walking FOIA requests which have been vetted by the attendant ombudsman (and deemed not to be requests in bad faith from cranks) is de facto proof.
I will not debate the importance of source code, adjustments, regression, and statistical algorithms, which are as important to the conclusions as the data. These algorithms were never disclosed.

Granted, data owned by third parties and covered under NDA are problematic, but not insurmountably so. I may have missed their reference to the owners of that portion of the data, but if specific citation was not made then that, too, is an oversight. The other issues are not at play here, as issues of intellectual property appear not to arise in publicly funded research.

I am interested in this from a scientific perspective. I assiduously avoid making the more volatile implications you raise in your comment. I am interested solely in the scientific practice.

Part of the issues relating to scientific practice is the nature of the peer review that you rely upon quite heavily. I am most interested in the outcomes of the investigations, should they look into whether the researchers had control of their own review process. There are some indications that it was a closed-loop system of several dozen researchers in an incestuous, self-affirming academic relationship. I do not know the players, but I believe the CRU investigation will look into the degree of disclosure to peer reviewers and the process itself.

I will abide by the findings of the investigations, although, again, I will wager that the findings will be absolutely unfavorable on the retention issue, likely unfavorable on the sharing principle, and a small chance it will be unfavorable on the peer review issues.

Finally, it is not hyperbole to note the stakes of the outcome.

Here is an interesting reconstruction of the emails in the order they were written. If you believe this reconstruction (and you can check it, of course, against the leaked emails), then it seems plain enough that "the trick" and "hide the decline" were not innocuous at all, but were intended to obscure and cover up problematic data in order to better sway opinion leaders.

http://climateaudit.org/2009/12/10/ipcc-and-the-trick/

(This is a great post. I find it interesting how so many different sources fall out on this issue. I'm of the view that AGW is probably real, but wow, there were some really bad and unethical actors involved at the CRU, and it's very reasonable and understandable for the public to be appalled, dismissive, and distrustful of all scientists based on that behavior.)

@jallen,

Thanks for your response.

Forgive me if I find your follow-up more measured than your original comment.

Re: Cherry picking. Mere citation of the whole is not a defense; it's possible to cherry pick and yet cite the whole. While citing the whole opens you up to discovery of cherry picking (and is thus praiseworthy -- thanks! it was an interesting document to peruse), the fact of cherry picking or selective quotation is independent of citation. Brevity of circumstances also doesn't provide a defense, esp. when your conclusions (which are very strong) depend on the pick. If you had cited the principles as ideals and called for more funding and a (non-judgmental) shift in culture (e.g., different citation principles, impact measures, etc.), then your appeal to the principles would have been in line with the overall presentation in the document.

(Please note that I do not accuse you of malicious cherry picking in any way.)

Could I get a citation on the admission of accidental deletion? This is, of course, a prima facie issue, though whether it is severe or minor is quite a different matter. The document you cite acknowledges that the culture of data retention and sharing varies across time and discipline. It's one thing to say, "I want to move all scientific cultures to a retain and share everything model" (a goal I endorse) and another to say "These guys so violated a basic principle that they are worthy of great censure" and yet a third to say, "The standards of this field were not so great and much needed reform is on the way."

Re: methodology and process. I'm confused by your use of "de facto". Did you mean "prima facie"? That (at least some of) the requests were denied through an escalation process seems to muddle the picture.

You will not debate some stuff. Ok. I don't see where I denied that algorithms, etc. are important. But I do deny, at least on prima facie grounds, that they were never released. It is not a requirement that one share one's actual source code (or reagents, or equipment) in all instances. (Though that is changing.) Again, you have to provide a specific charge of scientific malpractice for us to evaluate the issue. If the charge is fraud (even if their answers are "right") then some investigatory body will want as much access at a sufficiently deep level to, one hopes, confirm or disconfirm the charge. (No need for public access, per se.) If the record is not very good, then the charge may be neither confirmed nor disconfirmed -- a bad outcome for the scientists (if honest) and not ideal for the dishonest. None of this touches the general science, however, as there is alternative support.

(And it's not always the case that deep historical audit is necessary. There can be data markers that reveal copying.)

If the charge is that the specific science is bad, i.e., the conclusions are basically right but right for the wrong reasons, then access to the internal workings of that lab are probably not necessary, as replication will do the job. If the published papers are not sufficient for replication (or at least analysis of the methodology) then that is a problem with the published papers, and is (one expects) determinable from the public record.

If the charge is impeding progress...ok, now I'm definitely repeating myself. But the burden is on you to 1) present a charge and 2) present credible evidence independent of the "mere" lack of release of everything that the charge is plausible.

This is really basic. It is important when evaluating scientific practice to ask what metrics you are using and toward what conclusion you are driving.

Re: your not making the "more volatile implications"...but these are the heart of understanding the principles you appeal to, and the problems, if any, of violating them. The principles are heavily instrumental; that is, we like them because of their presumed effects. If one withholds data to hide fraud, the first-order problem is the fraud. You are presenting data retention and sharing as sui generis requirements (AFAICT: "the solution...is honesty", "utterly failed", "the principle makes no provision for the difficulty of this effort" <-- this last seems utterly untrue, not just cherry picking. The document makes clear that the burden does not fall on scientists alone and clearly points to the need for systemic changes.)

And perhaps, by trying to drill down on the direct scientific issues (was there fraud? impedance of progress?), they would evaporate (from their volatility :)). But that's why it's important to figure out the space of implications.

I, too, am interested in scientific practice. But then a more nuanced approach is required to figure out what was going on and what, if anything and to what degree, was wrong. One could argue that working with access-restricted data from third parties wasn't a good choice, in the long run, but it's definitely defensible. Similarly, a storage-limitation-driven decision to discard intermediate work is worth pondering, but it seems defensible.

Re: the possible "closed loop, incestuous, self-affirming" (scare quotes) academic relationship. Many academic relationships in, for example, specialized fields are close. Some open, non-incestuous relationships are self- (or mutually) affirming, esp. over the relatively short run. This is part of the "peer" in "peer review".

And it is hyperbolic to point to the (very serious) stakes, which do not plausibly rest on the outcome of this investigation. For the stakes to rest on it, two things have to be true: 1) that the judgment that leads to one action or the other rests strongly or critically on the particular activities in question, and 2) that the judgment based on the conclusion of those activities is the wrong judgment. Neither of these is true, afaict, nor, at this time, plausible. AGW conclusions do not rest solely or critically on CRU's work at this point. There is no evidence of widespread fraud across the many, many people, in climate science and out, working on the issue.

One concern I have is that the nature of the investigation (and the hoopla surrounding it) will lead to worse overall practice. Aside from people leaving the field, or not entering it, judgment suffers under pressure.

@anon, sorry, I didn't get that from the Climate Audit post. I still haven't worked through the chart parts, but the first bits claim things to be smoking guns which just aren't. Consider the "fodder" email. The bits quoted seem perfectly in bounds. When writing for a non-scientific audience, you always have to balance giving as complete a picture as possible with presenting an intelligible picture. It's certainly possible to oversimplify to the point of distortion, or to distort while claiming one was merely editing for intelligibility, but neither is in evidence here (pace issues with the graph that I've not explored).

Explaining and explaining away anomalies is a standard (and sometimes fraught) part of science.

I don't think it's reasonable, though it is perhaps understandable, for the public to be distrustful of all scientists based on any few scientists' behavior, much less this behavior. Part of that understanding comes from the rather severe smear jobs that are going on. There have been famous cases of fraud in, for example, physics. Outright fraud. Does that reasonably undermine confidence in all scientists?

Hi Bijan - Any omission was not malicious, and I understand the point. Although, in a blog response, circumstances do provide a defense, as formality is lax in this setting. I'll be more careful if this blog has different standards, lest I be accused of the same charge that might be leveled at CRU!

The principles are indeed ideals, although it is unclear at this point whether funding, culture, or other factors are at play.
I share the goal you endorse, hence my post. We agree that the conduct of CRU is worthy of further scrutiny. The so-called "transgressions" may prove rather minor and easily remedied. However, the information that I have examined leads me to stick to my wager.

Regarding the FOIA requests: I have read credible sources [citation needed but not available as I am travelling] that UEA's administrator/ombudsman directed CRU to respond. I used the term de facto to mean without sanction of legal proof. I do not dispute that some requests were indeed specious and rightfully denied. I do not agree that this muddies the issue, as one valid FOIA request is all it takes.

I tend to support the stance that statistical and regression analysis relying, as its subject, on interpolated data requires evaluation of the algorithms. I agree that public access, per se, is not required. (Although, given the histrionics associated with CRU, a more open evaluation is likely unavoidable.) Because the data is interpolated, access to the algorithms as well as the data is important. This goes to both the retention and "sharing" principles. Again, the investigations, I hope, will provide definitive answers regarding the issues you've mentioned. They may prove inconclusive.
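To make the algorithm point concrete, here is a minimal sketch (station positions and temperatures invented for illustration, not CRU's actual method or data): the same raw readings yield different gridded values under different interpolation schemes, so evaluating the algorithm matters as much as inspecting the data.

```python
xs = [0.0, 10.0, 20.0]    # hypothetical station positions
ts = [14.2, 15.1, 13.8]   # hypothetical temperatures at those stations

def linear_interp(x):
    """Piecewise-linear interpolation between the bracketing stations."""
    for (x0, t0), (x1, t1) in zip(zip(xs, ts), zip(xs[1:], ts[1:])):
        if x0 <= x <= x1:
            return t0 + (t1 - t0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside station range")

def nearest_interp(x):
    """Take the value of the nearest station."""
    return min(zip(xs, ts), key=lambda p: abs(p[0] - x))[1]

# Same data, different algorithms, different interpolated values at x = 7.
print(linear_interp(7.0))   # -> ~14.83
print(nearest_interp(7.0))  # -> 15.1
```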

You are astute in noticing that I used some seemingly inflammatory language while, in the same breath, not wishing to engage in these "more volatile implications." This was deliberate. At its best, the issue is merely a contretemps, but at its worst it potentially rises to the "unforgivable." ;) That is why the objectives and metrics of the investigation must be defined. However, I disagree that we need to define toward "what conclusion you are driving," lest we "investigators" be accused of exactly what the harshest critics of CRU are contending!

I disagree with your contention that "storage-driven" limitations make the deletion of work defensible. The data is just not that voluminous. Moreover, that data may be leverageable for CRU's future work. We have all sighed with relief and said "I'm glad we saved that." I am confident the CRU will take a hit in the investigation on data retention.
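A back-of-envelope sketch of why I say the data is not voluminous (all numbers assumed for illustration, not CRU's actual holdings):

```python
stations = 5000          # assumed station count
years = 150              # assumed record length
readings_per_year = 12   # monthly means
bytes_per_reading = 8    # one double-precision value per reading

total_mb = stations * years * readings_per_year * bytes_per_reading / 1e6
print(total_mb, "MB")    # -> 72.0 MB, a modest volume by modern standards
```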

We understand that the CRU is but a member (perhaps the most important and influential one) of the wider climate science community. For this reason, a critical outcome will have far-reaching implications. I remain unconvinced by your contention regarding the stakes.

Whether the nature of the investigation harms the field or scientific practice remains to be seen. Regardless of the outcome, the mere fact that there is an engaged, public dialogue will help laymen, researchers, and institutions alike. On this, I may actually be more optimistic than you!

Hi Jallen,

I don't want to belabor the point, but I don't think it's a matter of formality or informality. You set up the principle to be basic and inviolate and drew strong conclusions that are not, afaict, warranted by the document as a whole. Whether formal or informal, that's not helpful.

Re: "What conclusion you are driving". Rephrase it as "What hypothesis you are testing." Without a reasonable hypothesis, this is a fishing expedition and we know those are dangerous. Even exploratory science isn't blind search. Simple resource allocation issues makes that infeasible. (As Kitcher said, truth is cheap, we're after interesting truth.)

I'd be much more comfortable if this incident were driving calls for better sharing and retention tools and policies, rather than condemnation of specific people, then groups, then an entire field.

You disagree with my contention that storage-driven limitations are a sufficient defense in this case? Or that they are not a defense at all? Earlier you seemed to be making the latter case. Now you are making the former (by saying that there were no storage limitations; and let's read "limitations" broadly, i.e., to include maintenance costs, questions about whether the data was stored elsewhere, etc.). It's perfectly possible that a good case could be made that better data retention would have been helpful (indeed, it might have made the current situation a touch easier to deal with!). That's very distinct from saying that what they actually did was indefensible. Compare with the decision not to perform some extra observation or experiment. Years later, it may become obvious that that extra observation or experiment would have been tremendously valuable and is either infeasible or impossible to perform now. This turns out to have been the wrong judgment call, but it probably was defensible. (The details matter. What best practice was at the time of the decision matters. Old best practice may be discarded because it turns out to be, in general, bad practice, but that it was best practice at the time is a fairly substantial defense.)

Re: the stakes. We really have to distinguish the scientific stakes from the policy stakes from the political stakes. This is part of what I object to in your invocation. This is why I want to know what the first order stakes are. They don't seem to be the general conclusions. They also don't seem to be the withdrawal of any paper. Finally, it doesn't seem that anyone should lose their position. If it prompts institutions to support better data retention, then that's a good. If it causes people to retrench and feel even more under siege, that's likely bad.

The PR stakes seem high, as people are trying to whip this into a far-reaching scandal such that policy decisions should go a different way. That is serious, but I don't see how our escalation helps. Indeed, it seems merely to feed the frenzy. It is a tricky line, as, for example, there could be bad behavior that is not remotely defensible. Even by defending the defensible and worthy of defense, we can be painted as wagon-circling elitists (think Obama's birth certificate, or the anti-vaccination movement). But I don't see what we gain by throwing people to the wolves.

So, yes, I don't feel optimistic, because I don't see a general dialogue. Certainly not one that will lead to a dispassionate assessment of the science, the scientific infrastructure, or the policy. For example, I can't imagine that this will lead to the British govt directing EPSRC or JISC to give money to CRU to improve its data retention! Or worse, I can imagine it happening to the detriment of other funding priorities (or CRU having serious trouble getting funding). UK funding is already quite crap at the moment (e.g., I've had grant proposals with spectacular reviews that fell just below the much constricted cut-off, which means that UK academia lost the opportunity to secure a very good rising researcher).

Good morning, Bijan - Your reasoning is well nuanced insofar as it concerns the broad ethical implications for science as a whole, and I see substantial agreement between us.

"Hypothesis" is indeed a better word than "conclusion." In the extant example of a CRU investigation, for the sake of discussion let us narrow the issues tentatively (not formally) solely to (a) Data Retention may have been insufficient and (b) Data Sharing may have been insufficient. We would need to agree on a definition of insufficient.

We would both be pleased to see the investigation drive toward enhanced sharing and retention principles and I remain hopeful that it will. Unfortunately, the histrionics, vested interests and politics are bound to devolve into condemnation of individuals, institutions, and the wider body of climate science. We do agree that the issues should be investigated.

You are correct that I contend that the storage limitation defense is insufficient in this specific case. My comment on "stone tablets" was directed to the issues of media, manner, method, and mode. I should have been more precise.

I have some minor experience with the mechanics and "philosophy" of data retention. This leads me to contend that the data was not voluminous enough, nor archaic enough, nor costly enough to raise storage limitation as a defense. Moreover, any defense as to the lack of importance of the data or lack of future value is, in my estimation, not valid. They may have another valid defense. If I think of one, I'll share it.

Let's limit the first order stakes to the principles of retention and sharing. But it would be disingenuous of us to ignore the possibility of broader implications. By this, I do mean a possible impugning of the general conclusions. It appears that if data has failed to be retained, and data and results cannot be replicated (unlikely), then in a broad sense there are greater implications.

I will say that an unfavorable result does not force researchers into retrenchment or circling of the wagons, likely or not; that is a choice they would make. To blame an unfavorable inquiry result for future unfavorable individual actions would merely reveal their quality of character, their view of science, and the cultural and institutional factors at play.

Your last paragraph is telling. This affair, regardless of outcome, will likely have the effect of suppressing grant money and distorting funding priorities and conditions. Perhaps this is why the climate science community is eager to leap to CRU's defense. Perhaps not, but it certainly plays a role. If the investigation exonerates, blame the leaker or hacker. If the investigation finds fault, blame the CRU or the individual researchers.

P.S. I wouldn't worry if I were you. The cream always rises…

Hi jallen,

Re: your hypotheses, I do agree that we'd need to agree on what counts as "insufficient". But we can't do that without reference to the purpose of that retention and sharing, and without reference to the obligations on the people involved at the time the decisions were made. In any science, standards evolve, and things that were considered the height of rigor come to look laughably handwavy. Similarly, some degree of non-sharing might be less than happy for the promotion of science generally, but compatible with acceptable science. (E.g., you can be not a great colleague and still publish, and publish well.) If fraud is reasonably suspected, the degree of sharing required to mount a defense is much less than full public access. If our concern is fraud (our first order concern!), then we should ask whether there is or has been sufficient sharing to make us reasonably confident one way or the other. Not confident beyond any possible doubt. Reasonably confident.

This is where the Obama birth certificate analogy is, IMHO, telling. It's of course possible that a conspiracy is in place, one involving people with the foresight more than 40 years ago to place birth announcements in Hawaiian papers, multiple state officials in Hawaii, etc., etc. Heck, perhaps his alleged mother isn't his mother! But there's zero point in pursuing this line of investigation. Bare possibility does not generate a reasonable hypothesis.

So, I'm still puzzled by what the hypotheses are, at least in their full implication. I would happily support the UK govt doing a general data retention and access review, perhaps as part of the next RAE, across disciplines. (Well, "happily", maybe not. It would be an enormous pain, and I'd like to know first what we would hope to get out of it. Sampling might be a better way to go.) But I don't see that focusing on CRU is likely to be fruitful in improving practice in general or, frankly, in CRU. Since we both (I believe) agree that the investigation is highly (for me, very highly) unlikely to show cause for retraction of papers or even, I'll bet, to show that the CRU data set is worthless (or even much less useful, scientifically, than it already has been), then what is the point?

I've been looking at the Hockey Stick Controversy, which seems reasonably analogous. I wrote a lot about it, but don't want to put it all here right now. Just let me say that I don't see that the escalating involvement of committees and investigations was nearly as helpful as people publishing papers. It certainly doesn't seem to have improved matters overall.

This goes back to a key point that I don't think you're taking seriously enough: hostility and a sense of siege matter. A lot. Yes, ideally people should behave properly no matter what the circumstances, but as a matter of policy we shouldn't put people in known stress situations and expect them to do better! War crimes committed by soldiers are, of course, crimes and should be prosecuted. But there are known situations (e.g., lack of clarity on ROE; winking and nudging from commanders) which lead to a greater incidence of war crimes. If we care about reducing the incidence (esp. in the future), we should take seriously the effect of policy. If our goal is better data retention and sharing, then we should think about the best way to achieve that.

(Nothing I say should be taken as meaning that I am purely consequentialist about this and think scientific malpractice must be shielded for the greater good. This is why the nature of the suspected malpractice is so important. If the result is that some climate scientists are jerks and sloppy in some aspects, that's probably less significant than any fraud.)

Re your kind remark about cream, I'm afraid that doesn't capture the situation. Losing momentum is a serious matter. I was talking with a senior colleague about why there is no one in the UK doing a certain sort of work when the UK had led it in the '80s and early '90s, and she said that there was funding hostility to maintaining the labs and expertise, so it all disappeared, and rebuilding it just wasn't feasible. So I'll continue to do work and, I hope, good work, but my overall output will be diminished compared to the situation where key funding comes through. When I moved to the UK from the US, my research suffered as I built up a new team.

So, is it plausible that we're going to get better science out of this review, or even better PR? Is it plausible that we'll reveal and correct some scientific malpractice? I'm skeptical.

Slightly different tangent: I was trying to figure out how FOIA would apply to me. (Fair warning: I'm a pretty messy person. I have championed the use of SVN in every group I've been in, for code, data, and papers. I am working, hard, to make software packages that would make it easy, indeed trivial, for people to rerun, modify, and otherwise tinker with experiments that I've designed and published; I sketch what I mean at the end of this comment. No funding though :)) The University of Manchester is definitely subject to UK FOI as a public institution (this is distinct from any obligations attached to specific project funding). I found an example of a partial response to a request.

Key bits:

We cannot unfortunately provide technical progress reports. The project was a collaborative project, and the technical specifications of the materials being tested are not the intellectual property of the University of Manchester.

and

The IPR for much of the technology for this project is held by one of the Partner Companies and can be characterised as a trade secret under the Act. Moreover, neither the Partner Companies nor the University have as yet had an opportunity to exploit the findings of the research commercially. The University is satisfied that the premature release of this information would prejudice the commercial interests of both parties as it would give competitors an unfair advantage.

There is a further point in this instance, in that some of the work done by the University on this project has not yet been paid for, and this payment is presently the subject of some dispute. To release information which has not yet been paid for would in effect put that information into the public domain.

(More along these lines on the second page.)

I notice that the main line is on protecting commercial interests. It's unclear whether protecting scientific interests would count similarly (e.g., "I'm still working with this stuff and don't want to hand the work over to other people who can then get credit for cool results that depend on someone having put hundreds of hours into the crap-work part, without doing that crap work themselves").

(I'm not saying that that 'tude is the best, but it seems no worse than a commercial interest.)
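And, as promised, a sketch of what I mean by trivially rerunnable: the idea (my own hobby-horse, a sketch under my own assumptions, not any lab's actual workflow) is to record the data file's hash and the code revision alongside each result, so anyone can verify they are rerunning the same experiment. The file names and the revision string here are hypothetical.

```python
import hashlib
import json

def sha256_of(path):
    """Hash a data file's raw bytes for provenance."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_run(data_path, code_revision, result, out_path="run_record.json"):
    """Write a small provenance record alongside the result."""
    record = {
        "data_sha256": sha256_of(data_path),
        "code_revision": code_revision,  # e.g., an SVN revision like "r1423"
        "result": result,
    }
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)

# Hypothetical usage:
# record_run("stations.csv", "r1423", {"trend_per_decade": 0.17})
```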