In search of accepted practices: the final report on the investigation of Michael Mann (part 3).

Here we continue our examination of the final report (PDF) of the Investigatory Committee at Penn State University charged with investigating an allegation of scientific misconduct against Dr. Michael E. Mann made in the wake of the ClimateGate media storm. The specific question before the Investigatory Committee was:

"Did Dr. Michael Mann engage in, or participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities?"

In the last two posts, we considered the committee's interviews with Dr. Mann and with Dr. William Easterling, the Dean of the College of Earth and Mineral Sciences at Penn State, and with three climate scientists from other institutions, none of whom had collaborated with Dr. Mann. In this post, we turn to the other sources of information the Investigatory Committee drew on in its efforts to establish what counts as accepted practices within the academic community (and specifically within the community of climate scientists) for proposing, conducting, or reporting research.

First off, in establishing what counts as accepted practices within the academic community (and specifically within the community of climate scientists) for proposing, conducting, or reporting research or other scholarly activities, what kind of evidence did the Investigatory Committee have to work with beyond the direct testimony of members of that community? The report provides a bulleted list:

Documents available to the Investigatory Committee:

  • 376 files containing emails stolen from the Climate Research Unit (CRU) of the University of East Anglia and originally reviewed by the Inquiry Committee
  • Documents collected by the Inquiry Committee
  • Documents provided by Dr. Mann at both the Inquiry and Investigation phases
  • Penn State University's RA-IO Inquiry Report
  • House of Commons Report HC387-I, March 31, 2010
  • National Academy of Science letter titled, "Climate Change and the Integrity of Science," that was published in Science magazine on May 7, 2010
  • Information on the peer review process for the National Science Foundation (NSF)
  • Department of Energy's Guide to Financial Assistance
  • Information on National Oceanic and Atmospheric Administration's peer review process
  • Information regarding the percentage of NSF proposals funded
  • Dr. Michael Mann's curriculum vitae

Notably absent is the Authoritative Manual for Being a Good Scientist -- largely since no such manual exists. What the committee was trying to get a handle on here is accepted practices within a scientific community, where what counts as an accepted practice may evolve over time, and where explicit discussions of best practices (let alone of the line between acceptable and unacceptable practices) are not especially frequent among practitioners.

This means that the Investigatory Committee ended up turning to circumstantial evidence to gauge the community's acceptance (or not) of Dr. Mann's practices.

As the allegation is connected to accepted practices in three domains of scientific activity, the committee considered each of these in turn.

Based on the documentary evidence and on information obtained from the various interviews, the Investigatory Committee first considered the question of whether Dr. Mann had seriously deviated from accepted practice in proposing his research activities. First, the Investigatory Committee reviewed Dr. Mann's activities that involved proposals to obtain funding for the conduct of his research. Since 1998, Dr. Mann received funding for his research mainly from two sources: The National Science Foundation (NSF) and the National Oceanic and Atmospheric Administration (NOAA). Both of these agencies have an exceedingly rigorous and highly competitive merit review process that represents an almost insurmountable barrier to anyone who proposes research that does not meet the highest prevailing standards, both in terms of scientific/technical quality and ethical considerations.

The committee's report then outlines the process of grant review used by the NSF, and points to Dr. Mann's successful record of getting his grants funded.

Let's pause for a moment to get clear on what kind of conclusions one can safely draw from the details of funding agency review mechanisms and an investigator's success in securing funding through such a process. When we're talking about accepted practice in proposing research activities, we might usefully separate the accepted practices around writing the grant proposal from the accepted practices around evaluating the grant proposal. These latter practices depend on the individual judgment of those evaluating a particular grant proposal about the soundness and originality of the scientific approach being proposed, the importance of the scientific questions being asked, how likely a researcher with these institutional resources and this track record is to succeed in performing the proposed research, how well-written the proposal is, whether any important bits of the proposal were omitted, whether page limits were respected, whether the proposal was presented in an acceptable font, or what have you.

Just because a grant proposal is judged favorably and funded does not automatically mean that everything that went into the writing of that proposal was on the up and up. It might still have involved cooked (or faked) preliminary data, or plagiarism, or a whole mess of unwarranted background assumptions. Conversely, the fact that a particular grant proposal is not funded or highly ranked by those reviewing it does not mean that the researcher who wrote the proposal departed at all from the accepted practices in proposing research activities. Rather, it need only imply that the reviewers liked other proposals in the pool better.

After discussing the rigors of the review process for NSF funding, and mentioning that generally only about 25% (or less) of the proposals submitted for any particular program are funded, the committee's final report continues:

The results achieved by Dr. Mann in the period 1999-2010, despite these stringent requirements, speak for themselves: He served as principal investigator or co-principal investigator on five NOAA-funded and four NSF-funded research projects. During the same period, Dr. Mann also served as co-investigator of five additional NSF- and NOAA-funded research projects, as well as on projects funded by the Department of Energy (DOE), the United States Agency for International Development (USAID), and the Office of Naval Research (ONR). This level of success in proposing research, and obtaining funding to conduct it, clearly places Dr. Mann among the most respected scientists in his field. Such success would not have been possible had he not met or exceeded the highest standards of his profession for proposing research.

Let's be clear here. What Dr. Mann's success establishes is that the grant proposals he submitted have impressed the reviewers as well written and worth funding. His success in securing this funding, by itself, establishes nothing at all about the process that went into preparing those proposals -- a process the scientists reviewing the proposals did not evaluate and to which they were not privy.

Of course, I was not privy to Dr. Mann's process in preparing his grant proposals. Nor, I imagine, was the legion of his critics spawned by ClimateGate. This means that there is no positive evidence (from his successful grant proposals) that he did anything unethical in his proposing of research. The standing practice within the scientific and academic community seems to be to presume ethical conduct in proposing research unless there is evidence to the contrary. Grant review mechanisms may not be the most reliable way to get evidence of unethical conduct in the preparation of grant proposals, but short of bugging each and every scientist's office and lab (and installing the necessary spyware on their computers), it's not clear how we could reliably get evidence of grant-writing wrongdoing. It seems mostly to be detected when the wrongdoer slips up in a way that makes it easier to identify the methods section lifted from someone else, the doctored image, or the implausible preliminary data.

Still, I think we can recognize that "presumed innocent of cheating until proven otherwise" is a reasonable standard and recognize that success at grant-writing is mostly proof that you grok what the scientists reviewing your grant proposals like.

Next came the question of whether Dr. Mann engaged in activities that seriously deviated from accepted practices for conducting research. The focus here was on practices around sharing data and source code with other researchers.

[T]he Investigatory Committee established that Dr. Mann has generally used data collected by others, a common practice in paleoclimatology research. Raw data used in Dr. Mann's field of paleoclimatology are laboriously collected by researchers who obtain core drillings from the ocean floor, from coral formations, from polar ice or from glaciers, or who collect tree rings that provide climate information from the past millennium and beyond. Other raw data are retrieved from thousands of weather stations around the globe. Almost all of the raw data used in paleoclimatology are made publicly available, typically after the originators of the data have had an initial opportunity to evaluate the data and publish their findings. In some cases, small sub-sets of data may be protected by commercial agreements; in other cases some data may have been released to close colleagues before the originators had time to consummate their prerogative to have a limited period (usually about two years) of exclusivity; in still other cases there may be legal constraints (imposed by some countries) that prohibit the public sharing of some climate data. The Investigatory Committee established that Dr. Mann, in all of his published studies, precisely identified the source(s) of his raw data and, whenever possible, made the data and/or links to the data available to other researchers. These actions were entirely in line with accepted practices for sharing data in his field of research.

These conclusions seem largely drawn from the testimony of the interviews we discussed in the last two posts.

With regard to sharing source codes used to analyze these raw climate data and the intermediate calculations produced by these codes (referred to as "dirty laundry" by Dr. Mann in one of the stolen emails) with other researchers, there appears to be a range of accepted practices. Moreover, there is evidence that these practices have evolved during the last decade toward increased sharing of source codes and intermediate data via authors' web sites or web links associated with published scientific journal articles. Thus, while it was not considered standard practice ten years ago to make such information publicly available, most researchers in paleoclimatology are today prepared to share such information, in part to avoid unwarranted suspicion of improprieties in their treatment of the raw data. Dr. Mann's actual practices with regard to making source codes and intermediate data readily available reflect, in all respects, evolving practices within his field. ... Moreover, most of his research methodology involves the use of Principal Components Analysis, a well-established mathematical procedure that is widely used in climate research and in many other fields of science. Thus, the Investigatory Committee concluded that the manner in which Dr. Mann used and shared source codes has been well within the range of accepted practices in his field.

There are two things worth noticing here. First is the explicit recognition that accepted practices in a scientific community change over time -- sometimes in as little as a decade. This means a scientist's practices could start out within the range of what the community considers acceptable and end up falling outside that range, if the mood of the community changes while the scientist's practices remain stable. Second, the committee points to Dr. Mann's use of an analytic technique "widely used in climate research and in many other fields of science". This observation provides at least circumstantial evidence that Dr. Mann's practices in conducting his analyses were appropriate.
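For readers who haven't encountered it, here is a minimal sketch of what Principal Components Analysis does -- nothing more than an illustration, run on entirely made-up numbers, of why the committee could call the technique a well-established mathematical procedure. It is not drawn from Dr. Mann's code or data; the array sizes and the NumPy calls are my own choices for the sake of the example.

    import numpy as np

    # Entirely synthetic "proxy" data: 200 years by 10 proxy series.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))

    # Center each series, then diagonalize the covariance matrix.
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))

    # Order the components by the variance they explain, largest first.
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Project the centered data onto the leading components.
    scores = Xc @ eigvecs
    print("fraction of variance in first two components:",
          float(eigvals[:2].sum() / eigvals.sum()))

The point of spelling it out is only that the machinery here is textbook linear algebra; whatever one concludes about how the technique was applied in any particular reconstruction, there is nothing exotic about the technique itself.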

Then, the committee's report describes looking for more circumstantial evidence and I start shifting uneasily in my chair:

When a scientist's research findings are well outside the range of findings published by other scientists examining the same or similar phenomena, legitimate questions may be raised about whether the science is based on accepted practices or whether questionable methods might have been used. Most questions about Dr. Mann's findings have been focused on his early published work that showed the "hockey stick" pattern of climate change. In fact, research published since then by Dr. Mann and by independent researchers has shown patterns similar to those first described by Dr. Mann, although Dr. Mann's more recent work has shown slightly less dramatic changes than those reported originally. In some cases, other researchers (e.g., Wahl & Ammann, 2007) have been able to replicate Dr. Mann's findings, using the publicly available data and algorithms. The convergence of findings by different teams of researchers, using different data sets, lends further credence to the fact that Dr. Mann's conduct of his research has followed acceptable practice within his field. Further support for this conclusion may be found in the observation that almost all of Dr. Mann's work was accomplished jointly with other scientists. The checks and balances inherent in such a scientific team approach further diminishes chances that anything unethical or inappropriate occurred in the conduct of the research.

Here, it is surely true that difficulty in replicating a published result indicates a place where more scientific attention is warranted -- whether because the original reported result is in error, or because the measurements are very hard to control, or because the methodology for the analysis is sufficiently convoluted that it's hard to avoid mistakes. However, the fact that a reported result has been replicated is not evidence that there was nothing unethical about the process of making the measurements or performing the analyses. At least some fabricated or falsified results are likely to be good enough guesses that they are "replicated". (This is what makes possible the sorry excuse offered by fakers that their fakery is less problematic if it turns out that they were right.)

As for the claim that a multitude of coauthors serves as evidence of good conduct, I suggest that anyone who has been paying attention should not want to claim that nothing unethical or inappropriate ever happens in coauthored papers. A coauthor may serve as a useful check against sloppy thinking, or as someone motivated to challenge one's unethical (or ethically marginal) practices, but coauthors can be fooled, too.

Again, this is not to claim that Dr. Mann got anything unethical or improper past his coauthors -- I have no reason to believe that he did. Rather, it is to say that the circumstantial evidence of good conduct provided by the existence of coauthors is pretty weak tea.

The report goes on:

A particularly telling indicator of a scientist's standing within the research community is the recognition that is bestowed by other scientists. Judged by that indicator, Dr. Mann's work, from the beginning of his career, has been recognized as outstanding. ... All of these awards and recognitions, as well as others not specifically cited here, serve as evidence that his scientific work, especially the conduct of his research, has from the beginning of his career been judged to be outstanding by a broad spectrum of scientists. Had Dr. Mann's conduct of his research been outside the range of accepted practices, it would have been impossible for him to receive so many awards and recognitions, which typically involve intense scrutiny from scientists who may or may not agree with his scientific conclusions.

This conclusion strikes me as too strong. A scientist can win recognition and accolades for his or her scientific work without the other scientists bestowing that recognition or those accolades having any direct knowledge of that scientist's day-to-day practice. Generally, recognition and praise are proffered on the basis of the "end product" of a stretch of scientific work: a published paper. As with grant proposals, such papers are evaluated with the presumption that the scientist did the experiments he or she claims to have done, using the experimental methods described in the paper, and that he or she performed the transformations and analyses of the data he or she describes, and that the literature he or she cites actually says what he or she says it does, etc. This presumption that scientific papers are honest reports can go wrong.

Of course, I know of no evidence that Dr. Mann's scientific papers are anything but honest reports of properly conducted research. Still, it makes me nervous that the Investigatory Committee seems to be concluding that the awards and recognition Dr. Mann has received from colleagues in his field tell us something more than that they have found his published papers important or original or persuasive or thorough. Especially given the comments of Dr. William Curry to the committee, discussed in the last post, that "transforming the raw data into usable information is labor intensive and difficult", I'm not sure it's even a good bet that the people praising Dr. Mann's work worked through the details of his math.

Dr. Mann's record of publication in peer reviewed scientific journals offers compelling evidence that his scientific work is highly regarded by his peers, thus offering de facto evidence of his adherence to established standards and practices regarding the reporting of research. ... literally dozens of the most highly qualified scientists in the world scrutinized and examined every detail of the scientific work done by Dr. Mann and his colleagues and judged it to meet the high standards necessary for publication. Moreover, Dr. Mann's work on the Third Assessment Report (2001) of the Intergovernmental Panel on Climate Change received recognition (along with several hundred other scientists) by being awarded the 2007 Nobel Peace Prize. Clearly, Dr. Mann's reporting of his research has been successful and judged to be outstanding by his peers. This would have been impossible had his activities in reporting his work been outside of accepted practices in his field.

More accurately, literally dozens of the most highly qualified scientists in the world scrutinized and examined every detail that is scrutinized within the scope of the peer review process of the scientific work done by Dr. Mann and his colleagues and judged it to meet the high standards necessary for publication. But this does not guarantee that that work is free from honest error or unethical conduct. Otherwise, journals with peer review would have no need for corrections or retractions.

Moreover, the conferral of the Nobel Peace Prize on the IPCC may speak to the perceived relevance of the Third Assessment Report as far as global policy decisions are concerned, but it's not clear why it should be taken as a prima facie certification that the scientific work of one of the scientists who contributed to it was fully ethical and appropriate. Indeed, given that the choice of the recipients for the Peace Prize is somewhat political, mentioning it here seems pretty irrelevant.

For those who hoped that this investigation might deliver Dr. Mann's head on a platter, the judgment of the Investigatory Committee delivers something more akin to his little toe:

One issue raised by some who read the stolen emails was whether Dr. Mann distributed privileged information to others to gain some advantage for his interpretation of climate change. The privileged information in question consisted of unpublished manuscripts that were sent to him by colleagues in his field. The Investigatory Committee determined that none of the manuscripts were accompanied by an explicit request to not share them with others. Dr. Mann believed that, on the basis of his collegial relationship with the manuscripts' authors, he implicitly had permission to share them with close colleagues. Moreover, in each case, Dr. Mann explicitly urged the recipients of the unpublished manuscripts to first check with the authors if they intended to use the manuscripts in any way. Although the Investigatory Committee determined that Dr. Mann had acted in good faith with respect to sharing the unpublished manuscripts in question, the Investigatory Committee also found that among the experts interviewed by the Investigatory Committee there was a range of opinion regarding the appropriateness of Dr. Mann's actions. ... The Investigatory Committee considers Dr. Mann's actions in sharing unpublished manuscripts with third parties, without first having received express consent from the authors of such manuscripts, to be careless and inappropriate. While sharing an unpublished manuscript on the basis of the author's implied consent may be an acceptable practice in the judgment of some individuals, the Investigatory Committee believes the best practice in this regard is to obtain express consent from the author before sharing an unpublished manuscript with third parties.

This ruling reminds us that there is frequently a difference between what a community recognizes as best practices, accepted practices, and practices that aren't so good but aren't so bad that members of the community think it's worth the trouble to raise a stink about them. In any given situation, the best course may be to embrace the best practices, but it is not the case that every researcher who falls short of those best practices is judged blameworthy.

If there were an Authoritative Manual for Being a Good Scientist, maybe it would spell out the acceptable departures from best practices with more precision, but there isn't, so it doesn't.

After a close reading of the Investigatory Committee's final report, I'm a little grumpy. It's not that I believe there is evidence of misconduct against Dr. Mann that the committee overlooked. Rather, I'm frustrated that the conclusions about his positive adherence to the community's established standards and practices come across as stronger than the evidence before the committee probably warrants.

Here, we are squarely in the realm of trust and accountability (about which I have blogged before). Scientists are engaged in a discourse where they are expected to share their data, defend their conclusions, and show their work if they are asked by a scientific colleague to do so. Yet, practically, unless they intend to build a career of repeating other scientists' measurements and analyses and checking other scientists' work (and the funding for this career trajectory is not what you'd call robust), scientists have to trust each other -- at least, until there is compelling evidence that such trust is misplaced. And, while I have no objection to scientists pragmatically adopting the presumption that other scientists are conducting themselves ethically, I think it's a mistake to conclude from its general success in guiding scientists' actions that this presumption is necessarily true.

To be fair to the Investigatory Committee, they were charged with making a determination -- establishing what counts as accepted practices for proposing, conducting, or reporting research in the community of climate scientists -- that would probably require a major piece of social scientific research to do it justice. Interviewing five members of that community cannot be counted on to provide a representative picture of the views of the members of that community as a whole, so it's understandable that the Investigatory Committee would try to bolster their N=5 data set with additional evidence. However, my sense is that the committee's conclusions would have been better served if they had included an explicit acknowledgment of the limitations inherent in the available data. As things stand, the over-strong conclusions the committee draws from Dr. Mann's successful track record of securing funding and publishing scientific papers are likely to raise more questions than would a report that simply stated that it had found no smoking gun suggesting improper conduct on Dr. Mann's part.

* * * * *

In a comment on the first post in this series, Bijan Parsia writes:

It would be nice to have a fourth post on the evaluation of ethics of the accusers (both "professional" and occasional) as well as the public at large and the scientific and academic establishments.

I personally think it was pretty obvious that this family of accusations was both frivolous and malicious, esp. given that no scientific result was on the table.

I'm happy to take on this assignment, though I'm inclined to deal with the accusers as types rather than named individuals. If readers think that there are particular ClimateGate accusers who ought properly to be dealt with as named individuals in the upcoming part 4, I'd be grateful if they would email me links to accounts of their statements and conduct that suggest I should so treat them.

First, I want to thank you for this series of posts, which I have found very useful and informative, and which I will suggest that others read.

However, I want to raise an issue with you about two aspects of this third part of the post.

First, I think that your discussion about proposals and their review, at NSF and other places, is inaccurate. It is not -- very definitely not -- the case that "success at grant-writing is mostly proof that you grok what the scientists reviewing your grant proposals like." Yes, it is important that you know how to write a proposal that reviewers will like. But long-run and consistent success at grant-writing is evidence that you (1) have the creativity to generate questions that will strike your peers as innovative and new, (2) have the skill to think up ways to answer those questions that will strike your peers as likely to succeed, and (3) have the expository skill to present the questions and the approaches to them in a compelling fashion. It is a little insulting and very inaccurate to conflate creativity and skill with grokking what reviewers like.

Moreover, to have long-term success with funding, you must keep the creativity and skill at a high level for a long time. It is possible that you could do that while cheating (engaging in some kind of unethical conduct), but that becomes harder and harder to do (1) the more often you subject yourself and your results to scrutiny (in publications, presentations, and proposals) and (2) the more people you collaborate and interact with. My second disagreement with your post is that you seem to miss this point, focusing instead on the fact that getting lots of grants, publishing lots of papers, and collaborating with lots of people doesn't make it IMPOSSIBLE to have engaged in unethical conduct. That is true, but it is not the end of the story. It is much easier for someone who sits in a lab all alone, seldom publishes, never gets funded, never quite shows up at meetings ... it is much easier for such a person to engage in unethical conduct without being caught. I think that the investigators here were drawing conclusions about the LIKELIHOOD, not the POSSIBILITY of unethical conduct. They were quite correct to document Dr. Mann's very public and wide-ranging interactions with the rest of the scientific community as evidence.

I will close with an (honest) question: would it ever be possible to find the kind of conclusive proof that you seem to be searching for? For example, you said "But this [referring to peer review] does not guarantee that that work is free from honest error or unethical conduct." It doesn't seem to me that such a guarantee could be found for any scientist or any piece of scientific work. For a specific allegation, maybe, but not for being "free from honest error or unethical conduct". No matter how much conduct you scrutinize, there will always be some that is unexamined, and it might have been unethical. So, if it is impossible to ever find this guarantee, why bother to present it as a standard that we should try to achieve?

Thanks again for the post.

Oh, and if anyone is curious, I do not know and have never collaborated with Dr. Mann.

By ecologist (not verified) on 05 Jul 2010 #permalink

"And, while I have no objection to scientists pragmatically adopting the presumption that other scientists are conducting themselves ethically, I think it's a mistake to conclude from its general success in guiding scientists' actions that this presumption is necessarily true."

In those industries where safety is paramount, this presumption is assumed to be false until tested independently. I think it's (sort of) fine to run with that presumption for research that is not going to inform radical public policy, but not for research that is.

A significant problem is that this discipline is set up with processes from research that does not require such rigour, so as you say (I think) comparing activity with 'accepted norms' tells us only whether Mann and others did what everyone else does, not what they should have done bearing in mind the impact. I.e., it's not their fault as such, but it's not good enough. Yet.

Thanks for taking on the fourth article. I'm happy to discuss types.

I share ecologist's concerns about the stringency of evidence required. One standard problem in science communication is conveying the appropriate certainty of conclusions -- see the neverending discussion of the "mere theory" of evolution/natural selection.

A few points: First, Mann et al. have been through multiple investigations of various sorts. If he is such a superspy genius as to elude all detection even with many people of hostile intent gunning for him, then there's not much to do. The fact that the science does hold up (e.g., replication, confirmation, fitting in with theory, etc.) makes the origin of the original science more and more irrelevant.

In other words, it matters less and less whether Mann did doublesecret unethical stuff early in his career. Not that being right makes any falsification ok, but there's less and less marginal gain to detecting that falsification.

Second, where is academic freedom in all this? Frankly, whether he adhered to best scientific practice wrt data/code sharing should be largely irrelevant to an investigation of malconduct. Unless there were specific agreements in place (e.g., with the uni, with funders, with partners), Mann is perfectly free to hoard his data and his code or to share it selectively with people. I don't like it, in general. I might well think he was a jerk. It would, imho, make him less of a force for good in his field. But if, for whatever reason, he thinks that's the right way to go for his research, he is perfectly free to do so. He is not free to falsify data. He is not free to plagiarize.

I think we need to carefully separate best practices with regard to methodology and best practices with regard to presentation and curation. Both are important, but they are distinct. Furthermore, we have to distinguish enforcement mechanisms. This is an incredibly heavyweight mechanism to invoke given the complete absence of any evidence of impropriety (yes, I include the stolen email; those emails don't rise to a weak prima facie case).

Third, uh, it's rather more than they found "no smoking gun"! AFAICT, they found no smoke, no gun, no problems at all, no indicator of problems, etc.

One example of this is the mountain made out of the molehill of Mann's sharing an unpublished manuscript. Really? This was scrutinized? His sharing a colleague's manuscript with a joint colleague that had, afaict, no effect on anything published? Wherein he explicitly reminded the recipient to check with the authors before acting further? Really?

I bet he jaywalks, too.

I've no doubt that in some circumstances that this could have been a very bad thing (e.g., if the authors were touchy or the recipients dodgy). But it clearly wasn't in this case. It has the feel of throwing things in the hopes of something sticking.

Probably the best evidence of no malconduct, such as it is, is that no one in the community has detected it, or even been made suspicious, and that the charges raised from outside the community are both scattershot and risible, when not malconducted (see the Wegman report). (The Wegman report would be a good specific to analyze, although it's been fairly done to death.)

I'm also interested in knowing how the duty to "maintain the public trust" kicks in in the face of an untrusting public (or hostile attack). I want to research evolution...this alone is sufficient to lose a good swath of the public's trust. I want to research climate change...again.

There's no level of best practice by scientists alone that will overcome denialism. Consider smoking and what it took to get the current level of public consciousness.

Does this conclusively show no malconduct of any kind? Of course not.

Martin, could you point to a scientific field wherein the scientists are presumed to be lying etc. until tested independently? And what the standards for independent testing are?

This doesn't seem to be true for foods and drugs, for example. Nor for forensic science (i.e., there's no presumption that lab techs falsify their tests, afaik). For example, the FBI quality assurance document for DNA testing reads:

12. Review

Standard 12.1. The laboratory shall conduct administrative and technical reviews of all case files and reports to ensure conclusions and supporting data are reasonable and within the constraints of scientific knowledge.

12.1.1. The laboratory shall have a mechanism in place to address unresolved discrepant conclusions between analysts and reviewer(s).

Standard 12.2. The laboratory shall have and follow a program that documents the annual monitoring of the testimony of each examiner.

But this is all internal, yeah? Now they do have audit procedures including:

Standard 15.2. Once every two years, a second agency shall participate in the annual audit.

But this is pretty different than a "suspect until independently confirmed".

Some very pertinent and thoughtful comments, Bijan. Thank you. Those interested in the fuss about Wegman can read all about it on Deep Climate's blog.

You refer to a distinction between different aspects of research, specifically the distinction between "methodology" and "presentation and curation". At noon GMT tomorrow (1300 BST, 0800 EDT), the Muir Russell commission is to report its findings related to the theft of documents from the Climatic Research Unit last December. They've said they'll look into standards and practices from a number of different angles including data curation, sharing and security, and I expect that their report will provide a lot of material to those interested in scientific ethics in the internet age.

By Tony Sidaway (not verified) on 06 Jul 2010 #permalink

Thanks for the pointer, Tony. I'm looking forward to the report.

Lambert posts on the hate email campaigns against climate scientists.

I really wish this discussion could be about improving the collective value of produced work. There's so much work (both socially and technically) going on to try to make what we produce and (in some manner) publish easier to exploit. (I've been annoyed quite recently with lack of access to both some programs and associated test generators. Lack of access to the programs makes it more difficult to compare our new algorithms against them. Lack of access to the test generators is generally less serious since it's good to make your own, though it'd be nice to compare with theirs as a cross check. One really nice test generator has been made available, which is valuable because those tests are quite difficult to generate correctly.)

Instead we have a ton of energy put into trying to find some fault in the behavior of some lightning-rod scientists with the goal not of improving science (or even detecting fraud) but of discrediting a big swath of science.

I don't know the best way to combat this. How do you combat vaccine denialism? Effectively? (Actually, that's a good case to examine in parallel since the "trigger study" (Wakefield) did turn out to be fraudulent and the associated paper withdrawn. Even before that, the preponderance of vaccination science argued strenuously against anti-vaccination. The noise machine continues on, of course.)