Here we continue our examination of the final report (PDF) of the Investigatory Committee at Penn State University charged with investigating an allegation of scientific misconduct against Dr. Michael E. Mann made in the wake of the ClimateGate media storm. The specific question before the Investigatory Committee was:
“Did Dr. Michael Mann engage in, or participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities?”
In the last two posts, we considered the committee’s interviews with Dr. Mann and with Dr. William Easterling, the Dean of the College of Earth and Mineral Sciences at Penn State, and with three climate scientists from other institutions, none of whom had collaborated with Dr. Mann. In this post, we turn to the other sources of information to which the Investigatory Committee turned in its efforts to establish what counts as accepted practices within the academic community (and specifically within the community of climate scientists) for proposing, conducting, or reporting research.
First off, in establishing what counts as accepted practices within the academic community (and specifically within the community of climate scientists) for proposing, conducting, or reporting research or other scholarly activities, what kind of evidence did the Investigatory Committee have to work with beyond the direct testimony of members of that community? The report provides a bulleted list:
Documents available to the Investigatory Committee:
- 376 files containing emails stolen from the Climate Research Unit (CRU) of the University of East Anglia and originally reviewed by the Inquiry Committee
- Documents collected by the Inquiry Committee
- Documents provided by Dr. Mann at both the Inquiry and Investigation phases
- Penn State University’s RA-10 Inquiry Report
- House of Commons Report HC387-I, March 31, 2010
- National Academy of Sciences letter titled “Climate Change and the Integrity of Science,” published in Science magazine on May 7, 2010
- Information on the peer review process for the National Science Foundation (NSF)
- Department of Energy’s Guide to Financial Assistance
- Information on National Oceanic and Atmospheric Administration’s peer review
- Information regarding the percentage of NSF proposals funded
- Dr. Michael Mann’s curriculum vitae
Notably absent is the Authoritative Manual for Being a Good Scientist — largely since no such manual exists. What the committee here was trying to get a handle on is accepted practices within a scientific community where what counts as an accepted practice may evolve over time, and where explicit discussions of best practices (let alone of the line between acceptable and unacceptable practices) among practitioners are not especially frequent.
This means that the Investigatory Committee ended up turning to circumstantial evidence to gauge the community’s acceptance (or not) of Dr. Mann’s practices.
As the allegation is connected to accepted practices in three domains of scientific activity, the committee considered each of these in turn.
Based on the documentary evidence and on information obtained from the various interviews, the Investigatory Committee first considered the question of whether Dr. Mann had seriously deviated from accepted practice in proposing his research activities. First, the Investigatory Committee reviewed Dr. Mann’s activities that involved proposals to obtain funding for the conduct of his research. Since 1998, Dr. Mann received funding for his research mainly from two sources: The National Science Foundation (NSF) and the National Oceanic and Atmospheric Administration (NOAA). Both of these agencies have an exceedingly rigorous and highly competitive merit review process that represents an almost insurmountable barrier to anyone who proposes research that does not meet the highest prevailing standards, both in terms of scientific/technical quality and ethical considerations.
The committee’s report then outlines the process of grant review used by the NSF, and points to Dr. Mann’s successful record of getting his grants funded.
Let’s pause for a moment to get clear on what kind of conclusions one can safely draw from the details of funding agency review mechanisms and an investigator’s success in securing funding through such a process. When we’re talking about accepted practice in proposing research activities, we might usefully separate the accepted practices around writing the grant proposal from the accepted practices around evaluating the grant proposal. These latter practices depend upon the particular individual judgment of those evaluating a particular grant proposal about the soundness and originality of the scientific approach being proposed, the importance of the scientific questions being asked, how likely it is that a researcher with these institutional resources and this track record is to be able to succeed in performing the proposed research, how well-written the proposal is, whether any important bits of the proposal were omitted, whether page limits were respected, whether the proposal was presented in an acceptable font, or what have you.
Just because a grant proposal is judged favorably and funded does not automatically mean that everything that went into the writing of that proposal was on the up and up. It might still have involved cooked (or faked) preliminary data, or plagiarism, or a whole mess of unwarranted background assumptions. Conversely, the fact that a particular grant proposal is not funded or highly ranked by those reviewing it does not mean that the researcher who wrote the proposal departed at all from the accepted practices in proposing research activities. Rather, it need only imply that the reviewers liked other proposals in the pool better.
After discussing the rigors of the review process for NSF funding, and mentioning that generally only about 25% (or less) of the proposals submitted for any particular program are funded, the committee’s final report continues:
The results achieved by Dr. Mann in the period 1999-2010, despite these stringent requirements, speak for themselves: He served as principal investigator or co-principal investigator on five NOAA-funded and four NSF-funded research projects. During the same period, Dr. Mann also served as co-investigator of five additional NSF- and NOAA-funded research projects, as well as on projects funded by the Department of Energy (DOE), the United States Agency for International Development (USAID), and the Office of Naval Research (ONR). This level of success in proposing research, and obtaining funding to conduct it, clearly places Dr. Mann among the most respected scientists in his field. Such success would not have been possible had he not met or exceeded the highest standards of his profession for proposing research.
Let’s be clear here. What Dr. Mann’s success establishes is that the grant proposals he submitted have impressed the reviewers as well written and worth funding. His success in securing this funding, by itself, establishes nothing at all about the process that went into preparing those proposals — a process the scientists reviewing the proposals did not evaluate and to which they were not privy.
Of course, I was not privy to Dr. Mann’s process in preparing his grant proposals. Nor, I imagine, were the legion of his critics spawned by ClimateGate. This means that there is no positive evidence (from his successful grant proposals) that he did anything unethical in his proposing of research. The standing practice within the scientific and academic community seems to be to presume ethical conduct in proposing research unless there is evidence to the contrary. Grant review mechanisms may not be the most reliable way to get evidence of unethical conduct in the preparation of grant proposals, but short of bugging each and every scientist’s office and lab (and installing the necessary spyware on their computers), it’s not clear how we could reliably get evidence of grant-writing wrongdoing. It seems mostly to be detected when the wrongdoer makes a mistake that makes it easier to identify the methods section lifted from someone else, the doctored image, or the implausible preliminary data.
Still, I think we can recognize that “presumed innocent of cheating until proven otherwise” is a reasonable standard and recognize that success at grant-writing is mostly proof that you grok what the scientists reviewing your grant proposals like.
Next came the question of whether Dr. Mann engaged in activities that seriously deviated from accepted practices for conducting research. The focus here was on practices around sharing data and source code with other researchers.
[T]he Investigatory Committee established that Dr. Mann has generally used data collected by others, a common practice in paleoclimatology research. Raw data used in Dr. Mann’s field of paleoclimatology are laboriously collected by researchers who obtain core drillings from the ocean floor, from coral formations, from polar ice or from glaciers, or who collect tree rings that provide climate information from the past millennium and beyond. Other raw data are retrieved from thousands of weather stations around the globe. Almost all of the raw data used in paleoclimatology are made publicly available, typically after the originators of the data have had an initial opportunity to evaluate the data and publish their findings. In some cases, small sub-sets of data may be protected by commercial agreements; in other cases some data may have been released to close colleagues before the originators had time to consummate their prerogative to have a limited period (usually about two years) of exclusivity; in still other cases there may be legal constraints (imposed by some countries) that prohibit the public sharing of some climate data. The Investigatory Committee established that Dr. Mann, in all of his published studies, precisely identified the source(s) of his raw data and, whenever possible, made the data and/or links to the data available to other researchers. These actions were entirely in line with accepted practices for sharing data in his field of research.
With regard to sharing source codes used to analyze these raw climate data and the intermediate calculations produced by these codes (referred to as “dirty laundry” by Dr. Mann in one of the stolen emails) with other researchers, there appears to be a range of accepted practices. Moreover, there is evidence that these practices have evolved during the last decade toward increased sharing of source codes and intermediate data via authors’ web sites or web links associated with published scientific journal articles. Thus, while it was not considered standard practice ten years ago to make such information publicly available, most researchers in paleoclimatology are today prepared to share such information, in part to avoid unwarranted suspicion of improprieties in their treatment of the raw data. Dr. Mann’s actual practices with regard to making source codes and intermediate data readily available reflect, in all respects, evolving practices within his field. … Moreover, most of his research methodology involves the use of Principal Components Analysis, a well-established mathematical procedure that is widely used in climate research and in many other fields of science. Thus, the Investigatory Committee concluded that the manner in which Dr. Mann used and shared source codes has been well within the range of accepted practices in his field.
There are two things worth noticing here. First is the explicit recognition that accepted practices in a scientific community change over time — sometimes in as little as a decade. This means a scientist’s practices could start out within the range of what the community considers acceptable and end up falling outside that range, if the mood of the community changes while the scientist’s practices remain stable. Second, the committee points to Dr. Mann’s use of an analytic technique “widely used in climate research and in many other fields of science”. This observation provides at least circumstantial evidence of the plausible appropriateness of Dr. Mann’s practice in conducting his analyses.
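As a purely illustrative aside: the Principal Components Analysis the committee refers to is indeed a textbook procedure, not a bespoke method. The sketch below (this is not Dr. Mann’s code, and the “proxy” series are synthetic toy data invented for illustration) shows the standard SVD-based version in a few lines of NumPy:

```python
import numpy as np

def pca(data, n_components=2):
    """Project rows of `data` onto its top principal components.

    Textbook PCA via singular value decomposition: center the
    columns, decompose, and keep the leading components.
    """
    centered = data - data.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]      # principal axes (directions)
    scores = centered @ components.T    # coordinates in PC space
    return scores, components

# Toy example: 100 noisy observations of 5 correlated "proxy" series
# that all track one underlying signal (loosely analogous to multiple
# proxies tracking one climate variable).
rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 1))
data = signal @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(100, 5))

scores, components = pca(data, n_components=2)
print(scores.shape)  # (100, 2)
```

Because the five toy series share one underlying signal, the first principal component captures nearly all of the variance; the point is only that the machinery itself is entirely standard.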
Then, the committee’s report describes looking for more circumstantial evidence and I start shifting uneasily in my chair:
When a scientist’s research findings are well outside the range of findings published by other scientists examining the same or similar phenomena, legitimate questions may be raised about whether the science is based on accepted practices or whether questionable methods might have been used. Most questions about Dr. Mann’s findings have been focused on his early published work that showed the “hockey stick” pattern of climate change. In fact, research published since then by Dr. Mann and by independent researchers has shown patterns similar to those first described by Dr. Mann, although Dr. Mann’s more recent work has shown slightly less dramatic changes than those reported originally. In some cases, other researchers (e.g., Wahl & Ammann, 2007) have been able to replicate Dr. Mann’s findings, using the publicly available data and algorithms. The convergence of findings by different teams of researchers, using different data sets, lends further credence to the fact that Dr. Mann’s conduct of his research has followed acceptable practice within his field. Further support for this conclusion may be found in the observation that almost all of Dr. Mann’s work was accomplished jointly with other scientists. The checks and balances inherent in such a scientific team approach further diminishes chances that anything unethical or inappropriate occurred in the conduct of the research.
Here, it is surely true that difficulty in replicating a published result indicates a place where more scientific attention is warranted — whether because the original reported result is in error, or because the measurements are very hard to control, or because the methodology for the analysis is sufficiently convoluted that it’s hard to avoid mistakes. However, the fact that a reported result has been replicated is not evidence that there was nothing unethical about the process of making the measurements or performing the analyses. At least some fabricated or falsified results are likely to be good enough guesses that they are “replicated”. (This is what makes possible the sorry excuse offered by fakers that their fakery is less problematic if it turns out that they were right.)
As for the claim that a multitude of coauthors serves as evidence of good conduct, I suggest that anyone who has been paying attention should not want to claim that nothing unethical or inappropriate ever happens in coauthored papers. A coauthor may serve as a useful check against sloppy thinking, or as someone motivated to challenge one’s unethical (or ethically marginal) practices, but coauthors can be fooled, too.
Again, this is not to claim that Dr. Mann got anything unethical or improper past his coauthors — I have no reason to believe that he did. Rather, it is to say that the circumstantial evidence of good conduct provided by the existence of coauthors is pretty weak tea.
The report goes on:
A particularly telling indicator of a scientist’s standing within the research community is the recognition that is bestowed by other scientists. Judged by that indicator, Dr. Mann’s work, from the beginning of his career, has been recognized as outstanding. … All of these awards and recognitions, as well as others not specifically cited here, serve as evidence that his scientific work, especially the conduct of his research, has from the beginning of his career been judged to be outstanding by a broad spectrum of scientists. Had Dr. Mann’s conduct of his research been outside the range of accepted practices, it would have been impossible for him to receive so many awards and recognitions, which typically involve intense scrutiny from scientists who may or may not agree with his scientific conclusions.
This conclusion strikes me as too strong. A scientist can win recognition and accolades for his or her scientific work without the other scientists bestowing that recognition or those accolades having any direct knowledge of that scientist’s day-to-day practice. Generally, recognition and praise are proffered on the basis of the “end product” of a stretch of scientific work: a published paper. As with grant proposals, such papers are evaluated with the presumption that the scientist did the experiments he or she claims to have done, using the experimental methods described in the paper, and that he or she performed the transformations and analyses of the data he or she describes, and that the literature he or she cites actually says what he or she says it does, etc. This presumption that scientific papers are honest reports can go wrong.
Of course, I know of no evidence that Dr. Mann’s scientific papers are anything but honest reports of properly conducted research. Still, it makes me nervous that the Investigatory Committee seems to be drawing a conclusion that the awards and recognition Dr. Mann has received from colleagues in his field tell us anything more than that they have found his published papers important or original or persuasive or thorough. Especially given the comments of Dr. William Curry to the committee, discussed in the last post, that “transforming the raw data into usable information is labor intensive and difficult”, I’m not sure it’s even a good bet that the people praising Dr. Mann’s work worked through the details of his math.
Dr. Mann’s record of publication in peer reviewed scientific journals offers compelling evidence that his scientific work is highly regarded by his peers, thus offering de facto evidence of his adherence to established standards and practices regarding the reporting of research. … literally dozens of the most highly qualified scientists in the world scrutinized and examined every detail of the scientific work done by Dr. Mann and his colleagues and judged it to meet the high standards necessary for publication. Moreover, Dr. Mann’s work on the Third Assessment Report (2001) of the Intergovernmental Panel on Climate Change received recognition (along with several hundred other scientists) by being awarded the 2007 Nobel Peace Prize. Clearly, Dr. Mann’s reporting of his research has been successful and judged to be outstanding by his peers. This would have been impossible had his activities in reporting his work been outside of accepted practices in his field.
More accurately, literally dozens of the most highly qualified scientists in the world scrutinized and examined every detail that is scrutinized within the scope of the peer review process of the scientific work done by Dr. Mann and his colleagues and judged it to meet the high standards necessary for publication. But this does not guarantee that that work is free from honest error or unethical conduct. Otherwise, journals with peer review would have no need for corrections or retractions.
Moreover, the conferral of the Nobel Peace Prize on the IPCC may speak to the perceived relevance of the Third Assessment Report as far as global policy decisions, but it’s not clear why it should be taken as a prima facie certification that the scientific work of one of the scientists who contributed to it was fully ethical and appropriate. Indeed, given that the choice of the recipients for the Peace Prize is somewhat political, mentioning it here seems pretty irrelevant.
For those who hoped that this investigation might deliver Dr. Mann’s head on a platter, the judgment of the Investigatory Committee delivers something more akin to his little toe:
One issue raised by some who read the stolen emails was whether Dr. Mann distributed privileged information to others to gain some advantage for his interpretation of climate change. The privileged information in question consisted of unpublished manuscripts that were sent to him by colleagues in his field. The Investigatory Committee determined that none of the manuscripts were accompanied by an explicit request to not share them with others. Dr. Mann believed that, on the basis of his collegial relationship with the manuscripts’ authors, he implicitly had permission to share them with close colleagues. Moreover, in each case, Dr. Mann explicitly urged the recipients of the unpublished manuscripts to first check with the authors if they intended to use the manuscripts in any way. Although the Investigatory Committee determined that Dr. Mann had acted in good faith with respect to sharing the unpublished manuscripts in question, the Investigatory Committee also found that among the experts interviewed by the Investigatory Committee there was a range of opinion regarding the appropriateness of Dr. Mann’s actions. … The Investigatory Committee considers Dr. Mann’s actions in sharing unpublished manuscripts with third parties, without first having received express consent from the authors of such manuscripts, to be careless and inappropriate. While sharing an unpublished manuscript on the basis of the author’s implied consent may be an acceptable practice in the judgment of some individuals, the Investigatory Committee believes the best practice in this regard is to obtain express consent from the author before sharing an unpublished manuscript with third parties.
This ruling reminds us that there is frequently a difference between what a community recognizes as best practices, accepted practices, and practices that aren’t so good but aren’t so bad that members of the community think it’s worth the trouble to raise a stink about them. In any given situation, the best course may be to embrace the best practices, but it is not the case that every researcher who falls short of those best practices is judged blameworthy.
If there were an Authoritative Manual for Being a Good Scientist, maybe it would spell out the acceptable departures from best practices with more precision, but there isn’t, so it doesn’t.
After a close reading of the Investigatory Committee’s final report, I’m a little grumpy. It’s not that I believe there is evidence of misconduct against Dr. Mann that the committee overlooked. Rather, I’m frustrated that the conclusions of his positive adherence to the community’s established standards and practices come across as stronger than the evidence before the committee probably warrants.
Here, we are squarely in the realm of trust and accountability (about which I have blogged before). Scientists are engaged in a discourse where they are expected to share their data, defend their conclusions, and show their work if they are asked by a scientific colleague to do so. Yet, practically, unless they intend to build a career of repeating other scientists’ measurements and analyses and checking other scientists’ work (and the funding for this career trajectory is not what you’d call robust), scientists have to trust each other — at least, until there is compelling evidence that such trust is misplaced. And, while I have no objection to scientists pragmatically adopting the presumption that other scientists are conducting themselves ethically, I think it’s a mistake to conclude from its general success in guiding scientists’ actions that this presumption is necessarily true.
To be fair to the Investigatory Committee, they were charged with making a determination — establishing what counts as accepted practices for proposing, conducting, or reporting research in the community of climate scientists — that would probably require a major piece of social scientific research to do it justice. Interviewing five members of that community cannot be counted on to provide a representative picture of the views of the members of that community as a whole, so it’s understandable that the Investigatory Committee would try to bolster their N=5 data set with additional evidence. However, my sense is that the committee’s conclusions would have been better served if they had included an explicit acknowledgment of the limitations inherent in the available data. As things stand, the over-strong conclusions the committee draws from Dr. Mann’s successful track record of securing funding and publishing scientific papers are likely to raise more questions than would a report that simply stated it had found no smoking gun suggesting improper conduct on Dr. Mann’s part.
* * * * *
It would be nice to have a fourth post on the evaluation of ethics of the accusers (both “professional” and occasional) as well as the public at large and the scientific and academic establishments.
I personally think it was pretty obvious that this family of accusations was both frivolous and malicious, esp. given that no scientific result was on the table.
I’m happy to take on this assignment, though I’m inclined to deal with the accusers as types rather than named individuals. If readers think that there are particular ClimateGate accusers who ought properly to be dealt with as named individuals in the upcoming part 4, I’d be grateful if they would email me links to accounts of their statements and conduct that suggest I should so treat them.