In my earlier post about the findings of the Penn State inquiry committee looking into allegations of research misconduct against Michael Mann, I mentioned that the one allegation that was found to merit further investigation may have broad implications for how the public understands what good scientific work looks like, and for how scientists themselves understand what good scientific work looks like.
Some of the commenters on that post seemed interested in discussing those implications. Others, not so much. As commenter Evan Harper notes:
It is clear that there are two discussions in parallel here; one is serious, thoughtful, and focused on the very real and very difficult questions at hand. The other is utterly inane, comprising vague ideological broadsides against nebulous AGW conspirators, many of which evince elementary misunderstandings about the underlying science.
If I wanted to read the second kind of conversation, there are a million blogs out there with which I could torture myself. But I want to read - and perhaps participate in - the first kind of conversation. Here and now, I cannot do that, because the second conversation is drowning out the first.
Would that comment moderators could crack down on these poisonous nonsense-peddlers. Their right to swing their (ham)fists ends where our noses begin.
Ask and you shall receive.
Commenters on this post are invited to discuss the question of what counts (or should count) as accepted scientific practices, either in meteorology, or climate science more broadly, or across scientific fields.
It is fair game to look at specific kinds of behaviors displayed by scientists in the purloined CRU emails and consider whether these behaviors fit within accepted practices or do not. As always, laying out your reasoning on these matters will enrich the discussion.
It is also fair game to consider hypothetically other kinds of behaviors scientists might display when interacting with their scientific colleagues and competitors and whether these behaviors might or might not fall within accepted scientific practices.
It is even fair game to broach the question of whether some of the practices that are, as a matter of fact, accepted by scientific practitioners and communities might not undermine the goals of those scientific practitioners and communities (whether knowledge-building goals or goals about communicating with and cooperating with non-scientists).
What is out of bounds in the comments on this post: arguments about what the balance of the data show or do not show about climate trends, or claims that the Penn State inquiry committee was engaged in a whitewash or cover-up of Dr. Mann's wrongdoing, or other commentary on whether climate change is a religion or a green conspiracy or what have you.
As Evan points out, there is ample opportunity to consider those arguments and claims elsewhere on the internets. If you decide to bring them to the comments on this post, where they are officially off topic, I shall moderate them out of existence.
>>>Commenters on this post are invited to discuss the question of what counts (or should count) as accepted scientific practices, either in meteorology, or climate science more broadly, or across scientific fields.<<<
Maggie, I think the standard varies with the importance of the issue. This question of AGW is of enormous importance to the world; perhaps it is the most important question we will ever ask. If the AGW theorists are right, the world is in jeopardy and we need to work toward fixing the problems. If the AGW theorists are wrong, trillions of dollars will be wasted in a senseless effort to halt the production of a harmless gas. The cost of this will be a terribly regressive tax which will punish the poorest people in the world. Here I could make a lengthy argument about how regressive any carbon-taxing scheme would be, but to simplify the point just consider this brief fact: 70% of the cost of vegetables is energy-related. The plowing, harvesting, spraying, watering, trucking, refrigerating, etc. required to put carrots and potatoes on our tables all require energy. A carbon tax will show up in everything that everyone in the world uses every day to make life livable. The wealthy can afford it, but the poorest of the poor will suffer dearly. The enormity of the question of AGW demands the highest standard of scientific integrity. When science demands the support and commitment of entire nations, everyone becomes a stakeholder in the process, and all of the science should be available to every stakeholder.
All of the data AND the techniques that are used to model and predict climate-related issues must be available for review by the entire scientific community. I find it hard to believe that ethical scientists would object to this. The tree-ring proxies of Mann, Briffa, Jones et al are of enormous importance because this is the proxy data which has most influenced the historic temperature graph used by the IPCC.
The use of these tree ring data sets tells us whether or not the current temperature of the earth is in or out of the normal range of natural variation. We need to know the answer to this as well as it can be known.
When the tree ring sets are compared to 27 non-tree-ring data sets, the picture of the historic temperature of the world for the past 1,000 years provides a completely different view of the MWP and LIA. This is not an either/or choice. Only one picture can be right. Which is it?
It is also apparent from the hacked emails that Briffa, Jones, Mann, and Crowley were communicating on a weekly basis. It is natural for scientists who engage in similar scientific endeavors to do this, but was an artificial influence introduced into their processes as a result of the close liaison between these four scientists? This use of tree ring proxy data is one of only a half dozen bedrock issues which will ultimately determine whether AGW is or isn't real. Therefore, I contend that it is of the utmost importance that we understand how the tree ring data points were selected (which trees were in and which were out). We also must understand the math used in the computer models that converted this tree ring proxy data into a 1,000-year temperature model.
If AGW is a real threat to mankind, I will pay my share to help. However, if AGW turns out to be but a political scam and a swindle, I will resist foolish political actions with every fiber of my being. Truth is the foundation on which freedom depends. We must know the truth!
Then there's his phrase "erstwhile scientists". Does he mean "ersatz scientists"?
Precision with words is accepted scientific practice.
On another blog sometime ago, Hank Roberts called our attention to the observations of Peter Watts on science as done by scientists. It's worth linking to here. A pithy excerpt:
My own exposure to scientific practice was limited to two years in a doctoral program, but what I saw then accords very well with Watts's description. Yes, it's often unseemly, but it's the way our store of knowledge about the universe grows.
I would point out that we are not likely to know "accepted scientific practices" any better than the administrators on the inquiry panel. For accepted practices specifically in meteorology we are likely to know even less.
We do know that models are used and that sometimes, not always, the models contradict each other, not with opposite predictions but with varying predictions. Now what does the weatherman do? Does he say, "I can't predict the weather," or does he get to say, "Based on the observations, I say this"? I don't know; I would have to ask "scientific weathermen."
And then the question may arise: what scientist is qualified to say that such and such action is outside accepted practice? Well, someone who knows the practice, obviously, in a same or like position.
Then, further again, finding them may not be so easy. The law in the US has solved this problem, so to speak, by requiring this qualified scientist, when it is a medical doctor, to practice in the same geographical area and in the same type of medicine in dispute; otherwise they don't know what the accepted practice is.
This requirement recognizes that there are going to be regional differences and university differences. As much as we might wish not to bring up differences in universities, they are there, and finding the same standards among them is not so quick and easy as one might suppose.
So the scientists on the inquiry panel should be meteorologists, from his own university or from others of similar type and size, and further, they should be doing the same type of work he is. If those persons can be gotten to serve, a just and knowledgeable inquiry could be held.
Assuming we were to get an inquiry panel of qualified scientists, what report would be expected? There are several possibilities. Would it be like a panel of judges with the majority to rule by vote, and the minority or minorities to file a minority report? Or would they try to reach a consensus like a jury, which would leave the door open for them to report that they could not reach a consensus. A consensus report would be of the type made by the administrators. And there could be types of reports in between, confirming some things, finding others not proven.
And, having written my share, I will leave it to others to speculate on actions that might be taken after various findings.
I'm very interested in this issue, in a very general way (as is so often the case with philosophers): How do we decide which people and which points of view we allow in our community, and which we do not? Or, in some political philosophy jargon, what are the bounds of tolerance, and when should we tolerate the intolerant? For example, should a small-l-liberal, small-d-democratic country allow people to express Neo-Nazi views within its borders? What about the sort of left-wing views that hold 'democracy' is a bourgeois farce and we should live under a Leninist dictatorship? Still more problematically, what about people who think same-sex marriage should be illegal?
In the Mann case, we want to be more specific, and there are various degrees of specificity at which we can pose the question: What are the bounds of tolerance within the scientific community? What are the bounds of tolerance within the meteorological community? And how about within the community of experts on climate change? In trying to keep criticisms of his work published, has Mann done anything that gives us (or, really, scientists of the appropriate sort) reason to exclude him from some or all of these communities? What about Stephen McIntyre?
One way of answering these questions -- quite popular among critics of Mann and the IPCC consensus, I've noticed -- is to say that the scientific community should be as open as possible, and pretty much any attempt to exclude any point of view is an egregious wrong. But this won't work; surely we shouldn't include, say, some poor guy who genuinely thinks the ghost of Abraham Lincoln told him that global warming is caused by an Evil Hedgehog Conspiracy. We need some way to exclude the unreasonable views.
This leads to an alternative way of answering these questions, one more popular among defenders of Mann and the IPCC consensus. In order to protect scientists from the distraction of having to reply endlessly to the likes of the Evil Hedgehog Conspiracy theorists (and, often included here, Stephen McIntyre), this view suggests that only an elite group of experts -- say, those who accept the primary conclusions of the IPCC consensus -- should be allowed in. But this won't work either. If the only people allowed in the scientific community are those who accept the dominant views, those dominant views are unlikely to ever be seriously challenged. The organized skepticism of science (as Robert Merton called it) is supposed to be one of its key features, a vital part of its monumental success at producing knowledge. (Cue the montage of Galileo, Darwin, Einstein, &c.)
The problem, then, as I see it, is to figure out the golden mean between complete scientific anarchy and the scientific equivalent of the expulsion of heretics. Unfortunately, after thinking about this semi-regularly for two years, stating the problem is as far as I've gotten.
Eric Raymond has a compelling discussion of many of these points in "Error cascade: a definition and examples"
He defines error cascade as: "I've used the term 'error cascade' on this blog several times, notably in referring to AGW hysteria. A commenter has asked me to explain it, and I think that's a good idea as (a) the web sources on the concept are a bit confusing, and (b) I'll probably use the term again - error cascades are all too common where science meets public policy.
"In medical jargon, an 'error cascade' is something very specific: a series of escalating errors in diagnosis or treatment, each one amplifying the effect of the previous one. This is a well-established term in the medical literature: this abstract is quite revealing about the context of use.
"There's a slightly different term, information cascade, which is used to describe the propagation of beliefs and attitudes through crowd psychology. Information cascades occur because humans are social animals and tend to follow the behavior of those around them. When the social incentives are right, humans will substitute the judgment of others for their own."
Two excerpts that go to the heart of the matter:
"In extreme cases, entire fields of inquiry can go down a rathole for years because almost everyone has preference-falsified almost everyone else into submission to a âscientific consensusâ theory that is (a) widely but privately disbelieved, and (b) doesnât predict or retrodict observed facts at all well. In the worst case, the field will become pathologized â scientific fraud will spread like dry rot among workers overinvested in the âconsensusâ view and scrambling to prop it up. Yes, anthropogenic global warming, Iâm looking at you!"
"Actually, my very favorite example of an error cascade revealed by consilience failure isnât from climatology: itâs the the oceans of bogus theory and wilful misinterpretations of primary data generated by anthropology and sociology to protect the âtabula rasaâ premise advanced by Franz Boas and other founders of the field in the early 20th century. Eventually this cascade collided with increasing evidence from biology and cognitive psychology that the human mind is not in fact a âblank slateâ or completely general cognitive machine passively accepting acculturation. Steven Pinkerâs book The Blank Slate is eloquent about the causes and the huge consequences of this error."
I think part of this debate (the real one) rests on who can be considered an expert.
I think most scientists, with their years of expertise and long hours in the lab or beating a computer code to death, have a certain air about them that challenges the notion that someone who did not go through that process could possibly properly critique their work. While this may be the case with many people's work (I include my own primarily), climate science has gotten many a layperson off their butts and onto Excel, trying to plot and manipulate and interpret a great deal of data that is readily available to the public. Does that fact mean anyone should have a say?
No, it doesn't. But being published should be a good enough reason to allow someone to have his/her concerns heard. If they (the climate science community) wanted to settle this thing once and for all (the manipulation and conspiracy thing) they would just have a conference where people like McIntyre, who is published, could present their grievances and Mann and whoever else could answer the questions where possible and whatever else. The public could be invited for a nominal charge of $20 and the proceeds could go to the charity of choice of the keynote speakers.
I think that much of this debate (the real one) has to do with the inclusion of particular individuals who have, to their credit, worked very hard at trying to understand some of the limitations of current methodologies for the sake of scientific knowledge. One might say 'well then these individuals can use the literature to make their points,' and this is true to some extent. But, as is the case in all sciences, the process of literature review and finding proper avenues for publication suitable for the gravity of the material can be a trying business. Many times good papers are overlooked because they could not find print in the more revered journals. I think a public debate or conference of interested individuals might be just the 'trick'.
I would assume this kind of rhetoric - stating, in essence, that climate science is some sort of fraud - is out-of-bounds according to the original post's constraints.
But if we must have it, it's probably worth knowing that Eric Raymond says the same about research linking HIV to AIDS.
A good bit of this is coming out because of the publication  of communications among colleagues and (perhaps) friends. This brings us to two principles in tension:
1) Character is what you do when you think nobody will ever know.
2) If we insist that scientists live as though they were always on-camera, we won't have many scientists.
As many have pointed out, it would be "interesting" to see Antony Watts' correspondence (all of it) from the same time published. No, I don't think it's a good idea except as a thought experiment. I do think that asking the question of any of us brings up important points.
I know that a lot gets said over drinks among colleagues that we would never want made public, a great deal of it absolutely legitimate -- but we can't live our lives watching every word to avoid misunderstanding of them out of context by strangers.
And yet, and yet. Famously, electronic mail has been compared for at least 20 years to correspondence by postcard. I know that $COMPANY puts us through annual reminder sessions to the effect that whatever we write, however offhand, should be written as though it might someday become public.
We don't want scientists to start imitating Microsoft employees and avoiding any kind of "paper trail," where an absolute minimum is committed to fixed form (paper, electronic mail, etc.). We want to encourage the open and uninhibited exchange of information. We want to encourage the personal relationships that are too often overlooked but play such a large part in our ability to work productively (and enjoy it!).
But it's still all postcards and we can't ignore that. How do we strike a livable balance?
 As in, "making public"
The scientists at CERN publish their methods, data and results contemporaneously on the CERN website. In doing so they are merely perpetuating, by modern means, a tradition of openness in scientific research which has served it well, and which flows naturally and ineluctably from the Scientific Method itself. The same modern means was available to Mann, Jones et al, but not only did they disdain its use, they sought to conceal their work, and their anguish at its inadequacies, from any kind of proper peer review (cf "...even if we have to redefine what the peer review literature is...", inter alia).
Had they followed CERN's example, none of this would be happening.
Their failure to do so invites the adverse inference that their work was defective, and certainly not capable of justifying a massive impost on the global economy.
And another thing: whether or not Mann complied with Jones' request to delete any emails, he did not immediately reply to Jones to upbraid him for making the request. This failure alone deserves censure. Had he done so, his position on the fourth count would look rather better than it does.
Oh boy, do we get service here! :-)
I think there is no serious doubt that the CRU researchers believed they were footsoldiers in an ideological war. It is clear that this "battlefield" mentality led them to unacceptable actions. What is not clear is how consequential these actions actually were in contributing to the scientific consensus. My suspicion is that they were largely irrelevant.
At one point, Phil Jones pledged, "Kevin and I will keep [two papers] out [of the IPCC AR] somehow - even if we have to redefine what the peer-review literature is !" Both papers were ultimately referenced in the IPCC report, and their conclusions incorporated. If this was a fraud (I would not make such a claim,) it was one of the least effective frauds in history.
Now, I'd love to rap the CRU scientists for thinking of themselves as troops on a battlefield instead of objective ivory-tower sages. But the thing is, I can't actually disagree with them. They are involved in an ideological war; this is true regardless of whether they choose to behave as detached philosophers or to fight back.
Quite possibly, these guys acted unethically. What can't be denied, I think, is that if they did act unethically, it was largely because they were under enormous pressure from a politically motivated denialist movement and they didn't know how to handle it properly. If there is a "properly" here.
If you doubt that there exists a climate denial movement, I would simply point out that the most (in)famous of the e-mails, the "Hide the Decline" missive, is still touted as smoking-gun evidence of a coverup even after it was conclusively shown to be best scientific practice in action.
If it is necessary to come up with a moral-of-the-story here, I would submit the following: Every science PhD should be required to demonstrate a serious understanding of science communication in addition to science itself. If, as Sagan says, "our future depends, powerfully, on how well we understand this Cosmos," then it is equally true that our future depends powerfully on how well we communicate our understanding of this Cosmos. Even to people who may not want to hear it.
@ Evan 11
These are all good points. I would join you on all of them except I have some questions. How would you think science communication differs from just good communication? And what leads you to believe that these science PhDs you refer to could have or obtain a 'serious understanding,' as you call it, of this thing? For some reason they are slow learners in that area. (I generalize, I jab.) I can't tell them anything, and it does not surprise me one bit; it's expected. There are reasons for that, and not all good. Another day maybe. Sorry for an unpleasant wall there; let us pass on.
Let us take a scientist-philosopher who has shown a fine talent for analysis and clear explication. But is that person ready, willing, and able to give advocacy speeches that work, and put what is spoken down in writing where Joe Blow can catch it? We don't know, do we, and he or she may not know either. That advocacy is not an ethically spotless business, as you touch on, and scientists who can do it with effect and with high ethics, like T.H. Huxley for example, are rather rare. There are some moderns. But of what you wish for, 'a serious understanding of science communication,' well, each may decide for him or herself what the situation is. I think, with a feeling of about 90% certainty that I am right, that your wish is not doable in practice in the cold cruel world. Time to think some more. I know I will. It's fun to think what others should do.
So far we have not had much mention of Mann as victim, though you graze it. There are further procedures to be followed. I'm guessing he's in the AAUP (American Association of University Professors) and will have a certain amount of process support from fellow members. So suppose for a moment he is in the wrong. What is to be done? Handspank, career loss, guillotine? What does it depend on? Does the faculty senate decide or the president of the U? We will see.
And then suppose that you are him. You would need your friends and university to come to your aid more than ever to the full extent they could, especially if you were in the wrong in some way. And does he have children in school who will be affected, does he have a child with hare lip or crooked teeth to be repaired, is his wife very ill, is he supporting his aged mother in assisted living which insurance does not cover? Have they adopted a child? Ho, we are talking about more than one person when we say Mann? These too are ethical questions.
And in all the questions and answers should we not recognize occasionally, as a reminder, that Mann is not accused of any societally recognized crime: not federal, not state, not Hague. Seems worth remembering.
And also, in all the questions and answers should we not recognize occasionally, as a reminder, that Mann is not accused of anything that caused money damages to a single person in any way, anywhere, so no one can step forward and say, "He owes me this amount in damages." Seems worth remembering.
Now, your apparent main thrust, analysis of why they did what they did, sounds accurate in the scheme of things ("no serious doubt," you say). I like it. Thank you for that. But since they deny, in a sense, that they did what they did, as defined by others, we will not be able to reach the question of why with them, except, as one commenter pointed out, over a cup of coffee perhaps.
As a PhD student working at CERN I'd like to respond to the statement in #10 about how methods, data, and results are shared at CERN. In general CERN is a very open environment, however, there are some qualifications to that.
First some background on CERN. Most scientists who work at CERN are members of one of several large scientific collaborations; these collaborations can have hundreds or even thousands of members. These collaborations are usually based around a large experiment that was designed and built by the collaborators.
As a rule, the data obtained from an experiment like this is tightly controlled by the collaboration and can only be used by its members. This is to prevent misuse of the data as well as to allow the members to benefit from the work they have contributed to the experiment. This doesn't mean that any collaborator can do whatever they want with the data. All publications based on data from the experiment must go through a thorough internal review process before they are shown outside the collaboration. Methods and results are scrutinized, and only after approval can they be submitted to a journal for publication (where they will receive further external review). As far as I know, raw data is rarely shared outside the collaboration.
Within collaborations all data is shared. Methods and source code are shared to a large extent, though there is definitely intra-collaboration competition and people can be secretive until they have results they are confident in. There can be a lot of politics involved, and the process isn't always perfect but good science does get done.
Once something has been published it is essentially like any other physics publication and can be discussed and criticized by anyone.
This is my perspective as a relative newcomer to one CERN collaboration (I've been involved with the ATLAS experiment for about 3 years).
The public is artificially selective in its concerns over scientific misconduct. The comments on this post are mostly inaccessible or incomprehensible to the non-scientific public. For them, some education and trust offer better comprehension of episodes of misconduct than the typical commentary in general reporting.
I think there is no serious doubt that the CRU researchers believed they were footsoldiers in an ideological war.
This seems a much stronger claim than the one supported by the emails.
In 13 years of emails, of these guys' innermost thoughts, and much of that time under constant political attack... like, literally, about half the population thinks their life's work is a sham... they get upset and say things that are inappropriate sometimes?
That makes them footsoldiers? I think it's more parsimonious to think they were scientists, trying to pursue a hard life of understanding the world and getting frustrated on occasion. Being a footsoldier in a war implies that their goal was something other than just doing their science and being somewhat respected.
"""Commenters on this post are invited to discuss the question of what counts (or should count) as accepted scientific practices, either in meteorology, or climate science more broadly, or across scientific fields."""
This cleaves me in twain as the Feyerabend in me wars with...er...the Lakatos?
So, do we need answers to the broad question of what counts as scientific practice (good, bad, acceptable) as well as specific questions of what counts as (good or bad) psychology? And are we looking at every aspect of the behavior of people claiming the title of "Scientist"? E.g., do we distinguish between the scientific practices of "doing science" (i.e., generating "scientific results"), "curating journals with peer review", "providing expert testimony", "training new scientists", and "educating the public"? Because these are all quite different if often tightly inter-related activities.
So, I think, Evan, that requiring *every* science PhD to demonstrate a "serious" understanding of science communication (is it sufficient if they know it's hard and they aren't very good at it? do they have to be able to communicate their science to first graders? to people who hate them?) is, interpreted as I suspect you meant it, way too much. Most of us aren't Sagan, and yet lots of not-Sagans can do a lot of great science.
Now, *supporting* the science communicators among us is a good thing. (Done right!) In my department's duty allocation there is provision for writing about our research for the mass media (well, for a person to do this; it'd be nice if you could get a few hours for doing that systematically for your own research, e.g., by maintaining a blog).
You know, I'm not even a scientist, but given the amount of BS I see the "skeptical" crowd posting, I would go insane if I had to work in the field. I wouldn't just have to spend significant time and effort doing my science; I would then have to spend many times more addressing a bunch of denialists, not just on my own research but also educating them on the earlier research by other scientists, simply because they are so lazy and thick-headed that they cannot or will not crack open the journals and actually try to understand what they are reading.
In reply to Evan Harper: it seems clear that the war you are referring to is the one between people interested in climate science and actually doing the science, and those uninterested in the science except as it turns out to impinge upon their political beliefs.
Also, I think it silly to demand that science PhDs demonstrate an understanding of science communication. They don't all have the time or the ability to do so. That's what university PR departments are for, is it not?
Certainly one can demand that PhDs who go out into the public arena demonstrate some competence in communication, but for the many more who don't, why waste their time? For example, MBH comprised 3 people, only one of whom many people seem to think exists, i.e. Mann, because he speaks out.
Hmm, I think I might be getting a little off topic.
What I think there certainly is, is a gap between the understanding of day-to-day science operations and what the public and many decision makers (let alone the media) know about how it all works. Scientific discussions and arenas are subject to the same constraints of personality, influence, money, and time which bedevil every organisation in existence. However, because it's an ongoing discussion with actual evidence involved and an ethos of taking account of the data, things do progress, unlike, say, in politics. Any attempt to get people to agree upon things which they disagree about, to coordinate the results of thousands of scientists' research, will end up with little political spats about who said something nasty about whom, what influence so-and-so has with the committee because he knows them, whether X is worth listening to or not, and so on.
The IPCC needs a bit of bringing up to date, but the problem is how to do that without letting Inhofe and others gut it into irrelevancy. People have to remember that most of climatology has been run on a shoestring for years, and the IPCC is mostly staffed by volunteers. They don't get paid beyond a few expenses, and anyway it's an appendage to an organisation, the UN, which many countries want to warp to their own agendas. So to expect it to be up to date in all things and super-efficient is lunacy. It will need more money to improve, and that also runs the risk of bureaucracy.
Just a couple of general comments on conduct:
1) Don't delete your source/raw data
2) Don't try to hide information if it is requested under an FOI order.
The point of keeping source/raw data should be fairly obvious: it allows others to see whether tampering has happened and/or to develop better predictive/retrodictive models.
Trying to avoid giving out data when an FOI order has been placed on you is illegal.
Both acts create the appearance of deception.
Deleting raw/source data is also an act of theft.
The data has in most cases been paid for by a company or government and they own it, not the person using it.
An example would be CERN and the LHC, the data they will collect has been, mostly, paid for by the taxpayers of Europe as have the wages of most, if not all, of the people working there; it doesn't belong to the scientists.
I'm not suggesting that anyone should publish their data before they've had a chance to make use of it for their own advantage/advancement; but the data should be released after it is used as a matter of course without a need for coercion.
3) Don't lie.
You should try it sometime.
A few thoughts on Eric Raymond:
His status as a commercially-minded promoter of free/open source software might tend to obscure a few points. The first is that he's not half as good a programmer as he wants people to think -- he end-of-lifed Fetchmail for no really good reason except to prove a somewhat dubious point about product lifecycles, and the people who took over the codebase inherited a program suffering from not only a lot of neglected bugs, but a rather poorly thought out design that made it somewhat the opposite of a category killer.
The second is that Raymond's critical thinking skills are really bad. Since 9/11 he's gone from right-libertarian to firebreathing wingnut, as well as openly racist and homophobic (if not quite to a Stormfront degree, at least Bell Curve territory). (And given his somewhat bitter and self-serving attitude towards school bullying ("I had to tough it out so why shouldn't everyone else") I wouldn't be surprised if he had sociopath tendencies on top of it.)
All that taken together, I'm not sure I'd put any credibility in anything he says outside the narrow cathedral/bazaar paradigm, and even there he's a hypocrite (when was the last time the Jargon File was updated?).
On the matter of climate scientists hiding their data, it reminds me of an old comment (I want to say from de Tocqueville, but I'm probably wrong) about US race relations: "The white man makes the black man shine his shoes, then denigrates him for being a bootblack." The same people crying fraud in this case are the ones that created the paranoid, politically-charged conditions climate scientists work under. Whether they want to admit it or not, they've created something like a self-fulfilling prophecy, except the "fulfillment" is entirely an illusion.
Some of these comments are coming very close to crossing the line for the conversation for which this particular comment thread has been designated.
Talking about how the background political climate may be a constraint on how scientists interact and communicate (whether amongst themselves or with non-scientists) is fair game. But let's not, for this one thread, get bogged down in the separable issue of who or what may be responsible for creating the background political climate, or which quotable authors you think may have been duped by political interests (possibly in combination with weaknesses in their own cognitive skills), or even in the appealing opportunity to take snarky swipes at each other.
For this one thread, let's see if we can keep the discussion focused on the big question: what does real scientific practice (not just the version of it in the "scientific method" box in the middle school science textbook) look like?
How about this:
1) No code.
If there is a computer program in question then the author should provide a description of the algorithm sufficient that an expert in the field can follow it.
Putting the code online makes it unlikely that anyone will implement the algorithm for himself or herself, and since complete verification of a program is not theoretically possible in general (see "halting problem"), independent code is the only check that is possible.
The point being that the possibility of two independent pieces of code arriving at the same wrong answer is vanishingly small.
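The cross-check this commenter describes can be sketched concretely. The following is purely illustrative (the running-mean "algorithm", the function names, and the tolerance are invented for this example, not anything from the papers under discussion): two people implement the same published algorithm independently, then compare outputs on the same input.

```python
# Illustrative only: two independently written implementations of the same
# published algorithm (here, a simple running mean), cross-checked on the
# same input. Agreement within tolerance is evidence both are correct;
# disagreement proves at least one is wrong.

def running_mean_a(values, window):
    """Implementation A: straightforward slice-and-average."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def running_mean_b(values, window):
    """Implementation B: incremental sliding-window sum."""
    out = []
    total = sum(values[:window])
    out.append(total / window)
    for i in range(window, len(values)):
        total += values[i] - values[i - window]
        out.append(total / window)
    return out

def cross_check(values, window, tol=1e-9):
    """Compare the two independent implementations element by element."""
    a = running_mean_a(values, window)
    b = running_mean_b(values, window)
    return len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))
```

The point of the sketch is that neither implementation was derived from the other's code, only from the same written description, so a matching answer is meaningful.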
2) Don't ask someone for something that isn't his or hers to give.
If a researcher is using (with permission) somebody else's data, that doesn't mean they are allowed to give it to YOU. It is fine to ask where the original data is, and the researcher should tell you, but it is up to you to get the data from the owner.
3) Don't ask for something you ALREADY HAVE.
Scientists work for a living. If you already have the thing in question, don't ask for it. That isn't science, that is stalking.
4) CRACK A BLOODY BOOK
If you aren't up to speed in a field then that is your problem. Don't pretend that something is wrong just because you haven't done the homework necessary to understand it.
5) Don't lie your ass off. (OK, I said this in a previous post, but given some of the comments, I obviously need to say it again.)
I would say the fundamental requirement of accepted scientific practice, in any field, is documentation.
We are always told, from school to PhD, that the Methods section should contain enough information for another (competent) person to replicate the work. And we all know it never does. There is always something we have to figure out ourselves, or email the original author about.
It's not possible to describe everything one has done in a journal article (the editors want your conclusion, and the evidence and logic that supports it), but the details should be available somewhere (even if it's just a handwritten note on a printout in a file "this data not used as it doesn't look right"). I think probably everyone who has done a PhD involving experimental work has had pages of data that "did not look right", and which were not included in the analysis. I also think most people have the data filed somewhere, so it could be included in an analysis (if anybody thought to ask for it and was able to find it).
The conventional scientific paper follows IMRaD: Introduction, Methods, Results, and Discussion. The idea is that the Introduction (subjective) sets up the question, the Methods (objective) describes what was done, the Results (objective) describes what was observed, and the Discussion (subjective) develops the implications of the results and seeks to persuade readers to accept the author's interpretation.
I think accepted scientific practice must involve absolute integrity in the 2 objective sections -- which is where documentation comes in as the main requirement.
The discussion is supposed to be based on the results. Somebody may take some results and interpret them completely opposite to the current paradigm. That is not fraudulent science (and if the logical arguments are good it may not be bad science).
I haven't read the "purloined emails". If their writers were trying to suppress results or methods, or criticism of results or methods, then they were not following "accepted scientific practice" (even if it is accepted in their field; that just means their field is not scientific).
If they were trying to suppress interpretation, then they were acting as human scientists.
The basic Arts were grammar, logic and rhetoric.
Grammar may be important here, but I don't know enough to comment.
Logic is the domain in which scientists are trained. They learn grammar only when forced. Some learn rhetoric, but most have the foolish belief that Results require no rhetoric.
Rhetoric is persuading people of your cause.
There was a time when every person described as educated knew grammar and logic and rhetoric.
In our world, some people learn logic without learning rhetoric; others learn rhetoric without learning logic.
Thus those who understand the world cannot lead; and those who lead do not understand the world.
#10 TomFP said
Matthew has already mentioned some caveats about CERN, but even granting TomFP an accurate statement about CERN, what happens there does not reflect scientific practice elsewhere.
Chemists and biologists in general don't do this (I'm sure someone somewhere might be interested in the readouts from the plate reader for every experiment I've done, but we're not going to waste time and effort mounting the blessed things). There is NO expectation in science that anyone anywhere should be able to download *original* data (as opposed to analysed data) wholesale. Some astronomers do this, but in general open access to raw, original data is very rare.
And very few people keep original data forever. In Australia, according to the National Health and Medical Research Council, clinical trials data must be retained for a minimum of 15 years, but other data need only be kept for 5 years. So the whole "they should have kept their original data forever and a day" expectation is nonsense.
Well said, Elspi. It's especially important when dealing with phenomena like AGW, because they are much more complex than what's covered in, say, a typical undergraduate physics curriculum. If you're going to challenge the AGW consensus, you need to put the time in, and not leave out any part of the process.
That means starting with introductory material and working your way up. That's how you get the theoretical framework needed to interpret the massive amount of empirical data, from multiple independent sources, that the AGW consensus draws on. To even know of the existence of the data requires reading all the historic and current peer-reviewed literature. Interpreting it requires a thorough knowledge of statistics, and you'll want to conduct your own experiments, and develop and test your own models.
Very few people can do all that on their own. For most of us, the only practical route is an extended apprenticeship: obtaining undergraduate and graduate degrees, and doing original research under an established advisor. By the time you get that far, you'll be interacting regularly with the community of professional peers that have been working on this for decades: discussing it informally, in person, by phone and by email (perhaps more cautiously than Phil Jones did); and formally, by presenting your ideas at the same conferences and publishing articles in the same journals they do, which unavoidably entails exposing yourself to their unsparing criticism 8^(!
That's "real scientific practice." There are easier ways to make a living.
I work in a related field (geology - Mann and I both go to American Geophysical Union meetings). Many of the criticisms of Mann's work could apply to the kind of things I do, so I'm watching the Penn State discussions with interest.
Some of the criticism has to do with access to the code behind some published results - the critics want access to the raw code (if I'm understanding the complaints correctly), and argue that it's bad scientific practice to leave the code out of the methods sections of papers.
My corner of geology (metamorphic petrology/structural geology) also uses various sorts of modeling to try to understand what we're seeing in rocks. Some of the models are based on well-established physics (like heat flow), and are used to test whether our explanations for past processes are plausible. Some models are attempts to understand physical processes in the first place - I'm thinking of the work of people who do physical modeling of faults to try to understand why some become inactive and others become active.
I read a lot of papers in which the results are, at least in part, based on the output of some kind of computer modeling. The methods sections usually include general statements about the model used and the reasons for choosing the parameters, but they don't generally include the code itself. Some of the programs are freely available to other scientists (usually through the author's web page). In other cases, the software is a work in progress, and one has to collaborate with the author in order to use it. Whether the programs are publicly available or not, they are often difficult to use without advice from someone with more experience with them.
From what I've seen, Mann and colleagues follow similar practices when dealing with modeling in their work.
It seems to me, from the little bit of exposure I've had to the CRU debacle, that there are problems with the whole FOI process. It was assumed that FOI requests would be relatively infrequent and not present a major distraction to research. But from what I read, the rules required at least 18 hours of work per request, and many, many requests were being made. I can only conclude that in practice the FOI process was being used in a manner analogous to a Denial of Service attack, i.e. overwhelm a site/organization that you wish to harm with a slew of requests. So clearly we have to construct some sort of system that allows reasonable requests to be honoured, but doesn't allow DOS attacks.
As we've seen, once scientific results are seen as dangerous to a group's ideology or commercial agenda, the motivation for bad-faith attacks exists. Scientists have a tough enough job as it is; adding public communications and compliance with various laws (such as FOI) seems way too onerous to me.
One thing did come to mind concerning the communication of science. I remember having to take time out to demonstrate a certain degree of competence in a foreign language. I don't begrudge that time, but in the modern context perhaps science communication could be substituted.
Just a general observation -- the "best practice" standards of open information and professional behaviour at all times are designed to work amongst peers, where there is generally a genuine research purpose to requests for data and methods. Yes, of course there is a great deal of professional sniping and using data and methods precisely to see if someone's conclusions are replicable and robust, or if that someone CAN be shot down. However, the ground assumption there is that the ultimate goal is not just to make a name, but also provide a better approximation of reality and more robust knowledge of the world.
In the case of the highly politicised climate issues, requests for data and methods have for years been used as political and media-manipulation tools by political special interests rather than other researchers in the field. While professional disagreements shouldn't present a problem to anyone with confidence in what they're doing, for over a decade now many researchers have found their work torn apart not in professional peer-review settings, but in general media opinion pieces, with themselves as targets of deliberate smears along the way.
With the best will in the world, I honestly don't see how the "best practices" model could stay in play without a few knocks and dents as researchers get resentful and defensive. Omega Centauri's point about FOI requests being used as DOS attacks is spot-on, and has precedent in Sen. Inhofe's activities. Add to that the fact that many researchers have become aware that even when they try their best to clearly communicate what they are doing and what data they have to the general media, they will often be deliberately misquoted and misused -- for one example, the deliberate conflation of GLOBAL sea ice with ARCTIC sea ice in the Michael Asher Daily Tech article of January 1, 2009, which misused an interview given in good faith by Bill Chapman at UIUC's Arctic Center in order to disseminate a (deliberately?) false meme about the climate. After researchers have been burned, or seen their colleagues burned, by this kind of behaviour on a consistent basis, it's almost inevitable that there should be a closing of ranks and resentment against further demands for information and openness and communication. How can that be prevented? To what degree should researchers be open political targets, too?
The problem is that Mann et al. are trying to do science, and the AGW deniers are fighting a war, politics being war by other means. We know that "all is fair in love and war". Lying, cheating, stealing, killing your enemies, shock and awe, terrorism, genocide and scorched-earth policies are not "accepted scientific practices"; they are war-fighting practices that have been used in the past and that are still used today.
To the extent that Mann's detractors use such tactics, they legitimize the use of those tactics by Mann in retaliation. Responding in kind is a legitimate response under the rules of war. First use of nuclear weapons is considered (now) to be against the rules of war and a war crime. So is first use of chemical weapons. Retaliatory use of nuclear weapons and even chemical weapons is not considered a war crime, but a legitimate act of self-defense. Retaliatory use is considered acceptable because it deters first use, which is not acceptable.
Adhering to "accepted scientific practices" is not a suicide pact.
The only "science" that counts is what is published in peer-reviewed journals. Violating accepted scientific practices to publish, or to bar from publication, any manuscript is unacceptable. It is unacceptable to all scientists. Scientists who do violate accepted scientific practices open themselves up to criticism that can exceed accepted scientific practices.
Climate researchers as scientists are stakeholders in the science of AGW. As residents of Earth, they are also stakeholders in the Earth's climate. Stakeholders with a greater understanding of the long term consequences of AGW. Adhering to accepted scientific practices as scientists does not reduce their right to act as stakeholders of the Earth's climate. As I said before, adhering to accepted scientific practices is not a suicide pact.
The only science that counts is the science published in peer-reviewed journals. When material is published in peer-reviewed journals that deviates from acceptable scientific practices, the stakeholders in that scientific enterprise have an expectation that the unacceptable practices will be corrected, and they also have an obligation to the other stakeholders to make that correction happen. Those stakeholders include the authors, other scientists in the field, readers of the journal, editors of the journal, peer reviewers of the journal, colleagues of stakeholders, funding agencies, policy makers who would use that published material as the basis for policy, and also individuals affected by those policies. In the context of AGW, everyone is a stakeholder.
In the context of the stolen emails: stealing emails is not an accepted scientific practice. Behaviors as described in the emails rise to the level of non-accepted scientific practices only if they affect the publishing or non-publishing of material in peer-reviewed journals.
An FOI request is not a scientific process; it is a legal/political process. A request by a colleague for original data, to be used for purposes laid out by the colleague to check results in a peer-reviewed paper, is a scientific process. An FOI request used as a DOS attack is an example of a non-accepted scientific process.
If scientists and stakeholders are going to work together on a common scientific enterprise, they need to have a common set of rules and a common set of acceptable scientific practices. If one party finds it acceptable to use FOI as a DOS, then all stakeholders should have the implicit reciprocal right to use not accepted scientific practices against that party. The problem is that once the conflict escalates out of a scientific dispute, then non-scientific practices become acceptable and the winner will be the one who goes nuclear first, or the one with the most lawyers, guns and money.
The problem isn't the use of non-scientific practices, the issue is who used them first. In the case of Mann et al, the AGW deniers went non-scientific first. Acceptable scientific practices are not a suicide pact. People attacked non-scientifically have every right to respond in kind. That doesn't change the science.
Except, of course, that Mann's code is freely available. People are still asking "Why won't Mann release his code?" when it is, and has been for some time, sitting in plain public view. See http://www.realclimate.org/index.php/data-sources/ for this and other data and code that climate scientists are "hiding" in plain view on publicly accessible web sites.
This is the main problem, that there has grown up a series of memes around climate science which refuse to be budged by mere facts.
To see a linear exegesis of how CRU and associates wrecked climate science and dented real science for years to come, you might want to go through John Costella's Climategate analysis, where he takes the emails in chronological order and links directly to the email in question for every criticism. His conclusion:
"Climategate has shattered that myth. It gives us a peephole into the work of the scientists investigating possibly the most important issue ever to face mankind. Instead of seeing large collaborations of meticulous, careful, critical scientists, we instead see a small team of incompetent cowboys, abusing almost every aspect of the framework of science to build a fortress around their 'old boys' club', to prevent real scientists from seeing the shambles of their 'research'. Most people are aghast that this could have happened; and it is only because 'climate science' exploded from a relatively tiny corner of academia into a hugely funded industry in a matter of mere years that the perpetrators were able to get away with it for so long."
It's amazing at this late date to see all these erstwhile scientists coming forward to defend what was done here in terms of the ethics alone. Leaving aside the truth or falsehood of climate science, it's quite evident to even the most casual observer that all of science is tarred and besmirched by these actions *regardless of intentions.*
It's a matter of trust and belief in the non-partisan nature of science. The CRU folks have reduced that in a serious way that touches all sciences.
@ Kim, who uses modeling similar to Mann
There are always people talking about real science, just as there are about real religion, and it just so happens that theirs is the one that is real. They want to expand their own view to "all science" when they have no basis for doing so. The Feyerabend and Lakatos arguments about scientific method and results are interesting, and they have their incarnations in James Watson and E. O. Wilson when both were at Harvard; the clash is described by Wilson in his autobiographical book Naturalist (there's probably another side). But the situation of Mann is a practical one at this time. Hindsight analysis will be just that.
The old novel by Britisher C. P. Snow (also author of Science and Government, etc.) called The Affair, no, not about men and women but about a sticky business at Cambridge, would make a good leisure read about a similar situation. The ultimate danger to be avoided would be the loss of close relations and confidences with colleagues, to be never trusted again. Mann could avoid that, but his press statement claiming victory is of questionable effect on that, since a proceeding was left open.
In any case, if you were to find yourself in some intimidating situation (what would it be for a geologist, the anti-earthquakers? :), the wisest thing to do, and early, would be to consult an articulate lawyer who would help you sort out and recognize the issues and possible answers involved, and on occasion speak for you if necessary. Not only the wisest; not to do so would be foolish. That's my opinion on that and I'm sticking to it!
Cheers to you Kim
vanderleun's response above doesn't fit the topic of the post, but it is actually a good example of what is not accepted scientific practice. There is no science at all in the post, or in the blog that is linked to. It is simply a rehash of the emails that were stolen.
The only accepted scientific practice to refute an argument is with data and analysis which directly bear on the subject in dispute. Does John Costella do that? No, he doesn't. He simply analyzes the stolen emails. The stolen emails are not data about climate or climate change. They are personal communications between scientists. They are easy to take out of context, but in any case they provide zero probative value on the issue of climate change.
Are there any temperature measurements in the emails? Any models? Any radiative forcing calculations? No, there are none of these. So what use is there in analyzing the emails? None as far as determining if there is AGW or if there will be adverse climate change.
If Mann et al were completely wrong in every single bit of data and analysis that they did, that does not make the opposite of what they have been saying correct, it simply makes the issue of climate change unknown. To find that AGW will not happen, and that there will be no adverse climate change, it is insufficient to find that Mann et al are wrong, it is necessary to positively show, with data and analysis that AGW will not happen, and that rising CO2 levels will have no adverse effect on the climate.
The AGW deniers have not done this. They have not shown with data and analysis that AGW will not happen; they have simply thrown up smoke to try to discredit Mann et al.'s data and analysis with hype and even lies of their own.
I am quite sure that they have not done this because they can't do it: there are no facts and no analysis that are compatible with no AGW and no adverse climate change.
I might be straying too close to the forbidden zone with this reply, so before I get canned I'll say it's a great thread with some enlightening comments. Cheers to all concerned.
I have trouble taking vanderleun's link too seriously, since at a glance it's a litany of gotchas and thought crimes with few genuine transgressions. And even with the emails for reference, it's still out of context.
He even makes a big song and dance about the 'hide the decline' thing. I don't know why people like this one so much; I am a casual observer and I'd been reading about the questionable status of tree-ring proxies for a year before the CRU hack. It's hard to take someone seriously who holds up a lack of perfect consensus on complex matters as a negative (or the desire to talk it out and put up a united front publicly; this goes on all the time in every organisation comprising more than one person. Geez, parents do it). But I haven't read it all.
There's a limited amount of time, money and page space to be allocated in science. Competition for it causing the odd less-than-ideal behaviour should come as no surprise. The world is not short on habit, cliques, insularity, prejudice and arrogance. Competition makes for dogged, stubborn and combative people as well. I really don't know why so many climate skeptics (as I call folks who aren't outright deniers but are fairly easy to tip to the negative side by bad PR like this) are shocked.
The page also suggests that Mann and Jones exert too much influence over what gets published in the field. That may be so in principle, but I suspect experts in a given field advising journal editors, even vigorously, isn't that uncommon in any science. But I don't know. Perhaps some of the scientists around could chime in.
If this has damaged all science, it's because the public never really understood it in the first place, still having some image of avuncular guys in lab coats with Brylcreemed hair and pipes who 'make everything alright', straight out of '50s car ads.
Regardless, I'm sympathetic to the idea that this is so damn important we need everything perfectly above board, and seen to be above board, at all times. But here I think the skeptics are setting the terms more than they actually deserve to, with all the calls for greater transparency and openness (much of which is groundless, as mentioned) in the name of scientific principles. What many of them want, in essence, is all the data laid out for them so they can check it independently.
But that's not science either, and we've seen repeatedly over the last few years that skeptics can pick on statistical methods, or the inclusion of this or that factor, and make quite sensible-seeming arguments to the lay reader. But these arguments are false and ignorant of actual scientific practice.
I may be being too charitable, but I read those emails and I see people mostly trying very hard to express in clear and simple terms something that is not clear and simple and isn't going to be. Half of the major gotchas in the public sphere are over the creation of a graph or two, and the supposed manipulations therein were merely desires to remove distractions from the overall thrust of the argument. An ultimately poor practice, maybe, but a dedicated quibbler can (and does) make noodle soup out of all the proxies, anomaly calculations, polynomial fits, etc., and make them say whatever they like.
As much as I support openness and backyard science in principle, the call for complete transparency and the inclusion of blogs as a valid source of criticism is like the witch's dunking chair of public discourse. If the skeptics are right and credibility is all but gone (and I don't think it is, but that's by the by) they'll accept nothing less than an ordeal climate science is unlikely to survive in order to "restore" it.
Perhaps my cynicism is over the top, but I get very worried about complex science in public policy when thinking about this. I can't see the media, the populace and politicians/interested parties doing right by something this complicated.
But, I guess, when have they ever? My understanding of the practices isn't very detailed. Maybe any "proper science practice" review is just blanket liquid bans on aircraft and it will restore confidence and calm down after a while.
(hmm, too many words, not enough said. Oh well)
Then there's his phrase "erstwhile scientists". Does he mean "ersatz scientists"?
Precision with words is accepted scientific practice.
Rather, to see that Climategate actually reflects the unceasing attempts by climate change deniers to undermine real science, see http://www.realclimate.org/index.php/archives/2009/11/the-cru-hack-context/
To quote from another commentary:
There is one point about using computer models that I rarely see addressed: they tend to encapsulate a lot of assumptions -- necessarily so. You can't simulate climate on the quantum level. But such assumptions aren't always carefully elucidated, especially when some large model is being shared among different researchers. The obvious solution to this is to publish the model source code, subject it to review and testing of some sort, but I'd suggest that this is often the wrong approach.
Being able to say researcher X can duplicate researcher Y's results -- reproducibility -- is one of the most important criteria for good science. Yet if X uses Y's computer model, any support for a claim of reproducibility is severely damaged. There may be erroneous assumptions, embedded data -- and software bugs -- in a model. What needs to happen is for researcher Y to publish the design of the model and the various premises underlying that design. These are much easier for domain experts who may not be software engineers to evaluate. Then X needs to reproduce the model. If the design and premises of the model are acceptable and the results can be reproduced with this new model, a much stronger case can be made for actual replication. Yet models and significant chunks of model code (which are likely to be the trickiest bits and thus the most likely to have errors) are often shared. That's a problem; X must be very careful to recreate the model and not just copy all or part of it. If the code is published, X must avoid relying on it and instead rely on the published description of what the model does.
I'm not attacking climate models in particular here, though they often are of a type that is harder to validate independently through simplified tests the way models of simpler physical processes are. I've worked on computer models myself. They have enormous power to aid analysis in ways that are impossible otherwise. The manner in which they are built -- often piecewise and incrementally, with experimentation along the way informing further development -- makes them as much a representation of results as a tool for achieving them. This tends to argue for their publication, but the implications of any reuse must be well understood. This includes the understanding that a model encapsulates results -- potentially erroneous results -- and not just methodology, and so the same model is to be considered largely unusable as a tool in credible attempts to validate those results. Of course, any time a model's results have been properly reproduced, its use in producing further results gains support, though once again independent reproduction of those further results should be required.
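The replication workflow this commenter describes might be sketched as follows. Everything here is hypothetical (the toy exponential-decay "model", the parameter values, and the function names are invented for illustration): Y publishes the premises and a reference run, and X reimplements the model from the description alone before comparing outputs.

```python
# Illustrative only: independent model replication. Researcher Y publishes
# a model *description* (premises and equations) plus a reference run;
# researcher X writes fresh code from that description, never reusing
# Y's code, and checks agreement. The "model" is a toy exponential decay.

import math

# Premises as published by Y: explicit, reviewable by domain experts
# who need not be software engineers.
DECAY_RATE = 0.5        # decay constant, per time step (hypothetical)
INITIAL_VALUE = 100.0   # starting quantity (hypothetical)

def model_x(steps):
    """X's independent implementation, written from Y's description:
    value(t) = INITIAL_VALUE * exp(-DECAY_RATE * t)."""
    return [INITIAL_VALUE * math.exp(-DECAY_RATE * t) for t in range(steps)]

def replicates(reference, tol=1e-6):
    """Does X's reimplementation reproduce Y's published reference run?"""
    mine = model_x(len(reference))
    return all(abs(a - b) <= tol for a, b in zip(mine, reference))

# Y's published reference output for a 4-step run (in reality this would
# be a table of numbers from Y's paper, not recomputed like this):
reference_run = [INITIAL_VALUE * math.exp(-DECAY_RATE * t) for t in range(4)]
```

The design choice worth noting is that `replicates` compares against published *numbers*, not against Y's code, so any bug or embedded assumption in Y's implementation cannot silently carry over into X's "confirmation".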