Wegman plagiarised, but there is worse

Well yes, as is slowly becoming obvious. Deltoid reports on USA Today (or you can also look at the full Mashey). In the curious world of academe (which I presume Wegman aspires to) plagiarism is a no-no far more serious than just getting the wrong answer; and it has the virtue of being fairly easy to spot.

But whilst plagiarism is bad (possibly even fatal) for your academic reputation, it doesn't directly say anything about science, or the validity of conclusions. It is evidence that the author has been sloppy and - in this case - bolsters the argument that the author didn't really understand what he was doing, or was deliberately misleading.

And that is the important bit: that the Wegman report's slavish copying of M&M led them to incorrectly evaluate MBH. For that you need to plough through the detail at Deep Climate (as briefly ref'd by me).

To quote DC directly:

Wegman et al took the M&M critique of MBH at face value, and deliberately excluded all substantive discussion of scientific literature answering M&M (especially Wahl and Ammann).

Wegman et al completely misunderstood the M&M simulation methodology, and claimed that M&M had demonstrated that the MBH short-centered PCA would mine for "hockey sticks", even from low-order, low-correlation AR1(.2) red noise. But in fact the displayed figure (4.4) was taken from the top 1% of simulated PC1s generated from high-correlation, high-persistence ARFIMA-based noise, as archived by M&M. (And I also show that simulations based on AR1(.2) noise would have shown much less evidence of bias from "short-centered" PCA, even if one focuses only on PC1).
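
To see concretely what DC is describing, here is a minimal sketch (mine, in Python; M&M's actual code was in R, and the sizes below - 50 pseudo-proxies, 581 years, a 79-year calibration window - are illustrative stand-ins rather than their exact setup): generate AR1(.2) red noise, apply "short-centered" PCA, score each PC1 with a hockey-stick index, then cherry-pick the top 1%.

    import numpy as np

    def ar1_noise(n_series, length, phi=0.2, rng=None):
        """AR1(phi) red-noise pseudo-proxies, one per column."""
        if rng is None:
            rng = np.random.default_rng(0)
        x = np.zeros((length, n_series))
        eps = rng.standard_normal((length, n_series))
        for t in range(1, length):
            x[t] = phi * x[t - 1] + eps[t]
        return x

    def short_centered_pc1(proxies, calib=79):
        """PC1 with MBH-style 'short centering': subtract the mean of
        the final `calib` points only, not the full-series mean."""
        centered = proxies - proxies[-calib:].mean(axis=0)
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        return u[:, 0] * s[0]  # leading principal component series

    def hockey_stick_index(pc, calib=79):
        """How far the calibration-period mean departs from the rest,
        in units of the earlier period's standard deviation."""
        return (pc[-calib:].mean() - pc[:-calib].mean()) / pc[:-calib].std()

    rng = np.random.default_rng(42)
    scores = np.sort([abs(hockey_stick_index(short_centered_pc1(
                      ar1_noise(50, 581, rng=rng)))) for _ in range(10_000)])
    print("median |HSI|, all runs:", np.median(scores))
    print("median |HSI|, top 1%:  ", np.median(scores[-100:]))  # the cherry-pick

Swap ar1_noise for a high-persistence generator and the effect gets much stronger - which is exactly the gap DC identified between what the report claimed (AR1(.2)) and what figure 4.4 actually plots (top-1% ARFIMA runs).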

The point about what you can get done for is interesting, though, and worth a little musing. It is (obviously) unacceptable to zap researchers for just getting the wrong answer; that would be too big a constraint on academic freedom (sliding gracefully over the question of whether that applies to reports to Congress). But in the normal course of things, Wegman's report would be a paper, and it would have been subject to peer review. And quite likely it would have failed, for the plagiarism. It might well have failed for the science, too, for the problems that DC found. But had it passed, it would then have been open to a reply in the literature, which it isn't really now. Having decided that you can't zap them for the wrong answer, you're then left not being able to zap them for deliberately getting the wrong answer, and tenaciously clinging to the wrong answer despite the evidence; this seems regrettable.

Refs

* Eli (and Eli scents blood)
* JEB
* In Salon - but note how slow this story is. The MSM is clearly very nervous of it.



Yes, this has had me tearing my hair out.

Wegman plagiarizing other sources suggests ethical problems on Wegman's part, but to the casual reader it probably increases rather than decreases their confidence in the substance of his claims. (Hey, if he found lots of problems in Mann et al., and you're saying his report was plagiarized, that must mean that other people found lots of problems in Mann et al., too ... right?)

Wegman failing to catch serious errors in McIntyre's criticism of Mann is much more significant for the "science" question, but unfortunately the charge of "plagiarism!" has a natural tendency to suck up all the oxygen in the debate, leaving more complicated and important but less dramatic issues gasping for breath on the sidelines.

OK, with that out of the way, what makes you say "And quite likely it would have failed, for the plagiarism. It might well have failed for the science, too"? IMHO the order of probabilities would be reversed. Do you think the peer review process is effective at catching plagiarism? Are there any examples of this?

In my own field (not climate change) I know of exactly one case where plagiarism was caught during the review process. The plagiarism was serious and unambiguous. The manuscript was initially reviewed by three reviewers, none of whom noticed it. After revisions, it was sent to another group of three reviewers ... only one of whom discovered the plagiarism, at the last minute.

How closely do most people check for plagiarism when they're reviewing paper manuscripts for a journal? I know my colleagues worry a great deal about this in student papers, but until this case came up I've never heard anyone talk about making a concerted effort to detect plagiarism in journal articles.

[Maybe I was being optimistic. But if sent to people familiar with this world - Bradley, perhaps :-) - there would have been some hope -W]

Sorry, I meant more oops!, too... but too many links in a post at WC's abode get chomped anyway ;-)

> OK, with that out of the way, what makes you say "And quite likely it would have failed, for the plagiarism. It might well have failed for the science, too"? IMHO the order of probabilities would be reversed. Do you think the peer review process is effective at catching plagiarism? Are there any examples of this?

In my field (also not climate science) I also know of exactly one case of plagiarism that was caught during the review process, and that was because I was both reviewer and victim in that case. The authors of that manuscript submitted it to another journal, which published it (I was not a reviewer on the manuscript at the second journal). It turned out that some parts of the paper were original, and some were true, but no part of the paper was both--the parts of the paper that were original were so far off-base as to be, as Pauli put it, not even wrong, and I'm surprised the referees didn't catch that (I didn't because, on the editor's advice, I stopped reading after detecting the plagiarized passage). But it's not all that unusual to find identical or nearly identical paragraphs in different papers. Sometimes there is a good reason for it: many of the experimental results in my field come from satellite data, and such papers need to include some boilerplate text identifying relevant details about the satellite. But often some other paragraph, usually introductory material, will be lifted wholesale from another paper; my officemate has discovered more than one case of this kind of plagiarism.

In the case of the Wegman report, I'm not so sure that the flawed science would have been noticed right away--it was about four years after publication that Deep Climate published his criticisms of the methodology on his blog, and finding the flaws would have taken some digging (unlike in the aforementioned case, where the clues that the authors didn't know what they were talking about stood out immediately when I finally did read that part of the paper). But the fact that the Wegman report was so dismissive of M&M's critics would have been noticed, and the hypothetical reviewers would likely have called for major revision or rejection on those grounds.

Short answer: No, I don't think the system does a good job of detecting plagiarism. Most reviewers aren't expecting it, so unless by chance the editor asks the victim to review the paper, it doesn't get caught.

By Eric Lund (not verified) on 22 Nov 2010 #permalink

1) 35 of 91 pages had material with substantial near-verbatim plagiarism.
But most of it is unlikely to have been noticed unless you were specifically looking.
(The one exception is the social networks piece, but the authors were unlikely to have seen it.)
In particular, experts don't spend a lot of time going over intro stuff carefully in a 91-pager.
An expert sees a few tables from Bradley, and several pages of text with the right terms, sees the vague cite at the end of the tree-rings section, says "standard stuff from a famous book", and skips.
Of course, non-experts read carefully to try to understand what this is about... and absorb a clear message that there are confounding factors everywhere, and that you really can't use tree rings.

Likewise experts don't spend a lot of time studying the summaries of papers.

2) But The Stoat is not quite right in his basic thesis.
Indeed, you should not zap researchers for making mistakes.
However, plagiarism is only one of the academic unholy trinity of FFP.
Google FFP: falsification, fabrication, plagiarism.

Plagiarism, once found, is the easiest to see even with zero field knowledge.
The FF can sometimes be arguable, unless it is really blatant or really pervasive or both.
FF (sometimes the distinction is clear, sometimes murky) is briefly alluded to in the exec summary of SSWR. People focused on plagiarism because it is deadly enough, but if you read SSWR, some of the changes of meaning and biases enumerated in detail may well turn out to be FF.
There was a reason they were enumerated....

By John Mashey (not verified) on 22 Nov 2010 #permalink

Uh, I'm not an expert (I just read DeepClimate), but it seems that McIntyre generated 10,000 runs of "red noise", passed those runs through a PC1 function, graded each run according to which showed the greatest upward rise to the right, sorted the runs according to that criterion, and kept the 100 with the greatest rises.

What does that prove? Nothing, except that if you start with random data *showing a particular trend*, the PC1 technique will detect this trend.

If M&M want to assert that all proxies are random data, then we can never know anything about past climate through proxies, ever.

But the proxies are NOT random data. Tree growth as seen in tree rings can be influenced by a lot of things, but the most significant growth factor for the northern trees chosen is temperature. So, over a long period (400 years?) the growth rate of these trees is considered a good proxy for temperature. The so-so rate of growth in cold periods is *NOT* due to serial noise.

When you generate 10,000 random curves, and then keep the 1% that have a chosen characteristic, duh, they're no longer random curves.
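
(The "duh" is easy to make concrete. A minimal Python sketch - illustrative only, not M&M's actual procedure: grade trendless random walks by fitted slope, keep the top 1%, and the survivors all slope upward.)

    import numpy as np

    rng = np.random.default_rng(1)
    n_runs, length = 10_000, 100
    t = np.arange(length)

    # 10,000 random walks with no trend built in
    walks = rng.standard_normal((n_runs, length)).cumsum(axis=1)

    # Grade each run by its fitted linear slope ("rise to the right")
    slopes = np.polyfit(t, walks.T, 1)[0]

    kept = np.sort(slopes)[-n_runs // 100:]    # keep the top 1% (100 runs)
    print("mean slope, all runs:", slopes.mean())  # ~ 0 by construction
    print("mean slope, kept 1%: ", kept.mean())    # strongly positive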

> effort to detect plagiarism in journal articles

http://www.nature.com/nature/journal/v467/n7312/full/467153d.html

"... Since October 2008, we have detected unoriginal material in a staggering 31% of papers submitted to the Journal of Zhejiang UniversityâScience (692 of 2,233 submissions). The publication, designated as a key academic journal by the National Natural Science Foundation of China, was the first in China to sign up for CrossRef's plagiarism-screening ..."
---------
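
(The screening mentioned above - CrossCheck, which as I understand it runs on iThenticate - is proprietary, but the core idea is simple shingled n-gram overlap. A toy Python sketch, entirely my own illustration:)

    def shingles(text, n=8):
        """Overlapping n-word 'shingles' of a document."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(doc_a, doc_b, n=8):
        """Fraction of doc_a's shingles also found in doc_b; long
        near-verbatim passages push this far above chance."""
        a, b = shingles(doc_a, n), shingles(doc_b, n)
        return len(a & b) / max(len(a), 1)

    # Anything much above a few percent gets flagged for a human to
    # compare side by side; the threshold is an editorial judgement.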

My guess: we're going to be shocked, shocked, to find plagiarism has been going on at a high rate for a very long time by people who never expected the Internet inquisition.

Much of human behavior in the past is shocking.

The attitude, common through history, that the little people were there to be harvested along with everything they produce is a class attitude. People with that attitude are found everywhere: academia, business, and government.

The web tools for searching threaten the old top-down control of what's known. Expect a pushback: see for example http://lauren.vortex.com/archive/000776.html

Hat tip and deep bow to John Mashey, this time for "FFP" as a search term

Which turns up: http://jnci.oxfordjournals.org/content/100/1/7.full

"... scientists think there has been an increase in misconduct. The reasons cited were increased pressure to produce and publish, increased commercialization of research results, tougher competition for funds, the diminished prestige of science, increased opportunities to cheat in the Internet age, and the inadequacy of the peer-review system to detect misconduct in complex projects...."

My thought: watch out for the "founder/overthrown" notion.

Whenever the culture begins to acknowledge a past problem, it's easy to focus too much on the bad past. Beware a "science can't be trusted" reaction. "Trust, but verify."

David Ritson's critique of Wegman's figure 4-4 (which I essentially corroborated and then some) was made public within weeks of the release of the Wegman report. Unfortunately, what little attention it got focused on Wegman's (ironic) lack of transparency, and not on the actual critique itself.

For more, see:

http://deepclimate.org/2010/10/25/the-wegman-report-sees-red-noise/

It is indeed unfortunate, though, that no one thought to dig into McIntyre's code until now.

I think it's important to place plagiarism in context. Far from lending further credence to the claims in Wegman et al, as some appear to think, it clearly demonstrates a complete lack of relevant domain knowledge in paleoclimatology and social network analysis. How any academic can assert, as Judith Curry did, that this lack of knowledge somehow leads to objectivity is beyond belief. Frankly, the abysmal analysis of Wegman et al was all too predictable, once the other shoddy scholarship was exposed (even parts of the background on PCA and statistical noise models were copied).

Having said that, I have to admit even I was surprised at just how poor and completely mistaken Wegman's analysis was.

[Curry comes out of this (and not just this) looking like a twat. See for example http://julesandjames.blogspot.com/2010/11/pop-quiz-on-responses-to-wegm… -W]

Well, I must mention a slight caveat, re "Wegman's Analysis."

Somebody reran McI's code. Although, as lead and senior author, Wegman is certainly responsible, we don't yet know who did it, and who wrote the text that goes with it. It might well have been Said, in which case strong expertise might not be expected.
Just one of those fascinating loose threads.

By John Mashey (not verified) on 22 Nov 2010 #permalink

Well, Wegman et al also misinterpreted M&M 2005 and its references to "persistent red noise", Hosking's synthetic time series method and so on. Presumably, Wegman read M&M even if he delegated such mundane tasks as reading or running the code. And it seems implausible that he didn't have a hand in the key section 4, which *consistently* refers to M&M's supposed red noise (i.e. AR1) null proxy test.

[Well, there is another point here, which is that everyone was wandering around saying how terrible the palaeo folk's stats were, and Wegman must have gone into this thinking he could do it with his eyes shut. Presumably he got a rather nasty shock when he realised that it was actually a bit more complex than he had thought and that he would have to work hard and think about it if he wanted to do a decent job. And presumably at that point he decided that he was too busy to bother -W]
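
(A concrete illustration of the AR1-versus-persistent distinction: M&M's actual code used Hosking's method fitted to the real proxies' full autocorrelation structure, which I won't reproduce; the toy ARFIMA(0,d,0) generator below - my own sketch in Python, with an illustrative d=0.45 - just shows what "high persistence" means relative to AR1(.2).)

    import numpy as np

    def arfima_noise(length, d=0.45, rng=None):
        """Fractionally integrated noise, ARFIMA(0,d,0): white noise
        filtered by the MA-infinity weights of (1-B)^{-d}. Unlike AR1(0.2),
        whose autocorrelation dies off geometrically, these weights decay
        so slowly that excursions persist for centuries."""
        if rng is None:
            rng = np.random.default_rng(0)
        k = np.arange(1, length)
        psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
        eps = rng.standard_normal(length)
        return np.convolve(eps, psi)[:length]

    # Compare lag-100 autocorrelation against an AR1(0.2) series
    x = arfima_noise(5000)
    print(np.corrcoef(x[:-100], x[100:])[0, 1])   # noticeably > 0

    rng = np.random.default_rng(1)
    e = rng.standard_normal(5000)
    y = np.zeros(5000)
    for t in range(1, 5000):
        y[t] = 0.2 * y[t - 1] + e[t]
    print(np.corrcoef(y[:-100], y[100:])[0, 1])   # ~ 0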

#1

There were numerous red flags in Wegman et al. For starters, there were large swathes of unattributed text, sometimes running to several pages, accompanied by curious changes of tone and incongruous juxtapositions. I think any serious review of the SNA section, for example, would have to ask how there could be five pages of SNA background material with no citations whatsoever. As in zero, nada, zilch, zip!

The easy part was figuring out it must have been copied or at least not written by the authors. Harder was finding all the sources, although some popped out from the beginning.

DC, the mish-mosh nature of the WR is evidence of original intent to do a hatchet job. William is likely right about the subject matter having turned out to be harder than it appeared at first glance, but also Wegman probably figured that a Congressional report didn't have to meet academic standards. The present kerfuffle, coming as late as it does (but I must say presciently timed relative to the coming shift in control of the House), must have been a nasty surprise, now made much nastier by the WaPo exposition. It must be the talk of the GMU campus just now. Hopefully it will make more difficult any plan by the GMU admins to sweep everything under the rug.

"How any academic can assert, as Judith Curry did, that this lack of knowledge somehow leads to objectivity is beyond belief."

Yet it's entirely consistent with Judy having decided she should hold forth similarly on scientific matters (e.g. sensitivity) regarding which she palpably lacks expertise. I suspect she sees them as connected.

By Steve Bloom (not verified) on 22 Nov 2010 #permalink

Wm, the elephant under the tent is that Wegman was a program officer in the Strategic Defense Initiative (SDI). This almost certainly brought him into contact with the George Marshall Institute and its founders, Seitz, Jastrow and Nierenberg. He was and is no innocent.

[Well, he was chosen by Barton -W]

Another interesting marker to watch is whether Wegman and GMU were able to wipe backup tapes containing his Email.

[It would be exciting if someone could get some kind of disclosure order -W]

William@13:

Where did they get the time to write half the report about SNA?

It's kind of funny to see Wegman respond that they were under time pressure and were only asked to check the math, and then to see so much time spent on something as flawed as the SNA. The point has been made many times before, I know, but I'd really like Wegman to explain why they did an SNA, when all they were asked to do was to check M&M's claims (which did not include, AFAIK, that Mann had a social network that precluded proper peer review).

[My assumption was that they had one hanging around and felt like bunging it in. Certainly, it helped divert attention from the errors elsewhere, and accusations of cronyism were useful mud to fling for those who couldn't get close enough to the maths to even understand the PC1 stuff -W]
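
(For readers wondering what a co-authorship SNA minimally involves: a graph of who wrote with whom, plus some centrality measure. A toy Python sketch with invented author labels - not Wegman et al's data or method:)

    import networkx as nx  # assumes the networkx package is available

    # Invented co-authorship pairs, one edge per jointly written paper
    coauthorships = [("A", "B"), ("A", "C"), ("B", "C"),  # a tight cluster
                     ("A", "D"), ("E", "F")]              # looser ties

    g = nx.Graph(coauthorships)
    # Degree centrality: the share of all other authors each has written with
    print(nx.degree_centrality(g))   # "A" scores highest in this toy graph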

...the elephant under the tent...

A well-chosen (and presumably apposite) idiomatic phraseology, no doubt.

I presume we know he isn't a canvassing donkey? ;-)

Wegman also appears to have a bit of an ego problem - I've been turning down invitations to appear in "Who's Who of X" practically since my undergrad days - they are pretty much (as I understand it) vanity listings, so listing a whole set of them proudly on one's webpage is a bit of a red flag:

http://www.galaxy.gmu.edu/stats/faculty/wegman.html

"He has been listed in Who's Who in America, Who's Who in the South and Southeast, Who's Who in American Education, Who's Who among Entrepreneurs, Who's Who in Leading American Executives, Who's Who in Frontier Science and Technology, Who's Who in Science and Engineering, and American Men and Women of Science. "

Wegman has responded. This is after he had stated:

"I'm very well aware of the report, but I have been asked by the university not to comment until all the issues have been settled," Wegman says, by phone. "Some litigation is underway."

So it leaves me wondering what's been settled exactly. The wording in the response is interesting.

"We are not the bad guys. … We have never intended that our Congressional testimony was intended to take intellectual credit" for other scholars' work.

"Wegman said he and his report co-authors felt 'some pressure' from a House committee to complete the report 'faster than we might like.' But he denied that there was any attempt to tilt the influential climate report politically."

This seems to be a "we were pretty sloppy but didn't mean any harm" defense, which doesn't fly. The second claim is laughable.

http://www.usatoday.com/weather/climate/globalwarming/2010-11-22-plagia…

http://content.usatoday.com/communities/sciencefair/post/2010/11/wegman…

Marco: re SNA
Reread SSWR, specifically:

p.1 on claimed missions #1 and #2, and what I thought were the real missions.

p.32, section 3.4m, including discussion of #2.

By John Mashey (not verified) on 23 Nov 2010 #permalink

Oops, more on SNA:

SSWR W.5, pp.143- talks about this in detail.
They had a little experience with SNA, albeit in the peculiar mode of applying SNA terminology to computer networks, not a domain in particular need of it. Calling a node an actor seems of marginal benefit. They showed no clue of actually knowing much about real SNA.

I quoted a fairly strong comment from a serious SNA researcher, and I got similar comments from another well-known one later, but I figured the horse was already dead multiple times, no need to beat it some more, so I didn't bug them to be quoted.

See also p.94, slide 19, the two annotations.
They were pressed hard not to do the SNA, but kept right on.
It was key to #2.

By John Mashey (not verified) on 23 Nov 2010 #permalink

See "Journal of Scientific Exploration is a Dog". While this is mostly for the benefit of someone in the UK, it has an odd connection to the Wegman Report.

Specifically, SSWR showed the many influences of McIntyre&McKitrick(2005) (MM05X in SSWR) in the Wegman report, i.e., the red-coded Memes are found there.

MM05X cites Deming's quote (p.6), although he is misnamed, and somehow JSE (the dog astrology journal) got changed to a more credible one, i.e., Science. Apparently Wegman & co never checked that out.

HWQDAJ more likely got this from an earlier version of that, McKitrick (2005), which at least cited JSE correctly. Somehow that got changed in the weeks between.

By John Mashey (not verified) on 23 Nov 2010 #permalink

Of course Wegman is a bad guy; who else but a bad guy would have included that oh-so-cute "social network analysis" of Mann and his co-authors in a report to Congress? It was a drive-by in the best Mexican drug cartel style, and you can quote that to Judith Curry.

His comment of "we are not the bad guys" implies that the *other* guys *were* the bad guys.

Even when defending his own sloppy work he still slanders Mann et al.

I'm amazed this went unchallenged.

Re. 27 Louise. Glad someone else spotted that.

But Louise, I'm pretty sure Wegman referred to Barton and friends as the 'bad guys' :-)

By Martin Vermeer (not verified) on 26 Nov 2010 #permalink

Vergano has another article, in which Wegman digs himself deeper yet.

I've posted a long response to his comments there.
In addition, at the end is a quote that is more important than it seems. Those unfamiliar with ORI might take a look.

As I said over at Rabett Run,

"Once again, Lewis Carroll to the rescue:
Beware the JabberwORI, my son!
The jaws that bite, the claws that catch!"

By John Mashey (not verified) on 27 Nov 2010 #permalink