Some ESP-bashing red meat for you ScienceBlogs readers out there

A reporter contacted me to ask my impression of this article by Peter Bancel and Roger Nelson, which reports evidence that "the coherent attention or emotional response of large populations" can affect the output of quantum-mechanical random number generators.

I spent a few minutes looking at the article, and, well, it's about what you might expect. Very professionally done, close to zero connection between their data and whatever they actually think they're studying.

(Just for example, they mention that their random number generators are electromagnetically shielded--which seems kinda funny given that they're trying to detect effects on these generators, and I assume some of these effects might come electromagnetically. They also alternate between assuring the reader that (a) The random number generators really are generating random numbers, (b) They cleaned the data to fix all the cases where the random number generators aren't generating random numbers, (c) The random numbers can be affected by people all over the place, so they're not really random.)

OK, OK, fine. The substance of the article isn't particularly interesting to me: I have little interest in what was happening with these people's random number generators, and I have little doubt that the researchers could have found similar patterns had they looked at the data in other, more obviously meaningless ways. (For example, instead of taking 236 days that were believed to be particularly important (New Year's Days, dates of earthquakes, plane crashes, other newsworthy events), I suspect they could've taken just about any selection of days and found something interesting.) Anyway, that's not the issue.
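The point about "just about any selection of days" is a multiple-comparisons argument, and it's easy to see in a toy simulation (my own illustration, not anything from the paper): generate pure-noise daily scores with no effect anywhere, then try many different 236-day selections. Each one has a nominal 5% chance of looking "significant," so with enough analyst choices, some apparent pattern virtually always turns up. All names here are made up for the sketch.

```python
import random
import statistics

random.seed(1)

# Pure-noise "daily deviation scores": by construction, no real effect.
days = [random.gauss(0, 1) for _ in range(3650)]  # roughly ten years of days

def z_for_selection(sel):
    """z-statistic for the mean score over a selection of days.

    Under the null, each day is N(0, 1), so the mean of n days has
    standard deviation 1/sqrt(n), and mean * sqrt(n) is a standard z.
    """
    vals = [days[i] for i in sel]
    return statistics.mean(vals) * len(vals) ** 0.5

# Try 1000 different candidate lists of "interesting" days, each just
# a random draw of 236 days -- standing in for the analyst's freedom
# to pick New Year's Days, earthquakes, plane crashes, and so on.
n_hits = 0
for _ in range(1000):
    sel = random.sample(range(len(days)), 236)
    if abs(z_for_selection(sel)) > 1.96:  # nominally "significant at 5%"
        n_hits += 1

# Even though the data are pure noise, a nontrivial fraction of the
# candidate selections clear the significance bar.
print(n_hits, "of 1000 selections look 'significant'")
```

With 1000 candidate selections at a 5% false-positive rate apiece, you expect around 50 "significant" findings from data containing nothing at all, which is the worry about choosing the day list after the fact.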

My real point here is that the article reads like a physics paper--and when I looked up the first author, he is indeed a physicist. Physicists can look pretty silly doing data analysis on non-physics problems. But, now I'm wondering: is data analysis in experimental physics this bad? Or am I just succumbing to my own selection bias, judging the academic field of physics by its most publicity-worthy rather than its best practitioners? I'd hate to think that the occasionally headline-grabbing research out of CERN, etc., is really just blind data manipulation!

P.S. Yeah, sure, make all the jokes you want about how we do things in quantitative social science. Still, it's not this bad, is it? At least we try to have some connection between our measurements and the phenomena we're studying. Here, these dudes are torturing the data to within an inch of its life (as the saying goes). But such behavior might be second nature to physicists, who routinely have to process and process and process the noise out of their experimental data.

P.P.S. Yes, I know it's in poor taste to make fun of people who (unlike others whom I make fun of in my blogging) are neither trying to do harm nor succeeding in doing any (beyond, maybe, wasting some of their funders' money). Somebody asked me to read the article and I felt like procrastinating, but that's not really much of a justification. I apologize and promise never to do it again.


It's not so much that data analysis in physics - particularly the big "needle in a haystack" stuff at places like CERN - is bad. The problem is the disconnect between the design of the experiment and the analysis of the data.
These big experiments are designed at a point in time by a small group of people, yet most of the data analysis is done many years later by others who had no involvement in the design, and thus no first (or often even second) hand understanding of the original design, its limitations, and how it differs from the "as built" configuration that actually took the data.
In fact, the majority of the Ph.D.s in fields like experimental particle physics have never designed their own experiments and then analysed the data from those experiments!
I think the best experimentalists to "trust" when it comes to analyzing data outside physics are those that have gone "end to end" in the experiment design and analysis lifecycle a few times.

Also, at the risk of throwing stones at a glass house from the outside: Bancel doesn't appear to have published in physics in over a decade (per Google Scholar, hardly definitive). I've published more recently than that, but certainly wouldn't consider myself a physicist anymore.

He may be a bad data analyst as a scientist (or parapsychologist), but I think it would be a little unfair to pick on physicists as a group based on that evidence.

Yeah, sure, make all the jokes you want about how we do things in quantitative social science. Still, it's not this bad, is it?

Well, if I get to cherry-pick too, then yes, it is that bad. Looks like they took down their link to the preprint, maybe out of embarrassment.

OK. Now I have to defend physicists - being one. a) you cannot automatically ascribe the sins of the sample to the population. And b) at least most physicists screw up the analysis of their own data. We leave that criminal enterprise called statistical meta-analysis to the social and medical "sciences".

But, now I'm wondering: is data analysis in experimental physics this bad?

I was a post-doc at Risø several years ago, and my boss was a statistician. I commented on the lack of any other statistical expertise there (we were both there as plant pathologists!), and she observed that the place was run by physicists, and they don't think they need statistics.

"...the place was run by physicists, and they don't think they need statistics."

They may have thought that they don't need statisticians. Modern physics is full of statistics. Quantum mechanics is all about statistics. To be a competent physicist (experimental or theoretical) you need competence in statistics. Those error bars are not pulled out of a hat. You won't see long discussions of statistical analysis in physics papers, because they omit the grunt work to make room for the important things. That doesn't mean the analysis hasn't been done.

By Lassi Hippeläinen (not verified) on 13 Nov 2009 #permalink

Bob O'H - same thing seems to be the case in clinical research - the basic science people (some of whom won math awards early in their careers) get/accept the least input on their experiments and stat analyses. Who knows if it's chicken or egg, but Frank Harrell has recently been doing some good work to rectify this at Vanderbilt.

Keith