The End of PEAR

PEAR is gone. Yes, I know I'm late with this news; folks like [PZ](http://scienceblogs.com/pharyngula/2006/11/shhhdont_tell_deepak.php), [Orac](http://scienceblogs.com/insolence/2006/11/news_too_good_to_confine_to_j…) and [Jeff Shallit](http://recursed.blogspot.com/2006/11/pear-has-finally-rotted.html) reported this
great news days ago. But I wanted to add my two bits, by explaining just why it's good news. So I'm going to take this as an opportunity to remind you just what PEAR was, what they did, and why it's so good that they're gone.

PEAR was the "Princeton Engineering Anomalies Research" center. They were a group within the engineering department at Princeton that was supposedly studying whether or not consciousness could affect the physical world, and if so, how. Their primary tool was what they called the "REG": a highly insulated/isolated device that generated a random string of 0s and 1s. The idea was that this device was sufficiently well isolated that no *physical* intervention by operators of the device would be able to affect its output. They performed a variety of experiments using this device, including things like seeing if a person, without physically touching the device, could alter the distribution of ones and zeros by *thinking* about how they wanted to affect the outcome, or seeing if the distribution varied around the occurrence of events of global significance.

So far, in principle, there's nothing terribly wrong with that. I'd question whether it's *worth* doing, without some justification for why they would expect to discover anything, but if they can
find someone willing to fund the work, and that's how they want to spend their time, I certainly wouldn't have any problem with it.
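
Just to pin down what such an experiment actually tests, here's a minimal sketch (my own construction, assuming nothing about PEAR's real hardware or protocol) of an REG-style session and the obvious significance test against a fair 50/50 bit source:

```python
# Minimal sketch of an REG-style experiment; a toy model, not PEAR's
# actual hardware or analysis. The null hypothesis: bits are fair coin
# flips, so the count of 1s should stay within binomial noise.
import random
from math import erfc, sqrt

def reg_session(n_bits: int, rng: random.Random) -> int:
    """Count the 1s produced during a session of n_bits random bits."""
    return sum(rng.getrandbits(1) for _ in range(n_bits))

def two_tailed_p(ones: int, n_bits: int) -> float:
    """Two-tailed p-value against the null hypothesis P(1) = 0.5."""
    stddev = sqrt(n_bits) / 2          # binomial sd at p = 0.5
    z = (ones - n_bits / 2) / stddev
    return erfc(abs(z) / sqrt(2))      # P(|Z| >= |z|) for a standard normal

rng = random.Random(42)                # stands in for the hardware REG
n = 200_000
ones = reg_session(n, rng)
print(f"{ones} ones in {n} bits: p = {two_tailed_p(ones, n):.3f}")
```

Under the null hypothesis, p-values like this are uniformly distributed, so even an honest setup will dip below 0.05 about one session in twenty; a real effect would have to show up consistently across sessions, not in cherry-picked runs.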

The problem with PEAR was that they were *sure* that their experiments would show positive results,
and they used shoddy experimental techniques and invalid mathematical analyses to *make* the results
look positive. They never, in 20-odd years of work, managed to produce statistically significant results. But they didn't let that stop them: they massaged the data to *create* positive results, and then tried to justify the ways they manipulated the data using techniques that ranged from sloppy to
outright dishonest.

A couple of PEAR's greatest hits, to give you an idea:

* [**An attempt to create a mathematical explanation for how consciousness affects reality**][pear-math]. This work uses some of the worst fake math that I've ever seen. They slap together some notation and terminology from algebra and group theory that have nothing to do with what they're discussing to make it *look* like they've actually got a mathematical theory underneath the woo-gibberish that they're spouting.
* [**Skewing statistics to show that minds can affect the REG**][pear-reg]. This one looks at the data recorded from single users attempting to influence the REG with their minds. It's a classic example of using invalid statistical analysis to skew data. It also includes one of my very favorite examples of weasel-wording: "In contrast, the anomaly is not statistically evident in the 52% of individual operators producing databases in the intended directions (z0 = 0.31, p0 = 0.38), a feature having possible structural implications, as discussed below." Yeah, there are really some pretty darned important "structural implications" in the fact that none of your experimental data is statistically significant: the results that they trumpet in this paper amount to a 0.02% skew in the REG's output distribution. (For just how unimpressive those numbers are, see the sketch after this list.)
* [**Post-hoc data selection to create desired results**][pear-gcp]. In which the PEAR gang tries to study whether events of global significance create anomalous patterns in the REGs. They record data from the REGs around the clock; then, when something important happens (like an earthquake, a tsunami, or a terrorist attack), they go back to the data for the time period around the event and see if they can find any minuscule sample period where the results are skewed.
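
As promised above, here's a back-of-envelope check of those numbers. This is my arithmetic, not PEAR's analysis code, and the z = 3.85 target is simply the z-score implied by the pooled p-value the paper reports:

```python
# Back-of-envelope check of the REG paper's quoted statistics.
from math import erfc, sqrt

def one_tailed_p(z: float) -> float:
    """P(Z >= z) for a standard normal variable."""
    return erfc(z / sqrt(2)) / 2

# The operator-level statistic quoted above: z = 0.31 gives p ~ 0.38,
# i.e. indistinguishable from chance.
print(f"z = 0.31 -> p = {one_tailed_p(0.31):.2f}")

# How many trials does a 0.02% skew (a 50.02% hit rate) need before it
# reaches z = 3.85?  z = (0.0002 * n) / (sqrt(n) / 2), so:
z_target = 3.85
n_needed = (z_target / (2 * 0.0002)) ** 2
print(f"trials needed: {n_needed:,.0f}")   # ~93 million
```

An effect that small only pokes above the noise after on the order of a hundred million trials, which is precisely the regime where tiny equipment biases and selection effects dominate everything.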

As you can see from that little sampling, PEAR's work ranged from shoddy to downright dishonest. But the fact that they existed as a research center at one of America's top universities (the same university where Einstein taught) gave them a prestige that they didn't deserve. Their work is frequently cited by woo-merchants of all kinds as "real scientific support" for their crackpottery. (For example, see my recent post on [Deepak Chopra][choprawoo].)

I don't normally rejoice at seeing a fellow researcher lose their funding. It's *hard* to get money to do research, and it's generally sad to see work end not because it showed no promise, but simply because its funding source dried up. I know of some really tragic cases of great research getting cut off because of budget problems. But in the case of PEAR, the work *never* showed any promise. It was an elaborate mockery of science which gave shelter to bullshitters and frauds of the worst sort. It's good to see that the source of money that was paying them to provide that service finally gave up.

[choprawoo]: http://scienceblogs.com/goodmath/2006/11/deepak_chopra_is_an_idiot.php
[pear-reg]: http://goodmath.blogspot.com/2006/04/bad-math-of-paranormal-research-pe…
[pear-math]: http://scienceblogs.com/goodmath/2006/07/pear_yet_again_the_theory_behi…
[pear-gcp]: http://goodmath.blogspot.com/2006/05/repearing-bad-math.html

The grant committee wrote a grant-renewal letter on a PC. Then they applied their consciousnesses to the string of zeroes and ones on the hard drive, and behold! A miracle happened. The letter turned into a grant-renewal rejection.

Of course consciousness affects the real world. I consciously decide to pick up a pencil, and lo and behold, I pick it up, producing a measurable effect on the world.

This looks more like telepathy or telekinesis. Is there a better word?

Telehocuspocus?

Tele-auto-delusion?

Telewoowoopathy?

Okay, I'm going to concentrate my consciousness on Google Earth and see if I can make it into the Flat Earth.

Insert black humor here about PEAR having to surrender their lab in Room 101 of the 1984 Building, where, after sufficient brainwashing and threats of torture, 2 + 2 fingers are held up, scanned, digitized, and seen as 5 fingers.

You know, when I saw this headline in my reader, for a split second I thought you meant the PHP Extension and Application Repository and went WTF!?!?!

:P

I had never heard of PEAR before PZ's post, but I guess I don't have much to say about them getting the axe.

> This looks more like telepathy or telekinesis. Is there a better word?

I always liked "autokinesis." The power to mentally control your own body!

By Anton Mates (not verified) on 25 Nov 2006 #permalink

You'd think that if the idea had any merit, they'd be able to predict catastrophic events. I.e., get feedback on catastrophic events faster than the cable news teams do. It's easy to imagine setting up an auto-reporting system that triggers an alert whenever their criteria for a significant deviation are met. Then the researchers could turn on their televisions and expect to learn of some disaster somewhere, rather than having to go back after they already know there's been a disaster and look for a peak that hits somewhere near the event. Bigger events would presumably have bigger peaks.

Coming up with real, predictive scientific tests for their hypotheses is easy. That they never bothered shows that the PEAR team never had any interest in doing science with their grant money.
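
For concreteness, here's a sketch of the kind of monitor I mean. This is hypothetical, not anything PEAR or the Global Consciousness Project actually built; the key point is that the criterion is fixed *before* any data arrives:

```python
# Hypothetical pre-registered alert monitor for an REG bitstream.
# The threshold is chosen in advance; alerts are logged as data arrives,
# not mined out of the archive after a disaster is already on the news.
import random
from collections import deque
from math import sqrt

THRESHOLD_Z = 4.0            # fixed before looking at any data
WINDOW = 200                 # bits per test window

rng = random.Random(1)       # stands in for the live REG feed
window, ones = deque(), 0
for t in range(1_000_000):
    bit = rng.getrandbits(1)
    window.append(bit)
    ones += bit
    if len(window) > WINDOW:
        ones -= window.popleft()
    if len(window) == WINDOW:
        z = (ones - WINDOW / 2) / (sqrt(WINDOW) / 2)
        if abs(z) >= THRESHOLD_Z:
            print(f"t={t}: alert, |z| = {abs(z):.2f} -- now check the news")
```

If the alerts lined up with disasters at better than chance rates, that would be evidence. They never ran that test.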

Joshua:

I agree with you. But let me put my devil's advocate hat halfway on for a moment. In the GCP stuff that PEAR did, the deviations that they're associating with events are *very* small. If they signalled some kind of alert for every deviation, even with the threshold set at the high end of the deviations that they're associating with events, they'd be signalling several hundred deviations per day. Their argument would be that there are *many* things happening every day, and that we just can't know about all of the events that are having enough of an impact on some subset of the world's population to trigger a deviation.

Removing the hat, it's pretty obvious that it's gibberish. The deviations that they're reporting are *expected* from any random noise generator. If they *weren't* there, it would prove that the REG isn't generating truly random data. But then when they find it there, they try to assign some importance to it. The key fact is that the deviations that they measure are *never* outside of the expected deviation range of truly random data.
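
A quick simulation makes the point; this is my own construction, not the GCP's code. Generate pure random bits, then scan for the most deviant short window, the way an after-the-fact search around an "event" does:

```python
# Post-hoc window selection on pure noise: scan for the most deviant
# 200-bit window in a random stream. Noise alone reliably produces an
# impressive-looking "anomaly" if you get to pick the window afterward.
import random
from math import sqrt

rng = random.Random(0)
N, WINDOW = 100_000, 200
bits = [rng.getrandbits(1) for _ in range(N)]

ones = sum(bits[:WINDOW])                      # running count of 1s
best_z = abs(ones - WINDOW / 2) / (sqrt(WINDOW) / 2)
for start in range(1, N - WINDOW + 1):         # slide the window one bit
    ones += bits[start + WINDOW - 1] - bits[start - 1]
    z = abs(ones - WINDOW / 2) / (sqrt(WINDOW) / 2)
    best_z = max(best_z, z)

print(f"most deviant {WINDOW}-bit window in pure noise: |z| = {best_z:.2f}")
```

With a hundred thousand overlapping windows to choose from, a |z| above 4 is unremarkable; quoted after the fact, it masquerades as a one-in-sixteen-thousand event.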

About the REG experiment: it is true that the presentation of the 12-year review is not very good. I read the paper carefully and I think that the mathematics are sound. Since you're claiming that this is bad math, maybe you could tell me where I am wrong. Here is my interpretation:

- First, for the most controversial paragraph. Your interpretation is: "57% of experiments, performed by 52% of the operators: this is an admission that the sample is biased!"

I think this interpretation is very unlikely (come on, do you really believe the authors would make such a mistake?). I think this is the correct interpretation (at least it's coherent with the S.I.D. and O.I.D. entries of the HI-LO column in table 1):

57% of the 522 series ended with more trials in the intended direction (significant).

52% of the 91 operators had, overall, more trials in the intended direction (not significant).

This is not a contradiction. For example, if there are 2 operators who have each done 100 series consisting of 1 trial each, the results could be:

Operator 1: 65 series in the intended direction (ID), 35 series not in the intended direction (NID).

Operator 2: 49 ID, 51 NID

Then 57% of the 200 series ended with more trials in the intended direction, while only 50% of the 2 operators have, overall, more trials in the intended direction.
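
In code, with the same made-up numbers:

```python
# Quick check of the toy example above, using the same made-up numbers.
series_id = [65, 49]     # series in the intended direction, per operator
series_n  = [100, 100]   # series performed, per operator

pct_series = sum(series_id) / sum(series_n)
majority_ops = sum(1 for hit, n in zip(series_id, series_n) if hit > n / 2)
print(f"{pct_series:.0%} of series in the intended direction")         # 57%
print(f"{majority_ops / len(series_n):.0%} of operators majority-ID")  # 50%
```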

I think, however, that this part of the paper is very weak, since only the proportions are given. If the data from which these proportions were calculated had been included, this paragraph would have been much clearer.

Given this reading, it is natural to wonder why the correlation appears across series but not across operators.

Now for the most striking part of this paper:

They made (839800 + 836650) * 200 = 335,290,000 trials with an intended direction. We can estimate from the data that 839800 * 100.026 + 836650 * (200 - 99.984) = 167,680,221 trials were in the intended direction, that is, 35,221 more than the expected average, or 3.85 standard deviations (stddev = 9155). Using the area under the standard normal distribution, we find that the probability of being this far (or farther) from the average in the intended direction is 6*10^-5 (due to rounding, this differs from the value in the paper, which is 7*10^-5).
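
Here is that computation as code, for anyone who wants to reproduce it; this is my re-derivation from the paper's summary figures, not the authors' own analysis:

```python
# Re-deriving the pooled REG statistic from the published summary figures.
from math import erfc, sqrt

n_trials = (839_800 + 836_650) * 200                 # 335,290,000 trials
hits = 839_800 * 100.026 + 836_650 * (200 - 99.984)  # ~167,680,221 intended
excess = hits - n_trials / 2                         # ~35,221 above chance
stddev = sqrt(n_trials) / 2                          # ~9,155
z = excess / stddev                                  # ~3.85
p = erfc(z / sqrt(2)) / 2                            # one-tailed: ~6e-5
print(f"z = {z:.2f}, one-tailed p = {p:.1e}")
```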

I can't say much about the experimental protocol or the credentials of the authors, but as far as the mathematics are concerned everything seems correct. Am I wrong?

I agree that the intended directions could have been assigned after the experiment. In that case it is fraud, and the mathematics are irrelevant.