Maybe we should use therapeutic touch instead of growth factors to culture cells

In complaining about the infiltration of pseudoscience in the form of "complementary and alternative medicine" (CAM) into academic medicine, as I have many times, I've made the observation that three common modalities appear to function as "gateway woo," if you will, in that they are the tip of the wedge (not unlike the wedge strategy for "intelligent design" creationism, actually) that slips into any defect or crack it can find and widens it, allowing entrance of more hard-core woo like homeopathy behind it.

All of these modalities fall under the rubric of "energy healing" in that the rationale given for how they "work" is that they somehow alter, correct, or "unblock" the flow of qi, or that mystically implausible "life energy" that scientists can't seem to measure but energy healers assure us really, truly does exist. One of these is acupuncture, which has proliferated throughout many areas of medicine despite a lack of evidence that it does anything more than a placebo. However, at least acupuncture actually does something in that it involves introducing needles underneath the skin. It might conceivably do something, although it's virtually certain that, whatever it might do, it isn't achieving it by "unblocking" the flow of anything, much less qi. The next is reiki, which to me is nothing more than faith healing with Eastern mysticism rather than Christian religion as its basis. In reiki, the healer claims to be able to manipulate a patient's qi for therapeutic effect in essence by holding his hands out and willing it. The last of this trio of wedgy woo is a distinctly American form of woo known as therapeutic touch (TT), which tends to be promoted and practiced primarily by nurses. Indeed, I view TT as, in essence, an Americanized form of reiki whose name is a misnomer, in that its practitioners hold their hands very close to the patient without actually touching the patient and will them to be healed by manipulating their "life energy."

As I said, these forms of woo are "gateway woo" that lead the way to the introduction of the harder core stuff, like homeopathy, applied kinesiology, or even reflexology. However, we skeptics are seemingly supposed to accept it when we are told that these are really and truly science, maaaan. Sometimes advocates of these modalities are stung by such criticism to the point where they want to try to prove that there's science behind their mysticism, and when they do there are sometimes truly hilarious results. For instance, not too long ago I discussed a series of experiments published in which reiki was tested for its ability to alleviate the increase in heart rate observed in rats placed under stress. I couldn't help but giggle when I pictured reiki masters doing their mystical hand gestures and concentration on laboratory rats. I wondered what could possibly top that experiment for sheer ridiculousness.

Now I know. Now they're doing therapeutic touch on cell culture and writing glowing press releases about it:

Steeped in white-coat science since she earned her Ph.D. in cell biology at Columbia University 20 years ago, Gloria Gronowicz is about the last person you'd expect to put stock in the touchy-feely discipline of energy medicine. But then the University of Connecticut researcher saw it with her own eyes, under a high-power microscope in her own laboratory, where, once, only well-accepted biological building blocks -- proteins, mitochondria, DNA and the like -- got respect.

Therapeutic Touch performed by trained energy healers significantly stimulated the growth of bone and tendon cells in lab dishes.

Her results, recently published in two scientific journals, provide novel evidence that there may be a powerful energy field that, when channeled through human hands, can influence the course of events at a cellular level.

"What she's showing is an association that defies explanation with what we currently know," said Margaret A. Chesney, a professor of medicine at the University of Maryland and former deputy director of the National Center for Complementary and Alternative Medicine at the National Institutes of Health. "She's Daniel Boone."

I truly hate it when someone says that a result "defies explanation with what we currently know." The reason is that, in experiments investigating the paranormal (and, make no mistake, the claim of TT practitioners that they can manipulate a human energy field with therapeutic intent definitely merits the adjective "paranormal"), it is in fact very rare for a result actually to deserve such a description. There are virtually always alternative hypotheses or explanations for such results that do not require invoking ideas that defy the currently understood laws of physics. For instance, in the case of any one single study, the alternate hypotheses always include the possibility that the observed result was spurious or a statistical fluke (at a significance threshold of p < 0.05, at the very minimum 5% of studies of a nonexistent effect will appear "positive" by random chance alone) or that there is some sort of unacknowledged or unnoticed systematic bias in the experiment that isn't apparent in its description and may not be apparent even to the investigators. Indeed, when such bias exists, study authors are usually blissfully unaware of it; they think they've eliminated all sources of bias, but a closer inspection shows that they have not. Less commonly, the bias is intentional. Furthermore, when the results of a set of experiments supposedly defy so many currently understood tenets of physics and chemistry, as the results reported in the articles mentioned in the above press release do, no single study can eliminate those two possibilities. To make a serious argument that current science is incorrect on this issue would require evidence approaching in weight all of the scientific evidence arguing that the result is impossible.
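The "5% of null studies appear positive by chance" point is easy to verify with a quick simulation. This is a sketch of my own, not anything from the papers under discussion; for simplicity it uses a two-sided z-test with known variance rather than the t-test a real study would use:

```python
import math
import random

random.seed(1)
n_studies, n, sigma = 20_000, 30, 1.0
positives = 0
for _ in range(n_studies):
    # Two groups drawn from the SAME distribution: the true effect is zero.
    a = [random.gauss(0, sigma) for _ in range(n)]
    b = [random.gauss(0, sigma) for _ in range(n)]
    # z-statistic for the difference of two means with known sigma.
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    if abs(z) > 1.96:  # "significant at p < 0.05," two-sided
        positives += 1

print(f"fraction of null studies called positive: {positives / n_studies:.3f}")  # ≈ 0.05
```

Run enough studies of an effect that does not exist, and roughly one in twenty will come out "positive" anyway, which is why a single positive study of an implausible phenomenon proves nothing.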

It's also a bit of a stretch to call the journals where these results were published "two scientific journals." The reason is that one of the journals is the Journal of Alternative and Complementary Medicine (JACM), a major bastion of pseudoscience and a convenient wastebasket journal in which CAM mavens publish their pseudoscience. It's also a little dubious that the two papers, one in JACM and one in the Journal of Orthopaedic Research (JOR), appear to be in fact one study. To me this looks like a case of publishing what we in the biz call the MPU, or "minimal publishable unit," in order to produce the most publications per set of experiments. Because these two articles clearly describe what is in essence one study, I'm going to take them both together and treat them more or less as one study. However, I'm going to concentrate mainly on the JOR paper, because at least it was subjected to something resembling peer review. (Again, based on the low quality of the articles published there, I consider JACM's peer review to be so shockingly substandard on a scientific basis that I don't consider it peer review at all.)

Overall, I'm underwhelmed by Gronowicz's study.

One thing I noticed right away in the JOR paper is that the manuscript was first submitted on August 1, 2006 but was not accepted for publication until March 27, 2008. That's a really long time, even for a medical journal like JOR. (Medical journals tend to take a long time to publish manuscripts, sometimes as long as a year.) No doubt TT advocates will say it's because those evil reductionist scientists are trying to suppress Real And True Evidence That TT Works, but more likely it implies a lot of criticism and requested revisions, perhaps with a bit of fighting with the editor to get this published. The fact that the JACM article was published a few months ago makes me wonder whether Gronowicz, frustrated with a real scientific journal's skepticism over her work, decided to publish quickly in a woo-friendly journal, which gladly ate up what she was dishing out. Perhaps she did it because she needed a publication or two for a grant application or progress report. That's just my guess, although I consider it an educated one. Otherwise, why would she have published in such a trashy journal as JACM, when she had a paper in the pipeline for JOR?

Basically, in the two papers, Gronowicz studies whether the application of TT can stimulate the growth of human cells and the mineralization of human osteoblasts (HOBs). Her findings from the two studies, if you believe them, suggest not only that TT increases the proliferation and mineralization of osteoblasts but that it is specific in its effects, in that it exhibits no such effects on the growth of the osteosarcoma cell line SaOS. Put in its simplest terms, the implication, again if you believe the study, is that TT practitioners can somehow, through force of will used to manipulate life energy, cause "good" cells to grow without also making the "bad" cells grow too. Unaddressed is why on earth anyone would think that manipulating the life energy of an intelligent organism could be "scaled down" to manipulating the life energy of a plate of cells.

There are a number of problems with both studies, but first let's look at the positives. The researchers did actually do a fairly reasonable job of trying to blind those doing the assays to the experimental groups, as described in the JOR paper:

Control (untreated) and "treatment" tissue cultures plates were clamped in one of two ring stands on a bench top, and were approximately 15 inches from the benchtop so that the practitioner hands could reach all sides without touching. Control and treated plates were positioned at either end of an L-shaped laboratory. Treatment was alternately performed on either end of the room with the treated plates receiving treatment twice a week and the untreated plates remaining clamped for the same time period while treatment was being performed on the other end of the room. Then the tissue culture plates were returned to the same incubator. Positioning of the plates in the incubator was random, and a technician with no knowledge of TT, set up the plates and returned the plates to the incubator.

Unfortunately, it was not described how cells were allocated to the groups, and it was also somewhat disturbing that in later experiments the investigators added a "placebo" group, in which a random lab tech or student not trained in therapeutic touch mimicked the motions of TT practitioners, holding their hands a few inches from the plate and distracting themselves from "focusing" or showing "intent" on the cells by counting backwards from 100 to 1. (I kid you not.) Unfortunately, the authors, as far as I can tell, appear to have compared this placebo group not only to the new experiments but to the old experiments as well; in other words, the placebo group was analyzed post hoc against the other groups. Also, aside from the performance of the final assays, the handling of the plates did not appear to be blinded, which could potentially be problematic.

A second problem that I noted was a puzzling inconsistency between data from identical experiments presented in both of the papers. For example, this is the graph of human osteoblast proliferation as measured by 3H-thymidine uptake from the JACM paper (C=control; TT= therapeutic touch; P=placebo):


Note one very important observation. Proliferation in the TT group was only slightly higher than in the control group, but proliferation in the placebo group was also elevated over control, albeit not as much and not by a statistically significant amount. Now let's look at the JOR figure, which seems to show the same comparisons for human osteoblasts:


The pattern is the same in that the results of the proliferation measurements were: TT > P > C. However, the differences observed are much larger. Whenever I see sets of data like this, I have to wonder which data from which experiments were included in each graph and why the graphs show such a striking difference in the magnitude of the alleged effect. (That's aside from the ethics of publishing in essence identical experiments in two different journals.)

There's also a graph in the JACM article conveniently not included in the JOR article that suggests even more strongly that the observed results could very well be due to random chance. At the very least, it suggests that something strange is going on here, and it's not strangeness due to a new and amazing discovery. I'm referring to a time course of the alleged effect at one week and two weeks based on the number of TT treatments:


Note that at one week there is no difference between control and TT regardless of the number of TT treatments. This result was reported in both articles; for whatever reason, one week of TT was not enough in these cells. However, at two weeks, there was a difference with four treatments and eight treatments but not with six or ten treatments, a result of three different experiments. Such a result brings into serious question whether the TT results were, in fact, real. The reason is that, if TT were real, we'd expect a dose-response curve of some sort, most likely with the effect increasing with the number of treatments, the "dose" of TT, if you will. There is no good scientific reason to expect that only four or eight TT treatments would have an effect while six or ten would not. Such a result desperately needs to be explained. Maybe it's quantum effects. Or maybe it's magic, where only multiples of four treatments have an effect. Or maybe it's some sort of mathematically perfect woo, where only treatments numbering a power of two have an effect. Too bad the investigators didn't do only two treatments or carry their analysis out to twelve or sixteen TT treatments.

Finally, in the JOR article, much is made of how the statistics were handled. For example, here is an excerpt of the methods section:

Data analysis focused on comparing the distribution of levels of proliferation and mineralization across study conditions, for example, "therapeutic touch versus control" or "therapeutic touch versus control versus placebo." All comparisons used "exact" nonparametric statistical tests. Nonparametric tests were selected because study measures typically did not follow normal distributions and sometimes exhibited clear evidence of heterogeneity of variance between groups. "Exact" versions of the tests were performed to avoid reliance on "large-sample" approximations in the calculation of p-values.

This, of course, begs the question of why there was so much heterogeneity of variance between groups. The variables under study generally take on a normal distribution, and there is no a priori reason to suspect that the variance between the groups would differ so much that special statistical consideration would be required.

I've mentioned before how important it is to control for multiple comparisons when there are more than two experimental groups. Failure to do so can easily lead to "statistically significant" differences that in reality are not. The reason is that, at a 5% significance level, each pairwise comparison carries a 5% chance of a "false positive," and the more comparisons there are, the greater the chance of finding at least one "false positive" becomes. There is a statistical correction for this tendency known as the Bonferroni method. It is actually to Gronowicz's credit that she did indeed use the Bonferroni method. However, applying the Bonferroni method rendered her results statistically nonsignificant, as this excerpt from the JOR article admits:
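The inflation from multiple comparisons can be made concrete with a small simulation. This is illustrative only; it models each of three pairwise comparisons as an independent z-test under the null, which is a simplification (real pairwise comparisons among three groups share data), but the arithmetic matches the 0.0167 threshold quoted from the JOR paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_comparisons = 100_000, 3

# Two-sided critical values: 1.96 for alpha = 0.05, and about 2.394 for
# the Bonferroni-adjusted alpha / 3 ≈ 0.0167 (the JOR paper's threshold).
z_crit, z_crit_bonf = 1.96, 2.394

# Under the null, each comparison's z-statistic is standard normal.
z = rng.standard_normal((n_trials, n_comparisons))

# Uncorrected: the family yields a "positive" if ANY comparison crosses.
uncorrected = (np.abs(z) > z_crit).any(axis=1).mean()

# Bonferroni: every comparison must clear the stricter threshold.
bonferroni = (np.abs(z) > z_crit_bonf).any(axis=1).mean()

print(f"uncorrected familywise error ≈ {uncorrected:.3f}")  # ≈ 0.14
print(f"Bonferroni familywise error  ≈ {bonferroni:.3f}")   # ≈ 0.05
```

With three uncorrected comparisons the chance of at least one false positive is 1 - 0.95³ ≈ 14%, nearly triple the nominal 5%; the Bonferroni adjustment exists precisely to pull that familywise rate back down to 5%.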

Figure 1C demonstrated that TT stimulated HOB DNA synthesis after 2 weeks ( p = 0.04) but the placebo individual did not stimulate DNA synthesis. In the post hoc pairwise comparisons that followed this statistically significant finding, the Bonferroni adjustment required application of a significance level equal to 0.0167. None of the comparisons fell below this more rigorous threshold (control vs. TT, p = 0.095; control vs. placebo, p = 0.017; TT vs. placebo, p = 0.43), suggesting that the experiment was underpowered to support the conservative Bonferroni approach. However, the results are suggestive of the possibility that the training and intention of an experienced practitioner may be required to elicit an effect.

In other words, when done with the proper statistics, there was no statistically significant effect of TT on human osteoblast (HOB) proliferation. Indeed, per the reported p-values, the comparison of control versus placebo (p = 0.017) was the only comparison that even came close to the Bonferroni-adjusted threshold of 0.0167. The same was true for mineralization of HOBs:

TT was able to increase mineralization compared to untreated even at 4 and 6 weeks of TT treatment. However, once again the study may have been unpowered to support use of the conservative, Bonferroni approach to performance of multiple, pairwise statistical tests. The p-values at 4 and 6 weeks were both equal to 0.029. Although these p-values fell below the nominal 0.05 cutoff for significance, they did not reach the more extreme threshold of 0.0167 required by use of the Bonferroni method.

In other words, the mineralization results for HOBs were not statistically significant, either. True, there were a bunch of other statistically significant differences between the TT group and controls, but they were not independent results. They were measurements that are tightly correlated with proliferation and mineralization. I have a guess as to the reason why so much is made of the statistics in the JOR paper, in which five paragraphs in the Methods section were devoted to statistics and the justification of the statistical tests chosen but not so much in the JACM paper, where only one brief paragraph was used and the statistical test chosen was not nearly as rigorous. It has to do with that nasty peer review. My guess is that the reviewers for JOR forced Gronowicz to use the less permissive test, which, when she applied it, resulted in the effect of TT going from highly statistically significant to non-significant (although for a couple of values it was close). In contrast, the JACM "reviewers" were perfectly happy with the use of an inappropriate statistical test. Indeed, in the JACM paper, Gronowicz appears to have used a pairwise comparison using Student's t test, rather than the more appropriate test for multiple comparisons: ANOVA with the appropriate post-analysis correction applied to account for multiple comparisons.

This strikes me as a good time to point out a general rule in biological studies of this type. Whenever you read a paper in which the authors spend this much time justifying their choice of statistics and pointing out that their results are statistically significant under one less rigorous test but not under the correct (and more rigorous) test, it is almost certain that what you're looking at is an effect that is either a product of random chance or of a mild undetected systematic bias in the experiments, or both. This is different from clinical trials, where the variability is such that it's not uncommon to see such discussion, especially since such trials cannot be repeated as easily as cell culture experiments such as these.

Finally, whenever you come across experiments such as this that claim to provide hard evidence for a highly implausible modality, it is important to consider three things. First, what is the real hypothesis being tested? Is it reasonable? In this case, it is not. After all, if TT actually did work the way its advocates claim, it would be by redirecting the flow of "life energy" in a living organism by in essence "focusing" and thinking good thoughts about the patient to help him "heal." Even if that were possible in a human, with trillions of cells, how could a "healer" possibly know how to modulate such "energy" in a plate of a few million cells, which presumably contains far less of this "energy"? It would be like killing an ant with a bazooka (one of my favorite activities when it comes to woo, actually). Next, it is important to look at the actual methodology in detail rather than just the abstract. The abstract is the concentrated "spin" that the authors want placed on their results. It will not contain the caveats or the shortcomings that might cast doubt on the results. Finally, above all, remember that no phenomenon can be established by a single experiment or even a single lab. After all, spurious results and unrecognized systematic bias can bedevil even the best researcher. To be accepted, such a result has to be repeatable, both by the original investigator and by other laboratories. The more a result conflicts with established science, the more critical this reproducibility is. Indeed, for a result that conflicts with well-established science as dramatically as this one does to be accepted, the level of evidence in support of the phenomenon should start to approach the amount of evidence suggesting that the phenomenon cannot exist.

Sadly, Dr. Gronowicz seems to have forgotten this. In the discussions of both papers, she argues that her results definitely show that TT can "elicit biological effects in vitro" and opines that the type of energy emanating from TT practitioners' hands is unknown. Her discussion in the JACM paper is even worse, as she cites a number of incredibly dubious studies on reiki and Qi Gong from--you guessed it!--JACM. Meanwhile, in the press release, she says:

"Should somebody with osteoporosis or a broken leg go to their Reiki practitioner?" Gronowicz said. "We don't know."

Actually, we do know.

Perhaps you wonder: Who would fund a study like this? Who would fund a study in which TT practitioners tried to ply their craft on dishes of cultured cells? Do you even have to ask? That's right. This study was funded by NCCAM. It's just another example of your tax dollars hard at work.


1. Jhaveri, A., Walsh, S.J., Wang, Y., McCarthy, M., Gronowicz, G. (2008). Therapeutic touch affects DNA synthesis and mineralization of human osteoblasts in culture. Journal of Orthopaedic Research DOI: 10.1002/jor.20688

2. Gronowicz, G.A., Jhaveri, A., Clarke, L.W., Aronow, M.S., Smith, T.H. (2008). Therapeutic Touch Stimulates the Proliferation of Human Cells in Culture. The Journal of Alternative and Complementary Medicine, 14(3), 233-239. DOI: 10.1089/acm.2007.7163



Masterly stuff.

Couldn't agree more about the laughable JACM, which is Cargo Cult "journal" incarnate.

In fact, as you have set out so elegantly, this particular example sort of does us all a service as it provides a snapshot of the predictable differences in peer review / referee-ing between a real journal (which has it) and a Woo-journal (which doesn't).

Is it conceivable that the odd difference between osteoblast proliferation in placebo/control is due to additional heat due to the proximity of the healer's/lab tech's hands and bodies? If the treatments went on for 15 minutes or so, I guess it might raise the temperature of the culture by a few degrees.

Or would that level of extra heat be nowhere near enough to stimulate measurable extra growth?

[Not a biologist, so prepared to be completely and totally wrong].

Or even a different amount of light getting into the dish.

If the later placebo trials ended up with "better" results than the earlier control trials, maybe that represents a learning curve in how to take care of cells growing in vitro.

Number of treatments may relate to treatments at different times of the day. You can't do 8 treatments in a week all at the same time of day. Many environmental parameters vary with time of day.

Most tissue culture plates are not hermetically sealed. They can't be, because the cells require gas exchange. Usually there is a small and controlled gap between the lid and the plate that allows for some gas exchange. Movement of the hands around a plate with a lid on it would increase gas transfer by generating slight movements of air, which will generate pressure gradients across the face of the plate, increasing gas convection.

To prevent this the plates would have to be enclosed in a gas-tight container while out of the incubator. That would also greatly reduce fluctuations of temperature, light and other parameters, and would allow for blinding of the people transferring the plates and the TT practitioners treating them.

Let me guess- these guys talk to their plants too, right?

Um...reproducibility is the biggest stumbling block for me.
Before the experiment even begins, are we to suppose that anybody can influence the growth of cells, or just woo-mongers? If it's the latter, we couldn't reproduce the experiment because we don't 'believe', right?

If the former argument is used, then the data is easy to refute by reproducing the experiment, this time with unbiased professionals. Then, obviously, submitting the results and the ensuing paper to peer-review.

Is there a journal that publishes unbiased, scientific reproductions of specious woo experiments? Maybe there should be, but then who has the time or the money to disprove all these claims, especially when they're so busy doing real science? Maybe Mr. Randi?

Another question, I suppose, is 'where to begin?' Acupuncture, homeopathy, herbalism?

You get to the point in your article here where you step into the role of priest, and out of the role of scientist. That's a shame. The "woo" here works or doesn't work. It can be tested.

Just because something is obviously impossible doesn't mean that it is impossible. "Obviously impossible" is a prejudice. If we knew what was "obviously impossible" and what was not, we would not need to do science, because we would already have all the answers.

Spooky action at a distance is "obviously impossible," for example, but in fact it /is/ possible, and shows up in experimental results. I don't mean to imply that reiki and spooky action at a distance are equally plausible; my point is merely that the way we eliminate hypotheses is by producing evidence that contradicts them, not by dismissing them as "impossible."

So here, when you analyze the results and come up with criticisms of the experiment, that's scientific. When you talk about how it's obviously impossible, that's priestly. If you really think it's worth analyzing and debunking this stuff, you might as well do it right.

My stumbling block is usually the original finding. TT is 'therapeutic'; it's not a human 'energy field' that breaks down simple molecules in a solution, heats up a cup of water, magnetizes, charges, etc. It's causing the super complex process of cell growth. It's fabulous. And it's exactly what we need: it would be a waste of a finding if TT repaired broken AM radios instead.

So I think to myself "What are the chances that these people 1) actually stumbled on a fabulous human capability 2) are looking at something (like cell growth, as opposed to heating a cup of water) they don't truly understand, so they're confusing themselves?"

When you talk about how it's obviously impossible, that's priestly. If you really think it's worth analyzing and debunking this stuff, you might as well do it right.
I'm sorry, but there does come a level of stupid where saying that something is obviously impossible is fair game. Of course, it always comes with the implicit "There is a tiny chance I may be wrong and would be persuaded by an appropriately huge amount of evidence" proviso, but for energy healing and homeopathy those boats have long since sailed down the Asymptotically Approaching Zero River.

I see the problem, they were trying to use the Laws of Emanation and Conduction in a reality where they do not apply.

(Law of Emanation: concerned with sources of energy and how energy moves in natural pathways.)

(Law of Conduction: concerned with getting energy to move through media it would not ordinarily move through.)

(In both cases it involves a phenomenon that does not exist in our universe, and the talents and skills involved in using that phenomenon to one's benefit. It's not at all like herding cats, because first, cats exist, and second, when you know what you're doing you can actually herd them. Not easily, but you can herd them.)


When you say "obviously impossible", who are you quoting?

As DrFrank points out, it is quite reasonable to point out that a claim violates an enormous body of established evidence and practical theory, and to be especially skeptical about it in that context. Orac explicitly states this more than once. And if anything, he goes out of his way to remind us that extraordinary claims can in fact turn out to be valid.


There's a big difference between TT and "spooky action at a distance": Action at a distance is seen, and the problem is how to explain it with any sense at all (look up non-locality and the associated experimental results for a quick overview).

TT is an explanation of a physical effect that we just don't see, with other predictions that we just don't see (google "Emily Rosa" for the JAMA article).

So, big difference: one has experiments on its side, the other doesn't. How many experiments, Ted, do we have to have before we can call a claimed physical phenomenon "implausible"?

By spudbeach on 18 Aug 2008

I see several methodology problems with this experiment.

1. Thymidine uptake is not a good indicator of proliferation due to the touchy little vagaries of nucleotide metabolism, especially given that DMEM and Alpha are comparatively nutrient-poor stuff. A better method that the whole rest of the cell culture universe uses is Trypan Blue exclusion; the popular method in Europe is the CASY system, which is similar to a Coulter counter. You trypsinize the cells, stain them, count on a hemocytometer or an automated system. Scintillation results are likely an artifact of metabolic uptake, not true results.

2. Lack of homogeneity in the culture medium, coupled with contamination from the harvested bones from humans. Adapt your cells to microcarriers, optimize your media, or use a different cell line that is pre-adapted to a more uniform culture technique (e.g. suspension culture, serum-free). Serum-based media produce a lot of variability, and their DMEM enrichment method is...well, I don't know anyone else who does that sort of thing unless they are deliberately attempting to get primary cultures.

3. Blot analysis by digital camera is not a very precise quantitation method in my experience. The camera adds its own contrast as part of the algorithm built into the software, which cannot be deconvoluted. I typically don't count any "results" unless they are orders of magnitude different. Use a spectroscopy method instead; there are several documented. The variability described is extremely typical of normal expression variability in any cell culture, even of highly engineered clonal populations.

4. Three 6-well plates is not enough due to variability mentioned in #3. Variability is compounded by the lack of homogeneity in plate culture. A reactor or at least a large-ish spinner or shaker flask is a better choice to get a more representative sample and maintain a homogeneous population.

Wow, I must revise #2: Apparently they were indeed trying to get primary cell cultures. "Trying" being the operative word. Um...that's just not how it's done, and my point still stands--that's a sure-fire method for contamination by any of the several dozen other cell types present in chunks of human bone. There are a few protocols for deriving primary osteocytes, but the authors didn't use any of 'em.

Sorry for the double post.

Can I get a Therapeutic Touch practitioner to heal my washing machine's energy field?

"The reason is that, at a 5% confidence level, each pairwise comparison produces a 5% chance of a "false positive," and the more comparisons there are the more the chance of finding a "false positive" increases."

Actually, I think people are overly reliant on the 5% confidence level. Many (I'm sure Orac is not among them, but I'd still like to point this out) probably confuse a 5% chance that the result is random with a 95% chance that the effect is genuine. This is not true, though.

Consider the murder of Mr. Jones. DNA tests of blood from the culprit were used to identify Mr. Smith as the murderer. The probability of a chance match is 1 in a million (make this a billion if that number is too small for you, or a trillion). Does this mean that with 99.9999% probability Mr. Smith is really the murderer?

Nope. Consider the case that Mr. Smith, at the time of the murder, was giving a talk in front of hundreds of people. Or that he was known to be 1000 miles away. Or that he was actually dead at the time of the murder.

The point being that the probability of the explanation being the correct one is not easily related to the 5%-by-chance probability given by the p = 0.05 criterion. (Somebody who is more versed in statistics than I am could probably point out how this relates to Bayesian statistics...)

So even *if* the results were statistically significant, it would only mean that this is something we should study further, nothing more.
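The commenter's point is indeed Bayesian, and it can be made concrete with Bayes' theorem. The numbers below are purely illustrative assumptions of mine, not taken from any study:

```python
# Illustrative assumptions: suppose 1 in 1,000 tested "implausible" effects
# is real, a real effect is detected 80% of the time (power), and a null
# effect comes out "significant" 5% of the time (alpha).
prior_real = 0.001
power, alpha = 0.80, 0.05

# Bayes' theorem: P(real | significant result)
p_significant = power * prior_real + alpha * (1 - prior_real)
posterior_real = power * prior_real / p_significant

print(f"P(effect is real | p < 0.05) ≈ {posterior_real:.3f}")  # prints 0.016
```

Even with a "significant" result in hand, a sufficiently implausible hypothesis remains improbable; under these assumptions the posterior is under 2%, which is the formal version of "extraordinary claims require extraordinary evidence."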

'obviously impossible' is NOT a bad preliminary criterion, when used correctly. When a single experiment seems to contradict principles that have been established through thousands of experiments and lines of thought, you have 3 basic forks:

1) Error (or bias or dishonesty) in the experiment

2) Misunderstanding the results

3) Everything else is wrong, this is right.

My money is on 1 or 2

There should be an annual award for "most craptacular study or experiment". This one would win.
NCCAM . . . your tax dollars at work . . . helping quacks fleece the public

Yeah, Orac's 95% CI statement is a little off base (although along *extremely* common lines for interpreting measures of precision) - but the core of the point remains that, even doing an experiment where the "true" answer is "this effect doesn't exist," there exists a possibility that through chance your results will be positive.

And for a study like this one, I suspect it's considerably higher than 5%, stats or no stats.