That pesky cell phone issue again

I hate writing posts like this almost as much as people hate reading them. But write them I must. It's the cell phone issue again.

Health risks from cell phones aren't supposed to happen because the radiofrequency electromagnetic radiation involved is not energetic enough to ionize molecules. The damage done by ionizing radiation is related to the chemical changes that ensue from the ionizations. Those chemical changes don't occur with exposure to non-ionizing radiation. The most non-ionizing radiation is supposed to do is heat up the tissue (as in a microwave oven), and the thermal effects from most kinds of non-ionizing radiation are so small they are lost in the noise of normal thermal fluctuations. Moreover, we are bathed in non-ionizing radiation. Visible light, for example, is of this type (note that light does interact with our tissues; that's how we are able to see). There is also lots of radiation we don't see and that doesn't appear to interact with our tissues, from the kind given off by power lines to radio and TV stations to infrared and LED remote controls. The list is endless. And it includes cell phones.
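
For concreteness, a quick back-of-the-envelope comparison (assuming a typical handset frequency of roughly 900 MHz, an illustrative choice): the energy carried by a single photon is

    E = hf ≈ (6.6 × 10^-34 J·s) × (9 × 10^8 Hz) ≈ 6 × 10^-25 J ≈ 4 × 10^-6 eV

while ionizing a water molecule takes about 12.6 eV, more than a million times as much. On energy grounds alone, then, a cell phone photon cannot break chemical bonds the way x-rays or gamma rays can; the question is whether subtler, non-thermal effects exist.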

Unfortunately, inconvenient results keep appearing, ranging from basic studies on cells to a rather extensive epidemiological literature. There is nothing conclusive, and there are counter-explanations for all the results. But they keep coming up. This week the journal BMC Genomics (a very respectable peer reviewed journal) had another one. Here's the abstract:

Background

Earlier we have shown that the mobile phone radiation (radiofrequency modulated electromagnetic fields; RF-EMF) alters protein expression in human endothelial cell line. This does not mean that similar response will take place in human body exposed to this radiation. Therefore, in this pilot human volunteer study, using proteomics approach, we have examined whether a local exposure of human skin to RF-EMF will cause changes in protein expression in living people.

Results

Small area of forearm's skin in 10 female volunteers was exposed to RF-EMF (specific absorption rate SAR=1.3W/kg) and punch biopsies were collected from exposed and non exposed areas of skin. Proteins extracted from biopsies were separated using 2-DE and protein expression changes were analyzed using PDQuest software. Analysis has identified 8 proteins that were statistically significantly affected (Anova and Wilcoxon tests). Two of the proteins were present in all 10 volunteers. This suggests that protein expression in human skin might be affected by the exposure to RF-EMF. The number of affected proteins was similar to the number of affected proteins observed in our earlier in vitro studies.

Conclusions

This is the first study showing that molecular level changes might take place in human volunteers in response to exposure to RF-EMF. Our study confirms that proteomics screening approach can identify protein targets of RF-EMF in human volunteers. (Karinen et al., BMC Genomics)

You can read the whole paper here. So what does this say? It doesn't say there is a health risk from cell phones. It does suggest that the non-ionizing radiation from these phones produced a biological effect, an alteration in protein expression in people's skin cells. This adds to a growing body of literature indicating that the biological consequences of non-ionizing radiation can go beyond heating of tissue.

We have become quite dependent upon cell phones, just as we have become dependent on the electric grid, which also produces electromagnetic radiation of a different frequency. It is certainly within the realm of possibility that either or both might have health consequences. This doesn't make them much different from many other amenities of daily life, including the computer I am sitting in front of at this instant.

Or automobiles. They are dangerous enough to kill 40,000 Americans a year. That is fewer than they used to kill, because we have been making them safer. Not totally safe, but safer. If it turns out that some sources of non-ionizing radiation pose hazards (which wouldn't surprise me in the least), then I suggest we take the same attitude: we try to make them safer. With power lines there are a variety of ways to do this, and certainly this is true with cell phones, too.

But before we can do this we need to take the possibility seriously and not continually dismiss as cranks scientists who assert there are health or biological effects. Some of them might be cranks, but it can no longer be assumed that even entertaining the possibility is prima facie evidence of crankhood. This in turn means more work to delineate the biological effects and to use that information, if indicated, to make the ubiquitous sources of this type of radiation safer. This will take time, and if there are effects I suspect they are not big. But given the extraordinary prevalence of exposure, the aggregate burden could be substantial.

I know no one wants to hear this, nor do I want to write it. That's life.


revere rightfully lectures us, the devoted readers, that we shouldn't dismiss as cranks scientists who look into the possible health ramifications of our current technology, such as the use of cell phones. I wonder why this respect doesn't extend to the scientists who have questions, concerns, or differing opinions, and are not fully indoctrinated into the humans-are-causing-global-warming (and American humans are the worst) debate. They are called 'deniers,' which I think everyone has to admit invokes visions of religious fanaticism on the part of the humans-are-causing-global-warming advocates. (For some reason, I keep getting that pod people movie popping into my head - people pointing and shrieking DENIER! in a very high pitched whine - it's shiveringly spooky.)

By pauls lane (not verified) on 25 Feb 2008

revere,

You are right that we must not dismiss such findings out of hand. We must assess them based on the evidence presented.

Unfortunately, in the case of this study, the evidence is embarrassingly weak. The authors compared 579 different protein spots on 2D gels by pairwise t-test, and found 8 spots where the intensity differed between treated and control with p < 0.05. But in the authors' own words, "The p-values are not adjusted for multiple comparisons."

If I do 579 t-tests without adjusting for multiple comparisons, and if my errors are normally distributed, I can expect 5% of those tests (i.e. ~ 29 spots) to yield p < 0.05 by chance alone. The authors actually found only 8.
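
To make that concrete, here is a minimal simulation of the situation: 579 paired t-tests on pure noise, with 10 "volunteers" per test. The normal error model and everything except the numbers 579, 10, and 0.05 are illustrative assumptions, not the authors' actual data or pipeline.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_spots, n_pairs, alpha = 579, 10, 0.05

    false_positives = 0
    for _ in range(n_spots):
        # Null data: "exposed" and "control" intensities are drawn from
        # the same distribution, so there is no real effect anywhere.
        exposed = rng.normal(0.0, 1.0, n_pairs)
        control = rng.normal(0.0, 1.0, n_pairs)
        _, p = stats.ttest_rel(exposed, control)
        if p < alpha:
            false_positives += 1

    print(f"Expected by chance alone: {n_spots * alpha:.0f}")  # ~29
    print(f"Found in this run: {false_positives}")

Rerun it with different seeds and the count hovers around 29 - comfortably above the 8 spots the paper reports.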

Moreover, of those 8 spots, the "prevalence" ranged from 4-10, where (in the authors' words): "prevalence shows in how many volunteers, out of 10, was detected given protein spot." As best I can tell, that means that 6 of the 8 spots weren't detected at all in some volunteers. Indeed, Figure 1 shows that the two spots that had a prevalence of 10 were the two most intense spots. They were also the ones that appeared to change the least in response to treatment. The other 6 "significant" spots were much fainter, supposedly changed much more (up to 220 fold?), and couldn't even be detected in up to 6 of the volunteers. All of that is exactly what I would expect for a false positive. A spot near the limit of detection is most likely to show spurious apparent changes.

The authors don't give enough detail to really evaluate their statistical analyses or their methods (another big flaw), so it's possible I'm misinterpreting some or all of the above. But on the whole, this paper fairly screams "artifact" to me.

As mobile phone radiation is in the right frequency range to cause heating effects in tissue, it is conceivable that it can cause changes in protein expression.

However, ultra-low frequency EMF, such as that from power lines, is a whole different issue, as the long wavelength (thousands of kilometres) precludes any meaningful transmission of energy to biological tissues (i.e. no heating and no effects on cell molecules).

It is somewhat reassuring that there hasn't been a spate of brain tumours since the introduction of mobile phones a decade ago, but then again there just might be a long latency period, as with asbestos...

tlazolteotl - it matters little what I dismiss. I am referring to the global warming culture of hate that has arisen. The only aspect of global warming that cannot be denied. Please see:

http://online.wsj.com/article/SB120390556121489679.html?mod=opinion_jou…

which I quote: "John McCain, Barack Obama and Hillary Clinton all promise bold action on climate change. All have endorsed a form of cap-and-trade system that would severely limit future carbon emissions. The Democratic Congress is champing at the bit to act. So too is the Climate Action Partnership, a coalition of companies led by General Electric and Duke Energy.

You'd think this would be a rich time for debate on the issue of climate change. But it's precisely as sweeping change on climate policy is becoming likely that many people have decided the time for debate is over. One writer puts climate change skeptics "in a similar moral category to Holocaust denial," another envisions "war crimes trials" for the deniers. And during the tour for his film "An Inconvenient Truth," Al Gore himself belittled "global warming deniers" as unworthy of any attention."

By pauls lane (not verified) on 25 Feb 2008

Which raises another question: which is less safe, the cell phone or the Bluetooth device on your ear?

qetzal: Please don't get me started on the multiple comparison conundrum. Does this mean that every time you publish data on the same data set you have to go back and issue corrections on your old papers to "correct" for multiple comparisons? And why even the same data set? Think of all the statistical tests that have ever been done. Should we go back and correct them? Think about it.

The above post (10:01 PM) is revere, right? It has my name, but I certainly didn't post it.

Anyway, are you seriously dismissing the issue of multiple comparisons here? If so, let me pose a thought experiment. Suppose I take a single sample containing 579 different proteins. I'll divide that sample into 20 aliquots, and run a 2D gel on each aliquot.

Next I'll arbitrarily pick two gels at a time, call one "treated" and one "control." For each pair, I will determine the ratio of treated:control spot intensity for all 579 spots.

Now I'm going to pool that data from the 10 pairs. Finally, I'm going to test the data for each of the 579 spots, and ask if the treated:control ratio is different from 1, with p < 0.05. Tell me - should I expect to get any 'statistically significant' results this way?

Obviously, there is no real difference in spot intensity between treated and control. It's all one sample. Nevertheless, I will almost certainly find that the apparent treated:control ratio is significantly different from 1, with p < 0.05, for a few dozen spots.

Determining the intensity of each spot necessarily entails some degree of error. When I take the ratio of intensities between gels, that error means I will often get a value other than 1. For most spots, that error will balance out. In some gel pairs the ratio will be greater than 1, in others it will be less than 1. But for a few spots, just by chance, the errors will happen to be all or mostly in one direction. In those cases, I can readily expect to get p values less than 0.05. And in fact, depending on how I do my statistical testing, and how the errors in spot intensity are distributed, I should expect about 5% false positives with p < 0.05. That's pretty much what p < 0.05 means, right?
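
Here is that thought experiment as a short sketch. The normally distributed error on the log-ratio scale is an assumption; any reasonable error model gives the same qualitative answer.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_spots, n_pairs = 579, 10

    hits = 0
    for _ in range(n_spots):
        # Log of the treated:control intensity ratio. Its true value is 0
        # (a ratio of 1), because every gel came from the same sample;
        # the only variation is measurement error.
        log_ratios = rng.normal(0.0, 0.3, n_pairs)
        _, p = stats.ttest_1samp(log_ratios, 0.0)  # H0: ratio == 1
        if p < 0.05:
            hits += 1

    print(f"'Significant' spots from one homogeneous sample: {hits}")

Roughly 5% of the 579 spots - about 29 - come out "significant" even though nothing differs between "treated" and "control."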

Now, I don't know exactly how the authors of this paper did their analysis, because they do a terrible job of explaining it. And it's not just due to the language barrier. They simply don't provide the minimum information needed. But it's clear that they did something comparable to what I described above. They did 579 significance tests and then concluded that the results with p < 0.05 were probably real, even though we'd expect about that many false positives just by chance.

They even admit it right in the paper:

Interestingly, when adjusting results of our previous cellular study [6] using the size of proteome analyzed in the present study (pI 4-7; less than 40kDa) the number of the statistically significantly affected proteins appears to be similar in this and in earlier [6] study, 8 spots and 9 spots, respectively. The number of differentially expressed protein spots in both studies is below the number of expected false positives. However, as we have demonstrated experimentally [6] and discussed previously [11] it is likely that some of the proteins will be indeed, real positives.

If you dig up their reference 6, it's a very similar study, except in cultured cells. There again, they did multiple comparisons and found a few with p < 0.05, but no more than expected by chance. Yet when they looked at some of those proteins by alternate methods, at least one seemed to really show variation due to radiation treatment. I'll readily admit that is surprising. But it's the only real piece of data in either paper that actually seems to show some possible effect. And given the quality of everything else they did, I'm inclined to be quite skeptical of that result.

Reference 11 is also an interesting read. There, the senior author agrees that yes, we'd expect to see false positives at about the frequency they observed. But, he argues, that doesn't prove they are false positives! And at least one in ref 6 was (supposedly) a real result, so they're probably all real. Seriously.

I'm not arguing that cell phones can't possibly have biological effects. I don't know that literature well enough. But this particular study adds absolutely nothing to our understanding of the issue. I also recommend you learn a bit more about statistics if you're going to argue over it. Your comment suggests more ignorance of the subject than I'd expect from a senior public health scientist and/or practitioner.

It is almost inevitable that there will be some biological effects of any radiation, and this is gradually being defined and understood through the emerging science of biophysics. There are many examples in biology of situations where biomolecules or metallo-ions absorb energy and that this energy is actively involved in catalysing or regulating biochemical reactions, and affecting biochemical pathways.

A non human (but well known) example would be the action of quantosomes within the chlorophyll system of a plant during photosynthesis.

We already know that temperature affects reaction rates and equilibrium points in chemical reactions. Thus, any radiation capable of causing a change in temperature or biomolecular energy within a cell is likely to affect it - we don't know if the effects are adverse or not.

As we come to understand the role of bioenergetics in biochemistry, we will be better able to quantify what the effects of differing forms of radiation might be. As Revere has pointed out, modern society has become dependent on many radiation-emitting pieces of technology. As a society, we may just have to exercise a 'risk vs. benefit' trade-off. We do not live in a world where there is no risk - instead we have to choose which risks we want to take.

Even if this study cannot be considered strong enough to draw any firm conclusions, it is only a question of time before such effects are demonstrated. At least if the mechanisms and effects are more clearly understood we can take measures to minimise risks wherever possible - the trouble is that we do not yet understand the relative risks of differing forms of radiation. Perhaps we will know more in a decade or two ...

qetzal: The reason I called it the multiple comparison conundrum is because it is actually quite puzzling. Some noted epidemiologists (e.g., Poole, Rothman) have said that the usual way of thinking about this can't be right, and almost everyone agrees the usual "cure" (the Bonferroni correction) is Draconian and inappropriate. Now I invite you to do a thought experiment. Why isn't every comparison ever made subject to the objections you have raised? Think about it. What is the denominator? If you only want to consider a single data set (although the rationale for this is doubtful), why wouldn't you have to go back and correct all previous papers? Suppose this group had published the comparisons as separate papers? You cannot dismiss results like this just by waving the magic "multiple comparisons" wand.
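
For readers following along, here is what the two standard remedies actually do; the p-values below are simulated pure noise, purely for illustration. Bonferroni controls the family-wise error rate by shrinking the per-test threshold to alpha/m (with m = 579 that is about 0.000086, which is why it gets called Draconian), while the Benjamini-Hochberg procedure controls the false discovery rate and is much less severe.

    import numpy as np

    def bonferroni(pvals, alpha=0.05):
        # Reject only where p < alpha / m.
        return pvals < alpha / len(pvals)

    def benjamini_hochberg(pvals, alpha=0.05):
        # Step-up FDR procedure: reject the k smallest p-values, where k
        # is the largest i such that p_(i) <= (i / m) * alpha.
        m = len(pvals)
        order = np.argsort(pvals)
        below = pvals[order] <= (np.arange(1, m + 1) / m) * alpha
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        rejected = np.zeros(m, dtype=bool)
        rejected[order[:k]] = True
        return rejected

    pvals = np.random.default_rng(2).uniform(size=579)  # pure noise
    print("Uncorrected 'hits':", (pvals < 0.05).sum())        # ~29
    print("Bonferroni rejections:", bonferroni(pvals).sum())  # ~0
    print("Benjamini-Hochberg rejections:", benjamini_hochberg(pvals).sum())

On pure noise, the uncorrected test "finds" a few dozen spots while both corrections find essentially none.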

revere,

I agree that correcting for multiple comparisons can introduce new problems, and that there are situations where the best approach isn't clear. This isn't one of them.

If the authors repeated their methods exactly, but used aliquots of the same protein sample for every gel, they should still get a handful of spots with p < 0.05. The number of such false positives should be similar to the number of spots they claim are true positives.

Do you dispute this?

Their results are no different from what is expected under the null hypothesis. We expect to see about this number of false positives (more, actually). The authors admit it in the paper!

Their only data is indistinguishable from expected false positives, yet they conclude there is a true treatment effect. Since when are results that match the null hypothesis considered evidence against the null hypothesis?

Things would be different if the authors had used a second method like ELISA and showed concordance with the gel data. They did not.

qetzal: It is more than "sometimes" inappropriate. There is a basic defect in the reasoning. First, you seem to have made an error in saying that evidence against rejection of (a modified) null is evidence for the null. Of course it isn't, as I am sure you recognize. You are only leaving open the possibility that this is a false positive. Moreover, the exact repetition of experimental methods is irrelevant to the reasoning. The defect must lie somewhere in the frequentist (Neyman-Pearson) paradigm, which is producing this aberrant result. Fisher would not have allowed it, nor would the likelihood folks or the Bayesians. So it is not so easy. These are deep waters, indeed.

First, you seem to have made an error in saying that evidence against rejection of (a modified) null is evidence for the null.

No, I'm saying that if the ONLY evidence presented is completely consistent with the null hypothesis, we have no reason to assume anything other than the null hypothesis.

I'm also saying that in this study, the ONLY evidence presented is, in fact, completely consistent with the null hypothesis. (Please say so if you disagree with this specific point!)

Saying there's a defect in the frequentist paradigm only hurts your argument. It's the authors of the study who are trying to use frequentist statistics (incorrectly) to claim an effect. If you argue that such statistics are intrinsically flawed, you've eliminated their whole basis for claiming statistical significance!