When I wrote earlier about Steve Milloy, I commented on his attack on a study that found that the introduction of safe-storage laws was followed by a 23% reduction in unintentional shooting deaths of children. Milloy claimed:
The reported 23% decrease in injuries is a pretty weak result-probably beyond the capability of the ecologic type of study to reliably detect. Even in the better types of epidemiology studies (i.e., cohort and case-control), rate increases of less than 100% (and rate decreases of less than 50%) are very suspect.
Milloy repeats this factor-of-two principle many times on junkscience.com. For example, on this page Milloy asserts:
Relative risks from 1.0 - 2.0 should be ignored.
(This page explains what a "relative risk" is if you don't know.)
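Briefly, a relative risk is the rate of some outcome in an exposed group divided by the rate in an unexposed group, so a relative risk of 1.0 means no difference at all. A minimal sketch, with made-up numbers:

```python
# Relative risk: the outcome rate in the exposed group divided by the
# rate in the unexposed group. All numbers are invented for illustration.

exposed_cases, exposed_total = 50, 10_000       # hypothetical exposed group
unexposed_cases, unexposed_total = 40, 10_000   # hypothetical unexposed group

risk_exposed = exposed_cases / exposed_total        # 0.005
risk_unexposed = unexposed_cases / unexposed_total  # 0.004

relative_risk = risk_exposed / risk_unexposed
print(f"relative risk = {relative_risk:.2f}")  # 1.25, i.e. a 25% higher rate
```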
In my earlier post I observed that Milloy somehow neglected to apply this factor-of-two principle to Lott's work. Today I want to write about the origins of his principle. It's a very interesting story.
When I first read his comments I was rather puzzled. A measure that reduced crime by 45% would be a pretty spectacular success, but by Milloy's principle it would be ignored. If you look in statistics textbooks you will not find Milloy's principle. You will find that two sorts of significance are important (a sketch contrasting them follows the list):
- Statistical Significance
- Is it likely that the result occurred by chance? A result that has less than a 5% probability of occurring by chance is usually considered statistically significant. (Although values other than 5% could be used.)
- Practical Significance
- Does it make a difference that matters? A measure that only made a difference of a handful of crimes in the whole country probably isn't worth worrying about.
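To make the distinction concrete, here is a rough sketch of a significance test for a drop in a rate, using a standard normal approximation. All the counts are invented; nothing here comes from the actual safe-storage study.

```python
import math

# Hypothetical counts: deaths before and after a law takes effect, with
# equal exposure (person-years) in each period. Invented numbers only.
deaths_before, deaths_after = 300, 231      # a 23% drop
years_before = years_after = 1_000_000      # person-years at risk

rate_before = deaths_before / years_before
rate_after = deaths_after / years_after

# Two-sample test for a difference in rates (normal approximation).
pooled = (deaths_before + deaths_after) / (years_before + years_after)
se = math.sqrt(pooled / years_before + pooled / years_after)
z = (rate_before - rate_after) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.4f}")    # z = 2.99, p is about 0.003
# p < 0.05, so the 23% drop is statistically significant with these counts.
```

Whether 69 fewer deaths per million person-years is practically significant is a judgement about the real world, not something the test itself can tell you.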
Another important thing statistics texts will tell you is that correlation is not the same thing as causation. Just because the safe-storage law was followed by a 23% drop in injuries, it doesn't follow that the law caused the drop. Some other factor might have caused the drop. Some people misunderstand this to mean that correlation has nothing to do with causation. Correlation doesn't prove causation, but it is evidence for causation.
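To see how a correlation can arise without any causal link, here is a toy simulation in which a third factor drives both variables. The data are purely synthetic:

```python
import math
import random

# A confounder Z drives both X and Y, so X and Y correlate even though
# neither has any effect on the other. Purely synthetic data.
random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # X depends only on Z
y = [zi + random.gauss(0, 1) for zi in z]  # Y depends only on Z

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u)
    sv = sum((b - mv) ** 2 for b in v)
    return cov / math.sqrt(su * sv)

print(f"corr(x, y) = {corr(x, y):.2f}")  # about 0.5, with no causal link
```

Here X and Y correlate at about 0.5 even though neither causes the other; the correlation is still evidence of some causal structure, it just doesn't tell you which one.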
Milloy's factor-of-two principle arises from neither sort of significance. Larger factors are more likely to be statistically significant, but a factor of 2 can easily be statistically significant. If we are talking about a very rare crime, a factor-of-two change might not be practically significant, but for more common ones it most certainly would be. Finally, larger factors are stronger evidence for causation. There aren't many things that make a factor-of-ten difference, so if we find a correlation that large, it's unlikely to really be caused by something else. Things that make a factor-of-two difference are more common, so a correlation of that size is more likely to really be caused by something else, but that certainly does not mean that it should be "ignored".
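To illustrate that a factor of 2 can easily be statistically significant, here is a sketch that computes a 95% confidence interval for a relative risk using the usual log transform (the Katz method). The counts are invented:

```python
import math

# 95% confidence interval for a relative risk via the log transform
# (the standard Katz method). All counts are invented for illustration.

def rr_confidence_interval(a, n1, b, n2, z=1.96):
    """CI for the relative risk (a/n1) / (b/n2)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# A relative risk of exactly 2 in a reasonably large cohort:
rr, lo, hi = rr_confidence_interval(200, 10_000, 100, 10_000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# RR = 2.00, CI roughly (1.58, 2.54). The interval excludes 1, so the
# result is statistically significant even though the factor is only 2.
```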
The only authority that Milloy offers in support of his principle that risks of less than a factor of two should be ignored is an out-of-context quote from a National Cancer Institute press release about a study finding a link between breast cancer and abortion. If you look at the whole press release you will see that they are not saying that all risks of less than a factor of two should be ignored, but that a risk of less than two, along with other evidence, suggested that the link was spurious (as subsequent work found). Milloy even complains that the NCI didn't follow his principle in other cases.
That brings me to an amazing story that was revealed in the Philip Morris documents archive. You see, in 1992 the EPA concluded that passive smoking caused lung cancer with a risk factor of about 1.25 for a non-smoker with a smoking spouse. Philip Morris obviously wanted to discount this finding. If only epidemiology guidelines included Milloy's factor-of-two principle, then they could point to them and dismiss the EPA's result. So Philip Morris set out to get the epidemiologists to adopt Milloy's principle.
They funded the creation of TASSC and junkscience.com. Milloy used junkscience.com to energetically attack the EPA's passive smoking conclusions and promote the factor-of-two principle. They also organised a series of seminars to try to get the scientific community to adopt what they called "Good Epidemiology Practices" (GEP). The GEP guidelines were mostly perfectly reasonable things like
3. Statements of study design should contain a description of statistical techniques.
However, slipped into the middle of the GEP guidelines was this:
8. Odds ratios of 2 or less should be treated with caution, particularly when the confidence intervals are wide. There is a likelihood that the odds ratio is artefactual and the result of problems with case or control selection, confounders or bias.
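For reference, an odds ratio is computed from a 2x2 case-control table, and it is the width of the confidence interval, not the bare size of the ratio, that determines how much caution is warranted. A minimal sketch with invented counts:

```python
import math

# Odds ratio from a 2x2 case-control table, with a Woolf (log-based)
# 95% confidence interval. All counts are invented for illustration.
#
#               exposed   unexposed
#   cases          a          b
#   controls       c          d

def odds_ratio_ci(a, b, c, d, z=1.96):
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return odds_ratio, odds_ratio * math.exp(-z * se), odds_ratio * math.exp(z * se)

# A small study: the odds ratio is 2 but the interval is wide.
print(odds_ratio_ci(20, 10, 100, 100))          # OR 2.0, CI roughly (0.9, 4.5)
# A large study: the same odds ratio, with a tight interval excluding 1.
print(odds_ratio_ci(2000, 1000, 10000, 10000))  # OR 2.0, CI roughly (1.8, 2.2)
```

The same odds ratio of 2 is hopelessly imprecise in the small study and solidly established in the large one, which is exactly why a blanket cut-off at 2 makes no sense.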
The reaction of the scientists to the GEP guidelines was something like this:
"Excellent idea! We need guidelines for good practice and these fit the bill. We should adopt them...
Oh, except for number 8 about odds ratios. That doesn't make sense so we'll drop that one."
Philip Morris kept pushing its GEP guidelines to various scientific organizations for several years, but eventually realized that it just wasn't going to work, as this internal memo explains:
Approximately three years ago, the concept of GEP's was discussed in considerable detail in PM. Corporate Affairs thought it was a wonderful idea, because at first they ... felt that part of a code for Good Epidemiological Practices would state that any relative risk of less than 2 would be ignored. This is of course not the case. No epidemiological organization would agree to this, and even Corporate Affairs realizes this now.
The full story of GEP, with copious references to Philip Morris' internal documents, is detailed in a paper published in the American Journal of Public Health.
The fact that the Philip Morris executives thought that their GEP plan had a chance of succeeding tells us something about how they think science is conducted. The scientists did not adopt Milloy's factor-of-two principle because it was, well, wrong. The Philip Morris executives thought that the truth of something did not matter to the scientists: you could get them to say something just by lobbying them. This attitude seems common to promoters of "sound science". They seem to think that real scientists aren't interested in finding out what is true or false but instead just concoct results to advance a political agenda or get more funding. In other words, they think real scientists operate like they do.
Efforts to promote Milloy's bogus factor-of-two principle continue to this day. Just last month Iain Murray published an article where he wrote:
Epidemiologists generally agree that one cannot ascribe medical causation to a risk factor if the factor is associated with less than double the occurrence than normal.
No, epidemiologists do not "generally agree" with this. In fact, Philip Morris' efforts to get them to agree with this proposition have proven that they do not agree with it at all.
And where was Murray's article published? Tech Central Station, another astroturf operation like junkscience.com. And who employs Murray? The Competitive Enterprise Institute, which is partly funded by Philip Morris.