Quote mining about secondhand smoke

Not surprisingly, in response to my article on the health risks of secondhand smoke yesterday, the "skeptics" came out in force, although I must admit that even I hadn't expected quite as large an influx as appeared. Perhaps I'll prepare a general response in the near future (and, no, I didn't take the Surgeon General's report as the be-all and end-all, but it did make a compelling case for SHS increasing the risk of lung cancer and cardiovascular disease at least, and it also served as a convenient aggregator of the many, many studies out there). In the meantime, one commenter in particular piqued my interest, as he or she reposted a list of quotes by various scientists claiming that relative risks less than 2.0 are basically rubbish and not to be trusted. Not coincidentally, the relative risks of heart disease and lung cancer due to SHS estimated by epidemiological studies range between 1.2 and 1.3 in most studies and meta-analyses, including the aggregation of these studies in the Surgeon General's report. By implication, this list of quotes, rather than finding substantive problems with the actual epidemiological studies or how they were aggregated, is clearly meant to appeal to authority to convince you that any relative risk under 2.0 should be ignored as completely unreliable.

Not surprisingly, my skeptical antennae started twitching when I read this list, as it smacked of crank talking points based on quote mining, similar to the anti-Darwin quotes that are the favorites of "intelligent design" creationists. The liberal use of ellipses in the quote by Sir Richard Doll in particular made me suspicious. Unfortunately, that was the one quote whose context I couldn't find online. (I will, however, most definitely look in my medical school's library for the book the next time I'm there.) Consequently, I decided to look up as many of these quotes as I could find online. It didn't take long for me to find this list of quotes published virtually verbatim at Forces.org, among other places.

Forces.org is an amazing website, a virtual repository of smoking crankery beyond anything I've seen before. It not only denies any dangers from secondhand smoke, but denies that smoking itself causes cancer. For example, it claims that medical radiation is a necessary and more important cofactor in causing lung cancer, and even goes so far at one point as to say explicitly, "There is no proof that smoking causes cancer." Wow. Not even the SHS "skeptics" make the claim that smoking itself doesn't cause cancer, but Forces.org apparently does, or at the very least strongly implies that smoking isn't such a big deal in causing lung cancer. I can't say that I've ever seen a website about smoking that's quite so...well, cranky. I have to thank rrgabe23 for pointing me to this site, however inadvertently and indirectly. There's enough crankery there to keep both me and fellow SB'er Mark busy for quite a while, should either of us ever be so inclined to wade into the muck there.

But I digress. Back to the quotes.

I'll start with the easiest one to find, because it was taken from the Journal of the American Medical Association. It was a big mistake to include a quote from a journal that's easily accessible online, because I could pull up the context behind it, and I did. So let's look at two quotes as published in the list:

FDA - "Relative risks of 2 have a history of unreliability" - Robert Temple, M.D. Food and Drug Administration Journal of the American Medical Association (JAMA), Letters, September 8, 1999

FDA - "My basic rule is if the relative risk isn't at least 3 or 4, forget it." - Robert Temple, director of drug evaluation at the Food and Drug Administration.

First off, the second quote comes from a news article published in Science in 1995 entitled "Epidemiology Faces Its Limits." Here's the context:

Robert Temple, director of drug evaluation at the Food and Drug Administration, puts it bluntly: "My basic rule is if the relative risk isn't at least three or four, forget it." But as John Bailar, an epidemiologist at McGill University and former statistical consultant for the NEJM, points out, there is no reliable way of identifying the dividing line. "If you see a 10-fold relative risk and it's replicated and it's a good study with biological backup, like we have with cigarettes and lung cancer, you can draw a strong inference," he says. "If it's a 1.5 relative risk, and it's only one study and even a very good one, you scratch your chin and say maybe."

Some epidemiologists say that an association with an increased risk of tens of percent might be believed if it shows up consistently in many different studies. That's the rationale for meta-analysis, a technique for combining many ambiguous studies to see whether they tend in the same direction (Science, 3 August 1990, p. 476).

In other words, epidemiologists tend to disagree over whether a relative risk of less than 2 is significant in single studies or small numbers of studies, but there is fairly broad agreement that if a relative risk less than 2 is found in multiple studies done in different places with different methodologies, that's reasonable evidence that the correlation is more likely than not real. However, what was really interesting to me was when I looked into the source of the other quote, the one from the letter to JAMA. That letter was a response to a letter by Douglas Weed, M.D., Ph.D. of the National Cancer Institute criticizing an article by Dr. Temple entitled "Meta-analysis and Epidemiologic Studies in Drug Development and Postmarketing Surveillance." In this article, Dr. Temple suggested a blanket policy under which any epidemiological study reporting a relative risk of less than 2.0 should not be published until it is replicated. He suggested this not because he thinks that such relative risks are not real, but because he feels that single studies are too prone to problems. In other words, he's not saying that relative risks under 2.0 should be automatically discounted, but rather that detecting such low relative risks is "problematic":

The relative risk is a far more important determinant of how and whether adverse events can be detected than whether the events themselves are rare or common. Changes in the rates of relatively common events are often of greatest concern--a 30% increase in myocardial infarction rates, after all, would be more damaging than a 10-fold or even 100-fold increase in the rate of a 1 per million event--but methods to detect these changes other than through controlled trials are problematic.

How ironic. A 30% increase corresponds to a relative risk of 1.3, which is right around the value that has become pretty well established for the additional risk of cardiovascular events due to chronic exposure to SHS. One can't help but wonder if that was the example Dr. Temple had in mind when he made his statement. He goes on:

A 2- to 3-fold relative risk of a myocardial infarction or death is not a "small" increase in risk in the usual sense; it is far larger, for example, than the benefit of such effective treatments as postinfarction aspirin, β-blockade, angiotensin-converting enzyme inhibition, or thrombolysis. Nonetheless, Taubes found that a sizable group of epidemiologists did not consider findings of relative risks of this magnitude in epidemiologic studies reliable. Some suggested that replication of such a finding in different environments with different methods might be more persuasive than a single study.

Moreover, here's the passage from Dr. Temple's letter to JAMA that contains the quote cited above, this time in context:

Fourth, as indicated in my article, relative risks of 2 are not problematic because they are unimportant; it would be very desirable to detect them. Many of the most powerful interventions we have (eg, thrombolysis, use of postinfarction β-blockers) do not create effects as large as 2-fold. The problem is that such risks, when observed in epidemiologic studies, have a history of unreliability, not because of obvious errors or methodological inadequacy, but because selection and other biases cannot be fully controlled in these studies. It is also true, as Weed notes, that failure to see a small effect in these settings also would be unreliable.

Well, well, well, that certainly sounds different from the way the quote was presented, doesn't it? Dr. Temple even states that it would be "very desirable" to detect smaller relative risks! It should also be remembered that his quote comes in the context of his arguing not that we shouldn't believe relative risks less than 2, but rather that at least two studies should confirm such a low relative risk before we take it seriously. So we have at least one case (and possibly two) of deceptive quote-mining in the list of quotes. That'll teach 'em to include an easily checked quote in a list like that. (Look for Dr. Temple's quotes to disappear from future iterations of the list, to be replaced by another, less easily tracked down set of quote-mined quotes.) Unfortunately, the creators of this list weren't quite so careless with the other quotes, which were either hard or impossible to track down, at least online. Even so, let's see what else we can find.

Next up was the quote, "In epidemiologic research, relative risks of less than 2 are considered small and usually difficult to interpret. Such increases may be due to chance, statistical bias or effects of confounding factors that are sometimes not evident." - National Cancer Institute, "Abortion and possible risk for breast cancer: analysis and inconsistencies," October 26, 1994. I did a lot of Google searches looking for the context of this one. All that came up were websites downplaying the risk of SHS, such as Forces.org. It turns out that, as far as I can tell, this statement came from a press release by the NCI with that date. Unfortunately (and conveniently), it's not on the Cancer.gov website, meaning that I can't examine the full context; the NCI news page only appears to go back to 1998. However, I can piece together part of the context.

What I could figure out is that the above press release was apparently in response to a famous (or infamous, depending on your point of view) 1994 study published in the Journal of the National Cancer Institute that reported a 50% elevation in breast cancer risk in women who had had abortions (a relative risk of 1.5). Of course, it's not exactly rocket science to express skepticism of a single epidemiological study with a relative risk of less than 2, especially if there is no plausible biological mechanism to explain the result. That's a whole lot different from examining the results of dozens of studies. If I can find a complete copy of the NCI press release from which that quote is drawn, I may post again on this topic to provide full context.

Finally, we have the quote from a publication by the World Health Organization: "Relative risks of less than 2.0 may readily reflect some unperceived bias or confounding factor, those over 5.0 are unlikely to do so." - Breslow and Day, 1980, Statistical Methods in Cancer Research, Vol. 1, The Analysis of Case-Control Studies. I'm half-tempted to order a copy of this publication, just to see what the context is for that quote. Given that it's just one sentence, I'm very curious.

In any case, for single studies, it is indeed wise to interpret a relative risk of less than 2 with great caution. When many studies all point to a relative risk between 1 and 2, however, it is reasonable to start to conclude that the findings are probably real. Moreover, as Dr. Temple, one of the scientists quoted above, said, relative risks below 2 may well be real and have real practical and clinical significance, making it desirable to detect them. The problem is that, because such risks are relatively small, a lot of confounding factors can indeed interfere with individual non-controlled studies. Again, that's why multiple studies are needed, and we have that for the dangers of SHS. Moreover, it's not true that we don't act on relative risks less than 2. Prime examples come from just the past couple of years. For example, a few months ago, a pooled meta-analysis of studies looking at cardiovascular risk from Avandia revealed a relative risk of 1.43 for myocardial infarction and 1.64 for death from cardiovascular events. This is a single analysis with relative risks of less than 2.0, yet it led to a widespread change in doctors' prescribing habits, such that Avandia prescriptions have plummeted. And, of course, I can't resist pointing out that Dr. Temple himself cited a number of examples where relative risks of less than 2 are widely accepted as real.
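For the statistically inclined, here's a minimal sketch of why replication matters so much for small relative risks: several studies whose individual confidence intervals all cross 1.0 can, when pooled, produce an interval that doesn't. The counts below are invented purely for illustration (they are not from the SHS literature), and the method shown is simple fixed-effect, inverse-variance pooling on the log scale; real meta-analyses, including the one in the Surgeon General's report, use more elaborate techniques.

```python
# Minimal sketch (invented counts): pooling several small relative risks.
# Each tuple is (events_exposed, n_exposed, events_unexposed, n_unexposed).
import math

studies = [
    (60, 1000, 48, 1000),
    (33, 600, 26, 600),
    (90, 1500, 70, 1500),
    (45, 800, 36, 800),
]

log_rrs, weights = [], []
for a, n1, c, n0 in studies:
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)   # SE of log(RR), delta method
    lo, hi = math.exp(math.log(rr) - 1.96*se), math.exp(math.log(rr) + 1.96*se)
    print(f"single study: RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
    log_rrs.append(math.log(rr))
    weights.append(1 / se**2)                  # inverse-variance weight

pooled = sum(w*x for w, x in zip(weights, log_rrs)) / sum(weights)
se_p = math.sqrt(1 / sum(weights))
print(f"pooled: RR = {math.exp(pooled):.2f}, 95% CI "
      f"({math.exp(pooled - 1.96*se_p):.2f}, {math.exp(pooled + 1.96*se_p):.2f})")
```

Each individual study's interval here includes 1.0 ("no effect"), but the pooled estimate comes out at roughly 1.27 with a 95% CI of about (1.05, 1.53), excluding 1.0: exactly the situation where a small relative risk becomes credible through consistent replication.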

But who's kidding whom? The real purpose of the list above is nothing more than an appeal to authority to suggest that any relative risk below 2 is bullshit and to be ignored, when such is not the case. The real message from epidemiologists is that relative risks below 2 should be viewed with skepticism, particularly when they come from single studies and there is no biologically plausible mechanism to explain the noted association. Neither condition applies to SHS. Not only is there a biologically plausible mechanism, given that we know that smoking causes cancer and cardiovascular disease, but there are many studies that find a relative risk in the same range.

Such quote collections are a favorite tactic of cranks. Usually, they consist of quotes taken out of context to support the crank's position, cherry-picked from sources that are hard to track down. Such lists then propagate far and wide across the Internet by e-mail and on blogs and websites. Sometimes they mutate along the way, with the addition of more quotes or the editing of the quotes that are there. The above list qualifies, because most of the quotes are very difficult to track down to find the context in which they were made, and they have the whiff of being cherry-picked to form just such a crank list. In this case, the mistake made was that two of the quotes were very easy to track down online by anyone with access to institutional subscriptions to Science and JAMA. It took me less than 10 minutes to show that the quotes by Dr. Temple, particularly the one about the unreliability of relative risks under 2, were grossly taken out of context and even downright deceptive. It wouldn't surprise me if the others are, too, although I can't yet prove it. Even if they aren't, they remain misleading because they discuss the problems with interpreting relative risks under 2 in single studies, not the more relevant case of when many studies suggest a relative risk of under 2, as is the case with studies on the health risks of SHS.

If anyone can provide the full text and context of any of the remaining quotes to me so that I can examine them in detail, I'd be grateful. In particular, those of you who post and repost those quotes, please send me the context if you have it. (I'm guessing that none of you probably do.) In the meantime, I will keep looking. I hope my library has the book by Sir Richard Doll from which one of the quotes is purported to come.


It's not about the absolute RR; it's about the power of the study and the number of times it's been replicated. What a BS argument, and of course, it would take a quote-mine to justify something so silly.

Classic crankery. I might make fun of them too when I get a chance.

The title of the Breslow-Day book gives a big hint about the context. They are discussing case-control studies, where you start with the outcome (e.g., cancer vs. no cancer) and attempt to go back into historical records and recollections to figure out the exposure history, possibly matching on reports of known risk factors.

There are multiple concerns when constructing these studies, including selection biases (from using at-hand cases and controls) and recall bias/incomplete records. Since these are often used when the aetiology is not well understood, you have to worry about unmeasured/unknown confounders, too.

Sorry Orac, you're drawing completely the wrong moral from the Avandia story. As an editorial in Nature Clinical Practice put it:
"The NEJM paper suffers from several serious limitations, three of which deserve specific attention. First, the 42 trials included did not have the same protocol. Patients excluded in one trial could have been included in the next. Some trials compared Avandia with placebo; others tested it against an active comparator. Furthermore, the trials were of different durations and tested different dosing regimens. Second, this analysis used published data only. This lack of first-hand source data means outcomes could not be verified, double-checked or examined closely. Third, as admitted in the meta-analysis, the conclusions are made on the basis of a small number of events 'that could be affected by small changes in the classification of events'.Many studies included were designed to assess end points other than cardiovascular disease."

As Professor Brian Strom (Chair and Professor of Biostatistics and Epidemiology and Professor of Medicine and Pharmacology at the University of Pennsylvania School of Medicine) put it (in a submission to the House Committee on Oversight and Government Reform): most of the studies in the analysis were not published (and therefore were not peer-reviewed).
This business is a disgrace: the NEJM should not have published this paper.

So Orac: why are you so gullible about this? Why did you not apply the same fierce scepticism to the Avandia case as to disbelievers in the bad effects of SHS?

By Paul Power (not verified) on 17 Jul 2007 #permalink

James Repace made the same point when arguing for a smoking ban before the St. Louis County Council in 2005: that it is the combined weight of all ETS studies that makes one sure that secondhand smoke causes lung cancer and heart disease in nonsmokers. But as a mere bar patron questioning whether his loss of liberty due to smoking bans has to happen, I remain skeptical. People lie about their smoking histories, and the failure of most studies to control for a factor as relevant as urban residency makes me suspicious of the whole collection. The way Dr. Enstrom was treated by Stanton Glantz and the American Cancer Society makes me wonder how many other ETS studies that came to politically incorrect conclusions never were published. I was more impressed by David Kuneman's criticisms of ETS science made on behalf of the Missouri Restaurant Association to the St. Louis County Council than I was with James Repace's original arguments at the same hearing.

http://kuneman.smokersclub.com/EMRS.html
http://kuneman.smokersclub.com/urban.html

By Bill Hannegan (not verified) on 17 Jul 2007 #permalink

Why do people even bother quote mining these days? Those darn interwebs messin up everyone's misinformation campaigns.

Good job taking these apart.

Patrick, there's an old saying that a lie can get halfway around the world while the truth is still putting its boots on. That's what the quote-miners are working at - strenuously putting out crap so that it's in more people's heads. Note that the tobacco people have been playing this game for over 40 years.

Barry, Stanton Glantz is better than Big Tobacco ever was at putting out the crap. The Helena heart attack study helped push the Chicago smoking ban through, and now even Dr. Michael Siegel says Helena was bogus:

"I think it is high time that my fellow tobacco control researchers and practitioners recognize that the Helena et al. studies are examples of shoddy science that apparently now passes as acceptable in tobacco control research. While I support workplace smoking bans, I do not believe that we should be using shoddy science to promote them."

http://tobaccoanalysis.blogspot.com/

By Bill Hannegan (not verified) on 17 Jul 2007 #permalink

When discussing food issues, they actually invoke the pharma shill argument. I can't believe that, with smoking being dangerous, secondhand smoke would be completely safe. However, I am against smoking bans; I think it should be up to the establishment to allow or prohibit smoking. I understand and agree with banning smoking in restaurants, since I personally prefer not to smell stale cigarette smoke while I'm eating, but I think bars are a different story. If you don't smoke and don't want to inhale smoke, then go work at a different bar. Same goes for patrons. Why should us evil smokers be punished more than we already are? Cigs are taxed to the hilt, and whatever money we cost in medical care is more than paid for by the taxes; if the politicians misuse the money, that's not our fault. Also, last I checked, non-smokers die as well, often in just as costly and nasty ways. They may die later than smokers, consequently paying more money into the system before taking it out at the end.
I do not agree with Forces.org about SHS; they are freaking nuts, and they most certainly quote mined.
As far as air quality goes, stand downwind from a "natural" person (apparently one of the descriptors for those who avoid deodorant and bathing), then talk to me about air quality. The government doesn't mandate that people bathe, use deodorant, or practice other personal hygiene. Given that only about 20% of people in the Northeast smoke, financially speaking some places should ban smoking, but not by state mandate.

Make of it what you will.

http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10…

Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies

Richard Smith

Competing Interests: RS was an editor for the BMJ for 25 years. For the last 13 of those years, he was the editor and chief executive of the BMJ Publishing Group, responsible for the profits of not only the BMJ but of the whole group, which published some 25 other journals. He stepped down in July 2004. He is now a member of the board of the Public Library of Science, a position for which he is not paid.

"Journals have devolved into information laundering operations for the pharmaceutical industry", wrote Richard Horton, editor of the Lancet, in March 2004 [1]. In the same year, Marcia Angell, former editor of the New England Journal of Medicine, lambasted the industry for becoming "primarily a marketing machine" and co-opting "every institution that might stand in its way" [2]. Medical journals were conspicuously absent from her list of co-opted institutions, but she and Horton are not the only editors who have become increasingly queasy about the power and influence of the industry. Jerry Kassirer, another former editor of the New England Journal of Medicine, argues that the industry has deflected the moral compasses of many physicians [3], and the editors of PLoS Medicine have declared that they will not become "part of the cycle of dependency...between journals and the pharmaceutical industry" [4]. Something is clearly up."

The implication that somehow pharmaceutical companies are behind the campaign against smoking and SHS has to be one of the all-time stupidest arguments I've ever heard against indoor smoking bans and the campaign against smoking in general.

Really.

Think about it for a minute. If anything, that argument goes the other way. After all, what diseases do drug companies make huge amounts of money selling drugs to treat? Cardiovascular disease and cancer, naturally, because they're so common. If anything, big pharma would benefit more if more people smoked and if no indoor smoking bans were promulgated, because then there would probably be more people requiring medication for heart disease and chemotherapy for lung cancer. Heck, given that cardiovascular disease can frequently lead to erectile dysfunction, it wouldn't be unreasonable to predict that more smokers and more SHS would lead to more men needing Viagra, Cialis, and those other highly profitable ED drugs.

That's arguable. (Seriously.) I guess we'd have to look at the books to know. It seems reasonable to me, however, that they would make the largest amounts on lifetime-dependency drugs (such as nicotine replacements and anti-depressants) rather than end-of-life treatments (which we'll all need at some point). The anti-depressant market (now clearly being targeted at ex-smokers) is huge.

Since they get the end-of-life market either way -- I can see them reaching for the lifetime dependency for now.

It is eerie to me how every smoking ban is accompanied by a government hand-out of nicotine replacement products, although their effectiveness is highly questionable. (They're now recommended for children as young as 12 in Scotland.) Let's get people hooked on something, huh? (Yeah, I know -- it's supposed to HELP...) I fail to see why pharmaceutical companies are more trustworthy than tobacco companies.

... every smoking ban is accompanied by a government hand-out of nicotine replacement products.

Eh? I could easily be mistaken, but I have no recollection of this happening in Ireland, in the parts of the UK that now have bans (that is, Scotland and England (and Wales?)), or that it will happen here in France (when the ban comes into (full) effect). I suppose it's possible the UK and French health services might be doing something like this--but if so, since those are health services, I'd expect (assume) it'd be part of a serious, monitored program to reduce the person's addiction until they are no longer dependent on the drug.

Bif, I was referring to my area of the US. I'm not as familiar with NHS policies and such (and what would constitute a "hand-out" in that case) -- perhaps someone else could comment on that.

My comment about Scotland is linked here:

http://www.theherald.co.uk/news/news/display.var.1387496.0.0.php

You can decide if it's part of a serious monitored program. I honestly don't want to argue -- just wanted to present some ideas, thoughts and information for everyone's consideration.

It's peculiar how no one, in particular Orac, has responded to my comments on the Avandia affair.

After all, this business shows clearly that neither Orac nor the NEJM (the most prestigious medical journal in the world) can reliably determine whether an epidemiological paper is good or bad, even on a hitherto uncontentious matter. (I should add here that the NEJM had an accompanying editorial on Avandia that went further than the discredited paper itself.) And if this is so, then how can we trust anything printed on such a contentious issue as the alleged effects of SHS?

By Paul Power (not verified) on 17 Jul 2007 #permalink

GDF -

Nicotine replacement products are not 'lifetime-dependency' products. And given that generics are freely available, I seriously doubt that they are high-margin products either. And before you claim differently: I used NRT products to give up, and I no longer use them.

The NHS does give NRT products on prescription. This is despite the fact that the government gets a lot of tax revenue from smoking and saves several years' pension payments on every smoker. Sometimes people are motivated by things other than money.

I oppose the smoking ban on general civil liberties grounds - plus it stinks of people who visit a pub once a month or so imposing their views on the pub regulars. But the effects of SHS appear real.

By Andrew Dodds (not verified) on 17 Jul 2007 #permalink

Actually, I mentioned Avandia on purpose, because I figured someone like Paul would zero in on it as the perceived weakest example and ignore all the others. He didn't disappoint! He completely neglected the other examples of relative risks under 2 that we act on. I could add some more, such as most risks associated with diet. The only thing that surprised me is that Paul was the only one to go on a rant against the Avandia study in service of speciously supporting the contention that relative risks less than 2 are not important.

I've discussed the Avandia study before. It's a flawed study, but not as bad as its critics would lead you to believe. In retrospect, having read it and the comments of readers who took me to task for some of my criticisms, it tends to be more believable than I first thought now that I know that there were other indications and data in the pre-approval studies that Avandia was associated with cardiovascular side effects.

"He completely neglected the other examples of relative risks under 2 that we act on" and "Paul was the only one to go on a rant against the Avandia study in service of speciously supporting the contention that relative risks less than 2 are not important.." No I did not. I made no comment on or related to the size of the risk. You commended a fatally-flawed study. I attacked it because of its flaws, not because of the size of the alleged risk. I did not even mention other flaws, such as the financial interest of one author in the only alternative to Avandia or that some years ago he denounced a paper that did exactly what he is now doing for the very flaws that have been pointed out in his new paper.
Or how about this beauty from the paper itself: "results are based on a relatively small number of events, resulting in odds ratios that could be affected by small changes in the classification of events" preceded by "95% confidence interval [CI], 1.03 to 1.98" for the "summary odds ratio for myocardial infarction ". Since an odds ratio of 1 is zero effect, that the CI lower bound was so close to 1 suggests there is no effect given the instability in the "classification of events". And looking at where that CI comes from gives the following:
Combined small trials CI: 0.88 - 2.39
DREAM: 0.74 - 3.68
ADOPT: 0.80 - 2.21
(http://content.nejm.org/cgi/content/full/356/24/2457/T4).
All these ranges include 1.0, the measure of zero effect.
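For readers curious how three intervals that each include 1.0 can nonetheless combine into the paper's summary interval of 1.03 to 1.98, here is a minimal sketch. It backs out each group's log odds ratio and standard error from the CI endpoints quoted above (assuming the intervals are symmetric on the log scale) and pools them with fixed-effect inverse-variance weights; the NEJM authors actually used a Peto fixed-effects method over all the trials, so this is only an approximation.

```python
# Minimal sketch: recover log(OR) and SE from the quoted 95% CI endpoints
# (assuming symmetry on the log scale), then pool with inverse-variance
# weights. This approximates, not reproduces, the paper's Peto method.
import math

cis = {
    "combined small trials": (0.88, 2.39),
    "DREAM": (0.74, 3.68),
    "ADOPT": (0.80, 2.21),
}

log_ors, weights = [], []
for lo, hi in cis.values():
    log_ors.append((math.log(lo) + math.log(hi)) / 2)   # implied point estimate
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)     # implied standard error
    weights.append(1 / se**2)

pooled = sum(w*x for w, x in zip(weights, log_ors)) / sum(weights)
se_p = math.sqrt(1 / sum(weights))
print(f"pooled OR ~ {math.exp(pooled):.2f}, 95% CI "
      f"({math.exp(pooled - 1.96*se_p):.2f}, {math.exp(pooled + 1.96*se_p):.2f})")
# Prints roughly: pooled OR ~ 1.43, 95% CI (1.03, 1.98) -- each component
# interval crosses 1.0, yet the pooled interval (barely) does not.
```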

" [I]t tends to be more believable than I first thought now that I know that there were other indications and data in the pre-approval studies that Avandia was associated with cardiovascular side effects." That's dreadful nonsense. It's exactly the same as saying that a particular instance of a logical fallacy is ok because, in spite of the bad reasoning, the conclusion happens (by accident) to be true. The paper is garbage regardless of whether its conclusion is correct. Science is as much the method as the conclusion, if not more so given that all results are liable to be overthrown by later discoveries and better theories. This paper is therefore very bad science no matter what the truth about Avandia.

The link to the debate on SHS is this: epidemiology is not reliable enough to decide such questions. There is too much room for creativity in how epidemiology is done for basic scientific standards to apply. These standards are that there be universally agreed methods and replicability. As we can see in this case, different researchers would treat the same data differently. They would select the data for the work in different ways (this paper excluded studies showing no heart attacks, worked mostly on unpublished data, and in some cases on the data summaries rather than the raw data). They would not apply the same techniques in the same way, thereby avoiding replicability.
That's why epidemiology is forever producing contradictory results. In a scientific field that is unacceptable. In a field such as physics, researchers would discuss little else until the problem was fixed. But epidemiology goes on its merry self-contradictory way.

By Paul Power (not verified) on 18 Jul 2007 #permalink

Physics? Oh, please, give me a frikkin' break.

Comparing epidemiology to physics in this context is just plain specious and stupid, as are your aspersions on epidemiology. Physics has the advantage of having a level of control over experimental design and parameters that can never be achieved in epidemiology--or even in the gold standard of medical studies, double-blind prospective randomized clinical trials. That is why in some ways epidemiology is harder than physics, because controlling adequately for what is outside the investigators' control is so complex. Moreover, you're just plain full of it when you claim that epidemiologists don't talk about confounding factors and problems with studies. Apparently you've never been to medical conferences where such studies are presented. That's all they talk about.

I also note that you once again ignored all the other examples of relative risks less than 2 that are considered relevant in medicine. The plain fact is that it's a load of B.S. to claim that relative risks less than 2 are automatically irrelevant and not important.

Your statement, however, that "epidemiology is not reliable enough to decide such questions" is very revealing. Indeed, it's "dreadful nonsense," to borrow your term. Besides being just plain incorrect, it strongly suggests that you don't really want the question ever to be answered. In fact, it's exactly the same gambit that antivaccination cranks like to use when saying that epidemiological evidence can't show that mercury in vaccines doesn't cause autism or that vaccination itself doesn't cause autism.

Of course, even antivaccination loons (well, most of them, anyway) aren't so stupid as not to realize that a prospective randomized trial looking at mercury in vaccines is not practical or ethical. They know it, and know it will never be done, which means we have to rely on epidemiology to answer the question. They attack the epidemiology as being unable to definitively answer the question, the implication being that we can never know for sure whether vaccines cause autism. Similarly, I suspect that you also know that it would be impractical and unethical to do any sort of definitive prospective randomized trial looking at the question of secondhand smoke. Consequently you attack the epidemiology as "inadequate" to answer the question, knowing that it's the only science that can realistically and practically ever be applied to the question in humans. The implication, of course, is that if epidemiology can't answer the question then the question can never be definitively answered and we'll never know whether SHS causes health problems--and therefore we shouldn't do anything to ban SHS.

Nice try, but it won't fly.

Personally, my sense of the risks of SHS is that it's just hard to believe one can separate out the pure SHS exposure from all the crap we inhale all the time that's simply out there in the atmosphere. Look at the newspaper air quality reports - pollen, molds, "particulates" - and that's just the stuff that can be measured. It would be nice for someone to do some sort of study of the actual physical content of smoke, cigarette and otherwise, to explain which components trigger the diseases that are alleged to result.

Having said that, I have no sympathy for smokers. As far as I'm concerned, it's social payback time. After growing up irritated, nauseated, and disgusted by cigarette smoke and being told, "If you don't like it get out," now when I hear smokers complain about the "right" to smoke someplace, I've got the ready answer.

Personally, my sense of the risks of SHS is that it's just hard to believe one can separate out the pure SHS exposure from all the crap we inhale all the time that's simply out there in the atmosphere. Look at the newspaper air quality reports - pollen, molds, "particulates" - and that's just the stuff that can be measured.

I'm rather surprised that you'd use a rather obvious argument from incredulity. Do you accept such arguments from creationists? I didn't think so, and neither do I accept that argument from you. I'm sorry if I'm being a bit harsh, but I was surprised. I wouldn't have expected this sort of argument from you.

Besides, there are good biomarkers for exposure to SHS. Cotinine, for example, is a metabolite of nicotine and can be readily detected in the blood and urine of people exposed to smoke, be it from smoking cigarettes or from SHS. Its level in the blood is proportional to tobacco smoke exposure. Several of these studies verify and/or estimate SHS exposure by measuring cotinine.

And jumpin' Jesus on a pogo stick, a whole chapter of the Surgeon General's report is dedicated to the evidence about the contents of smoke and a large part of Chapter 3 is devoted to assessing the use of cotinine as a biomarker for smoke exposure.

Orac - you were sooo right about the number and type of 'skeptics' who replied to your first article on SHS, and frankly, I'm beginning to see the same pattern here as well.

What I find amazing is the number of posters perfectly happy to argue that SHS isn't harmful (and citing 'studies' to prove it), when even they have to admit that the evidence for cigarette smoke being dangerous is overwhelming. If it's harming you when you choose to inhale it, what do you think it does to the person sitting next to you? Does it disappear? Does it only affect the person who paid for the ciggies? Does it vanish into another dimension? No - it hangs around until some other poor sod breathes it in. To argue anything else is pure denial. Now you could argue that it isn't as harmful - but to say it's a myth, etc., is simply crap.

As for the 'rights' of smokers, perhaps we should talk about the 'rights' of crack addicts, etc.? You are doing something which results in me and my family breathing in your crap, and possibly giving me cancer, etc. I don't have to put up with that, especially since you are in a minority.

If you want to smoke, that's up to you. But don't delude yourselves that it's about civil rights or that you are not causing harm.

Even better, you could just give up. You'd live longer, be healthier, smell better, and save money. You'd also help your family. I spent far too long talking to the families of people who have died from cancer - it's no fun at all. No one needs to smoke, so why bother?

Cognitive dissonance is a wonderful thing. People who can be completely rational when it comes to so many other controversies become quote-mining fallacy spewers when it comes to their pet idea. I'm sure that the majority of people who believe that SHS is not harmful are either a.) smokers or b.) politically motivated.

By anonimouse (not verified) on 18 Jul 2007 #permalink

anonimouse, smoke from any source can be harmful if you breathe enough of it. One bar owner told me that when he first bought his bar, the smoke was so thick on a busy night you could not see to the end of the bar. Who would doubt that breathing that bar air every day could eventually hurt the bartender? But the owner then installed a powerful ventilation system that cleared the air so well I could hardly tell patrons were smoking when I visited last year. I do doubt that this purified bar air was so dangerous that the local municipality was obligated to ban smoking at the bar to protect the employees. But the municipality went ahead and banned all public smoking, including at this bar. The staff hated the ban so much they all quit.

By Bill Hannegan (not verified) on 18 Jul 2007 #permalink

I would like to address the issue of RR >= 2 and the concept of Confidence Intervals. This is going to be really long, but it needs to be in order to address it properly.

I'll take CIs first because they lead into the RR argument. All that a 95% CI tells you is that the numerical findings of the study are only 5% or less likely to have occurred by PURE RANDOM CHANCE. It says NOTHING AT ALL about causality.

A classic example of this is the supposed high correlation between bubble gum chewing in one city and the subsequent rise in juvenile crime in a similar city five years later and hundreds of miles away. You could do a study and find quite a high RR and a very nice 95% CI. But does this mean that bubblegum chewing in Toronto causes juveniles to commit crimes five years later in New York? Of course not: the two measures are simply indicators of a bubble in age grouping: as those of an age to be bubblegum chewers grow up a bit, a certain percentage of them become juvenile criminals.

A solid 95% CI, say 1.9 to 2.1, indicates strongly that there's a real doubling of "risk" of an effect in the presence of a preceding factor. A squishy 95% CI, say 1.1 to 2.9, also indicates a doubling, but it's much softer. The first CI would still easily hold up if you asked it for 99% certainty, while the second would simply disappear if you asked it for 99%. But again, and most importantly, in EITHER case ALL you are talking about is whether the results are likely to have occurred by chance; they have nothing to do with causality; they are a MINIMUM scientific standard.
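That 95%-versus-99% behavior is easy to check. A minimal sketch, assuming each interval is symmetric on the log scale:

```python
# Minimal sketch: widen a 95% CI for a relative risk to a 99% CI,
# assuming the interval is symmetric on the log scale.
import math

def widen_to_99(lo95, hi95):
    mid = (math.log(lo95) + math.log(hi95)) / 2        # log point estimate
    se = (math.log(hi95) - math.log(lo95)) / (2 * 1.96)
    return math.exp(mid - 2.576*se), math.exp(mid + 2.576*se)

for lo, hi in [(1.9, 2.1), (1.1, 2.9)]:
    lo99, hi99 = widen_to_99(lo, hi)
    print(f"95% CI ({lo}, {hi}) -> 99% CI ({lo99:.2f}, {hi99:.2f})")
# The tight interval stays well above 1.0 (about 1.87 to 2.14); the
# squishy one widens to about (0.94, 3.38), crossing 1.0 ("no effect").
```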

Now... onward to RRs of 2 or 3. Epidemiology properly recognizes that it is not a "laboratory science." There are always going to be lots of factors beyond control, there will always be factors no one has thought of measuring/analyzing (Hey, do smokers take more showers to get rid of their smoke smell? Does that mean they breathe in more water droplets filled with nasty cancer-causing asbestos? Has anyone ever done a study on THAT?), and there will, at least sometimes, be epidemiological studies that are skewed by either overt dishonesty or by subliminal experimenter bias.

Since a lot of those studies use data that can't be easily checked by the public or by peer-reviewers, dishonesty has to be considered a major risk when money, grants, or academic prestige are involved. That risk must rise even more when the dangerous concept of "idealism" enters the ring.

There used to be a good number of scientists willing to skirt their ethics for the pay of tobacco companies. How many more antismoking scientists must there be who'll similarly skirt their ethics, not just because of the lure of money but ALSO because they believe they're "doing the right thing" and that coming up with the "proper" results that support smoking bans will be for the ultimate benefit of humanity?

Asking for a high RR helps safeguard against these types of things. Fudging numbers in a big way is harder than fudging them in a small way. If money is involved AND prestige AND idealism... then the watchdogs need to be even sharper than usual.

Can small RRs mean something? Yes. Here's an example: if you're in Madison Square Garden 100 feet away from a random nut who pulls out a gun and just fires it in a random direction, your RR of getting hit is going to be slightly higher than that of someone who's standing 110 feet away. That's reality. All factors are known as stated. Just because the absolute risks are very small and the RR might be only something like 1.05 if you're 100 feet away compared to the 110-feet guy, it's still real.

But with secondary smoke we're not dealing with such a cut and dried thing: there are TONS of other factors and there's a LOT of experimenter bias and idealism and money and academic/professional positioning and catfighting coming into the picture. And that provides PLENTY of reason to throw an RR of 1.19 for a disease that strikes only a couple of people out of a thousand into a cocked hat.

The study done by Dave Kuneman and myself used data 100% accessible to the peer reviewers and to the general public. No fault has been found in it, nor has any real fault been found in its analysis. The study should have been published by at least one of the three responsible journals to which it was submitted. The fact that it wasn't amounts to a form of "passive fraud" that gives a dangerous indication of just how biased the "mountain of studies" used by antismoking lobbyists actually is.

Smoking bans are bad laws based upon lies. And a law based upon lies is no law at all.

Michael J. McFadden
Author of Dissecting Antismokers' Brains
http://pasan.TheTruthIsALie.com

Yet again, Orac, you fail to answer my points about the Avandia study. Why? I still have said nothing about "small" risks. Nothing. Not one word. Kindly stop claiming otherwise.
Where did you get this from: "Moreover, you're just plain full of it when you claim that epidemiologists don't talk about confounding factors and problems with studies"? Not only did I not claim this; the flaws I mention were discovered by epidemiologists. I ask you to withdraw this slur now.
And then you write: 'Your statement, however, that "epidemiology is not reliable enough to decide such questions" is very revealing. Indeed, it's "dreadful nonsense," to borrow your term.' Again, I wrote no such thing. I did not refer to epidemiology as nonsense. The nonsense is the paper. I ask you to withdraw this slur now.

Your latest response implies you stand by this study.
Under a post on the wrongness of cherry-picking, you defend a study that does just that. You might like to ponder this.
Your whole attitude here is completely different to what you've demonstrated over the many months I've been reading your site.
By defending this paper you've given permission to everyone to do the same thing. Imagine the glee of the likes of the Geiers at being able to produce a paper on the link between some vaccine and some form of ill-health with these characteristics:

- uses unpublished data
- uses studies that were not meant to measure rates of that form of ill-health
- ignores all studies that show no incidences of the same form of ill-health
- depends on counts of incidences that even it admits are not reliable
- uses summary data rather than raw data from some studies
- gets no confidence interval showing an effect until all the studies are lumped together
- even then gets a confidence interval that is extremely close to including the "zero effect" value, which, in conjunction with the unreliability of the counts of incidences, renders the finding extremely doubtful.

You would reject out of hand a paper from the Geiers with only some of these failings, and you'd be right.

Tell me, Orac, what failings would this paper have to have for you or the NEJM referees to reject it?

Either this paper is bad epidemiology, as other epidemiologists have claimed, or it's good epidemiology. In the former case epidemiology is saved but your credibility is damaged. In the latter case epidemiology is pseudoscientific bunkum. I'm going with the former.

By Paul Power (not verified) on 18 Jul 2007 #permalink

"I'll take CI's first because they lead into the RR argument. All that a 95% CI interval tells you is that the numerical findings of the study are only 5% or less likely to have occurred by PURE RANDOM CHANCE. It says NOTHING AT ALL about causality."

That is true for a simple correlation. But is that all this study did? I do not have access to the whole study, so others will have to speak up.

The problem you are addressing is that, with a simple correlation, there may be multiple possible explanations or causes for the relationship you see, in this case between SHS and smoking-related illness in non-smokers. If you perform only a correlation analysis, this will only confirm that there is a pattern there, but no causation, because there are still those other possible explanations to consider.

So usually a researcher will control for those other factors, often things like age, sex, genetic history, etc., all things that could possibly explain the pattern. Once those possible explanations have been controlled for in the analysis, they have been removed from the equation, and you should be left with your one focus explanation, in this case that the onset of smoking-related illnesses in non-smokers is affected by SHS. Of course, this is still technically a correlation, as you cannot assume that you have thought of all possible explanations and controlled for them. But if you have controlled for all you CAN think of, and you still come out with a significant result, then you may be justified in concluding that there is likely, to a significant degree, a causal relationship in the pattern between the two factors.

As a researcher, you should know this. Why aren't you telling the whole story?
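To make the preceding point about "controlling for" a factor concrete, here is a minimal sketch using invented counts and a hypothetical confounder (urban versus rural residency, a factor raised elsewhere in this thread). Stratifying with a Mantel-Haenszel odds ratio is one of the simplest adjustment methods; regression models generalize the same idea. In these made-up numbers the confounder is tied to both exposure and disease, so the crude association evaporates once it is controlled for; with different numbers, adjustment can just as easily leave an association intact.

```python
# Minimal sketch (invented counts): crude vs. confounder-adjusted odds
# ratios. Each 2x2 stratum is
# (exposed cases, exposed controls, unexposed cases, unexposed controls);
# "urban"/"rural" is the hypothetical confounder.
strata = {
    "urban": (80, 320, 20, 80),
    "rural": (10, 190, 40, 760),
}

# Crude odds ratio from the collapsed (unstratified) table.
A, B, C, D = (sum(s[i] for s in strata.values()) for i in range(4))
print(f"crude OR: {(A*D) / (B*C):.2f}")             # ~2.47, looks like an effect

# Mantel-Haenszel odds ratio: pools the within-stratum comparisons.
num = sum(a*d / (a+b+c+d) for a, b, c, d in strata.values())
den = sum(b*c / (a+b+c+d) for a, b, c, d in strata.values())
print(f"OR adjusted for residency: {num/den:.2f}")  # 1.00, the effect vanishes
```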

Yet again, Orac, you fail to answer my points about the Avandia study. Why? I still have said nothing about "small" risks. Nothing. Not one word. Kindly stop claiming otherwise.

Then what was your point in saying that "epidemiology is not reliable enough" to answer the question about the health risks of SHS? Surely you don't think the risks are large.

Where did you get this from: "Moreover, you're just plain full of it when you claim that epidemiologists don't talk about confounding factors and problems with studies"? Not only did I not claim this; the...

Then what was your point in comparing epidemiology to physics, stating, "As we can see in this case, different researchers would treat the same data differently. They would select the data for the work in different ways (this paper excluded studies showing no heart attacks, worked mostly on unpublished data, and in some cases on the data summaries rather than the raw data). They would not apply the same techniques in the same way, thereby avoiding replicability. That's why epidemiology is forever producing contradictory results. In a scientific field that is unacceptable"?

The obvious implication is that you think that epidemiologists, in apparent contrast to physicists, don't talk about problems such as confounding factors, differences in technique, data selection, replicability, etc., which is a load of B.S.

The study done by Dave Kuneman and myself used data 100% accessible to the peer reviewers and to the general public. No fault has been found in it, nor has any real fault been found in its analysis.

So you say. We have no way of knowing if you are correct about this or not. After all, in your own ACSH article you stated that Tobacco Control did find fault with it; you simply didn't agree with the reviews, stating, "... the overall tone of the reviews was such that it is highly unlikely that any of them came from our recommended list. Indeed, they seemed more like reviews written by those we had noted as specifically not suitable, as they contained many points that were quite negative but seemingly groundless or irrelevant."

So, what did the reviews say? Inquiring minds want to know! I would also point out that, even if you are correct, no journal is obligated to use your "recommended" list.

I am, however, grateful for a decent explanation of why this "relative risk less than 2 is bogus" canard being thrown around is, in fact, a canard.

In response to your post about quote mining: some of the quotes came from this article originally published in Science:

http://www.nasw.org/awards/1996/96Taubesarticle.htm

and I believe you or MarkH have this in your newbie section:
"Why Most Published Research Findings Are False," found at:

http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10…

I am not quote mining here, and both articles were published in respectable journals. My only point is that both articles suggest that there is not a consensus as to whether small RRs are meaningful or not. I would suggest that the answer is still open, and a hint is made of a scientific bias against publishing negative findings. My two cents.

When it comes to relative risk

"two (better yet, three or four) is the magic number"

comes from:

http://www.stats.org/in_depth/evaluate_healthrisks/health_risks_page5.h…

Also says small RRs are not worth worrying about unless there are more studies. Their table of contents:

http://www.stats.org/in_depth/evaluate_healthrisks/How_eval_health_risk…

Also has a section about publication bias. I would point out that publication bias is more likely to occur with politically charged topics; secondhand smoke and global warming come to mind. Anytime "scientific consensus" is invoked, there is a good chance that a vocal few are the consensus.

In case you hadn't noticed, I cited in my post above that very Science article from which the first Dr. Temple quote came. I even quoted a passage from it. I didn't, however, point out how between 1995 and 1999 Dr. Temple apparently changed his mind from saying that any RR under 3 or 4 is crap to the position that we should be suspicious of RRs under 2. Perhaps I should have. In either case, my discussion of his other quote, which was clearly taken out of context from the JAMA article and letter, stands. That quote was clearly quote-mined and presented in a most deceptive way in the list of quotes above. Just read the whole article and the letter, and it is obvious. Both, I believe, are now freely available.

As for the second article you mention, although I didn't blog about that article specifically, I did blog about another very similar article by Dr. Ioannidis on two occasions.

Orac, my point is that the verdict on RRs under 2 is not in yet, and I am one of the skeptics. While you have shown that the quotes may have been taken out of context, my links, I believe, showed that people skeptical of RRs less than 2, 3, or 4 are not entirely incorrect. If you haven't already visited Dr. Siegel's blog to see how medical science is being misused to achieve a political agenda, I would encourage you to do so. He started his blog because he feels the tobacco control groups are causing medical science to lose credibility due to their misrepresenting research findings.
His blog can be found at:

http://tobaccoanalysis.blogspot.com/

It will also answer why you are getting so many hits from what you consider cranks. Science and politics do not mix.

Orac writes: "If anything, that argument goes the other way. After all, what diseases do drug companies make huge amounts of money selling drugs to treat?"

Well actually...

The Robert Wood Johnson Foundation (a big contributor to the anti-smoking lobby) is connected to Johnson and Johnson. There was a study suggesting that smoking actually SAVES medical costs, since smokers tend to die before they reach the nursing homes.

Since Johnson and Johnson makes most of their revenue selling things like gauze and incontinence pads, there is a very good chance their anti-smoking campaign IS profit oriented.

Orac:

1) You did not answer most of my points. In particular, I asked you to withdraw two "slurs". Are you doing so?

2) To answer the points you made:

a) I claim that epidemiology is not reliable enough to handle situations where the CI has values close to "zero effect" and where there is doubt over the validity of all the numbers. This is so whether the alleged effect is big or not. One study that annoyed me in this way had a CI ranging between about 200% of the headline figure and 9% of it (with a very small sample). The headline figure was around 7 times previous best estimates. This particular paper was published in the Lancet.
I also claim that when the result has been arrived at by adjusting the raw numbers to allow for confounding factors, then if the confounding factors are orders of magnitude bigger than the result, you can reject the result out of hand unless the raw numbers are very big; otherwise the result may be solely down to the mathematical adjustments being wrong, unluckily for those doing the study.
b) Your "obvious conclusion" is not valid. You should have taken more care reading my posts. Epidemiologists are the leading critics of how their field is practised, BUT TO NO AVAIL, which is where I am going with this. Consider the Avandia study again. The paper shows no statistically significant link to excess deaths (CI 0.98 to 2.74 total). You have criticised many people for not paying due regard to the tentativeness of some results in papers they do not like. '"May" be associated with an effect on morbidity from heart disease? That's hardly a strong conclusion...' as you put it in the Mt. Helena paper case. I agree. But here you are defending a paper whose very tentative results have led to denunciations of the FDA by the NEJM and members of Congress, and to claims to the general public that they could be killed by Avandia on the authority of this paper, which shows no such thing. (Reuters: "Another study has found that diabetes drugs intended to help patients live longer, healthier lives may in fact increase the chances they will die -- this time of heart attacks and other causes...")

And you still stand by this study, even after I have invoked the spectre of what the likes of the Geiers could do with similar chicanery. You've offered invalid arguments (like claiming the study is OK because of other information available) and have not answered any of the points I made against the paper, all of which come from epidemiologists. Instead you have counter-attacked with untrue accusations. This might play as good rhetoric, but it's bad science. You know that as a good Popperian you have to defend your argument at its weakest, which means defending this paper at its weakest. Because of this paper, people are withdrawing from clinical trials on Avandia. Is this what you would want for something you were researching?

By Paul Power (not verified) on 21 Jul 2007 #permalink

One thing I must mention, in relation to the general argument. If you look at the Wiki page on "Statistical Significance" you will find links to articles showing a profound problem in this area. Apparently, one error very common in research is to confuse the p-value with the Type I error rate (false rejection of the null hypothesis). The paper at http://ftp.isds.duke.edu/WorkingPapers/03-26.pdf shows this error in many textbooks on statistics for non-mathematicians, and indeed in some for mathematicians. As the authors put it, 'A consequence of this is the number of "statistically significant effects" later found to be negligible, to the embarrassment of the statistical community.'
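The distinction matters in practice. Here is a back-of-the-envelope simulation in Python (the share of true hypotheses and the statistical power are illustrative assumptions, not figures from the Duke paper) showing that a 5% significance threshold does not mean only 5% of "significant" findings are flukes:

```python
# Sketch: alpha = 0.05 caps the error rate per true-null test, but says
# nothing about what fraction of SIGNIFICANT results are false positives.
import random

random.seed(1)
N_STUDIES = 100_000
P_TRUE = 0.10   # assume only 10% of tested hypotheses are genuinely non-null
POWER = 0.50    # assume 50% power to detect a real effect
ALPHA = 0.05

true_pos = false_pos = 0
for _ in range(N_STUDIES):
    if random.random() < P_TRUE:                 # a genuinely real effect
        true_pos += random.random() < POWER
    else:                                        # a null effect
        false_pos += random.random() < ALPHA

sig = true_pos + false_pos
print(f"'Significant' results: {sig}; share that are flukes: {false_pos / sig:.0%}")
# Under these assumptions nearly half of the "significant" findings are false
# positives, even though every single test honestly used alpha = 0.05.
```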

Another article linked to is a Wiki page on the Bayes factor, where a possible solution is offered. Again (http://www.cs.ucsd.edu/users/goguen/courses/275f00/stat.html) we read the criticism: "In 1986, .. Prof Kenneth Rothman, of the University of Massachusetts, editor of the respected American Journal of Public Health, made a bold stand and told all researchers wanting to publish in the journal that he would no longer accept results based on P-values", because P-values were "startlingly prone" to attribute significance to fluke results, according to a team led by Prof Leonard Savage in the 1960s. Indeed: "In 1995, The British Psychological Society and its counterpart in America quietly set up a working party to consider introducing a ban on P-values in its journals. The following year, the working party was disbanded - having made no decision. 'The view was that it would cause too much upheaval for the journals,' said one senior figure."

Whatever the validity of the proposed solution, we have to acknowledge the problem. If a work claims to detect some effect - of whatever size - at the "95% level", then we should expect about 95 of 100 similar studies to agree if the claim is correct, or only about 5 of 100 if it is not. Instead we frequently see a series of papers each contradicting the last - more 50-50 than 95-5.
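It is worth noting that low statistical power alone can produce exactly that 50-50 pattern. A hedged sketch in Python (the effect size and sample size are illustrative assumptions chosen to give roughly 50% power):

```python
# Sketch: how often does a study of a REAL effect reach significance?
import random
import statistics

random.seed(2)

def one_study(effect=0.25, n=45):
    """One-sided z-style test of a true mean shift (known sd = 1);
    returns True if the study comes out 'significant'."""
    sample = [random.gauss(effect, 1.0) for _ in range(n)]
    z = statistics.mean(sample) * n ** 0.5   # mean / (sd / sqrt(n)) with sd = 1
    return z > 1.645                         # one-sided 5% critical value

hits = sum(one_study() for _ in range(1000))
print(f"Studies detecting the (real) effect: {hits / 1000:.0%}")
# At roughly 50% power, about half of correct, honest studies of a genuine
# effect fail to reach significance -- a literature that looks "50-50" is
# exactly what chronic underpowering produces.
```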

By Paul Power (not verified) on 21 Jul 2007 #permalink

Anecdotal - but for whatever it's worth.

As a reality check on my own thoughts, I recently polled five friends who are epidemiologists, public health analysts, or public policy analysts (some working in health topics, some not) with this question: have you ever produced a piece of work that was contrary to the interests of your funder or program? Each one literally laughed. It was as if I had asked a used car salesman whether he pointed out the flaws of the car to his customer.

I don't know how you judge this. These folks aren't exceptionally dishonest people. Nor would (I believe) any of them actually change data or anything as crude as that. But we all know how to choose the right research question, choose the right confounders, make simple wording changes in questionnaire items, look at one data set vs. another, look at a particular outcome in a particular data set...

Some of this, I think, may even be unconscious, as certain programs tend to attract researchers who are already "believers". Some may be, as I've posted before, working for the paycheck. But clearly, funders don't continue to fund research that does not support their interest.

I feel like I'm saying something so painfully obvious here that it doesn't even need saying. But apparently it does. I'm not suggesting a "conspiracy", I'm suggesting that a lot of people and organizations are acting out of self-interest - political or financial.

Now with drug trials, my guess is that there is less of this sort of thing (or it's more subtle), because you'd be seeing the body counts. With epidemiology and health-related public policy issues, we're getting close to "anything goes".

So, we bicker about this paper and that paper, but I suggest that the damage is done long before any one paper hits press.

GDF wrote, "I'm not suggesting a 'conspiracy', I'm suggesting that a lot of people and organizations are acting out of self-interest - political or financial."

And GDF, that is *exactly* the point I make in the first section of "Dissecting..." There ARE different people acting in different ways for different reasons to promote smoking bans. Until fairly recently their efforts were largely uncoordinated and would certainly not fit the classic "conspiracy" definition.

Over the past ten years or so that may have changed to some extent. We now see $10 million conferences bringing together 5,000 antismoking activists/lobbyists/researchers at a time to plan and coordinate activities until the next conference and/or beyond. We now see documents such as:

http://www.no-smoke.org/pdf/CIA_Fundamentals.pdf

and

http://www.dhs.cahwnet.gov/tobacco/documents/TobaccoMasterPlan2003.pdf

(The latter, just like the incriminating Helena graph, seems to have been removed from the internet. However, just like that graph, it can be found using the services of the "WayBack Machine" at

http://www.archive.org/web/web.php

Fortunately, Rosemary Woods hasn't been there yet...)

And we now see coordinated efforts to push smoking bans not just on local or national levels, but worldwide with the Framework Convention on Tobacco Control.

So today, there might actually be a bit of water to that "conspiracy" argument despite the fact that I refrained from making it while writing "Dissecting..."

Michael J. McFadden
Author of Dissecting Antismokers' Brains
http://pasan.TheTruthIsALie.com

Any "study" claiming smoking or SHS "causes" a death is highly suspect and probably greatly in error due to the use of death certificates for cause of death.
Gary K.

A joint report by the Royal Colleges of Pathologists, Surgeons and Physicians ("The Autopsy and Audit", 1991) says: "In autopsies (post-mortems) performed on patients thought to have died of malignant disease (cancer) there was only 75% agreement that malignancy was the cause of the death and in only 56% was the primary site identified correctly." (So if you are told you have cancer, there is a one-in-four chance that you haven't, and even if you have, there is almost a fifty-fifty chance that you're being treated for one in the wrong place.)
The report concluded: "Such high levels of discordance mean that mortality statistics which are not supported by autopsy examinations must be viewed with caution." The rate of post-mortems in England and Wales is 27%.

A survey in Hungary, which has a very high rate of post-mortems, showed that even when they'd cut you up, pathologists couldn't be dead sure of what had killed you in almost 20% of cases.

Professor Alvan Feinstein, of Yale, a world authority on epidemiology (the study of the causes of disease), has said firmly that death certificates are merely "passports to burial", and that for more than 50 years, every time someone has studied the causes of death listed on death certificates, the conclusion has been that the information is "grossly inaccurate and unreliable".
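One can sketch directly how such error rates propagate into a relative risk. Here is a minimal Python example (all rates are illustrative assumptions, not figures from the reports quoted here); note that when the coding errors are the same in both exposure groups, the usual effect is to pull the observed relative risk toward 1.0:

```python
# Sketch: how death-certificate misclassification propagates into a relative
# risk. All rates below are illustrative assumptions, not data from the
# reports quoted above.

def observed_rate(true_rate, sensitivity, false_pos_rate):
    """Certificate-recorded death rate for a cause, given imperfect coding."""
    return true_rate * sensitivity + (1 - true_rate) * false_pos_rate

# Assume a true death rate from the cause of 0.5% among the exposed and
# 0.4% among the unexposed: a true relative risk of 1.25.
true_exposed, true_unexposed = 0.005, 0.004
print(f"True RR: {true_exposed / true_unexposed:.2f}")

# Assume certificates catch 75% of genuine cases (echoing the 75% agreement
# figure above) and mislabel 0.1% of other deaths as this cause, identically
# in both groups (nondifferential misclassification).
sens, fpr = 0.75, 0.001
obs_rr = (observed_rate(true_exposed, sens, fpr)
          / observed_rate(true_unexposed, sens, fpr))
print(f"Observed RR: {obs_rr:.2f}")  # ~1.19: biased toward 1.0, not away from it
```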

http://en.wikipedia.org/wiki/Autopsy

A study that focused on myocardial infarction (heart attack) as a cause of death found significant errors of omission and commission, i.e. a sizable number of cases ascribed to myocardial infarctions (MIs) were not MIs, and a significant number of non-MIs were actually MIs.
A large meta-analysis suggested that approximately one third of death certificates are incorrect and that half of the autopsies performed produced findings that were not suspected before the person died.

Other information
The principal aim of an autopsy is to discover the cause of death, to determine the state of health of the person before he or she died, and to establish whether any medical diagnosis and treatment before death was appropriate. Studies have shown that even in the modern era of high-technology scanning and medical tests, the medical cause of death is wrong in about one third of instances unless an autopsy is performed.

In about one in ten cases the cause of death is so wrong that, had it been known in life, the medical management of the patient would have been significantly different.
In most Western countries the number of autopsies performed in hospitals has been decreasing every year since 1955.
http://www.theage.com.au/articles/2004/03/14/1079199100562.html
Concern at declining hospital autopsy rates

By Julie Robotham
March 15, 2004

The number of autopsies on people who die in hospital has plummeted in the past 10 years, raising concern that many certified causes of death could be wrong.
A national survey of hospitals and pathologists found autopsies were performed in fewer than 5 per cent of in-hospital adult deaths between 2002 and 2003, compared with 14 per cent when the research was conducted in 1992-1993.

Study leader David Davies said a recent international survey suggested 9 per cent of autopsies uncovered errors in diagnosis that, if acted on while the person was alive, "could affect the patient's prognosis and outcome".
Professor Davies, area pathology director for South Western Sydney Area Health Service, said 24 per cent of autopsies revealed "clinically missed diagnoses involving a principal underlying disease or primary cause of death", which probably would not have affected the patient's treatment or survival.

Evidence Report/Technology Assessment: Number 58
Autopsy as an Outcome and Performance Measure
Overview
An extensive literature documents a high prevalence of errors in clinical diagnosis discovered at autopsy. Multiple studies have suggested no significant decrease in these errors over time.
In 1994, the last year for which national U.S. data exist, the autopsy rate for all non-forensic deaths fell below 6 percent.
http://www.ahrq.gov/clinic/epcsums/autopsum.htm
http://gateway.nlm.nih.gov/MeetingAbstracts/102233674.html

Institute of Medicine of Chicago, IL 60604, USA.
PRINCIPAL FINDINGS: The average Chicago area hospital autopsy rate rose from 11% in 1920 to a peak of 49% in 1955. The average autopsy rate declined steadily to 14% in 1985, and has continued to decline slowly since that year.
CONCLUSIONS: ... It also suggests that epidemiological data on diseases and causes of death may be inaccurate.

By Gary Kayser (not verified) on 22 Jul 2007 #permalink

Mr. McFadden:

You still haven't answered my question about why it is that Dr. Siegel hasn't been willing to add his name to your paper and thus help get it published if he agrees that the Helena study is not a good one and that yours is. I'd suggest that you ask him. Certainly if I were you that would be one question I'd be wondering about.

Aren't you in the least bit curious? Or does it serve your purpose better to be able to cry "martyr"?

What, no reply from Orac?

What an appalling lack of manners and civility.

Oh well; since this blog is full of vainglorious puffery and ill-concealed ego, I am not surprised.

By Gary Kayser (not verified) on 22 Jul 2007 #permalink

Orac wrote, "Mr. McFadden:

You still haven't answered my question about why it is that Dr. Siegel hasn't been willing to add his name to your paper and thus help get it published if he agrees that the Helena study is not a good one and that yours is. I'd suggest that you ask him. Certainly if I were you that would be one question I'd be wondering about."

Orac, it is answered on the other blog, the one concerning Helena, and is paired with either a second or a third request that you return the civility and answer the questions I have raised.

Michael J. McFadden
Author of Dissecting Antismokers' Brains
http://pasan.TheTruthIsALie.com