Ed Yong demands higher accountability in science journalism, and he has made me think about how, in just the last two days, I've run across two examples of shoddy reporting. Together these two articles encompass a large part of the problem. The first, from the NYT, represents the common failure of science reporters to be critical of correlative results. While it lacks egregious factual errors, by accepting the authors' conclusions without vetting the results of the actual paper the journalist has created a misleading article. The second, from Forbes, represents the worst kind of corporate news hackery and shows the pathetic gullibility of reporters who regurgitate the fanciful nonsense of drug companies without any apparent attempt to vet or fact-check the story. A single Google search demolishes it.
The first article, "Digital records may not cut costs," is, I think, typical of most science reporting. That is, it's not grossly incompetent, but it overstates the case of the paper involved and fails to emphasize the shortcomings of the research.
The NYT article describes this paper from Health Affairs, which caught my eye even before the NYT article was published, because I believe electronic medical records (EMRs) will prevent redundancies and lower costs. So, am I wrong? Will EMRs save us money, or will they actually increase redundancy, as the Health Affairs article suggests?
I haven't given up hope. This article is a correlative study based on survey data, and proves precisely nothing.
We analyzed data from the 2008 National Ambulatory Medical Care Survey, a survey of 28,741 patient visits to a nationally representative sample of the offices of 1,187 nonfederal physicians.19 The survey excludes hospital outpatient departments and offices of radiologists, anesthesiologists, and pathologists.
...
The survey collects information about the practice setting, including detailed information about computerization, as well as about the characteristics of the patients seen and the tests ordered at each surveyed visit.
...
To assess whether computerized access to imaging results reduced the ordering of imaging, we separately analyzed predictors of whether a patient received a computed tomography scan; magnetic resonance imaging; any advanced imaging procedure (computed tomography scan, magnetic resonance imaging, or positron emission tomography scan); or any image (an advanced image, X-ray, bone density measurement, ultrasound, or other image). We examined two indicators of physicians' access to imaging results. The first was whether the practice had what the survey called "a computerized system for viewing imaging results"--that is, a system that presents a text report of a physician's interpretation of the imaging study, an actual visual electronic radiologic image, or both. The second was, for those practices with such a system, whether "electronic images [were] returned"--that is, whether in addition to or in place of a text report, the actual visual images were returned electronically.
In those few cases where physicians indicated that they had such a system but its capability was "turned off," we considered that they did not have access to imaging results.
So rather than testing two cohorts of physicians that are equivalent except for access to patient EMRs, or better yet, introducing EMRs into physicians' offices with or without access to outside providers' tests and then studying ordering habits, they are just mining survey data. Worse, the comparison is between physicians using a potentially huge variety of systems that are not necessarily compatible with other systems and not necessarily connected to every place a patient may get imaging or lab results. After all, there is huge heterogeneity among EMRs, and they are not compatible with one another. There is no reason to think that if some imaging is available, all imaging will be available to the physician.
One of the most frustrating things as a doctor is when you're evaluating a new patient transferred from somewhere else, going through their chart, and trying to figure out everything the previous hospital did. Inevitably the discharge summary is buried somewhere in a pile of thousands of nursing notes, computer-generated order histories, and outdated labs. Then, after you've finally found the summary and are working out the real history of what's happened to the patient, you run across the imaging. Invariably, the imaging was from a few hours ago and showed something so terrible they had to be transferred to your quaternary care center immediately - before the final read from their radiologist could be included in the chart. You pop the disc in the computer, and... nothing happens. Cursing, you take the film downstairs to the radiologist, and she tells you, "oh, that format is incompatible with our system." Frustrated, you call the hospital that sent the patient, only to find out their medical records department opens Monday at 9 AM (it's Saturday at 2 AM). You now have a sick patient, an incomplete record, no imaging, and no chance of getting the information you need - information that may determine a surgical decision - for the next day and a half. All you have is the unconfirmed word that the patient has a big abdominal problem, and you have to make a big decision based on the notoriously inaccurate report of the transferring physician (guess what, smaller hospitals dump inconvenient patients on higher-level centers all the time - sad but true).
What do you do? Well, you re-image the patient. That means additional delay, additional cost, and additional radiation to the patient. Sound horrible? It is. And this example is just one point on a spectrum that runs from the outpatient physician who, rather than wait for records in a transfer, just runs a new CBC and chem-7, to the other extreme, where most hospitals accepting a transfer refuse to rely on any of the outside hospital's tests and re-run everything, including imaging.
Now our hospital has an EMR, and the ability to access some other EMRs, but it in no way guarantees compatibility or access to records from hospitals outside our network.
This is why this study isn't helpful. The potential of the EMR to reduce redundancy doesn't lie in the mere presence of some kind of electronic records system in a hospital or office. It will be realized when a top-down regulatory framework requires record compatibility between systems. The problem, again, is the market. Every electronic medical record software company insists on proprietary record formats. The records are not readily transferable, as that might undercut their disgusting attempts to garner market share at the cost of patients' health and the efficient use of medical resources. If I were dictator, I'd just force every hospital to use the VA's CPRS system, which, while clunky, is comprehensive, fully functional, secure, and compatible across the entire VA health system. Instead we've bought the privatization Kool-Aid and rely on one of a dozen different companies, all vying against each other and ensuring that none of their software talks to anyone else's. This is the worst possible situation (as are many in US health care). A compromise would be to allow any company to write the software but agree on a universal format, like the formatting standard for DVDs or CDs. You can have a Sony player or a Pioneer, but they can all read the same damn disc.
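To make the DVD analogy concrete, here's a minimal sketch in Python of what vendor-neutral exchange could look like. The schema and field names are entirely made up - this isn't any real vendor's or standard's API - but the point is that each EMR can store data however it likes internally, as long as it exports and imports one agreed-upon format that every other system can read.

```python
import json

# Hypothetical vendor-neutral schema: every EMR exports this same structure,
# regardless of how it stores data internally.
COMMON_FIELDS = {"patient_id", "study_type", "performed_at", "report_text"}

def export_common_record(internal_record: dict) -> str:
    """Translate one vendor's internal imaging record into the shared format."""
    common = {
        "patient_id": internal_record["mrn"],        # vendor-specific field names...
        "study_type": internal_record["modality"],   # ...mapped onto agreed-upon ones
        "performed_at": internal_record["exam_time"],
        "report_text": internal_record["rad_read"],
    }
    return json.dumps(common)

def import_common_record(payload: str) -> dict:
    """Any other vendor's system can parse the shared format without custom code."""
    record = json.loads(payload)
    missing = COMMON_FIELDS - record.keys()
    if missing:
        raise ValueError(f"Record is missing required fields: {missing}")
    return record

# Hospital A exports; Hospital B (different vendor) imports -- no re-imaging needed.
payload = export_common_record(
    {"mrn": "12345", "modality": "CT abdomen", "exam_time": "2012-03-10T02:00", "rad_read": "..."}
)
print(import_common_record(payload)["study_type"])  # -> "CT abdomen"
```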
So, did the NYT article emphasize any of these flaws? Yes, with a throwaway paragraph from a dissenter at the bottom:
Dr. David J. Brailer, who was the national coordinator for health information technology in the administration of George W. Bush, said he was unconvinced by the study's conclusions because they were based on a correlation in the data and were not the result of a controlled test.
The study did not explore why physicians in computerized offices ordered more tests. Dr. McCormick speculated that digital technology might simply make ordering tests easier.
Now, Brailer brings up a good point. This study proves exactly nothing. It shows a correlation, based on what I believe are highly flawed assumptions, that is being used to disparage EMRs' potential. But does that stop the NYT from having the first paragraph basically swallow this flawed message whole? Nope:
Computerized patient records are unlikely to cut health care costs and may actually encourage doctors to order expensive tests more often, a study published on Monday concludes.
This study shows no such thing. I can understand saying "there is a correlation between doctors with electronic records and more test ordering," but nothing about increasing costs, because the study says nothing about EMR compatibility, function, or whether the doctors could even access other providers' records. Also, centers with more technology are likely to use more technology, including advanced imaging, so the correlation might simply reflect the patients who present to more technologically advanced centers. I somehow doubt your country doctor is going to start ordering all sorts of new tests because someone hands him a laptop. And the authors themselves note that their results do not exclude self-referral, the practice I criticized yesterday, which creates incentives for physicians to over-test.
This is an example of reporting that I think is just mediocre. It reports the results without much emphasis on the weaknesses of the study's correlative nature and overstates the significance of the findings. Yes, there is a throwaway paragraph from a dissenter for balance, but if anything a paper showing a correlative effect, contrary to numerous other analyses showing the opposite, deserves a higher level of scrutiny and skepticism.
Now, the Forbes article, "The Truly Staggering Cost Of Inventing New Drugs," on the other hand, is a truly egregious example of a reporter falling hook, line, and sinker for drug company propaganda. Matthew Herper writes:
During the Super Bowl, a representative of the pharmaceutical company Eli Lilly posted on the company's corporate blog that the average cost of bringing a new drug to market is $1.3 billion, a price that would buy 371 Super Bowl ads, 16 million official NFL footballs, two pro football stadiums, the pay of almost all NFL football players, and every seat in every NFL stadium for six weeks in a row. This is, of course, ludicrous.
The average drug developed by a major pharmaceutical company costs at least $4 billion, and it can be as much as $11 billion.
$1.3 billion was obviously bullshit, and just based on a guess I would say it overshoots by about 10-fold, but that's a pretty typical corporate overstatement of their trials and tribulations. Herper wants to inflate the BS into a 100-fold exaggeration. He does this by the highly dubious procedure of dividing each company's entire R&D budget by the number of "new" drugs it produced over a given period.
Rebecca Warburton and Donald Light, writing at PLoS, expose the flaws in this argument:
Firstly, the estimates in Forbes accept company R&D figures uncritically and ignore evidence that what companies count as "R&D" may be broader than the costs of bench, lab, and trial research that make up R&D. Drug companies work hard to hide their real costs from any outside scrutiny. And they never link their alleged costs to how quickly they earn them back at high prices.
Secondly, the estimates in Forbes divide total reported costs by the number of "new drugs." Given the small number used in the Forbes estimates, for example only 5 in 14 years for AstraZeneca, "new drugs" must mean NMEs or new active ingredients. The big companies turn out many more newly patented variations on existing drugs that involve less risk, time and cost. In other words, the Forbes estimates divide total R&D for research on all products by the handful of NMEs. However, the me-too variations are the main products of R&D, and they account for about 60 percent of the United States' drug budget.
Sure, if you shrink the denominator to a fifth of what it should be, your numbers are going to jump. For every "new" drug, 8 or 9 "me too" drugs are developed. I checked the data on approvals here, and just fact-checking the drug maker at the top of the list: AstraZeneca, quoted in the Forbes article as producing 5 drugs from 1997-2011, looks like it produced closer to 15 over that period. Glaxo, quoted as making 10 drugs, made closer to 40. This "me-too" effect has been known since Marcia Angell exposed the myth of drug company R&D almost a decade ago. Further, the development of new compounds and the investigation of novel pharmaceuticals often happen in NIH-sponsored labs, which then sell the rights to a drug company to develop. Drug companies get a more reliable return on copy-cat medications than on taking risks on potentially dead-end new development research.
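A quick back-of-the-envelope calculation shows how much the choice of denominator matters. The inputs here are rough - I'm just treating the ~$11 billion per-drug figure quoted above, times the 5 NMEs Forbes credits to AstraZeneca, as the implied R&D pot - but the effect of dividing by 5 versus by 15 is the point:

```python
# Back-of-the-envelope: same R&D pot, different denominators.
# (Figures are rough approximations of the numbers quoted in the post above.)
cost_per_nme_forbes = 11e9   # ~$11B per drug, the high-end Forbes figure
nmes_counted = 5             # AstraZeneca NMEs counted by Forbes, 1997-2011
all_approvals = 15           # closer to AstraZeneca's total approvals over the same period

total_rd = cost_per_nme_forbes * nmes_counted   # the implied ~$55B R&D pot

print(f"Per drug, NMEs only:     ${total_rd / nmes_counted / 1e9:.1f}B")   # ~$11.0B
print(f"Per drug, all approvals: ${total_rd / all_approvals / 1e9:.1f}B")  # ~$3.7B
```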
Oh, and the drug companies are screwing with the numerator too:
Finally, half of the industry's average cost of R&D is not real R&D costs at all, but an estimate of profits foregone - a highly inflated estimate of what companies would have made had they put their money in an index fund and not developed new drugs in the first place! Given the staggering cost estimates in Forbes, you might think that drug companies should do just that and become investment banks.
So what is the real cost? Warburton and Light explain:
Our own estimate of pharmaceutical R&D is often misquoted as an average of $43 million per new drug, which commentators reject as being absurd. However, we make clear this estimate for the year 2000 does not include the cost of discovery (because it varies greatly and no one has accurate figures), nor the "cost of capital" (for reasons explained in our article which can be read here). Our estimate is the net cost to major companies after taxpayers cover about 50% of their R&D expenses. We use the median cost because the average cost gets inflated by a few costly R&D projects.
In sum, we estimate that the median, net, corporate cost to develop a new drug, based on the confidential cost data that companies reported to their policy research center at Tufts University, is $56 million in 2011, plus the unknown company costs of discovery and the artificial estimate of profits foregone, if you think it should be added. We also show that R&D costs for in-house new active ingredients are much higher, and costs for me-too variations are much lower than this single figure.
Our estimate is almost double the only solid corporate report of R&D costs, which can be found in audited tax returns from the late 1990s. Here companies reported average costs for clinical trials per drug of only $22.4 million. Not $224 million but $22.4 million.
...
if the Forbes figures and business arguments are correct, then nearly all of the global pharmaceutical companies listed in their article would have gone bankrupt between 1997 and 2011.
The NYT article merits a sigh; the Forbes article merits a trip to the whipping shed. With a single Google search you find Herper isn't just missing the point - his reporting fails at basic journalism.
McCormick, D., Bor, D., Woolhandler, S., & Himmelstein, D. (2012). Giving office-based physicians electronic access to patients' prior imaging and lab results did not deter ordering of tests. Health Affairs, 31(3), 488-496. DOI: 10.1377/hlthaff.2011.0876
I think your first mistake is assuming those magazines "do" science journalism, any more than fashion magazines are likely to report, correctly or at all, on, say, the trashing of rain forests to produce compounds used by makeup manufacturers to make the stuff they advertise in fashion magazines.
Still, one might hope they were slightly less stupid at "reputable" magazines, except that even science magazines are crap at this sort of thing: failing to get the facts right, editing things so they sound like common ideas about the subject rather than reflecting an understanding of what was actually being described, etc. Frankly, I doubt you even need a background in science to write them, and you certainly don't need to know any more about, say, neuroscience to write about it than the physicist currently pondering whether the human brain, in complete contradiction to his own field, can somehow "see" the quantum effect known as "spooky action at a distance."
Basically, it's like expecting a neurosurgeon to correctly discuss some detailed aspect of pediatrics, having never worked in the field, never mind studied it. And, to make matters worse, the farther you get from professional journals (which have their own issues), the less likely the editor is to contact all, or any, of the people being interviewed to verify they got the article right, and the higher the chance some twit will publish it anyway, even after being told it's got things wrong with it. And that is without even talking about "adjustments" to the language, which change the meaning in unintended ways: ways the scientists might not notice, the editor has no clue are wrong, and the public is assumed to be too stupid to understand unless the piece has been dumbed down enough that readers will, entirely unintentionally, end up getting completely wrong information from it.
Frankly, getting basic facts right is sometimes the least of the problems, but it would definitely be a step in the right direction (or a leap, I am sometimes not sure which...).
The last time I checked that was true for about one new drug per year, or ~5% of the total. If you have a higher number, please back it up with a cite or a list of actual drugs.
I know there's an incentive for Pharma to inflate R&D budgets (tax credits), but on the other hand the stuff that's eligible for the tax credits isn't so easy to inflate and Wall Street investors view big R&D budgets as a big liability.
I'm not sure what to make of adding each follow-on drug to the denominator. The big cost is in getting a new NME approved; new dosages and indications are relatively cheap in both time and money. You can certainly make the argument that AstraZeneca had 15 drugs approved during the period, but saying that Zomig and Zomig ZMT (orally disintegrating) should be counted as two drugs as far as development costs are concerned, and thus that the reported cost of developing zolmitriptan should be cut in half, doesn't seem a very useful way to look at it. Similarly, three new dosages of budesonide (another 3 drugs out of AstraZeneca's 15) weren't very expensive compared to getting Brilinta through two rounds with the FDA.
It makes more sense to me to give each NME its own entry and to lump in the new approvals (and their associated development costs) accordingly. This especially makes sense if you don't consider those follow-on drugs important in terms of medicine or innovation.
To get a better handle on what those numbers really are, how about these analyses:
1. Look at total R&D expenditures for NDAs from small pharmas and biotechs, specifically ones that have only worked on one drug.
2. Look at total R&D expenditures for follow-on drugs from companies that are only working on follow-on drugs.
While I would break out opportunity costs as a separate line item, I absolutely agree with including them any time so much capital is tied up for so much time. I wish my city would do that each time it tried to get spendy and build a new ballpark/stadium.
Hibob, quite right - I did not phrase that appropriately. Publicly funded R&D in pharmaceuticals does not typically generate new compounds, but it is responsible in a big way for investigating mechanisms and pathways.
This is figured into the R&D expenditure estimates by the cited authors as accounting for as much as 50% of the cost of drug R&D. I will restate for accuracy.
Thanks Mark!
Something I didn't notice before is that Warburton and Light are talking about the median cost of a drug approval, not the average. As your fact checking shows, the majority (2/3 to 3/4) of drug approvals aren't for NMEs; they're for new indications, formulations, etc. of older drugs. So the median drug approval cost could easily not reflect ANY of the cost of getting NMEs approved.
Here's a look at a breakdown of costs for approval of an NME, taking into account failures as well:
(Nature Reviews Drug Discovery 9, 203-214 (March 2010))
* Target-to-Hit: 24 million for 24.3 projects.
* Hit-to-Lead: 49 million for 19.4 projects.
* Lead Optimization: 146 million for 14.6 projects.
* Preclinical: 62 million for 12.4 projects.
* Phase I: 128 million for 8.6 projects.
* Phase II: 185 million for 4.6 projects.
* Phase III: 235 million for 1.6 projects.
* Submission to Launch: 44 million for 1.1 projects.
* End product: 1 new NME at a total cost of 873 million.
(or $250 million if you only include work on the one successful candidate).
Opportunity costs not included, neither are any phase IV trials.
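For what it's worth, the arithmetic in that breakdown checks out. Here's a quick sketch, using only the numbers in the list above, that sums the per-stage spending to get the ~$873 million all-in figure and divides each stage's spend by the candidates still in play at that stage to approximate the "successful candidate only" figure (this simple allocation lands around $260 million, in the same ballpark as the ~$250 million quoted above):

```python
# (stage, total cost in $M across all candidates, number of candidates in that stage)
stages = [
    ("Target-to-Hit",        24, 24.3),
    ("Hit-to-Lead",          49, 19.4),
    ("Lead Optimization",   146, 14.6),
    ("Preclinical",          62, 12.4),
    ("Phase I",             128,  8.6),
    ("Phase II",            185,  4.6),
    ("Phase III",           235,  1.6),
    ("Submission to Launch", 44,  1.1),
]

# All-in cost per launched NME: every dollar spent, including on failures.
all_in = sum(cost for _, cost, _ in stages)

# Cost attributable to the one successful candidate: each stage's spend
# divided by how many candidates were still in play at that stage.
successful_only = sum(cost / n for _, cost, n in stages)

print(f"All-in cost per NME:       ${all_in:.0f}M")           # $873M
print(f"Successful candidate only: ${successful_only:.0f}M")  # ~$260M
```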
Those numbers seem legit. But do those costs reflect expenditures solely by the drug companies? After all, many clinical trials are paid for by the NIH, by medical centers, etc., and the main contribution of the drug company is to provide the drug for free. Are they taking credit for these public expenditures?
Also, the cost of an NME is expected to be higher, and in your example it nearly hit the billion mark. But drug companies make 5-9 "me too" drugs for each NME, and the research and development costs for those are vastly lower (while the profits may be just as high or higher - see Nexium), so I think the final average or median will be a lot lower.
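As a purely hypothetical illustration of that last point - taking the $873 million all-in NME figure from the comment above and assuming a much cheaper (and entirely made-up) $50 million cost per me-too approval - the blended average falls fast once the cheap follow-ons enter the mix, and the median ends up reflecting no NME cost at all:

```python
import statistics

# Hypothetical portfolio: 1 NME plus 8 me-too approvals (the 5-9 range above).
nme_cost = 873    # $M, the all-in NME figure from the comment above
me_too_cost = 50  # $M, assumed (illustrative only) cost of a follow-on approval

costs = [nme_cost] + [me_too_cost] * 8

print(f"Average per approval: ${statistics.mean(costs):.0f}M")    # ~$141M
print(f"Median per approval:  ${statistics.median(costs):.0f}M")  # $50M -- a me-too, not the NME
```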
Your post reminds me of a very satirical piece written in the Guardian about science journalism. I'm sure you've seen it, but for those who haven't, it explains how "lazy" journalists in general are, and most especially when reporting on matters related to health and science:
http://www.guardian.co.uk/science/the-lay-scientist/2010/sep/24/1
You got me thinking about the funding for clinical trials. The only data I have access to is clinicaltrials.gov, which doesn't break down shared costs. Nevertheless, doing an advanced search for phase III clinical trials, location: USA, funding source: industry, yields 861 trials. The same search but with NIH or another government agency as the funding source yields 53 trials, a ratio of 16:1.
Removing the location constraint adds five more trials to the NIH/govt agency column (58), but doubles the number of industry trials to 1721 (30:1).
For Phase II trials the ratio was closer (14228:7201 industry:govt) but then those are much cheaper.
I do have a big question for anyone who accepts Warburton and Light's analysis at face value ($56M to develop a drug, decreased development times, decreasing/static failure rates, and the resulting drugs then being stupendously profitable): if that's all true, the ROI on pharma's actual R&D must be astounding. Why would Pharma then pack its research budget with extraneous costs instead of spending more on (tax-credit-subsidized) actual R&D? Why do stock buybacks instead of R&D?
Similarly, why do biotechs partner up to get through phase III? Shouldn't they wait until after approval so they can retain the bulk of the profits?
Hibob, that figure only includes intramural trials at the NIH, not all the grants that have clinical trials as a component, distributed throughout the US and run in collaboration with Pharma. They have to partner up and collaborate because they don't own the medical centers and hospitals required to administer the trials. Those are largely publicly funded institutions.
Actually, when I search clinicaltrials.gov I find 6654 open phase I-III interventional studies by industry versus 10758 by universities, the NIH, and other agencies. Refining to "pharmaceuticals," I get 5798 and 7381. Narrowed to the USA, I get 3392 by industry and 3372 by all other agencies.
There are very few instances where digital records do not cut costs. This is not an opinion... It is an accepted fact.
So yes, I am a bit miffed that this would be an issue up for debate.