I advocate science-based medicine (SBM) on this blog. However, from time to time, I consider it necessary to point out that SBM is not the same thing as turning medicine into a science. Rather, I argue that what we do as clinicians should be based in science. Contrary to what some might claim, this is not a distinction without a difference. If we were practicing pure science, we would theoretically be able to create algorithms and flowcharts telling us how to care for patients with any given condition, and we would never deviate from them. It is true that we have algorithms and flowcharts suggesting guidelines for care for a wide variety of conditions, but there is wide latitude in them; a physician's "judgment" still generally trumps the guidelines, and a physician has to practice quite far outside such guidelines to wander into the realm of malpractice. While it is also true that physicians sometimes have an overinflated view of the quality of their own "clinical judgment," sometimes to the point of rejecting well-established science, as Dr. Jay Gordon frequently does, what I consider to be a physician's judgment is knowing how to apply existing medical science to individual patients based on their circumstances and, yes, even their desires and values.
Indeed, if there's one area where SBM has all too often fallen short in the past, it's in taking into account the patient's experience with various treatments. What got me thinking (again) about this issue was an article by Dr. Pauline Chen in the New York Times last Thursday entitled Listening to Patients Living With Illness. She begins her article with an anecdote:
Wiry, fair-haired and in his 60s, the patient had received a prostate cancer diagnosis a year earlier. When his doctors told him that surgery and radiation therapy were equally effective and that it was up to him to decide, he chose radiation with little hesitation.
But one afternoon a month after completing his treatment, the patient was shocked to see red urine collecting in the urinal. After his doctors performed a series of tests and bladder irrigations through a pencil-size catheter, he learned that the bleeding was a complication of the radiation treatment.
He recalled briefly hearing about this side effect three months earlier, but none of the reports he had been given or collected mentioned it, and once he had recovered from the angst of the emergency room and the doctor's office visits and the discomfort of the clinical work-up, he didn't give it more thought -- until a few weeks later, when he started bleeding again.
By the time I met him, he was in the middle of his third visit to the hospital. "I feel like I'm tied to this place," he said. He showed me a plastic jug partly filled with urine the color of fruit punch, and he described a post-treatment life marked by fear of going to the bathroom and discovering blood. "If I had known that my life would be like this after radiation," he sighed, "I would have chosen the surgery."
To this, I'll add a little random bit of personal experience of my own. No, I wasn't a patient who had to face something like this; rather, I see something similar in my own patients. Back when I was in my surgical oncology fellowship -- and before that, in my general surgery residency -- I was always taught that lumpectomy was preferable to mastectomy because it saves the breast, and most women want to save their breasts. After all, lumpectomy plus radiation therapy results in the same chance of survival as mastectomy, so we should offer lumpectomy whenever tumor characteristics (the main one being size relative to the rest of the breast) permit it. Yet this assessment often neglects to acknowledge that, for some women, undergoing six or seven weeks of radiation is horribly inconvenient and frequently entails complications. It also often neglects to acknowledge that there is a price for saving the breast besides having to undergo radiation therapy: the possibility of more surgery to achieve clear surgical margins, not to mention a higher risk of local recurrence in the breast. For some women, this latter possibility is a deal-breaker. Even though they acknowledge that their chances of survival would be the same with lumpectomy or mastectomy, the thought of an approximately 8% local recurrence rate eats at them to the point that they opt for mastectomy.
Then there is the issue of chemotherapy. We frequently recommend cytotoxic chemotherapy for women with relatively early stage breast cancer, even though in such patients the addition of chemotherapy only increases the chance of survival by perhaps 2-3% on an absolute basis, depending upon the tumor. Of course, as I've pointed out before, the benefits of chemotherapy are more marked in more advanced operable tumors, but in early stage tumors they are rather modest. This is therapy that causes hair loss and an increased risk of infections, and that can damage the heart, but it is the standard of care. Most women are willing to undergo this sort of therapy, too; I can't locate the study, but I've seen one survey in which women responded that they would be willing to undergo chemotherapy for a 1% increased chance of survival.
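To make the scale of that trade-off concrete, here is a back-of-the-envelope sketch of what a 2-3% absolute survival benefit means in terms of "number needed to treat." The 2.5% figure below is simply the midpoint of the range quoted above, chosen for illustration; it is not from any specific trial.

```python
def number_needed_to_treat(absolute_risk_reduction: float) -> float:
    """NNT = 1 / ARR: how many patients must be treated for one to benefit."""
    return 1.0 / absolute_risk_reduction

# Midpoint of the 2-3% absolute benefit discussed above (illustrative only)
arr = 0.025
nnt = number_needed_to_treat(arr)

print(f"Absolute benefit: {arr:.1%}")
print(f"Number needed to treat: {nnt:.0f}")  # roughly 40 women treated per additional survivor
```

In other words, on these illustrative numbers, roughly 40 women would undergo the full course of chemotherapy, with all its side effects, for each additional survivor; that is the kind of arithmetic a patient weighing a 1% or 2% benefit is implicitly doing.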
The point is that these sorts of questions are value judgments that depend upon what patients consider important. The patient described by Dr. Chen, for instance, would apparently have preferred the risks of surgery to peeing blood all the time and having to go back to the doctor's office and hospital time and time again. Science can tell a physician and patient like this that radiation and surgery will produce an equivalent chance of surviving his cancer. It can tell them what the complications of each choice are likely to be and what the odds of each complication are. That's part of what I mean when I refer to science-based medicine. What it can't tell the patient and doctor is which constellation of risks the patient would find more bearable. The same is true for choosing between mastectomy and lumpectomy plus radiation, or for deciding whether to opt for chemotherapy after breast cancer surgery. Science provides the numbers and the "price" of each choice, but it can't -- nor should it -- tell the patient what to value. Moreover, what the patient values may not be what the physician values. As Dr. Chen points out:
Whether conducted at a laboratory bench or in clinical trials, medical research has long been driven by a single overriding goal -- the need to find a cure. Usually referred to more modestly as a search for "the most effective treatment," this standard has served as both a barometer of success and a major criterion for funding. Most published studies are marked by a preponderance of data documenting even minor blips in laboratory values or changes in the size of a spot of cancer or area of heart muscle damage on specialized X-rays. Some studies bolster the apparent success of their results with additional data on societal effects like treatment costs or numbers of workdays missed.
Few studies, however, focus on the patient experience.
She then refers to a study published in the journal Health Affairs entitled Adding The Patient Perspective To Comparative Effectiveness Research, whose lead author is Dr. Albert W. Wu, a general internist and professor of health policy and management at the Johns Hopkins Bloomberg School of Public Health in Baltimore. In this study, Wu et al argue for the inclusion of the patient's perspective in comparative effectiveness research, in the form of patient-reported outcomes. To illustrate the concept, Wu et al use this chart for patients with chronic obstructive pulmonary disease (COPD).
These sorts of measures are particularly appropriate for comparative effectiveness research (CER). To see why, consider what CER is: basically, CER compares existing treatment modalities already determined to be effective in prior clinical trials in order to determine which is more effective. Other important measures include cost-effectiveness. However, although some effort goes into assessing patient-reported quality of life outcomes of the sort listed above, all too often it's hit-or-miss whether these sorts of measurements are included in clinical trials. One initiative that this article describes is the Patient-Centered Outcomes Research Institute (PCORI), whose mandate is to:
- Establish an objective research agenda;
- Develop research methodological standards;
- Contract with eligible entities to conduct the research;
- Ensure transparency by requesting public input; and
- Disseminate the results to patients and healthcare providers.
Wu et al suggest that the PCORI can only realize its potential if it supports initiatives that integrate measures of patient experience into not just research but into routine clinical care. A number of possibilities are suggested, including how to integrate general and disease-specific tools into clinical trials in order to measure patient-reported outcomes. Also suggested are various means of integrating these tools not just into clinical research but into routine clinical care, including using them in administrative claims data, linking this data to electronic medical records, and even promoting the collection of such data as being required for reimbursement.
One problem I can perceive immediately is that the PCORI has no real power. The health insurance reform bill known as the Patient Protection and Affordable Care Act (PPACA), which mandated the creation of the Patient-Centered Outcomes Research Institute, gives it none. Its main charge is to assess "relative health outcomes, clinical effectiveness, and appropriateness" of different medical treatments, both by evaluating existing studies and by conducting its own. Yet even given that huge mandate, the law also states that the PCORI does not have the power to mandate or even endorse coverage rules or reimbursement for any particular treatment. Indeed, so toothless is the PCORI, at least in its present form, that it has been disdainfully described as being like the UK's NICE but without any actual teeth, a description that is all too true. Basically, the law says that Medicare may take the institute's research into account when deciding what procedures it will cover, as long as the new research is not the sole justification and the agency allows for public input. Moreover, if the political reaction to the USPSTF's revision of the guidelines for mammographic screening last year is any indication, politicians who don't like a PCORI recommendation can be counted on to behave similarly. After all the ranting about "rationing" that was used to attack the PPACA, it was not politically feasible to make the PCORI a government agency or to imbue it with any real authority.
Politics aside, let's get back to the sorts of initiatives suggested by Wu et al. One that in particular interests me is the concept of using patient portals to collect this information. Patient portals are websites that offer a variety of services to patients, including secure e-mail communication with the clinician, the ability to schedule appointments and request prescription refills, as well as the opportunity to complete intake and other forms that used to be completed on paper in the office. The authors propose using such portals to collect patient-centered quality of life measurements and give an example of how this might be done in the case of a hypothetical breast cancer patient:
In one possible scenario, a woman with breast cancer is being followed by an oncologist who would like to know how she is doing on the chemotherapy regimen she is receiving. The oncologist logs on to PatientViewpoint.org, enters the patient's number, and orders the BR-23 Breast Cancer-Specific Quality of Life Questionnaire for her to complete online before her next visit. The patient receives an e-mail notification to do this, logs on to PatientViewpoint.org, and completes the survey.
The patient's results are automatically calculated and are made available both on the website and within the hospital's electronic health record alongside all of her other laboratory test results. At the visit, the oncologist pulls up the results and asks the patient about an increase in her depression scores. It would also be possible to aggregate all of the patient's questionnaire results with those of other patients receiving chemotherapy for similar breast cancer cases and to use these data to help compare the effectiveness of different regimens.
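The aggregation step in that scenario, pooling questionnaire scores across patients so that regimens can be compared, could be sketched roughly as follows. The patient records, regimen names, and score values here are entirely invented for illustration; a real system like the PatientViewpoint site described above would pull validated BR-23 scores from the electronic health record.

```python
from collections import defaultdict
from statistics import mean

# (patient_id, regimen, quality-of-life score) -- hypothetical data
responses = [
    ("pt01", "regimen_A", 62),
    ("pt02", "regimen_A", 70),
    ("pt03", "regimen_B", 55),
    ("pt04", "regimen_B", 49),
]

# Group scores by chemotherapy regimen
scores_by_regimen = defaultdict(list)
for _, regimen, score in responses:
    scores_by_regimen[regimen].append(score)

# Compare mean patient-reported quality of life across regimens
for regimen, scores in sorted(scores_by_regimen.items()):
    print(f"{regimen}: mean QoL score = {mean(scores):.1f} (n={len(scores)})")
```

Even a toy version like this makes the point: once patient-reported scores live alongside laboratory values in the record, comparing regimens on quality of life becomes a routine query rather than a special research project.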
Dr. Wu's site is currently only set up to accommodate breast and prostate cancer patients, but it could be expanded. There now exist a large number of tools like the BR-23 to assess quality of life, and, with what appears to be the nigh inevitable infiltration of the electronic medical record into medicine over the next several years, integrating such tools into routine clinical care should become increasingly easy and inexpensive. On the other hand, one problem with such tools is that clinicians are already buried in "information overload." Whether they would actually read and use the results of such measures outside the context of clinical trials is not assured, at least not if there is no incentive to do so. If this sort of approach is going to work, the government and insurance companies are going to have to pony up. Another problem is that a lot of doctors don't like this sort of measurement. They consider it unscientific and "squishy," or they don't know what to do with the information. Whether these attitudes will change as CER becomes increasingly embedded in clinical research is impossible to say.
Dr. Wu's article leads me to reflect upon two things. First, it's important to remember that the reason these "softer," "squishier" measures are becoming more important is precisely because SBM has been so successful. Diseases that were once fatal are now chronic. A prime example is HIV/AIDS. Back when I was in medical school, HIV was invariably fatal. AIDS patients died rapidly -- and in most unpleasant ways. Thanks to SBM, which developed the cocktails of antiretroviral drugs, HIV/AIDS has become a chronic disease, so much so that babies born with HIV are now approaching adulthood. What this success means is that mortality, by and large (though not completely), is no longer the be-all and end-all of HIV treatment. Now, we are seeing quality of life issues coming to the fore. The same is true for some cancers, and it's certainly true for diabetes and heart disease. As Wu et al point out:
Patient-reported outcomes directly support the primary goal of much of health care: to improve health-related quality of life, particularly for people with chronic illnesses. No one can judge this better than the patient. For example, the main objective of hip replacement surgery is to reduce pain and improve the capacity to get around. The main goal of cataract extraction is to improve visual functioning--that is, the ability to perform activities that require eyesight, such as reading, walking without falls, and working on a computer.
In addition, there are often trade-offs between the length and quality of life. Important considerations are the side effects of treatment of HIV disease, the temporary diminution of functioning after coronary bypass surgery, or fatigue resulting from cancer chemotherapy. Even for life-saving treatments, this kind of trade-off can influence a patient's decision making among alternative courses of care.
Once again, these decisions and the trade-offs patients decide to accept should be informed by the science. The options presented to the patient, and their cost in terms of potential complications and impact on the patient's ability to go about his daily activities and in essence live his life, must be based on science. However, that does not mean that the final determination will always be based purely on estimates of efficacy. If the patient decides, for instance, that the survival advantage that chemotherapy would provide after her breast cancer surgery is not sufficient to be worth months of hair loss, fatigue, and the risk of heart damage, then that is her choice. The key is that we as clinicians must make sure that she has accurate, science-based information upon which to base that choice. Informed consent must be based on sound, scientifically verified information. Anything else, such as the sorts of "informed consent" advocated by "health freedom" groups, is in reality misinformed consent. It is our responsibility as science-based practitioners to do our best to make sure that the treatments we offer our patients are based in science and that the information about the relative benefits, risks, and costs of these treatments is also based in science.
The second thing that comes to my mind is the complete contrast between the sorts of efforts that Wu et al are undertaking and what purveyors of unscientific so-called "complementary and alternative medicine" (CAM) do. SBM is, through CER, undertaking systematic measurements of quality of life, along with the use of genetic tests that provide information about prognosis and predict response to therapy, making its first real steps towards truly "personalized" medicine. True, these steps are halting -- stumbling at times, even -- but they are steps towards the day when SBM can offer patients treatment options based on science and personalized to the characteristics of the biology of their disease that are unique to them, all while taking patients' own values and desires into account. Whatever the deficiencies and faults of SBM (and it's impossible not to concede that there are many), SBM is far closer to true "personalized medicine" than any CAM, and it is using CER to come even closer still. CAM has nothing comparable.
Thanks for this. I hope I live long enough to see some of this come into routine practice. It could signal the end of the dreadful trend to "integrate" CAM into SBM. There will always be fringie loons who won't give up their CAM, but the practices discussed here should go a long way to keeping it out of SBM.
Health economists have been doing this kind of research for a while, when trying to determine the 'utility' (loosely based on HRQoL) of different treatment options. We use different methods to ascertain and predict what a population of patients is likely to choose, but obviously this is hard to do precisely and doesn't speak to what any individual may or may not do.
I should also say that I managed to walk a couple of anti-vaxxers a few steps away from the precipice this weekend, and the stuff on this site helped big time.
Indeed, there has been progress over the past decade and a half, at least, in recognizing that the outcomes of medicine that matter, ultimately, are the experiences of patients -- of health and illness, and of medical services. Unfortunately, although communicating with patients may well be the single most important clinical skill, it receives very little emphasis in physician training, and doctors, alas, aren't usually very good at it.
I have said here before that "integrative" medicine ought to mean medicine that is integrated with people's values, experiences, goals and life worlds. Biomedical science gives essential information inputs to treatment choice and processes, but doesn't answer the question of what Joe or Sally ought to do in their particular circumstances -- nor does it offer them everything they need and deserve. It is the common failure of medicine in these respects that gives the opening for the woomeisters.
I'm very glad that Orac is displaying humility and understanding in this regard. I'm sure he's always had it, but I haven't seen much of it discussed here. If we can have more of it, I'll be an eager participant.
One aspect does bother me, however, expressed in this comment:
"If I had known that my life would be like this after radiation," he sighed, "I would have chosen the surgery."
Yeah, hindsight is 20/20. It's not clear from the description how likely this outcome was, though. The decision has to be based on the information going in; you can't judge it just by looking at the outcome.
He says no one told him about this happening. Perhaps because it is very rare? I mean, sure, if he knew this was going to happen, he would have chosen differently. However, what if he knew there was a 1 in 1000 chance of it happening? Would he still have chosen otherwise?
It's very easy to say afterward, "I wouldn't have chosen that path had I known it would do this." But the whole point is that we usually DON'T know that it would do this before we do it. We can talk about how likely it is to do something, and that can be a high probability or a low probability. However, just because something has a low probability doesn't mean it can't happen, and you are taking a chance if you do it. It is a small chance, true, but you know, if 100,000 people undergo a procedure that has a 1 in 1,000 chance of causing X, then 100 people are going to have X.
If we knew they weren't going to have the problem, it wouldn't be a 1 in 1,000 chance of happening. That they have a 1 in 1,000 chance of it happening means we don't know of anything that would make us think they have a lower chance.
Decisions need to be based on the probabilities going in. Unfortunately, as we see in the vaccination threads, people are very poor at assessing probabilities.
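The expected-count reasoning in the comment above can be sketched in a few lines. The 1-in-1,000 complication rate and the 100,000-patient cohort are the comment's own illustrative numbers, not data from any real trial.

```python
# Rare-at-the-individual-level, certain-at-the-population-level arithmetic
p_complication = 1 / 1000   # per-patient probability of complication X (illustrative)
n_patients = 100_000        # hypothetical number of people undergoing the procedure

expected_cases = p_complication * n_patients
print(f"Expected cases of X in the cohort: {expected_cases:.0f}")  # 100

# For any one patient, the odds of escaping X are still excellent
p_no_complication = 1 - p_complication
print(f"Chance any one patient avoids X: {p_no_complication:.1%}")  # 99.9%
```

That is the asymmetry patients have to weigh: a 99.9% chance of being fine for each individual, yet a near-certainty that some patients in a large population will be harmed.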
A similar effect to what Pablo mentions is the ongoing march of science. Yesterday's standard of care may not be the same as today's, because we've learned more, recognize more of the drawbacks of the old standard, or have come up with better approaches. But that doesn't mean that it was "wrong" or a bad choice for the people who got the old standard of care when it WAS the standard of care.
No decision may be properly evaluated except by the standard of what the person making the decision knew at the time. But it's not easy to avoid unfair evaluations based on additional information. "If I'd known then what I know now" is not really a meaningful way to think about things.
If "If I'd known then what I know now" worked, I'd win the lottery every week.
Sometimes I think we put undue pressure on patients to make decisions when there is equipoise between treatments. A particularly striking example is hypoplastic left heart syndrome. After diagnosis, the parents may be given up to four options: immediate listing for transplant, classic palliative reconstruction, hybrid palliative reconstruction, or non-surgical palliation. It is an extremely difficult decision for them to make in a short amount of (emotionally charged) time. No matter how much education they are given, they may not be happy in hindsight. I am not sure what would be a better option, but we definitely have a lot of room for improvement. I am glad to see that research is being done in this area and that SBM people are following it.
this is very good to read, thanks Orac!
are patient-based evaluations like this ever used in determining whether to grant or rescind approval for treatments? For example, when Vioxx was recalled, my rheumatologist was somewhat distressed, as he had many patients who were doing much better on Vioxx, which was reducing their risk of bleeding ulcer significantly - many of whom would have continued to take the drug even with the undisclosed risks of cardiac problems. Many were older senior citizens who preferred quality of life over duration of life. Many of those patients haven't responded as well to other drugs, even other COX-2 inhibitors.
As well as Enbrel works for me, if something came along that promised to work almost as well, but compromised my immune system less, I might switch. The stuff works miracles when it's in my system, but if I get an infection, I can't take it, so I regress. *sigh*
I'd wonder how they would, in collecting this sort of data, equalize it between the stoics and the dramatists? Does ease of seeing a doctor also factor in? The exact same degree of inconvenience or discomfort could be rated very differently by different people. As well, someone who goes to the doctor immediately upon developing symptoms of a side effect and gets effective treatment right away may perceive their side-effects as being more minor than someone who waits a long time for treatment, either because of finances, difficulty in finding a doctor, not recognizing symptoms or plain stubbornness.
All interesting food for thought, especially for those of us with chronic diseases.
The government just took away NICE's powers to reject drugs. Apparently, it's A-OK for PCTs and GP consortia to reject drugs when they're too expensive, but having it be fair is wrong, because that leads to Daily Mail headlines about people being denied medicine. In the Mailiverse, you can complain both about people being denied drugs that cost too much and about the skyrocketing NHS drugs bill.
Transparent rationing is never very popular, unfortunately.