While I was reading the assigned chapters of the book Soul Made Flesh (Zimmer, 2004) for class this past week, I came upon the story of the physician Thomas Sydenham. He was particularly good at making careful bedside observations while treating patients. He observed that diseases acted the same way in everyone rather than being unique to each individual. He made careful notes on disease symptoms, and even suggested that perhaps diseases should be treated as if they were distinct species.
Sydenham's work turned out to be controversial, because when prescribing a treatment, sometimes he would not use the traditional one, but instead would experiment with different treatments. Other physicians wanted to get his license revoked because his experimentation outraged them, even though Sydenham documented which treatments seemed to work better. We discussed this a bit in class, but I want to know why his new treatments were so controversial. If he devised his treatments in a manner that seemed logical, and he had experimental evidence to back it up, why was there such resistance? Is it perhaps that people could not accept that traditional treatments really did not work, and that they may now be responsible for the deaths of many people that could have been saved? Or is it simply a matter of people not being able to accept that things change?
Is there any data about patient outcome for him? Isn't patient outcome pretty much the only way to measure if a treatment is successful?
I don't know anything about Thomas Sydenham, but the initial rejection of Ignaz Semmelweis's discovery that hand washing made a difference in patient mortality is well documented, and includes all of the reasons you suggest.
Sorry to go political and all, but the conservative viewpoint (calls for tradition) is resistant to change and progress. Same as it ever was.
I take comfort in the fact that we made it out of that period after all. We'll make it out of this one too.
I want to note that I do mean "resistant" and not "opposed".
I would suspect that a story like this summons up in people's minds the image of a cold, unfeeling "Mad Scientist" who plays with human lives as if they were expendable. People may envision that he gave patients treatments that he had no evidence would work instead of the reliable traditional treatments. If this were true, and he did it without informing the patient, he would be partially responsible if any of them died. Of course, I have no idea what he was really like. He could just as easily have been a great humanitarian who gave people new, more effective treatments and saved them from ineffective older practices. What I'm saying is that people easily leap to the former conclusion, most likely due to their own fears of illness.
You have to realize that Sydenham worked at the juncture between the Renaissance and the Age of Reason. During the Renaissance, the cool new ideas were all from the classical heritage. Galen was the utmost, baby. You didn't go around dissing those Roman and Greek authors, because they were just right, right, right!
The Age of Reason actually got started just as Sydenham was doing his bit. Remember, Newton didn't do his thing until the 1660's and the world didn't get delirious about him until a few decades later. Somebody (Pope?) wrote
Nature and Nature's laws lay hid in night:
God said, "Let Newton be!" and all was light.
They were still going through a transition period here. The Salem Witch Trials were in the late 1690s. So let's not get too self-righteous about how stupid they were to resist Sydenham's work -- the scientific methodology that we take for granted wasn't such a certainty back then.
At the risk of defending "the establishment". . .
I don't know much about this case but from what you say here it doesn't sound like he was using very rigorous methodology. Without controls and sufficient sample size his results would be anecdotal at best.
Just curious, but what does everyone think of this?
http://www.lrb.co.uk/v29/n20/fodo01_.html
You left out some crucial information...
When did this occur?
Where did this occur?
I would have different views according to whether Sydenham was practicing in 16th-century France or in present-day Houston.
Imagine if a doctor today started trying different stuff just because he felt like it. "Oh, this thing made the patient die. Oops." Do you think the "establishment" would be pissed off about his "flaunting of convention"? I think so.
...but in My Man Sydenham's case, it was a very different world. That was the beginning of the Scientific Method (tm). With very few exceptions, up until the Enlightenment "education" meant learning by rote the unassailable wisdom of those who had gone before, and damn the evidence. It didn't occur to most to even LOOK for evidence.
And before we criticise such close-minded stuff, how many of the general population actually know how to prove, for example, that the earth orbits the sun? Few. Most of what we all know we take as given from authority figures. The power of science is in challenging EVERYTHING, but it is not a simple or obvious mindset. Most of what most people believe is gained entirely unscientifically, and that includes scientists themselves.
What's amazing is how much impact the scientific outlook, practiced by such a small number and only imperfectly, has had on our understanding of the world. It's like humans were *almost* brilliant, and we just needed that one little tool to open the floodgates.
So as tempting as it is to mock the generations of doctors who uncritically bled their patients to death, the task for us is to work out the next, equally obvious breakthrough that will have future generations wondering how *we* missed it for so long.
Science is not pure. The methodology and logic are, but the work is done by people. They get invested in their hypotheses and theories, and they get very territorial about them. When the new kid on the block turns up with a better idea, they will kick it around hoping it will die, until it is clear the new kid is right, at which point they will reluctantly adopt it (which is why science works better than religion). Worse, these invested guys in senior positions often get to decide where the funding goes, so the young guy with the bright idea cannot even get funding to do his research. For this reason it has been said that "the greatness of a scientist's work can be measured by how many years they held up research in the field" and that "science progresses one death at a time." I think the latter is too gloomy - science is competitive, so eventually the truth comes out, but it is not always welcomed at first. True back then, true now, but things happen faster now.
I don't know about Sydenham, but as regards Semmelweis, it's worth noting that he was an epic jackass: condescending to his peers and bellicose to no end.
Generally, people ignored him and his findings because they didn't like him.
I imagine it's not impossible that the same scenario applies to the physician about whom you're speaking.
Things are no different now: as long as doctors adhere to what's commonly accepted, they can't be sued for malpractice. It's only if they fail to apply what's commonly accepted properly, or try something else, that they can legally be held responsible.
This wouldn't be so unreasonable if most medical procedures were well-studied and the totality of their risks and benefits known, but that isn't the case, particularly with surgical interventions. It's remarkable how scarce science-based medicine actually is.
And let's not forget those brave souls who "flout" convention, either!
(Sorry, I'm an editor who lurks among all you scientists, admiring your breadth of knowledge yet thrilled to add my mite from time to time!)
If you've got it, flout it.
#8
Here's the money quote:
"The moral is that if you want them to have wings, you will have to redesign pigs radically. But natural selection, since it is incremental and cumulative, can't do that sort of thing. Evolution by natural selection is inherently a conservative process, and once you're well along the evolutionary route to being a pig, your further options are considerably constrained; you can't, for example, go back and retrofit feathers."
Pretty much not convincing at all, and the "conceptual argument" even less so.
This will touch on several aspects of health care related to the above.
I sincerely feel that outcomes of treatment modalities are not being documented as well as they should be. There is little follow-up by providers after treatments are initiated - something like "take an aspirin and call back if you still feel bad in the morning," although today, the after-hours dictum is more like, "If this is an emergency, call 911."
Trying different treatments with careful follow-up observation isn't such a bad idea, since standard treatments aren't necessarily (or always) effective. Many docs have their own favorite treatments, based on their own personal experience, and that's fine, except it's a small sample, and not gathered under controlled conditions. As for controlled studies, most are funded by drug companies who may have a vested interest in the outcomes, and that interest may indeed 'bias' the published results. As an example, I'm suspicious of a recent study stating that blood pressure normals for older individuals were too high and needed to be revised downward. This resulted in a huge increase in the sales of blood pressure meds, among the most profitable to the industry. The same goes for correlating cholesterol to coronary episodes, and its effect on the sale of statin drugs.
But I'm getting away from my main point, which is that we would benefit from better follow-up regarding outcomes. One way would be to document patient outcomes in a database, based not just on follow-up exams and lab work, which often are not done anyway, but on gathering and documenting patient input.
When I was five, I was given a sulfa drug, which produced hives. Nowhere was that ever documented except perhaps for a scribbled notation, even though allergic reactions can intensify in subsequent administrations, and cause many deaths and impairments annually. Granted, today's triple sulfas are less toxic than earlier ones, but hives one time can produce a lethal condition later on. Years later, I was given a moderate dosage of Phenobarbital, which failed to metabolize properly, and produced extreme lethargy and disorientation after several days. I stopped the drug, but again, there is no record of that on file anywhere.
Even more recently, I was in an ER for a kidney stone episode, and a nurse gave me a painkiller IV that should only be administered IM, causing a left arm vein collapse, but hey, mistakes happen. What hit me harder that day was hearing someone gasping for breath in an adjoining cubicle. After a few minutes, a doc noticed and yelled for some epinephrine, and that resolved her problem, but, the problem was caused by the staff giving that lady a drug she was allergic to.
With today's computer technology, there's no reason not to have all relevant medical data on file, and it would pay dividends in providing better health care. When one has a doctor appointment, one could answer questions online the night before, as well as provide a more accurate medical history if it's an initial visit. Now I realize that might put clipboard manufacturers out of business, but it would shorten the Q and A time required with the patient, would add valuable medical data, and might even generate an Rx up front, requiring only a brief confirmatory exam and interview.
Another possible benefit: if a patient had a life-threatening episode, the online query could be set up to trigger a rapid response based on the data given. If certain answers were given - chest pain, shortness of breath, chest tightness - a simple AI algorithm could trigger a page to the on-call doc, generate a call back to the patient to confirm, and get him/her into the ER if warranted.
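For what it's worth, here's a minimal sketch of the kind of rule-based trigger described above. It's a toy, and the paging and callback hooks (page_on_call_doc, request_patient_callback) are hypothetical placeholders, not any real paging system:

    # Toy rule-based screen over a pre-appointment questionnaire.
    RED_FLAGS = {"chest pain", "shortness of breath", "chest tightness"}

    def page_on_call_doc(patient_id, symptoms):
        # Hypothetical hook: a real system would page the on-call doc here.
        print(f"PAGE: patient {patient_id} reports {', '.join(sorted(symptoms))}")

    def request_patient_callback(patient_id):
        # Hypothetical hook: queue a confirming call back to the patient.
        print(f"CALLBACK: confirm symptoms with patient {patient_id}")

    def screen_responses(patient_id, answers):
        """Return True if the questionnaire answers warrant an alert."""
        reported = {symptom for symptom, present in answers.items() if present}
        flagged = reported & RED_FLAGS
        if flagged:
            page_on_call_doc(patient_id, flagged)
            request_patient_callback(patient_id)
        return bool(flagged)

    # Example: answers entered online the night before an appointment.
    screen_responses("patient-042", {
        "chest pain": True,
        "shortness of breath": False,
        "headache": True,
    })

The screening logic itself is trivial; the hard part is the systematic data collection the rest of this comment argues for.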
To aid in assessment of treatment efficacy, follow-up online data collection could go into databases to help document what works, side effects, etc. We have the technology to do that, although I'm sure the drug companies would resist, since it doesn't suit their interests. But if there's anyone out there in the medical or computer technology areas who would like to further these ideas, go to it! By the way, since Google didn't think of it yet, Microsoft may be moving in that direction. Check out: http://tinyurl.com/ytmyl7
I'm of the opinion that such resistance has to do with responsibility.
By condemning the method that will lead to better treatment, we can take refuge in the comfort that we did the best our present knowledge permitted.
... there is no gain without risk.
To my point: is the medical establishment responsible for avoiding death, or improving life? ... or more to the point ... avoiding the blame for death, or garnering credit for saving life?
All interesting comments, but sailor comes closest (again) to what I would see as an accurate explanation. I would put it thus:
Beliefs exist in our minds in a hierarchy with the most important beliefs near the top and possessing the strongest connection to our personal identity - and therefore armed with the strongest protective emotions.
Scientists spend their lives nurturing their garden of scientific beliefs. After a few years, they largely share those beliefs with the scientists they work with, giving those beliefs social-identity power as well. When a "new theory" comes along, it automatically threatens that well-cultivated garden and evokes negative emotions that can border on mental pain or anguish in those exposed to it. Higher-order (paradigmatic) scientific beliefs - like evolution, or the medical belief in your example - threaten dozens or hundreds of lesser beliefs below them in the hierarchy. They will always be met with incredulity and even anger, and can take many years to become accepted - at which point they will enjoy the same emotional protections.
When exposed to a new theory, then, many scientists are most likely to put their minds to work trying to imagine why it could not possibly be true, or perhaps even trying to discredit the person proposing it - to relieve the mental anguish posed by that threat.
Those few scientists who hold the scientific method itself (and perhaps the value of using it to expand scientific knowledge) as an even higher belief in their hierarchy (higher than the theories and explanations that live below it) - will have an easier time of it, and may even be eager to explore the evidence. But there are not many brave scientists willing to go through their careers believing that everything they know is provisional and subject to new discoveries that could falsify them. It's just a scary proposition that makes one feel insecure.
What it comes down to for scientists and non-scientists alike - is that our strongest beliefs form our identity and we will protect them instinctively.
(The comment I made about sailor indicates that he or she and I probably share some beliefs near the top of our respective hierarchies - although perhaps not this one.)
Atul Gawande wrote an interesting piece a few years ago called "Desperate Measures," which talks about the late Harvard surgeon Francis Daniels Moore. He was pretty fearless in trying out experimental procedures on his patients--and made some pretty important progress, and also hurt a lot of people. A lot of physicians were not too happy with his methods.
I don't know anything about Sydenham. But the thing about medicine is that it's not science. The goal is generally to help patients get better, not to use them as your lab bench, and risk is pretty much inherent in all experiments. Just look at all of the drugs and other therapies that seemed like a good idea, seemed like they had a lot of solid evidence backing them up, and then went horribly wrong.
In all of these cases, I'm sure there were other reasons the medical community resisted--personality, general reliance on tradition, etc.--but it's a pretty scary prospect to start experimenting on people when you thought your way was working ok before.
I think the need to avoid the cognitive dissonance of accepting that you'd been mistreating patients your whole career would definitely be a major part of the unconscious motive. I've seen the equivalent (in another area) operating for nearly two decades now among a large group of (mostly older) practitioners.
Here's what the Encyclopedia Britannica says about him:
And here is a biography of the man from www.sydenham.org.uk:
I can find no references to back up the claim made in the quote above. There are some references to Sydenham having been at odds, or mildly in dispute, with some of his contemporaries; however, this is nothing unusual in the 17th and 18th centuries, as there were many competing 'schools' of medicine who were always ready and willing to pour scorn onto any or all of their rivals. Sydenham himself poured scorn on what was probably the greatest advance in medicine achieved in the 16th and 17th centuries: the establishment of anatomy, based on dissection, as the foundation of a medical education.
'This is all very fine,' he told a young student, 'but it won't do - Anatomy - Botany - Nonsense! Sir, I know an old woman in Covent Garden who understands botany better, and as for anatomy, my butcher can dissect a joint full and well; now, young man, all that is stuff; you must go to the bedside, it is there alone you can learn disease.' quoted from The Greatest Benefit to Mankind by Roy Porter.
I suspect your claim from above is your author gilding the lily without historical justification!
As Feynman would always advise and direct us: observation and experiment.
When I first enlisted in the US Navy, one of our core values was "tradition." It was supposed to evoke the memory of our 200-year-plus history as a highly professional force, but about halfway through my career "tradition" was removed from the list of our core values.
It seems that tradition also includes such things as sexism and racism, as well as just a general reluctance to change for the better when necessary. As Churchill said (speaking of the British navy), "naval tradition" includes such things as "rum, sodomy and the lash." So tradition was out, but we did get to keep the cracker jacks.
One issue that I do not find discussed (apologies if I missed it) is whether he informed his patients that this is what he wanted to try, and then received each individual's consent to try his experimental approach. If not, then what he did was unethical.
I am questioning whether implied consent was agreed to, not the need to experiment to find improved treatments.
Ed, concepts about consent have changed over the centuries. At the time, survival rates for those who consulted physicians were about the same as for those who didn't. There was no presumption that the physician would heal you, just that he would try. The onus was on the physician, in a paternal role, to try their best. Some might discuss this with the patient, but most probably would not, so as not to discourage them (similar to what happens in Japan right now).
Was his behaviour ethical? It wouldn't be now, but at the time it would have not raised any questions, and given the communication styles of the time, was probably much less distressing than trying to use our modern methods in that setting.
I'd say his behaviour was ethical then, but wouldn't be now.
"An epic jackass, condescending to his peers and bellicose to no end" would probably describe more Heads of Departments or Chiefs of Staff than it would describe the erring physician. The public is only vaguely aware of the insider politics that go on in any institution or on any medical staff or of the economic issues so frequently intruding on proper patient care.
Examine some of the cases at the Semmelweis Society, which assists physicians who are in varying degrees of hot water with their medical staffs, or read the JCAHO statement on what constitutes a "disruptive" physician.
Just yesterday I heard the author of the book Overtreated, Shannon Brownlee. She claims that a very large percentage of current medical practice is based on tradition, not science. Physicians are not necessarily scientists, and may not evaluate their own practices from the viewpoint of scientific analysis.
They get away with it because most of the time people just get better no matter what the doctor does.
Semmelweis, if I recall correctly, *was* the head of his ... well, I was about to say 'department', but I think he may have been head of his institution. It's difficult to recall now, but I do remember that he tended to harangue the doctors and nurses under him in the hierarchy, so he was hardly a loud-mouthed peon.
(What I speak of is recollection from a term paper on Semmelweis I wrote for an Infectious Human Disease* course).
*Tell me that doesn't sound like a great name for a punk band.
Pelican's Point: When exposed to a new theory, then, many scientists are most likely to put their minds to work trying to imagine why it could not possibly be true, ... to relieve the mental anguish posed by that threat.
And given that "most new ideas are wrong", this (without your parenthetical clause) is in fact the correct response. It's an interesting observation about how the practice of science has harnessed certain features of human psychology to the purposes of science.
David Harmon: It's good then that you accept the psychology involved. And you are no doubt correct that most new ideas don't pan out. However, ideas that fail for objective reasons add to the knowledge base - scientists now know one path that probably won't be too fruitful - opening up research funds for the "more fit" ideas.
Ideas that fail because they threaten existing belief systems are opportunities lost - ones that will only be resurrected after many times as much funding has been spent overcoming the barrier of having been previously discredited unfairly - sending researchers down less promising paths in the interim.
So, do you really think it's good that new ideas are rejected because they cause anxiety in the minds of some scientists - and not because they do not fare so well when examined more thoroughly, but objectively?
PS - Answering this allows me to emphasize that, generally, neither scientists nor doctors are scoundrels because of this. I am sure that the psychology applies to any field of professional knowledge where practitioners have a large stake in a stable field of "truths" that they can exploit to their own advantage. I can't imagine many fields where this would not apply. (I do consider physicians as falling under the broad meaning of the term "scientist.")
My own experience as the occasional recipient of medical services leads me to believe that medicine has its share of practitioners whose higher-order identity beliefs are more focused on their own reputation and income than on the careful integration of new procedures and better treatments. And, as with most fields, those most opposed to adapting to change tend to be the older members, who have a large, fairly stable income stream and whose positions are most securely attached to traditional knowledge - which requires no great mental effort to understand and apply.
I have also been totally amazed at the many doctors I have met - some older doctors too - who are completely open to new knowledge and willing to empower their patients with their objective views of the risks and possible benefits of such things.
Excellent point, Pelican. Often the greatest impediment to advancement in any field is the "conventional wisdom," which just happens to be wrong. That is what Thomas Kuhn talks about in his The Structure of Scientific Revolutions. Peers are good for doing what Kuhn calls "normal science", science within the existing paradigm. They are absolutely terrible at evaluating things outside that paradigm, and often get it wrong.
There are a number of incorrect paradigms in "modern" biological science. They can't be disputed too much, because doing so causes angst among the "believers," who then reject your proposals or manuscripts, and you become "shunned". An example would be "homeostasis" - a belief for which there is zero evidence and which is actually wrong. Yet the concept of "homeostasis" has taken on such a dogmatic following that it continues to be widely used. PubMed lists over 177,000 citations, with over 10,000 in 2007 so far.
The problem of old wrong ideas being perpetuated is at least as severe as the problem of new correct ideas being rejected. Often until you get rid of the old wrong idea there isn't enough "space" for the new correct one.
Damn.. Didn't know Dr. House was that old, no wonder he limps. lol
daedalus2u: "There are a number of incorrect paradigms in "modern" biological science."
I'm not a biologist but I think you are right. And they each sit at the end of a string of paradigms or beliefs that they replaced - but only after bitter battles to defend them in most cases.
Does anyone really believe that we have finally arrived at the place in science when all paradigms are finally proven - and we will no longer have to worry about adjusting our belief systems to accommodate new discoveries?
I'll bet many do believe just that - especially in their own fields.
PS - My sense is that the theory of evolution is pretty safe as a real paradigm. I suspect that our understanding of how it works will increase in a way that it will become a larger paradigm. I think there may be a missing piece of the puzzle that once discovered, will allow an expanded version to explain the existence of life from non-life - and not just how living things change. Perhaps something involving thermodynamics and information theory.
The big problem with peer review is that anything that challenges an existing paradigm requires a higher standard of evidence. But research funding to acquire that evidence is much harder to get. When funding is extremely tight, paradigm breaking research isn't funded. It is considered too "high risk". What "high risk" actually means, is that the reviewers will look foolish if the research program fails, or wrong if it succeeds.
What's your body temperature today?
Hidari: Fodor thinks that evo-devo is an alternative to natural selection? That's just... wrong. And what the heck is "brown moulting"?
homeostasis is wrong? are you for real? please elaborate.
Lua, today being Monday and the libraries being accessible again, I skim-read the Sydenham articles in both the Dictionary of Scientific Biography and the Oxford Dictionary of National Biography; neither of them makes any mention of any attempt, or even threat, to remove Sydenham's licence to practice. I am reluctant to criticise Carl Zimmer without having read his book, or without knowing what source he gives for the claim that you say he has made, but it would appear that he is guilty of writing bad history of medicine. As a result of his book advocating quinine as a cure for fever, Sydenham did become embroiled in the controversy between the Royal College of Physicians, his licensing body, and the rival organisation, the College of Chemical Physicians, who were also trying to get Royal patronage. As a result of this he was attacked by Henry Stubbe in his book An Epistolary Discourse Concerning Phlebotomy and The Lord Bacon's Relation of the Sweating-Sickness Examined (1671), and Stubbe might have demanded the withdrawal of Sydenham's licence; however, not having Stubbe's book to hand, I can't say. Stubbe himself was a renowned controversialist who attacked many people without real effect.
Yes, homeostasis is completely wrong. What exactly is "static" about cellular physiology and how is that "stasis" maintained?
In reality nothing is "static"; all physiological parameters that are important are regulated via feedback control. Feedback control necessitates a "setpoint", deviations from that "setpoint", and compensatory mechanisms activated by those deviations, which serve to raise or lower the physiological parameter in question so as to move it closer to the "setpoint".
Is body temperature static? No it isn't. It goes up and down all the time, sometimes with good reasons that are understood (infection induced fevers). Sometimes for reasons that are unclear (temperature variation around ovulation).
Obviously there is a temperature "setpoint". Is there any evidence to suggest that that "setpoint" is constant and fixed? No there isn't. There is excellent evidence that temperature is a control parameter that the body uses to adjust physiology, as in an infection induced fever.
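To make the distinction concrete, here is a toy sketch (illustrative numbers, not physiological data) of a variable regulated by negative feedback toward a setpoint, where the setpoint itself is a control parameter the system can move:

    def regulate(current, setpoint, gain=0.3):
        """One step of negative feedback: move the variable toward the setpoint."""
        error = setpoint - current
        return current + gain * error

    temperature = 36.5
    setpoint = 37.0          # baseline setpoint
    for hour in range(12):
        if hour == 4:
            setpoint = 38.5  # infection: the body raises the setpoint (fever)
        if hour == 9:
            setpoint = 37.0  # infection resolves; setpoint returns to baseline
        temperature = regulate(temperature, setpoint)
        print(f"hour {hour:2d}: setpoint {setpoint:.1f}, temp {temperature:.2f}")

Nothing here is "static": the variable is always chasing the setpoint, and the setpoint itself is something the organism adjusts.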
The "homeostasis hypothesis" would suggest that external control of body temperature would result in superior health, a prediction which has been falsified by the observation that individuals where fevers are blocked pharmacologically or via external cooling exhibit inferior immune response.
The "non-homeostasis hypothesis" suggests that any and all physiological parameters than can be changed by an organism are "fair game", for the organism's control systems to change so as to facilitate survival by that organism.
Why should some physiological parameters be "special" and exempt from modulation to facilitate survival? There is no data to suggest that any physiological control parameters are "special" and are kept "static", as the homeostasis hypothesis requires. There is only our inability to measure those variations, our ignorance of what those changes mean, and our hubris in believing that what we don't know or can't measure isn't important.