Back in December, I took issue with a highly irritating article by someone who normally should know better, Jonah Lehrer, entitled The Truth Wears Off: Is There Something Wrong With the Scientific Method?, so much so that I wrote one of my typical long-winded deconstructions of the article. One thing that irritated me was contained in the very title itself, namely the insinuation that the "decline effect," which is the tendency of effects observed in early scientific experiments demonstrating a phenomenon to "decline" or become less robust as more and more experiments are performed, is somehow some mysterious phenomenon that scientists deny. If you want the long version of what I found so wrong about Lehrer's article, you can go to the link. The short version is that not only is the "decline effect" not nearly as mysterious as Lehrer made it sound, but it's not some sort of serious, near-fatal problem with how science is done. Indeed, it's not particularly mysterious at all to many of us who actually--oh, you know--do science, particularly those of us who do medical science and clinical trials. Nor was I the only one to take serious issue with Lehrer's article. Steve Novella and P.Z. Myers did as well (not to mention a certain "friend" of the blog). None of us were pleased with Lehrer's characterization of the "decline effect" as the "truth wearing off" and his implication at the end of the article that this effect means that it's impossible ever really to "prove" anything.
It turns out that Jonah Lehrer has responded to some of the criticisms of his article in a follow-up entitled More Thoughts on the Decline Effect. Unfortunately, his response to criticism demonstrates that he's pretty much missed the point again:
This week, the magazine published four very thoughtful letters in response to the piece. The first letter, like many of the e-mails, tweets, and comments I've received directly, argues that the decline effect is ultimately a minor worry, since "in the long run, science prevails over human bias."
This is, of course, not a bad argument against the sensationalism that Lehrer demonstrated regarding the decline effect, and, as an example of this argument, Howard Stuart cited the Millikan oil drop experiment, in which Robert Millikan measured the charge of the electron. His first estimate of the charge was too small, and it took several years before the correct value was finally agreed upon after many investigators tried to replicate Millikan's results. As Stuart points out, the reason it took so long for Millikan's result to be corrected to the currently accepted, higher value for the charge of an electron was that scientists were biased towards rejecting results that differed too far from Millikan's. Lehrer responds by in essence repeating Stuart's point using an excerpt from a talk by Richard Feynman and then pointing out that this is a good example of selective reporting. Well, yes, no one is denying that, but the point is that science is self-correcting and that science does ultimately prevail over human bias.
Lehrer's response:
But that's not always the case. For one thing, a third of scientific papers never get cited, let alone repeated, which means that many errors are never exposed. But even those theories that do get replicated are shadowed by uncertainty. After all, one of the more disturbing aspects of the decline effect is that many results we now believe to be false have been replicated numerous times. To take but one example I cited in the article: After fluctuating asymmetry, a widely publicized theory in evolutionary biology, was proposed in the early nineteen-nineties, nine of the first ten independent tests confirmed the theory. In fact, it took several years before an overwhelming majority of published papers began rejecting it. This raises the obvious problem: If false results can get replicated, then how do we demarcate science from pseudoscience? And how can we be sure that anything--even a multiply confirmed finding--is true?
Let's just put it this way. If a scientific paper is never cited and the experiments described in the paper are never repeated, then I would argue that the science in the paper is probably just not that important. Think about it. The reason the Millikan oil drop experiment was repeated time and time again until science got it right is that the value of the charge of an electron is a very basic, fundamental value in physics. It was (and is) important to know what it is. Science that reports fundamentally important results will be replicated. Science that does not might never be replicated, but, when you come right down to it, is it really that big a deal if it isn't? I'd say that it probably isn't. Yes, sometimes a bit of important science lies buried in the literature for years without anyone appreciating its importance, only to be near-miraculously discovered and extended by another scientist, but such stories are relatively uncommon.
More importantly, I don't understand why Lehrer rehashes an example he used in his original article. As you recall, he used fluctuating asymmetry as his most prominent example of the decline effect, devoting considerable verbiage in his article to it. Basically, as described, fluctuating asymmetry appeared to be an important and robust result, with a number of papers finding results that supported the hypothesis. However, over time, the results became less and less robust, to the point where it appears that the hypothesis is not so well supported after all. Again, this is nothing more than science correcting itself. As I've said before with regard to medicine, it may take longer than we like. It might be a lot messier than we like. It might even be uglier than we like. But eventually, science finds the way, and false hypotheses are rejected. Lehrer seems to think this should happen instantaneously, but if science were that clear cut it wouldn't be so hard to do at the cutting edge. Even more distressingly, Lehrer goes one worse than his previous article, which he concluded by implying that the decline effect somehow makes it impossible to know anything about the universe with any degree of certainty. Now he's implying that somehow the decline effect makes it horribly difficult to differentiate science from pseudoscience. I don't want to make light of the demarcation problem or imply that it's always easy to distinguish pseudoscience from science, but let's just put it this way: The decline effect is not what makes demarcation between science and pseudoscience difficult. It's probably not even a major consideration.
I find Lehrer's question off-base as well. How can he be sure that even a multiply confirmed finding is true? He can't! We can't! Repeat after me, Jonah: Scientific conclusions are always provisional, always subject to change. We can never be sure that even multiply confirmed findings are "true," whatever "true" means! Why is this such a difficult concept for Lehrer to grasp? He seems to think that science has to be able to "prove" something once and for all, or else it's a failure, kind of like the way Mike Myers says, "If it isn't Scottish, it's crap!" If a scientific conclusion isn't, well, conclusive, Lehrer seems to be saying, it's crap. However, there's almost certainly no such thing as a "perfect" or final understanding of anything. This is science's greatest strength, but also its greatest Achilles heel in terms of acceptance by the public. How often do we hear people complaining about how one week there is a study concluding that this or that is unhealthy, only to be followed less than a year later by another study claiming that the very same thing is healthy? Or how often do we hear cranks use changes in scientific "truth" as "evidence" that science is inherently unreliable, the discovery of H. pylori as the most common cause of duodenal ulcers being a favorite example? Yes, because in the early 1980s it was discovered that H. pylori causes ulcers, causing a radical change in how physicians treat them, cranks, particularly alt-med cranks, like to cite the initial resistance to the H. pylori hypothesis as proof that science is unreliable and changes too radically--and therefore by implication their woo must work.
Lehrer instead chooses to take an easy swipe at his critics:
These questions have no easy answers. However, I think the decline effect is an important reminder that we shouldn't simply reassure ourselves with platitudes about the rigors of replication or the inevitable corrections of peer review. Although we often pretend that experiments settle the truth for us--that we are mere passive observers, dutifully recording the facts--the reality of science is a lot messier. It is an intensely human process, shaped by all of our usual talents, tendencies, and flaws.
Actually, we critics of Lehrer have been doing anything but reassuring ourselves with platitudes about the rigors of replication. Indeed, all of us who bothered to write about Lehrer's article spent considerable time pointing out how regression to the mean, publication bias, and a variety of other factors could explain much of the decline effect. We spent a lot of effort trying to explain how it is unsurprising that initially promising results often appear less so as more and more scientists investigate a question, developing along the way better techniques and approaches to investigating the question and approaching it from different angles. We spent a lot of verbiage describing how it is not at all surprising that new drugs, which seem to work so well in early clinical trials, appear to lose efficacy as their indication is broadened beyond the homogeneous initial small groups of subjects to more patients whose characteristics are less tightly controlled. Indeed, one of the letter writers pointed this very fact out to Lehrer, but he chose not to address this point directly.
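To see just how far publication bias alone can go toward manufacturing a "decline effect," here's a minimal sketch, a toy model with made-up numbers and nothing more, certainly not anything from Lehrer's article or the letters: small early studies get published only when they cross the magic z > 1.96 threshold, while later, larger replications get published regardless, so the literature's average effect size "declines" toward the true value even though nothing about the phenomenon has changed.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2  # modest true effect, in standard-deviation units


def one_study(n):
    """Simulate a two-arm study with n subjects per arm; return effect estimate and z-score."""
    treated = rng.normal(TRUE_EFFECT, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    return diff, diff / se


# Early literature: small studies, and only "significant" (z > 1.96) results get published.
early_published = [d for d, z in (one_study(20) for _ in range(500)) if z > 1.96]

# Later literature: large replications, published regardless of outcome.
late_published = [one_study(200)[0] for _ in range(500)]

print(f"true effect:                 {TRUE_EFFECT:.2f}")
print(f"mean published early effect: {np.mean(early_published):.2f}")  # inflated
print(f"mean published late effect:  {np.mean(late_published):.2f}")   # close to the truth
```

In this toy model the "truth" never wears off; the selection filter does.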
The decline effect is something any physician who does clinical research knows from experience (although he may not call it that) because he sees it so often. To Lehrer it seemed to be some sort of shocking revelation in clinical research. The expectation that randomized clinical trials can overestimate the efficacy of new drugs is the very reason why, after drugs are released, physicians sometimes carry out what are known as "pragmatic trials," which are designed to find out how effective a treatment is in everyday, real-world practice, where the conditions are not nearly as controlled and the patient populations not nearly as homogeneous as they are in randomized clinical trials. Efficacy results determined in pragmatic trials are virtually always less robust than what was measured in the original randomized clinical trials. Not that any of this stops Lehrer from simply repeating the same stuff about big pharma having incentives to shape the results of its science and clinical trials. We get it; we get it. Science is done by humans, and sometimes human biases and motivations other than scientific discovery influence the humans who do science.
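Again, a minimal sketch with made-up numbers, purely for illustration: if a drug mostly benefits the kind of tightly selected patients enrolled in the original RCT, then simply diluting that subgroup in a broader real-world population is enough to shrink the measured benefit, with no fraud and no mystery required.

```python
import numpy as np

rng = np.random.default_rng(1)


def average_benefit(frac_responders, n=10_000, benefit=0.30):
    """Mean benefit in a population where only a fraction of patients actually respond."""
    responds = rng.random(n) < frac_responders
    return np.mean(np.where(responds, benefit, 0.0))


# Tightly selected RCT population: most enrollees resemble the responders.
print(f"efficacy in the RCT-like population: {average_benefit(0.9):.2f}")

# Broader, messier real-world population: responders are diluted.
print(f"effectiveness in a pragmatic trial:  {average_benefit(0.4):.2f}")
```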
Finally, unfortunately, Lehrer strikes the same wrong notes as he did before when trying to answer criticisms that he's giving aid and comfort to denialists. Here's what he writes in response to just such a criticism:
One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. Natural selection and climate change have been verified in thousands of different ways by thousands of different scientists working in many different fields. (This doesn't mean, of course, that such theories won't change or get modified--the strength of science is that nothing is settled.) Instead of wasting public debate on solid theories, I wish we'd spend more time considering the value of second-generation antipsychotics or the verity of the latest gene-association study.
Here Lehrer demonstrates a profound misunderstanding of how science denialism works. Here's a hint: The reason such topics become the targets of scientific denialism is that the conclusions of science run up against very strong religious, political, or primal views. Evolution runs up against fundamentalist religion that cannot abide the concept that humans evolved from "lower" creatures. Those with political views that oppose government-mandated action to lower the emissions of greenhouse gases attack AGW science because of its implications. Although the treatment of mental illness can certainly bring out the crazy, if you'll excuse the possible insensitivity of the term (for example: Scientology), for most people there just isn't the same level of intense ideological investment in the efficacy of second-generation antipsychotics as there is in whether our understanding of AGW is accurate or whether humans evolved from "lower" creatures. Ditto whether the latest gene association study is correct. Besides, the efficacy of second-generation antipsychotics and the results of the latest gene association study are not settled science in anything like the way that evolution is; scientists still debate them intensely. Consequently, when the public hears about such studies, they usually don't know what to think of them and promptly forget them.
More importantly, after discussing the decline effect and impugning the reliability of science, Lehrer still can't seem to give a coherent explanation as to why AGW and evolution are such reliable, well-founded scientific theories compared to what he seems to perceive as the unreliability of the rest of science. Worse, he hasn't addressed many of the more cogent criticisms of his work, in particular the numerous attempts to explain to him why it is not at all remarkable that second generation antipsychotics have not proven to be as effective as initial results suggested or why it is not particularly surprising or disturbing that fluctuating asymmetry never panned out. Lehrer had a great opportunity to explain why making scientific conclusions is so difficult and why all scientific knowledge is provisional. Those points are in his articles on the decline effect, but they're buried in the surrounding implication that the decline effect is mysterious. Then in the last paragraph of his response to critics Lehrer has the chutzpah to declare that "there is nothing inherently mysterious about why the scientific process occasionally fails or the decline effect occurs."
That's what we've been trying to tell Lehrer since he wrote his first article on the decline effect, but he hasn't been listening! Lehrer is, of course, correct when he quotes a scientist asserting that the decline effect can be studied by science; it's just that he doesn't seem to realize that how and why the scientific method fails have been subjects of research ever since there has been a scientific method.
This is my first post here, and I love the blog. Please don't take this criticism badly. This article is poorly edited, saying in many cases the opposite of what is intended in a sentence, which is rare for here. Hope everything is all right.
(e.g. Lehrer seems to think this should happen instantaneously, but if science were that clear cut it wouldn't be so easy. )
I love this sentence though:
example? yes, because in the early 1980s it was discovered that H. pylori causes ulcers, causing a radical change in how physicians treat them cranks, particularly alt-med cranks
Ah, if only we could change how we treat them cranks.
Good article. I find Jonah Lehrer frustrating; he seems to be so very good at writing the little pictures while getting it completely arse-backwards on the big picture. I loved every chapter of Proust Was a Neuroscientist, but hated the framing discussion.
Oh, and while we're copy-editing for you, where you say "the Millikan oil drop experiment was repeated time and time again until science got it write" - that should be "right".
Since we have a newbie here, I'll reiterate my policy regarding grammar and style trolls. Basically, if a comment is primarily nothing more than a criticism of spelling, style, or grammar, I have a tendency to delete it with extreme prejudice after correcting the mistake (if it is a mistake). This is a blog, not a magazine or professional publication, and I usually only have time for one shot at looking a post over. I've been at this over six years, and one thing I've learned in that time is that all grammar, spelling, and style criticisms tend to do is to derail the comment thread into minutiae unrelated to the topic at hand. They add nothing. In fact, they detract from the experience.
So please stick to substance.
You try cranking out 1,000 to 3,000 words a day after working 12 hours at your day job and see if you can crank out posts like this without the occasional spelling or grammar faux pas.
If there were something wrong with the scientific method, how exactly would you prove that? Intuition? Revelation? Coin toss? Hallucinations induced by heavy drugs?
The only reason the decline effect is even described is because the scientific method really does work (if sometimes slowly) to improve our understanding.
If few people read the original study, few people are exposed to the paper that contains the error.
If no people cite the erroneous paper, then the error does not spread.
A paper that some people may have worked on disappears into oblivion. How is that an example of an error becoming established scientific fact, when nobody is interested enough to read or replicate it? It isn't.
If someone does stumble across this later in a PubMed search, then it may be cited and attempts may be made to replicate it.
This is not a genuine concern, but just a way of suggesting there is an iceberg of problem papers, with most of them hidden from view. Oooooh! One third. That is a lot of papers, but nobody (or slightly more than nobody) is reading them. And nobody is going to collide with the iceberg and sink on a night to remember. The papers are unread/forgotten, not memorable, and have no impact.
A Pareto power law probably applies: 10-20% of papers make up 80-90% of what is read and what is cited.
It is a mistake to worry about papers that are not read and papers that are not cited. It is a mistake to worry about papers that are so unloved, that their contribution to a journal's impact factor is negative.
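For what it's worth, a couple of lines of simulation show how lopsided a heavy-tailed citation distribution gets; the Pareto parameter below is made up for illustration, not fitted to any real bibliometric data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical citation counts drawn from a heavy-tailed (Pareto-like) distribution.
citations = np.sort(rng.pareto(a=1.1, size=100_000))

top_20_percent = citations[-20_000:]
share = top_20_percent.sum() / citations.sum()
print(f"share of all citations earned by the top 20% of papers: {share:.0%}")
```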
"If false results get replicated, how can we distinguish science from pseudoscience?"
I cannot believe that he is making this argument. Sometimes, purely by chance, the first few experiments will confirm a false hypothesis. But after the experiment is repeated many, many times, we will get a true picture of whether or not the hypothesis is correct. Anyone with a basic understanding of statistics should understand this.
I was just going to write what Michael wrote... The Central Limit Theorem always checks in after several trials of anything. Keep flipping a coin, and you'll have a 50/50 spread of heads and tails eventually. Do something enough times, and the outliers disappear. This is something anti-vaxers have yet to comprehend. We can focus like a laser beam on thimerosal for years to come, all the while missing other possible causes of all the evil in the world.
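A quick illustration of the convergence described above (strictly speaking it's the law of large numbers that pulls the observed frequency toward 50/50, with the Central Limit Theorem describing how fast the spread around it shrinks, but the simulation makes the point either way):

```python
import numpy as np

rng = np.random.default_rng(3)

flips = rng.integers(0, 2, size=100_000)  # 0 = tails, 1 = heads
running_freq = flips.cumsum() / np.arange(1, flips.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:>7,} flips, observed frequency of heads = {running_freq[n - 1]:.3f}")
```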
rene
That's assuming all the variables are known and are unchanging. This only works in a closed system.
Is the vaccine the only variable? No. Comparing unknown (but confidently stated as known) health risks with known coin flipping risks is an error in thought process.
The practice of medicine is not a science. And it's certainly not a science where you just try to figure out how many sides of the coin you have.
While it's true that for "most people" issues about the efficacy of psychiatric drugs are not matters of "intense ideological investment", they are the primary talking point (along with cancer chemotherapy) of woo-meisters' seething hatred of BigPharma, serving as horrifying examples with which to scare the marks. The side effects of second generation anti-psychotics (which are not to be dismissed lightly) and the "inefficacy" of SSRIs are exaggerated and repeated to underscore how BigPharma "abuses" patients and enriches itself. Obviously, most (not all) of our web woo-meisters are not old enough to remember *institutionalization* of most patients with SMI. Those who eschew BigPharma solutions offer up few choices - Scientology, meditation, "spirituality", or Orthomolecular psychiatry (basically, megadoses of niacin) - for treating SMI, none of which work.
So, as I wrote yesterday on my own blog, I think that while the initial criticism of Lehrer from the skeptiblogosphere was well-deserved (that final paragraph of his original article was a real stinker), I think his follow-ups have sufficiently clarified his point... You do make a few solid criticisms here that I hadn't thought of, but overall I think Lehrer more or less "gets it". (I do wish he was more savvy when it comes to the threat of science denialists, though)
I do want to add another comment to one thing Lehrer said in this most recent article, though:
This question actually has a rather easy and simple answer in principle, even if it's messy in practice: Bayes' Theorem. An experiment or hypothesis is science if -- even if it turns out to have given a false result -- the experiment/hypothesis was reasonable in light of the priors. If prior probability suggests the experiment or hypothesis is ridiculous, not even worth considering, then it's pseudoscience.
Of course the devil is in the details. Continental drift/plate tectonics is a good example of where the scientific consensus caused a grave misestimation of the priors, resulting in what would eventually be recognized as a sound theory being classed as pseudoscience. But that's where the whole "it works out eventually" thing comes in.
I expound on this in more detail in the aforementioned blog post. Science is still as messy as Lehrer says -- but the demarcation between science and pseudoscience is, at least in principle, not so mysterious.
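A toy numerical version of that Bayesian point (the probabilities here are made up purely for illustration): the same "positive" experimental result moves a plausible hypothesis a lot and an implausible one barely at all.

```python
def posterior(prior, p_result_if_true=0.8, p_result_if_false=0.05):
    """Bayes' theorem: probability the hypothesis is true after one 'positive' experiment."""
    evidence = p_result_if_true * prior + p_result_if_false * (1 - prior)
    return p_result_if_true * prior / evidence


print(f"plausible hypothesis   (prior 0.30):  posterior = {posterior(0.30):.2f}")   # ~0.87
print(f"implausible hypothesis (prior 0.001): posterior = {posterior(0.001):.3f}")  # ~0.016
```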
Sorry, but I think Orac's continued bleating on this is unwarranted. Long form magazine articles can't be all things to all people. They are characterized by theme, point of view, strong characters, etc.
I believe that Lehrer's story is important and likely quite compelling to general readers (even at the New Yorker) who have little to no familiarity with such issues as the decline effect:
http://www.collide-a-scape.com/2011/01/05/oracs-pedantic-peeve/
Augustine proved my own point. Write about something long enough, and he'll pop up to try to correct you, even when his corrections totally miss the point.
"Biostatistics, motherf*cker, do you speak it?" - My Biostats Professor
Augustine, I see why you are so confused. You're thinking of the idiom to "take one's medicine," and that for the most part isn't very scientific. Otherwise you are a dolt. Go back to doing what you do best, oh that's right, nothing.
So is the practice of delivering medicine science? No more than building an aeroplane is science; then again, it is no less.
sci·ence [sahy-uhns]
noun
1. a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws: the mathematical sciences.
2. systematic knowledge of the physical or material world gained through observation and experimentation.
3. any of the branches of natural or physical science.
4. systematized knowledge in general.
5. knowledge, as of facts or principles; knowledge gained by systematic study.
6. a particular branch of knowledge.
7. skill, esp. reflecting a precise application of facts or principles; proficiency.

med·i·cine [med-uh-sin or, especially Brit., med-suhn]
noun, verb, -cined, -cin·ing.
noun
1. any substance or substances used in treating disease or illness; medicament; remedy.
2. the art or science of restoring or preserving health or due physical condition, as by means of drugs, surgical operations or appliances, or manipulations: often divided into medicine proper, surgery, and obstetrics.
3. the art or science of treating disease with drugs or curative substances, as distinguished from surgery and obstetrics.
4. the medical profession.
5. (among North American Indians) any object or practice regarded as having magical powers.
verb (used with object)
6. to administer medicine to.
Idioms
7. give someone a dose/taste of his/her own medicine, to repay or punish a person for an injury by use of the offender's own methods.
8. take one's medicine, to undergo or accept punishment, esp. deserved punishment: He took his medicine like a man.
It's almost like you tossed that coin right over Auggie's head.....
To borrow from Stoppard, re: coin flipping:
Thoughts, augie?
@ #11 Keith Kloor
Actually, if there is one problem with Orac's "continued bleating", it is that it is not reaching that general New Yorker audience, which is only getting Lehrer's skewed perspective. That general audience doesn't understand or appreciate how science really works and is taking Lehrer's article at face value: science is fundamentally flawed, rather than complicated, nuanced, but ultimately successful.
It is precisely because Lehrer's audience is people "who have little to no familiarity with such issues as the decline effect" that Orac's criticism is so important and not pedantic.
@ Keith Kloor
I think your point misses the point completely. It is exactly this audience that, while generally well-educated, is often very deficient in basic science.
Mr. Lehrer's background (according to Wikipedia) seems fairly short on hard science and heavier on writing and philosophy. I confess ignorance as to just what neuroscience is--outside the medical profession of neurology, anyway.
------
Orac, I'm glad that you returned to this theme, because it is one that we critically-thinking, but less-versed-in-science-than-you, readers rely on to sharpen our own wits when confronted with the truly burning stupid that greets us on a regular basis.
#16, you have a valid point about people (including New Yorker readers) needing a better understanding of how science works.
Let me put things another way. Science and climate bloggers often complain about magazine stories because they don't like the angle or thrust of a given article. Let me give you a few examples, and pardon some of the self-referential links.
Most recently, we saw a solid Scientific American profile of climate scientist Judith Curry come in for heavy criticism. Former SciAm editor in chief John Rennie rebutted the main charges here:
http://blogs.plos.org/retort/2010/10/28/a-pitiful-poll-and-an-abused-ar…
I explained why I thought the Curry storyline was appropriate for Scientific American to explore:
http://www.collide-a-scape.com/2010/10/26/curry-the-apostate/
Speaking of storylines, that's the essence of journalism, to tell a story, and yes, there is usually a slant in such stories. People are, of course, free to disagree with the slant.
So another good recent example of unwarranted criticism concerns this piece on "clean coal" by James Fallows:
http://www.theatlantic.com/magazine/archive/2010/12/dirty-coal-clean-fu…
Which David Roberts at Grist had serious problems with:
http://www.grist.org/article/2010-11-10-question-james-fallows-coal-foc…
I thought the coal story was eminently worthy and said Roberts' complaints were misplaced:
http://www.collide-a-scape.com/2010/11/11/is-clean-coal-story-worthy/
Lastly, I'll point to the umbrage over the 2009 profile of Freeman Dyson in the NY Times magazine:
http://www.collide-a-scape.com/2009/04/14/garfields-take-on-romm-on-the…
I mention these three examples because they were singled out by critics for advancing a main theme or the words/actions of a person that the critics found undeserving of attention.
While I think some of the criticism of Lehrer's article is justified, the heavy-handed dismissiveness of it by the likes of Orac strikes me as excessive and petty. And I think that has as much to do with objections to the storyline of the article as with anything else.
Any thoughts on this similar article in Discover Magazine?
Lehrer is missing kind of an important point about the Millikan oil drop experiment - it's quite possible to avoid the bias towards the expected value by implementing an appropriate blinding procedure. That would be normal practice for this type of measurement today. It's not just that science gets there eventually by grinding away doing the same thing over and over; it gets there by getting better.
Or to put it another way, to answer the question:
If you're doing science, you spend most of your time trying to fix these problems or limit and quantify their effects. You in fact spend a lot of time looking for new problems.
If you're doing pseudoscience, you spend your time ignoring these problems, whining when people point them out and thinking up bullshit excuses about why they don't apply to you (possibly involving quantums).
Augustine:
It kinda is actually. Science is about breaking down complicated issues into simpler ones that can be manipulated, controlled, eliminated and combined. Then, after you've got some basics down, you begin analyzing how they come together, until you've got something close to the real world. So a lot of science really does come down to figuring out how a single factor can vary by binary choices. The practice of medicine is not an exact science by any means, but it certainly must be informed by science if it's going to be meaningful in any way. The whole point of science ultimately comes down to prediction - can you predict the results of an intervention? If not, why not? And once you've made a prediction, you test it. If your test confirms your prediction, you test a more elaborate one. If it does not, you try a different prediction. Eventually you come up with a model that allows you to predict with a high degree of accuracy.
And medicine can be both figuring out how many sides a coin has, and what happens when you flip it. Basic, applied, exploratory, clinical - science and medicine are all these things. Sure it's complicated in practice, but that's why science is a social process - you have other people to check your results, try to replicate them, argue with you, review them, confirm, refute, and iterate. With enough people working long enough, you eventually get to something close to the right answer. But it only works if people are honest, agree to ground rules, and don't cheat by moving the goalposts or assuming the answer before the test begins and refusing to change it in the face of contrary evidence.
@Keith Kloor: That's just a hunch, but it seems to me that you defend journalists who think that their profession is about telling a good story. I prefer those who think they shall tell the truth - even at a cost to the story.
I think the example of the ulcer-causing bacteria is telling, since it shows the *opposite* of the decline effect - I can't come up with a fancy name.
While it started out with a relatively small study, it was confirmed repeatedly.
I would think that there should be many examples of the "anti-decline" effect, where small trials have been increasingly confirmed by further research.
Could Lehrer be blinded by his own confirmation bias?
Damn you, KeithB, I wish I'd thought of that, the better to put it in my post!
Feel free, Orac, you don't even need to credit me. 8^)
(On the "The Online Photographer" Blog, Mike puts especially good comments as attachments to the original post called "Featured Comments")
@Michael #6
It's not surprising that subsequent experiments would obtain similar results if the same methodology is used. If the source of the error is some subtle aspect of the methods used, it could take time to ferret out the problem.
Replicating the results doesn't necessarily make them right, it makes them reproducible. Over time, as all the underlying assumptions are checked and re-checked, the truth is more likely to emerge.
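A quick sketch of that point, with toy numbers of my own: if every replication shares the same systematic bias, the replications agree beautifully with each other and are all wrong by the same amount.

```python
import numpy as np

rng = np.random.default_rng(4)

TRUE_VALUE = 1.00
SHARED_BIAS = 0.15  # e.g. a flawed calibration step that every lab copies from the original method

# Ten "independent" replications that all inherit the same systematic error.
replications = TRUE_VALUE + SHARED_BIAS + rng.normal(0.0, 0.02, size=10)

print("replicated estimates:", np.round(replications, 3))
print(f"they agree with each other to within ~{replications.std():.3f}, "
      f"yet all miss the true value ({TRUE_VALUE}) by ~{SHARED_BIAS}")
```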
Science is hard.
RE: Helicobacter pylori & the oil drop effect, etc
I always find it amusing when people use examples of how science ultimately works as examples of how science doesn't work.
How do we learn that scientists were once wrong about various ideas? Is it through people choosing what to believe as Lehrer concludes his original article? Is it revelation through religious texts? Is it through new age, post modernist thinking? Nope. It's through the crucible of science and the rigorous and repeated application of the scientific process. Science: It works, b!+ch&$.
Orac, sweetie darling, do take a dried frog pill, or do whatever blinkenlight boxes need to do to relax. It's a well-known fact that no-one can properly proof-read their own writing. Not even professional copy-editors are capable of this. A helpful correction offered in a friendly manner (with no flames) is better met with a quick "kthx, oops, fixed". Of course anyone who suggests you're a loser for making the odd slip-up is an idiot and deserves the flames, but that wasn't me!
That all said, speaking of psychotropic meds, Lehrer's been wrong in that general area before. IIRC, he gets close to Pop Ev Psych in thinking of depression as possibly adaptational.
The decline effect happens because of: (1) faulty initial research procedures that get tightened over time, partly in response to peer review (reviewers can get excited by new and provocative findings and initially ignore logic and design problems, problems that later reviewers are less willing to overlook--which leads to more precise and appropriate measurements and more sensitive tests); (2) the not inconsiderable effects of regression to the mean; (3) statistical aberrations due to working with small, non-representative samples; and (4) selective reporting of findings by scientists (the Millikan example).
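Items (2) and (3) are easy to see in a toy simulation (again, made-up numbers, purely illustrative): pick the most impressive result out of a batch of small, noisy studies and it will almost always look worse on replication.

```python
import numpy as np

rng = np.random.default_rng(5)

TRUE_EFFECT = 0.1


def small_study(n=15):
    """One small, noisy study: the mean of n observations around the true effect."""
    return rng.normal(TRUE_EFFECT, 1.0, n).mean()


# Run 50 small studies and single out the most impressive result...
first_round = np.array([small_study() for _ in range(50)])
print(f"most impressive initial result: {first_round.max():.2f}")

# ...then "replicate" that winning study several times: without selection,
# the results regress toward the true effect.
replication_mean = np.mean([small_study() for _ in range(20)])
print(f"mean of 20 replications:        {replication_mean:.2f}")
print(f"true effect:                    {TRUE_EFFECT:.2f}")
```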
I love the New Yorker and over the years they have published many fine articles on science by bright and knowledgeable writers including Lehrer (e.g., John McPhee, Jonathan Schell, John Hersey, Rachel Carson, Jeremy Bernstein, Atul Gawande, Malcolm Gladwell). Despite this excellent record of science translation, it is useful to remember that the New Yorker published another fine scientific writer (Paul Brodeur) about 20 years ago who claimed that power lines were leading to brain cancer. It was compelling writing and it had a big effect on public opinion and science, but ultimately none of the breathless claims panned out despite the fears struck in the community. Decline effects, though rarely referred to that way, are widely recognized in science (often meta-analyses will divide up the sample of studies depending on when they were conducted--early vs. late).
My boss and I were discussing this article, and science, and I may need to link this to him. The problem is that science is too slow, when it doesn't have to be! 11 years for a retraction of that fraud? Science bloggers are doing in days what it takes published journals years to do.
I was on holidays when Jonah Lehrer's piece on the "decline effect" appeared, so mine is a somewhat tardy response. Nevertheless, it does introduce a couple of novel points and it's the only response I've seen thus far that presents some data. If you're interested you can view it at
http://www.bestthinking.com/thinkers/science/social_sciences/psychology…
Cheers,
Mike
You say yourself that scientific facts are provisional, but then you contradict yourself by saying that those standing in opposition to those same facts have some anti-science agenda. Some of them do. Many of them don't. I think you ignore the political and career aspirations of the researchers and the institutions that they serve at your peril. In my view, failing to take this fact into account has a somewhat negative effect on your whole argument.
Discussion brings to mind the ubiquity of placebo effect and possibility of self-correcting biological tendency in humans.
Even though double blind studies engaging placebo arms seek to nullify the effect is it not possible that the effect acts in concert with and/or despite the formula/device being employed? I guess since pain and death are the enemy the potential for unforeseen flaws in initially promising weapons is insufficient to preclude their deployment.
Knowledge or No Ledge.
Hi there. Just curious if you've written on this at all:
http://www.scientificamerican.com/article.cfm?id=demand-better-health-c…