The Questionable Authority

Author (and fellow ScienceBlogger) David Dobbs has an article on PTSD in the latest Scientific American, and has several related posts on his blog here at Sb. Dobbs’ primary argument seems to be that PTSD is being widely overdiagnosed, in part because the condition itself is poorly defined, and in part as the result of various social and economic factors. At least a couple of other bloggers enjoyed his writing on the topic. Personally, I’m not so sure.

As many of you know, I’ve got some fairly significant ties to the US military. My wife has deployed twice, and has had close and personal experiences with combat. Our family has dealt with her deployments well, all things considered, but that does not mean that it’s been an easy process. The first-hand experiences have encouraged me to take a much closer look at both the military healthcare system in general and at mental health issues in the military in particular than I probably would have otherwise. I’m bringing this up not because I’m hoping that you’ll think I’m some sort of authority on the topic (I’m not), but because I’d like to make sure that my potential biases are out in the open from the start.

Dobbs’ Scientific American article is certainly thought-provoking, and it raises a number of valuable and important issues. Unfortunately, the valid and important points are mixed in with far too much poor reporting. Although Dobbs admits that the issue of PTSD diagnosis is complex and the subject of scientific debate, his report is almost entirely one-sided. It also contains some apples-and-oranges comparisons and conflates several different problems. His blog articles continue that trend.

For the sake of simplicity, fairness, and a cliche, I’m not going to go through the article point by point. Instead, I’m going to look at the good stuff first, then the bad, then (naturally) the ugly.

The Good:

Although Dobbs is skeptical of the current process for diagnosing PTSD in veterans, neither he nor anyone he interviewed for his story doubts the existence of the condition – they simply doubt that the prevalence is as high as some have claimed. As starting points for discussion go, there are far worse.

Dobbs’ analysis of the problems with the current PTSD diagnosis is good. The relationship between PTSD diagnosis and malleable human memory is not well understood, and does need quite a bit of additional attention. The overlap between symptoms of PTSD and other conditions is definitely problematic, particularly since (as Dobbs points out) a misdiagnosis of PTSD can get in the way of the correct diagnosis and treatment of other mental conditions.

Dobbs is also dead on the money with his analysis of the interference between the dysfunctional VA benefits process and the treatment of PTSD. The disability system does not provide any incentive for veterans to report recovery or improvement of PTSD symptoms, which makes both treatment and epidemiological study of the condition more difficult. His suggestion that we look at the Australian system is interesting, if unlikely to happen.

The Bad:

I’m not usually a massive fan of the journalistic impulse toward “balance.” There are far too many examples of articles where the reporter is covering a topic where there is no real scientific debate, but still covers “both sides” out of some sort of urge to ensure that every point of view – no matter how insane – receives equal time. But that does not mean that it’s always safe to ignore a point of view, either.

In the case of PTSD, there certainly does seem to be considerable scientific debate over a number of points. Surprisingly, though, Dobbs focuses almost entirely on the position advocated by Harvard psychologist Richard McNally – someone Dobbs himself identifies early in the article as “perhaps the most forceful of the [PTSD] critics.” Dobbs quotes McNally extensively during the course of the article, and seems to uncritically adopt a number of McNally’s positions. He names one critic of McNally – Dean Kilpatrick – and says that Kilpatrick “once essentially called McNally a liar”, but he does not provide any explanation of what, exactly, Kilpatrick objects to, or even why he called McNally a liar.

One particularly egregious example of Dobbs’ uncritical acceptance of McNally’s perspective can be found on page 2 of the web version of the Scientific American article:

McNally shares the general admiration for Dohrenwend’s careful work. Soon after it was published, however, McNally asserted that Dohrenwend’s numbers were still too high because he counted as PTSD cases those veterans with only mild, subdiagnostic symptoms, people rated as “generally functioning pretty well.” If you included only those suffering “clinically significant impairment” – the level generally required for diagnosis and insurance compensation in most mental illness – the rates fell yet further, to 5.4 percent at the time of the survey and 11 percent lifetime. It was not one in three veterans who eventually developed PTSD, but one in nine – and only one in 18 had it at any given time. The NVVRS, in other words, appears to have overstated PTSD rates in Vietnam vets by almost 300 percent.

The original source for McNally’s perspective seems to be a letter to the editor that was published in Science in early 2007:

Moreover, contrary to what Kilpatrick states, Dohrenwend et al. did not use “extremely conservative criteria to determine PTSD status.” Instead, they accepted a case as PTSD-positive if the veteran received a score from one through seven on the nine-point Global Assessment of Functioning (GAF) scale. Nine is the highest possible level of functioning, whereas one is the lowest. The typical (apparent) PTSD case received a GAF score of seven, defined as “[s]ome difficulty in social, occupational, or school functioning, but generally functioning pretty well, has some meaningful interpersonal relationships OR some mild symptoms (e.g., depressed mood and mild insomnia, occasional truancy, or theft within the household)” [(2), p. 2]. Clearly, a seven does not indicate clinically significant impairment, as noted by Buckley. Had they been slightly more stringent (i.e., GAF rating from one through six), the prevalence would have dropped by 65%, not 40%. Thus, the estimate for current (late 1980s) prevalence would have been 5.4%–substantially lower than either Dohrenwend et al.’s estimate of 9.1% or the original NVVRS estimate of 15.2%.
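It’s worth pausing to check how these numbers relate to one another. The percentages below are taken directly from the quoted passages; the arithmetic itself is my own back-of-the-envelope sketch, not anything from either source:

```python
# Figures quoted from McNally's letter and the SciAm passage above;
# the arithmetic here is my own sanity check on how they fit together.

nvvrs_original = 15.2   # % current prevalence, original NVVRS estimate
dohrenwend = 9.1        # % after Dohrenwend et al.'s revision
mcnally_strict = 5.4    # % counting only GAF 1-6 ("clinically significant")

# Dohrenwend's revision is roughly the 40% drop McNally mentions
drop_dohrenwend = (nvvrs_original - dohrenwend) / nvvrs_original
print(f"Dohrenwend revision: {drop_dohrenwend:.0%} drop")   # ~40%

# McNally's stricter GAF 1-6 cut is roughly the 65% drop he cites
drop_mcnally = (nvvrs_original - mcnally_strict) / nvvrs_original
print(f"Stricter GAF 1-6 cut: {drop_mcnally:.0%} drop")     # ~64-65%

# Dobbs' "overstated by almost 300 percent" treats the original
# estimate as a multiple of the stricter one: 15.2 / 5.4 is about 2.8x
print(f"Original vs. strict estimate: {nvvrs_original / mcnally_strict:.1f}x")
```

Note that the “almost 300 percent” framing only holds if you accept the stricter GAF 1–6 cutoff as the correct denominator in the first place – which is exactly the point in dispute.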

McNally is objecting to diagnosing someone with PTSD if they are experiencing only “some difficulty”, but not “moderate difficulty”, as the result of their symptoms. If we agree with McNally and Dobbs, what does that mean? If someone experiencing “some difficulty” functioning as the result of PTSD-like symptoms is not diagnosed as actually having PTSD, are we saying that “some difficulty” functioning is an acceptable mental health outcome in a combat veteran? That it’s not really “significant”? Do they receive a different diagnosis, or do we think that “some difficulty” isn’t something that warrants receiving mental health assistance?

More importantly, this glosses over the fact that the current level of impairment doesn’t always equal the maximum level experienced by the veteran. From Dohrenwend et al.’s own analysis:

The clinicians made ratings of the severity of PTSD at its worst in addition to the severity at the time of the examination. These severity ratings, which are strongly related to the GAF impairment ratings, suggest that the results in Table 1 underestimate impairment when the disorder was at its worst. For example, 36.1% of veterans in the current group were rated mild, 43.1% moderate, and 20.8% severe at the time of diagnosis. When PTSD was at its worst, 3.7% of veterans were rated mild, 31.8% moderate, and 66.5% severe. The results also suggest that at least 85% of veterans in the past group had more than slight impairment when their PTSD was at its most severe.

McNally makes no mention of that finding, which would indicate that while the “current” prevalence of PTSD would substantially drop if his definition of significant is accepted, the lifetime rate would not change much. It’s also worth noting that the accuracy of McNally’s assertion that the “typical (apparent) PTSD case received a GAF score of seven” depends largely on how one defines “typical.” PTSD-diagnosed veterans with a GAF score of seven were the largest group, but not a majority. It can as easily be said that the “typical” PTSD case presents with moderate to severe difficulty functioning (GAF range of 4-6).

Of course, you’d learn this only if you reviewed Dobbs’ sources. Although he cites them, he makes no mention of anything that would cast any level of doubt on McNally’s position.

The Ugly:

Dobbs’ uncritical acceptance of McNally’s position and failure to adequately present any other view are what I’d call bad reporting. Unfortunately, that’s not the only problem with Dobbs’ piece. The next two issues, at least in my opinion, cross the line that separates the merely bad from the inexplicably horrible.

One of these issues is actually something that Dobbs did not mention. He paid a great deal of attention to the financial incentives that can result in the inflation of PTSD diagnosis rates in the VA system, but he did not discuss factors that can lead to the underreporting of PTSD in troops who are still in the military. Writing an article about the diagnosis of PTSD in the military without mentioning the effect stigmatization can have on diagnosis and treatment is like writing the history of the New York Yankees without mentioning Babe Ruth.

Dobbs’ failure to mention barriers to diagnosis and treatment in soldiers is particularly noteworthy because of two recent studies that he brings up late in the article. Both of these studies report on the prevalence of PTSD in troops who have come back from combat deployments in the last few years. As Dobbs notes, both of these studies reported a much lower prevalence of PTSD than studies that have involved Vietnam or Gulf War I vets. I suspect that the differences are too large to be explained as being only the result of an unwillingness to report symptoms in troops still in the military, but that doesn’t mean that stigmatization played no role in the difference. I’m baffled by Dobbs’ failure to at least mention the issue.

Dobbs also manages to either inexplicably misinterpret or misrepresent one of the recent studies. Here’s some of what he wrote:

The biggest longitudinal study of soldiers returning from Iraq, led by VA researcher Charles Milliken and published in 2007, seemed to confirm that we should expect a high incidence of PTSD. It surveyed combat troops immediately on return from deployment and again about six months later and found around 20 percent symptomatically “at risk” of PTSD….

A few months later another study – the first to track large numbers of soldiers through the wars in Iraq and Afghanistan – provided a clearer and more consistent picture. Led by U.S. Navy researcher Tyler Smith and published in the British Medical Journal, the study monitored mental health and combat exposure in 50,000 U.S. soldiers from 2001 to 2006. The researchers took particular care to tie symptoms to types of combat exposure. Among some 12,000 troops who went to Iraq or Afghanistan, 4.3 percent developed diagnosis-level symptoms of PTSD. The rate ran about 8 percent in those with combat exposure and 2 percent in those not exposed.

These numbers are about a quarter of the rates Milliken found. …

Dobbs also brings this up in one of the blog posts:

Finally, the conflicting studies of PTSD in US veterans of the Iraq and Afghanistan wars cited in the piece are Milliken et alia, “Longitudinal Assessment of Mental Health Problems Among Active and Reserve Component Soldiers Returning From the Iraq War,” JAMA 14 Nov 2007, which found rates of around 20%, and Smith et al, “New onset and persistent symptoms of post-traumatic stress disorder self reported after deployment and combat exposures: prospective population based US military cohort study,” BMJ 16 Feb 2008, which found rates of under 5%

If you actually take the time to read the two articles, it’s immediately apparent that Dobbs is comparing apples and oranges. Smith et al. reported the rate of patients meeting the criteria for a formal diagnosis of PTSD. Milliken et al. did not. They looked at a different assessment tool, and reported not only the prevalence of soldiers who met enough criteria to be diagnosed with PTSD, but also the prevalence of soldiers reporting any symptoms.

The Milliken study based its assessment of PTSD risk on the answers that the soldiers gave on a four-item screen that’s widely used in the primary care community to determine whether a patient should be referred to a mental health professional for follow-up. Patients who give positive answers to three or four of the questions are considered to test positive, and should receive referrals for follow-up with a specialist. In Table 1 of their paper, however, Milliken et al. note that they considered anyone who answered any of the four questions positively to be at risk for PTSD.
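The gap between those two cutoffs is the whole ballgame, so it may help to make it concrete. This is a minimal sketch of the two scoring rules as described above – the function names are mine, the item wording isn’t modeled, and the thresholds are paraphrased from the text rather than taken from the screening instrument itself:

```python
# Illustrative sketch (not the actual screening instrument): the same
# four yes/no answers, scored two ways. The standard primary-care rule
# requires 3-4 positive answers; the "at risk" rule used in Milliken
# et al.'s Table 1, as described above, counts any positive answer.

def positives(answers):
    """Count 'yes' answers on the four-item screen."""
    return sum(answers)

def referral_positive(answers, cutoff=3):
    """Standard rule: positive screen at 3 or 4 'yes' answers."""
    return positives(answers) >= cutoff

def milliken_at_risk(answers):
    """Table 1 rule: any single 'yes' answer counts as 'at risk'."""
    return positives(answers) >= 1

# A soldier endorsing a single item is "at risk" under the looser rule
# but does not meet the standard referral threshold:
one_symptom = [True, False, False, False]
print(referral_positive(one_symptom))  # False
print(milliken_at_risk(one_symptom))   # True
```

Any population will contain far more people with at least one symptom than with three or more, so the two rules are guaranteed to produce very different prevalence figures from identical data.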

I’m not entirely clear where Dobbs got his 20% figure from. Looking at Table 1, it appears that about 22% were shown to be at risk at initial assessment, and 29% at follow up. However, they also reported that 20.3% of all the respondents were identified as having a “clinician-identified mental health problem”. That figure is not restricted to PTSD diagnoses – it also includes depression, anger, suicide, and family conflict. He seems to either be understating what Milliken actually reported, or reporting the wrong result. (I should note that all of those figures are based on the active duty results, and that the reserve/national guard results are all much higher.)

At initial assessment, 6.2% of the respondents in the Milliken study met the criteria for referral for PTSD follow-up. That figure increased to 9.1% at the time of the follow-up study. That’s still higher than the Smith study, but it’s nowhere near 20%.

Dobbs seems to be implying that it’s strange that the Milliken study reported a higher PTSD percentage than the Smith study. Given that the two studies looked at different populations, used different diagnostic criteria, and – most importantly – weren’t actually reporting on the same thing, I do not share that feeling.

There were things that I enjoyed about Dobbs’ SciAm article, and there were definitely items in there that should spark more discussion. Unfortunately, the article as a whole suffered from a number of serious problems. A controversial topic was discussed without attempting to cover (or fully acknowledge) more than one viewpoint. Alternative explanations for findings were ignored, even though they had been presented in the original research, and one research article that did not fit the chosen perspective was thoroughly misrepresented. There is no doubt that the diagnosis of PTSD is a complex and difficult topic that would benefit from a thorough, careful, and unbiased examination. The Scientific American article in question fit exactly none of those criteria.

References:

Dohrenwend et al. 2006. The Psychological Risks of Vietnam for U.S. Veterans: A Revisit with New Data and Methods. Science Vol. 313, no. 5789, pp. 979–982. DOI: 10.1126/science.1128944

McNally. 2007. Letter to the Editor. Science Vol. 315, no. 5809, pp. 184–187. DOI: 10.1126/science.315.5809.184b

Milliken et al. 2007. Longitudinal Assessment of Mental Health Problems Among Active and Reserve Component Soldiers Returning From the Iraq War. JAMA Vol. 298, no. 18, pp. 2141–2148

Smith et al. 2008. New onset and persistent symptoms of post-traumatic stress disorder self reported after deployment and combat exposures: prospective population based US military cohort study. BMJ Vol. 336, no. 7640, pp. 366–371. DOI: 10.1136/bmj.39430.638241.AE

Comments

  1. #1 Joshua Zelinsky
    March 24, 2009

    “glosses over the fact that the current level of impairment doesn’t always equal the current level.” Should one of those “current” be replaced with “long-term”?

  2. #2 David Dobbs
    March 24, 2009

    Mike,

    This is a thoughtful critique of my article, but (as you might expect) I think you’ve got some things wrong, and in some cases have read the worst into omissions from my article that were forced because of space. That said, your criticisms are understandable and certainly deserve response — which I’ll try to supply later today or tomorrow.

    Very briefly, however, a few of the larger points:

    - While I can understand your concerns about balance, the article does not pretend to be a he said/she said presentation of a debate, but the presentation of a critique of a prevailing view of PTSD that itself is presented in virtually every media story about PTSD. And in the short space of 3000 words — just 464 words longer than your post here — it had to articulate the arguments about 1) the conceptual basis of PTSD; 2) the epidemiology of PTSD (that produces the estimates of prevalence); 3) the severe problems posed by the VA’s PTSD disability structure; and 4) briefly but vitally, how these problems are connected to a nation’s mixed feelings about war.

    This meant I had to work in rather bold strokes — which meant, to my intense regret, leaving some fine points out.

    What you call the apples v oranges comparison in the Milliken and Smith studies, for example. I would have liked to include a passage explaining the differences between those studies and the several reasons they got different results. I’ll post such an explanation later on my blog. Yes, they did use two different measures: But my point — which I admit I did not (could not) take the room to make utterly clear — is that the measures used in the Smith study were better and more rigorous in several ways than the Milliken study. It is a much more reliable estimate of actual PTSD than is the Milliken study, which measured PTSD symptoms (which overlap heavily with those of other problems) and risk factors (which might or might not lead to actual PTSD). So the Smith study produces a good estimate of PTSD; the Milliken study produces estimates of PTSD symptoms and risk. Yet the culture as a whole grabs the higher Milliken numbers and counts every soldier who scored positive on it as having PTSD.

    (Where’d I get the 20%? It is, admittedly, a bit of a mash, since the study divvied its figures into regular military and Guard/Reserve (who had higher rates). Twenty percent was a rough shake-out of those numbers so I could use a single figure. I’ll try to explain further later, but that’s the origin. If I remember correctly, that probably slightly understates the Milliken overall rate.)

    In short, the Milliken study — and its use by the trauma psychology community and the press — expresses perfectly the larger problem: It looks specifically for PTSD; too easily mistakes symptoms of depression, anxiety, or passing normal adjustment for PTSD; and (at least in its application in the wider culture) declares those showing any sign of adjustment or change as PTSD positive. Finally, the Milliken study’s identification of those at risk for PTSD didn’t even hold up very well internally: Huge percentages of those tagged as PTSD positive as they headed home scored negative 6 months later, and vice-versa as well. Yet everyone seems comfortable stating that 20% of the vets studied — even though it wasn’t even the same 20% 6 months apart — have PTSD.

    I’ll try to address some of your other points, such as the rates in the Vietnam veteran study and how stigma might affect the reported or estimated rates — in another note.

    But let me close by addressing again the larger point about balance. What seemed important to me here — and the most important thing to present — was that a growing number of experts and authorities in trauma psychology are raising important questions about both the conceptual underpinnings of the PTSD diagnosis and its overapplication. This critique is not a trivial one and rests on a growing body of data, as well as on the fact that the central mechanisms that define PTSD have been called into question by research since the diagnosis was articulated. And while I leaned heavily on McNally for articulating the argument in the article, this is not a one-man show. Some of the most respected, experienced, and authoritative people in trauma psychology, psychiatry, epidemiology, and diagnostic science share his concerns.

    So we have a major critique about the fundamentals of a diagnosis that has profound consequences for those who receive it, and a pile of evidence suggesting that a) our conception of PTSD is faulty and needs re-examination and b) not only are we overusing this shaky diagnosis (partly because it’s shaky), but its overapplication is rarely helping and often harming the veterans who receive it.

    Yet — and here’s the important point — this critique is going almost completely ignored. For that reason, I thought it important to present it as forcefully as possible while still being true to the essential facts. I regret immensely that space forced me to leave out many details, elaborations, distinctions, and explanations (from both sides of the argument) to get the broad lines of this thing out there. But I feel the article is accurate, and that the faults you see are results not of misrepresentation but of necessary omissions due to the constraints noted.

    That said, I welcome the opportunity to air these things out and to clarify, and if necessary correct, any muddled, mixed, or mangled points. This is the beauty of media 2.0. If my points or arguments or explanations above don’t make sense or satisfy, I trust you’ll let me know.

    Best,

    David Dobbs

  3. #3 j.tarzwell
    March 24, 2009

    This may be a minor point, but the Global Assessment of Functioning is not on a scale of 1-9. It is on a scale of 0-100, a scale which allows for significant and subtle variations in how clinicians attempt to assess function. Contrary to what was written in the article, 9 is not the highest, nor is 1 the lowest. (Unless the study being cited used a modified version of the GAF?)

    You can see the GAF for yourself in the DSM-IV-TR, p. 34. (Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revision, published by the American Psychiatric Association.)

  4. #4 moises chaves
    March 25, 2009

    wtf

  5. #5 Dr. Michael Gaspar
    March 26, 2009

    I am a retired family doctor and have no personal stake in the PTSD debate. Yet I was similarly disturbed by the imbalance in the Dobbs article and was in process of composing a critical letter to SciAm when I encountered your blog, which made several of the points I intended to raise. I would just add a few other objections.
    1) Dobbs’ claim that PTSD was introduced as a diagnosis in 1980 “in response to anti-Vietnam War psychiatrists and veterans who sought a diagnosis to recognize what they saw as the unique suffering of Vietnam vets” implies that this was arbitrary and politically motivated. Such a loaded claim surely should have had corroborating references or some other defense. It raises questions about Dobbs’ own motivations for writing his article. The comment also seems to ignore the fact that PTSD has been applied to veterans of WWII, the Korean War, and other conflicts predating Vietnam.
    2) Kilpatrick offers substantive criticism of McNally’s estimates of PTSD incidence based on Dohrenwend’s analysis which went unaddressed by Dobbs. Dobbs’ unreferenced allegation that Kilpatrick once “essentially called McNally a liar” was a cheap way of discrediting Kilpatrick.
    3) One of the most noteworthy aspects of Dohrenwend’s findings was that there was very good corroboration between what PTSD victims had reported as traumatic experiences and what military records appeared to objectively confirm. This would seem to settle the question of whether false memories or fabricated histories play much of a role in PTSD overdiagnosis. Dobbs fails to acknowledge this and devotes his next five paragraphs to developing his theory, or McNally’s, that false memories significantly confound PTSD diagnosis.
    4) The point about there being no biomarkers to confirm a diagnosis of PTSD is silly. No psychiatric diagnosis relies on biomarkers.
    5) The Bodkin study doesn’t seem to prove anything. No one is making the claim that the PTSD symptom complex is exclusive to PTSD – obviously there is considerable overlap with anxiety and depressive disorders. Similarly, no one is claiming that people exposed to significant traumas are more likely to experience this symptom complex as opposed to other possible etiologies. The point of the diagnosis is just to recognize that trauma can be an etiology of significant dysfunction.
    6) I am a big believer in cognitive therapy. I dispute the notion that CBT targeting PTSD would be ineffective for non-trauma-related disorders and vice versa. I am sceptical that concern for veterans being offered the wrong treatment if given a misdiagnosis of PTSD is what is driving McNally’s dissent. I would be interested to know whether there is evidence McNally is being funded by government to help advance an agenda to reduce the costs of veterans’ health care and disability.

  6. #6 Afghanistan Veteran
    April 10, 2009

    Dobbs is a real jackass. He is an irresponsible blogger and a terrorist to disabled veterans. Dobbs is on my “Jane Fonda” hit list.

    Thanks for clearing this up Mike.
