Author (and fellow ScienceBlogger) David Dobbs has an article on PTSD in the latest Scientific American, and has several related posts on his blog here at Sb. Dobbs’ primary argument seems to be that PTSD is being widely overdiagnosed, in part because the condition itself is poorly defined, and in part as the result of various social and economic factors. At least a couple of other bloggers enjoyed his writing on the topic. Personally, I’m not so sure.
As many of you know, I’ve got some fairly significant ties to the US military. My wife has deployed twice, and has had close and personal experiences with combat. Our family has dealt with her deployments well, all things considered, but that does not mean that it’s been an easy process. Those first-hand experiences have encouraged me to take a much closer look than I probably would have otherwise at both the military healthcare system in general and mental health issues in the military in particular. I’m bringing this up not because I’m hoping that you’ll think I’m some sort of authority on the topic (I’m not), but because I’d like to make sure that my potential biases are out in the open from the start.
Dobbs’ Scientific American article is certainly thought-provoking, and it raises a number of valuable and important issues. Unfortunately, the valid and important points are mixed in with far too much poor reporting. Although Dobbs admits that the issue of PTSD diagnosis is complex and the subject of scientific debate, his report is almost entirely one-sided. It also contains some apples-and-oranges comparisons, and conflates several different problems. His blog articles continue that trend.
For the sake of simplicity, fairness, and a cliche, I’m not going to go through the article point by point. Instead, I’m going to look at the good stuff first, then the bad, then (naturally) the ugly.
Although Dobbs is skeptical of the current process for diagnosing PTSD in veterans, neither he nor anyone he interviewed for his story doubts the existence of the condition – they simply doubt that the prevalence is as high as some have claimed. As starting points for discussion go, there are far worse.
Dobbs’ analysis of the problems with the current PTSD diagnosis is good. The relationship between PTSD diagnosis and malleable human memory is not well understood, and does need quite a bit of additional attention. The overlap between symptoms of PTSD and other conditions is definitely problematic, particularly since (as Dobbs points out) a misdiagnosis of PTSD can get in the way of the correct diagnosis and treatment of other mental conditions.
Dobbs is also dead on the money with his analysis of the interference between the dysfunctional VA benefits process and the treatment of PTSD. The disability system does not provide any incentive for veterans to report recovery or improvement of PTSD symptoms, which makes both treatment and epidemiological study of the condition more difficult. His suggestion that we look at the Australian system is interesting, if unlikely to happen.
I’m not usually a massive fan of the journalistic impulse toward “balance.” There are far too many examples of articles where the reporter is covering a topic where there is no real scientific debate, but still covers “both sides” out of some sort of urge to ensure that every point of view, no matter how insane, receives equal time. But that does not mean that it’s always safe to ignore a point of view, either.
In the case of PTSD, there certainly does seem to be considerable scientific debate over a number of points. Surprisingly, though, Dobbs focuses almost entirely on the position advocated by Harvard psychologist Richard McNally – someone Dobbs himself identifies early in the article as “perhaps the most forceful of the [PTSD] critics.” Dobbs quotes McNally extensively during the course of the article, and seems to uncritically adopt a number of McNally’s positions. He names one critic of McNally – Dean Kilpatrick – and says that Kilpatrick “once essentially called McNally a liar”, but he does not provide any explanation of what, exactly, Kilpatrick objects to, or even why he called McNally a liar.
One particularly egregious example of Dobbs’ uncritical acceptance of McNally’s perspective can be found on page 2 of the web version of the Scientific American article:
McNally shares the general admiration for Dohrenwend’s careful work. Soon after it was published, however, McNally asserted that Dohrenwend’s numbers were still too high because he counted as PTSD cases those veterans with only mild, subdiagnostic symptoms, people rated as “generally functioning pretty well.” If you included only those suffering “clinically significant impairment” – the level generally required for diagnosis and insurance compensation in most mental illness – the rates fell yet further, to 5.4 percent at the time of the survey and 11 percent lifetime. It was not one in three veterans who eventually developed PTSD, but one in nine – and only one in 18 had it at any given time. The NVVRS, in other words, appears to have overstated PTSD rates in Vietnam vets by almost 300 percent.
The original source for McNally’s perspective seems to be a letter to the editor that was published in Science in early 2007:
Moreover, contrary to what Kilpatrick states, Dohrenwend et al. did not use “extremely conservative criteria to determine PTSD status.” Instead, they accepted a case as PTSD-positive if the veteran received a score from one through seven on the nine-point Global Assessment of Functioning (GAF) scale. Nine is the highest possible level of functioning, whereas one is the lowest. The typical (apparent) PTSD case received a GAF score of seven, defined as “[s]ome difficulty in social, occupational, or school functioning, but generally functioning pretty well, has some meaningful interpersonal relationships OR some mild symptoms (e.g., depressed mood and mild insomnia, occasional truancy, or theft within the household)” [(2), p. 2]. Clearly, a seven does not indicate clinically significant impairment, as noted by Buckley. Had they been slightly more stringent (i.e., GAF rating from one through six), the prevalence would have dropped by 65%, not 40%. Thus, the estimate for current (late 1980s) prevalence would have been 5.4%–substantially lower than either Dohrenwend et al.’s estimate of 9.1% or the original NVVRS estimate of 15.2%.
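For what it’s worth, the figures quoted in the letter and in Dobbs’ article do hang together arithmetically – assuming (my inference, not stated outright in either source) that the 40% and 65% reductions are both taken relative to the original 15.2% NVVRS estimate. A quick back-of-the-envelope check:

```python
# Rough sanity check of the prevalence figures quoted above.
# Original NVVRS estimate of current (late-1980s) PTSD prevalence:
nvvrs_current = 15.2  # percent

# Dohrenwend et al.'s re-analysis amounts to roughly a 40% reduction:
dohrenwend_current = nvvrs_current * (1 - 0.40)
print(round(dohrenwend_current, 1))  # ~9.1, matching the letter

# McNally's stricter cutoff (GAF 1-6 only) would be a 65% reduction:
mcnally_current = nvvrs_current * (1 - 0.65)
print(round(mcnally_current, 1))  # ~5.3, vs. the 5.4 quoted (rounding)

# The "one in nine" and "one in 18" phrasing in the SciAm article:
print(round(100 / 9, 1))   # ~11.1 -> the 11 percent lifetime rate
print(round(100 / 18, 1))  # ~5.6 -> roughly the 5.4 percent current rate
```

So the numbers themselves are internally consistent; the argument is over which cutoff is the right one, not over the arithmetic.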
McNally is objecting to diagnosing someone with PTSD if they are only experiencing “some difficulty”, but not “moderate difficulty”, as the result of their symptoms. If we agree with McNally and Dobbs, what does that mean? If someone experiencing “some difficulty” functioning as the result of PTSD-like symptoms is not diagnosed as actually having PTSD, are we saying that “some difficulty” functioning is an acceptable mental health outcome in the combat veteran? That it’s not really “significant”? Do they receive a different diagnosis, or do we think that “some difficulty” isn’t something that warrants receiving mental health assistance?
More importantly, this glosses over the fact that the current level of impairment doesn’t always equal the maximum level experienced by the veteran. From Dohrenwend et al.’s own analysis:
The clinicians made ratings of the severity of PTSD at its worst in addition to the severity at the time of the examination. These severity ratings, which are strongly related to the GAF impairment ratings, suggest that the results in Table 1 underestimate impairment when the disorder was at its worst. For example, 36.1% of veterans in the current group were rated mild, 43.1% moderate, and 20.8% severe at the time of diagnosis. When PTSD was at its worst, 3.7% of veterans were rated mild, 31.8% moderate, and 66.5% severe. The results also suggest that at least 85% of veterans in the past group had more than slight impairment when their PTSD was at its most severe.
McNally makes no mention of that finding, which would indicate that while the “current” prevalence of PTSD would substantially drop if his definition of “significant” is accepted, the lifetime rate would not change much. It’s also worth noting that the accuracy of McNally’s assertion that the “typical (apparent) PTSD case received a GAF score of seven” depends largely on how one defines “typical.” PTSD-diagnosed veterans with a GAF score of seven were the largest group, but not a majority. It could just as easily be said that the “typical” PTSD case presents with moderate to severe difficulty functioning (GAF range of 4-6).
Of course, you’d learn this only if you reviewed Dobbs’ sources. Although he cites them, he makes no mention of anything that would cast any level of doubt on McNally’s position.
Dobbs’ uncritical acceptance of McNally’s position and failure to adequately present any other view are what I’d call bad reporting. Unfortunately, that’s not the only problem with Dobbs’ piece. The next two issues, at least in my opinion, cross the line that separates the merely bad from the inexplicably horrible.
One of these issues is actually something that Dobbs did not mention. He paid a great deal of attention to the financial incentives that can result in the inflation of PTSD diagnosis rates in the VA system, but he did not discuss factors that can lead to the underreporting of PTSD in troops who are still in the military. Writing an article about the diagnosis of PTSD in the military without mentioning the effect stigmatization can have on diagnosis and treatment is like writing the history of the New York Yankees without mentioning Babe Ruth.
Dobbs’ failure to mention barriers to diagnosis and treatment in soldiers is particularly noteworthy because of two recent studies that he brings up late in the article. Both of these studies report on the prevalence of PTSD in troops who have come back from combat deployments in the last few years. As Dobbs notes, both of these studies reported a much lower prevalence of PTSD than studies that have involved Vietnam or Gulf War I vets. I suspect that the differences are too large to be explained as being only the result of an unwillingness to report symptoms in troops still in the military, but that doesn’t mean that stigmatization played no role in the difference. I’m baffled by Dobbs’ failure to at least mention the issue.
Dobbs also manages to either inexplicably misinterpret or misrepresent one of the recent studies. Here’s some of what he wrote:
The biggest longitudinal study of soldiers returning from Iraq, led by VA researcher Charles Milliken and published in 2007, seemed to confirm that we should expect a high incidence of PTSD. It surveyed combat troops immediately on return from deployment and again about six months later and found around 20 percent symptomatically “at risk” of PTSD….
A few months later another study – the first to track large numbers of soldiers through the wars in Iraq and Afghanistan – provided a clearer and more consistent picture. Led by U.S. Navy researcher Tyler Smith and published in the British Medical Journal, the study monitored mental health and combat exposure in 50,000 U.S. soldiers from 2001 to 2006. The researchers took particular care to tie symptoms to types of combat exposure. Among some 12,000 troops who went to Iraq or Afghanistan, 4.3 percent developed diagnosis-level symptoms of PTSD. The rate ran about 8 percent in those with combat exposure and 2 percent in those not exposed.
These numbers are about a quarter of the rates Milliken found. …
Dobbs also brings this up in one of the blog posts:
Finally, the conflicting studies of PTSD in US veterans of the Iraq and Afghanistan wars cited in the piece are Milliken et alia, “Longitudinal Assessment of Mental Health Problems Among Active and Reserve Component Soldiers Returning From the Iraq War,” JAMA 14 Nov 2007, which found rates of around 20%, and Smith et al, “New onset and persistent symptoms of post-traumatic stress disorder self reported after deployment and combat exposures: prospective population based US military cohort study,” BMJ 16 Feb 2008, which found rates of under 5%.
If you actually take the time to read the two articles, it’s immediately apparent that Dobbs is comparing apples and oranges. Smith et al. reported the rate of patients meeting the criteria for a formal diagnosis of PTSD. Milliken et al. did not. They looked at a different assessment tool, and reported not only the prevalence of soldiers who met enough criteria to be diagnosed with PTSD, but also the prevalence of soldiers reporting any symptoms.
The Milliken study based its assessment of PTSD risk on the answers that the soldiers gave on a four-item assessment that’s widely used in the primary care community to determine whether a patient should be referred to a mental health professional for follow-up. Patients who give positive answers to three or four of the questions are considered to test positive, and should receive referrals for follow-up with a specialist. In Table 1 of their paper, the authors note that they considered anyone who answered any of the four questions positively to be at risk for PTSD.
I’m not entirely clear where Dobbs got his 20% figure from. Looking at Table 1, it appears that about 22% were shown to be at risk at initial assessment, and 29% at follow-up. However, the authors also reported that 20.3% of all the respondents were identified as having a “clinician-identified mental health problem”. That figure is not restricted to PTSD diagnoses – it also includes depression, anger, suicide, and family conflict. He seems to either be understating what Milliken actually reported, or reporting the wrong result. (I should note that all of those figures are based on the active duty results, and that the reserve/national guard results are all much higher.)
At initial assessment, 6.2% of the respondents in the Milliken study met the criteria for referral for PTSD follow-up. That figure increased to 9.1% at the time of the follow-up study. That’s still higher than the Smith study, but it’s nowhere near 20%.
Dobbs seems to be implying that it’s strange that the Milliken study reported a higher PTSD percentage than the Smith study. Given that the two studies looked at different populations, used different diagnostic criteria, and – most importantly – weren’t actually reporting on the same thing, I do not share that feeling.
There were things that I enjoyed about Dobbs’ SciAm article, and there were definitely items in there that should spark more discussion. Unfortunately, the article as a whole suffered from a number of serious problems. A controversial topic was discussed without attempting to cover (or fully acknowledge) more than one viewpoint. Alternative explanations for findings were ignored, even though they had been presented in the original research, and one research article that did not fit the chosen perspective was thoroughly misrepresented. There is no doubt that the diagnosis of PTSD is a complex and difficult topic that would benefit from a thorough, careful, and unbiased examination. The Scientific American article in question fit exactly none of those criteria.
Dohrenwend et al. 2006. The Psychological Risks of Vietnam for U.S. Veterans: A Revisit with New Data and Methods. Science 313(5789): 979-982
McNally. 2007. Letter to the Editor. Science 315(5809): 184-187
Milliken et al. 2007. Longitudinal Assessment of Mental Health Problems Among Active and Reserve Component Soldiers Returning From the Iraq War. JAMA 298(18): 2141-2148
Smith et al. 2008. New onset and persistent symptoms of post-traumatic stress disorder self reported after deployment and combat exposures: prospective population based US military cohort study. BMJ 336(7640): 366-371