Loftin's study on Washington DC handgun ban

Jim Lindgren believes that a post by Carl Bogus on the DC handgun ban is uninformed. Bogus wrote:

A careful study that compared the nine year period before the ban was enacted with the nine years following enactment, and then compared what happened in D.C. with the immediately surrounding areas in Maryland and Virginia, found that the handgun ban reduced gun-related homicides by 25% and gun-related suicides by 23 percent. Colin Loftin, Ph.D., et al., "Effects of Restrictive Licensing of Handguns on Homicide and Suicide in the District of Columbia," 325 New Eng. J. Med. 1615 (Dec. 5, 1991). The law did not turn Washington into the Garden of Eden, and crime rates fluctuated, particularly during the last few years of the study when the use of "crack" cocaine was increasing and homicides increased dramatically. Nevertheless, the effect of the law was both immediate and sustained, and things would have been worse without it.

Now this can certainly be criticised -- the study did not conclude that the reductions were caused by the ban since it is possible that some other factor was the cause. But Lindgren's comments are not correct:

From what I've seen, the Loftin study that Bogus points to should not be taken seriously. A simple Google search would have revealed why. According to Dean Payne's re-analysis, if you use Loftin's homicide and suicide data, adjust for population changes (as you must), and use per capita rates (as you must), the DC ban is associated with more deaths after the ban, not fewer. ...

That the New England Journal of Medicine would publish a time-series article that did not account for population changes over roughly a two-decade period is embarrassing, but then peer review seems to suffer when gun control articles are involved.

While the figures given in Loftin's paper were counts of homicides and suicides rather than rates, Loftin et al. state that they get similar results if age-adjusted per capita rates are used. (This corrects for changes in the age structure of the population as well as for its size.) The results for age-adjusted per capita rates were given in supplementary material. Here they are:

Type of fatality          Rate before   Change        SE      %       t
and location              law           after law             change  statistic
                          (per 100k)    (per 100k)

Homicide
District of Columbia
  Gun-related                20.9         -4.1         1.18    -20     -3.47
  Non-gun-related            10.87         1.15        0.66     11      1.74
Maryland and Virginia
  Gun-related                 3.12        -0.4         0.29    -13     -1.54
  Non-gun-related             1.66         0.19        0.14     11      1.29

Suicide
District of Columbia
  Gun-related                 4.13        -0.63        0.37    -15      1.72
  Non-gun-related             7.29        -0.28        0.62     -4     -0.45
Maryland and Virginia
  Gun-related                 5.18        -0.14        0.37     -3     -0.37
  Non-gun-related             5.38        -0.43        0.4      -8     -0.41
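
For anyone who wants to check how the last two columns relate to the first three, here is a minimal sketch (my own, not Loftin et al.'s code), assuming the t statistic is simply the estimated change divided by its standard error and the percentage is the change relative to the pre-law rate; it reproduces the DC gun-related homicide row.

```python
# Minimal sketch: reproduce the derived columns of the table above from the
# first three, assuming t = change / SE and % = 100 * change / rate_before.

def derived_columns(rate_before, change, se):
    pct_change = 100 * change / rate_before
    t_stat = change / se
    return pct_change, t_stat

# Example: DC gun-related homicide row (20.9, -4.1, 1.18)
pct, t = derived_columns(20.9, -4.1, 1.18)
print(f"% change = {pct:.0f}, t = {t:.2f}")   # roughly -20 and -3.47
```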

Update: Lindgren responds here, while Bogus responds here.


What is the statistical benefit of adjusting for age shifts, beyond making the numbers look "good" again? When death rates are compared in other studies, they are seldom described as being age-adjusted. Age buckets that might be used for this kind of adjustment tend to be somewhat arbitrary, rather than being granular enough to account for the "hidden" variables in homicide and suicide rates.

Death rates are almost always age-adjusted, Michael. Adjusting for age in crime statistics allows for changes in the size of the population of young adults, who commit the largest number of crimes. If you just adjust for the size of the whole population, the total might not change even while the proportion under 25 (and most likely to commit crime) increases - as may be happening in, for example, London at the moment.
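
To make the point concrete, here is a minimal sketch with made-up numbers (not DC data): if age-specific rates stay fixed but the share of young adults grows, the crude rate rises while a rate standardized to a fixed standard population does not.

```python
# Hypothetical age-specific homicide rates (per 100k), held constant over time.
age_rates = {"under_25": 15.0, "25_plus": 3.0}

pop_before = {"under_25": 0.20, "25_plus": 0.80}   # population shares, period 1
pop_after  = {"under_25": 0.30, "25_plus": 0.70}   # more young adults, period 2
standard   = {"under_25": 0.25, "25_plus": 0.75}   # fixed standard population

def weighted_rate(rates, weights):
    return sum(rates[g] * weights[g] for g in rates)

print(weighted_rate(age_rates, pop_before))  # crude rate, period 1: 5.4
print(weighted_rate(age_rates, pop_after))   # crude rate, period 2: 6.6 (rises)
print(weighted_rate(age_rates, standard))    # age-standardized: 6.0 in both periods
```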

Off topic, but I don't see an open thread handy--

Link

The link is to a new study that attempts to show that passive surveillance methods undercount war deaths. (No surprise there.) The Iraq war is mentioned, but what they say seems confused and garbled to me--they seem to be comparing 2006 estimates from L2 and the recent NEJM paper with IBC numbers from 2008. Maybe I'm the one that's confused. Their own number is close to that of the NEJM paper.

By Donald Johnson (not verified) on 29 Jun 2008 #permalink

>>the study did not conclude that the reductions were caused by the ban since it is possible that some other factor was the cause.<<

I'm confused. The last sentence of the abstract is "Our data suggest that restrictions on access to guns in the District of Columbia prevented an average of 47 deaths each year after the law was implemented." Are you saying this means something other than what it apparently means, or are you simply suggesting that the use of "suggest" means this statement is not a "conclusion"?

>>This corrects for change in the age structure of the population as well as its size.<<

Umm, I don't think so.

Age-adjustment is intended to correct for the fact that the mortality risk is not distributed evenly across all age groups. From 1940 to 2000, a standard population profile, derived from the 1940 census, was used for all such mortality age-adjustment, regardless of gender or race (and new tables were generated in 2000). The standard method of age adjustment does not correct for changes in the age structure of the population; in fact that is a substantial criticism of age-adjustment and limits its usefulness. It is almost certainly not the case that Loftin et al. computed the age profile of the DC population for the periods before and after the handgun ban and used different weighting factors to compute the gun-homicide risk. That is not what age adjustment is intended to accomplish.

The discrepancy between Lindgren's non-age-adjusted deltas and Loftin's age-adjusted deltas raises some significant questions. (1) Did, in fact, the DC age structure change over the study period? (2) Did the difference between the DC age structure and the 1940 standard age structure make a difference in the conclusion? (3) Did the difference in gender between the DC homicide victim population and the 1940 standard make a difference?

I agree that age adjustment is useful for sorting out some population risk comparisons. In the case of gun deaths among blacks the difference between age-adjusted and non-age-adjusted risks is striking. But it is important to stipulate what ensemble is being evaluated. For example, the 2002 National Vital Statistics Report from the CDC indicates that 5% of the deaths of blacks are by gun violence (homicide + suicide with guns involved). But age-adjustment indicates that the risk that a given black person is going to die by gun violence is only about 2%. In plain English, if you died in 2002 and you were black, there was a 5% chance that guns were involved. But if you're black, there is a 2% chance that you will die by gun violence. The difference is due to the fact that gun violence generates victims disproportionately among the young. The "average" black person is beyond the age of maximum risk of gun violence.

I am also troubled by the use of t-statistics to pick out the significant line in the table you quoted. First of all, it seems to me that an ANOVA is called for. And when making multiple comparisons like this, t-statistics are not reliable for picking outliers. More sophisticated tests like Newman-Keuls or Fisher's LSD are needed.
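
To illustrate: Newman-Keuls and Fisher's LSD need the underlying group data, which we don't have, but a crude Bonferroni adjustment on the published t statistics makes the point. This is only a sketch; it assumes a normal approximation for the p-values, since the degrees of freedom aren't reported.

```python
# Bonferroni adjustment of the eight t statistics from the table above.
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

t_stats = [-3.47, 1.74, -1.54, 1.29, 1.72, -0.45, -0.37, -0.41]
p_vals = [2 * norm.sf(abs(t)) for t in t_stats]   # two-sided, normal approximation
reject, p_adj, _, _ = multipletests(p_vals, alpha=0.05, method="bonferroni")
for t, p, pa, r in zip(t_stats, p_vals, p_adj, reject):
    print(f"t={t:+.2f}  p={p:.3f}  Bonferroni p={pa:.3f}  significant={r}")
# Only the DC gun-homicide estimate (t = -3.47) survives the correction.
```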

BBB

Thanks to Donald Johnson for that interesting article.

1) The key claim of the article, for purposes of Lancet aficionados, is that "media estimates capture on average a third of the number of deaths estimated from population based surveys." This matters because Les Roberts has been running around for years claiming that passive surveillance (as Iraq Body Count uses) is a horrible method of estimating mortality and (except possibly in places like Bosnia) never captures more than a small percentage of all deaths. (In fairness to Roberts, this is a new article and so, perhaps, his previous claims were justified by the research he had access to.)

Will Roberts now acknowledge this? Time will tell.

2) Is this the paragraph that you think is "confused?"

As a final point of comparison, we applied our correction method, derived from the comparison of survey estimates with Uppsala/PRIO data, to data from the Iraq Body Count project's most recent report of 86 539 (the midpoint of the 82 772 to 90 305 range reported in April 2008) dead in Iraq since 2003. Our adjusted estimate of 184 000 violent deaths related to war falls between the Iraq Family Health Survey estimate of 151 000 (104 000 to 223 000) and the 601 000 estimate from the second Iraq mortality survey by Burnham and colleagues. [footnotes omitted]

Clearly, a better comparison would have looked at the IBC numbers to July 2006 (mid-point 47,668), thus covering the same time period as L2 and IFHS.

Let's see. If we multiply the IBC number by 3 we get pretty darn close to the IFHS number. Coincidence? I think not. Or do we need to add Obermeyer, Murray and Gakidou to the list of Lancet "denialists?" Just asking!

And, of course, if you adjust the L2 results in the same way as IFHS does with its raw counts, the L2 violent death toll rises to around 1.2 million. (Details here.)

So, if IFHS and IBC are consistent with each other, why does L2 record violent mortality approximately 8 times higher? I think that the raw data underlying L2 is not reliable.
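
Here is the back-of-the-envelope arithmetic, using the figures as quoted in this thread (a sketch only; the ×3 factor is the OMG average rather than a measured value for Iraq, and which pair of numbers the "approximately 8 times" refers to is my reading, not stated explicitly).

```python
ibc_mid_2006 = 47_668    # IBC midpoint to July 2006, as quoted above
ifhs         = 151_000   # IFHS violent-death estimate to June 2006
l2_adjusted  = 1_200_000 # L2 adjusted "in the same way as IFHS", per the comment above

print(ibc_mid_2006 * 3)      # ~143k, close to the IFHS estimate
print(l2_adjusted / ifhs)    # ~8, one reading of the ratio referred to above
```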

This probably should go in the open thread, David. Anyway, on point 1, I thought this verified Roberts' claim that passive methods greatly undercount deaths--on average by a factor of three. The correction factor for Iraq is unknown. Obviously Roberts thinks it is more like a factor of ten in that case.

What I recall is that the IBC defenders wouldn't even go with a factor of 3 back in 2004-2006. Sloboda once acknowledged they could be low by a factor of four, but he didn't think so, and his most passionate defender at medialens and here didn't think they were off by anything close to that. L1 was entirely out of bounds for them. Since IFHS came out, though, the midrange (excluding Fallujah) number from L1 is looking entirely reasonable, though it was dismissed out of hand by Lancet critics then.

On point 2, yes, that was what confused me. Why compare 2008 IBC numbers to studies that only covered up to June 2006? And yes, a factor of 3 applied to the appropriate 2006 IBC number would give the IFHS figure (and similarly, the L1 number in the fall of 2004). But again, we don't know what the true correction factor is, and even if it turns out to be 3, that is a coincidence. There's no law of nature that says Iraq has to be exactly like the "average" war--it could be higher or lower.

By Donald Johnson (not verified) on 01 Jul 2008 #permalink

1) I extended my comments here. See also the follow up from JoshD of IBC in the comments to that post.

2) "Roberts' claim that passive methods greatly undercount deaths" is not a useful description of the debate. Roberts favorite example is Guatemala (in fact, he regularly claims that this is almost the only one he could "find") and, in that case, he seems to argue for a factor of 10 or more. He has never, ever (corrections welcome!) suggested that a factor of 3 is remotely plausible, not just in Iraq but in any war (except perhaps Bosnia). The OMG paper, on the other hand, claim that a factor of 3 (even 2) is the best estimate for a typical war.

3) By the way, if you had comments on this post, I would be eager to read them.

Brief response--

The idea that one would go to a household in a war zone, tell the residents that you come from a government tied to death squads, ask about the number of violent deaths, and expect to get trustworthy data is so bizarre to me that if you'd described this procedure to me several years ago I probably would have thought it a joke.

I have little idea how many Iraqis have died violently. Too many incentives to lie, and too many people with axes to grind arguing about it, including me.

By Donald Johnson (not verified) on 02 Jul 2008 #permalink

Donald, your post (and David, your response) probably should not have gone into this thread. I'll follow-up in the current open thread.

However, pursuant to this thread, BBB wrote:

From 1940 to 2000, a standard population profile, derived from the 1940 census, was used for all such mortality age-adjustment, regardless of gender or race (and new tables were generated in 2000).

I'm not sure that's right. I've done a moderate amount of mortality estimation taking into account differences in age structure and I've never used a 1940 census-based population profile as a standard. I didn't even know that one existed (though admittedly, as David Kane has amply demonstrated, the fact that one is not aware of something is often more a statement about the person making the claim than about the claim itself. Corrections welcome!). For the simplest example, period life tables use synthetic cohort techniques.

Robert wrote:
I'm not sure that's right. I've done a moderate amount of mortality estimation taking into account differences in age structure and I've never used a 1940 census-based population profile as a standard. I didn't even know that one existed

Well, I have done NO mortality estimation, but in the manner of the modern academy, I have taught how it's done. ;-)

Here is what the 2002 NVSR says in the technical notes:

>>Beginning with the 1999 data year, a new population standard was adopted by NCHS for use in age-adjusting death rates. Based on the projected year 2000 population of the United States, the new standard replaces the 1940 standard population that had been used for over 50 years. The new population standard affects levels of mortality and to some extent trends and group comparisons. Of particular note are the effects on race comparison of mortality. For detailed discussion see Age Standardization of Death Rates: Implementation of the Year 2000 Standard (74).<<

This tracks what other web sites have to say about age-adjusting mortality rates. Please enlighten me if I am misreading this.

BBB

Here is the document explaining the change to the Year 2000 age-adjustment standard:

http://www.cdc.gov/nchs/data/nvsr/nvsr47/nvs47_03.pdf

Look at Figure 1. I haven't run any numbers, but it seems to me that if Loftin used the 1940 age profile to do his age-adjustment, but the population was closer to the Y2K standard, there are substantial reasons to doubt the accuracy of the age-adjustment for any mortality risk that strongly distinguishes between 20-year-olds and 40-year-olds.

BBB

BBB wrote:

but in the manner of the modern academy, I have taught how it's done [...] This tracks what other web sites have to say about age-adjusting mortality rates. Please enlighten me if I am misreading this.

Then surely you must know that there's more than one way to standardize for changing age structure, and that for this particular way there's really nothing intrinsically wrong with using any given standard as long as you're using it consistently -- [this report](http://www.cdc.gov/nchs/data/nvsr/nvsr49/nvsr49_09.pdf) says that one of the reasons to change to a 2000 standard was "to deal with the perception that the 1940 population is outdated." We compare e0's from period LTs across time and area without using a 1940 standard population. When we look at TFRs we standardize on a population with a uniform age distribution, and we compare TFRs across time and area although a uniform age structure is neither realistic nor at all similar to the 1940 population. I'd be slightly surprised if Loftin et al. age-standardized in this way for a cause-specific mortality study, but I don't see an error that would vitiate their finding if they had (which isn't to say that there aren't other critical errors -- just that using, or not using, a 1940 standard isn't one of them).
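
A minimal sketch of that point, with purely illustrative age groups, rates, and standards (not the NCHS 1940 or 2000 standards): when age-specific rates move roughly in parallel, the choice of standard shifts the magnitude of the estimated change but not its sign.

```python
def standardize(rates, weights):
    # Direct standardization: weight age-specific rates by a fixed standard.
    return sum(r * w for r, w in zip(rates, weights))

before = [1.0, 20.0, 10.0, 3.0]   # hypothetical age-specific rates, period 1
after  = [0.8, 16.0,  8.5, 2.8]   # all age groups lower in period 2

standard_a = [0.30, 0.25, 0.30, 0.15]   # a younger standard population
standard_b = [0.20, 0.15, 0.35, 0.30]   # an older standard population

for name, std in [("standard A", standard_a), ("standard B", standard_b)]:
    b, a = standardize(before, std), standardize(after, std)
    print(f"{name}: change = {a - b:+.2f} ({100 * (a - b) / b:+.1f}%)")
# Both standards show a decline; only the size of the decline differs.
```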

BBB wrote:

http://www.cdc.gov/nchs/data/nvsr/nvsr47/nvs47_03.pdf

Look at Figure 1. I haven't run any numbers, but it seems to me that if Loftin used the 1940 age profile to do his age-adjustment, but the population was closer to the Y2K standard, there are substantial reasons to doubt the accuracy of the age-adjustment for any mortality risk that strongly distinguishes between 20-year-olds and 40-year-olds.

Thanks. On page 2 of that report one finds this: "Workshop participants concluded that although there were no compelling technical reasons to change population standards [yadda yadda yadda]." Then, on page 5 one finds: "Choice of the age standard does affect trends in some of the leading causes of death. The effect is least when changes in age specific rates are parallel and is greater when age-specific trends diverge over time. For most of the leading causes, trends in age-adjusted death rates are virtually parallel regardless of the standard. Thus, trends for [...] homicide [...] are approximately the same using the year 2000 standard and the 1940 standard."

Robert wrote:
>>on page 5 one finds: "Choice of the age standard does affect trends in some of the leading causes of death. The effect is least when changes in age specific rates are parallel and is greater when age-specific trends diverge over time. For most of the leading causes, trends in age-adjusted death rates are virtually parallel regardless of the standard. Thus, trends for [...] homicide [...] are approximately the same using the year 2000 standard and the 1940 standard."

Thanks for pointing this out.

You still have to run the numbers and see if Loftin's conclusions hold up. If you look at Table D in the age standardization document, you will note that the change from the 1940 to the 2000 standard changed the 1979-1995 trend in homicide rate from -7.8% to -13.6%. This is "approximately the same" compared to the changes in the trends in other causes of death but the magnitude of this difference is large compared to the effect Loftin claims to measure.

Note (a) the homicide trend is inverse to most causes of death, in that the risk decreases with age, and so forms an obvious exception to Gompertz's law, and (b) the inverse trend effect is magnified for gun deaths. Specifically, drawing numbers from Table 11 of nvsr53_05acc.pdf, the homicide rate peaks at 12.9 per 100k for the age group 15-24, and drops by a factor of 6 as age increases. The subcategory of gun homicides peaks at 10.6, and drops by a factor of 12. Now, both 6 and 12 are small ratios compared to the magnification associated with Gompertz's law; hence the claim in the age standardization document that the homicide rate is nearly constant with age group. But this is a significant age bias for the trend Loftin studied -- in fact the present thread is largely about whether age adjustment is enough to salvage Loftin's conclusions, assuming per Lindgren that the population adjustment wiped out the trend. So, if you are comparing age-adjusted homicide rates to age-adjusted Alzheimer's rates, the former don't change much between the 1940 and 2000 standard and the latter do. But if you are comparing gun homicide rates between Maryland and DC in two proximate time periods, it makes a significant difference which age-adjusting standard you use.

BBB

BBB wrote:

You still have to run the numbers and see if Loftin's conclusions hold up [...]This is "approximately the same" compared to the changes in the trends in other causes of death but the magnitude of this difference is large compared to the effect Loftin claims to measure.

Oh, absolutely, I agree: the magnitude of the effect depends on the standard -- but I would have said that in these kinds of first-cut studies it's more important to pay attention to the big picture, and the big picture is the sign of the effect rather than its magnitude.

Note (a) the homicide trend is inverse to most causes of death, in that the risk decreases with age, and so forms an obvious exception to Gompertz's law, and (b) the inverse trend effect is magnified for gun deaths.

Yeah, but here's the thing: although the age structure in DC changed over the (not quite) two decades of this study, the mean age of gunshot homicide victims probably didn't change tremendously -- it may have changed some but not hugely. If you're familiar with stable age structure calculations (DC didn't have a stable age structure but the principle roughly applies) you'll remember that the amount of leverage a change in age structure has on a given population characteristic is proportional to the change in mean age of that characteristic. It's still important to do the actual calculation but you can see why Alzheimer's depends so much more on changes in the age structure than some of the other causes.

Tim:

I respond at Volokh. I saved my pointed criticisms, not for you, but for the Loftin study.

Jim Lindgren

By James Lindgren (not verified) on 08 Jul 2008 #permalink

Isn't this just looking into the numbers too deeply and missing the forest for the trees? If there are still gun crimes in DC decades after the ban was in place doesn't this mean the ban was ineffective?

By Sebastian the Ibis (not verified) on 09 Jul 2008 #permalink

Yep, Sebastian, it means that and a whole lot more. DC has all sorts of problems, and guns are the least of them.

Sebastian the Ibis asked:

If there are still gun crimes in DC decades after the ban was in place doesn't this mean the ban was ineffective?

To which Ben responded:

Yep

I don't really have a comment to make. I'm just responding so I can put a tag on this and find it again.

Ben:

Thanks for the extra effort but, really, it's unnecessary. I've already tagged your message.