Lott’s reply to Duncan’s article raises some disturbing questions about Lott’s honesty. See also James Lindgren’s report on his attempt to find some evidence that Lott actually conducted a DGU survey.
Where did that 98 percent come from?
98 percent claims before 1997
“The answer is that the gun never needs to be fired in 98% of the instances of a successful self-defense with a gun. The criminals just leave abruptly, instead.”
Kleck writes in the journal Social Problems (February 1988):
“there were about 8,700-16,600 non-fatal, legally permissible woundings of criminals by gun armed civilians” annually, and “the rest of the one million estimated defensive gun uses, over 98% involved neither killings nor woundings but rather warning shots fired or guns pointed or referred to.”
So I certainly screwed up by not including the warning shots fired, plus the times the victim simply missed. From Kleck’s latest survey as reported here, the warning shots and misses are very significant (~14%).
You can see several more examples of people saying that 98% of with-gun defences just involve brandishing here, here, here,
here and here. All of these examples were written before Lott began making similar claims in 1997. When these people give a source for the statistic, it is invariably Kleck. Each of them was probably trying to quote Kleck’s estimate that 98% involved mere brandishing, warning shots, or misses, but forgot about the last two cases.
Survey data on how often defenders shoot
Kleck’s 98% “brandish, warning shot or miss” estimate was based on an indirect (and very rough) estimate of the number of defensive woundings. Subsequent research on defensive gun use allowed more direct estimates, and estimates of how often defenders fired their gun.
| Survey | Percent of defenders who fired | Source |
|---|---|---|
| NCVS 1987-1990 | 28 | Duncan 2000 |
| NCVS 1987-1992 | 38 | Rand 1994 |
| NCVS 1992-2001 | 21 | NCVS online analysis system |
| Cambridge Reports | 67 | Kleck 1995 |
While there is some variation, the lowest number is 21%, and anyone who claims that only 2% of defenders fire their weapons is strongly contradicted by these surveys.
Lott makes the 98% claim over and over and over again
Lott has repeatedly made the claim: “98% of the time when people use a gun defensively, merely brandishing the weapon is sufficient to stop an attack.”
“There are surveys that have been done by the Los Angeles Times, Gallup, Roper, Peter Hart, about 15 national survey organizations in total that range from anything from 760,000 times a year to 3.6 million times a year people use guns defensively. About 98 percent of those simply involve people brandishing a gun and not using them.”
Page 41, State of Nebraska, Committee on Judiciary LB465, February 6, 1997, statement of John Lott, Transcript prepared by the Clerk of the Legislature, Transcriber’s Office.
“Guns clearly deter criminals, with Americans using guns defensively over 2 million times each year — five times more frequently than the 430,000 times guns were used to commit crimes in 1997, according to research by Florida State University criminologist Gary Kleck. Kleck’s study of defensive gun uses found that ninety-eight percent of the time simply brandishing the weapon is sufficient to stop an attack.”
John Lott, “Gun Locks: Bound to Misfire”, online publication of the Independence Institute, Feb. 9, 2000.
Just like the other people who gave the 98% statistic, Lott attributed it to Kleck. The most likely explanation is that Lott made the same error as the others who have made similar claims — he forgot that the 98% figure also included warning shots and misses.
However, in his reply to Duncan’s article where Duncan makes a similar suggestion, Lott denies this, claiming instead that the 98% figure came from his own survey of 2,424 people conducted over three months in 1997. In the second edition of More Guns Less Crime Lott changed the statement
“If national surveys are correct, 98 percent of the time that people use guns defensively, they merely have to brandish a weapon to break off an attack.”
to

“If a national survey that I conducted is correct, 98 percent of the time that people use guns defensively, they merely have to brandish a weapon to break off an attack.”
Lott’s change was deliberately misleading — instead of correcting the statement to report what national surveys actually found, he now claimed that the 98 percent figure came from his own survey.
Problems with Lott’s survey claim
If the 98% figure came from a survey conducted over three months in 1997, how was Lott able to present the figure on February 6, 1997? Lott has made further claims about defensive gun use that raise more questions about the survey he claims to have conducted:
“Less than one of every thousand times that people use guns defensively is the attacker killed. Ninety-eight percent of the time, surveys indicate that up to 98% of the time simply brandishing [a] gun is sufficient to make a criminal break off an attack. And in only about one and a half percent of the time is a warning shot fired. And in about half of one percent of the time is the gun fired in the direction of the attacker and a lot of those miss. Woundings are extremely rare but even they exceed killing of attackers by about an eight to one ratio.”
John Lott speech at Sioux Falls City Club, 25 Oct 2000
Broadcast on South Dakota Public Radio
At time 26:30 in the speech Link
“In fact, in 98% of the cases, simply brandishing a gun is sufficient to stop a crime. Research at Florida State University and at the University of Chicago indicates that only one out of 1,000 defensive gun uses results in the attacker’s death.”
LA Times, Friday, March 30, 2001
Others Fear Being Placed at the Mercy of Criminals
by John Lott Jr.
If 0.1% of defensive gun uses (DGUs) result in the death of the attacker and woundings are eight times as frequent as deaths, that means 0.8% of DGUs are woundings. This is not at all consistent with his claim that in only 0.5% of the cases is the gun fired at the offender.
Furthermore, Lott claims that his survey yields an estimate of 2.1 million DGUs. This implies that about 1% of his respondents reported a personal DGU in the past year. With a sample size of 2,424, that’s about 25 DGUs. 2% of 25 is 0.5. Clearly it is not possible for half a person to report firing their weapon.
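The arithmetic in the two paragraphs above can be checked directly. Every input below is taken from Lott's own statements as quoted in this article; the script simply works through the implications.

```python
# Sanity-check the figures Lott gives for his claimed survey.
# All inputs come from the quotes above; nothing here is new data.

# Lott: 0.1% of DGUs kill the attacker, and woundings are 8x as frequent.
death_rate = 0.001
wounding_rate = death_rate * 8          # implies 0.8% of DGUs are woundings
# Lott also says the gun is fired AT the attacker in only 0.5% of cases --
# fewer than the woundings alone, an internal contradiction.
fired_at_attacker_rate = 0.005
print(wounding_rate > fired_at_attacker_rate)   # True: more woundings than shots

# Lott: a 2,424-person survey implying 2.1 million DGUs, i.e. about 1%
# of respondents reporting a DGU in the past year.
respondents = 2424
dgu_reports = respondents * 0.01        # about 24 respondents
firing_reports = dgu_reports * 0.02     # the 2% who supposedly fired
print(firing_reports)                   # about 0.5 -- half a respondent
```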
The mysterious survey
Duncan has tried to find out more details about this survey. I quote from an email from him:
I queried him [Lott] by mail and e mail (I sent the mail certified but he claims not to have received the e mail, which was identical) and got nothing more from him besides what is in the passage in his “Reply” except for the detail that the interviewing was done by students using their own computers. They fed their results into his computer, and it was this one that experienced the crash. He assures me that others lost data in that crash too. Strangely enough, in the phone conversation of May 1999, presumably long after the crash, he said nothing about it.

And he has left the timing rather ambiguous. If he did a survey in early 1997 but too late to mention in his Valparaiso Law Review article, he would still have got the 98% figure by the time his book went to press. Indeed, he says that he had planned to include a discussion of the survey in the book, but the crash occurred just before his deadline for turning in the book. (This could refer to the 2000 edition, but that would raise other problems.) So, it is not at all clear why he couldn’t have said the same thing on p. 3 in 1998 as he did in 2000. The timing strongly suggests that the figure of 98% came first, the story about the survey and its loss in the computer crash came afterward.

A second letter to Lott this summer went unanswered. In it I had asked for specific evidence such as a copy of his questionnaire, a copy of the computer printout from which he derived his 98% etc., names of some of the students, etc. He has not answered that letter.

I too would like to have an explanation from Lott. I considered the possibility that he indeed thought of doing some kind of survey, and maybe went so far as to have some students do trial interviews, but gave it up for whatever reason. And possibly something of that material went into the computer that subsequently crashed.
No one knows but Lott.
It should be trivial for Lott to prove that he conducted his survey — he just needs to show the records listing thousands of phone calls made in early 1997. The fact that he cannot do this or provide any other evidence that this survey really happened is highly suspicious. James Lindgren reports that Lott has told him that all evidence for the survey was either lost in his computer crash (survey data), thrown out (records of payments to the students), forgotten (names of students, survey questions) or lost (survey transcripts). Lott also claimed that he did not discuss the survey with anyone else at the time.
Most likely explanation
The most likely explanation of what has happened is that Lott made a similar error to Tavares, but rather than correct it like Tavares did, Lott chose to make up a story about the 98% statistic coming from his survey.
“The reluctance of gun-control advocates to release their data is quite widespread. In May 1997 I tried to obtain data from the Police Foundation about a study that they had recently released by Philip Cook and Jens Ludwig, but after many telephone calls I was told by Earl Hamilton on May 27, “Well, lots of other researchers like Arthur Kellermann do not release their data.” I responded by saying that was true, but it was not something that other researchers approved of, nor did it give people much confidence in his results.”
John Lott, page 291 of “More Guns, Less Crime”
On a pattern of behaviour
On many other occasions Lott has made claims that he has been unable to substantiate, claims that find no support anywhere in the relevant literature. Lott never corrects these mistakes or admits to making an error. Here are some examples:
How many national surveys?
Lott claims that “fifteen national polls… imply that there are 760,000 defensive handgun uses to 3.6 million defensive uses of any type of gun per year”. However, the reference he cites gives a table containing data from thirteen polls. Moreover, three of those polls are not national polls but are confined to a single state, and two of the polls do not yield an estimate at all. Two later surveys (Kleck’s surveys and the NSPOF) could be added to this, yielding a total of ten “national polls”. In his reply to Duncan’s article, Lott makes things worse. Rather than admit to making a mistake, he falsely claims that the table contains fifteen polls. It doesn’t — there are thirteen polls listed in that table. He also claims that he included his own mysterious survey amongst the “fifteen national polls”. However, he made the claim about fifteen national surveys on February 6, 1997, before his own survey was completed. If it was a legitimate survey, how could he have known the results before it was finished?
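The poll count described above is a simple tally, using only the counts stated in this paragraph:

```python
# Counting the "national polls", using the numbers given above.
polls_in_cited_table = 13   # what the reference Lott cites actually contains
state_only = 3              # confined to a single state, so not national
no_estimate = 2             # yield no defensive-gun-use estimate at all
later_surveys = 2           # Kleck's later surveys and the NSPOF

national_polls = polls_in_cited_table - state_only - no_estimate + later_surveys
print(national_polls)       # 10, not the "fifteen" Lott claims
```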
Does the NCVS weight its sample correctly?
Lott claims that the National Crime Victimization Survey does not weight regions by population and relies too heavily on urban data. Lott offers no evidence for this claim and apparently would have us believe that the NCVS has been conducted incompetently for over 25 years and no-one has noticed and made the trivial fix to the problem.
I pointed this out to him and asked for an explanation and this is what I got:
This information was from when I worked in federal law enforcement in Washington during the late 1980’s and from talking to friends that still work there it is my understanding that it has not been fixed. I will check into documentation.
Lott still has not provided any documentation for his claim, but continues to make it.
Are women 2.5 times as likely to be injured if they don’t use a gun?
On dozens of occasions Lott has claimed that the probability of serious injury from an attack is 2.5 times greater for women offering no resistance than for women resisting with a gun. Lott attributes this finding to Lawrence Southwick. However, what Southwick actually found (and clearly stated) was that there was no statistically significant difference in the injury rates. His data showed that 3% of women offering no resistance were injured. The data also showed that 80 women resisted with a gun. If 3% of these had been injured, there would have been about two injured women. Instead, only one was injured. The whole of Lott’s claim rests on one single case. It is statistical malpractice to generalize from this as Lott does.
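The Southwick numbers above can be put through a simple binomial check. This is a sketch under the figures just stated (80 gun-resisting women, a 3% baseline injury rate), not Southwick's own analysis:

```python
# How surprising is a single injury among 80 gun-resisting women if the
# true injury rate were still the 3% baseline? A simple binomial calculation.
from math import comb

n, p = 80, 0.03
expected = n * p                        # about 2.4 injuries expected
# Probability of observing 1 or fewer injuries at the baseline rate:
prob_le_1 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
print(expected, round(prob_le_1, 2))    # ~2.4 expected; roughly a 0.3 chance
```

A roughly 30% chance of seeing so few injuries by luck alone is nowhere near statistical significance, which is exactly the point: the claimed difference rests on a single case.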
Furthermore, the data come from the NCVS, the same NCVS that Lott claims is not weighted properly.
Lott is aware of this but continues to make the claim.
Were all the “gun in the home” victims killed by intruders?
In a discussion of the Kellermann study that found that having a gun in the home was associated with a three times higher risk of homicide, Lott claims “all or virtually all the homicide victims were killed by weapons brought into their homes by intruders” and
Recent work by Gary Kleck using what has been released confirms that at absolute most 4 percent of the homicides could be attributed to the gun owned by those in the residence. Kellermann’s study incorrectly assumed that all the gun deaths were caused by the gun in the home. Obviously, this assumption is crucial for his claim. At least 96 percent of his deaths were falsely assigned.
Lott has repeated such claims on many occasions. When pressed for details he just claimed that this was what Kleck found. Needless to say, the claim is false and can be found nowhere in Kleck’s extensive writings on Kellermann’s work.
In a Usenet discussion on this matter I found a minor error in Kellermann’s paper. I emailed him about it and he quickly published a correction in the New England Journal of Medicine. Lott has yet to publish a correction of his incorrect statements about Kellermann’s work.
Are defensive gun uses five times as common as gun crimes?
Lott claims that “Guns are used for defensive purposes about five times as often as they are used for crimes.” In fact, the National Crime Victimization Survey indicates that the number of gun crimes (about 850,000 in 1996) is about twelve times the number of defensive gun uses (about 72,000 in 1996). This is surely not surprising – criminals are more likely to be involved in a situation where a gun might be useful, and so have more incentive to carry one. They can also choose to commit crimes only on the occasions when they are carrying a gun.
Lott arrives at his claim by taking the lowest available estimate for gun crimes (430,000 from the FBI’s UCR) and a high estimate for defensive gun uses (an average of the estimates computed by Kleck, omitting the NCVS estimate). While that produces a ratio favourable to Lott’s position, it is impossible for both estimates to be correct. According to the respondents in Kleck’s survey (which is the basis for all the estimates he averages), one fifth of his estimated 2.5 million defensive gun uses (500,000) were against gun crimes. Since only 430,000 gun crimes were committed, this implies that criminals encountered an armed victim more often than they committed gun crimes. This is clearly impossible.
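The impossibility argued above is a matter of simple arithmetic, using only the figures quoted in this section:

```python
# Compare defensive gun uses (DGUs) with gun crimes, using the figures above.

# NCVS estimates for 1996:
ncvs_gun_crimes = 850_000
ncvs_dgus = 72_000
print(ncvs_gun_crimes / ncvs_dgus)      # about 12 gun crimes per DGU

# Lott's ratio pairs the low UCR crime count with Kleck's high DGU estimate:
ucr_gun_crimes = 430_000
kleck_dgus = 2_500_000
print(kleck_dgus / ucr_gun_crimes)      # about 5.8

# But Kleck's own respondents said one fifth of DGUs were against gun crimes:
dgus_against_gun_crimes = kleck_dgus / 5
print(dgus_against_gun_crimes > ucr_gun_crimes)  # True: 500,000 > 430,000
```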
Incorrect calculations of standard errors
In his paper “More Guns, More Crime” Duggan pointed out an error in Lott’s analysis:
One problem with these regression estimates is that Lott and Mustard are implicitly assuming that these laws are varying at the county level, when in fact they are varying only at the state level.
The reason this is a problem is that you would expect crime rates in counties within the same state to be correlated. This problem does not bias the estimates of the law’s effect, but it causes the standard errors to be underestimated, so that some results may appear to be statistically significant when they are not. On page 278, note 3 of “More Guns, Less Crime” Lott comments on this problem, but erroneously claims that including dummy variables for all counties solves it. This is clearly false. The dummy variables only account for fixed differences between counties and do not address the within-state correlations between counties. After adjustments to account for this problem, Duggan found that
none of the coefficient estimates on the CCW variable remain statistically significant.
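The clustering problem is easy to demonstrate with a small Monte Carlo sketch. This is an illustrative simulation, not Lott's or Duggan's actual model: counties share a state-level shock, the "law" varies only at the state level and has no true effect, yet naive OLS t-ratios reject the null far more often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, counties_per_state = 50, 20
n = n_states * counties_per_state

rejections, n_sims = 0, 500
for _ in range(n_sims):
    # A state-level "law" dummy with ZERO true effect on the outcome.
    law = np.repeat(rng.integers(0, 2, n_states), counties_per_state).astype(float)
    # Counties within a state share a common shock, creating correlation.
    state_shock = np.repeat(rng.normal(0, 1, n_states), counties_per_state)
    y = state_shock + rng.normal(0, 1, n)

    X = np.column_stack([np.ones(n), law])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])  # naive OLS std. error
    if abs(beta[1] / se) > 1.96:
        rejections += 1

print(rejections / n_sims)  # far above the nominal 0.05 false-positive rate
```

Cluster-robust standard errors (or aggregating to state means) restore the correct rejection rate; county dummies do not, because they only absorb fixed differences between counties, not the common within-state shocks.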
Lott’s response to Duggan’s paper was to repeat his false claim:
The correlation of the error terms across counties is picked up when one has county fixed effects included in the regression. He does not do the adjustment recognizing that the county fixed effects are already picking up what he wants to adjust for.
In his paper “Testing for the Effects of Concealed Weapons Laws” Moody noticed the same problem as Duggan:
Merging an aggregate variable with microlevel variables causes ordinary least squares formulas to severely overestimate the t-ratios associated with the aggregate variables. … I reestimated the model using the original county-level data set but adjusted the standard errors for clustering within states. The results were somewhat different from the original Lott and Mustard findings. … While shall-issue laws reduce violent crime in general in all models, the effects seem to be concentrated in robbery. Murder and rape are significantly reduced in only one version of the model.
In Lott’s response to Moody he still did not admit to making a mistake but rather stated that he “had already discussed this issue”.
On page 234 of The Bias Against Guns Lott still will not admit to making an error:
Duggan provides no evidence that the adjustments that he makes are appropriate (indeed, my original paper with Mustard discussed these adjustments).
Critique of Lott
There are many, many other problems with Lott’s work. My critique is here.