Tim Lambert’s questions:

  1. Why did Lott repeatedly make false claims that the 98% figure came from other studies and from Kleck?
  2. Even Lott cannot possibly be sure that the correct result of his survey was 98% since there is no way to check his calculations. Why did he repeat the figure over and over again?
  3. Lott has conceded that the size of the defensive gun use sample in his survey was very small. Too small, in fact, for the result to be statistically reliable. Why did he never even mention the markedly different results obtained from the other surveys with vastly greater sample sizes?
  4. Why did he make his 98% claim well before his survey was completed? (And without attributing it to his survey.)

From: John Lott
This is an edited version of what I sent Kleiman on 1/22.

I haven’t read whatever you are referring to by Lambert. As to checking whether the results are correct, I have made the raw data from my 2002 survey available. We are not discussing something complicated, but simply producing a weighted average along the lines of what we e-mailed back and forth about yesterday. People have managed to replicate my results on a whole range of questions involving guns and many other issues (e.g., concealed handgun laws, safe storage laws, multiple victim public shootings, reputational penalties for criminals or firms, educational expenditures). Is there any evidence to suggest that I can’t figure out a weighted average?
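The weighted-average calculation at issue is mechanically simple, which is part of Lott's point. As a minimal sketch, each stratum's response rate is weighted by that stratum's share of the population; every number below is hypothetical and is not drawn from either survey:

```python
# Sketch of a population-weighted average of survey responses.
# All numbers here are hypothetical illustrations, not survey data.

def weighted_mean(values, weights):
    """Return sum(w * v) / sum(w); raises if weights sum to zero."""
    total_w = sum(weights)
    if total_w == 0:
        raise ValueError("weights must not sum to zero")
    return sum(v * w for v, w in zip(values, weights)) / total_w

# Hypothetical: fraction answering "yes" in three demographic strata,
# weighted by each stratum's (made-up) share of the population.
yes_rates = [0.012, 0.008, 0.015]
pop_shares = [0.30, 0.50, 0.20]
print(round(weighted_mean(yes_rates, pop_shares), 4))  # → 0.0106
```

The arithmetic is transparent enough that anyone with the raw data and the weights can redo it, which is the substance of the challenge to replicate.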

There are different standards of evidence: beyond a reasonable doubt or preponderance of the evidence. The central focus of my survey was to estimate the total number of defensive gun uses. The issue of brandishing, where one is looking at a subsample within a subsample, falls into the preponderance of the evidence category given the limited data at this point. Would one want even larger samples for brandishing so as to get even tighter confidence intervals? Sure, but I have limited personal resources, and the point estimate gives us the best guess that we have for the rate of brandishing. I do not believe you can point to anything that has me claiming more for this result than was appropriate. The relevant sentence in the second edition (2000) even opens with a cautionary phrase.
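The subsample-within-a-subsample point can be made concrete: with only a handful of defensive-gun-use reports, a point estimate near 98% brandishing carries a wide confidence interval. A sketch using a Wilson score interval and a purely illustrative count of 24 brandishings out of 25 reports (these counts are invented for illustration, not taken from either survey):

```python
# 95% Wilson score confidence interval for a proportion, illustrating
# how wide the interval is when the subsample is small. The counts
# below are purely hypothetical, not figures from either survey.
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 24 of 25 defensive uses involved only brandishing.
lo, hi = wilson_interval(24, 25)
print(f"{lo:.2f} to {hi:.2f}")  # roughly 0.80 to 0.99
```

Even under these illustrative counts the interval spans nearly twenty percentage points, which is why a small-subsample point estimate is best read as a rough best guess rather than a precise figure.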

Some perspective is useful here. I think that the information on brandishing is a useful fact, and because of that I have made reference to it, but the issue of brandishing was not a central point being made anywhere in my work. The survey was done to examine the rate of defensive gun use generally. With respect to brandishing, we are talking about one number in one sentence wherever it was discussed. I did not emphasize this statistic and did not go into all the related issues. In the book that I have coming out in March, I have a more detailed discussion of the survey issues.

I have already answered the issue discussed in question 4 in a previous post. The survey was done over three months, but virtually all of it was done during the first four or five weeks of the year. As the students got more deeply involved in classes, effort in the survey dropped off markedly. It is hard to remember exactly what happened at this point after six years, but I believe that the additional survey data did not produce any more defensive gun uses. For a survey where only one percent of those surveyed say that they used a gun defensively, this is not too surprising. I let things go on in the hopes that the students would get going again seriously, but at some point I realized that wasn’t going to happen and took what had been done.

As to attributing things, in op-eds or talks I simply don’t go through and explain where every statistic that I mention comes from. The more important statistics may be discussed in some detail, and I may talk about their sources if questions are raised.

Here is the bottom line: You all have the survey questions and a detailed discussion of the survey methodology that applies to both surveys. You also have the raw data from the 2002 survey. Further questions can be answered by directing them to the students who conducted the survey. If you don’t believe the results or want smaller confidence intervals, I suggest that you pay for the survey to be replicated. People (even my harshest critics) seem to have now agreed that I have indeed conducted the survey twice. The 2002 survey replicated my earlier survey. Instead of nitpicking about these things (such as questioning whether I can calculate an average correctly), redo the survey yourselves. Between my two surveys I have already surveyed 3,439 people, and I am not planning on surveying any more.