Otis Dudley Duncan

This discussion is concerned with four topics: (1) Lott’s references to, remarks about, and discussions of DGU statistics originating in sample surveys or polls carried out by other investigators; (2) Lott’s claims about a survey he says he conducted in 1997; (3) Lott’s reports on a survey he conducted in 2002; (4) several matters that have proved to be distractions from the careful consideration of the foregoing. Section (5) presents my conclusions. They may be read at once by anyone who has followed closely the Internet exchanges about the Lott case.

Before proceeding, I record these disclaimers. I am concerned only with what Lott has presented in regard to surveys of defensive gun use. Others more competent than I have commented on his studies of crime rates, about which I am not well informed. But I do know something about surveys, having relied heavily, in four decades of research, on survey data published or in archives as well as data collected in three surveys in which I had a role in designing the inquiry as well as analyzing the results. I have no interest in guns as such. I am not a gun nut. But neither am I a gun control nut. I don’t know how to control guns any more than I know how to control drugs, alcohol, or abortion. I wish I knew how to control junk science and the misuse of genuine science by advocates on both sides of any significant social issue.

(1) Other surveys

The earliest published statement on DGU statistics by Lott that I have found is in the article, John R. Lott, Jr., and David B. Mustard, “Crime, Deterrence, and Right-to-Carry Concealed Handguns,” Journal of Legal Studies, v. 26, no. 1, pp. 1-68, January 1997. This article was widely circulated in 1996. A copy dated July 26, 1996, is in the public record of the Senate Federal and State Affairs Committee of the Kansas State Legislature, as attachment #4, 2-10-97. Curiously enough, the article does not appear in the bibliography of either the 1998 or the 2000 edition of Lott’s book, More Guns, Less Crime (University of Chicago Press), although there are references to “Lott and Mustard” in the text.

In their opening paragraph and in footnote 4 referenced therein, the authors provide a cursory review of surveys that provide estimates of the frequency of DGUs. After mentioning the estimated 80,000 to 82,000 DGUs reported by the National Crime Victimization Survey, they note that “other surveys imply that private firearms may be used in self-defense up to two and a half million times each year” and mention as their source, Gary Kleck and Marc Gertz, “Armed Resistance to Crime: the Prevalence and Nature of Self-Defense with a Gun,” Journal of Criminal Law and Criminology, v. 86, no. 1, pp. 150-187. In footnote 4, they state, “Kleck and Gertz’s survey (1995, pp. 182-3) of 10 other nationwide polls implies a range of 764,036 to 3,609,682 defensive uses of guns per year.”

These statements are a bit confusing. The only “other survey” that provided a DGU estimate of two and a half million times each year is the National Self-Defense Survey (NSDS), the report on which comprises the bulk of the cited article by Kleck and Gertz. The estimate of 2.5 million appears in Table 2, p. 184, of that article; that table and page number are not mentioned by Lott and Mustard. And it is a puzzle how “up to” 2.5 million jibes with the range mentioned in footnote 4, which implies an “up to” of 3.6 million.

Lott and Mustard give as the source for the “10 other nationwide polls” pp. 182-3 of the Kleck/Gertz article. On those pages appears a compilation (which L & M refer to as a “survey”) of results obtained from 13 “previous surveys” (i.e., previous to NSDS), of which 10 are national. In that table, Kleck and Gertz present their “Implied number of defensive gun uses” for 8, not 10, of the national surveys. The entries for the 1978 Cambridge Reports and the 1989 Time/CNN polls read “n.a.” Hence, the quoted range properly refers not to 10 but to 8 previous national surveys.

Nothing that I have mentioned so far should be regarded as an instance of completely erroneous statements by Lott. But his exposition certainly leaves something to be desired in regard to clarity and precision. For future reference I wish to emphasize that the quoted material implies that as early as 1996 Lott was aware that the estimate of 2.5 million DGUs was reported by Kleck and that Kleck and Gertz present a compilation that summarizes results of only 10 national surveys.

In footnote 4 there is also a discussion of an estimate of 200,000 annual woundings attributed to Kleck and Gertz. I do not find this figure in the article by Kleck and Gertz, but it is implied by their result that 17 of 205 DGUs, or 8.3%, involved incidents in which respondents reported that they wounded an offender. (Actually, in the NSDS questionnaire, the response category is “wounded or killed.”) In round numbers, 8% of 2.5 million is 200,000. Kleck and Gertz take some pains to register their skepticism of the 8% wounding rate, as Lott and Mustard duly note. But Lott and Mustard go on to say that “200,000 woundings seems somewhat plausible.” This assertion reappears in footnote 50 to Chapter One of More Guns, Less Crime, where on p. 3 of both editions we are informed that 98% of the time DGUs involve merely brandishing the weapon. If Kleck’s estimate of DGU frequency is accepted, this statistic implies that the weapon is fired in 2%, or 50,000, of the 2.5 million DGUs. (But note for future reference that the figure 2.5 million is not actually quoted in this footnote.) If Lott’s own later estimate of 2.1 million is accepted, some 42,000 firings are implied. So it is a puzzle how the “somewhat plausible” 200,000 woundings can have been produced by so small a number of firings, particularly as in later writing Lott is at pains to emphasize that most DGU firings are only “warning shots.”
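The arithmetic of this puzzle can be checked in a few lines. Every input below is a figure quoted above (Kleck's 2.5 million DGUs, the 17-of-205 wounding rate, the 2% firing rate implied by "98 percent merely brandish"); nothing here is new data.

```python
# Reconstructing the arithmetic discussed above.
dgu_kleck = 2_500_000        # Kleck/Gertz NSDS estimate of annual DGUs
dgu_lott = 2_100_000         # Lott's later estimate

wounding_rate = 17 / 205     # NSDS: 17 of 205 DGUs involved wounding (or killing)
woundings = wounding_rate * dgu_kleck     # roughly 200,000 in round numbers

firing_rate = 0.02           # implied by "98 percent merely brandish"
firings_kleck = firing_rate * dgu_kleck   # about 50,000
firings_lott = firing_rate * dgu_lott     # about 42,000

# The puzzle: the implied firings are far fewer than the 200,000
# woundings those firings would somehow have to produce.
print(round(woundings), round(firings_kleck), round(firings_lott))
```

The output makes the inconsistency plain: roughly 200,000 woundings cannot come out of at most 50,000 firings.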

I put this puzzle to Lott in a letter of 2/20/99. I received no answer from him, nor have I seen any explanation elsewhere.

For a conference held late in 1996 Lott prepared the article later published as “Does Allowing Law-Abiding Citizens to Carry Concealed Handguns Save Lives?” Valparaiso University Law Review, 31(2): 355-63, Spring, 1997. In the second paragraph of this article appears the statement, "Those who advocate letting law-abiding citizens carry concealed handguns point to polls of American citizens undertaken by organizations like the Los Angeles Times and Gallup showing that Americans defend themselves with guns between 764,000 and 3.6 million times each year, with the vast majority of cases simply involving people brandishing a gun to prevent attack." The Kleck/Gertz article is given as the reference. In Table 2 of that article we see that the 764,000 is Kleck’s “implied” DGU frequency for the 1994 Tarrance poll and the 3.6 million pertains to the 1994 L.A. Times poll. These and other “implied” frequencies are the work of Kleck and are uniquely attributable to him. But neither the Tarrance poll, nor the L.A. Times poll, nor either of the 1991 and 1993 Gallup polls provided a figure for the number of “cases involving people brandishing a gun to prevent attack.” (Incidentally, here Lott speaks of “preventing” attack, although his later statements more often mention “breaking off” attacks; surely the two are not the same.) As noted above, the compilation of data from “previous surveys” includes 10 national polls. But information on percent firing is available for only 3 of them. Kleck and Gertz indicate that the 1978 Cambridge Reports poll found 18% of their respondents “ever” used guns, not excluding uses against animals or military or police uses, and 12% of the respondents reported firing the gun. Hence, the implied proportion firing was 12/18 = 67%, and the proportion simply brandishing 33%: not a majority at all, let alone a “vast” one. The 1978 DMI poll reported 15% using, 6% firing, hence a proportion simply brandishing of 9/15 = 60%, a “vast” or not-so-“vast” majority, according to taste. The 1989 Time/CNN poll found 9% to 16% of firearm owners fired their guns, but no figure is given for the total percentage who used their guns, firing or not; hence it is uncertain whether a majority simply brandished.
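The proportions for the two 1978 polls follow from simple division of the percentages Kleck and Gertz report, as this short recomputation shows:

```python
# 1978 Cambridge Reports poll: 18% ever used a gun, 12% fired it.
cambridge_firing = 12 / 18                 # ~67% of users fired
cambridge_brandish = 1 - cambridge_firing  # ~33% merely brandished

# 1978 DMI poll: 15% used, 6% fired, so 9% used without firing.
dmi_brandish = (15 - 6) / 15               # 60% merely brandished

print(round(cambridge_brandish, 2), round(dmi_brandish, 2))
```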

Presumably Lott knew of the NSDS finding that 23.9% of respondents reporting DGU fired their weapons; so there was indeed one survey that one could say found a “vast majority” merely brandishing, if brandishing is taken to include pointing the gun at the offender or other actions that Kleck was at some pains to distinguish. As of late 1996 Lott may not have known of the Police Foundation survey conducted in 1994 by Philip J. Cook and Jens Ludwig, although the report on it, Guns in America: Summary Report, was published by the Police Foundation in 1996 and results were presented at two conferences in that year. There is no reference to the Cook and Ludwig survey in More Guns, Less Crime, 1998 edition. However, that book does refer to Gary Kleck, Targeting Guns (Aldine de Gruyter, 1997). Table 5.1 of Kleck’s book recapitulates the compilation of survey results presented by Kleck and Gertz in 1995, and adds to it figures from the NSDS and Police Foundation surveys. For NSDS, Kleck shows 1.326% of respondents using guns defensively, 0.63% firing, implying 0.63/1.326 = 47.5% of uses involved firing, or 52.5% brandishing, hardly a “vast majority.” This result is, of course, inconsistent with the 23.9% of DGUs with firing reported earlier for NSDS, and Professor Kleck has informed me that the figure 0.63 is in error. For the Police Foundation survey, Kleck’s table gives 1.44% using gun, 0.70% firing, or 48.6% of DGUs involving firing, 51.4% brandishing, another less than “vast” majority. This is but one of several statistics bearing on this matter available from the Police Foundation study. The one that Cook and Ludwig seem to prefer is 27.0% firing, for 85 respondents reporting incidents in the previous 5 years. The complement, 73% brandishing, could be regarded as a “vast” majority, to be sure. But, to repeat, this result probably was not known to Lott when he wrote his law review article. 
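The percentages drawn from Table 5.1 of Targeting Guns work out the same way; the figures below are exactly those cited above, including the 0.63 that Kleck has since acknowledged is in error.

```python
# NSDS row of Kleck's Table 5.1: 1.326% using guns defensively, 0.63% firing.
nsds_firing = 0.63 / 1.326       # ~47.5% of uses involved firing (0.63 in error, per Kleck)
nsds_brandish = 1 - nsds_firing  # ~52.5% brandishing: hardly a "vast majority"

# Police Foundation row: 1.44% using, 0.70% firing.
pf_firing = 0.70 / 1.44          # ~48.6% firing
pf_brandish = 1 - pf_firing      # ~51.4% brandishing

print(round(nsds_firing, 3), round(pf_firing, 3))
```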
This is the sum total of the evidence from national surveys pertaining to the assertion about the “vast majority” of DGUs. It seems like rather thin support for Lott’s statement. But it does not provide even that much support for a nearly contemporaneous statement. Indeed, it is radically inconsistent with what Lott testified in early 1997: "There are surveys that have been done by the Los Angeles Times, Gallup, Roper, Peter Hart, about 15 national survey organizations in total that range from anything from 760,000 times a year to 3.6 million times a year people use guns defensively. About 98 percent of those simply involve people brandishing a gun and not using them." (Page 41, State of Nebraska, Committee on Judiciary, LB465, February 6, 1997, statement of John Lott, transcript prepared by the Clerk of the Legislature, Transcriber’s Office.)

Not only does the actual evidence contradict the 98% figure; we have two more specifically named surveys, Hart and Roper, that did not provide any information at all about firing and brandishing. Indeed, there never was any Roper survey on DGU at all, as one can verify by consulting the iPOLL archive at the Roper Center, which surely would know of any such survey. Roper is not in Kleck’s list either. And, as of early 1997, Lott would not have had information about 15 (rather than 10) national polls.

But here is the most important question concerning this testimony: What is the source that Lott consulted in turning “vast majority” merely brandishing into 98%? I ventured a speculation about how he got that figure in my article, “Gun Use Surveys: In Numbers We Trust?” (The Criminologist, Vol. 25, No. 1, Jan./Feb. 2000, pp. 1, 3-7). I pointed to an article by George Will in Newsweek (11/15/93), where he wrote, quoting Jeffrey Snyder, "Florida State University criminologist Gary Kleck, using surveys and other data, has determined that armed citizens defend their lives or property with firearms against criminals approximately 1 million times a year. In 98 percent of these instances, the citizen merely brandishes the weapon or fires a warning shot. Only in 2 percent of the cases do citizens actually shoot their assailants." Snyder’s statement traces back to an estimate Kleck made in 1988; for details, see my article. Note that the 98% derives from surveys and other data, not just from a survey by itself (such as Kleck’s NSDS pertaining to DGU in the period 1988-1993). Note further that the 98% includes “fires a warning shot” and is not restricted to merely brandishing. Moreover, the misinterpretation of Kleck’s 98% (as of 1988) that would be involved if Lott actually obtained his figure there, wittingly or unwittingly (as I think is likely), had actually been made by others as early as 1993; see Tim Lambert’s comments on the Duncan-Lott exchange.

But Lott has explicitly denied that Kleck’s 1988 figure was the source of his claim that 98% of DGUs involve merely brandishing; see “John R. Lott, Jr.’s Reply to Otis Duncan’s Recent Article in The Criminologist,” The Criminologist, Vol. 25, No. 5, Sept./Oct. 2000, pp. 1, 6. He also mentions that he “told Duncan on the telephone last year that the "98 percent" number came from the survey that I had done and I had also mentioned the source for the 2 million number.” Actually, that same statement was made not only in the phone call (which took place on 5/21/99) but also in a letter from Lott to me dated May 13, 1999, which reached me in California after the phone call. The letter is more explicit about the source: “The information of over 2 million defensive uses and 98 percent is based upon survey evidence that I have put together involving a large nationwide telephone survey conducted over a three month period during 1997.” I took that information at face value for a short time. But within a day or so after receiving it, I had in hand the first statements by Lott to come my way by way of the Internet. They were printed out on May 17, 1999, and mailed to me from Pennsylvania by a person who had access to the Internet, as I did not until 2001. The relevant ones are quoted in my article; here is one: "Polls by the Los Angeles Times, Gallup and Peter Hart Associates show that there are at least 760,000, and possibly as many as 3.6 million, defensive uses of guns per year. In 98 percent of the cases, such polls show, people simply brandish the weapon to stop an attack." Chicago Tribune, 8/6/98. (In the article as printed, the source is incorrectly given as Los Angeles Times.) As of the time I was writing my article, early in the fall of 1999, I had not seen Lott’s article in the Valparaiso University Law Review, quoted above; and I only saw the February 6, 1997, testimony in early October, 2002.
Had these and other relevant statements I found only in 2001-2003 been available earlier, I would have raised more questions than I did. Before May, 1999, I knew only that Lott had made erroneous statements about “national surveys,” on p. 3 of More Guns, Less Crime. Indeed, on April 2 of that year, I so informed the University of Chicago Press. In response I received a letter from Geoffrey J. Huck dated 7 April 1999 indicating that my information was being forwarded to Lott, “as he may want to acknowledge it in future work.”

So, as of early Fall of 1999 when I was writing my article for The Criminologist I had seen no published statement concerning Lott’s 1997 survey, let alone a reasonably credible report on that survey. Had I mentioned that survey, I would have been in the position of accepting his claim about a survey that, to my knowledge, he had never described in public. And, realizing that the statement about “national surveys” on p. 3 of his book was no slip of the tongue but consistent with other false statements Lott had published, I simply ignored the claim about the 1997 survey. Incidentally, the excuse that the survey data were totally lost in a computer crash only came to light in Lott’s article in The Criminologist, Sept./Oct., 2000. It was not mentioned in his letter to me and I will testify that he did not mention it in the phone call. I do not have an infallible memory, and I did not take notes during the call. But a piece of information as dramatic as a computer crash with all data lost is an item not easily forgotten.

As far as I know, Lott has never yet acknowledged making false statements about the results obtained by others, but has only changed his story about the source of his data, as in the statement quoted above, claiming that the 98% figure originated in his 1997 survey. But the 1997 survey, assuming there was one, cannot have been the source of the statement made on February 6, 1997, in the Nebraska testimony, inasmuch as that survey took three months in 1997 to complete. To be sure, Lott has now informed his readers about his 2002 survey, remarking that "Overall the survey results here are similar to one I conducted primarily during January 1997 which identified 2.1 million defensive gun uses, and that in 98 percent of them, the gun was simply brandished." See The Bias Against Guns (Regnery, 2003), pp. 259-60. If “primarily during January 1997” is intended to convey the claim that on February 6 it was his own survey that he had in mind as his source in testifying that “98 percent of those simply involve people brandishing a gun and not using them,” then he is admitting that his testimony to the Nebraska legislature was a deliberate falsification, since Lott knew that the statement about other polls was not correct and would have had in mind his own survey as the actual source. There had been no computer crash as of the time of his testimony. Why would he not have mentioned what he now implies was his actual source?

One of the intriguing features of the erroneous statements about sources of the 98% figure, before Lott got around to claiming it as a result of his own survey, is their variety. Wording resembling that of the 1998 Chicago Tribune article occurs in articles in the Wall Street Journal, the Washington Times, and American Experiment Quarterly appearing in July, 1997, August, 1998, and Summer, 1999. In other statements, Lott states only that “Polls show,” or he credits “studies by respected institutions.”

But most interesting is the statement, "Guns clearly deter criminals, with Americans using guns defensively over 2 million times each year — five times more frequently than the 430,000 times guns were used to commit crimes in 1997, according to research by Florida State University criminologist Gary Kleck. Kleck’s study of defensive gun uses found that ninety-eight percent of the time simply brandishing the weapon is sufficient to stop an attack." John Lott, “Gun Locks: Bound to Misfire,” Intellectual Ammunition, Mar. 1, 2000. The same article was also published on the Independence Institute’s Op-Ed page on Feb. 9, 2000. But in March 2003, the sentence "Kleck’s study of defensive gun uses found that ninety-eight percent of the time simply brandishing the weapon is sufficient to stop an attack." was deleted from the article. Also, the Independence Institute’s copy of the article was deleted.

Lest there be some inference that Lott’s claim about Kleck’s finding was a simple mistake or an editorial insertion, consider the following statements by Lott:

"People use guns defensively about 2.5 million times each year, and 98% of the time simply brandishing the weapon is enough to stop an attack." “Will Suing Gun Manufacturers Save Lives?” Investor’s Business Daily, May 27, 1998.

"More than 450,000 crimes, including 10,744 murders, are committed with guns each year. But Americans also use guns defensively about 2.5 million times a year, and 98 percent of the time merely brandishing the weapon is sufficient to stop an attack." “Bogus lawsuits a crime against gun-owning public,” Washington Times Feb. 24, 1999.

Three other similar statements in 1998 are on the record. As was noted above, Lott has been quite clear that the 2.5 million DGU estimate is Kleck’s. No other survey has obtained this specific estimate. And here is what Lott wrote in his Sept./Oct. 2000 article in The Criminologist: "As I told Duncan last year in a telephone conversation, I had no idea why the estimated 2.5 million defensive gun uses was attributed to me. The 2.5 million estimate obviously comes from Kleck.” Moreover, in the same article, Lott stated, “Indeed as Duncan himself notes (p. 5), my book both mentions "the 2.5 million annual DGUs" as arising from Kleck as well as my mild "reservations" about the evidence.” But Duncan — who pleads guilty to some clumsy writing — did not “note” any such thing. Moreover, Lott does not refute Duncan in the obvious way, which would be to mention the page in More Guns, Less Crime, where the 2.5 million figure is mentioned and attributed to Kleck. Although that information was given in the earlier Lott/Mustard article, it was not carried over into the 1998 edition of the book. (Just look up every page in the index entry for Kleck.) It does appear in the new material on p. 219 of the 2000 edition, in this wise: “[Kleck’s] own survey results … indicate that citizens use guns to stop violent crime about 2.5 million times each year.” (As is noted below, this statement is inaccurate. Kleck’s survey was not restricted to violent crime.)

What Duncan wrote was “[Lott] rebuts his own assertion quite effectively. On p. 190 of his book, he takes note of Kleck’s observation (Targeting Guns, p. 162) that "Data from the NSDS indicate that no more than 8% of the 2.5 million annual DGUs involved defenders who claimed to have shot their adversaries, or about 200,000 total."” What I was referring to in this passage is the same note 50 to Chapter One of the book where the matter was discussed, but without mention of the 2.5 million DGUs. That is hairsplitting, to be sure, but Lott is mistaken: he did not take the trouble to look for the 2.5 million figure in his own book.

What is not hairsplitting is the error in the statement attributing to Kleck the finding that citizens use guns to stop violent crime about 2.5 million times each year. Here Lott has made a most elementary mistake: he has not looked carefully at the wording of questions. It is clear throughout all his discussions that Lott himself has in mind only DGUs against violent crime. And the wording of his 2002 questionnaire makes that explicit: “During the last year, were you ever threatened with physical violence or harmed by another person or were you present when someone else faced such a situation? (Threats do not have to be spoken threats. Includes physically menacing. Attacks include an assault, robbery or rape.)” This is the very first question in the questionnaire (which Lott calls a “survey”) reproduced on pp. 257-259 of The Bias Against Guns. But NSDS was not restricted to violent crimes. The screening question in Kleck’s survey had to do not with victimization but with defensive gun use. And something like 40% of the DGUs in Kleck’s data pertain not to robbery, rape, or assault, but to defenses against assorted other, non-violent, crimes. The same caution applies to all the surveys that preceded NSDS: none of them is limited to violent crimes. This makes nonsense of every comparison of Lott’s results with other survey results.

After writing the foregoing paragraphs, I learned of Lott’s statement on his web site, www.johnLott.org, where he writes:

If the reference in the second sentence had been to "these" polls and not "such" polls, I would think that the critics would have a much better argument. Instead, I view "such polls" as merely referring back to this type of polls and not those specific polls. Still there is admittedly an error in using the plural. The most plausible explanation is that I was describing what findings had been generated by the polls, in other words I was viewing them in general as a body of research.

[…]

The bottom line is that there is not a single place where I have directly attributed the 98 percent figure to Kleck or anybody else’s study. The only thing that can be charge [sic] is that I likely on a couple of cases must have made some trivial plural/singular mistake.

A cursory inspection of Lott’s statements finds not a couple but three times that many uses of the plural, “surveys,” named or not named, most notably on p. 3 of More Guns, Less Crime, 1998 edition. The issue is a simple one. Is this a true statement? — “If national surveys are correct, 98 percent of the time that people use guns defensively, they merely have to brandish a weapon to break off an attack.” Name one. (OK, the phantom 1997 poll conducted by Lott, if you insist.) Name two. It is very hard to believe that the use of the plural, “surveys,” was inadvertent. In any case, it is not “trivial.” If Lott meant only one survey, he should have said which one.

We are not quite done with Lott’s unprofessional statements about work by others. In note 1 of the 1997 law review article, Lott states that the “NCVS is not a representative sample of the national population.” This remark is not attributed to anyone else; no source for it is provided; no evidence of its truth is referred to, much less presented. On p. 11 of More Guns, Less Crime, Lott writes, “Other national polls weight regions by population, and thus have the advantage, unlike the National Crime Victimization Survey, of not relying too heavily on data from urban areas.” Note 48 is referenced at the end of this sentence. But note 48 provides no justification for the statement; it is entirely devoted to a discussion of the biases that would be produced by “relying too heavily on data from urban areas.” This statement reveals — to anyone even moderately well informed about the NCVS — a lamentable ignorance on the part of the author, who clearly does not understand multistage stratified area probability sampling. Anyone who wants to know, in general terms, how NCVS sampling works can consult this document for starters.

In this brief statement there is a clue as to how Lott’s misunderstanding arose. “Large [Primary Sampling Units] were included in the sample automatically” and “are considered to be self-representing.” But be assured that the Bureau of the Census knows how to weight sample data to correct for any bias that this part of the sample design might otherwise produce. Users of the NCVS data files archived at the Inter-University Consortium for Political and Social Research are given full instructions on how to make use of weighted data. Perhaps Lott does not know that the sample design, data collection, and data processing phases of the NCVS are carried out not at the Bureau of Justice Statistics but by the Bureau of the Census, which has over six decades of experience in designing and evaluating sample designs. Indeed, the modern theory of survey sampling was largely invented in that very agency quite some years before Lott was born. But, as in the law, ignorance is no excuse. If Lott wishes to accuse a statistical agency of gross incompetence, it is his responsibility to produce the evidence.

And here is another story. On pp. 36-37 of The Bias Against Guns Lott recalls a conversation with Tom Smith about a large drop in gun ownership recorded in the General Social Survey. “I didn’t ask him whether he had deliberately phrased his question in such a manner to obtain an artificially low gun ownership rate. But the question certainly crossed my mind.” Is this a gratuitous snide remark or another example of culpable ignorance? Lott should know that the original intent of the GSS and a continuing feature of its mission is to measure social change by comparing GSS results with those obtained by surveys done before GSS was instituted. Hence question wording is maintained from the previous usage, even if improvements in wording might be suggested. To proceed otherwise would destroy comparability of earlier with later results. As it happens, the gun ownership question was originated by the Gallup Poll and was introduced into GSS in 1973, at which time Tom Smith was not connected with GSS and had no responsibility for selecting questions.

(2) The 1997 Survey

As far as we now know, the first published statement concerning the survey was in Lott’s letter to the Wall Street Journal, May 25, 1999, a few days after his communications to me. The one relevant sentence reads, “My own survey put the defensive uses at about 2.1 million in 1997.” There is no mention of the brandishing statistic.

That statistic appears, apparently for the first time, on p. 3 of the 2000 edition of More Guns, Less Crime and reads as follows: “If a national survey that I conducted is correct, 98 percent of the time that people use guns defensively, they merely have to brandish a weapon to break off an attack.” There is no mention of the DGU frequency, nor is there any other material to explain this rather cryptic statement.

Such piecemeal announcement of results from a major piece of research is, shall we say, unusual in scientific circles. (No, we shall not resort to such a euphemism. It is a clear breach of the ethics of scientific inquiry, one that comes up over and over again in regard to premature announcements of new medical “discoveries,” and so on.) But it is possible that I have missed some earlier and more detailed report on the survey, and I would be happy to make any needed correction.

The two figures were brought together, as far as I know for the first time, in Lott’s reply to my article:

The "about 2 million" reference is the average of the 15 national surveys and is very similar to my own estimate of a little over 2 million defensive uses. The survey that I oversaw interviewed 2,424 people from across the United States. It was done in large part to see for myself whether the estimates put together by other researchers (such as Gary Kleck) were accurate. The estimates that I obtained implied about 2.1 million defensive gun uses, a number somewhat lower than Kleck’s. However, I also found a significantly higher percentage of them (98 percent) involved simply brandishing a gun. My survey was conducted over 3 months during 1997. I had planned on including a discussion of it in my book, but did not do so because an unfortunate computer crash lost my hard disk right before the final draft of the book had to be turned in.

Not only the figures in this passage but also the news about the computer crash appear here in print for the first time, it seems. Various inquiries made by others are consistent with Lott’s recent assertion that the crash occurred in June of 1997.

As I have already pointed out, Lott does not have 15 estimates from national surveys of DGU that he can have averaged. Moreover, since his “about 2 million” is imprecise and he does not tell us which estimates from which surveys he averaged, I have been unable to confirm the figure of “about 2 million.” But this is a mere quibble as compared with the question that arises about the numbers as given. If there were 2.1 million uses, only 2% of which involved firing, the estimated number of firings should be 42,000. In a population of about 200 million adults, as the Bureau of the Census estimated for 1997, that works out as a firing rate of .00021 per capita, approximately 1 firing per 5,000 population. It is putting a rather heavy burden on a sample of only 2,424 respondents to force it to yield an estimate of something occurring so infrequently.
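The burden this places on the sample can be made concrete. The sketch below recomputes the per-capita rate from the paragraph above and then the expected number of respondents reporting a firing in a sample of 2,424, under an assumption of my own, simple random sampling, which Lott's undocumented survey design may or may not have approximated:

```python
firings = 0.02 * 2_100_000       # 42,000 firings implied by the 98% claim
adults = 200_000_000             # approximate 1997 adult population (Census)
rate = firings / adults          # 0.00021, about 1 firing per 5,000 persons

sample = 2_424
expected_firers = rate * sample  # roughly half a respondent per sample

print(rate, round(expected_firers, 2))
```

With an expected count of about 0.5, the typical such sample contains zero or one respondent who fired, which is a slender basis for any national estimate of firings.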

But that is not all. On half a dozen occasions, Lott has reported that most of the firings were warning shots, that is, were not directed at the offender. The first such statement yet found is this: "Also ignored is that 98% of the time when people use a gun defensively, merely brandishing the weapon is sufficient to stop an attack. In less than 1% of the cases is a gun even fired directly at the attacker." “Gun Control Advocates Purvey Deadly Myths,” Wall Street Journal, Nov. 11, 1998. The same statement appeared in “Debunking gun myths” By John R. Lott Jr. August 8, 1999 Journal Gazette, Fort Wayne, Indiana. More or less precise statements along this line were made by Lott in several radio and TV talk shows, the available records of which are probably only the tip of an iceberg. Two commercially available video tapes present quite specific statements. On the TV show Hardball, CNBC, August 18, 1999, Lott stated that people used guns defensively to stop violent crimes over 2 million times in 1997. 98 percent of the time, when people use guns defensively, simply brandishing a gun is sufficient to cause a criminal to break off an attack. In less than 2 percent of the time is the gun fired. About three-quarters of those are warning shots. [Video tape obtained from www.burrelles.com.] At the Eagle Council Forum XXVIII, September 24-26, 1999, Lott stated that in less than 2% of the cases is the gun fired, and the firings are mostly warning shots. In less than half of one percent of the uses the gun is fired at the offender, and only a tiny fraction of these result in the death of the attacker. [Video tape obtained from ACTS, Inc.] (The two foregoing statements are accurate paraphrases of what Lott said.)

If only a quarter of the firings were intended to harm the offender, the incidence rate of this kind of event is about 1/4 of 1 in 5,000, or 1 in 20,000. That is, defenders shoot with intent to harm at a rate of only once a year for every 20,000 persons in the population. I want to be shown not only that such an estimate is arithmetically possible, by some artifice of weighting, with a sample size of 2,424; I also want to see the actual sample counts and weights that produce the advertised result. It is very difficult to conjecture how this could happen. Moreover, the partitioning of firings into 3/4 warning shots and 1/4 shots at the offender is not supported by either of the two surveys that have credible evidence on this. In NSDS, 8.3% of defenders fired warning shots and 15.6% fired at the offender. In the Police Foundation survey, 11.3% fired warning shots only and 15.7% aimed at the offender. I see no alternative to describing the 3:1 ratio of warning shots to shots aimed to harm the offender as a figment — in a class with those fictions which, repeated often enough, become “facts.”
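Carrying the same arithmetic one step further shows just how heavy the burden on the sample is. A sketch, taking the 1-in-5,000 firing rate and the 3:1 warning-shot split as the figures under discussion, not as established facts:

```python
# If only a quarter of the 1-in-5,000 firings are aimed at the
# offender (the 3:1 split under discussion), how many aimed firings
# should a sample of 2,424 respondents be expected to contain?
firing_rate = 1 / 5_000
aimed_rate = firing_rate / 4
sample_size = 2_424

print(round(1 / aimed_rate))                 # 20000: one aimed firing per 20,000 people
print(round(sample_size * aimed_rate, 2))    # 0.12: essentially zero expected cases
```

With an expected count of about one-eighth of one case, the sample could hardly support any partitioning of aimed firings at all.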

The combination of the dubious arithmetic and the story of the computer crash has led some to question whether Lott actually did a survey in 1997. In his only response to my question about documentation for the survey, as well as in conversations with others reported on the Internet, Lott has conceded his inability to supply any documentation whatsoever. But I, for one, am inclined to forgo idle speculation about whether there was any survey at all. The well known difficulty of proving a negative should inhibit such speculation. The only serious question is whether Lott has publicized statistics that cannot be confirmed by any documentation or detailed report of the evidence. And any submission of such documentation at this late date would surely be suspect. Instead of providing such evidence, Lott has recently been able to “recall” that all the reports from defensive gun users obtained during three months in 1997 were actually in hand by the end of January of that year (i.e., by the time of his February 6 testimony to the Nebraska legislature), and recalls that there were 28 of them in his weighted sample, 2 of whom fired. How he learned that only one quarter of the firings by these two were aimed at the offender he has not yet recalled, it seems.

I would also like to call attention to Lott’s quixotic statement about his intention when he mounted his survey, as he recalled it in 2000: “It was done in large part to see for myself whether the estimates put together by other researchers (such as Gary Kleck) were accurate.” Really, is this to be taken seriously? Lott mounts a study with a sample half the size of Kleck’s, restricts his inquiries to those who defended against a violent crime, and in many other ways fails to replicate Kleck’s survey. What then are we to make of the result? Was or was not Kleck “accurate”? If the 1997 survey was anything like Lott’s more fully reported 2002 survey it was the work of a rank amateur checking up on a survey that meets rather high professional standards. Any discrepancy between the two investigations, if not explicable as sampling error, could reflect a number of things other than inaccuracy of the earlier study: the obvious differences in study design, the changes in the incidence rate of crime over the period between the two inquiries, the casual nature of the execution of the later survey, and so on. If it were a question of “accuracy,” the verdict would surely have to be that it was Lott’s study that was inaccurate. There is nothing in the substance or manner of reporting on the 1997 survey or the remarks attributed to Lott by those who have talked with him about it to suggest that his statistics should be entertained seriously for one moment.

But the absolutely crucial consideration is that the lack of any decent approximation to adequate documentation of the survey design and results utterly disqualifies it from consideration as a piece of social science research. This renders moot any discussion about whether there really was a survey and whether the results were lost in a computer crash or otherwise disposed of. Making an issue of whether there was a survey by either Lott’s critics or his supporters is a disservice to rational discussion of the frequency and character of defensive gun use in the United States. This is not just my opinion: it is part of the universally understood ethics of scientific inquiry that findings must be verifiable. And the national Office of Research Integrity has made it explicit that loss of data, for whatever reason, does not justify the publication of undocumented, unverifiable findings.

Notwithstanding that arguments about the trustworthiness of Lott’s research threaten to go on forever, the only honorable thing for him to do at this point is to withdraw his claims about the 1997 survey at the same time as he disavows his erroneous statements about the research of others, making appropriate apologies to all who have innocently facilitated the dissemination of his junk science.

(3) The 2002 Survey

Regarding the purpose of the new survey, Lott has this to say: “I wanted to get a rough idea of the magnitude of the effects involved.” He does not indicate what the cause of these “effects” is supposed to be. Both the statement and its incompleteness in this regard suggest that the project should not be taken too seriously. Moreover, any pretense that the 2002 findings can validate the results reported for 1997, no matter how embroidered with t-statistics, is sheer bluff. The 2002 survey data are documented; the 1997 data are only “recalled” by the investigator.

Some information about the 2002 survey is given in Lott’s document, “What Surveys Can Help Us Understand About Guns.” On that page there are also Tim Lambert’s critical comments about a number of Lott’s statements, and I have nothing to add to them. They are devastating.

But the 2002 survey does have one potentially useful feature that distinguishes it from most non-governmental surveys of defensive gun use: its screening question serves to identify all respondents who were victims of violent crimes or threats of violence. Hitherto the main source for data on both victimization and gun defense has been NCVS. In response to his first question —

During the last year, were you ever threatened with physical violence or harmed by another person or were you present when someone else faced such a situation? (Threats do not have to be spoken threats. Includes physically menacing. Attacks include an assault, robbery or rape.)

— Lott gets an unweighted sample count of 114 respondents reporting victimization, or 11% of the 1,015 sample cases. To find out if this figure is “accurate,” I looked at results reported by two other organizations.

In October, 2002, the Gallup Poll asked 1,012 respondents three questions concerning violent crime victimization:

Please tell me which, if any, of these incidents have happened to you or your household within the last twelve months:
— Money or property taken from you or other household member by force, with gun, knife, weapon, or physical attack, or by threat of force.
— You or other household member mugged or physically assaulted.
— You or other household member sexually assaulted or raped.

The “Yes” answers amounted to 1% for robbery, 3% for assault, and 2% for rape or sexual assault. The same questions asked in August-September, 2000, elicited affirmative responses by 2%, 3%, and 1%, respectively, of Gallup’s respondents. In both surveys the total for violent crime victimization was 6%. There may have been some respondents reporting more than one of these crimes. But more important, by including explicit reference to other household members Gallup was casting a larger net than did Lott. Conservatively, then, we can conclude that Lott finds twice as much violent crime victimization as Gallup. It is also relevant that in August, 2000, Gallup had an additional question — Have you, personally, ever been the victim of a crime where you were physically harmed or threatened with physical harm? Some 23% answered, “Yes.” It looks as though it would take only two or three years for Lott’s rate of 11% to cumulate to the lifetime total of 23% obtained by Gallup. Any way you look at it, Lott finds a lot more victims than Gallup.

I also consulted the NCVS report on victimization in 2001, and found that the rate for personal crimes of violence for persons 20 years old and over (for comparability with Lott) was just 2%, one-third of the Gallup rate, and less than one-fifth of Lott’s incidence.

So is this the proper question: Who is “accurate” and who has only “a rough idea of the magnitude of the effects involved”? No, that is obviously a meaningless question. Everything depends on the study design, the question(s) asked, the probes employed to confirm initial responses, and so on. NCVS has a most formidable apparatus for ascertaining and confirming victimization, much too elaborate to describe here. Gallup is quite a bit more specific than Lott. The subdivision of the question into three parts, by itself, invites the respondent to be reasonably specific about the violence experienced or threatened. It is entirely understandable that the most searching inquiry yields the lowest incidence rate and the most casual one the highest.

If the truth could be known, there must be a wide spectrum of kinds of encounters of people with people, and means of managing them, that have some element of threat or actual violence in them and some kind of resort to weaponry. Bumping into someone unintentionally can lead to an accusation of assault; a request for a handout, with a visible bulge in a pocket, can be interpreted as an attempted armed robbery. No statistic taken by itself as an estimate of how often problematical encounters occur is of any great value. Everything depends on what qualifies as a victimization and what measures are taken to elicit a coherent account of it from the respondent.

Maybe this analogy would help to convey my thought. Imagine you have filled a large box with baseballs. Although it is “full” in one sense, there is room for marbles between the large balls. And after the box has been filled with baseballs and marbles, there is still room for many tiny ball bearings. NCVS counts the baseballs; Gallup counts the baseballs and the marbles; Lott counts all the round objects. All three are entitled to call the box full. But none of them is entitled to use his results to validate or falsify the results of the others, or vice versa. In short, Lott, Gallup, and NCVS are not comparable.

Not only do they get radically different counts; they are concerned with what must be quite different distributions of kinds of events. A sophisticated controlled experiment that supports this suggestion is described in an article by David McDowall, Colin Loftin, and Stanley Presser, “Measuring Civilian Defensive Firearm Use: A Methodological Experiment,” Journal of Quantitative Criminology, Vol. 16, No. 2, 2000. They find that respondents report more gun defenses in surveys like NSDS and other surveys summarized by Kleck and Gertz than does a survey using questions like those in NCVS. Moreover, they indicate that “the NCVS and the other surveys measure responses to largely different provocations.” But Lott’s 2002 survey does not use questions that closely resemble those in NSDS or NCVS. So the most that can be said in its favor is that it is perhaps useful as a pre-test or pilot study for a larger inquiry using a very loose criterion of victimization. Why one would want to carry out such an inquiry needs to be much more carefully argued. But Lott could perhaps claim to have estimated an upper limit for the frequency of some kind of violent victimization — unless someone can think up an even looser definition of victimization than his.

The usefully skeptical question to put to any statistic is “As Compared to What?” To quote the little girl in the “Peanuts” comic strip, “That’s my new philosophy.” There is much more to be said about this principle, and I have said only a little bit of it in a 48-page memorandum: “As Compared to What? Offensive and Defensive Gun Use Surveys, 1973-94,” 2000, NCJ 185056. But anyone who claims to be a serious critic of any survey research in this field should be forced to grapple, as I did, with the details of the serious difficulties in the way of making useful comparisons. A shorter road to the horrible truth might be to read the entertaining but sobering essay by Richard Lewontin, “Sex, Lies, and Social Science,” in his collection, It Ain’t Necessarily So (New York Review Books, 2000). The parallels with gun use surveys will be obvious.

Lott has presented comparisons of his estimates of DGU frequency with those of other surveys — comparisons that are worthless because of his failure to establish comparability. When comparing his estimates of the frequency of DGUs with Kleck’s, he fails to note that (1) the 2.5 million figure published by Kleck is not restricted to violent crimes, whereas Lott’s 2002 questionnaire makes it quite explicit that he is only concerned with violent crimes; and (2) Kleck’s figure pertains to the number of gun defenders, not the number of DGU incidents, whereas Lott counts uses, not just the number of defenders. To get roughly comparable figures, note that Kleck’s report on type of crimes indicates that not more than 59.1% of DGUs in his survey involved robbery, rape, or assault. (This percentage may be slightly too high as incidents could be included in more than one category. And, strictly speaking, the type-of-crime classification pertains to the most recent incident in the preceding five years, not necessarily within the preceding 12 months.) So, in round numbers, Kleck’s statistics imply an annual 1.5 million persons defending against violent crimes with guns. Lott’s 1,015 respondents include 7 gun defenders against violent crimes. Accepting his 206.99 million as the appropriate population base, the (unweighted) estimate of one-year incidence is 1.4 million persons. A calculation by Tim Lambert using the weighted percentage of users implies an estimate of 1.8 million.
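The rough comparability calculation just described can be written out explicitly. A sketch, using only the figures given in the text:

```python
# The rough comparability calculation from the text: restrict Kleck's
# 2.5 million annual gun defenders to violent crimes (at most 59.1% of
# DGUs), and scale Lott's 7 defenders out of 1,015 respondents up to
# his population base of 206.99 million.
kleck_defenders = 2_500_000
violent_share = 0.591

kleck_violent = kleck_defenders * violent_share
lott_violent = (7 / 1_015) * 206_990_000

print(round(kleck_violent))   # 1477500, i.e. about 1.5 million
print(round(lott_violent))    # 1427517, i.e. about 1.4 million (unweighted)
```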

This calculation, which assumes closer comparability than is really warranted, is intended only to illustrate that valid comparisons require more careful reasoning than Lott has demonstrated in his statements. Other interesting, but necessarily inconclusive, comparisons of Lott’s 2002 results with the NSDS and Police Foundation statistics could be computed but would require new tabulations from those surveys, as the published figures are not restricted to violent crimes.

But no matter how Lott’s recent survey compares with surveys by others or with the unverifiable results of his purported 1997 survey, the latter can in no sense be validated by the 2002 results. Apart from everything else, Lott can only “recall” what questions he asked in 1997 and what population was sampled, so there is no possibility of establishing comparability between the two inquiries.

One more comment about the 2002 survey: By virtue of including the screening question about victimization along with the question about gun defense, the design of Lott’s questionnaire can in principle provide data on the effectiveness of that defense. (Hitherto the main source of data on that topic has been NCVS.) Lott asked all victims, “Were you or the person you were with harmed by the attack?” Tim Lambert has run tabulations from Lott’s data file. Counting incidents, the proportions reporting harm were 30/454 = 7% for those not using guns, 1/13 = 8% for gun users. The figures for numbers of persons, irrespective of how many times they were victimized, are 29/107 = 27% where there was no gun defense as compared to 1/7 = 14% for gun users. That is indeed a striking comparison, but it is practically meaningless. The Fisher Exact Test is applicable to this kind of problem. There is a good discussion of the test, and a quick calculator, available online, with which you can confirm my result. The data input is in the form

              harmed    not harmed
not users       29          78
users            1           6

The program computes the so-called p-value as .43 for the one-tailed test and .67 for the more appropriate two-tailed test. Both tests indicate that the contrast between 27% and 14% is not large enough, given the small sample size, to conclude that there is a real difference in the population from which the sample was drawn. (A p-value is usually considered significant at 0.05 or less; that is, there is no more than a 1 in 20 chance of observing a difference equal to or larger than the one in the sample if in fact there is no difference in the population.)
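For readers without access to an online calculator, the test can be computed from scratch with the Python standard library. A sketch; note that the exact one-tailed value depends on the convention used, so the figures may differ slightly from those reported by a particular calculator:

```python
# Fisher's exact test for the 2x2 table above, computed from the
# hypergeometric distribution.  The table: 29 of 107 non-users harmed,
# 1 of 7 gun users harmed (30 harmed among 114 victims in all).
from math import comb

harmed = 30          # total harmed (29 non-users + 1 user)
n = 114              # total victims (107 non-users + 7 users)
n_users = 7

def p_users_harmed(k):
    """Probability that exactly k of the 7 users fall among the 30 harmed."""
    return comb(harmed, k) * comb(n - harmed, n_users - k) / comb(n, n_users)

observed = 1
one_tailed = sum(p_users_harmed(k) for k in range(observed + 1))
two_tailed = sum(p_users_harmed(k) for k in range(n_users + 1)
                 if p_users_harmed(k) <= p_users_harmed(observed))

print(round(one_tailed, 2))   # ~0.41
print(round(two_tailed, 2))   # ~0.67
```

Either way, the p-values are far above any conventional significance threshold, which is the point at issue.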

Conclusion: Lott’s data pertaining to effectiveness of gun defense are inconclusive. As in so many other research projects, the only firm conclusion would be that “more research is needed.”

Really, it is past the time when shoe-string surveys on defensive gun use were worthwhile. There are, as a general proposition, strong reservations about the usefulness of surveys of phenomena like defensive gun use, and they should be considered very seriously before anyone else does another one. (See, e.g., the Lewontin essay mentioned earlier.) Waiving these reservations, it is absolutely clear that a sample size as large as that taken by NCVS is mandatory if firm conclusions are to be reached about whether the relative frequency of injuries or other harm or losses reported by victims can be reduced by brandishing a gun. If Lott had supposed that his survey would contribute to our confidence that there is a beneficial effect of gun use, he simply did not think through the issues of sample size and study design before collecting his data.

But at least in this project he shows an awareness of a need for data on this topic. In his past presentations of the 98% brandishing statistic, he has tacitly assumed — inasmuch as he never says anything to the contrary — that no harm comes to the defender who uses a gun. This assumption is not borne out by NSDS data. Kleck and Gertz report that 11 defenders out of their sample of 205 were attacked and injured. That amounts to 5.5% of all defenders, or about 9% of the defenders victimized in violent crimes. Consider how large a sample you would need to conclude that brandishing is as effective as shooting when a gun is used in defense. Again, “As compared to what?” is the question that Lott has not faced effectively.
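To give a feel for the sample sizes the question just raised would demand, here is a rough, illustrative power calculation (the standard normal approximation for comparing two proportions); the harm rates plugged in are hypothetical, chosen only to suggest the order of magnitude:

```python
# Rough power calculation: how many gun defenders per group would be
# needed to detect a difference in harm rates between brandishers and
# shooters at the 5% level with 80% power (two-sided test, normal
# approximation).  The harm rates below are hypothetical.
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Sample size per group for comparing two proportions."""
    return ceil((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                / (p1 - p2) ** 2)

# e.g. 9% harmed among brandishers vs. 5% among shooters (hypothetical)
print(n_per_group(0.09, 0.05))   # 635 defenders per group
```

Several hundred gun defenders would be needed in each group; and since defenders are only a small fraction of survey respondents, the total sample required is many times larger still. That is the sense in which only an inquiry on the scale of NCVS could settle the question.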

Indeed, he has obscured the issue by his erratic reporting of his own figure. Most often he writes that by brandishing the gun the defender was able to “stop” the attack. About half as often he writes “break off” an attack. One or the other of these expressions occurs in about 4/5 of his statements. But in some of the remaining ones we learn that when people brandish they “prevent” attacks. In at least one statement, they both “prevent” and “break off” the attack. It is not clear how a prevented attack can be broken off. And a few times we learn that a brandished gun will (always?) cause the offender to “flee” or to “run away.” Seriously now, did Lott’s or any other survey specifically inquire whether the offender ran away? (The National Rifle Association likes to report incidents in which the defender holds the offender at gun point until the police arrive.) In about five instances, Lott only reports that 98% of users simply brandished their weapon, without indicating what was accomplished by that. It is noteworthy that the 2002 survey has no question besides the one on harm that could elicit a specific report on what, exactly, the offender did when faced with a brandished gun, a fired gun, some other weapon, or no active defense. In short, hitherto Lott has just assumed or asserted that mere brandishing of a gun is an effective defense against violent crime. How has he got away with this kind of sloppy reasoning for so long?


(4) Other Issues

(a) The Mary Rosh caper, tawdry and distasteful as it is, is not strictly germane to the issue of the acceptability of Lott’s gun use statistics. Let the columnists and cartoonists have their fun with this.

(b) Comparisons with the Bellesiles case are irrelevant. That case seems to have been worked through, and very little about it is helpful in thinking about what Lott did and claims to have done.

(c) The same must be said about attempts to resurrect old claims about Lott’s bias as revealed by the sources of his support. The relevant question concerns the validity and value of his research, not where he got the money to do it.

(d) There is no excuse for continuing the practice of labeling critics or defenders of Lott’s work with offensive epithets and imputing motives to them. This kind of rhetoric simply obscures or distorts the plain evidence of the public record. Maybe it would help if all parties would imagine themselves in a court, serving as witnesses or attorneys. They would quickly be called down for any ad hominem remarks.

(e) But do the questions about the brandishing statistic really matter, or are we discussing a throwaway statistic appearing on one page of a book that deals primarily with something else not clearly related to it?

The short answer is that Lott has made it matter. Don’t forget his early statement:

Those who advocate letting law-abiding citizens carry concealed handguns point to polls of American citizens undertaken by organizations like the Los Angeles Times and Gallup showing that Americans defend themselves with guns between 764,000 and 3.6 million times each year, with the vast majority of cases simply involving people brandishing a gun to prevent attack. [emphasis mine]

And his frequent repetition of the brandishing statistic, especially in his media presentations, shows that he sets great store by it. That one statistic is the centerpiece of the first section of the introduction to More Guns, Less Crime, where it is surrounded by anecdotal material. For rhetorical purposes, Lott needs a statistic showing that the stories about gun defense are representative.

Another answer is that influential commentators who share Lott’s general views about gun control have made pointed use of his claim about the prevalence and character of gun defenses. I cite only the three such personages whose writings about this have come to my attention; there must be many more. In his 1/24/99 column in the Washington Post, George Will used Lott’s purported finding to argue that municipalities save large sums of money because citizen DGUs supplement police services. In an article otherwise nearly bare of statistics, appearing in U.S. News & World Report, May 31, 1999, Michael Barone cited Lott’s research and wrote, “Citizens stop crimes 2 million times a year by brandishing guns.” Recently, syndicated columnist Thomas Sowell, in his July 15, 2002, column, cited More Guns, Less Crime as the place where “Lott points out that most instances of the successful use of a gun in self-defense do not involve actually firing it.” This statement is offered in support of Sowell’s advocacy of arming airline pilots. (Ironically, in his own essay on this topic, Lott does not mention this supposed finding. No doubt he is aware that what holds for a civilian population in its accustomed environment is a bad analogy for what suicidal terrorists are likely to do on an airplane.)

In any event, the issue is not how important a statistic is but whether there is justification for presenting it as a result of a scientific investigation. In the absence of documentation, there is not. And presenting an unverifiable statistic, however minor in importance it may be, is not acceptable behavior for an ethical investigator.

(5) Conclusions

The relevant standards here are not necessarily those proposed by commentators and combatants in the gun debate. They are the same as the standards that apply to biomedical experiments, surveys of consumer spending patterns, historical reconstructions of demographic data, or any other domain of empirical science, including social science. See, e.g., the statement on standards by the American Association for Public Opinion Research:

Investigators are obliged to tell the truth about what they take from the work of other investigators and to provide verifiable evidence and complete documentation for statements made in reports on their own research. They are responsible for telling the “whole truth” about it, to use the legal phraseology, and for enabling others to confirm or falsify their results. As far as his claim about the evidence on gun brandishing obtained in 1997 is concerned, John Lott has failed on these counts. Keep in mind that the burden of proof is his, as in all inquiry. In the vernacular, Put up or shut up.

Despite his admission that he cannot document his 1997 survey, Lott continues to discuss the alleged findings from that project as if they are acceptable as statistical evidence. (See “What Surveys Can Help Us Understand About Guns?” cited earlier.) It appears that he has learned nothing from his critics about the ethical requirements of the scientific enterprise. So it is time for Lott’s supporters to advise him that his best course of action now is to retract his claims concerning the 1997 survey. Say-so and “recall” that get more elaborate as time goes by are simply not acceptable. And it is long past time for him to retract his manifestly false allegations about what other investigators have found. His failure to do so is much more reprehensible than the Mary Rosh foolishness.

People who violate traffic laws can be sent back to school to relearn the etiquette of the road. Would that there were a comparable resource for those who violate professional ethics.

The Lott episode is just one incident in a seemingly inexorable trend toward eliminating professionally competent research from discussions of social policy or overwhelming it with junk science. If that trend is not halted, the life blood of democracy itself will dry up. The people cannot make sensible choices without reliable information.

A good cause does not need junk science. That too is my new philosophy.

[This article was finished on 4/11/03, and I do not plan on making any further public statements about Lott’s research unless some really important new factual statements about the topics of sections (1) and (2) turn up. The probability of this happening seems low. But I have authorized Tim Lambert to make editorial insertions, so identified, in my text by way of correction, clarification, amplification, or summary of new information that he feels are indicated. His writing and mine will be clearly distinguished, of course, and the responsibilities likewise.]