James B. Clark writes:

Here’s your table. You didn’t include the data from 1967-1972, so I

have no idea what it looks like. If you pick 1975, 1976, 1977, or

1978 I would bet that a decline in the homicide rates would be

detected by the methods in the Loftin study.

Probably. All this means, however, is that because the data is noisy,

statistics cannot tell us exactly when the drop occurred.

Charles Scripter writes:

Curious. If the data is so noisy that one cannot determine exactly

when the “drop” occurred, then Pim certainly cannot claim that the

“drop” corresponds to the gun ban.

He cannot claim that the drop occurred **exactly** at the time of the ban.

He can claim that the drop occurred at **about** the time of the ban.

Even if we believe that all of Kleck’s respondents were truthful, his

estimate of the number of DGUs is plus or minus 600,000 (95%

confidence interval). I don’t see you complaining that people cannot

claim that the number is 2,549,862 because of this uncertainty.
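The interval-estimate point above can be sketched numerically. This is a minimal illustration only: the point estimate and margin of error are the figures quoted in the post, while the survey internals behind them are not reproduced here.

```python
# Sketch of a point estimate with its 95% margin of error.
# Figures are those quoted above; this is illustrative, not
# a reconstruction of Kleck's survey methodology.

point_estimate = 2_549_862   # point estimate of annual DGUs
margin = 600_000             # 95% margin of error quoted above

low, high = point_estimate - margin, point_estimate + margin
print(f"95% CI: [{low:,}, {high:,}]")  # 95% CI: [1,949,862, 3,149,862]

# The width of the interval does not invalidate the point estimate;
# it only bounds how far the true value plausibly lies from it.
```

The same logic applies to the timing of the homicide drop: uncertainty about the exact year does not preclude saying the drop happened at about the time of the ban.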

James B. Clark writes:

If they had used 1976 as the cutoff instead of 1977, they’d

have gotten even more statistically significant results. The

problem is, you can’t look at data, notice a trend, then test for

the trend.

Which they didn’t do. (Else they would have used 1976 as the cutoff

as you note.)

Charles Scripter writes:

If the “statistical significance” test would have indicated 1976,

then how did they choose 1977? Ah, I understand… The authors

wished to find a correlation with the 1977 gun ban, and massaged the

data to match…

You’re confused. I’ll say it again: the problem is, you can’t look at

data, notice a trend, then test for the trend. This is a real trap in

the analysis of quasi-experimental data. With an experiment you can

spot a trend and then collect more data to test the significance.

This is not possible in a quasi-experiment. You have to start with

the hypothesis to be tested before you go looking at the data.