Eric Schwitzgebel has been doing a lot of thinking about the relationship between thinking about ethical behavior and actually behaving ethically. In his most recent post, he takes on a meta-analysis claiming that religious belief correlates negatively with criminal activity:
I found a 2001 “meta-analysis” (Baier & Wright) of the literature that shows all the usual blindnesses of meta-analyses. Oh, you don’t know what a meta-analysis is? As usually practiced, it’s a way of doing math instead of thinking. First, you find all the published experiments pertinent to Hypothesis X (e.g., “religious people commit fewer crimes”). Then you combine the data using (depending on your taste) either simplistic or suspiciously fancy (and hidden-assumption-ridden) statistical tools. Finally — voila! — you announce the real size of the effect. So, for example, Baier and Wright find that the “median effect size” of religion on criminality is r = -.11!
Schwitzgebel, as you might expect, doesn’t buy Baier & Wright’s conclusion. Not having read the study, I can’t really comment on it. I’m also not an expert on meta-analysis, but it does strike me that a few of the points Schwitzgebel makes about meta-analysis might not be as solid as he suggests. I’ll discuss those below.
First, Schwitzgebel cites Robert Rosenthal’s estimate that for every journal article published, five go unpublished because they didn’t get clear results. Now, if you look at the rejection rates (PDF) for APA journals (they averaged 86 percent in 2005), that sounds like a conservative estimate. However, getting rejected from one journal doesn’t mean an article goes unpublished. Usually a scholar just resubmits to the next journal on her list, so the percentage of studies that eventually get published is arguably much higher than the rejection rate alone would suggest (another way of putting this is that a single study can account for several rejections but only one publication).
Further, a study might go unpublished not because it gets null results, but because it replicates previous work. So Schwitzgebel’s assumption that all those unpublished correlations should get averaged in as zero is also suspect.
I would also submit that larger studies are very likely to find publishers, whether or not they have a null result. If you conduct a survey of thousands of people, someone’s going to publish it even if you’ve found nothing, just so that data set is out there. Let’s suppose that in a given period 100 studies are undertaken. Four of those studies have 5,000 participants, sixteen have 500 participants, and the remaining eighty have 100 participants. In all likelihood, all four of the large studies will be published. Somewhat fewer than half the medium studies will be published, and a smaller proportion of the small studies. If we believe Rosenthal’s estimate that only 20 percent of all studies are published, then perhaps ten of these small studies will make it through peer review. That means that of the 36,000 participants studied, over 24,000 are represented in published journal articles. A meta-analysis of this data, then, would cover over two-thirds of the research participants in that area of study, far more than Rosenthal’s gloomy 20 percent estimate (in fields such as cognitive psychology, however, where participant pools are nearly universally small, Rosenthal’s estimate would still apply).
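As a sanity check, the arithmetic above can be run as a quick script. The exact publication counts for the medium studies are my own illustrative assumption (seven, i.e. “somewhat fewer than half”); the large and small counts follow the text.

```python
# Back-of-envelope check of the publication-coverage numbers above.
# Assumption (mine, for illustration): 7 of the 16 medium studies are
# published; with 4 large and 10 small, that's 21 of 100 studies,
# roughly matching Rosenthal's 20-percent publication estimate.

studies = [
    # (number of studies, participants each, number published)
    (4, 5000, 4),    # large surveys: all published
    (16, 500, 7),    # medium studies: somewhat fewer than half
    (80, 100, 10),   # small studies: perhaps ten survive peer review
]

total = sum(n * size for n, size, _ in studies)
published = sum(size * pub for _, size, pub in studies)

print(f"Total participants: {total}")          # 36000
print(f"Published participants: {published}")  # 24500
print(f"Coverage: {published / total:.0%}")    # 68%
```

On these assumptions the published literature covers about 68 percent of all participants, even though only about a fifth of the studies appear in print.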
Another critique Schwitzgebel levels at the Baier & Wright study is also suspect: He says that “A ‘median effect size’ of religion on criminality of r = -.11 means that half the published studies found a correlation close to zero.” Actually, it doesn’t mean that. It could mean that some studies found a positive correlation. It could mean that all the studies are clustered right around the r = -.11 level. But even assuming that exactly half the studies found a negative correlation larger than r = -.11 and the other half were exactly null, that’s still a significant correlation for the sample sizes we’re talking about. Just because some religious people commit crimes doesn’t mean that, overall, religion isn’t negatively associated with criminal behavior.
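To see why a median alone underdetermines the distribution, consider a toy example. The correlation values below are invented purely for illustration and have nothing to do with Baier & Wright’s actual data; they just show three very different patterns of results that all share a median of r = -.11.

```python
# Three hypothetical sets of study correlations, all with median r = -.11.
# These numbers are made up to illustrate the point, not from Baier & Wright.
from statistics import median

clustered = [-0.13, -0.12, -0.11, -0.10, -0.09]  # all studies near -.11
mixed     = [-0.40, -0.25, -0.11, 0.05, 0.20]    # some positive correlations
half_null = [-0.30, -0.25, -0.11, 0.00, 0.00]    # some studies essentially zero

for name, rs in [("clustered", clustered), ("mixed", mixed), ("half_null", half_null)]:
    print(name, median(rs))  # prints -0.11 in every case
```

A reported median of -.11 is consistent with any of these pictures, so Schwitzgebel’s “half found a correlation close to zero” reading is only one possibility among several.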
The most important critique, however, is quite valid: we’re talking about correlation here, not causation. This result doesn’t suggest that joining the church will “cure” criminals or prevent future criminal behavior. At best it says that people who join churches also tend — in a very small way — to be people who don’t commit crimes.
Update: I should also add that we’d probably see a similar, or even larger, negative correlation between being a scientist and criminality, or even between simply having a job and criminal activity. Also, Schwitzgebel’s concerns about church sponsorship of these studies and the questionable nature of the journals they appear in are important and relevant to the issue.