What's the bigger crime: Religion, non-religion, or meta-analysis?

Eric Schwitzgebel has been doing a lot of thinking about the relationship between thinking about ethical behavior and actually behaving ethically. In his most recent post, he takes on a meta-analysis claiming that religious belief correlates negatively with criminal activity:

I found a 2001 "meta-analysis" (Baier & Wright) of the literature that shows all the usual blindnesses of meta-analyses. Oh, you don't know what a meta-analysis is? As usually practiced, it's a way of doing math instead of thinking. First, you find all the published experiments pertinent to Hypothesis X (e.g., "religious people commit fewer crimes"). Then you combine the data using (depending on your taste) either simplistic or suspiciously fancy (and hidden-assumption-ridden) statistical tools. Finally -- voila! -- you announce the real size of the effect. So, for example, Baier and Wright find that the "median effect size" of religion on criminality is r = -.11!

Schwitzgebel, as you might expect, doesn't buy Baier & Wright's conclusion. Not having read the study, I can't really comment on it. I'm also not an expert on meta-analysis, but it does strike me that a few of the points Schwitzgebel makes about meta-analysis might not be as solid as he suggests. I'll discuss those below.

First, Schwitzgebel cites Robert Rosenthal's estimate that for every journal article published, five go unpublished because they didn't get clear results. Now, if you look at the rejection rates (PDF) for APA journals (they averaged 86 percent in 2005), that sounds like a conservative estimate. However, getting rejected from one journal doesn't mean an article goes unpublished. Usually a scholar just resubmits to the next journal on her list, so arguably the percentage of studies that actually get published is much higher than the per-journal acceptance rate suggests (another way of putting this is that one study can account for several rejections but still just one publication).
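To see why resubmission matters, here's a minimal sketch with my own toy assumption (that authors try up to five journals; that number isn't from the post) and the 14 percent per-journal acceptance rate implied above:

```python
import random

random.seed(1)

ACCEPT_RATE = 0.14      # per-journal acceptance implied by an 86% rejection rate
MAX_SUBMISSIONS = 5     # hypothetical: authors give up after five journals

def eventually_published():
    # The study is published if any one of its submissions is accepted.
    return any(random.random() < ACCEPT_RATE for _ in range(MAX_SUBMISSIONS))

trials = 100_000
published = sum(eventually_published() for _ in range(trials))
print(f"published after up to {MAX_SUBMISSIONS} tries: {published / trials:.0%}")
# -> about 53% (analytically, 1 - 0.86**5), far above the 14% single-journal rate
```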

Further, a study might go unpublished not because it gets null results, but because it replicates previous work. So Schwitzgebel's assumption that all those unpublished correlations should get averaged in as zero is also suspect.

I would also submit that larger studies are very unlikely to go unpublished, even when they have a null result. If you conduct a survey of thousands of people, someone's going to publish it even if you've found nothing, just so that data set is out there. Let's suppose in a given period that 100 studies are undertaken. Four of those studies have 5,000 participants, sixteen have 500 participants, and the remaining eighty have 100 participants. In all likelihood, all four of the large studies will be published. Somewhat fewer than half the medium studies -- say six -- will be published, along with a smaller share of the small studies. If we believe Rosenthal's estimate that only 20 percent of all studies are published, then perhaps ten of those small studies will make it through peer review. That means that of the 36,000 participants studied, 24,000 are represented in published journal articles. A meta-analysis of this data, then, would consider two-thirds of the research participants in that area of study -- far more than Rosenthal's gloomy 20 percent estimate suggests (in fields such as cognitive psychology, where participant pools are nearly universally small, Rosenthal's estimate would still apply).
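Here's the arithmetic spelled out, using the hypothetical counts from the paragraph above:

```python
# (studies undertaken, participants per study, studies published)
cohorts = [
    (4, 5000, 4),    # large studies: all published
    (16, 500, 6),    # medium studies: somewhat fewer than half published
    (80, 100, 10),   # small studies: the rest of the 20-study quota
]

total_participants = sum(n * size for n, size, _ in cohorts)        # 36,000
covered_participants = sum(size * pub for _, size, pub in cohorts)  # 24,000
studies_published = sum(pub for _, _, pub in cohorts)               # 20 of 100

print(f"studies published: {studies_published}/100")                       # 20%
print(f"participants covered: {covered_participants / total_participants:.0%}")  # 67%
```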

Another critique Schwitzgebel levels at the Baier & Wright study is also suspect: He says that "A 'median effect size' of religion on criminality of r = -.11 means that half the published studies found a correlation close to zero." Actually, it doesn't mean that. It could mean that some studies found a positive correlation. It could mean that all the studies are clustered right around the r = -.11 level. But even assuming that exactly half the studies found a negative correlation larger in magnitude than r = -.11 and the other half found no correlation at all, that's still a significant correlation for the sample sizes we're talking about. Just because some religious people commit crimes doesn't mean that, overall, religion isn't associated with somewhat less criminal behavior.
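To make that concrete, here are two made-up sets of study correlations with the same median; the median alone can't tell them apart:

```python
from statistics import median

clustered = [-0.13, -0.12, -0.11, -0.10, -0.09]   # everything near -.11
mixed = [-0.45, -0.30, -0.11, 0.00, 0.20]         # includes null and positive r

print(median(clustered), median(mixed))           # both -0.11
```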

The most important critique, however, is quite valid: we're talking about correlation here, not causation. This result doesn't suggest that joining the church will "cure" criminals or prevent future criminal behavior. At best it says that people who join churches also tend -- in a very small way -- to be people who don't commit crimes.

Update: I should also add that we'd probably see a similar -- or even larger -- negative correlation between being a scientist and criminality, or even between simply having a job and criminal activity. Also, Schwitzgebel's concerns about church sponsorship of these studies and the questionable nature of the journals they appear in are important and relevant to the issue.


Oh, and one more thing: A lot of research isn't published because it's bad -- poorly controlled, poorly executed, and so on. For example, a study might claim to correlate religious belief with criminality while neglecting to survey any actual criminals. This sort of thing happens all the time, and those studies should be excluded not just from journals, but also from meta-analyses.

Meta-analysis again? Is it just me, or are a lot more of these being reported? I have put up screeds on several sites about the uselessness of almost all meta-analysis studies for gleaning real correlations. In a meta-analysis you are at best only as good as the worst study in the group. Likely worse. Why?

- You have to decide what the set of base studies is. Even excluding a study biases the results. How do you choose? You may actually have good criteria, but how do you KNOW that with some confidence without doing a study of the criteria themselves?

- You have to check that the data from the base studies are comparable. Generally they aren't, so then you have to:

- Guess at some transfer function on one or both sets of results to make them comparable. You are doing this even if all that is happening is, for example, taking the averages and averaging them. Or do you weight by size? Oh wait, the base data aren't on the same scale, so... how do you know what you are doing is reliable in this instance? (See the sketch after this list for how the standard weighting step looks.)

- Somehow combine the ERROR estimates, just as in the previous point. If the studies are quite divergent, their error structures will be even more so.

- Then you still have to do all the regular checks, just as in a regular study, and figure out error estimates and such separately from the underlying ones.

- ... well at this point I do not believe there is much real quantitative information left.
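For reference, here is what the standard mechanics look like: a minimal fixed-effect sketch using textbook inverse-variance weighting, with made-up study numbers. Each complaint above shows up as a silent assumption baked into a line of code -- that the r values are comparable, that the weights are right, that one error model fits all the studies:

```python
import math

# (correlation r, sample size n) for each hypothetical base study
studies = [(-0.15, 200), (-0.08, 1500), (-0.20, 90), (0.02, 400)]

def fisher_z(r):
    # Fisher transform: puts correlations on a common, roughly normal scale.
    return 0.5 * math.log((1 + r) / (1 - r))

# Inverse-variance weights: var(z) is about 1/(n - 3) for Fisher-transformed r.
weights = [n - 3 for _, n in studies]
pooled_z = sum(w * fisher_z(r) for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_r = math.tanh(pooled_z)        # back-transform to the r scale
se = 1 / math.sqrt(sum(weights))      # standard error of the pooled z

print(f"pooled r = {pooled_r:.3f}")
print(f"95% CI (z scale): [{pooled_z - 1.96 * se:.3f}, {pooled_z + 1.96 * se:.3f}]")
```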

I am still looking for some serious correlation that was discovered by meta-analysis that isn't better understood without it.

I do think meta-analysis might be useful in looking at how good the studies themselves are. Meta-analysis is not an analysis of data but, just as the name says, an analysis of the analyses! That is where the usefulness may come from.

It is also possible to merge raw data from multiple studies - but that is re-analysis, not meta-analysis. You are still looking at data, or a simple transformation thereof, not at the results of studies...
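The difference matters numerically. A toy contrast (made-up numbers): averaging study-level results weights each study equally, while pooling the raw data weights each participant equally:

```python
study_a = [1.0] * 10      # small study: 10 participants, mean 1.0
study_b = [3.0] * 1000    # large study: 1,000 participants, mean 3.0

# Meta-analysis style: average the study-level means (each study counts once).
meta_style = (sum(study_a) / len(study_a) + sum(study_b) / len(study_b)) / 2

# Re-analysis style: pool the raw data (each participant counts once).
pooled = sum(study_a + study_b) / (len(study_a) + len(study_b))

print(meta_style)  # 2.0
print(pooled)      # ~2.98
```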

Well the regular M-A rant is over...

Excellent piece. However, I found your headline misleading & in dubious taste. None of those things is a "crime." If you're using it as a teaser, please leave that sort of cheapness to the tabloids.

Well, Dave, as you probably gathered, I was feeling a bit crotchety when I posted that. I think it was partly frustration at spending so much time reading so many poor studies and thoughtless meta-analyses. (I'll be cooking up a related post on the literature I've been reviewing on the effectiveness of business ethics courses.)

A few further thoughts:

* Indeed, half the studies included in the meta-analysis found very small correlations (absolute value less than .11). There were no strongly positive correlations.

* In your comments above you neglect to consider the many studies that don't even get submitted to journals. These can be completed studies, but often they are simply pilot studies or student projects that don't seem to have very promising results. Many of these are small studies with null results. If an *optimistic* estimate of the effect size is r = +/- .11, then we should expect many small studies to have null results. (A rough power check follows this list.)

* As I mentioned in the original post, the large studies were generally the ones that found the smaller correlations. Not a good sign.

* I like your last point. As Paul Meehl pointed out, pretty much everything correlates with everything. In the causally complex social world, correlations as small as this say little unless they are very well controlled (which these are not).
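On the point about small studies and null results: a rough normal-approximation check (my own sketch, not Schwitzgebel's figures) of how often a study of 100 participants would detect a true effect of r = .11 at p < .05:

```python
import math

r, n, z_crit = 0.11, 100, 1.96         # true effect, study size, two-sided 5% cutoff

z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of the true effect
se = 1 / math.sqrt(n - 3)              # standard error of the observed z

# Probability the observed z lands beyond the critical value (normal CDF;
# ignores the negligible opposite tail).
power = 0.5 * (1 + math.erf((z / se - z_crit) / math.sqrt(2)))
print(f"power = {power:.0%}")          # about 19%: most such studies come up null
```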

How about redefining crime... where mass-murdering innumerable Iraqis is a crime, blowing up buildings is a crime, the lynching of witches is a crime... killing innocent people with a signature is a crime... Now let us count the number of people who go to churches, mosques, and temples, and their contributions and correlation to crime.

In my observation, the correlation between crime and religion should be positive, since less educated people are more religious, and less educated people also tend to have smaller incomes and to commit more crimes. Please note that I am not saying one has an effect on the other; it is just like the anecdotal correlation between intelligence and foot size in kids - obviously, most kids get smarter as they grow up.

I played around with the GSS data a bit on this question:
http://www.aleph.se/andart/archives/2007/01/criminal_because_of_god_or_…

My conclusion was that crime causes fundamentalism as a defence reaction, rather than fundamentalism making people more or less criminal. Fundamentalists do not appear to commit more crimes, but they live in more unsafe, lower-class environments and experience more fear in their daily life, which would lead to a mortality-salience-driven increase in conservatism.