Correlation v. Causation

Via xkcd, a strip that's unusually clever even by the standards of that unusually clever comic:

[xkcd comic on correlation vs. causation]

It goes right to the heart of one of the greatest philosophical difficulties of science. All we can do is measure correlation. We can never be assured that we're not just getting lucky, that the fundamental-seeming physical laws we deduce aren't merely flukes.

But science does seem to work pretty well, and lots of things that correlate with each other do in fact cause each other. We seem able to tease apart the relationship on a case-by-case basis with fair accuracy, which is one reason I make a habit of not worrying too much about the philosophy of science. Sure, someone ought to, but I'm largely satisfied by the fact that science has worked so well so far.

On the other hand, tomorrow I could wake up and find out that the entirety of statistical mechanics was all a gigantic coincidence and the studying I've been doing is now useless. I sure hope not though.

I happen to really dislike the notion that correlation implies causation, or indeed, that correlation implies anything at all. Instead of repeating myself, I'll just link to one of my previous rants on the subject.

By ObsessiveMathsFreak (not verified) on 06 Mar 2009 #permalink

We can only measure correlation?

What about looking for mechanisms? If we correlate exposure to benzene with cancer, and also observe a mechanism - that oxidation products of benzene bond to or tear apart DNA, then that's a hell of a lot better than just correlation, isn't it?

By Moderately Unb… (not verified) on 06 Mar 2009 #permalink

ObsessiveMathsFreak,
I think you need to redo your calculations on how Saturn affects the S&P500. I suspect that r<<0.88 :)

On a more serious note, in response to your linked claim that correlation is, at the very best, bookkeeping: I think it's more than that. It is a way to quantify observation. We can do an experiment and say two things have a relation, but we need correlation to say how strong that relationship is. Alone that's meaningless, but as part of a larger hypothesis or experimental model, correlation is vital.

Oops, I should have previewed the comment. r is much less than 0.88; I used a symbol that makes HTML unhappy.

OMF, the correlation coefficient assumes independent data, and time series are autocorrelated (successive observations tend to be similar, for obvious reasons), so an r of 0.88 is a pretty meaningless number.
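
To see the point concretely, here is a minimal sketch in Python (numpy assumed; the data are simulated, not OMF's actual series): two random walks built from completely independent increments routinely show a large sample r, which vanishes once you correlate the independent increments instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two random walks whose increments are completely independent.
walk_a = np.cumsum(rng.normal(size=n))
walk_b = np.cumsum(rng.normal(size=n))

# Pearson r between the raw (autocorrelated) levels is often large...
r_levels = np.corrcoef(walk_a, walk_b)[0, 1]

# ...but r between the increments (independent observations) is near zero.
r_diffs = np.corrcoef(np.diff(walk_a), np.diff(walk_b))[0, 1]

print(f"r of levels:     {r_levels:+.2f}")  # frequently far from zero
print(f"r of increments: {r_diffs:+.2f}")   # close to zero
```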

You need another example to distinguish among the possibilities (A->B | B->A | C->[A,B]).

"If we correlate exposure to benzene with cancer, and also observe a mechanism - that oxidation products of benzene bond to or tear apart DNA, then that's a hell of a lot better than just correlation, isn't it?"

Which is just to say that you're taking into account the additional correlation between DNA damage and cancer. Now, I agree with you completely; I'm just saying that from a purely philosophical standpoint it's still measuring a correlation.

Regarding the example of benzene and cancer (not that I mean to make any statement on the subject), it can also be the case that benzene exposure would cause cancer if it were present in sufficient amounts, but it never is. In that case the correlation follows from benzene exposure being correlated with something else that does cause cancer.

Recently I heard a news report that wine reduces esophageal cancer but beer and hard liquor do not. Of course my mind jumped to the conclusion that the authors of the report are members of the upper class and are more likely to be wine drinkers than beer drinkers. But the same sort of thing can explain the correlation: maybe beer nuts [or any of a billion other things] cause cancer, and wine drinkers tend to avoid beer nuts.

By Carl Brannen (not verified) on 06 Mar 2009 #permalink

Validation is suggestive; falsification is definitive. Space is isotropic, so by Noether's theorem angular momentum is locally conserved. A reproducible counter-demonstration would be a disaster! (an undeserved boon for untenured faculty; possibly tough on ice skaters) Theorists boast promiscuity while empiricists pay child support.

Euclid was good for 2000 years; Euclid was incomplete. Newton was good for 200 years; Newton was incomplete. Load an Eotvos balance with chemically identical, single-crystal, enantiomorphic test masses in opposition: space groups P3(1)21 (right-handed screw axes) and P3(2)21 (left-handed screw axes). If the net output is zero, that's SOP; no problem. If the net output is not zero... space is consistent with prior observation but anisotropic toward chiral mass distributions. How much fun would that be?

The problem of incorrectly inferring causation from correlation is so old it has a Latin name: Post hoc ergo propter hoc (literally, "after this, therefore caused by this").

You cannot use statistics to prove a point directly; at best you can only prove that alternative hypotheses are false. What it can do is tell you where to look for causal mechanisms. To take MUS's example: Benzene exposure and DNA damage are both correlated with cancer. Laboratory experiments tell us that benzene and/or its oxidation products damage DNA. That does not prove that benzene exposure causes cancer (or, for that matter, that DNA damage causes cancer). It does, however, mean that if DNA damage can be shown in controlled experiments to cause cancer, we would then be able to state that benzene exposure causes cancer. You have to establish both links in the chain.

Note that this does not work in the other direction: Even if you established by controlled experiments that benzene exposure causes cancer, that does not prove that DNA damage causes cancer. It would still be possible (at least in philosophical principle) for benzene exposure to cause cancer by some other mechanism. You would have to do further studies to rule out all of these alternative mechanisms.

By Eric Lund (not verified) on 06 Mar 2009 #permalink

A minor quibble: statistics can directly prove a point by disproving a hypothesis with experimental (as opposed to observational) data. The benzene -> DNA damage -> cancer example requires "both links in the chain," which, as Eric correctly notes, makes it an indirect link.

BTW: Don't forget to visit the original XKCD comic and view the "mouse over" message, or you are missing half the fun.

The problem lies not with correlations themselves, but with the fact that a certain proportion of correlations are expected to occur purely by coincidence, and therefore the chance of any randomly encountered correlation being due to a causal relationship is generally quite low. However, if some particular quantities are predicted to be correlated in advance by a theory, then any subsequently observed correlation can be construed as good evidence for there actually being a causal relationship. Simply fishing for correlations in the absence of any theory is bound to turn up false positives, which is why the general rule "correlation does not imply causation" rings true; but this does not necessarily mean that correlation is never evidence for causation.

By Anonymous (not verified) on 06 Mar 2009 #permalink
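
That expected proportion of coincidental correlations is easy to demonstrate. A minimal sketch (assuming numpy; all data simulated): among many pairs of mutually independent variables, a handful will look respectably correlated purely by chance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vars, n_obs = 40, 30

# 40 mutually independent variables, 30 observations each.
data = rng.normal(size=(n_vars, n_obs))
r = np.corrcoef(data)  # 40 x 40 matrix of pairwise sample correlations

# Count distinct pairs whose |r| exceeds 0.4 purely by luck.
upper = np.triu_indices(n_vars, k=1)
lucky = np.sum(np.abs(r[upper]) > 0.4)

print(f"{lucky} of {len(upper[0])} independent pairs have |r| > 0.4")
```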

Well, the contrapositive is always true and very useful, though: while correlation does not imply causation, the lack of correlation certainly implies the lack of causation.

By ScentOfViolets (not verified) on 06 Mar 2009 #permalink

ScentOfViolets at #12 is not correct either: absence of correlation is not evidence of absence of causation, for two sorts of reasons. The commonest reason is that the sample is too small to detect the effect, but you can also have systematic "suppression" effects.

Example: in a certain population earnings rise with experience, and with education. However, if access to education is growing strongly over time, experience (age) will be negatively correlated with education. In certain circumstances, the effects can cancel each other out, such that, for instance, the correlation between education and earnings may be insignificantly different from zero.
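
That suppression scenario is easy to simulate. A minimal sketch (assuming numpy; all the numbers are invented): education genuinely raises earnings, but because younger cohorts are more educated and less experienced, the unconditional correlation between education and earnings comes out near zero, while controlling for age recovers the real relationship.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

age = rng.uniform(25, 65, n)
educ = 16 - 0.25 * (age - 45) + rng.normal(0, 1.5, n)  # access to education grows over time
exper = age - educ - 6                                 # rough years in the workforce
# Both effects are real; coefficients chosen so they roughly cancel marginally.
earn = 4.0 * educ + 1.0 * exper + rng.normal(0, 5, n)

def partial_r(x, y, z):
    """Correlate x and y after regressing the control z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(f"unconditional r(educ, earn): {np.corrcoef(educ, earn)[0, 1]:+.2f}")  # near zero
print(f"r(educ, earn) given age:     {partial_r(educ, earn, age):+.2f}")     # clearly positive
```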

Per #8, falsification is the key. You say that you don't worry much about the philosophy of science, but if it were not for Popper and related philosophical thinkers, science might be very different from what it is today. Go back a few hundred years in scientific method and induction, rather than falsification, was very much the order of the day. The 20th century brought a great deal of solidification of the scientific method, and it's important not to forget that. It's one of the reasons science "seems" to work, and why it didn't really work so well in the past.

No, Brendan, that is simply incorrect. You are confusing being able to detect a correlation with the correlation itself. This is easily seen by using the implication p->q and taking its contrapositive, ~q->~p. In the specific instance, since causation implies correlation, lack of correlation implies lack of causation.

Since the implication runs both ways, i.e., p->q iff ~q->~p, what you are saying is that causation does not imply correlation. Well, no, that's simply untrue. But it's certainly possible that, say, observed correlations between the position of Mars in the sky and the height of the tides are nonexistent.

By ScentOfViolets (not verified) on 07 Mar 2009 #permalink
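
For what it's worth, the propositional claim itself can be checked exhaustively. A tiny sketch verifying that p->q and its contrapositive ~q->~p agree on every truth assignment:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material implication: p -> q

# Check p -> q against ~q -> ~p on all four truth assignments.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

print("p -> q and ~q -> ~p agree everywhere: they are logically equivalent")
```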

ScentofViolets, I am not at all sure what you mean by "correlation", but I am using it in the sense of a Pearson correlation coefficient calculated on sample data (though the logic applies to any measure of association). You seem to be using it in some far more essential sense, in which it might really be there even if one can't detect it.

Note that I am not talking, in the second example, about the sample being too small to detect the effect, but about the idea that the true value of the unconditional correlation might really be zero, while the correlation conditional on a third variable (and, anteriorly, the causal effect) might be real.

Contrariwise, I am not at all sure what you mean; all I can see is that you're saying certain samples won't show a correlation when we know there is causation. So what? This happens all the time. In your example, for instance, if you control for age the correlation (presumably) shows up quite strongly.

You do agree that causation implies correlation, yes? What I am saying is just the logical equivalent. The only way for the contrapositive to be false is if you do not think that causation implies correlation. Using your example, suppose you knew education was positively correlated with income yet did not see it in your sample. What would you conclude?

Are you trying to say that there is the problem of inferring a true lack of correlation, as opposed to only knowing that there are no correlations in your particular data set? Well, sure. But again, hardly news.

By ScentOfViolets (not verified) on 07 Mar 2009 #permalink

SoV, we're clearly using "correlation" differently. I'm using it to mean a pattern you observe (or not) in a data set, but you seem to have something more essential (and less easily observed) in mind.

In my terms, it is certainly possible for causation to be present with a correlation of zero in the population (not just the sample, though we will only ever observe samples) if suppression is going on. Yes, we detect this by controlling for the suppressor variable, if (i) we have reason to believe one exists, (ii) we have an idea what it is, (iii) it is in the data set, and (iv) it is actually measurable. That is, it may be hard or even impossible to elicit the conditional correlation.

In your terms, an absence of correlation implies an absence of causation. However, since correlation in your sense has a rather more complex relationship to the data, knowing that you have "an absence of correlation" is far harder (how do you know that you have controlled for all possible suppressor variables?). Thus while your initial statement is true (for your definition of correlation) it is not very powerful. For my definition, your statement would be false, but very powerful!

Powerful in something other than the statistical sense :-) I would tend to disagree about the power of this formulation, though, because this is exactly what falsification of a hypothesis is all about, just stated more carefully. Let me use the following example, not because of its politics, but to illustrate what I mean.

Up until fairly recently, some people would have it that Republicans are simply better on economic issues. Really? A guy calling himself Cactus went through some very basic data mining to see if this was the case, and it turns out that by any of a number of reasonable metrics (GDP, job growth, etc.) it is not. Note that I don't say that Democrats are better for the economy, merely that Republicans haven't proven their case that they are.

Anyway, a number of readers suggested various reasons why Republicans really are better than Democrats on this set of issues even though it wouldn't necessarily show up in this type of analysis. And so various possibilities were tried: maybe it's lag effects; maybe it's monetary vs. fiscal policy; maybe adverse external events just happened, by sheer bad luck, to occur on their watch. Time and time again, these were shown to inadequately account for the results (assuming, nonstatistically, that the hypothesis was true). In the end, some people suggested that there were unaccounted-for effects, and that not knowing what those effects were didn't mean they didn't exist.

And this is just what I mean by the power of my formulation of the statement: it's all very well to say that falsification is the key, but how do we know when a hypothesis has been falsified? Because, quite simply, the predicted correlations don't exist, i.e., ~q->~p. But, as you note, it's not enough to say that there are no observed correlations; this has to hold over a whole range of circumstances. The notion that lack of correlation implies lack of causality is then just a way of ticking off the various options, which can often be ordered in a quite useful way.

By ScentOfViolets (not verified) on 08 Mar 2009 #permalink

There are two problems with making the link from correlation to causation. The first, the problem of coincidence, means that the very existence of a correlation requires statistics and is to some extent a statement of probability. The more severe problem is that the two correlated observations may both be consequences of an unknown common cause, instead of there being a direct causal relationship between them. The scientific strategy for dealing with this possibility is experiment: force one of the two correlated phenomena to occur using an experimental intervention (a known cause of one phenomenon that has no possibility of directly causing the other) and then see if the correlation persists. There are two caveats: (a) sometimes there is no practicable method of producing the required singular effect, and (b) it is hard to absolutely eliminate the possibility that the experimental intervention is capable of producing both phenomena through an unknown mechanism. Experimental controls, such as placebos, are designed to test for such potential confounds, but it is hard to be certain that they have been eliminated entirely. Nevertheless, it is reasonable to say that a correlation that persists in an appropriate experimental paradigm provides strong evidence for a causal relationship.
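
The intervention strategy described above is also easy to simulate. A minimal sketch (assuming numpy; everything invented): a hidden common cause C drives both A and B, so they correlate observationally, but forcing A to values of our own choosing destroys the correlation, revealing that A does not cause B.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Observational world: hidden common cause C drives both A and B.
c = rng.normal(size=n)
a_obs = 0.8 * c + rng.normal(0, 0.6, n)
b = 0.8 * c + rng.normal(0, 0.6, n)

# Experimental world: we set A by external intervention, ignoring C entirely.
a_forced = rng.normal(size=n)

print(f"observational r(A, B):  {np.corrcoef(a_obs, b)[0, 1]:+.2f}")     # clearly nonzero
print(f"interventional r(A, B): {np.corrcoef(a_forced, b)[0, 1]:+.2f}")  # near zero
```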

A reader on my own blog reminded me of the good old "number of pirates vs average global temperature" graph as another wonderful example of correlation vs causation when I posted that cartoon!
http://blog.sciencegeekgirl.com/2009/03/09/correlation-vs-causation/

Of course, I think the graph needs to be updated now that pirates have made a resurgence on the coast of Africa. Perhaps that's counter-proof of the correlation?? :-)