Kent Holsinger sends along this statistics discussion from a climate scientist. I don't really feel like going into the details on this one, except to note that this appears to be a discussion between two physicists about statistics. The blog in question appears to be pretty influential, with about 70 comments on most of its entries. When it comes to blogging, I suppose it's good to have strong opinions even (especially?) when you don't know what you're talking about.
P.S. Just to look at this from the other direction: I know next to nothing about climate science, but at least I recognize my ignorance! This is perhaps related to Kaiser's point that statisticians have to be comfortable with uncertainty. In contrast, I could well believe that someone who goes into physics would find complete certainty to be appealing. The laws of physics are pretty authoritative, after all.
P.P.S. Further discussion here. (We seem to have two parallel comment threads. I'd say that we could see how long they need to run before mixing well, but this note has induced between-thread dependence, so the usual convergence diagnostics won't be appropriate.)
Tamino, the proprietor of the Open Mind blog, is a statistician.
Wow. Good job on being flippant, dismissive, judgmental and wrong all in just two paragraphs.
Concerning "the wrong" part: Ever heard of quantum mechanics? That's uncertainty the way we physicists like it - fundamental and profound.
Nick: I learned about quantum mechanics many years ago on the way to my undergraduate degree in physics. I agree that quantum-mechanical uncertainty is fundamental and profound. However, the laws of physics (including quantum mechanics) are not uncertain in the way that models in applied statistics are uncertain. We applied statisticians are, I believe, not inclined to be so sure of ourselves when venturing outside our areas of expertise. We just have too much experience seeing models go wrong. The blogger I linked to appears to have a physicist's attitude that he or she can't be wrong. But if you look carefully, the blogger was writing from a position of ignorance of the statistical literature. It's fine to be ignorant of the statistical literature, but it's not so fine, I believe, to fail to recognize that ignorance.
If you really are discussing the linked article, "Good Bayes Gone Bad", then you have completely mis-characterized the article.
The article is about an example in a textbook which apparently shows a discrepancy between frequentist and Bayesian results for the interpretation of a (hypothetical) clinical trial. The article is not about "certainty" or being right (indeed, the entire language of the post is about plumping for uncertainty); it's about the sensitivity of Bayesian analysis to subtle assumptions.
In particular, the discussion is of whether the textbook author's assumption that \theta_A and \theta_B are the same produces an exaggerated statistical significance when compared to the more biologically plausible assumption that \theta_A and \theta_B are different.
Nothing in the post (or the comments looking at different ways to set the null hypothesis and its prior probability) implied that statistics was easy.
Now, unless the statistical literature has produced a better way to choose prior probabilities than actually looking at the biology/chemistry/physics behind those assumptions, I cannot see how it would help in this circumstance.
So perhaps you can enlighten us on how you would choose an appropriate null hypothesis for the example given, and estimate the prior probabilities. We biological scientists who actually do these experiments would like to know (seriously, it's a huge topic in my discipline).
(Unless, of course, you actually meant the post prior to the Bayesian one, "The Power – and Perils – of Statistics", which debunks a particularly stupid piece by climate change denialists, in which case your comment is still a misrepresentation. Then again, I think Ioannidis should be whipped with a wet noodle for not titling his paper "Why most research done on dodgy, un-validated genechips is wrong", but hey, it wouldn't be so headline-grabbing.)
PS, the textbook author is an information theoretician, not a climate scientist.
PPS Despite the blog, I'm a biomedical researcher during the day.
Here's a comment from Tamino (the bloke who wrote the "Good Bayes Gone Bad" post) in the "The Power – and Perils – of Statistics" thread that may have some relevance. He's replying to two physicists.
I may indeed have mischaracterized the blogger. And part of it may be stylistic differences; another post lower down on the page at that same blog concludes with "Are you man enough?"--which is the kind of thing you'd expect to see on a political blog, not so much on a statistics blog!
My main reaction was that the discussion of Bayes on that blog was occurring pretty much in isolation of the statistical literature. The blogger was discussing a statistics textbook written by a physicist, and he wrote certain things that appear to betray some confusion about the basic principles of statistics; for example, "it's likely (one could even say 'statistically significant') that the treatment is effective, but that doesn't mean proved conclusively." Looking at the blog entry more carefully, though, I realize I'm being a bit unfair, as the blogger states right at the beginning that he is responding to a reader who recommended that particular book.
The sort of very simple example being discussed misses a lot of the point of Bayesian inference, in that it's an unusual case in which the data model (the "likelihood") is uncontroversial. Once you move to linear regression, logistic regression, and so forth, any statistician--Bayesian or not--is going to be making a lot of assumptions, and in this context the prior distribution is just part of the larger model.
In general I don't think it makes sense to assign positive probability to the event that two treatment effects are identical, but of course it's difficult to say much more than this, given that the example is entirely hypothetical.
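A minimal sketch of the sensitivity under discussion, using invented trial counts (these numbers are hypothetical, not from the textbook): with uniform Beta(1,1) priors on the success rates, the Bayes factor for the point null \theta_A = \theta_B against independent treatment effects has a closed form in terms of Beta functions, so one can see directly how much weight the point-mass assumption carries.

```python
import math

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bf_equal_rates(s_a, f_a, s_b, f_b):
    """Bayes factor for H0: theta_A == theta_B (one shared success rate)
    versus H1: independent rates, each with a uniform Beta(1,1) prior.
    Inputs are success/failure counts; binomial coefficients cancel in
    the ratio. Values > 1 favor the point null."""
    log_m0 = log_beta(s_a + s_b + 1, f_a + f_b + 1)  # pooled data under H0
    log_m1 = log_beta(s_a + 1, f_a + 1) + log_beta(s_b + 1, f_b + 1)
    return math.exp(log_m0 - log_m1)

# Hypothetical counts: 30/100 successes under A, 45/100 under B.
print(bf_equal_rates(30, 70, 45, 55))  # < 1: evidence against equality
print(bf_equal_rates(30, 70, 30, 70))  # > 1: identical data favor the null
```

The point is not the particular numbers but that the answer hinges on giving \theta_A = \theta_B positive prior probability at all; a continuous prior on the difference would never produce a posterior mass on exact equality.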
I don't think it would be fair to categorize Prof. MacKay as "just" a physicist; he has made many worthwhile contributions to Bayesian statistics and information theory, so he is an expert on the topic.
For my part, my objection was to Tamino treating a bit of Bayesian inference as if it were a hypothesis test, and using a threshold of 95% to mean significance, when the usual interpretation of Bayes factors would suggest the evidence is weaker (MacKay's interpretation of the Bayes factor of 99 as "very strong" is in line with normal practice, AFAICS).
However, Tamino is now editing and deleting my posts without bothering to consider the content (for instance, to explain why he adopted the 95% threshold rather than the usual interpretation scale). So the mixing will probably improve now that an inconvenient mode has been deleted ;o)
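The conversion being argued about can be written down directly. The sketch below uses a Jeffreys-style descriptive scale for Bayes factors (the 3/10/30/100 cut-points are one common variant); on that scale a Bayes factor of 99 is indeed "very strong", while a posterior-probability threshold like 95% is a different animal, since the posterior probability also depends on the prior odds.

```python
def posterior_prob(bf10, prior_prob=0.5):
    """Posterior probability of H1, given a Bayes factor BF10 in favor
    of H1 and a prior probability for H1 (default: even prior odds)."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = bf10 * prior_odds
    return post_odds / (1.0 + post_odds)

def jeffreys_label(bf10):
    """Qualitative evidence label for a Bayes factor favoring H1,
    using Jeffreys-style 3/10/30/100 cut-points."""
    if bf10 < 1:
        return "supports H0"
    if bf10 < 3:
        return "barely worth mentioning"
    if bf10 < 10:
        return "substantial"
    if bf10 < 30:
        return "strong"
    if bf10 < 100:
        return "very strong"
    return "decisive"

print(jeffreys_label(99))            # "very strong"
print(round(posterior_prob(99), 2))  # 0.99 with even prior odds
```

With even prior odds a Bayes factor of 99 gives posterior odds of 99:1, i.e. a posterior probability of 0.99, but with skeptical prior odds the same Bayes factor can land well below any fixed 95% cutoff, which is why treating the two as interchangeable invites exactly this kind of argument.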
It helps to know the context. Anthony Watts has been particularly reprehensible in making fact-free claims and ignoring refutations. If you have never had to deal with climate change denialists, creationists and anti-vaxxers before let me tell you it is only a matter of time before you snap when dealing with them. Tamino was quite mild given the relentless ignoring of data practised by Watts (Tamino has just demonstrated a central claim of Watts was false, and had several people independently replicate his finding).
I would second what Ian says above; purely electronic means of communication tend to bring out the worst in most of us, and anyone arguing against the denialists has had more than their fair share of provocation.
Tamino has been doing an excellent job debunking some of the sceptic nonsense that has appeared in print on climate change, and he seems to have an excellent grasp of frequentist time series analysis (and communicates it very well to the general public).
[sigh] I actually posted a substantial statistical comment as well. It's somewhere in moderation because it quoted a URL.