LHC and black holes

[text updated]

Questions about the validity of previously calculated black hole creation probabilities at the LHC are discussed in this New Scientist article.

The conclusion? We don't have a clue what the range of probabilities is. It is, however, still small compared to, say, the chance of getting hit by a car or dying in a plane crash. Questions about the validity of the calculations are legitimate. However, I am not sure I will enjoy thinking about the implications of this particular question (especially if it leads to more wailing from those who are scared of black holes). How do you arrive at a decision weighing probabilities on something like this? Is it fine to risk our existence for more knowledge when the risk is once every billion years, or is it fine only if the risk is quantified as once every trillion years? If we decide it's too risky, are we going to wait another billion years to run the LHC?

If we go, I say let's go with a bang. The fear of the unknown is such a spoilsport. Let us be reckless with the LHC and really push it, taunt Nature to show us what it has got. Give us the Higgs or give us a black hole, dammit.


What was the New Scientist headline? Was it "John Wheeler Was Wrong"?!

My view on this is that it's the best type of calculation to be wrong about - if they screw up, nobody will get to complain :).

Although well-written, the New Scientist article seems to suffer from some superficial approaches to the topic.

1) Physics is more uniform in opinion, and yet less biased toward consensus, than US economics, so the current global crisis (which is not unprecedented) would not seem to be analogous to determining the presence or absence of unevidenced principles of the universe which might endanger humanity as a result of, specifically, the LHC.

2a) Far from being "trenchant," the unpublished draft by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg is just mushy Bayesian reasoning that contradicts itself. It cites typesetting mistakes as calculation flaws and argues that mistakes nearly always increase the risk. But in the only relevant example, the empirical bound on a strange-matter disaster at the RHIC was lowered from 10^-9 to 10^-12. In addition, they miss the point that the RHIC paper gives a bound on probability, not an estimate of probability (a sketch of that distinction follows after point 6 below).
http://arxiv.org/abs/0810.5515

2b) Like too many Bayesian "proofs" of existence, and like the Drake equation, they attempt to bootstrap guesses about the unknown into calculation of the unknown. And yet it remains unknown.

2c) It fails to account for competitiveness and scrutiny, in that the error rate of peer-reviewed, highly anticipated, and heavily scrutinized papers by illustrious experts is surely lower than that of, say, an unanticipated draft by outsiders. (Conflating upper bounds with estimates, arguing from ignorance, an unsupported theory of science as an arbitrary and capricious system of postulates, and typos in 1980s Nature references are among the problems I found on my first read. By the time this draft was posted, the Giddings and Mangano paper was already slated for Physical Review D.)

3) New Scientist's Mark Buchanan fails to read critically, and accepts the draft's summary of the RHIC probability bound as a probability estimate, along with the rest of its claims.

4) Science solves the problem of epistemological uncertainty by requiring you to have a natural model to reason from before you are taken seriously. Otherwise, you always have to add the chance that operation of the LHC will anger Jehovah and bring about Armageddon, and the chance that operation of the LHC will anger Odin and bring about the Fimbulwinter, etc. Science says nothing about "magic" black holes that ignore the constraints on gravity and angular momentum. Science says nothing about equally "magic" pink unicorns which would rise up in revolution if the LHC is switched on. In the naive model of science being used here, all of these are included in this "epistemological uncertainty".

5) But science is progressive, and so even if the triple predicate of correct reasoning, correct model, and correct theory does not hold, there are strong constraints on which reasonings, models, and theories could possibly hold. This is where outsiders can be blinded by their own arrogance, because experience, evidence, and therefore expertise do matter. It is not enough to quibble, "But you might be wrong." You have to explain the specific manner in which the scientific papers are wrong in their reasoning, and if you disagree with the model or theory, you need to present your own model and/or theory to stand side by side for scrutiny. This is why unconstrained guessing of numbers for the probability of an LHC disaster, should the assumed laws of physics not hold, is not justified.

6) In addition, the progressive nature of science already incorporates the concepts of uncertainty and revision of estimates. But you have to do this with the best tools we have: evidence and logical reasoning. When you abandon these tools, when you substitute guesses for inferences, you get the anti-LHC scaremongering which seems to have caused harm. http://news.bbc.co.uk/2/hi/south_asia/7609631.stm
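
To make the bound-versus-estimate distinction in 2a concrete, here is a rough Python sketch of how an empirical upper bound of the RHIC kind works: zero disasters observed over many natural "trials" gives you an "at most", never a "this likely". The trial count below is a made-up placeholder, not a number from the RHIC analysis.

```python
# "Rule of three": if zero events are observed in n independent trials, the 95%
# upper confidence bound on the per-trial probability is roughly 3/n.
# Exact form: solve (1 - p)^n = 0.05  =>  p = 1 - 0.05**(1/n).

def upper_bound_95(n_trials):
    """95% upper bound on the per-trial probability, given zero observed events."""
    return 1.0 - 0.05 ** (1.0 / n_trials)

n = 10**12                   # hypothetical number of comparable natural collisions
print(upper_bound_95(n))     # ~3e-12: an upper bound, not an estimate

# The point estimate from zero events in n trials is simply 0/n = 0.  A bound and
# an estimate answer different questions, and lowering the bound (e.g. from
# 10^-9 to 10^-12) makes the safety claim stronger, not weaker.
```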

So does the draft paper by Ord, Hillerbrand, and Sandberg contribute to human knowledge? Is it just motivated by envy of science? Or is there a better description?

Someone said, perhaps it was Thomas Aquinas: 'If thou wouldst sin, sin boldly.'

I spoke to Brian Greene yesterday and he had some cute things to say about the whole black hole thing.

From the public relations standpoint, having the whole black-hole-destroy-the-world thing was very good. I must have done 6 or 8 programs on the LHC, and ultimately that question was why I was there and why anyone else was there. Is it possible? If you ask me, "Is it possible that the moon will turn into a big ball of Swiss cheese?" I guess it's possible. It's so incredibly unlikely that it's not worth thinking about it or speaking about it, and that's the kind of possibility we're talking about.

Read my little post at The Row Boat.

Brian Greene seems to be saying it is not the job of physicists to quantify the risks to the Earth from Russell's Teapot, until evidence supports the physical model. This is akin to the average person not finding it necessary to look both ways _and up_ while crossing the street.

I'm all for physicists being humble, and for a certain amount of ontological uncertainty, but once you throw physics itself out the window, you destroy your ability to predict anything, and the language of probability and risk management becomes meaningless.

A guess that a calculation showing event X has less than a 10^-12 chance might be wrong in some detail with probability p does not imply anything new about the probability of X.

In the droll language of Bayesian math, with A the event that the calculation is correct and P(!A) = p:
P(X) = P(X|A)P(A) + P(X|!A)P(!A) = P(X|A)(1-p) + P(X|!A)p
But in the absence of evidence, P(X|!A) = P(X|A), thus P(X) = P(X|A).
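
A minimal numerical version of that argument, in Python, assuming placeholder values for the bound and for p (neither is taken from any actual paper):

```python
# Law of total probability for an event X, conditioning on whether the
# safety calculation is correct (A) or flawed (!A), with P(!A) = p.

def total_probability(p_x_given_a, p_x_given_not_a, p_flaw):
    """P(X) = P(X|A)(1 - p) + P(X|!A) p."""
    return p_x_given_a * (1.0 - p_flaw) + p_x_given_not_a * p_flaw

p_x_given_a = 1e-12   # the calculated bound, assuming the calculation is correct
p_flaw = 1e-3         # a guess at the chance the calculation is wrong in some detail

# With no evidence that a flawed calculation changes the odds, the only
# defensible choice is P(X|!A) = P(X|A), and P(X) stays put:
print(total_probability(p_x_given_a, p_x_given_a, p_flaw))   # ~1e-12

# Quietly substituting P(X|!A) = 1 (treating ignorance as certainty of disaster)
# is what inflates a tiny bound into a scary number:
print(total_probability(p_x_given_a, 1.0, p_flaw))           # ~1e-3, dominated by the guess
```
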

That's because physicists already know that their working model of the universe is not the correct one -- it's the best we've got, but we know it's not correct, and even if you told us what the correct one was, we would have no evidence today that you are correct.