“Faultily faultless, icily regular, splendidly null, dead perfection; no more” –Alfred, Lord Tennyson
Ahh, the Standard Model of elementary particles and their interactions. It’s right up there with General Relativity — our theory of gravitation in the Universe — as the most successful physical theory of all time.
While General Relativity describes the relationship between matter-and-energy and spacetime, the Standard Model describes all the known particles in the Universe and how they interact with one another. This ranges from the simple things you may be familiar with, like an electron absorbing a photon (or what you call “why sunlight feels warm”), to the rare decays of bound states of unstable fundamental particles.
Well, guess which one I’m here to talk to you about today?
No. Wrong. Guess again.
Okay, as you may know if you’ve been coming here regularly, the Standard Model does an amazing job of accurately describing pretty much every phenomenon about the Universe we’ve ever been able to measure or understand. There are a few exceptions — neutrino masses and oscillations, the lack of strong CP-violation, the origin of the matter/antimatter asymmetry, why fundamental particles have the masses they do, and dark matter and dark energy — but as far as all the quarks, leptons, and gauge bosons go, the Standard Model has never failed us. Not once, ever, for anything and everything we’ve ever measured. And believe me, we’re trying.
That’s one of the major goals of the Large Hadron Collider: to search for (and hopefully find) observations that are inconsistent with the Standard Model. The way we try and do this is twofold, and requires a marriage of incredible theory with the most advanced experiment ever devised!
From a theoretical point-of-view, we take known combinations of fundamental particles — things like baryons and mesons, which are quark (and/or antiquark) combinations — and calculate all the different ways they can decay.
In particular, we calculate what we call branching ratios, which is a fancy way to say “if we create an arbitrarily large number of these particles, this fraction of them will decay into these particles.” And when we say these particles, we usually mean specific particles with certain properties: energies, momenta, angular distributions, etc. Some of the most stringent tests of the Standard Model involve searches for flavor-changing neutral currents (FCNCs), an example of a very, very rare Standard Model interaction.
It’s so rare because at what we call tree-level — or an interaction diagram with no loops — FCNCs are forbidden. But they can occur in loop diagrams, which are suppressed. However, many extensions to the Standard Model, like supersymmetry or technicolor in particular, have these FCNCs at tree-level, which means that even if the SUSY or technicolor particles are very heavy, they should cause departures from the Standard Model in these rare decays.
Many of these decays are being measured at the Large Hadron Collider for the very first time in the decays of B-mesons, or quark/antiquark pairs where (at least) one of the pair members is a bottom quark, or the second-heaviest type. Two of them, in particular, are of special interest because of the intense (but well-quantified) theoretical rarity of certain decay channels: the Bs particle* (bottom paired with an anti-strange), which decays into two muons (μμ) with a branching ratio of 3.52 × 10⁻⁹, and the Bd particle* (bottom with an anti-down), which decays into two muons with a branching ratio of 1.07 × 10⁻¹⁰.
In other words, we need to make billions of them to detect even a few of these two muon events. But because two muon events are something the detectors at the LHC are outstanding at seeing, these are outstanding grounds for precision testing of the Standard Model.
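To see why “billions” is the right scale here, the arithmetic can be sketched in a few lines. The production number below is an illustrative assumption for the sake of the example, not an LHC measurement; only the branching ratios come from the text above.

```python
def expected_decays(n_produced, branching_ratio):
    """Expected number of decays into a given channel:
    just the number produced times the branching ratio."""
    return n_produced * branching_ratio

# Assume, purely for illustration, that 10 billion Bs mesons are produced.
n_bs = 10_000_000_000
br_bs_mumu = 3.52e-9   # Standard Model prediction quoted above

print(expected_decays(n_bs, br_bs_mumu))   # only a few dozen dimuon events
```

Ten billion mesons yields only about 35 expected dimuon decays — which is why these searches need both enormous production rates and detectors superb at spotting muon pairs.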
Well, the latest results are out, and they’re absolutely wonderful if you’re a fan of the Standard Model:
- The Bs ⇒ μμ decay is seen, and is seen with a branching ratio of (3.0 ± 1.0) × 10⁻⁹, totally consistent with the prediction of 3.52 × 10⁻⁹.
- The Bd ⇒ μμ decay is not seen, placing an upper limit of 1.0 × 10⁻⁹ on its branching ratio, with a best-fit of (4.0 ± 3.0) × 10⁻¹⁰, in grand agreement with the Standard Model’s predicted 1.07 × 10⁻¹⁰.
This places even tighter constraints on extensions of the Standard Model like supersymmetry, which I still contend has already fallen. These latest results come from the CMS collaboration, and the LHCb collaboration (not one of the two “big” detectors, CMS and ATLAS, but a detector designed specifically to study B-mesons) has released results in accord with, although slightly less significant than, these. The agreement with the boring old Standard Model (the star, below) is still spectacular.
But that’s not the only rare decay; these B-mesons (the Bd in particular) should also decay to two muons and an excited-state kaon. In our notation, this looks like Bd ⇒ (K*)μμ ⇒ (Kπ)μμ. The Standard Model makes some very, very explicit predictions about not only the branching ratio of this decay, but also the distribution — or the angular separation — of the different particles arising from it. A paper was just released examining the LHCb results for this, and for the (are you ready?) 47 observable parameters related to this decay.
What did they find? One of these observables disagrees with the Standard Model, at a deviation of 4.5-σ from the Standard Model’s prediction. Now, 5-σ is regarded as the “gold standard” for discovery in particle physics, but a 4.5-σ result has only a 0.00034% chance of arising by chance. Are we on the verge of new physics here?
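That 0.00034% figure is just the one-sided tail of a Gaussian at 4.5-σ, which you can verify with the standard library’s complementary error function. This is a generic statistics calculation, not anything specific to the LHCb analysis:

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a given significance
    (the chance of a fluctuation at least that large, in one direction)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"{one_sided_p(4.5):.2e}")   # ~3.4e-06, i.e. 0.00034%
print(f"{one_sided_p(5.0):.2e}")   # ~2.9e-07, the 5-sigma "gold standard"
```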
I doubt it, and here’s why.
The 4.5-σ significance only happens if you take a subset of the data: the large recoil bins only. If there’s a weird effect, it should appear regardless of the recoil magnitude, but when you include the low-recoil data too, the significance drops down to 3.7-σ.
But we’re also not just looking at one parameter; we’re looking at 47. If there were a 1-in-1000 chance of a weird thing happening, and you only looked at 1 thing, it would be weird if that thing happened, wouldn’t it? But if you looked at 1000 different things, each with a 1-in-1000 chance of a weird thing happening, well, it wouldn’t be weird at all if a weird thing happened in one of them! When you take that effect into account — that we’re looking for a weird thing among 47 things — the significance drops even further, from 3.7-σ down to just 2.8-σ. (Or, if you think that separating low-recoil from large-recoil data is physically motivated, from 4.5-σ down to 3.6-σ.)
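The 1-in-1000 thought experiment above can be made concrete with a simple independent-trials model. This is only a sketch of the look-elsewhere logic — the paper’s actual trials correction accounts for correlations among the observables — but it shows why 47 chances to fluctuate matter:

```python
def prob_at_least_one(p_single, n_trials):
    """Chance that at least one of n independent measurements fluctuates
    at a level whose individual probability is p_single."""
    return 1.0 - (1.0 - p_single) ** n_trials

print(prob_at_least_one(1 / 1000, 1))     # 0.001 -- weird if it happens
print(prob_at_least_one(1 / 1000, 1000))  # ~0.63  -- not weird at all
print(prob_at_least_one(1 / 1000, 47))    # ~0.046 -- the 47-observable case
```

Even with just 47 observables, a 1-in-1000 fluctuation somewhere becomes a roughly 1-in-20 occurrence — which is exactly why the local significance of one deviant parameter overstates the case.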
And those levels of deviation happen all the time with limited statistics, and have pretty much always gone away. If you’ve followed particle physics for a while, you might have heard of leptoquarks; there was a large buzz in 1997 that the HERA experiment had found them at nearly 4-σ significance. With more data, however, that “discovery” went away. This paper is based solely on LHCb data, and solely on 2011 data at that. More than three times as much data is actually available (on disk, if you include 2012) for LHCb, and CMS/ATLAS should have something to say about this as well. It’s possible that there’s new physics here, and that’s something we have to look for, but is this compelling? Not at all; it’s merely suggestive of an intriguing possibility, as others note. It turns out that breaking the Standard Model is really, really hard!
* – Yes, I know these aren’t their standard names; they’re the names I used because I wanted to be understood. If you’re a particle physicist who’s irate about this… well, that’s what comments are for! Thanks to Brian Koberlein and Rob Krol for urging me to write about this.