Saturday, March 25, 2017

An isolated standard model contradicts nothing we know

Today, the Moriond 2017 particle physics conference ends. CMS in particular has presented its newest results – analyses of some 35 inverse femtobarns of data collected at the two protons' total energy of \(13\TeV\).



Almost a decade ago, I made an asymmetric bet against Adam Falkowski, a particle phenomenologist now in Paris. He claimed that supersymmetry wouldn't be found before a deadline and I claimed it could be. If it had been found, I would have won $10,000. If it wasn't found, I would pay $100. So it was a 100-to-1 bet, basically implying a consensus probability of an early enough supersymmetry discovery of 1%. I accepted the bet because my subjective probability of a SUSY discovery was much higher than 1% and I still think it was reasonable – and an analogous assumption is still reasonable for the next collider.

The deadline was defined a bit arbitrarily – but it was "after the results of at least 30/fb of the data at design energy are collected". The design energy was \(14\TeV\) and \(8\TeV\) is clearly lower – the collisions at this lower energy may produce SUSY particles about 10 times less frequently than those at \(14\TeV\) – but \(14\TeV\) is close enough to \(13\TeV\) so it's obvious that those 35/fb at \(13\TeV\) that we have are basically equivalent to 30/fb at \(14\TeV\). So right now it's the ideal balanced moment that almost exactly agrees with the conditions of our bet, I think, and because supersymmetry hasn't been discovered yet, I should pay $100 to Adam.

As I have already mentioned, this lost bet is a technicality for me and doesn't change my belief that supersymmetry somewhere in Nature, below the Planck scale, is very likely – and that SUSY around the corner is always a possibility. I am sure that many of you agree that the opposite result would have been way more interesting – from the financial viewpoint, from the viewpoint of our TRF community, and because of the excitement it would have created among physicists.




So I want to send him his $100 – although, obviously, there's still a potential that some game-changing paper based on the same dataset will be published in the future. This trivial transfer would have taken place if Falkowski had a PayPal account or could accept goods from Amazon etc. But he must live in some uncivilized place on the globe that is decoupled from all the technological and financial progress, among the sheep who look for, eat, and use lipsticks that tourists randomly threw away – it is Paris, as I have mentioned – and he prefers ancient international bank transfers over things like PayPal, Amazon etc. that he doesn't want to get involved with.

I could send a bank transfer to France – one of my bank accounts offers such a command in its online banking – but I have never tested it. I don't want to send this money in ways that are so untested, or test them with smaller amounts etc. If someone finds it safe and easy to send payments to French bank accounts and can accept a PayPal payment from me or a $95–$105 plus shipping package from Amazon.com in the U.S. (with products of his or her choice) as a compensation, please let me know.




This null result of the post-Higgs-discovery LHC experiments hasn't surprised us. It's not anything we could have been unprepared for. People were extremely prepared for it – although they hadn't wanted it: the outcome has often been referred to as the Nightmare Scenario. The work of the 6,000 people at the LHC since Summer 2012 has almost the same value as the statement "the SM is still OK, move on". A secretary would type this sentence for a salary cheaper than $10 billion. The Standard Model was a theory that worked up to energies of \(200\GeV\) or so – and needed particles of similar masses (the top quark, the massive gauge bosons, the Higgs boson) – and it could have broken down right above \(200\GeV\) and been extended or supplemented or replaced with a broader theory. But that didn't happen. The same Standard Model works up to \(1\TeV\) or a bit higher. Just to be sure, different types of proposed new particles are excluded up to different energies in different scenarios and lots of particles lighter than \(1\TeV\) may still exist, of course.

It's not a sharp contradiction with anything we know about physics that the Standard Model remained isolated, as James Wells et al. and Jon Butterworth have called it.

Why have many particle phenomenologists preferred to believe that something else should have been discovered along with the Higgs boson or soon after the Higgs boson? The argument basically boils down to one parameter, the Higgs mass \(m_h\) or the electroweak scale \(v\) (the vacuum expectation value of the Higgs field). They are much (fifteen orders of magnitude or so) smaller than the fundamental scales of Nature – which are arguably close to the Planck mass \(10^{18}\GeV\). And the "generic" value of this ratio would be "of order one" in a natural theory. The scalar bosons' masses should be driven towards the heaviest mass scale in physics, the Planck scale, by any quantum corrections unless there is something that changes the rules of the game and keeps the Higgs boson (along with all the massive elementary particles we know) much lighter.



But let's look a bit more closely. The Higgs vev \(v\) as well as the Higgs mass etc. are basically calculated from the Higgs potential\[

V(h) = \frac{1}{4}\lambda h^4 - \frac{1}{2}\mu^2 h^2

\] It's the function that looks like the champagne bottle's bottom (in the first world) or Landau's buttocks (in the second world) or the Mexican hat potential (in the third world). And yes, the value of \(\lambda\) is of order one, in some normalization close to one quarter. (Exercise: find what it is in my conventions.) Set the derivative \(V'(h)\) to zero and you will find that the minimum of the potential i.e. the vev is at \(v^2 = h^2=\mu^2 / \lambda\). Calculate the second derivative to see that the mass is of order \(\mu\) again, \(m_h\sim \mu\).
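The two-line minimization above can be checked mechanically. Here is a sketch with SymPy (the symbol names and conventions are mine, matching the potential as written in this post):

```python
import sympy as sp

h, lam, mu = sp.symbols("h lamda mu", positive=True)

# The potential from the text: V(h) = (1/4) λ h^4 − (1/2) μ² h²
V = sp.Rational(1, 4) * lam * h**4 - sp.Rational(1, 2) * mu**2 * h**2

# The positive stationary point is the vev: v = μ/√λ, i.e. v² = μ²/λ
stationary = sp.solve(sp.diff(V, h), h)
print(stationary)  # the nonzero root: mu/sqrt(lamda)

# The second derivative at the minimum sets the Higgs mass: m_h² = V''(v)
m_h_squared = sp.simplify(sp.diff(V, h, 2).subs(h, mu / sp.sqrt(lam)))
print(m_h_squared)  # 2*mu**2, so m_h ~ μ as claimed
```

The numerical factor in \(m_h^2 = 2\mu^2\) depends on the normalization, but the scaling \(m_h\sim\mu\) is the point.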

OK, the value of \(\lambda\) is of order one, as I mentioned, and completely "natural" while it's only the value of the fundamental parameter \(\mu\) that looks unnaturally small. We may write it as\[

\mu^2 \sim 10^{-30} m_{\rm Planck}^2

\] The coefficient as it actually appears in the Lagrangian, namely the squared (tachyonic) mass term, is 30 orders of magnitude smaller than the fundamental value with the same units. That's an apparent fine-tuning. However, this small value could result from some instanton-like or otherwise naturally small effect. Also, there could still be new physics or superpartner masses at \(\Lambda=3\TeV\) or so in which case it would make sense to write\[

\mu^2 \sim 10^{-3} \Lambda^2

\] OK, so the whole mystery of the "isolated Standard Model" could be just about one number in the Lagrangian that is about \(0.001\) times its generically predicted value. The probability that a random number uniformly distributed between \(0\) and \(1\) is smaller than \(0.001\) is \(0.001\), too. It's unlikely but not insanely unlikely. This small probability corresponds to noise slightly higher than 3 sigma.
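To see where the "slightly higher than 3 sigma" comes from, one may convert the probability \(0.001\) to a one-sided Gaussian tail – a minimal check using standard statistics, nothing specific to the hierarchy problem:

```python
from math import erf, sqrt

def one_sided_p(n_sigma):
    """Probability that a standard normal variable exceeds n_sigma."""
    return 0.5 * (1.0 - erf(n_sigma / sqrt(2.0)))

# 3.0 sigma gives p ≈ 0.00135; the tail shrinks to 0.001 near 3.09 sigma,
# so a probability of 0.001 is "slightly worse than 3 sigma" noise.
print(one_sided_p(3.0))
print(one_sided_p(3.09))
```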

So you can say that all the "surprising" isolatedness of the Standard Model is basically just a 3-sigma or so deficit in the squared Higgs mass.

It's not a big deal. And because the explanations of why the value is small may be clever, instanton-based, or may otherwise naturally refute the assumption that the distribution for \(\mu^2\) is uniform, the deficit could very well be much less than 3 sigma, too. And even if it were 3 sigma or higher, you could also refer to the anthropic thinking that – even when used as a co-argument – may make the smallness look much more natural than it would look otherwise.

Let me assure you that once \(\mu\) is comparable to \(100\GeV\), everything else will be, too. I've told you why the Higgs vev and Higgs mass are comparable to \(\mu\) for \(\lambda\sim O(1)\). But the W-boson and Z-boson masses may be seen to be \(v\) times some gauge couplings which are also of order one, so these masses are also of order \(\mu\), and the fermion masses are at most \(\mu\) or so (like the top quark) and generally equal to this constant multiplied by the Yukawa coupling constants (that are smaller than one).
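As a numerical sanity check of the previous paragraph – with standard textbook values for the vev and the couplings (my inputs, not anything from this post) – the tree-level relations indeed keep everything near \(100\GeV\):

```python
from math import sqrt

v = 246.0            # Higgs vev in GeV, v ~ μ/√λ
g = 0.652            # SU(2) gauge coupling, an O(1) number
sin2_theta_w = 0.231 # weak mixing angle
y_top = 0.99         # top Yukawa coupling, close to one

m_w = g * v / 2.0                     # W boson: v times an O(1) coupling
m_z = m_w / sqrt(1.0 - sin2_theta_w)  # Z boson: m_W / cos(theta_W)
m_top = y_top * v / sqrt(2.0)         # top quark: y_t v / √2

print(round(m_w), round(m_z), round(m_top))  # roughly 80 91 172, all of order μ
```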

Again, if you filed a complaint with Mother Nature about the "miraculously, unacceptably tiny" values of the elementary particle masses etc. relative to the Planck mass, She would ignore your complaint. You're just a bound state of strings that has no right to complain. And I think that She would be even morally justified not to give a damn about your complaints because it may be just a 3-sigma deficit in the value of a single coefficient in the Lagrangian, \(\mu^2\).

New physics at the \(100\TeV\) collider is possible and well-motivated but I would only enter a bet similar to the SUSY bet against Adam Falkowski. It's in no way guaranteed that there must be new physics. With such a bigger collider, the deficit as "calculated" above could increase to 4 sigma but it's still no big deal and with some hidden patterns violating the assumption about the uniformity of \(\mu\), the deficit may really be much smaller or completely absent.

Low-energy supersymmetry is almost the only kind of new physics that would explain the smallness of \(\mu^2\) without requiring some "new nearby physics" to explain the smallness of its own new parameters.

But those who thought it was "almost certain" that the Standard Model had to be accompanied by some additional new physics – different from SUSY – were implicitly assuming (perhaps without even realizing or at least acknowledging it) that the whole logarithmic axis between the electroweak scale and the Planck scale has to be filled with new scales and segments of physics. The Standard Model must be accompanied by another model, e.g. the Nude Model. But the Nude Model won't ever stay alone, either. So there would have to be another model at slightly higher energies, the Horny Model. But that model also has scalars that are small and unexplained so there has to be the nearby Morality Police Model. And then the Constitutional Court Model, and so on, up to the model of quantum gravity near the Planck scale.

I find this scenario with lots of scales possible but rather unnatural according to my general interpretation of the word "natural". If the number of scales (layers of the physical onion) between the everyday life scales and the Planck scale were much greater than one, it would be just another parameter that should naturally be of order one but isn't. So when you divide the desert between the electroweak and Planck scales into 16 pieces, it's like a pizza cut into 16 pieces. Madam, would you like me to cut the pizza into 8 or 16 pieces? The blonde answered: Only eight: sixteen would be too much for me to eat.

Madam, I think that fundamentally, the cutting doesn't really change the overall severity of the problem (or the amount of food).

So I have never really thought – and I still don't think – that adding too many scales (layers of the physical onion) that are too close to each other on the energy scale makes the physical picture more natural. You know, the gap between the electroweak scale and the Planck scale is a demonstrable fact that we already know. So a deep physical theory has to explain this small number in one way or another. Having an onion with lots of thin layers is just one extreme strategy to explain it. It's not the only one and it's not a particularly natural or elegant one, either, I think.

If there is a big desert between the electroweak scale and the Planck scale that is explained by some theory that makes \(\mu^2\) vanish in the leading approximation (conformal symmetry?) but produces some small \(\mu^2\) by some naturally small (e.g. instanton) effects, it's fine with me. I still think that low-energy SUSY is better than a fundamental, slightly broken conformal symmetry in the spacetime, but I am not a bigot and there's no solid evidence that the smallness "has to be" explained by one type of an explanation or another.



Here you have a logarithmic chart of the masses of known elementary particles. Note that photons, gluons, and gravitons are fundamentally "exactly massless" as far as all experiments and the theories extracted from them go. On the other end, there is the reduced Planck scale near \(10^{18}\GeV\). But what about the massive particles? The heaviest ones are near \(100\GeV\): the top quark, the Higgs boson, and the massive electroweak gauge bosons. And then you have all the charged fermions with smaller and smaller masses down to the electron at \(5\times 10^{-4}\GeV\). The widest ratio of the neighbors' masses is about 15 in this quasi-continuum.

Beneath the electron, there is another gap (the teeth in the diagram mean that six floors are omitted!) and below this gap, you find neutrinos with masses comparable to \(10^{-11}\GeV\). You see that the masses of the electron and the neutrinos differ by 7 orders of magnitude and there's a not so small desert in between. That's why I think that we should say that a desert of this magnitude – and perhaps even a bigger one – isn't a big deal. But the LHC hasn't actually proven any "really big gap" so far. If the "next heavier" particle above the (so far most massive) top quark were just 15 times heavier – and the ratio of 15 wouldn't be unprecedented, as I have mentioned – the next new particle would be around \(3\TeV\) in mass. There are lots of scenarios like that which haven't been excluded by the LHC yet!
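One can check the "quasi-continuum" claim with rounded PDG-style masses. The input numbers below are mine and may differ slightly from those behind the chart – with these inputs, the widest neighbor ratio comes out closer to 20 than 15, the same ballpark:

```python
# Approximate masses of the known massive elementary particles in GeV
# (rounded PDG-style values; an illustrative input, not the chart's data).
masses = {
    "electron": 0.000511, "up": 0.0022, "down": 0.0047, "strange": 0.095,
    "muon": 0.1057, "charm": 1.27, "tau": 1.777, "bottom": 4.18,
    "W": 80.4, "Z": 91.2, "Higgs": 125.1, "top": 172.8,
}

ordered = sorted(masses.values())
ratios = [heavier / lighter for lighter, heavier in zip(ordered, ordered[1:])]

# Most neighbors are within a factor of a few of each other; the widest
# gaps (strange/down, W/bottom) are ratios of roughly 20.
print(round(max(ratios), 1))
```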

This chart of the particle masses is mixing apples and oranges, a particle physicist would say, because the origin of the masses is very different for the different particles. I have sketched that the masses of all the particles heavier than neutrinos do boil down to the parameter \(\mu\) after all. But the neutrino masses don't. The Majorana neutrino masses can't be obtained from a renormalizable mass term because that wouldn't be gauge-invariant. They can only be extracted from some non-renormalizable interactions – which include a higher power of the Higgs field \(h\) that accompanies \(v\) – and their magnitude is dictated by some physics at a very high energy scale, e.g. by the "seesaw mechanism".
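A back-of-the-envelope seesaw estimate shows how such a very high scale naturally produces the tiny neutrino masses. The heavy Majorana scale below is an illustrative assumption, not a measured number:

```python
# Seesaw estimate: m_nu ~ v² / M, where M is the mass of the heavy
# right-handed neutrino. M is a hypothetical input for illustration.
v = 246.0     # electroweak scale in GeV
M = 6.0e15    # assumed heavy Majorana scale in GeV, near the GUT scale

m_nu = v**2 / M
print(m_nu)   # ~1e-11 GeV, i.e. ~0.01 eV, matching the chart's neutrino floor
```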

Such mechanisms and perhaps even more clever ones are probably used by Nature at many other places. They may guarantee that any potential contradiction or unnaturalness of small parameters is even more innocent than what it looks like.

To summarize, I don't think that the null results of the 4+ post-Higgs years of the LHC contradict some theories or principles about Nature that we have learned. The discovery of new particles was possible but it was never guaranteed and the belief that it would emerge has always been driven by wishful thinking to one extent or another (or by someone's plan to become famous as quickly as possible). I need to emphasize that this opinion of mine is in no way new. I've believed the same thing for decades. The fact that some parameters are as small as \(0.01\) or \(0.001\) isn't a terribly strong hint of anything. After all, the ordinary fine-structure constant is \(1/137.036\) and we don't think that this small value represents some amazingly difficult hierarchy problem in Nature. The constant \(\alpha\sim 1/137.036\) is small partly because of the \(1/4\pi\) that is naturally incorporated in it, partly because of the smallness of the gauge couplings or the ratio (angle) calculated from them, partly because of the RG running that makes electromagnetism intrinsically weaker at long distances, and partly because of some "slightly less than one" values of the fundamental gauge couplings at the GUT scale. There's simply no "shocking", unexplained smallness of \(\alpha\).
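The decomposition of \(1/137\) mentioned above can be illustrated numerically. With rough electroweak inputs at the Z mass (my numbers), the tree-level relation \(e = g\sin\theta_W\) together with the \(1/4\pi\) already gives most of the smallness, and the RG running does the rest:

```python
from math import pi, sqrt

g = 0.652            # SU(2) gauge coupling, "slightly less than one"
sin2_theta_w = 0.231 # weak mixing angle at the Z mass

e = g * sqrt(sin2_theta_w)  # electromagnetic coupling e = g sin(theta_W)
alpha = e**2 / (4.0 * pi)   # the 1/(4π) is "naturally incorporated"

# 1/alpha ≈ 128 at the Z mass; running down to low energies weakens
# electromagnetism further, to the famous 1/137.
print(round(1.0 / alpha))
```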

The "at least moderate desert" above the Standard Model has become a fact. Standard Model physics has become a bit of a lonely place but the Standard Model is consistent up to much higher energies, plausible explanations for the relative smallness of its one dimensionful parameter exist, and one simply cannot derive any strong contradiction. One cannot make any big conclusion – e.g. that the world requires the anthropic principle – either. The estimated distance from the Standard Model island to the next islands or continents has simply increased a bit. That's it. It's probably not an exciting enough "story" but it seems to be the truth, anyway.



Geological metaphor

I actually think that the very analogy with the "islands" in geology implies a similar conclusion, "not a big deal". Imagine that you find yourself living on an island. What's the distance of this island from the closest other landmass – island or continent – divided by the Earth's radius?

The naive naturalness consideration would lead you to say that the distance between continents is comparable to the Earth's radius – it's surely true for Europe (or Eurasia) and America or Australia or Antarctica. The islands should be uniformly distributed in the remaining ocean or seas so their distance from the nearby continent should be comparable to the Earth's radius, too.

However, you find many more islands that are actually close to continents. For example, Greece has lots of islands from which the illegal immigrants may swim to the European continent. It makes sense that the islands are actually closer to the continents: the waters are shallower near the continental beaches which makes islands – localized fluctuations of the altitude above zero – more likely. Such a nearby landmass may serve as a candidate for an "explanation" why your island exists at all: it's some random fluctuation added next to some bigger nearby landmass.

However, some islands are very far from the continents or other islands, too. Their distance may be comparable to the Earth's radius but it may also be "in between", e.g. 500 km. How is it possible? Well, it's just possible. I think that the argumentation in the case of geology – why there are islands whose distance from the continents is either very small, intermediate, or comparable to the Earth's radius – is qualitatively equivalent to the argumentation why \(\mu^2\) may be very close to \(\Lambda^2\) of some new physics, four orders of magnitude lower, or maybe even 30 orders of magnitude lower. Some islands are there as disconnected pieces of nearby continents. Others are tiny continents by themselves. Some islands may result from the landing of an asteroid. Yet another group of islands were libertarian paradises paid for and built by Peter Thiel, and so on. The explanations of the "islands of phenomena in particle physics" may be analogously diverse although you shouldn't take my list of causes literally.

The belief in a clumping of islands and continents (or other islands) is just a belief, not a very solid argument. And when applied consistently within a theory of everything, it's basically equivalent to the opinion that the big oceans shouldn't exist at all (or, in physics, that deserts are prohibited). The world in which the "islands of physics" would be densely distributed may look more intriguing or exciting to some people but that doesn't make it more likely. And the Universes with big deserts may be not only likely but also very elegant and intriguing. Their sexiness is of a different type than the sexiness of archipelagos near continents but it's very real. One must be able to see that some hypotheses are just hypotheses and they are driven by prejudices, not rational arguments, and the belief that new physics "must always be around the corner" was always a prejudice.

New physics around the corner is possible but it's also possible that the belief is false.
