Open Questions: Beyond the Standard Model


Prerequisites: The standard model, Higgs physics

See also: Supersymmetry, CP symmetry violation, Matter/antimatter asymmetry, Neutrino physics, The big bang, Cosmic inflation, Magnetic monopoles


Motives for unification

Symmetry groups, unification, and model building

Grand unified theory models

General implications

Proton decay

Cosmic inflation

Magnetic monopoles


Exotic particles and dark matter

The electroweak mixing angle

The hierarchy problem


Composite models

Testability of grand unified theories

Successes of grand unified theories

Open questions

Recommended references: Web sites

Recommended references: Magazine/journal articles

Recommended references: Books


The standard model of elementary particle physics was largely completed in the early 1970s. It was an impressive advance in the understanding of the behavior of matter and forces at high energies. Its most outstanding accomplishments were probably the combination of the electromagnetic and weak forces into a single unified theory and the development of quantum chromodynamics, a coherent theory of the strong force (including forces within atomic nuclei) and of the heavier particles (hadrons, such as the baryons).

Equally important was the fact that these theories all had the same mathematical form – Yang-Mills gauge theories – and that they were thoroughly based on the mathematical concept of symmetry.

Nevertheless, physicists realized that their work was far from complete, and that the standard model left a great many questions unanswered. We have described these questions in some detail elsewhere (such as the pages listed at the top).

At the same time, it was seen that many of the key successes, as well as the chief shortcomings, were to be found in the way that fundamental forces were unified. Here unification means, specifically, that two (or more) forces previously considered distinct can actually be described by the same equations, and further, that these equations are invariant under symmetry operations that exchange distinct fundamental particles. That is, as far as the equations are concerned, an electron and a neutrino (for instance) behave substantially the same.

One of the primary entries in the success column for the standard model is the unified theory of the electroweak force. Yet this same theory illustrates some of the shortcomings. The symmetry between the forces is broken because the electromagnetic force and the weak force don't have the same strength and because otherwise similar particles (such as electrons and neutrinos) have quite different masses. Further, the unification itself isn't as seamless as it could be. One of the key parameters of the theory – the electroweak mixing angle, which describes how the forces combine – is not specified by the theory, but instead can be determined only by experiment.

So. The standard model showed that two seemingly distinct forces could be successfully unified in a single, elegant mathematical theory. But at the same time, physicists still had a lot of explaining to do, in terms of how to clean up the unification of the electromagnetic and weak forces, and then to go further and add the strong force into the mix.

Motives for unification

Clearly, on aesthetic grounds alone, the pursuit of further unification would appear to be an irresistible attraction for theorists. But there are a number of other indications that a lot of development towards unification and beyond the standard model was (and still is) needed.

Particle/force couplings

The way in which quarks and leptons interact with three of the four known elementary forces (excepting only gravity) is the same. All of these interactions are described by very similar Yang-Mills gauge theories, in which interactions occur by the exchange of vector bosons.

The weak force affects all quarks and leptons, except for neutrinos with right-handed helicity (which might not even exist). And, although we lack a satisfactory gauge theory of gravity, it still affects all known particles – even massless ones, since mass and energy are equivalent.

Nevertheless, there are gaps – asymmetries – in the way different particles couple to some forces. The electromagnetic force doesn't affect particles without electric charge (neutrinos and some bosons). Likewise, leptons entirely lack color charge, so they are not affected by the strong force.

Electric charge quantization

The charged leptons (electrons, muons, and taus) all have one unit of electric charge, while the quarks have electric charges that come in units of exactly a third of the electron charge. There's no obvious relation between quarks and leptons which explains why the electromagnetic force couples to both in just this way.

Even though the standard model has no explanation why all known particles (including bosons as well as fermions) have commensurable amounts of electric charge, it surely isn't just a coincidence.
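This commensurability is easy to check explicitly. The short sketch below (plain Python; the charge assignments are the standard ones, in units of the positron charge e) verifies that every listed particle's charge is an exact integer multiple of e/3:

```python
from fractions import Fraction
from math import gcd

# Electric charges of some known fermions and bosons, in units of the
# positron charge e.  All are exact rational numbers.
charges = {
    "electron": Fraction(-1), "muon": Fraction(-1), "tau": Fraction(-1),
    "neutrino": Fraction(0),
    "up quark": Fraction(2, 3), "down quark": Fraction(-1, 3),
    "charm quark": Fraction(2, 3), "strange quark": Fraction(-1, 3),
    "top quark": Fraction(2, 3), "bottom quark": Fraction(-1, 3),
    "W boson": Fraction(1), "Z boson": Fraction(0), "photon": Fraction(0),
}

# Least common denominator of all the charges: every charge is then an
# exact integer multiple of e divided by this number.
common_denominator = 1
for q in charges.values():
    common_denominator = common_denominator * q.denominator // gcd(common_denominator, q.denominator)

print("all charges are integer multiples of e /", common_denominator)
for name, q in charges.items():
    # multiplying by the common denominator must leave no fractional part
    assert (q * common_denominator).denominator == 1
```

The point of the exercise is that nothing in the standard model forces this denominator to be the same for quarks and leptons; it just is.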

Quark/lepton generations

There are three "generations" of both quarks and leptons. That is, for instance, there is no discernible difference among electrons, muons, and taus except for their mass. Further, given that there are multiple generations, there's certainly no obvious reason the number should be the same for both quarks and leptons.

Moreover, within each generation, quarks and leptons occur in the same pattern: they are grouped in couplets whose particles are related by the SU(2) electroweak symmetry. In other words, they share a similar characteristic called "isospin".

Other quark/lepton similarities

There are other suspicious similarities between quarks and leptons, in addition to the facts that their interactions are described by Yang-Mills gauge theories, the electroweak force operates on them in the same way, the units of electric charge they carry are exactly commensurate, and they are arranged in three generations of particle couplets.

Asymptotic equality of forces

At energies, and hence distance scales, which are accessible to experimental investigation, all known forces have very different strengths. However, what we know about the forces makes it possible to extrapolate their strengths (i. e. "coupling constants") to very high energies and very small distances.

When that is done, a very curious thing is noticed. Namely, three of the four forces (electromagnetic, weak, and strong) appear to have very nearly the same strength at a distance of about 10^-32 m, corresponding to an energy of about 2×10^16 GeV. This too is unlikely to be a coincidence.
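The extrapolation can be sketched numerically. The toy calculation below uses one-loop running of the inverse coupling strengths, with rough textbook input values at the Z-boson mass (the numbers and beta coefficients are illustrative, not precision fits), and scans for the energy at which the three couplings come closest together:

```python
import math

# Approximate inverse coupling strengths 1/alpha at the Z mass (~91 GeV).
M_Z = 91.0  # GeV
inv_alpha_mz = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}

# One-loop beta coefficients for the standard model (GUT-normalized U(1)).
b = {"U(1)": 41.0 / 10.0, "SU(2)": -19.0 / 6.0, "SU(3)": -7.0}

def inv_alpha(group, mu):
    """Inverse coupling 1/alpha at energy mu (GeV), to one-loop accuracy."""
    t = math.log(mu / M_Z)
    return inv_alpha_mz[group] - b[group] / (2.0 * math.pi) * t

# Scan energies from 100 GeV up to ~10^20 GeV and find where the three
# couplings come closest to one another.
best_mu, best_spread = None, float("inf")
for k in range(200, 2000):
    mu = 10.0 ** (k / 100.0)
    vals = [inv_alpha(g, mu) for g in b]
    spread = max(vals) - min(vals)
    if spread < best_spread:
        best_mu, best_spread = mu, spread

print(f"couplings nearly meet at ~{best_mu:.2g} GeV (spread {best_spread:.2f})")
```

Run as written, the near-meeting point comes out around 10^14 GeV; with more refined inputs (and, notably, with supersymmetric particle content) the published extrapolations land nearer the 10^16 GeV figure quoted above.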


Gravity seems to have very little to do with the other three forces. We do not even have a satisfactory quantum theory of gravity yet. Nevertheless, there are still suggestive similarities of gravity to the other forces. For instance, its fundamental equation (the Einstein equation) observes a symmetry (known as the Lorentz group), which is a geometric symmetry akin to the rotational symmetries (U(1), SU(2), and SU(3)) of the other forces.

And strikingly, when the force of gravity is extrapolated to very small distances, it appears to have about the same strength as the other three forces at a distance known as the "Planck scale", about 10^-35 m. This suggests that not just three, but all four of the known forces might possibly be unified at sufficiently high energies.

Too many arbitrary parameters

The standard model is full of arbitrary parameters which can be determined by experiment but which have no theoretical explanation within the theory. Such parameters include charges, particle masses, and coupling constants. There seem to be more than twenty such arbitrary parameters. All these parameters ought to be related to each other, so that very few, if any, must be assigned arbitrarily. This doesn't mean that the standard model is "wrong", just that it is incomplete and aesthetically unsatisfying. To put the matter more strongly, the theory is ugly.

In particular, there is one parameter known as the "electroweak mixing angle". It specifies in a precise way how the electromagnetic and weak forces are related. It can be measured experimentally, but is not determined by the supposedly "unified" electroweak theory. A genuinely unified theory should determine this parameter.

The foregoing list points out some discrepancies among the theories of the various forces, but also some tantalizing indications of similarities. We should expect to find clues here as to how best to proceed.

Keep in mind that there are, as yet, no known egregious experimental problems with the standard model. This is actually a hindrance to progress, since the lack of experimental failures means we can't pinpoint just where the theory is "broken". But it is obviously incomplete in many ways and does have more than its share of conceptual problems. It needs to be augmented to be a more satisfactory theory. Quite possibly it will even be replaced, surviving only as a crude approximation to a better and more comprehensive theory. Yet there are also clues about what we should be looking at.

Symmetry groups, unification, and model building

Take another look at the table of elementary particles:

Elementary particles of matter
Leptons Quarks
Electron Electron neutrino Up quark Down quark
Muon Muon neutrino Charm quark Strange quark
Tau Tau neutrino Top quark Bottom quark

Isn't it pretty obvious we should be looking for a symmetry between the right and left halves of this table? Such a symmetry would help account for several of the clues we mentioned, in particular the fact that there are the same number (three) of generations of quarks as leptons and the fact that the electric charges of particles are multiples of the same fundamental unit (1/3 the charge of an electron). This isn't enough to build a theory on, but it's a start.

Let's approach this from a slightly different angle. We want to have a Yang-Mills gauge theory which is based on a local symmetry like the U(1) of electromagnetism, SU(2) of the weak force, and SU(3) of the strong (color) force. We need a symmetry group which is consistent with all of these. In practice, that means it should contain those three groups as subgroups, so that the symmetry contains all symmetries of the simpler theories. In fact, it should contain the product group U(1)×SU(2) (of the electroweak theory) as a subgroup.

The first problem is that, although this is a constraint, it's not a strong one. There are many groups which contain those others as subgroups – in fact, infinitely many. But the larger the group, the more symmetry, so we should look for the smallest group that suffices. After all, we want to account for existing observations, but there's no point in adding symmetries that have never been observed or even hinted at. Always choose the simplest theory that suffices.

We're still left with many groups to consider, starting with an obvious candidate, the product group U(1)×SU(2)×SU(3). But that is unsatisfactory, for the same reason that U(1)×SU(2) leaves something to be desired as a Yang-Mills symmetry group for the electroweak theory. Mathematically, these are what are known as "product" groups. One can construct product groups in a straightforward manner for any two groups at all. The result doesn't capture anything special about the way the two groups are related in some particular situation.

In our case, this is the reason that the electroweak mixing angle is an arbitrary parameter of the electroweak theory. It is the additional information which needs to be added about the physics above and beyond what the group provides. As we noted, this information ought to fall out of the theory itself. We will have a similar problem in unifying the strong force with the electroweak force if we just use a larger product group. What we really need is a larger group containing U(1), SU(2), and SU(3) as subgroups in such a way that the physics is naturally included.

Perhaps it will help to explain the mathematics a little more fully. U(1), SU(2), and SU(3) are all instances of what are known as "Lie groups", since this type of group was first investigated by Sophus Lie in the 1870s. Abstractly, a Lie group is an infinite group which has the topological structure of a manifold over the real or complex numbers. A manifold, in turn, is a kind of topological space which looks "locally" like a copy of an n-dimensional real or complex space (i. e. R^n or C^n). A little more loosely, a manifold is a topological space, small parts of which look like ordinary (1-dimensional) curves or surfaces (of 2 or more dimensions).

If this mathematical terminology is unfamiliar, it is fortunate that most Lie groups of interest actually occur as groups of square matrices over the real or complex numbers (R or C). If even the term "matrix" is a little daunting, don't be too concerned. A square matrix is nothing more than a table of (real or complex) numbers having the same number of rows and columns. Matrices are fundamental objects in linear algebra, i. e. in the study of solutions of simultaneous linear equations.

In order to have a group, there must be an operation of "multiplication", which is straightforward (if tedious to actually perform) for matrices. Each element must also have an inverse with respect to this operation, which is expressed in other words by saying the matrix is "non-singular". This is equivalent to requiring the determinant of the matrix to be non-zero. And if the matrix consists of coefficients of a set of linear equations, this non-singularity condition is necessary and sufficient for being able to find a unique solution of the set of equations. Since this is quite an important property, the group of all such matrices is called GL(n,R) or GL(n,C), depending on whether the matrices involve real or complex numbers.

In addition to their use in solving systems of linear equations, matrices are a natural way to represent "linear transformations" of the n-dimensional Euclidean spaces R^n and C^n. That is, they correspond to certain types of geometric transformations on such spaces. The non-singularity condition says that the transformation does not reduce the dimension of the space.

Most – but not all – of the Lie groups used in particle physics are groups of this sort, including U(1), SU(2), and SU(3). In fact, they are usually of a somewhat more restricted type – a transformation which geometrically is essentially just a rotation or a reflection. What this means is that the transformation preserves the angles between vectors in the space, without stretching or distortion. In this sense, it is a "rigid" transformation which does not distort geometrical objects. (In technical terms, the transformation preserves the "inner product" of the vector space.)

When the space in question is C^n and the matrices may contain complex numbers, the group of such transformations is called U(n), where "U" stands for "unitary". The determinant of a unitary matrix is a complex number of absolute value one. If it is further required that the determinant be exactly 1, then we have the even smaller (but still infinite) groups SU(n) – the "special unitary" groups.

When the space in question is R^n and the matrices must contain only real numbers, the group of such transformations is called O(n), where "O" stands for "orthogonal". The determinant of a matrix in O(n) is +1 or -1. There are also subgroups where the determinant is exactly 1: the "special orthogonal" groups SO(n).
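Since these groups are just sets of matrices satisfying simple conditions, membership is easy to check numerically. The sketch below (using the numpy library; the helper function names are ours) tests the unitarity and orthogonality conditions directly:

```python
import numpy as np

def is_unitary(m, tol=1e-10):
    """U(n): complex matrices preserving the inner product, i.e. M† M = I."""
    return np.allclose(m.conj().T @ m, np.eye(m.shape[0]), atol=tol)

def is_special_unitary(m, tol=1e-10):
    """SU(n): unitary matrices whose determinant is exactly 1."""
    return is_unitary(m, tol) and bool(np.isclose(np.linalg.det(m), 1.0, atol=tol))

def is_orthogonal(m, tol=1e-10):
    """O(n): real matrices with Mᵀ M = I; the determinant is +1 or -1."""
    return np.allclose(m.T @ m, np.eye(m.shape[0]), atol=tol)

# A U(1) element is just a complex phase, i.e. a 1×1 unitary matrix.
phase = np.array([[np.exp(1j * 0.7)]])

# A rotation of the plane lies in SO(2) (determinant +1);
# a reflection lies in O(2) but not SO(2) (determinant -1).
theta = 0.3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0], [0.0, -1.0]])

assert is_unitary(phase)
assert is_orthogonal(rotation) and np.isclose(np.linalg.det(rotation), 1.0)
assert is_orthogonal(reflection) and np.isclose(np.linalg.det(reflection), -1.0)
```

These are exactly the "rigid", angle-preserving transformations described above: applying any of them to a pair of vectors leaves the inner product between the vectors unchanged.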

There is an elaborate theory for classifying Lie groups, which makes it possible to systematically arrange Lie groups into various types. This in turn makes it easier to examine their properties for use in particle physics. In the case of a certain type of Lie group described as "semi-simple", a complete classification is possible. This is even more helpful. The definition of a semi-simple Lie group is somewhat technical, but it has the nice property of excluding groups, such as product groups, which are constructed in a trivial way out of smaller groups. As indicated above, this is a sensible restriction.

Unitary and orthogonal matrix groups are not the only broad types of Lie groups among the semi-simple ones. But they are a little easier to analyze mathematically and hence are encountered most often. In addition to U(1), SU(2), and SU(3), we will very shortly encounter two more examples.

Shifting gears back to physics now, we recall that what we want to do is to find larger symmetry groups which relate the particles which occur in the table of elementary particles. We want to use these symmetries in a Yang-Mills type of local gauge theory analogous to the successful theories of the electroweak and strong forces, but constructed in such a way that all of the known particles and forces are "unified" in a single theory, using a single symmetry group. There's no guarantee such an approach must succeed, but it's certainly a plausible thing to try.

We recall, further, that the way the group relates to the particles is by permuting particles which are in certain sets called multiplets. Finding such multiplets which, taken together, cover all the particle types, is known as finding a "representation" of the group.

This activity is a kind of game with definite rules. The rules say that all known particles must be accounted for, but it's permissible to add new, as yet undiscovered particles. Similarly, the three forces – electromagnetic, weak, and strong – must be accounted for. (Gravity is left out for the present as being too difficult.) But it's permissible to add new forces.

The game involves finding a symmetry group and a representation of it in which the corresponding particle multiplets collectively account for all known particles. (Groups may have more than one representation.) Further, the forces must fall out naturally of a Yang-Mills type gauge theory when the group is used as a local symmetry group.

The name of this game is "model building". Particle physicists played it extensively in the 1960s and 1970s. There were two noteworthy winners: the U(1)×SU(2) theory of the electroweak force and the SU(3) theory of the strong force. These two theories together, therefore, became known as the standard model.

The next big prize would go to the winner in the game to find a model which unified the electroweak and strong forces. Theories of this sort became known as grand unified theories.

A winner has yet to appear, and so it has come to seem as though this approach alone may not work – that something may be missing.

But we can look at a couple of the best attempts.

Grand unified theory models

It turns out that there are a large number of groups which seem like they might work, and many of these have been investigated. Some can be ruled out because they make predictions which have been contradicted experimentally. While many others haven't been ruled out, this is also a problem – since we have (as yet) not found a way to determine which group is the "right" one.

The SU(5) model

It turns out that SU(5) is the smallest semi-simple Lie group containing U(1)×SU(2)×SU(3). That makes it a very good candidate. And indeed, it was proposed by Sheldon Glashow and Howard Georgi in 1974. It was natural for Glashow to have a part in this, as he had been one of the first (along with Murray Gell-Mann) to study SU(2) in connection with the weak force. (And he shared a Nobel prize for that contribution.)

The representation of SU(5) contains particle multiplets consisting of both 5 and 10 particles. The 5-particle multiplet (or 5-plet) contains 3 down quarks (in each of the three colors), a positron (anti-electron), and an anti-neutrino. There's another 5-plet containing the antiparticles of each of these. Then there's a 10-plet containing up, anti-up, and down quarks, in each of the three colors (9 total particles), plus a positron. And another 10-plet contains the corresponding antiparticles. This pattern is repeated again with 5-plets and 10-plets for the additional two particle generations.

The theory predicts that members of any multiplet are interchangeable in the equations of the theory. That is, the equations are invariant under the symmetry operation. Hence this is saying that not only electrons and neutrinos, but also (certain) quarks obey the same equations and hence are much more similar than appearances would suggest.

Furthermore, the forces which arise from the Yang-Mills gauge theory using SU(5) are capable of changing any particle of a multiplet into any other particle of the same multiplet. Such forces are mediated by one of the bosons of the theory. We already knew this was possible in some cases. For instance, quarks of a certain flavor can become quarks of the same flavor but a different color, via the strong force, by exchanging gluons. Electrons and neutrinos can be transformed, via the weak force, by exchange of W bosons.

But the astonishing thing is that in SU(5), quarks can turn into leptons (either electrons or neutrinos) – or vice versa – by an entirely new force. This force is mediated by a new type of boson, called simply X (or X boson). The X bosons carry both electromagnetic and color charge, in order to ensure proper conservation of those charges in any interactions.

The theory predicts that the X bosons must be extremely massive, with mass-energy in the unification range of about 10^16 GeV. Consequently, the force mediated by X bosons must be extremely weak (i. e., extremely improbable to cause an interaction) and extremely short range. This range is on the order of 10^-30 cm. Unless particles approach each other this closely, a virtual X boson could not come into existence long enough to cover the distance between the particles. The fact that X bosons must be so massive also means that it is not possible to create them in any conceivable particle accelerator that could be built. They can exist as free particles only at a very early stage of the big bang from which the universe emerged.
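The quoted range follows from a standard rule of thumb: the range of a force carried by a massive boson is roughly the boson's reduced Compton wavelength, ħc divided by its mass-energy. A quick sketch of the estimate:

```python
# Rule-of-thumb range of a force mediated by a massive boson:
# range ~ hbar*c / (mass-energy).  Values here are illustrative.
HBAR_C = 1.97327e-16   # hbar*c in GeV·m

def force_range_m(mass_gev):
    """Approximate range (meters) of a force carried by a boson of
    the given mass-energy in GeV."""
    return HBAR_C / mass_gev

# The ~80 GeV W boson gives the weak force its ~10^-18 m range;
# a ~10^16 GeV X boson gives a range around 10^-32 m (10^-30 cm).
print(f"weak force range ~ {force_range_m(80.4):.1e} m")
print(f"X boson force range ~ {force_range_m(1e16):.1e} m")
```

The same formula, run in reverse, is how the unification distance scale of about 10^-32 m is converted into the unification energy of about 10^16 GeV.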

When you count up all the ways that the particles in an SU(5) 5-plet could be interchanged, 24 distinct bosons would be required to effect the change. 12 bosons are already known – the 8 gluons, plus W+, W-, and Z bosons of the weak force, plus the photon. Hence there must be 12 new X bosons, distinguished by the varying amounts of color and electric charge they carry.

There are a couple of very important immediate consequences of this SU(5) model. The first is that the units of electric charge carried by quarks and electrons must be commensurable, since the particles can change into each other. This resolves the long-standing puzzle of why protons and electrons have exactly equal amounts of electric charge (but opposite in sign) – since protons consist of three quarks.

A second consequence is even more striking: quarks can "decay" into leptons, and hence protons too can decay. This would be the first known example of the non-conservation of a quantity known as baryon number. Recall that baryons are particles composed of three quarks. Before SU(5) (and other grand unified theories) it appeared from all experimental observations that baryon number was always conserved. If this were the case, protons could not decay, since they are the lightest baryons, and hence there are no other baryons they could decay into.

SU(5) says this is wrong – protons can decay, because the quarks they are composed of can decay. This is an extremely important prediction of grand unified theories. It is also one that has proven quite problematical, since many experiments designed to observe proton decay have been carried out, without any success at all. In fact, current experimental lower limits on the proton half-life are already too large to be compatible with the predictions of SU(5), so SU(5) cannot be correct as it stands. We'll have more to say about proton decay later.
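It's worth seeing why such experiments are feasible at all, given half-lives vastly longer than the age of the universe: even a modest tank of water contains an enormous number of protons. The rough estimate below (the detector size is illustrative; the half-life values are those quoted in this section) counts the expected decays per year:

```python
import math

AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0
PROTONS_PER_WATER_MOLECULE = 10   # 2 from hydrogen, 8 from oxygen

def expected_decays_per_year(water_mass_tons, half_life_years):
    """Expected proton decays per year in a water detector of the given mass.

    Each proton decays independently with the given half-life; for
    half-lives this long the exponential decay law is effectively linear.
    """
    grams = water_mass_tons * 1e6
    n_protons = grams / WATER_MOLAR_MASS_G * AVOGADRO * PROTONS_PER_WATER_MOLECULE
    return n_protons * math.log(2) / half_life_years

# A 1000-ton water detector, with the original SU(5) prediction (~1e31 yr):
print(f"{expected_decays_per_year(1000, 1e31):.0f} decays/year expected")
# With the measured lower limit (~1.9e33 yr), far fewer:
print(f"{expected_decays_per_year(1000, 1.9e33):.2f} decays/year at most")
```

A kiloton of water should thus show a couple of dozen decays per year if the SU(5) prediction were right; seeing none, year after year, is what pushes the half-life limit up and SU(5) out.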

Nevertheless, there is indirect evidence that baryon number is in fact not conserved. This comes from the apparent asymmetry of matter and antimatter in the universe – for which non-conservation of baryon number is a necessary condition. (Otherwise the net baryon number of the universe would be 0.) We'll have more to say about this too.

There's one more feature of SU(5) to point out. In spite of the apparent complexity that it adds by introducing a whole new type of force and 12 new bosons to mediate it, SU(5) is still the smallest semi-simple Lie group that contains U(1)×SU(2)×SU(3), and hence it adds as little complexity as possible. In particular, it predicts that there should be no new particles or forces that appear between the energy scale of 100 GeV (where electromagnetic and weak forces unify) and 10^16 GeV. That's a difference of 14 orders of magnitude. It seems rather unlikely that nothing new would appear in that whole range. Indeed, the theory of supersymmetry (which we'll introduce in a little bit) predicts lots of new particles with masses between 100 and 1000 GeV.

So. SU(5) is just too simple to be true. Not only is it implausible in the way just described, but a number of its predictions have already been contradicted experimentally. For starters, it predicts that protons decay with a half-life of about 10^31 years, whereas the best measurements as of this writing indicate a half-life no less than 1.9×10^33 years. To its credit, SU(5) does predict a value of the electroweak mixing angle. Unfortunately, the prediction is off by about 10% compared to the best experimental measurements.

And these aren't the only problems. SU(5) predicts that neutrinos have exactly zero mass. This is now considered almost certainly wrong, based on several lines of evidence that neutrinos do have nonzero mass. SU(5) also is not adequate to support the theory of cosmic inflation – which seems to require some sort of grand unified theory and is itself gathering better supporting evidence as time goes on. And there are other cosmological calculations that SU(5) seems to get wrong as well. We'll go into a little more detail on many of these points later, in considering grand unified theories as a whole.

But as for SU(5), alas, it is a theory which is as simple as possible – too simple. Nice try, but no cigar.

The SO(10) model

There are many, many other models which have been investigated as candidates for a grand unified theory. We'll look at just one more, to give an idea of the ways that models differ.

SO(10) is the group of 10 by 10 real orthogonal matrices. It was investigated by Howard Georgi and others quite early on. It's semi-simple, like SU(5), and contains SU(5) as a subgroup. In SO(10) the fermions of a generation are not arranged in multiplets of 5 or 10 particles as in SU(5), nor in a 15-particle multiplet combining those. Instead, SO(10) has a 16-dimensional (spinor) representation, so there is a 16 particle multiplet.

In addition to the 15 particles in the combined SU(5) 5-plets and 10-plets, the SO(10) 16-plet contains one additional particle. This corresponds to a right-handed neutrino. A right-handed neutrino has never been observed in nature because (even if it actually exists) it is not affected by electromagnetic, weak, or strong forces. (Neutrinos have no electric or strong (color) charge, and the weak force affects only left-handed neutrinos.) If such a particle exists (as SO(10) would require), it would have to be very massive to have avoided detection thus far.

Any particle in a fundamental multiplet can be converted into any other by the exchange of a vector boson. In SU(5), where the fundamental multiplet has 5 particles, there are 5^2-1 possible nontrivial interactions – one for each possible ordered particle pair, less one (corresponding to the "identity" operation, which is the "nothing happens" case). SU(5), therefore, has 24 bosons – 12 already known, and the 12 new X bosons. Similarly, SU(3) (the theory of the strong force), which has a fundamental triplet, has 8 (i. e. 3^2-1) bosons – the 8 gluons. And SU(2) (the weak part of the electroweak theory) has fundamental couplets (e. g. electron and neutrino), and 3 (i. e. 2^2-1) bosons – W+, W-, and the Z.
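The counting rule for SU(n) theories can be stated compactly: an SU(n) Yang-Mills theory has n^2 - 1 gauge bosons (the number of independent symmetry directions of the group). A trivial sketch:

```python
def su_gauge_bosons(n):
    """Number of gauge bosons in an SU(n) Yang-Mills theory: n^2 - 1,
    the dimension of the group (its number of independent symmetry
    directions), not the size of the particle multiplet itself."""
    return n * n - 1

assert su_gauge_bosons(2) == 3    # weak force: W+, W-, and a neutral boson
assert su_gauge_bosons(3) == 8    # strong force: the 8 gluons
assert su_gauge_bosons(5) == 24   # SU(5): 12 known bosons + 12 new X bosons
```

Note that the rule is specific to the unitary groups SU(n); orthogonal groups like SO(10) have their own, different counting formula.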

For SO(10) the counting works differently, since it is an orthogonal rather than a unitary group: it has 45 gauge bosons, most of which would be very massive. It would be quite a complicated theory. Unlike SU(5), as a result of this added complexity, there ought to be effects that could be observed in the energy range between 100 GeV and 10^16 GeV. In particular, there would be minute changes in how the weak and strong forces behave at distance scales smaller than 10^-16 cm. These effects would be due to "screening" caused by some of those numerous additional massive bosons.

Another effect would be that the amount of energy required to cause protons to decay would increase somewhat, thereby decreasing the probability of decay and increasing the half-life of the proton. Calculations indicate that the half-life of the proton in the SO(10) theory would be just barely beyond the currently measured lower limit of 1.9×10^33 years. So SO(10) isn't excluded by observation – yet – but it's pretty close.

General implications

Let's pause and take stock of some general implications that all (or most) grand unified theories have in common. In general, these theories (or most of them anyhow) predict certain novel effects, which in turn have far-reaching observable consequences. These consequences are especially noteworthy with respect to cosmological phenomena – because most of these effects come into play only at the extremely high energy scale near that of the big bang itself.

To the extent these predictions are verified, we have indirect evidence for some sort of grand unified theory. Conversely, failure to verify these predictions may be evidence against theories of this type – although there are usually ways to amend or extend the theory to handle the problem.

We'll summarize some of these predicted effects here and indicate the relationship between them. In later sections we'll go into a little more detail.

Proton decay

The bedrock of a grand unified theory is that the strong force and the electroweak force become unified at very high energy. This means that the forces are essentially the same and obey the same equations at sufficiently high energy. As a result, there must be a symmetry between quarks and leptons at high energy, and processes which can convert these particles into each other. Consequently, one of the three quarks within a proton can change into a positron, which causes the proton itself to decay. Although a huge amount of energy is required to make this happen, there is a very small (but nonzero) probability for the necessary virtual X boson to appear, giving protons a very large (but not infinite) half-life.


All of our observational evidence to date has indicated that "baryon number" is a conserved quantity. Baryons are particles consisting of three quarks, such as protons and neutrons. Interactions of particles have never been observed in which the net number of baryons (where antiparticles are counted as a negative number) changes. But if protons can decay, this conservation law must fail. The particle reactions, however, are reversible, so if baryons can decay, they can also be created.

And there is evidence that the conservation law needs to fail, because the universe appears to consist almost entirely of matter instead of antimatter. I. e., there is a positive baryon number, when a total baryon number of zero ought to be expected. There ought to be an explanation for this anomaly – and grand unified theories implying nonconservation of baryons can provide it.

More about baryogenesis

Additional Higgs fields

If there are particle interactions which convert between quarks and leptons, then extremely massive bosons are necessary to mediate the force that this implies. But Yang-Mills gauge theories which involve massive bosons require additional Higgs fields to give mass to these bosons, and these new Higgs fields imply new Higgs bosons which are themselves extremely massive.

The nature of these Higgs fields is that at very high energies, the state of lowest energy of the field occurs when the field itself is zero. But at some critical point as the energy level drops, the lowest energy state (i. e., the "vacuum state") changes so that it occurs at a nonzero field strength.

Cosmic inflation

In a system which is cooling from a very high temperature – such as the universe very soon after the big bang – there is a critical point where certain Higgs fields must assume a nonzero value in order to be in their lowest energy state. At this critical point, a sudden phase transition can occur in which enormous quantities of mass appear out of the vacuum itself. This sudden appearance of mass has drastic consequences for the system – the hypothesized phenomenon of cosmic inflation, in particular.

Magnetic monopoles

Magnetic monopoles are exotic forms of matter which carry a unit of magnetic charge. Although foreseen by theorists as early as P. A. M. Dirac in the 1930s, they have never been observed. However, they are a natural consequence of the physics of Higgs fields in grand unified theories. This is sometimes considered a problem with Higgs physics, in view of the fact that monopoles haven't been observed. Given the supermassive Higgs bosons that should exist in grand unified theories, the existence of monopoles should be pretty certain. But at the same time, the phenomenon of cosmic inflation provides an explanation for why monopoles are not observed.

Neutrino mass

In the standard model, neutrinos are massless. That is, the model doesn't provide any reason for neutrinos to have mass. And in fact, if no particles exist other than those covered by the model, neutrinos must be massless. The same is true of the SU(5) unified theory. This is equivalent to saying that neutrinos do not couple with the Higgs field(s) in these theories.

Yet experimental evidence which has accumulated in the last few years indicates that neutrinos do have mass, albeit in rather small amounts. Unified theories more complex than SU(5) provide additional Higgs fields to which neutrinos may couple in order to acquire mass.

Exotic particles

Most grand unified models, even SU(5), predict the existence of new and as yet undiscovered elementary particles. In the case of SU(5), the model specifically excludes any new particles that are affected by the electroweak or strong forces. So any additional particles would be very hard to detect, because they would not interact with the kind of matter we currently know of, which is subject to one or more of the electromagnetic, weak, or strong forces. One possible particle in this class is known as the "axion".

The SO(10) model would include a right-handed neutrino (and corresponding left-handed antineutrino), as well as many exotic bosons. Such a neutrino might be very massive, but would be very hard to detect, since it would not be affected by any of the known forces except gravity. The additional bosons would be carriers of one or more new forces even weaker than the weak nuclear force, and of which (therefore) we currently know nothing.

Other models would have even more exotic fermions and bosons. Some of them might be affected by electromagnetic, weak, or strong forces, but all that do would have to be extremely massive, since they have left no evidence of themselves in accelerator experiments. One of the best indications that some sort of massive exotic particles do exist is the very strong evidence that most of our universe is actually composed of non-baryonic dark matter, which simply cannot be accounted for by the standard model.

Proton decay

When you stop to think about it for a moment, it would be rather surprising if protons didn't decay.

In the first place, from the theoretical point of view, the "ideal" state of the universe would be one in which maximal symmetry prevails. This would entail, among other things, just a single type of particle (which would be massless) and just a single type of force. But such high symmetry is unstable, like a pencil balanced on its point. This much symmetry could exist, if ever, only in the first instant after the big bang when the universe was unimaginably hot. As it cooled, all of the particles and forces we know today would have very quickly precipitated out. Leptons and quarks, in particular, would have become distinct. But just before that point, protons could not exist stably because quarks and leptons would be constantly turning into each other. (Protons might not exist for other reasons as well – the temperature would be so high that quarks could not form a bound system. Instead, a "quark-gluon plasma" would prevail.)

In the second place, from an observational point of view, we are almost certain that there is far more matter in the universe than antimatter. A necessary condition for this asymmetry (as we shall discuss) is that baryon number not be conserved. In other words, matter/antimatter asymmetry is inconsistent with baryon number conservation, and hence with absolute stability of protons.

In spite of all this, we still have no direct evidence that protons do decay. And it's not for lack of trying. A number of experiments have been conducted trying to detect proton decay. The first began around 1978 in the Kolar gold mine in India. Others have since been conducted in a northeastern Ohio salt mine (the "IMB experiment"), in the Mont Blanc tunnel between France and Italy, and in a silver mine in Utah. The most recent is the Kamiokande experiment in Japan, which has also been used in neutrino studies.

Although it might seem impossible to observe an event like proton decay, which would take longer than 10^31 years to happen to any given proton, it's not quite so hard as it sounds. Several thousand tons of matter (typically water) contain about 10^33 protons, so about 100 of them might decay every year. Detectors consist of a container with this much water and thousands of photomultiplier tubes to register the flashes of light that would be the signal of a decay event. Such experiments are conducted deep underground in order to shield them from cosmic rays, which would produce false signals.
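The counting above is easy to check directly (a quick sketch, using the round figures quoted in the text):

```python
import math

# round figures from the text: a few thousand tons of water
n_protons = 1e33                # protons in the detector
half_life_years = 1e31          # proton half-life at the SU(5) scale

# exponential decay: expected decays per year = N * ln(2) / half-life
decays_per_year = n_protons * math.log(2) / half_life_years
print(decays_per_year)          # roughly 70, i.e. on the order of 100 per year
```

With ln(2) included the estimate comes out near 70 per year; the text's "about 100" is the same order-of-magnitude statement.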

The SU(5) theory predicts a proton half-life of about 10^31 years. Although several suspicious events were noted fairly early, nothing which could conclusively be deemed a proton decay has been confirmed. Already around 1982 it became pretty clear that the proton half-life must greatly exceed 10^31 years, so that SU(5) was definitely ruled out.

Pushing the minimum half-life much beyond this requires watching larger quantities of matter for longer periods of time. At present the minimum is about 1.9×10^33 years, with still no sign of proton decay. This is almost enough to eliminate other possible grand unified theories, such as SO(10). But for all anyone knows, there are theories consistent with much longer proton half-lives. It's certainly starting to appear that we won't be so lucky as to be able to measure a definite proton half-life, which would give us some information about which theories could be viable. At least, not with the current experimental approaches.

It turns out, in most unified theories, that the predicted proton lifetime is proportional to the 4th power of the X boson mass – a fairly sensitive dependency. However, there's an uncertainty in this mass of at least a factor of 10, so there can be an uncertainty in the predicted lifetime of at least a factor of 10,000 – quite a lot, especially when pushing the limits of experimental technology.
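The fourth-power scaling makes this easy to see in numbers (a sketch of the arithmetic only; no particular theory is assumed):

```python
# the proton lifetime scales as the 4th power of the X boson mass,
# so a factor-of-10 uncertainty in the mass compounds four times over
mass_uncertainty_factor = 10.0
lifetime_uncertainty_factor = mass_uncertainty_factor ** 4
print(lifetime_uncertainty_factor)   # 10000.0
```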

It is also possible to construct grand unified theories in which protons do not decay. These would be automatically refuted if we ever did observe definite decay events. While such a model might give us a little more comfort with grand unified theory as long as proton decay goes undetected, it may not be what we want anyhow. As indicated above, proton decay is tied up with other phenomena for which there is observational evidence – baryogenesis and matter/antimatter asymmetry.

Cosmic inflation

We're going to discuss cosmic inflation in some detail elsewhere, so it would be redundant to say too much about it here. But it is highly relevant to grand unified theories, because it turns out to depend on two main ideas. The first of these is Higgs physics. This involves the way a phase transition can occur in which Higgs fields suddenly take on large nonzero values in order to minimize their vacuum energy.

The second idea, which is where grand unified theories come in, is that the Higgs fields of importance are those which generate the masses of the extremely heavy X bosons which mediate transformations among quarks and leptons. In other words, these are the Higgs fields which break the symmetry of the grand unified theory.

It is expected that there will be many Higgs fields involved with GUT symmetry breaking. But suppose, for simplicity, that there are only two. At any point in space, the value of one Higgs field is just a scalar – a number, which can be positive or negative. So the values of two Higgs fields can be plotted in two dimensions with a standard x-y coordinate system. Any point in this plane represents some numerical value assumed by each of the fields. Along the third axis one can plot the energy existing in the vacuum as a result of these two fields. That is, above each point in the plane is a third point, representing energy. (In this discussion, energy is a positive quantity, so the energy is actually on or above the plane.) The resulting energy graph is a two dimensional surface.

"Normally" what one would expect is that the energy would be zero when both fields are zero, and nonzero (positive) otherwise. In this case, if we assume the energy is symmetric under exchange of the two field values, the energy graph would resemble a paraboloid. That is, rather like an egg shell which has been cut in half through the middle and is situated with the tip of the shell at the point (0,0).

This is what the theory postulates for the relation between Higgs field strength and vacuum energy when the temperature is sufficiently high. Under these circumstances, the system is in a state of minimal energy when both (all) fields have a value of zero.

But the theory also postulates that when the temperature drops below a certain point, the energy graph takes a different form. It becomes shaped somewhat like a Mexican hat or the bottom of a wine bottle, with a hill in the middle. The minimum of the energy then occurs when both (all) fields are non-zero.

Since we don't have a satisfactory grand unified theory, we don't know very precisely what the unification energy scale is, but estimates are generally around 10^15 to 10^16 GeV. The conversion factor between GeV and temperature (degrees K) is derived from the Boltzmann constant, k_B, which is about 8.617×10^-5 eV per degree K. As a handy approximation, we can say 1 GeV corresponds to about 10^13 degrees K. So we are talking about a temperature of 10^28 K to 10^29 K. Thus the typical photon at a temperature of 10^28 K has an energy of about 10^15 GeV.
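The conversion is a one-line calculation (a sketch, taking "typical photon energy" to mean simply k_B times the temperature):

```python
# Boltzmann constant in eV per kelvin
k_B = 8.617e-5

T = 1e28                 # GUT-era temperature in kelvin
E_eV = k_B * T           # typical thermal energy in eV
E_GeV = E_eV / 1e9       # convert eV to GeV
print(E_GeV)             # about 8.6e14, i.e. roughly 1e15 GeV
```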

There is likewise an uncertainty in precisely when the universe was in this state, but estimates are around 10^-35 to 10^-36 seconds after the big bang. In this state where all forces are unified, everything is very symmetrical. X bosons abound, and leptons and quarks turn into each other with blithe abandon. In fact, the different types of particles really are not distinguishable.

But as the temperature falls through that point, the vacuum itself becomes unstable. The vacuum energy graph described above no longer has its minimum when all Higgs fields are zero. Instead, it has minimal energy when the fields are not zero. This unstable vacuum energy persists for a very brief time. During this instant we have what is called a "false vacuum". There is a huge amount of energy "trapped" in the vacuum, due to the shape of the energy function. This is similar to the state of water when it is "supercooled" below 0° C. Although the system is still very symmetrical, it is now unstable.

This vacuum energy can be described through the general theory of relativity as the term in Einstein's equation called the "cosmological constant". It represents a kind of antigravity, a force acting repulsively rather than attractively. This force of repulsion will exist as long as the false vacuum.

The net result is a violent expansion of the universe, which is what is meant by the term "cosmic inflation". The rate of expansion depends on details of the specific GUT involved, but is in the general range of a doubling in size every 10^-37 seconds. Hence the universe could go through 100 such doublings in size in just 10^-35 seconds. This would amount to an expansion by a factor of about 10^30.
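The doubling arithmetic checks out directly (a sketch using the round figures above):

```python
doublings = 100
expansion_factor = 2.0 ** doublings
print(expansion_factor)          # about 1.27e30, i.e. a factor of ~1e30

# time required at one doubling per 1e-37 seconds
time_seconds = doublings * 1e-37
print(time_seconds)              # 1e-35 seconds
```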

How many doublings in size actually occur? That depends on how long the inflation actually lasts, which in turn is very dependent on the nature of the specific GUT involved. In fact, this is one of the trickier parts of the whole theory: the question of what brings inflation to an end.

Inflation proceeds as long as the unstable false vacuum persists. It is thought that the false vacuum will eventually decay by a process like quantum tunneling. Something like this is needed, because the shape of the energy function is actually a little more complicated than the Mexican hat or wine bottle. The Higgs fields, it seems, are trapped in a small energy well at the top of the energy surface. The potential energy would need to increase somewhat before it could be eventually released. But this can also occur if the fields "tunnel" through the energy barrier, just as a quantum particle can tunnel out of an energy well.

Eventually, through some tunneling process, the symmetry is broken. The Higgs fields suddenly take on nonzero values and lower the potential energy of the vacuum. The potential energy suddenly drops – or rather, it is converted into the kinetic energy of particles which are now massive (because the Higgs fields are no longer zero) and moving at near the speed of light.

Different GUTs predict different forms of the energy surface, and hence the details of how long inflation actually persists vary quite a bit. But the end result is much the same, whether the actual amount by which the universe inflates is a factor of 10^30 or 10^60.

There are several noteworthy effects of this enormous expansion. One is that any curvature which may have existed in the prior universe is reduced to essentially zero – total flatness. Our best measurements of the shape of the universe indicate that it is in fact very close to flat. And this is more of a problem than one might think, because (without inflation) the amount of flatness we see now can only have resulted from an incredible degree of fine tuning in the early universe.

Another problem that inflation solves is the degree of isotropy and homogeneity observed in the universe today. That is, as far as we can tell, every part of the universe, in every direction, is very like every other part. This is reflected, for instance, in how nearly the temperature of the cosmic microwave background radiation is the same in every direction. The problem is that we can observe portions of the universe, in opposite directions from each other, which could not have been in physical contact at a very early stage. Thus it is puzzling how they could be so much alike. Inflation, again, handles this by providing a means by which all of the visible universe in fact could have been causally connected – since the part we can see now is just a tiny part of a causally connected region that has inflated enormously in size.

And there are additional observational facts that inflation handles well, such as the origins of the large scale structure of the universe. But we'll save more details about such matters for elsewhere. Except for one specific item – magnetic monopoles. Most GUTs predict monopoles should exist. The problem is, we've never observed (for sure) a single monopole. Inflation can handle this problem.

More detail on cosmic inflation

Magnetic monopoles

The existence of monopoles, like cosmic inflation, is a phenomenon predicted by Higgs physics in the context of a grand unified theory. Some sort of GUT seems to be a necessary condition. Monopoles do not occur in the standard model, even though the theoretical possibility of monopoles had been noticed by Dirac in the 1930s.

Monopoles can be described as "topological defects" – kinks in quantum fields, the Higgs fields in particular. Another way to say this is that monopoles are composed (in some sense) of Higgs fields. They represent discontinuities in the fields, a little like the discontinuity of a broken piece of sidewalk that is easily stumbled over. Another term sometimes used is that monopoles are topological "solitons" – a kind of wave having finite physical extent.

Although Higgs fields are scalar fields, there are usually many of them (in GUTs). Taking the values of all the fields together gives you a vector, which has both a direction and a length. Vector fields tend to have isolated discontinuities. An example of this is a tornado or water running down a drain. The vector fields indicating the speed and direction of the wind or water are continuous, except for the point in the very center.

Calculations indicate that GUT monopoles ought to be extremely massive – about 100 times the unification energy scale (roughly the mass of an X boson). Since this scale, inferred from the failure to observe proton decay, is in excess of 10^16 GeV, monopoles must weigh in at more than 10^18 GeV.

As already noted, monopoles have never been observed, despite many diligent searches. It's not that monopoles are just carefully hidden away, deep inside stars or planets, for example. Because monopoles are so massive, calculations of their expected rate of production in the early universe indicate that their mass should absolutely dominate everything else in the universe – causing it to collapse in as little as 1000 years.

But the solution of this monopole problem is close at hand. It's inflation, again. The universe would have expanded so drastically during the inflationary period that the resulting number of monopoles in each cubic lightyear is negligible.

More detail on magnetic monopoles


The standard model doesn't really say much about neutrinos. They are incorporated in the model, of course. But that's just because the model has been constructed to contain them, since their existence is an observational fact. Indirectly, the standard model implies that neutrinos are massless.

That's because the standard model incorporates only left-handed neutrinos (and right-handed antineutrinos). That is all the SU(2) electroweak symmetry provides for, because right-handed neutrinos don't couple to the electroweak force. Consequently, neutrinos do not couple with the Higgs field(s) of the standard model (needed for electroweak symmetry breaking), so they can't have mass from that source.

It turns out that if right-handed neutrinos don't exist, then neutrinos must be massless. Stated another way, if neutrinos have any mass at all, then right-handed neutrinos (still not coupling to the weak force) must exist. This means that in the standard model neutrinos must be massless. And since the SU(5) GUT doesn't include right-handed neutrinos either, they must be massless in that theory as well. SO(10), on the other hand, would allow them.

But there is increasing evidence that neutrinos do have mass. One clue was the observation of neutrinos from Supernova 1987A. If neutrinos have any mass at all, they cannot travel at exactly the speed of light. Consequently, neutrinos produced in that supernova should have had a small range of velocities just under the speed of light, and as a result, they should have arrived in our detectors at slightly different times. This is in fact what was observed. The spread of arrival times was only 12 seconds, which implies that the mass of the neutrinos could have been at most about 16 eV. Unfortunately, most of the spread could also have been due to differences in when the neutrinos were produced.
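The 16 eV bound can be roughly reproduced with the standard time-of-flight argument (a sketch; the distance and the 10-40 MeV energy range are assumed values typical of SN 1987A analyses, not figures from the text):

```python
# assumed inputs (typical of SN 1987A analyses):
distance_ly = 1.68e5        # distance to the Large Magellanic Cloud, light-years
m_eV = 16.0                 # trial neutrino mass
E_low_eV = 10e6             # a 10 MeV neutrino
E_high_eV = 40e6            # a 40 MeV neutrino

seconds_per_year = 3.156e7
L_over_c = distance_ly * seconds_per_year   # light travel time in seconds

# relativistic delay behind light for E >> m c^2:  dt = (L/c) * m^2 / (2 E^2)
def delay(E_eV):
    return L_over_c * m_eV**2 / (2.0 * E_eV**2)

spread = delay(E_low_eV) - delay(E_high_eV)
print(spread)   # a few seconds -- comparable to the observed 12 s spread
```

A 16 eV neutrino would smear the arrival times by several seconds over this energy range, which is why the observed 12-second spread bounds the mass at roughly that level.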

Better evidence for neutrino mass comes from measurements of solar neutrinos. Observations have consistently shown only about 1/3 of the number of neutrinos expected to have been produced normally within the Sun. It is now accepted that this implies a phenomenon called neutrino oscillation. This means that neutrinos spontaneously change among the types associated with electrons, muons, and taus. In order for this to occur, it is necessary that neutrinos have some mass.

So the evidence is now pretty good for neutrino mass. Although this means the standard model is incomplete, that's not much of a surprise. It's no surprise that this also rules out an SU(5) GUT, since that model has a number of other problems as well. On the other hand, this gives a bit of indirect evidence for some other type of GUT, in that other models do allow for right-handed neutrinos and neutrino mass.

More detail on neutrinos

Exotic particles and dark matter

"New" particles don't appear for no reason at all. Nature usually has its reasons. If some grand unified theory is correct, there are several reasons why various as yet unobserved exotic particles may exist.

One way that exotic particles may come out of the theory is due to the nature of the models themselves. Along with the symmetry group that belongs to the model there are group representations consisting of one or more fermion multiplets. Each multiplet consists of particles which can be interchanged by symmetries of the group. SU(5) is the only grand unified model whose multiplets consist exclusively of known particles. With larger symmetry groups, additional particles need to exist simply to fill out the symmetry. This sort of circumstance has been around for some time in the theory of elementary particles. In 1961, for instance, Murray Gell-Mann's "eightfold way" symmetry required the existence of an as-yet unknown particle (the Omega-minus) in order to be complete. The particle was actually found in 1964.

In addition to the fermions predicted by a model, there are also bosons mediating the gauge forces. The number of these bosons is the dimension of the symmetry group: N^2-1 for SU(N), and N(N-1)/2 for SO(N). For instance, in the SU(2) theory of the weak force there are 3 fundamental bosons. (Additional bosons occur as superpositions of the fundamental ones.) In the SU(3) theory of the strong force there are 8 fundamental bosons – the gluons. As noted above, the SU(5) theory has 24 bosons (12 already known, and 12 new X bosons). The SO(10) model has 45 fundamental bosons.

So if one of these unified models is actually correct, there must be a significant proliferation of "elementary" particles. How could they all manage to escape detection? The answer seems simple enough – the new particles would have to be so massive that they couldn't be created in experiments with existing accelerators. Most such particles would be unstable as well, so they wouldn't be found in nature, as cosmic rays for example. But this answer is a little too simple. Even if a particle is too heavy to be created in an accelerator, its potential existence as a virtual particle should have effects on particles that can be created. This is especially true of gauge bosons. Since they may exist virtually for very short periods of time, they can have subtle effects on observable processes. They will make some small contribution to the probabilities of various kinds of interactions, for instance.

At present the most sensitive experiments haven't been able to identify any effects that, with great certainty, must be due to such additional particles. This tends to be evidence against unified models other than SU(5) (which is itself ruled out for a variety of other reasons).

There is, however, one other significant potential source of new, exotic particles: supersymmetry. We'll go into it a little more below (as well as elsewhere), but the basic idea is that for every known fermion or boson the symmetry provides a new particle of the opposite type (boson or fermion) that corresponds to it. (Some particles may have more than one such "superpartner" in some versions of supersymmetry.) None of the known particles has the right properties to be a superpartner of any other known particle, so if supersymmetry is correct, the number of "elementary" particles is immediately doubled (at least).

Here again, that none of these supersymmetric particles have been observed (assuming they exist) is probably because they are very massive. But there are also reasons for expecting that the lightest superpartner could not be too much more massive than current experiments are able to probe. So discovery of one or more supersymmetric particles could occur almost any time – and supersymmetry as a theory will start looking pretty shaky if nothing turns up within, say, the next 10 years.

There is one additional theoretical source of exotic new particles worth noting. It comes from the phenomenon of CP symmetry breaking and a puzzle related to this and the strong force, known as the "strong CP problem". Out of this, using what is essentially a Higgs mechanism, there should come a very light particle known as the "axion". We'll discuss that more elsewhere, and merely take note of it here.

All of these various exotic particles are extremely interesting for two reasons. In the first place, observation of any of them will give instant credibility to the theory which predicts them – one of the grand unified models or supersymmetry in particular. Discovery – or nondiscovery – of any of these particles makes for a very good test of the various theories.

In the second place, the almost certain predominance of dark matter in the universe makes it equally certain that some sort of exotic non-baryonic particles must exist in great abundance. We would plainly like very much to know exactly which of the many possibilities the dark matter actually consists of. (It may be several of the candidates, of course.) In addition to the exotic particles described above, there aren't all that many other good candidates for the non-baryonic dark matter that come out of plausible theories. Massive neutrinos (including the hypothetical right-handed kind) are one other possibility. Magnetic monopoles are another. But that's about it.

We stress the "non-baryonic" type of dark matter for a good reason. It would take far too long here to go into how the amount of dark matter in the universe is estimated. But when everything is calculated out, there are a couple of key points. First, only about 1/6 of the mass of the universe is baryonic (i. e. protons and neutrons, mostly), while the other 5/6 is non-baryonic. This is estimated by means of fairly reliable computations of big-bang nucleosynthesis (the creation of light atoms such as hydrogen and helium). In other words, over 80% of the mass of the universe is not only dark, but is actually non-baryonic. Second, of the baryonic part, only about 20% of it is in the form of visible matter, such as stars, galaxies, and luminous gas clouds. This is estimated by such things as the rotation rates of galaxies and the motion of galaxies in clusters. Therefore probably at most 3% to 4% of the matter in the universe is visible, and the rest is (by definition) dark matter – so there is quite a bit of it.
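The two fractions multiply out as claimed (a sketch using the round figures above):

```python
baryonic_fraction = 1.0 / 6.0    # share of matter that is baryonic
visible_of_baryonic = 0.20       # share of baryonic matter that is visible

visible_fraction = baryonic_fraction * visible_of_baryonic
print(visible_fraction)          # ~0.033, i.e. about 3% of all matter
print(1.0 - baryonic_fraction)   # ~0.83: the non-baryonic share, over 80%
```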

This non-baryonic matter, then, which comprises more than 80% of the mass of the universe, is made up of some mixture of massive neutrinos and other particles even more exotic, as outlined above. There are good reasons for thinking most of it is not neutrinos. (If there were very much of this so-called "hot" dark matter, galaxies would not cluster to the extent that they are observed to do.) Consequently, more than half (and probably much more) of the mass of the universe consists of exotic particles which are predicted by grand unified theories and/or supersymmetry.

The point is, we really need something beyond the standard model simply to understand what most of the universe is composed of.

More detail on dark matter

More detail on CP symmetry violation and axions

The electroweak mixing angle

We have already noted that one problem addressed by grand unified theories is that of how to provide a more seamless unification of the theories of the electromagnetic and weak forces. It's been observed above that in the standard model there is little theoretical guidance about how these two forces are actually related – more specifically, how the strengths of the two forces are related. This can be measured experimentally, of course. But then it's just another arbitrary parameter of the theory – one of about 20. We would prefer that the theory itself determine the parameter – but it had better agree with experiments.

To begin with, we should explain better what this quantity is. In the first place, why is it called an "angle"? One answer is that it comes out of the original metaphor used to understand SU(2) symmetry. This goes back to the notion of "isospin" introduced by Werner Heisenberg. This idea allowed him to regard the two nucleons – protons and neutrons – as really different forms of the same particle. These two forms are related to each other by rotations in an abstract space – rotations through some particular angle.

Since protons and neutrons aren't elementary particles, this idea didn't really accomplish much. But it turned out that an exactly analogous idea – now called "weak isospin" – does work for pairs of particles such as electrons and neutrinos or up and down quarks. The symmetry built on those particle doublets is the SU(2) of the electroweak theory.

Sheldon Glashow in 1961 came up with the idea that is essentially the electroweak mixing angle in the course of making his contribution to the electroweak theory. However, to his consternation, the quantity was for a time known as the "Weinberg angle", after Steven Weinberg, who independently came up with the same idea about 6 years later. The concept now goes under several other names as well, such as "electroweak unification angle" and "weak mixing angle".

Physicists refer to the strength of a force as a "coupling constant", so the most intuitive way to compare the electromagnetic and weak forces is by means of the ratio of the electromagnetic and weak coupling constants. The best current measurement of this ratio is about .472. So in some sense the electromagnetic force is about half as strong as the weak force.

How does an angle figure into this? One simply defines the electroweak mixing angle as the angle x such that the given ratio is sin(x). (This angle is usually written as the Greek letter theta, as one commonly does with angles.) On the basis of this definition, the mixing angle is about 28.16 degrees. (That's the inverse sine function of .472.)
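The definition is simple enough to verify numerically (a sketch; .472 is the measured coupling ratio quoted above):

```python
import math

ratio = 0.472                            # measured ratio of coupling constants
theta = math.degrees(math.asin(ratio))   # mixing angle, in degrees
print(theta)                             # ~28.16 degrees
```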

But why go to the trouble of expressing this value in terms of an angle? The answer is that the sine of the angle plays an important role in the theory. One of the main functions of the mathematics involving the Higgs fields was to account for the fact that the gauge bosons of the weak force (Z, W+, and W-) all had considerable mass. Also, instead of the neutral Z vector boson which is actually observed in connection with a phenomenon called a "neutral current", the theory yields a different neutral vector boson, denoted by W0.

What's wrong with this picture? Well, for one thing, we wanted to obtain the familiar photon, which is massless, as one of the gauge bosons of the theory. And at the same time, we wanted to have Z instead of W0. Is there some way to get the photon and the Z out of the theory?

The answer is, yes. To begin with, the electroweak theory yields another electrically neutral vector boson, call it V, which comes from the U(1) symmetry. It is a little like a photon, but it cannot simply be identified with the photon we observe. Still, we now have two neutral bosons, V and W0. We can, via rotation through an angle x and superposition of the results, produce two new particles, A and Z:

A = W0 sin(x) + V cos(x)
Z = W0 cos(x) - V sin(x)

Both of these are still electrically neutral. But, magically, it turns out that when x is the electroweak mixing angle defined as above in terms of a ratio of coupling constants, then A is in fact the massless photon, and Z is the neutral vector boson associated with neutral currents.
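As a toy numerical check (not the full field theory), one can treat W0 and V as orthonormal basis states and form the rotated combinations just described; the rotation leaves the two new states independent of one another:

```python
import math

# Toy sketch: W0 and V as abstract orthonormal basis directions.
x = math.radians(28.16)            # the electroweak mixing angle
W0 = (1.0, 0.0)
V  = (0.0, 1.0)

def combine(a, u, b, v):
    """Return the superposition a*u + b*v, component-wise."""
    return (a * u[0] + b * v[0], a * u[1] + b * v[1])

photon = combine(math.sin(x), W0, math.cos(x), V)   # photon = W0 sin x + V cos x
Z      = combine(math.cos(x), W0, -math.sin(x), V)  # Z = W0 cos x - V sin x

# The rotation preserves orthogonality: the photon and Z states remain
# independent (their dot product vanishes for any angle x).
dot = photon[0] * Z[0] + photon[1] * Z[1]
print(f"photon·Z = {dot:.10f}")
```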

Well, that's very nice, but so what? Remember that sin(x) is an experimentally measurable value – a ratio of coupling constants. Now, the standard model has no way to predict this value; it's just an arbitrary constant. The big deal here is that a grand unified theory can predict it.

Unfortunately, at first (around 1974), the predicted and measured values of the mixing angle did not agree. But surprisingly, as time went on, the measured value shifted until it agreed with the theoretical prediction to within about 5%. Much better, but still not quite right.

The theoretical prediction, however, was made using the minimal SU(5) theory. And that theory has already been ruled out for various other reasons. When a unified theory which incorporates supersymmetry is used to compute the mixing angle, it is found that theory and experiment agree to within about half a percent.

That agreement of theory and experiment is the big deal. It's a fairly nice piece of evidence in favor of both some type of grand unified theory and supersymmetry.

We are going to see that unified theory faces other challenges – and supersymmetry again comes to the rescue.

The hierarchy problem

The hierarchy problem has been mentioned before in passing. There are various ways to think about it. As we have discussed it before, it has to do with the fact that electroweak symmetry is broken at an energy of about 100 GeV, while the "grand unified" symmetry is broken at an energy around 10^16 GeV. This is a difference of 14 orders of magnitude – i.e. a difference of 14 decimal places. Roughly the same sort of difference exists between the masses of the gauge bosons that carry the electroweak and strong forces (the W/Z and the X bosons).

If one assumes that a Higgs mechanism is responsible for the symmetry breaking and the boson masses, then there will also be a similar difference of 14 orders of magnitude in the masses of the relevant Higgs bosons. It is this circumstance which makes the hierarchy problem especially acute. If a Higgs mechanism were not involved, then the huge differences in masses and field strengths would be only a curiosity. But the mathematics of the Higgs mechanism – in particular its inclusion of spinless ("scalar") bosons – establishes relationships between the different sorts of Higgs fields and their magnitudes. In other words, there are equations relating these different quantities, and for the equations to work they must contain parameters which are exact to 14 or so decimal places, so that different terms cancel each other almost – but not quite – exactly, leaving related fields with such disparate strengths.

We can be a little more specific about this problem. Recall that it was the Higgs mechanism which gave mass to the gauge bosons in the electroweak theory. There must be at least one Higgs particle involved here, but there may be more than one. When we work with a grand unified theory, there will be many more Higgs particles that give mass to the X bosons of the GUT. These latter Higgs particles must have masses comparable to the enormous masses of the X bosons. The problem is that all the Higgs particles are involved even with the much lighter W/Z bosons. That is, all of the Higgs particles contribute something to the W/Z masses. But since most Higgs particles would be extremely heavy, the result could be the relatively minuscule W/Z masses only if there were exceedingly precise cancellations among different terms. One would expect a far more likely result to be that the W/Z masses are close to those of the X bosons.

So all of this leads to another way of describing the hierarchy problem – as a "fine tuning" problem. Somehow, the equations, which are suited to describing quantities of similar magnitude, manage also to describe far smaller quantities which arise from very precise cancellation of the larger numbers.

Here's an analogy: Suppose you have a recipe for making a soufflé, in which two ingredients must be mixed in exactly the proportion of 2:1 for the recipe to succeed. The proportion must be exact to fourteen decimal places, or else the recipe will fail completely and all you will get is a soupy mess. That is very closely parallel to what we have with this hierarchy problem.
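The fine-tuning can be made concrete with a trivial calculation, assuming representative scales of 10^16 GeV and 100 GeV:

```python
# Two GUT-scale contributions must cancel to leave only an
# electroweak-scale remainder; this sets the required precision.
gut_scale = 1.0e16        # GeV, roughly the grand-unification scale
ew_scale  = 1.0e2         # GeV, roughly the electroweak scale

required_tuning = ew_scale / gut_scale
print(f"terms must cancel to 1 part in {1 / required_tuning:.0e}")
```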

Note that this problem arises in the presence of two distinct features: (1) We are trying to formulate a grand unified theory which relates the electroweak and the strong force. (2) We want to use a Higgs mechanism to explain the breaking of the symmetry at two very different energy scales. It may be that these goals are simply incompatible. Unification may be impossible, or the whole idea of the Higgs mechanism could be a mistake.

Strictly speaking, the Higgs mechanism could be dispensed with. All that the electroweak theory (and similarly, an analogous grand unified theory) needs is for the symmetry between the forces to be broken in some way. The Higgs mechanism can do this, but anything that does the job would be enough. The actual details of how the mechanism works aren't important. (And for this reason, the standard model and GUTs which extend it say very little about details of the mechanism, such as the masses of the Higgs particles.)

But physicists are very reluctant to give up on either the unification of forces or the Higgs mechanism, because of their inherent elegance. So there is a problem, and an intensive search for some way to reconcile these two theoretical approaches. It turns out that supersymmetry provides just what is needed.


We brought up the idea of supersymmetry in connection with discussion of Higgs physics, because supersymmetry theory leads to the Higgs mechanism in a natural way. Not only that but, as it turns out, supersymmetry also solves some of the problems – such as the hierarchy problem – which the Higgs mechanism brings with it.

We go into much more detail on supersymmetry elsewhere. Just as a reminder, the essential idea is to postulate one more symmetry, even more sweeping than what is done in grand unified theories. This new symmetry relates bosons to fermions. Under this symmetry, there are transformations which actually exchange particles of these two very different types.

Supersymmetry is an attractive theory for several reasons. One of the primary ones is that the theory is actually able to make a natural (if still not quite consistent) quantum theory of gravity. That would be enough in itself, but another reason supersymmetry is interesting is that it is also able to solve several problems that arise in grand unified theories.

As noted, supersymmetry postulates the existence of transformations which relate fermions to bosons. In particular, to each particle of either type there must exist another particle – called the supersymmetric partner – whose spin differs by exactly 1/2. However, none of the known particles appears to be a superpartner of another known particle. Thus, all superpartners must be particles which are as yet entirely unknown.

What is it that renders supersymmetry so magically effective at solving certain problems? The answer is that all those superpartners make the equations of the theory very symmetrical. For any given particle and every corresponding term in an equation of the theory, there is a superpartner whose own term in the equation occurs with the opposite sign. The result is that, in a typical calculation, there is a large amount of exact cancellation of terms right away. This has a number of pleasant consequences.

First, it takes care of the hierarchy problem in grand unified theories. In computations of the mass of the electroweak gauge bosons (W and Z), everything having to do with contributions from Higgs fields related to the much heavier X bosons cancels out exactly. The problem simply goes away.

In a little more detail, as we noted, spinless particles (the Higgs bosons) cause trouble in this regard. This is because virtual particles in the vacuum interact with all spinless ("scalar") particles and hence affect their mass. So the mass of each scalar particle is related to the mass of all the rest. This makes it very likely that the Higgs boson(s) of the electroweak theory should have masses similar to the masses of the bosons of the grand unified theory – which are very large. Consequently, the W/Z bosons of the electroweak theory should be much more massive than they are. But with supersymmetry, each term affecting the mass of a Higgs boson comes from a particle but is cancelled out by another term coming from the particle's superpartner. The net result is that the mass of a Higgs particle is not affected by the masses of other Higgs particles.
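A toy illustration of this cancellation, with invented magnitudes (these are not real masses, just numbers chosen to show the pairing of terms with opposite signs):

```python
# Each heavy particle's contribution is paired with a superpartner term
# of equal magnitude and opposite sign; only the light piece survives.
contributions = [
    ("heavy Higgs boson",   +4.0e15),
    ("its superpartner",    -4.0e15),
    ("X boson",             +9.0e15),
    ("its superpartner",    -9.0e15),
    ("electroweak piece",   +1.0e2),   # the only term that is not cancelled
]

total = sum(value for _, value in contributions)
print(f"net contribution ≈ {total:.1f} GeV")
```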

The second pleasant thing is that a number of other calculations also become much easier, for similar reasons. As a result, several other problems can be resolved. In particular:

  1. The asymptotic values of the coupling constants for the electromagnetic, weak, and strong forces can be shown to become equal at exactly (not just approximately) the same energy scale.
  2. The electroweak mixing angle can be predicted more or less independently of details of the exact form of the unified theory. And the answer agrees very well with experiments. (This is a consequence of the preceding point.)
  3. The masses of the welter of quarks, leptons, and gauge bosons already known can be computed in terms of the masses of superpartners (if such can be measured) and just four arbitrary parameters.
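The first of these points can be illustrated with a rough one-loop sketch. The beta coefficients below are the standard supersymmetric (MSSM) values, and the input inverse-coupling values at the Z mass are approximate:

```python
import math

# Illustrative one-loop running of the inverse coupling constants.
M_Z = 91.2                                              # GeV
inv_alpha_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
b_mssm = {"U(1)": 33 / 5, "SU(2)": 1.0, "SU(3)": -3.0}  # MSSM beta coefficients

def inv_alpha(group, mu):
    """One-loop running: 1/alpha(mu) = 1/alpha(M_Z) - (b/2pi) ln(mu/M_Z)."""
    return inv_alpha_MZ[group] - b_mssm[group] / (2 * math.pi) * math.log(mu / M_Z)

mu = 2.0e16   # GeV, near the supersymmetric unification scale
for g in inv_alpha_MZ:
    print(f"1/alpha_{g} at {mu:.0e} GeV ≈ {inv_alpha(g, mu):.1f}")
```

All three inverse couplings come out close to 24, which is the near-meeting the text describes.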

Supersymmetry seems almost too good to be true. And perhaps it is. But at least it is something we have a reasonable prospect of testing.

More detail on supersymmetry

Composite models

With all these complications we've seen in trying to extend the standard model, one might well wonder if we're not just on the wrong track. Surely there must be an easier way. Perhaps all the leptons and quarks of the standard model aren't really elementary particles after all. Perhaps they are really composite, just as mesons and baryons were discovered to be some time ago. Could everything be explained just by going to a new, much smaller level of matter?

In the past, it has certainly been true that experimental findings have forced physicists to recognize smaller levels of structure. Atoms, for example, were found not to be truly elementary. They consist of a small, heavy nucleus enveloped in a much larger cloud of much lighter electrons. The nucleus, next, turned out to be made of smaller particles – protons and neutrons. And then even the protons and neutrons were found to be composed of quarks. Couldn't this sort of nesting of smaller and smaller structures go on indefinitely?

Maybe not. It's important to note that in the past there were always puzzling experimental findings which were inconsistent with existing theory and required a smaller level of structure to exist to make any sense. With the standard model, however, there are no obvious inconsistencies with experiments, even though the model does fail to explain many things that seem to require a more complete theory.

In any case, theorists have explored ways to extend the standard model by supposing some sort of composite structure for quarks and leptons. Abdus Salam, who was one of the main contributors to the electroweak theory, played with the idea of "preons" as possible constituents of quarks and leptons.

A somewhat more serious attempt was made by the physicist Haim Harari to build a composite model using just two types of particles he called "rishons". He supposed there might be two spin-1/2 particles, one with 1/3 unit of electric charge and the other electrically neutral. (There would be corresponding antiparticles, with the opposite electric charge.) When these two particles are combined in groups of three, there are four possibilities, having electrical charges of 1, 2/3, 1/3, and 0. These could correspond to a positron, an up quark, an anti-down quark, and an electron neutrino. There is a way to include color charge as well. Since the quarks which make up a proton can swap different types of these rishons, it is even possible to have proton decay. Unfortunately, measured limits on the proton half-life are very hard to reconcile with the rishon theory.
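The rishon charge counting can be verified with a few lines of enumeration (T and V here are just labels for the charged and neutral rishon):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

T = Fraction(1, 3)   # charged rishon
V = Fraction(0)      # neutral rishon

# All distinct total charges from combining three rishons.
charges = sorted(
    {sum(triple) for triple in combinations_with_replacement([T, V], 3)},
    reverse=True,
)
# charges 1, 2/3, 1/3, 0: positron, up quark, anti-down quark, neutrino
print([str(c) for c in charges])
```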

Indeed, there are several facts, both experimental and theoretical, which present almost insurmountable problems for any type of composite model yet considered.

The first problem is that there is no experimental evidence of any composite structure inside leptons and quarks, in spite of the fact that the energy scales and distance scales which have been explored are far in excess of what might have reasonably been expected to reveal smaller structure – by a factor of around 10,000. Specifically, any smaller structure must be at a scale of less than 10^-22 m, corresponding to an energy of about 1 million GeV. Although current accelerators are far from being able to probe such energies directly, it turns out that calculations using QED can be made to very great accuracy – accuracy capable of distinguishing how an electron would behave if its charge were distributed over the indicated distance rather than concentrated in a true point particle.
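The quoted correspondence between distance and energy scales can be checked with the standard conversion constant ħc ≈ 197 MeV·fm:

```python
# hbar * c ≈ 197 MeV·fm, i.e. about 1.97e-16 GeV·m.
HBAR_C = 1.97e-16      # GeV * m

distance = 1.0e-22     # m, the compositeness limit quoted in the text
energy = HBAR_C / distance
print(f"probing {distance} m requires ≈ {energy:.1e} GeV")  # ≈ 2e6 GeV
```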

A second problem arises if an electron has a very small but non-zero radius. In that case, if the electron had smaller constituent particles, they would be confined to an extremely small space, and therefore by the uncertainty principle have a very large kinetic energy. This energy would be far larger (by a factor of perhaps 10^7) than the equivalent rest mass of the electron. It could be cancelled out by the potential energy due to forces between the constituent particles, but this would require extremely fine tuning. Without such tuning, the electron should have a rest mass much closer to that of its constituents – which is simply impossible.

A third problem is the lack of any contradiction between the standard model and experiments. It is consistent with all observations made to date (even if it does not always predict them). It can be consistently extrapolated to much smaller distance scales. And it does this without even having a way to incorporate the possibility of particles making up leptons and quarks.

Fourth, and finally, when the forces covered by the standard model are extrapolated to the unification scale, they are found to be asymptotically equal as a natural consequence. This kind of prediction does not come out of composite models. Therefore, if unification is a reality (as just about everyone expects), it would have to be a pure coincidence as far as any composite model is concerned.

In short, no one is taking composite models seriously these days.

Testability of grand unified theories

The most important characteristic of a scientific theory – after certain basics such as logical and mathematical consistency – is testability. Grand unified theories include several speculative ideas, such as the Higgs mechanism, symmetry between quarks and leptons, and unification of three of the fundamental forces (excepting only gravity). This can make GUTs seem to some a bit less like physics than metaphysics. So it's especially important in a situation like this to identify specific predictions the theory makes that can be verified against experiments. Fortunately, this can be done.

For each prediction, there are three possible outcomes. First, the prediction can be verified. This doesn't confirm the whole theory, but it does provide evidence in its favor. Second, the prediction can be falsified if experimental observations contradict it – that is, there are experimental results that are not consistent with the prediction. In this case, there is something definitely wrong with the theory, and it will need to be modified or (if modification isn't somehow possible) abandoned. Third, all efforts either to verify or falsify a prediction can fail, in which case little can be said either way.

Keep in mind that grand unified theories come in many variant forms. GUTs really represent a whole class of models, using a variety of symmetry groups and hypothetical forces and particles. Some predictions are common to all or most models, while others are different from model to model, and experimental results can gradually provide increasing evidence for or against specific models, as well as the theory as a whole.

Here, then, are some of the testable predictions relevant to grand unified theories.

Exotic particles and dark matter

The confirmed discovery of one or more particles predicted by one of the grand unified models would be the most dramatic kind of evidence. Unfortunately, there is a vast energy range between what our experiments can probe (roughly 1,000 GeV) and the unification scale (10^16 GeV). Most exotic particles could be far too heavy to create and detect. On the other hand, the firm implication that much more than half of the mass of the universe consists of dark matter composed of exotic particles is strong evidence of something. If that something is particles at all like those we already know of, they ought to fit into a similar kind of theory.

Neutrino mass

Evidence is accumulating that neutrinos do have mass. As noted above, this rules out the SU(5) model, but is consistent with most other forms of unified theory.

Proton decay

Most forms of unified theory predict proton decay. The only question is, what's the expected lifetime of a proton? So far, the best measurements indicate the half-life can't be less than 1.9×10^33 years. This is more evidence against SU(5), which predicts a half-life of about 10^31 years. As yet, there is no definite evidence for any proton decay. However, many models allow a half-life longer than the current limit. Raising that limit is becoming increasingly difficult because of the number of protons which need to be observed. It's possible that this observational test will not be conclusive anytime soon.
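To see why raising the limit gets hard, here is a back-of-the-envelope estimate (assuming, for illustration, that the experiment wants to expect roughly one decay per year):

```python
# With a lifetime around 1.9e33 years, expecting ~1 decay per year
# requires on the order of 1.9e33 protons. Numbers are approximate.
NUCLEON_MASS_KG = 1.67e-27
lifetime_years = 1.9e33

protons_needed = lifetime_years
mass_kg = protons_needed * NUCLEON_MASS_KG
print(f"≈ {mass_kg:.1e} kg of protons, i.e. a few kilotons of material")
```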

Electroweak mixing angle

This has been a great success for unified theories. The quantity can be measured experimentally and also calculated in each model. In most models (except SU(5)) the disagreement between experiment and theory is less than 1%. This is especially noteworthy, since early experimental measurements were quite far from what is now accepted.

Magnetic monopoles

Almost all grand unified models predict the existence of magnetic monopoles. But despite extensive searches, no monopoles have ever been detected. Monopoles, if they do exist, must be so massive there is no hope of creating them in accelerators. Observation of monopoles would have been good evidence in favor of GUTs. However, the failure to find them is not especially troublesome, as there are good reasons (in the theory of cosmic inflation) why monopoles should be very hard to find.

Cosmic inflation

Most grand unified theories lead to scenarios in which cosmic inflation occurs during the first instant after the big bang. And evidence for inflation itself continues to grow, so this is indirect evidence for unified theories. There is, however, considerable uncertainty as to the exact form of unified theory that is consistent with plausible forms of inflation. As we learn more about details of the very early universe, we may acquire good information that can constrain what sorts of grand unified theory models are viable.

Successes of grand unified theories

Before we go on to summarize the problems and open questions that arise from unified theories, let's pause for a moment to make a short list of some of the modest successes such theories have achieved over the standard model. It's only fair to note that what we have is not some specific model which has made correct and unexpected predictions. It is, instead, a matter of there being a large class of models that can be constructed, which are all of the same general type, and that many or most of these models are capable of explaining already observed phenomena which the standard model can't account for.

Theoretical successes

These are things which are unexplained in the standard model and for which an extended model has been constructed explicitly to explain. So it isn't quite fair to say that the unified models "predict" the effects in question. They have been designed to account for these things. But it is still noteworthy that theories of a sort which is reasonably well understood mathematically could be built to do this.

There are two key phenomena in this category:

  1. Explanation of the quantization of electric charge and the commensurability of its values between leptons and quarks.
  2. Unification of three of the four known fundamental forces in a consistent mathematical theory that is analogous to the very successful theory of quantum electrodynamics.

Experimental results

The one significant success in this area is the calculation of the electroweak mixing angle. This is related to, and in essence follows from, the exact equality at the unification energy scale of the electromagnetic, weak, and strong forces. If these were not, in fact, all merely aspects of one unified force, even their approximate equality of strength at some energy scale would be an astonishing coincidence.

Ability to account for cosmological facts

Many of the predictions of grand unified theories concern energy scales vastly in excess of anything achievable in the laboratory, or even anywhere else in the universe at the present time. The only time such energy scales could have existed is in the first fraction of a second after the big bang. Yet conditions at that time would leave unmistakable traces in the universe at the present time – and a number of these traces are being increasingly well confirmed by observation every year.

The list of cosmological facts, and very probable facts, which can be explained by grand unified theories includes:

  1. The fact that the mass of the universe is absolutely dominated by non-baryonic dark matter.
  2. The fact that the universe has any baryonic matter at all, and the apparent fact that this baryonic matter is asymmetrically composed of essentially no antimatter.
  3. The increasingly likely existence of a period of exponential cosmic inflation, which in turn is able to explain other cosmological puzzles.

Open questions

OK, so much for the good news. Now what about the bad news – what problems do grand unified theories leave unresolved?

First, let's just make one little point. There really is no such thing as "bad news" in science. The universe is the way it is, and there's nothing to be gained by fretting about things when they don't turn out according to the best guesses we can make at any given time. We just accept that our best guesses were wrong, and use the new information to make better guesses.

That's why science is always avidly seeking new information – either the information will confirm our guesses (big ego boost there), or else it can be used as a new hint to help solve the problem.

As far as grand unified theories are concerned, it's clear enough they have come up short. Not much progress has been made with them in the last 20 years. The following list readily confirms that.

But that doesn't mean they have been a complete failure. What nature is telling us, obviously, as far as GUTs are concerned, is that we are missing something important. Yang-Mills gauge symmetry theories are very elegant, and they are wonderful when they work well – as is the case with quantum electrodynamics.

However, there must be something else, some additional principles (maybe one or two, maybe many) we haven't grasped yet, which must be added to get a completely satisfactory theory. Supersymmetry is a good candidate for such an additional principle. But we shouldn't let ourselves become too dependent on it, as very soon it could be ruled out (at least, in its current form).

Or maybe supersymmetry will be confirmed, but it isn't the final answer either. It leaves unanswered questions of its own. Perhaps superstrings will be an even better answer.

Whatever new principles appear, ultimately, to help construct a better theory, many of the good features of grand unified theories will remain. It should certainly be expected, for example, that three of the fundamental forces will be unified at what we've been calling the unification energy scale. Gravity, too, will follow at a somewhat higher energy. At that scale, there will be just one type of particle and one type of force. We can't guess exactly what principles will underlie the ultimate theory. But the standard model has to appear as a low-energy limiting case when all is said and done, simply because it gives the experimental results we can already confirm.

Whatever elementary particles and fundamental forces turn out, ultimately, to be – some strange kinks in spacetime, perhaps – there will, quite likely, be some symmetry group analogous to SU(5) and SO(10) which organizes, in the same way, those elementary particles and which treats the fundamental forces as effects of gauge symmetries.

Which grand unified model, if any, is the right model?

Well, the answer probably is "none". It's pretty certain that some additional principles are needed before we have a good theory of elementary particles and forces. What will almost certainly turn out to be the case is that some GUT model is a decent low-energy approximation to the real theory. It certainly wouldn't hurt to identify that model (or models), but the best approach is probably going to be to find the more general principles first, and then work out the low-energy approximations. That's probably going to be more successful than adding small corrections and enhancements to a theory which pretty clearly is missing some important general concepts.

Do protons ever decay? When?

As noted above, it would actually be rather surprising if protons don't decay. Three of the four fundamental forces – and probably eventually all of them – will eventually be unified somehow, and that alone is enough to make protons unstable. Go to a high enough energy, and everything, protons included, ought to melt. It seems like this conclusion can be avoided only if there is some principle we're totally unaware of yet which makes proton decay impossible. Because, in nature, anything which isn't forbidden will eventually happen. Of course, that leaves us nowhere at all as far as guessing how long it will actually take for protons to decay.

What resolves the hierarchy problem – supersymmetry, or some alternative to the Higgs mechanism?

This is admittedly a rather technical problem, but a tough one. There's a really important issue at stake here, one that will be crucial as far as the form of the real theory of matter and energy is concerned. The basic problem is, there needs to be an explanation for the breaking of electroweak symmetry (and of the additional symmetry with the strong force, if unification is a fact). The Higgs mechanism can do this. But it has two problems. First, Higgs bosons have not yet been observed. If they aren't soon, the Higgs mechanism is probably a bad idea. Since supersymmetry implies the Higgs mechanism, it must fail too. In that scenario, there isn't necessarily a hierarchy problem, but we still need to solve the symmetry breaking problem. The second problem with the Higgs mechanism is the hierarchy problem. Supersymmetry is one solution to that, but there could be others.

So here are the alternatives:

  1. There is no Higgs mechanism, no hierarchy problem, and no supersymmetry. But an explanation for symmetry breaking, the origins of mass, and the renormalizability of unified theories is then needed.
  2. There is a Higgs mechanism, but no supersymmetry. Then there is a hierarchy problem which needs some other solution, as well as a need to explain where Higgs fields come from.
  3. There is a Higgs mechanism and supersymmetry. Hence no hierarchy problem. But necessary experimental evidence for this case is lacking.

What controls the cosmological constant?

The Higgs mechanism is involved with this too. If there is a Higgs mechanism, then it should result in a cosmological constant many orders of magnitude in excess of what observation allows. Consequently there needs to be some unknown principle that causes all contributions to the cosmological constant to almost (but not quite) cancel each other out entirely. Such cancellation couldn't be just an accident. Grand unified theories offer no help, as far as can be seen. Even supersymmetry cannot help here (as it does in other cases when "magical" cancellations are needed). But if supersymmetry is real, it too is a broken symmetry, and whatever effect accounts for that symmetry breaking may be involved in the cosmological constant problem.

Why three generations of quarks and leptons?

Same question as always. Grand unification has nothing to contribute here, as far as is known. It certainly looks like the generations result from some sort of symmetry, but the correct form remains a mystery.

Can arbitrary parameters be mostly or entirely eliminated?

There are about 20 arbitrary parameters in the form of particle masses, force coupling constants, and the like, even in grand unified theories. All these parameters can be measured experimentally but not derived from the theory, so they have to be added "by hand". This makes the theory very unaesthetic, though not necessarily wrong. A more aesthetic theory would have few – or no – such arbitrary parameters. But what principle says nature must be aesthetic?

What causes CP symmetry breaking?

This seems to be another technical problem. We know CP symmetry breaking occurs. By the CPT theorem (which says that the combination of the three discrete symmetries cannot be broken), it follows that time reversal symmetry can also be broken. (And there is direct experimental evidence of this.) In other words, physical phenomena aren't always exactly reversible in time. There really is (at least in some circumstances) a definite physical direction of the arrow of time. Physicists expect that there is always a reason when important symmetries like this are broken. Grand unified theories offer no clue what this reason might be.

What about gravity?

And last, but not least, the perennial question. Grand unified theories alone offer no help. In fact, it rather looks as though the energy scale at which the other three forces attain the same strength is a couple of orders of magnitude lower than that where gravity comes in. It seems rather odd that three of the fundamental forces would reach the same strength at exactly the same point, while the fourth remains a holdout. There really is something strange about gravity.

Unification of the electroweak and strong forces

A different problem with the standard model is that the electroweak and strong forces are not "unified" in the theory - there is not even a broken (much less an exact) symmetry that relates them. It is possible to think of adding a new symmetry to the theory to do this. The result is often called a "Grand Unified Theory" (GUT). This new symmetry would be broken, of course, but at least that is no worse than the situation with the electroweak symmetry.

What such an extension would offer is the possibility of treating all of the known forces except gravity with a single set of field equations. The attractiveness of such a theory is mainly mathematical and aesthetic, since it is a way to use the same fundamental principles and equations to explain the known forces and particles (always excepting gravity).

Adding a new symmetry is not entirely arbitrary, since it interacts with already established symmetries. The simplest symmetry in the standard model is that of quantum electrodynamics. Its symmetry group is denoted as U(1) (the 1-dimensional unitary group). The symmetry group of the weak force is denoted SU(2) (the 2-dimensional special unitary group). The symmetry group of the electroweak theory is the direct product U(1)xSU(2).

The symmetry group of the strong force is SU(3), so a unified theory should at the very least contain the direct product U(1)xSU(2)xSU(3). For technical reasons, the group must actually be larger, yet still contain this direct product as a subgroup. It has been shown that SU(5) is the smallest symmetry group that will serve, though there are many other (larger) possibilities. Unfortunately, there are a variety of problems in constructing a grand unified theory based on SU(5) or any of the other candidates - one of which is simply discovering experimental tests to distinguish which would be the appropriate symmetry group.
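The generator counting behind this containment can be checked directly. The only input is the standard dimension formula dim SU(n) = n² - 1 (and dim U(1) = 1); the arithmetic below is a sketch, not anything taken from a particular model-building paper:

```python
# Count the generators (gauge bosons) of the groups involved.
# dim U(1) = 1; dim SU(n) = n^2 - 1.

def dim_su(n):
    """Number of generators of SU(n)."""
    return n * n - 1

dim_u1 = 1
dim_standard_model = dim_u1 + dim_su(2) + dim_su(3)  # U(1) x SU(2) x SU(3)
dim_su5 = dim_su(5)

print("U(1) x SU(2) x SU(3):", dim_standard_model)   # 1 + 3 + 8 = 12
print("SU(5):", dim_su5)                             # 24
print("extra generators in SU(5):", dim_su5 - dim_standard_model)  # 12
```

The 12 generators of U(1)xSU(2)xSU(3) correspond to the known gauge bosons (photon, W and Z, and eight gluons); the 12 extra generators of SU(5) correspond to new, very heavy gauge bosons that would mediate transitions between quarks and leptons.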

In spite of the difficulties, it seems plausible that a unified theory like this should exist. The main reason is that there are a number of similarities between quarks and leptons. Namely, both quarks and leptons are spin-1/2 fermions, both come in three generations, and both appear - as far as experiment can tell - to be pointlike, with no internal structure.

Thus it's not inconceivable there could be a unified symmetry which mixes leptons and quarks. But many obstacles stand in the way of such a theory. For one thing, not only are all the forces involved of different strength, but even more fundamental quantities which occur in the equations, known as "coupling constants", are very different. The coupling constants represent, in some sense, the intrinsic strengths of the various forces.

A coupling constant describes the probabilities for emission of various particles in the course of an interaction: the larger the constant, the larger the associated probabilities. It turns out that these "constants" actually vary slowly (logarithmically) with the energy of the interaction. It is at least plausible that the coupling constants of all forces of the standard model - and maybe gravity as well - become equal at a high enough energy (comparable to the energy of the big bang).
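The way the three couplings draw together can be sketched numerically. Assuming the standard one-loop evolution 1/alpha(E) = 1/alpha(M_Z) - (b / 2pi) ln(E / M_Z), with commonly quoted rough input values at the Z-boson mass (the numbers below are conventional textbook figures, not taken from this article), a few lines of Python show the trend:

```python
import math

# One-loop running of the inverse coupling strengths:
#   1/alpha_i(E) = 1/alpha_i(M_Z) - (b_i / (2*pi)) * ln(E / M_Z)
# Rough inputs at the Z-boson mass, M_Z ~ 91 GeV (with the U(1) coupling
# in the normalization conventional for grand unification).

M_Z = 91.2                               # GeV
inv_alpha_at_MZ = [59.0, 29.6, 8.5]      # 1/alpha for U(1), SU(2), SU(3)
b = [41.0 / 10.0, -19.0 / 6.0, -7.0]     # standard-model beta coefficients

def inv_alpha(E_GeV):
    """Inverse couplings of the three forces at energy E (in GeV)."""
    t = math.log(E_GeV / M_Z)
    return [a0 - (bi / (2 * math.pi)) * t
            for a0, bi in zip(inv_alpha_at_MZ, b)]

low = inv_alpha(M_Z)        # inverse couplings spread from ~8.5 to ~59
high = inv_alpha(1.0e15)    # near the conjectured unification scale

print("spread at M_Z:      ", max(low) - min(low))
print("spread at 10^15 GeV:", max(high) - min(high))
```

Running the sketch shows the three inverse couplings, wildly different at laboratory energies, drawing to within roughly 10-15% of one another near 10^15 GeV. In the minimal standard model they do not meet exactly - a mismatch that is part of the motivation for supersymmetric extensions mentioned in the references below.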

It seems that today's separate theories of the electroweak and strong forces are just (very) low-energy approximations of a more theoretically attractive unified theory. So it may be appropriate, for theoretical purposes, to consider only very high energies in formulating a unified theory. Unfortunately, the energies which must be considered are very high - on the order of the so-called Planck energy, about 10^27 electron volts.

The unification energy scale can be computed approximately. At the Planck energy, the intrinsic strengths of gravitation and the strong force become equal. At an energy "only" a factor of 100 less (i.e. 10^25 eV) the intrinsic strengths of the strong and electroweak forces become equal. However, the electromagnetic and weak forces become equal around a mere ten trillion (10^13) eV - which is a factor of 10^14 less than the Planck energy. This is a rather large discrepancy, indicating a vast difference of some sort between the electroweak and strong forces. It is a rather significant breaking of symmetry, and any unified theory needs to be able to account for it - just as the symmetry breaking within the electroweak theory needs to be explained. The problem of explaining these large differences is sometimes called the "hierarchy problem".

Now, one important virtue of the theory of the strong force (i.e. "quantum chromodynamics") is that - like the electroweak theory - it is "renormalizable". That is, there are ways to redefine the masses which enter into the equations so that quantitative computations do not diverge. This is no small virtue, since it has proven impossible to develop a renormalizable quantum field theory of gravity.

However, this renormalization is possible only with the simplest forms of the field equations of quantum chromodynamics. These equations actually have more symmetry than is postulated to exist. That is, there are no particular observational reasons to expect as much symmetry as the equations allow for. A basic mathematical principle (due to Emmy Noether) is that any continuous symmetry in the equations implies a conservation law, and vice versa. (For instance, the invariance of Maxwell's equations under translation in time implies conservation of energy, regardless of the form the energy takes in an electromagnetic wave.)

It has often been assumed that a quantity called "baryon number" is conserved. A baryon is a particle composed of three quarks, such as the proton or the neutron. Since the proton is the lightest baryon, it could not decay as long as baryon number really is conserved. But there is no particular reason to believe that baryon number really is conserved. (That is, we have no explanation for such a law.) And in a theory that unifies the strong and electroweak forces, we would actually expect that baryon number isn't conserved, so that a proton could decay into lighter particles, including leptons. (Since according to this higher symmetry, all such particles are "really" of the same kind.) If there is no such baryon conservation law, then the field equations should not have as much symmetry as they would in their simplest form.
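The bookkeeping here can be made concrete. In the often-discussed hypothetical decay of a proton into a positron and a neutral pion, baryon number B and lepton number L are each violated, but their difference B - L is not - a pattern typical of what simple grand unified models permit. A minimal sketch of the accounting (the quantum-number assignments are the standard ones, not taken from this article):

```python
# Quantum-number bookkeeping for the hypothetical decay  p -> e+ + pi0.
# Each particle carries a baryon number B and a lepton number L.

QUANTUM_NUMBERS = {
    "proton":   {"B": 1, "L": 0},
    "positron": {"B": 0, "L": -1},  # antiparticle of the electron
    "pi0":      {"B": 0, "L": 0},
}

def totals(particles):
    """Total baryon and lepton number of a collection of particles."""
    B = sum(QUANTUM_NUMBERS[p]["B"] for p in particles)
    L = sum(QUANTUM_NUMBERS[p]["L"] for p in particles)
    return B, L

B_in, L_in = totals(["proton"])
B_out, L_out = totals(["positron", "pi0"])

print("B conserved?   ", B_in == B_out)                   # False: 1 -> 0
print("L conserved?   ", L_in == L_out)                   # False: 0 -> -1
print("B - L conserved?", B_in - L_in == B_out - L_out)   # True: 1 -> 1
```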

Indeed, there are theoretical advantages to adding terms to the field equations of a grand unified theory, even though this suggests baryon non-conservation (and raises renormalization issues as well). Such terms are proportional to the energy of the quanta involved, but are suppressed by coefficients involving inverse powers of the Planck energy. Hence they do not affect the theory at low energy (or cause renormalization problems at low energy), yet allow for possibilities like proton decay.

Proton decay, if it occurs at all, is therefore very unlikely at low energies. However, if it is not forbidden, then it could occur in a sufficiently large collection of protons. Experiments have been conducted for many years to detect proton decay in very large volumes of water, and nothing has so far been observed. This implies that proton decay, if it occurs at all, must have a probability of less than about 10^-31 per proton per year.
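A limit like this translates directly into detector requirements, which is why the searches use such enormous volumes of water. A back-of-the-envelope sketch (Avogadro's number and the molar mass of water are the only physical inputs; the 10-kiloton detector mass is an illustrative figure of the same order as the experiments mentioned here, not a specification of any one of them):

```python
# How many protons are in a large water detector, and how many decays per
# year would a given per-proton decay probability imply?

AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_WATER = 18.0      # grams per mole
PROTONS_PER_MOLECULE = 10    # 2 in the hydrogens + 8 in the oxygen

def protons_in_water(mass_kilotons):
    """Total protons in a mass of water given in kilotons (10^9 g each)."""
    grams = mass_kilotons * 1.0e9
    molecules = grams / MOLAR_MASS_WATER * AVOGADRO
    return molecules * PROTONS_PER_MOLECULE

n_protons = protons_in_water(10.0)   # roughly 3.3e33 protons in 10 kilotons
rate_limit = 1.0e-31                 # decays per proton per year

print("protons in detector:", n_protons)
print("decays/year at the limiting rate:", n_protons * rate_limit)
```

Even at a rate as tiny as 10^-31 per proton per year, a 10-kiloton detector would be expected to register a few hundred decays annually; it is this leverage that lets a few years of null observation establish lifetime limits of 10^31 years and beyond.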

However, there is another sort of consequence of additional terms in the field equations of a grand unified theory, namely the non-conservation of lepton number. This would mean that neutrinos are not massless and that it is possible to have processes called "neutrino oscillation" in which electron, muon, and tau neutrinos convert among themselves. Interestingly enough, ongoing experiments that observe solar neutrinos now indicate that neutrino oscillation occurs - which requires that neutrinos do have a nonzero mass.

Recommended references: Web sites

Site indexes

Sites with general resources

IMB Proton Decay Experiment
IMB stands for "Irvine-Michigan-Brookhaven" – the institutions which sponsored an experiment that ran from 1979 to 1989 searching for evidence of proton decay. The site provides a historical overview.

Surveys, overviews, tutorials

Grand unification theory
Article from Wikipedia. See also Theory of everything.
Beyond the Standard Model
Elementary tutorial from The Particle Adventure.
Nobel laureate Burton Richter to speak about future of particle physics
Summary of a talk given by Richter in February 2007 on research to be performed at the Large Hadron Collider and possible future locations, and about the kinds of questions that will be studied.
Reality check at the LHC
January 2011 article from Physics World, by Matthew Chalmers. "Particle physicists start the new year with a mild dose of empirical medicine, as the LHC closes in on models of extra dimensions and supersymmetry."
Measuring (almost) zero
December 2009 article from Physics World. "The electron's electric dipole moment is unimaginably tiny - and may not even exist. But as Chad Orzel explains, that has not stopped experimentalists from trying to measure it, since a non-zero result could imply the existence of new physics."
Ending the great drought
October 2008 article from Physics World, by Michael Riordan. For more than 20 years, particle theory has left experiment far behind in its wake. The Large Hadron Collider may help bring particle physics back to its experimental roots.
A taste of LHC physics
Summary of May 2008 article from Physics World, by Tim Gershon. "Recent measurements of the bizarre properties of B-mesons hint at the existence of new fundamental particles. [The author] describes how the LHCb detector at CERN's Large Hadron Collider could soon establish beyond doubt whether the effect is real."
Let the cooling begin at the LHC
November 2007 article from Physics World. "Tens of thousands of tonnes of equipment must be cooled to near absolute zero before the Large Hadron Collider can detect its first exotic particle. The head of CERN's cryogenics group, Laurent Tavian, tells Hamish Johnston how this will be done."
Life at the high-energy frontier
October 2006 article from Physics World, by Matthew Chalmers. "The Large Hadron Collider at CERN and its cathedral-sized detectors will change the course of particle physics forever. The author visits the lab to capture the mood as the most ambitious scientific project ever undertaken prepares for switch-on."
Expedition to inner space
October 2006 article from Physics World, by Andy Parker. "Particle physicists hope that the Large Hadron Collider will discover the Higgs boson and the first evidence of physics "beyond the Standard Model". The author explains how these and other physics goals will be achieved using the giant general-purpose detectors ATLAS and CMS."
Particle physics: the next generation
December 1999 article from Physics World, by John Ellis. "Although the basic building blocks of matter and their interactions have been placed on a firm theoretical footing, many fundamental questions remain unanswered and await the experiments of the future."
Discovery Prospects at the Large Hadron Collider
April 2006 news release about research to be conducted using the Atlas experiment at the LHC. Subjects of investigation include Higgs particles, supersymmetry, possible more fundamental particles composing quarks and leptons, dark matter and dark energy, and CP symmetry violations.
Unification of the Fundamental Interactions
Brief article by Chris Nieter.
Grand Unified Theories
Lecture notes by Steve Lloyd, from a course on Elementary Particle Physics.
Proton Decay
A brief explanation from the Super-Kamiokande site at Boston University.

Recommended references: Magazine/journal articles

Signatures of new physics
Tommaso Dorigo
Physics World, March 2011, pp. 26-30
The data being collected this year by the ATLAS and CMS experiments at the Large Hadron Collider might spark a revolution in physics, but just how and when will the first solid indication appear of new physics beyond the Standard Model?
Large Hadron Collider
Ron Cowen
Science News, July 19, 2008
The Coming Revolutions in Particle Physics
Chris Quigg
Scientific American, February 2008
Large Hadron Collider: The Discovery Machine
Graham P. Collins
Scientific American, February 2008
Building the Next Generation Collider
Barry Barish; Nicholas Walker; Hitoshi Yamamoto
Scientific American, February 2008
Low-Energy Ways to Observe High-Energy Phenomena
David B. Cline
Scientific American, September 1994, pp. 40-47
As accelerators which could test theories that go beyond the standard model recede into the future, it becomes necessary to use low-energy phenomena to search for new physics, such as supersymmetry. An example of this is experiments looking for flavor-changing neutral currents.
Unification of Couplings
Savas Dimopoulos; Stuart A. Raby; Frank Wilczek
Physics Today, October 1991, pp. 25-33
The minimal SU(5) model for the unification of fundamental forces has had reasonable success at predicting how the force coupling constants converge at high energies. But even better results, as well as other theoretical benefits, come about when supersymmetry is added to the model.
The Quest for the Elementary Particles of Matter
O. W. Greenberg
American Scientist, July-August 1988, pp. 361-363
Grand unified theories such as the SU(5) model are not the only possible way to extend the standard model. Composite models, such as the preon theory of Pati and Salam, offer an alternative.
Flavor SU(3) Symmetries in Particle Physics
Howard Georgi
Physics Today, April 1988, pp. 29-37
From one point of view, the most important questions in particle physics are: (1) What breaks the SU(2)xU(1) electroweak symmetry? (2) What process tells quarks and leptons that SU(2)xU(1) is broken? Existing proposed answers such as the standard model Higgs mechanism, technicolor, and supersymmetry all have their shortcomings. SU(3) flavor symmetry may supply clues of an alternative.
Elementary Particles and Forces
Chris Quigg
Scientific American, April 1985, pp. 84-95
Disparate theories have been developed which successfully describe elementary particles and fundamental forces. Unification of these diverse theories is a possibility now on the horizon.
The Structure of Quarks and Leptons
Haim Harari
Scientific American, April 1983, pp. 56-68
One way to extend the standard model in order to remedy some of its shortcomings is to postulate that quarks and leptons, currently considered elementary, are actually composite. Several different composite models, including "preons" and "rishons" have been proposed.
A Unified Theory of Elementary Particles and Forces
Howard Georgi
Scientific American, April 1981, pp. 48-63
At a range of 10^-29 cm all the different types of elementary particles and fundamental forces may be essentially the same. If the unified theory which predicts this is correct, matter itself may be unstable.

Recommended references: Books


Copyright © 2002 by Charles Daney, All Rights Reserved