Prerequisites: The standard model – Higgs physics
See also: Supersymmetry – CP symmetry violation – Matter/antimatter asymmetry – Neutrino physics – The big bang – Cosmic inflation – Magnetic monopoles
Equally important was the fact that these theories all had the same mathematical form – Yang-Mills gauge theories – and that they were thoroughly based on the mathematical concept of symmetry.
Nevertheless, physicists realized that their work was far from complete, and that the standard model left a great many questions unanswered. We have described these questions in some detail elsewhere (such as the pages listed at the top).
At the same time, a number of both the key successes and the chief shortcomings of the standard model were to be found in the way that fundamental forces were unified. Here unification means, specifically, that two (or more) forces previously considered distinct can actually be described by the same equations, and further, that these equations are invariant under symmetry operations that exchange distinct fundamental particles. That is, as far as the equations are concerned, an electron and a neutrino (for instance) behave substantially the same.
One of the primary entries in the success column for the standard model is the unified theory of the electroweak force. Yet this same theory illustrates some of the shortcomings. The symmetry between the forces is broken because the electromagnetic force and the weak force don't have the same strength and because otherwise similar particles (such as electrons and neutrinos) have quite different masses. Further, the unification itself isn't as seamless as it could be. One of the key parameters of the theory – the electroweak mixing angle, which describes how the forces combine – is not specified by the theory, but instead can be determined only by experiment.
So. The standard model showed that two seemingly distinct forces could be successfully unified in a single, elegant mathematical theory. But at the same time, physicists still had a lot of explaining to do, in terms of how to clean up the unification of the electromagnetic and weak forces, and then to go further and add the strong force into the mix.
The weak force affects all quarks and leptons, except for neutrinos with right-handed helicity (which might not even exist). And, although we lack a satisfactory gauge theory of gravity, it still affects all known particles – even massless ones, since mass and energy are equivalent.
Nevertheless, there are gaps – asymmetries – in the way different particles couple to some forces. The electromagnetic force doesn't affect particles without electric charge (neutrinos and some bosons). Likewise, leptons entirely lack color charge, so they are not affected by the strong force.
The standard model has no explanation for why all known particles (bosons as well as fermions) carry commensurable amounts of electric charge, but it surely isn't just a coincidence.
Moreover, within each generation, quarks and leptons occur in the same pattern: they are grouped into doublets whose particles are related by the SU(2) electroweak symmetry. In other words, they share a characteristic called "isospin".
When the strengths of the forces are extrapolated to very short distances, a very curious thing is noticed. Namely, three of the four forces (electromagnetic, weak, and strong) appear to have very nearly the same strength at a distance of about 10^{-32} m, corresponding to an energy of about 2×10^{16} GeV. This too is unlikely to be a coincidence.
And strikingly, when the force of gravity is extrapolated to very small distances, it appears to have about the same strength as the other three forces at a distance known as the "Planck scale", about 10^{-35} m. This suggests that not just three, but all four of the known forces might possibly be unified at sufficiently high energies.
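The correspondence between distance and energy scales quoted above comes from the quantum relation E ≈ ħc/d. A minimal sketch of the arithmetic (constants rounded; the Planck energy of roughly 1.2×10^{19} GeV is assumed here for illustration):

```python
# Convert between a distance scale and the corresponding energy scale
# via E ~ (hbar * c) / d.  Constants are approximate.
HBAR_C_GEV_M = 1.973e-16  # hbar*c in GeV·meters

def energy_scale_gev(distance_m):
    """Energy (GeV) probed at a given distance (meters)."""
    return HBAR_C_GEV_M / distance_m

def distance_scale_m(energy_gev):
    """Distance (meters) probed at a given energy (GeV)."""
    return HBAR_C_GEV_M / energy_gev

# Grand-unification scale: ~10^-32 m corresponds to ~2e16 GeV.
print(f"{energy_scale_gev(1e-32):.1e} GeV")
# Planck energy (~1.2e19 GeV) corresponds to ~1.6e-35 m.
print(f"{distance_scale_m(1.2e19):.1e} m")
```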
In particular, there is one parameter known as the "electroweak mixing angle". It specifies in a precise way how the electromagnetic and weak forces are related. It can be measured experimentally, but is not determined by the supposedly "unified" electroweak theory. A genuinely unified theory should determine this parameter.
The foregoing list points out some discrepancies among the theories of the various forces, but also some tantalizing indications of similarities. We should expect to find clues here as to how best to proceed.
Keep in mind that there are, as yet, no known egregious experimental problems with the standard model. This is actually a hindrance to progress, since the lack of experimental failures means we can't pinpoint just where the theory is "broken". But it is obviously incomplete in many ways and does have more than its share of conceptual problems. It needs to be augmented to become a more satisfactory theory, and quite possibly even replaced, surviving as just a crude approximation within a better and more comprehensive theory. Yet there are also clues about what we should be looking at.
**Elementary particles of matter**

| Leptons | | Quarks | |
|---|---|---|---|
| Electron | Electron neutrino | Up quark | Down quark |
| Muon | Muon neutrino | Charm quark | Strange quark |
| Tau | Tau neutrino | Top quark | Bottom quark |
Isn't it pretty obvious we should be looking for a symmetry between the right and left halves of this table? Such a symmetry would help account for several of the clues we mentioned, in particular the fact that there are the same number (three) of generations of quarks as leptons and the fact that the electric charges of particles are multiples of the same fundamental unit (1/3 the charge of an electron). This isn't enough to build a theory on, but it's a start.
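The charge pattern just mentioned is easy to verify by brute force. A small sketch (the particle labels are just bookkeeping; charges are in units of the proton charge):

```python
from fractions import Fraction

# Electric charges of the first-generation fermions, in units of the
# proton charge e.  All are multiples of e/3.
charges = {
    "up quark": Fraction(2, 3), "down quark": Fraction(-1, 3),
    "electron": Fraction(-1), "electron neutrino": Fraction(0),
}
unit = Fraction(1, 3)
assert all(q % unit == 0 for q in charges.values())

# A proton is (u, u, d): its charge comes out to exactly +1.
proton = 2 * charges["up quark"] + charges["down quark"]
print(proton)  # 1
```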
Let's approach this from a slightly different angle. We want to have a Yang-Mills gauge theory which is based on a local symmetry like the U(1) of electromagnetism, SU(2) of the weak force, and SU(3) of the strong (color) force. We need a symmetry group which is consistent with all of these. In practice, that means it should contain those three groups as subgroups, so that the symmetry contains all symmetries of the simpler theories. In fact, it should contain the product group U(1)×SU(2) (of the electroweak theory) as a subgroup.
The first problem is that, although this is a constraint, it's not a strong one. There are many groups which contain those others as subgroups – in fact, infinitely many. But the larger the group, the more symmetry, so we should look for the smallest group that suffices. After all, we want to account for existing observations, but there's no point in adding symmetries that have never been observed or even hinted at. Always choose the simplest theory that suffices.
We're still left with many groups to consider, starting with an obvious candidate, the product group U(1)×SU(2)×SU(3). But that is unsatisfactory, for the same reason that U(1)×SU(2) leaves something to be desired as a Yang-Mills symmetry group for the electroweak theory. Mathematically, these are what are known as "product" groups. One can construct product groups in a straightforward manner for any two groups at all; the result doesn't capture anything special about the way the two groups are related in some particular situation.
In our case, this is the reason that the electroweak mixing angle is an arbitrary parameter of the electroweak theory. It is the additional information which needs to be added about the physics above and beyond what the group provides. As we noted, this information ought to fall out of the theory itself. We will have a similar problem in unifying the strong force with the electroweak force if we just use a larger product group. What we really need is a larger group containing U(1), SU(2), and SU(3) as subgroups in such a way that the physics is naturally included.
Perhaps it will help to explain the mathematics a little more fully. U(1), SU(2), and SU(3) are all instances of what are known as "Lie groups", since this type of group was first investigated by Sophus Lie in the 1870s. Abstractly, a Lie group is an infinite group which has the topological structure of a manifold over the real or complex numbers. A manifold, in turn, is a kind of topological space which looks "locally" like a copy of an n-dimensional real or complex space (i. e. R^{n} or C^{n}). A little more loosely, a manifold is a topological space, small parts of which look like ordinary (1-dimensional) curves or surfaces (of 2 or more dimensions).
If this mathematical terminology is unfamiliar, it is fortunate that most Lie groups of interest actually occur as groups of square matrices over the real or complex numbers (R or C). If even the term "matrix" is a little daunting, don't be too concerned. A square matrix is nothing more than a table of (real or complex) numbers having the same number of rows and columns. Matrices are fundamental objects in linear algebra, i. e. in the study of solutions of simultaneous linear equations.
In order to have a group, there must be an operation of "multiplication", which is straightforward (if tedious to actually perform) for matrices. Each element must also have an inverse with respect to this operation, which is expressed in other words by saying the matrix is "non-singular". This is equivalent to requiring the determinant of the matrix to be non-zero. And if the matrix consists of coefficients of a set of linear equations, this non-singularity condition is necessary and sufficient for being able to find a unique solution of the set of equations. Since this is quite an important property, the group of all such matrices is called GL(n,R) or GL(n,C) (the "general linear" group), depending on whether the matrices involve real or complex numbers.
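As a small illustration of the non-singularity condition, here is a sketch using NumPy: a matrix with nonzero determinant belongs to GL(2, R), and the corresponding system of linear equations has exactly one solution (the matrix and vector values are arbitrary examples):

```python
import numpy as np

# A non-singular matrix (nonzero determinant) is an element of GL(2, R)
# and guarantees a unique solution to the linear system A x = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

det = np.linalg.det(A)
assert abs(det) > 1e-12       # non-singular, so A is in GL(2, R)

x = np.linalg.solve(A, b)     # the unique solution of A x = b
assert np.allclose(A @ x, b)
print(det)
```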
In addition to their use in solving systems of linear equations, matrices are a natural way to represent "linear transformations" of the n-dimensional Euclidean spaces R^{n} and C^{n}. That is, they correspond to certain types of geometric transformations on such spaces. The non-singularity condition says that the transformation does not reduce the dimension of the space.
Most – but not all – of the Lie groups used in particle physics are groups of this sort, including U(1), SU(2), and SU(3). In fact, they are usually of a somewhat more restricted type – a transformation which geometrically is essentially just a rotation or a reflection. What this means is that the transformation preserves the angles between vectors in the space, without stretching or distortion. In this sense, it is a "rigid" transformation which does not distort geometrical objects. (In technical terms, the transformation preserves the "inner product" of the vector space.)
When the space in question is C^{n} and the matrices may contain complex numbers, the group of such transformations is called U(n), where "U" stands for "unitary". The determinant of a unitary matrix is a complex number of absolute value one. If it is further required that the determinant be exactly 1, then we have the even smaller (but still infinite) groups SU(n) – the "special unitary" groups.
When the space in question is R^{n} and the matrices must contain only real numbers, the group of such transformations is called O(n), where "O" stands for "orthogonal". The determinant of a matrix in O(n) is +1 or -1. There are also subgroups where the determinant is exactly 1: the "special orthogonal" groups SO(n).
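These defining properties are easy to verify numerically. A sketch (the particular rotation angle and vectors are arbitrary):

```python
import numpy as np

# A rotation matrix is in SO(2): it preserves inner products and has
# determinant exactly +1 ("special").
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose(v @ w, (R @ v) @ (R @ w))   # inner product preserved
assert np.isclose(np.linalg.det(R), 1.0)      # determinant is +1

# A complex phase matrix is unitary (in U(2)): U†U = I, |det U| = 1.
U = np.diag([np.exp(1j * 0.3), np.exp(-1j * 0.3)])
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```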
There is an elaborate theory for classifying Lie groups, which makes it possible to arrange them systematically into various types. This in turn makes it easier to examine their properties for use in particle physics. For Lie groups described as "simple" (and the "semi-simple" groups built from them), a complete classification is possible. This is even more helpful. The definition of a simple Lie group is somewhat technical, but such groups have the nice property of not being constructed in a trivial way out of smaller groups, as product groups are. As indicated above, this is a sensible restriction.
Unitary and orthogonal matrix groups are not the only broad types of Lie groups among the semi-simple ones. But they are a little easier to analyze mathematically and hence are encountered most often. In addition to U(1), SU(2), and SU(3), we will very shortly encounter two more examples.
Shifting gears back to physics now, we recall that what we want to do is to find larger symmetry groups which relate the particles which occur in the table of elementary particles. We want to use these symmetries in a Yang-Mills type of local gauge theory analogous to the successful theories of the electroweak and strong forces, but constructed in such a way that all of the known particles and forces are "unified" in a single theory, using a single symmetry group. There's no guarantee such an approach must succeed, but it's certainly a plausible thing to try.
We recall, further, that the way the group relates to the particles is by permuting particles which are in certain sets called multiplets. Finding such multiplets which, taken together, cover all the particle types, is known as finding a "representation" of the group.
This activity is a kind of game with definite rules. The rules say that all known particles must be accounted for, but it's permissible to add new, as yet undiscovered particles. Similarly, the three forces – electromagnetic, weak, and strong – must be accounted for. (Gravity is left out for the present as being too difficult.) But it's permissible to add new forces.
The game involves finding a symmetry group and a representation of it in which the corresponding particle multiplets collectively account for all known particles. (Groups may have more than one representation.) Further, the forces must fall out naturally of a Yang-Mills type gauge theory when the group is used as a local symmetry group.
The name of this game is "model building". Particle physicists played it extensively in the 1960s and 1970s. There were two noteworthy winners: the U(1)×SU(2) theory of the electroweak force and the SU(3) theory of the strong force. These two theories together, therefore, became known as the standard model.
The next big prize would go to the winner in the game to find a model which unified the electroweak and strong forces. Theories of this sort became known as grand unified theories.
A winner has yet to appear. So it has come to seem as though this approach may not work. It seems that something may be missing.
But we can look at a couple of the best attempts, beginning with the SU(5) model proposed by Howard Georgi and Sheldon Glashow in 1974.
The representation of SU(5) contains particle multiplets consisting of both 5 and 10 particles. The 5-particle multiplet (or 5-plet) contains 3 down quarks (in each of the three colors), a positron (anti-electron), and an anti-neutrino. There's another 5-plet containing the antiparticles of each of these. Then there's a 10-plet containing up, anti-up, and down quarks, in each of the three colors (9 total particles), plus a positron. And another 10-plet contains the corresponding antiparticles. This pattern is repeated again with 5-plets and 10-plets for the additional two particle generations.
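The bookkeeping above can be tallied explicitly. In this sketch the particle labels are made up purely for counting purposes, following the description in the text:

```python
# Tally of the SU(5) multiplets for one generation, as described above:
# a 5-plet (3 down quarks, positron, anti-neutrino) and a 10-plet
# (up, anti-up, and down quarks in 3 colors each, plus a positron).
colors = ("red", "green", "blue")

five_plet = [f"d_{c}" for c in colors] + ["positron", "anti-neutrino"]
ten_plet = ([f"u_{c}" for c in colors] +
            [f"anti-u_{c}" for c in colors] +
            [f"d_{c}" for c in colors] +
            ["positron"])

assert len(five_plet) == 5
assert len(ten_plet) == 10
print(len(five_plet) + len(ten_plet))  # 15 states per generation
```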
The theory predicts that members of any multiplet are interchangeable in the equations of the theory. That is, the equations are invariant under the symmetry operation. Hence this is saying that not only electrons and neutrinos, but also (certain) quarks obey the same equations and hence are much more similar than appearances would suggest.
Furthermore, the forces which arise from the Yang-Mills gauge theory using SU(5) are capable of changing any particle of a multiplet into any other particle of the same multiplet. Such forces are mediated by one of the bosons of the theory. We already knew this was possible in some cases. For instance, quarks of a certain flavor can become quarks of the same flavor but a different color, via the strong force, by exchanging gluons. Electrons and neutrinos can be transformed, via the weak force, by exchange of W bosons.
But the astonishing thing is that in SU(5), quarks can turn into leptons (either electrons or neutrinos) – or vice versa – by an entirely new force. This force is mediated by a new type of boson, called simply X (or X boson). The X bosons carry both electromagnetic and color charge, in order to ensure proper conservation of those charges in any interactions.
The theory predicts that the X bosons must be extremely massive, with mass-energy in the unification range of about 10^{16} GeV. Consequently, the force mediated by X bosons must be extremely weak (i. e., extremely improbable to cause an interaction) and extremely short range. This range is on the order of 10^{-30} cm. Unless particles approach each other this closely, a virtual X boson could not come into existence long enough to cover the distance between the particles. The fact that X bosons must be so massive also means that it is not possible to create them in any conceivable particle accelerator that could be built. They can exist as free particles only at a very early stage of the big bang from which the universe emerged.
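The quoted range follows from the rule of thumb that a force mediated by a boson of mass M reaches roughly the boson's reduced Compton wavelength, ħc/(Mc²). A sketch of the estimate:

```python
# Range of a force mediated by a boson of mass M (in GeV), estimated as
# the reduced Compton wavelength hbar*c / (M c^2).  Constant approximate.
HBAR_C_GEV_CM = 1.973e-14   # hbar*c in GeV·centimeters

def force_range_cm(mass_gev):
    return HBAR_C_GEV_CM / mass_gev

# For an X boson of ~1e16 GeV, the range is on the order of 1e-30 cm.
print(f"{force_range_cm(1e16):.0e} cm")
```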
When you count up all the ways that the particles in an SU(5) 5-plet could be interchanged, 24 distinct bosons would be required to effect the change. 12 bosons are already known – the 8 gluons, plus W^{+}, W^{-}, and Z bosons of the weak force, plus the photon. Hence there must be 12 new X bosons, distinguished by the varying amounts of color and electric charge they carry.
There are a couple of very important immediate consequences of this SU(5) model. The first is that the units of electric charge carried by quarks and electrons must be commensurable, since the particles can change into each other. This resolves the long-standing puzzle of why protons and electrons have exactly equal amounts of electric charge (but opposite in sign) – since protons consist of three quarks.
A second consequence is even more striking: quarks can "decay" into leptons, and hence protons too can decay. This would be the first known example of the non-conservation of a quantity known as baryon number. Recall that baryons are particles composed of three quarks. Before SU(5) (and other grand unified theories) it appeared from all experimental observations that baryon number was always conserved. If this were the case, protons could not decay, since they are the lightest baryons, and hence there are no other baryons they could decay into.
SU(5) says this is wrong – protons can decay, because the quarks they are composed of can decay. This is an extremely important prediction of grand unified theories. It is also one that has proven quite problematical, since many experiments trying to observe proton decay have been attempted, without any success at all. In fact, current experimental lower limits on the proton half-life are already too large to be compatible with the predictions of SU(5), so SU(5) cannot be correct as it stands. We'll have more to say about proton decay later.
Nevertheless, there is indirect evidence that baryon number is in fact not conserved. This comes from the apparent asymmetry of matter and antimatter in the universe – for which non-conservation of baryon number is a necessary condition. (Otherwise the net baryon number of the universe would be 0.) We'll have more to say about this too.
There's one more feature of SU(5) to point out. In spite of the apparent complexity that it adds by introducing a whole new type of force and 12 new bosons to mediate it, SU(5) is still the smallest simple Lie group that contains U(1)×SU(2)×SU(3), and hence it adds as little complexity as possible. In particular, it predicts that there should be no new particles or forces that appear between the energy scale of 100 GeV (where electromagnetic and weak forces unify) and 10^{16} GeV. That's a difference of 14 orders of magnitude. It seems rather unlikely that nothing new would appear in that whole range. Indeed, the theory of supersymmetry (which we'll introduce in a little bit) predicts lots of new particles with masses between 100 and 1000 GeV.
So. SU(5) is just too simple to be true. Not only is it implausible in the way just described, but a number of its predictions have already been contradicted experimentally. For starters, it predicts that protons decay with a half-life of about 10^{31} years, whereas the best measurements as of this writing indicate a half-life no less than 1.9×10^{33} years. To its credit, SU(5) does predict a value of the electroweak mixing angle. Unfortunately, the prediction is off by about 10% compared to the best experimental measurements.
And these aren't the only problems. SU(5) predicts that neutrinos have exactly zero mass. This is now considered almost certainly wrong, based on several lines of evidence that neutrinos do have nonzero mass. SU(5) also is not adequate to support the theory of cosmic inflation – which seems to require some sort of grand unified theory and is itself gathering better supporting evidence as time goes on. And there are other cosmological calculations that SU(5) seems to get wrong as well. We'll go into a little more detail on many of these points later, in considering grand unified theories as a whole.
But as for SU(5), alas, it is a theory which is as simple as possible – too simple. Nice try, but no cigar.
SO(10) is the group of 10 by 10 real orthogonal matrices with determinant 1. It was investigated by Howard Georgi and others quite early on. It's simple, like SU(5), and contains SU(5) as a subgroup. In SO(10), the fermions of a single generation don't split into 5- and 10-particle multiplets, as they do in SU(5), nor do they combine into a single 15-particle multiplet. SO(10) does, however, have a 16-dimensional (spinor) representation, so there is a 16 particle multiplet.
In addition to the 15 particles in the combined SU(5) 5-plets and 10-plets, the SO(10) 16-plet contains one additional particle. This corresponds to a right-handed neutrino. A right-handed neutrino has never been observed in nature because (even if it actually exists) it is not affected by electromagnetic, weak, or strong forces. (Neutrinos have no electric or strong (color) charge, and the weak force affects only left-handed neutrinos.) If such a particle exists (as SO(10) would require), it would have to be very massive to have avoided detection thus far.
Any particle in a fundamental multiplet can be converted into any other by the exchange of a vector boson. In SU(5), where the fundamental multiplet has 5 particles, there are 5^{2}-1 independent symmetry operations – the number of generators of the group, which is the count of all ordered particle pairs less one (corresponding to the "identity" operation, the "nothing happens" case). SU(5), therefore, has 24 bosons – 12 already known, and the 12 new X bosons. Similarly, SU(3) (the theory of the strong force), which has a fundamental triplet, has 8 (i. e. 9-1) bosons – the 8 gluons. And SU(2) (the weak part of the electroweak theory) has fundamental doublets (e. g. electron and neutrino) and 3 (i. e. 4-1) bosons – W^{+}, W^{-}, and the Z (which mixes with the U(1) boson to yield the photon).
For SO(10) the counting works a little differently: the number of bosons equals the number of group generators, which for SO(n) is n(n-1)/2 rather than n^{2}-1. SO(10) therefore has 45 bosons, most of which would be very massive. It would be quite a complicated theory. Unlike SU(5), as a result of this complexity, there ought to be effects that could be observed in the energy range between 100 GeV and 10^{16} GeV. In particular, there would be minute changes in how the weak and strong forces behave at distance scales smaller than 10^{-16} cm. These effects would be due to "screening" caused by some of those numerous additional massive bosons.
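As a check on the boson counting: the number of gauge bosons equals the number of group generators, which is n²-1 for SU(n) but n(n-1)/2 for an orthogonal group SO(n). A sketch:

```python
# Number of gauge bosons = number of group generators.
def dim_su(n):
    """Generators of SU(n): n^2 - 1."""
    return n * n - 1

def dim_so(n):
    """Generators of SO(n): n(n-1)/2."""
    return n * (n - 1) // 2

assert dim_su(2) == 3    # weak SU(2): 3 bosons
assert dim_su(3) == 8    # the 8 gluons
assert dim_su(5) == 24   # 12 known bosons + 12 X bosons
print(dim_so(10))        # 45 gauge bosons for SO(10)
```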
Another effect would be that the amount of energy required to cause protons to decay would increase somewhat, thereby decreasing the probability of decay and increasing the half-life of the proton. Calculations indicate that the half-life of the proton in the SO(10) theory would be just barely beyond the currently measured lower limit of 1.9×10^{33} years. So SO(10) isn't excluded by observation – yet – but it's pretty close.
To the extent these predictions are verified, we have indirect evidence for some sort of grand unified theory. Conversely, failure to verify these predictions may be evidence against theories of this type – although there are usually ways to amend or extend the theory to handle the problem.
We'll summarize some of these predicted effects here and indicate the relationship between them. In later sections we'll go into a little more detail.
One such predicted effect is the decay of the proton, which would violate the conservation of baryon number. And there is evidence that the conservation law needs to fail, because the universe appears to consist almost entirely of matter instead of antimatter. I. e., there is a positive baryon number, when a total baryon number of zero ought to be expected. There ought to be an explanation for this anomaly – and grand unified theories implying nonconservation of baryons can provide it.
Grand unified theories break their symmetry by means of Higgs fields, analogous to the Higgs field of the electroweak theory. The nature of these Higgs fields is that at very high energies, the state of lowest energy of the field occurs when the field itself is zero. But at some critical point as the energy level drops, the lowest energy state (i. e., the "vacuum state") changes so that it occurs at a nonzero field strength.
Another predicted effect concerns neutrino masses, which SU(5) takes to be exactly zero. Yet experimental evidence which has accumulated in the last few years indicates that neutrinos do have mass, albeit in rather small amounts. Unified theories more complex than SU(5) provide additional Higgs fields to which neutrinos may couple in order to acquire mass.
The SO(10) model would include a right-handed neutrino (and corresponding left-handed antineutrino), as well as many exotic bosons. Such a neutrino might be very massive, but would be very hard to detect, since it would not be affected by any of the known forces except gravity. The additional bosons would be carriers of one or more new forces even weaker than the weak nuclear force, and of which (therefore) we currently know nothing.
Other models would have even more exotic fermions and bosons. Some of them might be affected by electromagnetic, weak, or strong forces, but all that do would have to be extremely massive, since they have left no evidence of themselves in accelerator experiments. One of the best indications that some sort of massive exotic particles do exist is the very strong evidence that most of our universe is actually composed of non-baryonic dark matter, which simply cannot be accounted for by the standard model.
In the first place, from the theoretical point of view, the "ideal" state of the universe would be one in which maximal symmetry prevails. This would entail, among other things, just a single type of particle (which would be massless) and just a single type of force. But such high symmetry is unstable, like a pencil balanced on its point. This much symmetry could exist, if ever, only in the first instant after the big bang when the universe was unimaginably hot. As it cooled, all of the particles and forces we know today would have very quickly precipitated out. Leptons and quarks, in particular, would have become distinct. But just before that point, protons could not exist stably because quarks and leptons would be constantly turning into each other. (Protons might not exist for other reasons as well – the temperature would be so high that quarks could not form a bound system. Instead, a "quark-gluon plasma" would prevail.)
In the second place, from an observational point of view, we are almost certain that there is far more matter in the universe than antimatter. A necessary condition for this asymmetry (as we shall discuss) is that baryon number not be conserved. In other words, matter/antimatter asymmetry is inconsistent with baryon number conservation, and hence with absolute stability of protons.
In spite of all this, we still have no direct evidence that protons do decay. And it's not for lack of trying. A number of experiments have been conducted trying to detect proton decay. The first began around 1978 in the Kolar gold mine in India. Others have since been conducted in a northeastern Ohio salt mine (the "IMB experiment"), in the Mont Blanc tunnel between France and Italy, and in a silver mine in Utah. The most recent is the Kamiokande experiment in Japan (and its successor Super-Kamiokande), which has also been used in neutrino studies.
Although it might seem impossible to observe an event like proton decay, which would take longer than 10^{31} years to happen with any given proton, it's not quite so hard as it sounds. Several thousand tons of matter (typically water) contain about 10^{33} protons, so about 100 of them might decay every year. Detectors consist of a container with this much water and thousands of photomultiplier tubes to register flashes of light that would be the signal of a decay event. Such experiments are conducted deep underground in order to shield them from cosmic rays which would produce false signals.
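The event-rate arithmetic in this paragraph can be sketched directly (the proton count and half-life are the round numbers assumed in the text):

```python
import math

# Expected proton decays per year in a detector holding ~1e33 protons,
# assuming (hypothetically) a half-life of 1e31 years.
N_PROTONS = 1e33
HALF_LIFE_YEARS = 1e31

# Decay rate = N * ln(2) / half-life.
decays_per_year = N_PROTONS * math.log(2) / HALF_LIFE_YEARS
print(round(decays_per_year))  # on the order of 100 per year
```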
The SU(5) theory predicts proton half-life of about 10^{31} years. Although several suspicious events were noted fairly early, nothing which could conclusively be deemed a proton decay has been confirmed to have occurred. Already around 1982 it became pretty clear that the proton half-life must greatly exceed 10^{31} years, so that SU(5) was definitely ruled out.
Pushing the minimum half-life much beyond this requires watching larger quantities of matter for longer periods of time. At present the minimum is about 1.9×10^{33} years, with still no sign of proton decay. This is almost enough to eliminate other possible grand unified theories, such as SO(10). But for all anyone knows, there are theories consistent with much longer proton half-lives. It's certainly starting to appear that we won't be so lucky to be able to measure a definite proton half-life, which would give us some information about which theories could be viable. At least, not with the current experimental approaches.
It turns out, in most unified theories, that the proton lifetime is proportional to the 4th power of the X boson mass – a fairly sensitive dependency. However, there's an uncertainty in this mass of at least a factor of 10, so there can be an uncertainty in the predicted lifetime of at least a factor of 10,000 – quite a lot, especially when pushing the limits of experimental technology.
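The sensitivity to the X boson mass can be illustrated with one line of arithmetic (assuming the fourth-power scaling described above):

```python
# If the proton lifetime scales as M^4 (M = X boson mass), a factor-of-10
# uncertainty in M becomes a factor-of-10^4 uncertainty in the lifetime.
def lifetime_ratio(mass_ratio, power=4):
    return mass_ratio ** power

print(lifetime_ratio(10))  # 10000
```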
It is also possible to construct grand unified theories in which protons do not decay. These would be automatically refuted if we ever did observe definite decay events. While such a model might give us a little more comfort with grand unified theory as long as proton decay goes undetected, it may not be what we want anyhow. As indicated above, proton decay is tied up with other phenomena for which there is observational evidence – baryogenesis and matter/antimatter asymmetry.
The second idea, which is where grand unified theories come in, is that the Higgs fields of importance are those which generate the masses of the extremely heavy X bosons which mediate transformations among quarks and leptons. In other words, these are the Higgs fields which break the symmetry of the grand unified theory.
It is expected that there will be many Higgs fields involved with GUT symmetry breaking. But suppose, for simplicity, that there are only two. At any point in space, the value of one Higgs field is just a scalar – a number, which can be positive or negative. So the values of two Higgs fields can be plotted in two dimensions with a standard x-y coordinate system. Any point in this plane represents some numerical value assumed by each of the fields. Along the third axis one can plot the energy existing in the vacuum as a result of these two fields. That is, above each point in the plane is a third point, representing energy. (In this discussion, energy is a positive quantity, so the energy is actually on or above the plane.) The resulting energy graph is a two dimensional surface.
"Normally" what one would expect is that the energy would be zero when both fields are zero, and nonzero (positive) otherwise. In this case, if we assume the energy is symmetric under exchange of the two field values, the energy graph would resemble a paraboloid. That is, rather like an egg shell which has been cut in half through the middle and is situated with the tip of the shell at the point (0,0).
This is what the theory postulates for the relation between Higgs field strength and vacuum energy when the temperature is sufficiently high. Under these circumstances, the system is in a state of minimal energy when both (all) fields have a value of zero.
But the theory also postulates that when the temperature drops below a certain point, the energy graph takes a different form. It becomes shaped somewhat like a Mexican hat or the bottom of a wine bottle, with a hill in the middle. The minimum of the energy then occurs when both (all) fields are non-zero.
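As a rough illustration, the energy surface described here can be written down with a standard "Mexican hat" potential. The quartic form below is a common textbook choice, not the potential of any particular GUT; the parameters lam and v are purely illustrative.

```python
def potential(phi1, phi2, lam=1.0, v=1.0):
    """Toy 'Mexican hat' vacuum-energy surface for two scalar Higgs fields.

    V = lam * (phi1^2 + phi2^2 - v^2)^2 -- an illustrative textbook form,
    not taken from any specific grand unified theory.
    """
    return lam * (phi1**2 + phi2**2 - v**2) ** 2

# High-temperature shape (v = 0): a simple bowl, minimum at the origin.
print(potential(0.0, 0.0, v=0.0))   # 0.0 -- zero energy when both fields vanish

# Low-temperature shape (v = 1): the origin becomes a hill ...
print(potential(0.0, 0.0, v=1.0))   # 1.0 -- nonzero energy at the center
# ... and the minimum moves out to the circle phi1^2 + phi2^2 = v^2.
print(potential(1.0, 0.0, v=1.0))   # 0.0
```

Setting v to zero recovers the high-temperature bowl; a nonzero v produces the wine-bottle shape, with the minimum at nonzero field values.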
Since we don't have a satisfactory grand unified theory, we don't know very precisely what the unification energy scale is, but estimates are generally around 10^{15} to 10^{16} GeV. The conversion factor between GeV and temperature (degrees K) is derived from the Boltzmann constant, k_{B}, which is about 8.617×10^{-5} eV per degree K. As a handy approximation, we can say 1 GeV is about 10^{13} degrees K. So we are talking about a temperature of 10^{28} K to 10^{29} K. Thus the typical photon at a temperature of 10^{28} K has an energy of 10^{15} GeV.
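The temperature conversion quoted here is easy to reproduce. A minimal sketch, using the Boltzmann constant value given in the text:

```python
# Converting particle energies (GeV) to temperatures (K) with the
# Boltzmann constant quoted in the text, k_B ~ 8.617e-5 eV per degree K.
K_B_EV_PER_K = 8.617e-5

def gev_to_kelvin(energy_gev):
    """Temperature at which a typical photon carries the given energy."""
    return energy_gev * 1e9 / K_B_EV_PER_K   # GeV -> eV, then divide by k_B

print(f"{gev_to_kelvin(1.0):.2e}")    # ~1.16e13 K, i.e. 1 GeV ~ 10^13 K
print(f"{gev_to_kelvin(1e15):.2e}")   # ~1.16e28 K at the GUT scale
```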
There is likewise uncertainty about precisely when the universe was in this state, but estimates are around 10^{-35} to 10^{-36} seconds after the big bang. In this state where all forces are unified, everything is very symmetrical. X bosons abound, and leptons and quarks turn into each other with blithe abandon. In fact, the different types of particles really are not distinguishable.
But as the temperature falls through that point, the vacuum itself becomes unstable. The vacuum energy graph described above no longer has its minimum when all Higgs fields are zero. Instead, it has minimal energy when the fields are not zero. This unstable vacuum energy persists for a very brief time. During this instant we have what is called a "false vacuum". There is a huge amount of energy "trapped" in the vacuum, due to the shape of the energy function. This is similar to the state of water when it is "supercooled" below 0° C. Although the system is still very symmetrical, it is now unstable.
This vacuum energy can be described through the general theory of relativity as the term in Einstein's equation called the "cosmological constant". It represents a kind of antigravity, a force acting repulsively rather than attractively. This force of repulsion will exist as long as the false vacuum does.
The net result is a violent expansion of the universe, which is what is meant by the term "cosmic inflation". The rate of expansion depends on details of the specific GUT involved, but is in the general range of a doubling in size every 10^{-37} second. Hence the universe could go through 100 such doublings in size in just 10^{-35} seconds. This would amount to an expansion by a factor of 10^{30}.
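The doubling arithmetic can be checked directly; the rate of one doubling per 10^{-37} second is the illustrative figure from the text:

```python
# Checking the expansion arithmetic: one doubling per 10^-37 s (the
# illustrative rate from the text) over 10^-35 s gives 100 doublings.
doubling_time = 1e-37                       # seconds per doubling
elapsed = 1e-35                             # seconds of inflation
doublings = round(elapsed / doubling_time)  # 100

expansion = 2.0 ** doublings
print(doublings)             # 100
print(f"{expansion:.2e}")    # ~1.27e30 -- the quoted factor of 10^30
```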
How many doublings in size actually occur? That depends on how long the inflation actually lasts, which in turn is very dependent on the nature of the specific GUT involved. In fact, this is one of the trickier parts of the whole theory: the question of what brings inflation to an end.
Inflation proceeds as long as the unstable false vacuum persists. It is thought that the false vacuum will eventually decay by a process like quantum tunneling. Something like this is needed, because the shape of the energy function is actually a little more complicated than the Mexican hat or wine bottle. The Higgs fields, it seems, are trapped in a small energy well at the top of the energy surface. The potential energy would need to increase somewhat before it could be eventually released. But this can also occur if the fields "tunnel" through the energy barrier, just as a quantum particle can tunnel out of an energy well.
Eventually, through some tunneling process, the symmetry is broken. The Higgs fields suddenly take on nonzero values and lower the potential energy of the vacuum. The potential energy suddenly drops – or rather, it is converted into the kinetic energy of particles which are now massive (because the Higgs fields are no longer zero) and moving at near the speed of light.
Different GUTs predict different forms of the energy surface, and hence the details of how long inflation actually persists vary quite a bit. But the end result is much the same, whether the actual amount by which the universe inflates is a factor of 10^{30} or 10^{60}.
There are several noteworthy effects of this enormous expansion. One is that any curvature which may have existed in the prior universe is reduced to essentially zero – total flatness. Our best measurements of the shape of the universe indicate that it is in fact very close to flat. And this is more of a problem than one might think, because (without inflation) the amount of flatness we see now can only have resulted from an incredible degree of fine tuning in the early universe.
Another problem that inflation solves is the degree of isotropy and homogeneity observed in the universe today. That is, as far as we can tell, every part of the universe, in every direction, is very like every other part. This is reflected, for instance, in how nearly the temperature of the cosmic microwave background radiation is the same in every direction. The problem is that we can observe portions of the universe, in opposite directions from each other, which could not have been in physical contact at a very early stage. Thus it is puzzling how they could be so much alike. Inflation, again, handles this by providing a means by which all of the visible universe in fact could have been causally connected – since the part we can see now is just a tiny part of a causally connected region that has inflated enormously in size.
And there are additional observational facts that inflation handles well, such as the origins of the large scale structure of the universe. But we'll save more details about such matters for elsewhere. Except for one specific item – magnetic monopoles. Most GUTs predict monopoles should exist. The problem is, we've never observed (for sure) a single monopole. Inflation can handle this problem.
More detail on cosmic inflation
Monopoles can be described as "topological defects" – kinks in quantum fields, the Higgs fields in particular. Another way to say this is that monopoles are composed (in some sense) of Higgs fields. They represent discontinuities in the fields, a little like the discontinuity of a broken piece of sidewalk that is easily stumbled over. Another term sometimes used is that monopoles are topological "solitons" – a kind of wave having finite physical extent.
Although Higgs fields are scalar fields, there are usually many of them (in GUTs). Taking the values of all the fields together gives you a vector, which has both a direction and a length. Vector fields can have isolated discontinuities. An example of this is a tornado or water running down a drain. The vector fields indicating the speed and direction of the wind or water are continuous, except for the point in the very center.
Calculations indicate that GUT monopoles ought to be extremely massive – about 100 times the unification energy scale, i.e. about 100 times the mass of an X boson. Since this scale, inferred from the failure to observe proton decay, is in excess of 10^{16} GeV, monopoles must weigh in at more than 10^{18} GeV.
As already noted, monopoles have never been observed, despite many diligent searches. It's not that monopoles are just carefully hidden away, deep inside stars or planets, for example. Because monopoles are so massive, calculations of their expected rate of production in the early universe indicate that their mass should absolutely dominate everything else in the universe – causing it to collapse in as little as 1000 years.
But the solution of this monopole problem is close at hand. It's inflation, again. The universe would have expanded so drastically during the inflationary period that the resulting number of monopoles in each cubic lightyear is negligible.
More detail on magnetic monopoles
That's because the standard model incorporates only left-handed neutrinos (and right-handed antineutrinos). That is all the SU(2) electroweak symmetry provides for, because right-handed neutrinos don't couple to the electroweak force. Consequently, neutrinos do not couple with the Higgs field(s) of the standard model (needed for electroweak symmetry breaking), so they can't have mass from that source.
It turns out that if right-handed neutrinos don't exist, then neutrinos must be massless. Stated another way, if neutrinos have any mass at all, then right-handed neutrinos (still not coupling to the weak force) must exist. This means that in the standard model neutrinos must be massless. And since the SU(5) GUT doesn't include right-handed neutrinos either, they must be massless in that theory as well. SO(10), on the other hand, would allow them.
But there is increasing evidence that neutrinos do have mass. One clue was the observation of neutrinos from Supernova 1987a. If neutrinos have any mass at all, they cannot travel at exactly the speed of light. Consequently, neutrinos produced in that supernova should have had a small range of velocities just under the speed of light, and as a result, they should have arrived in our detectors at slightly different times. This is in fact what was observed. The spread of arrival times was only 12 seconds. This implies that the mass of the neutrinos could have been at most about 16 eV. Unfortunately, most of the spread of times could also have been due to differences in when the neutrinos were produced.
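The connection between neutrino mass and arrival-time spread follows from the relativistic approximation Δt ≈ (L/c)·m²/(2E²). A sketch with illustrative figures – the SN 1987a distance of roughly 168,000 light years and a typical detection energy of 10 MeV are assumptions, not taken from the text:

```python
def delay_seconds(mass_ev, energy_mev, distance_ly=168_000):
    """Arrival delay (relative to light) of a neutrino of given mass and energy.

    Uses the relativistic approximation delta_t ~ (L/c) * m^2 / (2 E^2).
    The supernova distance (~168,000 ly) and ~10 MeV detection energies
    are illustrative figures, not quoted in the text.
    """
    seconds_per_year = 3.156e7
    l_over_c = distance_ly * seconds_per_year    # light takes 1 yr per ly
    ratio = mass_ev / (energy_mev * 1e6)         # m/E, both in eV
    return l_over_c * ratio**2 / 2

# A 16 eV neutrino at 10 MeV trails the light front by a few seconds --
# the same order as the observed ~12 s spread of arrival times.
print(f"{delay_seconds(16, 10):.1f} s")
```

Since the delay grows with mass and shrinks with energy, a range of energies smears the arrivals out over seconds, which is why the 12-second spread bounds the mass.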
Better evidence for neutrino mass comes from measurements of solar neutrinos. Observations have consistently shown only about 1/3 of the number of neutrinos expected to have been produced normally within the Sun. It is now accepted that this implies a phenomenon called neutrino oscillation. This means that neutrinos spontaneously change among the types associated with electrons, muons, and taus. In order for this to occur, it is necessary that neutrinos have some mass.
So the evidence is now pretty good for neutrino mass. Although this means the standard model is incomplete, that's not much of a surprise. Nor is it surprising that this rules out an SU(5) GUT, since that model has a number of other problems as well. On the other hand, this gives a bit of indirect evidence for some other type of GUT, in that other models do allow for right-handed neutrinos and neutrino mass.
One way that exotic particles may come out of the theory is due to the nature of the models themselves. Along with the symmetry group that belongs to the model there are group representations consisting of one or more fermion multiplets. Each multiplet consists of particles which can be interchanged by symmetries of the group. SU(5) is the only grand unified model whose multiplets consist exclusively of known particles. With larger symmetry groups, additional particles need to exist simply to fill out the symmetry. This sort of circumstance has been around for some time in the theory of elementary particles. In 1961, for instance, Murray Gell-Mann's "eightfold way" symmetry required the existence of an as-yet unknown particle (the Omega-minus) in order to be complete. The particle was actually found in 1964.
In addition to fermions predicted by a model, there are also bosons mediating gauge forces. Their number is fixed by the symmetry group itself: a theory with gauge group SU(N) has N^{2}-1 fundamental bosons, while one with gauge group SO(N) has N(N-1)/2. For instance, in the SU(2) theory of the weak force there are 3 fundamental bosons. (Additional bosons occur as superpositions of the fundamental ones.) In the SU(3) theory of the strong force there are 8 fundamental bosons – the gluons. As noted above, the SU(5) theory has 24 bosons (12 already known, and 12 new X bosons). The SO(10) model has 45 fundamental bosons.
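These boson counts are just the dimensions of the symmetry groups (dim SU(N) = N²−1, dim SO(N) = N(N−1)/2), which can be tabulated directly:

```python
# The number of fundamental gauge bosons equals the dimension of the
# symmetry group (its number of generators).
def su_dimension(n):
    return n * n - 1            # dim SU(N) = N^2 - 1

def so_dimension(n):
    return n * (n - 1) // 2     # dim SO(N) = N(N-1)/2

print(su_dimension(2))   # 3  -- the weak-force bosons
print(su_dimension(3))   # 8  -- the gluons
print(su_dimension(5))   # 24 -- SU(5): 12 known bosons plus 12 X bosons
print(so_dimension(10))  # 45 -- the SO(10) model
```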
So if one of these unified models is actually correct, there must be a significant proliferation of "elementary" particles. How could they all manage to escape detection? The answer seems simple enough – the new particles would have to be so massive that they couldn't be created in experiments with existing accelerators. Most such particles would be unstable as well, so they wouldn't be found in nature, as cosmic rays for example. But this answer is a little too simple. Even if a particle is too heavy to be created in an accelerator, its potential existence as a virtual particle should have effects on particles that can be created. This is especially true of gauge bosons. Since they may exist virtually for very short periods of time, they can have subtle effects on observable processes. They will make some small contribution to the probabilities of various kinds of interactions, for instance.
At present the most sensitive experiments haven't been able to identify any effects that, with great certainty, must be due to such additional particles. This tends to be evidence against unified models other than SU(5) (which is itself ruled out for a variety of other reasons).
There is, however, one other significant potential source of new, exotic particles: supersymmetry. We'll go into it a little more below (as well as elsewhere), but the basic idea is that for every known fermion or boson the symmetry provides a new particle of the opposite type (boson or fermion) that corresponds to it. (Some particles may have more than one such "superpartner" in some versions of supersymmetry.) None of the known particles has the right properties to be a superpartner of any other known particle, so if supersymmetry is correct, the number of "elementary" particles is immediately doubled (at least).
Here again, that none of these supersymmetric particles have been observed (assuming they exist) is probably because they are very massive. But there are also reasons for expecting that the lightest superpartner could not be too much more massive than current experiments are able to probe. So discovery of one or more supersymmetric particles could occur almost any time – and supersymmetry as a theory will start looking pretty shaky if nothing turns up within, say, the next 10 years.
There is one additional theoretical source of exotic new particles worth noting. It comes from the phenomenon of CP symmetry breaking and a puzzle related to this and the strong force, known as the "strong CP problem". Out of this, using what is essentially a Higgs mechanism, there should come a very light particle known as the "axion". We'll discuss that more elsewhere, and merely take note of it here.
All of these various exotic particles are extremely interesting for two reasons. In the first place, observation of any of them will give instant credibility to the theory which predicts them – one of the grand unified models or supersymmetry in particular. Discovery – or nondiscovery – of any of these particles makes for a very good test of the various theories.
In the second place, the almost certain predominance of dark matter in the universe makes it equally certain that some sort of exotic non-baryonic particles must exist in great abundance. We would plainly like very much to know exactly which of the many possibilities the dark matter actually consists of. (It may be several of the candidates, of course.) In addition to the exotic particles described above, there aren't all that many other good candidates for the non-baryonic dark matter that come out of plausible theories. Massive neutrinos (including the hypothetical right-handed kind) are one other possibility. Magnetic monopoles are another. But that's about it.
We stress the "non-baryonic" type of dark matter for a good reason. It would take far too long here to go into how the amount of dark matter in the universe is estimated. But when everything is calculated out, there are a couple of key points. First, only about 1/6 of the mass of the universe is baryonic (i.e. protons and neutrons, mostly), while the other 5/6 is non-baryonic. This is estimated by means of fairly reliable computations of big-bang nucleosynthesis (the creation of light atoms such as hydrogen and helium). In other words, over 80% of the mass of the universe is not only dark, but is actually non-baryonic. Second, of the baryonic part, only about 20% of it is in the form of visible matter, such as stars, galaxies, and luminous gas clouds. This is estimated by such things as the rotation rates of galaxies and the motion of galaxies in clusters. Therefore probably at most 3% to 4% of the matter in the universe is visible, and the rest is (by definition) dark matter – so there is quite a bit of it.
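The bookkeeping here is simple enough to spell out:

```python
from fractions import Fraction

# The mass budget sketched above: about 1/6 of all matter is baryonic,
# and only about 20% of that baryonic part is visible.
baryonic = Fraction(1, 6)
nonbaryonic = 1 - baryonic               # 5/6
visible = baryonic * Fraction(1, 5)      # 20% of the baryonic part

print(nonbaryonic)                       # 5/6 -- over 80% non-baryonic
print(f"{float(visible):.3f}")           # 0.033 -- roughly 3% is visible
```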
This non-baryonic matter, then, which comprises more than 80% of the mass of the universe, is made up of some mixture of massive neutrinos and other particles even more exotic, as outlined above. There are good reasons for thinking most of it is not neutrinos. (If there were very much of this so-called "hot" dark matter, galaxies would not cluster to the extent that they are observed to do.) Consequently, more than half (and probably much more) of the mass of the universe consists of exotic particles which are predicted by grand unified theories and/or supersymmetry.
The point is, we really need something beyond the standard model simply to understand what most of the universe is composed of.
More detail on CP symmetry violation and axions
To begin with, we should explain better what this quantity is. In the first place, why is it called an "angle"? One answer is that it comes out of the original metaphor we used to understand SU(2) symmetry. This goes back to the notion of "isospin" introduced by Werner Heisenberg. This idea allowed him to regard the two nucleons – protons and neutrons – as really different forms of the same particle. These two forms are related to each other by rotations in an abstract space – rotations through some particular angle.
Since protons and neutrons aren't elementary particles, this idea didn't really accomplish much. But it turned out that an exactly analogous idea – now called "weak isospin" – does work for pairs of particles such as electrons and neutrinos or up and down quarks. The symmetry built on those particle doublets is the SU(2) of the electroweak theory.
Sheldon Glashow in 1961 came up with the idea that is essentially the electroweak mixing angle in the course of making his contribution to the electroweak theory. However, to his consternation, the quantity was for a time known as the "Weinberg angle", after Steven Weinberg, who independently came up with the same idea about 6 years later. The concept now goes under several other names as well, such as "electroweak unification angle" and "weak mixing angle".
Physicists refer to the strength of a force as a "coupling constant", so the most intuitive way to compare the electromagnetic and weak forces is by means of the ratio of the electromagnetic and weak coupling constants. The best current measurement of this ratio is about .472. So in some sense the electromagnetic force is about half as strong as the weak force.
How does an angle figure into this? One simply defines the electroweak mixing angle as the angle x such that the given ratio is sin(x). (This angle is usually written as the Greek letter theta, as one commonly does with angles.) On the basis of this definition, the mixing angle is about 28.16 degrees. (That's the inverse sine function of .472.)
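The arithmetic can be checked in a couple of lines:

```python
import math

# Recovering the electroweak mixing angle from the measured ratio of
# coupling constants: x = arcsin(0.472), expressed in degrees.
ratio = 0.472
angle_degrees = math.degrees(math.asin(ratio))
print(f"{angle_degrees:.2f}")   # 28.16
```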
But why go to the trouble of expressing this value in terms of an angle? The answer is that the sine of the angle plays an important role in the theory. One of the main functions of the mathematics involving the Higgs fields was to account for the fact that the gauge bosons of the weak force (Z, W^{+}, and W^{-}) all had considerable mass. Also, instead of the neutral Z vector boson which is actually observed in connection with a phenomenon called a "neutral current", the theory yields a different neutral vector boson, denoted by W^{0}.
What's wrong with this picture? Well, for one thing, we wanted to obtain the familiar photon, which is massless, as one of the gauge bosons of the theory. And at the same time, we wanted to have Z instead of W^{0}. Is there some way to get the photon and the Z out of the theory?
The answer is, yes. To begin with, the electroweak theory yields an electrically neutral massive vector boson, call it V, which is produced by the U(1) symmetry. So it's a little like a photon, but not exactly, since it has mass. But at least we now have two neutral bosons, V and W^{0}. We can, via rotation through an angle x and superposition of the results, produce two new particles:

v = V cos(x) + W^{0} sin(x)
Z = -V sin(x) + W^{0} cos(x)
Both of these are still electrically neutral. But, magically, it turns out that when x is the electroweak mixing angle defined as above in terms of a ratio of coupling constants, then v is in fact the massless photon, and Z is the neutral vector boson associated with neutral currents.
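The rotation-and-superposition step can be sketched concretely. The explicit combination below (photon = V cos x + W⁰ sin x, Z = −V sin x + W⁰ cos x) is the standard electroweak convention, assumed here rather than quoted from the text:

```python
import math

def mix(v_coeff, w0_coeff, x_degrees):
    """Rotate the neutral-boson pair (V, W0) through the angle x.

    Standard electroweak convention (an assumption matching the setup
    in the text):
        photon = V*cos(x) + W0*sin(x)
        Z      = -V*sin(x) + W0*cos(x)
    """
    x = math.radians(x_degrees)
    photon = v_coeff * math.cos(x) + w0_coeff * math.sin(x)
    z = -v_coeff * math.sin(x) + w0_coeff * math.cos(x)
    return photon, z

# A rotation just recombines the two states; nothing is lost or gained,
# so the squared coefficients still sum to 1.
p, z = mix(1.0, 0.0, 28.16)
print(f"{p*p + z*z:.6f}")   # 1.000000
```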
Well, that's very nice, but so what? Remember that x is determined by an experimentally measurable value – its sine is a ratio of coupling constants. Now, the standard model has no way to predict this value. It's just an arbitrary constant. The big deal here is that a grand unified theory can predict this value.
Unfortunately, at first (around 1974), the predicted and measured values of the mixing angle did not agree. But surprisingly, as time went on, the measured value changed until it agreed to within about 5% of the theoretically predicted value. Much better, but still not quite right.
The theoretical prediction, however, was made using the minimal SU(5) theory. And that theory has already been ruled out for various other reasons. When a unified theory which incorporates supersymmetry is used to compute the mixing angle, it is found that theory and experiment agree to within about half a percent.
That agreement of theory and experiment is the big deal. It's a fairly nice piece of evidence in favor of both some type of grand unified theory and supersymmetry.
We are going to see that unified theory faces other challenges – and supersymmetry again comes to the rescue.
If one assumes that a Higgs mechanism is responsible for the symmetry breaking and the boson masses, then there will also be a similar difference of 14 orders of magnitude in the masses of the relevant Higgs bosons. It is this circumstance which makes the hierarchy problem especially acute. If a Higgs mechanism were not involved, then the huge differences in masses and field strengths would be only a curiosity. But the mathematics of the Higgs mechanism – in particular its inclusion of spinless ("scalar") bosons – establishes relationships between the different sorts of Higgs fields and their magnitudes. In other words, there are equations relating these different quantities, and for the equations to work, they must contain parameters which are exact to 14 or so decimal places, so that different terms cancel each other almost – but not quite – exactly, allowing related fields with such disparate strengths.
We can be a little more specific about this problem. Recall that it was the Higgs mechanism which gave mass to the gauge bosons in the electroweak theory. There must be at least one Higgs particle involved here, but there may be more than one. When we work with a grand unified theory, there will be many more Higgs particles that give mass to the X bosons of the GUT. These latter Higgs particles must have masses comparable to the enormous masses of the X bosons. The problem is that all the Higgs particles are involved even with the much lighter W/Z bosons. That is, all of the Higgs particles contribute something to the W/Z masses. But since most Higgs particles would be extremely heavy, the result could be the relatively minuscule W/Z masses only if there were exceedingly precise cancellations among different terms. One would expect a far more likely result to be that the W/Z masses are close to those of the X bosons.
So all of this leads to another way of describing the hierarchy problem – as a "fine tuning" problem. Somehow, the equations, which are suited to describing quantities of similar magnitude, manage to also describe far smaller quantities which arise from very precise cancellation of the larger numbers.
Here's an analogy: Suppose you have a recipe for making a souffle, in which two ingredients must be mixed in exactly the proportion of 2:1 for the recipe to succeed. The proportion must be exact to fifteen decimal places, or else the recipe will fail completely, and all you will get is a soupy mess. That is very closely parallel to what we have with this hierarchy problem.
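The kind of cancellation involved can be mimicked with plain numbers. A toy sketch – the figures are purely illustrative, standing in for GUT-scale (~10^{16} GeV) and electroweak-scale (~10^{2} GeV) masses:

```python
# A toy version of the fine-tuning problem.  To leave an electroweak-scale
# mass (~1e2 GeV) after GUT-scale contributions (~1e16 GeV), two terms must
# cancel to about 14 decimal places.  All numbers here are illustrative.
gut_term = 1e16
counter_term = -(1e16 - 1e2)     # tuned to 1 part in 10^14

residual = gut_term + counter_term
print(residual)                  # 100.0 -- the tiny leftover scale

# Loosen the tuning by just one part in 10^12 and the result is way off:
sloppy = gut_term - 1e16 * (1 - 1e-12)
print(f"{sloppy:.0f}")           # ~10000 -- a hundred times too big
```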
Note that this problem arises in the presence of two distinct features: (1) We are trying to formulate a grand unified theory which relates the electroweak and the strong force. (2) We want to use a Higgs mechanism to explain the breaking of the symmetry at two very different energy scales. It may be that these goals are simply incompatible. Unification may be impossible, or the whole idea of the Higgs mechanism could be a mistake.
Strictly speaking, the Higgs mechanism could be dispensed with. All that the electroweak theory (and similarly, an analogous grand unified theory) needs is for the symmetry between the forces to be broken in some way. The Higgs mechanism can do this, but any mechanism that does the job would be enough. The actual details of how the mechanism works aren't important. (And for this reason, the standard model and GUTs which extend it say very little about details of the mechanism, such as the masses of the Higgs particles.)
But physicists are very reluctant to give up on either the unification of forces or the Higgs mechanism, because of their inherent elegance. So there is a problem, and an intensive search for some way to reconcile these two theoretical approaches. It turns out that supersymmetry provides just what is needed.
We go into much more detail on supersymmetry elsewhere. Just as a reminder, the essential idea is to postulate one more symmetry, even more sweeping than what is done in grand unified theories. This new symmetry relates bosons to fermions. Under this symmetry, there are transformations which actually exchange particles of these two very different types.
Supersymmetry is an attractive theory for several reasons. One of the primary ones is that the theory is actually able to make a natural (if still not quite consistent) quantum theory of gravity. That would be enough in itself, but another reason supersymmetry is interesting is that it is also able to solve several problems that arise in grand unified theories.
As noted, supersymmetry postulates the existence of transformations which relate fermions to bosons. In particular, to each particle of either type there must exist another particle – called the supersymmetric partner – whose spin differs by exactly 1/2. However, none of the known particles appears to be a superpartner of another known particle. Thus, all superpartners must be particles which are as yet entirely unknown.
What is it that renders supersymmetry so magically effective at solving certain problems? The answer is that all those superpartners make the equations of the theory very symmetrical. For any given particle and every corresponding term in an equation of the theory, there is a superpartner whose own term in the equation occurs with the opposite sign. The result is that, in a typical calculation, there is a large amount of exact cancellation of terms right away. This has a number of pleasant consequences.
First, it takes care of the hierarchy problem in grand unified theories. In computations of the mass of the electroweak gauge bosons (W and Z), everything having to do with contributions from Higgs fields related to the much heavier X bosons cancels out exactly. The problem simply goes away.
In a little more detail, as we noted, spinless particles (the Higgs bosons) cause trouble in this regard. This is because virtual particles in the vacuum interact with all spinless ("scalar") particles and hence affect their mass. So the mass of each scalar particle is related to the mass of all the rest. This makes it very likely that the Higgs boson(s) of the electroweak theory should have masses similar to the masses of the bosons of the grand unified theory – which are very large. Consequently, the W/Z bosons of the electroweak theory should be much more massive than they are. But with supersymmetry, each term affecting the mass of a Higgs boson comes from a particle but is cancelled out by another term coming from the particle's superpartner. The net result is that the mass of a Higgs particle is not affected by the masses of other Higgs particles.
The second pleasant thing is that a number of other calculations also become much easier, for similar reasons. As a result, several other problems can be resolved as well.
Supersymmetry seems almost too good to be true. And perhaps it is. But at least it is something we have a reasonable prospect of testing.
In the past, it has certainly been true that experimental findings have forced physicists to recognize smaller levels of structure. Atoms, for example, were found not to be truly elementary. They consist of a small, heavy nucleus enveloped in a much larger cloud of much lighter electrons. The nucleus, next, turned out to be made of smaller particles – protons and neutrons. And then even the protons and neutrons were found to be composed of quarks. Couldn't this sort of nesting of smaller and smaller structures go on indefinitely?
Maybe not. It's important to note that in the past there were always puzzling experimental findings which were inconsistent with existing theory and required a smaller level of structure to exist to make any sense. With the standard model, however, there are no obvious inconsistencies with experiments, even though the model does fail to explain many things that seem to require a more complete theory.
In any case, theorists have explored ways to extend the standard model by supposing some sort of composite structure for quarks and leptons. Abdus Salam, who was one of the main contributors to the electroweak theory, played with the idea of "preons" as possible constituents of quarks and leptons.
A somewhat more serious attempt was made by a physicist named Haim Harari to build a composite model using just two types of particles he called "rishons". He supposed there might be two spin-1/2 particles, one with 1/3 unit of electric charge and the other electrically neutral. (There would be corresponding antiparticles, with the opposite electric charge.) When these two particles are combined in groups of three, there are four possibilities, having electrical charges of 1, 2/3, 1/3, and 0. These could correspond to a positron, an up quark, an anti-down quark, and an electron neutrino. There is a way to include color charge as well. Since the quarks which make up a proton can swap different types of these rishons, it is even possible to have proton decay. Unfortunately, measured limits on the proton half-life are very hard to reconcile with the rishon theory.
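The charge counting in this scheme can be enumerated mechanically. A sketch, using the charges stated in the text (the names T and V for the two rishons follow Harari's usage):

```python
from itertools import combinations_with_replacement
from fractions import Fraction

# Enumerate the electric charges of three-rishon combinations, using the
# charges given in the text: T carries +1/3, V is neutral.
charge = {"T": Fraction(1, 3), "V": Fraction(0)}

triples = {}
for combo in combinations_with_replacement("TV", 3):
    triples["".join(combo)] = sum(charge[r] for r in combo)

for name in ("TTT", "TTV", "TVV", "VVV"):
    print(name, triples[name])
# TTT 1    -- positron
# TTV 2/3  -- up quark
# TVV 1/3  -- anti-down quark
# VVV 0    -- electron neutrino
```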
Indeed, there are several facts, both experimental and theoretical, which present almost insurmountable problems for any type of composite model yet considered.
The first problem is that there is no experimental evidence of any composite structure inside leptons and quarks, in spite of the fact that the energy scales and distance scales which have been explored are far in excess of what might have reasonably been expected to reveal smaller structure – by a factor of around 10,000. Specifically, any smaller structure must be at a scale of less than 10^{-22} m, corresponding to an energy of about a million GeV. Although current accelerators are far from being able to probe such energies directly, it turns out that QED calculations can be made to such great accuracy that they can distinguish between an electron whose charge is distributed over the indicated distance and one that is truly a point particle.
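The distance-to-energy conversion here uses the standard relation E ≈ ħc/d, with ħc ≈ 1.97×10^{-16} GeV·m (a standard constant, not quoted in the text):

```python
# Turning a probed distance scale into an equivalent energy scale via
# E ~ hbar*c / d, with hbar*c ~ 1.97e-16 GeV*m (standard constant).
HBAR_C_GEV_M = 1.97e-16

def energy_scale_gev(distance_m):
    return HBAR_C_GEV_M / distance_m

# A structure scale below 10^-22 m corresponds to roughly a million GeV:
print(f"{energy_scale_gev(1e-22):.2e}")   # ~2e6 GeV
```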
A second problem arises if the electron has a very small but non-zero radius. If the electron had smaller constituent particles, they would be confined to an extremely small space and therefore, by the uncertainty principle, would have a very large kinetic energy – larger than the electron's rest energy by a factor of perhaps 10^{7}. This energy could be cancelled by the potential energy due to forces between the constituent particles, but only with extremely fine tuning. Without such tuning, the electron's rest mass should be comparable to the enormous energy of its constituents – which is flatly contradicted by observation.
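The size of this mismatch follows from the same ħc/d estimate. As a rough sketch, assume a confinement radius of 10^{-19} m (a hypothetical choice that reproduces the factor of 10^{7}; the smaller radii quoted above only make the problem worse):

```python
# Uncertainty-principle estimate: a constituent confined in a region of
# size d has momentum p ~ hbar/d, hence kinetic energy E ~ hbar*c/d
# (ultra-relativistic case). Compare with the electron rest energy.
HBAR_C_EV_M = 197.327e6 * 1e-15   # hbar*c in eV·m
ELECTRON_REST_EV = 0.511e6        # electron rest energy, ~0.511 MeV

radius_m = 1e-19  # assumed confinement radius (illustrative, not measured)
kinetic_ev = HBAR_C_EV_M / radius_m
print(f"ratio: {kinetic_ev / ELECTRON_REST_EV:.1e}")  # ratio: 3.9e+06
```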
A third problem is the lack of any contradiction between the standard model and experiment. The model is consistent with all observations made to date (even if it does not always predict them), and it can be consistently extrapolated to much smaller distance scales. It manages all this without even having a way to incorporate constituents of leptons and quarks.
Fourth, and finally, when the coupling strengths of the forces covered by the standard model are extrapolated to the unification scale, they are found to become approximately equal as a natural consequence. No such prediction comes out of composite models. Therefore, if unification is a reality (as just about everyone expects), it would have to be a pure coincidence as far as any composite model is concerned.
In short, no one is taking composite models seriously these days.
For each prediction, there are three possible outcomes. First, the prediction can be verified. This doesn't confirm the whole theory, but it does provide evidence in its favor. Second, the prediction can be falsified if experimental observations contradict it – that is, there are experimental results that are not consistent with the prediction. In this case, there is something definitely wrong with the theory, and it will need to be modified or (if modification isn't somehow possible) abandoned. Third, all efforts either to verify or falsify a prediction can fail, in which case little can be said either way.
Keep in mind that grand unified theories come in many variant forms. GUTs really represent a whole class of models, using a variety of symmetry groups and hypothetical forces and particles. Some predictions are common to all or most models, while others are different from model to model, and experimental results can gradually provide increasing evidence for or against specific models, as well as the theory as a whole.
Here, then, are some of the testable predictions relevant to grand unified theories.
There are two key phenomena in this category:
The list of cosmological facts, and very probable facts, which can be explained by grand unified theories includes:
First, let's just make one little point. There really is no such thing as "bad news" in science. The universe is the way it is, and there's nothing to be gained by fretting about things when they don't turn out according to the best guesses we can make at any given time. We just accept that our best guesses were wrong, and use the new information to make better guesses.
That's why science is always avidly seeking new information – either the information will confirm our guesses (big ego boost there), or else it can be used as a new hint to help solve the problem.
As far as grand unified theories are concerned, it's clear enough they have come up short. Not much progress has been made with them in the last 20 years. The following list readily confirms that.
But that doesn't mean they have been a complete failure. What nature is telling us, obviously, as far as GUTs are concerned, is that we are missing something important. Yang-Mills gauge symmetry theories are very elegant, and they are wonderful when they work well – as is the case with quantum electrodynamics.
However, there must be something else, some additional principles (maybe one or two, maybe many) we haven't grasped yet, which must be added to get a completely satisfactory theory. Supersymmetry is a good candidate for such an additional principle. But we shouldn't let ourselves become too dependent on it, as very soon it could be ruled out (at least, in its current form).
Or maybe supersymmetry will be confirmed, but it isn't the final answer either. It leaves unanswered questions of its own. Perhaps superstrings will be an even better answer.
Whatever new principles appear, ultimately, to help construct a better theory, many of the good features of grand unified theories will remain. It should certainly be expected, for example, that three of the fundamental forces will be unified at what we've been calling the unification energy scale. Gravity, too, will follow at a somewhat higher energy. At that scale, there will be just one type of particle and one type of force. We can't guess exactly what principles will underlie the ultimate theory. But the standard model has to appear as a low-energy limiting case when all is said and done, simply because it gives the experimental results we can already confirm.
Whatever elementary particles and fundamental forces turn out, ultimately, to be – some strange kinks in spacetime, perhaps – there will, quite likely, be some symmetry group analogous to SU(5) and SO(10) which organizes, in the same way, those elementary particles and which treats the fundamental forces as effects of gauge symmetries.
So here are the alternatives:
What such an extension would offer is the possibility of treating all of the known forces except gravity with a single set of field equations. The attractiveness of such a theory is mainly mathematical and aesthetic, since it is a way to use the same fundamental principles and equations to explain the known forces and particles (always excepting gravity).
Adding a new symmetry is not entirely arbitrary, since it interacts with already established symmetries. The simplest symmetry in the standard model is that of quantum electrodynamics. Its symmetry group is denoted U(1) (the 1-dimensional unitary group). The symmetry group of the weak force is denoted SU(2) (the 2-dimensional special unitary group). The symmetry group of the electroweak theory is the direct product U(1)×SU(2). (Strictly speaking, the U(1) factor in the electroweak theory corresponds to weak hypercharge rather than to electromagnetism itself, but the distinction doesn't affect the argument here.)
The symmetry group of the strong force is SU(3), so a unified theory should at the very least contain the direct product U(1)×SU(2)×SU(3). For technical reasons, the group must actually be larger, yet contain this direct product as a subgroup. It has been shown that SU(5) is the smallest symmetry group that will serve, though there are many other (larger) possibilities. Unfortunately, there are a variety of problems in constructing a grand unified theory based on SU(5) or any of the other candidates – one of which is simply finding experimental tests that could distinguish the appropriate symmetry group.
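One way to see why embedding the standard model in SU(5) predicts new physics is to count gauge bosons: a gauge theory has one boson per group generator, with dim U(1) = 1 and dim SU(n) = n² − 1. A short sketch:

```python
# Count gauge bosons: one per group generator.
# dim U(1) = 1, dim SU(n) = n^2 - 1.
def dim_su(n):
    return n * n - 1

standard_model = 1 + dim_su(2) + dim_su(3)  # U(1) + weak + gluons = 12
su5 = dim_su(5)                             # 24 generators in SU(5)

print(standard_model, su5, su5 - standard_model)  # 12 24 12
```

The 12 extra generators of SU(5) correspond to new, very heavy bosons (usually called X and Y) – precisely the particles that can convert quarks into leptons and so mediate proton decay.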
In spite of the difficulties, it seems plausible that a unified theory like this should exist. The main reason is that there are a number of similarities between quarks and leptons. Namely, both quarks and leptons:
A coupling constant describes the probabilities for emission of various particles in the course of an interaction: the larger the constant, the larger the associated probabilities. It turns out that these "constants" depend (logarithmically) on the energy of the interaction. It is at least plausible that the coupling constants of all forces of the standard model – and maybe gravity as well – become equal at a high enough energy (comparable to the energy of the big bang).
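The energy dependence of the couplings can be sketched with the standard one-loop running formula, α_i^{-1}(μ) = α_i^{-1}(M_Z) − (b_i/2π) ln(μ/M_Z). The starting values and coefficients below are the conventional standard-model ones (with the U(1) coupling in the GUT normalization); treat the numbers as approximate:

```python
import math

# One-loop running of the inverse gauge couplings in the standard model.
M_Z = 91.19  # Z boson mass in GeV, the reference energy
alpha_inv_mz = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.45}
b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7.0}

def alpha_inv(group, mu_gev):
    """Inverse coupling strength at energy mu_gev (one-loop approximation)."""
    return alpha_inv_mz[group] - b[group] / (2 * math.pi) * math.log(mu_gev / M_Z)

for mu in (M_Z, 1e8, 1e15):
    values = {g: round(alpha_inv(g, mu), 1) for g in b}
    print(f"mu = {mu:.0e} GeV:", values)
```

Running this shows the three inverse couplings, far apart at ordinary energies, drawing close together near 10^{15} GeV. In the standard model alone they nearly, but not exactly, meet; with supersymmetric particle content the meeting is much more precise, which is one of the standard arguments for supersymmetry.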
It seems that today's separate theories of the electroweak and strong forces are just (very) low-energy approximations of a more theoretically attractive unified theory. So it may be appropriate, for theoretical purposes, to consider only very high energies in formulating a unified theory. Unfortunately, the energies which must be considered are very high – approaching the so-called Planck energy, about 10^{28} electron volts (roughly 10^{19} GeV).
The unification energy scale can be computed approximately. At the Planck energy, the intrinsic strengths of gravitation and the strong force become equal. At an energy "only" a factor of about a thousand less (i.e. around 10^{25} eV, or 10^{16} GeV), the intrinsic strengths of the strong and electroweak forces become equal. However, the electromagnetic and weak forces become equal at a mere few hundred GeV (about 10^{11} eV) – some fourteen orders of magnitude below the strong-electroweak unification scale. This is a rather large discrepancy, indicating a vast difference of some sort between the electroweak and strong forces. It is a rather significant breaking of symmetry, and any unified theory needs to be able to account for it – just as the symmetry breaking within the electroweak theory needs to be explained. The problem of explaining these large differences is sometimes called the "hierarchy problem".
Now, one important virtue of the theory of the strong force (i.e. quantum chromodynamics) is that – like the electroweak theory – it is "renormalizable". That is, there are ways to redefine the masses and coupling constants which enter into the equations so that quantitative computations do not diverge. This is no small virtue, since it has so far proven impossible to develop a renormalizable quantum field theory of gravity.
However, this renormalization is possible only with the simplest forms of the field equations of quantum chromodynamics. These equations actually allow for more symmetry than is postulated to exist – that is, there are no particular observational reasons to expect as much symmetry as the equations permit. A basic mathematical principle (due to Emmy Noether) is that any symmetry in the equations implies a conservation law, and vice versa. (For instance, the gauge symmetry of electrodynamics implies conservation of electric charge, while invariance of physical laws under shifts in time implies conservation of energy.)
It has often been assumed that a quantity called "baryon number" is conserved. A baryon is a particle composed of three quarks, such as the proton or the neutron. Since the proton is the lightest baryon, it could not decay as long as baryon number really is conserved. But there is no particular reason to believe that baryon number really is conserved. (That is, we have no explanation for such a law.) And in a theory that unifies the strong and electroweak forces, we would actually expect that baryon number isn't conserved, so that a proton could decay into leptons (since, according to the higher symmetry, all such particles are "really" of the same kind). If there is no such baryon conservation law, then the field equations should not have as much symmetry as they would in their simplest form.
Indeed, there are theoretical advantages to adding terms to the field equations of a grand unified theory, even though this implies baryon non-conservation (and raises renormalization issues as well). Such terms are proportional to the energy of the quanta involved, but suppressed by coefficients on the order of the inverse of the Planck energy. Hence they do not affect the theory at low energy (or cause renormalization problems there), yet they allow for possibilities like proton decay.
Proton decay, if it occurs at all, is therefore very unlikely at low energies. However, if it is not forbidden, it could occasionally be observed in a sufficiently large collection of protons. Experiments have been conducted for many years to detect proton decay in very large volumes of water, and nothing has so far been observed. This implies that proton decay, if it occurs at all, must have a probability of less than about 10^{-31} per proton per year.
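The logic of these searches is simple counting: even an absurdly long proton lifetime predicts a few decays per year in a big enough tank. A sketch, with an illustrative detector mass and an assumed (hypothetical) lifetime:

```python
AVOGADRO = 6.022e23

# Expected proton decays per year in a large water detector, given a
# hypothetical proton lifetime. Both numbers below are illustrative.
detector_mass_g = 5e10   # 50 kilotons of water
lifetime_years = 1e32    # assumed proton lifetime

molecules = detector_mass_g / 18.0 * AVOGADRO  # H2O molar mass ~18 g/mol
protons = molecules * 10                        # 10 protons per H2O molecule
decays_per_year = protons / lifetime_years

print(f"{protons:.1e} protons, ~{decays_per_year:.0f} decays/year expected")
```

Seeing zero decays per year in such a detector therefore pushes the lifetime limit up toward 10^{32} years and beyond, which is how the per-proton probability bound quoted above is obtained.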
However, there is another sort of consequence of additional terms in the field equations of a grand unified theory, namely the non-conservation of lepton number. This would mean that neutrinos are not massless and that it is possible to have processes called "neutrino oscillation", in which electron, muon, and tau neutrinos convert among themselves. Interestingly enough, ongoing experiments that observe solar neutrinos now indicate that neutrino oscillation occurs – which requires that neutrinos do have a nonzero mass.
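The link between oscillation and mass is visible in the standard two-flavor approximation, P = sin²(2θ)·sin²(1.27·Δm²·L/E), where Δm² is the difference of squared masses (in eV²), L the distance traveled (km), and E the neutrino energy (GeV). The mixing angle and Δm² below are illustrative values of roughly the size discussed for solar neutrinos, not precise measurements:

```python
import math

# Two-flavor neutrino oscillation probability (standard approximation):
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
def oscillation_probability(theta_rad, dm2_ev2, length_km, energy_gev):
    return (math.sin(2 * theta_rad) ** 2
            * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2)

# The key point: if dm2 = 0 (massless or degenerate neutrinos),
# the probability vanishes at any distance -- oscillation requires mass.
assert oscillation_probability(0.6, 0.0, 180.0, 0.004) == 0.0

# Illustrative parameters: theta ~ 0.6 rad, dm2 ~ 7.5e-5 eV^2,
# a 180 km baseline, and 4 MeV neutrinos.
p = oscillation_probability(0.6, 7.5e-5, 180.0, 0.004)
print(f"P = {p:.2f}")
```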
Copyright © 2002 by Charles Daney, All Rights Reserved