Open Questions: The Standard Model


See also: Higgs physics -- Beyond the standard model -- The strong force and QCD

Introduction

Components of the model

Symmetry and symmetry breaking

Symmetries of the standard model

Fields and quantum fields

Yang-Mills gauge fields

The strong force and quantum chromodynamics

The electroweak force

Open questions


Recommended references: Web sites

Recommended references: Magazine/journal articles

Recommended references: Books

Introduction

The "standard model" refers to the synthesis that physicists achieved in the early 1970s of the knowledge which had been rapidly accumulating in the preceding decades regarding the elementary particles and forces of nature.

First, there are the particles, the true "elementary" constituents of matter. Some time was required to identify what these elementary constituents really were. We'll take a very brief look at the history -- feel free to skip ahead if this is all very familiar.

The first particle that can actually be said to have been "found" was the electron, whose existence was deduced by J. J. Thomson as a result of his experiments on cathode rays at the Cavendish Laboratory in England in 1897. What Thomson in fact found was that electric charge was not infinitely divisible, but appeared in discrete units -- quanta (though he didn't coin the term). The electron was a carrier of the unit electrical charge.

At about the same time, in 1900 to be exact, the German physicist Max Planck announced that his analysis of the radiation emitted by hot objects showed that the energy of electromagnetic radiation (such as light) comes only in discrete amounts, which he called "quanta" (thus coining the term). Then, in 1905, the same year Albert Einstein published his special theory of relativity, he also published a paper that explained the "photoelectric effect" in terms of an actual particle that carried the energy. This particle, the quantum of the electromagnetic field, later came to be known (naturally enough) as the photon. But of course, speculation about such a thing had been going on since Newton's "corpuscular theory of light", and similar ideas went back even farther (to the Greeks).

The next particle to be "discovered" -- or rather, recognized, since it had been sitting under physicists' noses for awhile -- was the proton, the nucleus of a hydrogen atom. Ernest Rutherford, also of the Cavendish, gave the name "proton" to the bare hydrogen nucleus in 1920. The proton needed to have one unit of positive electric charge, to balance the electron's negative charge. It also needed to be much heavier than the electron, to account for the roughly known mass of a hydrogen atom.

People quickly realized that atomic nuclei heavier than hydrogen could not consist entirely of protons, since if they were they would have too much electrical charge for their weight. Thus it was conjectured that electrons could somehow coexist or combine with protons in the nucleus to cancel out some of the charge. The easiest way to achieve this was a combination of one electron and one proton to make a neutral particle. So physicists went looking for a neutrally charged particle with about the same weight as the proton. James Chadwick, again at the Cavendish, discovered the neutron in 1932. (Though neutrons outside a nucleus quickly decay into protons and electrons, by virtue of what is called the "weak force", a neutron is now understood not to actually consist of an electron and a proton, being made up, like the proton, of more elementary "quarks" instead.)

Things started to move very rapidly, with theoretical physicists as well as experimentalists starting to contribute. In 1928 Paul Dirac formulated an equation which (partially) reconciled quantum mechanics with special relativity, thereby creating the relativistic theory of electrons. Dirac himself called the new theory "quantum electrodynamics". It quickly appeared that the equation predicted the existence of a (perhaps) new particle having "negative energy" and positive charge. Since there was little experience at that time with theory actually predicting new particles, Dirac thought the particle involved might be simply the proton, but in 1932 Carl Anderson discovered, in cloud chamber experiments studying "cosmic rays", a particle with the mass of the electron and a positive charge -- the positron. This turned out to be the first "antiparticle" and what Dirac's equation was actually predicting.

Physicists were now on the alert for the possibility of "new" particles of matter turning up when they looked hard enough. Observations of the phenomenon of "beta decay", in which a neutron decays into a proton and an electron, indicated that energy and momentum were not being properly conserved. Since no such anomaly could be tolerated, the existence of a new particle to carry the "missing" energy and momentum was postulated in 1930 by Wolfgang Pauli. A little later, Enrico Fermi gave it the name "neutrino". The experiments showed that neutrinos had no electric charge and were very light -- probably (it was thought then) without any mass at all. This meant that they could be observed only indirectly (which is still true). Nevertheless, their existence was quickly accepted because having them made everything work out just right in any particle interaction which involved the "weak" force (such as beta decay).

Theorists remained active, cheered on by the successful prediction of the positron. Around 1935 the Japanese physicist Hideki Yukawa was thinking about an analogy between the electromagnetic force (responsible for electrical energy and magnetism as well as light) and the "strong" force that binds protons and neutrons together in the nucleus. He decided that the strong force should be quantized just as electromagnetism is by the photon. Hence there should be a particle which mediates (or "carries") the strong force, and he calculated it should have a mass intermediate between that of the electron and the proton. Coincidentally, at just that time, various researchers studying cosmic rays with cloud chambers (including Anderson) found a new particle which seemed to be exactly what Yukawa had predicted. Because of its weight intermediate between electrons and protons, the particle was called a "meson".

It later turned out that the particle which had been discovered (the "mu meson" or "muon") was not the one required by Yukawa's theory. (In fact, the muon is not in the class of particles now called mesons, being much more like the electron, which is classed as a "lepton".) About 10 years later when cosmic rays were being studied by means of tracks in photographic emulsions, and more accurate estimates of particle energies (hence masses) could be made, it was realized that there were really two sorts of particles involved, having similar masses but very different properties. The particle Yukawa had predicted (the "pi meson" or "pion") was actually recognized at that later time. Although pions played the role that Yukawa had theorized in mediating the strong nuclear force, it was later realized that they were not really fundamental, being composed of a quark and an antiquark.

The muon, on the other hand, genuinely does seem to be fundamental -- but very odd. A muon, it seemed, was identical to an electron in most respects, except for being about 207 times as heavy. There did not seem to be any obvious purpose for it, nor any theory which predicted it. (And there still isn't!) This circumstance led to I. I. Rabi's famous question, "Who ordered that?"

In spite of the success of searching for new particles in cosmic rays, that type of research ceased to be important in finding elementary particles once accelerators such as "cyclotrons" and "synchrotrons" became powerful enough, in the early 1950s, to "smash" atoms at high energies. These machines were so successful at producing "new" particles that they not only supplanted cosmic ray studies, but they started to produce... chaos and confusion, because there were so many new particles being discovered. Physicists suddenly needed much better theories more than they needed any additional particles.

It took about 20 more years to come up with the required theories, which we now know as the "standard model" of particle physics.


Components of the model

At this point we need to leave off with the historical development of the subject, as there are simply too many threads to follow. It has, nevertheless, been useful to consider the history, since several important concepts have come out. First, we have seen examples of each of the principal types of particles that occur in the standard model. Some of these particles can be considered "matter" particles, since they are the fundamental units that matter is composed of. Others are "force" particles, which mediate or "carry" one of the four fundamental forces recognized in physics. Gravity, electromagnetism, plus the weak and strong nuclear forces are the four fundamental forces of physics. The existence and importance of these forces is the second significant concept.

A third concept is the way in which theoretical constructs are introduced to solve a puzzle but quickly (if they are truly useful) become critical parts of the model, perhaps long before the construct can be observed experimentally. (This is the case, for instance, with "Higgs particles" at this time, whose existence most physicists expect to be verified, even though actual detection is still elusive.)

So let's look at where the standard model is today. Here's a table of the particles that the model recognizes as the fundamental constituents of matter:

Elementary particles of matter

            Leptons                              Quarks
  ---------------------------------    --------------------------------
  Electron    Electron neutrino        Up quark       Down quark
  Muon        Muon neutrino            Charm quark    Strange quark
  Tau         Tau neutrino             Top quark      Bottom quark

This is just the beginning, of course. A lot of details have been left out. For instance, this says nothing about antiparticles, such as the positron. The distinction isn't made in the table; think of the positron (e. g.) as occupying the same box as the electron.

The first two columns consist of leptons. The name denotes "light", as in small mass. Originally physicists also recognized "mesons", which are particles with intermediate mass, and "baryons", which have relatively large masses. Mesons and baryons, however, turned out to be composed of quarks -- 2 in the case of mesons, 3 in the case of baryons -- so they are not fundamental. (Such particles composed of quarks are called "hadrons".)

The arrangement of the table indicates other properties in addition to mass. The most important property is which of the four forces the particle "feels", i. e. is susceptible to. The major distinction is with respect to the strong force. That force is felt only by quarks (and hence by mesons and baryons composed of quarks). The electromagnetic force is felt by particles in all columns except for the neutrinos, which have no electric charge, unlike the others. The weak force is felt by all particles in the table, as is gravity.

The table also indicates particles' electrical charge. The charge is -1 in the first column (or 1 for the corresponding antiparticles). Neutral particles in the second column have a charge of 0 (as do their antiparticles). Quarks in the third column have a charge of 2/3 (-2/3 for the antiparticles), and quarks in the fourth column have a charge of -1/3 (1/3 for the antiparticles).

The strong force has its own type of "charge" associated with it, called a "color" charge. This charge is not specified as a number but instead (by convention) as "red", "green", or "blue". Only quarks have color charge. Leptons have none, since they don't feel the strong force. One more piece of terminology: historically, each of the different types of quark is sometimes referred to as a different "flavor".

The rows in this table are significant as well as the columns. Each row is said to represent a "generation". Only particles in the first row (electrons, electron neutrinos, and the up/down quarks) occur in ordinary matter, since these are the only ones which do not decay into lighter particles. As far as we can tell, they are absolutely stable. Particles in the second and third rows are heavier, decay readily, and do not occur in ordinary matter. Except for those differences of mass and stability, particles in the second and third rows seem to be exactly like those in the corresponding columns of the first row. Various observations (including some from cosmology) have made it virtually certain that there are no additional particle generations. But why there should be more than one -- and three in particular -- remains a total mystery.

There is one additional property which is shared by all particles in the table. It is called "spin", since it is somewhat like angular momentum possessed by everyday spinning objects. Of course, like most quantities pertaining to things in the quantum world, it occurs only in discrete amounts; that is, it is quantized. The smallest possible nonzero amount of spin is 1/2 (or -1/2). The sign of the spin (plus or minus) is referred to as "up" or "down" (no relation to quarks of the same name). All particles in the table have a spin of 1/2 or -1/2.

Are there any particles with other values of spin? Yes, definitely. Most common are particles with a spin of 1 (or -1). Such particles, or any particle whose spin is an integral value (including 0) are called "bosons", because they behave according to a type of statistics called Bose-Einstein statistics (after Albert Einstein and Satyendra Bose, who came up with the notion). Particles with half integral spin, such as the ones in the table above, are called fermions, because they behave according to Fermi-Dirac statistics (named for Paul Dirac and Enrico Fermi).

These two types of statistics governing the behavior of particles make a profound difference. Specifically, two fermions of the same type having exactly the same quantum numbers can not occur together in an assemblage of particles. This is the Pauli exclusion principle, which was first enunciated by Wolfgang Pauli. Bosons are not subject to any such restriction. It is this property of fermions which dictates, for example, how electrons are distributed in "shells" within an atom -- and ultimately, therefore, leads to the differing properties of distinct chemical elements, not to mention the resistance to compression of bulk matter. (It is why, although matter is actually mostly empty space, we can sit on a chair without falling through it.)

What sorts of particles, then, are bosons? They are, for the most part, the particles which carry one of the fundamental forces. The bosons, as it happens, cannot be arranged so neatly in a table like the one above. Bosons having a spin of 1 (or -1) mediate the forces of electromagnetism (the photon), the weak force (W and Z bosons), and the strong force (gluons). Such bosons are called "gauge" bosons, because their force field can be described with a "gauge" theory. (To be explained shortly.) A field theory of gravity describes the field as being carried by a spin 2 boson, the graviton. There is also a hypothetical spin 0 boson (possibly more than one) -- the infamous Higgs boson. It is sometimes called a "scalar" boson because it generates a scalar quantum field (a field whose value at every point is a scalar number, as will be described later.) Bosons with a spin of 1 are known collectively as "vector" bosons, because they are associated with vector fields.

This distinction between fermions and bosons (or matter and force particles, if you like) is quite important. The statistical differences in how such particles behave in large numbers explain a great deal of the bulk properties of matter and the behavior of forces. The distinction also turns out to be crucial to the important notion of "supersymmetry" -- but that is outside the realm of the standard model.


Symmetry and symmetry breaking

Please be patient as we take a little excursion into the idea of symmetry.

Everyone knows what symmetry is in the everyday world. Usually it's associated with similarities or repetitions occurring in physical objects. Such as: the way the left and right sides of most animal bodies are (almost) mirror images of each other, the way a geometrical pattern repeats in wallpaper, the way that a crystal looks the same when turned through specific angles.

Ultimately, symmetry can be understood as a mathematical concept, something which is every bit as mathematical as number or geometrical shape. In fact, principles of symmetry turn out to be basic to distinguishing different types of geometry. Technically, symmetries are described abstractly by the mathematical concept of a "group". A "group" is a mathematical system consisting of a set of elements and a way of combining or "composing" two elements to produce a third, subject to specific rules which are abstractions of the way that numbers behave when added.

With regard to symmetry, the group elements consist of "transformations" or "operations" on a set of objects. (Groups can arise in other ways, but they can always be "represented" as transformations.) An example is the set of points in a plane. The group of transformations might be all mappings from the set to itself in such a way that distances and angles are preserved -- or, less abstractly, translations, rotations, and mirror reflections. Another example is "permutations" of a finite set -- any rearrangement of the set which interchanges set elements while leaving the set itself unchanged.

So we have two somewhat different types of things here. First of all, there is a set of objects, which may be finite or infinite in number. A finite example might be a deck of cards, consisting of 52 distinct physical objects. The set of points in a plane would be an infinite example. Second, we have sets of transformations that operate on the objects in the sets of the first type. A natural set of operations on a deck of cards is all possible shufflings of the deck -- which can be shuffled in about 8 x 10^67 ways. So the set of operators is still finite, though rather large. The set of all translations of points in a plane, however, is infinite, like the underlying set, even though it isn't even the set of all possible 1-to-1 mappings from a plane to itself. Keep this distinction in mind. The underlying set can be anything at all. The "symmetry", however, is a collection of transformations of the underlying set, which form a mathematical group if properly chosen.
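
For the curious, here is a small Python sketch of the card-deck arithmetic and of the "group" idea. The numbers come from the paragraph above; the use of Python and the particular random shuffles are purely illustrative.

    # The number of ways to order a standard 52-card deck: 52 factorial.
    import math, random

    print(f"{float(math.factorial(52)):.3e}")   # about 8.07e+67, i.e. roughly 8 x 10^67

    # Shuffles form a group: composing two shuffles gives another shuffle.
    deck = list(range(52))
    shuffle_a = random.sample(deck, k=52)          # one permutation of the 52 positions
    shuffle_b = random.sample(deck, k=52)          # another permutation
    composed = [shuffle_a[i] for i in shuffle_b]   # apply shuffle_b first, then shuffle_a
    assert sorted(composed) == deck                # the result is again a permutation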

There will almost always be many possible symmetries for any given set. They may range from any permutation at all of the set elements (the largest possible such group), all the way down to just one single transformation -- the one that takes every set element to itself. (Known as the trivial group, for obvious reasons.)

Ordinarily, one considers just a subset of transformations on the set which in some sense leaves the set the "same" after one of the transformations is applied. If the underlying set is an infinite plane with a wallpaper design, the symmetry group might consist of just those translations which leave the pattern unchanged. If the set is a deck of cards, it might be just those permutations which may change only the rank, but not the suit, of each card (or vice versa).

A subtle point to note here is that it is the (sub)group of transformations which in effect defines what is meant by "same". This might correspond to some easily recognizable property of the set's elements. For instance, with playing cards, it may correspond to what we understand about having the "same" suit or the "same" rank. In this case, the set of transformations tells us something about the set itself -- provided we have a given and fixed definition of the meaning of "same". But the transformations could also be any arbitrary group of transformations. In that case, set elements would be considered the "same" if there is a transformation which takes one element into the other. (Mathematicians say that set elements related in this way are in the same "orbit" or "equivalence class".)

The specification of a transformation group, then, on a set helps us understand what is meant by "same", or "equivalent", or "similar". Physicists use the term "symmetry breaking" when the notion of sameness starts to get a little fluid. That is, a theory may specify a large degree of symmetry, i. e. a relatively large group of symmetry transformations. Yet in practice, less symmetry -- a smaller group -- is actually observed. For instance, an egg balanced on its end is positioned symmetrically, since it would look the same under any rotation of the surface it's resting on. But this symmetry (and possibly the egg) is likely to be broken when the egg falls over. Then there will be much less symmetry, even though the theory can't say exactly what the result will look like. We still have the "same" egg, though it now looks somewhat different depending on how the surface is rotated. We might say that all views of the egg are "equivalent", even though not "identical"... but these are semantic differences which are only captured precisely by specifying the relevant transformation groups.

To give another example, consider political systems. In a pure democracy, every person is the "same" in the sense of "one person-one vote", even though each person is still a distinct individual, with many distinguishing characteristics. The pure democracy has "more symmetry" than real-world democracies or oligarchies, in which some individuals are "more equal" than others. The symmetry group of a pure democracy is the set of all permutations of people. If the democracy is not pure, some members lack full rights. Children or foreigners, for example, may not be allowed to vote. Then the symmetry group is smaller, since there is no transformation that can exchange a person allowed to vote with one who isn't. A "pure dictatorship" has the lowest possible symmetry, since one individual has all the power, and determines exactly how much (or how little) influence everyone else has.


Symmetries of the standard model

This is all relevant to physics in general, and the standard model in particular. Because a key feature of the model is that it uses symmetries, explicitly in the form of certain groups, to classify the fundamental particles and their interactions. And not only that, but the very existence of the interaction forces follows as a necessary consequence of the symmetries.

Recall, from the history lesson, that in the 1950s physicists began to discover an increasing profusion of seemingly different particles. This made it difficult to tell which ones were "elementary" and which weren't, to say nothing of the confusion of simply having to keep track of them all. But there was a simple expedient to reduce the number of particles and hence the degree of confusion: just regard certain groups of particles as actually being the "same" in kind. How does one do this? By introducing symmetry transformations which carry "different" particles into each other.

We have already seen one example of this. Associated with every particle there is a corresponding antiparticle. (In a few cases a particle may be its own antiparticle.) Thus for electrons there are anti-electrons (i. e. positrons), for protons there are anti-protons, for neutrinos there are anti-neutrinos, for quarks there are anti-quarks, and so on. There is a name for the symmetry operation which takes a particle to its antiparticle. It is called "charge conjugation" symmetry, because if the particle has electric charge, the antiparticle under this operation has the same amount of charge but with the opposite sign. (It also applies to electrically neutral particles, even though the charge doesn't change, since it's 0.)

Another early example of this trick was introduced by Werner Heisenberg in 1932. Except for their charge and a slight difference in mass, the proton and neutron are very similar. Both are nucleons subject to the strong force which binds them together in atomic nuclei. Heisenberg suggested that they could be regarded as just variants of one sort of particle, a nucleon, differing only in a property he called "isospin" (or "isotopic spin"). There is then a symmetry operation which transforms a proton into a neutron, and vice versa.

Quantum mechanics adds an additional "twist" to this idea. The theory enables one to form linear "superpositions" of related objects like neutrons and protons, so that it makes sense to have (conceptually) a particle which is 23% proton and 77% neutron. That is, because there is inevitably an uncertainty (Heisenberg's principle) in measuring the property of isospin, one can have a situation where any particular measurement would have a 23% chance of finding a proton and a 77% chance of finding a neutron. So one can actually form a continuous group of transformations taking a proton to some arbitrary mixture of proton and neutron.
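
To make the "23% proton / 77% neutron" idea concrete, here is a minimal Python sketch. The percentages come from the paragraph above; the rule that probabilities are the squared magnitudes of the quantum amplitudes (the Born rule) is standard quantum mechanics, though not spelled out in the text.

    # A "23% proton / 77% neutron" superposition: the state is a pair of
    # amplitudes, and measurement probabilities are their squared magnitudes.
    import math

    amp_proton  = math.sqrt(0.23)   # amplitude for finding a proton
    amp_neutron = math.sqrt(0.77)   # amplitude for finding a neutron

    # The state must be normalized: the probabilities add up to 1.
    assert abs(amp_proton**2 + amp_neutron**2 - 1.0) < 1e-12

    print(f"P(proton)  = {amp_proton**2:.2f}")    # 0.23
    print(f"P(neutron) = {amp_neutron**2:.2f}")   # 0.77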

The symmetry group we're dealing with here has a 2-dimensional "representation" that arises from its operation on the proton-neutron doublet. One can think of the operations as rotations in an abstract space. So, mathematically, the group turns out to be isomorphic to what is called SU(2). (SU stands for "special unitary": the group consists of 2x2 complex matrices which are unitary and have determinant 1.) SU(2) is closely related to the group of rotations of an ordinary sphere in 3-dimensional space -- every such rotation corresponds to exactly two elements of SU(2). As a space in its own right, SU(2) is 3-dimensional: it takes 3 parameters to specify an element, just as it does for rotations in ordinary space.
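
For readers who like to see the matrices, here is a small Python sketch (using the numpy library) of what "special unitary" means for SU(2). The particular matrix is arbitrary; any complex numbers a and b with |a|^2 + |b|^2 = 1 would do.

    # A sample element of SU(2): a 2x2 complex matrix that is unitary and has
    # determinant 1 (a standard parametrization, not taken from the text).
    import numpy as np

    a, b = 0.6 + 0.0j, 0.0 + 0.8j          # any a, b with |a|^2 + |b|^2 = 1
    U = np.array([[a, -np.conj(b)],
                  [b,  np.conj(a)]])

    assert np.allclose(U @ U.conj().T, np.eye(2))   # unitary: U * U-dagger = identity
    assert np.isclose(np.linalg.det(U), 1.0)        # "special": determinant 1

    # Acting on a proton-neutron doublet mixes the two components.
    doublet = np.array([1.0, 0.0])          # a "pure proton" state
    print(U @ doublet)                      # now a superposition of proton and neutron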

Nowadays, protons and neutrons are no longer considered to be fundamental particles, so this isospin symmetry isn't fundamental either. Instead, the particles listed in the table above are regarded as fundamental, so we are interested in symmetries which relate them. But the procedure is the same. We look for sets of particles which can be converted into each other by transformations in a symmetry group. Such a set is called a multiplet. (Or, mathematically, an equivalence class.)

Abstractly, one could construct multiplets arbitrarily. However, the goal is to find multiplets where the transformations are physically important. It happens that there are just a small number of interesting ways to do this. For simplicity to begin with, consider a set of only 2 particles -- a doublet. We could form a doublet with just, say, an electron and a bottom quark. However, that doesn't turn out to be physically very interesting.

The table of elementary particles has already been arranged to suggest what transformations might actually be interesting. How about a doublet consisting of an electron and an electron neutrino? Bingo! That turns out to be very interesting. As it happens, one can extend such a transformation to one which has doublets consisting of the leptons in each row of the table, and also of quarks in each row of the table -- six doublets in all. That is, the same transformation also exchanges (say) top and bottom quarks. By analogy with the property of isospin that distinguishes protons and neutrons, the property which distinguishes particles related by the weak nuclear force is called "weak isospin". The symmetry group involved here is also SU(2). While it is not in fact the same group of operations which has proton-neutron doublets, it is mathematically isomorphic to it.

But there is more. The symmetry transformations which do all these exchanges have an additional and immensely important property. That is, the equations which describe the weak nuclear force are invariant under the application of one of these transformations. In other words, the equations remain true when one particle of any doublet is exchanged with the other. (Recall that all particles in the table feel the weak force, hence all particles are members of some doublet or other.)

This circumstance is almost "magic". It didn't automatically have to be true. Yet it is. And the fact that it is can be regarded as a very deep physical truth. As we shall see shortly, the mathematical magic is common to the standard model's description of electromagnetism and the weak and strong nuclear force -- three of the four fundamental forces. In other words, all of these forces can be described by equations which are invariant under a group of symmetry transformations which exchanges members of various particle multiplets.

So how does this apply to the strong nuclear force? There is one crucial fact about the strong force which isn't indicated in the table and hasn't been discussed yet. That is the fact that each and every quark carries an additional property called a "color charge". You can think of color charge as analogous to electric charge, except it comes in only three possible values. (Electric charge can be an arbitrarily large number, though always a multiple of a fundamental charge unit.) Three of the values are called "red", "green", and "blue". (The names are arbitrary, but chosen to fit the "color" metaphor, even though they have nothing to do with visible color.) There are three additional color values, possessed by antiparticles: anti-red, anti-green, and anti-blue (or "cyan", "magenta", and "yellow").

Any quark, of whatever flavor, can have precisely one color charge value. This means that there are in fact 18 different quarks, counting color as well as flavor, and 36 quarks total, counting antiparticles. You can see why it's so important to use symmetry principles to reduce the number of distinguishable particles one has to deal with. The charge conjugation symmetry of particles and antiparticles, for example, cuts in half the number of "different" particles that need to be considered.
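
The counting in the last paragraph can be spelled out explicitly; here is a trivial Python sketch that just enumerates the flavor/color combinations mentioned in the text.

    # Counting quark states the way the text does: 6 flavors x 3 colors = 18,
    # and twice that (36) once antiquarks are included.
    from itertools import product

    flavors = ["up", "down", "charm", "strange", "top", "bottom"]
    colors  = ["red", "green", "blue"]

    quarks = list(product(flavors, colors))
    print(len(quarks))        # 18
    print(2 * len(quarks))    # 36, counting antiquarks as well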

Physicists sought one symmetry group to keep track of all these variations. As you would anticipate, there are particle multiplets which should be permuted by transformations of this group. The definitions of the different quarks and their flavors have already been rigged to accommodate such a group. The multiplets involved are simply triplets consisting of quarks of some specific flavor and each of the three colors. There are six such multiplets (twelve counting antiparticles). Each of the multiplets is permuted in the same way by every transformation of the group. Mathematically, the group is isomorphic to what mathematicians call SU(3) -- the group of 3x3 complex matrices which are unitary and have determinant 1. This group is harder to visualize geometrically; as a space in its own right it has 8 independent dimensions.

This group acts only on particles that feel the strong force, so it does not affect the leptons. But, as you might expect, it has the magical property that the equations which describe the strong force are invariant under the action of the group. That is, the equations remain valid when all particles are permuted by a group operation. In this manner, we get a concise description of the other half of the standard model, the part which deals with the strong force.

It seems rather magical and mysterious that there should be some relationship between physical forces and the mathematical abstraction of shuffling particles around as if they were playing cards. And in fact, in some sense, it is a great mystery. Yet it is possible to understand better how this happens, by analogy with other kinds of symmetries and physical forces which are a little more familiar. To see how this works, we need to learn a little about "gauge fields". And before we can do that, we need a few words about the idea of "fields" in general.


Fields and quantum fields

Mathematically, a field in 3-space (i. e. ordinary Euclidean space), is simply an assignment of a number, a vector, or some more complicated mathematical object (such as a tensor) to every point of the space. (The term "field" in mathematics also has a very different meaning, referring to an algebraic structure that models the arithmetic of rational numbers. We are not talking about that kind of field.)

For a simple example, if you consider the temperature at every point in space, you have a "scalar" field. ("Scalar" is just another name for a single number.) If you are concerned with hydrodynamics, then at every point in a certain volume through which a fluid is flowing, there is a specific direction and velocity associated with the flow. Direction and velocity can be encoded as a 3-component vector, so the fluid flow can be described by a vector field. The electromagnetic field due to a charge or an electric current corresponds to a vector having 6 components at each point (3 for the electric field and 3 for the magnetic field). These are just a few of the physical situations which can be represented by fields.
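
As a concrete (if entirely artificial) illustration of "a value assigned to every point", here is a short Python sketch. The particular formulas are made up for illustration only and have no physical significance.

    # A field assigns a value to every point in space.

    def temperature(x, y, z):
        """A scalar field: one number at each point."""
        return 300.0 - 0.01 * (x**2 + y**2 + z**2)

    def fluid_velocity(x, y, z):
        """A vector field: a 3-component vector at each point."""
        return (-y, x, 0.0)   # a simple swirling flow around the z-axis

    print(temperature(1.0, 2.0, 3.0))
    print(fluid_velocity(1.0, 2.0, 3.0))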

Physical forces -- such as those of electricity and magnetism -- are very naturally modeled as fields. That is because kinematic forces (i. e. forces that cause movement, according to Newton's laws) have both a strength and a direction, and so they are naturally represented as vectors. If a force is present at every point in space but changes from point to point what you get is precisely a force field. Gravity presents another obvious example of a force field. (Its strength decreases as you move farther from a large mass such as the Earth.) Yet another example is wind force.

When Newton came up with his theory of gravity around 1665, philosophers had trouble with it because it seemed to require "action at a distance". The Earth orbited the Sun because of the effect of gravity, in spite of the vast intervening distance (93 million miles). Philosophers worried about how such a thing was possible. By comparison, a force caused by wind is easier to understand, since there is something (air in the case of wind) which "carries" the force throughout a region. Philosophers wanted to know what that something was which carried the force of gravity in Newton's theory. Well, they could worry all they wanted, but it didn't make much difference, because the theory obviously worked very well, even if it wasn't clear exactly how it worked. (This happens a lot with advanced theories in physics.)

The problem, however, did not go away, even though it was "only" philosophical in nature. When Einstein came along in 1905 with his special theory of relativity, the problem returned with a vengeance. The difficulty was that relativity requires that not only physical objects but even physical effects and "signals" are limited to moving no faster than the speed of light. Does this law apply to physical effects like gravitational force? Indeed it does. If the Sun could suddenly stop gravitating, it would take about 8.3 minutes before that would be noticed at the Earth (at exactly the same time there would be visible clues that something fairly catastrophic had happened).
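
The 8.3 minute figure is easy to check; here is the arithmetic as a short Python sketch, using the 93-million-mile distance quoted above and the standard value of the speed of light.

    # Light-travel time from the Sun to the Earth.
    miles_to_meters = 1609.344
    distance_m = 93e6 * miles_to_meters       # about 1.5e11 meters
    speed_of_light = 2.998e8                  # meters per second

    travel_time_min = distance_m / speed_of_light / 60
    print(f"{travel_time_min:.1f} minutes")   # about 8.3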

In other words, forces like gravity propagate through "empty" space just as light does, at the same speed even. Physicists and philosophers for a long time had problems with the notion of anything propagating through "empty" space. As long as light was regarded as a kind of vibrational wave phenomenon, it was thought there needed to be some medium that vibrated. Thus the 19th century concept of the "aether". The Michelson-Morley experiment and Einstein's special theory of relativity pretty much quashed the aether idea, but the mystery remained.

Then quantum theory arrived (in the 1920s) bearing the notion that light could be regarded as having a particle nature as well as a wave nature. Light, already successfully understood via Maxwell's equations as waves of electromagnetic force, was also comprehensible as being carried by particles -- photons. People naturally wondered: could other forces also be understood as being carried from place to place by particles? Perhaps there were such things as "gravitons" that carried the force of gravity?

To this day, physicists have not succeeded in applying this idea completely to gravity. But that's the only sort of force in the world of quantum physics that has not succumbed to a dualistic wave/particle mathematical description -- a "quantum field" theory. A quantum field theory is simply what one gets when the principles of quantum mechanics are applied to a classical field theory. (Physicists use "classical" to mean "non-quantum-mechanical".) When this is done, the field is said to be "quantized". In this process, quantities which were observables in the classical theory (such as the components of the electric or magnetic field at a particular point) are represented by certain mathematical objects, specifically "operators" on a "Hilbert space".

In a quantum field theory, any force describable by a field can also be interpreted as being carried by some sort of particle -- a quantum of the field. Conversely, any particle can be viewed as generating a field of which it is the quantum, and which can in principle carry a force. The way this works is that any two particles which interact via some sort of force do so by exchanging a third particle which carries the force between them. This is the case regardless of whether the force is attractive or repulsive.

Wait a minute. If all particles can be considered to generate force fields, what are the forces associated with, say, electrons or quarks? Why do we never hear about them? The answer is simple. All matter particles, such as electrons and quarks, are fermions which obey Fermi-Dirac statistics. The Pauli exclusion principle says that two such particles cannot occupy the same quantum state. The upshot is that matter particles (fermions) cannot pile up in the same state in large numbers, so they cannot build up a coherent, measurable force field the way bosons can. Bosons are under no such constraint. Hence all the forces of importance in particle physics are carried by bosons.

There is another important issue to note here. When a boson is exchanged between two other particles to carry a force, where does it come from? The answer is: it comes from the vacuum, out of nowhere (so to speak). How can that be? Doesn't it violate the law of conservation of energy? The answer is: no, because the force particle can exist for just a very short time, but long enough to travel between the two particles which feel the force. This is because of the Heisenberg uncertainty principle. The product of the uncertainty in time and the uncertainty in energy must be at least of the order of Planck's constant. If the time uncertainty is small, the energy uncertainty must be large. This means that within a very small time interval, particles of substantial energy may appear without causing any problems. Such particles are called "virtual" particles. The vacuum is crammed full of them.

Do virtual particles have mass? Yes, insofar as mass is equivalent to energy, they do. But a lot depends on how fast the virtual particle moves. If the particle moves at the speed of light, as a photon does, it must have a so-called "rest mass" of zero, since otherwise special relativity would require that the particle should have infinite energy. On the other hand, there are force particles which do have non-zero rest mass, such as the bosons which carry the weak force (the W and Z bosons). This presents something of a dilemma for such particles and the force they carry. Because of their rest mass, they cannot have arbitrarily small energy as a photon can. Hence there is an upper bound on how long such particles can exist in a virtual state without violating the Heisenberg relationship. Hence, since they can't travel any faster than light, there is an upper bound on how far they can carry a force. Hence there is an upper bound to the range of the force they carry. This is the fate of the weak nuclear force -- its range cannot extend beyond a certain distance (about 10^-18 meters).
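
The range quoted at the end of the last paragraph can be estimated with a back-of-the-envelope calculation: the range is roughly hbar*c divided by the rest energy of the exchanged boson. Here is a Python sketch; the numerical inputs (hbar*c of about 197.3 MeV fm, a W boson rest energy of about 80.4 GeV) are standard reference values, not taken from the text, and the result is only an order-of-magnitude estimate.

    # Rough estimate of the weak force's range from the uncertainty principle.
    hbar_c_MeV_fm = 197.3           # hbar * c, in MeV * femtometers
    m_W_MeV = 80_400.0              # W boson rest energy, about 80.4 GeV, in MeV

    range_fm = hbar_c_MeV_fm / m_W_MeV      # range ~ hbar*c / (m c^2), in femtometers
    range_m = range_fm * 1e-15              # 1 fm = 1e-15 m

    print(f"Estimated weak-force range: {range_m:.1e} m")   # about 2.5e-18 m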

Conversely, when the boson that carries a force has a zero rest mass, the range of the force is infinite. This is the case with the electromagnetic force. (Although its strength drops off as the inverse of the distance squared, this never actually becomes zero.) The same is true of gravity -- if there really is a graviton, it must have zero rest mass. The gluons which carry the strong force are also thought to have zero rest mass. So why doesn't the strong force seem to have infinite range? Good question. It may well have such range, in principle. However, the theory of QCD ("quantum chromodynamics") which governs the strong force does not allow for computing how the force behaves at large distances -- this is a major open question in the theory. What appears to happen is that the force does not decrease in strength with distance. It may even increase. But when two quarks which are bound by the force are separated, the potential energy between them becomes so large that new particles are created out of that energy. The details of this phenomenon make up the still open question of "quark confinement".

We note, for future reference, that when a force-carrying boson has a non-zero rest mass, as is the case with W and Z bosons, the theory becomes much more complicated. There are problems with "renormalizing" the theory to avoid producing infinities in theory calculations. Symmetries are broken. All sorts of bad stuff happens. This is precisely what occurs with the electroweak theory, and it gives rise to a need to explain the symmetry breaking. This problem is what the so-called "Higgs field" and "Higgs boson" have been proposed to deal with. There are major open questions here.


Yang-Mills gauge fields

But we're getting ahead of the story. Time to go back and explain gauge fields. They arise in a natural way when symmetry operations are allowed to act "locally", that is, independently at each point in space: a field is then needed to account for the variation in the symmetry choice from point to point. And this field, in turn, gives rise to forces and bosons that carry the forces -- which turn out to be exactly the fundamental forces (electromagnetic, weak, strong, and perhaps gravitational) we're interested in. In short -- pick the right symmetry group, let it act locally, and out pops a force.

Most measurements have a certain degree of arbitrariness associated with them. In the case of temperature, there are different temperature scales one can use -- Fahrenheit, Centigrade, etc. If you switch from one scale to another, all the numbers associated with a temperature field would change, but the physical situation would not change at all. Switching measurement scales provides an example of a global transformation, a global symmetry, because nothing really changes. Similarly, if you had a velocity vector field, all the vectors would change if you were in a different reference frame moving at constant velocity with respect to the old one. Yet nothing in the field itself would really change. This is said to be a global transformation of the "gauge".

Suppose, however, you find arbitrary changes to the field values at every point, in any way which your "gauge" allows. No global transformation could account for this. More precisely, if you measure a temperature field at every point in some volume and you find that it is different at every point, you would not attribute this to a recalibration of your thermometer for each measurement. You would instead suppose that there is something that "causes" the variation. Perhaps the results differ in a systematic way. For instance, measurements near the center of the volume might be much higher, with the values falling off the farther you look from the center. You would conclude that there is some real physical process going on -- like a strong heat source in the center of the volume -- not just some capricious recalibration of your thermometer.

Now consider another situation. Suppose someone is rolling bowling balls on a perfectly flat horizontal surface. The balls are always rolled in the same direction, with the same velocity. Then on an ideal frictionless surface, no matter where you measure, the value of the velocity vector at every point would be exactly the same. (This is Newton's first law.) But suppose you made the measurement and found this wasn't the case. Then you would have to conclude that "something" is happening to change the speed and/or direction of the balls -- some "force" to be precise. Perhaps there's a strong wind, or maybe the bowling balls are made of steel and there's a magnetic field. The net result is that any variation you find in the field that you measure from what you expect to measure can be accounted for by a force. (This is Newton's second law.)

Let's return to quantum fields. Specifically, consider the field generated by an electron. In a quantum field, we must consider the wave nature of the particle. One of the essential characteristics of a wave is a value called the "phase". Abstractly, a wave represents simply the periodic variation of some quantity, such as height or pressure or magnitude of a force. The phase simply designates some particular point in the cyclic variation. We can measure phase the same way as angles are measured. Concretely, thinking about points on a circle, any point corresponds to a particular angle, which can range from 0 to 360 degrees before returning to the start. Since phase is basically an angle, we can also think of it as a rotation of a circle through that angle. Such rotations form a group, which mathematicians call U(1).
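
Since U(1) may sound abstract, here is a tiny Python sketch showing phases as complex numbers of magnitude 1; composing two phase rotations simply adds the angles. The particular angles are arbitrary.

    # Phases as elements of U(1): complex numbers of magnitude 1.
    import cmath, math

    def phase(degrees):
        return cmath.exp(1j * math.radians(degrees))

    a, b = phase(40), phase(75)
    assert abs(abs(a) - 1) < 1e-12 and abs(abs(b) - 1) < 1e-12   # both have magnitude 1
    composed = a * b
    print(math.degrees(cmath.phase(composed)))   # about 115 -- the angles add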

Of course, the choice of starting point from which phase is defined is arbitrary. This is much like the arbitrariness in selecting the 0 point on a thermometer. Since phase is not well-defined, it is not actually measurable directly. However, differences in phase can be measured -- just as when measuring temperature, we are actually measuring the difference in temperature between the air (or whatever) and the freezing point of water.

In any case we can think of the phase of the field generated by an electron as a gauge field, where the local symmetry corresponds to a different choice of starting point for defining the phase at each point. It turns out that when all of the mathematics is done for the local gauge field theory describing the phase of an electron wave, the force that pops out in order to preserve local gauge symmetry is none other than the electromagnetic force. And the field of this force is the field corresponding to a massless spin 1 boson which carries the force. This particle is precisely the photon.

The theory which results from all this is called quantum electrodynamics (QED). It was originally formulated in an ad hoc way by P. A. M. Dirac in the course of producing a quantum theory of electrons which obeyed the principles of special relativity -- the same theory which predicted the existence of positrons.

This theory was greatly extended in the 1940s by Richard Feynman, Julian Schwinger, and Sin-itiro Tomonaga, who put it on a more rigorous mathematical basis. In particular, they showed that it was "renormalizable", which means that it could be arranged to avoid the production of infinite quantities in calculations. Such "infinities" are of course not acceptable in a proper theory. Unfortunately, quantum theories tend to produce infinite values rather easily unless great care is taken. This is because an unlimited number of "virtual" particles can come into existence briefly out of the vacuum, and each must be taken into account in the calculations.

This new formulation of QED used the ideas of local gauge symmetry we have been discussing. The magic of this approach was that renormalizability was a consequence of the local gauge symmetry. This symmetry, in turn, depended on the fact that the boson that generated the field, the photon in this case, had zero rest mass, so that the field could have unlimited range and actually provide for the local symmetry everywhere.

This formulation of QED was so successful and so elegant it was inevitable that physicists would attempt to apply it to other forces. So it was that in 1954 C. N. Yang and R. L. Mills had the idea of using the SU(2) symmetry group which describes isospin as a local gauge field symmetry. They hoped that by doing this they would get fields that would ensure the gauge field symmetry, and that these fields could in turn be regarded as arising from observable particles -- spin 1 bosons -- that mediated the strong force in the same way as photons mediate the electromagnetic force.

In this hope, they were in fact not successful. Ultimately, the problem is that protons and neutrons are not really elementary particles, since they're composed of quarks. But there were other problems that caused difficulties right away. To begin with, the theory was necessarily more complicated. Mathematically, SU(2) is what's called a non-commutative group, because its operations do not commute with each other. (A common synonym is "non-abelian", in contrast to abelian groups, named after the mathematician N. H. Abel, where the operations are commutative.) That is, the result of two operations applied consecutively depends on the order in which the operations are performed.

This is not a fatal problem, but it makes the theory more difficult to work with. The theory is also more complicated because it requires 3 gauge bosons instead of just one. Further, because the group is non-abelian, each boson may carry abstract charges, unlike photons, which have no electric charge. This means that the bosons are capable of interacting with each other, which adds greatly to the complexity.

The fatal problem was that the 3 massless gauge bosons which were predicted by this theory to mediate the strong force simply never turned up experimentally. At first there was some hope that observed particles such as the pi mesons might be what was required, or perhaps some other meson such as the rho. Unfortunately, none of these mesons is massless, and the pion does not even have spin 1 (it is a spin 0 particle), so none of them could be the predicted gauge bosons.

Nevertheless, the elegance of this type of theory was so compelling that physicists could not give up on it. And so theories of this type are still called "Yang-Mills gauge theories", even though Yang and Mills' initial attempt failed to achieve its objective.


The strong force and quantum chromodynamics

So far in the story, we have a nice Yang-Mills gauge theory of electromagnetism called QED. The objective remains to come up with similar theories for the weak and strong (nuclear) forces.

It was tremendously important to understand the strong nuclear force, since that's what holds atomic nuclei together. A decent theory here should be able to explain why nuclei have the properties they do -- such as their masses and their participation in fission and fusion reactions. In addition, most of the menagerie of new particles being created in the "atom smashers" of the 1950s and 1960s were hadrons, which also owed their properties to the strong force.

The weak force, on the other hand, wasn't quite so interesting. It was known to be involved with the decay of neutrons into protons and electrons, and with a few other reactions involving more esoteric particles, but it got the name "weak" partly because it didn't seem to actually do all that much. Consequently, physicists spent a lot more effort, both experimentally and theoretically, working on the strong force. Fortuitously, developing a Yang-Mills theory of the strong force also turned out to be just a little simpler than for the weak force.

As already implied, the symmetry group SU(3) is used to construct a Yang-Mills gauge theory for the strong force. In this case, the symmetry group SU(3) is represented by permutations of triplets consisting of three quarks of the same flavor and each of the three color charges. The strong force arises to account for whatever change results from a transformation of SU(3) chosen to act independently at each point. The particles that carry the force and arise from the field representing the strong force are called gluons. There are 8 distinct gluons, and each is (like the photon, W, and Z particles) a boson with a quantum mechanical spin value of 1.
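
Where does the number 8 come from? The count of gauge bosons matches the number of independent "directions" (generators) in the symmetry group, which for SU(N) is N^2 - 1. This is a standard group-theory fact, though the text doesn't derive it; a one-line Python sketch:

    # Number of independent generators (and hence gauge bosons) for SU(N).
    def su_generators(n):
        return n * n - 1

    print(su_generators(2))   # 3 gauge bosons for the SU(2) (weak isospin) theory
    print(su_generators(3))   # 8 gluons for the SU(3) (color) theory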

For the most part, the theory of the strong force is in better shape than that of the weak force. In particular, the SU(3) symmetry is not broken. As far as experiments can tell, quarks of any given flavor but different color all have the same mass. Since the symmetry is not broken, renormalizability is easier to show.

But there are still some rough spots. The main one has to do with the fact that it doesn't seem possible for a quark to exist in isolation. Many experimental searches have been conducted for "naked" quarks, but so far none have been found. It appears that quarks are necessarily "confined" in bound states of either a quark and an antiquark (mesons), three quarks (baryons), or perhaps many quarks ("nuggets"). Given that individual quarks always have some particular color charge, is this confinement a consequence of some general principle that any observable particle be color neutral?

It is possible to calculate, using QCD, that the strength of the strong force decreases the closer two quarks approach each other. This fact is expressed by saying that quarks are "asymptotically free". On the other hand, the force between two quarks appears to increase without limit as the quarks separate. Or rather, the force increases to the point that quark/anti-quark pairs are created, which results in the eventual production of separate colorless particles.

However, this is just the experimental observation. While it is possible to compute, by means of "perturbation theory", what the inter-quark forces are at very short ranges, this calculation becomes impossible at longer ranges, so it is not clear what the theory actually predicts.

Part of the problem of making computations in QCD is that (unlike QED) the theory is non-abelian. This is a consequence of the fact that gluons themselves carry color charge. (In fact, they have both a red/green/blue charge, and a cyan/magenta/yellow anti-charge.) This means that gluons can interact with each other, making computations much more difficult. Another result is that rather exotic concoctions known as "glueballs", consisting entirely of gluons, can also exist. QED, by contrast, is an abelian gauge theory, since photons do not have electric charge and hence do not interact with each other.

More detail on the strong force


The electroweak force

Developing a Yang-Mills theory of the weak force proved difficult, but was finally accomplished in the late 1960s by Sheldon Glashow, Steven Weinberg, and Abdus Salam. However, it was necessary to unify the weak force with the electromagnetic force in order to do this. The reason is, basically, that the symmetry group required in a Yang-Mills theory acts on doublets consisting of a charged lepton and its corresponding neutrino. However, the charged leptons have electric charge while the neutrinos don't. Hence some interaction between the weak and electromagnetic forces seems inevitable.

We recall that Heisenberg, considering the similarities between protons and neutrons, reasoned that they could be considered to be just one sort of particle, a nucleon, having different amounts of a property called isospin, somewhat analogous to "regular" spin. Assigning some different value of isospin to a nucleon gives you a particle which may be a proton, a neutron, or something in between. We have seen that the group SU(2) can be represented in terms of operations which permute the elements of the proton-neutron doublet.

Although SU(2) ultimately failed to yield a Yang-Mills theory for the strong force, it turns out to be just right for the weak force. Simply looking at the table of elementary particles, there are a number of obvious doublets -- any charged lepton and its corresponding neutrino, for example. Or any quark and its partner of the same generation. Particles in any such doublet differ in the amount they contain of a property analogous to isospin, which is called (unimaginatively) weak isospin.

So consider a lepton field, an electron field in particular. Let each possible field value be the weak isospin at a point, i. e. the degree of "electronness" of some superposition of an electron and an electron neutrino. If this value were the same at all points, then you would conclude that nothing is happening. But suppose there are differences from point to point. In fact, at every point you could arbitrarily assign any value which could result from the operation of the SU(2) symmetry group. Then, to "explain" the changes in these values as you go from point to point, you interpret them mathematically as the action of a force. This is, of course, the weak nuclear force.

Then, as in previous Yang-Mills field constructions, this force is itself represented by one or more fields. In fact for SU(2), which is non-abelian, there must be three fields. And each field corresponds to a particle that mediates the force. For the weak force, these force mediators came to be called "intermediate vector bosons". (These bosons have spin 1 (or -1), which, you recall, makes them "vector" bosons.)

At this point things start to get a little messy. In the Yang-Mills SU(3) theory of the color force (i. e. the strong force), the quarks in a triplet differ only in color charge, not in mass or electric charge. But in the SU(2) theory of the weak force, the particles in each doublet (whether they are lepton-neutrino pairs or two quarks of differing flavors) differ in both mass and electric charge. So now we have to deal with a symmetry that is spontaneously broken.

This symmetry breaking is why we need to "unify" the electromagnetic and weak forces in order to get a coherent Yang-Mills gauge theory. What does it actually mean to "unify" two fundamental forces? It means to come up with a single set of equations which describe both forces simultaneously, just as Maxwell's equations unified electric and magnetic forces over 100 years ago. In Maxwell's case the unification made quite good experimental sense, because electric and magnetic forces had in fact been found to be closely related.

What experimental reason was there to think that the electromagnetic and weak forces should be related? The main reason was that the two seemingly different kinds of force appeared to have the same strength at a sufficiently high energy. Experimentally, the strength of a force is expressed in terms of "coupling constants" which describe the probabilities of various kinds of particle interactions occurring. These probabilities depend on the energies possessed by the particles. When the strengths of these two forces were graphed against the energy of the interacting particles, the curves were seen to approach each other at an energy around 100 GeV (100 billion electron volts) -- well beyond what particle accelerators of the time could reach. This energy is called the electroweak unification energy.

The first step in unifying the two forces was to come up with an appropriate symmetry group. This was relatively straightforward. In group theoretic terms the right choice is known as a "direct product", which in this case is written as U(1)xSU(2). This basically just combines an operation of U(1) with one of SU(2) to produce an operation of the product group. The harder part was arranging the equations which describe the two forces to remain invariant under operations of the product group. This is what the separate contributions of Glashow, Weinberg, and Salam accomplished.
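Schematically (and glossing over the "weak hypercharge" bookkeeping that fixes how strongly the U(1) phase acts on each kind of particle), an element of the product group is just a pair -- a phase from U(1) together with a 2x2 matrix from SU(2) -- acting on a doublet like this:

    \[
    (e^{i\theta},\, U) \in U(1)\times SU(2), \qquad
    \begin{pmatrix} \nu_e \\ e \end{pmatrix}
    \;\longmapsto\;
    e^{i\theta}\, U \begin{pmatrix} \nu_e \\ e \end{pmatrix},
    \qquad U \in SU(2).
    \]

Requiring the equations to remain invariant under all such pairs of operations, applied independently at every point of spacetime, is what forces the existence of four gauge fields: one for the U(1) factor and three for SU(2).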

Initially it was not clear that the unified electroweak theory was renormalizable. The problem is that -- for reasons which are still mysterious -- the vector bosons of the theory, far from being massless, are quite heavy. Consequently, the usual result that a Yang-Mills theory with massless bosons is renormalizable doesn't apply. There was, therefore, cause for some concern. However, in 1971 Gerard 't Hooft came through with the required proof of renormalizability for the U(1)xSU(2) electroweak theory.

In spite of the unification of the electromagnetic and weak forces in a renormalizable Yang-Mills theory, there are still some peculiarities of the electroweak force as far as symmetry is concerned. One of these involves the symmetry known as "parity". Parity has to do with the symmetry between right and left. How can this possibly be relevant to elementary particles?

Recall that the elementary fermions have spin 1/2, and the component of spin along any chosen direction is either -1/2 or +1/2. Now, spin is a quantity which behaves mathematically like angular momentum (whence its name), so there is a direction associated with the spin. If this direction is the same as the direction of motion, the particle is said to have right-handed "helicity"; otherwise, it is left-handed. The equations of the electroweak force are peculiar in that they do not respect the symmetry of parity. This possibility was proposed in 1956 by T. D. Lee and C. N. Yang and confirmed experimentally soon afterwards by C. S. Wu and collaborators, to the great surprise of the physics community. In fact, for particles moving at or near the speed of light, the equations strongly favor left-handed particles and right-handed antiparticles.
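In symbols, the helicity is just the projection of the spin onto the direction of motion:

    \[
    h \;=\; \frac{\vec{S}\cdot\vec{p}}{|\vec{p}|},
    \]

so for a spin-1/2 particle h = +1/2 (right-handed) or h = -1/2 (left-handed). A parity operation reverses the momentum \vec{p} but not the spin \vec{S} (spin is an axial vector), so it turns a left-handed particle into a right-handed one -- which is why a force that treats the two helicities differently violates parity.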

Parity is one of three important "discrete" symmetries in quantum physics. The other two are charge conjugation and time reversal. There is a famous theorem of relativistic quantum field theory called the CPT theorem. It says that in any particle interaction, the operation of simultaneously reversing all three (C = charge conjugation, P = parity, T = time reversal) does not affect the results. We have just noted that for interactions involving the weak force, changing parity alone does affect the results. In fact, it was later found that simultaneously changing C and P can also affect the results of some weak interactions. In such a case, the interaction must violate time reversal symmetry as well, in order to preserve the combined CPT symmetry.

None of this is actually problematical -- just odd, and seemingly in need of a fuller explanation. What is still problematical, however, is that the electroweak force breaks symmetry in other ways. It does so, as we noted, in that the electrical charges and masses of the particles in each doublet are different. Neutrinos, it now appears, are not quite massless -- just almost so -- while their partners have appreciable mass. Similarly, the paired quarks have different masses and electrical charges.

The other main breach of symmetry is the fact that the electromagnetic and weak forces differ significantly in strength in low-energy interactions, even though their strengths become equal at the "unification energy" where the two forces become indistinguishable, around 100 GeV.
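A standard way of expressing this (quoted here for illustration; the relation is a textbook result, not derived on this page) is through Fermi's constant G_F, which sets the strength of weak interactions at low energy. In the electroweak theory it is not fundamental, but is fixed by the SU(2) coupling g and the W boson mass:

    \[
    \frac{G_F}{\sqrt{2}} \;=\; \frac{g^2}{8\,M_W^2}.
    \]

The coupling g itself is comparable in size to the electromagnetic coupling e; the weak force only looks feeble at low energies because M_W is so large (about 80 GeV), which makes G_F tiny. Once collision energies approach M_W, this suppression disappears and the two forces are seen to have comparable strength.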

There needs to be some mechanism to account for this "spontaneous" symmetry breaking. Although there is nothing inherently contradictory about this breaking, an explanation is needed, and the standard model doesn't provide it. A mechanism which employs "Higgs fields" and "Higgs bosons" is the current favorite. The Higgs mechanism is an elegant scheme which can explain not only why the electroweak bosons have considerable mass, but in fact how all the elementary particles which aren't massless acquire their mass.

The Higgs particles themselves should have considerable mass -- roughly on the order of the 100 GeV unification energy. Unfortunately, the experiments which have so far probed the lower part of this energy range have failed to turn up any Higgs particles. Doubt is beginning to grow that they will be observed at all. There are alternatives, but none as elegant, and this question definitely remains open. We will deal with it in more detail elsewhere.


Open questions

It should be stressed that the standard model is an exceptionally successful theory. Despite experiments performed by an army of physicists using powerful accelerators over the last 30 years, no contradictions of the predictions of the standard model have been found. The problems with the theory are not incorrect predictions. Instead the problems are of several different types:
  1. Phenomena which the theory makes no prediction about, such as neutrino mass.
  2. Numerical quantities which are not predicted by the theory but have to be included as experimental givens, such as the mass of each particle and the strengths of the forces.
  3. Known circumstances which seem extremely implausible in the absence of any explanation by the theory, such as the factor of 10^14 disparity between the characteristic strengths of the weak and strong nuclear forces (the "hierarchy problem").
  4. Phenomena which the theory simply doesn't deal with at all, such as gravity.
We can briefly summarize the main problems here. Most of them will be discussed in more detail on other pages.

Electroweak symmetry breaking

The symmetry between the electromagnetic and weak forces is broken. For instance, the masses of particles in each doublet are different. There is also a large difference between the strengths of the two forces at distances larger than the size of a nucleon. One possible way of explaining these facts lies in the "Higgs mechanism", which involves one or more scalar (spin 0) bosons. This is only one possible mechanism, however, and experimental evidence for the Higgs boson is still lacking.

Another problem is the lack of explanation for certain parameters in the theory, such as the "electroweak mixing angle" which describes how the two forces combine.
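For reference (these are standard relations, not derived on this page), the mixing angle \theta_W determines how the neutral SU(2) field W^3 and the U(1) field B combine to give the photon field A and the Z boson field:

    \[
    A_\mu \;=\; B_\mu \cos\theta_W + W^3_\mu \sin\theta_W, \qquad
    Z_\mu \;=\; -\,B_\mu \sin\theta_W + W^3_\mu \cos\theta_W,
    \]

and it also fixes the ratio of the W and Z masses, M_W = M_Z cos \theta_W. The measured value corresponds to sin^2 \theta_W of roughly 0.23, but the standard model gives no reason why it should take that particular value.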

Lack of unification of the strong and electroweak forces

After the success in unifying the electromagnetic and weak forces, it seemed plausible that the strong force could be unified with them in the same way. This would be done by discovering a larger symmetry group which contains both U(1)xSU(2) (of the electroweak force) and SU(3) (of the strong force). This larger group would provide the symmetry operations for a Yang-Mills gauge theory of the fully unified forces.

In addition to being a plausible thing to expect, such a theory would resolve various puzzles. The hierarchy problem is one of these. Another is how to account for the otherwise seemingly arbitrary fact that the electron's electrical charge is exactly equal in magnitude (though opposite in sign) to the proton's.

Unfortunately, no group which makes exactly the right predictions has been found. In addition, certain predictions common to such "grand unified theories", such as proton decay, have so far failed to be verified.

Too many free parameters in the theory

The theory does not account for the experimentally measured masses of the elementary particles or the strengths of the fundamental forces. These have had to be included in the theory "by hand".

No explanation for quark confinement

Experiments indicate that the strong force between quarks does not decrease with distance and may in fact increase without limit. The result of this is "quark confinement", which explains why free quarks have never been observed. It has, however, not been possible to derive this behavior of the strong force rigorously from the equations of QCD.

Failure of the theory to incorporate gravity

Not only has it been impossible to unify gravity with the other three forces in a Yang-Mills type theory, but in fact any kind of quantum theory of gravity has been impossible to achieve. All attempts to produce a consistent quantized theory of gravity have failed due to infinities occurring in the calculations of the hoped-for theory.

CP symmetry and the strong force

The fact that the weak force violates CP symmetry is well established, but as far as can be determined experimentally, the strong force does not. This is surprising, since theoretical considerations indicate that at least a small violation should exist. One proposed solution is that the magnitude of the violation is set by the average value of a new global field, which naturally turns out to be very small. The quantum of this field would be a new particle called the "axion". The field would exist if there is an additional global symmetry (called the "Peccei-Quinn symmetry") which is spontaneously broken; by a general principle ("Goldstone's theorem") there should then exist a boson -- the axion -- associated with this symmetry breaking.

No explanation for neutrino mass

The simplest form of the Higgs mechanism for electroweak symmetry breaking provides masses for all the elementary fermions except the neutrino. Various experimental results now strongly indicate that neutrinos have mass, so some more elaborate sort of Higgs mechanism seems called for. It is also possible that right-handed neutrinos exist -- though they would be extremely hard to observe, since gravity is the only force they would feel.

No explanation for exactly three families of leptons and quarks

Various lines of evidence have established that there are only three "generations" of leptons and quarks. However, there is no general principle which implies that there should be more than one generation. Muons, for instance, behave exactly like electrons in all respects except for their mass. From the table of elementary particles, one might suspect the existence of some symmetry having a particle triplet consisting of electron, muon, and tau (for example). But there is no theoretical basis for such a symmetry, and no experimental evidence of its existence.


The problems with the standard model can be summarized as follows:

There does not seem to be any reasonable way to include gravity in the standard model, because a viable quantum field theory of gravity is lacking. Treating gravity as a field in the standard model requires the existence of the graviton as the quantum of the field. The graviton should be a massless spin 2 boson. But naively treating the field equations of general relativity by the rules of quantum mechanics encounters problems of infinities -- the theory is not renormalizable.

However, even leaving out gravity, the standard model theory of both the electroweak and strong forces is incomplete.


Recommended references: Web sites

Site indexes

Galaxy: Standard Model
Categorized site directory. Entries usually include descriptive annotations.


Sites with general resources

The E821 Muon (g-2) Home Page
Home page of an important experiment being conducted at the Brookhaven National Laboratory to measure the anomalous magnetic moment of muons. The experiment is indicating deviations from the standard model, possibly due to supersymmetric effects. There is a short list of links to related sites.
E158: The Asymmetric Strength of the Weak Force
Home page of an experiment to measure properties of the weak force, and in particular its parity asymmetry. Includes brief descriptions of the pertinent science: the weak force and parity violation.


Surveys, overviews, tutorials

Standard Model
Article from Wikipedia. See also Gauge theory, Gauge field theory
The Standard Model
Elementary tutorial from The Particle Adventure.
The Standard Model
Part of hypertext document from the SLAC Virtual Visitor Center.
The Standard Model of Particle Physics
A single page overview and "simplified summary", with some external links, by Ben Best.
Worldwide discoveries that led to the Standard Model
Single page which lists dates of some of the key discoveries which underlie the standard model. Part of Fermilab's Inquiring Minds pages -- What is the world made of? in particular.
Standard Model of Fundamental Particles and Interactions
Basic information on the standard model, with some charts and graphics.
Muons threaten Standard Model
March 2001 article from Physics World, by Katie Pennicott. "Precise measurements of the magnetic moment of the muon disagree with theoretical predictions. Katie Pennicott asks if the Standard Model of particle physics has finally cracked."
The Standard Model
Part of a course on A Radically Modern Approach to Introductory Physics, by David J. Raymond.
The Standard Model
Downloadable lecture material, by M. J. Herrero.
Physicists Announce Possible Violation of Standard Model of Particle Physics
Undated news article concerning results of the E821 Muon g-2 experiment.


Recommended references: Magazine/journal articles

The Large Hadron Collider
Chris Llewellyn Smith
Scientific American, July 2000, pp. 70-77
The LHC, being constructed near Geneva, will probe as-yet inaccessible predictions of the Standard Model when it becomes operational around 2005. This includes phenomena such as the Higgs field and quark-gluon plasmas.
The Leptons After 100 Years
Martin L. Perl
Physics Today, October 1997, pp. 34-40
Although the first lepton, the electron, was discovered 100 years ago, there are still open questions: Are there more than 6 leptons? Is there any intrinsic difference between electrons, muons, and tauons? Do neutrinos have mass?
The Discovery of the Top Quark
Tony M. Liss, Paul L. Tipton
Scientific American, September 1997, pp. 54-59
Two members of the team which demonstrated the existence of the top quark describe how it was done and possible implications of the findings.
Top-ology
Chris Quigg
Physics Today, May 1997, pp. 20-26
The top quark, whose existence was confirmed only in 1995, is the heaviest and shortest-lived quark, yet its properties are related to even such common quantities as the mass of the proton.
The Unification of Electromagnetism with the Weak Force
Paul Langacker; Alfred K. Mann
Physics Today, December 1989, pp. 22-31
The unification of the electromagnetic and weak forces is both a theoretical and experimental success. Experimental measurements of many different kinds confirm the validity of the theory down to a distance scale of about 10^-16 cm.
Gauge Theories of the Forces between Elementary Particles
Gerard 't Hooft
Scientific American, June 1980, pp. 104-138
Theories of this kind now describe all four fundamental forces of nature. In such theories, the properties of the forces may be deduced from mathematical symmetries in the laws of physics.


Recommended references: Books

Robert Oerter -- The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics
Pi Press, 2006
The standard model of particle physics had been around for about 30 years before the publication of this book, so many young physicists can't remember a time when there wasn't such a model. It gets taken for granted. It doesn't seem (now) nearly as sexy as string theory or other alternatives. But it is so successful that it has proven frustratingly difficult to find any violations of it -- though we know there must be some. So before you go off to look for extensions, it's absolutely necessary to know what's in the standard model. Oerter does a good job of introducing the model for a lay audience.
Bruce A. Schumm -- Deep Down Things: The Breathtaking Beauty of Particle Physics
Johns Hopkins University Press, 2004
Schumm's book, like Robert Oerter's, explains the standard model for a lay audience, but with a somewhat more demanding (and satisfying) emphasis on fundamental physical and mathematical ideas, especially relativistic quantum field theory. Some of the main topics are symmetry, Lie groups, gauge theory, and the Higgs boson.
Gerard 't Hooft -- In Search of the Ultimate Building Blocks
Cambridge University Press, 1997
Excellent but brief, non-mathematical overview of high-energy physics by a leading theorist. The emphasis is on the standard model, with a skeptical attitude towards string theory.
Vincent Icke -- The Force of Symmetry
Cambridge University Press, 1995
A detailed but non-mathematical introduction to high-energy physics, with special emphasis on symmetry, symmetry breaking, and unification. Good glossary and bibliography. This is one of the best books for gaining an intuitive feel for quantum field theory, short of delving into the technical details.
G. D. Coughlan, J. E. Dodd -- The Ideas of Particle Physics: An Introduction for Scientists
Cambridge University Press, 1991
Detailed introduction to particle physics with a little bit of math. Covers weak and strong interactions, gauge theories, quantum chromodynamics. Good glossary and bibliography. The book presents a lot of the phenomenological facts about fundamental particles and forces without going into the hairier apparatus of quantum field theory.
I. S. Hughes -- Elementary Particles
Cambridge University Press, 1991
Respected textbook on the subject for college undergraduates, so it has some mathematics, but nothing advanced. Good coverage of technical material, including a chapter on implications for cosmology.
Robert P. Crease; Charles C. Mann -- The Second Creation: Makers of the Revolution in 20th-Century Physics
Macmillan Publishing Company, 1986
The authors present a very knowledgeable and detailed history of particle physics in the 20th century. (In spite of the subtitle, the development of quantum theory is mentioned only in passing.) It covers the history very well, and in the process explains many technical details of the standard model.
K. Moriyasu -- An Elementary Primer for Gauge Theory
World Scientific Publishing Co., 1983
Gauge theory has become fundamental to the theoretical description of how physical forces can be unified. It is a synthesis of quantum mechanics and symmetry principles. This book is an elementary introduction with some mathematics.
James S. Trefil -- From Atoms to Quarks: An Introduction to the Strange World of Particle Physics
Charles Scribner's Sons, 1980
In spite of its age, this relatively brief volume aimed at a popular audience gives a good presentation of the basic facts of the standard model. It's organized along historical lines, and its age is actually helpful, since it presents the theory without conceptual complications from later developments such as supersymmetry and string theory.


Copyright © 2002 by Charles Daney, All Rights Reserved