Although the mechanism which suppresses the naive value of the vacuum energy is unknown, it seems easier to imagine a hypothetical scenario which makes it exactly zero than one which sets it to just the right value to be observable today. (Keeping in mind that it is the zero-temperature, late-time vacuum energy which we want to be small; it is expected to change at phase transitions, and a large value in the early universe is a necessary component of inflationary universe scenarios [113, 159, 6].) If the recent observations pointing toward a cosmological constant of astrophysically relevant magnitude are confirmed, we will be faced with the challenge of explaining not only why the vacuum energy is smaller than expected, but also why it has the specific nonzero value it does.
Although initially investigated for other reasons, supersymmetry (SUSY) turns out to have a significant impact on the cosmological constant problem, and may even be said to solve it halfway. SUSY is a spacetime symmetry relating fermions and bosons to each other. Just as ordinary symmetries are associated with conserved charges, supersymmetry is associated with “supercharges” $Q_\alpha$, where $\alpha$ is a spinor index (for introductions see [178, 166, 169]). As with ordinary symmetries, a theory may be supersymmetric even though a given state is not supersymmetric; a state $|\psi\rangle$ which is annihilated by the supercharges, $Q_\alpha|\psi\rangle = 0$, preserves supersymmetry, while states with $Q_\alpha|\psi\rangle \neq 0$ are said to spontaneously break SUSY.
Let us begin by considering “globally supersymmetric” theories, which are defined in flat spacetime (obviously an inadequate setting in which to discuss the cosmological constant, but we have to start somewhere). Unlike most non-gravitational field theories, in supersymmetry the total energy of a state has an absolute meaning; the Hamiltonian is related to the supercharges in a straightforward way: $H = \sum_\alpha \{Q_\alpha, Q_\alpha^\dagger\}$. More concretely, in a given supersymmetric theory we can explicitly calculate the contributions to the energy from vacuum fluctuations and from the scalar potential $V$. In the case of vacuum fluctuations, contributions from bosons are exactly canceled by equal and opposite contributions from fermions when supersymmetry is unbroken. Meanwhile, the scalar-field potential in supersymmetric theories takes on a special form; scalar fields $\phi^i$ must be complex (to match the degrees of freedom of the fermions), and the potential is derived from a function called the superpotential $W(\phi^i)$ which is necessarily holomorphic (written in terms of $\phi^i$ and not its complex conjugate $\bar{\phi}^{\bar{i}}$). In the simple Wess-Zumino models of spin-0 and spin-1/2 fields, for example, the scalar potential is given by $V(\phi^i, \bar{\phi}^{\bar{j}}) = \sum_i \left|\partial W/\partial\phi^i\right|^2$, which is non-negative and vanishes precisely in states where $\partial_i W = 0$ for every $i$.
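To make the structure of this potential concrete, here is a small self-contained numerical sketch; the single-field superpotential below is my own toy choice, not one taken from the text.

```python
# Toy single-field Wess-Zumino-type example: with the hypothetical
# superpotential W(phi) = m phi^2 / 2 + g phi^3 / 3, the global-SUSY scalar
# potential is V = |dW/dphi|^2, which is non-negative and vanishes exactly
# at supersymmetric vacua where dW/dphi = 0.

def dW(phi, m=1.0, g=0.5):
    """Derivative of the toy superpotential W = m phi^2/2 + g phi^3/3."""
    return m * phi + g * phi ** 2

def V(phi, m=1.0, g=0.5):
    """Scalar potential V = |W'(phi)|^2 for a complex field value phi."""
    return abs(dW(phi, m, g)) ** 2

print(V(0.0))               # SUSY vacuum (W'(0) = 0): zero vacuum energy
print(V(-2.0))              # the second SUSY vacuum of this toy W
print(V(1.0 + 0.5j) > 0)    # elsewhere the energy is strictly positive
```

The two zeros of $W'$ illustrate the generic situation: supersymmetric states sit at zero energy, and everything else sits strictly above.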
So the vacuum energy of a supersymmetric state in a globally supersymmetric theory will vanish. This represents rather less progress than it might appear at first sight, since: 1.) Supersymmetric states manifest a degeneracy in the mass spectrum of bosons and fermions, a feature not apparent in the observed world; and 2.) The above results imply that non-supersymmetric states have a positive-definite vacuum energy. Indeed, in a state where SUSY was broken at an energy scale $M_{\rm SUSY}$, we would expect a corresponding vacuum energy $\rho_\Lambda \sim M_{\rm SUSY}^4$. In the real world, the fact that accelerator experiments have not discovered superpartners for the known particles of the Standard Model implies that $M_{\rm SUSY}$ is of order $10^3$ GeV or higher. Thus, we are left with a discrepancy $M_{\rm SUSY}/M_{\rm vac} \gtrsim 10^{15}$, where $M_{\rm vac} \equiv \rho_\Lambda^{1/4} \sim 10^{-3}$ eV is the energy scale of the observed vacuum energy.
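The size of this discrepancy can be checked with a back-of-the-envelope calculation, using the standard order-of-magnitude inputs quoted above ($M_{\rm SUSY} \sim 10^3$ GeV and an observed vacuum-energy scale of roughly $10^{-3}$ eV):

```python
# Back-of-the-envelope check of the SUSY discrepancy: compare the
# supersymmetry-breaking scale to the observed vacuum-energy scale.

import math

M_susy_eV = 1e3 * 1e9          # M_SUSY ~ 10^3 GeV, expressed in eV
M_vac_eV = 1e-3                # observed vacuum-energy scale, in eV

ratio = M_susy_eV / M_vac_eV   # mismatch in energy *scales*
print(f"M_SUSY / M_vac ~ 10^{round(math.log10(ratio))}")   # -> 10^15
# Since energy *densities* scale as (energy)^4, the mismatch in
# rho_Lambda itself is (10^15)^4 = 10^60.
```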
As mentioned, however, this analysis is strictly valid only in flat space. In curved spacetime, the global transformations of ordinary supersymmetry are promoted to the position-dependent (gauge) transformations of supergravity. In this context the Hamiltonian and supersymmetry generators play different roles than in flat spacetime, but it is still possible to express the vacuum energy in terms of a scalar field potential $V(\phi^i, \bar{\phi}^{\bar{j}})$. In supergravity $V$ depends not only on the superpotential $W(\phi^i)$, but also on a “Kähler potential” $K(\phi^i, \bar{\phi}^{\bar{j}})$, and the Kähler metric $K_{i\bar{j}}$ constructed from the Kähler potential by $K_{i\bar{j}} = \partial^2 K/\partial\phi^i\partial\bar{\phi}^{\bar{j}}$. (The basic role of the Kähler metric is to define the kinetic term for the scalars, which takes the form $g^{\mu\nu} K_{i\bar{j}} \partial_\mu\phi^i \partial_\nu\bar{\phi}^{\bar{j}}$.) The scalar potential is $V = e^{K/M_{\rm Pl}^2}\left[K^{i\bar{j}}(D_i W)(D_{\bar{j}}\overline{W}) - 3 M_{\rm Pl}^{-2}|W|^2\right]$, where $D_i W = \partial_i W + M_{\rm Pl}^{-2}(\partial_i K)W$. At the same time, supergravity is not by itself a renormalizable quantum theory, and therefore it may not be reasonable to hope that a solution can be found purely within this context.
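As a consistency check (my own sketch, using the standard $N=1$ supergravity F-term potential and assuming a canonical Kähler potential $K = \sum_i \phi^i \bar{\phi}^{\bar{i}}$, so that $K_{i\bar{j}} = \delta_{i\bar{j}}$), the supergravity potential reduces to the globally supersymmetric Wess-Zumino result in the flat-space limit:

```latex
V = e^{K/M_{\rm Pl}^2}\left[ K^{i\bar{j}}\,(D_i W)(D_{\bar{j}}\overline{W})
      - 3\,M_{\rm Pl}^{-2}\,|W|^2 \right]
  \;\xrightarrow{\;M_{\rm Pl}\to\infty\;}\;
  \sum_i \left| \frac{\partial W}{\partial\phi^i} \right|^2 ,
```

since $e^{K/M_{\rm Pl}^2} \to 1$, the $-3 M_{\rm Pl}^{-2}|W|^2$ term drops out, and $D_i W = \partial_i W + M_{\rm Pl}^{-2}(\partial_i K)W \to \partial_i W$. The $-3|W|^2$ term is the genuinely new supergravity feature: it is negative, so the potential is no longer positive-definite.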
Unlike supergravity, string theory appears to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant should, at least in principle, be possible. On the other hand, the number of vacuum states seems to be quite large, and none of them (to the best of our current knowledge) features three large spatial dimensions, broken supersymmetry, and a small cosmological constant. At the same time, there are reasons to believe that any realistic vacuum of string theory must be strongly coupled; therefore, our inability to find an appropriate solution may simply be due to the technical difficulty of the problem. (For general introductions to string theory, see [110, 203]; for cosmological issues, see [167, 21].)
String theory is naturally formulated in more than four spacetime dimensions. Studies of duality symmetries have revealed that what used to be thought of as five distinct ten-dimensional superstring theories – Type I, Types IIA and IIB, and heterotic theories based on gauge groups E(8)×E(8) and SO(32) – are, along with eleven-dimensional supergravity, different low-energy weak-coupling limits of a single underlying theory, sometimes known as M-theory. In each of these six cases, the solution with the maximum number of uncompactified, flat spacetime dimensions is a stable vacuum preserving all of the supersymmetry. To bring the theory closer to the world we observe, the extra dimensions can be compactified on a manifold whose Ricci tensor vanishes. There are a large number of possible compactifications, many of which preserve some but not all of the original supersymmetry. If enough SUSY is preserved, the vacuum energy will remain zero; generically there will be a manifold of such states, known as the moduli space.
Of course, to describe our world we want to break all of the supersymmetry. Investigations in contexts where this can be done in a controlled way have found that the induced cosmological constant vanishes at the classical level, but a substantial vacuum energy is typically induced by quantum corrections. Moore has suggested that Atkin–Lehner symmetry, which relates strong and weak coupling on the string worldsheet, can enforce the vanishing of the one-loop quantum contribution in certain models (see also [67, 68]); generically, however, there would still be an appreciable contribution at two loops.
Thus, the search is still on for a four-dimensional string theory vacuum with broken supersymmetry and vanishing (or very small) cosmological constant. (See  for a general discussion of the vacuum problem in string theory.) The difficulty of achieving this in conventional models has inspired a number of more speculative proposals, which I briefly list here.
Of course, string theory might not be the correct description of nature, or its current formulation might not be directly relevant to the cosmological constant problem. For example, a solution may be provided by loop quantum gravity, or by a composite graviton. It is probably safe to believe that a significant advance in our understanding of fundamental physics will be required before we can demonstrate the existence of a vacuum state with the desired properties. (Not to mention the equally important question of why our world is based on such a state, rather than one of the highly supersymmetric states that appear to be perfectly good vacua of string theory.)
The anthropic principle [25, 122] is essentially the idea that some of the parameters characterizing the universe we observe may not be determined directly by the fundamental laws of physics, but instead by the truism that intelligent observers will only ever experience conditions which allow for the existence of intelligent observers. Many professional cosmologists view this principle in much the same way as many traditional literary critics view deconstruction – as somehow simultaneously empty of content and capable of working great evil. Anthropic arguments are easy to misuse, and can be invoked as a way out of doing the hard work of understanding the real reasons behind why we observe the universe we do. Furthermore, a sense of disappointment would inevitably accompany the realization that there were limits to our ability to unambiguously and directly explain the observed universe from first principles. It is nevertheless possible that some features of our world have at best an anthropic explanation, and the value of the cosmological constant is perhaps the most likely candidate.
In order for the tautology that “observers will only observe conditions which allow for observers” to have any force, it is necessary for there to be alternative conditions – parts of the universe, either in space, time, or branches of the wavefunction – where things are different. In such a case, our local conditions arise as some combination of the relative abundance of different environments and the likelihood that such environments would give rise to intelligence. Clearly, the current state of the art doesn’t allow us to characterize the full set of conditions in the entire universe with any confidence, but modern theories of inflation and quantum cosmology do at least allow for the possibility of widely disparate parts of the universe in which the “constants of nature” take on very different values (for recent examples see [100, 161, 256, 163, 118, 162, 251, 258]). We are therefore faced with the task of estimating quantitatively the likelihood of observing any specific value of $\rho_\Lambda$ within such a scenario.
The most straightforward anthropic constraint on the vacuum energy is that it must not be so high that galaxies never form. From the discussion in Section 2.4, we know that overdense regions do not collapse once the cosmological constant begins to dominate the universe; if this happens before the epoch of galaxy formation, the universe will be devoid of galaxies, and thus of stars and planets, and thus (presumably) of intelligent life. The condition that $\rho_\Lambda \lesssim \rho_M(z_{\rm gal})$ implies $\Omega_{\Lambda 0}/\Omega_{M0} \lesssim (1 + z_{\rm gal})^3 \sim 125$, where we have taken the redshift of formation of the earliest galaxies to be $z_{\rm gal} \sim 4$. (Arguments of this type are originally due to [239, 104, 122].)
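The numerical bound follows from nothing more than the $(1+z)^3$ scaling of the matter density, as this one-line check shows:

```python
# Order-of-magnitude version of the anthropic bound: since the matter
# density scales as rho_M ∝ (1+z)^3, demanding rho_Lambda <~ rho_M(z_gal)
# allows a vacuum density up to (1+z_gal)^3 times the present matter density.

z_gal = 4                        # rough redshift of earliest galaxy formation
bound = (1 + z_gal) ** 3         # maximum Omega_Lambda/Omega_M allowed today
print(bound)                     # prints 125
```

So the anthropic bound by itself allows a cosmological constant roughly two orders of magnitude larger than the value actually observed.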
However, it is better to ask what is the most likely value of $\rho_\Lambda$, i.e. what is the value that would be experienced by the largest number of observers [257, 76]? Since a universe with $\Omega_{\Lambda 0}/\Omega_{M0} \sim 1$ will have many more galaxies than one with $\Omega_{\Lambda 0}/\Omega_{M0} \sim 100$, it is quite conceivable that most observers will measure something close to the former value. The probability measure for observing a value of $\rho_\Lambda$ can be decomposed as $d\mathcal{P}(\rho_\Lambda) = \mathcal{P}_*(\rho_\Lambda)\, \mathcal{N}(\rho_\Lambda)\, d\rho_\Lambda$, where $\mathcal{P}_*(\rho_\Lambda)$ is the a priori probability measure (whatever that might mean) for $\rho_\Lambda$, and $\mathcal{N}(\rho_\Lambda)$ is the average number of galaxies which form at the specified value of $\rho_\Lambda$. Martel, Shapiro and Weinberg have presented a calculation of $\mathcal{N}(\rho_\Lambda)$ using a spherical-collapse model. They argue that it is natural to take the a priori distribution to be a constant, since the allowed range of $\rho_\Lambda$ is very far from what we would expect from particle-physics scales. Garriga and Vilenkin argue on the basis of quantum cosmology that there can be a significant departure from a constant a priori distribution. However, in either case the conclusion is that an observed $\rho_\Lambda$ of the same order of magnitude as the present matter density is by no means extremely unlikely (which is probably the best one can hope to say given the uncertainties in the calculation).
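The logic of weighting by galaxy number can be illustrated numerically. The sketch below is my own toy illustration, not the spherical-collapse calculation: the suppression function `n_galaxies` and its scale are invented stand-ins.

```python
# Toy sketch of anthropic weighting: with a flat a priori measure, the
# probability of observing a given rho_Lambda is proportional to
# N(rho_Lambda), the number of galaxies formed.  N here is a made-up
# suppression function standing in for the spherical-collapse result.

import math

def n_galaxies(r):
    """Hypothetical galaxy count vs. r = rho_Lambda, in units where the
    suppression scale (set by structure formation) is 10."""
    return math.exp(-r / 10.0)

rhos = [0.1 * i for i in range(1, 1000)]   # grid over the allowed range
weights = [n_galaxies(r) for r in rhos]    # flat prior => P(r) ∝ N(r)

# Mean observed value under this toy measure: it comes out of order the
# structure-formation scale, not of order the (much larger) top of the
# allowed range -- the qualitative point of the argument.
mean_r = sum(r * w for r, w in zip(rhos, weights)) / sum(weights)
print(mean_r)
```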
Thus, if one is willing to make the leap of faith required to believe that the value of the cosmological constant is chosen from an ensemble of possibilities, it is possible to find an “explanation” for its current value (which, given its unnaturalness from a variety of perspectives, seems otherwise hard to understand). Perhaps the most significant weakness of this point of view is the assumption that there are a continuum of possibilities for the vacuum energy density. Such possibilities correspond to choices of vacuum states with arbitrarily similar energies. If these states were connected to each other, there would be local fluctuations which would appear to us as massless fields, which are not observed (see Section 4.5). If on the other hand the vacua are disconnected, it is hard to understand why all possible values of the vacuum energy are represented, rather than the differences in energies between different vacua being given by some characteristic particle-physics scale such as $M_{\rm SUSY}^4$ or $M_{\rm Pl}^4$. (For one scenario featuring discrete vacua with densely spaced energies, see .) It will therefore (again) require advances in our understanding of fundamental physics before an anthropic explanation for the current value of the cosmological constant can be accepted.
The importance of the cosmological constant problem has engendered a wide variety of proposed solutions. This section will present only a brief outline of some of the possibilities, along with references to recent work; further discussion and references can be found in [264, 48, 218].
One approach which has received a great deal of attention is the famous suggestion by Coleman, that effects of virtual wormholes could set the cosmological constant to zero at low energies. The essential idea is that wormholes (thin tubes of spacetime connecting macroscopically large regions) can act to change the effective value of all the observed constants of nature. If we calculate the wave function of the universe by performing a Feynman path integral over all possible spacetime metrics with wormholes, the dominant contribution will be from those configurations whose effective values for the physical constants extremize the action. These turn out to be, under a certain set of assumed properties of Euclidean quantum gravity, configurations with zero cosmological constant at late times. Thus, quantum cosmology predicts that the constants we observe are overwhelmingly likely to take on values which imply a vanishing total vacuum energy. However, subsequent investigations have failed to inspire confidence that the desired properties of Euclidean quantum cosmology are likely to hold, although it is still something of an open question; see discussions in [264, 48].
Another route one can take is to consider alterations of the classical theory of gravity. The simplest possibility is to consider adding a scalar field to the theory, with dynamics which cause the scalar to evolve to a value for which the net cosmological constant vanishes (see for example [74, 230]). Weinberg, however, has pointed out on fairly general grounds that such attempts are unlikely to work [264, 265]; in models proposed to date, either there is no solution for which the effective vacuum energy vanishes, or there is a solution but with other undesirable properties (such as making Newton’s constant also vanish). Rather than adding scalar fields, a related approach is to remove degrees of freedom by making the determinant of the metric, which multiplies $\Lambda$ in the action (15), a non-dynamical quantity, or at least changing its dynamics in some way (see [111, 270, 177] for recent examples). While this approach has not led to a believable solution to the cosmological constant problem, it does change the context in which it appears, and may induce different values for the effective vacuum energy in different branches of the wavefunction of the universe.
Along with global supersymmetry, there is one other symmetry which would work to prohibit a cosmological constant: conformal (or scale) invariance, under which the metric is multiplied by a spacetime-dependent function, $g_{\mu\nu} \rightarrow e^{\lambda(x)} g_{\mu\nu}$. Like supersymmetry, conformal invariance is not manifest in the Standard Model of particle physics. However, it has been proposed that quantum effects could restore conformal invariance on length scales comparable to the cosmological horizon size, working to cancel the cosmological constant (for some examples see [240, 12, 11]). At this point it remains unclear whether this suggestion is compatible with a more complete understanding of quantum gravity, or with standard cosmological observations.
A final mechanism to suppress the cosmological constant, related to the previous one, relies on quantum particle production in de Sitter space (analogous to Hawking radiation around black holes). The idea is that the effective energy-momentum tensor of such particles may act to cancel out the bare cosmological constant (for recent attempts see [242, 243, 1, 184]). There is currently no consensus on whether such an effect is physically observable (see for example ).
If inventing a theory in which the vacuum energy vanishes is difficult, finding a model that predicts a vacuum energy which is small but not quite zero is harder still. Along these lines, there are various numerological games one can play. For example, the fact that supersymmetry solves the problem halfway could be suggestive; a theory in which the effective vacuum energy scale was given not by $M_{\rm SUSY}$ but by $M_{\rm SUSY}^2/M_{\rm Pl}$ would seem to fit the observations very well. The challenging part of this program, of course, is to devise such a theory. Alternatively, one could imagine that we live in a “false vacuum” – that the absolute minimum of the vacuum energy is truly zero, but we live in a state which is only a local minimum of the energy. Scenarios along these lines have been explored [250, 103, 152]; the major hurdle to be overcome is explaining why the energy difference between the true and false vacua is so much smaller than one would expect.
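The numerology is easy to verify: taking $M_{\rm SUSY} \sim 10^3$ GeV and $M_{\rm Pl} \sim 10^{18}$ GeV, the combination $M_{\rm SUSY}^2/M_{\rm Pl}$ lands near the $\sim 10^{-3}$ eV scale suggested by the observed vacuum energy.

```python
# Checking the "halfway" numerology: the geometric-mean-like scale
# M_SUSY^2 / M_Pl, with M_SUSY ~ 10^3 GeV and M_Pl ~ 10^18 GeV.

M_susy = 1e3 * 1e9       # 10^3 GeV, in eV
M_pl = 1e18 * 1e9        # ~10^18 GeV (reduced Planck scale), in eV

scale = M_susy ** 2 / M_pl
print(scale)             # ~10^-3 eV, the observed vacuum-energy scale
```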
Although a cosmological constant is an excellent fit to the current data, the observations can also be accommodated by any form of “dark energy” which does not cluster on small scales (so as to avoid being detected by measurements of $\Omega_M$) and redshifts away only very slowly as the universe expands [to account for the accelerated expansion, as per equation (33)]. This possibility has been extensively explored of late, and a number of candidates have been put forward.
One way to parameterize such a component is by an effective equation of state, $p = w\rho$. (A large number of phenomenological models of this type have been investigated, starting with the early work in [183, 89]; see [182, 218] for many more references.) The relevant range for $w$ is between $0$ (ordinary matter) and $-1$ (true cosmological constant); sources with $w > 0$ redshift away more rapidly than ordinary matter (and therefore cause extra deceleration), while $w < -1$ is unphysical by the criteria discussed in Section 2.1 (although see ). While not every source will obey an equation of state with constant $w$, it is often the case that a single effective $w$ characterizes the behavior for the redshift range over which the component can potentially be observed. Current observations of supernovae, large-scale structure, gravitational lensing, and the CMB already provide interesting limits on $w$ [209, 56, 249, 93, 54, 101, 197, 260, 196, 261, 77, 202], and future data will be able to do much better [77, 135, 60, 220]. Figure 10 shows an example, in this case limits from supernovae and large-scale structure on $w$ and $\Omega_M$ in a universe which is assumed to be flat and dominated by dark energy and ordinary matter. It is clear that the favored value for the equation-of-state parameter is near $-1$, that of a true cosmological constant, although other values are not completely ruled out.
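The physical meaning of $w$ is how fast the component dilutes: energy conservation, $d(\rho a^3) = -p\, d(a^3)$ with $p = w\rho$, gives $\rho \propto a^{-3(1+w)}$. A minimal sketch:

```python
# Dilution of a component with constant equation-of-state parameter w:
# rho ∝ a^{-3(1+w)}, with the scale factor normalized to a = 1 today.

def rho(a, w, rho0=1.0):
    """Energy density at scale factor a for a constant-w component."""
    return rho0 * a ** (-3.0 * (1.0 + w))

print(rho(0.5, 0.0))        # matter (w = 0): a^-3, so 8x denser at a = 1/2
print(rho(0.5, -1.0))       # cosmological constant (w = -1): no dilution
print(rho(0.5, -2.0 / 3))   # wall-like component (w = -2/3): dilutes as a^-1
```

The $w = -1$ case, constant density while everything else dilutes, is what eventually lets a cosmological constant dominate and accelerate the expansion.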
The simplest physical model for an appropriate dark energy component is a single slowly-rolling scalar field, sometimes referred to as “quintessence” [73, 266, 190, 208, 267, 120, 94, 92, 91, 86, 43, 132]. In an expanding universe, a spatially homogeneous scalar $\phi$ with potential $V(\phi)$ and minimal coupling to gravity obeys $\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0$, where $H = \dot{a}/a$ is the Hubble parameter.
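This equation of motion is easy to integrate numerically. The sketch below uses a toy quadratic potential and a fixed matter-dominated background, with units and parameter values of my own choosing purely for illustration:

```python
# Minimal numerical sketch of the quintessence equation of motion,
#     phi'' + 3 H phi' + V'(phi) = 0,
# for a toy potential V = m^2 phi^2 / 2 in a matter-dominated background,
# where H = 2/(3t).  Illustrative units only.

def evolve(phi, phidot, m=0.1, t0=1.0, t1=200.0, dt=1e-3):
    """Semi-implicit Euler integration of the scalar-field equation."""
    t = t0
    while t < t1:
        H = 2.0 / (3.0 * t)                        # matter-era Hubble rate
        phiddot = -3.0 * H * phidot - m ** 2 * phi  # equation of motion
        phidot += phiddot * dt
        phi += phidot * dt
        t += dt
    return phi

phi_end = evolve(phi=1.0, phidot=0.0)
# The field is frozen by Hubble friction while m < H, then oscillates with
# a damped amplitude once H drops below m:
print(abs(phi_end) < 1.0)   # prints True
```

The frozen-then-rolling behavior visible here is exactly what quintessence models exploit: while the field is frozen, its potential energy acts like a temporary cosmological constant.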
There are many reasons to consider dynamical dark energy as an alternative to a cosmological constant. First and foremost, it is a logical possibility which might be correct, and can be constrained by observation. Secondly, it is consistent with the hope that the ultimate vacuum energy might actually be zero, and that we simply haven’t relaxed all the way to the vacuum as yet. But most interestingly, one might wonder whether replacing a constant parameter with a dynamical field could allow us to relieve some of the burden of fine-tuning that inevitably accompanies the cosmological constant. To date, investigations have focused on scaling or tracker models of quintessence, in which the scalar field energy density can parallel that of matter or radiation, at least for part of its history [86, 62, 279, 158, 232, 278, 219]. (Of course, we do not want the dark energy density to redshift away as rapidly as that in matter during the current epoch, or the universe would not be accelerating.) Tracker models can be constructed in which the vacuum energy density at late times is robust, in the sense that it does not depend sensitively on the initial conditions for the field. However, the ultimate value still depends sensitively on the parameters in the potential. Indeed, it is hard to imagine how this could help but be the case; unlike the case of the axion solution to the strong-CP problem, we have no symmetry to appeal to that would enforce a small vacuum energy, much less a particular small nonzero number.
Quintessence models also introduce new naturalness problems in addition to those of a cosmological constant. These can be traced to the fact that, in order for the field to be slowly-rolling today, we require $\sqrt{|V''(\phi_0)|} \sim H_0$; but this expression is the effective mass of fluctuations in $\phi$, so we have $m_\phi \sim H_0 \sim 10^{-33}$ eV. By particle-physics standards this is an extraordinarily small mass; such a light scalar would generically mediate detectable long-range forces, and must therefore couple very weakly to ordinary matter [47, 53, 125]. The only known way to obtain such a suppression is through the imposition of an approximate global symmetry (which would also help explain the low mass of the field), of the type characteristic of pseudo-Goldstone boson models of quintessence, which have been actively explored [92, 91, 144, 55, 145, 179]. (Cosmological pseudo-Goldstone bosons are potentially detectable through their tendency to rotate polarized radiation from galaxies and the CMB [47, 165].) See for a discussion of further fine-tuning problems in the context of supersymmetric models.
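To see where the tiny number comes from, one can convert the present Hubble rate to a mass scale via $m \sim \hbar H_0$, taking $H_0 \approx 70$ km/s/Mpc:

```python
# Converting the Hubble rate H_0 ~ 70 km/s/Mpc to an energy (mass) scale,
# to check that the slow-roll condition m_phi ~ H_0 corresponds to an
# effective mass of order 10^-33 eV.

import math

H0_si = 70e3 / 3.086e22        # H_0 in s^-1  (70 km/s per Mpc; 1 Mpc in m)
hbar_eVs = 6.582e-16           # hbar, in eV*s

m_phi_eV = hbar_eVs * H0_si    # energy scale hbar * H_0, in eV
print(f"m_phi ~ 10^{round(math.log10(m_phi_eV))} eV")   # -> 10^-33 eV
```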
Nevertheless, these naturalness arguments are by no means airtight, and it is worth considering specific particle-physics models for the quintessence field. In addition to the pseudo-Goldstone boson models just mentioned, these include models based on supersymmetric gauge theories [31, 170], supergravity [37, 5], small extra dimensions [29, 24], large extra dimensions [28, 22], quantum field theory effects in curved spacetime [185, 186], and non-minimal couplings to the curvature scalar [217, 253, 8, 198, 199, 64, 30]. Finally, the possibility has been raised that the scalar field responsible for driving inflation may also serve as quintessence [90, 191, 192, 106], although this proposal has been criticized for producing unwanted relics and isocurvature fluctuations.
There are other models of dark energy besides those based on nearly-massless scalar fields. One scenario is “solid” dark matter, typically based on networks of tangled cosmic strings or domain walls [255, 229, 39, 27]. Strings give an effective equation-of-state parameter $w_{\rm string} = -1/3$, and walls have $w_{\rm wall} = -2/3$, so walls are a better fit to the data at present. There is also the idea of dark matter particles whose masses increase as the universe expands, their energy thus redshifting away more slowly than that of ordinary matter [99, 9] (see also ). The cosmological consequences of this kind of scenario turn out to be difficult to analyze analytically, and work is still ongoing.