The most direct and theory-independent way to measure the cosmological constant would be to actually determine the value of the scale factor as a function of time. Unfortunately, the appearance of in formulae such as (42) renders this difficult. Nevertheless, with sufficiently precise information about the dependence of a distance measure on redshift we can disentangle the effects of spatial curvature, matter, and vacuum energy, and methods along these lines have been popular ways to try to constrain the cosmological constant.
Astronomers measure distance in terms of the “distance modulus” $m - M$, where $m$ is the apparent magnitude of the source and $M$ its absolute magnitude. The distance modulus is related to the luminosity distance via
$$m - M = 5 \log_{10}\left[d_L({\rm Mpc})\right] + 25.$$
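As an illustration of how this relation is used in practice, the following sketch (in Python; the parameter values and step counts are illustrative, not taken from the surveys discussed here) integrates the comoving distance numerically for a universe containing matter and vacuum energy, and returns the distance modulus:

```python
import math

def inv_E(z, omega_m, omega_l):
    """1/E(z), where H(z) = H0 * E(z); the curvature term is included
    so that non-flat parameter combinations are handled too."""
    omega_k = 1.0 - omega_m - omega_l
    return 1.0 / math.sqrt(omega_m * (1 + z)**3
                           + omega_k * (1 + z)**2 + omega_l)

def luminosity_distance(z, omega_m=0.3, omega_l=0.7, h=0.7, n=1000):
    """Luminosity distance in Mpc for a spatially flat universe:
    d_L = (1 + z) * (c/H0) * Integral[dz'/E(z')], via midpoint rule."""
    dz = z / n
    chi = sum(inv_E((i + 0.5) * dz, omega_m, omega_l) * dz for i in range(n))
    d_hubble = 2997.92458 / h  # Hubble distance c/H0 in Mpc
    return (1 + z) * d_hubble * chi

def distance_modulus(z, omega_m=0.3, omega_l=0.7, h=0.7):
    """m - M = 5 log10(d_L / 10 pc) = 5 log10(d_L in Mpc) + 25."""
    return 5.0 * math.log10(luminosity_distance(z, omega_m, omega_l, h)) + 25.0
```

For a source at $z = 0.5$ this gives $m - M \approx 42.3$ in a flat ($\Omega_{\rm M}$, $\Omega_\Lambda$) = (0.3, 0.7) model, roughly 0.4 magnitudes fainter than in a flat matter-only model, which is the size of the effect the supernova surveys must resolve.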
Recently, significant progress has been made by using Type Ia supernovae as “standardizable candles”. Supernovae are rare – perhaps a few per century in a Milky-Way-sized galaxy – but modern telescopes allow observers to probe very deeply into small regions of the sky, covering a very large number of galaxies in a single observing run. Supernovae are also bright, and Type Ia’s in particular all seem to be of nearly uniform intrinsic luminosity (absolute magnitude $M \sim -19.5$, typically comparable to the brightness of the entire host galaxy in which they appear). They can therefore be detected at high redshifts ($z \sim 1$), allowing in principle a good handle on cosmological effects [236, 108].
The fact that all SNe Ia are of similar intrinsic luminosities fits well with our understanding of these events as explosions which occur when a white dwarf, onto which mass is gradually accreting from a companion star, crosses the Chandrasekhar limit and explodes. (It should be noted that our understanding of supernova explosions is in a state of development, and theoretical models are not yet able to accurately reproduce all of the important features of the observed events. See [274, 114, 121] for some recent work.) The Chandrasekhar limit is a nearly-universal quantity, so it is not a surprise that the resulting explosions are of nearly-constant luminosity. However, there is still a scatter of approximately 40% in the peak brightness observed in nearby supernovae, which can presumably be traced to differences in the composition of the white dwarf atmospheres. Even if we could collect enough data that statistical errors could be reduced to a minimum, the existence of such an uncertainty would cast doubt on any attempts to study cosmology using SNe Ia as standard candles.
Fortunately, the observed differences in peak luminosities of SNe Ia are very closely correlated with observed differences in the shapes of their light curves: Dimmer SNe decline more rapidly after maximum brightness, while brighter SNe decline more slowly [200, 214, 115]. There is thus a one-parameter family of events, and measuring the behavior of the light curve along with the apparent luminosity allows us to largely correct for the intrinsic differences in brightness, reducing the scatter from 40% to less than 15% – sufficient precision to distinguish between cosmological models. (It seems likely that the single parameter can be traced to the amount of $^{56}$Ni produced in the supernova explosion; more nickel implies both a higher peak luminosity and a higher temperature and thus opacity, leading to a slower decline. It would be an exaggeration, however, to claim that this behavior is well-understood theoretically.)
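A schematic version of this one-parameter correction can be written down directly. In the sketch below, the slope and fiducial decline rate are illustrative placeholders, not the fitted calibration values used by the observing teams:

```python
def standardized_magnitude(m_peak, dm15, slope=0.8, dm15_fiducial=1.1):
    """Schematic width-luminosity correction: dm15 is the decline (in
    magnitudes) during the 15 days after peak brightness.  Fast decliners
    (large dm15) are intrinsically dimmer, so their measured peak
    magnitude is shifted brighter to place all events on a common scale.
    The slope and fiducial decline rate are illustrative, not fitted."""
    return m_peak - slope * (dm15 - dm15_fiducial)
```

An event declining at the fiducial rate is left unchanged, while fast decliners are corrected brighter and slow decliners dimmer; it is this correction that shrinks the intrinsic scatter enough for cosmology.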
Following pioneering work, two independent groups have undertaken searches for distant supernovae in order to measure cosmological parameters. Figure 3 shows the results for $m$ versus $z$ for the High-Z Supernova Team [102, 223, 212, 101], and Figure 4 shows the equivalent results for the Supernova Cosmology Project [195, 194, 197]. Under the assumption that the energy density of the universe is dominated by matter and vacuum components, these data can be converted into limits on $\Omega_{\rm M}$ and $\Omega_\Lambda$, as shown in Figures 5 and 6.
It is clear that the confidence intervals in the $\Omega_{\rm M}$–$\Omega_\Lambda$ plane are consistent for the two groups, with somewhat tighter constraints obtained by the Supernova Cosmology Project, who have more data points. The surprising result is that both teams favor a positive cosmological constant, and strongly rule out the traditional ($\Omega_{\rm M} = 1$, $\Omega_\Lambda = 0$) favorite universe. They are even inconsistent with an open universe with zero cosmological constant, given what we know about the matter density of the universe (see below).
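The conversion from magnitude–redshift data to constraints in the $\Omega_{\rm M}$–$\Omega_\Lambda$ plane can be sketched as a simple grid-based chi-squared fit. The data below are synthetic (noise-free points generated from a (0.3, 0.7) cosmology), included only to show the machinery; the nuisance offset absorbing $H_0$ and the supernova absolute magnitude is marginalized analytically:

```python
import math

def E(z, omega_m, omega_l):
    omega_k = 1.0 - omega_m - omega_l
    return math.sqrt(omega_m * (1 + z)**3 + omega_k * (1 + z)**2 + omega_l)

def mu_shape(z, omega_m, omega_l, n=400):
    """Distance modulus up to an additive constant (H0 and the SN
    absolute magnitude enter only through that constant)."""
    dz = z / n
    chi = sum(dz / E((i + 0.5) * dz, omega_m, omega_l) for i in range(n))
    return 5.0 * math.log10((1 + z) * chi)

def chi_squared(data, omega_m, omega_l):
    """data: list of (z, observed distance modulus, sigma).  The unknown
    additive offset is fit analytically as the weighted mean residual."""
    resid = [m - mu_shape(z, omega_m, omega_l) for z, m, _ in data]
    wts = [1.0 / s**2 for _, _, s in data]
    offset = sum(r * w for r, w in zip(resid, wts)) / sum(wts)
    return sum(w * (r - offset)**2 for r, w in zip(resid, wts))

# Synthetic "observations" drawn (noise-free) from a (0.3, 0.7) universe:
data = [(z, mu_shape(z, 0.3, 0.7) + 43.0, 0.2)
        for z in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Scanning `chi_squared` over a grid of ($\Omega_{\rm M}$, $\Omega_\Lambda$) values yields elongated confidence contours of the sort shown in the figures, since data in this redshift range constrain essentially one combination of the two parameters.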
Given the significance of these results, it is natural to ask what level of confidence we should have in them. There are a number of potential sources of systematic error which have been considered by the two teams; see the original papers [223, 212, 197] for a thorough discussion. The two most worrisome possibilities are intrinsic differences between Type Ia supernovae at high and low redshifts [75, 213], and possible extinction via intergalactic dust [2, 3, 4, 226, 241]. (There is also the fact that intervening weak lensing can change the distance-magnitude relation, but this seems to be a small effect in realistic universes [123, 143].) Both effects have been carefully considered, and are thought to be unimportant, although a better understanding will be necessary to draw firm conclusions. Here, I will briefly mention some of the relevant issues.
As thermonuclear explosions of white dwarfs, Type Ia supernovae can occur in a wide variety of environments. Consequently, a simple argument against evolution is that the high-redshift environments, while chronologically younger, should be a subset of all possible low-redshift environments, which include regions that are “young” in terms of chemical and stellar evolution. Nevertheless, even a small amount of evolution could ruin our ability to reliably constrain cosmological parameters. In their original papers [223, 212, 197], the supernova teams found impressive consistency in the spectral and photometric properties of Type Ia supernovae over a variety of redshifts and environments (e.g., in elliptical vs. spiral galaxies). More recently, however, Riess et al. have presented tentative evidence for a systematic difference in the properties of high- and low-redshift supernovae, claiming that the risetimes (from initial explosion to maximum brightness) were longer in the high-redshift events. Apart from the issue of whether the existing data support this finding, it is not immediately clear whether such a difference is relevant to the distance determinations: first, because the risetime is not used in determining the absolute luminosity at peak brightness, and second, because a process which only affects the very early stages of the light curve is most plausibly traced to differences in the outer layers of the progenitor, which may have a negligible effect on the total energy output. Nevertheless, any indication of evolution could bring into question the fundamental assumptions behind the entire program. It is therefore essential to improve the quality of both the data and the theories so that these issues may be decisively settled.
Other than evolution, obscuration by dust is the leading concern about the reliability of the supernova results. Ordinary astrophysical dust does not obscure equally at all wavelengths, but scatters blue light preferentially, leading to the well-known phenomenon of “reddening”. Spectral measurements by the two supernova teams reveal a negligible amount of reddening, implying that any hypothetical dust must be a novel “grey” variety. This possibility has been investigated by a number of authors [2, 3, 4, 226, 241]. These studies have found that even grey dust is highly constrained by observations: first, it is likely to be intergalactic rather than within galaxies, or it would lead to additional dispersion in the magnitudes of the supernovae; and second, intergalactic dust would absorb ultraviolet/optical radiation and re-emit it at far infrared wavelengths, leading to stringent constraints from observations of the cosmological far-infrared background. Thus, while the possibility of obscuration has not been entirely eliminated, it requires a novel kind of dust which is already highly constrained (and may be convincingly ruled out by further observations).
According to the best of our current understanding, then, the supernova results indicating an accelerating universe seem likely to be trustworthy. Needless to say, however, the possibility of a heretofore neglected systematic effect looms menacingly over these studies. Future experiments, including a proposed satellite dedicated to supernova cosmology, will both help us improve our understanding of the physics of supernovae and allow a determination of the distance/redshift relation to sufficient precision to distinguish between the effects of a cosmological constant and those of more mundane astrophysical phenomena. In the meantime, it is important to obtain independent corroboration using other methods.
The discovery by the COBE satellite of temperature anisotropies in the cosmic microwave background inaugurated a new era in the determination of cosmological parameters. To characterize the temperature fluctuations on the sky, we may decompose them into spherical harmonics,
$$\frac{\Delta T}{T} = \sum_{lm} a_{lm} Y_{lm}(\theta, \phi),$$
and express the amount of anisotropy at multipole moment $l$ via the power spectrum,
$$C_l = \langle |a_{lm}|^2 \rangle.$$
Higher multipoles correspond to smaller angular separations on the sky, $\theta = 180^\circ/l$. Within any given family of models, the predicted $C_l$’s will depend on the parameters specifying the particular cosmology, and evidence has been mounting in favor of models based on Gaussian, adiabatic, nearly scale-free primordial perturbations. (The inflationary universe scenario [113, 159, 6] typically predicts these kinds of perturbations.)
Although the dependence of the $C_l$’s on the parameters can be intricate, nature has chosen not to test the patience of cosmologists, as one of the easiest features to measure – the location in $l$ of the first “Doppler peak”, an increase in power due to acoustic oscillations – provides one of the most direct handles on the total cosmic energy density $\Omega$, one of the most interesting parameters. The first peak (the one at lowest $l$) corresponds to the angular scale subtended by the Hubble radius at the time when the CMB was formed (known variously as “decoupling” or “recombination” or “last scattering”). The angular scale at which we observe this peak is tied to the geometry of the universe: In a negatively (positively) curved universe, photon paths diverge (converge), leading to a smaller (larger) apparent angular size as compared to a flat universe. Since the scale is set mostly by microphysics, this geometrical effect is dominant, and we can relate the spatial curvature as characterized by $\Omega$ to the observed peak in the CMB spectrum via $l_{\rm peak} \sim 220/\sqrt{\Omega}$ [141, 138, 130]; additional features of the spectrum, such as the heights of the peaks and the locations of secondary peaks, encode other cosmological parameters [34, 128, 137, 276].
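The dependence of the first peak's location on the total density can be evaluated with the standard estimate $l_{\rm peak} \approx 220/\sqrt{\Omega}$; a minimal sketch (the coefficient is approximate):

```python
import math

def first_peak_multipole(omega_total):
    """Approximate multipole of the first acoustic peak as a function of
    the total density parameter, via l_peak ~ 220 / sqrt(Omega)."""
    return 220.0 / math.sqrt(omega_total)
```

A flat universe puts the first peak at $l \approx 220$, while an open universe with $\Omega = 0.3$ pushes it to $l \approx 400$, i.e., to smaller angular scales.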
Figure 7 shows a summary of data as of 1998, with various experimental results consolidated into bins, along with two theoretical models. Since that time, the data have continued to accumulate (see for example [172, 171]), and the near future should see a wealth of new results of ever-increasing precision. It is clear from the figure that there is good evidence for a peak at approximately $l_{\rm peak} \sim 200$, as predicted in a spatially-flat universe. This result can be made more quantitative by fitting the CMB data to models with different values of $\Omega_{\rm M}$ and $\Omega_\Lambda$ [35, 26, 164, 210, 72], or by combining the CMB data with other sources, such as supernovae or large-scale structure [268, 238, 101, 127, 237, 78, 38, 14]. Figure 8 shows the constraints from the CMB in the $\Omega_{\rm M}$–$\Omega_\Lambda$ plane, using data from the 1997 test flight of the BOOMERANG experiment. (Although the data used to make this plot are essentially independent of those shown in the previous figure, the constraints obtained are nearly the same.) It is clear that the CMB data provide constraints which are complementary to those obtained using supernovae; the two approaches yield confidence contours which are nearly orthogonal in the $\Omega_{\rm M}$–$\Omega_\Lambda$ plane. The region of overlap is in the vicinity of ($\Omega_{\rm M}$, $\Omega_\Lambda$) = (0.3, 0.7), which we will see below is also consistent with other determinations.
Many cosmological tests, such as the two just discussed, will constrain some combination of $\Omega_{\rm M}$ and $\Omega_\Lambda$. It is therefore useful to consider tests of $\Omega_{\rm M}$ alone, even if our primary goal is to determine $\Omega_\Lambda$. (In truth, it is also hard to constrain $\Omega_{\rm M}$ alone, as almost all methods actually constrain some combination of $\Omega_{\rm M}$ and the Hubble constant $h \equiv H_0/(100{\rm \ km\ s^{-1}\ Mpc^{-1}})$; the HST Key Project on the extragalactic distance scale finds $h = 0.71 \pm 0.06$, which is consistent with other methods, and what I will assume below.)
For years, determinations of $\Omega_{\rm M}$ based on dynamics of galaxies and clusters have yielded values between approximately 0.1 and 0.4 – noticeably larger than the density parameter in baryons as inferred from primordial nucleosynthesis, $\Omega_{\rm B} \approx 0.04$ [224, 41], but noticeably smaller than the critical density. The last several years have witnessed a number of new methods being brought to bear on the question; the quantitative results have remained unchanged, but our confidence in them has increased greatly.
A thorough discussion of determinations of $\Omega_{\rm M}$ requires a review all its own, and good ones are available [66, 15, 247, 88, 206]. Here I will just sketch some of the important methods.
The traditional method to estimate the mass density of the universe is to “weigh” a cluster of galaxies, divide by its luminosity, and extrapolate the result to the universe as a whole. Although clusters are not representative samples of the universe, they are sufficiently large that such a procedure has a chance of working. Studies applying the virial theorem to cluster dynamics have typically obtained values $\Omega_{\rm M} = 0.2 \pm 0.1$ [45, 66, 15]. Although it is possible that the global value of the mass-to-light ratio differs appreciably from its value in clusters, extrapolations from small scales do not seem to reach the critical density. New techniques to weigh the clusters, including gravitational lensing of background galaxies and temperature profiles of the X-ray gas, while not yet in perfect agreement with each other, reach essentially similar conclusions.
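The “weighing” step can be illustrated with an order-of-magnitude virial estimate; the numbers used below are representative cluster values, not data from any particular system:

```python
def virial_mass_solar(sigma_km_s, radius_mpc):
    """Order-of-magnitude virial mass M ~ sigma^2 R / G for a cluster
    with line-of-sight velocity dispersion sigma and radius R."""
    G = 4.301e-9  # Newton's constant in Mpc (km/s)^2 / M_sun
    return sigma_km_s**2 * radius_mpc / G
```

A velocity dispersion of roughly 1000 km/s and a radius of roughly 1.5 Mpc give a mass of a few times $10^{14}$ solar masses; dividing by the cluster luminosity and multiplying by the mean luminosity density of the universe then yields the kind of $\Omega_{\rm M}$ estimates quoted above.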
Rather than measuring the mass relative to the luminosity density, which may be different inside and outside clusters, we can also measure it with respect to the baryon density, which is very likely to have the same value in clusters as elsewhere in the universe, simply because there is no way to segregate the baryons from the dark matter on such large scales. Most of the baryonic mass is in the hot intracluster gas, and the fraction of total mass in this form can be measured either by direct observation of X-rays from the gas or by distortions of the microwave background by scattering off hot electrons (the Sunyaev–Zeldovich effect), typically yielding a baryon fraction $\Omega_{\rm B}/\Omega_{\rm M} \approx 0.1$–$0.2$. Since primordial nucleosynthesis provides a determination of $\Omega_{\rm B} \approx 0.04$, these measurements imply $\Omega_{\rm M} \approx 0.2$–$0.4$, consistent with the dynamical determinations above.
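The arithmetic behind this argument is simple enough to sketch directly; the gas fraction below is a representative mid-range value, not a measurement:

```python
def omega_m_from_baryon_fraction(omega_b=0.04, f_baryon=0.15):
    """If clusters fairly sample the cosmic baryon-to-dark-matter ratio,
    then Omega_M ~ Omega_B / f_baryon.  Here omega_b is the
    nucleosynthesis value and f_baryon = 0.15 is a representative
    cluster baryon fraction in the 0.1-0.2 range."""
    return omega_b / f_baryon
```

With these inputs the matter density comes out near 0.27, squarely in the favored range; varying the gas fraction over 0.1–0.2 spans roughly 0.2–0.4.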
Another handle on the density parameter in matter comes from properties of clusters at high redshift. The very existence of massive clusters has been used to argue in favor of $\Omega_{\rm M} \sim 0.3$, and the lack of appreciable evolution of clusters from high redshifts to the present [17, 44] provides additional evidence that $\Omega_{\rm M} < 1$.
The story of large-scale motions is more ambiguous. The peculiar velocities of galaxies are sensitive to the underlying mass density, and thus to $\Omega_{\rm M}$, but also to the “bias” describing the relative amplitude of fluctuations in galaxies and mass [66, 65]. Difficulties both in measuring the flows and in disentangling the mass density from other effects make it difficult to draw conclusions at this point, and at present it is hard to say much more than $0.2 \lesssim \Omega_{\rm M} \lesssim 1$.
Finally, the matter density parameter can be extracted from measurements of the power spectrum of density fluctuations. As with the CMB, predicting the power spectrum requires both an assumption of the correct theory and a specification of a number of cosmological parameters. In simple models (e.g., with only cold dark matter and baryons, no massive neutrinos), the spectrum can be fit (once the amplitude is normalized) by a single “shape parameter”, which is found to be equal to $\Gamma = \Omega_{\rm M} h$. Observations then yield $\Gamma \sim 0.25$, or $\Omega_{\rm M} \sim 0.36$ for $h = 0.7$. For a more careful comparison between models and observations, see [156, 157, 71, 205].
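Inverting the shape parameter is equally direct; the sketch below assumes the relation $\Gamma = \Omega_{\rm M} h$ of simple cold-dark-matter models, with representative input values:

```python
def omega_m_from_shape_parameter(gamma=0.25, h=0.7):
    """Invert Gamma = Omega_M * h, the shape parameter of the matter
    power spectrum in simple CDM+baryon models.  gamma = 0.25 and
    h = 0.7 are representative values, not fits to any dataset."""
    return gamma / h
```

The result, about 0.36, is again comfortably below the critical density and consistent with the other determinations.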
Thus, we have a remarkable convergence on values for the density parameter in matter:
$$0.1 \lesssim \Omega_{\rm M} \lesssim 0.4.$$
The volume of space back to a specified redshift, given by (44), depends sensitively on $\Omega_\Lambda$. Consequently, counting the apparent density of observed objects, whose actual density per cubic Mpc is assumed to be known, provides a potential test for the cosmological constant [109, 96, 244, 48]. Like tests of distance vs. redshift, a significant problem for such methods is the luminosity evolution of whatever objects one might attempt to count. A modern attempt to circumvent this difficulty is to use the statistics of gravitational lensing of distant galaxies; the hope is that the number of condensed objects which can act as lenses is less sensitive to evolution than the number of visible objects.
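The volume effect itself is easy to exhibit: for a spatially flat universe the comoving volume per steradian out to redshift $z$ is $\chi^3/3$, with $\chi$ the comoving distance. A minimal sketch in Hubble units:

```python
import math

def E(z, omega_m, omega_l):
    omega_k = 1.0 - omega_m - omega_l
    return math.sqrt(omega_m * (1 + z)**3 + omega_k * (1 + z)**2 + omega_l)

def comoving_volume(z_max, omega_m, omega_l, n=1000):
    """Comoving volume per steradian out to z_max, in units of the
    Hubble volume (c/H0)^3, valid for spatially flat universes."""
    dz = z_max / n
    chi = sum(dz / E((i + 0.5) * dz, omega_m, omega_l) for i in range(n))
    return chi**3 / 3.0
```

Out to $z = 2$, a flat (0.3, 0.7) universe contains nearly three times the comoving volume of a flat matter-only universe, which is why number counts are a sensitive probe of $\Omega_\Lambda$.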
In a spatially flat universe, the probability of a source at redshift $z_s$ being lensed, relative to the fiducial ($\Omega_{\rm M} = 1$, $\Omega_\Lambda = 0$) case, is given by
$$P_{\rm lens} = \frac{15}{4}\left[1 - (1+z_s)^{-1/2}\right]^{-3} \int_{a_s}^{1} \frac{H_0}{\dot a}\left[\frac{d_A(0,a)\, d_A(a, a_s)}{d_A(0, a_s)}\right]^2 da,$$
where $a_s = 1/(1+z_s)$, and $d_A(a_1, a_2)$ is the angular diameter distance between scale factors $a_1$ and $a_2$.
As shown in Figure 9, the probability rises dramatically as $\Omega_\Lambda$ is increased to unity while we keep $\Omega_{\rm M} + \Omega_\Lambda = 1$ fixed. Thus, the absence of a large number of such lenses would imply an upper limit on $\Omega_\Lambda$.
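This trend can be checked with a direct calculation of the lensing optical depth for a non-evolving comoving population of isothermal lenses, a standard textbook setup; the overall normalization and lens-population details are suppressed, so only ratios between cosmologies are meaningful:

```python
import math

def E(z, omega_m, omega_l):
    omega_k = 1.0 - omega_m - omega_l
    return math.sqrt(omega_m * (1 + z)**3 + omega_k * (1 + z)**2 + omega_l)

def chi(z, omega_m, omega_l, n=300):
    """Comoving distance in units of the Hubble radius c/H0."""
    dz = z / n
    return sum(dz / E((i + 0.5) * dz, omega_m, omega_l) for i in range(n))

def lensing_optical_depth(z_source, omega_m, omega_l, n=200):
    """Relative strong-lensing optical depth for flat universes:
    integrate the squared distance combination D_ol*D_ls/D_os over lens
    redshift, with the (1+z)^2/E(z) factor coming from the path length
    through a constant comoving density of lenses."""
    chi_s = chi(z_source, omega_m, omega_l)
    dz = z_source / n
    tau = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        chi_l = chi(z, omega_m, omega_l)
        # angular-diameter distance combination for a flat universe
        d_comb = chi_l * (chi_s - chi_l) / ((1 + z) * chi_s)
        tau += (1 + z)**2 / E(z, omega_m, omega_l) * d_comb**2 * dz
    return tau
```

For sources at $z = 2$, the flat (0.3, 0.7) model yields roughly three times the optical depth of the (1, 0) model, which is why the observed frequency of lenses limits $\Omega_\Lambda$.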
Analysis of lensing statistics is complicated by uncertainties in evolution, extinction, and biases in the lens discovery procedure. It has been argued [146, 83] that the existing data allow us to place an upper limit of $\Omega_\Lambda \lesssim 0.7$ in a flat universe. However, other groups [52, 51] have claimed that the current data actually favor a nonzero cosmological constant. The near future will bring larger, more objective surveys, which should allow these ambiguities to be resolved. Other manifestations of lensing can also be used to constrain $\Omega_\Lambda$, including statistics of giant arcs, deep weak-lensing surveys, and lensing in the Hubble Deep Field.
There is a tremendous variety of ways in which a nonzero cosmological constant can manifest itself in observable phenomena. Here is an incomplete list of additional possibilities; see also [48, 58, 218].
This work is licensed under a Creative Commons License.