Two crucial questions are often asked in the context of dark-energy surveys: how precisely should we measure the dark-energy equation of state w, and at what point can we claim to have established that the acceleration is due to a cosmological constant?
In this section we will attempt to answer these questions at least partially, in two different ways. We will start by examining whether we can draw useful lessons from inflation, and then we will look at what we can learn from arguments based on Bayesian model comparison.
In the first part we will see that for single-field slow-roll inflation models we effectively measure w = -1 with percent-level accuracy (see Figure 2); however, the deviation from a scale-invariant spectrum means that we nonetheless observe a dynamical evolution and, thus, a deviation from an exact and constant equation of state w = -1. Therefore, we know that inflation was not due to a cosmological constant; we also know that we would see no deviation from a de Sitter expansion at a precision lower than the one Euclid will reach.
In the second part we will consider the Bayesian evidence in favor of a true cosmological constant if we keep finding w = -1; we will see that for priors on w_0 and w_a of order unity, a precision like the one of Euclid is necessary to favor a true cosmological constant decisively. We will also discuss how this conclusion changes depending on the choice of priors.
In all probability the observed late-time acceleration of the universe is not the first period of accelerated expansion that occurred during its evolution: the current standard model of cosmology incorporates a much earlier phase with w ≈ -1, called inflation. Such a period provides a natural explanation for several late-time observations, most notably the spatial flatness of the universe, the homogeneity of the CMB on scales that would otherwise never have been in causal contact, and the nearly scale-invariant spectrum of primordial fluctuations.
In addition, inflation provides a mechanism to get rid of unwanted relics from phase transitions in the early universe, like monopoles, that arise in certain scenarios (e.g., grand-unified theories).
While there is no conclusive proof that an inflationary phase took place in the early universe, it is surprisingly difficult to create the observed fluctuation spectrum in alternative scenarios that are strictly causal and only act on sub-horizon scales [854, 803].
If, however, inflation took place, then it seems natural to ask whether its observed properties appear similar to our current knowledge about the dark energy and, if so, whether we can use inflation to learn something about the dark energy. The first lesson to draw from inflation is that it was not due to a pure cosmological constant. This is immediately clear since we exist: inflation ended. We can go even further: if Planck confirms the WMAP observation of a deviation from a scale-invariant initial spectrum (n_s ≠ 1), then this excludes an exactly exponential expansion during the observable epoch and, thus, also a temporary, effective cosmological constant.
If there had been any observers during the observationally accessible period of inflation, what would they have been seeing? Following the analysis in , we notice that
As already said earlier, we conclude that inflation was not due to a cosmological constant. However, an observer back then would nonetheless have found w ≈ -1. Thus, an observation of w ≈ -1 (at least down to an error of about 0.02, see Figure 2) does not provide a very strong reason to believe that we are dealing with a cosmological constant.
We can rewrite Eq. (1.5.2) as
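The suppressed relation can be sketched in standard single-field slow-roll notation (our reconstruction; ε denotes the first slow-roll parameter and r the tensor-to-scalar ratio):

```latex
1 + w \;=\; \frac{2}{3}\,\epsilon\,, \qquad r = 16\,\epsilon
\quad\Longrightarrow\quad 1 + w \;=\; \frac{r}{24}\,.
```

In this form it is clear that an upper limit on the primordial gravitational-wave amplitude translates into an upper limit on the deviation of w from -1 during the observable epoch of inflation.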
However, this last argument is highly speculative, and at least for inflation we know that there are classes of models where the cancellation is indeed natural, which is why one cannot give a lower limit for the amplitude of primordial gravitational waves. On the other hand, the observed period of inflation probably lies in the middle of a long slow-roll phase, during which w tends to be close to -1 (cf. Figure 3), while near the end of inflation the deviations become large. Additionally, inflation happened at an energy scale somewhere between 1 MeV and the Planck scale, while the energy scale of the late-time accelerated expansion is of the order of a meV. At least in this respect the two are very different.
Despite the previous arguments, it is natural to ask for a connection between the two known acceleration periods. In fact, in the last few years there has been renewed model building in inflationary cosmology that considers the fundamental Higgs as the inflaton field. Such an elegant and economical model can give rise to the observed amplitude of CMB anisotropies when we include a large non-minimal coupling of the Higgs to the scalar curvature. In the context of quantum field theory, the running of the Higgs mass from the electroweak scale to the Planck scale is affected by this non-minimal coupling in such a way that the beta function of the Higgs' self-coupling vanishes at an intermediate scale, if the mass of the Higgs is precisely 126 GeV, as measured at the LHC. This partial fixed point (other beta functions do not vanish) suggests an enhancement of symmetry at that scale, and the presence of a Nambu–Goldstone boson (the dilaton field) associated with the breaking of scale invariance. In a subsequent paper, the Higgs-Dilaton scenario was explored in full detail. The model predicts a bound on the scalar spectral index, with negligible associated running, and a tensor-to-scalar ratio which, although out of reach of the Planck satellite mission, is within the capabilities of future CMB satellite projects like PRISM. Moreover, the model predicts that, after inflation, the dilaton plays the role of a thawing quintessence field, whose slow motion determines a concrete relation between the early-universe fluctuations and the equation of state of dark energy, which could be within reach of the Euclid satellite mission. Furthermore, within the Higgs-Dilaton model there is also a relation between the running of the scalar tilt and the variation of the dark-energy equation of state, a prediction that can easily be ruled out with future surveys.
These relationships between early- and late-universe acceleration parameters constitute a fundamental-physics connection within a very concrete and economical model, in which the Higgs plays the role of the inflaton and the dilaton is a thawing quintessence field whose dynamics has almost no freedom and satisfies all of the present constraints.
In the previous section we saw that inflation provides an argument why an observation of w ≈ -1 need not strongly support a cosmological constant. Let us now investigate this argument more precisely with Bayesian model comparison. One model posits that the accelerated expansion is due to a cosmological constant. The competing models assume that the dark energy is dynamical, in a way that is well parametrized either by an arbitrary constant w (the constant-w model) or by a linear fit w(a) = w_0 + (1 - a) w_a (the linear model). Under the assumption that no deviation from w = -1 will be detected in the future, at which point should we stop trying to measure w ever more accurately? The relevant target here is to quantify at what point we will be able to rule out an entire class of theoretical dark-energy models (when compared to ΛCDM) at a certain threshold for the strength of evidence.
Here we are using the constant and linear parametrizations of w because, on the one hand, we can consider the constant w to be an effective quantity, averaged over redshift with the appropriate weighting factor for the observable, and, on the other hand, because the precision targets for observations are conventionally phrased in terms of the figure of merit (FoM). We will, therefore, find a direct link between the model probability and the FoM. It would be an interesting exercise to repeat the calculations with a more general model, using e.g. PCA, although we would expect to reach a similar conclusion.
Bayesian model comparison aims to compute the relative model probability
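In standard notation (our choice of symbols), the quantity in question follows from Bayes' theorem, with the model likelihood (evidence) obtained by marginalizing over the model parameters:

```latex
P(M_i \mid d) \;\propto\; P(d \mid M_i)\, P(M_i)\,, \qquad
P(d \mid M_i) \;=\; \int \mathcal{L}(\theta)\, p(\theta \mid M_i)\, \mathrm{d}\theta\,,
```

so that the relative probability of two models with equal prior probability is given by the Bayes factor B_01 = P(d | M_0) / P(d | M_1).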
Let us start by considering the Bayes factor between a cosmological-constant model and a model with a free but constant effective w. If we assume that the data are compatible with w_eff = -1 with an uncertainty σ, then the Bayes factor in favor of a cosmological constant is given by
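The resulting expression is a Savage–Dickey density ratio: the Bayes factor equals the posterior density at w = -1 divided by the prior density there. A minimal numerical sketch (our own illustrative prior choice, a flat prior on w with w = -1 at its boundary, and a Gaussian likelihood centred on -1):

```python
import numpy as np

def bayes_factor_lambda(sigma, w_lo, w_hi, n=200001):
    """Savage-Dickey estimate of the Bayes factor B(Lambda vs. constant-w model).

    Assumptions (illustrative, not the exact prior choices of the text):
    flat prior on w in [w_lo, w_hi]; Gaussian likelihood of width sigma
    centred on w = -1, i.e. data fully compatible with a cosmological constant.
    """
    w = np.linspace(w_lo, w_hi, n)
    dw = w[1] - w[0]
    like = np.exp(-0.5 * ((w + 1.0) / sigma) ** 2)     # likelihood, peak value 1
    evidence = like.sum() * dw / (w_hi - w_lo)         # int L(w) p(w) dw
    posterior_at_m1 = (1.0 / (w_hi - w_lo)) / evidence # L(-1) = 1 at the peak
    return posterior_at_m1 * (w_hi - w_lo)             # divide by prior density

# "Fluid-like" prior w in [-1, -1/3]; smaller error bars favour Lambda more:
for sigma in (0.1, 0.05, 0.01):
    print(f"sigma = {sigma:4.2f}  B = {bayes_factor_lambda(sigma, -1.0, -1/3):.1f}")
```

For this prior of width 2/3 the numerics reproduce the analytic result B = 2Δ/(√(2π) σ), i.e. odds of roughly 10:1 in favor of Λ for σ = 0.05.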
We plot in Figure 4 contours of constant observational accuracy in the model predictivity space, derived from Eq. (1.5.5) and corresponding to odds of 20 to 1 in favor of a cosmological constant (slightly above the “moderate” threshold). The figure can be interpreted as giving the space of extended models that can be significantly disfavored with respect to w = -1 at a given accuracy. The results for the three benchmark models mentioned above (fluid-like, phantom, or small departures from w = -1) are summarized in Table 1. Conversely, we can ask which precision needs to be reached to support ΛCDM at a given level. This is shown in Table 2 for odds of 20:1 and 150:1. We see that to rule out a fluid-like model, which also covers the parameter space expected for canonical scalar-field dark energy, we need to reach a precision comparable to the one that the Euclid satellite is expected to attain.
By considering the linear model w(a) = w_0 + (1 - a) w_a we can also provide a direct link with the target DETF FoM. Let us choose (fairly arbitrarily) a flat probability distribution for the prior, of widths Δ_w0 and Δ_wa in the dark-energy parameters, so that the value of the prior is 1/(Δ_w0 Δ_wa) everywhere. Let us assume that the likelihood is Gaussian in w_0 and w_a and centered on ΛCDM (i.e., the data fully support w = -1 as the dark energy).
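Under these assumptions the Bayes factor is, up to boundary factors of order unity, the ratio of prior to posterior volume, which links it directly to the FoM (our notation; correlations between w_0 and w_a are neglected for simplicity):

```latex
B \;\simeq\; \frac{\Delta_{w_0}\,\Delta_{w_a}}{2\pi\,\sigma_{w_0}\sigma_{w_a}}
\;\propto\; \mathrm{FoM}\,, \qquad \mathrm{FoM} \sim \frac{1}{\sigma_{w_0}\sigma_{w_a}}\,.
```

Thus, roughly speaking, doubling the FoM doubles the odds in favor of Λ as long as the data remain compatible with w = -1.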
As above, we need to distinguish different cases depending on the width of the prior. If one accepts the argument of the previous section that we expect only a small deviation from w = -1, and sets a prior width of order 0.01 on both w_0 and w_a, then the posterior is dominated by the prior, and the ratio will be of order 1 if the future data are compatible with w = -1. Since the precision of the experiment is comparable to the expected deviation, both ΛCDM and evolving dark energy are equally probable (as argued above and shown for the corresponding model in Table 1), and we have to wait for a detection of w ≠ -1 or a significant further increase in precision (cf. the last row in Table 2).
However, one often considers a much wider prior range, for example a fluid-like model with priors on w_0 and w_a of order unity and equal probability everywhere (neglecting some subtleties near the boundary). If the likelihood is much narrower than the prior range, then the value of the normalized posterior at (w_0, w_a) = (-1, 0) will be 2/(2π σ_w0 σ_wa) (since we excluded w_0 < -1; otherwise it would be half this value). The Bayes factor, the ratio of posterior to prior density at that point, is then B ≈ 2 Δ_w0 Δ_wa/(2π σ_w0 σ_wa) if the data are in full agreement with w = -1, a result that is stable under small variations of the prior as well.
A similar analysis could be easily carried out to compare the cosmological constant model against departures from Einstein gravity, thus giving some useful insight into the potential of future surveys in terms of Bayesian model selection.
To summarize, we used inflation as a dark-energy prototype to show that the current experimental bounds on w are not yet sufficient to significantly favor a cosmological constant over other models. In addition, even when expecting a deviation from w = -1 of order unity, our current knowledge of w does not allow us to favor Λ strongly in a Bayesian context. Here we showed that we need to reach percent-level accuracy both to have any chance of observing a deviation of w from -1 if the dark energy is similar to inflation, and because it is at this point that a cosmological constant starts to be favored decisively for prior widths of order unity. In either scenario, we do not expect to be able to improve our knowledge much with a lower-precision measurement of w. The dark energy can of course be quite different from the inflaton and may lead to larger deviations from w = -1. This would indeed be the preferred situation for Euclid, as then we will be able to investigate much more easily the physical origin of the accelerated expansion. We can, however, have departures from ΛCDM even if w is very close to -1 today. In fact, most present models of modified gravity and dynamical dark energy have a value of w that is asymptotically close to -1 (in the sense that large departures from this value are already excluded). In this sense, for example, early dark-energy parameterizations test the amount of dark energy in the past, which can still be non-negligible. Similarly, a fifth force can lead to a background similar to ΛCDM but with different effects on perturbations and structure formation.
As discussed in Section 1.4, all dark energy and modified gravity models can be described with the same effective metric degrees of freedom. This makes it impossible in principle to distinguish clearly between the two possibilities with cosmological observations alone. But while the cleanest tests would come from laboratory experiments, this may well be impossible to achieve. We would expect that model comparison analyses would still favor the correct model as it should provide the most elegant and economical description of the data. However, we may not know the correct model a priori, and it would be more useful if we could identify generic differences between the different classes of explanations, based on the phenomenological description that can be used directly to analyze the data.
Looking at the effective energy-momentum tensor of the dark-energy sector, we can either try to find a hint in the form of the pressure perturbation δp or in the effective anisotropic stress σ. While all scalar-field dark energy affects δp (and for multiple fields with different sound speeds in potentially quite complex ways), such models generically have σ = 0. The opposite is also true: modified-gravity models generically have σ ≠ 0. Radiation and neutrinos will contribute to the anisotropic stress on cosmological scales, but their contribution is safely negligible in the late-time universe. In the following sections we will first look at models with a single extra degree of freedom, for which we will find that σ ≠ 0 is a firm prediction. We will then consider an example with multiple degrees of freedom.
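The role of the anisotropic stress can be made explicit in the conformal Newtonian gauge, with metric potentials Φ and Ψ (standard linear perturbation theory; sign and normalization conventions differ between authors):

```latex
k^2\left(\Phi - \Psi\right) \;=\; 12\pi G\, a^2\, (\bar\rho + \bar p)\,\sigma\,,
```

so that Φ = Ψ whenever σ vanishes, and any observed difference between the two potentials points to a non-zero effective anisotropic stress.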
In the prototypical scalar-tensor theory, where the scalar φ is coupled to the Ricci scalar R through a coupling function F(φ), the anisotropic stress vanishes only for constant F. This is very similar to the f(R) case (where the role of the coupling is now played by df/dR). In both cases the generic model with vanishing anisotropic stress reduces to a constant coupling (for scalar-tensor) or to f(R) linear in R, and in both cases we find the GR limit. The other possibility, a coupling with a finely tuned evolution, imposes a very specific evolution on the perturbations that in general does not agree with observations.
Another possible way to build a theory that deviates from GR is to use a function of the second-order Lovelock scalar, the Gauss–Bonnet term G. The Gauss–Bonnet term by itself is a topological invariant in 4 spacetime dimensions and does not contribute to the equations of motion. It is useful here since it avoids an Ostrogradski-type instability. In f(G) models the situation is slightly more complicated than in the scalar-tensor case, as
Finally, in DGP one has, with the notation of ,
In all of these examples only the GR limit has consistently no effective anisotropic stress in situations compatible with observational results (matter-dominated evolution with a transition towards a state with w ≈ -1).
In models with multiple degrees of freedom it is, at least in principle, possible to balance the contributions in order to achieve a net vanishing anisotropic stress. One explicit study considers this balancing in generalized gravity with multiple degrees of freedom (please refer to that paper for details). The general equation,
In summary, none of the standard examples with a single extra degree of freedom discussed above allows for a viable model with vanishing anisotropic stress. While finely balanced solutions can be constructed for models with several degrees of freedom, one would need to link the motion in model space to the evolution of the universe in order to preserve σ = 0. This requires even more fine-tuning, and in some cases is not possible at all, most notably for evolution to a de Sitter state. The effective anisotropic stress therefore appears to be a very good quantity to look at when searching for generic conclusions on the nature of the accelerated expansion from cosmological observations.
As explained in earlier sections of this report, modified-gravity models cannot be distinguished from dark-energy models by using solely the FLRW background equations. But by comparing the background expansion rate of the universe with observables that depend on linear perturbations of an FLRW spacetime we can hope to distinguish between these two categories of explanations. An efficient way to do this is via a parameterized, model-independent framework that describes cosmological perturbation theory in modified gravity. We present here one such framework, the parameterized post-Friedmann formalism, that implements possible extensions to the linearized gravitational field equations.
The parameterized post-Friedmann approach (PPF) is inspired by the parameterized post-Newtonian (PPN) formalism [961, 960], which uses a set of parameters to summarize leading-order deviations from the metric of GR. PPN was developed in the 1970s for the purpose of testing alternative gravity theories in the solar system or binary systems, and is valid in weak-field, low-velocity scenarios. PPN itself cannot be applied to cosmology, because we do not know the exact form of the linearized metric for our Hubble volume. Furthermore, PPN can only test for constant deviations from GR, whereas the cosmological data we collect contain inherent redshift dependence.
For these reasons the PPF framework is a parameterization of the gravitational field equations (instead of the metric) in terms of a set of functions of redshift. A theory of modified gravity can be analytically mapped onto these PPF functions, which in turn can be constrained by data.
We begin by writing the perturbed Einstein field equations for spin-0 (scalar) perturbations in the form:
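Schematically (our reconstruction of the suppressed display equation, following the PPF literature), this means:

```latex
\delta G_{\mu\nu} \;=\; 8\pi G\, \delta T_{\mu\nu} \;+\; \delta U_{\mu\nu}\,,
```

where δT is the perturbed stress-energy tensor of matter and δU collects all non-standard terms arising from the modification of gravity.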
In principle there could also be new terms containing matter perturbations on the RHS of Eq. (1.5.12). However, for theories that maintain the weak equivalence principle – i.e., those with a Jordan frame in which matter is uncoupled to any new fields – these matter terms can be eliminated in favor of additional contributions to the metric part and the new-degree-of-freedom part of δU (described below).
The metric part of δU is then expanded in terms of two gauge-invariant perturbation variables, Φ and Γ. Φ is one of the standard gauge-invariant Bardeen potentials, while Γ is the following combination of the Bardeen potentials: Γ = (1/k)(Φ' + H Ψ), where a prime denotes a conformal-time derivative and H is the conformal Hubble rate. We use Γ instead of the usual Bardeen potential Ψ because Γ has the same derivative order as Φ (whereas Ψ does not). We then deduce that the only possible structure of the metric part of δU that maintains the gauge invariance of the field equations is a linear combination of Φ, Γ and their derivatives, multiplied by functions of the cosmological background (see Eqs. (1.5.13) – (1.5.17) below).
The new-degree-of-freedom part of δU is similarly expanded in a set of gauge-invariant potentials that contain the new degrees of freedom; an algorithm exists for constructing the relevant gauge-invariant quantities in any theory.
For concreteness we will consider here a theory that contains only one new degree of freedom and is second-order in its equations of motion (a generic but not watertight requirement for stability, see ). Then the four components of Eq. (1.5.12) are:
Each of the lettered coefficients in Eqs. (1.5.13) – (1.5.17) is a function of cosmological background quantities, i.e., of time or redshift; this dependence has been suppressed above for clarity. Potentially the coefficients could also depend on scale, but this dependence is not arbitrary. These PPF coefficients are the analogs of the PPN parameters: they are the objects that a particular theory of gravity ‘maps onto’, and the quantities to be constrained by data. Numerous examples of the PPF coefficients corresponding to well-known theories are given in the literature.
The final terms in Eqs. (1.5.13) – (1.5.16) are present to ensure the gauge invariance of the modified field equations, as is required for any theory governed by a covariant action. The quantities multiplying them are all pre-determined functions of the background, and the metric perturbations involved are off-diagonal, so these terms vanish in the conformal Newtonian gauge. The gauge-fixing terms should be regarded as a piece of mathematical book-keeping; there is no constrainable freedom associated with them.
One can then calculate observable quantities – such as the weak-lensing kernel or the growth rate of structure – using the parameterized field equations (1.5.13) – (1.5.17). Similarly, they can be implemented in an Einstein–Boltzmann solver code such as CAMB to utilize constraints from the CMB. If we take the divergence of the gravitational field equations (i.e., the unperturbed equivalent of Eq. (1.5.12)), the left-hand side vanishes due to the Bianchi identity, while the stress-energy tensor of matter obeys its standard conservation equations (since we are working in the Jordan frame). Hence the U-tensor must be separately conserved, and this provides the necessary evolution equation for the new degree of freedom:
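In covariant form, the conservation law just derived reads (a sketch in our notation):

```latex
\nabla^\mu U_{\mu\nu} \;=\; 0\,,
```

whose temporal and spatial components at linear order supply the two evolution equations referred to in the text.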
Eq. (1.5.18) has two components. If one wishes to treat theories with more than two new degrees of freedom, further information is needed to supplement the PPF framework.
The full form of the parameterized equations (1.5.13) – (1.5.17) can be simplified in the ‘quasistatic regime’, that is, on significantly sub-horizon scales on which the time derivatives of perturbations can be neglected in comparison to their spatial derivatives. Quasistatic lengthscales are the regime most relevant for weak-lensing surveys and galaxy redshift surveys such as those of Euclid. A common parameterization used on these scales has the form:
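A widely used version of this quasistatic parameterization is (our notation; the symbols and sign conventions differ across the cited papers):

```latex
k^2 \Psi \;=\; -4\pi G\, a^2\, \mu(a,k)\, \bar\rho\, \Delta\,, \qquad
\frac{\Phi}{\Psi} \;=\; \eta(a,k)\,,
```

with μ = η = 1 recovered in GR.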
where μ and η are two functions of time and scale to be constrained. This parameterization has been widely employed [131, 277, 587, 115, 737, 980, 320, 441, 442]. It has the advantages of simplicity and somewhat greater physical transparency: μ can be regarded as describing the evolution of the effective gravitational constant, while η can, to a certain extent, be thought of as acting like a source of anisotropic stress (see Section 1.5.2).
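The effect of such a rescaled effective gravitational constant on observables can be illustrated with a toy growth-of-structure calculation (a sketch under simplifying assumptions: flat ΛCDM background, scale-independent μ, and the standard quasistatic growth equation; the 10% enhancement of μ below is purely illustrative):

```python
import numpy as np

# Linear matter perturbations in the quasistatic regime obey (with ' = d/dln a)
#   delta'' + (2 + (1/2) dlnE2/dlna) delta' = (3/2) Omega_m(a) mu(a) delta,
# where mu rescales the effective gravitational constant (mu = 1 in GR).

def growth(mu_of_a, om0=0.3, a_ini=1e-3, n=20000):
    """Integrate the growth equation with forward Euler; returns delta(a=1)."""
    lna = np.linspace(np.log(a_ini), 0.0, n)
    h = lna[1] - lna[0]
    delta, ddelta = 1.0, 1.0   # matter-era initial conditions: delta ~ a
    for x in lna[:-1]:
        a = np.exp(x)
        E2 = om0 / a**3 + (1.0 - om0)        # (H/H0)^2, flat LCDM
        dlnE2 = -3.0 * om0 / a**3 / E2       # d ln E2 / d ln a
        om_a = om0 / a**3 / E2               # Omega_m(a)
        acc = -(2.0 + 0.5 * dlnE2) * ddelta + 1.5 * om_a * mu_of_a(a) * delta
        delta, ddelta = delta + h * ddelta, ddelta + h * acc
    return delta

g_gr  = growth(lambda a: 1.0)   # GR: mu = 1
g_mod = growth(lambda a: 1.1)   # 10% stronger effective gravity at all times
print(g_mod / g_gr)             # growth is enhanced for mu > 1
```

Growth-sensitive survey observables respond to μ in exactly this way in the quasistatic limit, which is why the parameterized equations can be confronted directly with Euclid-like data.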
Let us make a comment about the number of coefficient functions employed in the PPF formalism. One may justifiably question whether the large number of unknown functions in Eqs. (1.5.13) – (1.5.17) could ever be constrained. In reality, the PPF coefficients are not all independent. The form shown above represents a fully agnostic description of the extended field equations. However, as one begins to impose restrictions in theory space (even the simple requirement that the modified field equations must originate from a covariant action), constraint relations between the PPF coefficients begin to emerge. These constraints remove freedom from the parameterization.
Even so, degeneracies will exist between the PPF coefficients. It is likely that a subset of them can be well constrained, while another subset has relatively little impact on current observables and so cannot be tested. In this case it is justifiable to drop the untestable terms. Note that this realization would in itself be an interesting statement – that there are parts of the gravitational field equations that are essentially unknowable.
Finally, we note that there is also a completely different, complementary approach to parameterizing modifications to gravity. Instead of parameterizing the linearized field equations, one could choose to parameterize the perturbed gravitational action. This approach has been used recently to apply the standard techniques of effective field theory to modified gravity; see [107, 142, 411] and references therein.
Living Rev. Relativity 16, (2013), 6
This work is licensed under a Creative Commons License.