Two crucial questions are often asked in the context of dark-energy surveys: should we expect to measure an equation of state different from w = −1, and to which precision must we measure w in order to decide between a cosmological constant and dynamical dark energy?
In this section we will attempt to answer these questions at least partially, in two different ways. We will start by examining whether we can draw useful lessons from inflation, and then we will look at what we can learn from arguments based on Bayesian model comparison.
In the first part we will see that for single-field slow-roll inflation models we effectively measure w ≈ −1 with percent-level accuracy (see Figure 2); however, the deviation from a scale-invariant spectrum means that we nonetheless observe a dynamical evolution and, thus, a deviation from an exact and constant equation of state w = −1. Therefore, we know that inflation was not due to a cosmological constant; we also know that, at a precision comparable to the one Euclid will reach, no deviation from a de Sitter expansion would have been visible during inflation.
In the second part we will consider the Bayesian evidence in favor of a true cosmological constant if we keep finding w = −1; we will see that for priors on w_0 and w_a of order unity, a precision like the one of Euclid is necessary to favor a true cosmological constant decisively. We will also discuss how this conclusion changes depending on the choice of priors.
In all probability the observed late-time acceleration of the universe is not the first period of accelerated expansion that occurred during its evolution: the current standard model of cosmology incorporates a much earlier phase of accelerated expansion, called inflation. Such a period provides a natural explanation for several late-time observations: the spatial flatness of the universe, its homogeneity and isotropy on scales that would otherwise never have been in causal contact, and the nearly scale-invariant spectrum of primordial fluctuations.
In addition, inflation provides a mechanism to get rid of unwanted relics from phase transitions in the early universe, like monopoles, that arise in certain scenarios (e.g., grand-unified theories).
While there is no conclusive proof that an inflationary phase took place in the early universe, it is surprisingly difficult to create the observed fluctuation spectrum in alternative scenarios that are strictly causal and only act on sub-horizon scales [854, 803].
If, however, inflation took place, then it seems natural to ask whether its observed properties appear similar to our current knowledge about the dark energy, and if so, whether we can use inflation to learn something about the dark energy. The first lesson to draw from inflation is that it was not due to a pure cosmological constant. This is immediately clear since we exist: inflation ended. We can go even further: if Planck confirms the WMAP observation [526] of a deviation from a scale-invariant initial spectrum (n_s ≠ 1), then this excludes an exactly exponential expansion during the observable epoch and, thus, also a temporary, effective cosmological constant.
If there had been any observers during the observationally accessible period of inflation, what would they have been seeing? Following the analysis in [475], we notice that
1 + w = (2/3) ε , where ε ≡ (M_P²/2)(V′/V)² and here the prime denotes a derivative with respect to the inflaton field. Since the tensor-to-scalar ratio r is linked to the equation-of-state parameter through r = 16 ε = 24(1 + w), we can immediately conclude that no deviation of w from −1 during inflation has been observed so far, just as no such deviation has been observed for the contemporary dark energy. At least in this respect inflation and the dark energy look similar. However, we also know that n_s − 1 = 2η − 6ε, where the second slow-roll parameter η ≡ M_P² V″/V is related to the scalar spectral index n_s. Thus, if n_s ≠ 1, we have that either ε ≠ 0 or η ≠ 0, and consequently either w ≠ −1 or w is not constant. As already said earlier, we conclude that inflation is not due to a cosmological constant. However, an observer back then would nonetheless have found w ≈ −1. Thus, an observation of w ≈ −1 (at least down to an error of about 0.02, see Figure 2) does not provide a very strong reason to believe that we are dealing with a cosmological constant.
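The slow-roll relations quoted above can be sketched numerically. This is an illustrative translation between the tensor-to-scalar ratio and the inflaton equation of state, assuming the standard single-field relations 1 + w = (2/3)ε and r = 16ε; the bound on r used below is a placeholder, not a quoted experimental limit.

```python
def w_from_epsilon(epsilon):
    """Equation of state of a single-field slow-roll inflaton: 1 + w = (2/3) eps."""
    return -1.0 + (2.0 / 3.0) * epsilon

def w_from_r(r):
    """Translate a tensor-to-scalar ratio r into w via r = 16*eps = 24*(1 + w)."""
    return -1.0 + r / 24.0

r_bound = 0.24                       # assumed upper limit on r (placeholder)
print(round(1.0 + w_from_r(r_bound), 3))   # -> 0.01, deviation of w from -1 still allowed
```

Even a fairly tight bound on primordial tensors thus only constrains the inflationary 1 + w at the percent level, which is the origin of the 0.02 figure quoted above.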
We can rewrite Eq. (1.5.2) as 1 + w = (1 − n_s)/9 + (2/9) η.
Naively it would appear rather fine-tuned if η precisely canceled the observed contribution from n_s. Following this line of reasoning, if the two terms are of about the same size, then we would expect 1 + w to be about 0.005 to 0.015, well within current experimental bounds and roughly at the limit of what Euclid will be able to observe. However, this last argument is highly speculative, and at least for inflation we know that there are classes of models where the cancellation is indeed natural, which is why one cannot give a lower limit for the amplitude of primordial gravitational waves. On the other hand, the observed period of inflation is probably in the middle of a long slow-roll phase during which w tends to be close to −1 (cf. Figure 3), while near the end of inflation the deviations become large. Additionally, inflation happened at an energy scale somewhere between 1 MeV and the Planck scale, while the energy scale of the late-time accelerated expansion is many orders of magnitude lower. At least in this respect the two are very different.
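The size of this expected deviation is easy to evaluate. The sketch below assumes the rewritten relation 1 + w = (1 − n_s)/9 + (2/9)η reconstructed above; the n_s and η values are illustrative choices, not measurements quoted in the text.

```python
# Rough size of 1 + w for an inflation-like dark energy, assuming
# 1 + w = (1 - n_s)/9 + (2/9)*eta (illustrative parameter values).
n_s = 0.96                        # assumed scalar spectral index
for eta in (0.0, 0.01, 0.02):     # assumed second slow-roll parameter
    w_plus_1 = (1.0 - n_s) / 9.0 + (2.0 / 9.0) * eta
    print(f"eta = {eta:4.2f}  ->  1 + w = {w_plus_1:.4f}")
```

For η of the same order as the n_s term, 1 + w lands in the 0.005–0.01 range, consistent with the estimate in the text.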
Despite the previous arguments, it is natural to ask for a connection between the two known acceleration periods. In fact, in the last few years there has been a renewal of model building in inflationary cosmology by considering the fundamental Higgs as the inflaton field [133]. Such an elegant and economical model can give rise to the observed amplitude of CMB anisotropies when we include a large non-minimal coupling of the Higgs to the scalar curvature. In the context of quantum field theory, the running of the Higgs mass from the electroweak scale to the Planck scale is affected by this non-minimal coupling in such a way that the beta function of the Higgs self-coupling vanishes at an intermediate scale, if the mass of the Higgs is precisely 126 GeV, as measured at the LHC. This partial fixed point (other beta functions do not vanish) suggests an enhancement of symmetry at that scale, and the presence of a Nambu–Goldstone boson (the dilaton field) associated with the breaking of scale invariance [820]. In a subsequent paper [383], the Higgs-Dilaton scenario was explored in full detail. The model predicts a bound on the scalar spectral index, with negligible associated running, and a tensor-to-scalar ratio which, although out of reach of the Planck satellite mission, is within the capabilities of future CMB satellite projects like PRISM [52]. Moreover, the model predicts that, after inflation, the dilaton plays the role of a thawing quintessence field, whose slow motion determines a concrete relation between the early-universe fluctuations and the equation of state of dark energy, which could be within reach of the Euclid satellite mission [383]. Furthermore, within the Higgs-Dilaton model there is also a relation between the running of the scalar tilt and the time variation of the dark-energy equation of state, a prediction that can easily be ruled out with future surveys.
These relationships between early- and late-universe acceleration parameters constitute a fundamental-physics connection within a very concrete and economical model, where the Higgs plays the role of the inflaton and the dilaton is a thawing quintessence field, whose dynamics has almost no freedom and satisfies all present constraints [383].
In the previous section we saw that inflation provides an argument why an observation of w ≈ −1 need not strongly support a cosmological constant. Let us now investigate this argument more precisely with Bayesian model comparison. One model, M_0, posits that the accelerated expansion is due to a cosmological constant. The other models assume that the dark energy is dynamical, in a way that is well parametrized either by an arbitrary constant w (model M_1) or by a linear fit w(a) = w_0 + (1 − a) w_a (model M_2). Under the assumption that no deviation from w = −1 will be detected in the future, at which point should we stop trying to measure w ever more accurately? The relevant target here is to quantify at what point we will be able to rule out an entire class of theoretical dark-energy models (when compared to ΛCDM) at a certain threshold for the strength of evidence.
Here we are using the constant and linear parametrizations of w because, on the one hand, we can consider the constant w to be an effective quantity, averaged over redshift with the appropriate weighting factor for the observable, see [838], and on the other hand because the precision targets for observations are conventionally phrased in terms of the figure of merit (FoM), given by the inverse of the area of the error ellipse in the (w_0, w_a) plane. We will, therefore, find a direct link between the model probability and the FoM. It would be an interesting exercise to repeat the calculations with a more general model, using e.g. PCA, although we would expect to reach a similar conclusion.
Bayesian model comparison aims to compute the relative model probability

p(M_0 | d) / p(M_1 | d) = B_01 p(M_0) / p(M_1) ,

where we used Bayes' formula and where B_01 ≡ p(d | M_0) / p(d | M_1) is called the Bayes factor. The Bayes factor is the amount by which our relative belief in the two models is modified by the data, with B_01 > 1 (B_01 < 1) indicating a preference for model 0 (model 1). Since model M_0 is nested in M_1 at the point w = −1 and in model M_2 at (w_0, w_a) = (−1, 0), we can use the Savage–Dickey (SD) density ratio [e.g., 894]. Based on SD, the Bayes factor between the two models is just the ratio of posterior to prior at w = −1 or at (w_0, w_a) = (−1, 0), marginalized over all other parameters. Let us start by following [900] and consider the Bayes factor between a cosmological constant model and a free but constant effective w. If we assume that the data are compatible with w = −1 with an uncertainty σ, then the Bayes factor in favor of a cosmological constant is given by

B_01 = √(2/π) (Δ⁺ + Δ⁻)/σ [ erf(Δ⁺/(√2 σ)) + erf(Δ⁻/(√2 σ)) ]⁻¹ ,   (1.5.5)
where for the evolving dark-energy model we have adopted a flat prior in the region −1 − Δ⁻ ≤ w ≤ −1 + Δ⁺, and we have made use of the Savage–Dickey density ratio formula [see 894]. The prior, of total width Δ = Δ⁺ + Δ⁻, is best interpreted as a factor describing the predictivity of the dark-energy model under consideration. For instance, in a model where dark energy is a fluid with a negative pressure but satisfying the null energy condition we have Δ⁺ = 2/3, Δ⁻ = 0. On the other hand, phantom models will be described by Δ⁺ = 0, Δ⁻ > 0, with the latter being possibly rather large. A model with a large Δ will be more generic and less predictive, and therefore is disfavored by the Occam's razor of Bayesian model selection, see Eq. (1.5.5). According to the Jeffreys scale for the strength of evidence, we have a moderate (strong) preference for the cosmological constant model for 2.5 < ln B_01 < 5 (ln B_01 > 5), corresponding to posterior odds of 12:1 to 150:1 (above 150:1).

Model               Prior                  Outcome with current data
Phantom             Δ⁺ = 0, Δ⁻ large       strongly disfavored
Fluid-like          Δ⁺ = 2/3, Δ⁻ = 0       slightly disfavored
Small departures    Δ⁺ = Δ⁻ ≈ 0.01         inconclusive
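The Savage–Dickey ratio above is straightforward to evaluate numerically. The sketch below implements Eq. (1.5.5) directly; the prior ranges and the measurement uncertainty σ are illustrative choices, not the exact benchmark values of the tables.

```python
import math

def ln_bayes_factor(delta_minus, delta_plus, sigma):
    """ln B_01 for w = -1 versus a flat prior on w over
    [-1 - delta_minus, -1 + delta_plus], with a Gaussian likelihood of
    width sigma centered on w = -1 (Savage-Dickey density ratio)."""
    peak = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)   # likelihood density at w = -1
    # Fraction of the Gaussian likelihood mass lying inside the prior range:
    mass = 0.5 * (math.erf(delta_plus / (math.sqrt(2.0) * sigma))
                  + math.erf(delta_minus / (math.sqrt(2.0) * sigma)))
    prior = 1.0 / (delta_plus + delta_minus)
    posterior_at_lambda = peak / mass                  # normalized posterior at w = -1
    return math.log(posterior_at_lambda / prior)

# Fluid-like prior (Delta+ = 2/3, Delta- = 0) and an assumed 1% measurement:
print(round(ln_bayes_factor(0.0, 2.0 / 3.0, 0.01), 2))   # -> 3.97
```

With σ = 0.01 the fluid-like prior is already disfavored at roughly the ln B ≈ 4 level, while a narrow "small departures" prior with σ comparable to its width gives ln B ≈ 0, i.e., an inconclusive result, as in the table above.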
We plot in Figure 4 contours of constant observational accuracy in the model-predictivity space (Δ⁻, Δ⁺) for ln B_01 = 3 from Eq. (1.5.5), corresponding to odds of 20 to 1 in favor of a cosmological constant (slightly above the "moderate" threshold). The figure can be interpreted as giving the space of extended models that can be significantly disfavored with respect to w = −1 at a given accuracy. The results for the three benchmark models mentioned above (fluid-like, phantom, or small departures from w = −1) are summarized in Table 1. We can instead ask which precision needs to be reached in order to support ΛCDM at a given level. This is shown in Table 2 for odds of 20:1 and 150:1. We see that to rule out a fluid-like model, which also covers the parameter space expected for canonical scalar-field dark energy, we need to reach a precision comparable to the one that the Euclid satellite is expected to attain.
By considering the model M_2 we can also provide a direct link with the target DETF FoM: let us choose (fairly arbitrarily) a flat probability distribution for the prior, of width Δw_0 and Δw_a in the dark-energy parameters, so that the value of the prior is 1/(Δw_0 Δw_a) everywhere. Let us assume that the likelihood is Gaussian in w_0 and w_a and centered on ΛCDM (i.e., the data fully support the cosmological constant as the dark energy).
As above, we need to distinguish different cases depending on the width of the prior. If we accept the argument of the previous section that we expect only a small deviation from w = −1, and set a prior width of order 0.01 on both w_0 and w_a, then the posterior is dominated by the prior, and the Bayes factor will be of order 1 if the future data are compatible with w = −1. Since the precision of the experiment is comparable to the expected deviation, both ΛCDM and evolving dark energy are equally probable (as argued above and shown in Table 1), and we have to wait for a detection of w ≠ −1 or a significant further increase in precision (cf. the last row in Table 2).
However, one often considers a much wider prior range, for example a fluid-like model in which w(a) always lies between −1 and −1/3, with equal probability (and neglecting some subtleties near w = −1). If the likelihood is much narrower than the prior range, then the value of the normalized posterior at (w_0, w_a) = (−1, 0) will be twice the peak value of the Gaussian likelihood (twice because the prior excludes w < −1; otherwise it would be half this value). The Bayes factor is then given by

B_01 = 2 Δw_0 Δw_a / (2π σ_{w_0} σ_{w_a} √(1 − ρ²)) = Δw_0 Δw_a FoM / π ,

where ρ is the correlation coefficient of w_0 and w_a and FoM = [σ_{w_0} σ_{w_a} √(1 − ρ²)]⁻¹.
For the prior given above, we end up with B_01 ≈ FoM/2.5. In order to reach a "decisive" Bayes factor, usually characterized as ln B > 5 or B > 150, we thus need a figure of merit exceeding 375. Demanding that Euclid achieve a FoM above this value places us, therefore, on the safe side and allows us to reach the same conclusions (the ability to favor ΛCDM decisively if the data are in full agreement with w = −1) under small variations of the prior as well. A similar analysis could easily be carried out to compare the cosmological-constant model against departures from Einstein gravity, thus giving some useful insight into the potential of future surveys in terms of Bayesian model selection.
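The link between the FoM and the Bayes factor can be sketched in a few lines. The prior widths below are illustrative order-unity assumptions (not the exact ones used in the text), and the factor of two from excluding w_0 < −1 is made explicit.

```python
import math

def bayes_factor_lcdm(fom, dw0, dwa, boundary_factor=2.0):
    """B_01 for LCDM against the (w0, wa) model when the data are centered
    on (w0, wa) = (-1, 0): Savage-Dickey ratio of the 2D Gaussian posterior
    peak, boundary_factor * FoM / (2 pi), to the flat prior 1/(dw0*dwa).
    boundary_factor = 2 doubles the peak because w0 < -1 is excluded."""
    posterior_peak = boundary_factor * fom / (2.0 * math.pi)
    return posterior_peak * dw0 * dwa

# Illustrative order-unity priors on (w0, wa); assumptions, not quoted values:
for fom in (100.0, 400.0):
    b = bayes_factor_lcdm(fom, dw0=2.0 / 3.0, dwa=2.0)
    print(fom, round(b, 1), "decisive" if b > 150.0 else "not decisive")
```

With these assumed priors the decisive threshold B > 150 is crossed only for a FoM of a few hundred, illustrating why a Euclid-class figure of merit is needed for a decisive verdict.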
To summarize, we used inflation as a dark-energy prototype to show that the current experimental bounds on w are not yet sufficient to significantly favor a cosmological constant over other models. In addition, even when allowing prior widths of order unity, our current knowledge of w does not allow us to favor Λ strongly in a Bayesian context. We showed that we need to reach percent-level accuracy both to have any chance of observing a deviation of w from −1 if the dark energy is similar to inflation, and because it is at this point that a cosmological constant starts to be favored decisively for prior widths of order 1. In either scenario, we do not expect to be able to improve our knowledge much with a lower-precision measurement of w. The dark energy can of course be quite different from the inflaton and may lead to larger deviations from w = −1. This would indeed be the preferred situation for Euclid, as then we would be able to investigate much more easily the physical origin of the accelerated expansion. We can, however, have departures from ΛCDM even if w is very close to −1 today. In fact, most present models of modified gravity and dynamical dark energy have a value of w that is asymptotically close to −1 (in the sense that large departures from this value are already excluded). In this sense, for example, early dark-energy parameterizations test the amount of dark energy in the past, which can still be non-negligible (e.g., [723]). Similarly, a fifth force can lead to a background similar to ΛCDM but with different effects on perturbations and structure formation [79].
As discussed in Section 1.4, all dark-energy and modified-gravity models can be described with the same effective metric degrees of freedom. This makes it impossible in principle to distinguish clearly between the two possibilities with cosmological observations alone. While the cleanest tests would come from laboratory experiments, these may well be impossible to achieve. We would expect model-comparison analyses to still favor the correct model, as it should provide the most elegant and economical description of the data. However, we may not know the correct model a priori, and it would be more useful if we could identify generic differences between the different classes of explanations, based on the phenomenological description that can be used directly to analyze the data.
Looking at the effective energy-momentum tensor of the dark-energy sector, we can either try to find a hint in the pressure perturbation or in the effective anisotropic stress. Whilst all scalar-field dark-energy models affect the pressure perturbation (and multiple fields with different sound speeds can do so in potentially quite complex ways), they generically have vanishing anisotropic stress. The opposite is also true: modified-gravity models generically have a non-vanishing anisotropic stress [537]. Radiation and neutrinos will contribute to the anisotropic stress on cosmological scales, but their contribution is safely negligible in the late-time universe. In the following sections we will first look at models with a single extra degree of freedom, for which we will find that a non-vanishing anisotropic stress is a firm prediction. We will then consider f(R, G) gravity as an example with multiple degrees of freedom [782].
In the prototypical scalar-tensor theory, where the scalar field φ is coupled to the Ricci scalar R through a function F(φ), the anisotropic stress is sourced by the perturbation of F. This is very similar to the f(R) case (where now the role of F is played by df/dR). In both cases the generic model with vanishing anisotropic stress is the one with constant F, which corresponds to a constant coupling (for scalar-tensor) or to a term linear in R. In both cases we find the GR limit. The other possibility, a vanishing perturbation of F with F not constant, imposes a very specific evolution on the perturbations that in general does not agree with observations.
Another possible way to build a theory that deviates from GR is to use a function of the second-order Lovelock invariant, the Gauss–Bonnet term G ≡ R² − 4 R_{μν} R^{μν} + R_{μνρσ} R^{μνρσ}. The Gauss–Bonnet term by itself is a topological invariant in 4 spacetime dimensions and does not contribute to the equations of motion. It is useful here since it avoids an Ostrogradski-type instability [967]. In f(G) models the situation is slightly more complicated than for the scalar-tensor case, as
where the dot denotes a derivative with respect to ordinary time (see, e.g., [782]). An obvious choice to force a vanishing anisotropic stress is to take df/dG constant, which leads to a term linear in G in the action, and thus again to GR in four spacetime dimensions. There is no obvious way to exploit the extra terms in Eq. (1.5.7), with the exception of curvature-dominated evolution and on small scales (which is not very relevant for realistic cosmologies). Finally, in DGP one has, with the notation of [41],
This expression vanishes only for parameter values that are never reached in the usual scenario, and at early times, when the prefactor in Eq. (1.5.8) becomes negligible. In the DGP scenario the absolute value of the anisotropic stress grows over time and approaches a constant limiting value. The only way to avoid this limit is to set the crossover scale r_c to be unobservably large; in this situation the five-dimensional part of the action is suppressed and we end up with the usual 4D GR action. In all of these examples only the GR limit has consistently no effective anisotropic stress in situations compatible with observational results (matter-dominated evolution with a transition towards a state with w ≈ −1).
In models with multiple degrees of freedom it is, at least in principle, possible to balance the contributions in order to achieve a vanishing net anisotropic stress. [782] explicitly studied the case of f(R, G) gravity (please refer to this paper for details). The general equation,
is rather complicated and generically depends, e.g., on the scale of the perturbations (except for the special case that reduces to the GR limit). Looking only at small scales one obtains a simpler, scale-independent condition. It is in principle possible to find simultaneous solutions of this condition and the modified Friedmann (00 Einstein) equation for a given expansion history. As an example, [782] exhibit a model that allows for matter-dominated evolution and has no anisotropic stress. It is, however, not at all clear how to connect this model to different epochs, and especially how to move towards a future accelerated epoch, as the exponents are fine-tuned to produce no anisotropic stress specifically during matter domination only. Additionally, during the transition to a de Sitter fixed point one generically encounters severe instabilities. In summary, none of the standard examples with a single extra degree of freedom discussed above allows for a viable model with vanishing anisotropic stress. While finely balanced solutions can be constructed for models with several degrees of freedom, one would need to link the motion in model space to the evolution of the universe in order to preserve the vanishing anisotropic stress. This requires even more fine-tuning, and in some cases is not possible at all, most notably for evolution to a de Sitter state. The effective anisotropic stress appears therefore to be a very good quantity to look at when searching for generic conclusions on the nature of the accelerated expansion from cosmological observations.
As explained in earlier sections of this report, modified-gravity models cannot be distinguished from dark-energy models by using solely the FLRW background equations. But by comparing the background expansion rate of the universe with observables that depend on linear perturbations of an FLRW spacetime we can hope to distinguish between these two categories of explanations. An efficient way to do this is via a parameterized, model-independent framework that describes cosmological perturbation theory in modified gravity. We present here one such framework, the parameterized post-Friedmann formalism [73], that implements possible extensions to the linearized gravitational field equations.
The parameterized post-Friedmann (PPF) approach is inspired by the parameterized post-Newtonian (PPN) formalism [961, 960], which uses a set of parameters to summarize leading-order deviations from the metric of GR. PPN was developed in the 1970s for the purpose of testing alternative gravity theories in the solar system or in binary systems, and is valid in weak-field, low-velocity scenarios. PPN itself cannot be applied to cosmology, because we do not know the exact form of the linearized metric for our Hubble volume. Furthermore, PPN can only test for constant deviations from GR, whereas the cosmological data we collect contain inherent redshift dependence.
For these reasons the PPF framework is a parameterization of the gravitational field equations (instead of the metric) in terms of a set of functions of redshift. A theory of modified gravity can be analytically mapped onto these PPF functions, which in turn can be constrained by data.
We begin by writing the perturbed Einstein field equations for spin-0 (scalar) perturbations in the form:
where δT_{μν} is the usual perturbed stress-energy tensor of all cosmologically-relevant fluids. The tensor δU^{metric}_{μν} holds new terms that may appear in a modified theory, containing perturbations of the metric (in GR such perturbations are entirely accounted for by δG_{μν}). δU^{d.o.f.}_{μν} holds perturbations of any new degrees of freedom that are introduced by modifications to gravity. A simple example of the latter is a new scalar field, such as that introduced by scalar-tensor or Galileon theories. However, new degrees of freedom could also come from spin-0 perturbations of new tensor or vector fields, Stückelberg fields, effective fluids, and actions based on curvature invariants (such as f(R) gravity). In principle there could also be new terms containing matter perturbations on the RHS of Eq. (1.5.12). However, for theories that maintain the weak equivalence principle – i.e., those with a Jordan frame where matter is uncoupled to any new fields – these matter terms can be eliminated in favor of additional contributions to δU^{metric}_{μν} and δU^{d.o.f.}_{μν}.
The tensor δU^{metric}_{μν} is then expanded in terms of two gauge-invariant perturbation variables Φ̂ and Γ̂. Φ̂ is one of the standard gauge-invariant Bardeen potentials, while Γ̂ is the following combination of the Bardeen potentials: Γ̂ = (Φ̂′ + H Ψ̂)/k, where here the prime denotes a conformal-time derivative. We use Γ̂ instead of the usual Bardeen potential Ψ̂ because Γ̂ has the same derivative order as Φ̂ (whereas Ψ̂ does not). We then deduce that the only possible structure of δU^{metric}_{μν} that maintains the gauge invariance of the field equations is a linear combination of Φ̂, Γ̂ and their derivatives, multiplied by functions of the cosmological background (see Eqs. (1.5.13) – (1.5.17) below).
δU^{d.o.f.}_{μν} is similarly expanded in a set of gauge-invariant potentials that contain the new degrees of freedom. [73] presented an algorithm for constructing the relevant gauge-invariant quantities in any theory.
For concreteness we will consider here a theory that contains only one new degree of freedom and is second-order in its equations of motion (a generic but not watertight requirement for stability, see [967]). Then the four components of Eq. (1.5.12) are:
Each of the lettered coefficients in Eqs. (1.5.13) – (1.5.17) is a function of cosmological background quantities, i.e., a function of time or redshift; this dependence has been suppressed above for clarity. Potentially the coefficients could also depend on scale, but this dependence is not arbitrary [832]. These PPF coefficients are the analogue of the PPN parameters: they are the objects that a particular theory of gravity 'maps onto', and the quantities to be constrained by data. Numerous examples of the PPF coefficients corresponding to well-known theories are given in [73].
The final terms in Eqs. (1.5.13) – (1.5.16) are present to ensure the gauge invariance of the modified field equations, as is required for any theory governed by a covariant action. The quantities appearing in them are all predetermined functions of the background, multiplying off-diagonal metric perturbations, so these terms vanish in the conformal Newtonian gauge. The gauge-fixing terms should be regarded as a piece of mathematical bookkeeping; there is no constrainable freedom associated with them.
One can then calculate observable quantities – such as the weak-lensing kernel or the growth rate of structure – using the parameterized field equations (1.5.13) – (1.5.17). Similarly, they can be implemented in an Einstein–Boltzmann solver code such as camb [559] to utilize constraints from the CMB. If we take the divergence of the gravitational field equations (1.5.12), the left-hand side vanishes due to the Bianchi identity, while the stress-energy tensor of matter obeys its standard conservation equations (since we are working in the Jordan frame). Hence the tensor δU_{μν} must be separately conserved, and this provides the necessary evolution equation for the new degree of freedom:
Eq. (1.5.18) has two components. If one wishes to treat theories with more than two new degrees of freedom, further information is needed to supplement the PPF framework.
The full form of the parameterized equations (1.5.13) – (1.5.17) can be simplified in the 'quasi-static regime', that is, on significantly sub-horizon scales on which the time derivatives of perturbations can be neglected in comparison to their spatial derivatives [457]. Quasi-static length scales are the relevant regime for weak-lensing surveys and galaxy redshift surveys such as those of Euclid. A common parameterization used on these scales has the form:

k² Ψ = −4πG a² μ(a, k) ρ Δ ,      Φ = η(a, k) Ψ ,
where μ(a, k) and η(a, k) are two functions of time and scale to be constrained. This parameterization has been widely employed [131, 277, 587, 115, 737, 980, 320, 441, 442]. It has the advantages of simplicity and somewhat greater physical transparency: μ can be regarded as describing the evolution of the effective gravitational constant, while η can, to a certain extent, be thought of as acting like a source of anisotropic stress (see Section 1.5.2).
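As an illustration of how such a function feeds into observables, the sketch below integrates the linear growth equation with the Poisson source rescaled by μ(a). The ΛCDM background parameters and the constant form of μ are assumptions chosen for illustration, not values from the text.

```python
import math

def growth(mu_of_a, om0=0.3, a_start=1e-3, n=20000):
    """Integrate the linear growth equation in N = ln a,
       delta'' + (2 + dlnH/dN) delta' = 1.5 * mu(a) * Omega_m(a) * delta,
    on a flat LCDM background; mu(a) rescales the source term as in the
    quasi-static parameterization (sketch; mu is user-supplied)."""
    def E2(a):                       # (H/H0)^2 for flat LCDM
        return om0 / a**3 + (1.0 - om0)
    def dlnH_dN(a):                  # d ln H / d ln a
        return -1.5 * (om0 / a**3) / E2(a)
    N, dN = math.log(a_start), -math.log(a_start) / n
    delta, ddelta = a_start, a_start   # matter-era initial conditions: delta ~ a
    for _ in range(n):
        a = math.exp(N)
        om_a = (om0 / a**3) / E2(a)
        acc = 1.5 * mu_of_a(a) * om_a * delta - (2.0 + dlnH_dN(a)) * ddelta
        delta += ddelta * dN
        ddelta += acc * dN
        N += dN
    return delta                      # growth factor today (normalized to a early on)

g_gr = growth(lambda a: 1.0)          # GR: mu = 1
g_mg = growth(lambda a: 1.1)          # assumed 10% stronger effective gravity
print(g_mg > g_gr)                    # -> True: enhanced mu enhances growth
```

A scale- or time-dependent μ simply replaces the constant lambdas above, which is how redshift-dependent constraints from growth data are forecast in practice.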
Let us comment on the number of coefficient functions employed in the PPF formalism. One may justifiably question whether the number of unknown functions in Eqs. (1.5.13) – (1.5.17) could ever be constrained. In reality, the PPF coefficients are not all independent. The form shown above represents a fully agnostic description of the extended field equations. However, as one begins to impose restrictions in theory space (even the simple requirement that the modified field equations must originate from a covariant action), constraint relations between the PPF coefficients begin to emerge. These constraints remove freedom from the parameterization.
Even so, degeneracies will exist between the PPF coefficients. It is likely that a subset of them can be well-constrained, while another subset has relatively little impact on current observables and so cannot be tested. In this case it is justifiable to drop the untestable terms. Note that this realization, in itself, would be an interesting statement – that there are parts of the gravitational field equations that are essentially unknowable.
Finally, we note that there is also a completely different, complementary approach to parameterizing modifications to gravity. Instead of parameterizing the linearized field equations, one could choose to parameterize the perturbed gravitational action. This approach has been used recently to apply the standard techniques of effective field theory to modified gravity; see [107, 142, 411] and references therein.
http://www.livingreviews.org/lrr-2013-6
Living Rev. Relativity 16 (2013), 6
This work is licensed under a Creative Commons License.