1.5 Generic properties of dark energy and modified gravity models

This section explores some generic issues that are not connected to particular models (although we use some specific models as examples). First, we ask ourselves to which precision we should measure w in order to make significant progress in understanding dark energy. Second, we discuss the role of the anisotropic stress in distinguishing between dark energy and modified gravity models. Finally, we present some general consistency relations among the perturbation variables that all models of modified gravity should fulfill.

1.5.1 To which precision should we measure w?

Two crucial questions are often asked in the context of dark-energy surveys: to which precision should we measure w, and at which point should we stop trying to improve that measurement?

In this section we will attempt to answer these questions at least partially, in two different ways. We will start by examining whether we can draw useful lessons from inflation, and then we will look at what we can learn from arguments based on Bayesian model comparison.

In the first part we will see that for single-field slow-roll inflation models we effectively measure w ∼ −1 with percent-level accuracy (see Figure 2); however, the deviation from a scale-invariant spectrum means that we nonetheless observe a dynamical evolution and, thus, a deviation from an exact and constant equation of state w = −1. Therefore, we know that inflation was not due to a cosmological constant, even though no deviation from a de Sitter expansion would have been visible at any precision lower than the one Euclid will reach.

In the second part we will consider the Bayesian evidence in favor of a true cosmological constant if we keep finding w = − 1; we will see that for priors on w0 and wa of order unity, a precision like the one for Euclid is necessary to favor a true cosmological constant decisively. We will also discuss how this conclusion changes depending on the choice of priors.

1.5.1.1 Lessons from inflation

In all probability the observed late-time acceleration of the universe is not the first period of accelerated expansion that occurred during its evolution: the current standard model of cosmology incorporates a much earlier phase with ä > 0, called inflation. Such a period provides a natural explanation for several late-time observations, among them the spatial flatness of the universe, the homogeneity of the CMB on super-horizon scales, and the origin of the primordial fluctuation spectrum.

In addition, inflation provides a mechanism to get rid of unwanted relics from phase transitions in the early universe, such as the monopoles that arise in certain scenarios (e.g., grand-unified theories).

While there is no conclusive proof that an inflationary phase took place in the early universe, it is surprisingly difficult to create the observed fluctuation spectrum in alternative scenarios that are strictly causal and only act on sub-horizon scales [854, 803].

If, however, inflation took place, then it seems natural to ask whether its observed properties appear similar to our current knowledge about the dark energy, and if so, whether we can use inflation to learn something about the dark energy. The first lesson to draw from inflation is that it was not due to a pure cosmological constant. This is immediately clear since we exist: inflation ended. We can go even further: if Planck confirms the WMAP observation of a deviation from a scale-invariant initial spectrum (n_s ≠ 1) [526], then this excludes an exactly exponential expansion during the observable epoch and, thus, also a temporary, effective cosmological constant.

If there had been any observers during the observationally accessible period of inflation, what would they have been seeing? Following the analysis in [475], we notice that

1 + w = − (2/3) Ḣ/H² = (2/3) ε_H ,   (1.5.1)
where ε_H ≡ 2M_Pl²(H′/H)² and the prime denotes a derivative with respect to the inflaton field. Since the tensor-to-scalar ratio is linked to the equation-of-state parameter through r ∼ 24(1 + w), we can immediately conclude that no deviation from w = −1 during inflation has been observed so far, just as no such deviation has been observed for the contemporary dark energy. At least in this respect inflation and the dark energy look similar. However, we also know that
d ln(1 + w)/dN = 2(η_H − ε_H) ,   (1.5.2)
where η_H ≡ 2M_Pl² H″/H is related to the scalar spectral index by 2η_H = (n_s − 1) + 4ε_H. Thus, if n_s ≠ 1 we have that either η_H ≠ 0 or ε_H ≠ 0, and consequently either w ≠ −1 or w is not constant.

As already said earlier, we conclude that inflation is not due to a cosmological constant. However, an observer back then would nonetheless have found w ≈ − 1. Thus, observation of w ≈ − 1 (at least down to an error of about 0.02, see Figure 2) does not provide a very strong reason to believe that we are dealing with a cosmological constant.


Figure 2: The evolution of w as a function of the comoving scale k, using only the 5-year WMAP CMB data. Red and yellow are the 95% and 68% confidence regions for the LV formalism. Blue and purple are the same for the flow-equation formalism. From the outside inward, the colored regions are red, yellow, blue, and purple. Image reproduced by permission from [475]; copyright by APS.

We can rewrite Eq. (1.5.2) as

1 + w = − (1/6)(n_s − 1) + η_H/3 ≈ 0.007 + η_H/3 ,   (1.5.3)

which follows from combining Eq. (1.5.1) with the spectral-index relation 2η_H = (n_s − 1) + 4ε_H.
Naively, it would appear rather fine-tuned if η_H precisely canceled the observed contribution from n_s − 1. Following this line of reasoning, if ε_H and η_H are of about the same size, then we would expect 1 + w to be about 0.005 to 0.015, well within current experimental bounds and roughly at the limit of what Euclid will be able to observe.
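This estimate can be reproduced numerically. The minimal sketch below (our own hypothetical helper, not from the text) solves 2η_H = (n_s − 1) + 4ε_H under the "same size" assumption η_H = ±ε_H and evaluates 1 + w = (2/3)ε_H:

```python
def one_plus_w(ns, sign):
    """Solve 2*eta_H = (ns - 1) + 4*eps_H assuming eta_H = sign * eps_H
    (the 'same size' assumption above), then return 1 + w = (2/3)*eps_H."""
    # 2*sign*eps_H = (ns - 1) + 4*eps_H  =>  eps_H = (ns - 1)/(2*sign - 4)
    eps_H = (ns - 1.0) / (2.0 * sign - 4.0)
    return (2.0 / 3.0) * eps_H

# For ns = 0.96 the two branches bracket the quoted range:
low, high = one_plus_w(0.96, -1), one_plus_w(0.96, +1)
```

For n_s = 0.96 this gives 1 + w ≈ 0.004 and ≈ 0.013, bracketing the quoted range 0.005 – 0.015.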

However, this last argument is highly speculative, and at least for inflation we know that there are classes of models where the cancellation is indeed natural, which is why one cannot give a lower limit for the amplitude of primordial gravitational waves. On the other hand, the observed period of inflation probably lies in the middle of a long slow-roll phase during which w tends to be close to −1 (cf. Figure 3), while near the end of inflation the deviations become large. Additionally, inflation happened at an energy scale somewhere between 1 MeV and the Planck scale, while the energy scale of the late-time accelerated expansion is of the order of 10⁻³ eV. At least in this respect the two are very different.


Figure 3: The complete evolution of w(N), from the flow-equation results accepted by the CMB likelihood. Inflation is made to end at N = 0, where w(N = 0) = −1/3, corresponding to ε_H(N = 0) = 1. For our choice of priors on the slow-roll parameters at N = 0, we find that w decreases rapidly towards −1 (see inset) and stays close to it during the period when the observable scales leave the horizon (N ≈ 40 – 60). Image reproduced by permission from [475]; copyright by APS.

1.5.1.2 Higgs-Dilaton Inflation: a connection between the early and late universe acceleration

Despite the previous arguments, it is natural to ask for a connection between the two known periods of acceleration. In fact, in the last few years there has been renewed model building in inflationary cosmology based on the fundamental Higgs as the inflaton field [133]. Such an elegant and economical model can give rise to the observed amplitude of CMB anisotropies when we include a large non-minimal coupling of the Higgs to the scalar curvature. In the context of quantum field theory, the running of the Higgs mass from the electroweak scale to the Planck scale is affected by this non-minimal coupling in such a way that the beta function of the Higgs self-coupling vanishes at an intermediate scale (μ ∼ 10¹⁵ GeV) if the mass of the Higgs is precisely 126 GeV, as measured at the LHC. This partial fixed point (other beta functions do not vanish) suggests an enhancement of symmetry at that scale, and the presence of a Nambu–Goldstone boson (the dilaton field) associated with the breaking of scale invariance [820]. In a subsequent paper [383], the Higgs-Dilaton scenario was explored in full detail. The model predicts a bound on the scalar spectral index, n_s < 0.97, with negligible associated running, −0.0006 < d ln n_s/d ln k < 0.00015, and a tensor-to-scalar ratio, 0.0009 < r < 0.0033, which, although out of reach of the Planck satellite mission, is within the capabilities of future CMB satellite projects like PRISM [52]. Moreover, the model predicts that, after inflation, the dilaton plays the role of a thawing quintessence field, whose slow motion determines a concrete relation between the early-universe fluctuations and the equation of state of dark energy, 3(1 + w) = 1 − n_s > 0.03, which could be within reach of the Euclid satellite mission [383].
Furthermore, within the HDI model there is also a relation between the running of the scalar tilt and the variation of w(a), d ln n_s/d ln k = 3w_a, a prediction that future surveys could easily rule out.

These relationships between early and late universe acceleration parameters constitute a fundamental physics connection within a very concrete and economical model, where the Higgs plays the role of the inflaton and the dilaton is a thawing quintessence field, whose dynamics has almost no freedom and satisfies all of the present constraints [383].
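As a quick numerical illustration of these relations (the helper names are ours, not from [383]), the dark-energy parameters implied by a given spectral tilt follow directly from 3(1 + w) = 1 − n_s and d ln n_s/d ln k = 3w_a:

```python
def hdi_w(ns):
    """Equation of state of the thawing dilaton implied by the tilt:
    3*(1 + w) = 1 - ns."""
    return -1.0 + (1.0 - ns) / 3.0

def hdi_running(wa):
    """Running of the scalar tilt implied by w(a): d ln ns / d ln k = 3*wa."""
    return 3.0 * wa

# The bound ns < 0.97 translates into 1 + w > 0.01:
w_limit = hdi_w(0.97)
```

A smaller n_s (redder tilt) thus pushes w further away from −1, which is what makes the relation testable by Euclid.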

1.5.1.3 When should we stop: Bayesian model comparison

In the previous section we saw that inflation provides an argument why an observation of w ≈ − 1 need not support a cosmological constant strongly. Let us now investigate this argument more precisely with Bayesian model comparison. One model, M0, posits that the accelerated expansion is due to a cosmological constant. The other models assume that the dark energy is dynamical, in a way that is well parametrized either by an arbitrary constant w (model M1) or by a linear fit w (a) = w0 + (1 − a)wa (model M2). Under the assumption that no deviation from w = − 1 will be detected in the future, at which point should we stop trying to measure w ever more accurately? The relevant target here is to quantify at what point we will be able to rule out an entire class of theoretical dark-energy models (when compared to ΛCDM) at a certain threshold for the strength of evidence.

Here we are using the constant and linear parametrizations of w because, on the one hand, we can consider the constant w to be an effective quantity, averaged over redshift with the appropriate weighting factor for the observable, see [838], and, on the other hand, because the precision targets for observations are conventionally phrased in terms of the figure of merit (FoM) given by 1/√|Cov(w0, wa)|. We will, therefore, find a direct link between the model probability and the FoM. It would be an interesting exercise to repeat the calculations with a more general model, using e.g. PCA, although we would expect to reach a similar conclusion.

Bayesian model comparison aims to compute the relative model probability

P(M0|d)/P(M1|d) = [ P(d|M0)/P(d|M1) ] · [ P(M0)/P(M1) ] ,   (1.5.4)
where we used Bayes' formula and where B01 ≡ P(d|M0)/P(d|M1) is called the Bayes factor. The Bayes factor is the amount by which our relative belief in the two models is modified by the data, with ln B01 > 0 (< 0) indicating a preference for model 0 (model 1). Since the model M0 is nested in M1 at the point w = −1 and in model M2 at (w0 = −1, wa = 0), we can use the Savage–Dickey (SD) density ratio [e.g., 894]. Based on SD, the Bayes factor between the two models is just the ratio of posterior to prior at w = −1 or at (w0 = −1, wa = 0), marginalized over all other parameters.

Let us start by following [900] and consider the Bayes factor B01 between a cosmological constant model, w = −1, and a free but constant effective w. If we assume that the data are compatible with w_eff = −1 with an uncertainty σ, then the Bayes factor in favor of a cosmological constant is given by

B = √(2/π) · (Δ+ + Δ−)/σ · [ erfc( −Δ+/(√2 σ) ) − erfc( Δ−/(√2 σ) ) ]⁻¹ ,   (1.5.5)
where for the evolving dark-energy model we have adopted a flat prior in the region −1 − Δ− ≤ w_eff ≤ −1 + Δ+ and we have made use of the Savage–Dickey density ratio formula [see 894]. The prior, of total width Δ = Δ+ + Δ−, is best interpreted as a factor describing the predictivity of the dark-energy model under consideration. For instance, in a model where dark energy is a fluid with negative pressure but satisfying the strong energy condition we have Δ+ = 2/3, Δ− = 0. On the other hand, phantom models are described by Δ+ = 0, Δ− > 0, with the latter being possibly rather large. A model with a large Δ will be more generic and less predictive, and is therefore disfavored by the Occam's razor of Bayesian model selection, see Eq. (1.5.5). According to the Jeffreys scale for the strength of evidence, we have a moderate (strong) preference for the cosmological constant model for 2.5 < ln B01 < 5.0 (ln B01 > 5.0), corresponding to posterior odds of 12:1 to 150:1 (above 150:1).
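Eq. (1.5.5) is straightforward to evaluate with standard-library special functions. The sketch below (our own helper, written in the text's Δ± and σ notation) reproduces the ln B values quoted in Table 1 for present-day accuracy σ = 0.1:

```python
import math

def ln_bayes_factor(delta_plus, delta_minus, sigma):
    """ln B_01 in favor of w = -1 from Eq. (1.5.5): flat prior on w_eff in
    [-1 - delta_minus, -1 + delta_plus], Gaussian likelihood of width sigma
    centered on w = -1 (Savage-Dickey density ratio)."""
    width = delta_plus + delta_minus
    bracket = (math.erfc(-delta_plus / (math.sqrt(2.0) * sigma))
               - math.erfc(delta_minus / (math.sqrt(2.0) * sigma)))
    return math.log(math.sqrt(2.0 / math.pi) * width / (sigma * bracket))

# The three benchmark models, at present-day accuracy sigma = 0.1:
phantom = ln_bayes_factor(0.0, 10.0, 0.1)        # ~ 4.4
fluid   = ln_bayes_factor(2.0 / 3.0, 0.0, 0.1)   # ~ 1.7
small   = ln_bayes_factor(0.01, 0.01, 0.1)       # ~ 0.0
```

The wide phantom prior is heavily penalized by Occam's razor, while a prior much narrower than σ gives ln B ≈ 0: the data cannot distinguish the models at all.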

Figure 4: Required accuracy on w_eff = −1 to obtain strong evidence against a model where −1 − Δ− ≤ w_eff ≤ −1 + Δ+, as compared to a cosmological constant model, w = −1. For a given σ, models to the right and above the contour are disfavored with odds of more than 20:1.


Table 1: Strength of evidence disfavoring the three benchmark models against a cosmological constant model, using an indicative accuracy on w = −1 from present data, σ ∼ 0.1.

Model              (Δ+, Δ−)       ln B today (σ = 0.1)
Phantom            (0, 10)        4.4 (strongly disfavored)
Fluid-like         (2/3, 0)       1.7 (slightly disfavored)
Small departures   (0.01, 0.01)   0.0 (inconclusive)

We plot in Figure 4 contours of constant observational accuracy σ in the model predictivity space (Δ−, Δ+) for ln B = 3.0 from Eq. (1.5.5), corresponding to odds of 20 to 1 in favor of a cosmological constant (slightly above the “moderate” threshold). The figure can be interpreted as giving the space of extended models that can be significantly disfavored with respect to w = −1 at a given accuracy. The results for the three benchmark models mentioned above (fluid-like, phantom, or small departures from w = −1) are summarized in Table 1. Conversely, we can ask which precision needs to be reached to support ΛCDM at a given level. This is shown in Table 2 for odds of 20:1 and 150:1. We see that to rule out a fluid-like model, which also covers the parameter space expected for canonical scalar-field dark energy, we need to reach a precision comparable to the one that the Euclid satellite is expected to attain.


Table 2: Required accuracy for future surveys in order to disfavor the three benchmark models against w = −1 for two different strengths of evidence.

Model              (Δ+, Δ−)       Required σ for odds > 20:1   > 150:1
Phantom            (0, 10)        0.4                          5 ⋅ 10⁻²
Fluid-like         (2/3, 0)       3 ⋅ 10⁻²                     3 ⋅ 10⁻³
Small departures   (0.01, 0.01)   4 ⋅ 10⁻⁴                     5 ⋅ 10⁻⁵

By considering the model M2 we can also provide a direct link with the target DETF FoM. Let us choose (fairly arbitrarily) a flat prior of width Δw0 and Δwa in the dark-energy parameters, so that the value of the prior is 1/(Δw0 Δwa) everywhere. Let us assume that the likelihood is Gaussian in w0 and wa and centered on ΛCDM (i.e., the data fully support Λ as the dark energy).

As above, we need to distinguish different cases depending on the width of the prior. If we accept the argument of the previous section that we expect only a small deviation from w = −1, and set a prior width of order 0.01 on both w0 and wa, then the posterior is dominated by the prior, and the ratio will be of order 1 if the future data are compatible with w = −1. Since the precision of the experiment is comparable to the expected deviation, both ΛCDM and evolving dark energy are equally probable (as argued above and shown for model M1 in Table 1), and we have to wait for a detection of w ≠ −1 or a significant further increase in precision (cf. the last row in Table 2).

However, one often considers a much wider range for w, for example the fluid-like model with w0 ∈ [−1/3, −1] and wa ∈ [−1, 1] with equal probability (neglecting some subtleties near w = −1). If the likelihood is much narrower than the prior range, then the value of the normalized posterior at w = −1 will be 2/(2π √|Cov(w0, wa)|) = FoM/π (since we excluded w < −1; otherwise it would be half this value). The Bayes factor is then given by

B01 = Δw0 Δwa FoM / π .   (1.5.6)
For the prior given above, we end up with B01 ≈ 4 FoM/(3π) ≈ 0.4 FoM. In order to reach a “decisive” Bayes factor, usually characterized as ln B > 5 or B > 150, we thus need a figure of merit exceeding 375. Demanding that Euclid achieve FoM > 500 places us, therefore, on the safe side, and allows us to reach the same conclusion (the ability to favor ΛCDM decisively if the data are in full agreement with w = −1) under small variations of the prior as well.
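The link between the FoM and the evidence can be sketched as follows; the prior widths are the fluid-like choice from the text, and the helper name is ours:

```python
import math

def bayes_factor_from_fom(fom, dw0=2.0 / 3.0, dwa=2.0):
    """B_01 = dw0 * dwa * FoM / pi, Eq. (1.5.6), for flat priors of width
    dw0 (here w0 in [-1, -1/3]) and dwa (here wa in [-1, 1])."""
    return dw0 * dwa * fom / math.pi

# 'Decisive' evidence requires B_01 > 150, i.e. a minimum FoM of:
fom_needed = 150.0 * math.pi / ((2.0 / 3.0) * 2.0)
```

With the exact prefactor 4/(3π) ≈ 0.42 the threshold is FoM ≳ 353; the text's rounded value 0.4 gives the quoted 375. Either way, FoM > 500 is comfortably decisive.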

A similar analysis could be easily carried out to compare the cosmological constant model against departures from Einstein gravity, thus giving some useful insight into the potential of future surveys in terms of Bayesian model selection.

To summarize, we used inflation as a dark-energy prototype to show that the current experimental bounds of w ≈ −1.0 ± 0.1 are not yet sufficient to significantly favor a cosmological constant over other models. In addition, even when expecting a deviation of w from −1 of order unity, our current knowledge of w does not allow us to favor Λ strongly in a Bayesian context. We showed that we need to reach percent-level accuracy both to have any chance of observing a deviation of w from −1 if the dark energy is similar to inflation, and because it is at this point that a cosmological constant starts to be favored decisively for prior widths of order 1. In either scenario, we do not expect to be able to improve our knowledge much with a lower-precision measurement of w. The dark energy can of course be quite different from the inflaton and may lead to larger deviations from w = −1. This would indeed be the preferred situation for Euclid, as then we would be able to investigate much more easily the physical origin of the accelerated expansion. We can, however, have departures from ΛCDM even if w is very close to −1 today. In fact, most present models of modified gravity and dynamical dark energy have a value of w0 that is asymptotically close to −1 (in the sense that large departures from this value are already excluded). In this sense, for example, early dark-energy parameterizations (Ω_e) test the amount of dark energy in the past, which can still be non-negligible (e.g., [723]). Similarly, a fifth force can lead to a background evolution similar to ΛCDM but with different effects on perturbations and structure formation [79].

1.5.2 The effective anisotropic stress as evidence for modified gravity

As discussed in Section 1.4, all dark energy and modified gravity models can be described with the same effective metric degrees of freedom. This makes it impossible in principle to distinguish clearly between the two possibilities with cosmological observations alone. The cleanest tests would come from laboratory experiments, but these may well be impossible to achieve. We would expect model-comparison analyses to still favor the correct model, as it should provide the most elegant and economical description of the data; however, we may not know the correct model a priori, and it would be more useful if we could identify generic differences between the different classes of explanations, based on the phenomenological description that can be used directly to analyze the data.

Looking at the effective energy-momentum tensor of the dark-energy sector, we can either try to find a hint in the pressure perturbation δp or in the effective anisotropic stress π. While all scalar-field dark-energy models affect δp (for multiple fields with different sound speeds in potentially quite complex ways), they generically have π = 0. The opposite is also true: modified-gravity models generically have π ≠ 0 [537]. Radiation and neutrinos contribute to the anisotropic stress on cosmological scales, but their contribution is safely negligible in the late-time universe. In the following sections we will first look at models with a single extra degree of freedom, for which we will find that π ≠ 0 is a firm prediction. We will then consider the f(R, G) case as an example of multiple degrees of freedom [782].

1.5.2.1 Modified gravity models with a single degree of freedom

In the prototypical scalar-tensor theory, where the scalar φ is coupled to R through F(φ)R, we find that π ∝ (F′/F)δφ. This is very similar to the f(R) case, for which π ∝ (F′/F)δR (where now F = df/dR). In both cases the generic model with vanishing anisotropic stress is given by F′ = 0, which corresponds to a constant coupling (for scalar-tensor) or to f(R) ∝ R + Λ; in both cases we recover the GR limit. The other possibility, δφ = 0 or δR = 0, imposes a very specific evolution on the perturbations that in general does not agree with observations.

Another possible way to build a theory that deviates from GR is to use a function of the second-order Lovelock invariant, the Gauss–Bonnet term G ≡ R² − 4R_μν R^μν + R_αβμν R^αβμν. The Gauss–Bonnet term by itself is a topological invariant in four spacetime dimensions and does not contribute to the equations of motion. It is useful here since it avoids an Ostrogradski-type instability [967]. In R + f(G) models the situation is slightly more complicated than for the scalar-tensor case, as

π ∼ Φ − Ψ = 4H ξ̇ Ψ − 4 ξ̈ Φ + 4(H² + Ḣ) δξ ,   (1.5.7)
where the dot denotes a derivative with respect to ordinary time and ξ ≡ df/dG (see, e.g., [782]). An obvious choice to force π = 0 is to take ξ constant, which leads to R + G + Λ in the action, and thus again to GR in four spacetime dimensions. There is no obvious way to exploit the extra ξ terms in Eq. (1.5.7), with the exception of curvature-dominated evolution and small scales (which is not very relevant for realistic cosmologies).

Finally, in DGP one has, with the notation of [41],

Φ − Ψ = (2H r_c − 1) / [ 1 + H r_c (3H r_c − 2) ] · Φ .   (1.5.8)
This expression vanishes for Hr_c = 1/2 (which is never reached in the usual scenario, in which Hr_c → 1 from above) and for Hr_c → ∞ (for large Hr_c the prefactor of Φ in Eq. (1.5.8) vanishes like 1/(Hr_c)). In the DGP scenario the absolute value of the anisotropic stress grows over time and approaches the limiting value Φ − Ψ = Φ/2. The only way to avoid this limit is to set the crossover scale to be unobservably large, r_c ∝ M_4²/M_5³ → ∞. In this situation the five-dimensional part of the action is suppressed and we end up with the usual 4D GR action.
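The behavior described above is easy to verify numerically from Eq. (1.5.8); the function below is our own sketch, with x = H r_c:

```python
def dgp_stress_ratio(hrc):
    """(Phi - Psi)/Phi in DGP as a function of H*r_c, from Eq. (1.5.8)."""
    return (2.0 * hrc - 1.0) / (1.0 + hrc * (3.0 * hrc - 2.0))

# Vanishes at H*r_c = 1/2, reaches the limiting value 1/2 at H*r_c = 1,
# and decays like 1/(H*r_c) for a very large crossover scale:
vals = [dgp_stress_ratio(x) for x in (0.5, 1.0, 100.0)]
```

As H r_c decreases towards 1 from above, the ratio grows monotonically to its limiting value 1/2, mirroring the growth of the anisotropic stress over time in the self-accelerating branch.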

In all of these examples, only the GR limit consistently has no effective anisotropic stress in situations compatible with observational results (matter-dominated evolution with a transition towards a state with w ≪ −1/3).

1.5.2.2 Balancing multiple degrees of freedom

In models with multiple degrees of freedom it is, at least in principle, possible to balance the contributions in order to achieve a net vanishing π. The case of f(R, G) gravity was studied explicitly in [782] (we refer to that paper for details). The general equation,

Φ − Ψ = (1/F) [ δF + 4H ξ̇ Ψ − 4 ξ̈ Φ + 4(H² + Ḣ) δξ ] ,   (1.5.9)
is rather complicated and generically depends, e.g., on the scale of the perturbations (except for constant ξ, which in turn requires constant F for π = 0 and corresponds again to the GR limit). Looking only at small scales, k ≫ aH, one finds
f_RR + 16(H² + Ḣ)(H² + 2Ḣ) f_GG + 4(2H² + 3Ḣ) f_RG = 0 .   (1.5.10)
It is in principle possible to find simultaneous solutions of this equation and the modified Friedmann (0-0 Einstein) equation, for a given H(t). As an example, the model f(R, G) = R + Gⁿ Rᵐ with
n = (1/90)(11 ± √41) ,   m = (1/180)(61 ± 11√41)   (1.5.11)
allows for matter-dominated evolution, H = 2/(3t), with no anisotropic stress. It is, however, not at all clear how to connect this model to different epochs, and especially how to move towards a future accelerated epoch with π = 0, as the above exponents are fine-tuned to produce no anisotropic stress specifically during matter domination. Additionally, during the transition to a de Sitter fixed point one generically encounters severe instabilities.

In summary, none of the standard examples with a single extra degree of freedom discussed above allows for a viable model with π = 0. While finely balanced solutions can be constructed for models with several degrees of freedom, one would need to link the motion in model space to the evolution of the universe in order to preserve π = 0. This requires even more fine-tuning, and in some cases is not possible at all, most notably for the evolution to a de Sitter state. The effective anisotropic stress therefore appears to be a very good quantity to look at when searching for generic conclusions about the nature of the accelerated expansion from cosmological observations.

1.5.3 Parameterized frameworks for theories of modified gravity

As explained in earlier sections of this report, modified-gravity models cannot be distinguished from dark-energy models by using the FLRW background equations alone. But by comparing the background expansion rate of the universe with observables that depend on linear perturbations of an FLRW spacetime we can hope to distinguish between these two categories of explanations. An efficient way to do this is via a parameterized, model-independent framework that describes cosmological perturbation theory in modified gravity. We present here one such framework, the parameterized post-Friedmann formalism [73], that implements possible extensions to the linearized gravitational field equations.

The parameterized post-Friedmann (PPF) approach is inspired by the parameterized post-Newtonian (PPN) formalism [961, 960], which uses a set of parameters to summarize leading-order deviations from the metric of GR. PPN was developed in the 1970s for testing alternative gravity theories in the solar system or in binary systems, and is valid in weak-field, low-velocity scenarios. PPN itself cannot be applied to cosmology, because we do not know the exact form of the linearized metric for our Hubble volume. Furthermore, PPN can only test for constant deviations from GR, whereas the cosmological data we collect contain inherent redshift dependence.

For these reasons the PPF framework is a parameterization of the gravitational field equations (instead of the metric) in terms of a set of functions of redshift. A theory of modified gravity can be analytically mapped onto these PPF functions, which in turn can be constrained by data.

We begin by writing the perturbed Einstein field equations for spin-0 (scalar) perturbations in the form:

δG_μν = 8πG δT_μν + δU_μν^metric + δU_μν^d.o.f. + gauge-invariance-fixing terms ,   (1.5.12)
where δT_μν is the usual perturbed stress-energy tensor of all cosmologically relevant fluids. The tensor δU_μν^metric holds new terms that may appear in a modified theory, containing perturbations of the metric (in GR such perturbations are entirely accounted for by δG_μν). δU_μν^d.o.f. holds perturbations of any new degrees of freedom that are introduced by modifications to gravity. A simple example of the latter is a new scalar field, such as introduced by scalar-tensor or Galileon theories. However, new degrees of freedom could also come from spin-0 perturbations of new tensor or vector fields, Stückelberg fields, effective fluids, and actions based on curvature invariants (such as f(R) gravity).

In principle there could also be new terms containing matter perturbations on the RHS of Eq. (1.5.12). However, for theories that maintain the weak equivalence principle – i.e., those with a Jordan frame where matter is uncoupled to any new fields – these matter terms can be eliminated in favor of additional contributions to δU_μν^metric and δU_μν^d.o.f..

The tensor δU_μν^metric is then expanded in terms of two gauge-invariant perturbation variables Φ̂ and Γ̂. Φ̂ is one of the standard gauge-invariant Bardeen potentials, while Γ̂ is the following combination of the Bardeen potentials: Γ̂ = (1/k)(Φ̂˙ + ℋ Ψ̂). We use Γ̂ instead of the usual Bardeen potential Ψ̂ because Γ̂ has the same derivative order as Φ̂ (whereas Ψ̂ does not). We then deduce that the only possible structure of δU_μν^metric that maintains the gauge invariance of the field equations is a linear combination of Φ̂, Γ̂ and their derivatives, multiplied by functions of the cosmological background (see Eqs. (1.5.13) – (1.5.17) below).

δU_μν^d.o.f. is similarly expanded in a set of gauge-invariant potentials {χ̂_i} that contain the new degrees of freedom. An algorithm for constructing the relevant gauge-invariant quantities in any theory was presented in [73].

For concreteness we will consider here a theory that contains only one new degree of freedom and is second-order in its equations of motion (a generic but not watertight requirement for stability, see [967]). Then the four components of Eq. (1.5.12) are:

[Eqs. (1.5.13) – (1.5.17), the four components of the parameterized field equations, not shown.]

where δĜ^i_j = δG^i_j − (1/3) δ^i_j δG^k_k. Each of the lettered coefficients in Eqs. (1.5.13) – (1.5.17) is a function of cosmological background quantities, i.e., of time or redshift; this dependence has been suppressed above for clarity. Potentially the coefficients could also depend on scale, but this dependence is not arbitrary [832]. These PPF coefficients are the analogues of the PPN parameters; they are the objects that a particular theory of gravity ‘maps onto’, and the quantities to be constrained by data. Numerous examples of the PPF coefficients corresponding to well-known theories are given in [73].

The final terms in Eqs. (1.5.13) – (1.5.16) are present to ensure the gauge invariance of the modified field equations, as is required for any theory governed by a covariant action. The quantities M_Δ, M_Θ and M_P are all pre-determined functions of the background. ε and ν are off-diagonal metric perturbations, so these terms vanish in the conformal Newtonian gauge. The gauge-fixing terms should be regarded as a piece of mathematical book-keeping; there is no constrainable freedom associated with them.

One can then calculate observable quantities – such as the weak lensing kernel or the growth rate of structure f(z) – using the parameterized field equations (1.5.13) – (1.5.17). Similarly, they can be implemented in an Einstein–Boltzmann solver code such as CAMB [559] to utilize constraints from the CMB. If we take the divergence of the gravitational field equations (i.e., the unperturbed equivalent of Eq. (1.5.12)), the left-hand side vanishes due to the Bianchi identity, while the stress-energy tensor of matter obeys its standard conservation equations (since we are working in the Jordan frame). Hence the U-tensor must be separately conserved, and this provides the necessary evolution equation for the variable χ̂:

[Eq. (1.5.18), the conservation equation for the U-tensor, not shown.]

Eq. (1.5.18) has two components. If one wishes to treat theories with more than two new degrees of freedom, further information is needed to supplement the PPF framework.

The full form of the parameterized equations (1.5.13) – (1.5.17) can be simplified in the ‘quasistatic regime’, that is, on significantly sub-horizon scales where the time derivatives of perturbations can be neglected in comparison to their spatial derivatives [457]. Quasistatic lengthscales are the relevant regime for weak lensing surveys and galaxy redshift surveys such as those of Euclid. A common parameterization used on these scales has the form:

k² Ψ = −4πG a² μ(a, k) ρ Δ ,   (1.5.19)

Φ = γ(a, k) Ψ ,   (1.5.20)

where {μ, γ} are two functions of time and scale to be constrained. This parameterization has been widely employed [131, 277, 587, 115, 737, 980, 320, 441, 442]. It has the advantages of simplicity and somewhat greater physical transparency: μ(a, k) can be regarded as describing the evolution of the effective gravitational constant, while γ(a, k) can, to a certain extent, be thought of as acting like a source of anisotropic stress (see Section 1.5.2).
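To illustrate how the (μ, γ) parameterization feeds into observables, the sketch below (our own minimal example, not a Euclid pipeline) integrates the standard quasistatic growth equation δ″ + (2 + d ln E/dN) δ′ = (3/2) μ Ω_m(a) δ in e-folds N = ln a on a ΛCDM background, with a constant μ; μ = 1 recovers GR, and μ > 1 enhances the growth rate f = d ln δ/d ln a:

```python
import math

def growth_rate(mu=1.0, omega_m=0.3, n_steps=5000):
    """Growth rate f = dln(delta)/dln(a) at a = 1, from the quasistatic
    growth equation in e-folds N = ln a on a LCDM background:
        delta'' + (2 + dlnE/dN) delta' = 1.5 * mu * Omega_m(a) * delta,
    with a constant effective coupling mu (mu = 1 recovers GR)."""
    def omega_m_of_a(a):
        e2 = omega_m * a ** -3 + (1.0 - omega_m)   # E^2 = H^2 / H0^2
        return omega_m * a ** -3 / e2

    def deriv(n, y):
        delta, ddelta = y
        om = omega_m_of_a(math.exp(n))
        dln_e = -1.5 * om                          # dlnE/dN for LCDM
        return (ddelta, -(2.0 + dln_e) * ddelta + 1.5 * mu * om * delta)

    n = math.log(1e-3)                             # start deep in matter era
    h = -n / n_steps
    y = (1e-3, 1e-3)                               # delta ~ a, so delta' = delta
    for _ in range(n_steps):                       # classical RK4 integration
        k1 = deriv(n, y)
        k2 = deriv(n + h / 2, tuple(y[i] + h / 2 * k1[i] for i in range(2)))
        k3 = deriv(n + h / 2, tuple(y[i] + h / 2 * k2[i] for i in range(2)))
        k4 = deriv(n + h, tuple(y[i] + h * k3[i] for i in range(2)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(2))
        n += h
    return y[1] / y[0]
```

For μ = 1 and Ω_m = 0.3 this returns f ≈ 0.5, close to the familiar Ω_m(a)^0.55 fit; raising μ raises f, which is how a modified effective gravitational constant would show up in redshift-space-distortion measurements of the growth rate.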

Let us make a comment about the number of coefficient functions employed in the PPF formalism. One may justifiably question whether the number of unknown functions in Eqs. (1.5.13) – (1.5.17) could ever be constrained. In reality, the PPF coefficients are not all independent. The form shown above represents a fully agnostic description of the extended field equations. However, as one begins to impose restrictions in theory space (even the simple requirement that the modified field equations must originate from a covariant action), constraint relations between the PPF coefficients begin to emerge. These constraints remove freedom from the parameterization.

Even so, degeneracies will exist between the PPF coefficients. It is likely that a subset of them can be well-constrained, while another subset has relatively little impact on current observables and so cannot be tested. In this case it is justifiable to drop the untestable terms. Note that this realization would, in itself, be an interesting statement – that there are parts of the gravitational field equations that are essentially unknowable.

Finally, we note that there is also a completely different, complementary approach to parameterizing modifications to gravity. Instead of parameterizing the linearized field equations, one could choose to parameterize the perturbed gravitational action. This approach has been used recently to apply the standard techniques of effective field theory to modified gravity; see [107, 142, 411] and references therein.

