4.3 Beyond homogeneity and isotropy

The crucial ingredient that kickstarted dark energy research was the 1998 interpretation of standard-candle observations in terms of the cosmic acceleration required to explain the data in the context of the FLRW metric. What we observe, however, is merely that distant sources (z > 0.3) are dimmer than we would predict in a matter-only universe calibrated through “nearby” sources. That is, we observe a different evolution of luminosity rather than directly an increase in the expansion rate. Can this be caused by a strong inhomogeneity rather than by an accelerating universe?

In addition, cosmic acceleration seems to be a recent phenomenon at least for standard dark-energy models, which gives rise to the coincidence problem. The epoch in which dark energy begins to play a role is close to the epoch in which most of the cosmic structures formed out of the slow linear gravitational growth. We are led to ask again: can the acceleration be caused by strong inhomogeneities rather than by a dark energy component?

Finally, one must notice that in all standard treatments of dark energy a perfectly isotropic expansion is always assumed. Could it be that some of the properties of acceleration depend critically on this assumption?

In order to investigate these issues, in this section we explore radical deviations from homogeneity and isotropy and see how Euclid can test them.

4.3.1 Anisotropic models

In recent times, there has been a resurgent interest in anisotropic cosmologies, classified in terms of Bianchi solutions to general relativity. This has been mainly motivated by hints of anomalies in the cosmic microwave background (CMB) distribution observed on the full sky by the WMAP satellite [288, 930, 268, 349]. The CMB is very well described as a highly isotropic (in a statistical sense) Gaussian random field, and the anomalies are a posteriori statistics whose significance should be corrected at least for the so-called look-elsewhere effect (see, e.g., [740, 122] and references therein); nevertheless, recent analyses have shown that local deviations from Gaussianity in some directions (the so-called cold spots, see [268]) cannot be excluded at high confidence levels. Furthermore, the CMB angular power spectrum extracted from the WMAP maps has shown in the past a quadrupole power lower than expected from the best-fit cosmological model [335]. Several explanations for this anomaly have been proposed (see, e.g., [905, 243, 297, 203, 408]), including the possibility that the universe is expanding with different velocities along different directions. While deviations from homogeneity and isotropy are constrained to be very small by cosmological observations, these constraints usually assume the non-existence of anisotropic sources in the late universe. Conversely, as suggested in [520, 519, 106, 233, 251], dark energy with anisotropic pressure acts as a late-time source of anisotropy. Even if one considers no anisotropic pressure fields, small departures from isotropy cannot be excluded, and it is interesting to devise possible strategies to detect them.

The effect of assuming an anisotropic cosmological model on the CMB pattern has been studied by [249, 89, 638, 601, 189, 516]. The Bianchi solutions describing the anisotropic line element were treated as small perturbations to a Friedmann–Robertson–Walker (FRW) background. Such early studies did not consider the possible presence of a non-null cosmological constant or dark energy and were recently updated by [652, 477].

One difficulty with the anisotropic models that have been shown to fit the large-scale CMB pattern is that they require very unrealistic choices of the cosmological parameters. For example, the Bianchi VIIh template used in [477] requires an open universe, a hypothesis which is excluded by most cosmological observations. An additional problem is that an inflationary phase – required to explain a number of features of the cosmological model – isotropizes the universe very efficiently, leaving a residual anisotropy that is negligible for any practical application. These difficulties vanish if an anisotropic expansion takes place only well after the decoupling between matter and radiation, for example at the time of dark energy domination [520, 519, 106, 233, 251].

Bianchi models are described by homogeneous and anisotropic metrics. If the anisotropy is slight, the dynamics of any Bianchi model can be decomposed into an isotropic FRW background plus a linear perturbation that breaks isotropy; homogeneity, on the other hand, is maintained with respect to three Killing vector fields.

The geometry of Bianchi models is set up by the structure constants $C^k_{ij}$, defined by the commutators of these three Killing fields $\vec{\xi}_i$:

$$[\vec{\xi}_i, \vec{\xi}_j] = C^k_{ij}\,\vec{\xi}_k. \qquad (4.3.1)$$

The structure constants are subject to the antisymmetry relation $C^k_{ij} = -C^k_{ji}$ and the Jacobi identities $C^a_{[bc}C^d_{e]a} = 0$. As a consequence, their attainable values are restricted to only four of the initial 27 necessary to describe a given space. In [340] these four values are dubbed $n_1, n_2, n_3$ and $a$. The categorization of Bianchi models into different types relies on classifying the inequivalent sets of these four constants. In Table 24 the subclass of interest containing the FRW limit is shown. Bianchi types VIIh and IX contain the open and closed FRW model, respectively. Type VII0 contains the flat FRW; types I and V are just particular subcases of VII0 and VIIh. In type I no vortical motions are allowed and the only extension with respect to the FRW case is that there are three different scale factors. The metric in general can be written as

$$g_{\mu\nu} = -n_\mu n_\nu + g_{ab}\,\xi^a_\mu \xi^b_\nu, \qquad (4.3.2)$$

where $g_{ab}$ is a 3 × 3 metric depending on t. It can be decomposed as $g_{ab} = e^{2\alpha}[e^{2\beta}]_{ab}$, where the first term represents the volumetric expansion and the second term includes the anisotropy.
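As a numerical consistency check of the classification below, one can verify the antisymmetry and the Jacobi identities directly. The sketch assumes the standard decomposition $C^k_{ij} = \varepsilon_{ijl}\,n^{lk} + \delta^k_j a_i - \delta^k_i a_j$ with $n = \mathrm{diag}(n_1, n_2, n_3)$ and $a_i = (a, 0, 0)$, which is not written explicitly in the text; the value h = 0.3 for type VIIh is arbitrary.

```python
import numpy as np

def structure_constants(a, n1, n2, n3):
    """C^k_{ij} = eps_{ijl} n^{lk} + delta^k_j a_i - delta^k_i a_j,
    with n = diag(n1, n2, n3) and a_i = (a, 0, 0)."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[j, i, k] = 1.0, -1.0
    n = np.diag([float(n1), float(n2), float(n3)])
    avec = np.array([float(a), 0.0, 0.0])
    C = np.einsum('ijl,lk->kij', eps, n)
    C += np.einsum('kj,i->kij', np.eye(3), avec)
    C -= np.einsum('ki,j->kij', np.eye(3), avec)
    return C

def jacobi_violation(C):
    """Max of |C^m_{ij} C^n_{mk} + cyclic permutations of (i, j, k)|."""
    T = np.einsum('mij,nmk->nijk', C, C)
    cyc = T + np.transpose(T, (0, 3, 1, 2)) + np.transpose(T, (0, 2, 3, 1))
    return np.abs(cyc).max()

# rows of Table 24: (a, n1, n2, n3) for types I, V, VII0, VIIh (h = 0.3), IX
table24 = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 0, 1, 1), (np.sqrt(0.3), 0, 1, 1), (0, 1, 1, 1)]
violations = [jacobi_violation(structure_constants(*row)) for row in table24]
```

All five rows of Table 24 satisfy the Jacobi identities to machine precision, as they must for consistent Lie algebras.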

Table 24: Bianchi models containing the FRW limit and their structure constants.

Type    a      n1    n2    n3
I       0      0     0     0
V       1      0     0     0
VII0    0      0     1     1
VIIh    √h     0     1     1
IX      0      1     1     1

Late-time anisotropy

While deviations from homogeneity and isotropy are constrained to be very small by cosmological observations, these constraints usually assume the non-existence of anisotropic sources in the late universe. The CMB provides very tight constraints on Bianchi models at the time of recombination [189, 516, 638], of the order of the quadrupole value, i.e., $\sim 10^{-5}$. Usually, in standard cosmologies with a cosmological constant the anisotropy parameters scale as the inverse of the comoving volume. This implies an isotropization of the expansion from recombination up to the present, leading to the typically derived constraints on the shear today, namely $\sim 10^{-9}\,\text{–}\,10^{-10}$. However, this is only true if the anisotropic expansion is not generated by any anisotropic source arising after decoupling, e.g., vector fields representing anisotropic dark energy [519].

For example, the effect of cosmic parallax [749] has been recently proposed as a tool to assess the presence of an anisotropic expansion of the universe. It is essentially the change in angular separation in the sky between far-off sources, due to an anisotropic expansion.

A common parameterization of an anisotropically distributed dark energy component is studied within the Bianchi type I class, where the line element is

$$ds^2 = -dt^2 + a^2(t)\,dx^2 + b^2(t)\,dy^2 + c^2(t)\,dz^2. \qquad (4.3.3)$$

The expansion rates in the three Cartesian directions x, y and z are defined as $H_X = \dot a/a$, $H_Y = \dot b/b$ and $H_Z = \dot c/c$, where the dot denotes the derivative with respect to coordinate time. In these models they differ from each other, but in the limit $H_X = H_Y = H_Z$ the flat FRW isotropic expansion is recovered. Among the Bianchi classification models, type I exhibits flat geometry and no overall vorticity; conversely, shear components $\Sigma_{X,Y,Z} = H_{X,Y,Z}/H - 1$ are naturally generated, where H is the expansion rate of the average scale factor, related to the volume expansion as $H = \dot A/A$ with $A = (abc)^{1/3}$.
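Since $H = \dot A/A = (H_X + H_Y + H_Z)/3$, the three shear components always sum to zero by construction. A minimal numerical sketch with arbitrary, purely illustrative power-law scale factors $a, b, c \propto t^{p_i}$:

```python
import numpy as np

# toy Bianchi I expansion: a, b, c ~ t^{p_i} with hypothetical exponents
p = np.array([0.5, 0.6, 0.7])
t = 2.0

H_xyz = p / t                 # H_X, H_Y, H_Z for power-law scale factors
H = H_xyz.mean()              # H = Adot/A with A = (abc)^{1/3} = (H_X + H_Y + H_Z)/3
Sigma = H_xyz / H - 1.0       # shear components Sigma_{X,Y,Z} = H_{X,Y,Z}/H - 1
```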

The anisotropic expansion is caused by the anisotropically stressed dark energy fluid whenever its energy density contributes to the global energy budget. If the major contributions to the overall budget come from matter and dark energy, as after recombination, their energy-momentum tensor can be parametrized as:

$$T^\mu_{(m)\,\nu} = \mathrm{diag}(-1, w_m, w_m, w_m)\,\rho_m, \qquad (4.3.4)$$
$$T^\mu_{(DE)\,\nu} = \mathrm{diag}(-1, w, w + 3\delta, w + 3\gamma)\,\rho_{DE}, \qquad (4.3.5)$$

respectively, where $w_m$ and w are the equation-of-state parameters of matter and dark energy, and the skewness parameters δ and γ can be interpreted as the differences of pressure along the y and z axes relative to the x axis. Note that the energy-momentum tensor (4.3.5) is the most general one compatible with the metric (4.3.3) [519]. Two quantities are introduced to define the degree of anisotropic expansion:

$$R \equiv (\dot a/a - \dot b/b)/H = \Sigma_x - \Sigma_y, \qquad S \equiv (\dot a/a - \dot c/c)/H = 2\Sigma_x + \Sigma_y. \qquad (4.3.6)$$

Considering the generalized Friedmann equation, the continuity equations for matter and dark energy and no coupling between the two fluids, the derived autonomous system reads [520, 519]:

$$U' = U(U - 1)\left[\gamma(3 + R - 2S) + \delta(3 - 2R + S) + 3(w - w_m)\right],$$
$$S' = \tfrac{1}{6}\left(9 - R^2 + RS - S^2\right)\left\{S\left[U(\delta + \gamma + w - w_m) + w_m - 1\right] - 6\gamma U\right\}, \qquad (4.3.7)$$
$$R' = \tfrac{1}{6}\left(9 - R^2 + RS - S^2\right)\left\{R\left[U(\delta + \gamma + w - w_m) + w_m - 1\right] - 6\delta U\right\},$$

where $U \equiv \rho_{DE}/(\rho_{DE} + \rho_m)$ and the derivatives are taken with respect to $\log(A)/3$. System (4.3.7) exhibits many different fixed points, defined as the solutions of $S' = R' = U' = 0$. Besides the Einstein–de Sitter case ($R_* = S_* = U_* = 0$), the most physically interesting for our purposes are the dark-energy-dominated solution
$$R_* = \frac{6\delta}{\delta + \gamma + w - 1}, \qquad S_* = \frac{6\gamma}{\delta + \gamma + w - 1}, \qquad U_* = 1, \qquad (4.3.8)$$
and the scaling solution
$$R_* = \frac{3\delta(\delta + \gamma + w)}{2(\delta^2 - \delta\gamma + \gamma^2)}, \qquad S_* = \frac{3\gamma(\delta + \gamma + w)}{2(\delta^2 - \delta\gamma + \gamma^2)}, \qquad U_* = \frac{w + \gamma + \delta}{w^2 - 3(\gamma - \delta)^2 + 2w(\gamma + \delta)}, \qquad (4.3.9)$$
in which $\rho_{DE}/\rho_m = \mathrm{const.}$, i.e., the fractional dark energy contribution to the total energy density is constant.
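Both fixed points can be verified numerically by inserting them into the right-hand side of the system (4.3.7). A minimal sketch (the parameter values are arbitrary illustrations; the scaling solution is checked for pressureless matter, $w_m = 0$):

```python
import numpy as np

def rhs(U, S, R, w, wm, delta, gamma):
    """Right-hand side of the autonomous system (4.3.7)."""
    pref = (9.0 - R**2 + R * S - S**2) / 6.0
    common = U * (delta + gamma + w - wm) + wm - 1.0
    dU = U * (U - 1.0) * (gamma * (3.0 + R - 2.0 * S)
                          + delta * (3.0 - 2.0 * R + S) + 3.0 * (w - wm))
    dS = pref * (S * common - 6.0 * gamma * U)
    dR = pref * (R * common - 6.0 * delta * U)
    return np.array([dU, dS, dR])

# illustrative parameter values (wm = 0: pressureless matter)
w, wm, delta, gamma = -0.5, 0.0, 0.2, 0.2

# dark-energy-dominated fixed point, Eq. (4.3.8): (U*, S*, R*)
den = delta + gamma + w - 1.0
fp_de = (1.0, 6.0 * gamma / den, 6.0 * delta / den)

# scaling fixed point, Eq. (4.3.9): (U*, S*, R*)
P = delta + gamma + w
D = delta**2 - delta * gamma + gamma**2
Q = w**2 - 3.0 * (gamma - delta)**2 + 2.0 * w * (gamma + delta)
fp_scaling = (P / Q, 3.0 * gamma * P / (2.0 * D), 3.0 * delta * P / (2.0 * D))
```

For these parameters both points drive all three derivatives to zero, and the scaling solution gives a physical dark energy fraction $0 < U_* < 1$.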

An anisotropic distribution of sources in the Euclid survey might constrain the anisotropy at present, when the dark energy density is of order 74% and hence not yet in the final dark-energy-dominated attractor phase (4.3.8).

4.3.2 Late-time inhomogeneity

Inhomogeneity is relatively difficult to determine, as observations are typically made on our past light cone, but some methods exist (e.g., [242, 339, 600]). However, homogeneity may be tested by exploring the interior of the past light cone, using the fossil record of galaxies to probe along the past world line of a large number of galaxies [428]. One can use the average star formation rate at a fixed lookback time as a diagnostic test for homogeneity. The lookback time has two elements to it: the lookback time of the emission of the light, plus the time along the past world line. The second of these can be probed using the integrated stellar spectra of the galaxies, with a code such as vespa [890], and is evidently dependent only on atomic and nuclear physics, independent of homogeneity. The lookback time can also be computed, surprisingly simply, without assuming homogeneity, from [428]

$$\Delta t = \int_0^z \frac{dz'}{(1+z')\,H_r(z')}, \qquad (4.3.10)$$
where $H_r$ is the radial Hubble rate. In principle, this can be obtained from radial BAOs, assuming early-time homogeneity so that the physical BAO scale is fixed. The spectroscopic part of Euclid could estimate both the star formation histories from stacked spectra and the radial expansion rate.
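The integral (4.3.10) is straightforward to evaluate numerically once a radial expansion rate is specified. A sketch, assuming for pure illustration an FLRW-like form for $H_r(z)$ (in a genuinely inhomogeneous model $H_r$ would differ):

```python
import numpy as np
from scipy.integrate import quad

# hypothetical radial expansion rate, FLRW-like purely for illustration
H0 = 70.0 / 2.998e5          # 70 km/s/Mpc in units of 1/Mpc (c = 1)

def H_r(z, Om=0.3):
    return H0 * np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

def lookback(z):
    """Delta t of Eq. (4.3.10), in Mpc (divide by c to convert to time)."""
    val, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * H_r(zp)), 0.0, z)
    return val
```

At small redshift the result reduces to $\Delta t \approx z/H_0$, which provides a quick sanity check.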

4.3.3 Inhomogeneous models: Large voids

Nonlinear inhomogeneous models are traditionally studied either with higher-order perturbation theory or with N-body codes. Both approaches have their limits. A perturbation expansion obviously breaks down when the perturbations are deeply in the nonlinear regime. N-body codes, on the other hand, are intrinsically Newtonian and, at the moment, are unable to take into account full relativistic effects. Nevertheless, these codes can still account for the general relativistic behavior of gravitational collapse in the case of inhomogeneous large void models, as shown recently in [30], where the growth of the void follows the full nonlinear GR solution down to large density contrasts (of order one).

A possibility to make progress is to proceed with the most extreme simplification: radial symmetry. By assuming that the inhomogeneity is radial (i.e., we are at the center of a large void or halo) the dynamical equations can be solved exactly and one can make definite observable predictions.

It is however clear from the start that these models are highly controversial, since the observer needs to be located at the center of the void to within about a few percent of the void scale radius, see [141, 242], in tension with the long-held Copernican principle (CP). Notwithstanding this, the idea that we live near the center of a huge void is attractive for another important reason: a void creates an apparent acceleration field that could in principle match the supernovae observations [891, 892, 220, 474]. Since we observe that nearby SN Ia recede faster than predicted by the Einstein–de Sitter H(z), we could assume that we live in the middle of a huge spherical region which is expanding faster because it is emptier than the outside. The transition redshift $z_e$, i.e., the void edge, should be located around 0.3 – 0.5, the value at which in the standard interpretation we observe the beginning of acceleration.

The consistent way to realize such a spherical inhomogeneity has been studied since the 1930s in the relativistic literature: the Lemaître–Tolman–Bondi (LTB) metric. This is the generalization of a FLRW metric in which the expansion factor along the radial coordinate r is different relative to the surface line element $d\Omega^2 = d\theta^2 + \sin^2\theta\, d\phi^2$. If we assume the inhomogeneous metric (this subsection follows closely the treatment in [49])

$$ds^2 = -dt^2 + X^2(t,r)\,dr^2 + R^2(t,r)\,d\Omega^2, \qquad (4.3.11)$$

and solve the (0,1) Einstein equation for a fluid at rest, we find that the LTB metric is given by

$$ds^2 = -dt^2 + \frac{[R'(t,r)]^2}{1 + \beta(r)}\,dr^2 + R^2(t,r)\,d\Omega^2, \qquad (4.3.12)$$

where $R(t,r), \beta(r)$ are arbitrary functions. Here primes and dots refer to partial space and time derivatives, respectively. The function β(r) can be thought of as a position-dependent spatial curvature. If R is factorized so that $R(t,r) = a(t)f(r)$ and $\beta(r) = -Kf^2(r)$, then we recover the FLRW metric (up to a redefinition of r: from now on when we seek the FLRW limit we put $R = a(t)r$ and $\beta = -Kr^2$). Otherwise, we have a metric representing a spherical inhomogeneity centered on the origin. An observer located at the origin will observe an isotropic universe. We can always redefine r at the present time so that $R_0 \equiv R(t_0,r) = r$, which makes the metric very similar to a FLRW one today.

Considering the infinitesimal radial proper length $D_\| = R'\,dr/\sqrt{1+\beta}$, we can define the radial Hubble function as

$$H_\| \equiv \dot D_\|/D_\| = \dot R'/R', \qquad (4.3.13)$$

and similarly the transverse Hubble function:

$$H_\perp = \dot R/R. \qquad (4.3.14)$$
Of course the two definitions coincide for the FLRW metric. The non-vanishing components of the Ricci tensor for the LTB metric are
$$R^0_{\ 0} = \frac{2\ddot R}{R} + \frac{\ddot R'}{R'}, \qquad (4.3.15)$$
$$R^1_{\ 1} = \frac{2\dot R\dot R' + R\ddot R' - \beta'}{RR'}, \qquad (4.3.16)$$
$$R^2_{\ 2} = R^3_{\ 3} = \frac{\dot R^2 - \beta}{R^2} + \frac{\dot R\dot R' + R'\ddot R - \beta'/2}{RR'}. \qquad (4.3.17)$$

In terms of the two Hubble functions, we find that the Friedmann equations for the pressureless matter density ρm (t,r) are given by [28]

$$H_\perp^2 + 2H_\| H_\perp - \frac{\beta}{R^2} - \frac{\beta'}{RR'} = 8\pi G\,\rho_m, \qquad (4.3.18)$$
$$6\,\frac{\ddot R}{R} + 2H_\perp^2 - \frac{2\beta}{R^2} - 2H_\| H_\perp + \frac{\beta'}{RR'} = -8\pi G\,\rho_m. \qquad (4.3.19)$$
Adding Eqs. (4.3.18) and (4.3.19), it follows that $2R\ddot R + \dot R^2 = \beta$. Integrating this equation, we obtain a Friedmann-like equation

$$H_\perp^2 = \frac{\alpha(r)}{R^3} + \frac{\beta(r)}{R^2}, \qquad (4.3.20)$$

where α(r) is a free function that we can use along with β(r) to describe the inhomogeneity. From this we can define an effective density parameter $\Omega_m^{(0)}(r) = \Omega_m(r, t_0)$ today:

$$\Omega_m^{(0)}(r) \equiv \frac{\alpha(r)}{R_0^3 H_{\perp 0}^2}, \qquad (4.3.21)$$

where $R_0 \equiv R(r,t_0) = r$ and $H_{\perp 0} \equiv H_\perp(r,t_0)$ (the superscript (0) denotes the present value), and an effective spatial curvature

$$\Omega_K^{(0)}(r) = 1 - \Omega_m^{(0)}(r) = \frac{\beta(r)}{R_0^2 H_{\perp 0}^2}. \qquad (4.3.22)$$

Hence, we see that the initial condition at some time $t_0$ (which here we take as the present time) must specify two free functions of r, for instance $\alpha(r), \beta(r)$ or $\Omega_m^{(0)}(r), H_{\perp 0}(r)$. The latter choice shows that the inhomogeneity can be in the matter distribution or in the expansion rate or in both. This freedom can be used to fit simultaneously for any expansion rate (and therefore luminosity and angular diameter distances [771]) and for any source number density [681].
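By construction, the definitions (4.3.21) and (4.3.22) satisfy the closure relation $\Omega_m^{(0)}(r) + \Omega_K^{(0)}(r) = 1$ at every r. A minimal numerical check, with hypothetical profiles α(r), β(r) chosen only for illustration:

```python
import numpy as np

# hypothetical radial profiles alpha(r), beta(r): arbitrary smooth functions
def alpha(r):
    return 0.3 * (1.0 + 0.5 * np.exp(-r**2))

def beta(r):
    return 0.05 * r**2 / (1.0 + r**2)

r = np.linspace(0.1, 3.0, 50)
R0 = r                                                   # gauge choice R(t0, r) = r
H_perp0 = np.sqrt(alpha(r) / R0**3 + beta(r) / R0**2)    # Eq. (4.3.20) at t0
Omega_m0 = alpha(r) / (R0**3 * H_perp0**2)               # Eq. (4.3.21)
Omega_K0 = beta(r) / (R0**2 * H_perp0**2)                # Eq. (4.3.22)
```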

If one imposes the additional constraint that the age of the universe is the same for every observer, then only one free function is left [380]. The same occurs if one chooses $\Omega_m^{(0)}(r) = \mathrm{const.}$ (notice that this is different from $\rho_m^{(0)}(r) = \mathrm{const.}$, which is another possible choice), i.e., if the matter density fraction is assumed homogeneous today (and only today) [341]. The choice of a homogeneous universe age guarantees against the existence of diverging inhomogeneities in the past. However, there is no compelling reason to impose such restrictions.

Eq. (4.3.20) is the classical cycloid equation, whose solution for β > 0 is given parametrically by

$$R(t,r) = \frac{\alpha}{2\beta}\left(\cosh\eta - 1\right), \qquad (4.3.23)$$
$$t(r,\eta) = \frac{\alpha}{2\beta^{3/2}}\left(\sinh\eta - \eta\right) + t_B(r), \qquad (4.3.24)$$

where $t_B(r) = t(r, \eta = 0)$ is the inhomogeneous “big-bang” time, i.e., the time for which η = 0 and R = 0 for a point at comoving distance r. This can be put to zero in all generality by a redefinition of time. The “time” variable η is defined by the relation

$$\eta = \int_0^t \frac{\beta(r)^{1/2}}{R(\tilde t, r)}\, d\tilde t. \qquad (4.3.25)$$

Notice that the “time” η that corresponds to a given t depends on r; so R(r,t) is found by solving numerically for η(t,r) from Eq. (4.3.24) and then substituting into $R[r, \eta(r,t)]$. The present epoch $\eta_0(r)$ is defined by the condition $R = R_0$. From this one can derive the age of the universe $t_{\rm age}(r) = t(r, \eta_0) - t_B(r)$ in terms of $\Omega_m^{(0)}, H_{\perp 0}$. For β < 0 the η functions in Eqs. (4.3.23) and (4.3.24) become $(1 - \cos\eta)$ and $(\eta - \sin\eta)$ for R and t, respectively, while for β = 0 they are $\eta^2/2$ and $\eta^3/6$: we will not consider these cases further.
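One can check numerically that the β > 0 parametric form, $R \propto (\cosh\eta - 1)$ and $t - t_B \propto (\sinh\eta - \eta)$, indeed satisfies Eq. (4.3.20), using $d\eta/dt = \sqrt\beta/R$ from Eq. (4.3.25). A sketch with arbitrary constant values of α and β at fixed r:

```python
import numpy as np

alpha, beta = 1.0, 0.5    # illustrative constant values of alpha(r), beta(r) at fixed r
eta = np.linspace(0.1, 3.0, 200)

R = alpha / (2.0 * beta) * (np.cosh(eta) - 1.0)          # R(eta), Eq. (4.3.23)
t = alpha / (2.0 * beta**1.5) * (np.sinh(eta) - eta)     # t(eta), Eq. (4.3.24), t_B = 0

# Rdot = (dR/deta)(deta/dt), with deta/dt = sqrt(beta)/R from Eq. (4.3.25)
Rdot = alpha / (2.0 * beta) * np.sinh(eta) * np.sqrt(beta) / R
H_perp_sq = (Rdot / R)**2
friedmann = alpha / R**3 + beta / R**2                   # rhs of Eq. (4.3.20)
```

The two sides agree to machine precision along the whole trajectory.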

As anticipated, since we need a faster expansion inside some distance to mimic cosmic acceleration, we need to impose on our solution the structure of a void. An example of the choice of $\Omega_m^{(0)}(r) \equiv \Omega_m(r, t_0)$, $h^{(0)}(r) \equiv H_{\perp 0}/(100\ \mathrm{km\ s^{-1}\ Mpc^{-1}})$ is [380]

$$\Omega_m^{(0)}(r) = \Omega_{\rm out} + (\Omega_{\rm in} - \Omega_{\rm out})\, f(r, r_0, \Delta), \qquad (4.3.26)$$
$$h^{(0)}(r) = h_{\rm out} + (h_{\rm in} - h_{\rm out})\, f(r, r_0, \Delta), \qquad (4.3.27)$$
$$f(r, r_0, \Delta) = \frac{1 - \tanh[(r - r_0)/2\Delta]}{1 + \tanh(r_0/2\Delta)}, \qquad (4.3.28)$$

representing the transition function of a shell of radius $r_0$ and thickness Δ. The six constants $\Omega_{\rm in}, \Omega_{\rm out}, h_{\rm in}, h_{\rm out}, r_0, \Delta$ completely fix the model. If $h_{\rm in} > h_{\rm out}$ we can mimic the accelerated expansion.
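The profile (4.3.26)–(4.3.28) interpolates smoothly between the inner and outer values: f ≈ 1 at the center and f → 0 well outside the shell. A minimal implementation (the numerical values below are purely illustrative):

```python
import numpy as np

def f(r, r0, Delta):
    """Transition function of Eq. (4.3.28)."""
    return (1.0 - np.tanh((r - r0) / (2.0 * Delta))) / (1.0 + np.tanh(r0 / (2.0 * Delta)))

# illustrative numbers only: a ~2.3 Gpc void with a 20% relative shell thickness
r0, Delta = 2.3, 0.46
Om_in, Om_out = 0.2, 1.0
h_in, h_out = 0.7, 0.5

r = np.linspace(0.0, 10.0, 500)
Om = Om_out + (Om_in - Om_out) * f(r, r0, Delta)   # Eq. (4.3.26)
h = h_out + (h_in - h_out) * f(r, r0, Delta)       # Eq. (4.3.27)
```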

In order to compare the LTB model to observations we need to generalize two familiar concepts: redshift and luminosity distance. The redshift can be calculated through the equation [27]

$$\frac{dz}{dr} = (1+z)\,\frac{\dot R'}{\sqrt{1+\beta}}, \qquad (4.3.29)$$
where R(t,r) must be calculated on the trajectory $t_p(r)$ and we must impose z(r = 0) = 0. Every LTB function, e.g., $H_\perp(t,r)$, $R(t,r)$, etc., can be converted into a line-of-sight function of redshift by evaluating the arguments $r_p(z), t_p(z)$ along the past light cone.

The proper area of an infinitesimal surface at r, t = constant is given by $A = R^2(r,t)\sin\theta\, d\theta\, d\phi$. The angular diameter distance is the square root of $A/(\sin\theta\, d\theta\, d\phi)$, so that $d_A(z) = R(t_p(z), r_p(z))$. Since the Etherington duality relation $d_L = (1+z)^2 d_A$ remains valid in inhomogeneous models, we have [530]

$$d_L(z) = (1+z)^2\, R(t_p(z), r_p(z)). \qquad (4.3.30)$$
This clearly reduces to dL = (1 + z)r(z) in the FLRW background. Armed with these observational tools, we can compare any LTB model to the observations.

Besides matching the SN Ia Hubble diagram, we do not want to spoil the CMB acoustic peaks, and we also need to impose a local density $\Omega_{\rm in}$ near 0.1 – 0.3, a flat space outside (to fulfil inflationary predictions), i.e., $\Omega_{\rm out} = 1$, and finally the observed local Hubble value $h_{\rm in} \approx 0.7 \pm 0.1$. The CMB requirement can be satisfied by a small value of $h_{\rm out}$, since we know that to compensate for $\Omega_{\rm out} = 1$ we need a small Hubble rate (remember that the CMB essentially constrains $\Omega_m^{(0)} h^2$). This fixes $h_{\rm out} \approx 0.5$. So we are left with only $r_0$ and Δ to be constrained by SN Ia. As anticipated, we expect $r_0$ to be near z = 0.5, which in the standard ΛCDM model corresponds to a distance $r(z) \approx 2$ Gpc. An analysis using SN Ia data [382] finds $r_0 = 2.3 \pm 0.9$ Gpc and $\Delta/r_0 > 0.2$. Interestingly, a “cold spot” in the CMB sky could be attributed to a void of comparable size [269, 642].

There are many more constraints one can put on such large inhomogeneities. Matter inside the void moves with respect to CMB photons coming from outside. So the hot intracluster gas will scatter the CMB photons with a large peculiar velocity and this will induce a strong kinematic Sunyaev–Zel’dovich effect [381]. Moreover, secondary photons scattered towards us by reionized matter inside the void should also distort the black-body spectrum due to the fact that the CMB radiation seen from anywhere in the void (except from the center) is anisotropic and therefore at different temperatures [197]. These two constraints require the voids not to exceed 1 or 2 Gpc, depending on the exact modelling and are therefore already in mild conflict with the fit to supernovae.

Moreover, while in the FLRW background the function H(z) fixes the comoving distance χ(z) up to a constant curvature (and consequently also the luminosity and angular diameter distances), in the LTB model the relation between χ(z) and $H_\perp(z)$ or $H_\|(z)$ can be arbitrary. That is, one can choose the two spatial free functions to be for instance $H_\perp(r,0)$ and $R(r,0)$, from which the line-of-sight values $H_\perp(z)$ and χ(z) would also be arbitrarily fixed. This shows that the “consistency” FLRW relation between χ(z) and H(z) is violated in the LTB model, and in general in any strongly inhomogeneous universe.

Further below we discuss how this consistency test can be exploited by Euclid to test for large-scale inhomogeneities. Recently, there has been an implementation of LTB models in large-scale structure N-body simulations [30], where inhomogeneities grow in the presence of a large-scale void and are seen to follow the predictions of linear perturbation theory.

An interesting class of tests of large-scale inhomogeneity involves probes of the growth of structure. However, progress in making theoretical predictions has been hampered by the increased complexity of cosmological perturbation theory in the LTB spacetime, where scalar and tensor perturbations couple, see for example [241]. Nevertheless, a number of promising tests of large-scale inhomogeneity using the growth of structure have been proposed. [29] used N-body simulations to modify the Press–Schechter halo mass function, introducing a sensitive dependence on the background shear. The shear vanishes in spatially-homogeneous models, and so a direct measurement of this quantity would put stringent constraints on the level of background inhomogeneity, independent of cosmological model assumptions. Furthermore, recent upper limits from the ACT and SPT experiments on the linear, all-sky kinematic Sunyaev–Zel’dovich signal at ℓ = 3000, a probe of the peculiar velocity field, appear to put strong constraints on voids [986]. This result depends sensitively on theoretical uncertainties on the matter power spectrum of the model, however.

Purely geometric tests involving large-scale structure have been proposed, which neatly side-step the perturbation-theory issue. The Baryon Acoustic Oscillations (BAO) measure a preferred length scale, d(z), which is a combination of the acoustic length scale l, set at matter-radiation decoupling, and projection effects due to the geometry of the universe, characterized by the volume distance $D_V(z)$. In general, the volume distance in an LTB model will differ significantly from that in the standard model, even if the two predict the same SN Ia Hubble diagram and CMB power spectrum. Assuming that the LTB model is almost homogeneous around the decoupling epoch, l may be inferred from CMB observations, allowing the purely geometric volume distance to be reconstructed from BAO measurements. It has been shown by [990] that, based on these considerations, recent BAO measurements effectively rule out giant void models, independent of other observational constraints.

The tests discussed so far have been derived under the assumption of a homogeneous Big Bang (equivalent to making a particular choice of the bang time function). Allowing the Big Bang to be inhomogeneous considerably loosens or invalidates some of the constraints from present data. It has been shown [187] that giant void models with inhomogeneous bang times can be constructed to fit the SN Ia data, WMAP small-angle CMB power spectrum, and recent precision measurements of h simultaneously. This is contrary to claims by, e.g., [767], that void models are ruled out by this combination of observables. However, the predicted kinematic Sunyaev–Zel’dovich signal in such models was found to be severely incompatible with existing constraints. When taken in combination with other cosmological observables, this also indicates a strong tension between giant void models and the data, effectively ruling them out.

4.3.4 Inhomogeneous models: Backreaction

In general, we would like to compute directly the impact of the inhomogeneities, without requiring an exact and highly symmetric solution of Einstein’s equations like FLRW or even LTB. Unfortunately, there is no easy way to approach this problem. One ansatz tries to construct average quantities that follow equations similar to those of the traditional FLRW model, see e.g., [185, 751, 752, 186]. This approach is often called backreaction, as the presence of the inhomogeneities acts on the background evolution and changes it. In this framework, it is possible to obtain a set of equations, often called the Buchert equations, that look surprisingly similar to the Friedmann equations for the averaged scale factor $a_{\mathcal D}$, with extra contributions:

$$3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^2 - 8\pi G\,\langle\varrho\rangle_{\mathcal D} - \Lambda = -\frac{\langle\mathcal R\rangle_{\mathcal D} + \mathcal Q_{\mathcal D}}{2}, \qquad (4.3.31)$$
$$3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}} + 4\pi G\,\langle\varrho\rangle_{\mathcal D} - \Lambda = \mathcal Q_{\mathcal D}. \qquad (4.3.32)$$

Here $\mathcal R$ is the 3-Ricci scalar of the spatial hypersurfaces and $\mathcal Q_{\mathcal D}$ is given by

$$\mathcal Q_{\mathcal D} = \frac{2}{3}\left\langle \left(\theta - \langle\theta\rangle_{\mathcal D}\right)^2 \right\rangle_{\mathcal D} - 2\left\langle \sigma^2 \right\rangle_{\mathcal D}, \qquad (4.3.33)$$

i.e., it is a measure of the variance of the expansion rate $\theta$ and of the shear $\sigma_{ij}$. We see that this quantity, if positive, can induce an accelerated growth of $a_{\mathcal D}$, which suggests that observers would conclude that the universe is undergoing accelerated expansion.
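The structure of Eq. (4.3.33) can be illustrated with a toy average over a mock domain: a large variance of the local expansion rate θ pushes $\mathcal Q_{\mathcal D}$ positive, while a large average shear drives it negative. This is purely a numerical illustration, not a cosmological computation (all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy domain: local expansion rates with a large spread, plus a small uniform shear
# (all numbers are arbitrary illustrations)
theta = rng.normal(loc=1.0, scale=0.4, size=100_000)   # local expansion rate samples
sigma2 = np.full_like(theta, 0.01)                     # local shear scalar sigma^2

# kinematical backreaction, Eq. (4.3.33): variance of theta minus twice the mean shear
Q_D = (2.0 / 3.0) * np.mean((theta - theta.mean())**2) - 2.0 * np.mean(sigma2)
```

With this variance-dominated choice $\mathcal Q_{\mathcal D} > 0$, the sign that mimics acceleration in Eq. (4.3.32).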

However, it is not possible to directly link this formalism to observations. A first step can be taken by imposing by hand an effective, average geometry with the help of a template metric that only holds on average. Probably the simplest first choice is to impose on each spatial hypersurface a spatial metric of constant curvature, imagining that the inhomogeneities have been smoothed out. But in general the degrees of freedom of this metric (scale factor and spatial curvature) will not evolve as in the FLRW case, since the evolution is given by the full, inhomogeneous universe, and we would not expect the smoothing of the inhomogeneous universe to follow exactly the evolution that we would get for a smooth (homogeneous) universe. For example, the average curvature could grow over time, due to the collapse of overdense structures and the growth (in volume) of the voids. Thus, unlike in the FLRW case, the average curvature in the template metric should be allowed to evolve. This is the case that was studied in [547].

While the choice of template metric and the Buchert equations complete the set of equations, there are unfortunately further choices that need to be made. Firstly, although there is an integrability condition linking the evolution of $\langle\mathcal R\rangle_{\mathcal D}$ and $\mathcal Q_{\mathcal D}$, and in addition a consistency requirement that the effective curvature κ(t) in the metric is related to $\langle\mathcal R\rangle_{\mathcal D}$, we still need to impose an overall evolution by hand, as it has not yet been possible to compute it from first principles. Larena et al. [547] assumed a scaling solution $\langle\mathcal R\rangle_{\mathcal D} \propto a_{\mathcal D}^n$, with n a free exponent. In a dark energy context, this scaling exponent n corresponds to an effective dark energy with $w_{\mathcal D} = -(n+3)/3$, but in the backreaction case with the template metric the geometry is different from the usual dark energy case. A perturbative analysis [569] found n = −1, but of course this is only an indication of the possible behavior, as the situation is essentially non-perturbative.

The second choice concerns the computation of observables. [547] studied distances to supernovae and the CMB peak position, effectively another distance. The assumption taken was that distances could be computed within the averaged geometry as if this were the true geometry, by integrating the equation of radial null geodesics. In other words, the effective metric was taken to be the one that describes distances correctly. The resulting constraints are shown in Figure 51. We see that the leading perturbative mode (n = −1) is marginally consistent with the constraints. These contours should be regarded as an indication of what kind of backreaction is needed if it is to explain the observed distance data.

Figure 51: Supernovae and CMB constraints in the $(\Omega_m^{\mathcal D_0}, n)$ plane for the averaged effective model with zero Friedmannian curvature (filled ellipses) and for a standard flat FLRW model with a quintessence field with constant equation of state $w = -(n+3)/3$ (black ellipses). The disk and diamond represent the absolute best-fit models, respectively, for the standard FLRW model and the averaged effective model.

One interesting point, and maybe the main point in light of the discussion in the following section, is that the averaged curvature necessarily becomes large at late times, due to the link between it and the backreaction term $\mathcal Q_{\mathcal D}$, if it is to explain the data. Just as in the case of a huge void, this effective curvature makes the backreaction scenario testable to some degree with future large surveys like Euclid.

Figure 52: Upper panel: Evolution of $\Omega_k(z_{\mathcal D})$ as a function of redshift for the absolute best-fit averaged model represented by the diamond in Figure 51. One can see that all positively curved FLRW models ($\Omega_{k,0} < 0$) and only highly negatively curved FLRW models ($\Omega_{k,0} > 0.5$) can be excluded by the estimation of $\Omega_k(z_{\mathcal D})$. Central panel: Evolution of the coordinate distance for the best-fit averaged model (solid line), for a ΛCDM model with $\Omega_{m,0} = 0.277$, $\Omega_\Lambda = 0.735$ and $H_0 = 73\ \mathrm{km/s/Mpc}$ (dashed line), and for the FLRW model with the same parameters as the best-fit averaged model (dashed-dotted line). Lower panel: Evolution of the Hubble parameter $H/H_0$ for the best-fit averaged model (solid line), the FLRW model with the same parameters as the averaged best-fit model (dashed-dotted line), and for the same ΛCDM model as in the central panel (dashed line). The error bars in all panels correspond to the expectations for future large surveys like Euclid.
