In addition, cosmic acceleration seems to be a recent phenomenon at least for standard dark-energy models, which gives rise to the coincidence problem. The epoch in which dark energy begins to play a role is close to the epoch in which most of the cosmic structures formed out of the slow linear gravitational growth. We are led to ask again: can the acceleration be caused by strong inhomogeneities rather than by a dark energy component?

Finally, one must note that all standard treatments of dark energy assume a perfectly isotropic expansion. Could it be that some of the properties of the acceleration depend critically on this assumption?

In order to investigate these issues, in this section we explore radical deviations from homogeneity and isotropy and see how Euclid can test them.

In recent times, there has been renewed interest in anisotropic cosmologies, classified in terms of the Bianchi solutions to general relativity. This has been mainly motivated by hints of anomalies in the cosmic microwave background (CMB) distribution observed on the full sky by the WMAP satellite [288, 930, 268, 349]. The CMB is very well described as a highly isotropic (in a statistical sense) Gaussian random field, and the anomalies are a posteriori statistics, so their statistical significance should be corrected at least for the so-called look-elsewhere effect (see, e.g., [740, 122] and references therein); nevertheless, recent analyses have shown that local deviations from Gaussianity in some directions (the so-called cold spots, see [268]) cannot be excluded at high confidence levels. Furthermore, the CMB angular power spectrum extracted from the WMAP maps has shown in the past a quadrupole power lower than expected from the best-fit cosmological model [335]. Several explanations for this anomaly have been proposed (see, e.g., [905, 243, 297, 203, 408]), including the possibility that the universe is expanding with different velocities along different directions. While deviations from homogeneity and isotropy are constrained to be very small by cosmological observations, these constraints usually assume the non-existence of anisotropic sources in the late universe. Conversely, as suggested in [520, 519, 106, 233, 251], dark energy with anisotropic pressure acts as a late-time source of anisotropy. Even if one considers no anisotropic pressure fields, small departures from isotropy cannot be excluded, and it is interesting to devise possible strategies to detect them.

The effect of assuming an anisotropic cosmological model on the CMB pattern has been studied by [249, 89, 638, 601, 189, 516]. In these works the Bianchi solutions describing the anisotropic line element were treated as small perturbations to a Friedmann–Robertson–Walker (FRW) background. Such early studies did not consider a possible non-zero cosmological constant or dark energy and were recently extended by [652, 477].

One difficulty with the anisotropic models that have been shown to fit the large-scale CMB pattern is that they require very unrealistic choices of the cosmological parameters. For example, the Bianchi VIIh template used in [477] requires an open universe, a hypothesis which is excluded by most cosmological observations. An additional problem is that an inflationary phase – required to explain a number of features of the cosmological model – isotropizes the universe very efficiently, leaving a residual anisotropy that is negligible for any practical application. These difficulties vanish if an anisotropic expansion takes place only well after the decoupling between matter and radiation, for example at the time of dark-energy domination [520, 519, 106, 233, 251].

Bianchi models are described by homogeneous and anisotropic metrics. If the anisotropy is slight, the dynamics of any Bianchi model can be decomposed into an isotropic FRW background plus a linear perturbation that breaks isotropy; homogeneity, on the other hand, is maintained with respect to three Killing vector fields.

The geometry of Bianchi models is determined by the structure constants $C^a_{\ bc}$, defined by the commutators of these three Killing fields $\xi_a$:

$$[\xi_b, \xi_c] = C^a_{\ bc}\, \xi_a \, .$$

The structure constants are subject to the antisymmetry relation $C^a_{\ bc} = -C^a_{\ cb}$ and the Jacobi identities $C^a_{\ e[b} C^e_{\ cd]} = 0$. As a consequence, their attainable values are restricted to only four of the initial 27 needed to describe a given space. In [340] these four values are dubbed $n_1$, $n_2$, $n_3$ and $a$. The categorization of Bianchi models into different types relies on classifying the inequivalent sets of these four constants. Table 24 shows the subclass of interest, the one containing the FRW limit. Bianchi types VIIh and IX contain the open and closed FRW models, respectively. Type VII0 contains the flat FRW; types I and V are just particular subcases of VII0 and VIIh. In type I no vortical motions are allowed and the only extension with respect to the FRW case is that there are three different scale factors. The metric can in general be written as

$$ds^2 = -dt^2 + h_{ab}(t)\, \omega^a \omega^b \, ,$$

where $\omega^a$ are the invariant 1-forms dual to the Killing fields and $h_{ab}$ is a metric depending only on $t$. It can be decomposed as $h_{ab} = e^{2\alpha(t)} \left(e^{2\beta(t)}\right)_{ab}$, where the first factor represents the volumetric expansion and the second includes the anisotropy. The CMB provides very tight constraints on Bianchi models at the time of recombination [189, 516, 638], of the order of the quadrupole value. Usually, in standard cosmologies with a cosmological constant, the anisotropy parameters scale as the inverse of the comoving volume. This implies an isotropization of the expansion from recombination up to the present, leading to very tight derived constraints on the shear today. However, this is only true if the anisotropic expansion is not generated by an anisotropic source arising after decoupling, e.g., vector fields representing anisotropic dark energy [519].
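The algebraic constraints on the structure constants can be checked mechanically. The following sketch (an illustration, not from the text) encodes a candidate set of structure constants – here Bianchi type V in the Ellis–MacCallum parametrization, $n_1 = n_2 = n_3 = 0$, $a = 1$, i.e., $[\xi_1, \xi_3] = \xi_1$, $[\xi_2, \xi_3] = \xi_2$ – and verifies the antisymmetry relation and the Jacobi identities numerically:

```python
import numpy as np

# Hypothetical illustration: structure constants C[a, b, c] = C^a_{bc} for
# Bianchi type V: [xi_1, xi_3] = xi_1, [xi_2, xi_3] = xi_2 (indices 0-based).
C = np.zeros((3, 3, 3))
C[0, 0, 2] = 1.0   # C^1_{13} = 1
C[0, 2, 0] = -1.0  # antisymmetry in the lower index pair
C[1, 1, 2] = 1.0   # C^2_{23} = 1
C[1, 2, 1] = -1.0

# Antisymmetry relation: C^a_{bc} = -C^a_{cb}
assert np.allclose(C, -C.transpose(0, 2, 1))

# Jacobi identities: C^a_{eb} C^e_{cd} + C^a_{ec} C^e_{db} + C^a_{ed} C^e_{bc} = 0
jac = (np.einsum('aeb,ecd->abcd', C, C)
       + np.einsum('aec,edb->abcd', C, C)
       + np.einsum('aed,ebc->abcd', C, C))
assert np.allclose(jac, 0.0)
print("antisymmetry and Jacobi identities satisfied")
```

The same two checks applied to all inequivalent sets of constants reproduce the restriction to the four parameters $n_1$, $n_2$, $n_3$, $a$ of the Bianchi classification.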


For example, the cosmic parallax effect [749] has recently been proposed as a tool to assess the presence of an anisotropic expansion of the universe. It is essentially the change in time of the angular separation in the sky between far-off sources, induced by an anisotropic expansion.
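The essence of the effect can be illustrated with a toy sketch (the setup and numbers below are purely hypothetical): a purely isotropic expansion rescales all comoving positions equally and leaves angular separations unchanged, whereas an anisotropic expansion step produces a non-zero angular drift:

```python
import numpy as np

# Toy cosmic-parallax sketch (hypothetical setup): two comoving sources seen
# from the origin; isotropic expansion preserves their angular separation,
# anisotropic expansion does not.
def angle(p, q):
    """Angle between the directions to two sources."""
    return np.arccos(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

p = np.array([1.0, 0.0, 1.0])
q = np.array([0.0, 1.0, 1.0])

iso = np.diag([1.01, 1.01, 1.01])     # isotropic expansion step
aniso = np.diag([1.02, 1.01, 1.00])   # anisotropic expansion step

# isotropic step: angular separation unchanged (no parallax signal)
assert abs(angle(iso @ p, iso @ q) - angle(p, q)) < 1e-12

# anisotropic step: non-zero angular drift, i.e., a cosmic-parallax signal
dgamma = angle(aniso @ p, aniso @ q) - angle(p, q)
print("angular drift [rad]:", dgamma)
assert abs(dgamma) > 1e-4
```

In a realistic Bianchi I forecast the drift per unit time is set by the shear, which is many orders of magnitude smaller than in this toy step; the point here is only the geometric mechanism.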

A common parameterization of an anisotropically distributed dark energy component is studied in a class of Bianchi I models, where the line element is

$$ds^2 = -dt^2 + a^2(t)\, dx^2 + b^2(t)\, dy^2 + c^2(t)\, dz^2 \, .$$

The expansion rates in the three Cartesian directions $x$, $y$ and $z$ are defined as $H_x = \dot a/a$, $H_y = \dot b/b$ and $H_z = \dot c/c$, where the dot denotes the derivative with respect to coordinate time. In these models the three rates differ from each other, but in the limit $a = b = c$ the flat FRW isotropic expansion is recovered. Among the models of the Bianchi classification, type I exhibits flat geometry and no overall vorticity; conversely, shear components $\Sigma_i = H_i/H - 1$ (with $i = x, y, z$) are naturally generated, where $H = \dot A/A$ is the expansion rate of the average scale factor $A$, related to the volume expansion as $A = (abc)^{1/3}$. The anisotropic expansion is caused by the anisotropically stressed dark-energy fluid whenever its energy density contributes to the global energy budget. If the major contributions to the overall budget come from matter and dark energy, as after recombination, their energy-momentum tensors can be parametrized as

$$T^{\mu\,(m)}_{\ \nu} = \mathrm{diag}\left(-1, w_m, w_m, w_m\right)\rho_m \, , \qquad T^{\mu\,(\mathrm{DE})}_{\ \nu} = \mathrm{diag}\left(-1, w, w + 3\delta, w + 3\gamma\right)\rho_{\mathrm{DE}} \, ,$$

respectively, where $w_m$ and $w$ are the equation-of-state parameters of matter and dark energy, and the skewness parameters $\delta$ and $\gamma$ can be interpreted as the difference of pressure along the $y$ and $z$ axes relative to the $x$ axis. Note that the energy-momentum tensor (4.3.5) is the most general one compatible with the metric (4.3.3) [519]. Two quantities, built from the shear components, are introduced to quantify the degree of anisotropic expansion. Considering the generalized Friedmann equation, the continuity equations for matter and dark energy, and assuming no coupling between the two fluids, the derived autonomous system reads [520, 519]:

where the derivatives are taken with respect to the logarithm of the average scale factor. System (4.3.7) exhibits many different fixed points, defined as the solutions obtained by setting all the derivatives to zero. Besides the Einstein–de Sitter case, the most physically interesting for our purposes are the dark-energy dominated solution and the scaling solution, in which the fractional dark-energy contribution to the total energy density is constant. The anisotropic distribution of sources in the Euclid survey might constrain the anisotropy at present, when the dark-energy density is of order 74%, hence not yet in the final dark-energy dominated attractor phase (4.3.8).
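The shear variables used in this subsection can be illustrated with a short numerical sketch (the rates are hypothetical; the convention $\Sigma_i = H_i/H - 1$ with $H$ the average expansion rate is assumed):

```python
# Hypothetical numerical illustration of the Bianchi I shear variables,
# assuming Sigma_i = H_i/H - 1 with H the expansion rate of the average
# scale factor A = (a b c)^(1/3).
H_x, H_y, H_z = 70.5, 70.0, 69.5          # hypothetical rates (km/s/Mpc)

# H = (d/dt) ln A = (H_x + H_y + H_z)/3 follows directly from A = (abc)^(1/3)
H = (H_x + H_y + H_z) / 3.0

Sigma = [Hi / H - 1.0 for Hi in (H_x, H_y, H_z)]
print("shear components:", Sigma)

# the three components sum to zero by construction, so only two are independent
assert abs(sum(Sigma)) < 1e-12
```

The zero-sum constraint is why only two independent quantities are needed to parametrize the degree of anisotropic expansion.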

Inhomogeneity is relatively difficult to determine, as observations are typically made on our past light cone, but some methods exist (e.g., [242, 339, 600]). Homogeneity may, however, be tested by exploring the interior of the past light cone, using the fossil record of galaxies to probe along the past world lines of a large number of galaxies [428]. One can use the average star-formation rate at a fixed lookback time as a diagnostic test for homogeneity. The lookback time has two elements to it: the lookback time of the emission of the light, plus the time along the past world line. The latter can be probed using the integrated stellar spectra of the galaxies, using a code such as VESPA [890], and this is evidently dependent only on atomic and nuclear physics, independent of homogeneity. The lookback time of emission can also be computed, surprisingly simply, without assuming homogeneity, from [428]

$$t(z) = \int_0^z \frac{dz'}{(1+z')\, H_r(z')} \, ,$$

where $H_r(z)$ is the radial Hubble rate. In principle, this can be obtained from radial BAOs, assuming early-time homogeneity so that the physical BAO scale is fixed. The spectroscopic part of Euclid could estimate both the star-formation histories from stacked spectra and the radial expansion rate.
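A numerical sketch of this lookback-time diagnostic follows. The specific form of $H_r(z)$ below is a hypothetical flat $\Lambda$CDM-like stand-in chosen only for illustration; in practice Euclid's radial BAO measurements would supply this function observationally:

```python
import numpy as np

km_s_Mpc = 1.0 / 977.8   # 1 km/s/Mpc expressed in Gyr^-1 (approximate)

def H_r(z, H0=70.0, Om=0.3):
    """Hypothetical radial expansion rate in Gyr^-1 (LCDM-like stand-in)."""
    return H0 * km_s_Mpc * np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def lookback_time(z, n=2000):
    """t(z) = integral of dz' / [(1+z') H_r(z')], trapezoid rule, in Gyr."""
    zp = np.linspace(0.0, z, n)
    integrand = 1.0 / ((1.0 + zp) * H_r(zp))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zp)))

t1 = lookback_time(1.0)
print(f"lookback time to z = 1: {t1:.2f} Gyr")  # roughly 7.7 Gyr for these parameters
```

Comparing this purely radial determination of $t(z)$ with the lookback time recovered from the fossil record of galaxy spectra at the same redshift is the homogeneity test.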

Nonlinear inhomogeneous models are traditionally studied either with higher-order perturbation theory or with N-body codes. Both approaches have their limits. A perturbation expansion obviously breaks down when the perturbations are deep in the nonlinear regime. N-body codes, on the other hand, are intrinsically Newtonian and, at the moment, unable to take full relativistic effects into account. Nevertheless, these codes can still account for the general-relativistic behavior of gravitational collapse in the case of inhomogeneous large void models, as shown recently in [30], where the growth of the void follows the full nonlinear GR solution down to large density contrasts (of order one).

One way to make progress is to adopt the most extreme simplification: radial symmetry. By assuming that the inhomogeneity is radial (i.e., that we are at the center of a large void or halo), the dynamical equations can be solved exactly and one can make definite observable predictions.

It is however clear from the start that these models are highly controversial, since the observer needs to be located at the center of the void to within about a few percent of the void scale radius, see [141, 242], disfavoring the long-held Copernican principle (CP). Notwithstanding this, the idea that we live near the center of a huge void is attractive for another important reason: a void creates an apparent acceleration field that could in principle match the supernova observations [891, 892, 220, 474]. Since we observe that nearby SNe Ia recede faster than predicted by the Einstein–de Sitter universe, we could assume that we live in the middle of a huge spherical region which is expanding faster because it is emptier than the outside. The transition redshift, i.e., the void edge, should be located around 0.3 – 0.5, the value at which, in the standard interpretation, we observe the beginning of acceleration.

The consistent way to realize such a spherical inhomogeneity has been studied since the 1930s in the relativistic literature: the Lemaître–Tolman–Bondi (LTB) metric. This is the generalization of a FLRW metric in which the expansion along the radial coordinate $r$ is allowed to differ from that along the surface line element $d\Omega^2 = d\theta^2 + \sin^2\theta\, d\phi^2$. If we assume the inhomogeneous metric (this subsection follows closely the treatment in [49])

$$ds^2 = -dt^2 + X^2(t,r)\, dr^2 + R^2(t,r)\, d\Omega^2$$

and solve the Einstein equations for a fluid at rest, we find that the LTB metric is given by

$$ds^2 = -dt^2 + \frac{R'^2(t,r)}{1+\beta(r)}\, dr^2 + R^2(t,r)\, d\Omega^2 \, ,$$

where $\beta(r)$ is an arbitrary function. Here primes and dots refer to partial derivatives with respect to $r$ and $t$, respectively. The function $\beta(r)$ can be thought of as a position-dependent spatial curvature. If $R$ is factorized so that $R(t,r) = a(t) f(r)$ and $\beta(r) = -K f^2(r)$, then we recover the FLRW metric (up to a redefinition of $r$: from now on, when we seek the FLRW limit we put $f(r) = r$ and $\beta = -K r^2$). Otherwise, we have a metric representing a spherical inhomogeneity centered on the origin. An observer located at the origin observes an isotropic universe. We can always redefine $r$ so that at the present time $R(t_0, r) = r$, making the metric very similar to FLRW today. Considering the infinitesimal radial proper length $d\ell_\parallel = R'\, dr/\sqrt{1+\beta}$, we can define the radial Hubble function as

$$H_r \equiv \frac{\dot R'}{R'} \, ,$$

and similarly the transverse Hubble function

$$H_\perp \equiv \frac{\dot R}{R} \, .$$

Of course the two definitions coincide for the FLRW metric. The non-vanishing components of the Ricci tensor for the LTB metric can be computed directly from the metric above. In terms of the two Hubble functions, the Friedmann equations for the pressureless matter density $\rho_m$ are given by [28]

Adding Eqs. (4.3.18) and (4.3.19) and integrating, we obtain a Friedmann-like equation

$$H_\perp^2 = \frac{\alpha(r)}{R^3} + \frac{\beta(r)}{R^2} \, ,$$

where $\alpha(r)$ is a free function that we can use along with $\beta(r)$ to describe the inhomogeneity. From this we can define an effective matter density parameter today,

$$\Omega_m^{(0)}(r) \equiv \frac{\alpha(r)}{r^3\, H_{\perp 0}^2(r)} \, ,$$

where $H_{\perp 0}(r) = H_\perp(t_0, r)$ (the superscript $(0)$ denotes the present value, and we used the gauge $R(t_0, r) = r$), and an effective spatial curvature

$$\Omega_K^{(0)}(r) \equiv \frac{\beta(r)}{r^2\, H_{\perp 0}^2(r)} = 1 - \Omega_m^{(0)}(r) \, .$$

Hence, we see that the initial conditions at some time (which here we take as the present) must specify two free functions of $r$, for instance $\alpha(r)$ and $\beta(r)$, or $\Omega_m^{(0)}(r)$ and $H_{\perp 0}(r)$. The latter choice shows that the inhomogeneity can reside in the matter distribution, in the expansion rate, or in both. This freedom can be used to fit simultaneously any expansion rate (and therefore luminosity and angular diameter distances [771]) and any source number density [681]. If one imposes the additional constraint that the age of the universe is the same for every observer, then only one free function is left [380]. The same occurs if one chooses a spatially constant $\Omega_m^{(0)}$ (notice that this is different from assuming a homogeneous matter density today, which is another possible choice), i.e., if the matter density fraction is assumed homogeneous today (and only today) [341]. The choice of a homogeneous universe age guards against the existence of diverging inhomogeneities in the past. However, there is no compelling reason to impose such restrictions.
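At fixed $r$, the Friedmann-like evolution can be integrated directly. The sketch below (parameter values hypothetical) assumes the expanding branch $\dot R = \sqrt{\alpha/R + \beta}$ with $\beta \geq 0$, and checks the $\beta = 0$ case against the exact Einstein–de Sitter-like solution $R = (1 + \tfrac{3}{2}\sqrt{\alpha}\, t)^{2/3}$ for $R(0) = 1$:

```python
import math

def evolve_R(alpha, beta, R0=1.0, t_max=14.0, n=10000):
    """Integrate dR/dt = sqrt(alpha/R + beta) (expanding branch, beta >= 0)
    at fixed comoving radius r with a fixed-step RK4."""
    dt = t_max / n
    f = lambda R: math.sqrt(alpha / R + beta)
    R = R0
    for _ in range(n):
        k1 = f(R)
        k2 = f(R + 0.5 * dt * k1)
        k3 = f(R + 0.5 * dt * k2)
        k4 = f(R + dt * k3)
        R += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return R

# beta = 0 check: with R(0) = 1 the exact solution is R = (1 + 1.5 sqrt(alpha) t)^(2/3)
R_num = evolve_R(1.0, 0.0)
R_exact = (1.0 + 1.5 * 14.0) ** (2.0 / 3.0)
print(R_num, R_exact)
assert abs(R_num / R_exact - 1.0) < 1e-8
```

Repeating the integration on a grid of $r$ values, with $\alpha(r)$ and $\beta(r)$ set by the chosen profiles of $\Omega_m^{(0)}(r)$ and $H_{\perp 0}(r)$, builds up the full solution $R(t,r)$.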

Eq. (4.3.20) is the classical cycloid equation, whose solution for $\beta > 0$ is given parametrically by

$$R(t,r) = \frac{\alpha}{2\beta}\left(\cosh\eta - 1\right) \, , \qquad t - t_b(r) = \frac{\alpha}{2\beta^{3/2}}\left(\sinh\eta - \eta\right) \, ,$$

where $t_b(r)$ is the inhomogeneous “big-bang” time, i.e., the time at which $R(t_b, r) = 0$ for a point at comoving distance $r$. This can be put to zero in all generality by a redefinition of time. The “time” variable $\eta$ is defined by the relation

$$\eta \equiv \int^t \frac{\sqrt{\beta(r)}}{R(t',r)}\, dt' \, .$$

Notice that the “time” $\eta$ that corresponds to a given $t$ depends on $r$; so $R(t,r)$ is found by solving numerically for $\eta$ from Eq. (4.3.24) and then substituting into Eq. (4.3.23). The present epoch $\eta_0(r)$ is defined by the condition $R(\eta_0, r) = r$, and the age of the universe can be derived in terms of $\eta_0$ [49]. For $\beta < 0$ the hyperbolic functions in Eqs. (4.3.23 – 4.3.24) are replaced by the corresponding trigonometric ones, while for $\beta = 0$ the solution reduces to the Einstein–de Sitter behavior $R = (9\alpha/4)^{1/3}(t - t_b)^{2/3}$: we will not consider these cases further. As anticipated, since we need a faster expansion inside some distance to mimic cosmic acceleration, we need to impose on our solution the structure of a void. An example of the choice of $\Omega_m^{(0)}(r)$ and $H_{\perp 0}(r)$ is [380]

$$\Omega_m^{(0)}(r) = \Omega_{\rm out} + (\Omega_{\rm in} - \Omega_{\rm out})\, f(r, r_0, \Delta r) \, , \qquad H_{\perp 0}(r) = H_{\rm out} + (H_{\rm in} - H_{\rm out})\, f(r, r_0, \Delta r) \, ,$$

with

$$f(r, r_0, \Delta r) = \frac{1 - \tanh[(r - r_0)/2\Delta r]}{1 + \tanh(r_0/2\Delta r)}$$

representing the transition function of a shell of radius $r_0$ and thickness $\Delta r$. The six constants $\Omega_{\rm in}$, $\Omega_{\rm out}$, $H_{\rm in}$, $H_{\rm out}$, $r_0$ and $\Delta r$ completely fix the model. If $\Omega_{\rm in} < \Omega_{\rm out}$ we can mimic the accelerated expansion. In order to compare the LTB model to observations we need to generalize two familiar concepts: redshift and luminosity distance. The redshift can be calculated through the equation [27]

$$\frac{d\log(1+z)}{dr} = \frac{\dot R'\big(t(r), r\big)}{\sqrt{1+\beta(r)}} \, ,$$

where $t(r)$ must be calculated along the radial null geodesic

$$\frac{dt}{dr} = -\frac{R'(t,r)}{\sqrt{1+\beta(r)}} \, ,$$

and we must impose $z(r=0) = 0$. Every LTB function, e.g., $R$, $H_\perp$, etc., can be converted into a line-of-sight function of redshift by evaluating its arguments along the past light cone. The proper area of an infinitesimal surface at fixed $t$ and $r$ is given by $dA = R^2(t,r)\sin\theta\, d\theta\, d\phi$. The angular diameter distance is the square root of $dA/d\Omega$, so that $D_A(z) = R\big(t(z), r(z)\big)$. Since the Etherington duality relation remains valid in inhomogeneous models, we have [530]

$$D_L(z) = (1+z)^2\, R\big(t(z), r(z)\big) \, .$$

This clearly reduces to $D_L = (1+z)\, r$ in the flat FLRW background (with $a_0 = 1$). Armed with these observational tools, we can compare any LTB model to the observations. Besides matching the SN Ia Hubble diagram, we do not want to spoil the CMB acoustic peaks, and we also need to impose a local density parameter $\Omega_{\rm in}$ near 0.1 – 0.3, a flat space outside (to fulfil inflationary predictions), i.e., $\Omega_{\rm out} = 1$, and finally the observed local Hubble value $H_{\rm in}$. The CMB requirement can be satisfied by a small value of $H_{\rm out}$, since to compensate for $\Omega_{\rm out} = 1$ we need a small Hubble rate (remember that the CMB essentially constrains $\Omega_m h^2$). This fixes $H_{\rm out}$. So we are left with only $r_0$ and $\Delta r$ to be constrained by SN Ia. As anticipated, we expect the void edge to lie near $z \approx 0.3 - 0.5$, which in the standard $\Lambda$CDM model corresponds to a comoving distance of order a Gpc. An analysis using SN Ia data [382] indeed finds a best-fit void of this order of size. Interestingly, a “cold spot” in the CMB sky could be attributed to a void of comparable size [269, 642].
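The light-cone construction above can be sketched numerically. Everything below is an illustrative assumption: we integrate the radial null geodesic $dt/dr = -R'/\sqrt{1+\beta}$ together with $d\ln(1+z)/dr = \dot R'/\sqrt{1+\beta}$ in the homogeneous Einstein–de Sitter limit $R = a(t)\,r$, $\beta = 0$, $a = (t/t_0)^{2/3}$, where the known result $1+z = (t_0/t)^{2/3}$ must be recovered; a genuine void model would instead supply $R'$ and $\dot R'$ from the numerical LTB solution:

```python
import math

# LTB light-cone equations in the homogeneous EdS limit (sanity check):
# R = a(t) r, beta = 0, so R' = a(t) and dR'/dt = a_dot(t).
t0 = 1.0
a     = lambda t: (t / t0) ** (2.0 / 3.0)
a_dot = lambda t: (2.0 / 3.0) * t ** (-1.0 / 3.0) * t0 ** (-2.0 / 3.0)

def rhs(state):
    t, lnz1 = state
    return (-a(t), a_dot(t))          # (dt/dr, d ln(1+z)/dr)

t, lnz1, dr = t0, 0.0, 1e-4
for _ in range(5000):                 # RK4 outward in r, to r = 0.5 (c = t0 = 1 units)
    s = (t, lnz1)
    k1 = rhs(s)
    k2 = rhs((s[0] + 0.5 * dr * k1[0], s[1] + 0.5 * dr * k1[1]))
    k3 = rhs((s[0] + 0.5 * dr * k2[0], s[1] + 0.5 * dr * k2[1]))
    k4 = rhs((s[0] + dr * k3[0], s[1] + dr * k3[1]))
    t    += dr * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    lnz1 += dr * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0

z = math.exp(lnz1) - 1.0
# EdS gives t^(1/3) = 1 - r/3 along the light cone, so z = (6/5)^2 - 1 = 0.44 here
assert abs((1.0 + z) - (t0 / t) ** (2.0 / 3.0)) < 1e-8
assert abs(z - 0.44) < 1e-6
```

Once $z(r)$ and $t(r)$ are tabulated this way for a void profile, $D_A(z) = R(t(z), r(z))$ and $D_L = (1+z)^2 D_A$ follow immediately.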

There are many more constraints one can put on such large inhomogeneities. Matter inside the void moves with respect to the CMB photons coming from outside. The hot intracluster gas will therefore scatter the CMB photons with a large peculiar velocity, inducing a strong kinematic Sunyaev–Zel’dovich effect [381]. Moreover, secondary photons scattered towards us by reionized matter inside the void should also distort the black-body spectrum, due to the fact that the CMB radiation seen from anywhere in the void (except its center) is anisotropic and therefore at different temperatures [197]. These two constraints require the voids not to exceed 1 or 2 Gpc, depending on the exact modelling, and are therefore already in mild conflict with the fit to supernovae.

Moreover, while in the FLRW background the function $H(z)$ fixes the comoving distance up to a constant curvature (and consequently also the luminosity and angular diameter distances), in the LTB model the relation between $H_r(z)$ or $H_\perp(z)$ and the distance can be arbitrary. That is, one can choose the two spatial free functions to be, for instance, $\Omega_m^{(0)}(r)$ and $H_{\perp 0}(r)$, from which the line-of-sight functions $H_\perp(z)$ and $D_A(z)$ would also be arbitrarily fixed. This shows that the FLRW “consistency” relation between $H(z)$ and $D_A(z)$ is violated in the LTB model, and in general in any strongly inhomogeneous universe.
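A sketch of this consistency check follows (all inputs are hypothetical stand-ins): in flat FLRW the comoving distance is fixed by the expansion rate, $D(z) = \int_0^z c\, dz'/H(z')$, so jointly measured $H(z)$ and $D(z)$ that violate this relation signal inhomogeneity (or curvature, which can be bounded separately):

```python
import numpy as np

c = 299792.458  # km/s

def H_model(z, H0=70.0, Om=0.3):
    """Hypothetical expansion-rate data (flat LCDM-like stand-in)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def D_from_H(z_grid, H_vals):
    """Trapezoidal cumulative integral of c/H over the redshift grid (Mpc)."""
    integrand = c / H_vals
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z_grid)
    return np.concatenate(([0.0], np.cumsum(steps)))

z = np.linspace(0.0, 1.5, 400)
D_pred = D_from_H(z, H_model(z))

# A "measured" distance from the same model passes the consistency test ...
assert np.allclose(D_pred, D_from_H(z, H_model(z)))

# ... while a measured H(z) 5% higher than the one implied by D(z) does not:
D_mismatch = D_from_H(z, 1.05 * H_model(z))
violation = np.max(np.abs(D_mismatch - D_pred) / D_pred[-1])
print(f"fractional violation: {violation:.3f}")
assert violation > 0.01
```

In an LTB universe the residual is generically non-zero even with perfect data, which is what makes this a model-independent probe of large-scale inhomogeneity.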

Further below we discuss how this consistency test can be exploited by Euclid to test for large-scale inhomogeneities. Recently, LTB models have been implemented in large-scale-structure N-body simulations [30], where inhomogeneities grow in the presence of a large-scale void and are seen to follow the predictions of linear perturbation theory.

An interesting class of tests of large-scale inhomogeneity involves probes of the growth of structure. Here, progress in making theoretical predictions has been hampered by the increased complexity of cosmological perturbation theory in the LTB spacetime, where scalar and tensor perturbations couple, see for example [241]. Nevertheless, a number of promising tests of large-scale inhomogeneity using the growth of structure have been proposed. [29] used N-body simulations to modify the Press–Schechter halo mass function, introducing a sensitive dependence on the background shear. The shear vanishes in spatially-homogeneous models, so a direct measurement of this quantity would put stringent constraints on the level of background inhomogeneity, independently of cosmological-model assumptions. Furthermore, recent upper limits from the ACT and SPT experiments on the linear, all-sky kinematic Sunyaev–Zel’dovich signal, a probe of the peculiar velocity field, appear to put strong constraints on voids [986]. This result depends sensitively on theoretical uncertainties in the matter power spectrum of the model, however.

Purely geometric tests involving large-scale structure have been proposed, which neatly side-step the perturbation-theory issue. Baryon acoustic oscillations (BAO) measure a preferred length scale, which is a combination of the acoustic scale $r_s$, set at matter–radiation decoupling, and projection effects due to the geometry of the universe, characterized by the volume distance $D_V(z)$. In general, the volume distance in an LTB model will differ significantly from that in the standard model, even if the two predict the same SN Ia Hubble diagram and CMB power spectrum. Assuming that the LTB model is almost homogeneous around the decoupling epoch, $r_s$ may be inferred from CMB observations, allowing the purely geometric volume distance to be reconstructed from BAO measurements. It has been shown by [990] that, based on these considerations, recent BAO measurements effectively rule out giant void models, independently of other observational constraints.
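The geometric quantity involved can be sketched directly, assuming the standard definition of the volume distance, $D_V(z) = \left[(1+z)^2 D_A^2(z)\, c z / H(z)\right]^{1/3}$ (the numerical inputs below are hypothetical values of the right order of magnitude):

```python
# Volume distance used in BAO analyses (standard definition assumed here).
c = 299792.458  # km/s

def D_V(z, D_A, H):
    """Volume distance in Mpc from D_A (Mpc) and H (km/s/Mpc)."""
    return ((1.0 + z) ** 2 * D_A ** 2 * c * z / H) ** (1.0 / 3.0)

# Hypothetical inputs at z = 0.35, of the right order of magnitude:
dv = D_V(0.35, D_A=1030.0, H=83.0)
print(f"D_V(0.35) ≈ {dv:.0f} Mpc")   # about 1.3 Gpc for these inputs
```

In an LTB model one replaces $D_A$ and the radial rate by their line-of-sight LTB counterparts, and it is the resulting mismatch with the measured $r_s/D_V$ that drives the exclusion of giant voids.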

The tests discussed so far have been derived under the assumption of a homogeneous Big Bang (equivalent to making a particular choice of the bang-time function). Allowing the Big Bang to be inhomogeneous considerably loosens or invalidates some of the constraints from present data. It has been shown [187] that giant void models with inhomogeneous bang times can be constructed to fit the SN Ia data, the WMAP small-angle CMB power spectrum, and recent precision measurements of $H_0$ simultaneously. This is contrary to claims by, e.g., [767], that void models are ruled out by this combination of observables. However, the predicted kinematic Sunyaev–Zel’dovich signal in such models was found to be severely incompatible with existing constraints. Taken in combination with other cosmological observables, this indicates a strong tension between giant void models and the data, effectively ruling them out.

In general, we would like to compute directly the impact of the inhomogeneities, without requiring an exact and highly symmetric solution of Einstein’s equations like FLRW or even LTB. Unfortunately, there is no easy way to approach this problem. One ansatz tries to construct average quantities that follow equations similar to those of the traditional FLRW model, see e.g., [185, 751, 752, 186]. This approach is often called backreaction, as the presence of the inhomogeneities acts on the background evolution and changes it. In this framework it is possible to obtain a set of equations, often called the Buchert equations, that look surprisingly similar to the Friedmann equations for the averaged scale factor $a_{\cal D}$, with extra contributions:

$$3\frac{\ddot a_{\cal D}}{a_{\cal D}} + 4\pi G \langle \rho \rangle_{\cal D} = {\cal Q}_{\cal D} \, , \qquad 3 H_{\cal D}^2 - 8\pi G \langle \rho \rangle_{\cal D} = -\frac{1}{2}\left( \langle {\cal R} \rangle_{\cal D} + {\cal Q}_{\cal D} \right) \, .$$

Here $\langle {\cal R} \rangle_{\cal D}$ is the averaged 3-Ricci scalar of the spatial hypersurfaces and the kinematical backreaction ${\cal Q}_{\cal D}$ is given by

$${\cal Q}_{\cal D} = \frac{2}{3}\left( \langle \theta^2 \rangle_{\cal D} - \langle \theta \rangle_{\cal D}^2 \right) - 2 \langle \sigma^2 \rangle_{\cal D} \, ,$$

i.e., it is a measure of the variance of the expansion rate $\theta$ and of the shear $\sigma$. We see that this quantity, if positive, can induce an accelerated growth of $a_{\cal D}$, which suggests that observers would conclude that the universe is undergoing accelerated expansion. However, it is not possible to link this formalism directly to observations. A first step can be taken by imposing by hand an effective average geometry with the help of a template metric that only holds on average. Probably the simplest choice is to impose on each spatial hypersurface a spatial metric of constant curvature, by imagining that the inhomogeneities have been smoothed out. But in general the degrees of freedom of this metric (scale factor and spatial curvature) will not evolve as in the FLRW case, since the evolution is driven by the full inhomogeneous universe, and we should not expect the smoothed universe to follow exactly the evolution of a genuinely homogeneous one. For example, the average curvature could grow over time, due to the collapse of overdense structures and the growth (in volume) of the voids. Thus, unlike in the FLRW case, the average curvature in the template metric should be allowed to evolve. This is the case that was studied in [547].
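The kinematical backreaction can be illustrated with a short sketch (the samples are hypothetical; the standard Buchert definition of ${\cal Q}_{\cal D}$ is assumed, with volume-weighted averages reduced to plain means, as for equal-volume cells):

```python
import numpy as np

# Hypothetical samples of the local expansion rate theta and shear scalar
# sigma^2 over a domain D (arbitrary units, equal-volume cells assumed).
rng = np.random.default_rng(42)
theta = rng.normal(3.0, 0.5, size=100000)
sigma2 = np.full_like(theta, 0.01)

# Q_D = (2/3) (<theta^2> - <theta>^2) - 2 <sigma^2>  (standard Buchert form)
Q = (2.0 / 3.0) * (np.mean(theta ** 2) - np.mean(theta) ** 2) - 2.0 * np.mean(sigma2)
print("Q_D =", Q)

# variance 0.25 and <sigma^2> = 0.01 give Q_D ≈ (2/3)*0.25 - 0.02 ≈ 0.147 > 0:
# a large enough variance of the expansion rate makes the backreaction positive
assert Q > 0.0
```

This makes the mechanism explicit: it is the spread of local expansion rates, not any exotic fluid, that can source the acceleration term in the averaged equations.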

While the choice of template metric and the Buchert equations complete the set of equations, there are unfortunately further choices to be made. Firstly, although there is an integrability condition linking the evolution of ${\cal Q}_{\cal D}$ and $\langle {\cal R} \rangle_{\cal D}$, and in addition a consistency requirement that the effective curvature in the template metric be related to $\langle {\cal R} \rangle_{\cal D}$, we still need to impose an overall evolution by hand, as it has not yet been possible to compute it from first principles. Larena et al. assumed a scaling solution ${\cal Q}_{\cal D} \propto a_{\cal D}^{\,n}$, with $n$ a free exponent. In a dark-energy context, this scaling exponent corresponds to an effective dark energy with a constant equation of state fixed by $n$, but in the backreaction case with the template metric the geometry is different from the usual dark-energy case. A perturbative analysis [569] found a specific leading scaling mode, but of course this is only an indication of the possible behavior, as the situation is essentially non-perturbative.
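The scaling ansatz can be made concrete in a short sketch. The assumptions are: the standard dust integrability condition $a_{\cal D}^{-6}\,\partial_t({\cal Q}_{\cal D}\, a_{\cal D}^6) + a_{\cal D}^{-2}\,\partial_t(\langle{\cal R}\rangle_{\cal D}\, a_{\cal D}^2) = 0$, and ${\cal Q}_{\cal D} = {\cal Q}_0\, a_{\cal D}^n$ with an arbitrary illustrative $n \neq -2$; the condition then forces $\langle{\cal R}\rangle_{\cal D} = -\frac{n+6}{n+2}\,{\cal Q}_0\, a_{\cal D}^n$, which the code verifies numerically:

```python
import numpy as np

n, Q0 = -1.0, 0.5                    # hypothetical scaling exponent and amplitude
R0 = -(n + 6.0) / (n + 2.0) * Q0     # curvature amplitude forced by integrability

a = np.linspace(1.0, 2.0, 2001)      # averaged scale factor as evolution parameter
Q = Q0 * a ** n
R = R0 * a ** n

# integrability condition with d/dt -> d/da (the common factor da/dt drops out):
lhs = np.gradient(Q * a ** 6, a) / a ** 6 + np.gradient(R * a ** 2, a) / a ** 2
print("max violation:", np.max(np.abs(lhs[1:-1])))
assert np.max(np.abs(lhs[1:-1])) < 1e-4
```

This is how the average curvature gets tied to the backreaction term: once a scaling for ${\cal Q}_{\cal D}$ is imposed, $\langle{\cal R}\rangle_{\cal D}$ is no longer free but must track it (up to the homogeneous $a_{\cal D}^{-2}$ curvature mode).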

The second choice concerns the computation of observables. [547] studied distances to supernovae and the CMB peak position, effectively another distance. The assumption made was that distances can be computed within the averaged geometry as if it were the true geometry, by integrating the equation for radial null geodesics. In other words, the effective metric was taken to be the one that describes distances correctly. The resulting constraints are shown in Figure 51: the leading perturbative scaling mode is marginally consistent with them. These contours should be regarded as an indication of what kind of backreaction is needed if it is to explain the observed distance data.

One interesting point, perhaps the main one in light of the discussion in the following section, is that the averaged curvature necessarily needs to become large at late times in order to explain the data, due to the link between it and the backreaction term ${\cal Q}_{\cal D}$. Just as in the case of a huge void, this effective curvature makes the backreaction scenario testable, to some degree, with future large surveys like Euclid.

Living Rev. Relativity 16, (2013), 6
http://www.livingreviews.org/lrr-2013-6
This work is licensed under a Creative Commons License.