The concrete implementation of these ideas has begun only recently and has led to a number of surprising results to be reviewed here. Part of the physics intuition, on the other hand, dates back to a 1979 article by Weinberg  (see also ). Motivated by the analogy to the asymptotic freedom property of non-Abelian gauge theories, the term “asymptotic safety” was suggested in , indicating that physical quantities are “safe” from divergences as the cutoff is removed. Following this suggestion we shall refer to the above circle of ideas as the “asymptotic safety scenario” for quantum gravity. For convenient orientation we display the main features of the asymptotic safety scenario in an overview:
This is a setting which places quantum gravity within the framework of known physics principles. It is not presupposed that continuous fields or distributions on a four dimensional manifold are necessarily the most adequate description in the extreme ultraviolet. However, since these evidently provide the correct dynamical degrees of freedom at ‘low’ (sub TeV) energies, a research strategy which focuses on the ‘backtracing’ of the origin of these dynamical degrees of freedom via renormalization group ideas seems most appropriate.
This amounts to a strategy centered around a functional integral picture, which was indeed the strategy adopted early on [144, 78], but which is now mostly abandoned. A functional integral over geometries of course has to differ in several crucial ways from one for fields on a fixed geometry. This led to the development of several formulations (canonical, covariant [64, 65, 66], proper time [212, 213], and covariant Euclidean [104, 92]). As is well-known the functional integral picture is also beset by severe technical problems [210, 63]. Nevertheless this should not distract attention from the fact that a functional integral picture has a physics content which differs from the physics content of other approaches. For want of a better formulation we shall refer to this fact by saying that a functional integral picture “takes the degrees of freedom of the gravitational field seriously also in the quantum regime”.
Let us briefly elaborate on that. Arguably the cleanest intuition as to ‘what quantizing gravity might mean’ comes from the functional integral picture. Transition or scattering amplitudes for nongravitational processes should be affected not only by one geometry solving the gravitational field equations, but by a ‘weighted superposition’ of ‘nearby possible’ off-shell geometries. The rationale behind this intuition is that all known (microscopic) matter is quantized that way, and using an off-shell matter configuration as the source of the Einstein field equations is in general inconsistent, unless the geometry is likewise off-shell. Moreover, relativistic quantum field theory suggests that the matter-geometry coupling is effected not only through averaged or large scale properties of matter. For example nonvanishing connected correlators of a matter energy momentum tensor should be a legitimate source of gravitational radiation as well (see ). Of course this does not specify in which sense the geometry is off-shell, nor which class of possible geometries ought to be considered, nor with respect to which measure they ought to be weighted. Rapid decoherence, a counterpart of spontaneous symmetry breaking, and other unknown mechanisms may in addition mask the effects of the superposition principle. Nevertheless the argument suggests that the degrees of freedom of the gravitational field should be taken seriously also in the quantum regime, roughly along the lines of a functional integral.
Doing so one has to face the aforementioned enormous difficulties. Nevertheless facing these problems and maintaining the credible physics premise of a functional integral picture is, in our view, more appropriate than evading the problems in exchange for a less credible physics premise. Of course in the absence of empirical guidance the ‘true’ physics of quantum gravity is unknown; so for the time being it will be important to try to isolate differences in the physics content of the various approaches. By physics content we mean here qualitative or quantitative results for the values of “quantum gravity corrections” to generic physical quantities in the approach considered. Generic physical quantities should be such that they in principle capture the entire invariant content of a theory. In a conventional field theory S-matrix elements by and large have this property; in canonical general relativity Dirac observables play this role [9, 219, 70]. In quantum gravity, in contrast, no agreement has been reached on the nature of such generic physical quantities.
Quantum gravity research strongly draws on concepts and techniques from other areas of theoretical physics. As these concepts and techniques evolve they are routinely applied to quantum gravity. In the case of the functional integral picture the transferral was in the past often dismissed as eventually inappropriate. As the concepts and techniques evolved further, the reasons for the original dismissal may have become obsolete but the negative opinion remained. We share the viewpoint expressed by Wilczek in : “Whether the next big step will require a sharp break from the principles of quantum field theory, or, like the previous ones, a better appreciation of its potentialities, remains to be seen”. As a first (small) step one can try to reassess the prospects of a functional integral picture for the description of the quantized gravitational field, which is what we set out to do here. We try to center the discussion around the above main ideas, and, for short, call a quantum theory of gravity based on them Quantum Gravidynamics. For the remainder of Section 1.1 we now discuss a number of key issues that arise.
In any functional integral picture one has to face the crucial renormalizability problem. Throughout we shall be concerned exclusively with (non-)renormalizability in the ultraviolet. The perspective on the nature of the impasse entailed by the perturbative non-renormalizability of the Einstein–Hilbert action (see Bern  for a recent review), however, has changed significantly since the time it was discovered by ’t Hooft and Veltman . First, the effective field theory framework applied to quantum gravity (see  for a recent review) provides unambiguous answers for ‘low energy’ quantities despite the perturbative non-renormalizability of the ‘fundamental’ action. The role of an a-priori microscopic action is moreover strongly deemphasized when a Kadanoff–Wilson view on renormalization is adopted. We shall give a quick reminder on this framework in Appendix A. Applied to gravity it means that the Einstein–Hilbert action should not be considered as the microscopic (high energy) action, rather the (nonperturbatively defined) renormalization flow itself will dictate, to a certain extent, which microscopic action to use and whether or not there is a useful description of the extreme ultraviolet regime in terms of ‘fundamental’ (perhaps non-metric) degrees of freedom. The extent to which this is true hinges on the existence of a fixed point with a renormalized trajectory emanating from it. The fixed point guarantees universality in the statistical physics sense. If there is a fixed point, any action on a renormalized trajectory describes identically the same physics on all energy scales lower than the one where it is defined. Following the trajectory back (almost) into the fixed point one can in principle extract unambiguous answers for physical quantities on all energy scales.
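As a reminder of what a Kadanoff–Wilson coarse graining step looks like in the simplest solvable setting (the one-dimensional Ising chain; a textbook example quoted here only to fix ideas, unrelated to gravity), decimating every second spin maps the nearest-neighbour coupling $K$ exactly to $K' = \frac{1}{2}\ln\cosh(2K)$:

```python
# Exact coarse graining (decimation) for the 1D Ising chain: summing out
# every second spin sends the nearest-neighbour coupling K to
# K' = (1/2) ln cosh(2K). A textbook example, not a gravity computation.
import math

def blocked_coupling(K):
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.0
trajectory = [K]
for _ in range(6):
    K = blocked_coupling(K)
    trajectory.append(K)

# The coupling shrinks monotonically: the flow heads for the trivial
# (high-temperature) fixed point K = 0; K = 0 and K = infinity are the
# only fixed points of this map.
print(trajectory)
```

In this example the flow has no nontrivial fixed point; the asymptotic safety scenario hinges on the corresponding gravitational flow possessing one.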
Compared to the effective field theory framework the main advantage lies not primarily in the gained energy range in which reliable computations can be made, but rather in the chance to properly identify ‘large’ quantum gravity effects at low energies. Indeed the (presently known) low energy effects that arise in the effective field theory framework, although unambiguously defined, are suppressed by the powers of energy scale/Planck mass one would expect on dimensional grounds. Conversely, if there are detectable low energy imprints of quantum gravity they presumably arise from high energy (Planck scale) processes, in which case one has to computationally propagate their effect through many orders of magnitude down to accessible energies.
This may be seen as the challenge a physically viable theory of quantum gravity has to meet, while the nature of the ‘fundamental’ degrees of freedom is of secondary importance. Indeed, from the viewpoint of renormalization theory it is the universality class that matters, not the particular choice of dynamical variables. Once a functional integral picture has been adopted, even nonlocally and nonlinearly related sets of fields or other variables may describe the same universality class – and hence the same physics.
The arena on which the renormalization group acts is a space of actions or, equivalently, a space of measures. A typical action has the form $\sum_i u_i(\mu)\, P_i$, where the $P_i$ are interaction monomials (including kinetic terms) and the $u_i$ are scale dependent coefficients. The subset of the $u_i$ which cannot be removed by field redefinitions are called essential parameters, or couplings. Usually one makes them dimensionless by taking out a suitable power of the scale parameter $\mu$, $g_i(\mu) = \mu^{-d_i} u_i(\mu)$, where $d_i$ is the mass dimension of $u_i$. In the following the term “essential coupling” will always refer to these dimensionless variants. We also presuppose the principles according to which a (Wilson–Kadanoff) renormalization flow is defined on this arena. For the convenience of the reader a brief reminder is included in Appendix A. In the context of Quantum Gravidynamics some key notions (unstable manifold and continuum limit) have a somewhat different status which we outline below.
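As a concrete instance of this rescaling (in our notation; the conventions of Section 2.3.1 may differ by inessential factors), consider the Ricci scalar monomial with dimensionless metric:

```latex
S \;\supset\; u(\mu) \int\! d^dx\, \sqrt{g}\, R(g),
\qquad u(\mu) = \frac{1}{16\pi G_{\rm N}(\mu)}\,, \qquad [u] = d-2\,,
```

so the dimensionless variant of this coefficient is $\mu^{-(d-2)} u(\mu)$; its inverse, up to the factor $16\pi$, is the dimensionless Newton constant $g_{\rm N}(\mu) = \mu^{d-2} G_{\rm N}(\mu)$ encountered below.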
Initially all concepts in a Wilson–Kadanoff renormalization procedure refer to a choice of coarse graining operation. It is part of the physics premise of a functional integral type approach that there is a description independent and physically relevant distinction between coarse grained and fine grained geometries. On a classical level this amounts to the distinction, for example, between a perfect fluid solution of the field equations and one generated by its enormous number of molecular constituents. A sufficiently large set of Dirac observables would be able to discriminate two such spacetimes. Whenever we refer later on to “coarse grained” versus “fine grained” geometries we have a similar picture in mind for the ensembles of off-shell geometries entering a functional integral.
With respect to a given coarse graining operation one can ask whether the flow of actions or couplings has a fixed point. The existence of a fixed point is the raison d’être for the universality properties (in the statistical field theory sense) which eventually are ‘handed down’ to the physics in the low energy regime. By analogy with other field theoretical systems one should probably not expect that the existence (or nonexistence) of a (non-Gaussian) fixed point will be proven with mathematical rigor in the near future. From a physics viewpoint, however, it is the high degree of universality ensured by a fixed point that matters, rather than the existence in the mathematical sense. For example non-Abelian gauge theories appear to have a (Gaussian) fixed point ‘for all practical purposes’, while their rigorous construction as the continuum limit of a lattice theory is still deemed a ‘millennium problem’. In the case of quantum gravity we shall present in detail in Sections 3 and 4 two new pieces of evidence for the existence of a (non-Gaussian) fixed point.
Accepting the existence of a (non-Gaussian) fixed point as a working hypothesis one is led to determine the structure of its unstable manifold. Given a coarse graining operation and a fixed point of it, the stable (unstable) manifold is the set of all points connected to the fixed point by a coarse graining trajectory terminating at it (emanating from it). It is not guaranteed though that the space of actions can in the vicinity of the fixed point be divided into a stable and an unstable manifold; there may be trajectories which develop singularities or enter a region of coupling space deemed unphysical for other reasons and thus remain unconnected to the fixed point. The stable manifold is the innocuous part of the problem; it is the unstable manifold which is crucial for the construction of a continuum limit. By definition it is swept out by flow lines emanating from the fixed point, the so-called renormalized trajectories. Points on such a flow line correspond to actions or measures which are called perfect in that they can be used to compute continuum answers for physical quantities even in the presence of an ultraviolet (UV) cutoff, like one which discretizes the base manifold. In practice the unstable manifold is not known and renormalized trajectories have to be identified approximately by a tuning process. What is easy to determine is whether in a given expansion “sum over coupling times interaction monomial” a coupling will be driven away from the value the corresponding coordinate has at the fixed point after a sufficient number of coarse graining steps (in which case it is called relevant) or will move towards this fixed point value (in which case it is called irrelevant). Note that this question can be asked even for trajectories which are not connected to the fixed point. The dimension of the unstable manifold equals the number of independent relevant interaction monomials that are ‘connected’ to the fixed point by a (renormalized) trajectory.
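The relevant/irrelevant dichotomy can be made concrete in a toy model. In the following sketch the two beta functions are invented solely for illustration; we locate a non-Gaussian fixed point of $\mu\, dg_i/d\mu = \beta_i(g)$ by Newton iteration and read off relevant versus irrelevant directions from the signs of the eigenvalues of the stability matrix (a negative eigenvalue means the perturbation is pulled into the fixed point as $\mu \to \infty$, i.e. the direction is relevant):

```python
# Toy classification of couplings at a non-Gaussian fixed point.
# The beta functions below are invented for illustration only.
import numpy as np

def beta(g):
    g1, g2 = g
    return np.array([2.0 * g1 - g1**2,           # non-Gaussian zero at g1 = 2
                     0.5 * g2 + 0.3 * g1 * g2])  # forces g2 = 0 at a fixed point

def jacobian(f, g, eps=1e-7):
    """Stability matrix dbeta_i/dg_j by central differences."""
    J = np.zeros((len(g), len(g)))
    for j in range(len(g)):
        dg = np.zeros(len(g)); dg[j] = eps
        J[:, j] = (f(g + dg) - f(g - dg)) / (2.0 * eps)
    return J

g = np.array([1.5, 0.1])             # rough starting guess
for _ in range(50):                  # Newton iteration onto the fixed point
    g = g - np.linalg.solve(jacobian(beta, g), beta(g))

for lam in sorted(np.linalg.eigvals(jacobian(beta, g)).real):
    # near the fixed point a perturbation along this eigendirection scales like mu**lam
    print(round(lam, 4), "relevant" if lam < 0 else "irrelevant")
```

In this toy case one direction is relevant and one irrelevant, so the unstable manifold of the fixed point is one-dimensional.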
Typically the unstable manifold is indeed locally a manifold, though it may have cusps. Although ultimately it is only the unstable manifold that matters for the construction of a continuum limit, relevant couplings which blow up somewhere in between may make it very difficult to successfully identify the unstable manifold. In practice, if the basis of interaction monomials in which this happens is deemed natural and a change of basis in which the pathological directions could simply be omitted from the space of actions is very complicated, the problems caused by such a blow up may be severe. An important issue in practice is therefore whether in a natural basis of interaction monomials the couplings are ‘safe’ from such pathologies and the space of actions decomposes in the vicinity of the fixed point neatly into a stable and an unstable manifold. This regularity property is one aspect of “asymptotic safety”, as we shall see below.
A second caveat appears in infinite-dimensional situations. Whenever the coarse graining operates on an infinite set of potentially relevant interaction monomials, convergence issues in the infinite sums formed from them may render formally equivalent bases inequivalent. In this case the geometric picture of a (coordinate independent) manifold breaks down or has to be replaced by a more refined functional analytic framework. An example of a field theory with an infinite set of relevant interaction monomials is QCD in a lightfront formulation , where manifest Lorentz and gauge invariance is given up in exchange for other advantages. In this case it is thought that there are hidden dependencies among the associated couplings so that the number of independent relevant couplings is finite and the theory is eventually equivalent to conventional QCD. Such a reduction of couplings is nontrivial because a relation among couplings has to be preserved under the renormalization flow. In quantum gravity related issues arise, to which we turn later.
As an interlude let us outline the role of Newton’s constant in a diffeomorphism invariant theory with a dynamical metric. Let $S[g, \psi]$ be any local action, where $g$ is the metric and the “matter” fields $\psi$ are not scaled when the metric is. Constant rescalings of the metric then give rise to a variation of the Lagrangian which vanishes on shell. As a consequence, one of the coupling parameters which in the absence of gravity would be essential (i.e. a genuine coupling) becomes inessential (i.e. can be changed at will by a redefinition of the fields). The running of this parameter, like that of a wave function renormalization constant, has no direct significance. If the pure gravity part contains the usual Ricci scalar term $\sqrt{g}\, R(g)$, the parameter that becomes inessential may be taken as its prefactor $Z_N$. Up to a dimension dependent coefficient it can be identified with the inverse of Newton’s constant $G_N$, the latter defined through the nonrelativistic force law. It is also easy to see that in a background field formalism this prefactor sets the overall normalization of the spectral/momentum values. Hence in a theory with a dynamical metric the three (conceptually distinct) inessential parameters – overall scale of the metric, the inverse of Newton’s constant, and the overall normalization of the spectral/momentum values – are in one-to-one correspondence (see Section 2.3.1 for details). For definiteness let us consider the running of Newton’s constant here.
Being inessential, the quantum field theoretical running of $G_N$ has significance only relative to the running coefficient of some reference operator. The most commonly used choice is a cosmological constant term $\Lambda \int\! dx \sqrt{g}$. The associated essential coupling is in the present context assumed to be asymptotically safe, i.e. to remain bounded and to approach a finite limit as the renormalization scale $\mu$ goes to infinity. Factorizing it into the dimensionless Newton constant $g_N$ and a dimensionless cosmological constant $\lambda$, there are two possibilities: One is that the scheme choices are such that both $g_N$ and $\lambda$ behave like asymptotically safe couplings, i.e. satisfy Equation (1.3) below. This is advantageous for most purposes. The second possibility is realized when a singular solution of the flow equation for $g_N$ is inserted into the flow equation for $\lambda$. This naturally occurs when $G_N$, viewed as an inessential parameter, is frozen at a prescribed value, which amounts to working with Planck units. Then the flow of $G_N$ is trivial, but the flow equation for $\lambda$ carries an explicit $\mu$-dependence. By and large both formulations are mathematically equivalent (see Section 2.3.1). For definiteness we considered here the cosmological constant term as a reference operator, but many other choices are possible. In summary, the dimensionless Newton constant can be treated either as an inessential parameter (and then frozen to a constant value) or as a quasi-essential coupling (in which case it runs and assumes a finite positive asymptotic value).
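The quasi-essential treatment can be illustrated by a toy flow in $d = 4$: take $\mu\, dg_{\rm N}/d\mu = (2 + \eta(g_{\rm N}))\, g_{\rm N}$ with an invented anomalous dimension $\eta(g) = -g$ (nothing here is a gravity computation); the dimensionless Newton constant then runs into a finite positive asymptotic value:

```python
# Toy quasi-essential running of the dimensionless Newton constant:
# mu dg/dmu = (2 + eta(g)) g with the invented ansatz eta(g) = -g,
# integrated upward in t = ln(mu) by explicit Euler steps.
def rhs(g):
    eta = -g                 # invented for illustration
    return (2.0 + eta) * g

g, dt = 0.1, 1e-3            # small initial coupling, step in t = ln(mu)
for _ in range(20000):       # flow over 20 units of ln(mu)
    g += dt * rhs(g)

print(round(g, 6))           # settles at the fixed point value g* = 2
```

At the fixed point $2 + \eta(g_*) = 0$, so the toy anomalous dimension is driven to $-2$; this is the mechanism behind the dimensional reduction discussed below.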
The unstable manifold of a fixed point is crucial for the construction of a continuum limit. The fixed point itself describes a strictly scale invariant situation. More precisely the situation at the fixed point is by definition invariant under the chosen coarse graining (i.e. scale changing) operation. In particular any dependence on an ultraviolet cutoff must drop out at the fixed point, which is why fixed points are believed to be indispensable for the construction of a scaling limit. If one now uses a different coarse graining operation the location of the fixed point will change in the given coordinate system provided by the essential couplings. One aspect of universality is that all field theories based on the fixed points referring to different coarse graining operations have the same long distance behavior.
This suggests introducing the notion of a continuum limit as an ‘equivalence class’ of scaling limits in which the physical quantities become independent of the UV cutoff, largely independent of the choice of the coarse graining operation, and, ideally, invariant under local reparameterizations of the fields.
In the framework of statistical field theories one distinguishes between two construction principles, a massless scaling limit and a massive scaling limit. In the first case all the actions/measures on a trajectory emanating from the fixed point describe a scale invariant system, in the second case this is true only for the action/measure at the fixed point. In either case the unstable manifold of the given fixed point has to be at least one-dimensional. Here we shall exclusively be interested in the second construction principle. Given a coarse graining operation and a fixed point of it with a nontrivial unstable manifold a scaling limit is then constructed by ‘backtracing’ a renormalized trajectory emanating from the fixed point. The number of parameters needed to specify a point on the unstable manifold gives the number of possible scaling limits – not all of which must be physically distinct, however.
In this context it should be emphasized that the number of relevant directions in a chosen basis is not directly related to the predictive power of the theory. A number of authors have argued in the effective field theory framework that even theories with an infinite number of relevant parameters can be predictive [126, 16, 32]. This applies all the more if the theory under consideration is based on a fixed point, and thus not merely effective. One reason lies in the fact that the number of independent relevant directions connected to the fixed point might not be known. Hidden dependencies would then allow for a (genuine or effective) reduction of couplings [236, 160, 174, 11, 16]. For quantum gravity the situation is further complicated by the fact that generic physical quantities are likely to be related only nonlocally and nonlinearly to the metric. What matters for the predictive power is not the total number of relevant parameters but how the observables depend on them. To illustrate the point imagine a (hypothetical) case where the observables are injective functions of one relevant coupling each; then $n$ measurements will determine $n$ couplings, leaving all further observables as predictions. This gives plenty of predictions, for any $n$, and it remains true in the limit $n \to \infty$, despite the fact that one then has infinitely many relevant couplings.
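A minimal numerical sketch of this counting argument (the observables and their dependence on the couplings are invented, chosen only so that the injectivity is manifest):

```python
# Toy predictivity count: four "observables" depending on two "relevant
# couplings" c1, c2. Two measurements fix the couplings; the remaining
# observables become predictions. All functions are invented.
def observables(c1, c2):
    return {"O1": c1 + c2, "O2": c1 - c2,
            "O3": c1 * c2, "O4": c1**2 + 3.0}

measured = {"O1": 2.0, "O2": 1.0}          # pretend these were measured

# invert the (injective) map (c1, c2) -> (O1, O2)
c1 = 0.5 * (measured["O1"] + measured["O2"])
c2 = 0.5 * (measured["O1"] - measured["O2"])

pred = observables(c1, c2)
print(pred["O3"], pred["O4"])               # prints: 0.75 5.25 -- predictions
```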
Infinitely many essential couplings naturally arise when a perturbative treatment of Quantum Gravidynamics is based on a $1/p^2$ type propagator. As first advocated by Gomis and Weinberg  the use of a $1/p^2$ type graviton propagator in combination with higher derivative terms avoids the problems with unitarity that occur in other treatments of higher derivative theories. Consistency requires that quadratic counterterms (those which contribute to the propagator) can be absorbed by field redefinitions. This can be seen to be the case  either in the absence of a cosmological constant term or when the background spacetime admits a metric with constant curvature. The price to pay for the $1/p^2$ type propagator is that all nonquadratic counterterms have to be included in the bare action, so that independence of the UV cutoff can only be achieved with infinitely many essential couplings, but it can be achieved. In order to distinguish this from the familiar notion of perturbative renormalizability with finitely many couplings we shall call such theories (perturbatively) weakly renormalizable. Translated into Wilsonian terminology the above results then show the existence of a “weakly renormalizable” but “propagator unitary” Quantum Gravidynamics based on a perturbative Gaussian fixed point.
The beta functions for this infinite set of couplings are presently unknown. If they were known, expectations are that at least a subset of the couplings would blow up at some finite momentum scale and would be unphysical beyond it. In this case the computed results for physical quantities (“reaction rates”) are likely to blow up likewise at some (high) energy scale.
This illustrates Weinberg’s concept of asymptotic safety. To quote from : “A theory is said to be asymptotically safe if the essential coupling parameters approach a fixed point as the momentum scale of their renormalization point goes to infinity”. Here ‘the’ essential couplings are those which are useful for the absorption of cutoff dependencies, i.e. not irrelevant ones. The momentum scale is the renormalization scale just discussed, so that the condition amounts to having nonterminating trajectories for the essential couplings with a finite limit as this scale goes to infinity; couplings with this property are called asymptotically safe. As a qualification one should add : “Of course the question whether or not an infinity in coupling constants betokens a singularity in reaction rates depends on how the coupling constants are parameterized. We could always adopt a perverse definition (e.g. ) such that reaction rates are finite even at an infinity of the coupling parameters. This problem can be avoided if we define the coupling constants as coefficients in a power series expansion of the reaction rates themselves around some physical renormalization point”.
A similar remark applies to the signs of coupling constants. When defined through physical quantities certain couplings or coupling combinations will be constrained to be positive. For example in a (nongravitational) effective field theory this constrains the couplings of a set of leading power counting irrelevant operators to be positive . In an asymptotically safe theory similar constraints are expected to arise and are crucial for its physics viability.
Note that whenever the criterion for asymptotic safety is met, all the relevant couplings lie in the unstable manifold of the fixed point (which is called the “UV critical surface” in , Page 802, a term now usually reserved for the surface of infinite correlation length). The regularity property described earlier is then satisfied, and the space of actions decomposes in the vicinity of the fixed point into a stable and an unstable manifold.
Comparing the two perturbative treatments of Quantum Gravidynamics described earlier, one sees that they have complementary advantages and disadvantages: Higher derivative theories based on a $1/p^4$ propagator are strictly renormalizable with couplings that are presumed to be asymptotically safe; however unphysical propagating modes are present. Defining higher derivative gravity perturbatively with respect to a $1/p^2$ propagator has the advantage that all propagating modes are physical, but infinitely many essential couplings are needed, a subset of which is presumed to be not asymptotically safe. From a technical viewpoint the challenge of Quantum Gravidynamics therefore lies not so much in achieving renormalizability as in reconciling asymptotically safe couplings with the absence of unphysical propagating modes.
The solution of this ‘technical’ problem is likely also to give rise to enhanced predictability properties, which should be vital to make the theory phenomenologically interesting. Adopting the second of the above perturbative constructions one sees that the situation is similar to, for example, perturbative QED. So, apart from aesthetic reasons, why not be content with physically motivated couplings that display a ‘Landau’ pole, and hence with an effective field theory description? Predictability in principle need not be a problem. The previous remarks about the predictability of theories with infinitely many essential couplings apply here. Even in Quantum Gravidynamics based on the perturbative Gaussian fixed point, some lowest order corrections are unambiguously defined (independent of the scale), as stressed by Donoghue (see  and references therein). In our view, as mentioned earlier, the main rationale for trying to go beyond Quantum Gravidynamics based on the perturbative Gaussian fixed point is not the infinite number of essential couplings, but the fact that the size of the corrections is invariably governed by power-counting dimensions. As a consequence, in the energy range where the computations are reliable the corrections are way too small to be phenomenologically interesting. Conversely, if there is a physics of quantum gravity which is experimentally accessible and adequately described by some Quantum Gravidynamics, the above two features need to be reconciled – perturbatively or nonperturbatively.
Assuming that this can be achieved certain qualitative features such a gravitational functional integral must have can be inferred without actually evaluating it. One is the presence of anti-screening configurations, the other is a dimensional reduction phenomenon in the ultraviolet.
In non-Abelian gauge theories the anti-screening phenomenon can be viewed as the physics mechanism underlying their benign high energy behavior (as opposed to Abelian gauge theories, say); see e.g.  for an intuitive discussion. It is important not to identify “anti-screening” with its most widely known manifestation, the sign of the dominant contribution to the one-loop beta function. In an exact continuum formulation of a pure Yang–Mills theory, say, the correlation functions do not even depend on the gauge coupling. Nevertheless they indirectly do know about “asymptotic freedom” through their characteristic high energy behavior. In the functional integral measure this comes about through the dominance of certain configurations/histories which one might also call “anti-screening”.
By analogy one would expect that in a gravitational functional integral which allows for a continuum limit, a similar mechanism is responsible for its benign ultraviolet behavior (as opposed to the one expected by power counting considerations with respect to a propagator, say). Some insight into the nature of this mechanism can be gained from a Hamiltonian formulation of the functional integral (authors, unpublished) but a concise characterization of the “anti-screening” geometries/histories, ideally in a discretized setting, remains to be found. By definition the dominance of these configurations/histories would be responsible for the benign ultraviolet properties of the discretized functional integral based on a non-Gaussian fixed point. Conversely understanding the nature of these anti-screening geometries/histories might help to design good discretizations. A discretization of the gravitational functional integral which allows for a continuum limit might also turn out to exclude or dynamically disfavor configurations that are taken into account in other, off-hand equally plausible, discretizations. Compared to such a naive discretization it will look as if a constraint on the allowed configurations/histories has been imposed. For want of a better term we call this an “anti-screening constraint”. A useful analogy is the inclusion of a causality constraint in the definition of the (formal Euclidean) functional integral originally proposed by Teitelboim [212, 213], and recently put to good use in the framework of dynamical triangulations . Just as the inclusion of a good causality constraint is justified retroactively, so would be the inclusion of a suitable “anti-screening” constraint.
A second qualitative property of a gravitational functional integral where the continuum limit is based on a non-Gaussian fixed point is a dimensional reduction of the residual interactions in the UV. There are several arguments for this phenomenon which will be described in Section 2.4. Perhaps the simplest one is based on the large anomalous dimensions at a non-Gaussian fixed point and runs as follows: (We present here a formulation independent variant  of the argument first used in .) Suppose that the unknown microscopic action is local and reparameterization invariant. The only term containing second derivatives then is the familiar Einstein–Hilbert term $\int\! d^dx \sqrt{g}\, R(g)$, of mass dimension $2-d$ in $d$ dimensions, if the metric is taken dimensionless. As explained before the dimensionful running prefactor multiplying it plays a double role, once as a wave function renormalization constant $Z_N(\mu)$ and once as a quasi-essential coupling $g_N(\mu)$. Both aspects are related as outlined before; in particular $\mu \frac{d}{d\mu} g_N = (d - 2 + \eta_N)\, g_N$, where $\eta_N$ is the anomalous dimension associated with $Z_N$. If this flow equation now has a nontrivial fixed point $0 < g_N^* < \infty$, the only way the right-hand side can vanish is for $\eta_N^* = 2 - d$, irrespective of the detailed behavior of the other couplings as long as no blow-up occurs. This is a huge anomalous dimension. For a graviton “test propagator” (see below) the key property of $\eta_N^* = 2 - d$ is that it gives rise to a high momentum behavior of the form $(p^2)^{-1+\eta_N^*/2} = (p^2)^{-d/2}$ modulo logarithms, or a short distance behavior of the form $|x-y|^{-(d-2+\eta_N^*)}$ modulo logarithms. Keeping only the leading part, the vanishing power at $\eta_N^* = 2 - d$ translates into a logarithmic behavior, $\ln |x-y|$, formally the same as for massless Klein–Gordon fields in a two-dimensional field theory. We shall comment on potential pitfalls of such an argument below.
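The formulation independence of the conclusion can be checked in a toy setting: whatever (invented) functional form one takes for the anomalous dimension, any nontrivial zero of $(d - 2 + \eta(g))\, g$ pins $\eta$ to the value $2 - d$ at that zero:

```python
# Toy check: at a nontrivial fixed point of mu dg/dmu = (d - 2 + eta(g)) g
# the anomalous dimension necessarily equals 2 - d, for any (invented)
# ansatz eta(g). The nontrivial roots are found by bisection.
d = 4

def fixed_point(eta, lo=1e-6, hi=50.0, tol=1e-12):
    f = lambda g: d - 2 + eta(g)
    assert f(lo) > 0 > f(hi)        # bracket for a decreasing toy ansatz
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# three invented anomalous-dimension ansaetze
ansaetze = [lambda g: -g, lambda g: -g**2, lambda g: -3.0 * g / (1.0 + g)]
for eta in ansaetze:
    print(round(eta(fixed_point(eta)), 6))  # prints -2.0 each time, i.e. 2 - d
```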
In accordance with this argument a $(p^2)^{-d/2}$ type propagator goes hand in hand with a non-Gaussian fixed point for the dimensionless Newton coupling in two other computational settings: in strictly renormalizable higher derivative theories (see Section 2.3.2) and in the $1/N$ expansion [216, 217, 203]. In the latter case a nontrivial fixed point goes hand in hand with a graviton propagator whose high momentum behavior is of the form $1/p^4$, modulo logarithms, in four dimensions, and formally $(p^2)^{-d/2}$ in $d$ dimensions.
The fact that a large anomalous dimension occurs at a non-Gaussian fixed point was first observed in the context of the $2+\epsilon$ expansion [116, 117] and then noticed in computations based on truncated flow equations. The above variant of the argument shows that no specific computational information enters. It highlights what is special about the Einstein–Hilbert term (within the class of local gravitational actions): it is the kinetic (second derivative) term itself which carries a dimensionful coupling. Of course one could assign to the metric a mass dimension 2, in which case Newton’s constant would be dimensionless. However one readily checks that then the wave function renormalization constant of a standard matter kinetic term acquires a mass dimension $2-d$ for bosons and $1-d$ for fermions, respectively. Assuming that the associated dimensionless parameters remain nonzero as $\mu \to \infty$, one can repeat the above argument and finds now that all matter propagators have a $(p^2)^{-d/2}$ high momentum behavior, or a $\ln x^2$ short distance behavior. It is this universality which justifies attributing the modification in the short distance behavior of the fields to a modification of the underlying (random) geometry. This may be viewed as a specific variant of the old expectation that gravity acts as a short distance regulator.
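The dimension counting behind this step can be made explicit. Assigning the metric mass dimension $2$ (so that $ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu$ is dimensionless, with $[dx^\mu] = -1$) gives $[\sqrt{g}\,] = d$, $[g^{\mu\nu}] = -2$, and $[R] = 0$, whence the Einstein–Hilbert term and Newton's constant are dimensionless. Keeping the canonical field dimensions $[\phi] = (d-2)/2$ and $[\psi] = (d-1)/2$, the requirement that the kinetic terms be dimensionless fixes the wave function renormalization constants:

\begin{align}
  \Big[ \int\! d^d x\, \sqrt{g}\; Z_\phi\, g^{\mu\nu}
        \partial_\mu \phi\, \partial_\nu \phi \Big]
  &= [Z_\phi] + (d-2) = 0
  & &\Longrightarrow\quad [Z_\phi] = 2-d \,,
\\
  \Big[ \int\! d^d x\; e\; Z_\psi\, \bar\psi\, \gamma^a e_a{}^{\mu}
        \partial_\mu \psi \Big]
  &= [Z_\psi] + (d-1) = 0
  & &\Longrightarrow\quad [Z_\psi] = 1-d \,,
\end{align}

using $[e_\mu{}^a] = 1$, $[e_a{}^\mu] = -1$, and $[e] = d$ for the vielbein. The dimensionless combinations $\mu^{d-2} Z_\phi$ and $\mu^{d-1} Z_\psi$ then obey flow equations of the same structure as the one for $g_N$, which is why the fixed point argument carries over to matter.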
Let us stress that while the anomalous dimension always governs the UV behavior in the vicinity of a (UV) fixed point, it is in general not related to the geometry of field propagation (compare the analogous discussion for QCD). What is special about gravity is ultimately that the propagating field itself determines distances. In the context of the above argument this is used in the reshuffling of the soft UV behavior to matter propagators. The propagators used here should be viewed as “test propagators”, not as physical ones. One transplants the information contained in $\eta$, derived from the gravitational functional integral, into a conventional propagator on a (flat or curved) background spacetime. The reduced dimension two should be viewed as an “interaction dimension”, specifying roughly the (normalized) number of independent degrees of freedom a randomly picked one interacts with.
The same conclusion ($(p^2)^{-d/2}$ propagators or interaction dimension two) can be reached in a number of other ways as well, which are described in Section 2.4. A more detailed understanding of the microstructure of the random geometries occurring in an asymptotically safe functional integral remains to be found (see however [135, 134]).
Accepting this dimensional reduction as a working hypothesis, it is natural to ask whether there exists a two-dimensional field theory which provides a quantitatively accurate (‘effective’) description of this extreme UV regime. Indeed, one can identify a number of characteristics such a field theory should have, using only the main ideas of the scenario (see the end of Section 2.4). The asymptotic safety of such a field theory would then strongly support the corresponding property of the full theory and the self-consistency of the scenario. In summary, we have argued that the qualitative properties of the gravitational functional integral in the extreme ultraviolet follow directly from the previously highlighted principles: the existence of a nontrivial UV fixed point, asymptotic safety of the couplings, and antiscreening. Moreover these UV properties can be probed for self-consistency.
© Max Planck Society and the author(s)