"Quantum-Spacetime Phenomenology"
Giovanni Amelino-Camelia 
1 Introduction and Preliminaries
1.1 The “Quantum-Gravity problem” as seen by a phenomenologist
1.2 Quantum spacetime vs quantum black hole and graviton exchange
1.3 20th century quantum-gravity phenomenology
1.4 Genuine Planck-scale sensitivity and the dawn of quantum-spacetime phenomenology
1.5 A simple example of genuine Planck-scale sensitivity
1.6 Focusing on a neighborhood of the Planck scale
1.7 Characteristics of the experiments
1.8 Paradigm change and test theories of not everything
1.9 Sensitivities rather than limits
1.10 Other limitations on the scope of this review
1.11 Schematic outline of this review
2 Quantum-Gravity Theories, Quantum Spacetime, and Candidate Effects
2.1 Quantum-Gravity Theories and Quantum Spacetime
2.2 Candidate effects
3 Quantum-Spacetime Phenomenology of UV Corrections to Lorentz Symmetry
3.1 Some relevant concepts
3.2 Preliminaries on test theories with modified dispersion relation
3.3 Photon stability
3.4 Pair-production threshold anomalies and gamma-ray observations
3.5 Photopion production threshold anomalies and the cosmic-ray spectrum
3.6 Pion non-decay threshold and cosmic-ray showers
3.7 Vacuum Cerenkov and other anomalous processes
3.8 In-vacuo dispersion for photons
3.9 Quadratic anomalous in-vacuo dispersion for neutrinos
3.10 Implications for neutrino oscillations
3.11 Synchrotron radiation and the Crab Nebula
3.12 Birefringence and observations of polarized radio galaxies
3.13 Testing modified dispersion relations in the lab
3.14 On test theories without energy-dependent modifications of dispersion relations
4 Other Areas of UV Quantum-Spacetime Phenomenology
4.1 Preliminary remarks on fuzziness
4.2 Spacetime foam, distance fuzziness and interferometric noise
4.3 Fuzziness for waves propagating over cosmological distances
4.4 Planck-scale modifications of CPT symmetry and neutral-meson studies
4.5 Decoherence studies with kaons and atoms
4.6 Decoherence and neutrino oscillations
4.7 Planck-scale violations of the Pauli Exclusion Principle
4.8 Phenomenology inspired by causal sets
4.9 Tests of the equivalence principle
5 Infrared Quantum-Spacetime Phenomenology
5.1 IR quantum-spacetime effects and UV/IR mixing
5.2 A simple model with soft UV/IR mixing and precision Lamb-shift measurements
5.3 Soft UV/IR mixing and atom-recoil experiments
5.4 Opportunities for Bose–Einstein condensates
5.5 Soft UV/IR mixing and the end point of tritium beta decay
5.6 Non-Keplerian rotation curves from quantum-gravity effects
5.7 An aside on gravitational quantum wells
6 Quantum-Spacetime Cosmology
6.1 Probing the trans-Planckian problem with modified dispersion relations
6.2 Randomly-fluctuating metrics and the cosmic microwave background
6.3 Loop quantum cosmology
6.4 Cosmology with running spectral dimensions
6.5 Some other quantum-gravity-cosmology proposals
7 Quantum-Spacetime Phenomenology Beyond the Standard Setup
7.1 A totally different setup with large extra dimensions
7.2 The example of hard UV/IR mixing
7.3 The possible challenge of not-so-subleading higher-order terms
8 Closing Remarks

List of Footnotes

1 For simplicity, I am assuming the description of the collision is being given in the center-of-mass frame. For example, we might have two identical particles, which in that frame have the same gigantic energy but propagate in opposite directions.
2 Note, however, that Heisenberg-microscope-type issues of this kind can be studied in some of the frameworks under consideration for the quantum-gravity problem. In particular, this has attracted some interest from proponents of asymptotic safety, as illustrated by the study reported in Ref. [451] (for an alternative perspective see Ref. [216]).
3 Our familiarity with the Newtonian regime of gravity extends down to distance scales no shorter than ∼ 10^−6 m, so the factor ℓ_p^2/r^2 would invite us to determine the Newtonian potential with an accuracy of at least parts in 10^58.
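A quick order-of-magnitude check of the parts-in-10^58 figure (a sketch; the Planck-length value used is the standard one, not taken from the source):

```python
# Order-of-magnitude check (sketch): suppression factor l_p^2 / r^2 for the
# shortest distances at which Newtonian gravity has been probed, r ~ 10^-6 m.
l_p = 1.616e-35  # Planck length in meters (standard value)
r = 1e-6         # shortest probed distance in meters
suppression = (l_p / r) ** 2
print(f"suppression ~ {suppression:.1e}")  # of order 10^-58
```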
4 Consistent with what is done in the relevant literature, I write Eq. (1) without assuming exact validity of the equivalence principle, and, therefore, introduce different symbols M_I and M_G for the inertial and gravitational mass respectively.
5 For a detailed discussion of the implications for the energy-momentum relation of a given scheme of spacetime discretization see, e.g., Ref. [517].
6 η represents one of (or a linear combination of) the coefficients η_{m0}, η_{m1}, η_{m2}, η_{m3}. The quantum-gravity intuition for η is |η| ∼ 1. For example, in my simple-minded “spacetime-lattice picture” |η| ≃ 1 is obtained when the lattice spacing is exactly E_p^−1. Sensitivity to values of |η| even smaller than 1 would reflect the ability to probe spacetime structure down to distance scales even smaller than the Planck length (in the “spacetime-lattice picture” this would correspond to a lattice spacing of √η E_p^−1).
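The role of η can be made concrete via the generic parametrization of Planck-scale-modified dispersion relations used in this literature (a sketch; sign and normalization conventions vary between authors, and the leading-order power n is model dependent):

```latex
% Generic leading-order parametrization of a Planck-scale-modified
% dispersion relation, with dimensionless coefficient \eta of order 1:
E^2 \simeq p^2 + m^2 + \eta \, p^2 \left(\frac{E}{E_p}\right)^{n}
```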
7 Evidently, this might mean that the length scale of spacetime quantization might be somewhat lower or somewhat higher than the Planck scale. But notice that when reasoning in terms of ℓ_QST ∼ √(G_N(ℓ_QST)) one should then allow for the possibility that there might not be any spacetime quantization after all (the self-consistent solution of ℓ_QST ∼ √(G_N(ℓ_QST)) might be ℓ_QST = 0).
8 Some colleagues even use the expression “theory of everything” without adding “we know so far”.
9 One may then argue that there are some indirect reasons why the Planck scale should appear in formulas setting the significance of the effects, but the connection with the Planck scale remains relatively weak.
10 While the type of quantum-spacetime effects considered in the LQG literature makes it natural to question the fate of Lorentz symmetry in the quasi-Minkowski limit, I should stress that at present no fully robust result is available, and some authors (notably Refs. [478, 479]) have observed that there could be ways to reconcile what is presently known about LQG with the presence of exact Lorentz symmetry in the quasi-Minkowski limit.
11 A space with some elements of quantization/discreteness may have classical continuous symmetries, but only if things are arranged in an ad hoc manner: typically, quantization/discretization of spacetime observables does lead to departures from classical spacetime symmetries. So spacetime-symmetry tests should clearly be a core area of quantum-spacetime phenomenology, but they should be pursued with the awareness that spacetime quantization typically, though not automatically, affects spacetime symmetries.
12 In the case of canonical noncommutativity, evidence of some departures from Poincaré symmetry is found both if 𝜃μν is a fixed tensor [213, 516] and if it is a fixed observer-independent matrix [161, 236, 100]. The possibility of preserving classical Poincaré symmetry is instead still not excluded in what was actually the earliest approach [211] based on [xμ,xν] = i𝜃μν, where for 𝜃μν one seeks a formulation with richer algebraic properties.
13 While I shall not discuss it here, because it has so far attracted little interest from a quantum-spacetime perspective, I should encourage readers to also become acquainted with the fact that studies such as the ones in Refs. [196, 192], through the mechanism for violation of the equivalence principle, also provide motivation for “varying coupling constants” [531, 394].
14 Think of the limitations that the speed-of-light limit imposes on certain setups for clock synchronization and of the contexts in which it is impossible to distinguish between a constant acceleration and the presence of a gravitational field.
15 And this assessment does not improve much upon observing that exact supersymmetry could protect against the emergence of any energy density of the sort relevant to such cosmological-constant studies. In fact, nature clearly does not have supersymmetry at least up to the TeV scale, and this would still lead to a natural prediction for the cosmological constant that is some 60 orders of magnitude too high.
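The "60 orders of magnitude" figure follows from comparing a TeV^4 vacuum energy density with the observed dark-energy density, of order (10^−3 eV)^4. A minimal check, with illustrative values not taken from the source:

```python
import math

# Order-of-magnitude check (sketch): vacuum energy density set by the TeV
# scale, rho ~ (1 TeV)^4, versus the observed dark-energy density,
# rho_obs ~ (10^-3 eV)^4 (illustrative rough scale).
tev = 1e12        # 1 TeV in eV
meV_scale = 1e-3  # rough dark-energy scale in eV
orders = math.log10((tev / meV_scale) ** 4)
print(f"discrepancy: ~{orders:.0f} orders of magnitude")
```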
16 As stressed earlier in this section, one can restore a relativistic formulation by appropriately matching the modification of the dispersion relation and a modification of energy-momentum conservation. When the modifications of the dispersion relation and of energy-momentum conservation (even when both present) do not match, one has a framework that requires a preferred frame.
17 While Myers and Pospelov have the merit of having alerted the community to several opportunities and issues within a simple model, we now understand that many of the aspects uncovered through their simple model (such as birefringence) are shared by a more general class of field-theory models with rotationally-invariant operators of odd dimension (see, e.g., Ref. [337]).
18 While some observers understandably argue that the residual grey areas that I discuss force us to still be extremely prudent, even at the present time one could legitimately describe as robust [62] the observational evidence indicating that some γ-rays with energies up to 20 TeV are absorbed by the IR diffuse extragalactic background. And some authors (see, e.g., Ref. [511]) actually see in the presently-available data an even sharper level of agreement with the classical-spacetime picture, which would translate into having already achieved Planck-scale sensitivity.
19 It used to be natural to expect [111] that the highest-energy cosmic rays are indeed protons. However, this is changing rather rapidly in light of recent dedicated studies using Auger data [7, 242, 509], which favor a significant contribution from heavy nuclei. The implications for the Lorentz-symmetry analysis of the differences between protons and heavy nuclei, while significant in the details (see, e.g., Ref. [488]), are not as large as one might naively expect. This is because the photodisintegration threshold happens to be reached when the energy of typical heavy nuclei, such as Fe, is ∼ 5⋅10^19 eV, i.e., just about the value of the photopion-production threshold expected for cosmic-ray protons.
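The ∼ 5⋅10^19 eV figure can be checked against the standard special-relativistic photopion-production threshold for a proton hitting a background photon head-on, E_th = (m_π^2 + 2 m_p m_π)/(4ε). A minimal sketch; the photon energy ε used here is an illustrative value in the tail of the CMB distribution, not taken from the source:

```python
# Rough consistency check (sketch): special-relativistic photopion threshold
# E_th = (m_pi^2 + 2 m_p m_pi) / (4 eps) for p + gamma -> p + pi.
m_p = 9.383e8    # proton mass, eV
m_pi = 1.396e8   # charged-pion mass, eV
eps = 1e-3       # background-photon energy, eV (illustrative CMB-tail value)
E_th = (m_pi**2 + 2 * m_p * m_pi) / (4 * eps)
print(f"E_th ~ {E_th:.1e} eV")  # several times 10^19 eV
```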
20 We are dealing again with the limitations that pure-kinematics particle-reaction analyses suffer when the properties of the incoming particles are not fully under control. The pure kinematics of the PKV0 test theory definitely forbids (for negative η of order 1 and n ≤ 2) pion production resulting from collisions between a 5⋅10^19 eV proton and a CMBR photon. But it allows pion production resulting from collisions between a 5⋅10^19 eV proton and more energetic photons, and in order to exclude that possibility one ends up formulating assumptions about dynamics (the low density of relevant photons may be compensated for by an unexpected increase in cross section).
21 For the related subject of the description of light propagation in models of emergent spacetime, see, e.g., Ref. [275] and references therein.
22 Up to 1997, the distances from gamma-ray bursts to Earth were not established experimentally. Starting with the 1997 result of Ref. [188], we are now able to establish, through a suitable analysis of the gamma-ray-burst “afterglow”, the distance between the gamma-ray bursts and Earth for a significant portion of all detected bursts. Sources at a distance of ∼ 10^10 light years (∼ 10^17 s) are not uncommon.
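The parenthetical conversion (∼ 10^10 light years corresponding to ∼ 10^17 s in c = 1 units) is a simple unit check; a minimal sketch:

```python
# Unit check (sketch): a source at ~10^10 light years corresponds, in c = 1
# units, to a light-travel time of ~10^17 s.
seconds_per_year = 3.156e7      # seconds in one year
t = 1e10 * seconds_per_year     # light-travel time in seconds
print(f"t ~ {t:.1e} s")
```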
23 There are ordinary-physics effects that could be relevant for these analyses, such as ordinary electromagnetic dispersion, but it is easy to show [66, 132] that already at energies of a few GeV these ordinary-physics effects would be negligible with respect to the candidate quantum-gravity effect considered here.
24 Also noteworthy is the analysis reported in Ref. [156], which argues that neutrino oscillations may play a role for other aspects of quantum-spacetime phenomenology, in addition to their use in relation to flavor-dependent Planck-scale modifications of the dispersion relation.
25 This is in part due to the fact that “naive quantum gravity” is not a renormalizable theory, and as a result the restriction to power-counting-renormalizable correction terms (which is standard outside quantum-gravity research) is not necessarily expected to be applicable to quantum-gravity research.
26 A warning to readers: whereas originally the denomination “Standard Model Extension” was universally used to describe a framework implementing the restriction to power-counting-renormalizable correction terms, recently (see, e.g., Ref. [123]) some theorists describe as “Standard Model Extension” the generalization that includes correction terms that are not power-counting renormalizable, while they describe as “Minimal Standard Model Extension” the case with the original restriction to power-counting-renormalizable correction terms. Still, even as I write this review, many authors (in particular the near totality of experimentalists involved in such studies) continue to adopt the original description of the “Standard Model Extension”, restricted to power-counting-renormalizable correction terms, and this may create some confusion (for example, experimentalists reporting results on the “Standard Model Extension” are actually, according to the terminology now used by some theorists, describing experimental limits on the “Minimal Standard Model Extension”).
27 Interestingly, this simple scheme for modeling spacetime-foam effects also provides the basis for the proposal put forward in Refs. [25, 24] of a mechanism that could be responsible for the cosmological matter-antimatter asymmetry.
28 Since modern interferometers were designed to look for classical gravity waves (gravity waves are their sought “signal”), it is reasonable to denominate as “noise” all test-mass-distance fluctuations that are not due to gravity waves. I adopt terminology that reflects the original objectives of modern interferometers, even though it is somewhat awkward for the type of quantum-spacetime-phenomenology studies discussed, in which interferometers would be used for searches of quantum-gravity-induced distance fluctuations (and, therefore, in these studies quantum-gravity-induced distance fluctuations would play the role of “signal”).
29 While most formulas in this review adopt ℏ = c = 1 conventions, I do make some exceptions (with explicit ℏ or c) when I believe it can help to characterize the conceptual ingredients of the formula.
30 The fact that, according to some of these test theories, quantum-spacetime-induced noise becomes increasingly significant as the characteristic frequency of observation is lowered, also opens the way to possible studies [492] using cryogenic resonators, which are rigid optical interferometers with good sensitivities down to frequencies of about 10^−6 Hz.
31 The interplay between violations of Lorentz symmetry and violations of CPT symmetry, which is a part of the objectives being pursued within the context provided by the Standard Model Extension, is also the subject of a lively debate, as can be seen from Refs. [265, 160].
32 In Ref. [541] the main focus was again on atom interferometry, but, following a closely related approach, the very recent Ref. [542] made a proposal for Bose–Einstein condensates.
33 It is not implausible that different particles would be characterized by different values of 𝒦. In particular, this could reflect the expectation that more pointlike particles, such as neutrinos, should be more sensitive to the spacetime-lattice structure than composite particles (such as protons and, even more evidently, atoms). However, it appears natural to assume [396], at least in these first explorations of causal-set phenomenology, that different particles would have values of 𝒦 that are not too far apart.
34 Similar types of “fuzzy metrics” were considered in Refs. [206, 557, 460], but I postpone their discussion to Section 6.2, devoted to quantum-spacetime cosmology.
35 For example, for Planck-length radii one might even imagine satisfying the Bekenstein–Hawking bound S ≤ R^2 in a theory where entropy actually scales with a cubic law of the type S = αR^3, if the proportionality parameter α is such that S = αR^3 ≤ R^2. But evidently for any such α there are values of the radius large enough that the Bekenstein–Hawking bound would instead be violated.
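The footnote's arithmetic is easily made explicit in Planck units: a cubic law S = αR^3 respects the area bound S ≤ R^2 exactly for R ≤ 1/α. A minimal sketch, with a hypothetical value of α:

```python
# Illustration (Planck units, alpha hypothetical): a cubic entropy law
# S = alpha * R**3 respects the area bound S <= R**2 only for R <= 1/alpha.
alpha = 1e-3  # hypothetical proportionality constant

def respects_bound(R):
    """True if the cubic entropy at radius R satisfies S <= R**2."""
    return alpha * R**3 <= R**2

print(respects_bound(10.0))  # True: 10 <= 1/alpha = 1000
print(respects_bound(1e4))   # False: bound violated at large enough radius
```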
36 I set aside here, as in Refs. [155, 69], the possibility of a helicity dependence [34] of the effects.
37 There is no consistent adoption of conventions for notation in Refs. [33, 34, 154, 155, 69]. I am adopting here a notation that appears to render more transparent the connection between arguments for UV/IR mixing from LQG and arguments for UV/IR mixing from spacetime noncommutativity.
38 As already stressed, this is not necessarily a natural expectation. In particular, several arguments appear to suggest that composite particles may be less sensitive to quantum-spacetime effects than “fundamental” particles. Still, it is interesting to study this issue experimentally, and a way to do that is to look for opportunities for the “universality assumption” to break down.
39 I am describing the analysis of Ref. [154] using the notation I adopted throughout this section. Instead of ξ, Ref. [154] used a parameter λ linked to ξ by λ = −ξ m_ν^2/E_p.
40 Though not relevant to my review, it is interesting to note that, besides constraining certain quantum pictures of spacetime, the measurement results reported in Refs. [428, 429] can also be used to set an upper limit on an additional short-range fundamental force [429].