One of the earliest characteristic evolution codes was constructed by Corkill and Stewart [19, 23] to treat space-times with two Killing vectors. The grid was based upon a double null coordinate system, with the null hypersurfaces intersecting in the surfaces spanned by the Killing vectors. This allowed simulation of colliding plane waves (as well as the Schwarzschild solution). They were able to evolve the Khan-Penrose collision of impulsive (δ-function curvature) plane waves to within a few numerical zones from the singularity that forms after the collision, with extremely close agreement with the analytic results. Their simulations of collisions with more general waveforms, for which exact solutions are not known, have provided input to the theoretical understanding of singularity formation in this problem.
Most of the 1+1 dimensional applications of characteristic methods have been for spherically symmetric systems. Here matter must be included in order to make the system non-Schwarzschild. Except for some trivial applications to evolve dust cosmologies by characteristic evolution, matter has been represented by a massless Klein-Gordon field. This allows simulation of radiation effects in the simple context of spherical symmetry. Highly interesting results have been found this way.
On the analytic side, working in a characteristic initial value formulation based upon outgoing null cones, Christodoulou made a penetrating study of the existence and uniqueness of solutions to this problem [25, 26, 27, 28]. He showed that weak initial data evolve to Minkowski space asymptotically in time, but that sufficiently strong data form a horizon, with nonzero Bondi mass. In the latter case, he showed that the geometry is asymptotically Schwarzschild in the approach to i^+ (future time infinity) from outside the horizon, thus establishing a rigorous version of the no-hair theorem. What this analytic tour-de-force did not reveal was the remarkable critical behavior in the transition between these two regimes, which was discovered by Choptuik [29, 30] using computational simulation.
A characteristic evolution algorithm for this problem centers about the evolution scheme for the scalar field, which constitutes the only dynamical field. Given the scalar field, all gravitational quantities can be determined by integration along the characteristics of the null foliation. But this is a coupled problem, since the scalar wave equation involves the curved space metric. It provides a good illustration of how null algorithms lead to a hierarchy of equations which can be integrated along the characteristics to effectively decouple the hypersurface and dynamical variables.
In a Bondi coordinate system based upon outgoing null hypersurfaces u = const, the metric is

ds^2 = -e^{2\beta}\frac{V}{r}\,du^2 - 2e^{2\beta}\,du\,dr + r^2\,(d\theta^2 + \sin^2\theta\,d\phi^2). \qquad (3)
Smoothness at r = 0 allows the coordinate conditions

\beta(u,0) = 0, \qquad V(u,0) = 0, \qquad V_{,r}(u,0) = 1. \qquad (4)
The field equations consist of the curved space wave equation \Box\Phi = 0 for the scalar field \Phi and two hypersurface equations for the metric functions:

\beta_{,r} = 2\pi r\,(\Phi_{,r})^2, \qquad (5)

V_{,r} = e^{2\beta}. \qquad (6)
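The hierarchical structure is worth seeing explicitly: given Φ on one null cone, β follows by a radial quadrature and then V follows from β, with the regularity conditions β(u,0) = 0, V(u,0) = 0 at the vertex. Below is a minimal Python sketch of this outward integration, assuming the standard spherically symmetric hypersurface equations β_,r = 2πr(Φ_,r)² and V_,r = e^{2β}; the function and variable names, grid, and trapezoidal quadrature are my own choices, not from the text.

```python
import numpy as np

def hypersurface_integration(r, phi):
    """Integrate the hypersurface hierarchy outward along one null cone.

    Given Phi(u0, r), beta follows from beta_,r = 2 pi r (Phi_,r)^2 and
    then V from V_,r = exp(2 beta), starting from the regular vertex
    values beta(u,0) = 0, V(u,0) = 0.  (Second order accurate sketch
    using trapezoidal quadrature.)
    """
    dphi = np.gradient(phi, r)                     # Phi_,r on the cone
    db = 2.0*np.pi*r*dphi**2                       # beta_,r
    beta = np.concatenate(([0.0],
        np.cumsum(0.5*(db[1:] + db[:-1])*np.diff(r))))
    dV = np.exp(2.0*beta)                          # V_,r
    V = np.concatenate(([0.0],
        np.cumsum(0.5*(dV[1:] + dV[:-1])*np.diff(r))))
    return beta, V

r = np.linspace(0.0, 10.0, 2001)
beta0, V0 = hypersurface_integration(r, np.zeros_like(r))  # vacuum: V = r
beta1, V1 = hypersurface_integration(r, 0.1*np.exp(-(r - 3.0)**2))
```

In vacuum (Φ = 0) the quadratures return β = 0 and V = r, i.e. Minkowski space, which serves as a basic sanity check; a nontrivial pulse makes β, and hence V_,r, strictly larger.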
The wave equation can be expressed in the form

\Box^{(2)} g = e^{-2\beta}\,\frac{g}{r}\left(\frac{V}{r}\right)_{,r}, \qquad (7)

where g = r\Phi and \Box^{(2)} is the D'Alembertian associated with the two dimensional submanifold spanned by the ingoing and outgoing null geodesics. Initial null data for evolution consists of \Phi(u_0, r) at the initial retarded time u = u_0.
Because any two dimensional geometry is conformally flat, the surface integral of \Box^{(2)} g over a null parallelogram \mathcal{A} gives exactly the same result as in a flat 2-space, and leads to an integral identity upon which a simple evolution algorithm can be based. Let the vertices of the null parallelogram be labeled by N, E, S and W corresponding, respectively, to their relative locations North, East, South and West in the 2-space. Upon integration of (7), curvature introduces an area integral correction to the flat space null parallelogram relation between the values of g at the vertices:

g_N = g_E + g_W - g_S - \frac{1}{2}\int_{\mathcal{A}} du\,dr\,\left(\frac{V}{r}\right)_{,r}\frac{g}{r}. \qquad (8)
This identity, in one form or another, lies behind all of the null evolution algorithms that have been applied to this system. The prime distinction between the different algorithms is whether they are based upon double null coordinates or Bondi coordinates as in Eq. (3). When a double null coordinate system is adopted, the points N, E, S and W can be located in each computational cell at grid points, so that evaluation of the left hand side of Eq. (8) requires no interpolation. As a result, in flat space, where the right hand side of Eq. (8) vanishes, it is possible to formulate an exact evolution algorithm. In curved space, of course, there is truncation error arising from the approximation of the integral by evaluating the integrand at the center of \mathcal{A}.
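The flat space exactness is easy to verify numerically. The following Python sketch marches the flat wave equation 2g_{,ur} = g_{,rr} (the g = rΦ form of □Φ = 0 in Minkowski space) using the parallelogram relation g_N = g_E + g_W − g_S on a uniform grid with Δu = 2Δr, for which all four vertices land on grid points; the grid sizes and the Gaussian profile F are arbitrary choices of mine. Agreement with the exact solution g = F(u) − F(u + 2r) is then limited only by rounding error.

```python
import numpy as np

# Arbitrary smooth wave profile; g(u, r) = F(u) - F(u + 2r) solves the
# flat wave equation 2 g_,ur = g_,rr and satisfies g(u, 0) = 0.
F = lambda s: np.exp(-(s - 4.0)**2)
g_exact = lambda u, r: F(u) - F(u + 2.0*r)

dr = 0.01
nr = 1000                    # radial zones on the initial cone
nu = 400                     # number of retarded time steps
du = 2.0*dr                  # places all four vertices on grid points

g = g_exact(0.0, dr*np.arange(nr))   # initial null data on the cone u = 0

for n in range(nu):
    g_new = np.empty(nr - 1)
    g_new[0] = 0.0           # startup condition g(u, 0) = 0 at the vertex
    for j in range(1, nr - 1):
        # N = (u+du, r_j), E = (u, r_{j+1}), W = (u+du, r_{j-1}), S = (u, r_j)
        g_new[j] = g[j+1] + g_new[j-1] - g[j]
    g = g_new
    nr -= 1                  # the outermost point is lost on each new cone

err = np.max(np.abs(g - g_exact(nu*du, dr*np.arange(nr))))
print(err)                   # agreement at the level of rounding error
```

Each new cone loses its outermost point because the E vertex must already be known one zone further out, so a finite uncompactified grid shrinks as the evolution proceeds; this is one motivation for the compactified treatment described below.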
The identity (8) gives rise to the following explicit marching algorithm. Let the null parallelogram lie at some fixed \theta and \phi and span adjacent retarded time levels u_n and u_{n+1}. Imagine for now that the points N, E, S and W lie on the spatial grid, with r_N - r_W = r_E - r_S = \Delta r. If g has been determined on the initial cone u_n, which contains the points E and S, and radially outward from the origin to the point W on the next cone u_{n+1}, then Eq. (8) determines g at the next radial grid point N in terms of an integral over \mathcal{A}. The integrand can be approximated to second order, i.e. to O(\Delta r\,\Delta u), by evaluating it at the center of \mathcal{A}. To this same accuracy, the value of g at the center equals its average between the points E and W, at which g has already been determined. Similarly, the value of (V/r)_{,r} at the center of \mathcal{A} can be approximated to second order in terms of values of V at points where it can be determined by integrating the hypersurface equations (5) and (6) radially outward from r = 0.
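A single marching step of this kind can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and argument layout are mine, the area integral is approximated by Δu Δr times the integrand at the cell center, and the correction is assumed to enter Eq. (8) with the sign −(1/2)∫(V/r)_{,r}(g/r) du dr.

```python
def march_step(g_E, g_W, g_S, Vr_r_center, r_center, du, dr):
    """Return g_N from the values at the E, W and S vertices of the
    null parallelogram, per the integral identity (8).

    Vr_r_center : (V/r)_,r evaluated at the center of the parallelogram
                  (obtained from the hypersurface-integrated V)
    r_center    : r at the center of the parallelogram
    du, dr      : coordinate extents of the parallelogram
    """
    g_center = 0.5*(g_E + g_W)        # second order accurate average
    correction = 0.5*du*dr*Vr_r_center*g_center/r_center
    return g_E + g_W - g_S - correction

# In flat space V = r, so (V/r)_,r = 0 and the step reduces to the
# exact flat identity g_N = g_E + g_W - g_S:
g_N_flat = march_step(1.0, 2.0, 0.5, 0.0, 1.0, 0.01, 0.01)   # -> 2.5
```

Evaluating the integrand once at the cell center keeps the local error at the fourth order needed for a globally second order accurate march.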
After carrying out this procedure to evaluate g at the point N, the procedure can then be iterated to determine g at the next radially outward grid point on the u_{n+1} level. Upon completing this radial march to null infinity, in terms of a compactified radial coordinate such as x = r/(1+r), the field g is then evaluated on the next null cone at u_{n+2}, beginning at the vertex r = 0 where smoothness gives the startup condition g(u,0) = 0.
In the compactified Bondi formalism, the vertices N, E, S and W of the null parallelogram cannot be chosen to lie exactly on the grid because, even in Minkowski space, the velocity of light in terms of a compactified radial coordinate x is not constant. As a consequence, the fields g, \beta and V at the vertices of \mathcal{A} are approximated to second order accuracy by interpolating between grid points. However, cancellations arise between these four interpolations so that Eq. (8) is satisfied to fourth order accuracy. The net result is that the finite difference version of (8) steps g radially outward one zone with an error of fourth order in the grid size \Delta. In addition, the smoothness conditions (4) can be incorporated into the startup for the numerical integrations for V and \beta to ensure no loss of accuracy in starting up the march at r = 0. The resulting global error in g, after evolving a finite retarded time, is then second order, O(\Delta^2), after compounding errors from the O(1/\Delta^2) number of zones.
Because of the explicit nature of this algorithm, its stability requires an analogue of the Courant-Friedrichs-Lewy (CFL) condition that the physical domain of dependence be contained in the numerical domain of dependence. In the present spherically symmetric case, ingoing null rays of the metric (3) satisfy 2\,dr/du = -V/r, so this condition requires that the ratio of the time step to radial step be limited by (V/r)\,\Delta u \le 2\,\Delta r. This condition can be built into the code using the value of V/r at \mathcal{I}^+ (null infinity), where V/r attains its maximum on each cone. The strongest restriction on the time step then arises just before the formation of a horizon, where V/r \to \infty at \mathcal{I}^+. This infinite redshift provides a mechanism for locating the true event horizon ``on the fly'' and restricting the evolution to the exterior space-time. Points near \mathcal{I}^+ must be dropped in order to evolve across the horizon in this gauge.
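In code, this bound can be re-evaluated at every time step from the hypersurface-integrated V. The sketch below is a hedged illustration: the function name, the safety factor, and the regularization of V/r at the vertex are my own conventions, not from the text.

```python
import numpy as np

def cfl_time_step(r, V, dr, safety=0.5):
    """Largest stable retarded time step for the explicit march.

    Ingoing null rays travel at |dr/du| = V/(2r) in Bondi coordinates,
    so the numerical domain of dependence contains the physical one
    when du <= 2 dr / max(V/r).  A conventional safety margin < 1 is
    applied on top of the bound.
    """
    # V/r -> 1 at a regular vertex, so substitute 1 there by hand.
    Vr = np.where(r > 0.0, V/np.maximum(r, 1e-300), 1.0)
    return safety*2.0*dr/np.max(Vr)

r = np.linspace(0.0, 10.0, 101)
du = cfl_time_step(r, r, r[1] - r[0])   # flat space: V = r, max(V/r) = 1
```

Because max(V/r) occurs at the outermost retained point, dropping points near null infinity as a horizon forms directly relaxes this bound, which is how the evolution can be continued in this gauge.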
Such algorithms have been applied to many interesting problems. A characteristic algorithm based upon double null coordinates was used by Goldwirth and Piran in a study of cosmic censorship. Their early study lacked the sensitivity of adaptive mesh refinement which later enabled Choptuik to discover the critical phenomena appearing in this problem. The most accurate global treatment of this problem using a Cauchy code has been achieved by Marsa and Choptuik using ingoing Eddington-Finkelstein coordinates. This use of a null based time slicing enabled them to avoid problems with the singularity by excising the black hole interior and to construct a 1D code that runs forever.
Gómez and Winicour constructed a characteristic code for this problem based upon the compactified Bondi formalism outlined above. Studies with the code revealed interesting high amplitude behavior under the rescaling \Phi \to a\Phi. As a \to \infty, the red shift creates an effective boundary layer at \mathcal{I}^+ which causes the Bondi mass and the scalar field monopole moment Q to be related linearly, rather than by the quadratic relation of the weak field limit. This can also be established analytically, so that the high amplitude limit provides a check on the code's ability to handle strongly nonlinear fields. In the small amplitude case, this work incorrectly reported that the radiation tails from black hole formation had an exponential decay characteristic of quasinormal modes, rather than the polynomial 1/t or 1/t^2 falloff expected from Price's work on perturbations of Schwarzschild black holes. In hindsight, the error here was not having the confidence to run the code sufficiently long to see the proper late time behavior.
Gundlach, Price and Pullin [36, 37] subsequently reexamined the issue of power law tails using a double null code similar to that developed by Goldwirth and Piran. Their numerical simulations verified the existence of power law tails in the full nonlinear case, thus establishing consistency with analytic perturbative theory. They also found normal mode ringing at intermediate time, again with properties in accord with the perturbative results. The consistency they found between perturbation theory and numerical evolution of the nonlinear system is very reassuring. There is a region of space-time where the results of linearized theory are remarkably reliable even though highly nonlinear behavior is taking place elsewhere. These results have led to a methodology that has application beyond the confines of spherically symmetric problems, most notably in the ``close approximation'' for the binary black hole problem . Power law tails and quasinormal ringing were also confirmed using Cauchy evolution .
The study of the radiation tail decay was subsequently extended by Gómez, Schmidt and Winicour using a code based upon the null cone-worldtube version of the Bondi formalism. They showed that the Newman-Penrose constant for the scalar field is the factor which determines the exponent of the power law (not the static monopole moment as often stated). When this constant is non-zero, the tail decays as 1/t, as opposed to the faster 1/t^2 decay for the case in which it vanishes. (There are also further subdominant corrections in addition to the exponentially decaying contributions of the quasinormal modes.) This code was also used to study the instability of a topological kink in the configuration of the scalar field. The kink instability provides the simplest example of the turning point instability [42, 43] which underlies gravitational collapse of static equilibria.
Hamadé and Stewart  have applied a double null code to the problem of critical phenomena. In order to obtain the accuracy necessary to confirm Choptuik's results they developed the first example of a characteristic grid with adaptive mesh refinement (AMR). They did this with both the standard Berger and Oliger algorithm and their own simplified version. These different versions of AMR gave indistinguishable numerical results. Their simulations of critical collapse of a scalar field agree with Choptuik's values for the universal parameters governing mass scaling and display the echoing associated with discrete self-similarity. Hamadé, Horne and Stewart  extended this study to the spherical collapse of an axion/dilaton system and found in this case that the self-similarity was a continuous symmetry of the critical solution.
The Southampton group has also constructed a 1+1 dimensional characteristic code for space-times with cylindrical symmetry [46, 47]. Their motivation was not to produce a stand-alone code for the study of cylindrically symmetric relativity but rather to use it as a test case for combining Cauchy and characteristic codes into a global scheme. Their work will be discussed later in this review under Cauchy-characteristic matching.
Characteristic Evolution and Matching
© Max-Planck-Gesellschaft. ISSN 1433-8351