3 Characteristic Evolution Codes

3.1 One+One Dimensional Codes

It is now often said that the solution of the general ordinary differential equation is essentially known, in light of the success of computational algorithms and present day computing power. Perhaps this is an overstatement because investigating singular behavior is still an art. But, in the same vein, it is fair to say that the general system of hyperbolic partial differential equations in one spatial dimension is a solved problem. At least, it seems to be true in general relativity.

One of the earliest characteristic evolution codes was constructed by Corkill and Stewart [19, 23] to treat space-times with two Killing vectors. The grid was based upon a double null coordinate system, with the null hypersurfaces intersecting in the surfaces spanned by the Killing vectors. This allowed simulation of colliding plane waves (as well as of the Schwarzschild solution). They were able to evolve the Khan-Penrose [24] collision of impulsive (δ-function curvature) plane waves to within a few numerical zones of the singularity that forms after the collision, with extremely close agreement with the analytic results. Their simulations of collisions with more general waveforms, for which exact solutions are not known, have provided input to the theoretical understanding of singularity formation in this problem.

Most of the 1+1 dimensional applications of characteristic methods have been for spherically symmetric systems. Here, by Birkhoff's theorem, matter must be included in order to make the system non-Schwarzschild. Except for some trivial applications to evolve dust cosmologies by characteristic evolution, matter has been represented by a massless Klein-Gordon field. This allows simulation of radiation effects in the simple context of spherical symmetry, and highly interesting results have been found this way.

On the analytic side, working in a characteristic initial value formulation based upon outgoing null cones, Christodoulou made a penetrating study of the existence and uniqueness of solutions to this problem [25, 26, 27, 28]. He showed that weak initial data evolve to Minkowski space asymptotically in time, but that sufficiently strong data form a horizon, with nonzero Bondi mass. In the latter case, he showed that the geometry is asymptotically Schwarzschild in the approach to i⁺ (future time infinity) from outside the horizon, thus establishing a rigorous version of the no-hair theorem. What this analytic tour-de-force did not reveal was the remarkable critical behavior in the transition between these two regimes, which was discovered by Choptuik [29, 30] using computational simulation.

A characteristic evolution algorithm for this problem centers on the evolution scheme for the scalar field, which constitutes the only dynamical field. Given the scalar field, all gravitational quantities can be determined by integration along the characteristics of the null foliation. But this is a coupled problem, since the scalar wave equation involves the curved space metric. It provides a good illustration of how null algorithms lead to a hierarchy of equations which can be integrated along the characteristics to effectively decouple the hypersurface and dynamical variables.

In a Bondi coordinate system based upon outgoing null hypersurfaces u = const, the metric is

\[
  ds^2 = -\,e^{2\beta}\,\frac{V}{r}\,du^2 - 2\,e^{2\beta}\,du\,dr + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right). \tag{3}
\]

Smoothness at r = 0 allows the coordinate conditions

\[
  V(u,r) = r + \mathcal{O}(r^3), \qquad \beta(u,r) = \mathcal{O}(r^2). \tag{4}
\]

The field equations consist of the wave equation □Φ = 0 for the scalar field and two hypersurface equations for the metric functions:

\[
  \beta_{,r} = 2\pi\, r\,(\Phi_{,r})^2, \tag{5}
\]

\[
  V_{,r} = e^{2\beta}. \tag{6}
\]

The wave equation can be expressed in the form

\[
  \Box^{(2)} g = e^{-2\beta}\,\frac{g}{r}\left(\frac{V}{r}\right)_{,r}, \tag{7}
\]

where g ≡ rΦ and □^{(2)} is the D'Alembertian associated with the two dimensional submanifold spanned by the ingoing and outgoing null geodesics. Initial null data for evolution consists of Φ(u₀, r) at initial retarded time u = u₀.
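For reference, writing out □^{(2)} explicitly in the (u, r) coordinates of Eq. (3) gives

\[
  e^{2\beta}\,\Box^{(2)} g \;=\; -2\,g_{,ur} + \left(\frac{V}{r}\,g_{,r}\right)_{,r}\,,
\]

so that Eq. (7) is the compact form of the radial-retarded-time equation

\[
  2\,g_{,ur} \;=\; \left(\frac{V}{r}\,g_{,r}\right)_{,r} - \frac{g}{r}\left(\frac{V}{r}\right)_{,r}\,.
\]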

Because any two dimensional geometry is conformally flat, the surface integral of □^{(2)} g over a null parallelogram Σ gives exactly the same result as in a flat 2-space, and leads to an integral identity upon which a simple evolution algorithm can be based [31]. Let the vertices of the null parallelogram be labeled by N, E, S and W corresponding, respectively, to their relative locations North, East, South and West in the 2-space. Upon integration of (7), curvature introduces an area integral correction to the flat space null parallelogram relation between the values of g at the vertices:

\[
  g_N - g_E - g_W + g_S \;=\; -\,\frac{1}{2}\int_\Sigma du\,dr\, \left(\frac{V}{r}\right)_{,r}\frac{g}{r}\,. \tag{8}
\]

This identity, in one form or another, lies behind all of the null evolution algorithms that have been applied to this system. The prime distinction between the different algorithms is whether they are based upon double null coordinates or Bondi coordinates as in Eq. (3). When a double null coordinate system is adopted, the points N, E, S and W can be located in each computational cell at grid points, so that evaluation of the left hand side of Eq. (8) requires no interpolation. As a result, in flat space, where the right hand side of Eq. (8) vanishes, it is possible to formulate an exact evolution algorithm. In curved space, of course, there is truncation error arising from the approximation of the integral by evaluating the integrand at the center of Σ.
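The exactness in flat space can be made explicit. In double null coordinates (u, v), the flat-space reduced wave equation is simply g_{,uv} = 0, and the integral of g_{,uv} over the parallelogram telescopes to the vertex values,

\[
  \int_\Sigma g_{,uv}\, du\, dv \;=\; g_N - g_E - g_W + g_S\,,
\]

so that, with N, E, S and W at grid points, the marching relation g_N = g_E + g_W − g_S holds without truncation error.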

The identity (8) gives rise to the following explicit marching algorithm. Let the null parallelogram lie at some fixed θ and φ and span adjacent retarded time levels u₀ and u₀ + Δu. Imagine for now that the points N, E, S and W lie on the spatial grid, with r_N − r_W = r_E − r_S = Δr. If g has been determined on the initial cone u₀, which contains the points E and S, and radially outward from the origin to the point W on the next cone u₀ + Δu, then Eq. (8) determines g at the next radial grid point N in terms of an integral over Σ. The integrand can be approximated to second order, i.e. to O(Δr Δu), by evaluating it at the center of Σ. To this same accuracy, the value of g at the center equals its average between the points E and W, at which g has already been determined. Similarly, the value of (V/r)_{,r} at the center of Σ can be approximated to second order in terms of values of V at points where it can be determined by integrating the hypersurface equations (5) and (6) radially outward from r = 0.
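To fix ideas, the radial integration of the hypersurface equations and the parallelogram update can be sketched as follows. This is a minimal Python illustration of the scheme just described, with illustrative function and variable names and a simple trapezoidal quadrature standing in for the second-order integrations; it is not taken from any of the cited codes.

    import numpy as np

    def integrate_hypersurface(r, phi):
        # Integrate the hypersurface equations (5) and (6) outward along one
        # null cone u = const, given the scalar field phi(r) on that cone:
        #   beta_{,r} = 2*pi*r*(phi_{,r})^2,  with beta(u,0) = 0,
        #   V_{,r}    = exp(2*beta),          with V = r + O(r^3) near r = 0.
        dphi_dr = np.gradient(phi, r)
        rhs = 2.0 * np.pi * r * dphi_dr**2
        beta = np.zeros_like(r)
        V = np.zeros_like(r)
        for i in range(1, len(r)):
            dr = r[i] - r[i - 1]
            beta[i] = beta[i - 1] + 0.5 * dr * (rhs[i - 1] + rhs[i])
            V[i] = V[i - 1] + 0.5 * dr * (np.exp(2.0 * beta[i - 1])
                                          + np.exp(2.0 * beta[i]))
        return beta, V

    def advance_cell(g_E, g_S, g_W, r_c, du, dr, dVr_dr_c):
        # One explicit step of the null-parallelogram scheme, Eq. (8):
        #   g_E, g_S  -- g = r*Phi at the E and S vertices on the cone u_0,
        #   g_W       -- g at the W vertex on the cone u_0 + du,
        #   r_c       -- radius of the centre of the parallelogram,
        #   dVr_dr_c  -- (V/r)_{,r} at the centre, from the hypersurface data,
        #   du, dr    -- retarded-time and radial extents of the parallelogram.
        g_c = 0.5 * (g_E + g_W)                 # g at the centre, to 2nd order
        correction = 0.5 * du * dr * dVr_dr_c * g_c / r_c
        return g_E + g_W - g_S - correction     # g at the N vertex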

After carrying out this procedure to evaluate g at the point N, it can be iterated to determine g at the next radially outward grid point on the u₀ + Δu level. Upon completing this radial march to null infinity, in terms of a compactified radial coordinate such as x = r/(1+r), the field g is then evaluated on the next null cone at u₀ + 2Δu, beginning at the vertex where smoothness gives the startup condition g(u,0) = 0.
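As a small illustration of this compactification (the grid size below is an arbitrary choice, not one used in the cited work), a uniform grid in x places null infinity at the outermost point and the vertex startup condition at the innermost one:

    import numpy as np

    nx = 513                           # illustrative number of radial zones
    x = np.linspace(0.0, 1.0, nx)      # compactified coordinate x = r/(1+r)
    with np.errstate(divide="ignore"):
        r = x / (1.0 - x)              # x = 1 corresponds to r = infinity
    g = np.empty(nx)
    g[0] = 0.0                         # smoothness at the vertex: g(u, 0) = 0;
                                       # the remaining values are filled by the
                                       # radial march of Eq. (8)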

In the compactified Bondi formalism, the vertices N, E, S and W of the null parallelogram Σ cannot be chosen to lie exactly on the grid because, even in Minkowski space, the velocity of light in terms of a compactified radial coordinate x is not constant. As a consequence, the fields g, β and V at the vertices of Σ are approximated to second order accuracy by interpolating between grid points. However, cancellations arise between these four interpolations so that Eq. (8) is satisfied to fourth order accuracy. The net result is that the finite difference version of (8) steps g radially outward one zone with an error of fourth order in grid size, O(Δ⁴). In addition, the smoothness conditions (4) can be incorporated into the startup for the numerical integrations for V and β to ensure no loss of accuracy in starting up the march at r = 0. The resulting global error in g, after evolving a finite retarded time, is then O(Δ²), after compounding errors from the roughly 1/(Δr Δu) zones.

Because of the explicit nature of this algorithm, its stability requires an analogue of the Courant-Friedrichs-Lewy (CFL) condition that the physical domain of dependence be contained in the numerical domain of dependence. In the present spherically symmetric case, this condition requires that the ratio of the time step to radial step be limited by (V/r) Δu ≤ 2 Δr, where the bound is set by the maximum of V/r on the cone. This condition can be built into the code using the value of V/r at ℐ⁺ (null infinity), where that maximum is attained. The strongest restriction on the time step then arises just before the formation of a horizon, where V/r → ∞ at ℐ⁺. This infinite redshift provides a mechanism for locating the true event horizon ``on the fly'' and restricting the evolution to the exterior space-time. Points near ℐ⁺ must be dropped in order to evolve across the horizon in this gauge.
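A time-step control of this kind might be sketched as follows (again an illustrative Python fragment, with the safety factor an assumed tuning parameter rather than a value from the cited codes):

    import numpy as np

    def cfl_time_step(V, r, dr, safety=0.5):
        # Retarded-time step from the CFL-type bound (V/r) du <= 2 dr.
        # The maximum of V/r on the cone occurs at the outer boundary (null
        # infinity); as a horizon forms it grows without bound, driving du
        # toward zero and freezing the evolution outside the horizon.
        Vr_max = np.max(V[1:] / r[1:])     # skip r = 0, where V/r -> 1
        return safety * 2.0 * dr / Vr_max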

Such algorithms have been applied to many interesting problems. A characteristic algorithm based upon double null coordinates was used by Goldwirth and Piran in a study of cosmic censorship [32]. Their early study lacked the sensitivity of adaptive mesh refinement, which later enabled Choptuik to discover the critical phenomena appearing in this problem. The most accurate global treatment of this problem using a Cauchy code has been achieved by Marsa and Choptuik [33] using ingoing Eddington-Finkelstein coordinates. This use of a null based time slicing enabled them to avoid problems with the singularity by excising the black hole interior and to construct a 1D code that runs forever.

Gómez and Winicour [31] constructed a characteristic code for this problem based upon the compactified Bondi formalism outlined above. Studies with the code revealed interesting high amplitude behavior under the rescaling Φ → aΦ. As a → ∞, the redshift creates an effective boundary layer at ℐ⁺ which causes the Bondi mass M and the scalar field monopole moment Q to satisfy a linear relation, rather than the quadratic relation of the weak field limit [34]. This can also be established analytically, so that the high amplitude limit provides a check on the code's ability to handle strongly nonlinear fields. In the small amplitude case, this work incorrectly reported that the radiation tails from black hole formation had an exponential decay characteristic of quasinormal modes, rather than the polynomial 1/t or 1/t² falloff expected from Price's [35] work on perturbations of Schwarzschild black holes. In hindsight, the error here was a lack of confidence to run the code sufficiently long to see the proper late time behavior.

Gundlach, Price and Pullin [36, 37] subsequently reexamined the issue of power law tails using a double null code similar to that developed by Goldwirth and Piran. Their numerical simulations verified the existence of power law tails in the full nonlinear case, thus establishing consistency with analytic perturbative theory. They also found quasinormal ringing at intermediate times, again with properties in accord with the perturbative results. The consistency they found between perturbation theory and numerical evolution of the nonlinear system is very reassuring: there is a region of space-time where the results of linearized theory are remarkably reliable even though highly nonlinear behavior is taking place elsewhere. These results have led to a methodology that has application beyond the confines of spherically symmetric problems, most notably in the ``close approximation'' for the binary black hole problem [38]. Power law tails and quasinormal ringing were also confirmed using Cauchy evolution [33].

The study of the radiation tail decay was subsequently extended by Gómez, Schmidt and Winicour [39] using a code based upon the null cone-worldtube version of the Bondi formalism. They showed that the Newman-Penrose constant [40] for the scalar field is the factor which determines the exponent of the power law (not the static monopole moment, as often stated). When this constant is non-zero, the tail decays as 1/t, as opposed to the 1/t² decay for the vanishing case. (There are also 1/t³ corrections, in addition to the exponentially decaying contributions of the quasinormal modes.) This code was also used to study the instability of a topological kink in the configuration of the scalar field [41]. The kink instability provides the simplest example of the turning point instability [42, 43] which underlies gravitational collapse of static equilibria.

Hamadé and Stewart [44] have applied a double null code to the problem of critical phenomena. In order to obtain the accuracy necessary to confirm Choptuik's results they developed the first example of a characteristic grid with adaptive mesh refinement (AMR). They did this with both the standard Berger and Oliger algorithm and their own simplified version. These different versions of AMR gave indistinguishable numerical results. Their simulations of critical collapse of a scalar field agree with Choptuik's values for the universal parameters governing mass scaling and display the echoing associated with discrete self-similarity. Hamadé, Horne and Stewart [45] extended this study to the spherical collapse of an axion/dilaton system and found in this case that the self-similarity was a continuous symmetry of the critical solution.

The Southampton group has also constructed a 1+1 dimensional characteristic code for space-times with cylindrical symmetry [46, 47]. Their motivation was not to produce a stand-alone code for the study of cylindrically symmetric relativity but rather to use it as a test case for combining Cauchy and characteristic codes into a global scheme. Their work will be discussed later in this review under Cauchy-characteristic matching.


