4.2 Geometrical formalism

The PITT code uses a standard Bondi–Sachs null coordinate system,

ds^2 = -\left( e^{2\beta} \frac{V}{r} - r^2 h_{AB} U^A U^B \right) du^2 - 2 e^{2\beta}\, du\, dr - 2 r^2 h_{AB} U^B\, du\, dx^A + r^2 h_{AB}\, dx^A dx^B ,   (23)
where \det(h_{AB}) = \det(q_{AB}) for some standard choice q_{AB} of the unit sphere metric. This generalizes Equation (13) to the three-dimensional case. In the characteristic version of Einstein's equations, the hypersurface equations derive from the G_\mu^{\ \nu} \nabla_\nu u components of the Einstein tensor. They take the explicit form [248]
\beta_{,r} = \frac{1}{16} r\, h^{AC} h^{BD} h_{AB,r} h_{CD,r} ,   (24)

\left( r^4 e^{-2\beta} h_{AB} U^B_{\ ,r} \right)_{,r} = 2 r^4 \left( r^{-2} \beta_{,A} \right)_{,r} - r^2 h^{BC} D_C \left( h_{AB,r} \right) ,   (25)

2 e^{-2\beta} V_{,r} = \mathcal{R} - 2 D^A D_A \beta - 2 \left( D^A \beta \right) D_A \beta + r^{-2} e^{-2\beta} D_A \left( r^4 U^A \right)_{,r} - \frac{1}{2} r^4 e^{-4\beta} h_{AB} U^A_{\ ,r} U^B_{\ ,r} ,   (26)
where D_A is the covariant derivative and \mathcal{R} the curvature scalar of the conformal 2-metric h_{AB} of the r = const. surfaces, and capital Latin indices are raised and lowered with h_{AB}. Given the null data h_{AB} on an outgoing null hypersurface, this hierarchy of equations can be integrated radially in order to determine \beta, U^A and V on the hypersurface in terms of integration constants on an inner boundary. The evolution equations for the u-derivative of the null data derive from the trace-free part of the angular components of the Einstein tensor, i.e. the components m^A m^B G_{AB}, where h^{AB} = 2 m^{(A} \bar{m}^{B)}. They take the explicit form
m^A m^B \Big( \left( r h_{AB,u} \right)_{,r} - \frac{1}{2r} \left( r V h_{AB,r} \right)_{,r} - \frac{2}{r} e^{\beta} D_A D_B e^{\beta} + r h_{AC} D_B \left( U^C_{\ ,r} \right) - \frac{r^3}{2} e^{-2\beta} h_{AC} h_{BD} U^C_{\ ,r} U^D_{\ ,r} + 2 D_A U_B + \frac{r}{2} h_{AB,r} D_C U^C + r U^C D_C \left( h_{AB,r} \right) + r h_{AD,r} h^{CD} \left( D_B U_C - D_C U_B \right) \Big) = 0 .   (27)
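The structure of this radial march can be illustrated with a scalar toy model. The sketch below (a hypothetical stand-in for the tensorial Eq. (24), not the PITT code's actual update) integrates the first hypersurface equation outward by the trapezoid rule, with the null data entering through a prescribed radial derivative:

```python
import numpy as np

def integrate_hierarchy(r, h_r, beta0=0.0):
    """Toy model of the radial hypersurface hierarchy: integrate
    beta_,r = (r/16) * h_r**2 outward by the trapezoid rule, with the
    'null data' derivative h_r given on the radial grid r. A scalar
    stand-in for Eq. (24); beta0 is the integration constant supplied
    on the inner boundary."""
    integrand = (r / 16.0) * h_r**2
    beta = beta0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    return beta
```

In the full scheme the output of each such integration (here beta) feeds the source terms of the next equation in the hierarchy, so the equations are solved in sequence on each null hypersurface.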
A compactified treatment of null infinity is achieved by introducing the radial coordinate x = r/(R + r), where R is a scale parameter adjusted to the size of the inner boundary. Thus x = 1 at ℐ⁺.
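The compactification and its inverse are simple enough to sketch; the following helpers (names hypothetical) map r ∈ [0, ∞) to x ∈ [0, 1], with x = 1 representing ℐ⁺:

```python
import math

def compactify(r, R):
    """Map the radial coordinate r in [0, inf) to x in [0, 1),
    with x -> 1 as r -> inf."""
    return r / (R + r)

def uncompactify(x, R):
    """Inverse map; x = 1 corresponds to null infinity (r -> inf)."""
    if x >= 1.0:
        return math.inf
    return R * x / (1.0 - x)
```

The scale parameter R simply controls where the interior grid points cluster: r = R maps to x = 1/2.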

The Canberra code employs a null quasi-spherical (NQS) gauge (not to be confused with the quasi-spherical approximation in which quadratically aspherical terms are ignored [49]). The NQS gauge takes advantage of the possibility of mapping the angular part of the Bondi metric conformally onto a unit sphere metric, so that h_{AB} → q_{AB}. The required transformation x^A → y^A(u, r, x^A) is in general dependent upon u and r, so that the NQS angular coordinates y^A are not constant along the outgoing null rays, unlike the Bondi–Sachs angular coordinates. Instead the coordinates y^A display the analogue of a shift on the null hypersurfaces u = const. In addition, the NQS spheres (u, r) = const. are not the same as the Bondi spheres. The radiation content of the metric is contained in a shear vector describing this shift. This results in the description of the radiation in terms of a spin-weight 1 field, rather than the spin-weight 2 field associated with h_{AB} in the Bondi–Sachs formalism. In both the Bondi–Sachs and NQS gauges, the independent gravitational data on a null hypersurface is the conformal part of its degenerate 3-metric. The Bondi–Sachs null data consist of h_{AB}, which determines the intrinsic conformal metric of the null hypersurface. In the NQS case, h_{AB} = q_{AB} and the shear vector comprises the only non-trivial part of the conformal 3-metric. Both the Bondi–Sachs and NQS gauges can be arranged to coincide in the special case of shear-free Robinson–Trautman metrics [82, 26].

The formulation of Einstein’s equations in the NQS gauge is presented in [25], and the associated gauge freedom arising from (u, r)-dependent rotations and boosts of the unit sphere is discussed in [26]. As in the PITT code, the main equations involve integrating a hierarchy of hypersurface equations along the radial null geodesics extending from the inner boundary to null infinity. In the NQS gauge the source terms for these radial ODEs are rather simple when the unknowns are chosen to be the connection coefficients. However, as a price to pay for this simplicity, after the radial integrations are performed on each null hypersurface, a first order elliptic equation must be solved on each r = const. cross-section to reconstruct the underlying metric.

4.2.1 Angular dissipation

For a {3 + 1} Cauchy evolution algorithm based upon a system of wave equations, or any other symmetric hyperbolic system, numerical dissipation can be added in the standard Kreiss–Oliger form [158]. Dissipation cannot be added in this standard way to the {2 + 1 + 1} format of characteristic evolution. In the original version of the PITT code, which used square stereographic patches with boundaries aligned with the grid, numerical dissipation was only introduced in the radial direction [163]. This was sufficient to establish numerical stability. In the new version of the code with circular stereographic patches, whose boundaries fit into the stereographic grid in an irregular way, angular dissipation is necessary to suppress the resulting high frequency error.
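For reference, the standard Kreiss–Oliger operator mentioned above can be sketched in one dimension (a fourth-order form on a periodic grid; a minimal illustration, not code from either production code):

```python
import numpy as np

def kreiss_oliger_4th(u, eps):
    """Fourth-order Kreiss-Oliger dissipation for a periodic 1D grid
    function u: returns -(eps/16) * (fourth undivided difference),
    which strongly damps the shortest wavelengths while leaving
    smooth data nearly untouched."""
    d4 = (np.roll(u, -2) - 4*np.roll(u, -1) + 6*u
          - 4*np.roll(u, 1) + np.roll(u, 2))
    return -(eps / 16.0) * d4
```

Applied to the highest-frequency (sawtooth) mode, the operator returns -eps times the data, while a constant is left exactly unchanged.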

Angular dissipation can be introduced in the following way [14]. In terms of the spin-weight 2 variable J = q^A q^B h_{AB}, the evolution equation (27) takes the form

\partial_u \partial_r ( r J ) = S ,   (28)
where S represents the right hand side terms. We add angular dissipation to the u-evolution through the modification
\partial_u \partial_r ( r J ) + \epsilon_u h^3 \eth^2 \bar{\eth}^2 \partial_r ( r J ) = S ,   (29)
where h is the discretization size and 𝜖u ≥ 0 is an adjustable parameter independent of h. This leads to
\partial_u \left( | \partial_r ( r J ) |^2 \right) + 2 \epsilon_u h^3 \Re \left\{ \partial_r ( r \bar{J} )\, \eth^2 \bar{\eth}^2 \partial_r ( r J ) \right\} = 2 \Re \left\{ \partial_r ( r \bar{J} )\, S \right\} .   (30)
Integration over the unit sphere with solid angle element dΩ then gives
\oint \partial_u | \partial_r ( r J ) |^2\, d\Omega + 2 \epsilon_u h^3 \oint | \bar{\eth}^2 \partial_r ( r J ) |^2\, d\Omega = 2 \Re \oint \partial_r ( r \bar{J} )\, S\, d\Omega .   (31)
Thus the 𝜖u-term has the effect of damping high frequency noise as measured by the L2 norm of ∂r(rJ ) over the sphere.
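The damping mechanism can be checked numerically in a one-dimensional periodic analogue of Eq. (29), with a centered fourth difference standing in for the ð²ð̄² operator (a toy sketch, not the actual angular operator on the sphere):

```python
import numpy as np

def dissipative_step(f, eps_u, h, dt):
    """Advance f (standing in for d_r(rJ)) by one forward-Euler step of
    df/du = -eps_u * h^3 * D4 f, where D4 is the centered fourth
    difference replacing eth^2 ethbar^2. For small enough dt the grid
    L2 norm (the analogue of Eq. (31)) can only decrease."""
    d4 = (np.roll(f, -2) - 4*np.roll(f, -1) + 6*f
          - 4*np.roll(f, 1) + np.roll(f, 2)) / h**4
    return f - dt * eps_u * h**3 * d4
```

For rough data the norm drops after a single step, while constant (zero-frequency) data are unaffected, mirroring the statement that only high frequency noise is damped.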

Similarly, dissipation can be introduced in the radial integration of (28) through the substitution

\partial_u \partial_r ( r J ) \rightarrow \partial_u \partial_r ( r J ) + \epsilon_r h^3 \eth^2 \bar{\eth}^2 \partial_u ( r J ) ,   (32)
with ε_r ≥ 0. Angular dissipation can also be introduced in the hypersurface equations, e.g. in (26) through the substitution
\partial_r V \rightarrow \partial_r V + \epsilon_V h^3 \eth \bar{\eth} \bar{\eth} \eth V .   (33)

4.2.2 First versus second differential order

The PITT code was originally formulated in the second differential form of Equations (24, 25, 26, 27), which in the spin-weighted version leads to an economical number of 2 real and 2 complex variables. Subsequently, the variable

Q_A = r^2 e^{-2\beta} h_{AB} U^B_{\ ,r}   (34)
was introduced to reduce Equation (25) to two first order radial equations, which simplified the startup procedure at the boundary. Although the resulting code was verified to be stable and second order accurate, its application to problems involving strong fields and gradients led to numerical errors that made small scale effects of astrophysical importance difficult to measure.
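The reduction that the auxiliary variable Q_A accomplishes can be sketched with a generic second-order radial equation: splitting U'' = s(r) into Q' = s and U' = Q (the analogue of Eq. (34)) yields two first-order outward integrations from inner-boundary values. A schematic, with hypothetical names:

```python
import numpy as np

def integrate_first_order_pair(r, source, q0, u0):
    """Toy analogue of the Q_A reduction: the second-order radial
    equation U'' = source(r) is split into Q' = source and U' = Q,
    then each is integrated outward by the trapezoid rule from the
    inner-boundary values q0, u0."""
    dr = np.diff(r)
    s = source(r)
    q = q0 + np.concatenate(([0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * dr)))
    u = u0 + np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * dr)))
    return q, u
```

The startup simplification comes from needing only the boundary values (q0, u0) rather than one-sided second-derivative stencils at the inner boundary.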

In particular, in initial attempts to simulate a white hole fission, Gómez [110] encountered an oscillatory error pattern in the angular directions near the time of fission. The origin of the problem was tracked to numerical error of an oscillatory nature introduced by \eth^2 terms in the hypersurface and evolution equations. Gómez’s solution was to remove the offending second angular derivatives by introducing additional variables and reducing the system to first differential order in the angular directions. This suppressed the oscillatory mode and subsequently improved performance in the simulation of the white hole fission problem [113] (see Section 4.4.2).

This success opens the issue of whether a completely first differential order code might perform even better, as has been proposed by Gómez and Frittelli [112]. By the use of ∂_u h_{AB} as a fundamental variable, they cast the Bondi system into Duff’s first order quasilinear canonical form [88]. At the analytic level this provides standard uniqueness and existence theorems (extending previous work for the linearized case [102]) and is a starting point for establishing the estimates required for well-posedness.

At the numerical level, Gómez and Frittelli point out that this first order formulation provides a bridge between the characteristic and Cauchy approaches which allows application of standard methods for constructing numerical algorithms, e.g. to take advantage of shock-capturing schemes. Although true shocks do not exist for vacuum gravitational fields, when coupled to hydrodynamics the resulting shocks couple back to form steep gradients which might not be captured by standard finite difference approximations. In particular, the second derivatives needed to compute gravitational radiation from stellar oscillations have been noted to be a troublesome source of inaccuracy in the characteristic treatment of hydrodynamics [220]. Application of standard versions of AMR is also facilitated by the first order form.

The benefits of this completely first order approach are not simple to decide without code comparison. The part of the code in which the \eth^2 operator introduced the oscillatory error mode in [110] was not identified, i.e. whether it originated in the inner boundary treatment or in the interpolations between stereographic patches, where second derivatives might be troublesome. There are other possible ways to remove the oscillatory angular modes, such as adding angular dissipation (see Section 4.2.1). The finite difference algorithm in the original PITT code only introduced numerical dissipation in the radial direction [163]. The economy of variables and other advantages of a second order scheme [159] should not be abandoned without further tests and investigation.

4.2.3 Numerical methods

The PITT code is an explicit finite difference evolution algorithm based upon retarded time steps on a uniform three-dimensional null coordinate grid built from the stereographic coordinates and a compactified radial coordinate. The straightforward numerical implementation of the finite difference equations has facilitated code development. The Canberra code uses an assortment of novel and elegant numerical methods. Most of these involve smoothing or filtering and have obvious advantage for removing short wavelength noise but would be unsuitable for modeling shocks.

There have been two recent projects to improve the performance of the PITT code by using the cubed-sphere method to coordinatize the sphere. Both include an adaptation of the eth-calculus to handle the transformation of spin-weighted variables between the six patches.

In one of these projects, Gómez, Barreto and Frittelli develop the cubed-sphere approach into an efficient, highly parallelized 3D code, the LEO code, for the characteristic evolution of the coupled Einstein–Klein–Gordon equations in the Bondi–Sachs formalism [111]. This code was shown to be convergent, and its high accuracy in the linearized regime on a Schwarzschild background was demonstrated by the simulation of the quasinormal ringdown of the scalar field and its energy-momentum conservation.

Because the characteristic evolution scheme constitutes a radial integration carried out for each angle on the sphere of null directions, the natural way to parallelize the code is to distribute the angular grid among processors. Thus, given M × M processors, one can distribute the N × N points in each spherical patch (cubed-sphere or stereographic), assigning to each processor an equal square grid of extent N/M in each direction. To be effective, this requires that the communication time between processors scale efficiently. This depends upon the location of the ghost points necessary to supply nearest-neighbor data, and is facilitated in the cubed-sphere approach because the ghost points are aligned on 1-dimensional grid lines, whose pattern is invariant under grid size. In the stereographic approach, the ghost points are arranged in an irregular pattern which changes in an essentially random way under rescaling and requires a more complicated parallelization algorithm.
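The square-block decomposition described above is easy to make concrete; the sketch below (hypothetical names, and assuming N divisible by M) assigns each of the M × M processors an (N/M) × (N/M) block of one patch:

```python
def decompose_patch(N, M):
    """Distribute the N x N angular points of one spherical patch over
    an M x M processor array: each processor (p, q) gets an
    (N//M) x (N//M) square block of index ranges. In an actual code,
    a one-point-deep (or stencil-deep) ghost layer would be exchanged
    with nearest neighbours after each radial integration step."""
    if N % M != 0:
        raise ValueError("N must be divisible by M")
    n = N // M
    blocks = {(p, q): ((p * n, (p + 1) * n), (q * n, (q + 1) * n))
              for p in range(M) for q in range(M)}
    return n, blocks
```

Because the radial integration for each angle is independent, this decomposition requires communication only for the angular derivative stencils at block edges.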

Their goal is to develop the LEO code for application to black hole–neutron star binaries in a close orbit regime where the absence of caustics makes a pure characteristic evolution possible. Their first anticipated application is the simulation of a boson star orbiting a black hole, whose dynamics is described by the Einstein–Klein–Gordon equations. They point out that characteristic evolution of such systems of astrophysical interest has been limited in the past by resolution, due to the lack of the necessary computational power, parallel infrastructure and mesh refinement. Most characteristic code development has been geared toward single processor machines, whereas current computational platforms are designed to perform high resolution simulations in reasonable times by parallel processing.

At the same time the LEO code was being developed, Reisswig et al. [201] also constructed a characteristic code for the Bondi–Sachs problem based upon the cubed-sphere infrastructure of Thornburg [242, 241]. They retain the original second order differential form of the angular operators.

The Canberra code handles fields on the sphere by means of a 3-fold representation: (i) as discretized functions on a spherical grid uniformly spaced in standard (𝜃, ϕ) coordinates, (ii) as fast Fourier transforms with respect to (𝜃, ϕ) (based upon the smooth map of the torus onto the sphere), and (iii) as a spectral decomposition of scalar, vector, and tensor fields in terms of spin-weighted spherical harmonics. The grid values are used in carrying out nonlinear algebraic operations; the Fourier representation is used to calculate (𝜃, ϕ)-derivatives; and the spherical harmonic representation is used to solve global problems, such as the solution of the first order elliptic equation for the reconstruction of the metric, whose unique solution requires pinning down the ℓ = 1 gauge freedom. The sizes of the grid and of the Fourier and spherical harmonic representations are coordinated. In practice, the spherical harmonic expansion is carried out to 15th order in ℓ, but the resulting coefficients must then be projected into the ℓ ≤ 10 subspace in order to avoid inconsistencies between the spherical harmonic, grid, and Fourier representations.
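The use of a Fourier representation for angular derivatives can be illustrated for a periodic direction (a minimal sketch; the Canberra code's actual implementation on the torus-to-sphere map is more involved):

```python
import numpy as np

def dphi_fft(f):
    """Spectral derivative of a real function sampled on a uniform
    periodic grid over [0, 2*pi): multiply each Fourier mode by i*k
    and transform back. Exact (to round-off) for band-limited data."""
    n = f.shape[-1]
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers times i
    return np.fft.ifft(np.fft.fft(f) * k).real
```

For smooth data this spectral derivative converges faster than any power of the grid spacing, which is the motivation for preferring it over finite differences in the Fourier representation.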

The Canberra code solves the null hypersurface equations by combining an 8th order Runge–Kutta integration with a convolution spline to interpolate field values. The radial grid points are dynamically positioned to approximate ingoing null geodesics, a technique originally due to Goldwirth and Piran [109] to avoid the problems with a uniform r-grid near a horizon which arise from the degeneracy of an areal coordinate on a stationary horizon. The time evolution uses the method of lines with a fourth order Runge–Kutta integrator, which introduces further high frequency filtering.
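The classical fourth-order Runge–Kutta step used in the method of lines is standard; for completeness, a minimal sketch:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y),
    the generic form of the time integrator used in a method-of-lines
    evolution. Local truncation error is O(dt^5)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The mild averaging over the four stages is the source of the additional high frequency filtering mentioned above.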

4.2.4 Stability

PITT code
Analytic stability analysis of the finite difference equations has been crucial in the development of a stable evolution algorithm, subject to the standard Courant–Friedrichs–Lewy (CFL) condition for an explicit code. Linear stability analysis on Minkowski and Schwarzschild backgrounds showed that certain field variables must be represented on the half-grid [120, 49]. Nonlinear stability analysis was essential in revealing and curing a mode coupling instability that was not present in the original axisymmetric version of the code [46, 163]. This has led to a code whose stability persists even in the regime that the u-direction, along which the grid flows, becomes spacelike, such as outside the velocity of light cone in a rotating coordinate system. Severe tests were used to verify stability. In the linear regime, robust stability was established by imposing random initial data on the initial characteristic hypersurface and random constraint violating boundary data on an inner worldtube. (Robust stability was later adopted as one of the standardized tests for Cauchy codes [5].) The code ran stably for 10,000 grid crossing times under these conditions [49]. The PITT code was the first 3D general relativistic code to pass this robust stability test. The use of random data is only possible in sufficiently weak cases where terms quadratic in the field gradients are not dominant. Stability in the highly nonlinear regime was tested in two ways. Runs for a time of 60,000 M were carried out for a moving, distorted Schwarzschild black hole (of mass M), with the marginally trapped surface at the inner boundary tracked and its interior excised from the computational grid [116, 117]. At the time, this was by far the longest simulation of a dynamic black hole.
Furthermore, the scattering of a gravitational wave off a Schwarzschild black hole was successfully carried out in the extreme nonlinear regime where the backscattered Bondi news was as large as N = 400 (in dimensionless geometric units) [46], showing that the code can cope with the enormous power output N^2 c^5/G ≈ 10^{60} W in conventional units. This exceeds the power that would be produced if, in 1 second, the entire galaxy were converted to gravitational radiation.

Cubed-sphere codes
The characteristic codes using the cubed-sphere grid [111, 201] are based upon the same variables and equations as the PITT code, with the same radial integration scheme. Thus stability should be expected to be maintained, since the main difference is that the interpatch interpolations are simpler, i.e. only 1-dimensional. This appears to be the case in the reported tests, although robust stability was not directly confirmed. In particular, the LEO code showed no sign of instability in long time, high resolution simulations of the quasinormal ringdown of a scalar field scattering off a Schwarzschild black hole. Angular dissipation was not necessary.

Canberra code
Analytic stability analysis of the underlying finite difference equations is impractical because of the extensive mix of spectral techniques, higher order methods, and splines. Although there is no clear-cut CFL limit on the code, stability tests show that there is a limit on the time step. The damping of high frequency modes due to the implicit filtering would be expected to suppress numerical instability, but the stability of the Canberra code is nevertheless subject to two qualifications [28, 29, 30]: (i) At late times (less than 100 M), the evolution terminates as it approaches an event horizon, apparently because of a breakdown of the NQS gauge condition, although an analysis of how and why this should occur has not yet been given. (ii) Numerical instabilities arise from dynamic inner boundary conditions and restrict the inner boundary to a fixed Schwarzschild horizon. Tests in the extreme nonlinear regime were not reported.

4.2.5 Accuracy

PITT code
The designed second order accuracy has been verified in an extensive number of testbeds [49, 46, 116, 255, 256], including new exact solutions specifically constructed in null coordinates for the purpose of convergence tests.

In addition to these testbeds, a set of linearized solutions has recently been obtained in the Bondi–Sachs gauge for either Schwarzschild or Minkowski backgrounds [40]. The solutions are generated by the introduction of a thin shell of matter whose density varies with time and angle. This gives rise to an exterior field containing gravitational waves. For a Minkowski background, the solution is given in exact analytic form and, for a Schwarzschild background, in terms of a power series. The solutions are parametrized by frequency and spherical harmonic decomposition. They supply a new and very useful testbed for the calibration and further development of characteristic evolution codes for Einstein’s equations, analogous to the role of the Teukolsky waves in Cauchy evolution. The PITT code showed clean second order convergence in both the L2 and L∞ error norms in tests based upon waves in a Minkowski background.
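Convergence rates such as the second order quoted here are typically extracted from error norms at two resolutions; a minimal helper for this standard calculation:

```python
import math

def convergence_rate(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order from error norms at grid spacings
    h and h/refinement: p = log(e_coarse / e_fine) / log(refinement).
    A second order scheme halving the grid spacing should give
    an error ratio of 4, hence p = 2."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

Comparing the rate computed from the L2 norm with that from the L∞ norm, as the text notes, helps distinguish global error from a localized error source.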

Cubed-sphere codes
Convergence of the cubed-sphere code developed by Reisswig et al. [201] was also checked using linearized waves in a Minkowski background. The convergence rate for the L2 error norm was approximately second order, but in some cases there was significant degradation. They conjectured that the underlying source of error arises at the corners of the six patches. Comparison with the L∞ error norm would discriminate such a localized source of error, but such results were not reported.

The designed convergence rate of the ð operator used in the LEO code was verified for second, fourth and eighth order finite difference approximations, using the spin-weight 2 spherical harmonic ₂Y₄₃ as a test. Similarly, the convergence of the integral relations governing the orthonormality of the spin-weighted harmonics was verified. The code includes coupling to a Klein–Gordon scalar field. Although convergence of the evolution code was not explicitly checked, high accuracy in the linearized regime with a Schwarzschild background was demonstrated in the simulation of quasinormal ringdown of the scalar field and in the energy-momentum conservation of the scalar field.

Canberra code
The complexity of the algorithm and NQS gauge makes it problematic to establish accuracy by direct means. Exact solutions do not provide an effective convergence check, because the Schwarzschild solution is trivial in the NQS gauge and other known solutions in this gauge require dynamic inner boundary conditions which destabilize the present version of the code. Convergence to linearized solutions is a possible check but has not yet been performed. Instead, indirect tests by means of geometric consistency and partial convergence tests are used to calibrate accuracy. The consistency tests were based on the constraint equations, which are not enforced during null evolution except at the inner boundary. The balance between mass loss and radiation flux through ℐ⁺ is a global consequence of these constraints. No appreciable growth of the constraints was noticeable until within 5 M of the final breakdown of the code. In weak field tests where angular resolution does not dominate the error, partial convergence tests based upon varying the radial grid size verify the 8th order convergence in the shear expected from the Runge–Kutta integration and splines. When the radial source of error is small, reduced error with smaller time step can also be discerned.

In practical runs, the major source of inaccuracy is the spherical harmonic resolution, which was restricted to ℓ ≤ 15 by hardware limitations. Truncation of the spherical harmonic expansion has the effect of modifying the equations to a system for which the constraints are no longer satisfied. The relative error in the constraints is 10⁻³ for strong field simulations [27].

4.2.6 Nonlinear scattering off a Schwarzschild black hole

A natural physical application of a characteristic evolution code is the nonlinear version of the classic problem of scattering off a Schwarzschild black hole, first solved perturbatively by Price [198]. Here the inner worldtube for the characteristic initial value problem consists of the ingoing branch of the r = 2m hypersurface (the past horizon), where Schwarzschild data are prescribed. The nonlinear problem of a gravitational wave scattering off a Schwarzschild black hole is then posed in terms of data on an outgoing null cone which describe an incoming pulse with compact support. Part of the energy of this pulse falls into the black hole and part is backscattered to ℐ⁺. This problem has been investigated using both the PITT and Canberra codes.

The Pittsburgh group studied the backscattered waveform (described by the Bondi news function) as a function of incoming pulse amplitude. The computational eth-module smoothly handled the complicated time-dependent transformation between the non-inertial computational frame at ℐ⁺ and the inertial (Bondi) frame necessary to obtain the standard “plus” and “cross” polarization modes. In the perturbative regime, the news corresponds to the backscattering of the incoming pulse off the effective Schwarzschild potential. When the energy of the pulse is no larger than the central Schwarzschild mass, the backscattered waveform still depends roughly linearly on the amplitude of the incoming pulse. However, for very high amplitudes the waveform behaves quite differently. Its amplitude is greater than that predicted by linear scaling and its shape drastically changes and exhibits extra oscillations. In this very high amplitude case, the mass of the system is completely dominated by the incoming pulse, which essentially backscatters off itself in a nonlinear way.

The Canberra code was used to study the change in Bondi mass due to the radiation [27]. The Hawking mass MH (u,r) was calculated as a function of radius and retarded time, with the Bondi mass MB (u) then obtained by taking the limit r → ∞. The limit had good numerical behavior. For a strong initial pulse with ℓ = 4 angular dependence, in a run from u = 0 to u = 70 (in units where the interior Schwarzschild mass is 1), the Bondi mass dropped from 1.8 to 1.00002, showing that almost half of the initial energy of the system was backscattered and that a surprisingly negligible amount of energy fell into the black hole. A possible explanation is that the truncation of the spherical harmonic expansion cuts off wavelengths small enough to effectively penetrate the horizon. The Bondi mass decreased monotonically in time, as necessary theoretically, but its rate of change exhibited an interesting pulsing behavior whose time scale could not be obviously explained in terms of quasinormal oscillations. The Bondi mass loss formula was confirmed with relative error of less than 10⁻³. This is impressive accuracy considering the potential sources of numerical error introduced by taking the limit of the Hawking mass with limited resolution. The code was also used to study the appearance of logarithmic terms in the asymptotic expansion of the Weyl tensor [31]. In addition, the Canberra group studied the effect of the initial pulse amplitude on the waveform of the backscattered radiation, but did not extend their study to the very high amplitude regime in which qualitatively interesting nonlinear effects occur.
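The limit MB(u) = lim_{r→∞} MH(u, r) can be estimated numerically by fitting the expected 1/r falloff over the outermost grid points; the sketch below is one simple way to take such a limit (not necessarily the method actually used in [27]):

```python
import numpy as np

def bondi_mass_limit(r, m_hawking):
    """Estimate M_B = lim_{r->inf} M_H(r) by a least-squares fit of
    M_H = M_B + c/r over the supplied (outermost) grid points; the
    constant term of the fit is the extrapolated Bondi mass."""
    A = np.vstack([np.ones_like(r), 1.0 / r]).T
    coeffs, *_ = np.linalg.lstsq(A, m_hawking, rcond=None)
    return coeffs[0]
```

Any such extrapolation amplifies noise in the outer grid points, which is why the quoted 10⁻³ accuracy in the mass loss formula is notable.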

4.2.7 Black hole in a box

The PITT code has also been implemented to evolve along an advanced time foliation by ingoing null cones, with data given on a worldtube at their outer boundary and on the initial ingoing null cone. The code was used to evolve a black hole in the region interior to the worldtube by implementing a horizon finder to locate the marginally trapped surface (MTS) on the ingoing cones and excising its singular interior [119]. The code tracks the motion of the MTS and measures its area during the evolution. It was used to simulate a distorted “black hole in a box” [116]. Data at the outer worldtube were induced from a Schwarzschild or Kerr spacetime, but the worldtube was allowed to move relative to the stationary trajectories; i.e. with respect to the grid the worldtube is fixed but the black hole moves inside it. The initial null data consisted of a pulse of radiation which subsequently travels outward to the worldtube, where it reflects back toward the black hole. The approach of the system to equilibrium was monitored by the area of the MTS, which also equals its Hawking mass. When the worldtube is stationary (static or rotating in place), the distorted black hole inside evolved to equilibrium with the boundary. A boost or other motion of the worldtube with respect to the black hole did not affect this result. The marginally trapped surface always reached equilibrium with the outer boundary, confirming that the motion of the boundary was “pure gauge”.
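The relation between the MTS area and its Hawking mass invoked here is M = √(A/16π); as a one-line check:

```python
import math

def hawking_mass_from_area(area):
    """Irreducible (Hawking) mass of a marginally trapped surface of
    proper area A: M = sqrt(A / (16*pi)). A Schwarzschild horizon with
    A = 16*pi*M^2 is recovered exactly."""
    return math.sqrt(area / (16.0 * math.pi))
```

Monitoring this single scalar gives a gauge-independent diagnostic of the approach to equilibrium.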

This was the first code that ran “forever” in a dynamic black hole simulation, even when the worldtube wobbled with respect to the black hole to produce artificial periodic time dependence. An initially distorted, wobbling black hole was evolved for a time of 60,000 M, longer by orders of magnitude than permitted by the stability of other existing 3D black hole codes at the time. This exceptional performance opens a promising new approach to handle the inner boundary condition for Cauchy evolution of black holes by the matching methods reviewed in Section 5.

Note that setting the pulse to zero is equivalent to prescribing shear free data on the initial null cone. Combined with Schwarzschild boundary data on the outer worldtube, this would be complete data for a Schwarzschild spacetime. However, the evolution of such shear free null data combined with Kerr boundary data would have an initial transient phase before settling down to a Kerr black hole. This is because the twist of the shear-free Kerr null congruence implies that Kerr data specified on a null hypersurface are not generally shear free. The event horizon is an exception, but Kerr null data on other null hypersurfaces have not been cast in explicit analytic form. This makes the Kerr spacetime an awkward testbed for characteristic codes. (Curiously, Kerr data on a null hypersurface with a conical type singularity do take a simple analytic form, although unsuitable for numerical evolution [90].) Using some intermediate analytic results of Israel and Pretorius [196], Venter and Bishop [50] have recently constructed a numerical algorithm for transforming the Kerr solution into Bondi coordinates and in that way provide the necessary null data numerically.
