4.2 Geometrical formalism

The PITT code uses a standard Bondi–Sachs null coordinate system,
    ds^2 = -\left( e^{2\beta}\frac{V}{r} - r^2 h_{AB} U^A U^B \right) du^2 - 2 e^{2\beta}\, du\, dr - 2 r^2 h_{AB} U^B\, du\, dx^A + r^2 h_{AB}\, dx^A dx^B,   (34)
    det(h_{AB}) = det(q_{AB}),   (35)
for some standard choice q_{AB} of the unit sphere metric. This generalizes Equation (24) to the three-dimensional case. The characteristic version of Einstein’s equations consists of four hypersurface equations, which contain no time derivatives, two evolution equations, and four supplementary conditions, which need only hold on a worldtube. The hypersurface equations derive from the G_\mu^{\ \nu} \nabla_\nu u components of the Einstein tensor. They take the explicit form [302]
    \beta_{,r} = \frac{1}{16}\, r\, h^{AC} h^{BD} h_{AB,r} h_{CD,r},   (36)

    \left( r^4 e^{-2\beta} h_{AB} U^B_{,r} \right)_{,r} = 2 r^4 \left( r^{-2} \beta_{,A} \right)_{,r} - r^2 h^{BC} D_C \left( h_{AB,r} \right),   (37)

    2 e^{-2\beta} V_{,r} = \mathcal{R} - 2 D^A D_A \beta - 2 \left( D^A \beta \right) D_A \beta + r^{-2} e^{-2\beta} D_A \left( r^4 U^A \right)_{,r} - \frac{1}{2} r^4 e^{-4\beta} h_{AB} U^A_{,r} U^B_{,r},   (38)
where D_A is the covariant derivative and \mathcal{R} the curvature scalar of the conformal 2-metric h_{AB} of the r = const. surfaces, and capital Latin indices are raised and lowered with h_{AB}. Given the null data h_{AB} on an outgoing null hypersurface, this hierarchy of equations can be integrated radially in order to determine \beta, U^A and V on the hypersurface in terms of integration constants on an inner boundary. The evolution equations for the u-derivative of the null data derive from the trace-free part of the angular components of the Einstein tensor, i.e., the components m^A m^B G_{AB}, where h^{AB} = 2 m^{(A} \bar{m}^{B)}. They take the explicit form
    m^A m^B \Big\{ \left( r h_{AB,u} \right)_{,r} - \frac{1}{2r} \left( r V h_{AB,r} \right)_{,r} - 2 e^{\beta} D_A D_B e^{\beta} + r h_{AC} D_B \left( U^C_{,r} \right) - \frac{r^3}{2} e^{-2\beta} h_{AC} h_{BD} U^C_{,r} U^D_{,r} + 2 D_A U_B + \frac{r}{2} h_{AB,r} D_C U^C + r U^C D_C \left( h_{AB,r} \right) + r h_{AD,r} h^{CD} \left( D_B U_C - D_C U_B \right) \Big\} = 0.   (39)
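To illustrate the radial hierarchy, the following sketch integrates the first hypersurface equation (36) outward at a single angular grid point, given a toy null datum h_{AB}(r). The metric profile, grid, and function names are invented for illustration; a real code would continue the hierarchy with the U^A and V equations and loop over the angular grid.

```python
import numpy as np

def integrate_beta(r, h_of_r, beta_inner=0.0):
    """Trapezoidal radial integration of beta_{,r} = (r/16) h^{AC} h^{BD} h_{AB,r} h_{CD,r},
    given the conformal 2-metric h_of_r (shape (Nr, 2, 2)) at one angular point."""
    dh = np.gradient(h_of_r, r, axis=0)          # finite-difference h_{AB,r}
    rhs = np.empty_like(r)
    for i in range(len(r)):
        hinv = np.linalg.inv(h_of_r[i])
        # contract h^{AC} h^{BD} h_{AB,r} h_{CD,r} to a scalar
        rhs[i] = (r[i] / 16.0) * np.einsum('ac,bd,ab,cd', hinv, hinv, dh[i], dh[i])
    beta = np.empty_like(r)
    beta[0] = beta_inner                         # integration constant at the inner boundary
    for i in range(1, len(r)):
        beta[i] = beta[i-1] + 0.5 * (rhs[i-1] + rhs[i]) * (r[i] - r[i-1])
    return beta

r = np.linspace(2.0, 20.0, 200)
# toy unit-determinant conformal 2-metric: diag(f, 1/f) with a made-up profile f(r)
f = 1.0 + 0.1 / r
h = np.zeros((len(r), 2, 2))
h[:, 0, 0], h[:, 1, 1] = f, 1.0 / f
beta = integrate_beta(r, h)
```

Since the integrand is a positive-definite contraction of h_{AB,r}, the integrated β grows monotonically outward from its inner-boundary value.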
A compactified treatment of null infinity is achieved by introducing the radial coordinate x = r∕(R + r), where R is a scale parameter adjusted to the size of the inner boundary. Thus, x = 1 at ℐ+.
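The compactification map and its inverse can be sketched directly (function names are illustrative, not from the PITT code):

```python
def compactify(r, R):
    """Map r in [0, infinity) to x = r/(R + r) in [0, 1); x -> 1 at null infinity."""
    return r / (R + r)

def uncompactify(x, R):
    """Inverse map r = R*x/(1 - x), valid for 0 <= x < 1."""
    return R * x / (1.0 - x)

# The scale parameter R places the radius r = R at the midpoint x = 1/2,
# so the grid resolves the inner region and the far field roughly equally.
half = compactify(5.0, 5.0)
```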

The Canberra code employs a null quasi-spherical (NQS) gauge (not to be confused with the quasi-spherical approximation in which quadratically-aspherical terms are ignored [56]). The NQS gauge takes advantage of the possibility of mapping the angular part of the Bondi metric conformally onto a unit-sphere metric, so that h_{AB} → q_{AB}. The required transformation x^A → y^A(u, r, x^A) is in general dependent upon u and r, so that the NQS angular coordinates y^A are not constant along the outgoing null rays, unlike the Bondi–Sachs angular coordinates. Instead the coordinates y^A display the analogue of a shift on the null hypersurfaces u = const. In addition, the NQS spheres (u, r) = const. are not the same as the Bondi spheres. The radiation content of the metric is contained in a shear vector describing this shift. This results in the description of the radiation in terms of a spin-weight 1 field, rather than the spin-weight 2 field associated with h_{AB} in the Bondi–Sachs formalism. In both the Bondi–Sachs and NQS gauges, the independent gravitational data on a null hypersurface is the conformal part of its degenerate 3-metric. The Bondi–Sachs null data consist of h_{AB}, which determines the intrinsic conformal metric of the null hypersurface. In the NQS case, h_{AB} = q_{AB} and the shear vector comprises the only non-trivial part of the conformal 3-metric. Both the Bondi–Sachs and NQS gauges can be arranged to coincide in the special case of shear-free Robinson–Trautman metrics [95, 32].

The formulation of Einstein’s equations in the NQS gauge is presented in [31], and the associated gauge freedom arising from (u,r ) dependent rotation and boosts of the unit sphere is discussed in [32]. As in the PITT code, the main equations involve integrating a hierarchy of hypersurface equations along the radial null geodesics extending from the inner boundary to null infinity. In the NQS gauge the source terms for these radial ODEs are rather simple when the unknowns are chosen to be the connection coefficients. However, as a price to pay for this simplicity, after the radial integrations are performed on each null hypersurface, a first-order elliptic equation must be solved on each r = const. cross-section to reconstruct the underlying metric.

4.2.1 Worldtube conservation laws

The components of Einstein’s equations independent of the hypersurface and evolution equations,

    h^{AB} G_{AB} = 0,   (40)
    G^r_{\ A} = 0,   (41)
    G^r_{\ u} = 0,   (42)
were called supplementary conditions by Bondi et al. [63] and Sachs [258]. They showed that the Bianchi identity
    \nabla_\mu G^\mu_{\ \nu} = \frac{1}{\sqrt{-g}} \left( \sqrt{-g}\, G^\mu_{\ \nu} \right)_{,\mu} + \frac{1}{2} g^{\rho\sigma}_{\ \ ,\nu} G_{\rho\sigma} = 0
implies that these equations need only be satisfied on a worldtube r = R(u, x^A). When the hypersurface and evolution equations are satisfied, the Bianchi identity for ν = r reduces to h^{AB} G_{AB} = 0, so that (40) is trivially satisfied. The Bianchi identity for ν = A then reduces to
    \left( r^2 G^r_{\ A} \right)_{,r} = 0,
so that G^r_{\ A} = 0 if it is set to zero at r = R(u, x^A). When that is the case, the Bianchi identity for ν = u reduces to
    \left( r^2 G^r_{\ u} \right)_{,r} = 0,
so that G^r_{\ u} also vanishes if it vanishes at r = R(u, x^A).

As a result, the supplementary conditions can be replaced by the condition that the Einstein tensor satisfy

    \xi^\mu G_{\mu\nu} N^\nu = 0   (43)
on the worldtube, where ξ^μ is any vector field tangent to the worldtube and N^μ is the unit normal to the worldtube. Since ξ^μ N_μ = 0, we can further replace (43) by the worldtube condition on the Ricci tensor
    \xi^\mu R_{\mu\nu} N^\nu = 0.   (44)
The Ricci identity
    \xi^\mu R_\mu^{\ \nu} = \nabla_\mu \nabla^{(\nu} \xi^{\mu)} + \nabla_\mu \nabla^{[\nu} \xi^{\mu]} - \nabla^\nu \nabla_\mu \xi^\mu   (45)
then gives rise to the strict Komar conservation law [181]
    \nabla_\mu \nabla^{[\nu} \xi^{\mu]} = 0
when ξ^μ is a Killing vector corresponding to an exact symmetry. More generally, (45) gives rise to the flux conservation law
    P_\xi(u_2) - P_\xi(u_1) = \int_{u_1}^{u_2} dS_\nu \left\{ \nabla^\nu \nabla_\mu \xi^\mu - \nabla_\mu \nabla^{(\nu} \xi^{\mu)} \right\},
where
    P_\xi = \oint dS_{\mu\nu} \nabla^{[\nu} \xi^{\mu]},
and dS_{\mu\nu} and dS_\nu are, respectively, the appropriate surface and 3-volume elements on the worldtube. In the limiting case R → ∞, these flux conservation laws govern the energy-momentum, angular momentum, and supermomentum corresponding to the generators of the Bondi–Metzner–Sachs asymptotic symmetry group [288]. For an asymptotic time translation, they give rise to the Bondi mass-loss relation.

These conservation laws (44) can also be expressed in terms of the intrinsic metric of the worldtube,

    H_{\mu\nu} = g_{\mu\nu} - N_\mu N_\nu,
and its extrinsic curvature
    K_{\mu\nu} = H_\mu^{\ \rho} \nabla_\rho N_\nu.
This leads to the worldtube analogue of the momentum constraint for the Cauchy problem,
    0 = \mathcal{D}_\mu \left( K^\mu_{\ \nu} - \delta^\mu_\nu K^\rho_{\ \rho} \right) \left( = H^\mu_{\ \nu} G_{\mu\rho} N^\rho \right),   (46)
where \mathcal{D}_\mu is the covariant derivative associated with H_{\mu\nu}. These are equivalent to the conservation conditions (44) and allow the conserved quantities to be expressed in terms of the extrinsic curvature of the boundary. For any vector field ξ^μ tangent to the worldtube, (46) implies
    \mathcal{D}_\mu \left( \xi^\nu K^\mu_{\ \nu} - \xi^\mu K^\rho_{\ \rho} \right) = \mathcal{D}^{(\mu} \xi^{\nu)} \left( K_{\mu\nu} - H_{\mu\nu} K^\rho_{\ \rho} \right).
In particular, this shows that ξμ need only be a Killing vector for the 3-metric H μν to obtain a strict conservation law on the boundary.

The worldtube conservation laws can also be interpreted as a symmetric hyperbolic system governing the evolution of certain components of the extrinsic curvature [306]. This leads to the

Worldtube Theorem:

Given H_{ab}, m^a m^b K_{ab} and K, the worldtube constraints constitute a well-posed initial-value problem, which determines the remaining components of the extrinsic curvature K_{ab}.

These extrinsic curvature components are related to the integration constants for the Bondi–Sachs system, which leads to possible applications of the worldtube theorem. One application is to waveform extraction. In that case, the data (H_{ab}, m^a m^b K_{ab}, K) necessary to apply the worldtube theorem are supplied by the numerical results of a 3 + 1 Cauchy evolution. The remaining components of the extrinsic curvature can then be determined by means of a well-posed initial-value problem on the boundary. The integration constants (β, V, U^A, U^A_{,r}, h_{AB}) for the Bondi–Sachs equations at r = R(u, x^A) are then determined. This approach can be used to enforce the constraints in the numerical computation of waveforms at ℐ+ by means of Cauchy-characteristic extraction (see Section 6).

Another possible application is to the characteristic initial-boundary value problem, for which boundary data consistent with the constraints must be prescribed a priori, i.e., independent of the evolution. The object is to obtain a well-posed version of the characteristic initial-boundary value problem. However, the complicated coupling between the Bondi–Sachs evolution system and the boundary constraint system prevents any definitive results.

4.2.2 Angular dissipation

For a {3 + 1} evolution algorithm based upon a system of wave equations, or any other symmetric hyperbolic system, numerical dissipation can be added in the standard Kreiss–Oliger form [186]. Dissipation cannot be added in this standard way to the {2 + 1 + 1} format of characteristic evolution. In the original version of the PITT code, which used square stereographic patches with boundaries aligned with the grid, numerical dissipation was only introduced in the radial direction [195]. This was sufficient to establish numerical stability. In the new version of the code with circular stereographic patches, whose boundaries fit into the stereographic grid in an irregular way, angular dissipation is necessary to suppress the resulting high-frequency error.

Angular dissipation can be introduced in the following way [13]. In terms of the spin-weight 2 variable

    J = q^A q^B h_{AB},   (47)
the evolution equation (39View Equation) takes the form
    \partial_u \partial_r (rJ) = S,   (48)
where S represents the right-hand-side terms. We add angular dissipation to the u-evolution through the modification
    \partial_u \partial_r (rJ) + \epsilon_u h^3 \partial^2 \bar{\partial}^2 \partial_r (rJ) = S,   (49)
where h is the discretization size and 𝜖u ≥ 0 is an adjustable parameter independent of h. This leads to
    \partial_u \left( |\partial_r (rJ)|^2 \right) + 2 \epsilon_u h^3 \Re \left\{ \left( \partial_r (r\bar{J}) \right) \partial^2 \bar{\partial}^2 \partial_r (rJ) \right\} = 2 \Re \left\{ \left( \partial_r (r\bar{J}) \right) S \right\}.   (50)
Integration over the unit sphere with solid angle element dΩ then gives
    \partial_u \oint |\partial_r (rJ)|^2 \, d\Omega + 2 \epsilon_u h^3 \oint \left| \bar{\partial}^2 \partial_r (rJ) \right|^2 d\Omega = 2 \Re \oint \left( \partial_r (r\bar{J}) \right) S \, d\Omega.   (51)
Thus, the 𝜖u-term has the effect of damping high-frequency noise as measured by the L2 norm of ∂r(rJ ) over the sphere.
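The damping mechanism can be seen in a one-dimensional toy model: evolve the pure-dissipation equation ∂_u v = −ε h³ D₄ v with S = 0, where the centered fourth-difference operator D₄ stands in for the angular operator ∂²∂̄². High-frequency modes are damped strongly while smooth modes are barely affected. All parameters below are illustrative.

```python
import numpy as np

N = 64
h = 2 * np.pi / N                     # grid spacing (the "discretization size" h)
x = np.arange(N) * h
eps, dt, nsteps = 0.5, 0.1 * h, 200   # illustrative dissipation strength and step

def d4(v):
    """Centered fourth difference on a periodic grid, ~ v'''' * h^4 / h^4."""
    return (np.roll(v, -2) - 4*np.roll(v, -1) + 6*v
            - 4*np.roll(v, 1) + np.roll(v, 2)) / h**4

def evolve(v):
    # forward-Euler evolution of du v = -eps * h^3 * d4(v)
    for _ in range(nsteps):
        v = v - dt * eps * h**3 * d4(v)
    return v

smooth = evolve(np.sin(x))            # low-frequency mode: nearly untouched
noisy = evolve((-1.0) ** np.arange(N))  # Nyquist (grid-frequency) mode: wiped out
```

Because the operator scales as h³ times a fourth difference, the damping of resolved modes vanishes as the grid is refined, so the scheme's order of accuracy is preserved.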

Similarly, dissipation can be introduced in the radial integration of (48) through the substitution

    \partial_u \partial_r (rJ) \rightarrow \partial_u \partial_r (rJ) + \epsilon_r h^3 \partial^2 \bar{\partial}^2 \partial_u (rJ),   (52)
with \epsilon_r \geq 0. Angular dissipation can also be introduced in the hypersurface equations, e.g., in Equation (38) through the substitution
    \partial_r V \rightarrow \partial_r V + \epsilon_V h^3 \bar{\partial} \partial \partial \bar{\partial} V.   (53)

4.2.3 First versus second differential order

The PITT code was originally formulated in the second differential form of Equations (36) – (39), which in the spin-weighted version leads to an economical set of 2 real and 2 complex variables. Subsequently, the variable

    Q_A = r^2 e^{-2\beta} h_{AB} U^B_{,r}   (54)
was introduced to reduce Equation (37) to two first-order radial equations, which simplified the startup procedure at the boundary. Although the resulting code was verified to be stable and second-order accurate, its application to problems involving strong fields and gradients led to numerical errors, which made small-scale effects of astrophysical importance difficult to measure.
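The structure of this reduction can be sketched in a toy scalar version: with Q_A defined by (54), Equation (37) splits into one first-order radial integration for r²Q, followed by a second for U from U^A_{,r} = r^{−2} e^{2β} h^{AB} Q_B. The flat-background simplifications (β = 0, h_{AB} → 1) and the source S(r) = 1/r² below are made up purely for illustration.

```python
import numpy as np

r = np.linspace(2.0, 40.0, 400)
S = 1.0 / r**2                        # illustrative source for (r^2 Q)_{,r} = S

def radial_integral(rhs, r, f0=0.0):
    """Cumulative trapezoidal integration along r with inner-boundary value f0."""
    f = np.empty_like(r)
    f[0] = f0
    f[1:] = f0 + np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(r))
    return f

# first radial sweep: integrate (r^2 Q)_{,r} = S, then divide out r^2
Q = radial_integral(S, r) / r**2
# second radial sweep: integrate U_{,r} = Q / r^2 (beta = 0, h -> 1 toy values)
U = radial_integral(Q / r**2, r)
```

Each sweep starts from an integration constant at the inner boundary (here zero), mirroring how the startup data at the worldtube feed the Bondi–Sachs hierarchy.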

In particular, in initial attempts to simulate a white-hole fission, Gómez [133] encountered an oscillatory error pattern in the angular directions near the time of fission. The origin of the problem was tracked to numerical error of an oscillatory nature introduced by \partial^2 terms in the hypersurface and evolution equations. Gómez’s solution was to remove the offending second angular derivatives by introducing additional variables and reducing the system to first differential order in the angular directions. This suppressed the oscillatory mode and subsequently improved performance in the simulation of the white-hole fission problem [136] (see Section 4.4.2).

This success opens the issue of whether a completely first differential order code might perform even better, as has been proposed by Gómez and Frittelli [135]. By the use of \partial_u h_{AB} as a fundamental variable, they cast the Bondi system into Duff’s first-order quasilinear canonical form [103]. At the analytic level this provides standard uniqueness and existence theorems (extending previous work for the linearized case [124]) and is a starting point for establishing the estimates required for well-posedness.

At the numerical level, Gómez and Frittelli point out that this first-order formulation provides a bridge between the characteristic and Cauchy approaches, which allows application of standard methods for constructing numerical algorithms, e.g., to take advantage of shock-capturing schemes. Although true shocks do not exist for vacuum gravitational fields, when matter is included the resulting hydrodynamic shocks couple back to the metric to form steep gradients, which might not be captured by standard finite-difference approximations. In particular, the second derivatives needed to compute gravitational radiation from stellar oscillations have been noted to be a troublesome source of inaccuracy in the characteristic treatment of hydrodynamics [270]. Application of standard versions of AMR is also facilitated by the first-order form.

The benefits of this completely first-order approach are not simple to decide without code comparison. The part of the code in which the \partial^2 operator introduced the oscillatory error mode in [133] was not identified, i.e., whether it originated in the inner boundary treatment or in the interpolations between stereographic patches where second derivatives might be troublesome. There are other possible ways to remove the oscillatory angular modes, such as adding angular dissipation (see Section 4.2.2). The finite-difference algorithm in the original PITT code only introduced numerical dissipation in the radial direction [195]. The economy of variables and other advantages of a second-order scheme [187] should not be abandoned without further tests and investigation.

4.2.4 Numerical methods

The PITT code is an explicit finite-difference evolution algorithm based upon retarded time steps on a uniform three-dimensional null coordinate grid built from the stereographic angular coordinates and a compactified radial coordinate. The straightforward numerical implementation of the finite-difference equations has facilitated code development. The Canberra code uses an assortment of novel and elegant numerical methods. Most of these involve smoothing or filtering and have obvious advantage for removing short-wavelength noise, but they would be unsuitable for modeling shocks.

There have been two recent projects to improve the performance of the PITT code by using the cubed-sphere method to coordinatize the sphere. Both include an adaptation of the eth-calculus to handle the transformation of spin-weighted variables between the six patches.

In one of these projects, Gómez, Barreto and Frittelli develop the cubed-sphere approach into an efficient, highly parallelized 3D code, the LEO code, for the characteristic evolution of the coupled Einstein–Klein–Gordon equations in the Bondi–Sachs formalism [134]. This code was demonstrated to be convergent, and its high accuracy in the linearized regime on a Schwarzschild background was demonstrated by the simulation of the quasinormal ringdown of the scalar field and by its energy-momentum conservation.

Because the characteristic evolution scheme constitutes a radial integration carried out for each angle on the sphere of null directions, the natural way to parallelize the code is to distribute the angular grid among processors. Thus, given M × M processors one can distribute the N × N points in each spherical patch (cubed-sphere or stereographic), assigning to each processor equal square grids of extent N ∕M in each direction. To be effective this requires that the communication time between processors scales well. This depends upon the ghost point locations necessary to supply nearest-neighbor data, and is facilitated in the cubed-sphere approach because the ghost points are aligned on one-dimensional grid lines, whose pattern is invariant under grid refinement. In the stereographic approach, the ghost points are arranged in an irregular pattern, which changes in an essentially random way under rescaling and requires a more complicated parallelization algorithm.
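The domain decomposition described above can be sketched as follows; the function name and the ghost-zone width are illustrative, not taken from either code.

```python
def subgrid(rank_i, rank_j, N, M, ghost=2):
    """Owned and ghost-extended index ranges for processor (rank_i, rank_j)
    in an M x M decomposition of an N x N angular patch."""
    assert N % M == 0, "patch size must divide evenly among processors"
    w = N // M
    i0, i1 = rank_i * w, (rank_i + 1) * w          # owned rows [i0, i1)
    j0, j1 = rank_j * w, (rank_j + 1) * w          # owned cols [j0, j1)
    # ghost-extended ranges for nearest-neighbor stencils, clipped at the patch
    # edge (interpatch interpolation supplies data beyond the edge in a real code)
    gi = (max(i0 - ghost, 0), min(i1 + ghost, N))
    gj = (max(j0 - ghost, 0), min(j1 + ghost, N))
    return (i0, i1), (j0, j1), gi, gj

owned_i, owned_j, ghost_i, ghost_j = subgrid(1, 0, N=64, M=4)
```

The ghost ranges are contiguous index intervals, which is the one-dimensional alignment that makes cubed-sphere communication patterns invariant under grid refinement.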

Their goal is to develop the LEO code for application to black-hole–neutron-star binaries in a close-orbit regime, where the absence of caustics makes a pure characteristic evolution possible. Their first anticipated application is the simulation of a boson star orbiting a black hole, whose dynamics is described by the Einstein–Klein–Gordon equations. They point out that characteristic evolution of such systems of astrophysical interest has been limited in the past by resolution, due to the lack of necessary computational power, parallel infrastructure, and mesh refinement. Most characteristic code development has been geared toward single-processor machines, whereas current computational platforms are designed to perform high-resolution simulations in reasonable times by parallel processing.

At the same time the LEO code was being developed, Reisswig et al. [242] also constructed a characteristic code for the Bondi–Sachs problem based upon the cubed-sphere infrastructure of Thornburg [295, 294]. They retain the original second-order differential form of the angular operators.

The Canberra code handles fields on the sphere by means of a 3-fold representation: (i) as discretized functions on a spherical grid uniformly spaced in standard (𝜃,ϕ ) coordinates, (ii) as fast-Fourier transforms with respect to (𝜃,ϕ) (based upon the smooth map of the torus onto the sphere), and (iii) as a spectral decomposition of scalar, vector, and tensor fields in terms of spin-weighted spherical harmonics. The grid values are used in carrying out nonlinear algebraic operations; the Fourier representation is used to calculate (𝜃,ϕ)-derivatives; and the spherical harmonic representation is used to solve global problems, such as the solution of the first-order elliptic equation for the reconstruction of the metric, whose unique solution requires pinning down the ℓ = 1 gauge freedom. The sizes of the grid and of the Fourier and spherical-harmonic representations are coordinated. In practice, the spherical-harmonic expansion is carried out to 15th order in ℓ, but the resulting coefficients must then be projected into the ℓ ≤ 10 subspace in order to avoid inconsistencies between the spherical harmonic, grid, and Fourier representations.

The Canberra code solves the null hypersurface equations by combining an eighth-order Runge–Kutta integration with a convolution spline to interpolate field values. The radial grid points are dynamically positioned to approximate ingoing null geodesics, a technique originally due to Goldwirth and Piran [132] to avoid the problems with a uniform r-grid near a horizon, which arise from the degeneracy of an areal coordinate on a stationary horizon. The time evolution uses the method of lines with a fourth-order Runge–Kutta integrator, which introduces further high frequency filtering.

4.2.5 Stability

PITT code
Analytic stability analysis of the finite-difference equations has been crucial in the development of a stable evolution algorithm, subject to the standard Courant–Friedrichs–Lewy (CFL) condition for an explicit code. Linear stability analysis on Minkowski and Schwarzschild backgrounds showed that certain field variables must be represented on the half-grid [142, 56]. Nonlinear stability analysis was essential in revealing and curing a mode-coupling instability that was not present in the original axisymmetric version of the code [53, 195]. This has led to a code whose stability persists even in the regime that the u-direction, along which the grid flows, becomes spacelike, such as outside the velocity-of-light cone in a rotating coordinate system. Several tests were used to verify stability. In the linear regime, robust stability was established by imposing random initial data on the initial characteristic hypersurface and random constraint-violating boundary data on an inner worldtube. (Robust stability was later adopted as one of the standardized tests for Cauchy codes [5].) The code ran stably for 10,000 grid-crossing times under these conditions [56]. The PITT code was the first 3D general-relativistic code to pass this robust stability test. The use of random data is only possible in sufficiently weak cases where terms quadratic in the field gradients are not dominant. Stability in the highly nonlinear regime was tested in two ways. Runs for a time of 60,000 M were carried out for a moving, distorted Schwarzschild black hole (of mass M), with the marginally-trapped surface (MTS) at the inner boundary tracked and its interior excised from the computational grid [139, 148]. At the time, this was by far the longest simulation of a dynamic black hole.
Furthermore, the scattering of a gravitational wave off a Schwarzschild black hole was successfully carried out in the extreme nonlinear regime where the backscattered Bondi news was as large as N = 400 (in dimensionless geometric units) [53], showing that the code can cope with the enormous power output N^2 c^5 ∕G ≈ 10^{60} W in conventional units. This exceeds the power that would be produced if, in one second, the entire galaxy were converted to gravitational radiation.
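The robust stability test described above can be mimicked in a one-dimensional toy: feed random initial data and random inflow boundary data into a CFL-stable scheme and check that the solution stays bounded over many grid-crossing times. Here first-order upwind advection of u_t + u_x = 0 stands in for the evolution code; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, courant, crossings = 100, 0.5, 100
v = rng.uniform(-1, 1, N)                        # random initial data

steps = int(crossings / courant) * N             # steps needed for `crossings` transits
for _ in range(steps):
    v[1:] = v[1:] - courant * (v[1:] - v[:-1])   # upwind update (CFL-stable for courant <= 1)
    v[0] = rng.uniform(-1, 1)                    # random inflow boundary data each step

max_amp = np.max(np.abs(v))                      # bounded: no exponential growth
```

For courant ≤ 1 the upwind update is a convex combination of neighboring values, so the max norm can never exceed the largest data fed in; an unstable scheme would instead blow up exponentially under this test.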

Cubed-sphere codes
The characteristic codes using the cubed-sphere grid [134, 242] are based upon the same variables and equations as the PITT code, with the same radial integration scheme. Thus, stability should be expected to be maintained, since the main difference is that the interpatch interpolations are simpler, i.e., only one-dimensional. This appears to be the case in the reported tests, although robust stability was not directly confirmed. In particular, the LEO code showed no sign of instability in long-time, high-resolution simulations of the quasinormal ringdown of a scalar field scattering off a Schwarzschild black hole. Angular dissipation was not necessary.

Canberra code
Analytic stability analysis of the underlying finite-difference equations is impractical because of the extensive mix of spectral techniques, higher-order methods, and splines. Although there is no clear-cut CFL limit on the code, stability tests show that there is a limit on the time step. The damping of high frequency modes due to the implicit filtering would be expected to suppress numerical instability, but the stability of the Canberra code is nevertheless subject to two qualifications [34, 35, 36]: (i) At late times (less than 100 M), the evolution terminates as it approaches an event horizon, apparently because of a breakdown of the NQS gauge condition, although an analysis of how and why this should occur has not yet been given. (ii) Numerical instabilities arise from dynamic inner boundary conditions and restrict the inner boundary to a fixed Schwarzschild horizon. Tests in the extreme nonlinear regime were not reported.

4.2.6 Accuracy

PITT code
The designed second-order accuracy has been verified in an extensive number of testbeds [56, 53, 139, 311, 312], including new exact solutions specifically constructed in null coordinates for the purpose of convergence tests.

In addition to these testbeds, a set of linearized solutions has recently been obtained in the Bondi–Sachs gauge for either Schwarzschild or Minkowski backgrounds [47]. The solutions are generated by the introduction of a thin shell of matter whose density varies with time and angle. This gives rise to an exterior field containing gravitational waves. For a Minkowski background, the solution is given in exact analytic form and, for a Schwarzschild background, in terms of a power series. The solutions are parametrized by frequency and spherical harmonic decomposition. They supply a new and very useful testbed for the calibration and further development of characteristic evolution codes for Einstein’s equations, analogous to the role of the Teukolsky waves in Cauchy evolution. The PITT code showed clean second-order convergence in both the L2 and L∞ error norms in tests based upon waves in a Minkowski background. However, in applications involving very high resolution or nonlinearity, there was excessive short-wavelength noise, which degraded convergence. Recent improvements in the code [16] have now established clean second-order convergence in the nonlinear regime.
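The convergence-rate measurement used in such tests can be sketched generically: compute the error norm against an exact solution at two resolutions and estimate the rate as log₂(E(h)/E(h/2)). Below, a centered first derivative of sin(x) stands in for the evolution code; any second-order scheme yields a rate near 2 in both the L2 and L∞ norms.

```python
import numpy as np

def error_norms(N):
    """L2 and Linf errors of the centered derivative of sin on an N-point periodic grid."""
    h = 2 * np.pi / N
    x = np.arange(N) * h
    deriv = (np.roll(np.sin(x), -1) - np.roll(np.sin(x), 1)) / (2 * h)
    err = deriv - np.cos(x)              # exact derivative is cos(x)
    return np.sqrt(np.mean(err**2)), np.max(np.abs(err))

l2_c, linf_c = error_norms(64)           # coarse grid
l2_f, linf_f = error_norms(128)          # fine grid, half the spacing
rate_l2 = np.log2(l2_c / l2_f)           # ~ 2 for a second-order scheme
rate_linf = np.log2(linf_c / linf_f)
```

Comparing the two norms is also the diagnostic mentioned below for cubed-sphere codes: a localized error source (e.g., at patch corners) degrades the L∞ rate before the L2 rate.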

It would be of great value to increase the accuracy of the code to higher order. However, the marching algorithm, which combines the radial integration of the hypersurface and evolution equations, does not fall into the standard categories that have been studied in computational mathematics. In particular, there are no energy estimates for the analytic problem, which could serve as a guide to design a higher-order stable algorithm. This is an important area for future investigation.

Cubed-sphere codes
Convergence of the cubed-sphere code developed by Reisswig et al. [242] was also checked using linearized waves in a Minkowski background. The convergence rate for the L2 error norm was approximately second order, but in some cases there was significant degradation. They conjectured that the underlying source of error arises at the corners of the six patches. Comparison with the L∞ error norm would discriminate such a localized source of error, but such results were not reported.

The designed convergence rate of the \partial operator used in the LEO code was verified for second-, fourth- and eighth-order finite-difference approximations, using the spin-weight 2 spherical harmonic {}_2 Y_{43} as a test. Similarly, the convergence of the integral relations governing the orthonormality of the spin-weighted harmonics was verified. The code includes coupling to a Klein–Gordon scalar field. Although convergence of the evolution code was not explicitly checked, high accuracy in the linearized regime with a Schwarzschild background was demonstrated in the simulation of quasinormal ringdown of the scalar field and in the energy-momentum conservation of the scalar field.

Canberra code
The complexity of the algorithm and NQS gauge makes it problematic to establish accuracy by direct means. Exact solutions do not provide an effective convergence check, because the Schwarzschild solution is trivial in the NQS gauge and other known solutions in this gauge require dynamic inner boundary conditions, which destabilize the present version of the code. Convergence to linearized solutions is a possible check but has not yet been performed. Instead, indirect tests by means of geometric consistency and partial convergence tests are used to calibrate accuracy. The consistency tests were based on the constraint equations, which are not enforced during null evolution except at the inner boundary. The balance between mass loss and radiation flux through ℐ+ is a global consequence of these constraints. No appreciable growth of the constraints was noticeable until within 5 M of the final breakdown of the code. In weak-field tests where angular resolution does not dominate the error, partial convergence tests based upon varying the radial grid size verify the eighth-order convergence in the shear expected from the Runge–Kutta integration and splines. When the radial source of error is small, reduced error with smaller time step can also be discerned.

In practical runs, the major source of inaccuracy is the spherical-harmonic resolution, which was restricted to ℓ ≤ 15 by hardware limitations. Truncation of the spherical-harmonic expansion has the effect of modifying the equations to a system for which the constraints are no longer satisfied. The relative error in the constraints is 10^{-3} for strong-field simulations [33].

4.2.7 Nonlinear scattering off a Schwarzschild black hole

A natural physical application of a characteristic evolution code is the nonlinear version of the classic problem of scattering off a Schwarzschild black hole, first solved perturbatively by Price [238]. Here the inner worldtube for the characteristic initial-value problem consists of the ingoing branch of the r = 2m hypersurface (the past horizon), where Schwarzschild data are prescribed. The nonlinear problem of a gravitational wave scattering off a Schwarzschild black hole is then posed in terms of data on an outgoing null cone, which describe an incoming pulse with compact support. Part of the energy of this pulse falls into the black hole and part is backscattered to ℐ+. This problem has been investigated using both the PITT and Canberra codes.

The Pittsburgh group studied the backscattered waveform (described by the Bondi news function) as a function of incoming pulse amplitude. The computational eth-module smoothly handled the complicated time-dependent transformation between the non-inertial computational frame at ℐ+ and the inertial (Bondi) frame necessary to obtain the standard “plus” and “cross” polarization modes. In the perturbative regime, the news corresponds to the backscattering of the incoming pulse off the effective Schwarzschild potential. When the energy of the pulse is no larger than the central Schwarzschild mass, the backscattered waveform still depends roughly linearly on the amplitude of the incoming pulse. However, for very high amplitudes the waveform behaves quite differently. Its amplitude is greater than that predicted by linear scaling, and its shape drastically changes and exhibits extra oscillations. In this very-high-amplitude case, the mass of the system is completely dominated by the incoming pulse, which essentially backscatters off itself in a nonlinear way.

The Canberra code was used to study the change in Bondi mass due to the radiation [33]. The Hawking mass M_H(u, r) was calculated as a function of radius and retarded time, with the Bondi mass M_B(u) then obtained by taking the limit r → ∞. The limit had good numerical behavior. For a strong initial pulse with ℓ = 4 angular dependence, in a run from u = 0 to u = 70 (in units where the interior Schwarzschild mass is 1), the Bondi mass dropped from 1.8 to 1.00002, showing that almost half of the initial energy of the system was backscattered and that a surprisingly negligible amount of energy fell into the black hole. A possible explanation is that the truncation of the spherical harmonic expansion cuts off wavelengths small enough to effectively penetrate the horizon. The Bondi mass decreased monotonically in time, as required theoretically, but its rate of change exhibited an interesting pulsing behavior whose time scale could not be obviously explained in terms of quasinormal oscillations. The Bondi mass loss formula was confirmed with relative error of less than 10^{-3}. This is impressive accuracy considering the potential sources of numerical error introduced by taking the limit of the Hawking mass with limited resolution. The code was also used to study the appearance of logarithmic terms in the asymptotic expansion of the Weyl tensor [37]. In addition, the Canberra group studied the effect of the initial pulse amplitude on the waveform of the backscattered radiation, but did not extend their study to the very-high-amplitude regime in which qualitatively interesting nonlinear effects occur.

4.2.8 Black hole in a box

The PITT code has also been implemented to evolve along an advanced-time foliation by ingoing null cones, with data given on a worldtube at their outer boundary and on the initial ingoing null cone. The code was used to evolve a black hole in the region interior to the worldtube by implementing a horizon finder to locate the MTS on the ingoing cones and excising its singular interior [141]. The code tracks the motion of the MTS and measures its area during the evolution. It was used to simulate a distorted “black hole in a box” [139]. Data at the outer worldtube were induced from a Schwarzschild or Kerr spacetime, but the worldtube was allowed to move relative to the stationary trajectories; i.e., with respect to the grid the worldtube is fixed but the black hole moves inside it. The initial null data consisted of a pulse of radiation, which subsequently travels outward to the worldtube, where it reflects back toward the black hole. The approach of the system to equilibrium was monitored by the area of the MTS, which also equals its Hawking mass. When the worldtube is stationary (static or rotating in place), the distorted black hole inside evolved to equilibrium with the boundary. A boost or other motion of the worldtube with respect to the black hole did not affect this result. The MTS always reached equilibrium with the outer boundary, confirming that the motion of the boundary was “pure gauge”.

This was the first code that ran “forever” in a dynamic black-hole simulation, even when the worldtube wobbled with respect to the black hole to produce artificial periodic time dependence. An initially distorted, wobbling black hole was evolved for a time of 60,000 M, longer by orders of magnitude than permitted by the stability of other existing black hole codes at the time. This exceptional performance opens a promising new approach to handle the inner boundary condition for Cauchy evolution of black holes by the matching methods reviewed in Section 5.

Note that setting the pulse to zero is equivalent to prescribing shear-free data on the initial null cone. Combined with Schwarzschild boundary data on the outer worldtube, this would be complete data for a Schwarzschild spacetime. However, the evolution of such shear-free null data combined with Kerr boundary data would have an initial transient phase before settling down to a Kerr black hole. This is because the twist of the shear-free Kerr null congruence implies that Kerr data specified on a null hypersurface are not generally shear free. The event horizon is an exception, but Kerr null data on other null hypersurfaces have not been cast in explicit analytic form. This makes the Kerr spacetime an awkward testbed for characteristic codes. (Curiously, Kerr data on a null hypersurface with a conical-type singularity do take a simple analytic form, although one unsuitable for numerical evolution [108].) Using some intermediate analytic results of Israel and Pretorius [236], Venter and Bishop [59] have recently constructed a numerical algorithm for transforming the Kerr solution into Bondi coordinates and in that way provide the necessary null data numerically.
