Let us discuss the main properties of each of these approaches with respect to their treatment of infinity. In this connection it is useful to look at conformal diagrams. In all three cases the physical situation is the same: A source, indicated by a shaded brown region, evolves between the time-like past and future infinities and thereby emits gravitational radiation, which escapes towards future null infinity ℐ⁺. The three methods differ in how they set up an evolution process in order to simulate the physical system. The goal is to obtain the radiation data received by an observer on ℐ⁺ as accurately as possible (see [42] for a discussion of the issues involved in the reception process).

Figure 9 sketches the geometry implied by solving the Cauchy problem for the standard Einstein equations in any of the various forms which exist today: the ADM formulation, old [5] and improved [16], or any one of the hyperbolic formulations like, e.g., those in [1, 61]. In this approach the physical space-time is decomposed into space and time by the introduction of a time coordinate. The hypersurfaces of constant physical time are Cauchy surfaces and, hence, asymptotically Euclidean. They are indicated by green lines. The blue line corresponds to a (normally time-like) hypersurface which has to be introduced because, for the standard Einstein equations formulated in the physical space-time, null infinity is infinitely far away. The numerical grids are distributed along the constant-time hypersurfaces, and with each time-step they evolve from one hypersurface to the next. Their outermost grid-points move, in the course of the evolution, along that time-like hypersurface.

Thus, inevitably, we are facing the task of solving an initial boundary value problem (IBVP) for the Einstein equations. Such problems are notoriously difficult to treat numerically. This, in fact, is one of the big problems in scientific computing as witnessed by the following quotation [142]:

The difficulties caused by boundary conditions in scientific computing would be hard to overemphasize. Boundary conditions can easily make the difference between a successful and an unsuccessful computation, or between a fast and a slow one. Yet in many important cases, there is little agreement about what the proper treatment of the boundary should be. While these remarks apply to any numerical problem, they are particularly relevant to numerical relativity. One of the sources of difficulty is that although some numerical boundary conditions come from discretizing the boundary conditions for the continuous problem, other ``artificial'' or ``numerical boundary conditions'' do not. The reason is that the number of boundary conditions required by a finite difference formula depends on its stencil, not on the equation being modeled. Thus even a complete mathematical understanding of the initial boundary value problem to be solved (which is often lacking) is in general not enough to ensure a successful choice of numerical boundary conditions. This situation reaches an extreme in the design of what are variously known as ``open'', ``radiation'', ``absorbing'', ``non-reflecting'', or ``far-field'' boundary conditions, which are numerical artifacts designed to limit a computational domain in a place where the mathematical problem has no boundary at all.

Despite these remarks, perhaps the most basic and useful advice one can offer concerning numerical boundary conditions is this: Before worrying about discretization, make sure to understand the mathematical problem. If the IBVP is ill-posed, no amount of numerical cleverness will lead to a successful computation. This principle may seem too obvious to deserve mention, but it is surprising how often it is forgotten.

Until quite recently, no result on the IBVP for the Einstein equations was available. This situation has changed with the work in [59], where it is proved that the IBVP for the Einstein equations in the physical space-time is well-posed for a certain class of boundary conditions. In rough terms, these boundary conditions can be understood as follows: For an arbitrarily specified time-like unit vector tangent to the boundary we can define a null-tetrad by taking the two real null-vectors as lying in the two-dimensional space spanned by that vector and the unit vector normal to the boundary. This fixes the null-tetrad up to boosts in that plane and rotations in the plane orthogonal to it. With respect to that tetrad we can define the (complex) Weyl tensor components Ψ₀ and Ψ₄. Then, the boundary conditions have the form

Ψ₀ = c Ψ₄ + d Ψ̄₄ + q.   (36)

Here *c*, *d*, and *q* are complex functions on the boundary. *q* can be freely specified, thus representing the free data, while *c* and *d* are restricted by some condition, which implies in particular that |c| + |d| ≤ 1. They can be regarded as specifying the reflection and transmission properties of the boundary. In the frame defined above, the component Ψ₄ resp. Ψ₀ can be interpreted as the part of the Weyl tensor which propagates orthogonally across the boundary in the outward or inward direction, respectively. The boundary conditions (36) essentially say that we can specify the value of the ingoing part freely and couple some parts of the outgoing field back to it.
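The reflection/transmission role of such a boundary condition can be illustrated on a toy model (the grid sizes and the coefficient value below are illustrative choices, not taken from the article): the flat 1D wave equation decouples into characteristic variables, and a condition of the type (36) couples the ingoing variable to the outgoing one at the boundary.

```python
import numpy as np

# Toy model of a reflection/transmission boundary condition (illustrative only):
# the 1-d wave equation u_tt = u_xx decouples into the characteristic variables
# w_minus = u_t - u_x (right-moving) and w_plus = u_t + u_x (left-moving).
# At the right boundary we impose   w_plus = c * w_minus + q,
# the analogue of coupling the ingoing field to the outgoing one.
N = 200                       # grid cells on [0, 2]
dx = 2.0 / N
dt = dx                       # CFL = 1: upwind transport is then an exact shift
x = np.linspace(0.0, 2.0, N + 1)
c, q = 0.5, 0.0               # half of the outgoing pulse is reflected back

wm = np.exp(-((x - 0.5) / 0.05) ** 2)   # outgoing Gaussian pulse, peak 1
wp = np.zeros_like(x)                   # no ingoing field initially

for _ in range(N):                      # evolve until t = 2
    wm[1:] = wm[:-1].copy()             # exact shift to the right
    wm[0] = 0.0                         # nothing enters from the left
    wp[:-1] = wp[1:].copy()             # exact shift to the left
    wp[-1] = c * wm[-1] + q             # boundary condition at x = 2

print(wp.max())   # ≈ 0.5: the reflected (ingoing) pulse has amplitude c
```

With c = 0 the pulse is absorbed completely; with c = -1 it is reflected with flipped sign, mimicking the range of reflection/transmission behaviours the coefficients encode.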

Consider the boundary condition with *q* = 0 and c = d = 0, viz. Ψ₀ = 0. This looks like a completely absorbing boundary condition or, in other words, like the condition of no incoming radiation. However, this is deceiving. Recall that the Weyl tensor components are defined with respect to a time-like unit vector. But there is no distinguished time-like unit-vector available, so that they are all equivalent. This means that the boundary conditions, and hence the presence or absence of incoming radiation, are *observer dependent*. Clearly, the condition of no incoming radiation is not compatible with the Einstein equations anyway, because outgoing waves will always be accompanied by backscattered waves generated by the non-linear nature of the equations. The impossibility of a local, completely absorbing boundary condition for the scalar wave equation in flat space has already been emphasized in numerical analysis [36]. The deep reason behind this fact seems to be the Lorentz invariance of the equations. While the conditions in (36) give rise to a well-posed IBVP, they have not yet found their way into the numerical work.
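A toy illustration of this failure (my own construction; the mass term, grid parameters, and wave packet are ad hoc): for the flat 1D wave equation the local Sommerfeld condition u_t + u_x = 0 absorbs normally incident waves almost perfectly, but as soon as the waves become dispersive (here modelled by a mass term, playing a role loosely analogous to oblique incidence in higher dimensions) the same local condition leaves a substantial amount of field in the domain.

```python
import numpy as np

# Leapfrog for u_tt = u_xx - m^2 u on [0, 10] with the local ("Sommerfeld")
# boundary condition u_t + u_x = 0 at the right end. For m = 0 this condition
# absorbs the normally incident pulse almost exactly; for m > 0 the waves are
# dispersive (a crude stand-in for oblique incidence) and absorption fails.
def remaining_norm(m, N=800, L=10.0, T=8.0, k0=3.0):
    dx, x = L / N, np.linspace(0.0, L, N + 1)
    dt = 0.5 * dx
    packet = lambda s: np.exp(-((s - 6.0) / 0.5) ** 2) * np.cos(k0 * (s - 6.0))
    u_old, u = packet(x), packet(x - dt)      # right-moving initial data
    lam2 = (dt / dx) ** 2
    for _ in range(int(T / dt)):
        u_new = np.empty_like(u)
        u_new[1:-1] = (2 * u[1:-1] - u_old[1:-1]
                       + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2])
                       - (dt * m) ** 2 * u[1:-1])
        u_new[0] = 0.0
        u_new[-1] = u[-1] - (dt / dx) * (u[-1] - u[-2])  # Sommerfeld, upwinded
        u_old, u = u, u_new
    return np.sqrt(np.sum(u ** 2) * dx)      # L2 norm left in the domain

r0, r2 = remaining_norm(0.0), remaining_norm(2.0)
print(r0, r2)   # r0 is tiny, r2 is not: the local condition is not absorbing
```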

Returning to Figure 9, we see that there are several fundamental difficulties with this (currently standard) approach. Recall that the goal is to obtain radiative information near ℐ⁺. To achieve this goal we need to arrange for two things: we need to make sure that we simulate an asymptotically flat space-time, and we need a stable code in order to get meaningful results. In this approach, both these requirements can only be realized by an appropriate boundary condition. The orange line in the diagram indicates the domain of influence of the boundary; the parts of space-time above that line are influenced by the boundary condition. It is obvious from the causal dependencies that the radiative information which can be accessed using this approach is to a large extent affected by the boundary condition. A bad choice of boundary condition, in particular one which is not compatible with the Einstein evolution, or one which is not adequate for asymptotic flatness, can ruin any information gained about the radiation.

So we see that a boundary condition has to have several important properties. It must be

- physically reasonable: in some way it should express the asymptotic flatness of the space-time manifold,
- mathematically reasonable: the IBVP should be well-posed with this boundary condition,
- numerically reasonable: codes which implement the boundary condition should be stable.

We can also see from the diagram that the choice of asymptotically Euclidean hypersurfaces is unfortunate, for the following reason. A natural thing to do in order to increase the accuracy of the results is to push the boundary further outward towards infinity. However, pushing along surfaces of constant time brings us closer to space-like infinity i⁰ and not to the interesting parts in the neighbourhood of ℐ⁺. This implies that the numerical codes not only increase in size, but in addition they have to run longer (in the sense of elapsed physical time) until the interesting regions are covered. The constant-time hypersurfaces do not ``track the radiation''.
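The cost of pushing the boundary outward can be quantified with a back-of-the-envelope model (an illustrative assumption, not a statement from the text): at fixed resolution a boundary at radius R requires N ∝ R grid points per dimension, and covering an elapsed physical time ∝ R requires ∝ R time steps.

```python
# Illustrative cost model (assumption): outer boundary at radius R, fixed
# spatial resolution dx and time step dt. Then N = R/dx points per dimension,
# memory ~ N^3, and total work ~ (R/dt time steps) * N^3 ~ R^4.
def cost(R, dx=1.0, dt=1.0):
    N = R / dx
    steps = R / dt          # elapsed physical time must grow with R
    return {"memory": N ** 3, "work": steps * N ** 3}

near, far = cost(100.0), cost(200.0)
print(far["memory"] / near["memory"], far["work"] / near["work"])  # 8.0 16.0
```

Doubling R thus multiplies the memory by 8 but the total work by 16.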

One idea [21] to overcome the problems encountered in the standard Cauchy approach is the so-called Cauchy-characteristic matching procedure [22]. In this approach one introduces a time-like hypersurface, the *interface*, along which one tries to match a Cauchy problem with a characteristic initial value problem. The Cauchy part provides the interior boundary condition for the characteristic part, while the latter provides the outer boundary condition for the former. Thus, the space-time is foliated by hypersurfaces which are partly space-like and partly null, where the causal character changes in a continuous but non-differentiable way across a two-dimensional surface (see Figure 10).

The characteristic part of the hypersurfaces reaches out all the way to ℐ⁺. The advantage of this procedure is that the problem of finding the correct boundary condition has been eliminated by the introduction of the characteristic part of the scheme.

The characteristic part has been implemented numerically in a very successful way (see the Living Reviews article [146] for a recent review on that topic). It is based on the Bondi-Sachs equations for the gravitational field on null-hypersurfaces. Using a ``compactified coordinate'' along the generators of the null-hypersurfaces it is even possible to ``bring null-infinity to finite places''. However, this is different from the conformal compactification because the metric is not altered. This necessarily leads to equations which ``notice where infinity is'' because they degenerate there. Because of this degeneracy the solutions can be singular unless very specific boundary values are prescribed. This is essentially the peeling property of the fields in disguise.
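The degeneracy can be made explicit with a model compactifying coordinate (the specific map x = r/(1+r) is an illustrative choice, not the Bondi-Sachs one): since the metric is not rescaled, radial derivatives transform with factors of dx/dr = (1-x)², so the principal coefficient of the second radial derivative in a wave operator vanishes at x = 1, i.e. exactly where infinity has been brought to a finite coordinate place.

```python
# A model "compactified coordinate" x = r/(1+r) maps r in [0, inf) to [0, 1).
# The metric is untouched, so derivatives transform as d/dr = (1-x)^2 d/dx,
# and the coefficient of the second radial derivative degenerates at x = 1.
def x_of_r(r):
    return r / (1.0 + r)

def principal_coeff(x):      # coefficient of d^2/dx^2 coming from d^2/dr^2
    return (1.0 - x) ** 4

for r in (1.0, 10.0, 100.0, 1000.0):
    print(r, x_of_r(r), principal_coeff(x_of_r(r)))
# The coefficient tends to 0 as r -> inf: the equation "notices where infinity is".
```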

The critical place is, of course, the interface where two completely different codes have to be matched. This is not merely a change in the numerical procedures but the entire setup changes across the interface: the geometry, the independent and dependent variables etc. Naturally, this requires a very sophisticated implementation of the interface. Test calculations have been performed successfully for space-times with symmetries (cylindrical, spherical) and/or model equations like non-linear wave equations. Currently, a combined code for doing the pure gravitational problem in three dimensions is being developed. This is also described in more detail in [146].

Finally, we want to discuss the approach based on the conformal compactification using the conformal field equations (see Figure 11). In this case, the arena is not the physical space-time but some other, unphysical manifold which is conformally related to it via a conformal factor Ω.

The physical space-time is the part of the conformal manifold on which Ω is positive. Introducing an appropriate time coordinate in the conformal manifold leads to a foliation by space-like hypersurfaces which also cover the physical manifold. Those hypersurfaces which intersect ℐ⁺ transversely are hyperboloidal hypersurfaces in the physical space-time. It is important to note that they are submanifolds of the conformal manifold, so they do not stop at ℐ⁺ but continue smoothly across it, ℐ⁺ being just another submanifold of the conformal manifold, albeit a special one.

Hyperboloidal initial data (see Section 3.4) are given on one such hypersurface and are subsequently evolved with the symmetric hyperbolic (reduced) system of evolution equations. In contrast to the behaviour discussed above in connection with the characteristic evolution, the conformal field equations do not know where ℐ⁺ is. They allow for the existence of a ℐ⁺ (because the conformal factor Ω is one of the variables in the system), but they have to be told its location on the initial data surface, and from there on this information is carried along by the evolution.
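How the code is ``told'' where ℐ⁺ lies can be sketched as follows (the conformal factor profile and grid below are invented for illustration): on the initial slice Ω is known as a grid function, and ℐ⁺ is located as its zero set, e.g. by a sign change combined with linear interpolation.

```python
# Schematic: locate scri^+ on an initial slice as the zero of the conformal
# factor Omega. The profile Omega = 1 - x^2 is invented for illustration; the
# grid deliberately extends beyond the zero into the unphysical region x > 1.
def locate_scri(x, omega):
    """First zero crossing of omega, by linear interpolation between grid points."""
    for j in range(len(x) - 1):
        if omega[j] > 0.0 >= omega[j + 1]:
            t = omega[j] / (omega[j] - omega[j + 1])
            return x[j] + t * (x[j + 1] - x[j])
    return None     # no zero on this grid: no scri^+ present

n = 120
x = [1.2 * j / n for j in range(n + 1)]     # grid reaching into x > 1
omega = [1.0 - xi * xi for xi in x]         # illustrative conformal factor
print(locate_scri(x, omega))                # ≈ 1.0
```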

What are the advantages of this kind of formulation? First of all, one might have thought that the question of the correct boundary condition for the IBVP for the Einstein equations has now merely been shifted to the even more complicated problem of finding a boundary condition for the conformal field equations. This is true but irrelevant, for the following reason: Suppose we give hyperboloidal initial data on that part of the initial surface which lies inside the physical region. We may now smoothly extend these data into the unphysical region. The structure of the characteristics of the conformal field equations is such that (provided the gauges are chosen appropriately) the outermost characteristic is the light cone. Hence ℐ⁺, being a null hypersurface, is a characteristic for the evolution equations, and this implies that the extension of the initial data into the unphysical region cannot influence the interior. Similarly, it is true that the conformal field equations also need a boundary condition. But since this boundary has been shifted into the unphysical region, there is no need for it to be physically reasonable. It is enough to have a boundary condition which is numerically reasonable, i.e. one which leads to stable codes. The information generated at the boundary travels at most with the velocity of light, and so it cannot cross over into the physical region: ℐ⁺ acts as a ``one-way membrane''. It should be remarked that this is true for the exact analytical case. Numerically, things are somewhat different, but one can expect that any influence from the outside will die away at the order of accuracy of the discretization scheme used.
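The ``one-way membrane'' property is easy to verify on a model problem (a deliberately crude stand-in: a single outgoing advection equation whose only characteristic points out of the physical region, as at ℐ⁺ in a suitable gauge). Changing the data only beyond x = 1 leaves the region x < 1 untouched, because the upwind scheme, like the continuum equation, propagates information only outward.

```python
import numpy as np

# Model of the "one-way membrane": outgoing advection u_t + u_x = 0 on [0, 1.5],
# solved with an upwind scheme (information flows only to the right, i.e. only
# outward across the model "scri" at x = 1). We run twice, changing the data
# only at x > 1, and compare the solutions in the "physical" region x < 1.
def evolve(junk_amplitude, N=300, steps=50):
    dx = 1.5 / N
    dt = 0.8 * dx                      # CFL number 0.8
    x = np.linspace(0.0, 1.5, N + 1)
    u = np.exp(-((x - 0.5) / 0.1) ** 2)
    u[x > 1.0] += junk_amplitude       # arbitrary data in the unphysical region
    for _ in range(steps):
        u[1:] = u[1:] - (dt / dx) * (u[1:] - u[:-1])   # upwind update
        u[0] = 0.0                                      # inflow boundary
    return x, u

x, u_a = evolve(0.0)
_, u_b = evolve(7.0)
print(np.max(np.abs(u_a[x < 1.0] - u_b[x < 1.0])))   # 0.0: no inward influence
```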

A further advantage of the conformal approach is the possibility to study *global* properties of the space-times under consideration. Not only do the hyperboloidal hypersurfaces extend up to and beyond null infinity, but it is also possible to study the long-time behaviour of gravitational fields. If the initial data are small enough so that future time-like infinity i⁺ is a regular point (see Theorem 6), then one can determine in a finite (conformal and computational) time the entire future of the initial surface, and therefore a (semi-)global space-time up to (and even beyond) i⁺. This has been successfully demonstrated by Hübner [77] in the case of spherically symmetric calculations and more recently also in higher dimensions.

The calculation of a global space-time up to i⁺ has the effect of shrinking the region of the computational domain which corresponds to the physical space-time, because ℐ⁺ seems to move through the grid. This means that more and more grid-points are lost for the computation of the physical part. Sometimes it might be more useful to have more resolution there. Then the available gauge freedom can be used to gain complete control over the movement of ℐ⁺ through the grid [38].

The conformal field equations lead to a first order system of approximately sixty equations, the exact number depending on the specific formulation and gauge conditions. In the version derived in Appendix 6 there are 65 equations. This is a very high number of equations. However, on the one hand, one has to compare this with current proposals for hyperbolic systems for the Einstein equations, which advocate numbers of equations of the same order (although these do not even include equations for a conformal factor). On the other hand, the variables which appear in the conformal field equations are of a very geometric nature, and for further investigations of the space-times (finding the geodesics, computing forces, etc.) these variables would have to be computed anyway.

But we can be somewhat more specific: The conformal field equations comprise roughly 60 equations in total, while conventional codes based on the standard Cauchy approach use roughly 10. So there is a factor of 6 between the conformal field equations and the others. But that is not really the point. The complexity (in particular, the memory) scales roughly with the number of grid points, and for a 3D code that is N³, where *N* is the number of grid points per dimension. So already doubling *N* gives a factor of 8. The upshot of this is that the memory requirements are *not* dictated by the number of equations, which contributes only a fixed factor, but by the dimensionality of the code, and this affects both kinds of codes in the same way.
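For concreteness, the arithmetic can be spelled out in a few lines (the grid sizes and the assumption of 8-byte double precision variables are my own illustrative choices):

```python
# Memory for n_vars double-precision grid functions on an N^3 grid (in GiB).
def mem_gib(n_vars, N):
    return n_vars * N ** 3 * 8 / 2 ** 30

# The number of equations is a fixed factor (60 vs 10 -> factor 6) ...
print(mem_gib(60, 100) / mem_gib(10, 100))   # 6.0
# ... while doubling the resolution costs a factor of 8, for either code:
print(mem_gib(10, 200) / mem_gib(10, 100))   # 8.0
print(mem_gib(60, 200))                      # about 3.6 GiB at N = 200
```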

A final word concerning the inclusion of matter into the conformal approach: It is clear that this is more complicated than for the standard Einstein equations. The reason is that the conformal field equations also contain the (rescaled) Weyl curvature, which couples to the energy-momentum tensor via its derivatives. This means that one needs equations not only for those matter fields which appear in the energy-momentum tensor, but also for their derivatives. Furthermore, the fact that under conformal rescalings the trace-free part of the energy-momentum tensor behaves differently from its trace causes additional difficulties. However, these can be overcome under certain circumstances [52, 78].

It was mentioned in Section 3.1 that for Ω = 1 the conformal field equations reduce to the standard Einstein equations together with the vacuum Bianchi identity. This suggests that one should specify the conformal factor Ω initially to be equal to unity in a region which contains the sources (assuming the sources have spatially compact support), decreasing until it vanishes on ℐ⁺. Furthermore, the time-derivative of Ω should be adjusted such that it vanishes in a neighbourhood of the source. In fact, it is possible to do so, but it is not yet known whether, and if so for how long, one can maintain this condition of constant conformal factor around the source. The proposed procedure does not eliminate the need for additional equations for the matter variables, but it might reduce the complexity somewhat.
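A conformal factor with the stated properties is easy to construct; the sketch below (the radii and the cubic transition are arbitrary choices of mine) produces a profile Ω(r) equal to unity on a ball containing the source and dropping smoothly to zero at the location of ℐ⁺:

```python
# Sketch of an initial conformal factor profile: Omega = 1 for r <= a (around
# the compactly supported source), falling to 0 at r = R via a C^1 cubic
# "smoothstep", so Omega and its first derivative are continuous everywhere.
def omega(r, a=2.0, R=10.0):
    if r <= a:
        return 1.0                      # unrescaled region around the source
    if r >= R:
        return 0.0                      # beyond scri^+
    t = (r - a) / (R - a)               # t in (0, 1) across the transition zone
    return 1.0 - (3.0 * t ** 2 - 2.0 * t ** 3)

print(omega(1.0), omega(2.0), omega(6.0), omega(10.0))   # 1.0 1.0 0.5 0.0
```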

We will now discuss some issues related to the construction of hyperboloidal initial data and the implementation of the evolution equations.

Conformal Infinity
Jörg Frauendiener
http://www.livingreviews.org/lrr-2000-4
© Max-Planck-Gesellschaft. ISSN 1433-8351