4.5 Problems for future considerations

Now we want to discuss some of the problems that remain to be dealt with. Since the conformal approach has so far been tested numerically mostly in 2D cases with unphysical global structure, it is necessary to implement full 3D codes that can run on sufficiently general data. Work on this is well underway, and first results that confirm the expectations based on the two-dimensional codes have been obtained [95, 98]. Husa and Lechner are currently developing a full 3D code using a method of lines approach, to be integrated into the Cactus framework [155], which features an automated generator for the required right hand sides [99].
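The method of lines idea can be sketched in a few lines. The following toy example uses the 1D advection equation, not the conformal field equations: discretizing space reduces the PDE to a large ODE system du/dt = F(u), which is then handed to a standard ODE integrator, here a classical fourth-order Runge–Kutta step.

```python
import numpy as np

# Toy method-of-lines setup (for the 1D advection equation, not the
# conformal field equations): discretizing space reduces the PDE to a
# large ODE system du/dt = F(u), which is then integrated in time with
# a classical fourth-order Runge-Kutta step.

def rhs(u, dx):
    """Semi-discrete right hand side F(u) for u_t = -u_x (periodic grid)."""
    return -(np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def rk4_step(u, dt, dx):
    """One classical RK4 step of the ODE system du/dt = F(u)."""
    k1 = rhs(u, dx)
    k2 = rhs(u + 0.5 * dt * k1, dx)
    k3 = rhs(u + 0.5 * dt * k2, dx)
    k4 = rhs(u + dt * k3, dx)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx                          # CFL-limited time step
u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # smooth initial pulse
u = u0.copy()

for _ in range(round(1.0 / dt)):       # one full period of the domain
    u = rk4_step(u, dt, dx)
# after one period the pulse has travelled once around, so u is close to u0
```

The attraction of this splitting in practice is that the right hand side function is independent of the time integrator, which is what makes automated generation of the right hand sides feasible.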

Apart from the requirement of having a general 3D code, there are other problems that require more consideration. Let us start with the boundary condition at the edge of the computational domain. It would be interesting to see how the results of [72] translate to the conformal field equations. This would provide mathematically reasonable boundary conditions for the computational domain, and their implementation could result in stable codes that do not need any additional transition zones beyond ℐ and that are compatible with the evolution equations, unlike the procedure used in [46].
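To illustrate the flavour of boundary conditions that are compatible with the evolution equations, here is a deliberately simple 1D sketch in which data are prescribed only for the incoming characteristic mode; the equation, scheme, and boundary data are toy choices for illustration, not those of any relativity code.

```python
import numpy as np

# Toy 1D sketch of a characteristic-compatible boundary treatment:
# for u_t = -u_x the single characteristic moves to the right, so
# boundary data are prescribed only at the left (inflow) edge, while
# the right edge is updated with the same one-sided upwind stencil as
# the interior, letting the pulse leave the grid without an
# artificial reflection.  All choices here are illustrative only.

def step(u, dt, dx):
    un = u.copy()
    un[1:] = u[1:] - (dt / dx) * (u[1:] - u[:-1])  # upwind update
    un[0] = 0.0   # incoming characteristic: prescribe (trivial) data
    return un

n = 200
dx = 1.0 / n
dt = 0.5 * dx
x = np.linspace(0.0, 1.0, n)
u = np.exp(-200.0 * (x - 0.3) ** 2)    # pulse moving towards x = 1

for _ in range(int(2.0 / dt)):         # long enough for the pulse to exit
    u = step(u, dt, dx)
# the pulse has left through the right boundary; u is essentially zero
```

The point of the sketch is that nothing is imposed on the outgoing mode at the right boundary, so no transition zone is needed there.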

Another problem has to do with the constraint equations. It is by now a well-known fact that in all numerical codes attempting to solve the Einstein equations the constraints, while analytically preserved by the evolution, are not preserved numerically. Even when the constraints are satisfied exactly in the initial data, the numerical solution drifts away from the constraint surface in the course of the evolution, so the codes do not provide solutions of the full set of field equations. This problem is present in the conformal approach as well. In a way, one could claim that it is not as serious here as it might be in other approaches, for the simple reason that the computation is done in “conformal” time and not in physical time. Supposing we have chosen a conformal gauge that allows us to compute up to time-like infinity in a finite computational time, then we can (hopefully) control the deviation of the constraints.
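The drift phenomenon can be demonstrated with a toy system that has an exactly conserved quantity playing the role of a constraint; the system and scheme below are made up purely for illustration.

```python
import numpy as np

# Toy illustration of constraint drift: for the oscillator system
# x' = v, v' = -x the quantity C = x^2 + v^2 - 1 plays the role of a
# constraint that is exactly preserved by the continuum equations.
# A simple explicit scheme (forward Euler) nevertheless drives the
# numerical solution off the constraint surface, so C grows steadily
# and has to be monitored during the evolution.

def evolve(steps, dt):
    x, v = 1.0, 0.0
    drift = []
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x      # forward Euler update
        drift.append(x * x + v * v - 1.0)  # constraint residual
    return np.array(drift)

drift = evolve(steps=1000, dt=0.01)
# drift grows monotonically even though C = 0 holds analytically
```

Monitoring such a residual norm during the run is the minimal diagnostic; the harder question, addressed below, is where the growth comes from.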

Ultimately, this is, of course, not a solution, and the source of the constraint deviation must be found. This means that one needs to study the system that propagates the constraints. The form of the general conformal field equations in the conformal Gauß gauge suggests that the crucial subset is the Bianchi equation for the rescaled conformal tensor, because this is the only part that determines the wave properties of the gravitational field. Therefore, it would be interesting to study perturbations of the constraint propagation system in a reasonably simple case, in order to see whether there are modes that diverge from the constraint hypersurface.
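Schematically, such a perturbation analysis amounts to inspecting the spectrum of the linearized constraint propagation operator; the following sketch checks a constant-coefficient system dC/dt = M C, where the matrix M is an arbitrary 2×2 toy example and not derived from the conformal field equations.

```python
import numpy as np

# Schematic mode analysis for a constraint propagation system of the
# form dC/dt = M C: eigenvalues of M with positive real part signal
# perturbation modes that diverge from the constraint hypersurface.
# The matrix below is an arbitrary 2x2 toy example, not derived from
# the conformal field equations.

def growing_modes(M):
    """Return the eigenvalues of M whose real part is positive."""
    eigvals = np.linalg.eigvals(M)
    return eigvals[eigvals.real > 1e-12]

M = np.array([[-1.0, 0.5],    # one damped constraint mode ...
              [0.0,  0.2]])   # ... and one growing mode
bad = growing_modes(M)        # contains the single growing eigenvalue
```

In a realistic setting M would depend on the background solution and on the spatial frequency of the perturbation, but the diagnostic question is the same: are there eigenvalues with positive real part?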

Another problem with the constraint equations is due to the particulars of the conformal approach. The necessity of dividing by the conformal factor to construct the initial data is annoying. The way around this problem is to solve the conformal constraints directly. While it has been possible to do this in the spherically symmetric case [90], no result is available for the general case. It would be very desirable to have another way of constructing the initial data, because then one could more easily construct data that evolve into space-times with multiple black holes. An interesting and certainly fruitful line of research would be to explore the new methods of treating the constraints developed by Corvino and Schoen and to see whether they can be used for the construction of initial data. The fact that there are initial data that are exactly Schwarzschild outside a compact set would allow us to dispense with the numerical division by the conformal factor in the computation of the data, because these are known explicitly near ℐ.
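The numerical difficulty of the division can be made explicit in one line of arithmetic: near the zero set of the conformal factor, a fixed absolute error in the numerator is amplified by the reciprocal of the conformal factor. The profile of Ω and all numbers below are made up purely for illustration.

```python
import numpy as np

# Why the division by the conformal factor is delicate: near the zero
# set of Omega (i.e. near scri) a fixed absolute error eps in the
# numerator is amplified by 1/Omega, so the error of the quotient
# blows up as the boundary is approached.  The profile of Omega and
# the numbers are made up for illustration.

x = np.linspace(0.0, 1.0, 101)
omega = 1.0 - x                      # toy conformal factor vanishing at x = 1
f_exact = omega * np.sin(3.0 * x)    # a quantity known to vanish with Omega
eps = 1e-8
f_noisy = f_exact + eps              # fixed absolute numerical error

with np.errstate(divide="ignore"):
    q = f_noisy / omega              # the required division
err = np.abs(q - np.sin(3.0 * x))    # error of the computed quotient
# err grows like eps/Omega towards the boundary and is infinite at it
```

This is exactly why data that are known in closed form near ℐ, such as exactly Schwarzschild data outside a compact set, would let one avoid the division altogether.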

As a last problem in connection with the conformal approach, one should mention that Friedrich’s work on i⁰ provides a way to evolve Cauchy data specified on an asymptotically Euclidean hypersurface to hyperboloidal initial data. A code that performs this kind of evolution could provide the initial data for an evolution code for the hyperboloidal initial value problem. This area is entirely unexplored. Surely there will be difficult problems in the numerical treatment of the transport equations related to the total characteristic at spatial infinity. But the work on this problem is worthwhile because it would provide the final step towards the ultimate goal of a global simulation of an isolated system.

A problem that affects all the work in numerical relativity today is the obscure nature of the gauge conditions. Currently, there is not much understanding of the effects of a gauge condition on the resulting nature of the coordinates (frame, conformal factor). Most of the work done on these problems is related to the choice of a lapse function; in particular, several proposals have been made for selecting a time coordinate. These are mostly dictated by formal considerations, such as the need to make a system hyperbolic or the ease of implementation. To some extent this is justified, because the physics cannot depend on the coordinates that are used. But it is also well known that there are “good” and “bad” coordinates; what “good” and “bad” mean depends to a large extent on what the goal is.

Ideally, coordinates should be tied to the geometry so that they acquire a more invariant nature. In 1D cases one can set up a system of double null coordinates (and a derived system of time and space coordinates). This provides a gauge that is good as long as the geometry is well behaved [90]. Unfortunately, this gauge cannot be generalized in a straightforward way to higher dimensions (some attempts have been made in [46]). Probably one should adopt a pragmatic viewpoint towards the problem of finding appropriate coordinates and regard the gauge sources as knobs that have to be adjusted by trial and error. Maybe it is possible, at least to some extent, to let the code do the “twiddling” automatically. This requires that one be able to formulate criteria for a “good solution” that can be checked by the computer. Furthermore, it is also necessary that a change in the gauge sources not change the characteristics of the system, because otherwise it is easy to get into situations where the system is no longer hyperbolic.
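In its crudest form, such automated “twiddling” is just a parameter search over the gauge sources against a computable quality criterion. The criterion and the numbers in the sketch below are made-up stand-ins; in practice one might use a norm of the constraint violation or a measure of grid stretching.

```python
import numpy as np

# Crude sketch of letting the code do the "twiddling": treat a gauge
# source parameter p as a knob, run a (mock) evolution for each
# setting, and pick the value that minimizes a quality criterion.
# The criterion below is a made-up stand-in; in practice it could be
# a norm of the constraint violation or a measure of grid stretching.

def quality(p):
    """Mock criterion: smaller values stand for better coordinates."""
    return (p - 0.7) ** 2 + 0.1       # hypothetical; best setting at p = 0.7

candidates = np.linspace(0.0, 2.0, 41)     # trial settings of the knob
scores = [quality(p) for p in candidates]
best = candidates[int(np.argmin(scores))]  # automated "trial and error"
```

The caveat in the text applies directly here: each trial setting must leave the characteristics of the system intact, otherwise the search can wander into non-hyperbolic regimes.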

Finally, there is no doubt that the issue of the correct boundary condition for codes based on the standard Einstein equations also needs more attention. Such codes should try to implement the boundary conditions given in [72]. Although it is not clear what these conditions mean physically, chances are good that they will produce stable codes (see e.g. the discussion of numerical boundary conditions in [86]). If this is the case, then one could try to determine which ones work best by comparison with exact radiating solutions, such as the boost-rotation symmetric solutions discussed in [22]. Another test would be a numerical comparison with the data computed by a hyperboloidal evolution code. Such tests are important because so far the standard codes have been tested only against linearized solutions; this is the regime where one would expect them to work, since the boundary condition is still benign there. In this way, one could not only select physically reasonable boundary conditions for the standard codes, but also check how well they perform in comparison with the conformal codes. In particular, one could see how accurately the radiation extraction can be done with those codes and whether the accuracy is good enough for LIGO waveforms. These could then be computed with a clear conscience using the standard codes, provided they are more efficient than the conformal codes.
