As seen in Section 2.4.4, spectral methods are very efficient when dealing with $\mathcal{C}^\infty$ functions. However, they lose some of their appeal when dealing with less regular functions, the convergence to the exact functions being substantially slower. Nevertheless, physicists must sometimes deal with such functions. This is the case for the density jump at the surface of strange stars or the formation of shocks, to mention only two examples. In order to maintain spectral convergence, one then needs to introduce several computational domains such that the various discontinuities of the functions lie at the interfaces between the domains. Doing so, in each domain one deals only with $\mathcal{C}^\infty$ functions.

Multidomain techniques can also be valuable when dealing with a physical space either too complicated or too large to be described by a single domain. Related to that, one can also use several domains to increase the resolution in some parts of the space where more precision is required. This can easily be done by using a different number of basis functions in different domains. One then talks about fixed-mesh refinement.

Efficient parallel processing may also require that several domains be used. Indeed, one can assign each domain to a given processor, interprocessor communication then being needed only to match the solution across the various domains. The algorithm of Section 2.6.4 is well adapted to such purposes.

In the following, four different multidomain methods are presented to solve an equation of the type $Lu = S$ on $[-1, 1]$. $L$ is a second-order linear operator and $S$ is a given source function. Appropriate boundary conditions are given at the boundaries $x = -1$ and $x = 1$.

For simplicity the physical space is split into two domains:

- first domain: $x \le 0$, described by the local coordinate $x_1 = 2x + 1$, $x_1 \in [-1, 1]$,
- second domain: $x \ge 0$, described by the local coordinate $x_2 = 2x - 1$, $x_2 \in [-1, 1]$.

If $x \le 0$, a function $u$ is described by its interpolant in terms of $x_1$: $I_N u(x) = \sum_{i=0}^{N} \tilde{u}^1_i T_i\left(x_1(x)\right)$. The same is true for $x \ge 0$ with respect to the variable $x_2$. Such a set-up is obviously appropriate to deal with problems where discontinuities occur at $x = 0$, that is, at $x_1 = 1$ and $x_2 = -1$.
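
As a concrete illustration, the following Python sketch (using NumPy's Chebyshev utilities; the function $f(x) = |x|$ and the truncation order are illustrative choices, not taken from the text) compares the coefficients of the interpolant of a function with a kink at $x = 0$ on a single domain with those obtained on the first domain, where the restricted function is smooth:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16
t = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev-Gauss-Lobatto nodes in [-1, 1]

# Single domain: interpolate f(x) = |x| on [-1, 1]; the kink at x = 0
# contaminates the whole spectrum (only algebraic decay of the coefficients).
c_single = C.chebfit(t, np.abs(t), N)

# First domain x <= 0, local coordinate x1 = 2x + 1, i.e. x = (x1 - 1) / 2.
# Restricted to this domain, |x| = -x is a degree-1 polynomial of x1.
x_left = (t - 1) / 2
c_left = C.chebfit(t, np.abs(x_left), N)

print(np.max(np.abs(c_single[2:])))   # large: high-order coefficients survive
print(np.max(np.abs(c_left[2:])))     # round-off: the representation is exact
```

In each subdomain the function is a polynomial of the local coordinate, so its spectral representation there converges trivially, which is the whole point of placing the discontinuity at the interface.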

As for the standard tau method (see Section 2.5.2), in each domain the test functions are the basis polynomials and one writes the associated residual equations. For instance, in the domain $x \le 0$ one gets

$$\sum_{j=0}^{N} L_{ij}\,\tilde{u}^1_j = \tilde{s}^1_i, \qquad 0 \le i \le N,$$

$\tilde{s}^1_i$ being the coefficients of the source and $L_{ij}$ the matrix representation of the operator $L$. As for the one-domain case, one relaxes the last two equations, keeping only the equations for $0 \le i \le N-2$. The same is done in the second domain. Two supplementary equations are enforced to ensure that the boundary conditions are fulfilled. Finally, the operator being of second order, one needs to ensure that the solution and its first derivative are continuous at the interface $x = 0$. This translates into a set of two additional equations involving the coefficients in both domains.

So, one considers

- $N-1$ residual equations in the first domain,
- $N-1$ residual equations in the second domain,
- 2 boundary conditions,
- 2 matching conditions,

for a total of $2N + 2$ equations. The unknowns are the coefficients of $u$ in both domains (i.e. the $\tilde{u}^1_i$ and the $\tilde{u}^2_i$), that is, $2(N+1)$ unknowns. The system is well posed and admits a unique solution.
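
The bookkeeping above can be sketched in Python. The example below assumes, for concreteness, the simple operator $L = d^2/dx^2$ (so its coefficient-space matrix can be built with NumPy's Chebyshev utilities); the helper names and the test problem are illustrative, not taken from the text:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def second_deriv_coeff_matrix(N):
    """Coefficient-space matrix of d^2/dt^2 acting on Chebyshev coefficients."""
    L = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1); e[j] = 1.0
        d = C.chebder(e, 2)
        L[:d.size, j] = d
    return L

def tau_two_domains(S, ua, ub, N=16):
    """Solve u'' = S on [-1, 1], split at x = 0, by the multidomain tau method."""
    t = np.cos(np.pi * np.arange(N + 1) / N)    # Lobatto nodes for the source fit
    Lc = second_deriv_coeff_matrix(N)
    s1 = C.chebfit(t, S((t - 1) / 2), N)        # source coefficients, x <= 0
    s2 = C.chebfit(t, S((t + 1) / 2), N)        # source coefficients, x >= 0
    i = np.arange(N + 1)
    Tm1, Tp1 = (-1.0) ** i, np.ones(N + 1)      # T_i(-1) and T_i(1)
    dTp1 = i ** 2.0                             # T_i'(1)  = i^2
    dTm1 = (-1.0) ** (i + 1) * i ** 2.0         # T_i'(-1) = (-1)^(i+1) i^2
    n = N + 1
    A = np.zeros((2 * n, 2 * n)); b = np.zeros(2 * n)
    # 2(N-1) residual equations; chain rule d/dx = 2 d/dx_k gives the factor 4
    A[:N - 1, :n] = 4 * Lc[:N - 1];     b[:N - 1] = s1[:N - 1]
    A[n:n + N - 1, n:] = 4 * Lc[:N - 1]; b[n:n + N - 1] = s2[:N - 1]
    # 2 boundary conditions: u(-1) = ua, u(1) = ub
    A[N - 1, :n] = Tm1; b[N - 1] = ua
    A[N, n:] = Tp1;     b[N] = ub
    # 2 matching conditions at x = 0 (x1 = 1, x2 = -1): u and u' continuous
    A[n + N - 1, :n] = Tp1;   A[n + N - 1, n:] = -Tm1
    A[n + N, :n] = 2 * dTp1;  A[n + N, n:] = -2 * dTm1
    u = np.linalg.solve(A, b)
    return u[:n], u[n:]      # Chebyshev coefficients in each domain
```

The matrix rows reproduce the counting of the text: $2(N-1)$ relaxed residual equations, two boundary conditions, and two matching conditions, for $2(N+1)$ unknown coefficients.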

As for the standard collocation method (see Section 2.5.3), in each domain the test functions are the Lagrange cardinal polynomials. For instance, in the domain $x \le 0$ one gets

$$\sum_{j=0}^{N} L_{ij}\, u\!\left(x^1_j\right) = S\!\left(x^1_i\right), \qquad 0 \le i \le N,$$

$L_{ij}$ being the matrix representation of the operator in the configuration space and $x^1_i$ the $i$-th collocation point in the first domain. As for the one-domain case, one relaxes the two equations corresponding to the boundaries of the domain, keeping only the equations for $1 \le i \le N-1$. The same is done in the second domain. Two supplementary equations are enforced to ensure that the boundary conditions are fulfilled. Finally, the operator being of second order, one needs to ensure that the solution and its first derivative are continuous at the interface $x = 0$. This translates into a set of two additional equations involving the values of $u$ in both domains.

So, one considers

- $N-1$ residual equations in the first domain,
- $N-1$ residual equations in the second domain,
- 2 boundary conditions,
- 2 matching conditions,

for a total of $2N + 2$ equations. The unknowns are the values of $u$ at the collocation points of both domains (i.e. the $u(x^1_i)$ and the $u(x^2_i)$), that is, $2(N+1)$ unknowns. The system is well posed and admits a unique solution.
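
A minimal sketch of the multidomain collocation solver, again assuming $L = d^2/dx^2$ and an illustrative test problem; for brevity, the derivative matrices of the cardinal polynomials are built by differentiating the interpolant of each unit vector rather than with the standard explicit formulas:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def deriv_matrix(t, order):
    """D[i, j]: order-th derivative of the j-th cardinal polynomial at node t_i."""
    N = t.size - 1
    D = np.empty((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1); e[j] = 1.0
        D[:, j] = C.chebval(t, C.chebder(C.chebfit(t, e, N), order))
    return D

def collocation_two_domains(S, ua, ub, N=16):
    """Solve u'' = S on [-1, 1], split at x = 0, by multidomain collocation."""
    t = -np.cos(np.pi * np.arange(N + 1) / N)   # ascending Lobatto nodes
    D1, D2 = deriv_matrix(t, 1), deriv_matrix(t, 2)
    xl, xr = (t - 1) / 2, (t + 1) / 2           # physical collocation points
    n = N + 1
    A = np.zeros((2 * n, 2 * n)); b = np.zeros(2 * n)
    # interior residual equations (chain rule d/dx = 2 d/dx_k in each half-domain)
    A[1:N, :n] = 4 * D2[1:N];          b[1:N] = S(xl[1:N])
    A[n + 1:n + N, n:] = 4 * D2[1:N];  b[n + 1:n + N] = S(xr[1:N])
    # boundary conditions u(-1) = ua, u(1) = ub
    A[0, 0] = 1.0;                  b[0] = ua
    A[2 * n - 1, 2 * n - 1] = 1.0;  b[2 * n - 1] = ub
    # matching of u and u' at x = 0
    A[N, N] = 1.0; A[N, n] = -1.0
    A[n, :n] = 2 * D1[N]; A[n, n:] = -2 * D1[0]
    u = np.linalg.solve(A, b)
    return xl, u[:n], xr, u[n:]      # values of u at the collocation points
```

Here the unknowns are directly the values $u(x^k_i)$, so no transformation back from coefficient space is needed after the solve.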

The method described here proceeds in two steps. First, particular solutions are computed in each domain. Then, appropriate linear combinations with the homogeneous solutions of the operator are performed to ensure continuity and impose boundary conditions.

In order to compute the particular solutions, one can rely on any of the methods described in Section 2.5. The boundary conditions at the boundary of each domain can be chosen (almost) arbitrarily. For instance, one can use in each domain a collocation method to solve $L u_{\rm part} = S$, demanding that the particular solution be zero at both ends of each interval.

Then, in order to have a solution over the whole space, one needs to add homogeneous solutions to the particular ones. In general, the operator $L$ is second order and admits two independent homogeneous solutions $g$ and $h$ in each domain. Let us note that, in some cases, additional regularity conditions can reduce the number of available homogeneous solutions. The homogeneous solutions can either be computed analytically, if the operator is simple enough, or numerically, but one must then have a method for solving $Lu = 0$.

In each domain, the physical solution is a combination of the particular solution and the homogeneous ones, of the type

$$u = u_{\rm part} + \alpha g + \beta h,$$

where $\alpha$ and $\beta$ are constants that must be determined. In the two-domain case, we are left with four unknowns ($\alpha^1$, $\beta^1$, $\alpha^2$, $\beta^2$). The system of equations they must satisfy is composed of i) two equations for the boundary conditions and ii) two equations for the matching of $u$ and its first derivative across the boundary between the two domains. The system so obtained is called the matching system and generally admits a unique solution.
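
A sketch of this strategy, assuming $L = d^2/dx^2$ so that the homogeneous solutions are known analytically ($g(x) = 1$ and $h(x) = x$); the particular solutions are obtained, as suggested above, by a collocation solve with zero boundary values (all names and the test problem are illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def deriv_matrix(t, order):
    """Derivative matrix of the cardinal polynomials at the nodes t."""
    N = t.size - 1
    D = np.empty((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1); e[j] = 1.0
        D[:, j] = C.chebval(t, C.chebder(C.chebfit(t, e, N), order))
    return D

def particular(S_vals, D2):
    """Collocation solve of u'' = S on one half-domain, with u = 0 at both ends."""
    N = D2.shape[0] - 1
    A = 4.0 * D2.copy()           # chain rule: d/dx = 2 d/dt on a half-domain
    A[0] = 0.0; A[0, 0] = 1.0
    A[N] = 0.0; A[N, N] = 1.0
    b = S_vals.copy(); b[0] = b[N] = 0.0
    return np.linalg.solve(A, b)

def homogeneous_matching(S, ua, ub, N=16):
    t = -np.cos(np.pi * np.arange(N + 1) / N)   # ascending Lobatto nodes
    D1, D2 = deriv_matrix(t, 1), deriv_matrix(t, 2)
    xl, xr = (t - 1) / 2, (t + 1) / 2
    up1, up2 = particular(S(xl), D2), particular(S(xr), D2)
    d1 = 2 * (D1 @ up1)[N]        # u_part' on the left side of x = 0
    d2 = 2 * (D1 @ up2)[0]        # u_part' on the right side of x = 0
    # matching system for (alpha1, beta1, alpha2, beta2), with u = u_part + a + b x:
    # u1(-1) = ua, u2(1) = ub, u1(0) = u2(0), u1'(0) = u2'(0)
    M = np.array([[1.0, -1.0,  0.0,  0.0],
                  [0.0,  0.0,  1.0,  1.0],
                  [1.0,  0.0, -1.0,  0.0],
                  [0.0,  1.0,  0.0, -1.0]])
    a1, b1, a2, b2 = np.linalg.solve(M, np.array([ua, ub, 0.0, d2 - d1]))
    return xl, up1 + a1 + b1 * xl, xr, up2 + a2 + b2 * xr
```

Note that each linear solve involves only one domain; only the small $4 \times 4$ matching system couples them, which is what makes this approach attractive for parallelization.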

Contrary to the previously presented methods, the variational one is applicable only with Legendre polynomials. Indeed, the method requires that the measure be $w(x) = 1$. It is also useful to extract the second-order term of the operator $L$ and to rewrite it as $L = \frac{{\rm d}^2}{{\rm d}x^2} + H$, $H$ being first order only.

In each domain, one writes the residual equation explicitly:

$$\int \xi(x) \left[\frac{{\rm d}^2 u}{{\rm d}x^2} + H u\right] {\rm d}x = \int \xi(x)\, S(x)\, {\rm d}x.$$

The term involving the second derivative of $u$ is then integrated by parts:

$$\left[\xi\, \frac{{\rm d}u}{{\rm d}x}\right] - \int \frac{{\rm d}\xi}{{\rm d}x}\, \frac{{\rm d}u}{{\rm d}x}\, {\rm d}x + \int \xi H u\, {\rm d}x = \int \xi S\, {\rm d}x. \qquad (79)$$

The test functions are the same as those used for the collocation method, i.e. functions that are zero at all but one collocation point, in both domains ($0 \le i \le N$): $\xi_i(x^k_j) = \delta_{ij}$. By making use of the Gauss quadratures, the various parts of Equation (79) can be expressed as ($k = 1, 2$ indicates the domain)

$$\int \frac{{\rm d}\xi_i}{{\rm d}x}\, \frac{{\rm d}u}{{\rm d}x}\, {\rm d}x \simeq \sum_{j=0}^{N} \sum_{l=0}^{N} D_{ji} D_{jl}\, u\!\left(x^k_l\right) w_j, \quad \int \xi_i H u\, {\rm d}x \simeq \sum_{j=0}^{N} H_{ij}\, u\!\left(x^k_j\right) w_i, \quad \int \xi_i S\, {\rm d}x \simeq S\!\left(x^k_i\right) w_i,$$

where $D_{ij}$ (or $H_{ij}$, respectively) represents the action of the derivative (or of $H$, respectively) in the configuration space. For points strictly inside each domain, the integrated term of Equation (79) vanishes and one gets equations of the form

$$-\sum_{j=0}^{N} \sum_{l=0}^{N} D_{ji} D_{jl}\, u\!\left(x^k_l\right) w_j + \sum_{j=0}^{N} H_{ij}\, u\!\left(x^k_j\right) w_i = S\!\left(x^k_i\right) w_i, \qquad 1 \le i \le N-1.$$

This is a set of $N-1$ equations for each domain ($k = 1, 2$). In the above form, the unknowns are the $u(x^k_i)$, i.e. the solution is sought in the configuration space. As usual, two additional equations are provided by appropriate boundary conditions at both ends of the global domain. One also gets an additional condition by matching the solution across the boundary between the two domains: $u(x^1_N) = u(x^2_0)$.

The last equation of the system is the matching of the first derivative of the solution. However, instead of writing it "explicitly", this is done by making use of the integrated term in Equation (79), and this is actually the crucial step of the whole method. Applying Equation (79) to the last point of the first domain, where the integrated term does not vanish, one gets

$$u'\!\left(x^1_N\right) - \sum_{j=0}^{N} \sum_{l=0}^{N} D_{jN} D_{jl}\, u\!\left(x^1_l\right) w_j + \sum_{j=0}^{N} H_{Nj}\, u\!\left(x^1_j\right) w_N = S\!\left(x^1_N\right) w_N,$$

which provides an expression for $u'(x^1_N)$ in terms of the values of $u$ in the first domain.

The same can be done with the first point of the second domain to get an expression for $u'(x^2_0)$, and the last equation of the system is obtained by demanding that $u'(x^1_N) = u'(x^2_0)$; it relates the values of $u$ in both domains. Before finishing with the variational method, it may be worthwhile to explain why Legendre polynomials are used. Suppose one wants to work with Chebyshev polynomials instead. The measure is then $w(x) = \frac{1}{\sqrt{1-x^2}}$. When one integrates the term containing $u''$ by parts, one gets

$$\int \frac{u''\, \xi}{\sqrt{1-x^2}}\, {\rm d}x = \left[\frac{u'\, \xi}{\sqrt{1-x^2}}\right] - \int u'\, \frac{{\rm d}}{{\rm d}x}\!\left(\frac{\xi}{\sqrt{1-x^2}}\right) {\rm d}x.$$

Because the measure is divergent at the boundaries, it is difficult, if not impossible, to isolate the term in $u'$. On the other hand, this is precisely the term that is needed to impose the appropriate matching of the solution.
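
A sketch of the variational method in the simplest case $H = 0$ (i.e. $L = d^2/dx^2$, an assumption made here for brevity), using Legendre-Gauss-Lobatto nodes and weights built from NumPy's Legendre utilities; note how the last matching condition is obtained by summing the two weak equations at the interface, rather than by writing $u'$ explicitly:

```python
import numpy as np
from numpy.polynomial import legendre as Lg

def lobatto(N):
    """Legendre-Gauss-Lobatto nodes and weights on [-1, 1]."""
    cN = np.zeros(N + 1); cN[N] = 1.0                  # coefficients of P_N
    xi = np.sort(Lg.legroots(Lg.legder(cN)))           # interior nodes: roots of P_N'
    x = np.concatenate(([-1.0], xi, [1.0]))
    w = 2.0 / (N * (N + 1) * Lg.legval(x, cN) ** 2)    # standard LGL weights
    return x, w

def deriv_matrix(t):
    """First-derivative matrix of the cardinal polynomials at the nodes t."""
    N = t.size - 1
    D = np.empty((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1); e[j] = 1.0
        D[:, j] = Lg.legval(t, Lg.legder(Lg.legfit(t, e, N)))
    return D

def variational_two_domains(S, ua, ub, N=16):
    t, w = lobatto(N)
    D = deriv_matrix(t)
    K = 2.0 * D.T @ (w[:, None] * D)   # local stiffness; factor 2 from the mapping
    xl, xr = (t - 1) / 2, (t + 1) / 2  # physical nodes of each half-domain
    n = N + 1
    A = np.zeros((2 * n, 2 * n)); b = np.zeros(2 * n)
    # interior weak equations: -int xi' u' dx = int xi S dx
    A[1:N, :n] = -K[1:N];          b[1:N] = 0.5 * w[1:N] * S(xl[1:N])
    A[n + 1:n + N, n:] = -K[1:N];  b[n + 1:n + N] = 0.5 * w[1:N] * S(xr[1:N])
    # boundary conditions
    A[0, 0] = 1.0; b[0] = ua
    A[2 * n - 1, 2 * n - 1] = 1.0; b[2 * n - 1] = ub
    # matching of u at x = 0
    A[N, N] = 1.0; A[N, n] = -1.0
    # summed weak equations at the interface: the boundary terms u'(0-) and
    # -u'(0+) cancel, enforcing continuity of u' without writing it explicitly
    A[n, :n] = -K[N]; A[n, n:] = -K[0]
    b[n] = 0.5 * (w[N] + w[0]) * S(0.0)
    u = np.linalg.solve(A, b)
    return xl, u[:n], xr, u[n:]
```

This interface assembly is exactly what makes the weak formulation attractive: the first-derivative matching is built into the system for free, which is also the mechanism underlying spectral-element methods.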

From a numerical point of view, the method based on an explicit matching using the homogeneous solutions is somewhat different from the other two. Indeed, one must solve several systems in a row, each of the same size as the number of points in one domain. This splitting of the different domains can also be useful for designing parallel codes. On the contrary, for both the variational and the tau methods one must solve only one system, but its size is the number of points in the whole space, which can be quite large for settings with many domains. However, those two methods do not require one to compute the homogeneous solutions, a computation that can be tricky depending on the operators involved and the number of dimensions.

The variational method may seem more difficult to implement and is applicable only with Legendre polynomials. However, on mathematical grounds, it is the only method that is demonstrated to be optimal. Moreover, some examples have been found in which the other methods are not optimal. It remains true that the variational method is very dependent on both the shape of the domains and the type of equation that needs to be solved.

The choice of one method or another thus depends on the particular situation. As for the one-domain case, for simple test problems the results are very similar. Figure 16 shows the maximum error between the analytic solution and the numerical one for the four different methods. All errors decay exponentially and reach machine accuracy within roughly the same number of points.

Living Rev. Relativity 12, (2009), 1
http://www.livingreviews.org/lrr-2009-1
This work is licensed under a Creative Commons License.