Definition 8. Denote for each boundary point the boundary space

of state vectors satisfying the homogeneous boundary condition. It is called maximal nonpositive if
- (i) for all ,
- (ii) it is maximal with respect to condition (i); that is, if is a linear subspace of containing , which satisfies (i), then .

The boundary condition is called maximal dissipative if the associated boundary spaces are maximal nonpositive for all .

Maximal dissipative boundary conditions were proposed in [189, 275] in the context of symmetric positive operators, which include symmetric hyperbolic operators as a special case. With such boundary conditions, the IBVP is well posed in the following sense:

Definition 9. Consider the linearized version of the IBVP (5.1, 5.2, 5.3), where the matrix functions and and the vector function do not depend on . It is called well posed if there are constants and such that each compatible data and gives rise to a unique -solution satisfying the estimate

for all . If, in addition, the constant can be chosen strictly positive, the problem is called strongly well posed.

This definition strengthens the corresponding definition in the Laplace analysis, where trivial initial data was assumed and only a time-integral of the -norm of the solution could be estimated (see Definition 6). The main result of the theory of maximal dissipative boundary conditions is:

Theorem 7. Consider the linearized version of the IBVP (5.1, 5.2, 5.3), where the matrix functions and and the vector function do not depend on . Suppose the system is symmetric hyperbolic, and that the boundary conditions (5.3) are maximal dissipative. Suppose, furthermore, that the rank of the boundary matrix is constant in .

Then, the problem is well posed in the sense of Definition 9. Furthermore, it is strongly well posed if the boundary matrix is invertible.

This theorem was first proven in [189, 275, 344] for the case where the boundary surface is non-characteristic; that is, the boundary matrix is invertible for all . A difficulty with the characteristic case is the loss of derivatives of in the normal direction to the boundary (see [422]). This case was studied in [293, 343, 387], culminating in the regularity theorem of [387], which is based on special function spaces controlling the -norms of tangential derivatives and of normal derivatives at the boundary (see also [389]). For generalizations of Theorem 7 to the quasilinear case, see [218, 388].

A more practical way of characterizing maximal dissipative boundary conditions is the following. Fix a boundary point , and define the scalar product by , . Since the boundary matrix is Hermitian with respect to this scalar product, there exists a basis of eigenvectors of , which are orthonormal with respect to . Let be the corresponding eigenvalues, where we may assume that the first of these eigenvalues are strictly positive and the last are strictly negative. We can expand any vector as , the coefficients being the characteristic fields with associated speeds . Then, the condition (5.90) at the point can be written as

where we have used the fact that , and the remaining ’s are zero. Therefore, a maximal dissipative boundary condition must have the form with a complex matrix, since must imply . Furthermore, the matrix has to be small enough that the inequality (5.93) holds. There can be no further conditions, since an additional, independent condition on would violate the maximality of the boundary space .

In conclusion, a maximal dissipative boundary condition must have the form of Eq. (5.94), which describes a linear coupling of the outgoing characteristic fields to the incoming ones, . In particular, there are exactly as many independent boundary conditions as there are incoming fields, in agreement with the Laplace analysis in Section 5.1.1. Furthermore, the boundary conditions must not involve the zero speed fields. The simplest choice for is the trivial one, , in which case data for the incoming fields is specified. A nonzero value of would be chosen if the boundary is to incorporate some reflecting properties, like the case of a perfectly conducting surface in electromagnetism, for example.
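Since the explicit formulas are given in the text only in the numbered equations, the following numerical sketch (our own construction, with illustrative eigenvalues and coupling constant) checks the mechanism described above: in the characteristic basis, coupling the incoming field to the outgoing one with a sufficiently small coefficient yields a nonpositive boundary space.

```python
import numpy as np

# Illustrative check (not taken from the text): boundary matrix already
# diagonal in its characteristic basis, with one incoming field
# (eigenvalue +2), one outgoing field (eigenvalue -1), and one
# zero-speed field. All values are assumptions made for this sketch.
rng = np.random.default_rng(0)

lam = np.array([2.0, -1.0, 0.0])
A_n = np.diag(lam)

# Coupling u_in = S * u_out; nonpositivity of the quadratic form requires
# the weighted smallness condition lam_in * S**2 <= |lam_out|, i.e.
# |S| <= sqrt(1/2) here.
S = 0.5

worst = -np.inf
for _ in range(1000):
    u_out, u_zero = rng.normal(size=2)
    u = np.array([S * u_out, u_out, u_zero])  # an element of the boundary space
    worst = max(worst, u @ A_n @ u)

print(worst <= 1e-12)  # True: the form u* A_n u is nonpositive on the space
```

Note that the zero-speed field enters the boundary space unconstrained, consistent with the requirement that maximal dissipative conditions never involve the zero speed fields.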

Example 29. Consider the first-order reformulation of the Klein–Gordon equation for the variables ; see Example 13. Suppose the spatial domain is , with the boundary located at . Then, and the boundary matrix is

Therefore, the characteristic fields and speeds are , (zero speed fields, ), (incoming field with speed ) and (outgoing field with speed ). It follows from Eqs. (5.93, 5.94) that the class of maximal dissipative boundary conditions is where the function satisfies and is smooth boundary data. Particular cases are:
- : Sommerfeld boundary condition,
- : Dirichlet boundary condition,
- : Neumann boundary condition.
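As a hedged illustration of Example 29 (the field ordering (Phi, Pi, Phi_x, Phi_y) with Pi the time derivative of Phi, and the normalization of the speeds to 1, are our assumptions, not taken from the text), the characteristic speeds of the first-order reduction can be read off from the principal symbol in the direction normal to the boundary:

```python
import numpy as np

# Assumed x-principal symbol of the first-order Klein-Gordon reduction,
# for u = (Phi, Pi, Phi_x, Phi_y): the only x-derivative couplings are
# Pi_t = d_x Phi_x + ... and (Phi_x)_t = d_x Pi. Ordering is our choice.
A_x = np.zeros((4, 4))
A_x[1, 2] = A_x[2, 1] = 1.0

speeds = np.sort(np.linalg.eigvalsh(A_x))
print(speeds)  # speeds -1, 0, 0, +1: one outgoing, two zero-speed, one incoming
```

The two zero eigenvalues correspond to the zero speed fields, matching the count of characteristic fields listed above; the overall sign of the boundary matrix depends on the orientation of the normal, but the speeds do not.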

Example 30. For Maxwell’s equations on a domain with -boundary , the boundary matrix is given by

see Example 14. In terms of the components of parallel to the boundary surface and the ones orthogonal to it (and, hence, parallel to ), the characteristic speeds and fields are
- , : The boundary condition describes a perfectly conducting boundary surface.
- , : This is a Sommerfeld-type boundary condition, which, locally, is transparent to outgoing plane waves traveling in the normal direction , where denotes the frequency, the wave vector, and the polarization vector, which is orthogonal to . The generalization of this boundary condition to inhomogeneous data allows one to specify data for the incoming field at the boundary surface, which is equal to for plane waves traveling in the normal inward direction .

Recall that the constraints and propagate along the time evolution vector field , , , provided the continuity equation holds. Since is tangent to the boundary, no additional conditions controlling the constraints need be specified at the boundary; the constraints are automatically satisfied everywhere, provided they are satisfied on the initial surface.

Example 31. Commonly, one writes Maxwell’s equations as a system of wave equations for the electromagnetic potential in the Lorentz gauge, as discussed in Example 28. By reducing the problem to a first-order symmetric hyperbolic system, one may wonder if it is possible to apply the theory of maximal dissipative boundary conditions and obtain a well-posed IBVP, as in the previous example. As we shall see in Section 5.2.1, the answer is affirmative, but the correct application of the theory is not completely straightforward. In order to illustrate why this is the case, introduce the new independent fields . Then, the set of wave equations can be rewritten as the first-order system for the 20-component vector , ,

which is symmetric hyperbolic. The characteristic fields with respect to the unit outward normal at the boundary are

At this point, one might ask why we were able to formulate a well-posed IBVP based on the second-order formulation in Example 28, while the first-order reduction discussed here fails. As we shall see, the reason is that there exist many inequivalent first-order reductions, and a slightly more sophisticated reduction works, while the simplest choice adopted here does not. See also [354, 14] for well-posed formulations of the IBVP in electromagnetism based on the potential formulation in a different gauge.

Example 32. A generalization of Maxwell’s equations is the evolution system

for the symmetric, trace-free tensor fields and , where we use the Einstein summation convention, the indices run over , denotes symmetrization over , and is the totally antisymmetric tensor with . Notice that the right-hand sides of Eqs. (5.102, 5.103) are symmetric and trace-free, so that one can consistently assume that . The evolution system (5.102, 5.103), which is symmetric hyperbolic with respect to the trivial symmetrizer, describes the propagation of the electric and magnetic parts of the Weyl tensor for linearized gravity on a Minkowski background; see, for instance, [182].

Decomposing into its parts parallel and orthogonal to the unit outward normal ,

where , , , , and similarly for , the eigenvalue problem for the boundary matrix is

However, one also needs to control the incoming field at the boundary. This field, which propagates with speed , is related to the constraints of the theory. As in electromagnetism, the fields and are subject to the divergence constraints , . However, unlike in the Maxwell case, these constraints do not propagate trivially. As a consequence of the evolution equations (5.102, 5.103), the constraint fields and obey

which is equivalent to Maxwell’s equations, except that the propagation speed for the transverse modes is instead of . Therefore, guaranteeing constraint propagation requires specifying homogeneous maximal dissipative boundary conditions for this system, which have the form of Eq. (5.98) with , and . A problem is that this yields conditions involving first derivatives of the fields and when rewritten as a boundary condition for the main system (5.102, 5.103). Except in some particular cases involving totally-reflecting boundaries, it is not possible to cast these conditions into maximal dissipative form.

A solution to this problem was presented in [181] and [187], where a similar system appears in the context of the IBVP for Einstein’s field equations for solutions with anti-de Sitter asymptotics, or for solutions with an artificial boundary, respectively. The method consists in modifying the evolution system (5.102, 5.103) using the constraint equations in such a way that the constraint fields of the resulting boundary-adapted system propagate along at the boundary surface. In order to describe this system, extend to a smooth vector field on with the property that . Then, the boundary-adapted system reads:

This system is symmetric hyperbolic, and the characteristic fields in the normal direction are identical to those of the unmodified system, with the important difference that the fields now propagate with zero speed. The induced evolution system for the constraint fields is symmetric hyperbolic and has a trivial boundary matrix. As a consequence, the constraints propagate tangentially to the boundary surface, and no extra boundary conditions controlling the constraints need be specified.

As anticipated in Example 31, the theory of symmetric hyperbolic first-order equations with maximal dissipative boundary conditions can also be used to formulate well-posed IBVPs for systems of wave equations coupled through the boundary conditions, as already discussed in Section 5.1.3 based on the Laplace method. Again, the key idea is to show strong well-posedness; that is, an a priori estimate that controls the first derivatives of the fields in the bulk and at the boundary.

In order to explain how this is performed, we consider the simple case of the Klein–Gordon equation on the half plane . In Example 13 we reduced the problem to a first-order symmetric hyperbolic system for the variables with symmetrizer , and in Example 29 we determined the class of maximal dissipative boundary conditions for this first-order reduction. Consider the particular case of Sommerfeld boundary conditions, where is specified at . Then, Eq. (3.103) gives the following conservation law,

where , and ; see Example 29. Using the Sommerfeld boundary condition, we may rewrite and obtain the energy equality controlling the derivatives of and at the boundary surface. However, a weakness of this estimate is that it does not control the zero speed fields and at the boundary, and so one does not obtain strong well-posedness.

On the other hand, the first-order reduction is not unique and, as we show now, different reductions may lead to stronger estimates. For this, we choose a real constant such that and define the new fields , which yield the symmetric hyperbolic system

with symmetrizer . The characteristic fields in terms of and its derivatives are , , , and , as before. However, the fields now have characteristic speeds , respectively, whereas in the previous reduction they were . Therefore, the effect of the new reduction versus the old one is to shift the speeds of the zero speed fields, converting them into outgoing fields with speed . Notice that the Sommerfeld boundary condition is still maximal dissipative with respect to the new reduction. Repeating the energy estimates again leads to a conservation law of the form (5.108), but where now the energy and flux quantities are and

Imposing the boundary condition at and using leads to the energy estimate controlling and all its first derivatives at the boundary surface.

Summarizing, we have seen that the most straightforward first-order reduction of the Klein–Gordon equation does not lead to strong well-posedness. However, strong well-posedness can be obtained by choosing a more sophisticated reduction, in which the time derivative of is replaced by its derivative along the time-like vector , which points out of the domain at the boundary surface. In fact, it is possible to obtain a symmetric hyperbolic reduction leading to strong well-posedness for any future-directed time-like vector field that points out of the domain at the boundary. Based on the geometric definition of first-order symmetric hyperbolic systems in [205], this result can be generalized to systems of quasilinear wave equations on curved backgrounds [264].
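The speed-shifting mechanism can be checked on a toy reconstruction of the modified reduction. The matrix below is our own guess at the x-principal symbol, assuming the field ordering (Phi, Pi_tilde, Phi_x, Phi_y), the shifted momentum Pi_tilde defined as the derivative of Phi along d_t + lam*d_x, and 0 < lam < 1; the signs and ordering may differ from the text, but the eigenvalue pattern illustrates the claim: the pair of formerly zero-speed fields acquires normal speed of magnitude lam, while the pair of unit-speed fields survives.

```python
import numpy as np

lam = 0.5  # assumed 0 < lam < 1, so that d_t + lam*d_x is time-like

# Our reconstruction of the x-principal symbol of the shifted reduction
# u_t = A_x d_x u + (y-derivatives and lower-order terms), for
# u = (Phi, Pi_tilde, Phi_x, Phi_y) with Pi_tilde = d_t Phi + lam d_x Phi.
A_x = np.array([
    [-lam, 0.0,           0.0,  0.0],  # Phi advected along the shift
    [ 0.0, lam, 1.0 - lam**2,   0.0],  # Pi_tilde equation
    [ 0.0, 1.0,          -lam,  0.0],  # (Phi_x)_t = d_x(Pi_tilde - lam Phi_x)
    [ 0.0, 0.0,           0.0, -lam],  # Phi_y advected along the shift
])

speeds = np.sort(np.linalg.eigvals(A_x).real)
print(speeds)  # formerly zero-speed pair shifted to magnitude lam; the unit pair survives
```

Setting lam = 0 recovers the eigenvalue pattern of the unshifted reduction (speeds -1, 0, 0, +1), which is one quick way to see that the shift is what removes the zero-speed fields.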

In order to describe the result in [264], let be a vector bundle over with fiber ; let be a fixed, given connection on and let be a Lorentz metric on with inverse , which depends pointwise and smoothly on a vector-valued function , parameterizing a local section of . Assume that each time-slice is space-like and that the boundary is time-like with respect to . We consider a system of quasilinear wave equations of the form

where is a vector-valued function, which depends pointwise and smoothly on its arguments. The wave system (5.113) is subject to the initial conditions where and are given vector-valued functions on , and where denotes the future-directed unit normal to with respect to . In order to describe the boundary conditions, let , , be a future-directed vector field on , which is normalized with respect to , and let be the unit outward normal to with respect to the metric . We consider boundary conditions on of the following form where is a strictly positive, smooth function, is a given, vector-valued function on and the matrix coefficients and are smooth functions of their arguments. Furthermore, we assume that satisfies the following property: given a local trivialization of such that is compact and contains a portion of the boundary , there exists a smooth map such that the transformed matrix coefficients are in upper triangular form with zeros on the diagonal; that is,

Theorem 8. [264] The IBVP (5.113, 5.114, 5.115) is well posed. Given and sufficiently small and smooth initial and boundary data , and satisfying the compatibility conditions at the edge , there exists a unique smooth solution on satisfying the evolution equation (5.113), the initial condition (5.114) and the boundary condition (5.115). Furthermore, the solution depends continuously on the initial and boundary data.
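The point of the upper-triangular, zero-diagonal condition on the transformed matrix coefficients in the hypothesis of Theorem 8 is that such matrices are nilpotent, which is what allows the boundary coupling to be resolved after finitely many steps. A quick generic sanity check of this linear-algebra fact (the size K is an arbitrary illustrative choice, not tied to the text's system):

```python
import numpy as np

# A strictly upper triangular K x K matrix (zeros on and below the
# diagonal) is nilpotent: its K-th power vanishes identically.
rng = np.random.default_rng(1)
K = 5
c = np.triu(rng.normal(size=(K, K)), k=1)

assert np.allclose(np.linalg.matrix_power(c, K), 0.0)
```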

Theorem 8 provides the general framework for treating wave systems with constraints, such as Maxwell’s equations in the Lorentz gauge and, as we will see in Section 6.1, Einstein’s field equations with artificial outer boundaries.

Here, we show how to prove the existence of weak solutions for linear, symmetric hyperbolic equations with variable coefficients and maximal dissipative boundary conditions. The method can also be applied to a more general class of linear symmetric operators with maximal dissipative boundary conditions; see [189, 275]. The proof below will shed some light on the maximality condition for the boundary space .

Our starting point is an IBVP of the form (5.1, 5.2, 5.3), where the matrix functions and do not depend on , and where is replaced by , so that the system is linear. Furthermore, we may assume that the initial and boundary data are trivial, , . We require the system to be symmetric hyperbolic with symmetrizer satisfying the conditions in Definition 4(iii), and assume the boundary conditions (5.3) are maximal dissipative. We rewrite the IBVP on as the abstract linear problem

where is the linear operator on the Hilbert space defined by the evolution equation and the initial and boundary conditions:

For the following, the adjoint IBVP plays an important role. This problem is defined as follows. First, the symmetrizer defines a natural scalar product on ,

which, because of the properties of , is equivalent to the standard scalar product on . In order to obtain the adjoint problem, we take and , and use Gauss’s theorem to find where we have defined the formal adjoint of by In order for the integrals on the right-hand side of Eq. (5.120) to vanish, such that , we first notice that the integral over vanishes because on . The integral over also vanishes if we require on . The last term also vanishes if we require to lie in the dual boundary space for each . Therefore, if we define we have for all and ; that is, the operator is adjoint to . There is the following nice relation between the boundary spaces and :

Lemma 4. Let be a boundary point. Then, is maximal nonpositive if and only if is maximal nonnegative.

Proof. Fix a boundary point and define the matrix with the unit outward normal to at . Since the system is symmetric hyperbolic, is Hermitian. We decompose into orthogonal subspaces , , on which is positive, negative and zero, respectively. We equip with the scalar products , which are defined by

In particular, we have for all . Therefore, if is maximal nonpositive, there exists a linear transformation satisfying for all , such that (cf. Eq. (5.94)) Let . Then, for all , where is the adjoint of with respect to the scalar products defined on . Therefore, , and Since has the same norm as , which is bounded by one, it follows that is maximal nonnegative. The converse statement follows analogously. □

The lemma implies that solving the original problem with is equivalent to solving the adjoint problem with , which, since is held fixed at , corresponds to the time-reversed problem with the adjoint boundary conditions. From the a priori energy estimates we obtain:
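Lemma 4 can be illustrated numerically in the simplest setting, where the nonzero eigenvalues of the boundary matrix are normalized to plus or minus one (a simplifying assumption made here; in general the characteristic scalar products carry the eigenvalue weights). If the boundary space consists of vectors whose positive part is a contraction S applied to the negative part, then the candidate adjoint space, built from the adjoint contraction on the complementary blocks, pairs to zero with it under the boundary form and is nonnegative:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 2, 3  # numbers of positive / negative eigenvalues (illustrative)
A_n = np.diag(np.concatenate([np.ones(p), -np.ones(q)]))

# Random coupling rescaled so that its operator norm is at most 1.
S = rng.normal(size=(p, q))
S /= max(1.0, np.linalg.norm(S, 2))

for _ in range(200):
    w = rng.normal(size=q)
    z = rng.normal(size=p)
    u = np.concatenate([S @ w, w])    # element of the boundary space
    v = np.concatenate([z, S.T @ z])  # element of the candidate adjoint space
    assert abs(u @ A_n @ v) < 1e-10   # the boundary form pairs the two spaces to zero
    assert v @ A_n @ v >= -1e-12      # the adjoint space is nonnegative

assert u @ A_n @ u <= 1e-12           # and the original space is nonpositive
```

The middle assertion is the content of the lemma in this toy case: the adjoint of a contraction is again a contraction, so nonpositivity of one space forces nonnegativity of the other.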

Lemma 5. There is a constant such that

for all and , where is the norm induced by the scalar product .

Proof. Let and set . From the energy estimates in Section 3.2.3 one easily obtains

for some positive constants depending on . Integrating both sides from to gives which yields the statement for setting . The estimate for follows from a similar energy estimate for the adjoint problem. □

In particular, Lemma 5 implies that (strong) solutions to the IBVP and its adjoint problem are unique. Since and are closable operators [345], their closures and satisfy the same inequalities as in Eq. (5.128). Now we are ready to define weak solutions and to prove their existence:

In order to prove the existence of such , we introduce the linear space and equip it with the scalar product defined by

The positivity of this product is a direct consequence of Lemma 5, and since is closed, defines a Hilbert space. Next, we define the linear form on by This form is bounded, according to Lemma 5, for all . Therefore, by the Riesz representation lemma, there exists a unique such that for all . Setting gives a weak solution of the problem.

If is a weak solution that is sufficiently smooth, it follows from the Green-type identity (5.120) that has vanishing initial data and satisfies the required boundary conditions, and hence is a solution of the original IBVP (5.118). The difficult part is to show that a weak solution is indeed sufficiently regular for this conclusion to hold. See [189, 275, 344, 343, 387] for such “weak=strong” results.

Living Rev. Relativity 15, (2012), 9
http://www.livingreviews.org/lrr-2012-9