5.2 Maximal dissipative boundary conditions

An alternative technique for specifying boundary conditions, which does not require Laplace–Fourier transformation and the use of pseudo-differential operators when generalizing to variable coefficients, is based on energy estimates. In order to understand this, we go back to Section 3.2.3, where we discussed such estimates for linear, first-order symmetric hyperbolic evolution equations with symmetrizer H(t,x). We obtained the estimate (3.107), bounding the energy E(Σt) = ∫_{Σt} J^0(t,x) d^n x at any time t ∈ [0,T] in terms of the initial energy E(Σ0), provided that the flux integral
\[
\int_{\mathcal{T}} e_\mu J^\mu(t,x)\, dS, \qquad J^\mu(t,x) := -u(t,x)^* H(t,x) A^\mu(t,x)\, u(t,x), \tag{5.89}
\]
was nonnegative. Here, the boundary surface is 𝒯 = [0,T] × ∂Σ, and its unit outward normal e = (0, s1, ..., sn) is determined by the unit outward normal s to ∂Σ. Therefore, the integral is nonnegative if
\[
u(t,x)^* H(t,x) P_0(t,x,s)\, u(t,x) \le 0, \qquad (t,x) \in \mathcal{T}, \tag{5.90}
\]
where P0(t,x,s) = ∑_{j=1}^{n} Aj(t,x) sj is the principal symbol in the direction of the unit normal s. Hence, the idea is to specify homogeneous boundary conditions, b(t,x)u = 0 at 𝒯, such that the condition (5.90) is satisfied. In this case, one obtains an a priori energy estimate as in Section 3.2.3. Of course, there are many possible choices for b(t,x) that fulfill the condition (5.90); however, an additional requirement is that one should not overdetermine the IBVP. For example, setting all the components of u to zero at the boundary does not lead to a well-posed problem if there are outgoing modes, as discussed in Section 5.1.1 for the constant coefficient case. Correct boundary conditions turn out to impose a minimal set of conditions on u for which the inequality (5.90) holds. In other words, at the boundary surface, u has to be restricted to a space for which Eq. (5.90) holds and which cannot be extended. The precise definition, which captures this idea, is:

Definition 8. Denote for each boundary point p = (t,x) ∈ 𝒯 the boundary space

\[
V_p := \{ u \in \mathbb{C}^m : b(t,x)\, u = 0 \} \subset \mathbb{C}^m \tag{5.91}
\]

of state vectors satisfying the homogeneous boundary condition. Vp is called maximal nonpositive if
  (i) u∗H(t,x)P0(t,x,s)u ≤ 0 for all u ∈ Vp;
  (ii) Vp is maximal with respect to condition (i); that is, if Wp is a linear subspace of ℂm containing Vp, which satisfies (i), then Wp = Vp.

The boundary condition b(t,x )u = g(t,x) is called maximal dissipative if the associated boundary spaces Vp are maximal nonpositive for all p ∈ 𝒯.

Maximal dissipative boundary conditions were proposed in [189, 275] in the context of symmetric positive operators, which include symmetric hyperbolic operators as a special case. With such boundary conditions, the IBVP is well posed in the following sense:

Definition 9. Consider the linearized version of the IBVP (5.1, 5.2, 5.3), where the matrix functions Aj(t,x) and b(t,x) and the vector function F(t,x) do not depend on u. It is called well posed if there are constants K = K(T) and 𝜀 = 𝜀(T) ≥ 0 such that each compatible data f ∈ Cb∞(Σ, ℂm) and g ∈ Cb∞([0,T] × ∂Σ, ℂr) gives rise to a unique C∞-solution u satisfying the estimate

\[
\| u(t,\cdot) \|^2_{L^2(\Sigma)} + \varepsilon \int_0^t \| u(s,\cdot) \|^2_{L^2(\partial\Sigma)}\, ds
\le K^2 \left[ \| f \|^2_{L^2(\Sigma)} + \int_0^t \left( \| F(s,\cdot) \|^2_{L^2(\Sigma)} + \| g(s,\cdot) \|^2_{L^2(\partial\Sigma)} \right) ds \right] \tag{5.92}
\]
for all t ∈ [0,T]. If, in addition, the constant 𝜀 can be chosen strictly positive, the problem is called strongly well posed.

This definition strengthens the corresponding definition in the Laplace analysis, where trivial initial data was assumed and only a time-integral of the L2(Σ )-norm of the solution could be estimated (see Definition 6). The main result of the theory of maximal dissipative boundary conditions is:

Theorem 7. Consider the linearized version of the IBVP (5.1, 5.2, 5.3), where the matrix functions Aj(t,x) and b(t,x) and the vector function F(t,x) do not depend on u. Suppose the system is symmetric hyperbolic, and that the boundary conditions (5.3) are maximal dissipative. Suppose, furthermore, that the rank of the boundary matrix P0(t,x,s) is constant in (t,x) ∈ 𝒯.

Then, the problem is well posed in the sense of Definition 9. Furthermore, it is strongly well posed if the boundary matrix P0(t,x,s) is invertible.

This theorem was first proven in [189, 275, 344] for the case where the boundary surface 𝒯 is non-characteristic, that is, where the boundary matrix P0(t,x,s) is invertible for all (t,x) ∈ 𝒯. A difficulty with the characteristic case is the loss of derivatives of u in the normal direction to the boundary (see [422]). This case was studied in [293, 343, 387], culminating in the regularity theorem of [387], which is based on special function spaces controlling the L2-norms of 2k tangential derivatives and k normal derivatives at the boundary (see also [389]). For generalizations of Theorem 7 to the quasilinear case, see [218, 388].

A more practical way of characterizing maximal dissipative boundary conditions is the following. Fix a boundary point (t,x) ∈ 𝒯, and define the scalar product (⋅,⋅) by (u,v) := u∗H(t,x)v, u,v ∈ ℂm. Since the boundary matrix P0(t,x,s) is Hermitian with respect to this scalar product, there exists a basis e1, e2, ..., em of eigenvectors of P0(t,x,s), which are orthonormal with respect to (⋅,⋅). Let λ1, λ2, ..., λm be the corresponding eigenvalues, where we may assume that the first r of these eigenvalues are strictly positive and the last s are strictly negative. We can expand any vector u ∈ ℂm as u = ∑_{j=1}^{m} u^{(j)} ej, the coefficients u^{(j)} being the characteristic fields with associated speeds λj. Then, the condition (5.90) at the point p can be written as

\[
0 \ge (u, P_0(t,x,s)\, u) = \sum_{j=1}^m \lambda_j |u^{(j)}|^2
= \sum_{j=1}^r \lambda_j |u^{(j)}|^2 - \sum_{j=m-s+1}^m |\lambda_j|\, |u^{(j)}|^2, \tag{5.93}
\]
where we have used the fact that λ1, ..., λr > 0, λ_{m−s+1}, ..., λm < 0, and the remaining λj's are zero. Therefore, a maximal dissipative boundary condition must have the form
\[
u_+ = q\, u_-, \qquad
u_+ := \begin{pmatrix} u^{(1)} \\ \vdots \\ u^{(r)} \end{pmatrix}, \qquad
u_- := \begin{pmatrix} u^{(m-s+1)} \\ \vdots \\ u^{(m)} \end{pmatrix}, \tag{5.94}
\]
with q a complex r × s matrix, since u− = 0 must imply u+ = 0. Furthermore, the matrix q has to be small enough such that the inequality (5.93) holds. There can be no further conditions, since an additional, independent condition on u would violate the maximality of the boundary space Vp.
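In practice, this characteristic decomposition is easy to carry out numerically. The following Python sketch (an illustration of the procedure just described; the helper name characteristic_decomposition and the toy system at the end are ours) solves the generalized Hermitian eigenproblem H P0 e = λ H e with scipy and classifies the characteristic fields according to the sign of the speed λ:

    import numpy as np
    from scipy.linalg import eigh

    def characteristic_decomposition(H, P0, tol=1e-12):
        """Characteristic speeds and fields at a boundary point.

        H  : Hermitian, positive definite symmetrizer H(t,x)
        P0 : boundary matrix A^j s_j; H @ P0 is Hermitian by symmetric hyperbolicity
        Returns the speeds lam, the eigenvectors E (columns, normalized so that
        E^* H E = I) and boolean masks for incoming (lam > 0), outgoing (lam < 0)
        and zero-speed fields.
        """
        lam, E = eigh(H @ P0, H)           # generalized problem  H P0 e = lam H e
        incoming = lam > tol               # these fields receive boundary data
        outgoing = lam < -tol
        zero = ~(incoming | outgoing)      # these must not enter the boundary condition
        return lam, E, incoming, outgoing, zero

    # toy example: the 1-D wave system u_t = A u_x on x > 0, with H = I and s = -1
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    lam, E, inc, out, zer = characteristic_decomposition(np.eye(2), -A)
    print(lam)   # [-1.  1.]: one outgoing and one incoming characteristic field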

In conclusion, a maximal dissipative boundary condition must have the form of Eq. (5.94), which expresses the incoming characteristic fields u+ as a linear combination of the outgoing ones, u−. In particular, there are exactly as many independent boundary conditions as there are incoming fields, in agreement with the Laplace analysis in Section 5.1.1. Furthermore, the boundary conditions must not involve the zero speed fields. The simplest choice for q is the trivial one, q = 0, in which case data for the incoming fields is specified. A nonzero value of q would be chosen if the boundary is to incorporate some reflecting properties, as in the case of a perfectly conducting surface in electromagnetism, for example.

Example 29. Consider the first-order reformulation of the Klein–Gordon equation for the variables u = (Φ, Φt,Φx, Φy ); see Example 13. Suppose the spatial domain is x > 0, with the boundary located at x = 0. Then, s = (− 1,0) and the boundary matrix is

\[
P_0(s) = -\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \tag{5.95}
\]
Therefore, the characteristic fields and speeds are Φ, Φy (zero speed fields, λ = 0), Φt − Φx (incoming field with speed λ = 1) and Φt + Φx (outgoing field with speed λ = −1). It follows from Eqs. (5.93, 5.94) that the class of maximal dissipative boundary conditions is
(Φt − Φx) = q(t,y)(Φt + Φx ) + g(t,y), t ≥ 0, y ∈ ℝ, (5.96 )
where the function q satisfies |q(t,y)| ≤ 1 and g is smooth boundary data. Particular cases are the Sommerfeld condition q = 0, in which data is specified directly for the incoming field Φt − Φx; the Dirichlet-type condition q = −1, which prescribes Φt (and hence, up to time integration, Φ) at the boundary; and the Neumann-type condition q = 1, which prescribes Φx at the boundary.
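As a quick consistency check of the bound |q(t,y)| ≤ 1 (our own illustration, restricted to real-valued fields and homogeneous data g = 0), one can verify symbolically that the boundary term u∗HP0(s)u = −2Re(Φt∗Φx) is nonpositive whenever the boundary condition (5.96) holds:

    import sympy as sp

    q, w_out = sp.symbols('q w_out', real=True)   # w_out = Phi_t + Phi_x (outgoing field)
    w_in = q * w_out                              # boundary condition (5.96) with g = 0

    Phi_t = (w_in + w_out) / 2                    # invert w_in  = Phi_t - Phi_x,
    Phi_x = (w_out - w_in) / 2                    #        w_out = Phi_t + Phi_x

    flux = -2 * Phi_t * Phi_x                     # u^* H P0(s) u at x = 0, real fields
    print(sp.simplify(flux - (q**2 - 1) * w_out**2 / 2))   # 0: flux <= 0 iff |q| <= 1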

Example 30. For Maxwell’s equations on a domain Σ ⊂ ℝ3 with C∞-boundary ∂Σ, the boundary matrix is given by

\[
P_0(s) \begin{pmatrix} E \\ B \end{pmatrix} = \begin{pmatrix} s \wedge B \\ -\, s \wedge E \end{pmatrix}; \tag{5.97}
\]
see Example 14. In terms of the components E|| of E parallel to the boundary surface ∂Σ and the components E⊥ orthogonal to it (and, hence, parallel to s), the characteristic speeds and fields are
\[
0:\ E_\perp,\ B_\perp, \qquad \pm 1:\ E_{||} \pm s \wedge B_{||}.
\]
Therefore, maximal dissipative boundary conditions have the form
\[
(E_{||} + s \wedge B_{||}) = q\, (E_{||} - s \wedge B_{||}) + g_{||}, \tag{5.98}
\]
with g|| some smooth vector-valued function at the boundary, which is parallel to ∂Σ, and q a matrix-valued function satisfying the condition |q| ≤ 1. Particular cases are the absorbing (Sommerfeld-type) boundary condition q = 0, in which data is specified for the incoming field E|| + s ∧ B||, and the perfectly conducting boundary condition, obtained from q = −1 and g|| = 0, which enforces E|| = 0 at ∂Σ.

Recall that the constraints ∇ ⋅ E = ρ and ∇ ⋅ B = 0 propagate along the time evolution vector field ∂t, (∇ ⋅ E − ρ)t = 0, (∇ ⋅ B)t = 0, provided the continuity equation holds. Since ∂t is tangent to the boundary, no additional conditions controlling the constraints need to be specified at the boundary; the constraints are automatically satisfied everywhere provided they are satisfied on the initial surface.
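The characteristic structure quoted above can be cross-checked numerically. The following sketch (our own illustration, assuming the trivial symmetrizer H = I and the normal direction s = (1,0,0)) assembles the 6 × 6 boundary matrix (5.97) and confirms the speeds and their multiplicities:

    import numpy as np

    def cross_matrix(s):
        """Matrix S with S @ v = s x v."""
        return np.array([[0.0, -s[2], s[1]],
                         [s[2], 0.0, -s[0]],
                         [-s[1], s[0], 0.0]])

    s = np.array([1.0, 0.0, 0.0])       # unit normal direction
    S = cross_matrix(s)
    Z = np.zeros((3, 3))

    # boundary matrix (5.97) acting on u = (E, B):  P0(s) u = (s x B, -s x E)
    P0 = np.block([[Z, S],
                   [-S, Z]])

    lam = np.linalg.eigvalsh(P0)        # P0 is symmetric since H = I
    print(np.round(lam, 12))            # [-1 -1  0  0  1  1]: two outgoing fields,
                                        # two zero-speed fields (E_perp, B_perp)
                                        # and two incoming fields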

Example 31. Commonly, one writes Maxwell’s equations as a system of wave equations for the electromagnetic potential Aμ in the Lorentz gauge, as discussed in Example 28. By reducing the problem to a first-order symmetric hyperbolic system, one may wonder if it is possible to apply the theory of maximal dissipative boundary conditions and obtain a well-posed IBVP, as in the previous example. As we shall see in Section 5.2.1, the answer is affirmative, but the correct application of the theory is not completely straightforward. In order to illustrate why this is the case, introduce the new independent fields Dμν := ∂μAν. Then, the set of wave equations can be rewritten as the first-order system for the 20-component vector (Aν, Dtν, Djν), j = x, y, z,

\[
\partial_t A_\nu = D_{t\nu}, \qquad \partial_t D_{t\nu} = \partial^j D_{j\nu}, \qquad \partial_t D_{j\nu} = \partial_j D_{t\nu}, \tag{5.100}
\]
which is symmetric hyperbolic. The characteristic fields with respect to the unit outward normal s = (− 1,0,0) at the boundary are
\[
\begin{aligned}
D_{t\nu} - D_{x\nu} &= (\partial_t - \partial_x) A_\nu && \text{(incoming field)},\\
D_{t\nu} + D_{x\nu} &= (\partial_t + \partial_x) A_\nu && \text{(outgoing field)},\\
D_{y\nu} &= \partial_y A_\nu && \text{(zero speed field)},\\
D_{z\nu} &= \partial_z A_\nu && \text{(zero speed field)}.
\end{aligned}
\]
According to Eq. (5.88) we can rewrite the Lorentz constraint in the following way:
\[
(D_{tt} - D_{xt}) + (D_{tx} - D_{xx}) = -(D_{tt} + D_{xt}) + (D_{tx} + D_{xx}) + 2 D_{yy} + 2 D_{zz}. \tag{5.101}
\]
The problem is that, when written in terms of the characteristic fields, the Lorentz constraint depends not only on the in- and outgoing fields, but also on the zero speed fields Dyy and Dzz. Therefore, imposing the constraint on the boundary in order to guarantee constraint preservation leads to a boundary condition that couples the incoming fields to outgoing and zero speed fields, and which therefore does not fall into the class of admissible boundary conditions.
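The algebra behind Eq. (5.101) is elementary but easy to get wrong. The following sympy lines (our own check; they assume that Eq. (5.88) is the Lorentz constraint η^{μν}Dμν = −Dtt + Dxx + Dyy + Dzz = 0 in mostly-plus signature) confirm that the identity (5.101) holds precisely when this constraint vanishes:

    import sympy as sp

    Dtt, Dtx, Dxt, Dxx, Dyy, Dzz = sp.symbols('D_tt D_tx D_xt D_xx D_yy D_zz')

    # Lorentz constraint in mostly-plus signature: eta^{mu nu} D_{mu nu} = 0
    lorentz = -Dtt + Dxx + Dyy + Dzz

    lhs = (Dtt - Dxt) + (Dtx - Dxx)                    # incoming characteristic fields
    rhs = -(Dtt + Dxt) + (Dtx + Dxx) + 2*Dyy + 2*Dzz   # outgoing and zero speed fields

    # (5.101) is equivalent to the vanishing of the Lorentz constraint:
    print(sp.simplify(lhs - rhs + 2*lorentz))          # -> 0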

At this point, one might ask why we were able to formulate a well-posed IBVP based on the second-order formulation in Example 28, while the first-order reduction discussed here fails. As we shall see, the reason for this is that there exist many first-order reductions, which are inequivalent to each other, and a slightly more sophisticated reduction works, while the simplest choice adopted here does not. See also [354, 14] for well-posed formulations of the IBVP in electromagnetism based on the potential formulation in a different gauge.

Example 32. A generalization of Maxwell’s equations is the evolution system

\[
\partial_t E_{ij} = -\varepsilon_{kl(i}\, \partial^k B^l{}_{j)}, \tag{5.102}
\]
\[
\partial_t B_{ij} = +\varepsilon_{kl(i}\, \partial^k E^l{}_{j)}, \tag{5.103}
\]
for the symmetric, trace-free tensor fields Eij and Bij, where here we use the Einstein summation convention, the indices i,j,k,l run over 1,2,3, (ij) denotes symmetrization over ij, and 𝜀ijk is the totally antisymmetric tensor with 𝜀123 = 1. Notice that the right-hand sides of Eqs. (5.102, 5.103) are symmetric and trace-free, such that one can consistently assume that the traces $E^i{}_i$ and $B^i{}_i$ vanish. The evolution system (5.102, 5.103), which is symmetric hyperbolic with respect to the trivial symmetrizer, describes the propagation of the electric and magnetic parts of the Weyl tensor for linearized gravity on a Minkowski background; see, for instance, [182].

Decomposing Eij into its parts parallel and orthogonal to the unit outward normal s,

\[
E_{ij} = \bar{E}\left( s_i s_j - \frac{1}{2}\gamma_{ij} \right) + 2 s_{(i} \bar{E}_{j)} + \hat{E}_{ij}, \tag{5.104}
\]
where $\gamma_{ij} := \delta_{ij} - s_i s_j$, $\bar{E} := s^i s^j E_{ij}$, $\bar{E}_i := \gamma_i{}^k E_{kj} s^j$, $\hat{E}_{ij} := (\gamma_i{}^k \gamma_j{}^l - \gamma_{ij}\gamma^{kl}/2) E_{kl}$, and similarly for Bij, the eigenvalue problem λu = P0(s)u for the boundary matrix is
\[
\begin{aligned}
\lambda \bar{E} &= 0, & \lambda \bar{B} &= 0,\\
\lambda \bar{E}_i &= -\tfrac{1}{2}\,\varepsilon_{kli}\, s^k \bar{B}^l, & \lambda \bar{B}_i &= +\tfrac{1}{2}\,\varepsilon_{kli}\, s^k \bar{E}^l,\\
\lambda \hat{E}_{ij} &= -\varepsilon_{kl(i}\, s^k \hat{B}^l{}_{j)}, & \lambda \hat{B}_{ij} &= +\varepsilon_{kl(i}\, s^k \hat{E}^l{}_{j)},
\end{aligned}
\]
from which one obtains the following characteristic speeds and fields,
\[
0:\ \bar{E},\ \bar{B}, \qquad
\pm\tfrac{1}{2}:\ \bar{E}_i \mp \varepsilon_{kli}\, s^k \bar{B}^l, \qquad
\pm 1:\ \hat{E}_{ij} \mp \varepsilon_{kl(i}\, s^k \hat{B}^l{}_{j)}.
\]
Similar to the Maxwell case, the boundary condition $\hat{E}_{ij} - \varepsilon_{kl(i} s^k \hat{B}^l{}_{j)} = 0$ on the incoming, symmetric trace-free characteristic field is, locally, transparent to outgoing linear gravitational plane waves traveling in the normal direction s. In fact, this condition is equivalent to setting the complex Weyl scalar Ψ0, computed from the adapted complex null tetrad K := ∂t + s, L := ∂t − s, Q, Q̄, to zero at the boundary surface. Variants of this condition have been proposed in the literature in the context of the IBVP for Einstein’s field equations in order to approximately control the incoming gravitational radiation; see [187, 40, 253, 378, 363, 309, 286, 384, 366].
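The characteristic decomposition above can be cross-checked symbolically. The following sympy sketch (our own illustration; it takes s = (1,0,0) and parametrizes each symmetric trace-free tensor by five independent components) assembles the 10 × 10 principal symbol of the system (5.102, 5.103) in the direction s and confirms the speeds and their multiplicities:

    import sympy as sp
    from sympy import LeviCivita, Rational

    def stf(prefix):
        """Symmetric trace-free 3x3 tensor parametrized by 5 independent components."""
        a, b, c, d, e = sp.symbols(prefix + '1:6')
        T = sp.Matrix([[a, b, c],
                       [b, d, e],
                       [c, e, -a - d]])
        return T, [a, b, c, d, e]

    E, Evars = stf('E')
    B, Bvars = stf('B')
    s = (1, 0, 0)          # direction of the unit normal

    def curl_sym(T):
        """eps_{kl(i} s^k T^l_{j)} for a symmetric tensor T."""
        out = sp.zeros(3, 3)
        for i in range(3):
            for j in range(3):
                out[i, j] = Rational(1, 2) * sum(
                    LeviCivita(k, l, i) * s[k] * T[l, j] +
                    LeviCivita(k, l, j) * s[k] * T[l, i]
                    for k in range(3) for l in range(3))
        return out

    # principal symbol of (5.102, 5.103) in the direction s, acting on (E, B)
    PE = -curl_sym(B)
    PB = +curl_sym(E)

    def components(T):
        # five independent components of a symmetric trace-free tensor
        return [T[0, 0], T[0, 1], T[0, 2], T[1, 1], T[1, 2]]

    state = Evars + Bvars
    P0 = sp.Matrix([[sp.diff(expr, v) for v in state]
                    for expr in components(PE) + components(PB)])

    print(P0.eigenvals())   # multiplicities: {0: 2, 1/2: 2, -1/2: 2, 1: 2, -1: 2}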

However, one also needs to control the incoming field $\bar{E}_i - \varepsilon_{kli} s^k \bar{B}^l$ at the boundary. This field, which propagates with speed 1/2, is related to the constraints in the theory. Like in electromagnetism, the fields Eij and Bij are subject to the divergence constraints Pj := ∂^i Eij = 0, Qj := ∂^i Bij = 0. However, unlike the Maxwell case, these constraints do not propagate trivially. As a consequence of the evolution equations (5.102, 5.103), the constraint fields Pj and Qj obey

\[
\partial_t P_j = -\frac{1}{2}\,\varepsilon_{jkl}\,\partial^k Q^l, \qquad
\partial_t Q_j = +\frac{1}{2}\,\varepsilon_{jkl}\,\partial^k P^l, \tag{5.105}
\]
which is equivalent to Maxwell’s equations, except that the propagation speed for the transverse modes is 1/2 instead of 1. Therefore, guaranteeing constraint propagation requires specifying homogeneous maximal dissipative boundary conditions for this system, which have the form of Eq. (5.98) with E ↦ P, B ↦ −Q and g|| = 0. A problem is that this yields conditions involving first derivatives of the fields Eij and Bij when rewritten as a boundary condition for the main system (5.102, 5.103). Except in some particular cases involving totally-reflecting boundaries, it is not possible to cast these conditions into maximal dissipative form.

A solution to this problem has been presented in [181] and [187], where a similar system appears in the context of the IBVP for Einstein’s field equations for solutions with anti-de Sitter asymptotics, or for solutions with an artificial boundary, respectively. The method consists in modifying the evolution system (5.102, 5.103) by using the constraint equations Pj = Qj = 0 in such a way that the constraint fields for the resulting boundary-adapted system propagate along ∂t at the boundary surface. In order to describe this system, extend s to a smooth vector field on Σ with the property that |s| ≤ 1. Then, the boundary-adapted system reads:

\[
\partial_t E_{ij} = -\varepsilon_{kl(i}\,\partial^k B^l{}_{j)} + s_{(i}\,\varepsilon_{j)kl}\, s^k Q^l, \tag{5.106}
\]
\[
\partial_t B_{ij} = +\varepsilon_{kl(i}\,\partial^k E^l{}_{j)} - s_{(i}\,\varepsilon_{j)kl}\, s^k P^l. \tag{5.107}
\]
This system is symmetric hyperbolic, and its characteristic fields in the normal direction are identical to those of the unmodified system, with the important difference that the fields $\bar{E}_i \mp \varepsilon_{kli} s^k \bar{B}^l$ now propagate with zero speed. The induced evolution system for the constraint fields is symmetric hyperbolic and has a trivial boundary matrix. As a consequence, the constraints propagate tangentially to the boundary surface, and no extra boundary conditions for controlling the constraints need to be specified.

5.2.1 Application to systems of wave equations

As anticipated in Example 31, the theory of symmetric hyperbolic first-order equations with maximal dissipative boundary conditions can also be used to formulate well-posed IBVPs for systems of wave equations that are coupled through the boundary conditions, as already discussed in Section 5.1.3 based on the Laplace method. Again, the key idea is to show strong well-posedness; that is, to derive an a priori estimate that controls the first derivatives of the fields in the bulk and at the boundary.

In order to explain how this is performed, we consider the simple case of the Klein–Gordon equation Φtt = ΔΦ − m²Φ on the half plane Σ := {(x,y) ∈ ℝ² : x > 0}. In Example 13 we reduced the problem to a first-order symmetric hyperbolic system for the variables u = (Φ, Φt, Φx, Φy) with symmetrizer H = diag(m², 1, 1, 1), and in Example 29 we determined the class of maximal dissipative boundary conditions for this first-order reduction. Consider the particular case of Sommerfeld boundary conditions, where Φt = Φx is specified at x = 0. Then, Eq. (3.103) gives the following conservation law,

\[
E(\Sigma_T) = E(\Sigma_0) + \int_0^T \!\!\int_{\mathbb{R}} \left. u^* H P_0(s)\, u \right|_{x=0} dy\, dt, \tag{5.108}
\]
where E(Σt) = ∫_{Σt} u∗Hu dx dy = ∫_{Σt} (m²|Φ|² + |Φt|² + |Φx|² + |Φy|²) dx dy and u∗HP0(s)u = −2Re(Φt∗Φx); see Example 29. Using the Sommerfeld boundary condition, we may rewrite −2Re(Φt∗Φx) = −(|Φt|² + |Φx|²), and obtain the energy equality
\[
E(\Sigma_T) + \int_0^T \!\!\int_{\mathbb{R}} \left.\left[\, |\Phi_t|^2 + |\Phi_x|^2 \,\right]\right|_{x=0} dy\, dt = E(\Sigma_0), \tag{5.109}
\]
controlling the derivatives Φt and Φx of Φ at the boundary surface. However, a weakness of this estimate is that it does not control the zero speed fields Φ and Φy at the boundary, and so one does not obtain strong well-posedness.

On the other hand, the first-order reduction is not unique and, as we show now, different reductions may lead to stronger estimates. For this, we choose a real constant b such that 0 < b ≤ 1/2 and define the new fields ū := (Φ, Φt − bΦx, Φx, Φy), which yield the symmetric hyperbolic system

\[
\bar{u}_t =
\begin{pmatrix} b & 0 & 0 & 0 \\ 0 & -b & 1-b^2 & 0 \\ 0 & 1 & b & 0 \\ 0 & 0 & 0 & b \end{pmatrix} \bar{u}_x
+ \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \bar{u}_y
+ \begin{pmatrix} 0 & 1 & 0 & 0 \\ -m^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \bar{u}, \tag{5.110}
\]
with symmetrizer H̄ = diag(m², 1, 1 − b², 1). The characteristic fields in terms of Φ and its derivatives are Φ, Φy, Φt + Φx, and Φt − Φx, as before. However, the fields now have characteristic speeds −b, −b, −1, +1, respectively, whereas in the previous reduction they were 0, 0, −1, +1. Therefore, the effect of the new reduction versus the old one is to shift the speeds of the zero speed fields, and to convert them into outgoing fields with speed −b. Notice that the Sommerfeld boundary condition Φt = Φx is still maximal dissipative with respect to the new reduction. Repeating the energy estimates again leads to a conservation law of the form (5.108), but where now the energy and flux quantities are E(Σt) = ∫_{Σt} ū∗H̄ū dx dy = ∫_{Σt} (m²|Φ|² + |Φt − bΦx|² + (1 − b²)|Φx|² + |Φy|²) dx dy and
\[
\bar{u}^* \bar{H} P_0(s)\, \bar{u} = -b\left[\, m^2|\Phi|^2 + |\Phi_t|^2 + |\Phi_x|^2 + |\Phi_y|^2 \,\right] + 2b\left[\, |\Phi_t|^2 + |\Phi_x|^2 \,\right] - 2\,\mathrm{Re}(\Phi_t^*\Phi_x). \tag{5.111}
\]
Imposing the boundary condition Φt = Φx at x = 0 and using 2b ≤ 1 leads to the energy estimate
\[
E(\Sigma_T) + b \int_0^T \!\!\int_{\mathbb{R}} \left.\left[\, m^2|\Phi|^2 + |\Phi_t|^2 + |\Phi_x|^2 + |\Phi_y|^2 \,\right]\right|_{x=0} dy\, dt \le E(\Sigma_0), \tag{5.112}
\]
controlling Φ and all its first derivatives at the boundary surface.
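A short symbolic check (our own sketch, not part of the original argument) confirms both that H̄ symmetrizes the system (5.110) and that, under the Sommerfeld condition Φt = Φx, the boundary flux is indeed bounded by −b times the energy density whenever b ≤ 1/2:

    import sympy as sp

    b, m, Phi, Phit, Phix, Phiy = sp.symbols('b m Phi Phi_t Phi_x Phi_y', real=True)

    Ax = sp.Matrix([[b, 0, 0, 0],
                    [0, -b, 1 - b**2, 0],
                    [0, 1, b, 0],
                    [0, 0, 0, b]])
    Ay = sp.Matrix([[0, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0],
                    [0, 1, 0, 0]])
    Hbar = sp.diag(m**2, 1, 1 - b**2, 1)

    # symmetric hyperbolicity: Hbar A^x and Hbar A^y are symmetric
    assert (Hbar * Ax).is_symmetric() and (Hbar * Ay).is_symmetric()

    # boundary matrix at x = 0 is P0(s) = -A^x, with state (Phi, Phi_t - b Phi_x, Phi_x, Phi_y)
    u = sp.Matrix([Phi, Phit - b*Phix, Phix, Phiy])
    flux = sp.expand((u.T * Hbar * (-Ax) * u)[0, 0])        # Eq. (5.111) for real fields

    # with the Sommerfeld condition Phi_t = Phi_x, flux <= -b * (energy density) for b <= 1/2
    bound = -b*(m**2*Phi**2 + Phit**2 + Phix**2 + Phiy**2)
    print(sp.factor(sp.simplify((flux - bound).subs(Phix, Phit))))
    # 2*Phi_t**2*(2*b - 1): nonpositive precisely when b <= 1/2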

Summarizing, we have seen that the most straightforward first-order reduction of the Klein–Gordon equation does not lead to strong well-posedness. However, strong well-posedness can be obtained by choosing a more sophisticated reduction, in which the time derivative of Φ is replaced by its derivative Φt − bΦx along the time-like vector (1, −b), which points outside the domain at the boundary surface. In fact, it is possible to obtain a symmetric hyperbolic reduction leading to strong well-posedness for any future-directed time-like vector field u that points outside the domain at the boundary. Based on the geometric definition of first-order symmetric hyperbolic systems in [205], it is possible to generalize this result to systems of quasilinear wave equations on curved backgrounds [264].

In order to describe the result in [264], let π : E → M be a vector bundle over M = [0,T] × Σ̄ with fiber ℝN; let ∇μ be a fixed, given connection on E; and let $g_{\mu\nu} = g_{\mu\nu}(\Phi)$ be a Lorentz metric on M with inverse $g^{\mu\nu}(\Phi)$, which depends pointwise and smoothly on a vector-valued function Φ = {ΦA}, A = 1, 2, ..., N, parameterizing a local section of E. Assume that each time-slice Σt = {t} × Σ is space-like and that the boundary 𝒯 = [0,T] × ∂Σ is time-like with respect to $g_{\mu\nu}(\Phi)$. We consider a system of quasilinear wave equations of the form

\[
g^{\mu\nu}(\Phi)\,\nabla_\mu \nabla_\nu \Phi^A = F^A(\Phi, \nabla\Phi), \tag{5.113}
\]
where F^A(Φ, ∇Φ) is a vector-valued function, which depends pointwise and smoothly on its arguments. The wave system (5.113) is subject to the initial conditions
\[
\left. \Phi^A \right|_{\Sigma_0} = \Phi_0^A, \qquad
\left. n^\mu \nabla_\mu \Phi^A \right|_{\Sigma_0} = \Pi_0^A, \tag{5.114}
\]
where Φ0^A and Π0^A are given vector-valued functions on Σ0, and where n^μ = n^μ(Φ) denotes the future-directed unit normal to Σ0 with respect to $g_{\mu\nu}$. In order to describe the boundary conditions, let T^μ = T^μ(p,Φ), p ∈ 𝒯, be a future-directed vector field on 𝒯, normalized with respect to $g_{\mu\nu}$, and let N^μ = N^μ(p,Φ) be the unit outward normal to 𝒯 with respect to the metric $g_{\mu\nu}$. We consider boundary conditions on 𝒯 of the following form:
\[
\left. \left[\, T^\mu + \alpha N^\mu \,\right] \nabla_\mu \Phi^A \right|_{\mathcal{T}}
= \left. c^{\mu A}{}_B\, \nabla_\mu \Phi^B \right|_{\mathcal{T}}
+ \left. d^A{}_B\, \Phi^B \right|_{\mathcal{T}} + G^A, \tag{5.115}
\]
where α = α(p,Φ) > 0 is a strictly positive, smooth function, G^A = G^A(p) is a given, vector-valued function on 𝒯, and the matrix coefficients $c^{\mu A}{}_B = c^{\mu A}{}_B(p,\Phi)$ and $d^A{}_B = d^A{}_B(p,\Phi)$ are smooth functions of their arguments. Furthermore, we assume that $c^{\mu A}{}_B$ satisfies the following property. Given a local trivialization φ : U × ℝN → π⁻¹(U) of E, such that Ū ⊂ M is compact and contains a portion 𝒰 of the boundary 𝒯, there exists a smooth map J : U → GL(N, ℝ), p ↦ (J^A{}_B(p)), such that the transformed matrix coefficients
\[
\tilde{c}^{\mu A}{}_B := J^A{}_C\, c^{\mu C}{}_D \left( J^{-1} \right)^D{}_B \tag{5.116}
\]
are in upper triangular form with zeroes on the diagonal, that is
\[
\tilde{c}^{\mu A}{}_B = 0, \qquad B \le A. \tag{5.117}
\]
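To illustrate condition (5.117), consider the simplest nontrivial case N = 2 (a two-component illustration of ours, not taken from [264]). The transformed coupling matrices are then forced to have the form
\[
\left( \tilde{c}^{\mu A}{}_B \right) = \begin{pmatrix} 0 & \tilde{c}^{\mu 1}{}_2 \\ 0 & 0 \end{pmatrix},
\]
so that, modulo the undifferentiated terms $d^A{}_B \Phi^B$, the boundary condition for the transformed field $\tilde{\Phi}^2$ contains no derivative coupling at all, while the condition for $\tilde{\Phi}^1$ may couple only to derivatives of $\tilde{\Phi}^2$. It is this hierarchical, nilpotent structure that suggests estimating the transformed fields one after another, starting with $\tilde{\Phi}^2$.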

Theorem 8. [264] The IBVP (5.113, 5.114, 5.115) is well posed. Given T > 0 and sufficiently small and smooth initial and boundary data Φ0^A, Π0^A and G^A satisfying the compatibility conditions at the edge S = {0} × ∂Σ, there exists a unique smooth solution on M satisfying the evolution equation (5.113), the initial condition (5.114) and the boundary condition (5.115). Furthermore, the solution depends continuously on the initial and boundary data.

Theorem 8 provides the general framework for treating wave systems with constraints, such as Maxwell’s equations in the Lorentz gauge and, as we will see in Section 6.1, Einstein’s field equations with artificial outer boundaries.

5.2.2 Existence of weak solutions and the adjoint problem

Here, we show how to prove the existence of weak solutions for linear, symmetric hyperbolic equations with variable coefficients and maximal dissipative boundary conditions. The method can also be applied to a more general class of linear symmetric operators with maximal dissipative boundary conditions; see [189, 275]. The proof below will shed some light on the maximality condition for the boundary space Vp.

Our starting point is an IBVP of the form (5.1, 5.2, 5.3), where the matrix functions Aj(t,x) and b(t,x) do not depend on u, and where F(t,x,u) is replaced by B(t,x)u + F(t,x), such that the system is linear. Furthermore, we can assume that the initial and boundary data are trivial, f = 0, g = 0. We require the system to be symmetric hyperbolic with symmetrizer H(t,x) satisfying the conditions in Definition 4(iii), and assume the boundary conditions (5.3) are maximal dissipative. We rewrite the IBVP on ΩT := [0,T] × Σ as the abstract linear problem

− Lu = F, (5.118 )
where L : D(L) ⊂ X → X is the linear operator on the Hilbert space X := L²(ΩT) defined by the evolution equation and the initial and boundary conditions:
\[
D(L) := \{ u \in C_b^\infty(\Omega_T) : u(p) = 0 \text{ for all } p \in \Sigma_0 \text{ and } u(p) \in V_p \text{ for all } p \in \mathcal{T} \},
\]
\[
Lu := \sum_{\mu=0}^{n} A^\mu(t,x)\, \frac{\partial u}{\partial x^\mu} + B(t,x)\, u, \qquad u \in D(L),
\]
where we have defined A0 := −I and x0 := t, where Vp = {u ∈ ℂm : b(t,x)u = 0} is the boundary space, and where Σ0 := {0} × Σ, ΣT := {T} × Σ and 𝒯 := [0,T] × ∂Σ denote the initial, the final and the boundary surface, respectively.

For the following, the adjoint IBVP plays an important role. This problem is defined as follows. First, the symmetrizer defines a natural scalar product on X,

\[
\langle v, u \rangle_H := \int_{\Omega_T} v(t,x)^* H(t,x)\, u(t,x)\, dt\, d^n x, \qquad u, v \in X, \tag{5.119}
\]
which, because of the properties of H, is equivalent to the standard scalar product on L²(ΩT). In order to obtain the adjoint problem, we take u ∈ D(L) and v ∈ Cb∞(ΩT), and use Gauss’s theorem to find
\[
\langle v, Lu \rangle_H = \langle L^* v, u \rangle_H
+ \int_{\Sigma_0} v^* H(t,x)\, u\, d^n x
- \int_{\Sigma_T} v^* H(t,x)\, u\, d^n x
+ \int_{\mathcal{T}} v^* H(t,x) P_0(t,x,s)\, u\, dS, \tag{5.120}
\]
where we have defined the formal adjoint L∗ : D(L∗) ⊂ X → X of L by
\[
L^* v := -\sum_{\mu=0}^n A^\mu(t,x)\, \frac{\partial v}{\partial x^\mu}
- H(t,x)^{-1} \sum_{\mu=0}^n \frac{\partial\left[ H(t,x) A^\mu(t,x) \right]}{\partial x^\mu}\, v
+ H(t,x)^{-1} B(t,x)^* H(t,x)\, v. \tag{5.121}
\]
In order for the integrals on the right-hand side of Eq. (5.120) to vanish, such that ⟨v, Lu⟩H = ⟨L∗v, u⟩H, we first notice that the integral over Σ0 vanishes because u = 0 on Σ0. The integral over ΣT also vanishes if we require v = 0 on ΣT. The last term also vanishes if we require v to lie in the dual boundary space
\[
V_p^* := \{ v \in \mathbb{C}^m : v^* H(t,x) P_0(t,x,s)\, u = 0 \text{ for all } u \in V_p \}, \tag{5.122}
\]
for each p ∈ 𝒯. Therefore, if we define
\[
D(L^*) := \{ v \in C_b^\infty(\Omega_T) : v(p) = 0 \text{ for all } p \in \Sigma_T \text{ and } v(p) \in V_p^* \text{ for all } p \in \mathcal{T} \}, \tag{5.123}
\]
we have ⟨v, Lu⟩H = ⟨L∗v, u⟩H for all u ∈ D(L) and v ∈ D(L∗); that is, the operator L∗ is adjoint to L. There is the following nice relation between the boundary spaces Vp and Vp∗:

Lemma 4. Let p ∈ 𝒯 be a boundary point. Then, Vp is maximal nonpositive if and only if Vp∗ is maximal nonnegative.

Proof. Fix a boundary point p = (t,x) ∈ 𝒯 and define the matrix ℬ := H(t,x)P0(t,x,s), with s the unit outward normal to ∂Σ at x. Since the system is symmetric hyperbolic, ℬ is Hermitian. We decompose ℂm = E+ ⊕ E− ⊕ E0 into orthogonal subspaces E+, E−, E0 on which ℬ is positive, negative and zero, respectively. We equip E± with the scalar products (⋅,⋅)±, which are defined by

\[
(u_\pm, v_\pm)_\pm := \pm\, u_\pm^*\, \mathcal{B}\, v_\pm, \qquad u_\pm, v_\pm \in E_\pm. \tag{5.124}
\]
In particular, we have u∗ℬu = (u+, u+)+ − (u−, u−)− for all u ∈ ℂm. Therefore, if Vp is maximal nonpositive, there exists a linear transformation q : E− → E+ satisfying |qu−|+ ≤ |u−|− for all u− ∈ E−, such that (cf. Eq. (5.94))
\[
V_p = \{ u \in \mathbb{C}^m : u_+ = q\, u_- \}. \tag{5.125}
\]
Let v ∈ Vp∗. Then,
\[
0 = v^* \mathcal{B} u
= (v_+, u_+)_+ - (v_-, u_-)_-
= (v_+, q u_-)_+ - (v_-, u_-)_-
= (q^\dagger v_+, u_-)_- - (v_-, u_-)_- \tag{5.126}
\]
for all u ∈ Vp, where q† : E+ → E− is the adjoint of q with respect to the scalar products (⋅,⋅)± defined on E±. Therefore, v− = q†v+, and
\[
V_p^* = \{ v \in \mathbb{C}^m : v_- = q^\dagger v_+ \}. \tag{5.127}
\]
Since q† has the same norm as q, which is at most one, it follows that Vp∗ is maximal nonnegative. The converse statement follows in an analogous way. □

The lemma implies that solving the original problem − Lu = F with u ∈ D (L ) is equivalent to solving the adjoint problem L∗v = F with v ∈ D (L∗), which, since v(T, x) = 0 is held fixed at ΣT, corresponds to the time-reversed problem with the adjoint boundary conditions. From the a priori energy estimates we obtain:

Lemma 5. There is a constant δ = δ(T) such that

\[
\| Lu \|_H \ge \delta\, \| u \|_H, \qquad \| L^* v \|_H \ge \delta\, \| v \|_H \tag{5.128}
\]
for all u ∈ D (L) and v ∈ D (L∗), where ∥ ⋅ ∥H is the norm induced by the scalar product ⟨⋅,⋅⟩H.

Proof. Let u ∈ D (L ) and set F := − Lu. From the energy estimates in Section 3.2.3 one easily obtains

\[
E(\Sigma_t) \le C\, \| F \|_H^2, \qquad 0 \le t \le T, \tag{5.129}
\]
for some positive constant C depending on T. Integrating both sides from t = 0 to t = T gives
\[
\| u \|_H^2 \le C\, T\, \| F \|_H^2 = C\, T\, \| Lu \|_H^2, \tag{5.130}
\]
which yields the statement for L upon setting δ := (CT)^{−1/2}. The estimate for L∗ follows from a similar energy estimate for the adjoint problem. □

In particular, Lemma 5 implies that (strong) solutions to the IBVP and its adjoint are unique. Since L and L∗ are closable operators [345], their closures $\overline{L}$ and $\overline{L^*}$ satisfy the same inequalities as in Eq. (5.128). Now we are ready to define weak solutions and to prove their existence:

Definition 10. u ∈ X is called a weak solution of the problem (5.118) if

\[
\langle L^* v, u \rangle_H = -\langle v, F \rangle_H \tag{5.131}
\]
for all v ∈ D(L∗).

In order to prove the existence of such a u ∈ X, we introduce the linear space Y = D($\overline{L^*}$) and equip it with the scalar product ⟨⋅,⋅⟩Y defined by

\[
\langle v, w \rangle_Y := \langle \overline{L^*} v,\, \overline{L^*} w \rangle_H, \qquad v, w \in Y. \tag{5.132}
\]
The positivity of this product is a direct consequence of Lemma 5, and since $\overline{L^*}$ is closed, Y defines a Hilbert space. Next, we define the linear form J : Y → ℂ on Y by
J (v) := − ⟨F,v ⟩H. (5.133 )
This form is bounded, according to Lemma 5,
\[
|J(v)| \le \| F \|_H\, \| v \|_H \le \delta^{-1} \| F \|_H\, \| \overline{L^*} v \|_H = \delta^{-1} \| F \|_H\, \| v \|_Y \tag{5.134}
\]
for all v ∈ Y. Therefore, according to the Riesz representation lemma, there exists a unique w ∈ Y such that ⟨w, v⟩Y = J(v) for all v ∈ Y. Setting $u := \overline{L^*} w \in X$ gives a weak solution of the problem.
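For completeness, here is the short verification, left implicit above, that $u := \overline{L^*} w$ is indeed a weak solution. For every v ∈ D(L∗), using the relation ⟨a,b⟩H = ⟨b,a⟩H∗ (a star on a scalar product denoting complex conjugation),
\[
\langle L^* v, u \rangle_H
= \langle \overline{L^*} v,\, \overline{L^*} w \rangle_H
= \langle \overline{L^*} w,\, \overline{L^*} v \rangle_H^{\,*}
= \langle w, v \rangle_Y^{\,*}
= J(v)^*
= -\langle v, F \rangle_H,
\]
which is precisely condition (5.131).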

If u ∈ X is a weak solution that is sufficiently smooth, it follows from the Green-type identity (5.120) that u has vanishing initial data and that it satisfies the required boundary conditions, and hence it is a solution to the original IBVP (5.118). The difficult part is to show that a weak solution is indeed sufficiently regular for this conclusion to be drawn. See [189, 275, 344, 343, 387] for such “weak=strong” results.

