3.1 Linear, constant coefficient problems

We consider an evolution equation on n-dimensional space of the following form:
u_t = P(\partial/\partial x)u \equiv \sum_{|\nu| \le p} A_\nu D^\nu u, \qquad x \in \mathbb{R}^n, \quad t \ge 0. \qquad (3.1)
Here, u = u(t,x) ∈ ℂ^m is the state vector, and u_t its partial derivative with respect to t. Next, the A_ν's denote complex, m × m matrices, where ν = (ν₁,ν₂,...,ν_n) denotes a multi-index with components ν_j ∈ {0, 1, 2, 3, ...} and |ν| := ν₁ + ... + ν_n. Finally, D^ν denotes the partial derivative operator
D^\nu := \frac{\partial^{|\nu|}}{\partial x_1^{\nu_1} \cdots \partial x_n^{\nu_n}}

of order |ν|, where D^0 := I. Here are a few representative examples:

Example 1. The advection equation u_t(t,x) = λu_x(t,x) with speed λ ∈ ℝ in the negative x direction.

Example 2. The heat equation u_t(t,x) = Δu(t,x), where

\Delta := \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \ldots + \frac{\partial^2}{\partial x_n^2}

denotes the Laplace operator.

Example 3. The Schrödinger equation u_t(t,x) = iΔu(t,x).

Example 4. The wave equation U_tt = ΔU, which can be cast into the form of Eq. (3.1),

u_t = \begin{pmatrix} 0 & 1 \\ \Delta & 0 \end{pmatrix} u, \qquad u = \begin{pmatrix} U \\ V \end{pmatrix}. \qquad (3.2)

We can find solutions of Eq. (3.1) by Fourier transformation in space,

\hat{u}(t,k) = \frac{1}{(2\pi)^{n/2}} \int e^{-ik\cdot x}\, u(t,x)\, d^n x, \qquad k \in \mathbb{R}^n, \quad t \ge 0. \qquad (3.3)
Applied to Eq. (3.1) this yields the system of linear ordinary differential equations
\hat{u}_t = P(ik)\hat{u}, \qquad t \ge 0, \qquad (3.4)
for each wave vector k ∈ ℝⁿ, where P(ik), called the symbol of the differential operator P(∂/∂x), is defined as
P(ik) := \sum_{|\nu| \le p} A_\nu (ik_1)^{\nu_1} \cdots (ik_n)^{\nu_n}, \qquad k \in \mathbb{R}^n. \qquad (3.5)
The solution of Eq. (3.4) is given by
\hat{u}(t,k) = e^{P(ik)t}\hat{u}(0,k), \qquad t \ge 0, \qquad (3.6)
where û(0,k) is determined by the initial data for u at t = 0. Therefore, the formal solution of the Cauchy problem
u_t(t,x) = P(\partial/\partial x)u(t,x), \qquad x \in \mathbb{R}^n, \quad t \ge 0, \qquad (3.7)
u(0,x) = f(x), \qquad x \in \mathbb{R}^n, \qquad (3.8)
with given initial data f for u at t = 0 is
u(t,x) = \frac{1}{(2\pi)^{n/2}} \int e^{ik\cdot x} e^{P(ik)t} \hat{f}(k)\, d^n k, \qquad x \in \mathbb{R}^n, \quad t \ge 0, \qquad (3.9)
where \hat{f}(k) = \frac{1}{(2\pi)^{n/2}} \int e^{-ik\cdot x} f(x)\, d^n x.
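As a quick numerical illustration (not part of the original text), the representation (3.9) can be evaluated with the FFT for the one-dimensional heat equation on a periodic domain; the grid size and initial data below are arbitrary choices.

```python
import numpy as np

# Evaluate the formal solution (3.9) for the 1D heat equation u_t = u_xx
# on [0, 2*pi) with periodic data: each Fourier mode is multiplied by the
# propagator e^{P(ik)t}, with symbol P(ik) = -k^2.
N = 128
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x) + 0.5 * np.cos(3 * x)              # arbitrary initial data
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wave numbers
t = 0.1
u_hat = np.exp(-k**2 * t) * np.fft.fft(f)        # e^{P(ik)t} * u_hat(0,k)
u = np.real(np.fft.ifft(u_hat))

# each mode decays exactly as e^{-k^2 t}, so the answer is known in closed form
u_exact = np.exp(-t) * np.sin(x) + 0.5 * np.exp(-9 * t) * np.cos(3 * x)
print("max error:", np.max(np.abs(u - u_exact)))
```

The FFT plays the role of the Fourier transform (3.3), and the inverse FFT that of the reconstruction formula (3.9).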

3.1.1 Well-posedness

At this point we have to ask ourselves if expression (3.9) makes sense. In fact, we do not expect the integral to converge in general. Even if f̂ is smooth and decays rapidly to zero as |k| → ∞ we could still have problems if |e^{P(ik)t}| diverges as |k| → ∞. One simple, but very restrictive, possibility to control this problem is to limit ourselves to initial data f in the class 𝒮^ω of functions that are the Fourier transform of a C^∞-function with compact support, i.e., f ∈ 𝒮^ω, where

\mathcal{S}^\omega := \left\{ v(\cdot) = \frac{1}{(2\pi)^{n/2}} \int e^{ik\cdot(\cdot)}\, \hat{v}(k)\, d^n k \;:\; \hat{v} \in C_0^\infty(\mathbb{R}^n) \right\}. \qquad (3.10)
A function in this space is real analytic and decays faster than any polynomial as |x| → ∞. If f ∈ 𝒮^ω the integral in Eq. (3.9) is well defined and we obtain a solution of the Cauchy problem (3.7, 3.8), which, for each t ≥ 0, lies in this space. However, this possibility suffers from several unwanted features:

For these reasons, it is desirable to consider initial data of a more general class than 𝒮^ω. For this, we need to control the growth of e^{P(ik)t}. This is captured in the following

Definition 1. The Cauchy problem (3.7, 3.8) is called well posed if there are constants K ≥ 1 and α ∈ ℝ such that

|e^{P(ik)t}| \le K e^{\alpha t} \qquad \text{for all } t \ge 0 \text{ and all } k \in \mathbb{R}^n. \qquad (3.11)

The importance of this definition relies on the property that for each fixed time t > 0 the norm |e^{P(ik)t}| of the propagator is bounded by the constant C(t) := Ke^{αt}, which is independent of the wave vector k. The definition does not state anything about the growth of the solution with time other than that this growth is bounded by an exponential. In this sense, unless one can choose α ≤ 0 or α > 0 arbitrarily small, well-posedness is not a statement about stability in time, but rather about stability with respect to mode fluctuations.

Let us illustrate the meaning of Definition 1 with a few examples:

Example 5. The heat equation u_t(t,x) = Δu(t,x).
Fourier transformation converts this equation into û_t(t,k) = −|k|²û(t,k). Hence, the symbol is P(ik) = −|k|² and |e^{P(ik)t}| = e^{−|k|²t} ≤ 1. The problem is well posed.

Example 6. The backwards heat equation u_t(t,x) = −Δu(t,x).
In this case the symbol is P(ik) = +|k|², and |e^{P(ik)t}| = e^{|k|²t}. In contrast to the previous case, e^{P(ik)t} exhibits exponential frequency-dependent growth for each fixed t > 0 and the problem is not well posed. Notice that small initial perturbations with large |k| are amplified by a factor that becomes larger and larger as |k| increases. Therefore, after an arbitrarily small time, the solution is contaminated by high-frequency modes.

Example 7. The Schrödinger equation u_t(t,x) = iΔu(t,x).
In this case we have P(ik) = −i|k|² and |e^{P(ik)t}| = 1. The problem is well posed. Furthermore, the evolution is unitary, and we can evolve forward and backwards in time. When compared to the previous example, it is the factor i in front of the Laplace operator that saves the situation and allows the evolution backwards in time.
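The three scalar symbols just discussed can be compared numerically; the following sketch (the values of t and k are arbitrary choices) shows the propagator norm |e^{P(ik)t}| staying bounded for the heat and Schrödinger symbols while blowing up in |k| for the backwards heat equation.

```python
import numpy as np

# |e^{P(ik)t}| for the scalar symbols of Examples 5-7, at fixed t = 1
t = 1.0
for k in [1.0, 2.0, 4.0]:
    heat = abs(np.exp(-k**2 * t))               # e^{-|k|^2 t} <= 1: well posed
    backward = abs(np.exp(k**2 * t))            # e^{+|k|^2 t}: unbounded in |k|
    schroedinger = abs(np.exp(-1j * k**2 * t))  # = 1: well posed, unitary
    print(f"k={k}: heat={heat:.2e}, backward={backward:.2e}, "
          f"schroedinger={schroedinger:.2f}")
```

No constant C(t) independent of k can bound the backwards heat propagator, which is exactly the failure of Definition 1.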

Example 8. The one-dimensional wave equation written in first-order form,

u_t(t,x) = A u_x(t,x), \qquad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \qquad (3.12)
The symbol is P(ik) = ikA. Since the matrix A is symmetric and has eigenvalues ±1, there exists an orthogonal transformation U such that
A = U \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} U^{-1}, \qquad e^{ikAt} = U \begin{pmatrix} e^{ikt} & 0 \\ 0 & e^{-ikt} \end{pmatrix} U^{-1}. \qquad (3.13)
Therefore, |e^{P(ik)t}| = 1, and the problem is well posed.

Example 9. Perturb the previous problem by a lower-order term,

u_t(t,x) = A u_x(t,x) + \lambda u(t,x), \qquad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \lambda \in \mathbb{R}. \qquad (3.14)
The symbol is P(ik) = ikA + λI, and |e^{P(ik)t}| = e^{λt}. The problem is well posed, even though the solution grows exponentially in time if λ > 0.
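A direct numerical check of Examples 8 and 9 (a sketch; the helper expm and the parameter values are ours, not from the text): the propagator norm is exactly 1 for the symbol ikA, and exactly e^{λt} after the lower-order perturbation, independently of k.

```python
import numpy as np

def expm(M, terms=25):
    """Matrix exponential by scaling-and-squaring of a truncated Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 2), 1.0)))) + 1)
    A = M / (2 ** s)
    E = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

A = np.array([[0.0, 1.0], [1.0, 0.0]])
lam, t = 0.5, 2.0
for k in [1.0, 5.0, 25.0]:
    n0 = np.linalg.norm(expm(1j * k * t * A), 2)                      # Example 8
    n1 = np.linalg.norm(expm((1j * k * A + lam * np.eye(2)) * t), 2)  # Example 9
    print(f"k={k}: |e^(ikAt)|={n0:.6f}, |e^((ikA+lam I)t)|={n1:.6f}")
```

Since A is symmetric, e^{ikAt} is unitary, and adding λI merely multiplies the propagator by the k-independent factor e^{λt}.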

More generally one can show (see Theorem 2.1.2 in [259]):

Lemma 1. The Cauchy problem for the first-order equation u_t = Au_x + Bu with complex m × m matrices A and B is well posed if and only if A is diagonalizable and has only real eigenvalues.

By considering the eigenvalues of the symbol P (ik ) we obtain the following simple necessary condition for well-posedness:

Lemma 2 (Petrovskii condition). Suppose the Cauchy problem (3.7, 3.8) is well posed. Then, there is a constant α ∈ ℝ such that

\mathrm{Re}(\lambda) \le \alpha \qquad (3.15)
for all eigenvalues λ of P (ik).

Proof. Suppose λ is an eigenvalue of P(ik) with corresponding eigenvector v, P (ik )v = λv. Then, if the problem is well posed,

K e^{\alpha t} |v| \ge |e^{P(ik)t} v| = |e^{\lambda t} v| = e^{\mathrm{Re}(\lambda)t} |v|, \qquad (3.16)
for all t ≥ 0, which implies that e^{Re(λ)t} ≤ Ke^{αt} for all t ≥ 0, and hence Re(λ) ≤ α. □

Although the Petrovskii condition is a very simple necessary condition, we stress that it is not sufficient in general. Counterexamples are first-order systems that are weakly, but not strongly, hyperbolic; see Example 10 below.

3.1.2 Extension of solutions

Now that we have defined and illustrated the notion of well-posedness, let us see how it can be used to solve the Cauchy problem (3.7, 3.8) for initial data more general than in 𝒮^ω. Suppose first that f ∈ 𝒮^ω, as before. Then, if the problem is well posed, Parseval's identities imply that the solution (3.9) must satisfy

\|u(t,\cdot)\| = \|\hat{u}(t,\cdot)\| = \|e^{P(i\cdot)t}\hat{f}\| \le K e^{\alpha t}\|\hat{f}\| = K e^{\alpha t}\|f\|, \qquad t \ge 0. \qquad (3.17)
Therefore, the 𝒮^ω-solution satisfies the following estimate
\|u(t,\cdot)\| \le K e^{\alpha t}\|f\|, \qquad t \ge 0, \qquad (3.18)
for all f ∈ 𝒮^ω. This estimate is important because it allows us to extend the solution to the much larger space L²(ℝⁿ). This extension is defined in the following way: let f ∈ L²(ℝⁿ). Since 𝒮^ω is dense in L²(ℝⁿ) there exists a sequence {f_j} in 𝒮^ω such that ∥f_j − f∥ → 0. Therefore, if the problem is well posed, it follows from the estimate (3.18) that the corresponding solutions u_j defined by Eq. (3.9) form a Cauchy sequence in L²(ℝⁿ), and we can define
U(t)f(x) := \lim_{j\to\infty} \frac{1}{(2\pi)^{n/2}} \int e^{ik\cdot x} e^{P(ik)t} \hat{f}_j(k)\, d^n k, \qquad x \in \mathbb{R}^n, \quad t \ge 0, \qquad (3.19)
where the limit exists in the L2(ℝn ) sense. The linear map U(t) : L2(ℝn ) → L2 (ℝn) satisfies the following properties:
  1. U(0) = I is the identity map.
  2. U(t + s) = U(t)U(s) for all t, s ≥ 0.
  3. For f ∈ 𝒮^ω, u(t,·) = U(t)f is the unique solution to the Cauchy problem (3.7, 3.8).
  4. ∥U(t)f∥ ≤ Ke^{αt}∥f∥ for all f ∈ L²(ℝⁿ) and all t ≥ 0.

The family {U(t) : t ≥ 0} is called a semi-group on L²(ℝⁿ). In general, U(t) cannot be extended to negative t, as the example of the backwards heat equation, Example 6, shows.

For f ∈ L²(ℝⁿ) the function u(t,x) := U(t)f(x) is called a weak solution of the Cauchy problem (3.7, 3.8). It can also be constructed in an abstract way by using the Fourier–Plancherel operator ℱ : L²(ℝⁿ) → L²(ℝⁿ). If the problem is well posed, then for each f ∈ L²(ℝⁿ) and t ≥ 0 the map k ↦ e^{P(ik)t}ℱ(f)(k) defines an L²(ℝⁿ)-function, and, hence, we can define

u(t,\cdot) := \mathcal{F}^{-1}\left( e^{P(i\cdot)t}\, \mathcal{F}f \right), \qquad t \ge 0. \qquad (3.20)

According to Duhamel’s principle, the semi-group U (t) can also be used to construct weak solutions of the inhomogeneous problem,

u_t(t,x) = P(\partial/\partial x)u(t,x) + F(t,x), \qquad x \in \mathbb{R}^n, \quad t \ge 0, \qquad (3.21)
u(0,x) = f(x), \qquad x \in \mathbb{R}^n, \qquad (3.22)
where F : [0,∞) → L²(ℝⁿ), t ↦ F(t,·), is continuous:
u(t,\cdot) = U(t)f + \int_0^t U(t-s)F(s,\cdot)\, ds. \qquad (3.23)
For a discussion on semi-groups in a more general context see Section 3.4.
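Duhamel's formula (3.23) is easy to verify numerically in the scalar case m = n = 1, where U(t) reduces to multiplication by e^{pt}; the parameter values and the forcing below are arbitrary choices of ours.

```python
import numpy as np

# Duhamel's principle for the scalar problem u_t = p*u + F(t), u(0) = f:
#   u(t) = e^{p t} f + integral_0^t e^{p(t-s)} F(s) ds.
p, f, t = -0.7, 1.3, 2.0
F = np.cos                                     # arbitrary continuous forcing

# midpoint-rule quadrature of the Duhamel integral
n = 200000
ds = t / n
s = (np.arange(n) + 0.5) * ds
u = np.exp(p * t) * f + np.sum(np.exp(p * (t - s)) * F(s)) * ds

# closed form of the same integral for F(s) = cos(s)
exact = np.exp(p * t) * f + (np.sin(t) - p * np.cos(t) + p * np.exp(p * t)) / (p**2 + 1)
print(u, exact)
```

The quadrature reproduces the closed-form solution of the inhomogeneous ODE, illustrating how the semi-group propagates the forcing from each intermediate time s to the final time t.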

3.1.3 Algebraic characterization

In order to extend the solution concept to initial data more general than analytic, we have introduced the concept of well-posedness in Definition 1. However, given a symbol P(ik), it is not always a simple task to determine whether or not constants K ≥ 0 and α ∈ ℝ exist such that |e^{P(ik)t}| ≤ Ke^{αt} for all t ≥ 0 and k ∈ ℝⁿ. Fortunately, the matrix theorem by Kreiss [257] provides necessary and sufficient conditions on the symbol P(ik) for well-posedness.

Theorem 1. Let P(ik), k ∈ ℝⁿ, be the symbol of a constant coefficient linear problem, see Eq. (3.5), and let α ∈ ℝ. Then, the following conditions are equivalent:

  1. There exists a constant K ≥ 0 such that
     |e^{P(ik)t}| \le K e^{\alpha t} \qquad (3.24)
     for all t ≥ 0 and k ∈ ℝⁿ.
  2. There exists a constant M > 0 and a family H(k) of m × m Hermitian matrices such that
     M^{-1} I \le H(k) \le M I, \qquad H(k) P(ik) + P(ik)^* H(k) \le 2\alpha H(k) \qquad (3.25)
     for all k ∈ ℝⁿ.

A generalization and complete proof of this theorem can be found in [259]. However, let us show here the implication (ii) ⇒ (i), since it illustrates the concept of energy estimates, which will be used quite often throughout this review (see Section 3.2.3 below for a more general discussion of these estimates). Hence, let H(k) be a family of m × m Hermitian matrices satisfying the condition (3.25). Let k ∈ ℝⁿ and v₀ ∈ ℂ^m be fixed, and define v(t) := e^{P(ik)t}v₀ for t ≥ 0. Then we have the following estimate for the “energy” density v(t)^*H(k)v(t),

\frac{d}{dt}\, v(t)^* H(k) v(t) = [P(ik)v(t)]^* H(k) v(t) + v(t)^* H(k) P(ik) v(t)
 = v(t)^* \left[ P(ik)^* H(k) + H(k) P(ik) \right] v(t) \le 2\alpha\, v(t)^* H(k) v(t),
which implies the differential inequality
\frac{d}{dt}\left[ e^{-2\alpha t}\, v(t)^* H(k) v(t) \right] \le 0, \qquad t \ge 0, \quad k \in \mathbb{R}^n. \qquad (3.26)
Integrating, we find
M^{-1} |v(t)|^2 \le v(t)^* H(k) v(t) \le e^{2\alpha t}\, v_0^* H(k) v_0 \le M e^{2\alpha t} |v_0|^2, \qquad (3.27)
which implies the inequality (3.24) with K = M.
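The energy-estimate argument can be exercised numerically (a sketch, with an arbitrarily chosen non-normal matrix of ours standing in for P(ik)): building H from the eigenvectors of P yields the inequality (3.25) with α the largest real part of the spectrum, and the resulting bound |e^{Pt}| ≤ Me^{αt} can then be checked directly.

```python
import numpy as np

def expm(M, terms=25):
    # scaling-and-squaring of a truncated Taylor series
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 2), 1.0)))) + 1)
    A = M / (2 ** s)
    E, term = np.eye(M.shape[0], dtype=complex), np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

P = np.array([[-1.0, 10.0], [0.0, -2.0]])       # non-normal sample matrix
eigval, S = np.linalg.eig(P)
alpha = max(eigval.real)
Sinv = np.linalg.inv(S)
H = Sinv.conj().T @ Sinv                        # Hermitian, positive definite

# condition (3.25): H P + P^* H - 2 alpha H must be negative semi-definite
assert max(np.linalg.eigvalsh(H @ P + P.conj().T @ H - 2 * alpha * H)) < 1e-8

# M^{-1} I <= H <= M I, then |e^{Pt}| <= M e^{alpha t}
hvals = np.linalg.eigvalsh(H)
M = max(hvals[-1], 1.0 / hvals[0])
for t in [0.5, 1.0, 5.0]:
    assert np.linalg.norm(expm(P * t), 2) <= M * np.exp(alpha * t)
print("energy estimate |e^{Pt}| <= M e^{alpha t} holds with M =", round(M, 2))
```

Note how the large off-diagonal entry of P forces M to be large: the bound controls the growth uniformly, but not sharply.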

3.1.4 First-order systems

Many systems in physics, like Maxwell's equations, the Dirac equation, and certain formulations of Einstein's equations, are described by first-order partial differential equations (PDEs). In fact, even systems given by a higher-order PDE can be reduced to first order at the cost of introducing new variables, and possibly also new constraints. Therefore, let us specialize the above results to a first-order linear problem of the form

u_t = P(\partial/\partial x)u \equiv \sum_{j=1}^n A^j \frac{\partial}{\partial x^j} u + Bu, \qquad x \in \mathbb{R}^n, \quad t \ge 0, \qquad (3.28)
where A¹, ..., Aⁿ, B are complex m × m matrices. We split P(ik) = P₀(ik) + B into its principal symbol, P₀(ik) = i Σ_{j=1}^n k_j A^j, and the lower-order term B. The principal part is the one that dominates for large |k| and hence the one that turns out to be important for well-posedness. Notice that P₀(ik) depends linearly on k. With these observations in mind we note:

These observations motivate the following three notions of hyperbolicity, each of them being a stronger condition than the previous one:

Definition 2. The first-order system (3.28View Equation) is called

  1. weakly hyperbolic, if all the eigenvalues of its principal symbol P0(ik) are purely imaginary.
  2. strongly hyperbolic, if there exists a constant M > 0 and a family of Hermitian m × m matrices H(k), k ∈ S^{n−1}, satisfying
     M^{-1} I \le H(k) \le M I, \qquad H(k) P_0(ik) + P_0(ik)^* H(k) = 0, \qquad (3.31)
     for all k ∈ S^{n−1}, where S^{n−1} := {k ∈ ℝⁿ : |k| = 1} denotes the unit sphere.
  3. symmetric hyperbolic, if there exists a Hermitian, positive definite m × m matrix H (which is independent of k) such that
     H P_0(ik) + P_0(ik)^* H = 0, \qquad (3.32)
    for all k ∈ Sn−1.

The matrix theorem implies the following statements:

Example 10. Consider the weakly hyperbolic system [259]

u_t = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} u_x + a \begin{pmatrix} -1 & +1 \\ -1 & -1 \end{pmatrix} u, \qquad (3.35)
with a ∈ ℝ a parameter. The principal symbol is P_0(ik) = ik \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, and
e^{P_0(ik)t} = e^{ikt} \begin{pmatrix} 1 & ikt \\ 0 & 1 \end{pmatrix}. \qquad (3.36)
Using the tools described in Section 2 we find for the norm
|e^{P_0(ik)t}| = \sqrt{\, 1 + \frac{k^2 t^2}{2} + \sqrt{\left( 1 + \frac{k^2 t^2}{2} \right)^2 - 1}\, }, \qquad (3.37)
which is approximately equal to |k|t for large |k|t. Hence, the solutions to Eq. (3.35) contain modes that grow linearly in |k|t when a = 0, i.e., when there are no lower-order terms.

However, when a ≠ 0, the eigenvalues of P(ik) are

\lambda_\pm = ik - a \pm i\sqrt{a(a + ik)}, \qquad (3.38)
which, for large k, have real parts Re(λ±) ≈ ±√(|a||k|/2). The eigenvalue with positive real part gives rise to solutions which, for fixed t, grow exponentially in |k|.
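Both growth rates in Example 10 can be checked numerically (a sketch; the expm helper and the sample value a = 1 are ours): without lower-order terms the propagator norm grows linearly in |k|t, and with them the maximal real part of the spectrum grows like √(|a||k|/2).

```python
import numpy as np

def expm(M, terms=25):
    # scaling-and-squaring of a truncated Taylor series
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 2), 1.0)))) + 1)
    A = M / (2 ** s)
    E, term = np.eye(M.shape[0], dtype=complex), np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

a, t = 1.0, 1.0
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = a * np.array([[-1.0, 1.0], [-1.0, -1.0]])
for k in [10.0, 100.0, 1000.0]:
    # |e^{P0(ik)t}| ~ |k| t : linear frequency growth from weak hyperbolicity
    ratio_growth = np.linalg.norm(expm(1j * k * t * A), 2) / (k * t)
    # with the lower-order term, max Re(lambda) ~ sqrt(|a||k|/2)
    lam = np.linalg.eigvals(1j * k * A + B)
    ratio_re = max(lam.real) / np.sqrt(abs(a) * k / 2)
    print(f"k={k}: growth/(kt)={ratio_growth:.3f}, "
          f"maxRe/sqrt(ak/2)={ratio_re:.3f}")
```

Both ratios approach 1 as |k| grows, matching the asymptotics (3.37) and (3.38).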

Example 11. For the system [353],

u_t = A^1 u_x + A^2 u_y, \qquad A^1 = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}, \qquad A^2 = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad (3.39)
the principal symbol, P_0(ik) = i \begin{pmatrix} k_1 + k_2 & k_1 \\ 0 & 2(k_1 + k_2) \end{pmatrix}, is diagonalizable for all vectors k = (k₁,k₂) ∈ S¹ except for those with k₁ + k₂ = 0. In particular, P₀(ik) is diagonalizable for k = (1,0) and k = (0,1). This shows that, in general, it is not sufficient to check that the n matrices A¹, A², ..., Aⁿ alone are diagonalizable and have real eigenvalues; one has to consider all possible linear combinations Σ_{j=1}^n A^j k_j with k ∈ S^{n−1}.
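A small numeric check of this point (a sketch; the diagonalizability test below uses the fact that a 2×2 matrix with a repeated eigenvalue is diagonalizable only if it is a multiple of the identity):

```python
import numpy as np

A1 = np.array([[1.0, 1.0], [0.0, 2.0]])
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])

def diagonalizable_2x2(M, tol=1e-10):
    lam = np.linalg.eigvals(M)
    if abs(lam[0] - lam[1]) > tol:          # distinct eigenvalues
        return True
    return np.allclose(M, lam[0] * np.eye(2), atol=tol)

for k1, k2 in [(1.0, 0.0), (0.0, 1.0), (1.0, -1.0)]:
    print((k1, k2), diagonalizable_2x2(k1 * A1 + k2 * A2))
```

The directions (1,0) and (0,1), i.e., the matrices A¹ and A² themselves, pass the test, while the combination with k₁ + k₂ = 0 is a nonzero nilpotent matrix and fails.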

Example 12. Next, we present a system for which the eigenvectors of the principal symbol cannot be chosen to be continuous functions of k:

u_t = A^1 u_x + A^2 u_y + A^3 u_z, \qquad A^1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad A^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad A^3 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. \qquad (3.40)
The principal symbol P_0(ik) = i \begin{pmatrix} k_1 & k_2 \\ k_2 & -k_1 \end{pmatrix} has eigenvalues λ±(k) = ±i√(k₁² + k₂²), and for (k₁,k₂) ≠ (0,0) the corresponding eigenprojectors are
P_\pm(k_1,k_2) = \frac{1}{2\lambda_\pm(k)} \begin{pmatrix} \lambda_\pm(k) + ik_1 & ik_2 \\ ik_2 & \lambda_\pm(k) - ik_1 \end{pmatrix}. \qquad (3.41)
When (k₁,k₂) → (0,0) the two eigenvalues fall together, and A(k) converges to the zero matrix. However, it is not possible to continuously extend P±(k₁,k₂) to (k₁,k₂) = (0,0). For example,
P_+(h,0) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad P_+(-h,0) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad (3.42)
for h > 0. Therefore, any choice for the matrix S(k), which diagonalizes A(k), must be discontinuous at k = (0,0,±1), since the columns of S(k) are the eigenvectors of A(k).

Of course, A (k) is symmetric and so S (k ) can be chosen to be unitary, which yields the trivial symmetrizer H (k) = I. Therefore, the system is symmetric hyperbolic and yields a well-posed Cauchy problem; however, this example shows that it is not always possible to choose S(k ) as a continuous function of k.

Example 13. Consider the Klein–Gordon equation

\Phi_{tt} = \Delta\Phi - m^2\Phi, \qquad (3.43)
in two spatial dimensions, where m ∈ ℝ is a parameter proportional to the mass of the field Φ. Introducing the variables u = (Φ, Φ_t, Φ_x, Φ_y) we obtain the first-order system
u_t = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} u_x + \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} u_y + \begin{pmatrix} 0 & 1 & 0 & 0 \\ -m^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} u. \qquad (3.44)
The matrix coefficients in front of u_x and u_y are symmetric; hence, the system is symmetric hyperbolic with trivial symmetrizer H = diag(m², 1, 1, 1). The corresponding Cauchy problem is well posed. However, a problem with this first-order system is that it is only equivalent to the original, second-order equation (3.43) if the constraints (u₁)_x = u₃ and (u₁)_y = u₄ are satisfied.

An alternative symmetric hyperbolic first-order reduction of the Klein–Gordon equation, which does not require the introduction of constraints, is the Dirac equation in two spatial dimensions,

v_t = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} v_x + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} v_y + m \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} v, \qquad v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}. \qquad (3.45)
This system implies the Klein–Gordon equation (3.43) for either of the two components of v.
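The claim can be verified at the level of symbols (a sketch): squaring the full symbol of the system (3.45) reproduces the Fourier-space Klein–Gordon operator, because the three coefficient matrices anticommute pairwise.

```python
import numpy as np

# Full symbol of the Dirac system (3.45): P(ik) = i k1 A1 + i k2 A2 + m C.
# Then v_tt = P(ik)^2 v, and P(ik)^2 = -(|k|^2 + m^2) I is exactly the
# Fourier transform of the Klein-Gordon equation (3.43).
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[0.0, 1.0], [-1.0, 0.0]])
m = 0.5
for k1, k2 in [(1.0, 0.0), (0.3, -1.2), (2.0, 2.0)]:
    P = 1j * k1 * A1 + 1j * k2 * A2 + m * C
    assert np.allclose(P @ P, -(k1**2 + k2**2 + m**2) * np.eye(2))
print("P(ik)^2 = -(|k|^2 + m^2) I for all tested k")
```

No extra variables or constraints are introduced: the two components of v each satisfy the scalar second-order equation.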

Yet another way of reducing second-order equations to first-order ones without introducing constraints will be discussed in Section 3.1.5.

Example 14. In terms of the electric and magnetic fields u = (E, B ), Maxwell’s evolution equations,

E_t = +\nabla \wedge B - J, \qquad (3.46)
B_t = -\nabla \wedge E, \qquad (3.47)
constitute a symmetric hyperbolic system. Here, J is the current density and ∇ and ∧ denote the nabla operator and the vector product, respectively. The principal symbol is
P_0(ik) \begin{pmatrix} E \\ B \end{pmatrix} = i \begin{pmatrix} +k \wedge B \\ -k \wedge E \end{pmatrix} \qquad (3.48)
and a symmetrizer is given by the physical energy density,
u^* H u = \frac{1}{2}\left( |E|^2 + |B|^2 \right), \qquad (3.49)
in other words, H = 2^{−1} I is trivial. The constraints ∇·E = ρ and ∇·B = 0 propagate as a consequence of Eqs. (3.46, 3.47), provided that the continuity equation holds: (∇·E − ρ)_t = −∇·J − ρ_t = 0, (∇·B)_t = 0.
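Symmetric hyperbolicity of the symbol (3.48) can be checked directly (a sketch; the 6×6 block construction and the sample wave vector are ours): P₀(ik) is anti-Hermitian, so e^{P₀(ik)t} is unitary and the trivial symmetrizer H = I works (equivalently H = 2^{−1}I as above, since a constant rescaling does not matter).

```python
import numpy as np

def cross_matrix(k):
    # matrix of the linear map v -> k ^ v
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

k = np.array([0.3, -1.0, 2.0])                 # arbitrary wave vector
K = cross_matrix(k)
Z = np.zeros((3, 3))
P0 = 1j * np.block([[Z, K], [-K, Z]])          # symbol (3.48) acting on u = (E, B)

# anti-Hermitian symbol => unitary propagator, norm exactly 1
assert np.allclose(P0 + P0.conj().T, 0)
w, U = np.linalg.eigh(-1j * P0)                # -i P0 is Hermitian
t = 3.0
propagator = U @ np.diag(np.exp(1j * w * t)) @ U.conj().T
print("|e^{P0(ik)t}| =", np.linalg.norm(propagator, 2))
```

The anti-symmetry of the cross-product matrix K is what makes the 6×6 symbol anti-Hermitian.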

Example 15. There are many alternative ways to write Maxwell's equations. The following system [353, 287] was originally motivated by an analogy with certain parametrized first-order hyperbolic formulations of the Einstein equations, and provides an example of a system that can be symmetric, strongly, weakly or not hyperbolic at all, depending on the parameter values. Using the Einstein summation convention, the evolution system in vacuum has the form

\partial_t E_i = \partial^j (W_{ij} - W_{ji}) - \alpha\left( \partial_i W^j{}_j - \partial^j W_{ij} \right), \qquad (3.50)
\partial_t W_{ij} = -\partial_i E_j - \frac{\beta}{2}\delta_{ij}\,\partial^k E_k, \qquad (3.51)
where E_i and W_ij = ∂_i A_j, i = 1, 2, 3, represent the Cartesian components of the electric field and the gradient of the magnetic potential A_j, respectively, and where the real parameters α and β determine the dynamics of the constraint hypersurface defined by ∂^k E_k = 0 and ∂_k W_ij − ∂_i W_kj = 0.

In order to analyze under which conditions on α and β the system (3.50, 3.51) is strongly hyperbolic we consider the corresponding symbol,

P_0(ik)u = i \begin{pmatrix} (1+\alpha)k^j W_{ij} - k^j W_{ji} - \alpha k_i W^j{}_j \\ -k_i E_j - \frac{\beta}{2}\delta_{ij} k^l E_l \end{pmatrix}, \qquad u = \begin{pmatrix} E_i \\ W_{ij} \end{pmatrix}, \qquad k \in S^2. \qquad (3.52)
Decomposing E_i and W_ij into components parallel and orthogonal to k_i,
E_i = \bar{E} k_i + \bar{E}_i, \qquad W_{ij} = \bar{W} k_i k_j + \bar{W}_i k_j + k_i \bar{V}_j + \bar{W}_{ij} + \frac{1}{2}\gamma_{ij}\bar{U}, \qquad (3.53)
where, in terms of the projector γ_ij := δ_ij − k_i k_j orthogonal to k, we have defined Ē := k^l E_l, Ē_i := γ_i{}^j E_j, W̄ := k^i k^j W_ij, W̄_i := γ_i{}^k W_kj k^j, V̄_j := k^i W_ik γ^k{}_j, Ū := γ^{ij} W_ij, and W̄_ij := (γ_i{}^k γ_j{}^l − 2^{−1} γ_ij γ^{kl}) W_kl, we can write the eigenvalue problem P₀(ik)u = iλu as
\lambda\bar{E} = -\alpha\bar{U}, \qquad \lambda\bar{U} = -\beta\bar{E}, \qquad \lambda\bar{W} = -\left(1 + \frac{\beta}{2}\right)\bar{E},
\lambda\bar{E}_i = (1+\alpha)\bar{W}_i - \bar{V}_i, \qquad \lambda\bar{V}_i = -\bar{E}_i, \qquad \lambda\bar{W}_i = 0, \qquad \lambda\bar{W}_{ij} = 0.
It follows that P₀(ik) is diagonalizable with purely imaginary eigenvalues if and only if αβ > 0. However, in order to show that in this case the system is strongly hyperbolic one still needs to construct a bounded symmetrizer H(k). For this, we set μ := √(αβ) and diagonalize P₀(ik) = iS(k)Λ(k)S(k)^{−1} with Λ(k) = diag(μ, −μ, 0, 1, −1, 0, 0) and
S(k)^{-1} u = \begin{pmatrix} \bar{E} - \frac{\mu}{\beta}\bar{U} \\ \bar{E} + \frac{\mu}{\beta}\bar{U} \\ \beta\bar{W} - \left(1 + \frac{\beta}{2}\right)\bar{U} \\ \bar{E}_i - \bar{V}_i + (1+\alpha)\bar{W}_i \\ \bar{E}_i + \bar{V}_i - (1+\alpha)\bar{W}_i \\ \bar{W}_i \\ \bar{W}_{ij} \end{pmatrix}. \qquad (3.54)
Then, the quadratic form associated with the symmetrizer is
u^* H(k) u = u^* (S(k)^{-1})^* S(k)^{-1} u
 = 2|\bar{E}|^2 + \frac{2\alpha}{\beta}|\bar{U}|^2 + \left| \beta\bar{W} - \left(1 + \frac{\beta}{2}\right)\bar{U} \right|^2 + 2\bar{E}^i\bar{E}_i
 + 2\left[ \bar{V}_i - (1+\alpha)\bar{W}_i \right]\left[ \bar{V}^i - (1+\alpha)\bar{W}^i \right] + \bar{W}^i\bar{W}_i + \bar{W}^{ij}\bar{W}_{ij},
and H(k) is smooth in k ∈ S². Therefore, the system is indeed strongly hyperbolic for αβ > 0.

In order to analyze under which conditions the system is symmetric hyperbolic we notice that because of rotational and parity invariance the most general k-independent symmetrizer must have the form

u^* H u = a\, (E^i)^* E_i + b\, (W^{[ij]})^* W_{[ij]} + c\, (\hat{W}^{ij})^* \hat{W}_{ij} + d\, W^* W, \qquad (3.55)
with strictly positive constants a, b, c and d, where Ŵ_ij := W_(ij) − δ_ij W/3 denotes the symmetric, trace-free part of W_ij and W := W^j{}_j its trace. Then,
u^* H P_0(ik) u = ia\, (E^i)^* \left[ (\alpha+2) k^j W_{[ij]} + \alpha k^j \hat{W}_{ij} - \frac{2\alpha}{3} k_i W \right]
 + ib\, (W^{[ij]})^* E_i k_j - ic\, (\hat{W}^{ij})^* E_i k_j - id\left(1 + \frac{3\beta}{2}\right) W^* k^i E_i.
For H to be a symmetrizer, the expression on the right-hand side must be purely imaginary. This is the case if and only if a(α + 2) = b, −aα = c and 2aα/3 = d(1 + 3β/2). Since a, b, c and d are positive, these equalities can be satisfied if and only if −2 < α < 0 and β < −2/3. Therefore, if either α and β are both positive, or α and β are both negative with α ≤ −2 or β ≥ −2/3, then the system (3.50, 3.51) is strongly but not symmetric hyperbolic.

3.1.5 Second-order systems

An important class of systems in physics are wave problems. In the linear, constant coefficient case, they are described by an equation of the form

v_{tt} = \sum_{j,k=1}^n A^{jk} \frac{\partial^2}{\partial x^j \partial x^k} v + 2\sum_{j=1}^n B^j \frac{\partial}{\partial x^j} v_t + \sum_{j=1}^n C^j \frac{\partial}{\partial x^j} v + D v_t + E v, \qquad x \in \mathbb{R}^n, \quad t \ge 0, \qquad (3.56)
where v = v(t,x) ∈ ℂ^m is the state vector, and A^{jk} = A^{kj}, B^j, C^j, D, E denote complex m × m matrices. In order to apply the theory described so far, we reduce this equation to a system that is first order in time. This is achieved by introducing the new variable w := v_t − Σ_{j=1}^n B^j ∂v/∂x^j. With this redefinition one obtains a system of the form (3.1) with u = (v, w)^T and
P(\partial/\partial x) = \sum_{j=1}^n B^j \frac{\partial}{\partial x^j} + \begin{pmatrix} 0 & I \\ \sum_{j,k=1}^n (A^{jk} + B^j B^k) \frac{\partial^2}{\partial x^j \partial x^k} + \sum_{j=1}^n (C^j + D B^j) \frac{\partial}{\partial x^j} + E & D \end{pmatrix}. \qquad (3.57)
Now we could apply the matrix theorem, Theorem 1, to the corresponding symbol P(ik) and analyze under which conditions on the matrix coefficients A^{jk}, B^j, C^j, D, E the Cauchy problem is well posed. However, since our problem originates from a second-order equation, it is convenient to rewrite the symbol in a slightly different way: instead of taking the Fourier transform of v and w directly, we multiply v̂ by |k| and write the symbol in terms of the variable Û := (|k|v̂, ŵ)^T. Then, the L²-norm of Û controls, through Parseval's identity, the L²-norms of the first partial derivatives of v, as is the case for the usual energies for second-order systems. In terms of Û the system reads
\hat{U}_t = Q(ik)\hat{U}, \qquad t \ge 0, \quad k \in \mathbb{R}^n, \qquad (3.58)
in Fourier space, where
Q(ik) = i|k| \sum_{j=1}^n B^j \hat{k}_j + \begin{pmatrix} 0 & |k| I \\ -|k| \sum_{j,k=1}^n (A^{jk} + B^j B^k)\hat{k}_j \hat{k}_k + i \sum_{j=1}^n (C^j + D B^j)\hat{k}_j + \frac{1}{|k|} E & D \end{pmatrix} \qquad (3.59)
with k̂_j := k_j/|k|. As for first-order systems, we can split Q(ik) into its principal part,
Q_0(ik) := i|k| \sum_{j=1}^n B^j \hat{k}_j + |k| \begin{pmatrix} 0 & I \\ -\sum_{j,k=1}^n (A^{jk} + B^j B^k)\hat{k}_j \hat{k}_k & 0 \end{pmatrix}, \qquad (3.60)
which dominates for |k| → ∞, and the remaining, lower-order terms. Because of the homogeneity of Q₀(ik) in k we can restrict ourselves to values of k ∈ S^{n−1} on the unit sphere, as for first-order systems. Then, it follows as a consequence of the matrix theorem that the problem is well posed if and only if there exists a symmetrizer H(k) and a constant M > 0 satisfying
M^{-1} I \le H(k) \le M I, \qquad H(k) Q_0(ik) + Q_0(ik)^* H(k) = 0 \qquad (3.61)
for all such k. Necessary and sufficient conditions under which such a symmetrizer exists have been given in [261] for the particular case in which the mixed second-order derivative term in Eq. (3.56) vanishes; that is, when B^j = 0. This result can be generalized in a straightforward manner to the case where the matrices B^j = β^j I are proportional to the identity:

Theorem 2. Suppose B^j = β^j I, j = 1, 2, ..., n. (Note that this condition is trivially satisfied if m = 1.) Then, the Cauchy problem for Eq. (3.56) is well posed if and only if the symbol

R(k) := \sum_{i,j=1}^n (A^{ij} + B^i B^j)\, k_i k_j, \qquad k \in S^{n-1}, \qquad (3.62)
has the following properties: there exist constants M > 0 and δ > 0 and a family h(k ) of Hermitian m × m matrices such that
M^{-1} I \le h(k) \le M I, \qquad h(k) R(k) = R(k)^* h(k) \ge \delta I \qquad (3.63)
for all k ∈ S^{n−1}.

Proof. Since for B^j = β^j I the advection term i|k| Σ_{j=1}^n B^j k̂_j commutes with any Hermitian matrix H(k), it is sufficient to prove the theorem for B^j = 0, in which case the principal symbol reduces to

Q_0(ik) := \begin{pmatrix} 0 & I \\ -R(k) & 0 \end{pmatrix}, \qquad k \in S^{n-1}. \qquad (3.64)
We write the symmetrizer H (k ) in the following block form,
H(k) = \begin{pmatrix} H_{11}(k) & H_{12}(k) \\ H_{12}(k)^* & H_{22}(k) \end{pmatrix}, \qquad (3.65)
where H11(k), H22(k ) and H12(k) are complex m × m matrices, the first two being Hermitian. Then,
H(k) Q_0(ik) + Q_0(ik)^* H(k) = \begin{pmatrix} -H_{12}(k) R(k) - R(k)^* H_{12}(k)^* & H_{11}(k) - R(k)^* H_{22}(k) \\ H_{11}(k) - H_{22}(k) R(k) & H_{12}(k) + H_{12}(k)^* \end{pmatrix}. \qquad (3.66)
Now, suppose h(k) satisfies the conditions (3.63). Then, choosing H₁₂(k) := 0, H₂₂(k) := h(k) and H₁₁(k) := h(k)R(k), we find that H(k)Q₀(ik) + Q₀(ik)^*H(k) = 0. Furthermore, M^{−1}I ≤ H₂₂(k) ≤ MI and δI ≤ H₁₁(k) = h(k)R(k) ≤ MCI, where
C := \sup\{\, |R(k)u| : k \in S^{n-1},\ u \in \mathbb{C}^m,\ |u| = 1 \,\} \qquad (3.67)
is finite because R (k)u is continuous in k and u. Therefore, H (k ) is a symmetrizer for Q0(ik), and the problem is well posed.

Conversely, suppose that the problem is well posed with symmetrizer H(k). Then, the vanishing of H(k)Q₀(ik) + Q₀(ik)^*H(k) yields the conditions H₁₁(k) = H₂₂(k)R(k) = R(k)^*H₂₂(k), and the conditions (3.63) are satisfied for h(k) := H₂₂(k). □

Remark: The conditions (3.63) imply that R(k) is symmetric and positive with respect to the scalar product defined by h(k). Hence, it is diagonalizable, and all its eigenvalues are positive. A practical way of finding h(k) is to construct T(k), which diagonalizes R(k), T(k)^{−1}R(k)T(k) = P(k), with P(k) diagonal and positive. Then, h(k) := (T(k)^{−1})^*T(k)^{−1} is the candidate for satisfying the conditions (3.63).
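The recipe in the remark can be tried on a small example (a sketch; the sample matrix R is an arbitrary non-symmetric matrix of ours with positive eigenvalues, standing in for R(k) at one fixed k):

```python
import numpy as np

R = np.array([[3.0, 1.0], [0.0, 1.0]])    # eigenvalues 3 and 1, not symmetric
lam, T = np.linalg.eig(R)                 # T^{-1} R T is diagonal and positive
Tinv = np.linalg.inv(T)
h = Tinv.conj().T @ Tinv                  # candidate h = (T^{-1})^* T^{-1}

# conditions (3.63): h Hermitian positive definite, h R Hermitian and >= delta I
hR = h @ R
assert np.allclose(h, h.conj().T) and np.all(np.linalg.eigvalsh(h) > 0)
assert np.allclose(hR, hR.conj().T)
print("smallest eigenvalue of h R:", np.linalg.eigvalsh(hR)[0])
```

Indeed hR = (T^{−1})^* P T^{−1} is Hermitian and positive definite by construction, so δ can be taken as its smallest eigenvalue.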

Let us give some examples and applications:

Example 16. The Klein–Gordon equation v_tt = Δv − m²v on flat spacetime. In this case, A^{ij} = δ^{ij} and B^j = 0, and R(k) = |k|² trivially satisfies the conditions of Theorem 2.

Example 17. In anticipation of the following Section 3.2, where linear problems with variable coefficients are treated, let us generalize the previous example to a curved spacetime (M, g). We assume that (M, g) is globally hyperbolic, such that it can be foliated by space-like hypersurfaces Σ_t. In the ADM decomposition, the metric in adapted coordinates assumes the form

g = -\alpha^2\, dt \otimes dt + \gamma_{ij}(dx^i + \beta^i dt) \otimes (dx^j + \beta^j dt), \qquad (3.68)
with α > 0 the lapse, β^i the shift vector, which is tangent to Σ_t, and γ_ij dx^i ⊗ dx^j the induced three-metric on the spacelike hypersurfaces Σ_t. The inverse of the metric is given by
g^{-1} = -\frac{1}{\alpha^2}\left( \frac{\partial}{\partial t} - \beta^i \frac{\partial}{\partial x^i} \right) \otimes \left( \frac{\partial}{\partial t} - \beta^j \frac{\partial}{\partial x^j} \right) + \gamma^{ij}\, \frac{\partial}{\partial x^i} \otimes \frac{\partial}{\partial x^j}, \qquad (3.69)
where γ^{ij} are the components of the inverse three-metric. The Klein–Gordon equation on (M, g) is
g^{\mu\nu}\nabla_\mu\nabla_\nu v = \frac{1}{\sqrt{-\det(g)}}\,\partial_\mu\left( \sqrt{-\det(g)}\, g^{\mu\nu}\, \partial_\nu v \right) = m^2 v, \qquad (3.70)
which, in the constant coefficient case, has the form of Eq. (3.56) with
A^{jk} = \alpha^2\gamma^{jk} - \beta^j\beta^k, \qquad B^j = \beta^j. \qquad (3.71)
Hence, R(k) = α²γ^{ij}k_ik_j, and the conditions of Theorem 2 are satisfied with h(k) = 1, since α > 0 and γ^{ij} is symmetric positive definite.