3.3 Quasilinear equations

Next, we generalize the theory one more step and consider evolution systems described by quasilinear partial differential equations, that is, by nonlinear partial differential equations that are linear in their highest-order derivatives. This already covers most of the interesting physical systems, including the Yang–Mills and the Einstein equations. Restricting ourselves to the first-order case, such equations have the form
u_t = \sum_{j=1}^{n} A^j(t,x,u)\,\frac{\partial}{\partial x^j}\, u + F(t,x,u), \qquad 0 \leq t \leq T, \quad x \in \mathbb{R}^n, \qquad (3.115)
where the coefficients of the complex m × m matrices A^1(t,x,u), …, A^n(t,x,u) and the nonlinear source term F(t,x,u) ∈ ℂ^m belong to the class C_b^∞([0,T] × ℝ^n × ℂ^m) of bounded C^∞-functions with bounded derivatives. Compared to the linear case, there are two new features the solutions may exhibit: the formation of shocks, where the gradient of the solution becomes unbounded in finite time even though the initial data are smooth, and blow-up, where the solution itself grows without bound in finite time.

For these reasons, one cannot, in general, expect global existence of smooth solutions from smooth initial data with compact support; the best one can hope for is existence of a smooth solution on some finite time interval [0,T], where T might depend on the initial data.
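As a standard illustration of shock formation (a worked example added here for concreteness; it is not part of the general theory above), consider the inviscid Burgers equation, which is of the form (3.115) with n = m = 1, A^1(t,x,u) = u and F = 0:

u_t = u\,\frac{\partial u}{\partial x}, \qquad u(0,x) = f(x).

The solution is constant along the characteristic curves x(t) = x_0 − f(x_0)\,t, so that u(t,x(t)) = f(x_0) and

\frac{\partial u}{\partial x}\bigl(t, x(t)\bigr) = \frac{f'(x_0)}{1 - f'(x_0)\, t}.

Hence, if f' is positive somewhere, the gradient becomes unbounded no later than T_* = 1/\max_{x_0} f'(x_0), even though f is smooth and compactly supported.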

Under such restrictions, it is possible to prove well-posedness of the Cauchy problem. The idea is to linearize the problem and to apply Banach’s fixed-point theorem. This is discussed next.

3.3.1 The principle of linearization

Suppose u^{(0)}(t,x) is a C^∞ (reference) solution of Eq. (3.115), corresponding to the initial data f(x) = u^{(0)}(0,x). Assuming this solution to be uniquely determined by the initial data f, we may ask whether a unique solution u also exists for the perturbed problem

u_t(t,x) = \sum_{j=1}^{n} A^j(t,x,u)\,\frac{\partial}{\partial x^j}\, u(t,x) + F(t,x,u) + \delta F(t,x), \qquad x \in \mathbb{R}^n, \quad 0 \leq t \leq T, \qquad (3.116)

u(0,x) = f(x) + \delta f(x), \qquad x \in \mathbb{R}^n, \qquad (3.117)
where the perturbations δF(t,x) and δf(x) belong to the class of bounded C^∞-functions with bounded derivatives. This leads to the following definition:

Definition 5. Consider the nonlinear Cauchy problem given by Eq. (3.115) and prescribed initial data for u at t = 0. Let u^{(0)} be a C^∞-solution to this problem, which is uniquely determined by its initial data f. Then, the problem is called well posed at u^{(0)} if there exist normed vector spaces X, Y, and Z and constants K > 0, 𝜀 > 0 such that for all sufficiently smooth perturbations δf and δF lying in Y and Z, respectively, with

\|\delta f\|_Y + \|\delta F\|_Z < \varepsilon, \qquad (3.118)
the perturbed problem (3.116, 3.117) is also uniquely solvable and the corresponding solution u satisfies u − u^{(0)} ∈ X and the estimate

\|u - u^{(0)}\|_X \leq K \left( \|\delta f\|_Y + \|\delta F\|_Z \right). \qquad (3.119)

Here, the norms ∥·∥_X and ∥·∥_Y appearing on the two sides of Eq. (3.119) are different from each other, because ∥u − u^{(0)}∥_X controls the function u − u^{(0)} over the spacetime region [0,T] × ℝ^n, while ∥δf∥_Y only controls the function δf on ℝ^n.
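To make this concrete, a typical choice of spaces in the quasilinear symmetric hyperbolic setting (an illustration added here; the definition itself does not single out particular norms) is based on Sobolev spaces,

X = C\bigl([0,T], H^s(\mathbb{R}^n)\bigr), \qquad Y = H^s(\mathbb{R}^n), \qquad Z = C\bigl([0,T], H^s(\mathbb{R}^n)\bigr), \qquad s > \frac{n}{2} + 1,

so that \|u - u^{(0)}\|_X = \sup_{0 \leq t \leq T} \|u(t,\cdot) - u^{(0)}(t,\cdot)\|_{H^s}. The condition s > n/2 + 1 guarantees, by Sobolev embedding, pointwise control of the solution and its first derivatives.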

If the problem is well posed at u^{(0)}, we may consider a one-parameter curve f_𝜀 of initial data lying in C_0^∞(ℝ^n) that goes through f, and assume that for each small enough |𝜀| there is a corresponding solution u_𝜀(t,x), which lies close to u^{(0)} in the sense of inequality (3.119). Expanding

u_\varepsilon(t,x) = u^{(0)}(t,x) + \varepsilon\, v^{(1)}(t,x) + \varepsilon^2\, v^{(2)}(t,x) + \ldots \qquad (3.120)
and plugging this into Eq. (3.115) we find, to first order in 𝜀,
v^{(1)}_t = \sum_{j=1}^{n} A_0^j(t,x)\,\frac{\partial}{\partial x^j}\, v^{(1)} + B_0(t,x)\, v^{(1)}, \qquad (3.121)

A_0^j(t,x) = A^j\bigl(t,x,u^{(0)}(t,x)\bigr), \qquad B_0(t,x) = \sum_{j=1}^{n} \frac{\partial A^j}{\partial u}\bigl(t,x,u^{(0)}(t,x)\bigr)\,\frac{\partial u^{(0)}}{\partial x^j}(t,x) + \frac{\partial F}{\partial u}\bigl(t,x,u^{(0)}(t,x)\bigr). \qquad (3.122)
Eq. (3.121) is a first-order linear equation with variable coefficients for the first variation v^{(1)}, to which we can apply the theory described in Section 3.2. Therefore, it is reasonable to require that the linearized problem be strongly hyperbolic for any smooth function u^{(0)}(t,x). In particular, if we generalize the definitions of strong and symmetric hyperbolicity given in Definition 4 to the quasilinear case by requiring that the symmetrizer H(t,x,k,u) have coefficients in C_b^∞(Ω × S^{n−1} × ℂ^m), it follows that the linearized problem is well posed provided that the quasilinear problem is strongly or symmetric hyperbolic.
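As a quick check of Eqs. (3.121, 3.122) in the Burgers example introduced above (our illustration, with n = m = 1, A^1(t,x,u) = u and F = 0), linearizing about a smooth reference solution u^{(0)} gives

v^{(1)}_t = u^{(0)}(t,x)\,\frac{\partial v^{(1)}}{\partial x} + \frac{\partial u^{(0)}}{\partial x}(t,x)\, v^{(1)},

that is, A_0^1 = u^{(0)} and B_0 = ∂u^{(0)}/∂x: a linear advection equation with variable coefficients, which is trivially symmetric hyperbolic since m = 1.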

The linearization principle states that the converse is also true: the nonlinear problem is well posed at u^{(0)} if all the linear problems obtained by linearizing Eq. (3.115) at functions in a suitable neighborhood of u^{(0)} are well posed. To prove that this principle holds, one sets up the following iteration: the sequence u^{(k)} of functions is defined by iteratively solving the linear problems

u^{(k+1)}_t = \sum_{j=1}^{n} A^j\bigl(t,x,u^{(k)}\bigr)\,\frac{\partial}{\partial x^j}\, u^{(k+1)} + F\bigl(t,x,u^{(k)}\bigr) + \delta F(t,x), \qquad x \in \mathbb{R}^n, \quad 0 \leq t \leq T, \qquad (3.123)

u^{(k+1)}(0,x) = f(x) + \delta f(x), \qquad x \in \mathbb{R}^n, \qquad (3.124)
for k = 0, 1, 2, …, starting with the reference solution u^{(0)}. If the linearized problems are well posed in the sense of Definition 3 for functions lying in a neighborhood of u^{(0)}, one can solve each Cauchy problem (3.123, 3.124), at least for a small enough time T_k. The key point, then, is to prove that T_k does not shrink to zero as k → ∞, and to show that the sequence u^{(k)} of functions converges to a solution of the perturbed problem (3.116, 3.117). This is, of course, a nontrivial task, which requires controlling u^{(k)} and its derivatives in an appropriate way. For particular examples where this program is carried through, see [259]. For general results on quasilinear symmetric hyperbolic systems, see [251, 164, 412, 51].
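The structure of this iteration can be made concrete with a small numerical sketch (our own illustration, not part of the analytic argument above): for the inviscid Burgers equation used as an example earlier, each iterate u^{(k+1)} solves a linear advection equation whose coefficient is the previous iterate u^{(k)}, here discretized with a simple Lax–Friedrichs scheme on a periodic grid; the function names and all parameter choices are ours.

```python
import numpy as np

# Picard-type iteration (cf. Eqs. 3.123-3.124) for u_t = u u_x on a periodic grid,
# illustrating how each iterate solves a *linear* problem with frozen coefficients.
# Schematic numerical sketch only; the well-posedness argument itself is analytic.

N, T = 400, 0.5                      # grid points, final time (well before shock formation)
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx                        # CFL: |u| stays around 0.5 here, so |u| dt/dx < 1
nt = int(T / dt)
f = 0.5 * np.sin(x)                  # smooth initial data

def solve_linear(a_hist, u0):
    """Solve v_t = a(t,x) v_x with Lax-Friedrichs, a(t,x) given by the previous iterate."""
    v = u0.copy()
    hist = [v.copy()]
    for n in range(nt):
        vp, vm = np.roll(v, -1), np.roll(v, 1)          # v_{i+1}, v_{i-1} (periodic)
        a = a_hist[n]
        v = 0.5 * (vp + vm) + a * dt / (2 * dx) * (vp - vm)
        hist.append(v.copy())
    return np.array(hist)

# Start the iteration with u^(0)(t,x) = f(x), frozen in time.
u_prev = np.tile(f, (nt + 1, 1))
for k in range(8):
    u_next = solve_linear(u_prev, f)
    diff = np.max(np.abs(u_next - u_prev))
    print(f"iteration {k + 1}: max |u^(k+1) - u^(k)| = {diff:.3e}")
    u_prev = u_next
```

Each pass of the loop is a discrete analogue of solving the linear problem (3.123, 3.124) with coefficients frozen at the previous iterate; the printed differences should shrink with k, mimicking, on a time interval short enough that no shock has formed, the convergence of u^{(k)} that the analytic argument must establish.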