We consider a partial differential equation expressed by
\frac{\partial u}{\partial t} + A \frac{\partial u}{\partial x} = F(t,x,u)    (18.1)
defined for -\infty < x < \infty, t > 0,
with the initial condition u(0,x) specified. (18.2)
In the equation (18.1), A denotes an n \times n matrix whose entries depend on t, x, and u. The function F is vector valued and depends on t, x, and u as well, but not on \partial u/\partial t or \partial u/\partial x. The solution u is a vector function of t and x. In Section 4 we wrote systems of first order partial differential equations in this form. Here, we will rewrite second order equations as such matrix systems and use techniques suggested by the Jordan form representation of A and by the method of characteristics to analyze these equations.
Rewriting second order equations as first order systems is not a new idea. Recall that this notion was used in sophomore ordinary differential equations. Here is an example to recall the form the method took for ordinary differential equations.
We re-write the second order, ordinary differential equation
y'' + 3 y' + y = 0, y(0) = 5, y'(0) = 7
as a first order system. The procedure is standard: let u = y and v = y'. Then
u' = y' = v
and v' = y'' = - y - 3 y' = - u - 3 v.
Hence
\begin{pmatrix} u \\ v \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -1 & -3 \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}
with
\begin{pmatrix} u(0) \\ v(0) \end{pmatrix} = \begin{pmatrix} 5 \\ 7 \end{pmatrix}.
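As a quick check, the eigenvalues of the coefficient matrix recover the characteristic equation of the original second order equation:
\det\begin{pmatrix} -\lambda & 1 \\ -1 & -3-\lambda \end{pmatrix} = \lambda^2 + 3\lambda + 1 = 0,
which is exactly the equation obtained by substituting y = e^{\lambda t} into y'' + 3 y' + y = 0.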
The methods for solving systems are richer and apply to more than just second order equations converted to this form. Moreover, it is the system formulation that generalizes to more complicated situations.
We now illustrate these methods by rewriting the wave equation in this form.
Example 18.1
Consider
u_{tt} - \gamma^2 u_{xx} = 0.    (18.3)
We express (18.3) as a first order system as follows: let v_1 = u_t and v_2 = \gamma u_x. Then equation (18.3) implies that
\frac{\partial v_1}{\partial t} = \gamma \frac{\partial v_2}{\partial x}.
But, assuming the equality of mixed partials,
\frac{\partial v_2}{\partial t} = \gamma \frac{\partial u_x}{\partial t} = \gamma \frac{\partial u_t}{\partial x} = \gamma \frac{\partial v_1}{\partial x}.
Writing this as a system,
\frac{\partial}{\partial t} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 & \gamma \\ \gamma & 0 \end{pmatrix} \frac{\partial}{\partial x} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
or
\frac{\partial}{\partial t} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} 0 & -\gamma \\ -\gamma & 0 \end{pmatrix} \frac{\partial}{\partial x} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
Thus,
\frac{\partial v}{\partial t} + A \frac{\partial v}{\partial x} = 0
where v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} and A is the matrix indicated.
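Anticipating the classification given below, one can compute the eigenvalues of this A directly:
\det(A - \lambda I) = \det\begin{pmatrix} -\lambda & -\gamma \\ -\gamma & -\lambda \end{pmatrix} = \lambda^2 - \gamma^2 = 0, \qquad \lambda = \pm\gamma.
For \gamma \neq 0 the eigenvalues are real and distinct, so the wave equation leads to a totally hyperbolic system.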
Example 18.2
Another second order partial differential equation often encountered is Laplace's equation:
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.
Taking v_1 = u_x and v_2 = u_y, this second order equation can be written as the system
\frac{\partial}{\partial x} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \frac{\partial}{\partial y} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
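By contrast with the wave equation, the coefficient matrix here has no real eigenvalues:
\det\begin{pmatrix} -\lambda & 1 \\ -1 & -\lambda \end{pmatrix} = \lambda^2 + 1 = 0, \qquad \lambda = \pm i,
which is what will make this system elliptic in the classification below.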
In their book Introduction to Partial Differential Equations with Applications, Dover Publications, Inc., New York (1976), Zachmanoglou and Thoe define hyperbolic and elliptic systems; see page 362 of that book.
Definition: A system such as (18.1) with A an n \times n matrix is
(1) hyperbolic if the eigenvalues of A are real and the Jordan form of A is diagonal; that is, if the eigenvalues of A are real and A has n linearly independent eigenvectors.
(2) totally hyperbolic if the eigenvalues of A are real and distinct.
(3) elliptic if A has no real eigenvalues.
Remark. As one might expect, the wave equation leads to a hyperbolic system and Laplace's equation leads to an elliptic system.
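For the 2 \times 2 matrices that appear in this section and in the exercises, a quick criterion follows from the quadratic formula. If
A = \begin{pmatrix} p & q \\ r & s \end{pmatrix},
the eigenvalues are the roots of \lambda^2 - (p+s)\lambda + (ps - qr) = 0, whose discriminant is
(p+s)^2 - 4(ps - qr) = (p-s)^2 + 4qr.
When this quantity is positive the system is totally hyperbolic, and when it is negative it is elliptic; when it vanishes, the repeated eigenvalue has two independent eigenvectors only in the trivial case q = r = 0 and p = s, so that is the only way such a system can be hyperbolic.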
In Section 11, we defined characteristics for an equation of the form
a u_{tt} + b u_{tx} + c u_{xx} + d u_t + e u_x + f u = g.    (18.4)
The characteristics were given by equations (11.3) and (11.4). We now establish the connection one might have expected.
Theorem: When (18.4) is rewritten as a system of the form (18.1), the eigenvalues of A determine the characteristics of (18.4).
To verify this result, we identify v_1 = u_t and v_2 = u_x. Equality of mixed partial derivatives gives that
\frac{\partial v_2}{\partial t} - \frac{\partial v_1}{\partial x} = 0,
and, from (18.4),
\frac{\partial v_1}{\partial t} + \frac{b}{a} \frac{\partial v_1}{\partial x} + \frac{c}{a} \frac{\partial v_2}{\partial x} = \frac{1}{a}\,(g - f u - d v_1 - e v_2).
The resulting matrix form is
\frac{\partial}{\partial t} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} b/a & c/a \\ -1 & 0 \end{pmatrix} \frac{\partial}{\partial x} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, where f_1 = \frac{1}{a}\,(g - f u - d v_1 - e v_2).
The eigenvalues of this matrix A are the solutions of the equation
(\lambda - b/a)\,\lambda + c/a = 0.
That is,
a \lambda^2 - b \lambda + c = 0.
This agrees with (11.4) and (11.5) and completes the verification of the Theorem.
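For instance, for the wave equation (18.3) we have a = 1, b = 0, and c = -\gamma^2, so that
a \lambda^2 - b \lambda + c = \lambda^2 - \gamma^2 = 0, \qquad \lambda = \pm\gamma,
the same eigenvalues found for the wave system in Example 18.1; the characteristics are the lines x \pm \gamma t = \text{constant}.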
In what follows we suppose that the system is totally hyperbolic. This implies that the characteristic curves will be real and distinct. The key feature of the method of characteristics is that along the characteristic curves system (18.1) can be reduced to a system of ordinary differential equations.
Example 18.3
Suppose that the coefficients of A in (18.1) depend on t, x, and u, that F = 0, and that the system is totally hyperbolic. Let \lambda and \mu be the two eigenvalues of A. Construct the matrix K so that
K^{-1} A K = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} = D,
or, what is the same,
A = K D K^{-1}.
Note that \lambda, \mu, K, and D are, in general, not constant in t or x, since A is not. We recall that K is constructed as the matrix whose columns are eigenvectors of A. If u = Kv, then since
0 = \frac{\partial u}{\partial t} + A \frac{\partial u}{\partial x},
we have
0 = \left( \frac{\partial K}{\partial t}\, v + K \frac{\partial v}{\partial t} \right) + A \left( \frac{\partial K}{\partial x}\, v + K \frac{\partial v}{\partial x} \right)
= K \frac{\partial v}{\partial t} + A K \frac{\partial v}{\partial x} + \frac{\partial K}{\partial t}\, v + A \frac{\partial K}{\partial x}\, v.
Multiplying by K^{-1},
0 = \frac{\partial v}{\partial t} + D \frac{\partial v}{\partial x} + K^{-1} \frac{\partial K}{\partial t}\, v + K^{-1} A \frac{\partial K}{\partial x}\, v    (18.5)
= \frac{\partial v}{\partial t} + D \frac{\partial v}{\partial x} + G(t,x,v),
where G(t,x,v) denotes the lower order term K^{-1} (\partial K/\partial t)\, v + K^{-1} A\, (\partial K/\partial x)\, v.
Then
0 = \frac{\partial v_1}{\partial t} + \lambda \frac{\partial v_1}{\partial x} + G_1(t,x,v)
and
0 = \frac{\partial v_2}{\partial t} + \mu \frac{\partial v_2}{\partial x} + G_2(t,x,v).
This is a system of differential equations of the kind we solved for first order systems, and it can be solved with similar techniques.
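To make the reduction to ordinary differential equations explicit, note that along a curve x = x(t) satisfying dx/dt = \lambda(t,x,u), the chain rule gives
\frac{d}{dt}\, v_1\bigl(t, x(t)\bigr) = \frac{\partial v_1}{\partial t} + \lambda \frac{\partial v_1}{\partial x} = -G_1(t,x,v),
and similarly v_2 satisfies dv_2/dt = -G_2 along the curves with dx/dt = \mu. These are the characteristic ordinary differential equations promised above.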
Example 18.4: Consider the partial differential equation
u_{xx} + 4 u_{xy} + 3 u_{yy} = 0.
We find the general solution by the techniques of this section.
The problem is rewritten as a system
\frac{\partial U}{\partial x} + A \frac{\partial U}{\partial y} = 0, \qquad U = \begin{pmatrix} u_x \\ u_y \end{pmatrix},
where A is the matrix
A = \begin{pmatrix} 4 & 3 \\ -1 & 0 \end{pmatrix}.
This has Jordan form
D = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix},
where the matrix K such that
D = K A K^{-1}
is
K = \begin{pmatrix} 1 & 3 \\ 1 & 1 \end{pmatrix}.
Choose
v(x,y) = f(y - x) and w(x,y) = g(y - 3x).
Take u(x,y) to be the first component of
K^{-1} \begin{pmatrix} v(x,y) \\ w(x,y) \end{pmatrix}.
This provides a solution of the partial differential equation.
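As a check, with the K above,
K^{-1} = \begin{pmatrix} -1/2 & 3/2 \\ 1/2 & -1/2 \end{pmatrix},
so the first component of K^{-1}(v, w)^T is -\tfrac{1}{2} f(y - x) + \tfrac{3}{2} g(y - 3x). Since f and g are arbitrary, the general solution may be written u(x,y) = \varphi(y - x) + \psi(y - 3x); substituting u = h(y - mx) into the equation gives (m^2 - 4m + 3)\, h'' = 0, which vanishes exactly for m = 1 and m = 3.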
Exercises
18.1 Find the general solution for these two partial differential equations:
(a) u_{xx} + 2 u_{xy} - 3 u_{yy} = 0.
(b) u_{xx} - 4 u_{xy} + 4 u_{yy} = 0.
18.2 Solve the partial differential equation
u_{xx} + 4 u_{xy} + 3 u_{yy} = 0
subject to these conditions:
(a) u(0,y) = \sin(y) and u_x(0,y) = 0.
(b) u(x,-x) = j(x) and \partial u/\partial\eta = h(x), where \eta = (1,1)/\sqrt{2}. (Recall that
\partial u/\partial\eta = \langle (u_x, u_y), \eta \rangle.)
(c) u(x, x^2) = j(x) and \partial u/\partial\eta = h(x). In this case \eta(x) = (-2x, 1)/\sqrt{1 + 4x^2}.
18.3 For each of the following matrices A, solve the system
Z_t = A Z_x.
(a) A = \begin{pmatrix} 3/2 & -1/2 \\ -1/2 & 3/2 \end{pmatrix}
(b) A = \begin{pmatrix} -3/2 & 1/2 \\ 1/2 & -3/2 \end{pmatrix}
(c) A = \begin{pmatrix} 41/25 & -12/25 \\ -12/25 & 34/25 \end{pmatrix}
(d) A = \begin{pmatrix} -41/25 & 12/25 \\ 12/25 & -34/25 \end{pmatrix}
18.4 For each of the following matrices A, solve the system
Z_t = A Z_x.
(a) A = \begin{pmatrix} (1-x)/2 & (1+x)/2 \\ (1+x)/2 & (1-x)/2 \end{pmatrix}
(b) A = \begin{pmatrix} (9-16x)/25 & 12(1+x)/25 \\ 12(1+x)/25 & (16-9x)/25 \end{pmatrix}
(c) A = \begin{pmatrix} (t-x)/2 & (t+x)/2 \\ (t+x)/2 & (t-x)/2 \end{pmatrix}
18.5 (a) Write the parabolic equation
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}    (18.5)
as the system
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \frac{\partial}{\partial t} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \frac{\partial}{\partial x} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
(b) In his book Partial Differential Equations, Oxford Applied Mathematics and Computing Series, Oxford University Press, New York (1980), W. E. Williams classifies n-dimensional systems of the form
A u_x + B u_y = c.    (18.6)
See page 297. Consider the equation
\det(A\lambda - B) = 0    (18.7)
and the vector equation
(A\lambda - B)\, v = 0.    (18.8)
The system (18.6) is
hyperbolic if the roots of equation (18.7) are all real (but not necessarily distinct) and there exist n independent solutions of equation (18.8);
totally hyperbolic when the roots of equation (18.7) are real and distinct;
parabolic when the roots are real but not distinct and there exist fewer than n independent solutions of (18.8);
elliptic if none of the roots of (18.7) are real.
(The scheme is illustrated with the wave system of Example 18.1 at the end of the section.)
With these classifications, is (18.5) parabolic?
(c) Write \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 and \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0
in the form (18.6) and classify them with the scheme of Williams.
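For comparison, the wave system of Example 18.1 can be put in the form (18.6) by writing t in place of x and x in place of y, with A = I, B the matrix found there, and c = 0. Then
\det(A\lambda - B) = \det\begin{pmatrix} \lambda & \gamma \\ \gamma & \lambda \end{pmatrix} = \lambda^2 - \gamma^2 = 0, \qquad \lambda = \pm\gamma,
so for \gamma \neq 0 the roots are real and distinct and the wave system is totally hyperbolic in Williams' scheme as well.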