Finding Green functions for ODEs.

Linear Methods of Applied Mathematics

Evans M. Harrell II and James V. Herod*

*(c) Copyright 1994,1995,1996 by Evans M. Harrell II and James V. Herod. All rights reserved.

This section has a translation into Belorussian, with permission.

version of 1 June 1996

Finding Green functions for ordinary differential equations.

We begin with the case of the first Fredholm alternative. If a problem falls into this case, we are guaranteed that it has a unique solution; but how do we find it?

Our method of solving a nonhomogeneous differential equation will be to find an integral operator which produces a solution satisfying all given boundary conditions. The integral operator has a kernel called the Green function, usually denoted G(x,t). This is multiplied by the nonhomogeneous term and integrated with respect to one of the variables.

There are several methods of constructing Green functions. The one we will present first, and emphasize, is the one students seem to prefer, perhaps because it is easy to remember and has an inherent simplicity. Other methods are included in these notes for comparison, since they rely on ideas that are important in their own right.

As before, we assume a certain form for the differential operator L:

L(y) = Σ_{p=0}^{n} ap(x) y^{(p)}(x) = an(x) y^{(n)}(x) + ... + a1(x) y'(x) + a0(x) y(x).

We suppose that an(x) is not zero on [0,1] and that each coefficient ap(x), p = 0,...,n, has at least n continuous derivatives. We discuss the construction of the Green function in three cases depending on the nature of the boundary conditions. Until further notice, we assume the first alternative holds, and we will repeat this warning for emphasis. We continue to denote by M and M* the manifolds associated with {L,B} and {L*,B*}, respectively.

In most of our examples, and in the majority of applications, the differential equations are of second order. Ultimately, this arises from Newton's force law, F = m a, which is second order, since acceleration is a second derivative.

Let's begin by describing the algorithm for constructing G for second-order problems. We'll discuss why this works below.

The function G depends on two variables and has the following properties: if t is in (0,1), then

G(x,t), ∂G(x,t)/∂x, and ∂^2 G(x,t)/∂x^2 exist for 0 < x < t and for t < x < 1. Further suppose that these derivatives have continuous extensions to the closed triangular regions 0 ≤ x ≤ t and t ≤ x ≤ 1. The effect of this extension is that the one-sided limits

∂^p G(t+,t)/∂x^p  and  ∂^p G(t-,t)/∂x^p  exist

for p = 1, 2.

At the boundary we shall insist that G(x,t) be continuous. For the partial Gx, however, we require a special jump discontinuity as follows:

Pick t in [0,1].
(a)  L(G(.,t))(x) = 0 for 0 < x < t and for t < x < 1,
(b)  G(.,t) is in M,
(c)  G(x,t) is a continuous function,
(d)  ∂G(t+,t)/∂x - ∂G(t-,t)/∂x = 1/a2(t).

Here is what happens if there are derivatives of higher orders:

The function G should be constructed on [0,1]×[0,1] to have the following properties: if t is in (0,1) and 0 ≤ p ≤ n, then

∂^p G(x,t)/∂x^p exists for 0 < x < t and for t < x < 1, with a continuous extension to each of the closed triangles 0 ≤ x ≤ t and t ≤ x ≤ 1.

At this point, all we have asked of G is that it should have n continuous partials on the closed triangles 0 ≤ x ≤ t and t ≤ x ≤ 1. The requirement along the boundary x = t will be that for p ≤ n-2 we have continuity. For example, at p = 0 the effect is that G(t+,t) = G(t-,t), and the same holds for the pth partials in x up to p = n-2. For the (n-1)st partial, we allow a jump discontinuity as prescribed in the summary below:

Pick t in [0,1].
(a)  L(G(.,t))(x) = 0 for 0 < x < t and for t < x < 1,
(b)  G(.,t) is in M,
(c)  for 0 ≤ p ≤ n-2,
     ∂^p G(t+,t)/∂x^p = ∂^p G(t-,t)/∂x^p,
(d)  ∂^{n-1} G(t+,t)/∂x^{n-1} - ∂^{n-1} G(t-,t)/∂x^{n-1} = 1/an(t).

Before showing that the above recipe really does provide solutions to the nth order equation, it would be well to do some examples, beginning with the TYPICAL PROBLEM:

EXAMPLE (First alternative, initial conditions): Here is the problem: given f continuous on [0,1], construct y such that

y'' + 3y' + 2y = f with y(0) = y'(0) = 0.

Let's identify the important parts here.

L(y) = y'' + 3y' + 2y, B1(y) = y(0) , B2(y) = y'(0),

and M = {y: y(0) = y'(0) = 0}.

We are in the first alternative because for this {L,B1,B2} the system L(y) = 0, B1(y) = B2(y) = 0 has only one solution and it is zero. We construct G step-by-step from the above directions.

To follow the directions of step (a) we need the general solution of the homogeneous equation L(y) = 0, that is, we need the general solution of the homogeneous equation

y'' + 3y' +2y = 0.

It's not so hard to see that linearly independent solutions for this equation are e^{-2x} and e^{-x}. (In this age, if you prefer, you can find these solutions with Maple, using the command dsolve, or with Mathematica, using DSolve.) Thus G satisfies step (a) if

G(x,t) = { A e^{-2x} + B e^{-x}   for x < t,
         { C e^{-2x} + D e^{-x}   for t < x.
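If you would rather check by machine than by hand, here is a minimal Python sketch (ours, not part of the text) confirming that e^{-2x} and e^{-x} really do satisfy y'' + 3y' + 2y = 0:

```python
import math

def residual(r, x):
    # For y = e^{rx} we have y'' + 3y' + 2y = (r^2 + 3r + 2) e^{rx},
    # so the residual vanishes exactly when r^2 + 3r + 2 = 0.
    y = math.exp(r*x)
    return (r*r)*y + 3*(r*y) + 2*y

for x in (0.0, 0.3, 1.0):
    assert abs(residual(-2.0, x)) < 1e-12   # e^{-2x} is a solution
    assert abs(residual(-1.0, x)) < 1e-12   # e^{-x} is a solution
```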

Note that A, B, C, and D are constant in x, but may change with t. We shall determine the four unknowns from the continuity and jump conditions.

To follow the directions of step (b), which requires that G(.,t) be in M, we need

G(0,t) = 0 and Gx(0,t) = 0.

The implications of this are that

A + B = 0 and -2A - B = 0.

This implies that A = 0 and B = 0.

To follow the directions of step (c), which requires that G(t+,t) = G(t-,t), we need

(C - A) e^{-2t} + (D - B) e^{-t} = 0.

Or, knowing that A = B = 0,

C e^{-2t} + D e^{-t} = 0.

To follow the directions of step (d), which requires that

Gx(x,t)|_{x=t+} - Gx(x,t)|_{x=t-} = 1,

we need

-2(C - A) e^{-2t} - (D - B) e^{-t} = 1.

Knowing that A = B = 0,

-2C e^{-2t} - D e^{-t} = 1.

This gives two equations in the two unknowns C and D. The solution is

C = -e^{2t} and D = e^t.

Try to get an overview of this example: after finding two linearly independent solutions of the second order equation L(y) = 0, we would know G provided we solved for A, B, C, and D. Steps (b), (c), and (d) gave four equations in these four unknowns. Written in matrix form,

[  1      1        0          0     ] [ A ]   [ 0 ]
[ -2     -1        0          0     ] [ B ] = [ 0 ]
[  0      0     e^{-2t}     e^{-t}  ] [ C ]   [ 0 ]
[  0      0    -2e^{-2t}   -e^{-t}  ] [ D ]   [ 1 ]

The problem has been reduced to a matrix equation! We have solved the equations to find A = 0, B = 0, C = -e^{2t}, and D = e^t. The end result is:

G(x,t) = { 0                        if x < t,
         { -e^{2(t-x)} + e^{t-x}   if t < x.
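As a sanity check, the following Python sketch (ours) applies this G by numerical quadrature with the sample forcing term f ≡ 1, for which one can verify by hand that the solution of y'' + 3y' + 2y = 1, y(0) = y'(0) = 0 is y(x) = 1/2 - e^{-x} + e^{-2x}/2:

```python
import math

def G(x, t):
    # The Green function just found: zero for x < t,
    # -e^{2(t-x)} + e^{t-x} for t < x.
    if x < t:
        return 0.0
    return -math.exp(2*(t - x)) + math.exp(t - x)

def apply_G(f, x, n=2000):
    # y(x) = integral_0^1 G(x,t) f(t) dt, by the trapezoid rule.
    h = 1.0/n
    s = 0.5*(G(x, 0.0)*f(0.0) + G(x, 1.0)*f(1.0))
    for k in range(1, n):
        s += G(x, k*h)*f(k*h)
    return s*h

# Compare against the hand-computed solution for f = 1.
for x in (0.25, 0.5, 0.9):
    exact = 0.5 - math.exp(-x) + 0.5*math.exp(-2*x)
    assert abs(apply_G(lambda t: 1.0, x) - exact) < 1e-4
```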

We are confident that if f is continuous then the equation y = Gf provides a solution for L(y) = f because of Exercise 3 of the Introduction.


Suppose that L(y)(x) = a2(x) y''(x) + a1(x) y'(x) + a0(x) y(x). Let

u(x) = ∫_0^1 G(x,t) f(t) dt.

Since G(.,t) is in M, u is in M. It remains to see that L(u) = f. Note that

u(x) = ∫_0^x G(x,t) f(t) dt + ∫_x^1 G(x,t) f(t) dt,

u'(x) = ... = ∫_0^x ∂G(x,t)/∂x f(t) dt + ∫_x^1 ∂G(x,t)/∂x f(t) dt.

This last equality holds because of the assumption that G(x,x-) = G(x,x+).


u''(x) = ... = f(x)/a2(x) + ∫_0^x ∂^2 G(x,t)/∂x^2 f(t) dt + ∫_x^1 ∂^2 G(x,t)/∂x^2 f(t) dt.

This last equality uses the condition Gx(x,x-) - Gx(x,x+) = 1/a2(x). Finally, we use the fact that G(x,t), as a function of x, satisfies L(y) = 0 on [0,x] and on [x,1] to get that L(u) = f.

EXAMPLE (first alternative; unmixed, two point boundary conditions):

We will construct the Green function for the problem

y'' + 3y' + 2y = f with y(0) = 0 and y(1) = 0.

Here are the important parts:

L(y) = y'' + 3y' + 2y, B1(y) = y(0), B2(y) = y(1),

and M = {y: y(0) = y(1) = 0}.

A little bit of work needs to be done to verify that we are in the first alternative. If L(y) = 0, then there are numbers a and b such that

y(x) = a e^{-2x} + b e^{-x}.

To require that B1(y) = 0 and B2(y) = 0 requires that

0 = a + b

and 0 = a e^{-2} + b e^{-1}.

The only solution to this pair of equations is a = 0 = b, which verifies that we are in the first alternative.

The construction for G is as before:

G(x,t) = { A e^{-2x} + B e^{-x}   for x < t,
         { C e^{-2x} + D e^{-x}   for t < x.

The two boundary conditions and the continuity conditions lead to the equations

0 = A + B,

0 = C e^{-2} + D e^{-1},

0 = (C - A) e^{-2t} + (D - B) e^{-t},

1 = -2(C - A) e^{-2t} - (D - B) e^{-t}.

Certainly, these equations can be solved, although the details are tedious. Here is a better idea. Instead of choosing e^{-2x} and e^{-x} as linearly independent solutions of the equation L(y) = 0, choose another pair having these properties:

u1(0) = 0 and u1(1) ≠ 0,

u2(0) ≠ 0 and u2(1) = 0.

(For this example, u1(t) = e^{-2t} - e^{-t} and u2(t) = e^{-2(t-1)} - e^{-(t-1)}.)

Now, make up G this way,

G(x,t) = { A u1(x) + B u2(x)   for x < t,
         { C u1(x) + D u2(x)   for x > t.

Apply the boundary conditions:

0 = Bu2(0), which implies that B = 0,

and 0 = Cu1(1), which implies that C = 0.

The continuity conditions give the two equations

0 = G(t+,t) - G(t-,t) = D u2(t) - A u1(t);  and  1/a2(t) = ∂G(t+,t)/∂x - ∂G(t-,t)/∂x = D u2'(t) - A u1'(t).

From these equations we get

A = u2(t)/(a2(t) w(t))  and  D = u1(t)/(a2(t) w(t)),

where

w(t) = det [ u1(t)    u2(t)  ]
           [ u1'(t)   u2'(t) ]

is called the Wronskian of u1 and u2.

Here is the final result:

G(x,t) = { u2(t) u1(x)/(a2(t) w(t))   for x < t,
         { u1(t) u2(x)/(a2(t) w(t))   for t < x.

There is one more important piece of information that you will learn, or be reminded of, if we work out both parts of the formula for G. Recall that

u1(t) = e^{-2t} - e^{-t} and u2(t) = e^{-2(t-1)} - e^{-(t-1)}.

And now, to compute w(t). The chore of that computation seems too tedious to be fun. Not to worry! Look up "Wronskian" in some good sophomore differential equations book and you will find a convenient formula:

w(t) = w(0) exp( -∫_0^t [a_{n-1}(s)/a_n(s)] ds ).

Notice that this is particularly simple if a_{n-1} = 0, as often happens: the Wronskian is then constant.
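For the u1 and u2 of this example, the formula can be verified by machine; here is a short Python sketch (ours):

```python
import math

def u1(t):  return math.exp(-2*t) - math.exp(-t)
def u1p(t): return -2*math.exp(-2*t) + math.exp(-t)
def u2(t):  return math.exp(-2*(t - 1)) - math.exp(-(t - 1))
def u2p(t): return -2*math.exp(-2*(t - 1)) + math.exp(-(t - 1))

def w(t):
    # the Wronskian, computed directly from the determinant
    return u1(t)*u2p(t) - u2(t)*u1p(t)

W0 = w(0.0)
assert abs(W0 - (math.e**2 - math.e)) < 1e-9
for t in (0.1, 0.5, 0.9):
    # the convenient formula with a1/a2 = 3: w(t) = w(0) e^{-3t}
    assert abs(w(t) - W0*math.exp(-3*t)) < 1e-9
```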

Now the computation is easy:

w(0) = det [  0        e^2 - e  ]
           [ -1      -2e^2 + e  ]  =  e^2 - e,

and w(t) = (e^2 - e) e^{-3t}.


For x < t:  u2(t) u1(x)/w(t) = (e^{2+t} - e^{1+2t})(e^{-2x} - e^{-x})/(e^2 - e) = e^{2(t-x)} (e^{1-t} - 1)(1 - e^x)/(e - 1).

For t < x:  u1(t) u2(x)/w(t) = (e^t - e^{2t})(e^{2-2x} - e^{1-x})/(e^2 - e) = e^{t-x} (1 - e^t)(e^{1-x} - 1)/(e - 1).
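Putting the pieces together, the defining properties of this G can also be tested numerically; in the Python sketch below (ours) the boundary values, the continuity, and the unit jump of G_x at x = t are checked with finite differences:

```python
import math
E = math.e

def u1(x): return math.exp(-2*x) - math.exp(-x)
def u2(x): return math.exp(-2*(x - 1)) - math.exp(-(x - 1))
def w(t):  return (E**2 - E)*math.exp(-3*t)   # the Wronskian, by the convenient formula

def G(x, t):
    # u2(t)u1(x)/w(t) to the left of t, u1(t)u2(x)/w(t) to the right (a2 = 1)
    return u2(t)*u1(x)/w(t) if x < t else u1(t)*u2(x)/w(t)

t = 0.4
assert abs(G(0.0, t)) < 1e-12 and abs(G(1.0, t)) < 1e-12   # G(.,t) is in M
h = 1e-6
assert abs(G(t + h, t) - G(t - h, t)) < 1e-5               # continuity at x = t
jump = (G(t + h, t) - G(t, t))/h - (G(t, t) - G(t - h, t))/h
assert abs(jump - 1.0) < 1e-3                              # G_x jumps by 1/a2(t) = 1
```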

We got more from this example than the answer: we also got the following quick method, which works for this type of problem.

Pick u1 and u2 such that B1(u1) = 0, B2(u1) ≠ 0, and B2(u2) = 0, B1(u2) ≠ 0. Then

G(x,t) = { u2(t) u1(x)/(a2(t) w(t))   for x < t,
         { u1(t) u2(x)/(a2(t) w(t))   for t < x,

where w is the Wronskian of u1 and u2.

EXAMPLE (first alternative; mixed, two point boundary conditions):


L(y) = y'', B1(y) = y(0) + y(1), and B2(y) = y'(0) + y'(1).

First, we verify that we have the first alternative, supposing that

L(y) = 0 and B1(y) = B2(y) = 0.

Then y(x) = a + bx , for constants a and b. Since

y(0) + y(1) = 0

then 2a + b = 0.


Since

y'(0) + y'(1) = 0,

then 2b = 0.

These two equations imply that a = b = 0. We now begin the construction of the Green function.

Pick 0 < t < 1:

G(x,t) = { A + Bx   for x < t,
         { C + Dx   for t < x.

We have four constants to determine; here are four equations:

0 = G(0,t) + G(1,t) = A + C + D,

0 = Gx(0,t) + Gx(1,t) = B + D,

0 = G(t+,t) - G(t-,t) = (C - A) + (D - B) t,

1/a2(t) = Gx(t+,t) - Gx(t-,t) = D - B.

The solution of these four equations is A = (2t - 1)/4, B = -1/2, C = -(2t + 1)/4, and D = 1/2.
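It is worth a moment to verify that these constants do satisfy all four equations; a quick Python check (ours), with an arbitrary t:

```python
t = 0.3
A, B = (2*t - 1)/4, -0.5
C, D = -(2*t + 1)/4, 0.5

assert abs(A + (C + D)) < 1e-12          # 0 = G(0,t) + G(1,t)
assert abs(B + D) < 1e-12                # 0 = G_x(0,t) + G_x(1,t)
assert abs((C - A) + (D - B)*t) < 1e-12  # continuity of G at x = t
assert abs((D - B) - 1.0) < 1e-12        # G_x jumps by 1/a2(t) = 1
```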

An understanding of the equation L(y) = δ(x - t).

By now you should believe that except for arithmetic details, you can work any of these problems. We have come to the place where we need to get this problem into perspective.

Recall the Kronecker delta symbol from Chapter XIV,

δ_jk = 1 if j = k, and 0 otherwise; so  Σ_{k=1}^{n} δ_jk v_k = v_j

for any vector v.

Thus A G = Id in components becomes

Σ_{k=1}^{n} A_jk G_km = δ_jm.

When trying to solve differential equations, we might hope to find G(.,t) as a solution to the equation L(G(.,t))(x) = δ(x - t). Some understanding of this equation is in order, for the right side is not a function in the ordinary sense. As has already been pointed out, it is a "generalized function". The analogy with the matrix problem is pretty close: the delta function in essence gives the continuous coordinates of the identity operator:

∫_a^b δ(x - t) f(t) dt = f(x),

for any a,b with a < x < b. Recall that the integral is a sort of continuous sum, so this is appropriate.
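The sifting property can be made concrete by replacing δ with a narrow spike of unit area; in this Python sketch (ours), a Gaussian of width ε stands in for δ, and the integral reproduces f(x) to within O(ε^2):

```python
import math

def delta_eps(s, eps=1e-3):
    # a Gaussian of unit area and width eps, approximating delta(s)
    return math.exp(-s*s/(2*eps*eps))/(eps*math.sqrt(2*math.pi))

def sift(f, x, a=0.0, b=1.0, n=4000):
    # integral_a^b delta_eps(x - t) f(t) dt, by the trapezoid rule
    h = (b - a)/n
    s = 0.5*(delta_eps(x - a)*f(a) + delta_eps(x - b)*f(b))
    for k in range(1, n):
        t = a + k*h
        s += delta_eps(x - t)*f(t)
    return s*h

f = lambda t: math.sin(3*t) + 2.0
for x in (0.3, 0.5, 0.7):
    assert abs(sift(f, x) - f(x)) < 1e-3   # the spike picks out f(x)
```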

We present here, not a proof, but an understanding that

L(G(.,t))(x) = δ(x - t).


The ideas should be examined and re-examined in later courses as a theory for generalized functions is developed:

Suppose that

y(x) = ∫_0^1 G(x,t) f(t) dt

and that equation (15.2) holds. Intuition is a guide:

L(y)(x) = L( ∫_0^1 G(x,t) f(t) dt ) = ... = ∫_0^1 δ(x - t) f(t) dt = f(x).

If one were asked to solve the equation L(y) = f, where L is a reasonable second order operator, in the context of a sophomore differential equations course, one would think of the variation of parameters formula. In that setting, and for second order problems with u0 and u1 linearly independent solutions of the homogeneous equation,

y(x) = C0(x) u0(x) + C1(x) u1(x).


∂C0(x)/∂x u0(x) + ∂C1(x)/∂x u1(x) = 0,

∂C0(x)/∂x u0'(x) + ∂C1(x)/∂x u1'(x) = f(x)/a2(x),

so that

∂C0(x)/∂x = -u1(x) f(x)/(a2(x) w(x)),

∂C1(x)/∂x = u0(x) f(x)/(a2(x) w(x)).

This suggests an interpretation for "solution" of the second order equation:

L(G(.,t))(x) = δ(x - t).

Namely, G(.,t) is the continuous function given by

(a)  G(x,t) = C0(x,t) u0(x) + C1(x,t) u1(x),

(b)  ∂C0(x,t)/∂x u0(x) + ∂C1(x,t)/∂x u1(x) = 0,

(c)  ∂C0(x,t)/∂x u0'(x) + ∂C1(x,t)/∂x u1'(x) = δ(x - t)/a2(x).

As above, the distribution equations should have solution

(d)  ∂C0(x,t)/∂x = -u1(x) δ(x - t)/(a2(x) w(x));
(e)  ∂C1(x,t)/∂x = u0(x) δ(x - t)/(a2(x) w(x)).

THEOREM. If, for each t, G(.,t) is in M and L(G(.,t))(x) = δ(x - t), then G satisfies the four equations used above to define G for second order problems.

Proof. We hope to recognize the four equations which we used to define G for second order problems as arising from the above requirements for G. Two of those equations come from asking that G(.,t) should satisfy the two boundary equations. One other, G(t+,t) - G(t-,t) = 0, comes from the requirement that G(.,t) should be continuous. To derive the equation

Gx(t+,t) - Gx(t-,t) = 1/a2(t),

we first compute Gx(x,t).

Gx(x,t) = C0,x(x,t) u0(x) + C0(x,t) u0'(x) + C1,x(x,t) u1(x) + C1(x,t) u1'(x)
        = C0(x,t) u0'(x) + C1(x,t) u1'(x).

This last equality follows from (b). To find

Gx(t+,t) - Gx(t-,t) = [C0(t+,t)-C0(t-,t)] u0'(t) + [C1(t+,t)-C1(t-,t)] u'1(t),

we must evaluate

[C0(t+,t) - C0(t-,t)] = -u1(t)/(a2(t) w(t)), obtained by integrating equation (d) across the singularity at x = t.

In a similar manner,

[C1(t+,t) - C1(t-,t)] = u0(t)/(a2(t) w(t)).


∂G(x,t)/∂x|_{x=t+} - ∂G(x,t)/∂x|_{x=t-} = [-u1(t) u0'(t) + u0(t) u1'(t)]/(a2(t) w(t)) = 1/a2(t).

Hence, the inverse of the differential operator L on the set M is obtained by finding the function G(.,t) in M which satisfies (15.2),

L(G(.,t))(x) = δ(x - t).



The second alternative

We now discuss the problems where the Second Alternative holds. The supposition is that there is a nontrivial solution for L(y) = 0, B1(y) = B2(y) = 0. The Fredholm Theorems assure us that, if f is continuous, then there is a solution for L(y) = f, with B1(y) = B2(y) = 0 provided

< f, w > = ∫_0^1 f(x) w(x) dx = 0

for all solutions w of the equation L*(w) = 0, B1*(w) = B2*(w) = 0. As before, we will construct Green functions G such that, in case f satisfies the above requirement, then

y(x) = ∫_0^1 G(x,t) f(t) dt

provides a solution for L(y) = f.

In this second alternative, there may be many solutions for the equation L(y) = f. Consequently, we expect there may be many Green functions. In the technique developed below, G( ,t) is always in

M = {y: B1(y) = B2(y) = 0}.

This is not necessarily true for Green functions constructed by other methods: see for example the construction found by Don Jones while a graduate research assistant at Georgia Tech, given in an appendix.

We again divide the problems into three cases according to the nature of the boundary conditions. We shall illustrate methods of construction.

The first case to consider is where the boundary conditions arise as initial conditions. This case is not pertinent here, for the initial value problem has a unique solution; thus, case one is always in the first alternative.

EXAMPLE (Second Alternative; unmixed, two point boundary conditions):

Suppose that L(y) = y'' + y' - 2y, B1(y) = y(0) - y'(0), and B2(y) = y(1) - y'(1). It is the purpose of this example to show that there is no function G such that L(G(.,t))(x) = δ(x - t). Note that L*(z) = z'' - z' - 2z and M* = {z: 2z(1) = z'(1), 2z(0) = z'(0)}. Nontrivial functions in the nullspace of {L*, B1*, B2*} are multiples of e^{2x}. Hence, we are in the second alternative. The Fredholm Alternative theorem suggests that there will be no function G such that, if t is in (0,1), then the distribution equation L(G(.,t))(x) = δ(x - t) holds, unless

∫_0^1 δ(x - t) e^{2t} dt = 0.

Of course, the value of this integral is not zero.

For this situation, we must modify the construction of the Green function.


Step (1) Find the nullspace of L* in M*.

Step (2) Find an orthonormal basis for this nullspace. Call this basis v1,v2,...vm, m < n.

Step (3) Construct up such that L(up) = vp, p = 1, 2, ..., m.

Step (4) Construct G such that L(G(.,t))(x) = δ(x - t) - Σ_{p=1}^{m} vp(x) vp(t).

THEOREM. If 0 < t < 1, then there is G(.,t) such that

L(G(.,t))(x) = δ(x - t) - Σ_{p=1}^{m} vp(x) vp(t).

INDICATION OF PROOF. By the Fredholm Alternative Theorems, there will be such a function G provided

0 = < δ(x - t) - Σ_{p=1}^{m} vp(x) vp(t), w(t) > = ∫_0^1 [δ(x - t) - Σ_{p=1}^{m} vp(x) vp(t)] w(t) dt

for all w in the nullspace of L*. This can be verified by writing w in terms of this orthonormal basis,
w(x) = Σ_{p=1}^{m} α_p vp(x),
and evaluating the dot product.


To construct G satisfying

L(G(.,t))(x) = δ(x - t) - Σ_{p=1}^{m} vp(x) vp(t),

first find linearly independent solutions yp, p = 1,...,n, of the homogeneous equation L(y) = 0. Then find solutions up, p = 1,...,m, of the equations L(up)(x) = vp(x). It is not required that these solutions should satisfy any special boundary conditions.

The problem of finding G is now a problem of finding constants Cp and Dp such that

G(x,t) = { Σ_{p=1}^{n} Cp yp(x) - Σ_{p=1}^{m} vp(t) up(x)   if x < t,
         { Σ_{p=1}^{n} Dp yp(x) - Σ_{p=1}^{m} vp(t) up(x)   if t < x.

The constants Cp and Dp are determined by these 2n equations:

(a)  Bp(G(.,t)) = 0, p = 1, 2, ..., n;
(b)  0 = ∂^p G(x,t)/∂x^p|_{x=t+} - ∂^p G(x,t)/∂x^p|_{x=t-},  0 ≤ p ≤ n-2;
(c)  1/an(t) = ∂^{n-1} G(x,t)/∂x^{n-1}|_{x=t+} - ∂^{n-1} G(x,t)/∂x^{n-1}|_{x=t-}.


Recall that L(y) = y'' + y' - 2y, B1(y) = y(0) - y'(0), and B2(y) = y(1) - y'(1). Linearly independent solutions of L(y) = 0 are e^{-2x} and e^x. A normalized basis for the one-dimensional nullspace of {L*, B1*, B2*} is α e^{2x}, where α is the positive number given by

α^2 = 1 / ∫_0^1 (e^{2x})^2 dx = 4/(e^4 - 1).

A solution u of the equation y'' + y' - 2y = α e^{2x} is u(x) = α e^{2x}/4.

Now, G is given by:

G(x,t) = { A e^{-2x} + B e^x - α^2 e^{2(x+t)}/4   if x < t,
         { C e^{-2x} + D e^x - α^2 e^{2(x+t)}/4   if t < x.

The four constants A, B, C, and D can be determined from these four equations:

(1) 0 = B1(G(.,t)) = G(0,t) - Gx(0,t) = 3A + α^2 e^{2t}/4,

(2) 0 = B2(G(.,t)) = G(1,t) - Gx(1,t) = 3C e^{-2} + α^2 e^{2(1+t)}/4,

(3) 0 = G(t+,t) - G(t-,t) = (C - A) e^{-2t} + (D - B) e^t, and

(4) 1 = Gx(t+,t) - Gx(t-,t) = -2(C - A) e^{-2t} + (D - B) e^t.

Upon solving this system of four equations in four unknowns, an infinity of solutions will be found, determined by these three equations:

A = -α^2 e^{2t}/12,  C = A e^4,  D - B = e^{-t}/3.
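Again a machine check is reassuring. The Python sketch below (ours) picks B = 0 (the choice is free), forms A, C, D from the three relations, and confirms equations (1)-(4):

```python
import math

alpha2 = 4.0/(math.exp(4.0) - 1.0)   # alpha^2 = 1 / integral_0^1 e^{4x} dx

t = 0.6
A = -alpha2*math.exp(2*t)/12
C = A*math.exp(4.0)
B = 0.0                              # B is free: any value yields a Green function
D = B + math.exp(-t)/3

assert abs(3*A + alpha2*math.exp(2*t)/4) < 1e-12                         # (1)
assert abs(3*C*math.exp(-2) + alpha2*math.exp(2*(1 + t))/4) < 1e-12      # (2)
assert abs((C - A)*math.exp(-2*t) + (D - B)*math.exp(t)) < 1e-12         # (3)
assert abs(-2*(C - A)*math.exp(-2*t) + (D - B)*math.exp(t) - 1) < 1e-12  # (4)
```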



The same construction can be carried out for each of the following problems:

(a) L(y) = y'' + y' - 2y, B1(y) = y(0) - y'(0), B2(y) = y(1) - y'(1).

(b) L(y) =4y'' - y, B1(y) = y(0) - 2y'(0), B2(y) = y(1) - 2y'(1).

(c) L(y) = y'' - 2y' - 3y, B1(y) = 3y(0) - y'(0), B2(y) = 3y(1) - y'(1).

EXAMPLE (Second Alternative; mixed, two point boundary conditions):

Suppose that L(y) = y'', B1(y) = y(0) + y(1), B2(y) = y'(0) - y'(1). Then L*(z) = z'', B1*(z) = z(0) - z(1), B2*(z) = z'(0) + z'(1). All solutions of the homogeneous problem {L, B1, B2} are multiples of 2x - 1. A nontrivial solution of {L*, B1*, B2*} is the constant function 1, and v(x) = 1 forms a normalized basis for the nullspace of L* in M*. The function u(x) = x^2/2 satisfies L(u) = 1. Thus

G(x,t) = { A + Bx - x^2/2   if x < t,
         { C + Dx - x^2/2   if t < x.

We have four unknowns; we have the following four equations:

(1) 0 = G(0,t) + G(1,t) = A + C + D - 1/2,

(2) 0 = Gx(0,t) - Gx(1,t) = B - (D - 1),

(3) 0 = G(t+,t) - G(t-,t) = C - A + (D - B)t,

(4) 1 = Gx(t+,t) - Gx(t-,t) = D - B.

As expected, there is an infinity of solutions to these equations, which may be found by choosing D and then taking

B = D - 1,

2C = -(t + D) + 1/2,

A = C + t.
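As in the previous example, the family of solutions can be checked quickly; here is a Python sketch (ours) with one arbitrary choice of D:

```python
t, D = 0.4, 2.0          # t in (0,1); D may be chosen freely
B = D - 1
C = (0.5 - (t + D))/2
A = C + t

assert abs(A + C + D - 0.5) < 1e-12    # (1) 0 = G(0,t) + G(1,t)
assert abs(B - (D - 1)) < 1e-12        # (2) 0 = G_x(0,t) - G_x(1,t)
assert abs(C - A + (D - B)*t) < 1e-12  # (3) continuity of G at x = t
assert abs(D - B - 1) < 1e-12          # (4) G_x jumps by 1
```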


Exercises XV

(Review the general method or the ad hoc method for constructing Green functions.)

XV.1.    Find a Green function such that if f is continuous, then the equation y = Gf provides a solution for L(y) = f, y(0) = y'(0) = 0, where L is as defined below. In each case, first give L* and M* and verify that the first alternative holds.

(a) L(y) = y''  (b) L(y) = y'' + 4π^2 y  (c) L(y) = 2y'' + y' - y  (d) L(y)(x) = (e^x y'(x))'.        (Answers)

XV.2.    (Ad hoc method) Suppose that u is a function on [0,1] which satisfies L(u) = 0, u(0) = 0, and u'(0) = 1/a2(t). Let H(x,t) = 0 if x < t and H(x,t) = u(x - t) if t < x. Show that H is a Green function for the problem {L, B1(y) = y(0), B2(y) = y'(0)}.

XV.3.    (Equivalent integral equation) Let G(x,t) be the Green function for problem XV.1(a) above. Suppose that b and f are continuous functions. Let h be the function given by

h(x) = ∫_0^1 G(x,t) f(t) dt

and H(x,t) be the function given by H(x,t) = G(x,t) b(t) (Note: Here H is not the Heaviside function). Show these are equivalent:

(a) y''(x) - b(x)y(x) = f(x), y(0) = y'(0) = 0, and

(b) y(x) = ∫_0^1 H(x,t) y(t) dt + h(x).

XV.4.    Construct L*, B*, and G for the following :

(a) L(y) = y'', B1(y) = y(0), B2(y) = y(1),

(b) L(y) = y'', B1(y) = y(0) + y'(0), B2(y) = y(1) + y'(1).

(c) L(y) = y'', B1(y) = y(0) + y'(0), B2(y) = y(1) - y'(1),

(d) L(y) = y'' + 4π^2 y, B1(y) = y(0) + y'(0), B2(y) = y(1) - y'(1).

(e) L(y) = 2y''+ y'- y, B1(y) = y(0) + y'(0), B2(y) = y(1) - y'(1).

(f) L(y)(x) = (e^x y'(x))', B1(y) = y(0) + y'(0), B2(y) = y(1) - y'(1).


XV.5.    So that you will remember why we are constructing Green functions, use the above result to provide a solution for the equation y''(x) = x2, y(0) + y(1) = 0, and y'(0) + y'(1) = 0.

XV.6.   Give a formal argument, by interchanging limits and integrals, why if G satisfies (15.2), then

u(x) := ∫_0^1 G(x,t) f(t) dt

satisfies

L(u) = f.

Notice that the integral is the same as <G(x,t),f(t)>, if the inner product is calculated with t as the integration variable.

XV.7.   Give a formal argument to show that δ(x) is an even function, in the sense that δ(-x) = δ(x). (Use a change of variables.)

XV.8.   Let G1(x,t) and G2(x,t) be two Green functions for the differential equation

u''(x) - u(x) = f(x);

since the boundary conditions have not been specified, there are many Green functions for this equation. Now let G(x,t) be the Green function for the problem

u''(x) - u(x) = f(x), u(0) = u(1) = 0.

(a) Classify G(x,t) as an integral kernel as in Chapter XII. Is it separable? Is it small (in either sense)?

(b) Discuss how one could solve the integral equation

y(x) = ∫_0^1 G(x,t) y(t) dt + f(x)

with the methods of Chapter XII. To what differential equation is it equivalent?

XV.10.    Verify that each of these problems is second alternative and find L*,B1*, B2*,and G.

(a) L(y) = y'', B1(y) = y(0) - y(1), B2(y) = y'(0) - y'(1),

(b) L(y) = y'' + 9π^2 y, B1(y) = y(0) - y(1), B2(y) = y'(0) + y'(1),

(c) L(y) = y'' + y' - 2y, B1(y) = e y(0) - y(1), B2(y) = e y'(0) - y'(1).

XV.11.    Construct L*, B* and G for each of the following L's and with periodic boundary conditions y(0) = y(1), y'(0) = y'(1):

(a) L(y) = y'',

(b) L(y) = y'' + π^2 y,

(c) L(y) = 2y'' + y' - y,


XV.12.    To solve the equations L(y) = f, B1(y) = α, B2(y) = β, first construct G for the problem L(y) = f, B1(y) = 0, B2(y) = 0. Then construct functions z1 and z2 such that B1(z1) = 0, B2(z1) ≠ 0, and B1(z2) ≠ 0, B2(z2) = 0. The solution for the original problem is

y(x) = ∫_0^1 G(x,t) f(t) dt + β z1(x)/B2(z1) + α z2(x)/B1(z2).
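As a concrete illustration of this recipe (ours, not part of the text), take the hypothetical instance L(y) = y'', B1(y) = y(0), B2(y) = y(1). Then z1(x) = x and z2(x) = 1 - x have the required properties, the quick method of this chapter gives the Green function below, and the formula reproduces the solution of y'' = 2, y(0) = 3, y(1) = 5:

```python
def G(x, t):
    # Green function for y'' = f, y(0) = y(1) = 0
    # (quick method: u1(x) = x, u2(x) = 1 - x, Wronskian w = -1)
    return -x*(1 - t) if x < t else -t*(1 - x)

def solve(f, x, alpha, beta, n=2000):
    # y(x) = integral_0^1 G(x,t) f(t) dt + beta*z1(x)/B2(z1) + alpha*z2(x)/B1(z2)
    h = 1.0/n
    s = 0.5*(G(x, 0.0)*f(0.0) + G(x, 1.0)*f(1.0))
    for k in range(1, n):
        s += G(x, k*h)*f(k*h)
    return s*h + beta*x + alpha*(1 - x)   # here B2(z1) = B1(z2) = 1

alpha, beta = 3.0, 5.0
for x in (0.25, 0.5, 0.75):
    exact = x*x - x + beta*x + alpha*(1 - x)   # solves y'' = 2, y(0) = 3, y(1) = 5
    assert abs(solve(lambda t: 2.0, x, alpha, beta) - exact) < 1e-4
```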


XV.13.    Find a formula for u if u'' = f and

(a) u(0) = u(1) = 0.

(b) u(0) = u'(0) = 0.

(c) u(0) = 3, u(1) = 5,

(d) u'(0) = 3u(0), u'(1) = 5 u(1),

(e) u(0) = u(1), u'(0) = u'(1),

(f) u(0) = 3, u'(0) = 5,

(g) u'(0) = 3, u(0) = 5,

(h) u(0) = 0, ∫_0^1 u(x) dx = 0.


XV.14.    Find a formula for u if u'' + 9u = f and

(a) u(0) = 3, u'(1) = 5.

(b) u(0) - u'(0) = 3, u(1) = 5.

XV.15.    Find a formula for u if (x u'(x))' = f and u(1) = 0, u(2) = 5.


XV.16.    (a) Find conditions on f in order that u'' + 4π^2 u = f, u(0) = u(1), u'(0) = u'(1) should have a solution.

(b) Give the Green function for this problem.

(c) By finding the Green function for the problem L(y) = y'', y(0) = y(1), y'(0) = y'(1), re-write this equation as an integral equation such as was studied in the previous chapter.

XV.17.    Here is a linear differential operator with boundary conditions:

L(y)(x) = (e^x y')' and B1(y) = y(0), B2(y) = y'(0).

(a) Show that (e^x y')' z - y (e^x z')' = [e^x (z y' - z' y)]'.

(b) Give L* and B*.

(c) Give the Green function for the problem L(y) = f with B1(y) = B2(y) = 0.

(d) Rewrite the problem (ex y')' + sin(x) y(x) = f(x) , y(0) = y'(0) = 0 as an integral equation in the form

y = K(y) + F.

Be sure to identify K and F carefully.

XV.18.    Consider the differential equation: f is continuous on [0, \pi ] and

(sin(x) y'(x))' + 2 sin(x) y(x) = f(x)

y(0) = 0 = y( \pi ).

(a) In the context of this course, what is the appropriate space and linear operator L?

(b) What is the adjoint of L in this space? Explain your answer.

(c) Is this problem 1st or 2nd alternative?

(d) If possible, solve this problem with f(x) = x. If it is not possible, explain why not.
