Differential Operators

Integral Equations and the Method of Green Functions

James V. Herod*

*(c) Copyright 1993,1994,1995 by James V. Herod, herod@math.gatech.edu. All rights reserved.

Page maintained with additions by Evans M. Harrell, II, harrell@math.gatech.edu.



By now you should believe that except for arithmetic details, you can work any of these problems. We have come to the place where we need to get this problem into perspective.

We know that the requirements of Section 2.4 give the Kronecker delta symbol,

\delta_{jk} = 1 if j = k, and 0 otherwise, so that \sum_{k=1}^{n} \delta_{jk} v_k = v_j

for any vector v.

Thus A G = Id in components becomes

\sum_{k=1}^{n} A_{jk} G_{km} = \delta_{jm}.
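The finite-dimensional version of this statement can be checked directly; the following sketch (our own illustration, with a small matrix chosen for concreteness, assuming numpy) verifies that the inverse matrix G plays the role the delta plays for the identity.

```python
import numpy as np

# A small invertible matrix A; G = A^{-1} is the discrete "Green function".
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
G = np.linalg.inv(A)

# Check sum_k A[j,k] G[k,m] = delta_{jm}, i.e. A @ G is the identity matrix.
assert np.allclose(A @ G, np.eye(3))

# And G applied to A v recovers v, just as the delta picks out coordinates.
v = np.array([1.0, -2.0, 3.0])
assert np.allclose(G @ (A @ v), v)
```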

When trying to solve differential equations, we might hope to find G(\cdot,t) as a solution to the equation L(G(\cdot,t))(x) = \delta(x - t). Some understanding of this equation is in order, for the right side is not a function in the ordinary sense. As has already been pointed out, it is a "generalized function". The analogy with the matrix problem is pretty close: the delta function in essence gives the continuous coordinates of the identity operator:

\int_a^b \delta(x - t)\, f(t)\, dt = f(x),

for any a,b with a < x < b. Recall that the integral is a sort of continuous sum, so this is appropriate.
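One can see this sifting property numerically by replacing the delta with a narrow Gaussian; as the width shrinks, the integral approaches f(x). (This is our added illustration, assuming numpy; the test function and width values are arbitrary choices.)

```python
import numpy as np

# Approximate delta(x - t) by a narrow Gaussian delta_eps; the sifting
# property says int delta_eps(x - t) f(t) dt -> f(x) as eps -> 0.
def delta_eps(s, eps):
    return np.exp(-s**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

f = np.cos                        # any smooth test function
x = 0.3
t = np.linspace(-5.0, 5.0, 200001)
dt = t[1] - t[0]

approx = {eps: np.sum(delta_eps(x - t, eps) * f(t)) * dt
          for eps in (0.1, 0.01, 0.001)}
# approx[0.001] is already extremely close to f(0.3) = cos(0.3)
```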

We present here, not a proof, but an understanding that

L(G(\cdot,t))(x) = \delta(x - t).


The ideas should be examined and re-examined in later courses as a theory for generalized functions is developed:

Suppose that

y(x) = \int_0^1 G(x,t)\, f(t)\, dt

and that equation (2.6) holds. Intuition is a guide:

L(y)(x) = L\Big(\int_0^1 G(\cdot,t)\, f(t)\, dt\Big)(x) = \cdots = \int_0^1 \delta(x - t)\, f(t)\, dt = f(x).
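This intuition can be tested on a concrete operator. The sketch below (our own example, not from the text, assuming numpy) uses the classical Green function of L(y) = -y'' with y(0) = y(1) = 0, namely G(x,t) = x(1-t) for x <= t and t(1-x) for t <= x, and checks that y(x) = \int_0^1 G(x,t) f(t) dt reproduces the known solution.

```python
import numpy as np

# Green function of L(y) = -y'' with y(0) = y(1) = 0 (a standard example):
#   G(x,t) = x(1-t) if x <= t,   t(1-x) if t <= x.
def G(x, t):
    return np.where(x <= t, x * (1.0 - t), t * (1.0 - x))

# Pick f so the exact solution of -y'' = f is known: y(x) = sin(pi x).
f = lambda t: np.pi**2 * np.sin(np.pi * t)

t = np.linspace(0.0, 1.0, 4001)
dt = t[1] - t[0]
y = lambda x: np.sum(G(x, t) * f(t)) * dt    # y(x) = int_0^1 G(x,t) f(t) dt

# y(x) should match sin(pi x) up to quadrature error
```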

If one were asked to solve the equation L(y) = f, where L is a reasonable second order operator, in the context of a sophomore differential equations course, one would think of the variation of parameters formula. In that setting, and for second order problems with u_0 and u_1 linearly independent solutions of the homogeneous equation,

y(x) = C_0(x)\, u_0(x) + C_1(x)\, u_1(x),

where the coefficient functions satisfy

C_0'(x)\, u_0(x) + C_1'(x)\, u_1(x) = 0

C_0'(x)\, u_0'(x) + C_1'(x)\, u_1'(x) = f(x)/a_2(x),

so that, with w = u_0 u_1' - u_1 u_0' the Wronskian,

C_0'(x) = \frac{-u_1(x)\, f(x)}{a_2(x)\, w(x)}, \qquad C_1'(x) = \frac{u_0(x)\, f(x)}{a_2(x)\, w(x)}.
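As a concrete check of these formulas (an added example, with the operator chosen by us), take L(y) = y'' so that a_2(x) = 1, with homogeneous solutions u_0(x) = 1 and u_1(x) = x, Wronskian w = u_0 u_1' - u_1 u_0' = 1, and f(x) = 6x. Then

```latex
C_0'(x) = \frac{-u_1(x) f(x)}{a_2(x) w(x)} = -6x^2 \;\Rightarrow\; C_0(x) = -2x^3,
\qquad
C_1'(x) = \frac{u_0(x) f(x)}{a_2(x) w(x)} = 6x \;\Rightarrow\; C_1(x) = 3x^2,
```

so y(x) = C_0(x) u_0(x) + C_1(x) u_1(x) = -2x^3 + 3x^3 = x^3, and indeed y''(x) = 6x = f(x).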

This suggests an interpretation for "solution" of the second order equation:

L(G(\cdot,t))(x) = \delta(x - t).

Namely, G(\cdot,t) is the continuous function given by

(a) G(x,t) = C_0(x,t)\, u_0(x) + C_1(x,t)\, u_1(x)

(b) \frac{\partial C_0}{\partial x}(x,t)\, u_0(x) + \frac{\partial C_1}{\partial x}(x,t)\, u_1(x) = 0

(c) \frac{\partial C_0}{\partial x}(x,t)\, u_0'(x) + \frac{\partial C_1}{\partial x}(x,t)\, u_1'(x) = \delta(x - t)/a_2(x).

As above, the distribution equations should have solutions

(d) \frac{\partial C_0}{\partial x}(x,t) = \frac{-u_1(x)\, \delta(x - t)}{a_2(x)\, w(x)}; \qquad (e) \frac{\partial C_1}{\partial x}(x,t) = \frac{u_0(x)\, \delta(x - t)}{a_2(x)\, w(x)}.
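Integrating (d) and (e) across x = t produces a function continuous in x whose derivative jumps by 1/a_2(t) at x = t. The sketch below (our concrete example, assuming numpy) checks both properties for the Green function of y'' = f with y(0) = y(1) = 0, which this construction yields as G(x,t) = x(t-1) for x <= t and t(x-1) for t <= x.

```python
import numpy as np

# Green function of y'' = f (so a2 = 1) with y(0) = y(1) = 0:
#   G(x,t) = x(t-1) if x <= t,   t(x-1) if t <= x.
def G(x, t):
    return np.where(x <= t, x * (t - 1.0), t * (x - 1.0))

t0 = 0.4
h = 1e-6
jump_G  = G(t0 + h, t0) - G(t0 - h, t0)         # continuity of G across x = t
left    = (G(t0, t0) - G(t0 - h, t0)) / h       # G_x from below
right   = (G(t0 + h, t0) - G(t0, t0)) / h       # G_x from above
jump_Gx = right - left                          # should equal 1/a2(t0) = 1
```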

THEOREM. If, for each t, G(\cdot,t) is in M and L(G(\cdot,t))(x) = \delta(x - t), then G satisfies the four equations of Section 2.4.

Proof. We hope to recognize the four equations which we used to define G for second order problems as arising from the above requirements for G. Two of those equations come from asking that G(.,t) should satisfy the two boundary equations. One other, G(t+,t) - G(t-,t) = 0, comes from the requirement that G(.,t) should be continuous. To derive the equation

G_x(t^+,t) - G_x(t^-,t) = 1/a_2(t),

we first compute G_x(x,t):

G_x(x,t) = C_{0,x}(x,t)\, u_0(x) + C_0(x,t)\, u_0'(x) + C_{1,x}(x,t)\, u_1(x) + C_1(x,t)\, u_1'(x)
         = C_0(x,t)\, u_0'(x) + C_1(x,t)\, u_1'(x).

This last equality follows from (b). To find

G_x(t^+,t) - G_x(t^-,t) = [C_0(t^+,t) - C_0(t^-,t)]\, u_0'(t) + [C_1(t^+,t) - C_1(t^-,t)]\, u_1'(t),

we must evaluate

C_0(t^+,t) - C_0(t^-,t) = \int_{t^-}^{t^+} \frac{\partial C_0}{\partial x}(x,t)\, dx = \frac{-u_1(t)}{a_2(t)\, w(t)},

where the last step integrates (d) across the point x = t.

In a similar manner,

C_1(t^+,t) - C_1(t^-,t) = \frac{u_0(t)}{a_2(t)\, w(t)}.


Consequently,

G_x(t^+,t) - G_x(t^-,t) = \frac{-u_1(t)\, u_0'(t) + u_0(t)\, u_1'(t)}{a_2(t)\, w(t)} = \frac{1}{a_2(t)}.
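As a quick sanity check (a concrete example we supply, not from the text), take L(y) = y'' so that a_2 = 1, with u_0(x) = 1 and u_1(x) = x; then w = u_0 u_1' - u_1 u_0' = 1, and

```latex
G_x(t^+,t) - G_x(t^-,t) = \frac{-t \cdot 0 + 1 \cdot 1}{1 \cdot 1} = 1 = \frac{1}{a_2(t)}.
```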

Hence, the inverse of the differential operator L on the set M is obtained by finding the function G(\cdot,t) in M which satisfies (2.6),

L(G(\cdot,t))(x) = \delta(x - t).




1. Give a formal argument, by interchanging limits and integrals, that if G satisfies (2.6), then

u(x) := \int_0^1 G(x,t)\, f(t)\, dt

satisfies

L(u) = f.

Notice that the integral is the same as <G(x,t),f(t)>, if the inner product is calculated with t as the integration variable.

2. Give a formal argument to show that \delta(x) is an even function, in the sense that \delta(-x) = \delta(x). (Use a change of variables.)

3. Let G_1(x,t) and G_2(x,t) be two Green functions for the differential equation

u''(x) - u(x) = f(x).

Since the boundary conditions have not been specified, there will be many Green functions. For parts (a) and (b), take G(x,t) to be the Green function for the boundary value problem

u''(x) - u(x) = f(x), \qquad u(0) = u(1) = 0.

(a) Classify G(x,t) as an integral kernel as in Chapter I. Is it separable? Is it small (in either sense)?

(b) Discuss how one could solve the integral equation

y(x) = \int_0^1 G(x,t)\, y(t)\, dt + f(x)

with the methods of Chapter I. To what differential equation is it equivalent?
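One concrete way to proceed (a sketch with G and f chosen by us for illustration, assuming numpy) is to discretize the integral equation on a grid, so that it becomes a linear system (I - K)y = f; when the kernel is small in norm, the Neumann series of Chapter I converges as well.

```python
import numpy as np

# Discretize y(x) = int_0^1 G(x,t) y(t) dt + f(x) as (I - K) y = f.
def G(x, t):                        # Green function of -y'' with zero BCs
    return np.where(x <= t, x * (1.0 - t), t * (1.0 - x))

n = 400
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
K = G(t[:, None], t[None, :]) * dt  # quadrature turns the kernel into a matrix
f = np.sin(np.pi * t)

y = np.linalg.solve(np.eye(n) - K, f)   # direct solve; since ||K|| < 1 here,
                                        # the Neumann series y = f + Kf + ...
                                        # converges too
```

For this particular choice, applying L(y) = -y'' to both sides shows the integral equation is equivalent to -y'' - y = \pi^2 \sin(\pi x) with y(0) = y(1) = 0, whose solution is y(x) = \frac{\pi^2}{\pi^2 - 1}\sin(\pi x).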
