Linear Methods of Applied Mathematics
Evans M. Harrell II and James V. Herod*
version of 14 March 2000
Here is a Mathematica notebook with calculations for this chapter, and here are some Maple worksheets with similar calculations.
In this chapter we shall learn how to solve integral equations in three situations: when the kernel is separable, when the kernel is small enough for iteration to converge uniformly, and when the kernel satisfies the Hilbert-Schmidt condition.
Definition XIV.1 Suppose that there are an integer n and functions a_{1}, ..., a_{n} and b_{1}, ..., b_{n} such that, for each p, a_{p} and b_{p} are in L^{2}[0,1]. Then K has a separable kernel if its kernel is given by

K(x,t) = a_{1}(x) b_{1}(t) + a_{2}(x) b_{2}(t) + ... + a_{n}(x) b_{n}(t),

or, using the notation of inner products,

Ky(x) = a_{1}(x) <b_{1}, y> + ... + a_{n}(x) <b_{n}, y>.

For such a kernel, the equation y = Ky + f reads

y(x) = a_{1}(x) ∫_{0}^{1} b_{1}(t) y(t) dt + ... + a_{n}(x) ∫_{0}^{1} b_{n}(t) y(t) dt + f(x),    (14.1)

so there is a sequence {c_{p}} of numbers such that

y(x) = c_{1} a_{1}(x) + ... + c_{n} a_{n}(x) + f(x).
Why is this? In (14.1), all the definite integrals over t are just numbers. Even though we do not know their values yet, we can call them c_{p} and proceed to determine their values with a bit of algebraic labor.
Suppose

y(x) = c_{1} a_{1}(x) + ... + c_{n} a_{n}(x) + f(x).

Substitute this in the equation to be solved: since c_{p} = ∫_{0}^{1} b_{p}(t) y(t) dt,

c_{p} = ∫_{0}^{1} b_{p}(t) [ c_{1} a_{1}(t) + ... + c_{n} a_{n}(t) + f(t) ] dt,

and we see that

c_{p} = sum_{q} c_{q} ∫_{0}^{1} b_{p}(t) a_{q}(t) dt + ∫_{0}^{1} b_{p}(t) f(t) dt,  for p = 1, ..., n.

This now reduces to a matrix problem:
Define K and f to be the matrix and vector so defined that the last equation is rewritten as
c = K c + f.
We now employ ideas from linear algebra. The equation c = K c + f has exactly one solution provided that det(I - K) is not zero. Once the vector c is found, we have a formula for y(x).
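The reduction to a matrix problem is easy to carry out numerically. Here is a minimal sketch, with an illustrative separable kernel K(x,t) = x t (so a_{1}(x) = x, b_{1}(t) = t) and f(x) = 1 on [0,1]; neither choice comes from the text.

```python
import numpy as np

# Sketch: solve y = Ky + f for a separable kernel by reducing to c = Kc + f.
# Illustrative choices (not from the text): K(x,t) = x*t, f(x) = 1 on [0,1].

x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
w = np.full_like(x, h); w[0] = w[-1] = h / 2   # trapezoid-rule weights

a = [x]                  # a_p sampled on the grid
b = [x]                  # b_p sampled on the grid
f = np.ones_like(x)

n = len(a)
# K_pq = int_0^1 b_p(t) a_q(t) dt ,  f_p = int_0^1 b_p(t) f(t) dt
Kmat = np.array([[w @ (b[p] * a[q]) for q in range(n)] for p in range(n)])
fvec = np.array([w @ (b[p] * f) for p in range(n)])

# c = Kc + f  <=>  (I - K) c = f
c = np.linalg.solve(np.eye(n) - Kmat, fvec)

# Recovered solution y(x) = f(x) + sum_p c_p a_p(x)
y = f + sum(c[p] * a[p] for p in range(n))
print(c[0])              # exact value for this kernel is 3/4
```

For this kernel the exact solution is y(x) = 1 + (3/4) x, since c = (1 - 1/3)^{-1} (1/2) = 3/4, so the printed value should be close to 0.75.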
Example XIV.2: In the exercises of chapter XIII, it should have been established that if
K(x,t) = 1 + sin(x) cos(t),
then
K*(x,t) = 1 + sin(t) cos(x).
Also,
y = Ky has solution y(x) = 1
and
y = K*y has solution y(x) = + 2 cos(x).
It is the promise of the Fredholm Alternative theorems that
y = Ky + f
has a solution provided that f is orthogonal to every solution of y = K*y.
Let us try to solve y = Ky + f and watch to see where the requirement that f should be perpendicular to the function +2 cos(t) appears.
To solve y = Ky + f is to solve
As usual we see that the solution must be of the form y(x) = a + b sin(x) + f(x), and substitute this for y:
From this, we get the algebraic equations
Hence, in our guess for y, we find that a can be anything and that b must be
and b must also be
The naive pupil might think this means there are two (possibly contradictory) requirements on b. The third of the Fredholm Alternative theorems assures the student that there is only one requirement!
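The Fredholm alternative can be watched numerically. The sketch below uses a kernel chosen for transparency, not the text's: for K(x,t) = 3 x t on [0,1], the equation y = Ky has the nontrivial solution y(x) = x, and K = K*, so y = Ky + f is solvable exactly when <f, x> = 0 - and then, as above, one coefficient is left free.

```python
import numpy as np

# Illustration (kernel not from the text): for K(x,t) = 3xt on [0,1],
# the reduced matrix is the 1x1 matrix [3 * int_0^1 t^2 dt] = [1], so
# I - K is singular and solvability requires <f, x> = 0.

x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
w = np.full_like(x, h); w[0] = w[-1] = h / 2   # trapezoid-rule weights

K11 = w @ (3 * x * x)        # equals 1, so I - K is singular
f_good = 1 - 1.5 * x         # <f_good, x> = 0: solvable, with a free constant
f_bad = np.ones_like(x)      # <f_bad, x> = 1/2: no solution

print(K11, w @ (f_good * x), w @ (f_bad * x))
```

The printout shows K11 near 1, the solvable right-hand side orthogonal to x, and the unsolvable one not.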
Take φ_{0}(x) to be f(x) and φ_{1} to be defined by

φ_{1}(x) = K(φ_{0})(x) + f(x);

in general, φ_{p+1}(x) = K(φ_{p})(x) + f(x).
It is reasonable to ask: does this generated sequence converge to a limit, and in what sense does it converge? The answer to both questions can be found under appropriate hypotheses on K.
Theorem XIV.3. If K satisfies the condition that

max_{x} ∫_{0}^{1} | K(x,t) | dt < 1,

then lim_{p} φ_{p}(x) exists and the convergence is uniform on [0,1] - in the sense that if u = lim_{p} φ_{p} then

lim_{p} max_{x} | u(x) - φ_{p}(x) | = 0.
Furthermore, if p is a positive integer, the distance between successive iterates can be estimated: writing k = max_{x} ∫_{0}^{1} | K(x,t) | dt,

max_{x} | φ_{p+1}(x) - φ_{p}(x) | ≤ k max_{x} | φ_{p}(x) - φ_{p-1}(x) |.

Inductively, this does not exceed

k^{p} max_{x} | φ_{1}(x) - φ_{0}(x) |.
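The iteration of Theorem XIV.3 is easy to carry out on a grid. Here is a sketch with an illustrative small kernel, K(x,t) = (1/2) x t on [0,1] and f(x) = 1 (neither taken from the text); for this kernel max_{x} ∫_{0}^{1} |K(x,t)| dt = 1/4 < 1, so the iterates converge uniformly.

```python
import numpy as np

# Sketch of the iteration phi_{p+1} = K phi_p + f for the illustrative
# kernel K(x,t) = (1/2) x t on [0,1] with f(x) = 1.

x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
w = np.full_like(x, h); w[0] = w[-1] = h / 2   # trapezoid-rule weights

K = 0.5 * np.outer(x, x)       # K[i, j] = K(x_i, x_j)
f = np.ones_like(x)

phi = f.copy()                 # phi_0 = f
for p in range(30):
    phi = K @ (w * phi) + f    # (K phi)(x) = int_0^1 K(x,t) phi(t) dt

# The exact solution of y = Ky + f here is y(x) = 1 + 0.3 x.
print(np.max(np.abs(phi - (1 + 0.3 * x))))   # uniform error, near 0
```

The exact solution follows the separable-kernel computation above: y = 1 + Cx with C = (1/4)/(1 - 1/6) = 0.3, and the printed maximum error is essentially quadrature error.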
Model Problem XIV.5. Consider the integral equation
where g(x) is given. We wish to solve for u(x), and we try the method of iteration.
We begin with the guess φ_{0}(x) = g(x), and calculate the next couple of iterates:
If we now calculate the further iterates, we find inductively that
It is a miracle when the series for K sums in closed form like this, but that is not important in applications, since the convergence of the Neumann series implies that we can calculate the answer to any desired accuracy.
Theorem XIV.6. If K satisfies the Hilbert-Schmidt condition (14.4), then lim_{p} φ_{p}(x) exists and the convergence is in the r.m.s. sense, that is:

lim_{p} || u(x) - φ_{p}(x) || = 0.
INDICATION OF PROOF. The analysis of the nature of the convergence will go like this:
As a consequence, the sequence φ_{n} is Cauchy convergent:
Let's state the conclusion in a careful way:
Corollary XIV.7. If ,
Then
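Convergence in the r.m.s. sense can also be checked numerically. The sketch below uses the illustrative kernel K(x,t) = x t and f(x) = 1 (neither from the text); its double integral ∫∫ K(x,t)^{2} dx dt equals 1/9, comfortably finite and less than 1, so a Hilbert-Schmidt-type condition holds.

```python
import numpy as np

# Sketch: Hilbert-Schmidt double integral and r.m.s. convergence of the
# iterates for the illustrative kernel K(x,t) = x t on [0,1]^2, f(x) = 1.

x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
w = np.full_like(x, h); w[0] = w[-1] = h / 2   # trapezoid-rule weights

K = np.outer(x, x)
hs = w @ (K**2 @ w)            # int int K(x,t)^2 dx dt, close to 1/9
print(hs)

f = np.ones_like(x)
phi = f.copy()
for p in range(40):
    phi = K @ (w * phi) + f    # phi_{p+1} = K phi_p + f

# Exact solution of y = Ky + f for this kernel: y(x) = 1 + (3/4) x.
rms = np.sqrt(w @ (phi - (1 + 0.75 * x))**2)
print(rms)                     # r.m.s. error, near 0
```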
Before addressing the final case - where
Re-examining the iteration process:
φ_{0}(x) = f(x),
φ_{1}(x) = K(φ_{0})(x) + f(x),
φ_{2}(x) = K(K(φ_{0}))(x) + K(f)(x) + f(x).
One writes φ_{0} = f, φ_{1} = Kf + f, φ_{2} = K[Kf + f] + f = K^{2}f + Kf + f, .....
In fact,

K^{2}f(x) = ∫_{0}^{1} K(x,s) ( ∫_{0}^{1} K(s,t) f(t) dt ) ds = ∫_{0}^{1} ( ∫_{0}^{1} K(x,s) K(s,t) ds ) f(t) dt.

Hence, the kernel K_{2} associated with K^{2} is

K_{2}(x,t) = ∫_{0}^{1} K(x,s) K(s,t) ds.

Inductively,

K_{p+1}(x,t) = ∫_{0}^{1} K(x,s) K_{p}(s,t) ds,

and

K^{p}f(x) = ∫_{0}^{1} K_{p}(x,t) f(t) dt.
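The iterated kernels are straightforward to compute on a grid. A sketch for the illustrative kernel K(x,t) = x t (not from the text), whose iterated kernel is K_{2}(x,t) = x t ∫_{0}^{1} s^{2} ds = x t / 3:

```python
import numpy as np

# Sketch: the iterated kernel K_2(x,t) = int_0^1 K(x,s) K(s,t) ds on a grid,
# for the illustrative kernel K(x,t) = x t (so K_2(x,t) = x t / 3).

s = np.linspace(0.0, 1.0, 1001)
h = s[1] - s[0]
w = np.full_like(s, h); w[0] = w[-1] = h / 2   # trapezoid-rule weights

K = np.outer(s, s)             # K[i, j] = K(s_i, s_j)
K2 = K @ (w[:, None] * K)      # K2[i, j] ~ int_0^1 K(s_i, u) K(u, s_j) du

print(K2[-1, -1])              # K_2(1, 1) = 1/3
```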
We have, in this section, conditions which imply that