# Chapter 3 Two-point Boundary Conditions

Very often, the boundary conditions that determine the solution of an ordinary differential equation are applied not just at a single value of the independent variable, $x$, but at two points, ${x}_{1}$ and ${x}_{2}$. This type of problem is inherently different from the "initial value problems" discussed previously. Initial value problems have single-point boundary conditions: there must be more than one condition if the system is of order higher than one, but all the conditions are applied at the same place (or time). In two-point problems we have boundary conditions at more than one place (more than one value of the independent variable), and we are interested in solving for the dependent variable(s) in the interval ${x}_{1}\le x\le {x}_{2}$ of the independent variable.

## 3.1  Examples of Two-Point Problems

Many examples of two-point problems arise from steady flux conservation in the presence of sources.
In electrostatics the electric potential $\mathit{\phi }$ is related to the charge density $\mathit{\rho }$ through one of the Maxwell equations: a form of Poisson's equation
 $\nabla \cdot \mathbit{E}=-{\nabla }^{2}\mathit{\phi }=\mathit{\rho }/{\mathit{\epsilon }}_{0}$ $\left(3.1\right)$
where $\mathbit{E}$ is the electric field and ${\mathit{\epsilon }}_{0}$ is the permittivity of free space.

Figure 3.1: Electrostatic configuration independent of $y$ and $z$ with conducting boundaries at ${x}_{1}$ and ${x}_{2}$ where $\mathit{\phi }=0$. This is a second-order two-point boundary problem.
In a slab geometry where $\mathit{\rho }$ varies in a single direction (coordinate) $x$, but not in $y$ or $z$, an ordinary differential equation arises
 $\frac{{d}^{2}\mathit{\phi }}{{\mathit{dx}}^{2}}=-\frac{\mathit{\rho }\left(x\right)}{{\mathit{\epsilon }}_{0}}.$ $\left(3.2\right)$
If we suppose (see Fig. 3.1) that the potential is held equal to zero at two planes, ${x}_{1}$ and ${x}_{2}$, by placing grounded conductors there, then its variation between them depends upon the distribution of the charge density $\mathit{\rho }\left(x\right)$. Solving for $\mathit{\phi }\left(x\right)$ is a two-point problem. In effect, this is a conservation equation for electric flux, $\mathbit{E}$. Its divergence is equal to the source density, which is the charge density.

Figure 3.2: Heat balance equation in cylindrical geometry leads to a two-point problem with conditions consisting of fixed temperature at the edge, $r=a$, and zero gradient at the center, $r=0$.
A second-order two-point problem also arises from steady heat conduction. See Fig. 3.2. Suppose a cylindrical reactor fuel rod experiences volumetric heating from the nuclear reactions inside it with a power density $p\left(r\right)$ (Watts per cubic meter), that varies with cylindrical radius $r$.
Its boundary, at $r=a$ say, is held at a constant temperature ${T}_{a}$. If the thermal conductivity of the rod is $\mathit{\kappa }\left(r\right)$, then the radial heat flux density (Watts per square meter) is
 $q=-\mathit{\kappa }\frac{dT}{\mathit{dr}}.$ $\left(3.3\right)$
In steady state, the total heat flux across the surface at radius $r$ (per unit rod length) must equal the total heating within it:
 $2\mathit{\pi }rq=-2\mathit{\pi }r\mathit{\kappa }\left(r\right)\frac{dT}{\mathit{dr}}={\int }_{0}^{r}p\left(r\text{'}\right)2\mathit{\pi }r\text{'}\mathit{dr}\text{'}.$ $\left(3.4\right)$
Differentiating this equation we obtain:
 $\frac{d}{\mathit{dr}}\left(r\mathit{\kappa }\frac{dT}{\mathit{dr}}\right)=-rp\left(r\right).$ $\left(3.5\right)$
This second-order differential equation requires two boundary conditions. One is $T\left(a\right)={T}_{a}$, but the other is less immediately obvious. It is that the solution must be nonsingular at $r=0$, which requires that the derivative of $T$ be zero there:
 ${\frac{dT}{\mathit{dr}}|}_{r=0}=0.$ $\left(3.6\right)$

## 3.2  Shooting

### 3.2.1  Solving two-point problems by initial-value iteration

One approach to computing the solution to two-point problems is to use the same technique used to solve initial value problems. We treat ${x}_{1}$ as if it were the starting point of an initial value problem. We choose enough boundary conditions there to specify the entire solution. For a second-order equation such as (3.2) or (3.5), we would need to choose two conditions: $y\left({x}_{1}\right)={y}_{1}$, and $\mathit{dy}/\mathit{dx}{|}_{{x}_{1}}=s$, say, where ${y}_{1}$ and $s$ are the chosen values. Only one of these is actually the boundary condition to be applied at the initial point, ${x}_{1}$. We'll suppose it is ${y}_{1}$. The other, $s$, is an arbitrary guess at the start of our solution procedure.
Given these initial conditions, we can solve for $y$ over the entire range ${x}_{1}\le x\le {x}_{2}$. When we have done so for this case, we can find the value $y$ at ${x}_{2}$ (or its derivative if the original boundary conditions there required it). Generally, this first solution will not satisfy the actual two-point boundary condition at ${x}_{2}$, which we'll take as $y={y}_{2}$. That's because our guess of $s$ was not correct.

Figure 3.3: Multiple successive shots from a cannon can take advantage of observations of where the earlier ones hit, in order to iterate the aiming elevation $s$ until they strike the target.
It's as if we are aiming at the point $\left({x}_{2},{y}_{2}\right)$ with a cannon located at $\left({x}_{1},{y}_{1}\right)$ (see Fig. 3.3). We elevate the cannon so that the cannonball's initial angle is $\mathit{dy}/\mathit{dx}{|}_{{x}_{1}}=s$, which is our initial guess at the best aim. We shoot. The cannonball flies over (within our metaphor, the initial value solution is found), but it is not at the correct height when it reaches ${x}_{2}$, because our first guess at the aim was imperfect. What do we do? We see the height at which the cannonball hits, above or below the target. We adjust our aim accordingly with a new elevation ${s}_{2}$, $\mathit{dy}/\mathit{dx}{|}_{{x}_{1}}={s}_{2}$, and shoot again. Then we iteratively refine our aim, taking as many shots as necessary and improving the aim each time, till we hit the target. This is the "shooting" method of solving a two-point problem. The cannonball's trajectory stands for the initial value integration with assumed initial condition.
One question that is left open in this description is exactly how we refine our aim. That is, how do we change the guess of the initial slope $s$ so as to get a solution that is nearer to the correct value of $y\left({x}_{2}\right)$? One of the easiest and most robust ways to do this is by bisection.

### 3.2.2  Bisection

Suppose we have a continuous function $f\left(s\right)$ over some interval $\left[{s}_{l},{s}_{u}\right]$ (i.e. ${s}_{l}\le s\le {s}_{u}$), and we wish to find a solution to $f\left(s\right)=0$ within that range. If $f\left({s}_{l}\right)$ and $f\left({s}_{u}\right)$ have opposite signs, then we know that there is a solution (a "root" of the equation) somewhere between ${s}_{l}$ and ${s}_{u}$. For definiteness in our description, we'll take $f\left({s}_{l}\right)\le 0$ and $f\left({s}_{u}\right)\ge 0$. To get a better estimate of where $f=0$ is, we can bisect the interval and examine the value of $f$ at the point $s=\left({s}_{l}+{s}_{u}\right)/2$. If $f\left(s\right)<0$, then we know that a solution must be in the half interval $\left[s,{s}_{u}\right]$, whereas if $f\left(s\right)>0$, then a solution must be in the other interval $\left[{s}_{l},s\right]$. We choose whichever half-interval the solution lies in, and update one or other end of our interval to be the new $s$-value. In other words, we set either ${s}_{l}=s$ or ${s}_{u}=s$ respectively. The new interval $\left[{s}_{l},{s}_{u}\right]$ is half the length of the original, so it gives us a better estimate of where the solution value is.

Figure 3.4: Bisection successively divides in two an interval in which there is a root, always retaining the subinterval in which a root lies.
Now we just iterate the above procedure, as illustrated in Fig. 3.4. At each step we get an interval of half the length of the previous step, in which we know a solution lies. Eventually the interval becomes small enough that its extent can be ignored; we then know the solution accurately enough, and can stop the iteration.
The wonderful thing about bisection is that it is highly efficient: it is guaranteed to converge in "logarithmic time". If we start with an interval of length $L$, then at the $k$th iteration the interval length is $L/{2}^{k}$. So if the tolerance with which we need the $s$-value solution is $\mathit{\delta }$ (generally a small length), the number of iterations we must take before convergence is $N={\mathrm{log}}_{2}\left(L/\mathit{\delta }\right)$. For example, if $L/\mathit{\delta }={10}^{6}$, then $N=20$. This is a quite modest number of iterations even for a very high degree of refinement.
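As a concrete sketch of this procedure (the function name, tolerance, and sign handling are illustrative choices, not prescribed by the text), bisection takes only a few lines:

```python
def bisect(f, sl, su, tol=1e-9):
    """Find a root of f bracketed by [sl, su], assuming f(sl) and f(su)
    have opposite signs."""
    fl = f(sl)  # sign of f at the sl end; it never changes as sl moves
    while abs(su - sl) > tol:
        s = 0.5 * (sl + su)
        if f(s) * fl > 0:   # f(s) has the same sign as f(sl): root in [s, su]
            sl = s
        else:               # opposite sign (or zero): root in [sl, s]
            su = s
    return 0.5 * (sl + su)
```

Each pass halves the bracket, so the loop runs about ${\mathrm{log}}_{2}\left(L/\mathit{\delta }\right)$ times, exactly as counted above.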
There are iterative methods of root finding that converge faster than bisection for well behaved functions. One is "Newton's method", which may succinctly be stated as ${s}_{k+1}={s}_{k}-f\left({s}_{k}\right)/{\mathit{df}/\mathit{ds}|}_{{s}_{k}}$. It converges in a few steps when the starting guess is not too far from the solution. Unlike bisection, it does not require two starting points on opposite sides of the root. However, Newton's method (1) requires derivatives of the function, which makes it more complicated to code, (2) is less robust, because it takes big steps near $\mathit{df}/\mathit{ds}=0$, and may even step in the wrong direction and not converge at all in some cases. Bisection is guaranteed to converge after a modest number of steps. Robustness is in practice usually more important than speed.${}^{16}$
In the context of our shooting solution of a two-point problem, the function $f$ is the error in the boundary value at the second point, $y\left({x}_{2}\right)-{y}_{2}$, of the initial-value solution $y\left(x\right)$ that takes initial value $s$ for its derivative at ${x}_{1}$. The bisection generally adjusts the initial value $s$ until $|y\left({x}_{2}\right)-{y}_{2}|$ is less than some tolerance (rather than requiring some tolerance on $s$).
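A minimal shooting solver along these lines might look as follows. This is a sketch under stated assumptions: the RK4 marching scheme, the step count, and the function names are illustrative choices (the text does not prescribe a particular initial-value integrator), and the slope must be bracketed by the supplied `sl`, `su`.

```python
def integrate(g, x1, y1, s, x2, n=200):
    # March y' = v, v' = g(x) from x1 to x2 by classic RK4,
    # starting from y(x1) = y1 with the guessed slope v(x1) = s.
    h = (x2 - x1) / n
    x, y, v = x1, y1, s
    for _ in range(n):
        k1y, k1v = v, g(x)
        k2y, k2v = v + 0.5 * h * k1v, g(x + 0.5 * h)
        k3y, k3v = v + 0.5 * h * k2v, g(x + 0.5 * h)
        k4y, k4v = v + h * k3v, g(x + h)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        x += h
    return y  # the "height" of the shot when it reaches x2

def shoot(g, x1, y1, x2, y2, sl, su, tol=1e-10):
    # f(s) is the miss distance y(x2) - y2; bisect on the aim s,
    # assuming f(sl) and f(su) have opposite signs.
    f = lambda s: integrate(g, x1, y1, s, x2) - y2
    while abs(su - sl) > tol:
        s = 0.5 * (sl + su)
        if f(s) * f(sl) > 0:
            sl = s
        else:
            su = s
    return 0.5 * (sl + su)
```

For example, $y''=-1$ with $y(0)=y(1)=0$ has the solution $y=x(1-x)/2$, so the bisection homes in on the initial slope $s=1/2$.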

## 3.3  Direct Solution

The shooting method, while sometimes useful for situations where adaptive step-length is a major benefit, is rather a back-handed way of solving two-point problems. It is very often better to solve the problem by constructing a finite difference system to represent the differential equation including its boundary conditions, and then solve that system directly.

### 3.3.1  Second-order finite differences

First let's consider how one ought to represent a second-order derivative as finite differences. Suppose we have a uniformly spaced grid (or mesh) of values of the independent variable ${x}_{n}$ such that ${x}_{n+1}-{x}_{n}=\mathit{\Delta }x$. The natural definition of the first derivative is
 ${\frac{\mathit{dy}}{\mathit{dx}}|}_{n+1/2}=\frac{\mathit{\Delta }y}{\mathit{\Delta }x}=\frac{{y}_{n+1}-{y}_{n}}{{x}_{n+1}-{x}_{n}};$ $\left(3.7\right)$
and this should be regarded as an estimate of the value at the midpoint ${x}_{n+1/2}=\left({x}_{n}+{x}_{n+1}\right)/2$, which we denote via a half-integral index $n+1/2$. The second derivative is the derivative of the derivative. Its most natural definition, therefore, is
 ${\frac{{d}^{2}y}{{\mathit{dx}}^{2}}|}_{n}=\frac{\mathit{\Delta }\left(\mathit{dy}/\mathit{dx}\right)}{\mathit{\Delta }x}=\frac{\left({\mathit{dy}/\mathit{dx}|}_{n+1/2}-{\mathit{dy}/\mathit{dx}|}_{n-1/2}\right)}{{x}_{n+1/2}-{x}_{n-1/2}},$ $\left(3.8\right)$
as illustrated in Fig. 3.5.

Figure 3.5: Discrete second derivative at $n$ is the difference between the discrete derivatives at $n+\frac{1}{2}$ and $n-\frac{1}{2}$. In a uniform mesh, it is divided by the same $\mathit{\Delta }x$.
Because the first derivative is the value at $n+1/2$, the second derivative (the derivative of the first derivative) is the value at a point midway between $n+1/2$ and $n-1/2$, i.e. at $n$. Substituting from the previous equation (3.7) we get${}^{17}$:
 ${\frac{{d}^{2}y}{{\mathit{dx}}^{2}}|}_{n}=\frac{\left({y}_{n+1}-{y}_{n}\right)/\mathit{\Delta }x-\left({y}_{n}-{y}_{n-1}\right)/\mathit{\Delta }x}{\mathit{\Delta }x}=\frac{{y}_{n+1}-2{y}_{n}+{y}_{n-1}}{\mathit{\Delta }{x}^{2}}.$ $\left(3.9\right)$
Now think of the entire mesh stretching from $n=1$ to $n=N$. The values ${y}_{n}$ at all the nodes can be considered to be a column vector of length $N$. The second derivative can then be considered to be a matrix operating on that column vector, to give the values of eq. (3.9). So, written in matrix form we have:
 $\left(\begin{array}{c}\hfill {{d}^{2}y/{\mathit{dx}}^{2}|}_{1}\hfill \\ \hfill :\hfill \\ \hfill {{d}^{2}y/{\mathit{dx}}^{2}|}_{n}\hfill \\ \hfill :\hfill \\ \hfill {{d}^{2}y/{\mathit{dx}}^{2}|}_{N}\hfill \end{array}\right)=\frac{1}{\mathit{\Delta }{x}^{2}}\left(\begin{array}{ccccc}\hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill \end{array}\right)\left(\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill :\hfill \\ \hfill {y}_{n}\hfill \\ \hfill :\hfill \\ \hfill {y}_{N}\hfill \end{array}\right),$ $\left(3.10\right)$
where the square $N×N$ matrix has diagonal elements equal to $-2$. On the adjacent diagonals (indices $n,n+1$ and $n,n-1$), sometimes called the super- and subdiagonals, it has $1$; everywhere else it is zero. This overall form is called tridiagonal.
If we are considering the equation
 $\frac{{d}^{2}y}{{\mathit{dx}}^{2}}=g\left(x\right)$ $\left(3.11\right)$
where $g\left(x\right)$ is some function (for example $g=-\mathit{\rho }/{\mathit{\epsilon }}_{0}$ for our electrostatic example) then the equation is represented by putting the column vector $\left({{d}^{2}y/{\mathit{dx}}^{2}|}_{n}\right)$ equal to the column vector $\left({g}_{n}\right)=\left(g\left({x}_{n}\right)\right)$.
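To make eq. (3.10) concrete, here is a short sketch (in Python with NumPy, an illustrative choice) of building the tridiagonal matrix and applying it to samples of $y={x}^{2}$, whose second derivative the interior rows reproduce exactly, since the truncation error of eq. (3.9) vanishes for quadratics:

```python
import numpy as np

def second_derivative_matrix(N, dx):
    # Eq. (3.10): -2 on the diagonal, 1 on the adjacent diagonals,
    # the whole matrix divided by dx**2.  The corner rows are left as
    # plain difference rows here; boundary conditions replace them later.
    return (-2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / dx**2

x = np.linspace(0.0, 1.0, 11)
D = second_derivative_matrix(11, x[1] - x[0])
d2 = D @ x**2   # interior entries equal d2y/dx2 = 2
```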

### 3.3.2  Boundary Conditions

However, in eq. (3.10), the top left and bottom right corners of the derivative matrix have deliberately been left ambiguous, because that's where the boundary conditions come into play. The quantities ${y}_{1}$ and ${y}_{N}$, lying on the boundaries, are determined not by the differential equation and the function $g$, but by the boundary values. We must adjust the first and last rows of the matrix accordingly to represent those boundary conditions. A convenient way to do this, when the conditions consist of specifying ${y}_{L}$ and ${y}_{R}$ at the left- and right-hand ends, is to write the equation as:
 $\left(\begin{array}{ccccccc}\hfill -2\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -2\hfill \end{array}\right)\left(\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill {y}_{2}\hfill \\ \hfill :\hfill \\ \hfill {y}_{n}\hfill \\ \hfill :\hfill \\ \hfill {y}_{N-1}\hfill \\ \hfill {y}_{N}\hfill \end{array}\right)=\left(\begin{array}{c}\hfill -2{y}_{L}\hfill \\ \hfill {g}_{2}\mathit{\Delta }{x}^{2}\hfill \\ \hfill :\hfill \\ \hfill {g}_{n}\mathit{\Delta }{x}^{2}\hfill \\ \hfill :\hfill \\ \hfill {g}_{N-1}\mathit{\Delta }{x}^{2}\hfill \\ \hfill -2{y}_{R}\hfill \end{array}\right).$ $\left(3.12\right)$
Notice that the first and last rows of the matrix have been made purely diagonal, and the column vector on the right hand side (call it $\mathbf{h}$) uses for the first and last rows the boundary values, and for the others the elements of $\mathbf{g}\mathit{\Delta }{x}^{2}$. These adjustments enforce that the first and last values of $y$ are always the boundary values ${y}_{L}$ and ${y}_{R}$.${}^{18}$
Once we have constructed this matrix form of the differential equation, which we can write:
 $\mathbf{D}\mathbf{y}=\mathbf{h},$ $\left(3.13\right)$
it is obvious that we can solve it by simply inverting the matrix $\mathbf{D}$ and finding
 $\mathbf{y}={\mathbf{D}}^{-1}\mathbf{h}.$ $\left(3.14\right)$
(Or we can use some other appropriate matrix equation solution technique.)
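As a sketch of this direct method (function and variable names are illustrative, and `np.linalg.solve` stands in for whatever matrix solution technique one prefers), eqs. (3.12)-(3.14) become:

```python
import numpy as np

def solve_dirichlet(g, yL, yR, N, x1=0.0, x2=1.0):
    # Direct solution of y'' = g(x) with y(x1) = yL, y(x2) = yR,
    # following the matrix form of eq. (3.12).
    x = np.linspace(x1, x2, N)
    dx = x[1] - x[0]
    D = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    h = g(x) * dx**2
    # First and last rows become purely diagonal; the rhs carries the
    # boundary values, so the solve enforces y[0] = yL and y[-1] = yR.
    D[0, :] = 0.0;  D[0, 0] = -2.0;    h[0] = -2.0 * yL
    D[-1, :] = 0.0; D[-1, -1] = -2.0;  h[-1] = -2.0 * yR
    return x, np.linalg.solve(D, h)
```

Because the truncation error of eq. (3.9) vanishes for quadratics, taking $g=2$ with ${y}_{L}={y}_{R}=0$ reproduces the exact solution $y={x}^{2}-x$ to rounding error.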
In general, we must make the first and last rows of the matrix equation into discrete expressions of the boundary conditions there. If instead of Dirichlet boundary conditions (value is specified), we are given Neumann conditions, that is, the derivative (e.g. ${\mathit{dy}/\mathit{dx}|}_{1}$) is specified, a different adjustment of the corners is necessary. The most obvious thing to do is to make the first row of the matrix equation proportional to
 $\left(-1\mathrm{ }1\mathrm{ }0\mathrm{ }\dots \right)\left(\mathbf{y}\right)={y}_{2}-{y}_{1}=\mathit{\Delta }x\left({\mathit{dy}/\mathit{dx}|}_{1}\right).$ $\left(3.15\right)$
However, this choice does not calculate the derivative at the right place. The expression $\left({y}_{2}-{y}_{1}\right)/\mathit{\Delta }x$ is the derivative at ${x}_{3/2}$ rather than ${x}_{1}$, which is the boundary${}^{19}$. So the scheme (3.15) is not properly centered and will give only first order accuracy.${}^{20}$ A better extrapolation of the derivative to the boundary is to write instead for the first row
 $\left(-\frac{3}{2}\mathrm{ }2\mathrm{ }-\frac{1}{2}\mathrm{ }0\mathrm{ }\dots \right)\left(\mathbf{y}\right)=-\frac{1}{2}\left({y}_{3}-{y}_{2}\right)+\frac{3}{2}\left({y}_{2}-{y}_{1}\right)=\mathit{\Delta }x\left({\mathit{dy}/\mathit{dx}|}_{1}\right).$ $\left(3.16\right)$
This is a discrete form of the expression $y{\text{'}}_{1}\approx y{\text{'}}_{3/2}-y{\text{'}\text{'}}_{2}\,\frac{1}{2}\mathit{\Delta }x$, which is accurate to second-order, because it cancels out the first-order error in the derivative. The same treatment applies to a Neumann condition at ${x}_{N}$ (but of course using the mirror image of the row given in eq. (3.16)).
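A hedged sketch of this boundary treatment in a complete solve (the layout and names are illustrative assumptions): the first row becomes $\left(-\frac{3}{2}\mathrm{ }2\mathrm{ }-\frac{1}{2}\mathrm{ }0\mathrm{ }\dots \right)$ with right-hand side $\mathit{\Delta }x\left({\mathit{dy}/\mathit{dx}|}_{1}\right)$, while the last row imposes a Dirichlet value as before. For $g=2$, $y\text{'}\left(0\right)=0$, $y\left(1\right)=0$ the exact solution $y={x}^{2}-1$ is quadratic, so this second-order scheme reproduces it to rounding error.

```python
import numpy as np

def solve_neumann_dirichlet(g, dydx1, yR, N, x1=0.0, x2=1.0):
    # y'' = g(x) with dy/dx = dydx1 at x1 (second-order one-sided row,
    # eq. (3.16)) and y = yR at x2.
    x = np.linspace(x1, x2, N)
    dx = x[1] - x[0]
    D = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    h = g(x) * dx**2
    D[0, :3] = [-1.5, 2.0, -0.5]      # the (-3/2  2  -1/2) row
    h[0] = dx * dydx1
    D[-1, :] = 0.0
    D[-1, -1] = -2.0
    h[-1] = -2.0 * yR
    return x, np.linalg.solve(D, h)
```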
If the boundary condition is of a more general form (the so-called Robin condition)
 $\mathit{Ay}+\mathit{By}\text{'}+C=0,$ $\left(3.17\right)$
then we want the first row to represent this equation discretely. The natural way to do this, based upon our previous forms, is to make it
 $\left[A\left(1\mathrm{ }0\mathrm{ }0\mathrm{ }\dots \right)+\frac{B}{\mathit{\Delta }x}\left(-\frac{3}{2}\mathrm{ }2\mathrm{ }-\frac{1}{2}\mathrm{ }0\mathrm{ }\dots \right)\right]\left(\mathbf{y}\right)=-C.$ $\left(3.18\right)$
In addition to combining the previous inhomogeneous boundary forms, this expression can also represent homogeneous boundary conditions, in other words logarithmic-gradient conditions. When $C=0$, the boundary condition is $d\left(\mathrm{ln}y\right)/\mathit{dx}=y\text{'}/y=-A/B$. This form (with $A/B=1$) is the condition that one might apply at the outer boundary to the potential due to a spherically symmetric electrostatic charge, for example.
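The Robin row of eq. (3.18) is straightforward to assemble. The sketch below (function name illustrative) builds it and checks it against $y={e}^{-x}$, which satisfies $y+y\text{'}=0$, i.e. $A=B=1$, $C=0$; the residual of the row applied to exact samples should be small, of second order in $\mathit{\Delta }x$.

```python
import numpy as np

def robin_first_row(A, B, C, dx, N):
    # Eq. (3.18): A*(1 0 0 ...) + (B/dx)*(-3/2  2  -1/2  0 ...), rhs -C.
    row = np.zeros(N)
    row[0] = A - 1.5 * B / dx
    row[1] = 2.0 * B / dx
    row[2] = -0.5 * B / dx
    return row, -C

dx = 0.01
row, rhs = robin_first_row(1.0, 1.0, 0.0, dx, 5)
y = np.exp(-dx * np.arange(5))   # y'/y = -1 at the boundary
residual = row @ y - rhs         # O(dx**2), not exactly zero
```

In the pure Dirichlet limit ($B=0$) the row reduces to $\left(A\mathrm{ }0\mathrm{ }0\mathrm{ }\dots \right)$ with right-hand side $-C$, as it should.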
It may be preferable in some cases to scale the first row of the matrix equation to make the diagonal entry the same as all the other diagonals, namely $-2$. This is done by multiplying all the row's elements of $\mathbf{D}$, and the corresponding entry of $\mathbf{h}$, by a factor $-2/{D}_{11}$, or $-2/{D}_{\mathit{NN}}$ respectively. This can improve the conditioning of the matrix, making inversion easier and more accurate.

Figure 3.6: Periodic boundary conditions apply when the independent variable is, for example, distance around a periodic domain.
A final type of boundary condition worth discussing is called "periodic". This expression means that the end of the $x$-domain is considered to be connected to its beginning. Such a situation arises, for example, if the domain is actually a circle in two-dimensional space. But it is also sometimes used to approximate an infinite domain. For periodic boundary conditions it is usually convenient to label the first and last point 0 and $N$. See Fig. 3.6. They are the same point; so the values at ${x}_{0}$ and ${x}_{N}$ are the same. There are then $N$ different points and the discretized differential equation must be satisfied at them all, with the differences wrapping round to the corresponding point across the boundary. The resulting matrix equation is then
 $\left(\begin{array}{ccccccc}\hfill -2\hfill & \hfill 1\hfill & \hfill \dots \hfill & \hfill 0\hfill & \hfill \dots \hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill :\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill :\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill :\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill :\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill -2\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill \dots \hfill & \hfill 0\hfill & \hfill \dots \hfill & \hfill 1\hfill & \hfill -2\hfill \end{array}\right)\left(\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill {y}_{2}\hfill \\ \hfill :\hfill \\ \hfill {y}_{n}\hfill \\ \hfill :\hfill \\ \hfill {y}_{N-1}\hfill \\ \hfill {y}_{N}\hfill \end{array}\right)=\left(\begin{array}{c}\hfill {g}_{1}\mathit{\Delta }{x}^{2}\hfill \\ \hfill {g}_{2}\mathit{\Delta }{x}^{2}\hfill \\ \hfill :\hfill \\ \hfill {g}_{n}\mathit{\Delta }{x}^{2}\hfill \\ \hfill :\hfill \\ \hfill {g}_{N-1}\mathit{\Delta }{x}^{2}\hfill \\ \hfill {g}_{N}\mathit{\Delta }{x}^{2}\hfill \end{array}\right),$ $\left(3.19\right)$
which maintains the pattern $1,-2,1$ for every row, without exception. For the first and last rows, the $1$ element that would fall outside the matrix is wrapped round to the other end of the row, giving a new entry in the top right and bottom left corners.${}^{21}$
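A sketch of assembling eq. (3.19) (function name illustrative). One point worth checking in any implementation: every row of this matrix sums to zero, so it annihilates a constant vector and is singular. The periodic problem determines $y$ only up to an additive constant (and the ${g}_{n}$ must be compatible), so before solving one typically pins one value, or uses a pseudo-inverse.

```python
import numpy as np

def periodic_second_derivative(N, dx):
    # Eq. (3.19): the 1, -2, 1 pattern on every row, with the elements
    # that would fall outside the matrix wrapped to the far corners.
    D = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    D[0, -1] = 1.0
    D[-1, 0] = 1.0
    return D / dx**2
```

For instance, one can replace one row by the condition ${y}_{1}=0$ to fix the arbitrary constant, and shift the solution afterwards.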

## 3.4  Conservative Differences, Finite Volumes

In our cylindrical fuel rod example, we had what one might call a "weighted derivative": something more complicated than a Laplacian. One might be tempted to write it in the following way:
 $\frac{d}{\mathit{dr}}\left(r\mathit{\kappa }\frac{dT}{\mathit{dr}}\right)=r\mathit{\kappa }\frac{{d}^{2}T}{{\mathit{dr}}^{2}}+\frac{d\left(r\mathit{\kappa }\right)}{\mathit{dr}}\frac{dT}{\mathit{dr}},$ $\left(3.20\right)$
and then use the discrete forms for the first and second derivatives in this expression. The problem with that approach is that first derivatives are naturally centered at half-mesh points ($n+1/2$ etc.), while second derivatives are centered at whole mesh points ($n$). So it is not clear how best to express this multiple-term formula discretely in a consistent manner. In particular, if one adopts an asymmetric form, such as writing $\mathit{dT}/\mathit{dr}{|}_{n}\approx \left({T}_{n+1}-{T}_{n}\right)/\mathit{\Delta }r$ (just ignoring the fact that this is really centered at $n+1/2$, not $n$), then the error in the derivative is of first order in $\mathit{\Delta }r$, and the scheme will be accurate only to first-order. That's bad.
We must avoid that error. But even so, there are various different ways to produce schemes that are second-order accurate. Generally the best way is to recall that the differential form arose as a conservation equation. It was the conservation of energy that required the heat flux through a particular radius cylinder $2\mathit{\pi }r\mathit{\kappa }\mathit{dT}/\mathit{dr}$ to vary with radius only so as to account for the power density at radius $r$. It is therefore best to develop the second-order differential in this way. First we form $\mathit{dT}/\mathit{dr}$ in the usual discrete form at $n-1/2$ and $n+1/2$. Then we multiply those values by $r\mathit{\kappa }$ at the same half-mesh positions $n-1/2$ and $n+1/2$. Then we take the difference of those two fluxes writing:
 $\frac{d}{\mathit{dr}}{\left(r\mathit{\kappa }\frac{dT}{\mathit{dr}}\right)}_{n}\approx \frac{1}{\mathit{\Delta }r}\left({r}_{n+1/2}{\mathit{\kappa }}_{n+1/2}\frac{{T}_{n+1}-{T}_{n}}{\mathit{\Delta }r}-{r}_{n-1/2}{\mathit{\kappa }}_{n-1/2}\frac{{T}_{n}-{T}_{n-1}}{\mathit{\Delta }r}\right).$ $\left(3.21\right)$
The big advantage of this expression is that it exactly conserves the heat flux. This property can be seen by considering the exact heat conservation in integral form over the cell consisting of the range ${r}_{n-1/2}<r<{r}_{n+1/2}$:
 $2\mathit{\pi }{r}_{n+1/2}{\mathit{\kappa }}_{n+1/2}{\frac{\mathit{dT}}{\mathit{dr}}|}_{n+1/2}-2\mathit{\pi }{r}_{n-1/2}{\mathit{\kappa }}_{n-1/2}{\frac{\mathit{dT}}{\mathit{dr}}|}_{n-1/2}=-{\int }_{{r}_{n-1/2}}^{{r}_{n+1/2}}p2\mathit{\pi }r\text{'}\mathit{dr}\text{'}.$ $\left(3.22\right)$
Then adding together the versions of this equation for two adjacent positions $n=k,k+1$, the ${\frac{\mathit{dT}}{\mathit{dr}}|}_{k+1/2}$ terms cancel, provided the expression for $r\mathit{\kappa }\frac{\mathit{dT}}{\mathit{dr}}$ is the same at the same $n$ value regardless of which adjacent cell ($k$ or $k+1$) it arises from. This symmetry is present when using ${\frac{\mathit{dT}}{\mathit{dr}}|}_{n+1/2}=\left({T}_{n+1}-{T}_{n}\right)/\mathit{\Delta }r$ and the half-mesh values of $r\mathit{\kappa }$. The sum of the equations is therefore the exact total conservation for the region ${r}_{k-1/2}<r<{r}_{k+3/2}$, consisting of the sum of the two adjacent cells. This process can then be extended over the whole domain, proving total heat conservation. Approaching the discrete equations in this way is sometimes called the method of "finite volumes"${}^{22}$. The finite volume in our illustrative case is the annular region between ${r}_{n-1/2}$ and ${r}_{n+1/2}$.
A less satisfactory alternative which remains second-order accurate might be to evaluate the right hand side of eq. (3.20) using double distance derivatives that are centered at the $n$ mesh as follows
 $\begin{array}{rcl}\frac{d}{\mathit{dr}}\left(r\mathit{\kappa }\frac{dT}{\mathit{dr}}\right)&=&{r}_{n}{\mathit{\kappa }}_{n}\frac{{T}_{n+1}-2{T}_{n}+{T}_{n-1}}{\mathit{\Delta }{r}^{2}}+\frac{{r}_{n+1}{\mathit{\kappa }}_{n+1}-{r}_{n-1}{\mathit{\kappa }}_{n-1}}{2\mathit{\Delta }r}\,\frac{{T}_{n+1}-{T}_{n-1}}{2\mathit{\Delta }r}\\ &=&\frac{1}{\mathit{\Delta }{r}^{2}}\left[\left({r}_{n+1}{\mathit{\kappa }}_{n+1}/4+{r}_{n}{\mathit{\kappa }}_{n}-{r}_{n-1}{\mathit{\kappa }}_{n-1}/4\right){T}_{n+1}-2{r}_{n}{\mathit{\kappa }}_{n}\,{T}_{n}+\left(-{r}_{n+1}{\mathit{\kappa }}_{n+1}/4+{r}_{n}{\mathit{\kappa }}_{n}+{r}_{n-1}{\mathit{\kappa }}_{n-1}/4\right){T}_{n-1}\right].\end{array}$ $\left(3.23\right)$
None of the coefficients of the $T$s in this expression is the same as in eq. (3.21) unless $r\mathit{\kappa }$ is independent of position. This is true even in the case where $r\mathit{\kappa }$ is known only at whole mesh points so the half-point values in (3.21) are obtained by interpolation. Expression (3.23) does not exactly conserve heat flux, which is an important weakness. Expression (3.21) is usually to be preferred.
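The conservative differencing of eq. (3.21) and its telescoping-sum property can be checked directly. In this sketch (the array layout is an illustrative choice), `rk_half[n]` holds the half-mesh value of $r\mathit{\kappa }$ at ${r}_{n+1/2}$:

```python
import numpy as np

def conservative_fluxes(rk_half, T, dr):
    # r*kappa*dT/dr evaluated at the half-mesh points n+1/2.
    return rk_half * (T[1:] - T[:-1]) / dr

def conservative_operator(rk_half, T, dr):
    # Eq. (3.21): difference of adjacent half-mesh fluxes, divided by dr,
    # giving d/dr(r*kappa*dT/dr) at the interior whole-mesh points.
    F = conservative_fluxes(rk_half, T, dr)
    return (F[1:] - F[:-1]) / dr
```

Summing the operator over all interior cells telescopes to the difference of the end fluxes, the discrete analogue of exact heat conservation over the whole domain.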

## Worked Example: Formulating radial differences

Formulate a matrix scheme to solve by finite-differences the equation
 $\frac{d}{\mathit{dr}}\left(r\frac{dy}{\mathit{dr}}\right)+\mathit{rg}\left(r\right)=0$ $\left(3.24\right)$
with given $g$ and two-point boundary conditions $\mathit{dy}/\mathit{dr}=0$ at $r=0$ and $y=0$ at $r=N\mathit{\Delta }$, on an $r$-grid of uniform spacing $\mathit{\Delta }$.

We write down the finite difference equation at a generic position: ${\frac{\mathit{dy}}{\mathit{dr}}|}_{n+1/2}=\frac{{y}_{n+1}-{y}_{n}}{\mathit{\Delta }}.$ Substituting this into the differential equation, we get
 $\begin{array}{rcl}-{r}_{n}{g}_{n}=\frac{d}{\mathit{dr}}{\left(r\frac{\mathit{dy}}{\mathit{dr}}\right)}_{n}&=&\left({r}_{n+1/2}{\frac{\mathit{dy}}{\mathit{dr}}|}_{n+1/2}-{r}_{n-1/2}{\frac{\mathit{dy}}{\mathit{dr}}|}_{n-1/2}\right)\frac{1}{\mathit{\Delta }}\\ &=&\left({r}_{n+1/2}\frac{{y}_{n+1}-{y}_{n}}{\mathit{\Delta }}-{r}_{n-1/2}\frac{{y}_{n}-{y}_{n-1}}{\mathit{\Delta }}\right)\frac{1}{\mathit{\Delta }}\\ &=&\left({r}_{n+1/2}{y}_{n+1}-2{r}_{n}{y}_{n}+{r}_{n-1/2}{y}_{n-1}\right)\frac{1}{{\mathit{\Delta }}^{2}}.\end{array}$ $\left(3.25\right)$

It is convenient (and improves matrix conditioning) to divide this equation through by ${r}_{n}/{\mathit{\Delta }}^{2}$, so that the $n$th equation reads
 $\left(\frac{{r}_{n+1/2}}{{r}_{n}}\right){y}_{n+1}-2{y}_{n}+\left(\frac{{r}_{n-1/2}}{{r}_{n}}\right){y}_{n-1}=-{\mathit{\Delta }}^{2}{g}_{n}$ $\left(3.26\right)$
For definiteness we will take the position of the $n$th grid point to be ${r}_{n}=n\mathit{\Delta }$, so $n$ runs from $0$ to $N$. Then the coefficients become
 $\frac{{r}_{n±1/2}}{{r}_{n}}=\frac{n±1/2}{n}=1±\frac{1}{2n}.$ $\left(3.27\right)$
The boundary condition at $n=N$ is ${y}_{N}=0$. At $n=0$ we want $\mathit{dy}/\mathit{dr}=0$, but we need to use an expression that is centered at $n=0$, not $n=1/2$, to give second-order accuracy. Therefore following eq. (3.16) we write the equation at $n=0$
 $\mathit{\Delta }{\mathit{dy}/\mathit{dr}|}_{0}=-\frac{1}{2}\left({y}_{2}-{y}_{1}\right)+\frac{3}{2}\left({y}_{1}-{y}_{0}\right)=\left(-\frac{3}{2}\mathrm{ }2\mathrm{ }-\frac{1}{2}\mathrm{ }0\mathrm{ }\dots \right)\left(\mathbf{y}\right)=0.$ $\left(3.28\right)$
Gathering all our equations into a matrix we have:
 $\left(\begin{array}{ccccccc}\hfill -\frac{3}{2}\hfill & \hfill 2\hfill & \hfill -\frac{1}{2}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 1-\frac{1}{2}\hfill & \hfill -2\hfill & \hfill 1+\frac{1}{2}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1-\frac{1}{2n}\hfill & \hfill -2\hfill & \hfill 1+\frac{1}{2n}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 1-\frac{1}{2\left(N-1\right)}\hfill & \hfill -2\hfill & \hfill 1+\frac{1}{2\left(N-1\right)}\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -2\hfill \end{array}\right)\left(\begin{array}{c}\hfill {y}_{0}\hfill \\ \hfill {y}_{1}\hfill \\ \hfill :\hfill \\ \hfill {y}_{n}\hfill \\ \hfill :\hfill \\ \hfill {y}_{N-1}\hfill \\ \hfill {y}_{N}\hfill \end{array}\right)=-{\mathit{\Delta }}^{2}\left(\begin{array}{c}\hfill 0\hfill \\ \hfill {g}_{1}\hfill \\ \hfill :\hfill \\ \hfill {g}_{n}\hfill \\ \hfill :\hfill \\ \hfill {g}_{N-1}\hfill \\ \hfill 0\hfill \end{array}\right)$ $\left(3.29\right)$
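The scheme of eq. (3.29) can be assembled and spot-checked as follows (a sketch; names are illustrative). For the constant source $g=1$, the exact solution of eq. (3.24) with these boundary conditions is $y=\left({a}^{2}-{r}^{2}\right)/4$ with $a=N\mathit{\Delta }$, which, being quadratic, the differences reproduce to rounding error.

```python
import numpy as np

def solve_radial(g, N, Delta):
    # Assemble and solve eq. (3.29) for eq. (3.24) on the grid r_n = n*Delta,
    # n = 0..N, with dy/dr = 0 at r = 0 and y = 0 at r = N*Delta.
    D = np.zeros((N + 1, N + 1))
    h = np.zeros(N + 1)
    D[0, 0:3] = [-1.5, 2.0, -0.5]      # eq. (3.28): dy/dr = 0 at r = 0
    for n in range(1, N):              # interior rows, eq. (3.26)
        D[n, n - 1] = 1.0 - 1.0 / (2 * n)
        D[n, n] = -2.0
        D[n, n + 1] = 1.0 + 1.0 / (2 * n)
        h[n] = -Delta**2 * g(n * Delta)
    D[N, N] = -2.0                     # y = 0 at r = N*Delta (h[N] = 0)
    r = np.arange(N + 1) * Delta
    return r, np.linalg.solve(D, h)
```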
Fig. 3.7 shows the solution of an illustrative case.

Figure 3.7: Example of the result of a finite difference solution for $y$ of eq. (3.24) using a matrix of the form of eq. (3.29). The source $g$ is purely illustrative, and is plotted in the figure. The boundary points at the ends of the range of solution are $r=0$ and $r=N\mathit{\Delta }=4$. A grid size $N=25$ is used.

## Exercise 3. Solving 2-point ODEs.

1. Write a code to solve, using matrix inversion, a 2-point ODE of the form
 $\frac{{d}^{2}y}{{\mathit{dx}}^{2}}=f\left(x\right)$

on the $x$-domain $\left[0,1\right]$, spanned by an equally spaced mesh of $N$ nodes, with Dirichlet boundary conditions $y\left(0\right)={y}_{0}$, $y\left(1\right)={y}_{1}$.
When you have got it working, obtain your personal expressions for $f\left(x\right)$, $N$, ${y}_{0}$, and ${y}_{1}$ from http://silas.psfc.mit.edu/22.15/giveassign3.cgi. (Or use $f\left(x\right)=a+\mathit{bx}$, $a=0.15346$, $b=0.56614$, $N=44$, ${y}_{0}=0.53488$, ${y}_{1}=0.71957$.) And solve the differential equation so posed. Plot the solution.
Submit the following as your solution:
1. Your code in a computer format that is capable of being executed.
2. The expressions of your problem $f\left(x\right)$, $N$, ${y}_{0}$, and ${y}_{1}$
3. The numeric values of your solution ${y}_{j}$.
4. Brief commentary ($<300$ words) on what problems you faced and how you solved them.

2. Save your code and make a copy with a new name. Edit the new code so that it solves the ODE
 $\frac{{d}^{2}y}{{\mathit{dx}}^{2}}+{k}^{2}y=f\left(x\right)$

on the same domain and with the same boundary conditions, but with the extra parameter ${k}^{2}$. Verify that your new code works correctly for small values of ${k}^{2}$, yielding results close to those of the previous problem.
Investigate what happens to the solution in the vicinity of $k=\mathit{\pi }$.
Describe what the cause of any interesting behavior is.
Submit the following as your solution:
1. Your code in a computer format that is capable of being executed.
2. The expressions of your problem $f\left(x\right)$, $N$, ${y}_{0}$, and ${y}_{1}$
3. Brief description ($<300$ words) of the results of your investigation and your explanation.
4. Back up the explanation with plots if you like.