Friday, July 24, 2015

So You Don’t Like Error Functions?

In Chapter 4 of the 5th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce the diffusion equation (Equation 4.26)
\[ \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2} , \]
where C is the concentration and D is the diffusion constant. We then study one-dimensional diffusion where initially (t = 0) the region to the left of x = 0 has a concentration C₀, and the region to the right has a concentration of zero (Section 4.13). We show that the solution to the diffusion equation is (Equation 4.75)

\[ C(x,t) = \frac{C_0}{2} \left[ 1 - \mathrm{erf}\!\left( \frac{x}{\sqrt{4Dt}} \right) \right] , \]
where erf is the error function.
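For readers who would rather see Equation 4.75 in action than stare at it, here is a quick numerical sketch using Python's built-in math.erf. The values C₀ = 1 and D = 1 are just illustrative choices, not anything from the book:

```python
# Evaluate Eq. 4.75 numerically (illustrative values: C0 = 1, D = 1).
from math import erf, sqrt

def c_erf(x, t, c0=1.0, d=1.0):
    """Eq. 4.75: C0/2 * [1 - erf(x / sqrt(4 D t))]."""
    return 0.5 * c0 * (1.0 - erf(x / sqrt(4.0 * d * t)))

print(c_erf(0.0, 1.0))    # 0.5 -- at x = 0 the concentration is C0/2 for all t
print(c_erf(-10.0, 1.0))  # approaches C0 far to the left
print(c_erf(10.0, 1.0))   # approaches 0 far to the right
```

The limiting values at x = 0 and at x → ±∞ are a handy sanity check: the step smears out but its midpoint stays pinned at C₀/2.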

Some students don’t like error functions (really?!). Moreover, often we can gain insight by solving a problem in several ways, obtaining solutions that are seemingly different yet equivalent. Let’s see if we can solve this problem in another way and avoid those pesky error functions. We will use a standard method for solving partial differential equations: separation of variables. Before I start, let’s agree to solve a slightly different problem: initially C(x,0) is C₀/2 for x less than zero, and −C₀/2 for x greater than zero. I do this so the solution will be odd in x: C(x,t) = −C(−x,t). At the end we can add the constant C₀/2 and get back to our original problem. Now, let’s begin.

Assume the solution can be written as the product of a function of only t and a function of only x: C(x,t) = T(t) X(x). Plug this into the diffusion equation, simplify, and divide both sides by DTX to get

\[ \frac{1}{D}\,\frac{1}{T}\,\frac{dT}{dt} = \frac{1}{X}\,\frac{d^2 X}{dx^2} . \]
The only way that the left- and right-hand sides can be equal at all values of x and t is if both are equal to a constant, which I will call −k². This gives two ordinary differential equations

\[ \frac{dT}{dt} = -k^2 D\, T \qquad \text{and} \qquad \frac{d^2 X}{dx^2} = -k^2 X . \]
The solution to the first equation is an exponential

\[ T(t) = e^{-k^2 D t} , \]

and the solution to the second is a sine

\[ X(x) = \sin(kx) . \]
There is no cosine term because of the odd symmetry. Unfortunately, we do not know the value of k. In fact, our solution can be a superposition of infinitely many values of k
\[ C(x,t) = \int_0^{\infty} A(k)\, \sin(kx)\, e^{-D k^2 t}\, dk , \]
where A(k) specifies the weighting.
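As a sanity check before we hunt for A(k), we can verify symbolically that each separated mode sin(kx) e^{−Dk²t} really does satisfy the diffusion equation. A minimal sketch with SymPy (the symbol names are my own choices):

```python
# Symbolic check that each separated mode solves the diffusion equation.
import sympy as sp

x, t, k, D = sp.symbols('x t k D', positive=True)
mode = sp.sin(k * x) * sp.exp(-D * k**2 * t)

# dC/dt - D * d2C/dx2 should vanish identically for every k.
residual = sp.diff(mode, t) - D * sp.diff(mode, x, 2)
print(sp.simplify(residual))  # 0
```

Because the diffusion equation is linear, any superposition of such modes is also a solution, which is what the integral over k exploits.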

To determine A(k), use the Fourier techniques developed in Chapter 11. The result is
\[ A(k) = -\frac{C_0}{\pi k} . \]
How did I get that? Let me outline the process, leaving you to fill in the missing steps. I should warn you that a mathematician would worry about the convergence of the integrals we evaluate, but you and I will brush those concerns under the rug.

At t = 0, our solution becomes
\[ C(x,0) = \int_0^{\infty} A(k)\, \sin(kx)\, dk . \]
Except for a missing factor of 2π, this looks just like the Fourier transform from Section 11.9 of IPMB. Next, multiply each side of the equation by sin(ωx), and integrate over all x. Then, use Equation 11.66b to express the integral of the product of sines as a delta function. You get
\[ A(k) = \frac{1}{\pi} \int_{-\infty}^{\infty} C(x,0)\, \sin(kx)\, dx . \]
Both C(x,0) and sin(kx) are odd, so their product is even, and for x greater than zero C(x,0) is −C₀/2. Therefore,

\[ A(k) = \frac{2}{\pi} \int_0^{\infty} \left( -\frac{C_0}{2} \right) \sin(kx)\, dx = -\frac{C_0}{\pi} \int_0^{\infty} \sin(kx)\, dx . \]
You know how to integrate sine (I hope you do!), so
\[ A(k) = \frac{C_0}{\pi k} \Bigl[ \cos(kx) \Bigr]_0^{\infty} = \frac{C_0}{\pi k} \left[ \lim_{x \to \infty} \cos(kx) - 1 \right] . \]
Here is where things get dicey. We don’t know what cosine equals at infinity, but if we say it averages to zero the first term goes away and we get our result
\[ A(k) = -\frac{C_0}{\pi k} . \]
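One way to make this hand-waving more respectable is the standard regularization trick: insert a convergence factor e^{−εx} into the integral and let ε → 0 at the end. A SymPy sketch (symbol names are mine) shows that the damped integral of sin(kx) tends to exactly the 1/k that "cosine averages to zero" gave us:

```python
# Tame the divergent integral of sin(kx) with a convergence factor
# exp(-eps*x), then let eps -> 0.
import sympy as sp

x, k, eps = sp.symbols('x k epsilon', positive=True)
damped = sp.integrate(sp.sin(k * x) * sp.exp(-eps * x), (x, 0, sp.oo))

print(damped)                    # k / (epsilon**2 + k**2)
print(sp.limit(damped, eps, 0))  # 1/k
```

So the dicey step is really a shorthand for this limit, and A(k) = −C₀/(πk) follows without any appeal to the value of cosine at infinity.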
Plugging in this expression for A(k) gives our solution for C(x,t). If we want to go back to our original problem with an initial condition of C₀ on the left and zero on the right, we must add C₀/2. Thus

\[ C(x,t) = \frac{C_0}{2} - \frac{C_0}{\pi} \int_0^{\infty} \frac{\sin(kx)}{k}\, e^{-D k^2 t}\, dk . \]
Let’s compare this solution to the one in Equation 4.75 (given above). Our new solution does not contain the error function! Those of you who dislike that function can celebrate. Unfortunately, we traded the error function for an integral that we cannot evaluate in closed form. So, you can have a function that you may be unfamiliar with and that has a funny name, or you can have an expression with common functions like the sine and the exponential inside an integral. Pick your poison.
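If you want to convince yourself that the two forms really are the same function, here is a sketch that evaluates the Fourier-integral solution with Simpson's rule and compares it with the error-function form. The values C₀ = 1, D = 1, t = 0.5 and the truncation at k = 50 are my own illustrative choices (the Gaussian factor makes the tail of the integral negligible there):

```python
# Compare the Fourier-integral solution with the erf solution of Eq. 4.75.
# Illustrative values: C0 = 1, D = 1, t = 0.5.
from math import sin, exp, pi, sqrt, erf

c0, d, t = 1.0, 1.0, 0.5

def c_integral(x, kmax=50.0, n=20000):
    """C0/2 - (C0/pi) * integral_0^kmax sin(kx)/k * exp(-D k^2 t) dk,
    by composite Simpson's rule (n must be even)."""
    h = kmax / n
    def f(k):
        # integrand -> x as k -> 0, so handle k = 0 explicitly
        return (sin(k * x) / k if k > 0.0 else x) * exp(-d * k * k * t)
    s = f(0.0) + f(kmax)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return 0.5 * c0 - (c0 / pi) * (h / 3.0) * s

def c_closed(x):
    """Eq. 4.75: C0/2 * [1 - erf(x / sqrt(4 D t))]."""
    return 0.5 * c0 * (1.0 - erf(x / sqrt(4.0 * d * t)))

for xv in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(xv, c_integral(xv), c_closed(xv))  # the two columns agree
```

The agreement is no accident: the integral ∫₀^∞ (sin(kx)/k) e^{−Dk²t} dk is a known representation of (π/2) erf(x/√(4Dt)), so the two answers are term-by-term identical.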
