Friday, September 9, 2011

Radon Transform

In Chapter 12 of the 4th Edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce the Radon transformation. It consists of finding the projections F(θ, x') at different angles θ from a function f(x,y). But why is it called the “Radon” transformation, and does it have anything to do with the radioactive gas radon discussed in Chapter 16?

Well, it has nothing to do with the element radon. Instead, and predictably, the term honors Johann Radon, the Austrian mathematician who investigated this transformation. In “A Tribute to Johann Radon” in the IEEE Transactions on Medical Imaging (Volume 5, Page 169, 1986, reproduced long after his death to honor his memory) Hans Hornich wrote
With the death in Vienna on 25 May 1956 of Dr. Johann Radon, Professor of the University of Vienna, not only the mathematical world and Austrian science but also the German Mathematical Union has suffered a severe loss, as have also many other scientific bodies of which the deceased was a prominent member, and who spent most of his teaching life in German universities.

Radon was born in the small town of Tetschen in Bohemia near the border of Saxony on December 16, 1887. He studied at Vienna University where, alongside Mertens and Wirtinger, Escherisch above all was the great influence on Radon’s development: Escherisch had, as one of the first in Austria, imparted to his students the world of ideas of Weierstrass and his rigorous foundations of analysis. Through Escherich, Radon was led next to variational calculus….

A few years later appeared his “Habilitationsschrift” “Theory and application of absolute additive weighting functions” (S. Ber. math. naturw., Kl. K. Akad. Wiss. Wien II Abt., vol. 122, pp. 1295–1438, 1913), which played a leading role in the development of analysis; the Radon integral and the Radon theorem laid the foundations of functional analysis. As an application Radon somewhat later treated the first and second boundary value problem of the logarithmic potential in a very general way.
The Radon transformation has important applications in medical imaging, and plays a crucial role in computed tomography, positron emission tomography, and single photon emission tomography. I found a nice layman’s description of the Radon transform in an essay at the website http://www.ams.org/samplings/feature-column/fcarc-tomography, written by Bill Casselman.
The original example of this sort of technology [involving a collaboration between medicine and mathematics], and the ancestor of many of these technologies, is what is now called computed tomography, for which Allan Cormack, a physicist whose research became more and more mathematical as time went on, laid down the theoretical foundations around 1960. He shared the 1979 Nobel prize in medicine for his work in this field.

In fact the basic idea of tomography had been discovered for purely theoretical reasons in 1917 by the Austrian mathematician Johann Radon, and it had been rediscovered several times since by others, but Cormack was not to know this until much later than his own independent discovery. The problem he solved is this: Suppose we know all the line integrals through a body of varying density. Can we reconstruct the body itself? The answer, perhaps surprisingly, is that we can, and furthermore we can do so constructively. In practical terms, we know that a single X-ray picture can give only limited information because certain things are obscured by other, heavier things. We might take more X-ray pictures in the hope that we can somehow see behind the obscuring objects, but it is not at all obvious that by taking a lot—really, a lot—of X-ray pictures we can in effect even see into objects, which is what Radon tells us, at least in principle. Making Radon’s theorem into a practical tool was not a trivial matter.
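The projection at the heart of the Radon transform is just a collection of line integrals, which makes it easy to explore numerically. Here is a minimal Python sketch (my own illustration, not from the book): it projects a two-dimensional Gaussian, whose Radon transform is known in closed form, and checks that, because of the circular symmetry, the projection is the same at every angle.

```python
import math

# f(x, y) is a centered Gaussian, chosen because its projection is known
# analytically: F(theta, x') = sqrt(pi) * exp(-x'^2) at every angle.
def f(x, y):
    return math.exp(-(x * x + y * y))

def projection(theta, xp, s_max=6.0, n=2000):
    """Line integral of f along the direction perpendicular to x',
    i.e., one sample of the Radon transform F(theta, x')."""
    ds = 2 * s_max / n
    total = 0.0
    for i in range(n):
        s = -s_max + (i + 0.5) * ds          # midpoint rule along the line
        x = xp * math.cos(theta) - s * math.sin(theta)
        y = xp * math.sin(theta) + s * math.cos(theta)
        total += f(x, y) * ds
    return total

p0 = projection(0.0, 0.5)                    # projection at theta = 0
p1 = projection(1.2, 0.5)                    # same offset, different angle
exact = math.sqrt(math.pi) * math.exp(-0.25)
```

Reconstructing f(x, y) from many such projections is the inverse problem Radon solved on paper and Cormack turned into computed tomography.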
You can listen to a lecture on tomography and inverting the Radon transform here.

Friday, September 2, 2011

Fraunhofer Diffraction

Last week’s blog entry discussed Fresnel diffraction, which Russ Hobbie and I analyzed in the 4th edition of Intermediate Physics for Medicine and Biology when we examined the ultrasonic pressure distribution produced near a circular piezoelectric transducer. This week, I will analyze diffraction far from the wave source, known as Fraunhofer diffraction, named for the German scientist Joseph von Fraunhofer.

The mathematics of Fraunhofer diffraction is a bit too complicated to derive here, but the gist of it can be found by inspecting the first equation in Section 13.7 (Medical Uses of Ultrasound), found at the bottom of the left column on page 351. The pressure is found by integrating 1/r times a cosine function over the transducer face. When you are far from the transducer, r is approximately a constant and can be taken out of the integral. In that case, you just integrate cosine over the transducer area. This becomes similar to the two-dimensional Fourier transform defined in Chapter 12 (Images). The far field pressure distribution produced by a circular transducer given in Eq. 13.40 is the same Bessel function result as derived in Problem 10 of Chapter 12.

The intensity distribution in Eq. 13.40 is known as the Airy pattern, after the English scientist and mathematician George Biddell Airy. As shown in Fig. 13.15, the pattern consists of a central peak, surrounded by weaker secondary maxima. The Airy pattern occurs during imaging using a circular aperture, such as when viewing stars through a telescope. Two adjacent stars appear as two Airy patterns. Distinguishing the two stars is difficult unless the separation between the images is greater than the separation between the peak of the Airy pattern and its first zero. This is called the Rayleigh criterion, after Lord Rayleigh. Rayleigh (1842–1919, born John William Strutt)—one of those 19th century English Victorian physicists I like so much—did fundamental work in acoustics, and published the classic textbook Theory of Sound.
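To make the Rayleigh criterion concrete, here is a short sketch (the details are my own choices, not from the book). It builds J₁ from its integral representation, forms the normalized Airy intensity [2J₁(x)/x]², and locates the first zero of J₁; that zero, 3.8317, is where the familiar factor 1.22 in θ = 1.22 λ/D comes from, since 3.8317/π ≈ 1.22.

```python
import math

def j1(x, n=2000):
    """Bessel function J1 from its integral representation,
    J1(x) = (1/pi) * integral_0^pi cos(tau - x sin(tau)) d tau."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * h                  # midpoint rule
        total += math.cos(tau - x * math.sin(tau)) * h
    return total / math.pi

def airy(x):
    """Normalized Airy intensity [2 J1(x)/x]^2; equals 1 at x = 0."""
    if x == 0:
        return 1.0
    return (2 * j1(x) / x) ** 2

# Locate the first zero of J1 by bisection; it should land near 3.8317,
# which for a circular aperture of diameter D gives the Rayleigh
# criterion theta = 1.22 * lambda / D.
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if j1(lo) * j1(mid) <= 0:
        hi = mid
    else:
        lo = mid
first_zero = 0.5 * (lo + hi)
rayleigh_factor = first_zero / math.pi       # about 1.22
```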

Friday, August 26, 2011

Fresnel Diffraction

In Section 13.7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the medical uses of ultrasound. One important problem we analyze is the pressure distribution produced by a piezoelectric transducer.
There are some important features of the radiation pattern from a transducer which we review next. Consider a circular transducer, the surface of which is oscillating back and forth in a fluid… Each small element of the vibrating fluid creates a wave that travels radially outward, the points of constant phase being expanding hemispheres. The amplitude of each spherical wave decreases as 1/r, the intensity falling as 1/r2. We want the pressure at a point z on the axis of the transducer. It is obtained by summing up the effect of all the spherical waves emanating from the face of the transducer….

The [average intensity] is plotted in Fig. 13.13 for a fairly typical but small transducer (a = 0.5 cm, f = 2 MHz)... Close to the transducer there are large oscillations in intensity along the axis: there are corresponding oscillations perpendicular to the axis, as shown in Fig. 13.14. The maxima and minima form circular rings. This is called the near field or Fresnel zone… The depth of the Fresnel zone is approximately a2/λ [where a is the radius of the transducer, λ is the wavelength, and f is the frequency].
Fig. 13.13, from the 4th edition of Intermediate Physics for Medicine and Biology, showing Fresnel diffraction.
The calculated intensity along the axis, as shown in our Fig. 13.13, is interesting. In the Fresnel zone, the intensity has many points where it is zero. In Intermediate Physics for Medicine and Biology we derive this result mathematically, but it is illuminating to describe what is happening physically. Basically, this is a result of wave interference. Our statement that “each small element of the vibrating fluid creates a wave that travels radially outward” is often called Huygens’ principle. Each point on the face of the transducer produces such a wavelet. To understand the pressure distribution, we must examine the phase relationship among these various wavelets. Very near the face of the transducer, the waves that contribute significantly to the pressure are in phase; they all interfere constructively and you get a maximum (evaluate Eq. 13.39 at z = 0 and you get a nonzero constant). However, as you move away, more distant points on the transducer face contribute to the pressure on the axis, and these points may be out of phase with the pressure produced by the point at the center. For some value of z the in-phase and out-of-phase wavelets interfere destructively, resulting in zero intensity. Increase z a little more, and not only do the in-phase points at the center and the out-of-phase points just away from the center contribute to the pressure, but so do some in-phase points even farther from the center. When you add it all up, you get a net constructive interference and a non-zero intensity. And so it goes, as you move out farther and farther along the z axis.
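That interference argument is easy to check numerically. For a circular piston source, the standard on-axis result (cf. Eq. 13.39, up to constant factors) makes the pressure amplitude proportional to sin[(k/2)(√(a² + z²) − z)], so the intensity vanishes wherever the path difference √(a² + z²) − z is a whole number of wavelengths. A sketch with the parameters of Fig. 13.13 (the sound speed of 1540 m/s is my assumption):

```python
import math

a = 5.0e-3          # transducer radius, m (0.5 cm, as in Fig. 13.13)
f = 2.0e6           # frequency, Hz
c = 1540.0          # assumed tissue-like sound speed, m/s
lam = c / f         # wavelength, 0.77 mm
k = 2 * math.pi / lam

def axial_intensity(z):
    """Relative on-axis intensity, proportional to the square of
    sin[(k/2)(sqrt(a^2 + z^2) - z)]; normalized to 1 at its peaks."""
    return math.sin(0.5 * k * (math.sqrt(a * a + z * z) - z)) ** 2

# Zeros occur where sqrt(a^2 + z^2) - z = m * lam, which solves to
# z = (a^2 - m^2 lam^2) / (2 m lam) for each integer m with z > 0.
zeros = [(a * a - (m * lam) ** 2) / (2 * m * lam)
         for m in range(1, int(a / lam) + 1)]

fresnel_depth = a * a / lam   # near-field (Fresnel zone) depth, ~32 mm
```

Beyond the last zero the path difference can never again reach a full wavelength, so past the Fresnel depth a²/λ the on-axis intensity decays smoothly: the far field.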

Fig. 13.14, from the 4th edition of Intermediate Physics for Medicine and Biology, showing Fresnel diffraction.
The radial distribution of the intensity is surprisingly rich and complex, given the rather simple integral that underlies the behavior. If you want to explore the radial distribution in more detail, go to the excellent website http://wyant.optics.arizona.edu/fresnelZones/fresnelZones.htm, where you can perform these calculations yourself. You can adjust the parameters as you wish and create plots such as those in Fig. 13.14, and also produce grayscale images of the full intensity distribution that provide much insight. The website was produced with optics in mind, so you have to put in strange-looking parameters to model ultrasound. To reproduce the middle panel of Fig. 13.14, input 770,000 for the wavelength in nm, 10,000 for the aperture diameter in microns, and 15.75 for the observation distance in mm. To my eye, the agreement between the website’s calculation and Fig. 13.14 is impressive. At small values of z the plots get very complex and beautiful. For the same wavelength and aperture, I like the richness of z = 5 mm, and for z = 4 mm you get a fairly uniform brightness except for a dramatic dark spot right at the center. It reminds me of Poisson’s spot, which I discussed in the September 17, 2010 entry in this blog, about Augustin-Jean Fresnel. Indeed, the physics behind the calculations in Fig. 13.14 and Poisson’s spot in optics are nearly identical. The circular aperture is a classic problem in Fresnel diffraction. You can find a more detailed discussion of this topic in the textbook Optics (4th edition), by Eugene Hecht. (My bookshelf contains the first edition, by Hecht and Zajac, that I used in my undergraduate optics class at the University of Kansas).

If you want to be clever, you could make the ultrasound transducer vibrate only at those radii that result in constructive interference along the axis, and have it remain stationary at radii that cause destructive interference. (Of course, this would mean you would have to design your transducer face cleverly so concentric rings vibrate, separated by rings that do not, which might make constructing the transducer more difficult.) Using such a trick eliminates the dark spots along the z axis, increasing the intensity there. This method is commonly used to focus light waves, and is called a zone plate. It has been used occasionally with ultrasound.

Friday, August 19, 2011

The Nonlinear Poisson-Boltzmann Equation

Last week’s blog entry was about the Gouy-Chapman model for a charged double layer at an electrode surface. The model is based on the Poisson-Boltzmann equation (Eq. 9.10 in the 4th edition of Intermediate Physics for Medicine and Biology). One interesting feature of the Poisson-Boltzmann equation is that it is nonlinear. In applications when the thermal energy of ions in solution is much greater than the energy of the ions in an electrical potential, the equation can be linearized (Eq. 9.13). That is not always the case.

Homework problem 9 in Chapter 9 of Intermediate Physics for Medicine and Biology was added in the 4th edition. It begins
Problem 9 Analytical solutions to the nonlinear Poisson-Boltzmann equation are rare but not unknown. Consider the case when the potential varies in one dimension (x), the potential goes to zero at large x, and there exist equal concentrations of monovalent cations and anions. Chandler et al. (1965) showed that the solution to the 1-d Poisson-Boltzmann equation, d²ζ/dx² = sinh(ζ), is…
You will need to get a copy of the book to see this lovely solution. It is a bit too complicated to write in this blog, but it involves the exponential function, the hyperbolic tangent function, and the inverse hyperbolic tangent function. I like this homework problem, because you can solve both the nonlinear and linear equations exactly, with the same boundary conditions, and compare them to get a good intuitive feel for the impact of the nonlinearity. I admit, the problem is a bit advanced for an intermediate-level book, but upper-level undergraduates or graduate students studying from our text should be up to the challenge.
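Without spoiling the closed-form answer, you can make the nonlinear-versus-linear comparison numerically. Multiplying d²ζ/dx² = sinh(ζ) by dζ/dx and integrating once (using ζ and dζ/dx → 0 at large x) gives dζ/dx = −2 sinh(ζ/2) for the decaying solution, which a few lines of Python can integrate; the surface value ζ(0) = 4 below is an arbitrary choice for illustration.

```python
import math

def solve_pb(zeta0, x_max=5.0, n=5000):
    """Fourth-order Runge-Kutta integration of the first integral of the
    1-d Poisson-Boltzmann equation, d zeta/dx = -2 sinh(zeta/2)."""
    g = lambda z: -2.0 * math.sinh(0.5 * z)
    h = x_max / n
    xs, zs = [0.0], [zeta0]
    z = zeta0
    for i in range(n):
        k1 = g(z)
        k2 = g(z + 0.5 * h * k1)
        k3 = g(z + 0.5 * h * k2)
        k4 = g(z + h * k3)
        z += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append((i + 1) * h)
        zs.append(z)
    return xs, zs

# Nonlinear solution for a large surface potential, versus the
# linearized solution zeta0 * exp(-x) with the same boundary value.
xs, zs_nl = solve_pb(zeta0=4.0)
zs_lin = [4.0 * math.exp(-x) for x in xs]
```

For ζ(0) = 4 the nonlinear potential falls well below the linearized one near the surface; repeat with ζ(0) = 0.1 and the two curves nearly coincide, which is the regime where the linearized equation applies.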

The full citation to the paper by Knox Chandler, Alan Hodgkin, and Hans Meves mentioned in the problem is
Chandler, W. K., A. L. Hodgkin, and H. Meves (1965) “The Effect of Changing the Internal Solution on Sodium Inactivation and Related Phenomena in Giant Axons,” Journal of Physiology, Volume 180, Pages 821–836.
I always thought it odd that one finds a really elegant analytical solution to the nonlinear Poisson-Boltzmann equation in a paper about sodium channel inactivation in a squid nerve axon (with Nobel Prize-winning physiologist Alan Hodgkin as a coauthor). The solution is buried in the discussion (in a section set off in a smaller font than the rest of the paper). The reason for its appearance is that Chandler et al. found changes in membrane behavior with intracellular ion concentration, and postulated that the measured voltage drop between the inside and outside of the axon consisted of a voltage drop across the membrane itself (which affects the ion channel behavior) and a voltage drop within a double layer adjacent to the membrane. It is the double layer voltage that they model using the Poisson-Boltzmann equation.

Nowadays, the nonlinear Poisson-Boltzmann equation is typically solved using numerical methods. See, for example, the paper that Russ Hobbie and I cite in Intermediate Physics for Medicine and Biology, written by Barry Honig and Anthony Nicholls: “Classical Electrostatics in Biology and Chemistry,” Science, Volume 268, Pages 1144–1149, 1995 (it now has over 1500 citations in the literature). Their abstract states
A major revival in the use of classical electrostatics as an approach to the study of charged and polar molecules in aqueous solution has been made possible through the development of fast numerical and computational methods to solve the Poisson-Boltzmann equation for solute molecules that have complex shapes and charge distributions. Graphical visualization of the calculated electrostatic potentials generated by proteins and nucleic acids has revealed insights into the role of electrostatic interactions in a wide range of biological phenomena. Classical electrostatics has also proved to be a successful quantitative tool yielding accurate descriptions of electrical potentials, diffusion limited processes, pH-dependent properties of proteins, ionic strength-dependent phenomena, and the solvation free energies of organic molecules.
Such calculations continue to be an active area of research. See, for example, “The Role of DNA Shape in Protein-DNA Recognition” by Remo Rohs, Sean West, Alona Sosinsky, Peng Liu, Richard Mann and Barry Honig (Nature, Volume 461, Pages 1248–1253, 2009).

Friday, August 12, 2011

Gouy and Chapman

Sometimes when I am studying physics, I run across a model or equation with names attached to it, and I wonder “just who are these people?” For example, in Chapter 9 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Gouy-Chapman model.
In this section we study one model for how ions are distributed at the interface in Donnan equilibrium. The model was used independently by Gouy and by Chapman to study the interface between a metal electrode and an ionic solution. They investigated the potential changes along the x axis perpendicular to a large plane electrode. The same model is used to study the charge distribution in a semiconductor.
So who are Gouy and Chapman? I can tell they lived a long time ago, because in the next section we write “In an ionic solution, ions of opposite charge attract one another. A model of this neutralization was developed by Debye and Huckel a few years after Gouy and Chapman developed [their] model.” I know Peter Debye worked in the early days of quantum mechanics, so Gouy and Chapman had to be a bit earlier.

As is often the case with scientific developments of that era, a Frenchman and an Englishman share credit for the discovery. Louis Georges Gouy (1854–1926) was a French experimental physicist from the University of Lyon. He is best known as the inventor of the Gouy balance, a device for measuring magnetic susceptibility. Gouy had an interest in Brownian motion at a time when the atomic nature of matter was still an open question. Russ and I discuss Brownian motion in Section 4.3 of our book.
[The] movement of microscopic-sized particles, resulting from bombardment by much smaller invisible atoms, was first observed by the English botanist Robert Brown in 1827 and is called Brownian motion.
Albert Einstein wrote a fundamental paper on Brownian motion in 1905, his miraculous year. In a subsequent paper on the same topic, Einstein began (Annalen der Physik, Volume 19, Pages 371–381, 1906)
Soon after the appearance of my paper on the movements of particles suspended in liquids demanded by the molecular theory of heat, Siedentopf (of Jena) informed me that he and other physicists—in the first instance, Prof. Gouy (of Lyons)—had been convinced by direct observation that the so-called Brownian motion is caused by the irregular thermal movements of the molecules of the liquid.
Gouy’s model sought to explain the electrical potential around small Brownian particles, or colloids. He described the double layer of charge that develops at the surface, with one charge layer bound to the surface of the particle, and a layer of counterions in the surrounding fluid.

David Leonard Chapman (1869–1958) was an English physical chemist at Oxford. He was interested in the theory of detonation in gases, and developed the Chapman-Jouguet condition describing their behavior. About three years after Gouy, Chapman derived a model describing the double layer at a charged surface. The Gouy-Chapman model is an application of what is now known as the Poisson-Boltzmann equation (Eq. 9.13 in our book).

In 1924, physicist Otto Stern extended the Gouy-Chapman model by noting that ions can’t be represented as point charges when they are within a few ion radii of the surface. This leads to the Stern layer of immobile counter-ions right next to the surface, and a diffuse layer of counter-ions whose concentration decays exponentially.

Friday, August 5, 2011

Fisher-Kolmogorov equation

The two volumes of Mathematical Biology, by James Murray.
In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss many of the important partial differential equations of physics, such as Laplace’s equation, the diffusion equation, and the wave equation. One lesser-known PDE that we don’t discuss is the Fisher-Kolmogorov equation. However, our book supplies most of what you need to understand this equation.

In Section 2.10, we examine the logistic equation, an ordinary differential equation governing population growth,

du/dt = b u (1 − u) .

For u much less than one, the population grows exponentially with rate b. As u approaches one, the population levels off near a steady state value of u = 1. Our Eq. 2.28 gives an analytical solution to this nonlinear equation.
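As a quick check, the closed-form solution (written below in one standard form; the parameter values b = 2 and u0 = 0.01 are arbitrary choices of mine) can be compared against a direct numerical integration of the logistic equation.

```python
import math

# Logistic growth du/dt = b u (1 - u): compare a Runge-Kutta integration
# against the standard closed form u(t) = u0 e^{bt} / (1 + u0 (e^{bt} - 1)).
b, u0 = 2.0, 0.01

def u_exact(t):
    e = math.exp(b * t)
    return u0 * e / (1.0 + u0 * (e - 1.0))

def u_numeric(t, n=10000):
    f = lambda u: b * u * (1.0 - u)
    h = t / n
    u = u0
    for _ in range(n):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

early = u_exact(0.1)    # nearly exponential growth, close to u0 * e^{bt}
late = u_exact(10.0)    # population has leveled off near u = 1
```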

In Section 4.8, we derive the diffusion equation, which for one dimension is

du/dt = D d²u/dx² .

This linear partial differential equation is one of the most famous in physics. It describes diffusion of particles, and also the flow of heat by conduction. D is the diffusion constant.

To get the Fisher-Kolmogorov equation, just put the logistic equation and the diffusion equation together:

du/dt = D d²u/dx² + b u (1 − u) .

The Fisher-Kolmogorov equation is an example of a “reaction-diffusion equation.” Russ and I discuss a similar reaction-diffusion equation in Homework Problem 24 of Chapter 4, when modeling intracellular calcium waves. The only difference is that we use a slightly more complicated reaction term rather than the logistic equation.
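The most celebrated property of the Fisher-Kolmogorov equation is that it admits traveling wavefront solutions with asymptotic speed 2√(Db). A minimal explicit finite-difference sketch shows this (the grid spacing, time step, domain, and initial profile below are my own arbitrary choices; the measured speed approaches 2√(Db) only slowly, from below).

```python
# Explicit finite differences for du/dt = D d2u/dx2 + b u (1 - u).
# A front forms and advances at a speed near the classic 2*sqrt(D*b).
D, b = 1.0, 1.0
dx, dt = 0.25, 0.02          # D*dt/dx^2 = 0.32, below the stability limit 1/2
nx = 801                     # domain 0 <= x <= 200
u = [1.0 if i * dx < 5.0 else 0.0 for i in range(nx)]  # step initial profile

def front(u):
    """x position where u falls through 0.5 (linear interpolation)."""
    for i in range(nx - 1):
        if u[i] >= 0.5 > u[i + 1]:
            return dx * (i + (u[i] - 0.5) / (u[i] - u[i + 1]))
    return None

def step(u):
    new = u[:]
    for i in range(1, nx - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (D * lap + b * u[i] * (1 - u[i]))
    new[0], new[-1] = new[1], 0.0   # no-flux on the left, u = 0 far right
    return new

t, x1 = 0.0, None
while t < 40.0:
    u = step(u)
    t += dt
    if x1 is None and t >= 20.0:
        x1 = front(u)                # front position at t = 20
x2 = front(u)                        # front position at t = 40
speed = (x2 - x1) / 20.0             # should be near 2*sqrt(D*b) = 2
```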

In his book Mathematical Biology, James Murray discusses the Fisher-Kolmogorov equation in detail. He states
The classic simplest case of a nonlinear reaction diffusion equation … is [The Fisher-Kolmogorov equation]… It was suggested by Fisher (1937) as a deterministic version of a stochastic model for the spatial spread of a favoured gene in a population. It is also the natural extension of the logistic growth population model discussed in Chapter 11 when the population disperses via linear diffusion. This equation and its travelling wave solutions have been widely studied, as has been the more general form with an appropriate class of functions f(u) replacing ku(1 − u). The seminal and now classical paper is that by Kolmogoroff et al. (1937)…. We discuss this model equation in the following section in some detail, not because in itself it has such wide applicability but because it is the prototype equation which admits travelling wavefront solutions. It is also a convenient equation from which to develop many of the standard techniques for analyzing single-species models with diffusive dispersal.
The Fisher-Kolmogorov equation was derived independently by Ronald Fisher (1890–1962), an English biologist, and Andrey Kolmogorov (1903–1987), a Russian mathematician. The key original papers are
Fisher, R. A. (1937) “The Wave of Advance of Advantageous Genes.” Annals of Eugenics, Volume 7, Pages 353–369.

Kolmogoroff, A., I. Petrovsky, and N. Piscounoff (1937) “Etude de l’equation de la diffusion avec croissance de la quantite de matiere et son application a un probleme biologique.” Moscow University Mathematics Bulletin, Volume 1, Pages 1–25.

Friday, July 29, 2011

The terahertz

In the opening section of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I provide a table of common prefixes used in the metric system:
giga (G) 10⁹
mega (M) 10⁶
kilo (k) 10³
milli (m) 10⁻³
micro (μ) 10⁻⁶
nano (n) 10⁻⁹
pico (p) 10⁻¹²
femto (f) 10⁻¹⁵
atto (a) 10⁻¹⁸
We went all the way down to “atto” on the small side, but stopped at “giga” on the large side. I now wish we had skipped “atto” and instead included “tera,” or “T”, corresponding to 10¹². Why? Because the prefix tera can be very useful in optics when discussing the frequency of light.

To better appreciate why, take a look at the letter to the editor “Let’s Talk TeraHertz!” in the April 2011 issue of my favorite journal, the American Journal of Physics. There Roger Lewis—from the melodious University of Wollongong—argues that the terahertz is superior to the nanometer when discussing light. Lewis writes
…the terahertz shares the desirable properties of the nanometer as a unit in teaching optics… Like the nanometer, the terahertz conveniently represents visible light to three digits in numbers that fall in the midhundreds… The terahertz has other desirable properties that the nanometer lacks. First, the frequency is a more fundamental property of light than the wavelength because the frequency does not change as light traverses different media, whereas the wavelength may. Second, the energy of a photon is directly proportional to its frequency… The visible spectrum is often taken to span 400–700 nm, corresponding to 749–428 THz, falling in the octave 400–800 THz. …
I suspect that the reason I have always preferred wavelength over frequency when discussing light is that the nanometer provides such a nice, easy-to-remember unit to work with. Had I realized from the start that terahertz offered an equally useful unit for discussing frequency, I might naturally think in terms of frequency rather than wavelength. Incidentally, Planck’s constant is 0.00414 (or about 1/240) in the units of eV/THz.
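The bookkeeping behind these numbers is just ν = c/λ and E = hν. A few lines of Python make the conversion (the wavelengths are the conventional visible-band edges quoted by Lewis):

```python
# Convert wavelength in nm to frequency in THz, and frequency to photon
# energy in eV, using nu = c/lambda and E = h*nu.
c = 2.998e8            # speed of light, m/s
h = 4.136e-3           # Planck's constant, eV/THz (about 1/240)

def nm_to_thz(lam_nm):
    return c / (lam_nm * 1e-9) / 1e12

red = nm_to_thz(700)                # ~428 THz, red edge of the visible
violet = nm_to_thz(400)             # ~749 THz, violet edge
green_energy = h * nm_to_thz(555)   # mid-visible photon, a bit over 2 eV
```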

After reading Lewis’s letter, I checked Intermediate Physics for Medicine and Biology to see how Russ and I characterized the properties of visible light. On page 360 I found our Table 14.2, which lists the different colors of the electromagnetic spectrum in terms of wavelength (nm), energy (eV) and frequency. We didn’t explicitly mention the unit THz, but we did list the frequency in units of 10¹² Hz, so terahertz was there in every way but name. As a rule I don’t like to write in my books, but nevertheless I suggest that owners of Intermediate Physics for Medicine and Biology take a pencil and replace “(10¹² Hz)” in Table 14.2 with “(THz)”.

Russ and I discuss the terahertz explicitly in Chapter 14 about Atoms and Light.
14.6.4 Far Infrared or Terahertz Radiation

For many years, there were no good sources or sensitive detectors for radiation between microwaves and the near infrared (0.1–100 THz; 1 THz = 10¹² Hz). Developments in optoelectronics have solved both problems, and many investigators are exploring possible medical uses of THz radiation (“T rays”). Classical electromagnetic wave theory is needed to describe the interactions, and polarization (the orientation of the E vector of the propagating wave) is often important. The high attenuation of water in this frequency range means that studies are restricted to the skin or surface of organs such as the esophagus that can be examined endoscopically. Reviews are provided by Smye et al. (2001), Fitzgerald et al. (2002), and Zhang (2002).
The citations are to
Fitzgerald, A. J., E. Berry, N. N. Zinonev, G. C. Walker, M. A. Smith and J. M. Chamberlain (2002) “An Introduction to Medical Imaging with Coherent Terahertz Frequency Radiation,” Physics in Medicine and Biology, Volume 47, Pages R67–R84.

Smye, S. W., J. M. Chamberlain, A. J. Fitzgerald and E. Berry (2001) “The Interaction Between Terahertz Radiation and Biological Tissue,” Physics in Medicine and Biology, Volume 46, Pages R101–R112.

Zhang, X-C. (2002) “Terahertz Wave Imaging: Horizons and Hurdles,” Physics in Medicine and Biology, Volume 47, Pages 3667–3677.
So not only is the terahertz useful when talking about visible light, but also when working in the far infrared, where the frequency is about 1 THz. Such “T rays” (I hate that term) are being used nowadays for airport security imaging and as a tool to study cell biology and cancer.

Friday, July 22, 2011

Euler: The Master of Us All

Euler: The Master of Us All, by William Dunham.
Swiss mathematician Leonhard Euler (1707–1783) is a fascinating man. I discussed him once before in this blog, during an entry about the book e: The Story of a Number. Euler’s name never appears in the 4th edition of Intermediate Physics for Medicine and Biology, but his influence is there.
William Dunham describes Euler’s life and work in his book Euler: The Master of Us All. In the Preface, Dunham writes
This book is about one of the undisputed geniuses of mathematics, Leonhard Euler. His insight was breathtaking, his vision profound, his influence as significant as that of anyone in history. Euler contributed to long-established branches of mathematics like number theory, analysis, algebra, and geometry. He also ventured into the largely unexplored territory of analytic number theory, graph theory, and differential geometry. In addition, he was his century’s foremost applied mathematician, as his work in mechanics, optics, and acoustics amply demonstrates. There was hardly an aspect of the subject that escaped Euler’s penetrating gaze. As the twentieth-century mathematician Andre Weil put it, “All his life…he seems to have carried in his head the whole of the mathematics of his day, both pure and applied.”
In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss one of Euler’s best known contributions, his relationship between the exponential function, trigonometric functions, and complex numbers.
The numbers that we have been using are called real numbers. The number i = √−1 is called an imaginary number. A combination of a real and imaginary number is called a complex number. The remarkable property of imaginary numbers that makes them useful in this context is that e^iθ = cos θ + i sin θ.
Dunham wrote about this identity:
“From these equations,” Euler noted with evident satisfaction, “we understand how complex exponentials can be expressed by real sines and cosines.” His enthusiasm has been echoed by mathematicians ever since. Few would argue that Euler’s identity is among the most beautiful formulas of all.
Euler didn’t invent complex numbers, but he did contribute significantly to their development, including a derivation of this gem (“which seems extraordinary to me,” wrote Euler)

i^i = e^(−π/2).
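You can verify this gem directly in Python, which (like Euler's derivation) takes the principal branch of the complex logarithm: log i = iπ/2, so i^i = e^(i log i) = e^(−π/2).

```python
import cmath
import math

# i^i two ways: Python's complex power, and exp(i * log i) explicitly.
# On the principal branch, log(i) = i*pi/2, so i^i = e^(-pi/2), a real number.
direct = 1j ** 1j
via_log = cmath.exp(1j * cmath.log(1j))
expected = math.exp(-math.pi / 2)   # about 0.20788
```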

Dunham’s book gives examples of Euler’s contributions to number theory, logarithms, infinite series, analytic number theory, complex variables, algebra, geometry, and combinatorics. For instance, Dunham describes a discovery Euler made while in his 20s.
One of his earliest triumphs was a solution of the so-called “Basel Problem” that perplexed mathematicians for the better part of the previous century. The issue was to determine the exact value of the infinite series

1 + 1/4 + 1/9 + 1/16 + 1/25 + … + 1/k² + … .

… The answer was not only a mathematical tour de force but a genuine surprise, for the series sums to π²/6. This highly non-intuitive result made the solution all the more spectacular and its solver all the more famous.
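It is satisfying to watch the Basel series creep toward π²/6 numerically. The convergence is slow (the tail left after N terms is roughly 1/N), so the sketch below also adds that simple tail estimate as a correction:

```python
import math

# Partial sum of the Basel series 1 + 1/4 + 1/9 + ... versus pi^2/6.
N = 100000
partial = sum(1.0 / k ** 2 for k in range(1, N + 1))
target = math.pi ** 2 / 6

# The neglected tail is about 1/N, so adding it back sharpens the estimate
# from roughly 5 correct digits to about 10.
corrected = partial + 1.0 / N
```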
As he grew older, Euler slowly became blind. His accomplishments despite his handicap remind me of Beethoven composing his majestic 9th symphony after going deaf. Dunham writes about Euler
Although unable to see, he not only maintained but even increased his scientific output. In the year 1775, for instance, he wrote an average of one mathematical paper per week. Such productivity came in spite of the fact that he now had to have others read him the contents of scientific papers, and he in turn had to dictate his work to diligent scribes. During his descent into blindness, he wrote an influential textbook on algebra, a 775-page treatise on the motion of the moon, and a massive, three-volume development of integral calculus, the Institutiones calculi integralis. Never was his remarkable memory more useful than when he could see mathematics only in his mind’s eye.

That this blind and aging man forged ahead with such gusto is a remarkable lesson, a tale for the ages. Euler’s courage, determination, and utter unwillingness to be beaten serves, in the truest sense of the word, as an inspiration for mathematician and non-mathematician alike. The long history of mathematics provides no finer example of the triumph of the human spirit.
Dunham concludes
Euler left behind a legacy of epic proportions. So prolific was he that the journal of the St. Petersburg Academy was still publishing the backlog of his papers a full 48 years after his death. There is hardly a branch of mathematics—or for that matter of physics—in which he did not play a significant role.
Listen to William Dunham talk about Leonhard Euler.
https://www.youtube.com/embed/fEWj93XjON0

Friday, July 15, 2011

The leibniz

In order to motivate the study of thermal physics, Chapter 3 of the 4th edition of Intermediate Physics for Medicine and Biology begins with an examination of how many equations are required to simulate the motion of all the molecules in one cubic millimeter of blood. Russ Hobbie and I write
It is possible to identify all the external forces acting on a simple system and use Newton’s second law (F = ma) to calculate how the system moves … In systems of many particles, such calculations become impossible. Consider, for example, how many particles there are in a cubic millimeter of blood. Table 3.1 shows some of the constituents of such a sample [including 3.3 × 10¹⁹ water molecules]. To calculate the translational motion in three dimensions, it would be necessary to write three equations for each particle using Newton’s second law. Suppose that at time t the force on a molecule is F. Between t and t + Δt, the velocity of the particle changes according to the three equations

vᵢ(t+Δt) = vᵢ(t) + FᵢΔt/m, (i = x, y, z).

The three equations for the change of position of the particle are of the form x(t + Δt) = x(t) + vₓ(t)Δt … Solving these equations requires at least six multiplications and six additions for each particle. For 10¹⁹ particles, this means about 10²⁰ arithmetic operations per time interval … It is impossible to trace the behavior of this many molecules on an individual basis.

Nor is it necessary. We do not care which water molecule is where. The properties of a system that are of interest are averages over many molecules: pressure, concentration, average speed, and so forth. These average macroscopic properties are studied in statistical or thermal physics or statistical mechanics.
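The time stepping in the quoted passage is the explicit Euler method. Here is a minimal sketch for N particles (with placeholder arrays, not a real molecular force model):

```python
import numpy as np

def euler_step(x, v, F, m, dt):
    """Advance N particles by one time step dt.
    x, v, F are (N, 3) arrays of positions, velocities, and forces; m is the mass.
    Each particle needs six multiplications and six additions, as the text counts."""
    v_new = v + (F * dt) / m   # v_i(t + dt) = v_i(t) + F_i * dt / m, for i = x, y, z
    x_new = x + v * dt         # x_i(t + dt) = x_i(t) + v_i(t) * dt
    return x_new, v_new
```

For 10¹⁹ particles each step is roughly 10²⁰ arithmetic operations, which is the book’s point: the direct calculation is hopeless.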
It is difficult to gain an intuitive feel for just how many differential equations are needed in such a calculation, just as it is difficult to imagine just how many molecules make up a macroscopic bit of matter. Chemists have solved the problem of dealing with large numbers of molecules by introducing the unit of a mole, corresponding to Avogadro’s number (6 × 10²³) of molecules. Other quantities involving Avogadro’s number are similarly defined. For instance, the Faraday corresponds to the magnitude of the charge of one mole of electrons (I admit, the Faraday is more of a constant than a unit); see page 60 and Eq. 3.32 of Intermediate Physics for Medicine and Biology. In Problem 2 of Chapter 14, Russ and I discuss the einstein, a unit corresponding to a mole of photons. When doing large-scale numerical simulations on a computer, it would be useful to have a similar unit to handle very large numbers of differential equations, such as are required to model a drop of blood.

Fortunately, such a unit exists, called the leibniz. Sui Huang and John Wikswo coined the term in their paper “Dimensions of Systems Biology,” published in the Reviews of Physiology, Biochemistry and Pharmacology (Volume 157, Pages 81–104, 2006). They write
The electrical activity of the heart during ten seconds of fibrillation could easily require solving 10¹⁸ coupled differential equations (Cherry et al. 2000). (N.B., Avogadro’s number of differential equations may be defined as one Leibnitz, so 10 s of fibrillation corresponds to a micro-Leibnitz problem.) Multiprocessor supercomputers running for a month can execute a micromole of floating point operations, but in the cardiac case such computers may run several orders of magnitude slower than real time, such that modeling 10 s of fibrillation might require 1 exaFLOP/s × year.
The leibniz appeared again in Wikswo et al.’s paper “Engineering Challenges of BioNEMS: The Integration of Microfluidics, Micro- and Nanodevices, Models and External Control for Systems Biology” in the IEE Proceedings Nanobiotechnology (Volume 153, Pages 81–101, 2006).
What distinguishes the models of systems biology from those of many other disciplines is their multiscale richness in both space and time: these models may eventually have millions of dynamic variables with complex non-linear interactions. It is conceivable that the ultimate models for systems biology might require a mole of differential equations (called a Leibnitz) and computations that require a yottaFLOPs (floating point operations per second) computer.
If we take the leibniz (Lz) as our unit of simulation complexity, the calculation Russ and I consider at the start of Chapter 3 requires solving approximately 6 × 10¹⁹ differential equations, or about 0.1 mLz. Note that we describe two first-order differential equations for each molecule, but others might prefer to speak of a single second-order differential equation. This would make a difference of a factor of two in the number of equations. I propose that when using the leibniz we consider only first-order ODEs. Moreover, when using a differential equation governing a vector, we count one equation per component.
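The conversion to leibniz is plain arithmetic; here is a small sketch (the constant is rounded to 6 × 10²³ as in the text, and the function name is mine):

```python
AVOGADRO = 6.0e23  # first-order differential equations per leibniz (Lz)

def to_leibniz(n_equations):
    """Express a count of first-order ODEs in leibniz."""
    return n_equations / AVOGADRO

blood_drop = to_leibniz(6.0e19)    # the Chapter 3 example: 1e-4 Lz, i.e., 0.1 mLz
fibrillation = to_leibniz(1.0e18)  # Wikswo's "micro-Leibnitz problem": ~2e-6 Lz
```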

For those not familiar with Gottfried Leibniz (1646–1716), he was a German mathematician and a co-inventor of calculus, along with Isaac Newton. Leibniz and Newton got into one of the biggest priority disputes in the history of science over this landmark development. Newton has his unit, so it’s only fair that Leibniz has one too. Leibniz also made contributions to information theory and computational science, so the leibniz is a particularly appropriate way to honor this great mathematician.

John Wikswo, my PhD advisor when I was in graduate school at Vanderbilt University, notes that there are two alternative spellings of Leibniz’s name: Leibnitz and Leibniz. I favor “Leibniz,” the spelling on Wikipedia, and so does Wikswo now, but he points out that there’s plenty of support for “Leibnitz,” the spelling he used in his earlier publications. I had high hopes of enjoying a bit of fun at my friend’s expense by adding an annoying “[sic]” after each appearance of “Leibnitz” in the above quotes, but then Wikswo pointed out that Richard Feynman used “Leibnitz” in The Feynman Lectures on Physics. What can I say; you can’t argue with Feynman.

Friday, July 8, 2011

Gasiorowicz

Quantum Physics, by Stephen Gasiorowicz, superimposed on Intermediate Physics for Medicine and Biology.
One of the standard topics in any modern physics class is blackbody radiation. Indeed, it was the study of blackbody radiation that led to the development of quantum mechanics. In Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
The spectrum of power per unit area emitted by a completely black surface in the wavelength interval between λ and λ + dλ is … a universal function called the blackbody radiation function. … The description of [this] function … by Planck is one of the foundations of quantum mechanics … We can find the total amount of power emitted per unit surface area by integrating¹⁰ Eq. 14.32 [Planck’s blackbody radiation function] … [The result] is the Stefan-Boltzmann law.
As I was reading over this section recently, I was struck by the footnote number ten (present in earlier editions of our book, so I know it was originally written by Russ).
¹⁰This is not a simple integration. See Gasiorowicz (1974, p. 6).
This is embarrassing to admit, but although I am a coauthor on the 4th edition, there are still topics in our book that I am learning about. I always feel a little guilty about this, so recently I decided it was high time to take a look at the book by Stephen Gasiorowicz and see just how difficult this integral really is. The result was fascinating. The integral is not terribly complicated, but it involves a clever trick I would have never thought of. Because math is rather difficult to write in the HTML of this blog (at least for me), I will explain how to evaluate this integral through a homework problem. When revising our book for the 4th edition, I enjoyed finding “missing steps” in derivations and then creating homework problems to lead the reader through them. For instance, in Problem 24 of Chapter 14, Russ and I asked the reader to “integrate Eq. 14.32 over all wavelengths to obtain the Stephan-Boltzmann law, Eq. 14.33.” Then, we added “You will need the integral [integrated from zero to infinity]

∫ x³/(eˣ − 1) dx = π⁴/15.

Below is a new homework problem related to footnote ten, in which the reader must evaluate the integral given at the end of Problem 24. I base this homework problem on the derivation I found in Gasiorowicz. In our book, we cite the 1974 edition.
Gasiorowicz, S. (1974) Quantum Physics. New York, Wiley.
This is the edition in Kresge library at Oakland University, and is the one I used to create the homework problem. However, I found using amazon.com’s “look inside” feature that this derivation is also in the more recent 3rd edition (2003). In addition, I found the derivation repeated in another of Gasiorowicz’s books, The Structure of Matter.
Problem 24 ½ Evaluate the integral given in Problem 24.
(a) Factor out e⁻ˣ, and then use the geometric series 1 + z + z² + z³ + … = 1/(1−z) to replace the denominator by an infinite sum.
(b) Make the substitution y = (n+1) x.
(c) Evaluate the resulting integral over y, either by looking it up or (better) by repeated integration by parts.
(d) Make the substitution m = n + 1.
(e) Use the fact that the sum of 1/m⁴ from 1 to infinity is equal to π⁴/90 to evaluate the integral.
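Steps (a)–(e) can be checked numerically. After the substitutions, each term x³e⁻ᵐˣ integrates (by parts, three times) to 6/m⁴, so the whole integral is 6 Σ 1/m⁴ = 6 · π⁴/90 = π⁴/15. A quick sketch:

```python
import math

# Sum 6/m^4 over m = 1, 2, 3, ..., which the homework steps show equals the integral.
series_value = 6 * sum(1 / m**4 for m in range(1, 10001))
exact_value = math.pi**4 / 15  # since the sum of 1/m^4 is pi^4/90, and 6/90 = 1/15
```

The series converges quickly because the terms fall off as 1/m⁴; ten thousand terms agree with π⁴/15 to better than nine decimal places.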
Really, who would have thought to replace 1/(1−z) by an infinite series? Usually, I am desperately trying to do just the opposite: get rid of an infinite series, such as a geometric series, by replacing it with a simple function like 1/(1−z). The last thing I would have wanted to do is to introduce a dreaded infinite sum into the calculation. But it works. I must admit, this is a bit of a cheat. Even if in part (c) you don’t look up the integral, but instead laboriously integrate by parts several times, you still have to pull a rabbit out of the hat in step (e) when you sum up 1/m⁴. Purists will verify this infinite sum by calculating the Fourier series over the range 0 to 2π of the function

f(x) = π⁴/90 − π² x²/12 + π x³/12 − x⁴/48

and then evaluating it at x = 0. (Of course, you know how to calculate a Fourier series, since you have read Chapter 11 of Intermediate Physics for Medicine and Biology). When computing Fourier coefficients, you will need to do a bunch of integrals containing powers of x times cos(nx), but you can do those by—you guessed it—repeated integration by parts. Thus, even if lost on a deserted island without your math handbook or a table of integrals, you should still be able to complete the new homework problem using Gasiorowicz’s method. I’m assuming you know how to do some elementary calculus—integrate by parts and simple integrals of powers, exponentials, and trigonometric functions—without looking it up. (Full disclosure: I found the function f(x) given above by browsing through a table of Fourier series in my math handbook. On that lonely island, you would have to guess f(x), so let’s hope you at least remembered to bring along plenty of scrap paper.)

I checked out the website for Gasiorowicz’s textbook. There is a lot of interesting material there. The book covers many of the familiar topics of modern physics: blackbody radiation, the photoelectric effect, the Bohr model for hydrogen, the uncertainty principle, the Schrödinger equation, and more, all the way up to the structure of atoms and molecules. I learned this material from Eisberg and Resnick’s Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (1985), cited several times in Intermediate Physics for Medicine and Biology, when I used their book in my undergraduate modern physics class at the University of Kansas. For an undergraduate quantum mechanics class, I like Griffiths’s Introduction to Quantum Mechanics, in part because I have taught from that book. But Gasiorowicz’s book appears to be in the same class as these two. I noticed that Gasiorowicz is from the University of Minnesota, so perhaps Russ knows him.

P.S. Did any of you dear readers notice that Russ and I spelled the name “Stefan” of the “Stefan-Boltzmann law” differently in the text of Chapter 14 and in Problem 24? I asked Google, and it found sites using both spellings, but the all-knowing Wikipedia favors “Stefan”. I’m not 100% certain which is correct (it may have to do with the translation from Slovene to English), but we should at least have been consistent within our book.