Last week’s blog entry discussed Fresnel diffraction, which Russ Hobbie and I analyzed in the 4th edition of Intermediate Physics for Medicine and Biology when we examined the ultrasonic pressure distribution produced near a circular piezoelectric transducer. This week, I will analyze diffraction far from the wave source, known as Fraunhofer diffraction, named for the German scientist Joseph von Fraunhofer.
The mathematics of Fraunhofer diffraction is a bit too complicated to derive here, but the gist of it can be found by inspecting the first equation in Section 13.7 (Medical Uses of Ultrasound), found at the bottom of the left column on page 351. The pressure is found by integrating 1/r times a cosine function over the transducer face. When you are far from the transducer, r is approximately a constant and can be taken out of the integral. In that case, you just integrate cosine over the transducer area. This becomes similar to the two-dimensional Fourier transform defined in Chapter 12 (Images). The far field pressure distribution produced by a circular transducer given in Eq. 13.40 is the same Bessel function result as derived in Problem 10 of Chapter 12.
The intensity distribution in Eq. 13.40 is known as the Airy pattern, after the English scientist and mathematician George Biddell Airy. As shown in Fig. 13.15, the pattern consists of a central peak, surrounded by weaker secondary maxima. The Airy pattern occurs during imaging using a circular aperture, such as when viewing stars through a telescope. Two adjacent stars appear as two Airy patterns. Distinguishing the two stars is difficult unless the separation between the images is greater than the separation between the peak of the Airy pattern and its first zero. This is called the Rayleigh criterion, after Lord Rayleigh. Rayleigh (1842–1919, born John William Strutt)—one of those 19th century English Victorian physicists I like so much—did fundamental work in acoustics, and published the classic textbook Theory of Sound.
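The Airy pattern is easy to compute yourself. Below is a short Python sketch (my own, not from the book) that evaluates the far-field intensity (2J₁(x)/x)², where x = ka sinθ, and locates the first zero of J₁, which sets the Rayleigh criterion and gives the familiar 1.22 λ/d factor for an aperture of diameter d.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

# Far-field (Fraunhofer) intensity of a circular aperture: the Airy pattern.
# x = k*a*sin(theta), with a the aperture radius and k = 2*pi/lambda.
def airy_intensity(x):
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)          # limit of (2*J1(x)/x)**2 as x -> 0 is 1
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

# The first zero of J1 sets the Rayleigh criterion.
first_zero = brentq(j1, 3.0, 4.5)
print(first_zero)            # about 3.8317
print(first_zero / np.pi)    # about 1.22, the familiar factor in 1.22*lambda/d
```

Dividing the first zero by π converts the criterion from x = ka sinθ to the usual statement sinθ ≈ 1.22 λ/d.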
Friday, September 2, 2011
Friday, August 26, 2011
Fresnel Diffraction
In Section 13.7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the medical uses of ultrasound. One important problem we analyze is the pressure distribution produced by a piezoelectric transducer.
The calculated intensity along the axis, as shown in our Fig. 13.13, is interesting. In the Fresnel zone, the intensity has many points where it is zero. In Intermediate Physics for Medicine and Biology we calculate why this happens mathematically, but it is illuminating to describe what is happening physically. Basically, this is a result of wave interference. Our statement that “each small element of the vibrating fluid creates a wave that travels radially outward” is often called Huygens principle. Each point on the face of the transducer produces such a wavelet. To understand the pressure distribution, we must examine the phase relationship among these various wavelets. Very near the face of the transducer, the waves that contribute significantly to the pressure are in phase; they all interfere constructively and you get a maximum (evaluate Eq. 13.39 at z = 0 and you get a nonzero constant). However, as you move away, more distant points on the transducer face contribute to the pressure on the axis, and these points may be out of phase with the pressure produced by the point at the center. For some value of z the in-phase and out-of-phase wavelets interfere destructively, resulting in zero intensity. Increase z a little more, and not only do the in-phase points at the center and the out-of-phase points just away from the center contribute to the pressure, but so do some in-phase points even farther from the center. When you add it all up, you get a net constructive interference and a non-zero intensity. And so it goes, as you move out farther and farther along the z axis.
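This interference argument can be made quantitative in a few lines. The sketch below (my own) uses the standard closed-form result for the on-axis field of a uniformly vibrating circular piston: the intensity is proportional to sin²(k(√(z² + a²) − z)/2), so nulls occur wherever the path difference √(z² + a²) − z equals a whole number of wavelengths. The sound speed of 1540 m/s for soft tissue is my assumption.

```python
import numpy as np

a = 0.5e-2          # transducer radius, m (0.5 cm, as in Fig. 13.13)
f = 2.0e6           # frequency, Hz
c = 1540.0          # assumed speed of sound in soft tissue, m/s
lam = c / f         # wavelength
k = 2 * np.pi / lam

def axial_intensity(z):
    # Normalized on-axis intensity of a circular piston source.
    path_diff = np.sqrt(z**2 + a**2) - z
    return np.sin(k * path_diff / 2.0) ** 2

# Nulls: sqrt(z^2 + a^2) - z = m*lam  =>  z_m = (a^2 - (m*lam)^2)/(2*m*lam)
m = np.arange(1, 6)
z_zeros = (a**2 - (m * lam)**2) / (2 * m * lam)
print(z_zeros * 100)   # positions of the axial nulls, in cm
```

The outermost null lands at about 1.6 cm, roughly half the Fresnel zone depth a²/λ ≈ 3.2 cm for these parameters, beyond which the intensity falls off smoothly.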
The radial distribution of the intensity is surprisingly rich and complex, given the rather simple integral that underlies the behavior. If you want to explore the radial distribution in more detail, go to the excellent website http://wyant.optics.arizona.edu/fresnelZones/fresnelZones.htm, where you can perform these calculations yourself. You can adjust the parameters as you wish and create plots such as those in Fig. 13.14, and also produce grayscale images of the full intensity distribution that provide much insight. The website was produced with optics in mind, so you have to put in strange-looking parameters to model ultrasound. To reproduce the middle panel of Fig. 13.14, input 770,000 for the wavelength in nm, 10,000 for the aperture diameter in microns, and 15.75 for the observation distance in mm. To my eye, the agreement between the website’s calculation and Fig. 13.14 is impressive. At small values of z the plots get very complex and beautiful. For the same wavelength and aperture, I like the richness of z = 5 mm, and for z = 4 mm you get a fairly uniform brightness except for a dramatic dark spot right at the center. It reminds me of Poisson’s spot, which I discussed in the September 17, 2010 entry in this blog, about Augustin-Jean Fresnel. Indeed, the physics behind the calculations in Fig. 13.14 and Poisson’s spot in optics are nearly identical. The circular aperture is a classic problem in Fresnel diffraction. You can find a more detailed discussion of this topic in the textbook Optics (4th edition), by Eugene Hecht. (My bookshelf contains the first edition, by Hecht and Zajac, that I used in my undergraduate optics class at the University of Kansas.)
If you want to be clever, you could make the ultrasound transducer vibrate only at those radii that result in constructive interference along the axis, and have it remain stationary at radii that cause destructive interference. (Of course, this would mean you would have to design your transducer face cleverly so concentric rings vibrate, separated by rings that do not, which might make constructing the transducer more difficult.) Using such a trick eliminates the dark spots along the z axis, increasing the intensity there. This method is commonly used to focus light waves, and is called a zone plate. It has been used occasionally with ultrasound.
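To make the zone-plate idea concrete, here is a hypothetical sketch (the wavelength and focal distance are assumed values, not from the book). The nth zone boundary sits where the path from the ring to the chosen axial point exceeds z by nλ/2, giving radii r_n = √(nλz + (nλ/2)²); silencing every other zone removes the destructively interfering wavelets.

```python
import math

lam = 0.77e-3   # wavelength, m (2 MHz ultrasound in tissue, assumed)
z = 0.02        # desired focal distance along the axis, m (assumed)

def zone_radius(n):
    # Radius of the nth Fresnel zone boundary for an axial point at z:
    # path difference from the ring to the point equals n*lam/2.
    return math.sqrt(n * lam * z + (n * lam / 2) ** 2)

for n in range(1, 6):
    print(n, round(zone_radius(n) * 1000, 2), "mm")
```

Note how the rings crowd together as n grows, which is why zone plates need finer and finer features toward their edges.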
There are some important features of the radiation pattern from a transducer which we review next. Consider a circular transducer, the surface of which is oscillating back and forth in a fluid… Each small element of the vibrating fluid creates a wave that travels radially outward, the points of constant phase being expanding hemispheres. The amplitude of each spherical wave decreases as 1/r, the intensity falling as 1/r2. We want the pressure at a point z on the axis of the transducer. It is obtained by summing up the effect of all the spherical waves emanating from the face of the transducer….
The [average intensity] is plotted in Fig. 13.13 for a fairly typical but small transducer (a = 0.5 cm, f = 2 MHz)... Close to the transducer there are large oscillations in intensity along the axis: there are corresponding oscillations perpendicular to the axis, as shown in Fig. 13.14. The maxima and minima form circular rings. This is called the near field or Fresnel zone… The depth of the Fresnel zone is approximately a2/λ [where a is the radius of the transducer, λ is the wavelength, and f is the frequency].
Fig. 13.13.
Fig. 13.14.
Friday, August 19, 2011
The Nonlinear Poisson-Boltzmann Equation
Last week’s blog entry was about the Gouy-Chapman model for a charged double layer at an electrode surface. The model is based on the Poisson-Boltzmann equation (Eq. 9.10 in the 4th edition of Intermediate Physics for Medicine and Biology). One interesting feature of the Poisson-Boltzmann equation is that it is nonlinear. In applications when the thermal energy of ions in solution is much greater than the energy of the ions in an electrical potential, the equation can be linearized (Eq. 9.13). That is not always the case.
Homework problem 9 in Chapter 9 of Intermediate Physics for Medicine and Biology was added in the 4th edition. It begins
Problem 9 Analytical solutions to the nonlinear Poisson-Boltzmann equation are rare but not unknown. Consider the case when the potential varies in one dimension (x), the potential goes to zero at large x, and there exist equal concentrations of monovalent cations and anions. Chandler et al. (1965) showed that the solution to the 1-d Poisson-Boltzmann equation, d²ζ/dx² = sinh(ζ), is…

You will need to get a copy of the book to see this lovely solution. It is a bit too complicated to write in this blog, but it involves the exponential function, the hyperbolic tangent function, and the inverse hyperbolic tangent function. I like this homework problem because you can solve both the nonlinear and linear equations exactly, with the same boundary conditions, and compare them to get a good intuitive feel for the impact of the nonlinearity. I admit, the problem is a bit advanced for an intermediate-level book, but upper-level undergraduates or graduate students studying from our text should be up to the challenge.
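You can build the same intuition numerically without seeing the closed form. The sketch below (my own, in the problem's dimensionless variables) uses a first integral: multiplying d²ζ/dx² = sinh(ζ) by dζ/dx and integrating, with ζ → 0 at infinity, gives dζ/dx = −2 sinh(ζ/2) for the decaying branch, so a single forward integration suffices, and the result can be compared with the linearized solution ζ₀e^(−x).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nonlinear 1-d Poisson-Boltzmann, d2z/dx2 = sinh(z), versus its
# linearization d2z/dx2 = z, for the same surface potential z(0).
z0 = 4.0   # a "large" surface potential, where the linearization is poor

sol = solve_ivp(lambda x, z: -2.0 * np.sinh(z / 2.0),
                (0.0, 5.0), [z0], dense_output=True, rtol=1e-10, atol=1e-12)

x = np.linspace(0.0, 5.0, 6)
nonlinear = sol.sol(x)[0]
linear = z0 * np.exp(-x)          # linearized solution, same z(0)
for xi, zn, zl in zip(x, nonlinear, linear):
    print(f"x={xi:.0f}  nonlinear={zn:.4f}  linearized={zl:.4f}")
```

The nonlinear potential falls off faster near the surface, and the two solutions agree only once ζ is much less than one, which is exactly the comparison the homework problem asks you to make analytically.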
The full citation to the paper by Knox Chandler, Alan Hodgkin, and Hans Meves mentioned in the problem is
Chandler, W. K., A. L. Hodgkin, and H. Meves (1965) “The Effect of Changing the Internal Solution on Sodium Inactivation and Related Phenomena in Giant Axons,” Journal of Physiology, Volume 180, Pages 821–836.

I always thought it odd that one finds a really elegant analytical solution to the nonlinear Poisson-Boltzmann equation in a paper about sodium channel inactivation in a squid nerve axon (with Nobel Prize-winning physiologist Alan Hodgkin as a coauthor). The solution is buried in the discussion (in a section set off in a smaller font than the rest of the paper). The reason for its appearance is that Chandler et al. found changes in membrane behavior with intracellular ion concentration, and postulated that the measured voltage drop between the inside and outside of the axon consisted of a voltage drop across the membrane itself (which affects the ion channel behavior) and a voltage drop within a double layer adjacent to the membrane. It is the double-layer voltage that they model using the Poisson-Boltzmann equation.
Nowadays, the nonlinear Poisson-Boltzmann equation is typically solved using numerical methods. See, for example, the paper that Russ Hobbie and I cite in Intermediate Physics for Medicine and Biology, written by Barry Honig and Anthony Nicholls: “Classical Electrostatics in Biology and Chemistry,” Science, Volume 268, Pages 1144–1149, 1995 (it now has over 1500 citations in the literature). Their abstract states
A major revival in the use of classical electrostatics as an approach to the study of charged and polar molecules in aqueous solution has been made possible through the development of fast numerical and computational methods to solve the Poisson-Boltzmann equation for solute molecules that have complex shapes and charge distributions. Graphical visualization of the calculated electrostatic potentials generated by proteins and nucleic acids has revealed insights into the role of electrostatic interactions in a wide range of biological phenomena. Classical electrostatics has also proved to be a successful quantitative tool yielding accurate descriptions of electrical potentials, diffusion limited processes, pH-dependent properties of proteins, ionic strength-dependent phenomena, and the solvation free energies of organic molecules.

Such calculations continue to be an active area of research. See, for example, “The Role of DNA Shape in Protein-DNA Recognition” by Remo Rohs, Sean West, Alona Sosinsky, Peng Liu, Richard Mann and Barry Honig (Nature, Volume 461, Pages 1248–1253, 2009).
Friday, August 12, 2011
Gouy and Chapman
Sometimes when I am studying physics, I run across a model or equation with names attached to it, and I wonder “just who are these people?” For example, in Chapter 9 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Gouy-Chapman model.
In this section we study one model for how ions are distributed at the interface in Donnan equilibrium. The model was used independently by Gouy and by Chapman to study the interface between a metal electrode and an ionic solution. They investigated the potential changes along the x axis perpendicular to a large plane electrode. The same model is used to study the charge distribution in a semiconductor.

So who are Gouy and Chapman? I can tell they lived a long time ago, because in the next section we write “In an ionic solution, ions of opposite charge attract one another. A model of this neutralization was developed by Debye and Hückel a few years after Gouy and Chapman developed [their] model.” I know Peter Debye worked in the early days of quantum mechanics, so Gouy and Chapman had to be a bit earlier.
As is often the case with scientific developments in the 19th century, a Frenchman and an Englishman share credit for the discovery. Louis Georges Gouy (1854–1926) was a French experimental physicist from the University of Lyon. He is best known as the inventor of the Gouy balance, a device for measuring magnetic susceptibility. Gouy had an interest in Brownian motion at a time when the atomic nature of matter was still an open question. Russ and I discuss Brownian motion in Section 4.3 of our book.
[The] movement of microscopic-sized particles, resulting from bombardment by much smaller invisible atoms, was first observed by the English botanist Robert Brown in 1827 and is called Brownian motion.

Albert Einstein wrote a fundamental paper on Brownian motion in 1905, his miraculous year. In a subsequent paper on the same topic, Einstein began (Annalen der Physik, Volume 19, Pages 371–381, 1906)
Soon after the appearance of my paper on the movements of particles suspended in liquids demanded by the molecular theory of heat, Siedentopf (of Jena) informed me that he and other physicists—in the first instance, Prof. Gouy (of Lyons)—had been convinced by direct observation that the so-called Brownian motion is caused by the irregular thermal movements of the molecules of the liquid.

Gouy’s model sought to explain the electrical potential around small Brownian particles, or colloids. He described the double layer of charge that develops at the surface, with one charge layer bound to the surface of the particle, and a layer of counterions in the surrounding fluid.
David Leonard Chapman (1869–1958) was an English physical chemist at Oxford. He was interested in the theory of detonation in gases, and developed the Chapman-Jouguet condition describing their behavior. About three years after Gouy, Chapman derived a model describing the double layer at a charged surface. The Gouy-Chapman model is an application of what is now known as the Poisson-Boltzmann equation (Eq. 9.13 in our book).
In 1924, physicist Otto Stern extended the Gouy-Chapman model by noting that ions can’t be represented as point charges when they are within a few ion radii of the surface. This leads to the Stern layer of immobile counter-ions right next to the surface, and a diffuse layer of counter-ions whose concentration decays exponentially.
Friday, August 5, 2011
Fisher-Kolmogorov equation
Mathematical Biology, by James Murray.
In Section 2.10, we examine the logistic equation, an ordinary differential equation governing population growth,
du/dt = b u (1 − u) .
For u much less than one, the population grows exponentially with rate b. As u approaches one, the population levels off near a steady state value of u = 1. Our Eq. 2.28 gives an analytical solution to this nonlinear equation.
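For reference, the standard closed-form solution of the logistic equation (equivalent to Eq. 2.28; the parameter names below are my own) can be checked in a few lines of Python:

```python
import numpy as np

# Logistic equation du/dt = b*u*(1-u), with the standard analytic solution
#   u(t) = u0 / (u0 + (1 - u0)*exp(-b*t)).
b, u0 = 1.0, 0.01

def u_exact(t):
    return u0 / (u0 + (1.0 - u0) * np.exp(-b * t))

# Early times: exponential growth at rate b; late times: u levels off at 1.
t = np.array([0.0, 1.0, 5.0, 10.0, 20.0])
print(u_exact(t))
```

Differentiating u_exact reproduces b u (1 − u) term by term, and the two limits described in the text (exponential growth for u ≪ 1, saturation at u = 1) fall right out of the formula.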
In Section 4.8, we derive the diffusion equation, which for one dimension is
du/dt = D d²u/dx².
This linear partial differential equation is one of the most famous in physics. It describes diffusion of particles, and also the flow of heat by conduction. D is the diffusion constant.
To get the Fisher-Kolmogorov equation, just put the logistic equation and the diffusion equation together:
du/dt = D d²u/dx² + b u (1 − u).
The Fisher-Kolmogorov equation is an example of a “reaction-diffusion equation.” Russ and I discuss a similar reaction-diffusion equation in Homework Problem 24 of Chapter 4, when modeling intracellular calcium waves. The only difference is that we use a slightly more complicated reaction term rather than the logistic equation.
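Reaction-diffusion equations like this are also easy to explore numerically. The following toy model (my own, not from the book) integrates the Fisher-Kolmogorov equation with an explicit finite-difference scheme and estimates the speed of the resulting front, which should come out close to the classical minimum wave speed 2√(Db):

```python
import numpy as np

# Explicit finite-difference sketch of du/dt = D d2u/dx2 + b u (1 - u),
# starting from a step: u = 1 on the left, 0 on the right. A traveling
# front forms and moves right at close to 2*sqrt(D*b).
D, b = 1.0, 1.0
dx, dt = 0.5, 0.05                 # dt < dx**2/(2*D) for stability
x = np.arange(0.0, 200.0, dx)
u = np.where(x < 10.0, 1.0, 0.0)

times, positions = [], []
for step in range(1, 1601):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0         # crude no-flux boundaries
    u += dt * (D * lap + b * u * (1.0 - u))
    if step % 400 == 0:
        times.append(step * dt)
        positions.append(x[np.argmin(np.abs(u - 0.5))])  # front: u = 1/2

speed = (positions[-1] - positions[0]) / (times[-1] - times[0])
print(speed)   # close to 2*sqrt(D*b) = 2
```

The measured speed sits a percent or two below 2 because a step initial condition approaches the minimum speed slowly from below; running longer (on a bigger domain) closes the gap.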
The classic simplest case of a nonlinear reaction diffusion equation … is [the Fisher-Kolmogorov equation]… It was suggested by Fisher (1937) as a deterministic version of a stochastic model for the spatial spread of a favoured gene in a population. It is also the natural extension of the logistic growth population model discussed in Chapter 11 when the population disperses via linear diffusion. This equation and its travelling wave solutions have been widely studied, as has been the more general form with an appropriate class of functions f(u) replacing ku(1−u). The seminal and now classical paper is that by Kolmogoroff et al. (1937)…. We discuss this model equation in the following section in some detail, not because in itself it has such wide applicability but because it is the prototype equation which admits travelling wavefront solutions. It is also a convenient equation from which to develop many of the standard techniques for analyzing single-species models with diffusive dispersal.

The Fisher-Kolmogorov equation was derived independently by Ronald Fisher (1890–1962), an English biologist, and Andrey Kolmogorov (1903–1987), a Russian mathematician. The key original papers are
Fisher, R. A. (1937) “The Wave of Advance of Advantageous Genes.” Annals of Eugenics, Volume 7, Pages 353–369.
Kolmogoroff, A., I. Petrovsky, and N. Piscounoff (1937) “Étude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique.” Moscow University Mathematics Bulletin, Volume 1, Pages 1–25.
Friday, July 29, 2011
The terahertz
In the opening section of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I provide a table of common prefixes used in the metric system:
giga  G  10⁹
mega  M  10⁶
kilo  k  10³
milli m  10⁻³
micro μ  10⁻⁶
nano  n  10⁻⁹
pico  p  10⁻¹²
femto f  10⁻¹⁵
atto  a  10⁻¹⁸

We went all the way down to “atto” on the small side, but stopped at “giga” on the large side. I now wish we had skipped “atto” and instead included “tera,” or “T,” corresponding to 10¹². Why? Because the prefix tera can be very useful in optics when discussing the frequency of light.
To better appreciate why, take a look at the letter to the editor “Let’s Talk TeraHertz!” in the April 2011 issue of my favorite journal, the American Journal of Physics. There Roger Lewis—from the melodious University of Wollongong—argues that the terahertz is superior to the nanometer when discussing light. Lewis writes
…the terahertz shares the desirable properties of the nanometer as a unit in teaching optics… Like the nanometer, the terahertz conveniently represents visible light to three digits in numbers that fall in the midhundreds… The terahertz has other desirable properties that the nanometer lacks. First, the frequency is a more fundamental property of light than the wavelength because the frequency does not change as light traverses different media, whereas the wavelength may. Second, the energy of a photon is directly proportional to its frequency… The visible spectrum is often taken to span 400–700 nm, corresponding to 749–428 THz, falling in the octave 400–800 THz. …

I suspect that the reason I have always preferred wavelength over frequency when discussing light is that the nanometer provides such a nice, easy-to-remember unit to work with. Had I realized from the start that the terahertz offered an equally useful unit for discussing frequency, I might naturally think in terms of frequency rather than wavelength. Incidentally, Planck’s constant is 0.00414 (or about 1/240) in units of eV/THz.
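Lewis's numbers are easy to check with f = c/λ and E = hf. A quick sketch:

```python
# Convert visible-light wavelengths to frequency (THz) and photon energy (eV)
# using f = c/lambda and E = h*f, with h = 4.136e-15 eV*s (about 1/240 eV/THz).
c = 2.99792458e8        # speed of light, m/s
h = 4.135667e-15        # Planck's constant, eV*s

for lam_nm in (400, 550, 700):
    f_thz = c / (lam_nm * 1e-9) / 1e12
    e_ev = h * (f_thz * 1e12)
    print(f"{lam_nm} nm  ->  {f_thz:.0f} THz,  {e_ev:.2f} eV")
```

The endpoints come out near 749 THz and 428 THz, matching the range quoted in the letter.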
After reading Lewis’s letter, I checked Intermediate Physics for Medicine and Biology to see how Russ and I characterized the properties of visible light. On page 360 I found our Table 14.2, which lists the different colors of the electromagnetic spectrum in terms of wavelength (nm), energy (eV) and frequency. We didn’t explicitly mention the unit THz, but we did list the frequency in units of 10¹² Hz, so terahertz was there in every way but name. As a rule I don’t like to write in my books, but nevertheless I suggest that owners of Intermediate Physics for Medicine and Biology take a pencil and replace “(10¹² Hz)” in Table 14.2 with “(THz)”.
Russ and I discuss the terahertz explicitly in Chapter 14 about Atoms and Light.
14.6.4 Far Infrared or Terahertz Radiation

For many years, there were no good sources or sensitive detectors for radiation between microwaves and the near infrared (0.1–100 THz; 1 THz = 10¹² Hz). Developments in optoelectronics have solved both problems, and many investigators are exploring possible medical uses of THz radiation (“T rays”). Classical electromagnetic wave theory is needed to describe the interactions, and polarization (the orientation of the E vector of the propagating wave) is often important. The high attenuation of water in this frequency range means that studies are restricted to the skin or surface of organs such as the esophagus that can be examined endoscopically. Reviews are provided by Smye et al. (2001), Fitzgerald et al. (2002), and Zhang (2002).

The citations are to

Smye, S. W., J. M. Chamberlain, A. J. Fitzgerald and E. Berry (2001) “The Interaction Between Terahertz Radiation and Biological Tissue,” Physics in Medicine and Biology, Volume 46, Pages R101–R112.

Fitzgerald, A. J., E. Berry, N. N. Zinonev, G. C. Walker, M. A. Smith and J. M. Chamberlain (2002) “An Introduction to Medical Imaging with Coherent Terahertz Frequency Radiation,” Physics in Medicine and Biology, Volume 47, Pages R67–R84.

Zhang, X-C. (2002) “Terahertz Wave Imaging: Horizons and Hurdles,” Physics in Medicine and Biology, Volume 47, Pages 3667–3677.

So not only is the terahertz useful when talking about visible light, but also it is useful when working in the far infrared, where the frequency is about 1 THz. Such “T rays” (I hate that term) are being used nowadays for imaging during airport security and as a tool to study cell biology and cancer.
Friday, July 22, 2011
Euler: The Master of Us All
Euler: The Master of Us All, by William Dunham.
William Dunham describes Euler’s life and work in his book Euler: The Master of Us All. In the Preface, Dunham writes
This book is about one of the undisputed geniuses of mathematics, Leonhard Euler. His insight was breathtaking, his vision profound, his influence as significant as that of anyone in history. Euler contributed to long-established branches of mathematics like number theory, analysis, algebra, and geometry. He also ventured into the largely unexplored territory of analytic number theory, graph theory, and differential geometry. In addition, he was his century’s foremost applied mathematician, as his work in mechanics, optics, and acoustics amply demonstrates. There was hardly an aspect of the subject that escaped Euler’s penetrating gaze. As the twentieth-century mathematician Andre Weil put it, “All his life…he seems to have carried in his head the whole of the mathematics of his day, both pure and applied.”

In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss one of Euler’s best known contributions, his relationship between the exponential function, trigonometric functions, and complex numbers.
The numbers that we have been using are called real numbers. The number i = √−1 is called an imaginary number. A combination of a real and imaginary number is called a complex number. The remarkable property of imaginary numbers that makes them useful in this context is that e^(iθ) = cos θ + i sin θ.

Dunham wrote about this identity:
“From these equations,” Euler noted with evident satisfaction, “we understand how complex exponentials can be expressed by real sines and cosines.” His enthusiasm has been echoed by mathematicians ever since. Few would argue that Euler’s identity is among the most beautiful formulas of all.

Euler didn’t invent complex numbers, but he did contribute significantly to their development, including a derivation of this gem (“which seems extraordinary to me,” wrote Euler)
i^i = e^(−π/2).
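Both results are one-liners to check numerically (using the principal value of the complex power, which is what Python computes):

```python
import cmath
import math

# Euler's identity: e^{i*theta} = cos(theta) + i*sin(theta)
theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))                            # essentially zero

# Euler's "extraordinary" result: i**i is real, equal to e^{-pi/2}
print((1j ** 1j).real, math.exp(-math.pi / 2))   # both about 0.20788
```

It is worth pausing on the second line: raising an imaginary number to an imaginary power yields a plain real number, which is exactly why Euler found it so remarkable.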
Dunham’s book gives examples of Euler’s contributions to number theory, logarithms, infinite series, analytic number theory, complex variables, algebra, geometry, and combinatorics. For instance, Dunham describes a discovery Euler made in his 20s.
One of his earliest triumphs was a solution of the so-called “Basel Problem” that perplexed mathematicians for the better part of the previous century. The issue was to determine the exact value of the infinite series

1 + 1/4 + 1/9 + 1/16 + 1/25 + … + 1/k² + … .

… The answer was not only a mathematical tour de force but a genuine surprise, for the series sums to π²/6. This highly non-intuitive result made the solution all the more spectacular and its solver all the more famous.

As he grew older, Euler slowly became blind. His accomplishments despite his handicap remind me of Beethoven composing his majestic 9th symphony after going deaf. Dunham writes about Euler
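Euler's sum is easy to verify numerically, though the convergence is slow (the tail of the series behaves like 1/n):

```python
import math

# Partial sums of the Basel series 1 + 1/4 + 1/9 + ... approach pi^2/6.
def basel_partial(n):
    return sum(1.0 / k**2 for k in range(1, n + 1))

target = math.pi**2 / 6
for n in (10, 1000, 100000):
    print(n, basel_partial(n), target - basel_partial(n))
```

Even a hundred thousand terms leave an error of about 10⁻⁵, which makes Euler's exact analytic answer of π²/6 ≈ 1.6449 all the more impressive.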
Although unable to see, he not only maintained but even increased his scientific output. In the year 1775, for instance, he wrote an average of one mathematical paper per week. Such productivity came in spite of the fact that he now had to have others read him the contents of scientific papers, and he in turn had to dictate his work to diligent scribes. During his descent into blindness, he wrote an influential textbook on algebra, a 775-page treatise on the motion of the moon, and a massive, three-volume development of integral calculus, the Institutiones calculi integralis. Never was his remarkable memory more useful than when he could see mathematics only in his mind’s eye.

Dunham concludes
That this blind and aging man forged ahead with such gusto is a remarkable lesson, a tale for the ages. Euler’s courage, determination, and utter unwillingness to be beaten serves, in the truest sense of the word, as an inspiration for mathematician and non-mathematician alike. The long history of mathematics provides no finer example of the triumph of the human spirit.
Euler left behind a legacy of epic proportions. So prolific was he that the journal of the St. Petersburg Academy was still publishing the backlog of his papers a full 48 years after his death. There is hardly a branch of mathematics—or for that matter of physics—in which he did not play a significant role.
Friday, July 15, 2011
The leibniz
In order to motivate the study of thermal physics, Chapter 3 of the 4th edition of Intermediate Physics for Medicine and Biology begins with an examination of how many equations are required to simulate the motion of all the molecules in one cubic millimeter of blood. Russ Hobbie and I write
Fortunately, such a unit exists, called the leibniz. Sui Huang and John Wikswo coined the term in their paper “Dimensions of Systems Biology,” published in the Reviews of Physiology, Biochemistry and Pharmacology (Volume 157, Pages 81–104, 2006). They write
For those not familiar with Gottfried Leibniz (1646–1716), he is a German mathematician and a co-inventor of the calculus, along with Isaac Newton. Leibniz and Newton got into one of the biggest priority disputes in the history of science about this landmark development. Newton has his unit, so it’s only fair that Leibniz has one too. Leibniz also made contributions to information theory and computational science, so the liebniz is a particularly appropriate way to honor this great mathematician.
John Wikswo, my PhD advisor when I was in graduate school at Vanderbilt University, notes that there are two alternative spellings of Leibniz’s name: Leibnitz and Leibniz. I favor “Leibniz,” the spelling on Wikipedia, and so does Wikswo now, but he points out that there’s plenty of support for “Leibnitz” used in his earlier publications. I had high hopes of enjoying a bit of fun at my friend’s expense by adding an annoying “[sic]” after each appearance of “Leibnitz” in the above quotes, but then Wikswo pointed out that Richard Feynman used “Leibnitz” in The Feynman Lectures on Physics. What can I say; you can’t argue with Feynman.
It is possible to identify all the external forces acting on a simple system and use Newton’s second law (F = ma) to calculate how the system moves … In systems of many particles, such calculations become impossible. Consider, for example, how many particles there are in a cubic millimeter of blood. Table 3.1 shows some of the constituents of such a sample [including 3.3 × 10^19 water molecules]. To calculate the translational motion in three dimensions, it would be necessary to write three equations for each particle using Newton’s second law. Suppose that at time t the force on a molecule is F. Between t and t + Δt, the velocity of the particle changes according to the three equations

v_i(t + Δt) = v_i(t) + F_i Δt/m,  (i = x, y, z).

The three equations for the change of position of the particle are of the form x(t + Δt) = x(t) + v_x(t)Δt … Solving these equations requires at least six multiplications and six additions for each particle. For 10^19 particles, this means about 10^20 arithmetic operations per time interval … It is impossible to trace the behavior of this many molecules on an individual basis.

Nor is it necessary. We do not care which water molecule is where. The properties of a system that are of interest are averages over many molecules: pressure, concentration, average speed, and so forth. These average macroscopic properties are studied in statistical or thermal physics or statistical mechanics.

It is difficult to gain an intuitive feel for just how many differential equations are needed in such a calculation, just as it is difficult to imagine just how many molecules make up a macroscopic bit of matter. Chemists have solved the problem of dealing with large numbers of molecules by introducing the unit of a mole, corresponding to Avogadro’s number (6 × 10^23) of molecules. Other quantities involving Avogadro’s number are similarly defined. For instance, the Faraday corresponds to the magnitude of the charge of one mole of electrons (I admit, the Faraday is more of a constant than a unit); see page 60 and Eq. 3.32 of Intermediate Physics for Medicine and Biology. In Problem 2 of Chapter 14, Russ and I discuss the einstein, a unit corresponding to a mole of photons. When doing large-scale numerical simulations on a computer, it would be useful to have a similar unit to handle very large numbers of differential equations, such as are required to model a drop of blood.
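The quoted update rule is just the explicit Euler method applied particle by particle. As a toy illustration (my own sketch, not from the book, with a laughably small particle count and made-up forces):

```python
def euler_step(positions, velocities, forces, m, dt):
    """One explicit Euler step for every particle:
        v_i(t + dt) = v_i(t) + F_i * dt / m
        x(t + dt)   = x(t) + v_x(t) * dt   (and likewise for y and z),
    where each argument is a list of (x, y, z) tuples."""
    new_positions = [tuple(x + v * dt for x, v in zip(r, v3))
                     for r, v3 in zip(positions, velocities)]
    new_velocities = [tuple(v + F * dt / m for v, F in zip(v3, F3))
                      for v3, F3 in zip(velocities, forces)]
    return new_positions, new_velocities

# Toy example: 1000 particles, not 3.3 x 10^19!
n = 1000
mass, dt = 3.0e-26, 1.0e-15   # roughly one water molecule (kg) and a 1 fs step
positions  = [(0.0, 0.0, 0.0)] * n
velocities = [(0.0, 0.0, 0.0)] * n
forces     = [(1.0e-10, 0.0, 0.0)] * n   # invented constant force
positions, velocities = euler_step(positions, velocities, forces, mass, dt)
```

Even this trivial loop makes the point: six first-order updates per particle per time step, so the operation count explodes with the particle number.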
Fortunately, such a unit exists, called the leibniz. Sui Huang and John Wikswo coined the term in their paper “Dimensions of Systems Biology,” published in the Reviews of Physiology, Biochemistry and Pharmacology (Volume 157, Pages 81–104, 2006). They write
The electrical activity of the heart during ten seconds of fibrillation could easily require solving 10^18 coupled differential equations (Cherry et al. 2000). (N.B., Avogadro’s number of differential equations may be defined as one Leibnitz, so 10 s of fibrillation corresponds to a micro-Leibnitz problem.) Multiprocessor supercomputers running for a month can execute a micromole of floating point operations, but in the cardiac case such computers may run several orders of magnitude slower than real time, such that modeling 10 s of fibrillation might require 1 exaFLOP/s × year.

The leibniz appeared again in Wikswo et al.’s paper “Engineering Challenges of BioNEMS: The Integration of Microfluidics, Micro- and Nanodevices, Models and External Control for Systems Biology” in the IEE Proceedings Nanobiotechnology (Volume 153, Pages 81–101, 2006).
What distinguishes the models of systems biology from those of many other disciplines is their multiscale richness in both space and time: these models may eventually have millions of dynamic variables with complex non-linear interactions. It is conceivable that the ultimate models for systems biology might require a mole of differential equations (called a Leibnitz) and computations that require a yottaFLOPs (floating point operations per second) computer.

If we take the leibniz (Lz) as our unit of simulation complexity, the calculation Russ and I consider at the start of Chapter 3 requires solving approximately 6 × 10^19 differential equations, or about 0.1 mLz. Note that we describe two first-order differential equations for each molecule, but others might prefer to speak of a single second-order differential equation. This would make a difference of a factor of two in the number of equations. I propose that when using the leibniz we consider only first-order ODEs. Moreover, when using a differential equation governing a vector, we count one equation per component.
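Under this convention (count first-order ODEs, one per vector component), converting an equation count to leibniz is a one-liner; a small sketch in Python (the function name is my own):

```python
AVOGADRO = 6.022e23   # one leibniz (Lz) = Avogadro's number of first-order ODEs

def in_leibniz(n_equations):
    """Express a count of first-order differential equations in leibniz."""
    return n_equations / AVOGADRO

print(in_leibniz(6e19))   # blood-drop estimate from Chapter 3: about 1e-4 Lz, or 0.1 mLz
print(in_leibniz(1e18))   # 10 s of fibrillation: about 1.7e-6 Lz, a micro-leibniz problem
```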
For those not familiar with Gottfried Leibniz (1646–1716), he was a German mathematician and a co-inventor of the calculus, along with Isaac Newton. Leibniz and Newton got into one of the biggest priority disputes in the history of science over this landmark development. Newton has his unit, so it’s only fair that Leibniz has one too. Leibniz also made contributions to information theory and computational science, so the leibniz is a particularly appropriate way to honor this great mathematician.
John Wikswo, my PhD advisor when I was in graduate school at Vanderbilt University, notes that there are two alternative spellings of Leibniz’s name: Leibnitz and Leibniz. I favor “Leibniz,” the spelling on Wikipedia, and so does Wikswo now, but he points out that there’s plenty of support for “Leibnitz,” the spelling used in his earlier publications. I had high hopes of enjoying a bit of fun at my friend’s expense by adding an annoying “[sic]” after each appearance of “Leibnitz” in the above quotes, but then Wikswo pointed out that Richard Feynman used “Leibnitz” in The Feynman Lectures on Physics. What can I say? You can’t argue with Feynman.
Friday, July 8, 2011
Gasiorowicz
Quantum Physics, by Stephen Gasiorowicz.
The spectrum of power per unit area emitted by a completely black surface in the wavelength interval between λ and λ + dλ is … a universal function called the blackbody radiation function. … The description of [this] function … by Planck is one of the foundations of quantum mechanics … We can find the total amount of power emitted per unit surface area by integrating¹⁰ Eq. 14.32 [Planck’s blackbody radiation function] … [The result] is the Stefan-Boltzmann law.

As I was reading over this section recently, I was struck by the footnote number ten (present in earlier editions of our book, so I know it was originally written by Russ).
¹⁰This is not a simple integration. See Gasiorowicz (1974, p. 6).

This is embarrassing to admit, but although I am a coauthor on the 4th edition, there are still topics in our book that I am learning about. I always feel a little guilty about this, so recently I decided it is high time to take a look at the book by Stephen Gasiorowicz and see just how difficult this integral really is. The result was fascinating. The integral is not terribly complicated, but it involves a clever trick I would have never thought of. Because math is rather difficult to write in the html of this blog (at least for me), I will explain how to evaluate this integral through a homework problem. When revising our book for the 4th edition, I enjoyed finding “missing steps” in derivations and then creating homework problems to lead the reader through them. For instance, in Problem 24 of Chapter 14, Russ and I asked the reader to “integrate Eq. 14.32 over all wavelengths to obtain the Stephan-Boltzmann law, Eq. 14.33.” Then, we added “You will need the integral [integrated from zero to infinity]
∫₀^∞ x³/(e^x − 1) dx = π⁴/15.
Below is a new homework problem related to footnote ten, in which the reader must evaluate the integral given at the end of Problem 24. I base this homework problem on the derivation I found in Gasiorowicz. In our book, we cite the 1974 edition.
Gasiorowicz, S. (1974) Quantum Physics. New York: Wiley.

This is the edition in Kresge Library at Oakland University, and is the one I used to create the homework problem. However, I found using amazon.com’s “look inside” feature that this derivation is also in the more recent 3rd edition (2003). In addition, I found the derivation repeated in another of Gasiorowicz’s books, The Structure of Matter.
Problem 24 ½ Evaluate the integral given in Problem 24.
(a) Factor out e^−x, and then use the geometric series 1 + z + z² + z³ + … = 1/(1 − z) to replace the denominator by an infinite sum.
(b) Make the substitution y = (n + 1)x.
(c) Evaluate the resulting integral over y, either by looking it up or (better) by repeated integration by parts.
(d) Make the substitution m = n + 1.
(e) Use the fact that the sum of 1/m⁴ from 1 to infinity is equal to π⁴/90 to evaluate the integral.

Really, who would have thought to replace 1/(1 − z) by an infinite series? Usually, I am desperately trying to do just the opposite: get rid of an infinite series, such as a geometric series, by replacing it with a simple function like 1/(1 − z). The last thing I would have wanted to do is to introduce a dreaded infinite sum into the calculation. But it works. I must admit, this is a bit of a cheat. Even if in part (c) you don’t look up the integral, but instead laboriously integrate by parts several times, you still have to pull a rabbit out of the hat in step (e) when you sum up 1/m⁴. Purists will verify this infinite sum by calculating the Fourier series over the range 0 to 2π of the function
f(x) = π⁴/90 − π²x²/12 + πx³/12 − x⁴/48
and then evaluating it at x = 0. (Of course, you know how to calculate a Fourier series, since you have read Chapter 11 of Intermediate Physics for Medicine and Biology). When computing Fourier coefficients, you will need to do a bunch of integrals containing powers of x times cos(nx), but you can do those by—you guessed it—repeated integration by parts. Thus, even if lost on a deserted island without your math handbook or a table of integrals, you should still be able to complete the new homework problem using Gasiorowicz’s method. I’m assuming you know how to do some elementary calculus—integrate by parts and simple integrals of powers, exponentials, and trigonometric functions—without looking it up. (Full disclosure: I found the function f(x) given above by browsing through a table of Fourier series in my math handbook. On that lonely island, you would have to guess f(x), so let’s hope you at least remembered to bring along plenty of scrap paper.)
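Anyone without scrap paper can at least check the answer numerically. The sketch below (my own, using only the Python standard library) verifies the series from step (e), the value of the integral itself, and the Fourier-series shortcut:

```python
import math

# Steps (a)-(e) give: integral = 3! * (sum over m of 1/m^4) = 6 * (pi^4/90) = pi^4/15
series = 6 * sum(1.0 / m**4 for m in range(1, 10000))
exact = math.pi**4 / 15
print(series, exact)          # both approximately 6.4939

# Brute-force midpoint integration of x^3/(e^x - 1) out to x = 200 (the tail is negligible)
dx, total = 1e-3, 0.0
for k in range(200000):
    x = (k + 0.5) * dx
    total += x**3 / math.expm1(x) * dx
print(total)                  # also approximately 6.4939

# The Fourier-series shortcut: f(0) = pi^4/90 should equal the sum of 1/m^4
print(math.pi**4 / 90, sum(1.0 / m**4 for m in range(1, 10000)))
```

Ten thousand terms of the series already agree with π⁴/15 to many decimal places, which is why the geometric-series trick is so effective.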
I checked out the website for Gasiorowicz’s textbook. There is a lot of interesting material there. The book covers many of the familiar topics of modern physics: blackbody radiation, the photoelectric effect, the Bohr model for hydrogen, the uncertainty principle, the Schrödinger equation, and more, all the way up to the structure of atoms and molecules. I learned this material from Eisberg and Resnick’s Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (1985), cited several times in Intermediate Physics for Medicine and Biology, which I used in my undergraduate modern physics class at the University of Kansas. For an undergraduate quantum mechanics class, I like Griffiths’s Introduction to Quantum Mechanics, in part because I have taught from that book. But Gasiorowicz’s book appears to be in the same class as these two. I noticed that Gasiorowicz is from the University of Minnesota, so perhaps Russ knows him.
P.S. Did any of you dear readers notice that Russ and I spelled the name “Stefan” of the “Stefan-Boltzmann law” differently in the text of Chapter 14 and in Problem 24? I asked Google, and it found sites using both spellings, but the all-knowing Wikipedia favors “Stefan”. I’m not 100% certain which is correct (it may have to do with the translation from Slovene to English), but we should at least have been consistent within our book.
Friday, July 1, 2011
Physiology is the link between the basic sciences and medicine
Textbook of Medical Physiology, by Guyton and Hall.
When I was a graduate student at Vanderbilt University, I decided to sit in on the physiology and biochemistry classes that the medical students took. The physiology class was based on Guyton’s book (likely the 6th or 7th edition). I took the class seriously, but since I had little formal coursework in biology (only one introductory class as an undergraduate at the University of Kansas, plus a high school course), and because I didn’t get as much out of the lectures as I should have, my main accomplishment was reading the Textbook of Medical Physiology, cover to cover. Unfortunately, my copy of the book has been lost (probably loaned out to someone who forgot to return it). It’s a pity, because I have fond memories of that book, and all the physiology I learned while reading it.
The 12th edition of the Textbook of Medical Physiology (2010) was published after the 4th edition of Intermediate Physics for Medicine and Biology went to press. Hall’s preface states
The first edition of the Textbook of Medical Physiology was written by Arthur C. Guyton almost 55 years ago. Unlike most major medical textbooks, which often have 20 or more authors, the first eight editions of the Textbook of Medical Physiology were written entirely by Dr. Guyton, with each new edition arriving on schedule for nearly 40 years. The Textbook of Medical Physiology, first published in 1956, quickly became the best-selling medical physiology textbook in the world. Dr. Guyton had a gift for communicating complex ideas in a clear and interesting manner that made studying physiology fun. He wrote the book to help students learn physiology, not to impress his professional colleagues.

I worked closely with Dr. Guyton for almost 30 years and had the privilege of writing parts of the 9th and 10th editions. After Dr. Guyton's tragic death in an automobile accident in 2003, I assumed responsibility for completing the 11th edition.

For the 12th edition of the Textbook of Medical Physiology, I have the same goal as for previous editions—to explain, in language easily understood by students, how the different cells, tissues, and organs of the human body work together to maintain life.

This task has been challenging and fun because our rapidly increasing knowledge of physiology continues to unravel new mysteries of body functions. Advances in molecular and cellular physiology have made it possible to explain many physiology principles in the terminology of molecular and physical sciences rather than in merely a series of separate and unexplained biological phenomena.

The Textbook of Medical Physiology, however, is not a reference book that attempts to provide a compendium of the most recent advances in physiology. This is a book that continues the tradition of being written for students. It focuses on the basic principles of physiology needed to begin a career in the health care professions, such as medicine, dentistry and nursing, as well as graduate studies in the biological and health sciences. It should also be useful to physicians and health care professionals who wish to review the basic principles needed for understanding the pathophysiology of human disease.

I have attempted to maintain the same unified organization of the text that has been useful to students in the past and to ensure that the book is comprehensive enough that students will continue to use it during their professional careers.

My hope is that this textbook conveys the majesty of the human body and its many functions and that it stimulates students to study physiology throughout their careers. Physiology is the link between the basic sciences and medicine. The great beauty of physiology is that it integrates the individual functions of all the body's different cells, tissues, and organs into a functional whole, the human body. Indeed, the human body is much more than the sum of its parts, and life relies upon this total function, not just on the function of individual body parts in isolation from the others…

If you are a physicist studying from Intermediate Physics for Medicine and Biology with little background in biology and medicine, you will need to find a good general source of information about physiology. The Guyton and Hall Textbook of Medical Physiology is a good choice. Another book Russ and I cite a lot is the Textbook of Physiology by Patton, Fuchs, Hille, Scher, and Steiner. However, I cannot find an edition more recent than 1989, so it would not be a good choice for getting up-to-date information.
Arthur Guyton (1919-2003) was a famous physiologist, known for his research on the circulatory system. An obituary published in The Physiologist says
Arthur Guyton’s research contributions, which include more than 600 papers and 40 books, are legendary and place him among the greatest figures in the history of cardiovascular physiology. His research covered virtually all areas of cardiovascular regulation and led to many seminal concepts that are now an integral part of our understanding of cardiovascular disorders such as hypertension, heart failure, and edema. It is difficult to discuss cardiovascular regulation without including his concepts of cardiac output and venous return, negative interstitial fluid pressure and regulation of tissue fluid volume and edema, regulation of tissue blood flow and whole body blood flow autoregulation, renal-pressure natriuresis, and long-term blood pressure regulation.

Perhaps his most important scientific contribution, however, was a unique quantitative approach to cardiovascular regulation through the application of principles of engineering and systems analysis. He had an extremely analytical mind and an uncanny ability to integrate bits and pieces of information, not only from his own research but also from others, into a quantitative conceptual framework. He built analog computers and pioneered the application of large-scale systems analyses to modeling the cardiovascular system before digital computers were available. With the advent of digital computers, his cardiovascular models expanded dramatically in the 1960’s and 70’s to include the kidneys and body fluids, hormones, autonomic nervous system, as well as cardiac and circulatory functions. He provided the first comprehensive systems analysis of blood pressure regulation and used this same quantitative approach in all areas of his research, leading to new insights that are now part of the everyday vocabulary of cardiovascular researchers.

Many of his concepts were revolutionary and were initially met with skepticism, and even ridicule, when they were first presented. When he first presented his mathematical model of cardiovascular function at the Council for High Blood Pressure Research meeting in 1968, the responses of some of the hypertension experts, recorded at the end of the article, reflected a tone of disbelief and even sarcasm. Guyton’s systems analysis had predicted a dominant role for the renal pressure natriuresis mechanism in long-term blood pressure regulation, a concept that seemed heretical to most investigators at that time. One of the leading figures in hypertension research commented “I realize that it is an impertinence to question a computer and systems analysis, but the answers they have given to Guyton seem authoritarian and revolutionary.” Guyton’s concepts were authoritarian and revolutionary, but after 35 years of experimental studies by investigators around the world, they have also proved to be very powerful in explaining diverse physiological and clinical observations. His far-reaching concepts will continue to be the foundation for generations of cardiovascular physiologists.

If you’re interested in the interface between physics and physiology, you’ll find the Guyton and Hall Textbook of Medical Physiology to be a valuable resource.