Friday, May 4, 2012

The Optics of Life

The Optics of Life:
A Biologist's Guide to Light in Nature,
by Sonke Johnsen.
As I mentioned two weeks ago, I’ve been reading The Optics of Life: A Biologist’s Guide to Light in Nature, by Sonke Johnsen. The book is delightful, exploring the biological implications of many fascinating phenomena such as scattering, interference, fluorescence, and bioluminescence. To me, The Optics of Life does for light what Life in Moving Fluids does for fluid dynamics; it explains how basic principles of physics apply to the diversity of life. Today, I want to focus on Chapter 8 of Johnsen’s book, about polarization.

The polarization of light is one of those topics Russ Hobbie and I don’t cover in the 4th edition of Intermediate Physics for Medicine and Biology. We only hint at its importance in Chapter 14 about Atoms and Light, when discussing Terahertz radiation.
Classical electromagnetic wave theory is needed to describe the interactions [of Terahertz radiation with the body], and polarization (the orientation of the E vector of the propagating wave) is often important.
Had you asked me two weeks ago why Russ and I skipped polarization, I would have said “because there are so few biological applications.” Johnsen proves me wrong. He writes
As I mentioned earlier, aside from the few people who can see Haidinger’s Brush in the sky, the polarization characteristics of light are invisible to humans. However, a host of animals can detect one or both aspects of linearly polarized light (see Talbot Waterman’s massive review [1981] and Horvath and Varju’s even more massive book [2004] for comprehensive lists of taxa). Arthropods are the big winners here, especially insects, though also most crustaceans and certain spiders and scorpions. In fact, it is unusual to find an insect without polarization sensitivity. Outside the arthropods, the other major polarization-sensitive invertebrates are the cephalopods. Among vertebrates, polarization sensitivity is rarer and more controversial, but has been found in certain fish (primarily trout and salmon), some migratory birds and a handful of amphibians and reptiles. It is important to realize, though, that there is a serious sampling bias. Testing for polarization sensitivity is difficult and so has usually only been looked for in migratory animals and those known to be especially good at navigation, such as certain desert ants. The true prevalence of polarization sensitivity is unknown.

The ability to sense the polarization of light has been divided into two types. One is known as “polarization sensitivity.” Animals that have polarization sensitivity are not much different from humans wearing Polaroid sunglasses. Polarization affects the intensity of what they see—but without a lot of head-turning and thinking, they cannot reliably determine the angle or degree of polarization or even separate polarization from brightness. The other type is known as “polarization vision.” Animals with polarization vision perceive the angle and degree of polarization as something separate from simple brightness differences. Behaviorally, this means that they can distinguish two lights with different degrees and/or angles of polarization regardless of their relative radiances and colors. This is much like the definition of color vision, which involves the ability to distinguish two lights of differing hue and/or saturation regardless of their relative radiances.
How I’d love to have polarization vision! It would be an entirely new sensory experience. When Dorothy entered the land of Oz, she went from a black and white world to the richness of color. Now imagine a similar experience in going from our drab nonpolarized vision to polarization vision; it would offer a whole new way to view the world, a sixth sense. Alas, not all animals have polarization sensitivity, and even fewer have polarization vision. How these senses work is still unclear.
While polarization sensitivity is certainly rarer among vertebrates [than invertebrates], it does exist… The mechanism of polarization sensitivity in vertebrates remains—along with the basis of animal magnetoreception—one of the two holy grails of sensory biology.
My favorite example discussed by Johnsen is the mantis shrimp, which can distinguish between left-handed and right-handed circularly polarized light. It does this by passing the light through a biological quarter-wave plate. The quarter-wave plate was one of my favorite topics in my undergraduate optics class. Incoming linearly polarized light is converted into circularly polarized light by inducing a phase difference of 90 degrees between the two linear components. Similarly, the plate can convert circularly polarized light into linearly polarized light. Circularly polarized light always struck me as somehow magical. You can’t detect it using a sheet of plastic polarizing film, yet it is as fundamental a polarization state for light as is linear polarization. That the mantis shrimp makes use of a quarter-wave plate to detect circularly polarized light is awesome.
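To see how a quarter-wave plate sorts the two handednesses, here is a minimal Jones-calculus sketch (my own illustration, not from Johnsen’s book; the sign convention for handedness is an assumption):

```python
import numpy as np

# Quarter-wave plate with fast axis horizontal, up to a global phase.
qwp = np.array([[1, 0],
                [0, 1j]])

rcp = np.array([1, -1j]) / np.sqrt(2)   # right-circular (one sign convention)
lcp = np.array([1,  1j]) / np.sqrt(2)   # left-circular

for name, E in [("RCP", rcp), ("LCP", lcp)]:
    out = qwp @ E
    # After the plate the field is linear, so the ratio E_y/E_x is real.
    angle = np.degrees(np.arctan(np.real(out[1] / out[0])))
    print(f"{name} -> linear polarization at {angle:+.0f} degrees")
```

The plate turns the two circular states into linear light at +45 and −45 degrees, so an ordinary linear analyzer behind the plate can tell them apart.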

Let me conclude by quoting the first sentence of Johnsen’s introduction to The Optics of Life, which elegantly sums up the book itself.
Of all the remarkable substances of our experience—rain, leaves, baby toes—light is perhaps the most miraculous.

Added note in the evening of May 4: Russ Hobbie reminds me that on the book’s website is text from the first edition of Intermediate Physics for Medicine and Biology about optics, including much about polarization!

Friday, April 27, 2012

Physics and Medicine

Readers of Intermediate Physics for Medicine and Biology already know how important physics is to medicine. Now, subscribers to the famed British medical journal The Lancet are learning this too. The April 21–27 issue (Volume 379, Issue 9825) of The Lancet contains a series of articles under the heading “Physics and Medicine.” In his editorial introducing this series, Peter Knight (president of the Institute of Physics) calls for UK medical schools to reinstate an undergraduate physics requirement for aspiring premed students. The English don’t require their premed students to take physics? Yikes!

Richard Horton, editor-in-chief of The Lancet, seconds this call for better physics education. He concludes that “Young physicists need to be nurtured to ensure a sustainable supply of talented scientists who can take advantage of the opportunities for health-related physics research in the future. Schools, indeed all of us interested in the future of health care, should declare and implement a passion for physics. Our Series is our commitment to do so.” Bravo! Below I reproduce the abstracts to the five articles in the Physics and Medicine series. In brackets I indicate the chapter or section in the 4th edition of Intermediate Physics for Medicine and Biology where a particular topic is discussed.
Physics and Medicine: a Historical Perspective
Stephen F Keevil

Nowadays, the term medical physics usually refers to the work of physicists employed in hospitals, who are concerned mainly with medical applications of radiation, diagnostic imaging, and clinical measurement. This involvement in clinical work began barely 100 years ago, but the relation between physics and medicine has a much longer history. In this report, I have traced this history from the earliest recorded period, when physical agents such as heat and light began to be used to diagnose and treat disease. Later, great polymaths such as Leonardo da Vinci and Alhazen used physical principles to begin the quest to understand the function of the body. After the scientific revolution in the 17th century, early medical physicists developed a purely mechanistic approach to physiology, whereas others applied ideas derived from physics in an effort to comprehend the nature of life itself. These early investigations led directly to the development of specialties such as electrophysiology [Chpts 6, 7], biomechanics [Secs 1.5–1.7] and ophthalmology [Sec 14.12]. Physics-based medical technology developed rapidly during the 19th century, but it was the revolutionary discoveries about radiation and radioactivity [Secs 17.2–17.4] at the end of the century that ushered in a new era of radiation-based medical diagnosis and treatment, thereby giving rise to the modern medical physics profession. Subsequent developments in imaging [Chpt 12] in particular have revolutionised the practice of medicine. We now stand on the brink of a new revolution in post-genomic personalised medicine, with physics-based techniques again at the forefront. As before, these techniques are often the unpredictable fruits of earlier investment in basic physics research.

Diagnostic Imaging
Peter Morris, Alan Perkins

Physical techniques have always had a key role in medicine, and the second half of the 20th century in particular saw a revolution in medical diagnostic techniques with the development of key imaging instruments: x-ray imaging [Chpt 16] and emission tomography [Secs 12.4–12.6] (nuclear imaging [Secs 17.12–17.13] and PET [Sec 17.14]), MRI [Chpt 18], and ultrasound [Chpt 13]. These techniques use the full width of the electromagnetic spectrum [Sec 14.1], from gamma rays to radio waves, and sound [Secs 13.1–13.3]. In most cases, the development of a medical imaging device was opportunistic; many scientists in physics laboratories were experimenting with simple x-ray images within the first year of the discovery of such rays, the development of the cyclotron and later nuclear reactors created the opportunity for nuclear medicine, and one of the co-inventors of MRI was initially attempting to develop an alternative to x-ray diffraction for the analysis of crystal structures. What all these techniques have in common is the brilliant insight of a few pioneering physical scientists and engineers who had the tenacity to develop their inventions, followed by a series of technical innovations that enabled the full diagnostic potential of these instruments to be realised. In this report, we focus on the key part played by these scientists and engineers and the new imaging instruments and diagnostic procedures that they developed. By bringing the key developments and applications together we hope to show the true legacy of physics and engineering in diagnostic medicine.

The Importance of Physics to Progress in Medical Treatment
Andreas Melzer, Sandy Cochran, Paul Prentice, Michael P MacDonald, Zhigang Wang, Alfred Cuschieri

Physics in therapy is as diverse as it is substantial. In this review, we highlight the role of physics—occasionally transitioning into engineering—through discussion of several established and emerging treatments. We specifically address minimal access surgery, ultrasound [Sec 13.7], photonics [Chpt 14], and interventional MRI, identifying areas in which complementarity is being exploited. We also discuss some of the fundamental physical principles involved in the application of each treatment to medical practice.

Future Medicine Shaped by an Interdisciplinary New Biology
Paul O'Shea

The projected effects of the new biology on future medicine are described. The new biology is essentially the result of shifts in the way biological research has progressed over the past few years, mainly through the involvement of physical scientists and engineers in biological thinking and research with the establishment of new teams and task forces to address the new challenges in biology. Their contributions go well beyond the historical contributions of mathematics, physical sciences, and engineering to medical practice that were largely equipment oriented. Over the next generation, the entire fabric of the biosciences will change as research barriers between disciplines diminish and eventually cease to exist. The resulting effects are starting to be noticed in front-line medicine and the prospects for the future are immense and potentially society changing. The most likely disciplines to have early effects are outlined and form the main thrust of this paper, with speculation about other disciplines and emphasis that although physics-based and engineering-based biology will change future medicine, the physical sciences and engineering will also be changed by these developments. Essentially, physics is being redefined by the need to accommodate these new views of what constitutes biological systems and how they function.

The Importance of Quantitative Systemic Thinking in Medicine
Geoffrey B West

The study and practice of medicine could benefit from an enhanced engagement with the new perspectives provided by the emerging areas of complexity science [Secs 10.7–10.8] and systems biology. A more integrated, systemic approach is needed to fully understand the processes of health, disease, and dysfunction, and the many challenges in medical research and education. Integral to this approach is the search for a quantitative, predictive, multilevel, theoretical conceptual framework that both complements the present approaches and stimulates a more integrated research agenda that will lead to novel questions and experimental programmes. As examples, the importance of network structures and scaling laws [Sec 2.10] are discussed for the development of a broad, quantitative, mathematical understanding of issues that are important in health, including ageing and mortality, sleep, growth, circulatory systems [Sec 1.17], and drug doses [Sec 2.5]. A common theme is the importance of understanding the quantifiable determinants of the baseline scale of life, and developing corresponding parameters that define the average, idealised, healthy individual.

Friday, April 20, 2012

Frequency versus Wavelength

The Optics of Life:
A Biologist's Guide to Light in Nature,
by Sonke Johnsen.
I am currently reading The Optics of Life: A Biologist’s Guide to Light in Nature, by Sonke Johnsen. I hope to have more to say about this fascinating book when I finish it, but today I want to consider a point made in Chapter 2 (Units and Geometry), which addresses the tricky issue of measuring light intensity as a function of either frequency or wavelength. Johnsen favors using wavelength whenever possible.
However, one critical issue must be discussed before we put frequency away for good. It involves the fact that light spectra are histograms. Suppose you measure the spectrum of daylight, and that the value at 500 nm is 15 photons/cm2/s/nm. That doesn’t mean that there are 15 photons/cm2/s with a wavelength of exactly 500 nm. Instead, it means that, over a 1-nm-wide interval centered on a wavelength of 500 nm, you have 15 photons/cm2/s. The bins in a spectrum don’t have to be 1 nm wide, but they all must have the same width.

Let’s suppose all the bins are 1 nm wide and centered on whole numbers (i.e., one at 400 nm, one at 401 nm, etc.). What happens if we convert these wavelength values to their frequency counterparts? Let’s pick the wavelengths of two neighboring bins and call them λ1 and λ2. The corresponding frequencies ν1 and ν2 are equal to c/λ1 and c/λ2, where c is the speed of light. We know that λ1−λ2 equals 1 nm, but what does ν1−ν2 equal?
ν1 − ν2 = c/λ1 − c/λ2 = c(λ2 − λ1)/(λ1λ2) ≈ −c(1 nm)/λ1²
…So the width of the frequency bins depends on the wavelengths they correspond to, which means they won’t be equal! In fact, they are quite unequal. Bins at the red end of the spectrum (700 nm) are only about one-third as wide as bins at the blue end (400 nm). This means that a spectrum generated using bins with equal frequency intervals would look different from one with equal wavelength intervals. So which one is correct? Neither or both. The take-home message is that the shape of a spectrum depends on whether you have equal frequency bins or equal wavelength bins.
Johnsen goes on to note that the wavelength at which the spectrum is maximum depends on whether you use equal frequency or equal wavelength bins. It does not make sense to say that the spectrum of, say, sunlight peaks at a particular wavelength, unless you specify the type of spectrum you are using. Furthermore, you cannot unambiguously say light is “white” (a uniform spectrum). White light using equal wavelength bins is not white using equal frequency bins. Fortunately, if you integrate the spectrum, you get the same value regardless of whether you express it in terms of wavelength or frequency.
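A quick numerical illustration of the bin-width effect (my own arithmetic, not Johnsen’s):

```python
c = 3.0e8  # speed of light (m/s)

# Width of the frequency bin corresponding to a 1-nm wavelength bin,
# |dnu| = (c / lambda^2) |dlambda|, at the blue and red ends of the spectrum.
for lam_nm in (400, 700):
    lam = lam_nm * 1e-9
    dnu = c / lam**2 * 1e-9          # frequency width of a 1-nm bin (Hz)
    print(f"{lam_nm} nm: a 1-nm bin spans {dnu:.2e} Hz")

print("ratio (700 nm vs 400 nm):", (400 / 700)**2)   # about one-third
```

This is exactly Johnsen’s point: a 1-nm bin at 700 nm covers only about a third as much frequency as a 1-nm bin at 400 nm.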

Russ Hobbie and I discuss this issue in Chapter 14 (Atoms and Light) of the 4th edition of Intermediate Physics for Medicine and Biology.
Early measurements of the radiation function were done with equipment that made measurements vs. wavelength. It is also possible to measure vs. frequency. To rewrite the radiation function in terms of frequency, let λ1 and λ2 = λ1 + dλ be two slightly different wavelengths, with power Wλ(λ, T) dλ emitted per unit surface area at wavelengths between λ1 and λ2. The same power must be emitted between frequencies ν1 = c/λ1 and ν2 = c/λ2:

Wν(ν,T) dν = Wλ(λ,T) dλ .     (14.35)

Since ν = c/λ, dν/dλ = −c/λ², and

|dν| = + (c/λ²) |dλ| .                 (14.36)

... This transformation is shown in Fig. 14.24. The amount of power per unit area radiated in the 0.5 μm interval between two of the vertical lines in the graph on the lower right is the area under the curve of Wλ between these lines. The graph on the upper right transforms to the corresponding frequency interval. The radiated power, which is the area under the Wν curve between the corresponding frequency lines on the upper left, is the same. We will see this same transformation again when we deal with x rays. Note that the peaks of the two curves are at different frequencies or wavelengths.
Students who prefer visual explanations should see Fig. 14.24, which Russ drew. It is one of my favorite pictures in our book, and provides an illuminating comparison of the two spectra.

One detail I should mention: why in Eq. 14.36 do we use absolute values to eliminate the minus sign introduced by the derivative dν/dλ? Typically, when you integrate a spectrum, you start from the lower frequency and go to the higher frequency (say, zero to infinity), and you start from the shorter wavelength and go to the longer wavelength (again, zero to infinity). However, zero frequency corresponds to an infinite wavelength, and an infinite frequency corresponds to zero wavelength. So, really one case should be integrated forward (zero to infinity) and the other backwards (infinity to zero). If we keep the convention of always integrating from zero to infinity in both cases, we introduce an extra minus sign, which cancels the minus sign introduced by dν/dλ.

Sometimes it helps to have an elementary example to illustrate these ideas. Therefore, I have developed a new homework problem that introduces an extremely simple spectrum for which you can do the math fairly easily, thereby allowing you to focus on the physical interpretation. Enjoy.
Section 14.7

Problem 23 ½ Let Wν(ν) = A ν (ν0 − ν) for ν less than ν0, and Wν(ν) = 0 otherwise.
(a) Plot Wν(ν) versus ν.
(b) Calculate the frequency corresponding to the maximum of Wν(ν), called νmax.
(c) Let λ0 = c/ν0 and λmax = c/νmax. Write λmax in terms of λ0.
(d) Integrate Wν(ν) over all ν to find Wtot.
(e) Use Eqs. 14.35 and 14.36 to calculate Wλ(λ).
(f) Plot Wλ(λ) versus λ.
(g) Calculate the wavelength corresponding to the maximum of Wλ(λ), called λ*max, in terms of λ0.
(h) Compare λmax and λ*max. Are they the same or different? If λ0 is 400 nm, calculate λmax and λ*max. What part of the electromagnetic spectrum is each of these in?

(i) Integrate Wλ(λ) over all λ to find W*tot. Compare Wtot and W*tot. Are they the same or different?
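For readers who want to check their answers, here is a minimal numerical sketch (my own check, not part of the problem):

```python
import numpy as np

# Toy spectrum from the problem: W_nu = A*nu*(nu0 - nu) for nu < nu0, with
# A = 1 and lambda_0 = 400 nm. A numerical check of parts (b)-(i).
c = 3.0e8                       # speed of light (m/s)
lam0 = 400e-9                   # lambda_0 (m)
nu0 = c / lam0

nu = np.linspace(1e12, nu0, 200_000)
W_nu = nu * (nu0 - nu)          # A = 1

lam = c / nu                    # corresponding wavelengths (all > lambda_0)
W_lam = W_nu * c / lam**2       # Eqs. 14.35-14.36: W_lam = W_nu |dnu/dlam|

print("lam_max / lam0  =", (c / nu[np.argmax(W_nu)]) / lam0)   # part (c)
print("lam*_max / lam0 =", lam[np.argmax(W_lam)] / lam0)       # part (g)

# Parts (d) and (i): trapezoid-rule integrals over nu and over lambda.
W_tot = np.sum(0.5 * (W_nu[1:] + W_nu[:-1]) * np.diff(nu))
W_tot_star = np.sum(0.5 * (W_lam[1:] + W_lam[:-1]) * np.abs(np.diff(lam)))
print("W_tot =", W_tot, " W*_tot =", W_tot_star)
```

If your algebra is right, the printed ratios will match your answers to (c) and (g), and the two integrals will agree.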

Friday, April 13, 2012

Stirling’s Formula!

Factorials are used in many branches of mathematics and physics, and particularly in statistical mechanics. One often needs the natural logarithm of a factorial, ln(n!). In Chapter 3 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I use Stirling’s approximation to compute ln(n!). We analyze this approximation in Appendix I.
There is a very useful approximation to the factorial, called Stirling’s approximation:
ln(n!) ≈ n ln n − n .
To derive it, write ln(n!) as
ln(n!) = ln 1 + ln 2 + … + ln n = ∑ ln m ,
where the sum runs from m = 1 to n. The sum is the same as the total area of the rectangles in Fig. I.1, where the height of each rectangle is ln m and the width of the base is one. The area of all the rectangles is approximately the area under the smooth curve, which is a plot of ln m. The area is approximately
∫₁ⁿ ln m dm = [m ln m − m]₁ⁿ = n ln n − n + 1.
This completes the proof.
David Mermin—one of my favorite writers among physicists—has much more to say about Stirling’s approximation in his American Journal of Physics article “Stirling’s Formula!” (leave it to Mermin to work an exclamation point into his title). He writes Stirling’s approximation as n! = √(2πn) (n/e)^n. Taking the natural logarithm of both sides gives ln(n!) = ln(2πn)/2 + n ln n − n. For large n, the first term is small, and the result is the same as Russ and I present. I wonder what effect the first term has on the approximation? For small n, it makes a big difference! In Table I.1 of our textbook, we compute the accuracy of n ln n − n for n = 5. In that case, n! = 120, so ln(n!) = ln(120) = 4.7875 and 5 ln 5 − 5 = 3.047, giving a 36% error. But ln(10π)/2 + 5 ln 5 − 5 = 4.7708, implying an error of 0.35%, so Mermin’s formula is much better than ours. (I shouldn’t call it Mermin’s formula; I believe Stirling himself derived n! = √(2πn) (n/e)^n.)

Mermin doesn’t stop there. He analyzes the approximation in more detail, and eventually derives an exact formula for n! that looks like Stirling’s approximation given above, except multiplied by an infinite product. In the process, he looks at the approximation for the base of the natural logarithms, e, presented in Chapter 2 of Intermediate Physics for Medicine and Biology, e = (1 + 1/N)^N, and shows that a “spectacularly better” approximation for e is (1 + 1/N)^(N+1/2). He then goes on to derive an improved approximation for n!, which is his expression for Stirling’s formula times e^(1/(12n)). Perhaps getting carried away, he then derives even better approximations.
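Here is a quick numerical comparison of the three levels of approximation (my own check; the numbers for n = 5 reproduce those quoted above):

```python
import math

for n in (5, 10, 50):
    exact = math.lgamma(n + 1)                    # ln(n!)
    bare = n * math.log(n) - n                    # IPMB's Stirling approximation
    stirling = 0.5 * math.log(2 * math.pi * n) + bare   # with the sqrt(2 pi n) term
    corrected = stirling + 1.0 / (12 * n)         # Mermin's e^(1/(12n)) improvement
    for name, approx in (("n ln n - n", bare),
                         ("Stirling", stirling),
                         ("+ 1/(12n)", corrected)):
        err = 100 * (exact - approx) / exact
        print(f"n = {n:2d}  {name:10s} {approx:9.4f}  error {err:8.4f}%")
```

For n = 5 the three errors come out to about 36%, 0.35%, and less than 0.001%, and all of them shrink rapidly as n grows.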

All of this matters little in applications to statistical mechanics, where n is on the order of Avogadro’s number, in which case the first term in Stirling’s formula is utterly negligible. Nevertheless, I urge you to read Mermin’s paper, if only to enjoy the elegance of his writing. To learn more about Mermin’s views on writing physics, see his essay “Writing Physics.”

Friday, April 6, 2012

Stokes' Law

Stokes’ law appears in Chapter 4 of the 4th edition of Intermediate Physics for Medicine and Biology. Russ Hobbie and I write
For a Newtonian fluid … with viscosity η, one can show (although it requires some detailed calculation6) that the drag force on a spherical particle of radius a is given by

 Fdrag = − β v = − 6 π η a v.

This equation is valid when the sphere is so large that there are many collisions of fluid molecules with it and when the velocity is low enough so that there is no turbulence. The result is called Stokes’ law.
Footnote 6 says “This is an approximate equation. See Barr (1931, p. 171).”

We can derive the form of Stokes’s law from dimensional reasoning. For a spherical particle of radius a in a fluid moving with speed v and having viscosity η, try to create a quantity having dimensions of force from some combination of a (meter), v (meter/second), and η (Newton second/meter2; see Sec. 1.14). The only way to do this is the combination η a v. You get the form of Stokes’ law, except for the dimensionless factor of 6π. Calculating the 6π was Stokes’ great accomplishment.

Boundary Layer Theory,
by Schlichting and Gersten.
In order to learn how Stokes obtained his complete solution, I turn to one of my favorite books on fluid dynamics: Boundary Layer Theory, by Hermann Schlichting. Consider a sphere of radius R placed in an unbounded fluid moving with speed U. Assume that the motion occurs at low Reynolds number (a “creeping motion”), so that inertial effects are negligible compared to viscous forces. The Navier-Stokes equation (see Problem 28 of Chapter 1 in Intermediate Physics for Medicine and Biology) reduces to ∇p = μ ∇²v, where p is the pressure, μ the viscosity, and v the fluid velocity. Assume further that the fluid is incompressible, so that div v = 0 (see Problem 35 of Chapter 1), and that far from the sphere the fluid speed is v = U. Finally, assume no-slip boundary conditions at the sphere surface, so that v = 0 at r = R. At this point, let us hear the results in Schlichting’s own words (translated from the original German, of course).
The oldest known solution for a creeping motion was given by G. G. Stokes who investigated the case of parallel flow past a sphere [17]. The solution of eqns. (6.3) [Navier-Stokes equation] and (6.4) [div v = 0] for the case of a sphere of radius R, the centre of which coincides with the origin, and which is placed in a parallel stream of uniform velocity U, Fig. 6.1, along the x-axis can be represented by the following equations for the pressure and velocity components [Eqs. 6.7, which are slightly too complicated to reproduce in this blog, but which involve no special functions or other higher mathematics]. . . The pressure distribution along a meridian of the sphere as well as along the axis of abscissae, x, is shown in Fig. 6.1 [a plot with a peak positive pressure at the upstream edge and a peak negative pressure at the downstream edge]. The shearing-stress distribution over the sphere can also be calculated from the above formula. It is found that the shearing stress has its largest value [at the equator of the sphere] . . . Integrating the pressure distribution and the shearing stress over the surface of the sphere we obtain the total drag D = 6 π μ R U. This is the very well known Stokes equation for the drag of a sphere. It can be shown that one third of the drag is due to the pressure distribution and that the remaining two thirds are due to the existence of shear. . . the sphere drags with it a very wide layer of fluid which extends over about one diameter on both sides.
Reference 17 is to Stokes, G. G. (1851) “On the Effect of Internal Friction of Fluids on the Motion of Pendulums,” Transactions of the Cambridge Philosophical Society, Volume 9, Pages 8–106.

Schlichting goes on to analyze the flow around a sphere for high Reynolds number, which is particularly fascinating because in that case viscosity is negligible everywhere except near the sphere surface where the no-slip boundary condition holds. This results in a thin boundary layer forming at the sphere surface. In his introduction, Schlichting writes
In a paper on “Fluid Motion with Very Small Friction,” read before the Mathematical Congress in Heidelberg in 1904, L. Prandtl showed how it was possible to analyze viscous flows precisely in cases which had great practical importance. With the aid of theoretical considerations and several simple experiments, he proved that the flow about a solid body can be divided into two regions: a very thin layer in the neighbourhood of the body (boundary layer) where friction plays an essential part, and the remaining region outside this layer, where friction may be neglected.
The book that Russ and I cite in footnote 6 is A Monograph of Viscometry by Guy Barr (Oxford University Press, 1931). I obtained a yellowing and brittle copy of this book through interlibrary loan. It doesn’t describe the derivation of Stokes’ law in as much detail as Schlichting, but it does consider many corrections to the law, including Oseen’s correction (a first-order correction when expanding the drag force in powers of the Reynolds number), corrections for the effects of walls, consideration of the ends of tubes, and even the mutual effect of two spheres interacting. I found the following sentence, discussing cylinders as opposed to spheres, to be particularly interesting: “Stoke’s approximation leads to a curious paradox when his system of equations is applied to the movement of an infinite cylinder in an infinite medium, the only stable condition being that in which the whole of the fluid, even at infinity, moves with the same velocity as the cylinder.” You can’t derive Stokes’ law in two dimensions.

While boundary layer theory and high Reynolds number flow are important for many engineering applications, much of biology takes place at low Reynolds number, where Stokes’ law applies. (For more about life at low Reynolds number, see “Life at Low Reynolds Number” by Edward Purcell.)
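As a back-of-the-envelope example of the creeping-flow regime (my numbers, chosen only for illustration): balance Stokes’ drag against the buoyant weight of a small sphere to get its terminal settling speed.

```python
import math

eta   = 1.0e-3    # viscosity of water (Pa s)
rho_f = 1000.0    # density of water (kg/m^3)
rho_s = 1100.0    # density of the sphere (kg/m^3), an assumed cell-like value
a     = 5e-6      # sphere radius (m), roughly cell-sized
g     = 9.8       # gravitational acceleration (m/s^2)

# 6*pi*eta*a*v = (4/3)*pi*a^3*(rho_s - rho_f)*g, so:
v  = 2 * a**2 * (rho_s - rho_f) * g / (9 * eta)   # terminal speed
Re = rho_f * v * (2 * a) / eta                    # Reynolds number
print(f"terminal speed = {v:.2e} m/s, Re = {Re:.1e}")
```

The Reynolds number comes out around 10⁻⁵, comfortably inside the regime where Stokes’ law holds.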

Asimov's Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
Stokes’ life is described in Asimov’s Biographical Encyclopedia of Science and Technology.
Stokes, Sir George Gabriel
British mathematician and physicist
Born: Skreen, Sligo, Ireland, August 13, 1819
Died: Cambridge, England, February 1, 1903

Stokes was the youngest child of a clergyman. He graduated from Cambridge in 1841 at the head of his class in mathematics and his early promise was not belied. In 1849 he was appointed Lucasian professor of mathematics at Cambridge; in 1854, secretary of the Royal Society; and in 1885, president of the Royal Society. No one had held all three offices since Isaac Newton a century and a half before. Stokes’s vision is indicated by the fact that he was one of the first scientists to see the value of Joule’s work.

Between 1845 and 1850 Stokes worked on the theory of viscous fluids. He deduced an equation (Stokes’s law) that could be applied to the motion of a small sphere falling through a viscous medium to give its velocity under the influence of a given force, such as gravity. This equation could be used to explain the manner in which clouds float in air and waves subside in water. It could also be used in practical problems involving the resistance of water to ships moving through it. In fact such is the interconnectedness of science that six decades after Stokes’s law was announced, it was used for a purpose he could never have foreseen—to help determine the electric charge on a single electron in a famous experiment by Millikan. . .

Friday, March 30, 2012

iBioMagazine

I recently discovered iBioMagazine, which I highly recommend. The iBioMagazine website describes its goals.
iBioMagazine offers a collection of short (less than 15 min) talks that highlight the human side of research. iBioMagazine goes 'behind-the-scenes' of scientific discoveries, provides advice for young scientists, and explores how research is practiced in the life sciences. New topics will be covered in each quarterly issue. Subscribe to be notified when a new iBioMagazine is released.
Here are some of my favorites:
Bruce Alberts, Editor-in-Chief of Science magazine and coauthor of The Molecular Biology of the Cell, tells about how he learned from failure.

Former NIH director Harold Varmus explains why he became a scientist.

Young researchers participating in a summer course at the Marine Biological Laboratory at Woods Hole explain why they became scientists.

Hugh Huxley discusses his development of the sliding filament theory of muscle contraction. Of particular interest is that Huxley began his career as a physics student, and then changed to biology. Andrew Huxley (no relation), of Hodgkin and Huxley fame, independently developed a similar model.
Finally, readers of Intermediate Physics for Medicine and Biology should be sure to listen to Rob Phillips’ wonderful talk about the role of quantitative thinking and mathematical modeling in biology. Phillips is coauthor of the textbook Physical Biology of the Cell, which I have discussed earlier in this blog.

Friday, March 23, 2012

Saltatory Conduction

Action potential propagation along a myelinated nerve axon is often said to occur by “saltatory conduction.” The 4th edition of Intermediate Physics for Medicine and Biology follows this traditional explanation.
We have so far been discussing fibers without the thick myelin sheath. Unmyelinated fibers constitute about two-thirds of the fibers in the human body . . . Myelinated fibers are relatively large, with outer radii of 0.5 – 10 μm. They are wrapped with many layers of myelin between the nodes of Ranvier . . . In the myelinated region the conduction of the nerve impulse can be modeled by electrotonus because the conductance of the myelin sheath is independent of voltage. At each node a regenerative Hodgkin-Huxley-type (HH-type) conductance change restores the shape of the pulse. Such conduction is called saltatory conduction because saltare is the Latin verb “to jump.”
I have never liked the physical picture of an action potential jumping from one node to the next. The problem with this idea is that the action potential is distributed over many nodes simultaneously as it propagates along the axon. Consider an action potential with a rise time of about half a millisecond. Let the radius of the axon be 5 microns. Table 6.2 in Intermediate Physics for Medicine and Biology indicates that the speed of propagation for this axon is 85 m/s, which implies that the upstroke of the action potential is spread over (0.5 ms) × (85 mm/ms) = 42.5 mm. But the distance between nodes for this fiber (again, from Table 6.2) is 1.7 mm. Therefore, the action potential upstroke is distributed over 25 nodes! The action potential is not rising at one node and then jumping to the next, but it propagates in a nearly continuous way along the myelinated axon. I grant that in other cases, when the speed is slower or the rise time is briefer, you can observe behavior that begins to look saltatory (e.g., Huxley and Stampfli, Journal of Physiology, Volume 108, Pages 315–339, 1949), but even then the action potential upstroke is distributed over many nodes (see their Fig. 13).
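The arithmetic behind that claim, for anyone who wants to play with the numbers (values from Table 6.2):

```python
rise_time = 0.5e-3     # action potential rise time (s)
speed     = 85.0       # conduction speed (m/s)
internode = 1.7e-3     # spacing between nodes of Ranvier (m)

upstroke = speed * rise_time                  # spatial extent of the upstroke
print(f"upstroke spans {upstroke * 1e3:.1f} mm "
      f"= {upstroke / internode:.0f} nodes")  # 42.5 mm, about 25 nodes
```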

If saltatory conduction is not the best description of propagation along a myelinated axon, then what is responsible for the speedup compared to unmyelinated axons? Primarily, the action potential propagates faster because of a reduction of the membrane capacitance. Along the myelinated section of the membrane, the capacitance is low because of the many layers of myelin (N capacitors C in series result in a total capacitance of C/N). At a node of Ranvier, the capacitance per unit area of the membrane is normal, but the area of the nodal membrane is small. Adding these two contributions together leads to a very small average, or effective, capacitance, which allows the membrane potential to increase very quickly, resulting in fast propagation.
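A toy estimate of that effective capacitance (my assumed numbers, for illustration only, not values from the book):

```python
import math

c_m  = 1e-2     # membrane capacitance per unit area: 1 uF/cm^2 = 1e-2 F/m^2
N    = 150      # layers of myelin (assumed)
a    = 5e-6     # axon radius (m)
L    = 1.7e-3   # internode spacing (m)
node = 1e-6     # length of a node of Ranvier (m), assumed

# Capacitance per unit length: myelinated membrane contributes C/N, and the
# bare node membrane contributes C, weighted by the fraction of length each occupies.
myelin    = 2 * math.pi * a * (c_m / N) * (L - node) / L
bare_node = 2 * math.pi * a * c_m * node / L
effective = myelin + bare_node

unmyelinated = 2 * math.pi * a * c_m
print(f"capacitance reduced {unmyelinated / effective:.0f}-fold")
```

With these numbers the effective capacitance drops by roughly a factor of a hundred, which is the reduction driving the faster propagation.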

In summary, I don’t find the idea of an action potential jumping from node to node to be the most useful image of propagation along a myelinated axon. Instead, I prefer to think of propagation as being nearly continuous, with the reduced effective capacitance increasing the speed. This isn’t the typical explanation found in physiology books, but I believe it’s closer to the truth. Rather than using the term saltatory conduction, I suggest we use curretory conduction, for the Latin verb currere, “to run.”

Friday, March 16, 2012

Henry Moseley

Henry Moseley was an English physicist who developed x-ray methods to assign a unique atomic number Z to each element. He appears in Problem 3 of Chapter 16 in the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 3 Henry Moseley first assigned atomic numbers to elements by discovering that the square root of the frequency of the Kα photon is linearly related to Z. Solve Eq. 16.2 for Z and show that this is true. Plot Z vs the square root of the frequency and compare it to data you look up.
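Equation 16.2 is not reproduced in this post, so here is a sketch using the standard Bohr-model estimate of the Kα energy, hν = (3/4)(13.6 eV)(Z − 1)²; treat that form as my assumption, not necessarily the book’s exact expression.

```python
import math

h = 4.136e-15                       # Planck's constant (eV s)
for Z in (13, 26, 29, 42, 74):      # Al, Fe, Cu, Mo, W
    E  = 0.75 * 13.6 * (Z - 1)**2   # K-alpha photon energy (eV)
    nu = E / h                      # frequency (Hz)
    print(f"Z = {Z:2d}  sqrt(nu) = {math.sqrt(nu):.3e} Hz^0.5")

# sqrt(nu) = sqrt(0.75 * 13.6 / h) * (Z - 1): linear in Z, as Moseley found.
```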
Asimov’s Biographical Encyclopedia of
Science and Technology,
by Isaac Asimov.
Asimov’s Biographical Encyclopedia of Science and Technology (Second Revised Edition) describes Moseley.
For a time [Moseley] did research under Ernest Rutherford where he was the youngest and most brilliant of Rutherford’s brilliant young men . . .

This discovery [that each element could be assigned an atomic number] led to a major improvement of Mendeleev’s periodic table. Mendeleev had arranged his table of elements in order of atomic weight, but this order had had to be slightly modified in a couple of instances to keep the table useful. Moseley showed that if it was arranged in order of nuclear charge (that is, according to the number of protons in the nucleus, a quantity that came to be known as the atomic number) no modifications were necessary . . . Furthermore, Moseley’s X-ray technique could locate all the holes in the table representing still-undiscovered elements, and exactly seven such holes remained in 1914, the year Moseley developed the concept of the atomic number.
Moseley died when he was only 27 years old. Asimov tells the story.
World War I had broken out at this time and Moseley enlisted at once as a lieutenant of the Royal Engineers. Nations were still naïve in their understanding of the importance of scientists to human society and there seemed no reason not to expose Moseley to the same chances of death to which millions of other soldiers were being exposed. Rutherford tried to get Moseley assigned to scientific labors but failed. On June 13, 1915, Moseley shipped to Turkey and two months later he was killed at Gallipoli as part of a thoroughly useless and badly bungled campaign, his death having brought Great Britain and the world no good . . . In view of what he might still have accomplished (he was only twenty-seven when he died), his death might well have been the most costly single death of the war to mankind generally.

Had Moseley lived it seems as certain as anything can be in the uncertain world of scientific history, that he would have received a Nobel Prize in physics . . .
The Making of the Atomic Bomb,
by Richard Rhodes.
To learn more about Moseley, I recommend Chapter 4 (The Long Grave Already Dug) in Richard Rhodes’ classic The Making of the Atomic Bomb. Rhodes writes that “When he heard of Moseley’s death, the American physicist Robert A. Millikan wrote in public eulogy that his loss alone made the war ‘one of the most hideous and most irreparable crimes in history.’”

Friday, March 9, 2012

Glimpses of Creatures in Their Physical Worlds

Glimpses of Creatures
in their Physical Worlds,
by Steven Vogel.
I have recently finished reading Steven Vogel’s book Glimpses of Creatures in Their Physical Worlds, which is a collection of twelve essays previously published in the Journal of Biosciences. The Preface begins
The dozen essays herein look at bits of biology, bits that reflect the physical world in which organisms find themselves. Evolution can do wonders, but it cannot escape its earthy context—a certain temperature range, a particular gravitational acceleration, the physical properties of air and water, and so forth. Nor can it tamper with mathematics. Thus the design of organisms—the level of organization at which natural selection acts most directly as well as the focus here—must reflect that physical context. The baseline it provides both imposes constraints and affords opportunities, the co-stars in what follows….”
The first essay is titled “Two Ways to Move Material,” and the two ways it discusses are diffusion and flow. To compare the two quantitatively, Vogel uses the Péclet number, Pe, defined as Pe = VL/D, where V is the flow speed, L the distance, and D the diffusion coefficient. As I read his analysis I suddenly got a sinking feeling: Russ Hobbie and I discussed just such a dimensionless number in Problem 37 of Chapter 4 in the 4th edition of Intermediate Physics for Medicine and Biology, but we called it the Sherwood Number, not the Péclet number. Were we wrong?

Edward Purcell, in his well-known article “Life at Low Reynolds Number,” introduced the quantity VL/D, which he called simply S with no other name. However, in a footnote at the end of the article he wrote “I’ve recently discovered that its official name is the Sherwood number, so S is appropriate after all!” Mark Denny, in his book Air and Water, states that VL/D is the Sherwood number, but in his Encyclopedia of Tide Pools and Rocky Shores he calls it the Péclet number. Vogel, in his earlier book Life in Moving Fluids, introduces VL/D as the Péclet number but adds parenthetically “sometimes known as the Sherwood number.”

Some articles report a more complicated relationship between the Péclet and Sherwood number, implying they can’t be the same. For instance, consider the paper “Nutrient Uptake by a Self-Propelled Steady Squirmer,” by Vanesa Magar, Tomonobu Goto, and T. J. Pedley (Quarterly Journal of Mechanics and Applied Mathematics, Volume 56, Pages 65–91, 2003), in which they write “We find the relationship between the Sherwood number (Sh), a measure of the mass transfer across the surface, and the Péclet number (Pe), which indicates the relative effect of convection versus diffusion”. Similarly, Fumio Takemura and Akira Yabe (“Gas Dissolution Process of Spherical Rising Gas Bubbles,” Chemical Engineering Science, Volume 53, Pages 2691–2699, 1998) define the Péclet number as VL/D, but define the Sherwood number as αL/D, where α is the mass transfer coefficient at an interface (having, by necessity, the same units as speed, m/s). After reviewing these and other sources, I conclude that Vogel is probably right: VL/D should properly be called the Péclet number and not the Sherwood number, although the distinction is not always clear in the literature.

Now that we have cleared up this Péclet/Sherwood unpleasantness, let’s return to Vogel’s lovely essay about two ways to move material. He calculates the Péclet number for capillaries, using a speed of 0.7 mm/s (close to the 1 mm/s listed in Table 1.4 of Intermediate Physics for Medicine and Biology), a capillary radius of 3 microns (we use 4 microns in Table 1.4), and an oxygen diffusion constant of 1.8 × 10−9 m2/s (2 × 10−9 m2/s in Intermediate Physics for Medicine and Biology), and finds a Péclet number of 1.2 (if you use the data in our book, you would get 2). Vogel then argues that the optimum size for capillaries is when the Péclet number is approximately one, so that evolution has created a nearly optimized system. The argument, in my words, is that oxygen transport changes from convection to diffusion in the capillaries. If the Péclet number were much smaller than one, diffusion would dominate and we would be better off with larger capillaries that are farther apart and faster blood flow to improve convection. If the Péclet number were much larger than one, convection would dominate and our circulatory system would be improved by using smaller capillaries closer together, even if that means slower blood flow, to improve diffusion. A Péclet number of about one seems to be the happy medium.
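The numbers check out (a quick verification, using the values quoted above):

```python
# Peclet number Pe = V*L/D for a capillary, with Vogel's values...
print(0.7e-3 * 3e-6 / 1.8e-9)   # 1.17, Vogel's Pe of about 1.2
# ...and with the values from Table 1.4 of IPMB:
print(1e-3 * 4e-6 / 2e-9)       # 2.0
```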

The third essay, “Getting Up to Speed,” is also relevant to readers of Intermediate Physics for Medicine and Biology. Our Problem 43 of Chapter 2 is about how high animals can jump.
Problem 43 Let’s examine how high animals can jump [Schmidt-Nielsen (1984), pp. 176-179]. Assume that the energy output of the jumping muscle is proportional to the body mass, M. The gravitational potential energy gained upon jumping to a height h is Mgh (g = 9.8 m s−2). If a 3 g locust can jump 60 cm, how high can a 70 kg human jump? Use scaling arguments.
In the next exercise, Problem 44, Russ and I ask the reader to calculate the acceleration of the jumper, which, if you solve the problem, you will find varies inversely with length.
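The scaling is easy to see numerically (a sketch under assumed leg-extension distances; the extension lengths are my numbers, not the book’s):

```python
import math

g = 9.8
h = 0.60                      # jump height (m): mass-independent if E ~ M
v = math.sqrt(2 * g * h)      # takeoff speed, the same for every jumper

# Acceleration during push-off: a = v^2 / (2*s), with s ~ body length.
for name, s in (("locust", 0.04), ("human", 1.0)):   # extension distances (m), assumed
    a = v**2 / (2 * s)
    print(f"{name}: takeoff acceleration = {a:.0f} m/s^2 ({a / g:.1f} g)")
```

The jump height is the same for both, but the locust must accelerate about twenty-five times harder, which is the inverse-with-length scaling of Problem 44.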

Vogel analyzes this same topic, but digs a little deeper. Here he examines all sorts of jumpers, including seeds and spores that are hurled upward without the help of muscles at all. He finds that the scaling laws from Problems 43 and 44 do indeed hold, but that the traditional reasoning behind them is flawed.
The diversity of cases for which the scaling rule works ought to raise a flag of suspicion. Why should an argument based on muscle work for systems that do their work with other engines? . . . Something else must be afoot—again, the original argument presumed isometrically built muscle-powered jumpers. In short, the fit of the far more diverse projectiles demands a more general argument for the scaling of projectile performance. . .
Vogel goes on to show that for small animals, muscles would have to work unrealistically fast in order to produce the accelerations required to jump to a fixed height.
The old argument has crashed and burned. The work relative to mass of a contracting muscle deteriorates as animals get smaller rather than holding constant—a consequence of the requisite rise in intrinsic speed. Muscle need not and commonly does not power jumps in real time—elastic energy storage in tendons of collagen, in apodemes of chitin, and in pads of resilin provides power amplification. Finally, muscle powers none of those seed and tiny fungal projectiles. Yet acceleration persists in scaling as the classic argument anticipates. . .
So how does Vogel explain the scaling law?
A possible alternative emerges if we reexamine the relationship between force and acceleration defined by Newton’s second law. If acceleration indeed scales inversely with length and mass directly with the cube of length, then force should scale with the square of the length. Or, put another way, force divided by the square of the length should remain constant. Force over the square of length corresponds to stress, so we’re saying that stress should be constant. Perhaps our empirical finding that acceleration varies with length tells us that stress in some manner limits the systems.
Vogel’s book is full of these sorts of physical insights. I recommend it as supplemental reading for those studying from Intermediate Physics for Medicine and Biology.

Friday, March 2, 2012

Odds and Ends

It’s time to catch up on topics discussed previously in this blog.

The Technetium-99m Shortage

Several times I have written about the Technetium-99m shortage facing the United States (see here, here, here, and here). Russ Hobbie and I discuss 99mTc in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology.
The most widely used isotope is 99mTc. As its name suggests, it does not occur naturally on earth, since it has no stable isotopes. We consider it in some detail to show how an isotope is actually used. Its decay scheme has been discussed above. There is a nearly monoenergetic 140-keV γ ray. Only about 10% of the energy is in the form of nonpenetrating radiation. The isotope is produced in the hospital from the decay of its parent, 99Mo, which is a fission product of 235U and can be separated from about 75 other fission products. The 99Mo decays to 99mTc.
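As an aside, the parent–daughter decay in a 99Mo/99mTc generator follows the Bateman equations. Here is a sketch using the well-known half-lives (66 hours for 99Mo and 6 hours for 99mTc, which are not given in the excerpt above), ignoring the fraction of 99Mo decays that bypass the metastable state:

```python
import math

lam_mo = math.log(2) / 66.0   # decay constant of Mo-99 (per hour)
lam_tc = math.log(2) / 6.0    # decay constant of Tc-99m (per hour)

def tc_activity(t, a_mo0=1.0):
    """Tc-99m activity at t hours after elution, per unit initial Mo-99 activity."""
    return a_mo0 * lam_tc / (lam_tc - lam_mo) * \
           (math.exp(-lam_mo * t) - math.exp(-lam_tc * t))

for t in (6, 12, 24, 48):
    print(f"t = {t:2d} h: relative Tc-99m activity = {tc_activity(t):.2f}")
```

The daughter activity rebuilds over roughly a day, which is why hospitals can “milk” a generator each morning.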
An interesting article by Matthew Wald about the supply of 99mTc appeared in the February 6 issue of the New York Times. Wald writes
For years, scientists and policy makers have been trying to address two improbably linked problems that hinge on a single radioactive isotope: how to reduce the risk of nuclear weapons proliferation, and how to assure supplies of a material used in thousands of heart, kidney and breast procedures a year. . .

The isotope is technetium 99m, or tech 99 for short. It is useful in diagnostic tests because it throws off an easy-to-detect gamma ray; also, because it breaks down very quickly, it gives only a small dose of radiation to the patient.

But the recipe for tech 99 requires another isotope, molybdenum 99, that is now made in nuclear reactors using weapon-grade uranium. In May 2009, a Canadian reactor that makes most of the North American supply of moly 99 was shut because of a safety problem. A second reactor, in the Netherlands, was simultaneously closed for repairs.

The 54-year-old Canadian reactor, Chalk River in Ontario, is running now, but its license expires in four years. Canada built two replacement reactors, but even though they turned out to be unusable, their construction discouraged potential competitors ...
One solution to the 99mTc shortage may be to produce 99Mo in a cyclotron. The New York Times article discussed this solution briefly, and more detail is supplied by a report written by Hamish Johnston and published on the website medicalphysicsweb.org (all readers of Intermediate Physics for Medicine and Biology should become familiar with medicalphysicsweb.org). The gist of the method is to bombard 100Mo with protons in a cyclotron. Recently, researchers have made progress in developing this method. Johnston writes
Scientists in Canada are the first to make commercial quantities of the medical isotope technetium-99m using medical cyclotrons. The material is currently made in just a few ageing nuclear reactors worldwide, and recent reactor shutdowns have highlighted the current risk to the global supply of this important isotope.
See also an article in the Canadian newspaper, The Globe and Mail.

The Linear-No-Threshold Model

Another topic addressed recently in this blog is the risk of low levels of radiation, discussed in Chapter 16 of Intermediate Physics for Medicine and Biology.
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear-nonthreshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
On February 21, medicalphysicsweb.org published an article asking “Does LNT model overestimate cancer risk?” Science writer Jude Dineley reports
An in vitro study has demonstrated that DNA repair mechanisms respond more effectively when exposed to low doses of ionizing radiation, compared to high doses. The observations potentially contradict the benchmark for radiation-induced cancer risk estimation, the linear-no-threshold (LNT) model, and if so, could have large implications for cancer risk estimation (PNAS 109 443).
The Proceedings of the National Academy of Sciences paper that Dineley cites is titled “Evidence for Formation of DNA Repair Centers and Dose-Response Nonlinearity in Human Cells,” and is written by a team of researchers at the Lawrence Berkeley National Laboratory. The abstract is given below.
The concept of DNA 'repair centers' and the meaning of radiation-induced foci (RIF) in human cells have remained controversial. RIFs are characterized by the local recruitment of DNA damage sensing proteins such as p53 binding protein (53BP1). Here, we provide strong evidence for the existence of repair centers. We used live imaging and mathematical fitting of RIF kinetics to show that RIF induction rate increases with increasing radiation dose, whereas the rate at which RIFs disappear decreases. We show that multiple DNA double-strand breaks (DSBs) 1 to 2 μm apart can rapidly cluster into repair centers. Correcting mathematically for the dose dependence of induction/resolution rates, we observe an absolute RIF yield that is surprisingly much smaller at higher doses: 15 RIF/Gy after 2 Gy exposure compared to approximately 64 RIF/Gy after 0.1 Gy. Cumulative RIF counts from time lapse of 53BP1-GFP in human breast cells confirmed these results. The standard model currently in use applies a linear scale, extrapolating cancer risk from high doses to low doses of ionizing radiation. However, our discovery of DSB clustering over such large distances casts considerable doubts on the general assumption that risk to ionizing radiation is proportional to dose, and instead provides a mechanism that could more accurately address risk dose dependency of ionizing radiation.
PNAS published an editorial by Lynn Hlatky, titled “Double-Strand Break Motions Shift Radiation Risk Notions,” accompanying the article. Also, see the Lawrence Berkeley lab press release.

See Russ Hobbie Demonstrate MacDose

Finally, my coauthor Russ Hobbie is now on iTunes! His video “Photon Interactions: A Simulation Study with MacDose” can be downloaded free from iTunes, and provides much insight into how radiation interacts with tissue. The description on iTunes states
This 26-minute video uses a computer simulation to demonstrate how x-ray photons interact in the body through coherent scattering, the photoelectric effect, Compton scattering, and pair production. It emphasizes the statistical nature of photon attenuation and energy absorption. The viewer should be able to distinguish between the quantities energy transferred, energy absorbed, Kerma, and absorbed dose, describe the effect of secondary photons on energy transferred and absorbed dose, and understand the effect of photons of different energy when used for radiation therapy.
You can also find Russ’s video on YouTube, included below.

Russell Hobbie Demonstrates MacDose, Part 1

Russell Hobbie Demonstrates MacDose, Part 2

Russell Hobbie Demonstrates MacDose, Part 3