Friday, July 9, 2010

Paris

I just returned from a vacation in Paris, where my wife and I celebrated our 25th wedding anniversary. Russ Hobbie was there at the same time, although conflicting schedules did not allow us to get together. My daughter Katherine posted the blog entries for the last two weeks, when I had limited computer access. Thanks, Kathy.

Although most of our time was spent doing the usual tourist activities (for example, the Arc de Triomphe, the Notre Dame Cathedral, Versailles, and, my favorite, a dinner cruise down the Seine), I did keep my eye open for those aspects of France that might be of interest to readers of the 4th edition of Intermediate Physics for Medicine and Biology. We visited the Pantheon, where we saw the tomb of Marie Curie (a unit of nuclear decay activity, the curie, was named after her and is discussed on page 489 of Intermediate Physics for Medicine and Biology). Marie Curie lies next to her husband Pierre Curie (of the Curie temperature, page 216). Also in the Pantheon is Jean Perrin, who determined Avogadro’s number (see the footnote on page 85) and Paul Langevin, of the Langevin equation (page 87). Hanging from the top of the dome is a Foucault pendulum, in the exact place where Leon Foucault publicly demonstrated the rotation of the earth in 1851. I like it when physics takes center stage like that.

Another scientific site we visited is a museum honoring Louis Pasteur at the Pasteur Institute. Pasteur chose to be buried in his home (now the museum) rather than in the Pantheon. Readers of Intermediate Physics for Medicine and Biology will find him to be an excellent example of a researcher who bridges the physical and biological sciences. His first job was as a professor of physics, although he would probably be considered more of a chemist than a physicist. His early work was on chiral molecules and how they rotated light. He later became famous for his research on the spontaneous generation of life and a vaccine for rabies. In his book Adding A Dimension, Isaac Asimov lists Pasteur as one of the ten greatest scientists of all time. The museum is enjoyable, although it is not as accessible to English speakers as some of the larger museums such as the Louvre and the delightful Musee d’Orsay. Because I speak no French, I had a difficult time following many of the Pasteur exhibits. Also at the museum was a nice display about microbiologist Jacques Monod, whom I will discuss in a future entry to this blog.

A Short History of Chemistry, by Isaac Asimov.
The only other French scientist on Asimov’s top-ten list was the chemist Antoine Lavoisier. Oddly, the French don’t seem to celebrate Lavoisier’s accomplishments as much as you might expect. (Beware, my conclusion is based on a brief 2-week vacation, and I may have missed something.) Perhaps his death by the guillotine during the French Revolution has something to do with it. We visited the Place de la Concorde, where Lavoisier was beheaded. In A Short History of Chemistry, Asimov writes
In 1794, then, this man [Lavoisier], one of the greatest chemists who ever lived, was needlessly and uselessly killed in the prime of life. “It required only a moment to sever that head, and perhaps a century will not be sufficient to produce another like it,” said Joseph Lagrange, the great mathematician. Lavoisier is universally remembered today as “the father of modern chemistry.”
I normally associate Leonardo da Vinci with Italy, but when touring the Chateau at Amboise in the Loire Valley, we stumbled unexpectedly upon his grave. He spent the last three years of his life in France. We toured an excellent museum dedicated to da Vinci, containing life-size reconstructions of some of his engineering inventions. Although da Vinci had many interests and may be best known for his paintings (yes, I saw the Mona Lisa while at the Louvre), at least some of his work might be called biomedical engineering, such as his work on an underwater breathing apparatus and on human flight.

Seventy-two famous French scientists and mathematicians are listed on the Eiffel Tower, including Laplace (of the Laplacian, page 91), Ampere (of Ampere’s law, page 206, and the unit of current, page 145), Navier (of the Navier-Stokes equation, page 27), Legendre (of Legendre polynomials, page 184), Becquerel (of the unit of activity, page 489), Fresnel (of the Fresnel zone for diffraction, page 352), Coulomb (of the unit of charge and Coulomb’s law, both on page 137), Poisson (of Poisson’s ratio, page 27; the Poisson-Boltzmann equation, page 230; and the Poisson probability distribution, page 572), Clapeyron (of the Clausius-Clapeyron relation, page 78), and Fourier (of the Fourier series, page 290). I could not see all these names because the tower was partially covered for painting. Note that Lavoisier was included on the Eiffel Tower, but Poiseuille (of Poiseuille flow, page 17) was not. The view from the top of the tower is spectacular.

I admit, I am not the best of travelers and am glad to be home in Michigan. But I believe there is much in France that readers of Intermediate Physics for Medicine and Biology will find interesting.

Friday, July 2, 2010

Reynolds Number

The Reynolds number is a key concept for anyone interested in biofluid dynamics. Russ Hobbie and I discuss the Reynolds number in Section 1.18 (Turbulent Flow and the Reynolds Number) of the 4th edition of Intermediate Physics for Medicine and Biology.
The importance of turbulent (nonlaminar) flow is determined by a dimensionless number characteristic of the system called the Reynolds number NR. It is defined by

NR = L V ρ/η

where L is a length characteristic of the problem, V a velocity characteristic of the problem, ρ the density, and η the viscosity of the fluid. When NR is greater than a few thousand, turbulence usually occurs…

When NR is large, inertial effects are important. External forces accelerate the fluid. This happens when the mass is large and the viscosity is small. As the viscosity increases (for fixed L, V, and ρ) the Reynolds number decreases. When the Reynolds number is small, viscous effects are important. The fluid is not accelerated, and external forces that cause the flow are balanced by viscous forces… The low-Reynolds-number regime is so different from our everyday experience that the effects often seem counterintuitive.
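To get a feel for the numbers, here is a minimal sketch in Python (my own, not from the book) that evaluates the Reynolds number for blood flow in the aorta and in a capillary; the density, viscosity, sizes, and speeds are rough order-of-magnitude assumptions chosen only for illustration.

# Rough estimates of the Reynolds number N_R = L*V*rho/eta for blood flow.
# All parameter values are order-of-magnitude assumptions, not values from the book.

def reynolds_number(L, V, rho, eta):
    """L: characteristic length (m), V: velocity (m/s),
    rho: density (kg/m^3), eta: viscosity (Pa s)."""
    return L * V * rho / eta

rho_blood = 1060.0   # kg/m^3
eta_blood = 3.5e-3   # Pa s (whole blood, approximate)

# Aorta: diameter about 2 cm, peak velocity about 0.5 m/s
print("aorta:     N_R =", reynolds_number(0.02, 0.5, rho_blood, eta_blood))

# Capillary: diameter about 8 micrometers, velocity about 1 mm/s
print("capillary: N_R =", reynolds_number(8e-6, 1e-3, rho_blood, eta_blood))

With these rough numbers the aorta comes out in the thousands, near the turbulence threshold quoted above, while the capillary comes out around 0.002, deep in the viscous, low-Reynolds-number regime.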
Steven Vogel, in his fascinating book Life in Moving Fluids, describes the importance of the Reynolds number more elegantly.
The peculiarly powerful Reynolds number [is] the centerpiece of biological (and even nonbiological) fluid mechanics. The utility of the Reynolds number extends far beyond mere problems of drag; it’s the nearest thing we have to a completely general guide to what’s likely to happen when solid and fluid move with respect to each other. For a biologist, dealing with systems that span an enormous size range, the Reynolds number is the central scaling parameter that makes order of a diverse set of physical phenomena. It plays a role comparable to that of the surface-to-volume ratio in physiology.
The Reynolds number is named after the British engineer Osborne Reynolds (1842–1912). He developed the Reynolds number as a simple way to understand the transition from laminar to turbulent flow of fluids in a pipe. Perhaps it is fitting to let Reynolds have the last word. Below he describes experiments in which he added a filament of dye to the fluid (as quoted by Vogel in Life in Moving Fluids).
When the velocities were sufficiently low, the streak of colour extended in a beautiful straight line across the tube. If the water in the tank had not quite settled to rest, at sufficiently low velocities, the streak would shift about the tube, but there was no appearance of sinuosity. As the velocity was increased by small stages, at some point in the tube, always at a considerable distance from the trumpet or intake, the colour band would all at once mix up with the surrounding water. Any increase in the velocity caused the point of break-down to approach the trumpet, but with no velocities that were tried did it reach this. On viewing the tube by the light of an electric spark, the mass of colour resolved itself into a mass of more or less distinct curls showing eddies.

Friday, June 25, 2010

Adolf Fick

Russ Hobbie and I discuss Fick’s laws of diffusion in Chapter 4 of the 4th edition of Intermediate Physics for Medicine and Biology. The German scientist Adolf Fick (1829–1901) was a classic example of a researcher who was comfortable in both physics and physiology. He enrolled at the University of Marburg with the goal of studying mathematics and physics, but eventually switched to medicine, and earned an MD in 1852. Of particular interest to me is that he wrote a classic textbook titled Medical Physics (1856), which was one of the first books on this topic. I have not read this book, which almost certainly is written in German (although I am half German through my father’s side, I cannot speak or read the language). Nevertheless, I wonder if Intermediate Physics for Medicine and Biology might be a descendant of this text.

Fick was only 26 when he proposed his two laws of diffusion. The first law (Eq. 4.18a in our book)—similar to Ohm’s law for electrical current or Fourier’s law for heat conduction—states that the diffusive flux is proportional to the concentration gradient. The constant of proportionality is the diffusion constant, which Fick first introduced. Fick’s second law (Eq. 4.24) arises by combining his first law with the equation of continuity (Eq. 4.2) and is what we generally refer to as the diffusion equation. He tested his two laws by measuring the diffusion of salt in water. He even noticed the strong temperature dependence of the diffusion constant.
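For readers who like to see the second law in action, here is a minimal numerical sketch (mine, not from the book) that evolves a concentration profile using an explicit finite-difference form of the diffusion equation; the diffusion constant, grid spacing, and time step are arbitrary illustrative values.

import numpy as np

# Fick's second law, dC/dt = D d^2C/dx^2, solved with an explicit
# finite-difference scheme.  All numerical values are illustrative assumptions.

D = 1.0e-9            # diffusion constant (m^2/s), a typical small-molecule magnitude
dx = 1.0e-5           # grid spacing (m)
dt = 0.2 * dx**2 / D  # time step chosen to satisfy the stability limit dt < dx^2/(2D)

C = np.zeros(101)
C[50] = 1.0           # start with all the solute in the central bin

for step in range(1000):
    # the change in C is D times the discrete second derivative (Fick's second law)
    C[1:-1] += dt * D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2

print("total solute remaining:", C.sum())
print("peak concentration:", C.max())

The initial spike spreads into the familiar bell-shaped profile, with a width that grows as the square root of time.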

Fick contributed to physiology and medicine in several ways. He made the first successful contact lens, and he developed a method to measure cardiac output based on oxygen consumption and blood oxygen concentration. You can find more information about his life at http://www.corrosion-doctors.org/Biographies/FickBio.htm.

Friday, June 18, 2010

Myopia

Section 14.12 in the 4th edition of Intermediate Physics for Medicine and Biology discusses the physics of the eye. One topic related to vision that I have always found fascinating is myopia.
In nearsightedness or myopia, parallel rays come to a focus in front of the retina. The eye is slightly too long for the shape of the cornea… The total converging power of the eye is too great, and the relaxed eye focuses at some closer distance, from which the rays are diverging. Accommodation can only increase the converging power of the eye, not decrease it, so the unassisted myopic eye cannot focus on distant objects. Myopia can be corrected by placing a diverging spectacle or contact lens in front of the eye, so that incoming parallel rays are diverging when they strike the cornea.
The interesting thing about myopia is that, in contrast to far-sightedness (hypermetropia), you cannot correct it by accommodation. Before the invention of eyeglasses in the late Middle Ages, if you were born with myopia then distant objects would always be a blur.
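As a small illustrative calculation (my own, not from the book): if the relaxed myopic eye can focus only on objects out to some far point, then a thin diverging lens whose focal length is minus the far-point distance makes parallel rays from distant objects appear to come from that far point. The far-point distances below are assumptions chosen for illustration, and the lens-to-eye distance is ignored.

# Corrective lens power for a myopic eye (thin-lens sketch; illustrative numbers).
# If the far point of the relaxed eye is at distance d, a diverging lens with
# focal length f = -d placed near the eye images distant objects at the far point.

def corrective_power(far_point_m):
    """Return the required lens power in diopters (1/f, with f in meters)."""
    return -1.0 / far_point_m

for d in [0.25, 0.5, 1.0, 2.0]:   # assumed far points in meters
    print(f"far point {d:4.2f} m  ->  lens power {corrective_power(d):5.1f} diopters")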

Mornings on Horseback, by David McCullough.
When teaching Biological Physics (PHY 325) at Oakland University, I often end my discussion of myopia with a quote from David McCullough’s wonderful biography of Theodore Roosevelt, Mornings on Horseback. Roosevelt suffered from myopia and didn’t get his first glasses until he was a teenager. McCullough tells the story:
Then, at a stroke, the summer of 1872, he was given a gun and a large pair of spectacles and nothing had prepared him for the shock, for the infinite difference in his entire perception of the world about him or his place in it.

The gun was a gift from Papa—a 12-gauge, double-barreled French-made (Lefaucheux) shotgun with a lot of kick and of such simple, rugged design that it could be hammered open with a brick if need be, the ideal gun for an awkward, frequently absent-minded thirteen-year-old. But blasting away with it in the woods near Dobbs Ferry he found he had trouble hitting anything. More puzzling, his friends were constantly shooting at things he never even saw. This and the fact that they could also read words on billboards that he could barely see, he reported to his father, and it was thus, at summer’s end, that the spectacles were obtained.

They transformed everything. They “literally opened an entirely new world to me,” he wrote years afterward, remembering the moment. His range of vision until then had been about thirty feet, perhaps less. Everything beyond was a blur. Yet neither he nor the family had sensed how handicapped he was. “I had no idea how beautiful the world was… I could not see, and yet was wholly ignorant that I was not seeing.”

How such a condition could possibly have gone undetected for so long is something of a mystery, but once discovered it did much to explain his awkwardness and the characteristic detached look he had, those large blue eyes “not looking at anything present.”
I am a lover of history, and a big fan of David McCullough. A couple of his books with a scientific or engineering bent are The Path Between the Seas: The Creation of the Panama Canal and The Great Bridge: The Epic Story of the Building of the Brooklyn Bridge. His purely historical books, such as 1776 and John Adams, are also excellent.

To learn more, see the information about myopia on the website for the American Optometric Association. A modern option for correcting myopia that was not available in Roosevelt’s time is laser surgery to reshape the cornea.

Friday, June 11, 2010

The Gibbs Paradox

Last week in this blog, I wrote that the “Gibbs Paradox” deserved an entire entry of its own. Well, here it is. Russ Hobbie and I mention the Gibbs Paradox in a footnote in Section 3.18 (The Chemical Potential of a Solution) in the 4th edition of Intermediate Physics for Medicine and Biology. When calculating the entropy of mixing (where a solute and solvent are intermixed), we derived an expression for the number of ways N particles can be distributed among N sites. If we assume the solute particles are indistinguishable, there is only one way. The footnote then reads
The fact that there is only one microstate because of the indistinguishability of the particles is called the Gibbs paradox. For an illuminating discussion of the Gibbs paradox, see Casper and Freier (1973).
Fundamentals of Statistical and Thermal Physics, by Frederick Reif.
The Gibbs Paradox is examined in more detail by Frederick Reif in his landmark textbook Fundamentals of Statistical and Thermal Physics. (Indeed, Chapter 3 of Intermediate Physics for Medicine and Biology follows a statistical approach similar to Reif’s analysis, and even more similar to the discussion in his introductory textbook—a personal favorite of mine—Statistical Physics, Berkeley Physics Course Volume 5). Reif considers “a gas consisting of N identical monatomic molecules of mass m enclosed in a container of volume V.” When he calculates the entropy, S, of the gas, he obtains
S = N k [ln V + 3/2 ln T + σ]     (7.2.16)
where k is Boltzmann’s constant, T is the absolute temperature, and σ is a constant independent of N, T, and V. He then ends the section with the provocative statement “This expression for the entropy is, however, not correct,” which leads to his discussion (Sec. 7.3) of the Gibbs paradox. Reif continues
The challenging statement at the end of the last section suggests that the expression (7.2.16) for the entropy merits some discussion… [The expression] for S is clearly wrong since it implies that the entropy does not behave properly as an extensive quantity. Quite generally, one must require that all thermodynamic relations remain valid if the size of the whole system under consideration is simply increased by a scale factor α, i.e., if all its extensive parameters are multiplied by the same factor α. In our case, if the independent extensive parameters V and N are multiplied by α, the mean energy… is indeed properly increased by this same factor, but the entropy S in (7.2.16) is not increased by α because of the term N ln V.

Indeed, (7.2.16) asserts that the entropy S of a fixed volume V of gas is simply proportional to the number N of molecules. But this dependence on N is not correct, as can readily be seen in the following way. Imagine that a partition is introduced which divides the container into two parts. This is a reversible process which does not affect the distribution of systems over accessible states. Thus, the total entropy ought to be the same with, or without, the partition in place; i.e.

S = S' + S"     (7.3.1)

where S' and S" are the entropies of the two parts. But the expression (7.2.16) does not yield the simple additivity required by (7.3.1). This is easily verified. Suppose, for example, that the partition divides the gas into two equal parts, each containing N' molecules of gas in a volume V'. Then the entropy of each part is given by (7.2.16) as

S' = S" = N' k [ln V' + 3/2 ln T + σ]

while the entropy of the whole gas without partition is by (7.2.16)

S = 2 N' k [ ln (2 V') + 3/2 ln T + σ] .

Hence

S – 2 S' = 2 N' k ln(2 V') – 2 N' k ln V' = 2 N' k ln 2

and is not equal to zero as required by (7.3.1).

This paradox was first discussed by Gibbs and is commonly referred to as the “Gibbs paradox.” Something is obviously wrong in our discussion; the question is what.
Reif then analyzes in more detail the implications of removing the partition between the two sides of the box. He finds that
The act of removing the partition has thus very definite physical consequences. Whereas before removal of the partition a molecule of each subsystem could only be found within a volume V', after the partition is removed it can be located anywhere within the volume V = 2 V'. If the two subsystems consisted of different gases, the act of removing the partition would lead to diffusion of the molecules throughout the whole volume 2V' and consequent random mixing of the different molecules. This is clearly an irreversible process; simply replacing the partition would not unmix the gases. In this case the increase in entropy in (7.3.2) would make sense as being simply a measure of the irreversible increase of disorder resulting from the mixing of unlike gases [the entropy of mixing that Russ and I calculated].

But if the gases in the subsystems are identical, such an increase of entropy does not make physical sense. The root of the difficulty embodied in the Gibbs paradox is that we treated the gas molecules as individually distinguishable, as though interchanging the positions of two like molecules would lead to a physically distinct state of the gas. This is not so. Indeed, if we treated the gas by quantum mechanics (as we shall do in Chapter 9), the molecules would, as a matter of principle, have to be regarded as completely indistinguishable. A calculation of the partition function would then automatically yield the correct result, and the Gibbs paradox would never arise. Our mistake has been to take the classical point of view too seriously. Even though one may be in a temperature and density range where the motion of molecules can be treated to a very good approximation by classical mechanics, one cannot go so far as to disregard the essential indistinguishability of the molecules.
In a sidenote, Reif adds
Just how different must molecules be before they should be considered distinguishable?… In a classical view of nature two molecules could, of course, differ by infinitesimal amounts… In a quantum description this troublesome question does not arise because of the quantized discreteness of nature… Hence the distinction between identical and nonidentical molecules is completely unambiguous in a quantum-mechanical description. The Gibbs paradox thus foreshadowed already in the last [19th] century conceptual difficulties that were resolved satisfactorily only by the advent of quantum mechanics.
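To make Reif's bookkeeping concrete, here is a small numerical check (mine, not Reif's) that evaluates the classical entropy expression with and without the correction for indistinguishability (dividing the number of states by N!, i.e., subtracting k ln N!) and asks whether doubling N and V doubles S. The particular values of N, V, T, and the constant σ do not matter for the test and are chosen arbitrarily.

import math

k = 1.380649e-23      # Boltzmann's constant (J/K)
sigma = 1.0           # the constant in Eq. (7.2.16); its value does not affect the test
T = 300.0             # temperature (K), arbitrary

def S_classical(N, V):
    # Reif's Eq. (7.2.16): S = N k [ln V + (3/2) ln T + sigma]
    return N * k * (math.log(V) + 1.5 * math.log(T) + sigma)

def S_corrected(N, V):
    # Treat the molecules as indistinguishable: subtract k ln N!
    # (using Stirling's approximation, ln N! ~ N ln N - N)
    return S_classical(N, V) - k * (N * math.log(N) - N)

N, V = 1.0e22, 1.0e-3     # about 10^22 molecules in one liter (illustrative values)

for label, S in [("classical", S_classical), ("corrected", S_corrected)]:
    mismatch = S(2 * N, 2 * V) - 2 * S(N, V)
    print(f"{label}: S(2N,2V) - 2 S(N,V) = {mismatch:.3e} J/K")
print("for comparison, 2 N k ln 2 =", f"{2 * N * k * math.log(2):.3e}", "J/K")

The uncorrected expression misses extensivity by exactly the 2 N' k ln 2 that Reif derives, while the corrected expression is extensive.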
Several good American Journal of Physics articles discuss the Gibbs paradox. Pesic examines Gibbs’s own writings to trace his thoughts on the issue (“The Principle of Identicality and the Foundations of Quantum Theory: I. The Gibbs Paradox,” American Journal of Physics, Volume 59, Pages 971–974, 1991), and Landsberg and Tranah study in more detail the role of the Gibbs paradox in quantum mechanics (“The Gibbs Paradox and Quantum Gases,” American Journal of Physics, Volume 46, Pages 228–230, 1978). Finally, Casper and Freier (the authors of the paper cited in our footnote) analyze the Gibbs paradox by comparing macroscopic and microscopic points of view (“‘Gibbs Paradox’ Paradox,” American Journal of Physics, Volume 41, Pages 509–511, 1973).

You know, there is a lot of physics in that little footnote on page 68 of Intermediate Physics for Medicine and Biology.

Friday, June 4, 2010

The Gibbs Phenomenon

In chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Fourier analysis, a fascinating but very mathematical subject. One of the most surprising results of Fourier analysis is the Gibbs phenomenon, which we describe at the end of Sec. 11.5 (Fourier Series for a Periodic Function).
Table 11.4 shows the first few coefficients for the Fourier series representing the square wave, obtained from Eq. 11.34… Figure 11.16 shows the fits for n = 3 and n = 39. As the number of terms in the fit is increased, the value of Q [measuring the least squares fit between the function and its Fourier series] decreases. However, spikes of constant height (about 18% of the amplitude of the square wave or 9% of the discontinuity in y) remain. These are seen in Fig. 11.16. These spikes appear whenever there is a discontinuity in y and are called the Gibbs phenomenon.
You have to be amazed by the Gibbs phenomenon. Think about it: as you add terms in the sum, the fit between the function and its Fourier series gets better and better, but the overshoot in amplitude does not get any smaller. Instead, the region containing ringing near the discontinuity gets narrower and narrower. If you want to see a figure like our Fig. 11.16 presented as a neat animation, take a look at http://www.sosmath.com/fourier/fourier3/gibbs.html. Also, check out http://ocw.mit.edu/ans7870/18/18.06/javademo/Gibbs/ for an interactive demo that will let you include up to 200 terms in the Fourier series.
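If you would rather compute the overshoot than watch it, here is a short script (mine, not from the book) that sums the Fourier series of a square wave of amplitude 1 (so the discontinuity is 2) and reports the maximum overshoot; the point is that the overshoot stays near 9% of the jump no matter how many terms are kept, even as the ringing squeezes closer to the discontinuity.

import numpy as np

# Partial Fourier sums of a square wave of amplitude 1 (jump of 2 at x = 0):
#   f(x) = (4/pi) * sum over odd n of sin(n x)/n
# The maximum overshoot stays near 9% of the jump regardless of the number of terms.

x = np.linspace(1e-4, np.pi / 2, 200000)   # sample just to the right of the jump

for n_max in [39, 399]:
    s = np.zeros_like(x)
    for n in range(1, n_max + 1, 2):       # odd harmonics only
        s += (4.0 / np.pi) * np.sin(n * x) / n
    overshoot = s.max() - 1.0
    print(f"harmonics up to n = {n_max}: overshoot = {overshoot:.4f} "
          f"({100 * overshoot / 2:.1f}% of the jump)")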

The Gibbs phenomenon is important in medical imaging. The entry for the Gibbs phenomenon from the Encyclopedia of Medical Imaging is reproduced below.
Gibbs phenomenon, (J. Willard Gibbs, 1839-1903, American physicist), phenomenon occurring whenever a “curve” with sharp edges is subject to Fourier analysis. The Gibbs phenomenon is relevant in MR imaging, where it is responsible for so-called Gibbs artefacts. Consider a signal intensity profile across the skull, where at the edge of the brain the signal intensity changes from virtually zero to a finite value. In MR imaging the measurement process is a breakdown of such intensity profiles into their Fourier harmonics, i.e., sine and cosine functions. Representation of the profiles measured with a limited number of Fourier harmonics is imperfect, resulting in high frequency oscillations at the edges, and the image can therefore exhibit some noticeable signal intensity variations at intensity boundaries: the Gibbs phenomenon, overshoot artefacts, or “ringing.” The artefacts can be suppressed by filtering the images. However, filtering can in turn reduce spatial resolution.
Figures 12.24 and 12.25 of our book show a CT scan with ringing inside the skull and its removal by filtering, an example of the Gibbs phenomenon.

Josiah Willard Gibbs was a leading American physicist from the 19th century. He is particularly well known for his contributions to thermodynamics. Gibbs appears at several places in Intermediate Physics for Medicine and Biology. Section 3.17 discusses the Gibbs free energy, a quantity that provides a simple way to keep track of the changes in total entropy when a system is in contact with a reservoir at constant temperature and pressure. A footnote on page 68 addresses the Gibbs paradox (which deserves an entire blog entry of its own), and Problem 47 in Chapter 3 introduces the Gibbs factor (similar to the Boltzmann factor but including the chemical potential).

Selected Papers of Great American Physicists.
The preface to Gibbs’ book on statistical mechanics is reproduced in Selected Papers of Great American Physicists: The Bicentennial Commemorative Volume of the American Physical Society 1976, edited by Spencer Weart. I recall being quite impressed by this book when in graduate school at Vanderbilt University. Below is a quote from Weart’s biographical notes about Gibbs.
Gibbs, son of a Yale professor of sacred literature, descended from a long line of New England college graduates. He studied at Yale, received his Ph.D. there in 1863—one of the first doctorates granted in the United States—tutored Latin and natural philosophy there, and then left for three decisive years in Europe. Up to that time, Gibbs had shown interest in both mathematics and engineering, which he combined in his dissertation “On the Form of the Teeth in Wheels in Spur Gearing.” The lectures he attended in Paris, Berlin and Heidelberg, given by some of the greatest men of the day, changed him once and for all. In 1871, two years after his return from Europe, he became Yale’s first Professor of Mathematical Physics. He had not yet published any papers on this subject. For nine years he held the position without pay, living on the comfortable inheritance his father had left; only when Johns Hopkins University offered Gibbs a post did Yale give him a small salary.

Gibbs never married. He lived out a calm and uneventful life in the house where he grew up, which he shared with his sisters. He was a gentle and considerate man, well-liked by those who knew him, but he tended to avoid society and was little known even in New Haven. Nor was he known to more than a few of the world’s scientists—partly because his writings were extremely compact, abstract and difficult. As one of Gibbs’s European colleagues wrote, “Having once condensed a truth into a concise and very general formula, he would not think of churning out the endless succession of specific cases that were implied by the general proposition; his intelligence, like his character, was of a retiring disposition.” The Europeans paid for their failure to read Gibbs: A large part of the work they did in thermodynamics before the turn of the century could have been found already in his published work.

Friday, May 28, 2010

Happy Birthday Laser!

Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, by Eisberg and Resnick.
This month marks the 50th anniversary of the invention of the laser. In May 1960, Theodore Maiman built the first device to produce coherent light by the mechanism of “Light Amplification by Stimulated Emission of Radiation” at Hughes Research Laboratories in Malibu, making the laser just slightly older than I am. A special website, called laserfest, is commemorating this landmark event. Eisberg and Resnick discuss lasers in Section 11.7 of their textbook Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (quoted from the first edition, 1974).
In the solid state laser that operates with a ruby crystal, some Al atoms in the Al2O3 molecules are replaced by Cr atoms. These “impurity” chromium atoms account for the laser action… The level of energy E1 is the ground state and the level of energy E3 is the unstable upper state with a short lifetime (≈10^−8 sec), the energy difference E3 − E1 corresponding to a wavelength of about 5500 Å. Level E2 is an intermediate excited state which is metastable, its lifetime against spontaneous decay being about 3 × 10^−3 sec. If the chromium atoms are in thermal equilibrium, the population number of the states are such that [n3 is less than n2 is less than n1]. By pumping in radiation of wavelength 5500 Å, however, we stimulate absorption of incoming photons by Cr atoms in the ground state, thereby raising the population number of energy state E3 and depleting energy state E1 of occupants. Spontaneous emission, bringing atoms from state 3 to state 2, then enhances the occupancy of state 2, which is relatively long-lived. The result of this optical pumping is to decrease n1 and increase n2, such that n2 is greater than n1 and population inversion exists. Now, when an atom does make a transition from state 2 to state 1, the emitted photon of wavelength 6943 Å will stimulate further transitions. Stimulated emission will dominate stimulated absorption (because n2 is greater than n1) and the output of photons of wavelength 6943 Å is much enhanced. We obtain an intensified coherent monochromatic beam.
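Here is a crude rate-equation sketch of that three-level scheme (my own simplification with illustrative numbers, not Eisberg and Resnick's calculation). Because level 3 is so short-lived compared with the metastable level 2, I assume each pumped atom drops immediately into level 2 and simply ask whether pumping produces a population inversion.

# Crude two-equation sketch of the three-level ruby scheme described above.
# Level 3 decays so fast (~1e-8 s) compared with the metastable level 2 (~3e-3 s)
# that every pumped atom is assumed to land in level 2 immediately.
# The pump rate W is an arbitrary assumed value.

tau21 = 3.0e-3     # s, lifetime of the metastable level 2
W = 1000.0         # 1/s, pumping rate out of the ground state (assumed)

n1, n2 = 1.0, 0.0  # fractional populations of levels 1 and 2
dt = 1.0e-5        # time step (s)

for step in range(2000):          # integrate for 20 ms, many times tau21
    dn1 = -W * n1 + n2 / tau21    # atoms pumped out of level 1, fed back by decay
    dn2 = W * n1 - n2 / tau21
    n1 += dt * dn1
    n2 += dt * dn2

print(f"n1 = {n1:.3f}, n2 = {n2:.3f}")
print("population inversion (n2 > n1)?", n2 > n1)

In this toy model inversion requires roughly W times tau21 greater than one; with the assumed numbers the populations settle near n1 = 0.25 and n2 = 0.75.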
Lasers are an important tool in biology and medicine. Russ Hobbie and I discuss their applications in Chapter 14 (Atoms and Light) of the 4th edition of Intermediate Physics for Medicine and Biology. In Section 14.5 (The Diffusion Approximation to Photon Transport) we write
A technique made possible by ultrashort light pulses from a laser is time-dependent diffusion. It allows determination of both μs and μa [the scattering and absorption attenuation coefficients]. A very short (150-ps) pulse of light strikes a small region on the surface of the tissue. A detector placed on the surface about 4 cm away records the multiply-scattered photons… A related technique is to apply a continuous laser beam, the amplitude for which is modulated at various frequencies between 50 and 800 MHz. The Fourier transform of Eq. 14.29 gives the change in amplitude and phase of the detected signal. Their variation with frequency can also be used to determine μa and μs.
We also mention lasers in Section 14.10 (Heating Tissue with Light).
Sometimes tissue is irradiated in order to heat it; in other cases tissue heating is an undesired side effect of irradiation. In either case, we need to understand how the temperature changes result from the irradiation. Examples of intentional heating are hyperthermia (heating of tissue as a part of cancer therapy) or laser surgery (tissue ablation). Tissue is ablated when sufficient energy is deposited to vaporize the tissue.
Russ and I give many references about lasers in medicine in our Resource Letter (“Resource Letter MP-2: Medical Physics,” American Journal of Physics, Volume 77, Pages 967–978, 2009):
F. Lasers and optics

Lasers have introduced many medical applications of light, from infrared to the visible spectrum to ultraviolet.

150. Lasers in Medicine, edited by R. W. Waynant (CRC, Boca Raton, 2002). (I)

151. Laser-Tissue Interactions: Fundamentals and Applications, M. H. Niemz (Springer, Berlin, 2007). (I)

152. “Lasers in medicine,” Q. Peng, A. Juzeniene, J. Chen, L. O. Svaasand, T. Warloe, K.-E. Giercksky, and J. Moan, Rep. Prog. Phys. 71, Article 056701, 28 pages (2008). (A)

A fascinating and fast-growing new technique to image biological tissue is optical coherence tomography (OCT). It uses reflections like ultrasound but detects the reflected rays using interferometry.

153. Optical Coherence Tomography, M. E. Brezinski (Elsevier, Amsterdam, 2006). Overview of the physics of OCT and applications to cardiovascular medicine, musculoskeletal disease, and oncology. (I)

154. “Optical coherence tomography: Principles and applications,” A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, Rep. Prog. Phys. 66, 239–303 (2003). (I)

With infrared light, scattering dominates over absorption. In this case, light diffuses through the tissue. Optical imaging in turbid media is difficult but not impossible.

155. “Recent advances in diffuse optical imaging,” A. P. Gibson, J. C. Hebden, and S. R. Arridge, Phys. Med. Biol. 50, R1–R43 (2005). (I)

156. “Pulse oximetry,” R. C. N. McMorrow and M. G. Mythen, Current Opinion in Critical Care 12, 269–271 (2006). The pulse oximeter measures the oxygenation of blood and is based on the diffusion of infrared light. (I)

One impetus for medical applications of light has been the development of new light sources, such as free-electron lasers and synchrotrons. In both cases, the light frequency is tunable over a wide range.

157. “Free-electron-laser-based biophysical and biomedical instrumentation,” G. S. Edwards, R. H. Austin, F. E. Carroll, M. L. Copeland, M. E. Couprie, W. E. Gabella, R. F. Haglund, B. A. Hooper, M. S. Hutson, E. D. Jansen, K. M. Joos, D. P. Kiehart, I. Lindau, J. Miao, H. S. Pratisto, J. H. Shen, Y. Tokutake, A. F. G. van der Meer, and A. Xie, Rev. Sci. Instrum. 74, 3207–3245 (2003). (I)

158. “Medical applications of synchrotron radiation,” P. Suortti and W. Thomlinson, Phys. Med. Biol. 48, R1– R35 (2003). (I)

Finally, photodynamic therapy uses light-activated drugs to treat diseases.

159. “The physics, biophysics and technology of photodynamic therapy,” B. C. Wilson and M. S. Patterson, Phys. Med. Biol. 53, R61–R109 (2008). (A)
Happy birthday, laser!

Friday, May 21, 2010

Kalin Lucas and his ruptured Achilles tendon

Basketball fans may recall that in this year's NCAA tournament, Michigan State University (located about a 90-minute drive from where I work here at Oakland University in Rochester, Michigan) made it into the final four before losing to Butler. They might have won the entire tournament had they not lost their star, Kalin Lucas, who ruptured his left Achilles tendon in a second-round game against the University of Maryland. Coach Tom Izzo, who is much beloved here in southeast Michigan, managed two more wins without Lucas before losing in the semifinals.

Why did Lucas injure his Achilles tendon? There was no collision or accident; he just landed awkwardly. Readers of the 4th edition of Intermediate Physics for Medicine and Biology won’t be too surprised when they hear about sports injuries to the Achilles tendon. In Section 1.5 of our book, Russ Hobbie and I analyze the forces on the Achilles tendon and show that the tension in this tendon can be nearly twice the body weight.
The Achilles tendon connects the calf muscles (the gastrocnemius and soleus) to the calcaneus at the back of the heel (Fig. 1.9). To calculate the force exerted by this tendon on the calcaneus when a person is standing on the ball of one foot, assume that the entire foot can be regarded as a rigid body. This is our first example of creating a model of the actual situation. We try to simplify the real situation to make the calculation possible while keeping the features that are important to what is happening. In this model the internal forces within the foot are being ignored.

Figure 1.10 shows the force exerted by the tendon on the foot (FT), the force of the leg bones (tibia and fibula) on the foot (FB), and the force of the floor upward, which is equal to the weight of the body (W)...
We then go on to solve the equations of translational and rotational equilibrium to find that FT = 1.8 W and FB = 2.8 W, and conclude
The tension in the Achilles tendon is nearly twice the person’s weight, while the force exerted on the leg by the talus is nearly three times the body weight. One can understand why the tendon might rupture.
Many sports injuries result from the laws of biomechanics. The problem is often that a tendon must exert a large force in order to create enough torque to maintain rotational equilibrium. For the Achilles tendon, the moment arm between the joint and the tendon is roughly half the moment arm between the joint and the ball of the foot, so the force on the tendon must be about twice the weight. Our bodies are often built this way: large forces are required to make up for small moment arms. I sometimes give a problem on one of my Biological Physics (PHY 325) exams that illustrates this by calculating the forces on the shoulder of a gymnast performing an “iron cross” on the rings. Here again, the torque exerted by the rings on the arm is huge because of the large moment arm (essentially the entire length of the arm itself), while the moment arm of the pectoral muscle is small because it connects to the humerus (the arm bone) only about 5 cm from the shoulder, at a small angle. The problem suggests that the pectoral muscle must supply a force of over twenty times the body weight! No wonder I was so poor at the rings in my high school physical education class. Readers interested in learning more about this topic might want to read Williams and Lissner's classic textbook Biomechanics of Human Motion. Russ and I cite the 1962 first edition, but I believe that the book has evolved into Biomechanics of Human Motion: Basics and Beyond for the Health Professions, by Barney LeVeau, due out later this year.
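Readers who want to reproduce the numbers can set up the two equilibrium conditions directly. Here is a small sketch (mine, not from the book), with the weight and moment arms assumed to be in roughly the proportions described above rather than the exact distances in the book’s figure.

# Static equilibrium of the foot while standing on the ball of one foot.
# Taking torques about the ankle joint: the Achilles tendon pulls up with FT
# behind the joint, the floor pushes up with the body weight W at the ball of
# the foot, and the leg bones push down on the foot with FB at the joint.
# The weight and moment arms below are assumed illustrative values.

W = 700.0   # body weight in newtons (roughly a 70-kg person)
a = 0.05    # m, moment arm from the ankle joint to the Achilles tendon (assumed)
b = 0.09    # m, moment arm from the ankle joint to the ball of the foot (assumed)

FT = W * b / a   # rotational equilibrium: FT * a = W * b
FB = FT + W      # translational equilibrium: the downward FB balances FT plus W

print(f"tendon tension FT = {FT:.0f} N = {FT / W:.1f} times the body weight")
print(f"bone force     FB = {FB:.0f} N = {FB / W:.1f} times the body weight")

With these assumed distances the tendon tension is 1.8 times the body weight and the bone force is 2.8 times, the same ratios Russ and I obtain in Section 1.5.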

Hopefully these insights into biomechanics can help you appreciate how Kalin Lucas could suffer a season-ending injury so easily. I’m glad MSU was able to make it to the final four even without Lucas. However, to be honest, the Spartans were only my 4th favorite team in the tournament this year. Oakland University participated in March Madness for only the second time in the school’s history. Vanderbilt University (where I went for graduate school) also reached the Big Dance, and many predicted that the University of Kansas (where I attended college) would win the entire event. Unfortunately, all three schools lost in the first weekend of play. Russ didn’t fare much better, as the University of Minnesota lost in the first round. Congratulations to Duke University (home to an excellent Biomedical Engineering Department) for their ultimate victory.

Friday, May 14, 2010

Single-Pool Exponential Decomposition Models: Potential Pitfalls in their Use in Ecology Studies

In Section 11.2 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss fitting data using nonlinear least squares. Our first example in this section is a fit using a single exponential decay, y(x) = a e^(–bx), where a and b are to be determined. We suggest that the reader
“take logarithms of each side of the equation

log y = log a – b x log e

or

v = a' – b' x.

This can be fit by the linear [least squares] equation, determining constants a' and b' using Eqs. 11.5.”
This method works fine for ideal data, but in almost any real application the data will be corrupted by noise. In that case, fitting a linear equation to the logarithm of the data may not be wise. I discussed this issue last year in the May 22 entry to this blog, but wish to explore it in more detail this week.

Russ recently coauthored a paper on this topic, published in the journal Ecology (Volume 91, Pages 1225–1236). In collaboration with his daughter Sarah Hobbie (Associate Professor in the Department of Ecology, Evolution and Behavior at the University of Minnesota), and her former postdoc E. Carol Adair (currently with the National Center for Ecological Analysis and Synthesis at the University of California Santa Barbara), Russ examined “Single-Pool Exponential Decomposition Models: Potential Pitfalls in their Use in Ecology Studies.” The abstract to the paper is given below.
The importance of litter decomposition to carbon and nutrient cycling has motivated substantial research. Commonly, researchers fit a single-pool negative exponential model to data to estimate a decomposition rate (k). We review recent decomposition research, use data simulations, and analyze real data to show that this practice has several potential pitfalls. Specifically, two common decisions regarding model form (how to model initial mass) and data transformation (log-transformed vs. untransformed data) can lead to erroneous estimates of k. Allowing initial mass to differ from its true, measured value resulted in substantial over- or underestimation of k. Log-transforming data to estimate k using linear regression led to inaccurate estimates unless errors were lognormally distributed, while nonlinear regression of untransformed data accurately estimated k regardless of error structure. Therefore, we recommend fixing initial mass at the measured value and estimating k with nonlinear regression (untransformed data) unless errors are demonstrably lognormal. If data are log-transformed for linear regression, zero values should be treated as missing data; replacing zero values with an arbitrarily small value yielded poor k estimates. These recommendations will lead to more accurate k estimates and allow cross-study comparison of k values, increasing understanding of this important ecosystem process.
The authors performed a massive review of the literature, reading and analyzing nearly 500 papers about litter decomposition, most of which fit data to an exponential decay, e^(–kt). The bottom line is that doing a linear least squares fit to the logarithm of the data can cause significant errors. Much better is to use a nonlinear least squares fit. The manuscript concludes “We suggest that careful selection of fitting methods, as we have described above, will lead to more accurate and comparable k estimates, thereby increasing our understanding of this important ecosystem process.” Of course, my favorite thing about their paper is that it cites the 4th edition of Intermediate Physics for Medicine and Biology!

One pitfall can be illustrated by considering measurements of the voltage across a resistor in an RC circuit. The voltage decays with an RC time constant. However, thermal (or Johnson) noise is also present (see Section 9.8). Once the voltage decays to less than the Johnson noise, the measured voltage fluctuates between positive and negative values. If you take the logarithm of the voltage, the negative values are undefined. In other words, you can’t do a linear least squares fit to the logarithm of the data if the data can be negative. The problem remains even when the data is nonnegative (as in Hobbie’s paper) if the data can be zero. However, if you make a nonlinear least squares fit of the data itself (rather than the logarithm of the data), the problem vanishes.
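A quick numerical experiment (a sketch of my own, using numpy and scipy) makes the point. Generate an exponential decay with additive Gaussian noise, then compare a linear least squares fit to the logarithm of the data with a nonlinear fit of the data itself; the true parameters and noise level are arbitrary choices for illustration.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Simulated data: y = a * exp(-b*x) plus additive Gaussian noise (illustrative values).
a_true, b_true = 1.0, 0.5
x = np.linspace(0.0, 10.0, 50)
y = a_true * np.exp(-b_true * x) + rng.normal(0.0, 0.05, x.size)

# Method 1: linear least squares on log(y).  Points with y <= 0 must be thrown
# away because their logarithm is undefined, which already biases the fit.
keep = y > 0
slope, intercept = np.polyfit(x[keep], np.log(y[keep]), 1)
print("log-linear fit:  a =", np.exp(intercept), " b =", -slope)

# Method 2: nonlinear least squares on the untransformed data.
popt, _ = curve_fit(lambda x, a, b: a * np.exp(-b * x), x, y, p0=(1.0, 1.0))
print("nonlinear fit:   a =", popt[0], " b =", popt[1])
print("true values:     a =", a_true, " b =", b_true)

With additive noise of this kind the nonlinear fit recovers a and b far more reliably, which is consistent with the recommendation in the Ecology paper.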

In order to explain these observations in Intermediate Physics for Medicine and Biology, Russ and I (mainly Russ) wrote an Addendum available at the book’s website. It lists what changes are needed to properly explain least squares fitting of exponential data. Enjoy!

Friday, May 7, 2010

Hysteresis and Bistability in the Direct Transition from 1:1 to 2:1 Rhythm in Periodically Driven Single Ventricular Cells

When preparing the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I added two homework problems (Problems 37 and 38) in Chapter 10 (Feedback and Control) about “cardiac restitution.” These problems contain a fascinating and elegantly simple example of restitution that provides insight into nonlinear dynamics and chaos. Problem 37 begins
Problem 37 The onset of ventricular fibrillation in the heart can be understood in part as a property of cardiac restitution. The action potential duration (APD) depends on the previous diastolic interval (DI): the time from the end of the last action potential until the start of the next one. The relationship between APD and DI is called the restitution curve. In cardiac muscle, a typical restitution curve has the form

APDi+1 = 300 (1 – exp(–DIi/100))

where all times are given in ms. Suppose we apply to the heart a series of stimuli, with period (or cycle length) CL. Since APD + DI = CL, we have DIi+1 = CL – APDi+1.
The problem then goes on to have the reader do some numerical calculations using various cycle lengths and initial diastolic intervals. Depending on the parameters, you can get (a) a simple 1:1 response between stimulation and action potential, (b) a 2:2 response in which every stimulus triggers an action potential but the APD alternates between long and short, a behavior called “alternans,” (c) a 2:1 response where an action potential is triggered by every second stimulus, with the tissue being refractory and not responding to the other stimuli, and (d) chaos. I have found this model is an excellent way to introduce students to chaotic behavior; even students with a weak mathematics background can understand it. When discussing this mathematical model with students, I often hand out a particularly clear paper to serve as background reading: J. N. Weiss, A. Garfinkel, H. S. Karagueuzian, Z. Qu, and P.-S. Chen (1999) “Chaos and the Transition to Ventricular Fibrillation: A New Approach to Antiarrhythmic Drug Evaluation,” Circulation, Volume 99, Pages 2819–2826.
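For readers who want to experiment, here is a minimal script (mine, not part of the book's problems) that iterates the restitution map for several cycle lengths. One common convention, which I adopt here, is that if the computed diastolic interval comes out negative the tissue is still refractory, that stimulus is skipped, and another full cycle length is added to the DI.

import math

# Iterate the restitution map of Problem 37:
#   APD_{i+1} = 300 * (1 - exp(-DI_i / 100)),   DI_{i+1} = CL - APD_{i+1}
# If DI would be negative, the stimulus falls during the refractory period and
# is skipped, so another cycle length is added (one common convention, assumed here).

def run(CL, DI0, n_beats=40):
    DI = DI0
    apds = []
    for _ in range(n_beats):
        APD = 300.0 * (1.0 - math.exp(-DI / 100.0))
        DI = CL - APD
        while DI < 0:          # skipped stimuli (e.g., a 2:1 response)
            DI += CL
        apds.append(round(APD, 1))
    return apds

for CL in [400, 300, 250, 150]:   # illustrative cycle lengths in ms
    print(f"CL = {CL} ms: last few APDs =", run(CL, DI0=100)[-6:])

Watching how the sequence of APDs settles down (or fails to) shows how the steady-state pattern changes as the cycle length is shortened.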

Problem 38 explores how to understand this behavior by analyzing the slope of the restitution curve. If the slope is too steep, the behavior becomes more complex. Part (d) of Problem 38 says
Suppose you apply a drug to the heart that can change the restitution curve to

APDi+1 = 300 (1 – b exp(–DIi/100)).

Plot APD as a function of DI for b = 0, 0.5, and 1. What value of b ensures that the slope of the restitution curve is always less than 1? Garfinkel et al. (2000) have suggested that one way to prevent ventricular fibrillation is to use drugs to flatten the restitution curve.
There is yet another type of behavior that is not discussed in Problems 37 or 38: a bistable response. Below is a new homework problem that discusses bistable behavior.
Problem 38 ½ Use the restitution curve from Problem 38, with b = 1/3 and CL = 250, to analyze the response of the system with initial diastolic intervals of 50, 60, 70, 80, and 90. You should find that the qualitative behavior depends on the initial condition. Which values of the initial diastolic interval give a 1:1 response, and which give 2:1? Determine the initial value of the DI, to three significant figures, for which the system makes a transition from one behavior to the other. When two qualitatively different behaviors can both occur, depending on the initial conditions, the system is “bistable.” To learn more about such behavior, see Yehia et al. (1999).
The full citation to the paper mentioned at the end of the problem is
Yehia, A. R., D. Jeandupeux, F. Alonso, and M. R. Guevara (1999) “Hysteresis and Bistability in the Direct Transition From 1:1 to 2:1 Rhythm in Periodically Driven Single Ventricular Cells,” Chaos, Volume 9, Pages 916–931.
The senior author on this article is Michael Guevara, of the Centre for Applied Mathematics in Bioscience and Medicine at McGill University. The introductory paragraph of their paper is reproduced below.
The majority of cells in the heart are not spontaneously active. Instead, these cells are excitable, being driven into activity by periodic stimulation originating in a specialized pacemaker region of the heart containing spontaneously active cells. This pacemaker region normally imposes a 1:1 rhythm on the intrinsically quiescent cells. However, the 1:1 response can be lost when the excitability of the paced cells is decreased, when there are problems in the conduction of electrical activity from cell to cell, or when the heart rate is raised. When 1:1 synchronization is lost in the intact heart, one of a variety of abnormal cardiac arrhythmias can arise. In single quiescent cells isolated from ventricular muscle, 1:1 rhythm can be replaced by a N+1:N rhythm (N≥2), a period-doubled 2:2 rhythm, or a 2:1 rhythm. We investigate below the direct transition from 1:1 to 2:1 rhythm in experiments on single cells and in numerical simulations of an ionic model of a single cell formulated as a nonlinear system of differential equations. We show that there is hysteresis associated with this transition in both model and experiment, and develop a theory for the bistability underlying this hysteresis that involves the coexistence of two stable fixed-points on a two-branched one-dimensional map.
For those interested in exploring the application of nonlinear dynamics to biology and medicine in more detail, two books Russ and I cite in Intermediate Physics for Medicine and Biology—and which I recommend highly—are From Clocks to Chaos by Leon Glass and Michael Mackey (both also at McGill) and Nonlinear Dynamics and Chaos by Steven Strogatz.