Friday, October 24, 2014

A Log-Log Plot of the Blackbody Spectrum

Section 14.7 (Thermal Radiation) of the 4th edition of Intermediate Physics for Medicine and Biology contains one of my favorite illustrations: Figure 14.24, which compares the blackbody spectrum as a function of wavelength λ and as a function of frequency ν. One interesting feature of the blackbody spectrum is that its peak (the wavelength or frequency at which the most thermal radiation is emitted) differs depending on whether you plot it as a function of wavelength (Wλ(λ,T), in units of W m⁻³) or of frequency (Wν(ν,T), in units of W s m⁻²). The units make more sense if we express the units of Wλ as W m⁻² per m, and the units of Wν as W m⁻² per Hz.

A few weeks ago I discussed the book The First Steps in Seeing, in which the blackbody spectrum was plotted using a log-log scale. This got me to thinking, “I wonder how Fig. 14.24 would look if all axes were logarithmic?” The answer is shown below.

Figure 14.24 from Intermediate Physics for Medicine and Biology, but plotted using a log-log scale.
The caption for Fig. 14.24 is “The transformation from Wλ(λ,T) to Wν(ν,T) is such that the same amount of power per unit area is emitted in wavelength interval (λ, dλ) and the corresponding frequency interval (ν, dν). The spectrum shown is for a blackbody at 3200 K.” I corrected the wrong temperature T in the caption as printed in the 4th edition.

The bottom right panel of the above figure is a plot of Wλ versus λ. For this temperature the spectrum peaks just a bit below λ = 1 μm. At longer wavelengths, it falls off approximately as λ⁻⁴ (shown as the dashed line, known as the Rayleigh-Jeans approximation). At short wavelengths, the spectrum falls off abruptly and exponentially.

The top left panel contains a plot of Wν versus ν. The spectrum peaks at a frequency of about 190 THz. At low frequencies it increases approximately as ν² (again, the Rayleigh-Jeans approximation). At high frequencies the spectrum falls dramatically and exponentially.

The connection between these two plots is illustrated in the upper right panel, which plots the relationship ν = c/λ. This equation has nothing to do with blackbody radiation, but merely shows a general relationship between frequency, wavelength, and the speed of light for electromagnetic radiation.

Why is it useful to show these functions in a log-log plot? First, it reinforces the concepts Russ Hobbie and I introduced in Chapter 2 of IPMB (Exponential Growth and Decay). In a log-log plot, power laws appear as straight lines. Thus, in the book’s version of Fig. 14.24 the equation ν = c/λ is a hyperbola, but in the log-log version this is a straight line with a slope of negative one. Furthermore, the Rayleigh-Jeans approximation implies a power-law relationship, which is nicely illustrated on a log-log plot by the dashed line. In the book’s version of the figure, Wλ falls off at both large and small wavelengths, and at first glance the rates of fall off look similar. You don’t really see the difference until you look at very small values of Wλ, which are difficult to see in a linear plot but are apparent in a logarithmic plot. The falloff at short wavelengths is very abrupt while the decay at long wavelengths is gradual. This difference is even more striking in the book’s plot of Wν. The curve doesn’t even go all the way to zero frequency in Fig. 14.24, making its limiting behavior difficult to judge. The log-log plot clearly shows that at low frequencies Wν rises as ν².
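
If you want to reproduce these plots yourself, here is a short Python script (my own sketch, not from IPMB; the Planck formulas and physical constants are standard, but everything else is just one way to do the job):

    import numpy as np
    import matplotlib.pyplot as plt

    # physical constants (SI units)
    h = 6.626e-34    # Planck's constant (J s)
    c = 2.998e8      # speed of light (m/s)
    kB = 1.381e-23   # Boltzmann's constant (J/K)
    T = 3200.0       # blackbody temperature (K)

    lam = np.logspace(-7, -4, 500)   # wavelength, 0.1 to 100 micrometers (m)
    nu = np.logspace(12, 16, 500)    # frequency, 1 THz to 10 PHz (Hz)

    # blackbody spectra: W m^-2 per m, and W m^-2 per Hz
    W_lam = 2*np.pi*h*c**2/lam**5/(np.exp(h*c/(lam*kB*T)) - 1)
    W_nu = 2*np.pi*h*nu**3/c**2/(np.exp(h*nu/(kB*T)) - 1)

    # sanity check: either spectrum should integrate to sigma T^4, about 6.0 MW/m^2
    print((W_lam[:-1]*np.diff(lam)).sum(), (W_nu[:-1]*np.diff(nu)).sum(), 5.67e-8*T**4)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.loglog(lam, W_lam)
    ax1.loglog(lam, 2*np.pi*c*kB*T/lam**4, '--')    # Rayleigh-Jeans: falls as lambda^-4
    ax1.set_xlabel('wavelength (m)')
    ax1.set_ylabel('W_lambda (W m^-3)')
    ax2.loglog(nu, W_nu)
    ax2.loglog(nu, 2*np.pi*nu**2*kB*T/c**2, '--')   # Rayleigh-Jeans: rises as nu^2
    ax2.set_xlabel('frequency (Hz)')
    ax2.set_ylabel('W_nu (W s m^-2)')
    plt.tight_layout()
    plt.show()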

Both the book’s version and the log-log version illustrate how the two functions peak at different regions of the electromagnetic spectrum, but for this point the book’s linear plot may be clearer. Another advantage of the linear plot is that I have an easier time estimating the area under the curve, which is important for determining the total power emitted by the blackbody and the Stefan-Boltzmann law. Perhaps there is some clever way to estimate areas under a curve on a log-log plot, but it seems to me the log plot exaggerates the area under the low-frequency section of the curve and understates the area under the high-frequency section (just as on a map the Mercator projection magnifies the area of Greenland and Antarctica). If you want to understand how these functions behave completely, look at both the linear and log plots.

Yet another way to plot these functions would be on a semilog plot. The advantage of semilog is that an exponential falloff shows up as a straight line. I will leave that plot as an exercise for the reader.

For those who want to learn about the derivation and history of the blackbody spectrum, I recommend Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (although any good modern physics book should discuss this topic). A less mathematical but very intuitive description of why Wλ and Wν peak at different parts of the spectrum is given in The Optics of Life. For a plot of photon number (rather than energy radiated) as a function of λ or ν, see The First Steps in Seeing.

Friday, October 17, 2014

A Theoretical Model of Magneto-Acoustic Imaging of Bioelectric Currents

Twenty years ago, I became interested in magneto-acoustic imaging, primarily influenced by the work of Bruce Towe that was called to my attention by my dissertation advisor and collaborator John Wikswo. (See, for example, Towe and Islam, “A Magneto-Acoustic Method for the Noninvasive Measurement of Bioelectric Currents,” IEEE Trans. Biomed. Eng., Volume 35, Pages 892–894, 1988). The result was a paper by Wikswo, Peter Basser, and myself (“A Theoretical Model of Magneto-Acoustic Imaging of Bioelectric Currents,” IEEE Trans. Biomed. Eng., Volume 41, Pages 723–728, 1994). This was my first foray into biomechanics, a subject that has become increasingly interesting to me, to the point where now it is the primary focus of my research (but that’s another story; for a preview look here).

A Treatise on the Mathematical Theory of Elasticity, by A. E. H. Love.
I started learning about biomechanics mainly through my friend Peter Basser. We both worked at the National Institutes of Health in the early 1990s. Peter used continuum models in his research a lot, and owned a number of books on the subject. He also loved to travel, and would often use his leftover use-or-lose vacation days at the end of the year to take trips to exotic places like Kathmandu. When he was out of town on these adventures, he left me access to his personal library, and I spent many hours in his office reading classic references like Schlichting’s Boundary Layer Theory and Love’s A Treatise on the Mathematical Theory of Elasticity. Peter and I also would read each other’s papers, and I learned much continuum mechanics from his work. (NIH had a rule that someone had to sign a form saying they read and approved a paper before it could be submitted for publication, so I would give my papers to Peter to read and he would give his to me.) In this way, I became familiar enough with biomechanics to analyze magneto-acoustic imaging. Interestingly, we published our paper in the same year Basser began publishing his research on MRI diffusion tensor imaging, for which he is now famous (see here).

As with much of my research, our paper on magneto-acoustic imaging addressed a simple “toy model”: an electric dipole in the center of an elastic, conducting sphere exposed to a uniform magnetic field. We were able to calculate the tissue displacement and pressure analytically for the cases of a magnetic field parallel and perpendicular to the dipole. One of my favorite results in the paper was that we found a close relationship between magneto-acoustic imaging and biomagnetism.
“Magneto-acoustic pressure recordings and biomagnetic measurements image action currents in an equivalent way: they both have curl J [the curl of the current density] as their source.”
For about ten years, our paper had little impact. A few people cited it, including Amalric Montalibet and later Han Wen, who each developed a method to use ultrasound and the Lorentz force to generate electrical current in tissue. I’ve described this work before in a review article about the role of magnetic forces in medicine and biology, which I have mentioned before in this blog. Then, in 2005 Bin He began citing our work in a long list of papers about magnetoacoustic tomography with magnetic induction, which again I've written about previously. His work generated so much interest in our paper that in 2010 alone it was cited 19 times according to Google Scholar. Of course, it is gratifying to see your work have an impact.

But the story continues with a more recent study by Pol Grasland-Mongrain of INSERM in France. Building on Montalibet’s work, Grasland-Mongrain uses an ultrasonic pulse and the Lorentz force to induce a voltage that he can detect with electrodes. The resulting electrical data can then be analyzed to determine the distribution of electrical conductivity (see Ammari, Grasland-Mongrain, et al. for one way to do this mathematically). In many ways, their technique is in competition with Bin He’s MAT-MI as a method to image conductivity.

Grasland-Mongrain also publishes his own blog about medical imaging. (Warning: the website is in French, and I have to rely on Google Translate to read it. It is my experience that Google has a hard time translating technical writing.) There he discusses his most recent paper about imaging shear waves using the Lorentz force. Interestingly, shear waves in tissue are one of the topics Russ Hobbie and I added to the 5th edition of Intermediate Physics for Medicine and Biology, due out next year. Grasland-Mongrain’s work has been highlighted in Physics World and Physics Focus, and a paper about it appeared this year in Physical Review Letters, the most prestigious of all physics journals (and one I’ve never published in, to my chagrin).

I am amazed by what can happen in twenty years.


As a postscript, let me add a plug for toy models. Russ and I use a lot of toy models in IPMB. Even though such simple models have their limitations, I believe they provide tremendous insight into physical phenomena. I recently reviewed a paper in which the authors had developed a very sophisticated and complex model of a phenomenon, but examination of a toy model would have told them that the signal they calculated was far, far too small to be observable. Do the toy model first. Then, once you have the insight, make your model more complex.

Friday, October 10, 2014

John H. Hubbell

In the references at the end of Chapter 15 (Interaction of Photons and Charged Particles with Matter) in the 4th edition of Intermediate Physics for Medicine and Biology, you will find a string of publications authored by John H. Hubbell (1925–2007), covering a 27-year period from 1969 until 1996. Data from his publications are plotted in Fig. 15.2 (Total cross section for the interactions of photons with carbon vs. photon energy), Fig. 15.3 (Cross sections for the photoelectric effect and incoherent and coherent scattering from lead), Fig. 15.8 (The coherent and incoherent differential cross sections as a function of angle for 100-keV photons scattering from carbon, calcium, and lead), Fig. 15.14 (Fluorescence yields from K-, L-, and M-shell vacancies as a function of atomic number Z), and Fig. 15.16 (Coherent and incoherent attenuation coefficients and the mass energy absorption coefficient for water).

Hubbell’s 1982 paper “Photon Mass Attenuation and Energy-Absorption Coefficients from 1 keV to 20 MeV” (International Journal of Applied Radiation and Isotopes, Volume 33, Pages 1269–1290) has been cited 976 times according to the Web of Science. It has been cited so many times that it was selected as a citation classic, and Hubbell was invited to write a one-page reminiscence about the paper. It began modestly
Some papers become highly cited due to the creativity, genius, and vision of the authors, presenting seminal work stimulating and opening up new and multiplicative lines of research. Another, more pedestrian class of papers is “house-by-the-side-of-the-road” works, highly cited simply because these papers provide tools required by a substantial number of researchers in a single discipline or perhaps in several diverse disciplines, as is here the case.
At the time of his death, the International Radiation Physics Society Bulletin published the following obituary
The International Radiation Physics Society (IRPS) lost one of its major founding members, and the field of radiation physics one of its advocates and contributors of greatest impact, with the death this spring of John Hubbell.

John was born in Michigan in 1925, served in Europe in World War II [he received a bronze star], and graduated from the University of Michigan with a BSE (physics) in 1949 and MS (physics) in 1950. He then joined the National Bureau of Standards (NBS), later NIST, where he worked during his entire career. He married Jean Norford in 1955, and they had three children. He became best known and cited for National Standards Reference Data Series Report 29 (1969), “Photon Cross Sections, Attenuation Coefficients, and Energy Absorption Coefficients from 10 keV to 100 GeV.” He was one of the two leading founding members of the International Radiation Physics Society in 1985, and he served as its President 1994–97. While he retired from NIST in 1988, he remained active there and in the affairs of IRPS, until the stroke that led to his death this year.
Learn more about John Hubbell here and here.

Friday, October 3, 2014

Update on the 5th edition of IPMB

A few weeks ago, Russ Hobbie and I submitted the 5th edition of Intermediate Physics for Medicine and Biology to our publisher. We are not done yet; page proofs should arrive in a few months. The publisher is predicting a March publication date. I suppose whether we meet that target will depend on how fast Russ and I can edit the page proofs, but I am nearly certain that the 5th edition will be available for fall 2015 classes (for summer 2015, I am hopeful but not so sure). In the meantime, use the 4th edition of IPMB.

What is in store for the 5th edition? No new chapters; the table of contents will look similar to the 4th edition. But there are hundreds—no, thousands—of small changes, additions, improvements, and upgrades. We’ve included many new up-to-date references, and lots of new homework problems. Regular readers of this blog may see some familiar additions, which were introduced here first. We tried to cut as well as add material to keep the book the same length. We won’t know for sure until we see the page proofs, but we think we did a good job keeping the size about constant.

We found several errors in the 4th edition when preparing the 5th. This week I updated the errata for the 4th edition, to include these mistakes. You can find the errata at the book website, https://sites.google.com/view/hobbieroth. I won’t list here the many small typos we uncovered, and all the misspellings of names are just too embarrassing to mention in this blog. You can see the errata for those. But let me provide some important corrections that readers will want to know about, especially if using the book for a class this fall or next winter (here in Michigan we call the January–April semester "winter"; those in warmer climates often call it spring).
  • Page 78: In Problem 61, we dropped a key minus sign: “90 mV” should be “−90 mV”. This was correct in the 3rd edition (Hobbie), but somehow the error crept into the 4th (Hobbie and Roth). I can’t figure out what was different between the 3rd and 4th editions that could cause such mistakes to occur.
  • Page 137: The 4th edition claimed that at a synapse the neurotransmitter crosses the synaptic cleft and “enters the next cell.” Generally a neurotransmitter doesn’t “enter” the downstream cell, but is sensed by a receptor in the membrane that triggers some response.
  • Page 338: I have already told you about the mistake in the Bessel function identity in Problem 10 of Chapter 12. For me, this was THE MOST ANNOYING of all the errors we have found. 
  • Page 355: In Problem 12 about sound and hearing, I used an unrealistic value for the threshold of pain, 10⁻⁴ W m⁻². Table 13.1 had it about right, 1 W m⁻². The value varies between people, and sometimes I see it quoted as high as 10 W m⁻². I suggest we use 1 W m⁻² in the homework problem. Warning: the solution manual (available to instructors who contact Russ or me) is based on the problem as written in the 4th edition, not on what it would be with the corrected value.
  • Page 355: Same page, another screw-up. Problem 16 is supposed to show how during ultrasound imaging a coupling medium between the transducer and the tissue can improve transmission. Unfortunately, in the problem I used a value for the acoustic impedance that is about a factor of a thousand lower than is typical for tissue. I should have used Ztissue = 1.5 × 10⁶ Pa s m⁻¹. This should have been obvious from the very low transmission coefficient that results from the impedance mismatch caused by my error (see the sketch after this list). Somehow, the mistake didn’t sink in until recently. Again, the solution manual is based on the problem as written in the 4th edition.
  • Page 433: Problem 30 in Chapter 15 is messed up. It contains two problems, one about the Thomson scattering cross section, and another (parts a and b) about the fraction of energy due to the photoelectric effect. Really, the second problem should be Problem 31. But making that change would require renumbering all subsequent problems, which would be a nuisance. I suggest calling the second part of Problem 30 as Problem “30 ½.” 
  • Page 523: When discussing a model for the T1 relaxation time in magnetic resonance imaging, we write “At long correlation times T1 is proportional to the Larmor frequency, as can be seen from Eq. 18.34.” Well, a simple inspection of Eq. 18.34 reveals that T1 is proportional to the SQUARE of the Larmor frequency in that limit. This is also obvious from Fig. 18.12, where a change in Larmor frequency of about a factor of three results in a change in T1 of nearly a factor of ten. 
  • Page 535: In Chapter 18 we discuss how the blood flow speed v, the repetition time TR, and the slice thickness Δz give rise to flow effects in MRI. Following Eq. 18.56, we take the limit when v is much greater than “TR/Δz”. The ratio should, of course, be Δz/TR, which has units of speed. I always stress to my students that units are their friends. They can spot errors by analyzing if their equation is dimensionally correct. CHECK IF THE UNITS WORK! Clearly, I didn’t take my own advice in this case.
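
To see how revealing the impedance check in Problem 16 is, here is a little Python sketch (mine, not from the book) of the intensity transmission coefficient at normal incidence, T = 4Z₁Z₂/(Z₁ + Z₂)²; the transducer impedance is a rough value I am assuming for a PZT element:

    # fraction of incident intensity transmitted across an interface, at normal incidence
    def transmission(Z1, Z2):
        return 4*Z1*Z2/(Z1 + Z2)**2

    Z_transducer = 30e6   # Pa s m^-1, rough value for a PZT element (my assumption)
    Z_tissue = 1.5e6      # Pa s m^-1, the corrected value for soft tissue
    Z_wrong = 1.5e3       # the erroneous value, about a factor of 1000 too low

    print(transmission(Z_transducer, Z_tissue))   # about 0.18
    print(transmission(Z_transducer, Z_wrong))    # about 0.0002, the suspiciously low result
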
For all these errors, I humbly apologize. Russ and I redoubled our effort to remove mistakes from the 5th edition, and we will redouble again when the page proofs arrive. In the meantime, if you find still more errors in the 4th edition, please let us know. If the mistake is in the 4th edition, it could well carry over to the 5th edition if we don’t root it out immediately.

Friday, September 26, 2014

The First Steps in Seeing

The First Steps in Seeing, by Robert Rodieck.
Russ Hobbie and I discuss the eye and vision in Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology. But we just barely begin to describe the complexities of how we perceive light. If you want to learn more, read The First Steps in Seeing, by Robert Rodieck. This excellent book explains how the eye works. The preface states
This book is about the eyes—how they capture an image and convert it to neural messages that ultimately result in visual experience. An appreciation of how the eyes work is rooted in diverse areas of science—optics, photochemistry, biochemistry, cellular biology, neurobiology, molecular biology, psychophysics, psychology, and evolutionary biology. This gives the study of vision a rich mixture of breadth and depth.

The findings related to vision from any one of these fields are not difficult to understand in themselves, but in order to be clear and precise, each discipline has developed its own set of words and conceptual relations—in effect its own language—and for those wanting a broad introduction to vision, these separate languages can present more of an impediment to understanding than an aid. Yet what lies beneath the words usually has a beautiful simplicity.

My aim in this book is to describe how we see in a manner understandable to all. I’ve attempted to restrict the number of technical terms, to associate the terms that are used with a picture or icon that visually express what they mean, and to develop conceptual relations according to arrangements of these icons, or by other graphical means. Experimental findings have been recast in the natural world whenever possible, and broad themes attempt to bring together different lines of thought that are usually treated separately.

The main chapters provide a thin thread that can be read without reference to other books. They are followed by some additional topics that explore certain areas in greater depth, and by notes that link the chapters and topics to the broader literature.

My intent is to provide you with a framework for understanding what is known about the first steps in seeing by building upon what you already know.
Rodieck explains things in a quantitative, almost “physicsy” way. For instance, he imagines a person staring at the star Polaris, and estimates the number of photons (5500) arriving at the eye each tenth of a second (approximately the time required for visual perception), then determines their distribution on the retina, finds how many are at each wavelength, and how many per cone cell.

Color vision is analyzed, as are the mechanisms of how rhodopsin responds to a photon, how the photoreceptor produces a polarization of the neurons, how the retina responds with such a large dynamic range (“the range of vision extends from a catch rate of about one photon per photoreceptor per hour to a million per second”), and how eye movements hold an image steady on the retina. There’s even a discussion of photometry, with a table similar to the one I presented last week in this blog. I learned that the unit of retinal illuminance is the troland (td), defined as the luminance (candelas per square meter) times the pupil area (square millimeters).

Like IPMB, Rodieck ends his book with several appendices, including a first one on angles. His appendix on blackbody radiation includes a figure showing the Planck function versus frequency plotted on log-log paper (I’ve always seen it plotted on linear axes, but the log-log plot helps clarify the behavior at very large and small frequencies). The photon emission from the surface of a blackbody as a function of temperature is 1.52 × 10¹⁵ T³ photons per second per square meter (Rodieck does everything in terms of the number of photons). The factor of temperature cubed is not a typo; Stefan’s law contains a T³ rather than T⁴ when written in terms of photon number. A lovely appendix analyzes the Poisson distribution, and another compares frequency and wavelength distributions.
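
That coefficient is easy to check numerically. Here is a short Python sketch (my own, using standard constants) that integrates the Planck spectral photon exitance:

    import numpy as np
    from scipy.integrate import quad

    h = 6.626e-34    # Planck's constant (J s)
    c = 2.998e8      # speed of light (m/s)
    kB = 1.381e-23   # Boltzmann's constant (J/K)

    def photon_exitance(T):
        """Photons emitted per second per square meter by a blackbody at temperature T."""
        # integrate (2 pi nu^2/c^2)/(exp(h nu/kB T) - 1) over frequency,
        # using the substitution x = h nu/(kB T); the integral equals 2 zeta(3)
        integral, _ = quad(lambda x: x**2/(np.exp(x) - 1), 1e-9, 50)
        return 2*np.pi*(kB*T/h)**3/c**2*integral

    print(photon_exitance(1.0))   # the coefficient of T^3: about 1.52e15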

The best feature of The First Steps in Seeing is the illustrations. This is a beautiful book. I suspect Rodieck read Edward Tufte’s The Visual Display of Quantitative Information, because his figures and plots elegantly make his points with little superfluous clutter. I highly recommend this book.

Friday, September 19, 2014

Lumens, Candelas, Lux, and Nits

In Chapter 14 (Atoms and Light) of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss photometry, the measurement of electromagnetic radiation and its ability to produce a human visual sensation. I find photometry interesting mainly because of all the unusual units.

Let’s start by assuming you have a source of light emitting a certain amount of energy per second, or in other words with a certain power in watts. This is called the radiant power or radiant flux, and is a fundamental concept in radiometry. But how do we perceive such a source of light? That is a question in photometry. Our perception will depend on the wavelength of light. If the light is all in the infrared or ultraviolet, we won’t see anything. If in the visible spectrum, our perception depends on the wavelength. In fact, the situation is even more complicated than this, because our perception depends on whether we are using the cones in the retina of our eye to see bright light in color (photopic vision), or the rods to see dim light in black and white (scotopic vision). Moreover, our ability to see varies among individuals. The usual convention is to assume we are using photopic vision, and to say that a source radiating a power of one watt of light at a wavelength of 555 nm (green light, the wavelength to which the eye is most sensitive) has a luminous flux of 683 lumens.

The light source may emit different amounts of light in different directions. In radiometry, the radiant intensity is the power emitted per solid angle, in units of watts per steradian. We can define an analogous photometric unit for the luminous intensity: the lumen per steradian, or the candela. The candela is one of seven “SI base units” (the others are the kilogram, meter, second, ampere, mole, and kelvin). Russ and I mention the candela in Table 14.6, which is a large table that compares radiometric, photometric, and actinometric quantities. We also define it in the text, using the old-fashioned name “candle” rather than candela.

Often you want to know the intensity of light per unit area, or irradiance. In radiometry, irradiance is measured in watts per square meter. In photometry, the illuminance is measured in lumens per square meter, also called the lux.

Finally, the radiance of a surface is the radiant power per solid angle per unit surface area (W sr⁻¹ m⁻²). The analogous photometric quantity is the luminance, which is measured in units of lumen sr⁻¹ m⁻², or candela m⁻², or lux sr⁻¹, or nit. The brightness of a computer display is measured in nits.
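
To make the chain of units concrete, here is a tiny Python example (my own; the 683 lm/W conversion is exact by definition, the photopic efficiency at 532 nm is approximate, and the laser-pointer numbers are invented for illustration):

    # radiometric power -> photometric quantities, for a green laser pointer
    P = 0.005            # radiant power (W), a typical 5 mW pointer
    V = 0.88             # photopic luminous efficiency at 532 nm (approximate)
    flux = 683*V*P       # luminous flux (lumens)
    print(flux)          # about 3 lumens

    omega = 1e-6         # assumed beam solid angle (sr)
    print(flux/omega)    # luminous intensity: about 3 million candelas

    area = 1.0           # suppose the beam spreads over 1 m^2 of wall
    print(flux/area)     # illuminance: about 3 lux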

In summary, below is an abbreviated version of Table 14.6 in IPMB:
Radiometry                    Photometry
Radiant power (W)             Luminous flux (lumen)
Radiant intensity (W sr⁻¹)    Luminous intensity (candela)
Irradiance (W m⁻²)            Illuminance (lux)
Radiance (W sr⁻¹ m⁻²)         Luminance (nit)
Where did the relationship between 1 W and 683 lumens come from? Before electric lights, a candle was a major source of light. A typical candle emits about 1 candela of light. The relationship between the watt and the lumen is somewhat analogous to the relationship between absolute temperature and thermal energy, and the relationship between a mole and the number of molecules. This would put the conversion factor of 683 lumens per watt in the same class as Boltzmann's constant (1.38 × 10⁻²³ J per K) and Avogadro's number (6.02 × 10²³ molecules per mole).

Friday, September 12, 2014

More about the Stopping Power and the Bragg Peak

The Bragg peak is a key concept when studying the interaction of protons with tissue. In Chapter 16 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
Protons are also used to treat tumors. Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak in Fig. 16.51 [see a similar figure here]. Placing an absorber in the proton beam before it strikes the patient moves the Bragg peak closer to the surface. Various techniques, such as rotating a variable-thickness absorber in the beam, are used to shape the field by spreading out the Bragg peak (Fig. 16.52) [see a similar figure here].
Figure 16.52 is very interesting, because it shows a nearly uniform dose throughout a region of tissue produced by a collection of Bragg peaks, each reaching a maximum at a different depth because the protons have different initial energies. The obvious question is: how many protons should one use for each energy to produce a uniform dose in some region of tissue? I have discussed the Bragg peak before in this blog, when I presented a new homework problem to derive an analytical expression for the stopping power as a function of depth. An extension of this problem can be used to answer this question. Russ and I considered including this extended problem in the 5th edition of IPMB (which is nearing completion), but it didn’t make the cut. Discarded scraps from the cutting room floor make good blog material, so I present you, dear reader, with a new homework problem.
Problem 31 3/4 A proton of kinetic energy T is incident on the tissue surface (x = 0). Assume its stopping power s(x) at depth x is given by
    s(x) = C/√(T² − 2Cx)   for x < T²/2C, and s(x) = 0 for x > T²/2C,
where C is a constant characteristic of the tissue.
(a) Plot s(x) versus x. Where does the Bragg peak occur?
(b) Now, suppose you have a distribution of N protons. Let the number with incident energy between T and T+dT be A(T)dT, where
    A(T) = B T/√(T₂² − T²)   for T₁ < T < T₂, and A(T) = 0 otherwise.
Determine the constant B by requiring
    ∫ A(T) dT = N,   where the integral runs from T = T₁ to T = T₂.
Plot A(T) vs T.
(c) If x is greater than T₂²/2C, what is the total stopping power? Hint: think before you calculate; how many particles can reach a depth greater than T₂²/2C?

(d) If x is between T₁²/2C and T₂²/2C, only particles with energy from √(2Cx) to T₂ contribute to the stopping power at x, so

    S(x) = ∫ A(T) C/√(T² − 2Cx) dT,   integrated from T = √(2Cx) to T = T₂.

Evaluate this integral. Hint: let u = T² − (2Cx + T₂²)/2.
(e) If x is less than T₁²/2C, all the particles contribute to the stopping power at x, so

    S(x) = ∫ A(T) C/√(T² − 2Cx) dT,   integrated from T = T₁ to T = T₂.

Evaluate this integral.

(f) Plot S(x) versus x. Compare your plot with that found in part (a), and with Fig. 16.52.
One reason this problem didn’t make the cut is that it is rather difficult. Let me know if you need the solution. The bottom line: this homework problem does a pretty good job of explaining the results in Fig. 16.52, and provides insight into how to apply proton therapy to a large tumor.
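
If you would rather check the answer numerically than wrestle with the integrals, the short Python sketch below (my own; it uses the expressions for s and A given above, with arbitrary parameter values) evaluates S(x) by quadrature. Between T₁²/2C and T₂²/2C it should return a flat plateau of height πBC/2.

    import numpy as np
    from scipy.integrate import quad

    C = 1.0              # tissue constant (arbitrary units)
    T1, T2 = 0.7, 1.0    # minimum and maximum incident kinetic energies
    B = 1.0/np.sqrt(T2**2 - T1**2)   # normalization, for N = 1 proton

    def S(x):
        """Total stopping power at depth x, summed over the energy distribution."""
        if x >= T2**2/(2*C):
            return 0.0                    # no proton penetrates this deep
        lo = max(T1, np.sqrt(2*C*x))      # slowest proton still contributing at x
        f = lambda T: B*T/np.sqrt(T2**2 - T**2)*C/np.sqrt(T**2 - 2*C*x)
        val, _ = quad(f, lo, T2, limit=200)
        return val

    for x in [0.0, 0.1, 0.2, 0.245, 0.3, 0.4, 0.49]:
        print(x, S(x))
    # from T1^2/2C = 0.245 to T2^2/2C = 0.5 the output hovers near
    # pi*B*C/2, about 2.2: the flat, spread-out Bragg peak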

Friday, September 5, 2014

Raymond Damadian and MRI

The 2003 Nobel Prize in Physiology or Medicine was awarded to Paul Lauterbur and Sir Peter Mansfield “for their discoveries concerning magnetic resonance imaging.” In Chapter 18 of the 4th Edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss MRI and the work behind this award. Choosing Nobel Prize winners can be controversial, and in this case some suggest that Raymond Damadian should have shared in the prize. Damadian himself famously took out an ad in the New York Times claiming his share of the credit. Priority disputes are not pretty events, but one can gain some insight into the history of magnetic resonance imaging by studying this one. The online news source Why Files tells the story in detail. The controversy continues even today (see, for instance, the website of Damadian's company FONAR). Unfortunately, Damadian’s religious beliefs have gotten mixed up in the debate.

I think the issue comes down to a technical matter about MRI. If you believe the variation of T1 and T2 time constants among different tissues is the central insight in developing MRI, then Damadian has a valid claim. If you believe the use of magnetic field gradients for encoding spatial location is the key insight in MRI, his claim is weaker than Lauterbur and Mansfield's. Personally, I think the key idea of magnetic resonance imaging is using magnetic field gradients. IPMB states
“Creation of the images requires the application of gradients in the static magnetic field Bz which cause the Larmor frequency to vary with position.”
My understanding of MRI history is that this idea was developed by Lauterbur and Mansfield (and had been proposed even earlier by Hermann Carr).
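
As a toy numerical illustration of that idea (my own sketch; the gyromagnetic ratio is the standard proton value, and the field and gradient values are simply plausible choices):

    # Larmor frequency as a function of position in a field gradient
    gamma_bar = 42.58e6   # proton gyromagnetic ratio over 2 pi (Hz per tesla)
    B0 = 1.5              # static field (T)
    Gz = 0.010            # gradient (T/m), i.e., 10 mT/m

    for z in [-0.10, 0.0, 0.10]:        # position along z (m)
        f = gamma_bar*(B0 + Gz*z)       # Larmor frequency (Hz)
        print(z, f)
    # the frequency shifts by about 43 kHz per 10 cm, so measuring
    # frequency amounts to measuring position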

To learn more, I suggest you read Naked to the Bone, which I discussed previously in this blog. This book discusses both the Damadian controversy, and a similar controversy centered around William Oldendorf and the development of computed tomography (which is mentioned in IPMB).

Friday, August 29, 2014

Student’s T Test

Primer of Biostatistics, by Stanton Glantz.
Probability and statistics is an important topic in medicine. Russ Hobbie and I discuss probability in the 4th edition of Intermediate Physics for Medicine and Biology, but we don’t delve into statistics. Yet, basic statistics is crucial for analyzing biomedical data, such as the results of a clinical trial.

Suppose IPMB did contain statistics. What would that look like? I suspect Russ and I would summarize this topic in an appendix. The logical place seems to be right after Appendix G (The Mean and Standard Deviation). We would probably not want to go into great detail, so we would only consider the simplest case: a “student’s t-test” of two data sets. It would be something like this (but probably less wordy).
Appendix G ½  Student’s T Test

Suppose you divide a dozen patients into two groups. Six patients get a drug meant to lower their blood pressure, and six others receive a placebo. After receiving the drug for a month, their blood pressure is measured. The data is given in Table G ½.1.

Table G ½.1. Systolic Blood Pressure (in mmHg)
Drug Placebo
115   99
  90 106
  99 100
108 119
107   96
  96 104

Is the drug effective in lowering blood pressure? Statisticians typically phrase the question differently: they adopt the null hypothesis that the drug has no effect, and ask if the data justifies the rejection of this hypothesis.

The first step is to calculate the mean, using the methods described in Appendix G. The mean for those receiving the drug is 102.5 mmHg, and the mean for those receiving the placebo is 104.0 mmHg. So, the mean systolic blood pressure was lower with the drug. The crucial question is: could this difference arise merely from chance, or does it represent a real difference? In other words, is it likely that this difference is a coincidence caused by taking too small of a sample?

To answer this question, we need to next calculate the standard deviation σ of each data set. We calculate this using Eq. G.4, except that because we do not know the mean of the data but only estimate it from our sample, we should use the factor N/(N − 1) for the best estimate of the variance, where N = 6 in this example. The standard deviation is then σ = √( Σ (x − xmean)²/(N − 1) ). The calculated standard deviation for the patients who took the drug is 9.1 mmHg, whereas for the patients who took the placebo it is 8.2 mmHg.

The standard deviation describes the spread of the data within the sample, but what we really care about is how accurately we know the mean of the data. The standard deviation of the mean is calculated by dividing the standard deviation by the square root of N. This gives 3.7 for patients taking the drug, and 3.3 for patients taking the placebo.

We are primarily interested in the difference of the means, which is 104.0 – 102.5 = 1.5 mmHg. The standard deviation of the difference of the means can be found by squaring each standard deviation of the mean, adding them, and taking the square root (standard deviations add in quadrature, as in the Pythagorean theorem). You get
√(3.7² + 3.3²) = 5.0 mmHg.

Compare the difference of the means to the standard deviation of the difference of the means by taking their ratio. Following tradition we will call this ratio T, so T = 1.5/5.0 = 0.3. If the drug has a real effect, we would expect the difference of the mean to be much larger than the standard deviation of the difference of the mean, so the absolute value of T should be much greater than 1. On the other hand, if the difference of means is much smaller than the standard deviation of the difference of the means, the result could arise easily from chance and |T| should be much less than 1. Our value is 0.3, which is less than 1, suggesting that we cannot reject the null hypothesis, and that we have not shown that the drug has any effect. 

But can we say more? Can we transform our value of T into a probability that the null hypothesis is true? We can. If the drug truly had no effect, then we could repeat the experiment many times and get a distribution of T values. We would expect the values of T to be centered about T = 0 (remember, T can be positive or negative), with small values much more common than large. We could interpret this as a probability distribution: a bell shaped curve peaked at zero and falling as T becomes large. In fact, although we will not go into the details here, we can determine the probability that |T| is greater than some critical value. By tradition, one usually requires the probability p to be smaller than one twentieth (p less than 0.05) before rejecting the null hypothesis and claiming that the drug does indeed have a real effect. The critical value of T depends on N, and values are tabulated in many places (for example, see here). In our case, the tables suggest that |T| would have to be greater than 2.23 in order to reject the null hypothesis and say that the drug has a true (or, in the technical language, a “significant”) effect.

If taking p less than 0.05 as the cutoff for significance seems arbitrary, then you are right. Nothing magical happens when p reaches 0.05. All it means is that the probability that the difference of the means could have arisen by chance is less than 5%. It is always possible that you were really, really unlucky and that your results arose by chance but |T| just happened to be very large. You have to draw a line somewhere, and the accepted tradition is that p less than 0.05 means that the probability of the results being caused by random chance is small enough to ignore.
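
For readers who want to check this arithmetic, here is a quick Python sketch (mine, not part of the hypothetical appendix; scipy’s ttest_ind performs exactly this unpaired, two-tailed test):

    import numpy as np
    from scipy import stats

    drug = np.array([115, 90, 99, 108, 107, 96])
    placebo = np.array([99, 106, 100, 119, 96, 104])

    print(drug.mean(), placebo.mean())             # 102.5 and 104.0 mmHg
    print(drug.std(ddof=1), placebo.std(ddof=1))   # 9.1 and 8.2 (the N-1 estimate)

    t, p = stats.ttest_ind(drug, placebo)          # unpaired, two-tailed
    print(t, p)   # t is about -0.3 and p is about 0.77: nowhere near significant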

Problem 1 Analyze the following data and determine if X and Y are significantly different. 
  X  Y
  94 122
  93 118
104 119
105 123
115 102
  96 115
Use the table of critical values for the T distribution at
http://en.wikipedia.org/wiki/Student%27s_t-distribution.
I should mention a few more things.

1. Technically, we consider above a two-tailed t-test, so we’re testing if we can reject the null hypothesis that the two means are the same, implying that either the drug had a significant effect of lowering blood pressure or the drug had a significant effect of raising blood pressure. If we wanted to test only if the drug lowered blood pressure, we should use a one-tailed test.

2. We analyzed what is known as an unpaired test. The patients who got the drug are different than the patients who did not. Suppose we gave the drug to the patients in January, let them go without the drug for a while, then gave the same patients the placebo in July (or vice versa). In that case, we have paired data. It may be that patients vary a lot among themselves, but that the drug reduced everyone’s blood pressure by the same fixed percentage, say 12%. There are special ways to generalize the t-test for paired data.

3. It’s easy to generalize these results to the case when the two samples have different numbers N.

4. Please remember, if you found 20 papers in the literature that all observed significant effects with p less than but on the order of 0.05, then on average one of those papers is going to be reporting a spurious result: the effect is reported as significant when in fact it is a statistical artifact. Given that there are thousands (millions?) of papers out there reporting the results of t-tests, there are probably hundreds (hundreds of thousands?) of such spurious results in the literature. The key is to remember what p means, and to not over-interpret or under-interpret your results.

5. Why is this called the “student’s t-test”? The inventor of the test, William Gosset, was a chemist working for Guinness, and he devised the t-test to assess the quality of stout. Guinness would not let its chemists publish, so Gosset published under the pseudonym “student.”

6. The t-test is only one of many statistical methods. As is typical of IPMB, we have just scratched the surface of an exciting and extensive topic. 

7. There are many good books on statistics. One that might be useful for readers of IPMB (focused on biological and medical examples, written in engaging and nontechnical prose) is Primer of Biostatistics, 7th edition, by Stanton Glantz.

Friday, August 22, 2014

Point/Counterpoint: Low-Dose Radiation is Beneficial, Not Harmful

I have discussed the pedagogical virtues of the point/counterpoint articles published by Medical Physics before in this blog (see, for instance, here and here). They are a wonderful resource to augment any medical physics class, and serve as an excellent supplement to the 4th edition of Intermediate Physics for Medicine and Biology. Medical Physics makes all its point/counterpoint articles freely available online (open access). Each article presents a somewhat controversial proposition in the title, and two leading medical physicists then debate the issue, one pro and one con. Each makes an opening statement, and each has a chance to respond to the opponent’s opening statement in a rebuttal.

One example that is closely related to a topic in IPMB is addressed in the July 2014 point/counterpoint article, which debates the proposition that “low-dose radiation is beneficial, not harmful.” Mohan Doss argues for the proposition, and Mark Little argues against it. The issue is central to the “linear no threshold” model of radiation risk that Russ Hobbie and I discuss in Sec. 16.13 (The Risk of Radiation) of IPMB. Mohan Doss leads off with this claim:
When free radical production is increased, e.g., from low-dose radiation (LDR) exposure (or increased physical/mental activity), our body responds with increased defenses consisting of increased antioxidants, DNA repair enzymes, immune system response, etc. referred to as adaptive protection. With enhanced protection, there would be reduced cumulative damage in the long term and reduced diseases. The disease-preventive effects of increased physical/mental activities are well known.
Little responds:
Dr. Doss discusses the well-known involvement of the immune system in cancer, and more generally the role of adaptive response. The critical issue is whether the up-regulation of the immune system or other forms of adaptive response that may result from a radiation dose offsets the undoubted carcinogenic damage that is caused. The available evidence, summarized in my Opening Statement, is that it does not.
Both cite the literature extensively. I find it fascinating that such a basic hypothesis hasn’t, to this day, been resolved. We don’t even know the sign of the effect: is low-dose radiation good or bad for our health? Although I can’t tell you who is right, Doss or Little, I can tell you who wins: the reader. And especially the student, who gets a front-row seat at a cutting-edge scientific debate between two world-class experts.

By the way, point/counterpoint articles aren’t the only articles available free-of-charge at the Medical Physics website. You can get 50th Anniversary Papers [for its 50th anniversary, Medical Physics published several retrospective papers], Vision 20/20 papers [summaries of state-of-the-art developments in medical physics], award papers, special focus papers, and more. And it’s all free.

I love free stuff.

Friday, August 15, 2014

Physics of Phoxhounds

I don’t have any grandchildren yet, but I am fortunate to have a wonderful “granddog.” This weekend, my wife and I are taking care of Auggie, the lovable foxhound that my daughter Kathy rescued from an animal shelter in Lansing, Michigan. Auggie gets along great with our Cocker-Westie mix, “Aunt Suki,” my dog-walking partner who I’ve mentioned often in this blog (here, here, here, and here).

Do dogs and physics mix? Absolutely! If you don’t believe me, then check out the website dogphysics.com. I plan to read “How To Teach Physics To Your Dog” with Auggie and Suki. According to this tee shirt foxhounds are particularly good at physics. Once we finish “How To Teach Physics To Your Dog,” we may move on to “Physics for Dogs: A Crash Course in Catching Cats, Frisbees, and Cars.” Apparently there is even a band that sings about dog physics, but I don’t know what that is all about.

Auggie is a big fan of the 4th edition of Intermediate Physics for Medicine and Biology. His favorite part is Section 7.10 (Electrical Stimulation) because there Russ Hobbie and I discuss the “dog-bone” shaped virtual cathode that arises when you stimulate cardiac tissue using a point electrode. He thinks “Auger electrons,” discussed in Sec. 17.11, are named after him. Auggie’s favorite scientist is Godfrey Hounsfield (Auggie adds a “d” to his name: “Houndsfield”), who earned a Nobel Prize for developing the first clinical computed tomography machine. And his favorite homework problem is Problem 34 in Chapter 2, about the Lotka-Volterra equations governing the population dynamics of rabbits and foxes.

How did Auggie get his name? I’m not sure, because he had the name Auggie when Kathy adopted him. I suspect it comes from an old Hanna-Barbera cartoon about Augie Doggie and Doggie Daddy. When Auggie visits, I get to play doggie [grand]daddy, and say “Augie, my son, my son” in my best Jimmy Durante voice. I’m particularly fond of the Augie doggie theme song. What is Auggie’s favorite movie? Why, The Fox and the Hound, of course.

Me holding Suki.
Our dog Suki has some big news this week. My friend and Oakland University colleague Barb Oakley has a new book out: A Mind for Numbers: How to Excel at Math and Science (Even if You Flunked Algebra). I contributed a small sidebar to the book offering some tips for learning physics, and it includes a picture of me with Suki! Thanks to my friend Yang Xia for taking the picture. Barb is a fascinating character and author of an eclectic collection of books. I suggest Hair of the Dog: Tails from Aboard a Russian Trawler. Her amazon.com author page first gave me the idea of publishing a blog to go along with IPMB. To those of you who are interested in physics applied to medicine and biology but struggle with all the equations in IPMB, I suggest Barb's book or her MOOC Learning How to Learn.

All Creatures Great and Small, by James Herriot.
James Herriot—the author of a series of wonderful books including All Creatures Great and Small, which will warm the heart of any dog-lover—loved beagles, which look similar to foxhounds, but are smaller. If you’re looking for an uplifting and enjoyable book to read on a late-summer vacation (and you have already finished IPMB), try Herriot’s books. But skip the chapters about cats (yuck).

Auggie may not be the brightest puppy in the pack, and he is too timid to be an effective watchdog, but he has a sweet and loving disposition. I think of him as a gentle soul (even if he did chew up his grandma’s shoe). Below is a picture of Auggie and his Aunt Suki, getting ready for their favorite activity: napping.

Suki and Auggie.

Friday, August 8, 2014

On Size and Life

I have recently been reading the fascinating book On Size and Life, by Thomas McMahon and John Tyler Bonner (Scientific American Library, 1983). In their preface, McMahon and Bonner write
This book is about the observable effects of size on animals and plants, seen and evaluated using the tools of science. It will come as no surprise that among those tools are microscopes and cameras. Ever since Antoni Van Leeuwenhoek first observed microorganisms (he called them “animalcules”) in a drop of water from Lake Berkel, the reality of miniature life has expanded our concepts of what all life could possibly be. Some other tools we shall use—equally important ones—are mathematical abstractions, including a type of relation we shall call an allometric formula. It turns out that allometric formulas reveal certain beautiful regularities in nature, describing a pattern in the comparisons of animals as different in size as the shrew and the whale, and this can be as delightful in its own way as the view through a microscope.
Their first chapter is similar to Sec. 1.1 on Distances and Sizes in the 4th edition of Intermediate Physics for Medicine and Biology, except it contains much more detail and is beautifully illustrated. They focus on larger animals; if you want to see a version of our Figs. 1.1 and 1.2 but with a scale bar of about 10 meters, take a look at McMahon and Bonner’s drawing of “the biggest living things” on Page 2 (taken from the 1932 book The Science of Life by the all-star team of H. G. Wells, J. S. Huxley, and G. P. Wells).

In their Chapter 2 (Proportions and Size) is a discussion of allometric formulas and their representation in log-log plots, similar to, but more extensive than, the discussion Russ Hobbie and I give in Section 2.10 (Log-Log Plots, Power Laws, and Scaling). McMahon and Bonner present an in-depth analysis of several biomechanical explanations for many allometric relationships. For instance, below is their description of “elastic similarity” in their Chapter 4 (The Biology of Dimensions).
Let us now consider a new scaling rule as an alternative to isometry (geometric similarity [all length scales increase together, leading to a change in size but no change in shape]), which was the main rule employed for discussing the theory of models in Chapter 3. This new scaling theory, which we shall call elastic similarity, uses two length scales instead of one. Longitudinal lengths, proportional to the longitudinal length scale ℓ, will be measured along the axes of the long bones and generally along the direction in which muscle tensions act. The transverse length scale, d, will be defined at right angles to ℓ, so that bone and muscle diameters will be proportional to d…When making the transformations of shape from a small animal to a large one, all longitudinal lengths (or simply “lengths”) will be multiplied by the same factor that multiplies the basic length, ℓ, and all diameters will be multiplied by the factor that multiplies the basic diameter, d. Furthermore, there will be a rule connecting ℓ and d… d ∝ ℓ^(3/2).
They then show that elastic similarity can be used to derive Kleiber’s law (metabolic rate is proportional to mass to the ¾ power), and justify elastic similarity using biomechanical analysis of buckling of a leg. I must admit I am a bit skeptical that the ultimate source of Kleiber’s law is biomechanics. In IPMB, Russ and I review more recent work suggesting that Kleiber’s law arises from general considerations of models that supply nutrients through branching networks, which to me sound more plausible. Nevertheless, McMahon and Bonner’s ideas are interesting, and do suggest that biomechanics can sometimes play a significant role in scaling.
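
Here, as I understand it, is the gist of that derivation. With elastic similarity, body mass M scales as ℓd², and since d ∝ ℓ^(3/2) this gives M ∝ ℓ⁴, so that ℓ ∝ M^(1/4) and d ∝ M^(3/8). If metabolic rate is set by the power the muscles can deliver, which scales with their cross-sectional area d², then metabolic rate goes as M^(6/8) = M^(3/4): Kleiber’s law.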

Their Chapter 5 (On Being Large) presents a succession of intriguing allometric relationships related to the motion of large animals (running, flying, swimming, etc). Let me give you one example: large animals have a harder time running uphill than smaller animals. McMahon and Bonner present a plot of oxygen consumption per unit mass versus running speed, and find that for a 30 g mouse there is almost no difference between running uphill and downhill, but for a 17.5 kg chimpanzee running uphill requires about twice as much oxygen as running downhill. In Chapter 6 (On Being Small) they examine what life is like for little organisms, and analyze some of the same issues Edward Purcell discusses in “Life at Low Reynolds Number.”

Overall, I enjoyed the book very much. I have a slight preference for Knut Schmidt-Nielsen’s book Scaling: Why Is Animal Size So Important?, although I must admit that Size and Life is the better illustrated of the two books.

Author Thomas McMahon was a major figure in biomechanics. He was a Harvard professor particularly known for his study of animal motion, and he even wrote a paper about “Groucho Running”: running with bent knees like Groucho Marx. Russ and I cite his paper “Size and Shape in Biology” (Science, Volume 179, Pages 1201–1204, 1973) in IPMB. I understand that his book Muscles, Reflexes and Locomotion is also excellent, though more technical; I have not read it. Below is the abstract from the article “Thomas McMahon: A Dedication in Memoriam” by Robert Howe and Richard Kronauer (Annual Review of Biomedical Engineering, Volume 3, Pages xv–xxxix, 2001).
Thomas A. McMahon (1943–1999) was a pioneer in the field of biomechanics. He made primary contributions to our understanding of terrestrial locomotion, allometry and scaling, cardiac assist devices, orthopedic biomechanics, and a number of other areas. His work was frequently characterized by the use of simple mathematical models to explain seemingly complex phenomena. He also validated these models through creative experimentation. McMahon was a successful inventor and also published three well-received novels. He was raised in Lexington, Massachusetts, attended Cornell University as an undergraduate, and earned a PhD at MIT. From 1970 until his death, he was a member of the faculty of Harvard University, where he taught biomedical engineering. He is fondly remembered as a warm and gentle colleague and an exemplary mentor to his students.
His New York Times obituary can be found here.

Friday, August 1, 2014

Interview with Russ Hobbie in The Biological Physicist

In 2006, just as Springer was about to publish the 4th edition of Intermediate Physics for Medicine and Biology, an interview with Russ Hobbie appeared in The Biological Physicist, a newsletter of the Division of Biological Physics of the American Physical Society. Below are some excerpts from the interview. You can read the entire thing in the December 2006 newsletter.
THE BIOLOGICAL PHYSICIST: Are there any stories you have about particular physics examples you have used in the book or in the classroom that have really awakened the interest of medical students to the importance of physics?

Russ Hobbie: I cannot speak to what has triggered a response in different students. But there is one amusing story. I was working with a pediatric cardiologist, Jim Moeller, to understand the electrocardiogram. I finally wrote up a 5-page paper explaining it with an electrostatic model. When I showed what I thought was simplicity itself to Jim, he could not understand a word of it. But he finally agreed to show it to some second-year medical students. Their response: “Thank goodness it is rational.” I think this shows the gap between our premed course and what the student needs in medical school and also the fact that the physics we love so dearly may be helpful to a medical student during the basic science years but is not so helpful later on. It also became clear to me that what we teach about x-rays and radioactivity is the only exposure to those topics that physicians will receive, unless they go into radiology!

THE BIOLOGICAL PHYSICIST: How has the book changed over its four editions? Has the way you have presented material evolved over the years?

Russ Hobbie: It is amusing to compare my explanation of the electrocardiogram in the four editions. In the first, I was thinking in terms of an electrostatic model. By the second edition, I had realized that a current dipole model was much better and had been in the literature for a long time. This has been improved even more in the 3rd and 4th editions. I am a slow learner! But as an excuse, I was confused for a long time because the physiologists called the current dipole moment “the electric force vector.”

As I have added material (such as non-linear systems and chaos) it has been necessary to remove material. For example, the first edition had 11 pages and 3 color plates on polarized light and birefringence. That material was cut to save money and to make room for biomagnetism in the second edition. I wish it was still there. I did not get around to discussing acoustics, hearing, and ultrasound until the fourth edition.

THE BIOLOGICAL PHYSICIST: How would you assess the impact of the book on the field of interdisciplinary research, and on interdisciplinary education? Do you have any information on the history of how quickly it was adopted by other departments, and how it is used in other interdisciplinary programs?

Russ Hobbie: I have always hoped that a physicist without the biological background could teach from the book, and the solutions manual was written in the hope that students could use it for an independent study course. (At the request of instructors, the solutions manual is now an Adobe Acrobat file which is password-protected. Instructors can ask me or Brad for the password and give it to students if they wish.)

Many physicists are more interested in molecular biophysics than physiology- and radiology-oriented physics and find that other books better meet their needs. However, there seems to be a growing interest in the book among biomedical engineers. One teaching technique that was very successful in the early years of the course had to be abandoned while I was serving as Associate Dean, because it took too much of my time. I required the students to find an article in the research literature that interested them and then to write a paper filling in all the missing steps. They could come to me for help as often as they needed. Then, three days after they submitted the paper, I would give them an oral exam on anything that I suspected they did not fully understand. They said this was a valuable experience; my office was packed with students the week before the papers were due; and I learned a lot myself.

THE BIOLOGICAL PHYSICIST: Have you found that there is a “cultural divide” between physicists and MDs? Some people in the Division of Biological Physics describe having difficulty communicating with medical researchers. Do you ever find that?

Russ Hobbie: Absolutely. One friend, Robert Tucker, got a PhD in biophysics with Otto Schmitt and then went to medical school. Bob said that medical school destroyed his ability to reason. This was probably an extreme statement, but it does capture the “drink from a fire hose” character of medical school. On the other hand, if I am having a myocardial infarct, I would prefer that the clinician taking care of me not start with Coulomb’s law!

Friday, July 25, 2014

The Eighteenth Elephant

I know that there are very few people out there interested in reading a blog about physics applied to medicine and biology. But those few (those wonderful few) might want to know of ANOTHER blog about physics applied to medicine and biology. It is called The Eighteenth Elephant. The blog is written by Professor Raghuveer Parthasarathy at the University of Oregon. He is a biological physicist, with an interest in teaching “The Physics of Life” to non-science majors. He also leads a research lab that studies many biological physics topics, such as imaging and the mechanical properties of membranes. If you like my blog about the 4th edition of Intermediate Physics for Medicine and Biology, you will also like The Eighteenth Elephant. Even if you don’t enjoy my blog, you still might like Parthasarathy’s blog (he doesn’t constantly bombard you with links to the amazon.com page where you can purchase his book).

One of my favorite entries from The Eighteenth Elephant was from last April. I’ve talked about the scaling of animal bones in this blog before. A bone must support an animal’s weight (proportional to the animal’s volume), its strength increases with its cross-sectional area, and its length generally increases with the linear size of the animal. Therefore, large animals need bones that are thicker relative to their length in order to support their weight. I demonstrate this visually by showing my class pictures of bones from different animals. Parthasarathy doesn’t mess around with pictures; he brings a dog femur and an elephant femur to class! (See the picture here; it’s enormous.) How much better than showing pictures! Now I just need to find my own elephant femur….
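To make the scaling explicit, here is a back-of-the-envelope version of the argument (a sketch of the constant-stress reasoning; McMahon's own elastic-similarity analysis is more refined). Let L be the animal's linear size and d the diameter of a bone:

```latex
% Weight scales as volume; bone strength scales as cross-sectional area.
\begin{align*}
  W &\propto L^{3} && \text{(weight $\sim$ volume)} \\
  \sigma &= W/A \propto L^{3}/d^{2} && \text{(stress on the bone's cross section)} \\
  \sigma &= \text{constant} \;\Rightarrow\; d \propto L^{3/2} && \text{(so $d/L \propto L^{1/2}$)}
\end{align*}
```

The relative thickness d/L grows as the square root of the animal's size, which is why an elephant femur looks so much stockier than a dog femur.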

Be sure to read the delightful story about 18 elephants that gives the blog its name.

Friday, July 18, 2014

Hexagons and Cellular Excitable Media

Two of my favorite homework problems in the 4th edition of Intermediate Physics for Medicine and Biology are Problems 39 and 40 in Chapter 10. Russ Hobbie and I ask the student to analyze a cellular excitable medium (often called a cellular automaton), which provides much insight into the propagation of excitation in cardiac tissue. I’ve discussed these problems before in this blog. I’m always amazed at how well you can understand cardiac arrhythmias using a model so simple that you could teach it to third graders.

When Time Breaks Down: The Three-Dimensional Dynamics of Electrochemical Waves and Cardiac Arrhythmias, by Art Winfree.
I learned about cellular excitable media from Art Winfree’s book When Time Breaks Down. To the best of my knowledge, the idea was first introduced by James Greenberg and Stuart Hastings in their paper “Spatial Patterns for Discrete Models of Diffusion in Excitable Media” (SIAM Journal on Applied Mathematics, Volume 34, pages 515–523, 1978), although they performed their simulations on a rectangular grid rather than on a hexagonal grid as in the homework problems from IPMB. Winfree, with his son Erik Winfree and Herbert Seifert, extended the model to three dimensions, and found exotic “organizing centers” such as a “linked pair of twisted scroll rings” (“Organizing Centers in a Cellular Excitable Medium,” Physica D: Nonlinear Phenomena, Volume 17, pages 109–115, 1985).
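If you would rather experiment than push pencil marks around a grid, the model takes only a few lines of code. Below is a minimal sketch of a Greenberg–Hastings-style automaton on a hexagonal grid, written in Python with the standard three states (quiescent, excited, refractory); the conventions in IPMB's homework problems may differ in detail.

```python
# A minimal Greenberg-Hastings-style automaton on a hexagonal grid,
# using axial coordinates (q, r). States: 0 = quiescent, 1 = excited,
# 2 = refractory. The rules here are illustrative; the conventions in
# IPMB's homework problems may differ.

HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def step(grid):
    """Advance the automaton one time step. grid maps (q, r) -> state."""
    new = {}
    for (q, r), state in grid.items():
        if state == 1:        # excited -> refractory
            new[(q, r)] = 2
        elif state == 2:      # refractory -> quiescent
            new[(q, r)] = 0
        else:                 # quiescent -> excited if any neighbor is excited
            excited = any(grid.get((q + dq, r + dr)) == 1
                          for dq, dr in HEX_NEIGHBORS)
            new[(q, r)] = 1 if excited else 0
    return new

# Example: excite one cell in a hexagon-shaped patch; a ring of
# excitation then spreads outward, one layer of cells per time step.
R = 5
grid = {(q, r): 0 for q in range(-R, R + 1) for r in range(-R, R + 1)
        if abs(q + r) <= R}
grid[(0, 0)] = 1
for _ in range(4):
    grid = step(grid)
```

Start instead with a broken wavefront adjacent to refractory cells and you get the spiral (reentrant) waves that make this toy model so instructive for arrhythmias.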

Predrawn hexagon grids to use with homework problems about cellular automata.
I imagine that students may have a difficult time with our homework problems, not because the problems themselves are difficult, but because they don’t have easy access to predrawn hexagon grids. It would be like trying to play chess without a chessboard. When I assign these problems, I provide my students with pages of hexagon grids, so they can focus on the physics. I thought my blog readers might also find this useful, so now you can find a page of predrawn hexagons on the book website. Or, if you prefer, you can find hexagon graph paper for free online here.
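Alternatively, a few lines of matplotlib will generate your own printable hexagon graph paper (a sketch; the grid dimensions and hexagon size below are arbitrary choices, so tune them to taste):

```python
# Generate printable hexagon graph paper with matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import RegularPolygon

s = 0.5                               # center-to-vertex distance of a hexagon
fig, ax = plt.subplots(figsize=(8.5, 11))
for row in range(24):
    for col in range(14):
        # Offset every other row by half a cell so the hexagons tile.
        x = col * np.sqrt(3) * s + (row % 2) * np.sqrt(3) * s / 2
        y = row * 1.5 * s
        ax.add_patch(RegularPolygon((x, y), numVertices=6, radius=s,
                                    fill=False, edgecolor='gray'))
ax.set_aspect('equal')
ax.autoscale_view()
ax.axis('off')
fig.savefig('hexagon_grid.pdf')
```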

In the previous blog entry I mentioned a paper I published in the Online Journal of Cardiology, in which I extended the cellular excitable medium to account for the virtual electrodes created when stimulating cardiac tissue. This change allowed the model to predict quatrefoil reentry. I concluded the paper by writing:
This extremely simple cellular excitable medium—which is nothing more than a toy model, stripped down to contain only the essential features—can, with one simple modification for strong stimuli, predict many interesting and important phenomena. Much of what we have learned about virtual electrodes and deexcitation is predicted correctly by the model (Efimov et al., 2000; Trayanova, 2001). I am astounded that this simple model can reproduce the complex results obtained by Lindblom et al. (2000). The model provides valuable insight into the essential mechanisms of electrical stimulation without hiding the important features behind distracting details.
“Virtual Electrodes Made Simple: A Cellular Excitable Medium Modified for Strong Electrical Stimuli.”
Unfortunately, the Online Journal of Cardiology no longer exists, so the link in my previous blog entry does not work. You can download a copy of this paper at my website. It contains everything except the animations that accompanied the figures in the original journal article. If you want to see the animations, you can look at the article archived here.