Friday, November 2, 2012

Art Winfree and Cellular Excitable Media

When Time Breaks Down,
by Art Winfree.
Ten years ago Art Winfree died. I’ve written about Winfree in this blog before (for example, see here and here). He shows up often in the 4th edition of Intermediate Physics for Medicine and Biology; Russ Hobbie and I cite Winfree’s research throughout our discussion of nonlinear dynamics and cardiac electrophysiology.

One place where Winfree’s work impacts our book is in Problems 39 and 40 in Chapter 10, discussing cellular automata. Winfree didn’t invent cellular automata, but his discussion of them in his wonderful book When Time Breaks Down is where I first learned about the topic.
Box 5.A: A Cellular Excitable Medium
Take a pencil and a sheet of tracing paper and play with Figure 5.2 [a large hexagonal array of cells] according to the following game rules … Each little hexagon in this honeycomb is supposed to represent a cell that may be excited for the duration of one step (put a “0” in the cell) or refractory (after the excited moment, replace the “0” with a “1”) or quiescent (after that erase the “1”) until such time as any adjacent cell becomes excited: then pencil in a “0” in the next step.
If you start with no “0’s,” you’ll never get any, and this simulation will cost you little effort. If you start with a single “0” somewhere, it will next turn to “1” while a ring of 6 neighbors become infected with “0”. As the hexagonal ring of “0’s” propagates, it is followed by a concentric ring of “1” refractoriness, right to the edge of the honeycomb, where all vanish.
Now see what happens if you violate the rules just once by erasing a segment of that ring wave when it is about halfway to the edges: you will have created a pair of counter-rotating vortices (alias phase singularities), each of which turns out to be a source of radially propagating waves.
(Stop reading until you have played some.)
You may feel a bit foolish, since this is obviously supposed to mimic action potential propagation, and the caricature is embarrassingly crude. Which aspects of its behavior are realistic and which others are merely telling us “honeycombs are not heart muscle”? The way to find out is to undertake successively more refined caricatures until a point of diminishing returns is reached. For most purposes, it is reached surprisingly soon.
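Winfree’s game is as easy to program as it is to pencil in. Below is a minimal sketch (my own, not Winfree’s) of the three-state honeycomb in Python; the “odd-r” offset representation of the hexagonal lattice, the grid size, and the integer state encoding are all arbitrary choices of mine.

```python
# A cellular excitable medium on a hexagonal grid, following Winfree's
# three rules: excited -> refractory -> quiescent, and a quiescent cell
# becomes excited when any of its six neighbors is excited.
import numpy as np

QUIESCENT, EXCITED, REFRACTORY = 0, 1, 2   # Winfree writes excited as "0", refractory as "1"

def hex_neighbors(r, c, rows, cols):
    """Six neighbors of cell (r, c) on an 'odd-r' offset hexagonal grid."""
    if r % 2 == 0:
        steps = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
    else:
        steps = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]

def step(grid):
    """Advance the whole honeycomb by one time step."""
    rows, cols = grid.shape
    new = np.empty_like(grid)
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] == EXCITED:
                new[r, c] = REFRACTORY
            elif grid[r, c] == REFRACTORY:
                new[r, c] = QUIESCENT
            elif any(grid[nr, nc] == EXCITED
                     for nr, nc in hex_neighbors(r, c, rows, cols)):
                new[r, c] = EXCITED
            else:
                new[r, c] = QUIESCENT
    return new

grid = np.zeros((21, 21), dtype=int)
grid[10, 10] = EXCITED        # a single excited cell seeds an expanding ring
for _ in range(5):
    grid = step(grid)
print(grid)
```

Erasing part of the expanding ring (setting those cells back to quiescent) after a few steps reproduces Winfree’s pair of counter-rotating vortices.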
I consider cellular automata—whose three simple rules can be mastered by a child—to be among the best tools for illustrating cardiac reentry. I like this model so much that I generalized it to account for electrical stimulation that produces adjacent regions of depolarization and hyperpolarization (Sepulveda et al., 1989; read more about that paper here). In “Virtual Electrodes Made Simple: A Cellular Excitable Medium Modified for Strong Electrical Stimuli,” published in the Online Journal of Cardiology, I added a fourth rule
During a cathodal stimulus, the state of the cell directly under the electrode and its four nearest neighbors in the direction perpendicular to the fibers change to the excited state, and the two remaining nearest neighbors in the direction parallel to the fibers change to the quiescent state, regardless of their previous state.
Using this simple model, I was able to initiate “quatrefoil reentry” (Lin et al., 1999; read more here). I also could reproduce most of the results of a simulation of the “pinwheel experiment” (a point stimulus applied near the end of the refractory period of a previous planar wave front) predicted by Lindblom et al. (2000). I concluded
This extremely simple cellular excitable medium—which is nothing more than a toy model, stripped down to contain only the essential features—can, with one simple modification for strong stimuli, predict many interesting and important phenomena. Much of what we have learned about virtual electrodes and deexcitation is predicted correctly by the model (Efimov et al., 2000; Trayanova, 2001). I am astounded that this simple model can reproduce the complex results obtained by Lindblom et al. (2000). The model provides valuable insight into the essential mechanisms of electrical stimulation without hiding the important features behind distracting details.
My online paper came out in 2002, the same year that Winfree died. In an obituary, Steven Strogatz wrote
When Art Winfree died in Tucson on November 5, 2002, at the age of 60, the world lost one of its most creative scientists. I think he would have liked that simple description: scientist. After all, he made it nearly impossible to categorize him any more precisely than that. He started out as an engineering physics major at Cornell (1965), but then swerved into biology, receiving his PhD from Princeton in 1970. Later, he held faculty positions in theoretical biology (Chicago, 1969–72), in the biological sciences (Purdue, 1972–1986), and in ecology and evolutionary biology (University of Arizona, from 1986 until his death).

So the eventual consensus was that he was a theoretical biologist. That was how the MacArthur Foundation saw him when it awarded him one of its “genius” grants (1984), in recognition of his work on biological rhythms. But then the cardiologists also claimed him as one of their own, with the Einthoven Prize (1989) for his insights about the causes of ventricular fibrillation. And to further muddy the waters, our own community honored his achievements with the 2000 AMS-SIAM Norbert Wiener Prize in Applied Mathematics, which he shared with Alexandre Chorin.

Aside from his versatility, what made Winfree so special (and in this way he was reminiscent of Wiener himself) was the originality of the problems he tackled; the sparkling creativity of his methods and results; and his knack for uncovering deep connections among previously unrelated parts of science, often guided by geometrical arguments and analogies, and often resulting in new lines of mathematical inquiry.

Friday, October 26, 2012

The Logistic Map

In Section 10.8 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the logistic map, a difference equation that can describe phenomena such as population dynamics. We are by no means the first to use the logistic map to illustrate deterministic chaos. Indeed, it has become the canonical example of chaos since Robert May published “Simple Mathematical Models With Very Complicated Dynamics” in 1976 (Nature, Volume 261, Pages 459–467). That paper has been cited nearly 2500 times, a measure of its major impact.

Russ and I write the logistic equation as (Eq. 10.36 in our book)

x_{j+1} = a x_j (1 − x_j)

where x_j is the population in the jth generation. Our first task is to determine the equilibrium value of x_j.
The equilibrium value x* can be obtained by solving Eq. 10.36 with x_{j+1} = x_j = x*:

x* = a x* (1 − x*), which gives x* = 1 − 1/a.    (Eq. 10.37)

Point x* can be interpreted graphically as the intersection of Eq. 10.36 with the line x_{j+1} = x_j, as shown in Fig. 10.22. You can see from either the graph or Eq. 10.37 that there is no solution for positive x if a is less than 1. For a = 1 the solution occurs at x* = 0. For a = 3 the equilibrium solution is x* = 2/3. Figure 10.23 shows how, for a = 2.9 and an initial value x_0 = 0.2, the values of x_j approach the equilibrium value x* = 0.655. This equilibrium point is called an attractor.

Figure 10.23 also shows the remarkable behavior that results when a is increased to 3.1. The values of x_j do not come to equilibrium. Rather, they oscillate about the former equilibrium value, taking on first a larger value, then a smaller value. This is called a period-2 cycle. The behavior of the map has undergone period doubling. What is different about this value of a? Nothing looks strange about Fig. 10.22. But it turns out that if we consider the slope of the graph of x_{j+1} versus x_j at x*, we find that for a greater than 3 the slope of the curve at the intersection has a magnitude greater than 1.
Usually, when Russ and I say something like “it turns out,” we include a homework problem to verify the result. Problem 34 in Chapter 10 does just that: the reader must prove that the magnitude of the slope exceeds 1 when a is greater than 3.
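If you would rather not trace Fig. 10.23 by hand, here is a quick numerical experiment (a sketch of my own, not from the book) that iterates Eq. 10.36 on either side of the bifurcation at a = 3:

```python
# Iterate the logistic map x_{j+1} = a x_j (1 - x_j) and report where
# the iterates end up after the transient has died away.
def iterate(a, x0=0.2, n=200, keep=4):
    x = x0
    for _ in range(n - keep):
        x = a * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = a * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

print(iterate(2.9))   # settles at the attractor x* = 1 - 1/2.9 = 0.655
print(iterate(3.1))   # alternates between two values: a period-2 cycle
```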

One theme of Intermediate Physics for Medicine and Biology is the use of simple, elementary examples to illustrate fundamental ideas. I like to search for such examples to use in homework problems. One example that has great biological and medical relevance is discussed in Problems 37 and 38 (a model for cardiac electrical dynamics based on the idea of action potential restitution). But when reading May’s review in Nature, I found another example that—while it doesn’t have much direct biological relevance—is as simple as, or even simpler than, the logistic map. Below is a homework problem based on May’s example.
Section 10.8

Problem 33 ½ Consider the difference equation
     (a) Plot x_{n+1} versus x_n for the case a = 3/2, producing a figure analogous to Fig. 10.22.
     (b) Find the range of values of a for which the solution at large n neither diverges to infinity nor decays to 0. You can do this using either arguments based on plots like that in part (a) or numerical examples.
     (c) Find the equilibrium value x* as a function of a, using a method similar to that in Eq. 10.37.
     (d) Determine whether this equilibrium value is stable or unstable, based on the magnitude of the slope of the x_{n+1} versus x_n curve.
     (e) For a = 3/2, calculate the first 20 values of x_n using 0.250 and 0.251 as initial conditions. Be sure to carry your calculations out to at least five significant figures. Do the results appear to be chaotic? Are the results sensitive to the initial conditions?
     (f) For one of the data sets generated in part (e), plot x_{n+1} versus x_n for 25 values of n, creating a plot analogous to Fig. 10.27. Explain how you could use this plot to distinguish chaotic data from a random list of numbers between zero and one.

Friday, October 19, 2012

Ernest Rutherford

Who is the greatest physicist never mentioned by name in the 4th edition of Intermediate Physics for Medicine and Biology? Russ Hobbie and I allude to Newton, Maxwell, Faraday, Bohr, Einstein, and many others. But a search for the name “Rutherford” comes up empty. In my opinion, Ernest Rutherford is the greatest physicist absent from our book. Ironically, he is also one of my favorite physicists: a colorful character who rivals Faraday as the greatest experimental scientist of all time.
Asimov's Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
Rutherford (1871–1937) was born in New Zealand and attended Cambridge University in England on a scholarship. His early work was on radioactivity, a subject discussed in Chapter 17 of our textbook. Asimov’s Biographical Encyclopedia of Science and Technology states
[Rutherford] was one of those who, along with the Curies, had decided that the rays given off by radioactive substances were of several different kinds. He named the positively charged ones alpha rays and the negatively charged ones beta rays… Between 1906 and 1909 Rutherford, together with his assistant, Geiger, studied alpha particles intensively and proved quite conclusively that the individual particle was a helium atom with its electrons removed.

Rutherford’s interest in alpha particles led to something greater still. In 1906, while still at McGill in Montreal, he began to study how alpha particles are scattered by thin sheets of metal… From this experiment Rutherford evolved the theory of the nuclear atom, a theory he first announced in 1911…

For working out the theory of radioactive disintegration of elements, for determining the nature of alpha particles, [and] for devising the nuclear atom, Rutherford was awarded the 1908 Nobel Prize in chemistry, a classification he rather resented, for he was a physicist and tended to look down his nose at chemists…

Rutherford was … the first man ever to change one element into another as a result of the manipulations of his own hands. He had achieved the dream of the alchemists. He had also demonstrated the first man-made “nuclear reaction”…

He was buried in Westminster Abbey near Newton and Kelvin.
Rutherford also measured the size of the nucleus. To explain his alpha particle scattering experiments, he derived his famous scattering formula (see Chapter 4 of Eisberg and Resnick for details). He found that the formula worked well except when high-energy alpha particles were fired at low atomic-number metal sheets. For instance, the results began to deviate from his formula when 3 MeV alpha particles were fired at aluminum. The homework problem below explains how to estimate the size of the nucleus from this observation. It is based on data shown in Fig. 4-7 of Eisberg and Resnick’s textbook.
Section 17.1

Problem ½  An alpha particle is fired directly at a stationary aluminum nucleus. Assume the only interaction is the electrostatic repulsion between the alpha particle and the nucleus, and that the aluminum nucleus is so heavy that it remains stationary. Calculate the distance of closest approach as a function of the initial kinetic energy of the alpha particle. This calculation is consistent with Ernest Rutherford’s alpha particle scattering experiments for energies lower than 3 MeV, but deviates from his experimental results for energies higher than 3 MeV. If the alpha particle enters the nucleus, the nuclear force dominates and the formula you derived no longer applies. Estimate the radius of the aluminum nucleus.
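Here is a quick numerical version of the estimate (my own sketch; the head-on geometry and the shorthand e²/4πε₀ = 1.44 MeV·fm are the only inputs):

```python
# Distance of closest approach for a head-on collision, where all the
# kinetic energy E becomes Coulomb potential energy:
#   E = (2)(13) e^2 / (4 pi eps0 d),  so  d = (2)(13)(1.44 MeV fm) / E.
e2_over_4pi_eps0 = 1.44    # MeV * fm
z_alpha, Z_aluminum = 2, 13

def closest_approach_fm(E_MeV):
    """Closest approach, in femtometers, of an alpha particle to aluminum."""
    return z_alpha * Z_aluminum * e2_over_4pi_eps0 / E_MeV

# At 3 MeV, where the data start to deviate from the scattering formula:
print(closest_approach_fm(3.0))   # about 12 fm, a rough estimate of the nuclear size
```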
Rutherford, Simple Genius,
by David Wilson.
To learn more about Ernest Rutherford and his groundbreaking experiments, I recommend the book Rutherford, Simple Genius, by David Wilson.

In addition to his fundamental contributions to physics, I have a personal reason for liking Rutherford. Academically speaking, he is my great-great-great-great-grandfather. My PhD advisor was John Wikswo, who got his PhD working under William Fairbank at Stanford. Fairbank got his PhD under Cecil Lane, who studied under Etienne Bieler, who worked for James Chadwick (discoverer of the neutron), who was a student of Rutherford’s.

Ernest Rutherford died (needlessly) on October 19, 1937; 75 years ago today.

Friday, October 12, 2012

The Gaussian integral

One of my favorite “mathematical tricks” is given in Appendix K of the 4th edition of Intermediate Physics for Medicine and Biology. The goal is to calculate the integral of the Gaussian function, e^{−x²}, or bell-shaped curve (this is often called the Gaussian integral). The indefinite integral cannot be expressed in terms of elementary functions (in fact, the “error function” is defined as the integral of the Gaussian), but the definite integral over the entire x axis (from −∞ to ∞) is amazingly simple: the square root of π. Here is how Russ Hobbie and I derive the result:
Integrals involving e^{−ax²} appear in the Gaussian distribution. The integral

I = ∫_{−∞}^{∞} e^{−ax²} dx

can also be written with y as the dummy variable:

I = ∫_{−∞}^{∞} e^{−ay²} dy.

These can be multiplied together to get

I² = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−a(x²+y²)} dx dy.

A point in the xy plane can also be specified by the polar coordinates r and θ (Fig. K.1). The element of area dx dy is replaced by the element r dr dθ:

I² = ∫₀^{2π} ∫₀^{∞} e^{−ar²} r dr dθ = 2π ∫₀^{∞} e^{−ar²} r dr.

To continue, make the substitution u = ar², so that du = 2ar dr. Then

I² = (π/a) ∫₀^{∞} e^{−u} du = π/a.

The desired integral is, therefore,

I = √(π/a).
Of course, if you let a = 1, you get the simple result I mentioned earlier. Isn’t this a cool calculation?
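If you don’t trust the trick, you can check it numerically. Here is a small sketch (standard library only) comparing a midpoint-rule estimate of the integral with √(π/a):

```python
# Numerically integrate exp(-a x^2) over (effectively) the whole real
# line and compare with the analytic answer sqrt(pi/a).
import math

def gauss_integral(a, half_width=20.0, n=200000):
    """Midpoint-rule estimate of the integral of exp(-a x^2) dx."""
    dx = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * dx
        total += math.exp(-a * x * x) * dx
    return total

for a in (0.5, 1.0, 2.0):
    print(a, gauss_integral(a), math.sqrt(math.pi / a))
```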

To learn more, click here. For those of you who prefer video, click here.

Evaluation of the Gaussian Integral.

Asimov's Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
The integral and function are named after the German mathematician Johann Karl Friedrich Gauss (1777–1855). Asimov’s Biographical Encyclopedia of Science and Technology (2nd Revised Edition) says
Gauss, the son of a gardener and a servant girl, had no relative of more than normal intelligence apparently, but he was an infant prodigy in mathematics who remained a prodigy all his life. He was capable of great feats of memory and of mental calculation. There are those with this ability who are of only average or below-average mentality, but Gauss was clearly a genius. At the age of three, he was already correcting his father’s sums, and all his life he kept all sorts of numerical records, even useless ones such as the length of lives of famous men, in days. He was virtually mad over numbers.

Some people consider him to have been one of the three great mathematicians of all time, the others being Archimedes and Newton.

Friday, October 5, 2012

The Truth About Terahertz

In Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss terahertz radiation.
For many years, there were no good sources or sensitive detectors for radiation between microwaves and the near infrared (0.1–100 THz; 1 THz = 10¹² Hz). Developments in optoelectronics have solved both problems, and many investigators are exploring possible medical uses of THz radiation (“T rays”). Classical electromagnetic wave theory is needed to describe the interactions, and polarization (the orientation of the E vector of the propagating wave) is often important. The high attenuation of water to this frequency range means that studies are restricted to the skin or surface of organs such as the esophagus that can be examined endoscopically. Reviews are provided by Smye et al. (2001), Fitzgerald et al. (2002), and Zhang (2002).
(By the way, apologies to Dr. N. N. Zinovev, a coauthor on the Fitzgerald et al. paper, whose last name is spelled incorrectly in our book.) 

In the September 2012 issue of the magazine IEEE Spectrum, Carter Armstrong (a vice president of engineering at L-3 Communications, in San Francisco) reviews some of the challenges facing the development of terahertz technology. His article “The Truth About Terahertz” begins
Wirelessly transfer huge files in the blink of an eye! Detect bombs, poison gas clouds, and concealed weapons from afar! Peer through walls with T-ray vision! You can do it all with terahertz technology—or so you might believe after perusing popular accounts of the subject.

The truth is a bit more nuanced. The terahertz regime is that promising yet vexing slice of the electromagnetic spectrum that lies between the microwave and the optical, corresponding to frequencies of about 300 billion hertz to 10 trillion hertz (or if you prefer, wavelengths of 1 millimeter down to 30 micrometers). This radiation does have some uniquely attractive qualities: For example, it can yield extremely high-resolution images and move vast amounts of data quickly. And yet it is nonionizing, meaning its photons are not energetic enough to knock electrons off atoms and molecules in human tissue, which could trigger harmful chemical reactions. The waves also stimulate molecular and electronic motions in many materials—reflecting off some, propagating through others, and being absorbed by the rest. These features have been exploited in laboratory demonstrations to identify explosives, reveal hidden weapons, check for defects in tiles on the space shuttle, and screen for skin cancer and tooth decay.

But the goal of turning such laboratory phenomena into real-world applications has proved elusive. Legions of researchers have struggled with that challenge for decades.
Armstrong then explores the reasons for these struggles. The large attenuation coefficient of T-rays places severe limitations on imaging applications. He then turns specifically to medical imaging.
Before leaving the subject of imaging, let me add one last thought on terahertz for medical imaging. Some of the more creative potential uses I’ve heard include brain imaging, tumor detection, and full-body scanning that would yield much more detailed pictures than any existing technology and yet be completely safe. But the reality once again falls short of the dream. Frank De Lucia, a physicist at Ohio State University, in Columbus, has pointed out that a terahertz signal will decrease in power to 0.0000002 percent of its original strength after traveling just 1 mm in saline solution, which is a good approximation for body tissue. (Interestingly, the dielectric properties of water, not its conductive ones, are what causes water to absorb terahertz frequencies; in fact, you exploit dielectric heating, albeit at lower frequencies, whenever you zap food in your microwave oven.) For now at least, terahertz medical devices will be useful only for surface imaging of things like skin cancer and tooth decay and laboratory tests on thin tissue samples.
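De Lucia’s number implies a startlingly large attenuation coefficient. A quick back-of-the-envelope calculation (my own, not from Armstrong’s article) converts the quoted power fraction into the coefficient μ in P = P₀ e^(−μx):

```python
# If the power falls to 0.0000002 percent (a fraction of 2e-9) in 1 mm,
# what attenuation coefficient does that imply?
import math

fraction_remaining = 2e-9    # 0.0000002 percent, as quoted
x_mm = 1.0                   # path length in millimeters

mu_per_mm = -math.log(fraction_remaining) / x_mm
print(mu_per_mm)             # about 20 per mm, or 200 per cm
print(1.0 / mu_per_mm)       # a penetration depth of roughly 0.05 mm (50 microns)
```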
Following a detailed review of terahertz sources, Armstrong finishes on a slightly more optimistic note.
There is still a great deal that we don’t know about working at terahertz frequencies. I do think we should keep vigorously pursuing the basic science and technology. For starters, we need to develop accurate and robust computational models for analyzing device design and operation at terahertz frequencies. Such models will be key to future advances in the field. We also need a better understanding of material properties at terahertz frequencies, as well as general terahertz phenomenology.

Ultimately, we may need to apply out-of-the-box thinking to create designs and approaches that marry new device physics with unconventional techniques. In other areas of electronics, we’ve overcome enormous challenges and beat improbable odds, and countless past predictions have been subsequently shattered by continued technological evolution. Of course, as with any emerging pursuit, Darwinian selection will have its say on the ultimate survivors.
Terahertz radiation is such a big field that one year ago the IEEE introduced a new journal: IEEE Transactions on Terahertz Science and Technology. In the inaugural issue, Taylor et al. examine “THz Medical Imaging.”

Friday, September 28, 2012

Benedek and Villars, Volume 3

Physics With Illustrative Examples
From Medicine and Biology, Volume 3,
by Benedek and Villars.
This is the third and final entry in a series of blog posts about Benedek and Villars’ textbook Physics With Illustrative Examples From Medicine and Biology. Today I discuss Volume 3, about electricity and magnetism. In the preface to the first edition of Volume 3, Benedek and Villars write
With this volume on Electricity and Magnetism, we complete the third and final volume of our textbooks on Physics, with Illustrative Examples from Medicine and Biology. We believe that this volume is as unique as our previous books on Classical Mechanics (Vol. 1) and Statistical Physics (Vol. 2). Here, we continue our program of interweaving into the rigorous development of classical physics, an analysis and clarification of a wide variety of important phenomena in physical chemistry, biology, physiology, and medicine.
All three volumes of
Physics With Illustrative Examples
From Medicine and Biology,
by Benedek and Villars.
The topics covered in Volume 3 are similar to those Russ Hobbie and I discuss in Chapters 6–9 of the 4th edition of Intermediate Physics for Medicine and Biology. Because I do research in the fields of bioelectricity and biomagnetism, you might expect this to be my favorite volume of the three, but it’s not. I don’t find that it contains as many rich and interesting biological examples. Yet it is a solid book, containing much useful material on electricity and magnetism.

Physics With Illustrative Examples
From Medicine and Biology,
by Benedek and Villars.
Before leaving this topic, I should say a few words about George Benedek and Felix Villars. Benedek is currently the Alfred H. Caspary Professor of Physics and Biological Physics in the Department of Physics at MIT and in the Harvard-MIT Division of Health Sciences and Technology. His group’s research program “centers on phase transitions, self-assembly and aggregation of biological molecules. These phenomena are of biological and medical interest because phase separation, self-assembly and aggregation of biological molecules are known to play a central role in several human diseases such as cataract, Alzheimer's disease, and cholesterol gallstone formation.” Villars was born in Switzerland. In the late 1940s, he collaborated with Wolfgang Pauli, and together they developed Pauli-Villars regularization. He began work at MIT in 1950, where he collaborated with Herman Feshbach and Victor Weisskopf. He became interested in the applications of physics to biology and medicine, and helped establish the Harvard-MIT Division of Health Sciences and Technology. He died in 2002 at the age of 81.

Friday, September 21, 2012

Benedek and Villars, Volume 2

Physics With Illustrative Examples
From Medicine and Biology, Volume 2,
by Benedek and Villars.
Last week I discussed Volume 1 of Benedek and Villars’ three-volume textbook Physics With Illustrative Examples From Medicine and Biology, which dealt with mechanics. The second volume discusses statistical physics. The preface to the first edition of Volume 2 states
In the present volume we develop and present the ideas of statistical physics, of which statistical mechanics and thermodynamics are but one part. We seek to demonstrate to students, early in their career, the power, the broad range, and the astonishing usefulness of a probabilistic, non-deterministic view of the origin of a wide range of physical phenomena. By applying this approach analytically and quantitatively to problems such as: the size of random coil polymers; the diffusive flow of solutes across permeable membranes; the survival of bacteria after viral attack; the attachment of oxygen to the binding sites on the hemoglobin molecule; and the effect of solutes on the boiling point and vapor pressure of volatile solvents; we demonstrate that the probabilistic analysis of statistical physics provides a satisfying understanding of important phenomena in fields as diverse as physics, biology, medicine, physiology, and physical chemistry.
Many of the topics in Volume 2 of Benedek and Villars are similar to those in Chapters 3–5 of the 4th edition of Intermediate Physics for Medicine and Biology: the Boltzmann factor, diffusion, and osmotic pressure. As I said last week, I’m most interested in the topics Benedek and Villars discuss that are not covered in Intermediate Physics for Medicine and Biology, such as their fascinating description of the use of Poisson statistics by Luria and Delbruck.
If a bacterial culture is brought into contact with bacteriophage virus particles, the viruses will attack the bacteria and kill them in a matter of hours. However, a small number of bacteria do survive the attack. These survivors will reproduce and pass on to their descendants their resistance to the virus. The form of resistance of the offspring of the surviving bacteria is that their surface does not adsorb the attacking virus. Bacterial strains can also be resistant to metabolic inhibitors, such as streptomycin, penicillin, and sulphonamide. If a bacterial culture is subjected to attack by these antibiotics, the resistant strain will emerge just as in the case of the phage resistant bacteria.

In the early 1940s, Luria and Delbruck were working on ‘mixed infection’ experiments in which the bacteriophage resistant strain of E. coli bacteria were used as indicators in studies they were making on T1 and T2 virus particles. Starting in the Fall of 1942, they began to put aside the mixed infection experiment and asked themselves: What is the origin of those resistant bacterial strains that they were using as indicators?
They go on to give a detailed description of how Poisson statistics were used by Luria and Delbruck to study mutations.

Russ Hobbie and I discuss the Poisson distribution in our Appendix J. The Poisson distribution is an approximation to the more familiar binomial distribution, applicable for large numbers and small probabilities. One can see how this distribution would be appropriate for Luria and Delbruck, who had large numbers of bacteria and a small probability of a mutation.
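Here is a small sketch (mine, with made-up illustrative numbers) showing how well the Poisson distribution approximates the binomial in the large-n, small-p regime:

```python
# Compare the binomial distribution with its Poisson approximation for
# many trials (n) and a tiny per-trial probability (p).
import math

def binomial_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)

n, p = 100000, 2e-5       # many cells, a tiny mutation probability (illustrative)
lam = n * p               # the Poisson mean, lambda = n p = 2
for k in range(5):
    print(k, binomial_pmf(k, n, p), poisson_pmf(k, lam))
```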

Russ and I cite Volume 1 of Benedek and Villars’ text in our Chapter 1 on biomechanics. We draw the data for our Fig. 4.12 from Benedek and Villars’ Volume 2. We never cite their Volume 3, about electricity and magnetism, which I’ll discuss next week.

Friday, September 14, 2012

Benedek and Villars, Volume 1

Physics With Illustrative Examples
From Medicine and Biology, Volume 1,
by Benedek and Villars.
One early textbook that served as a precursor to Intermediate Physics for Medicine and Biology is the three-volume Physics With Illustrative Examples From Medicine and Biology, by George Benedek and Felix Villars. The first edition, published in 1973, was just bound photocopies of a typewritten manuscript, but a nicely printed second edition appeared in 2000. The preface to Volume 1 of the first edition states
This is a unique book. It is an introductory textbook of physics in which the development of the principles of physics is interwoven with the quantitative analysis of a wide range of biological and medical phenomena. Conversely, the biological and medical examples serve to vitalize and motivate the learning of physics. By its very nature, this book not only teaches physics, but also exposes the student to topics in fields such as anatomy, orthopedic medicine, physiology, and the principles of homeostatic control.

This book, and its follow-up, Volumes II and III, grew out of an introductory physics course which we have offered to freshmen and sophomores at MIT since 1970. The stimulus for this course came from Professor Irving M. London, MD, Director of the Harvard-MIT Program in Health Sciences and Technology. He convinced us that continued advances in the biological and medical sciences demand that students, researchers, and physicians should be capable of applying the quantitative methods of the physical sciences to problems in the life sciences. We have written this book in the hope that students of the life sciences will come to appreciate the value of training in physics in helping them to formulate, analyze, and solve problems in their own fields.
This quote applies almost without change (except for replacing MIT with Russ Hobbie’s University of Minnesota) to Intermediate Physics for Medicine and Biology. Clearly the goals and objectives of the two works are the same.

Many of the topics in Volume 1 of Benedek and Villars are similar to those found in Chapters 1 and 10 of Intermediate Physics for Medicine and Biology: biomechanics, fluid dynamics, and feedback. Particularly interesting to me are the topics that Russ Hobbie and I don’t discuss, such as the physiological effects of underwater diving.
On ascent and descent the diver must arrange to have the pressure of gas in his lungs be the same as that of the surrounding water. He can do this either by breathing out on ascent or by adjusting the output pressure of his compressed air tanks. Second to drowning, the most serious underwater diving accident is produced by taking a full breath of air at depth, and holding this breath as the diver rises to the surface quickly. For example, if the diver did this at 99 ft he would have gas at 4 atm in his lungs. This is fine at 99 ft, but if he holds this total volume of gas on ascending, then at the surface the surrounding water is at 1 atm, and his lungs are holding air at 3 atm. This can do two things. His lungs can rupture, thereby allowing gas to flow into the space between lungs and ribs. This is called pneumothorax. Also the great pressure of air in the lungs can force air bubbles into the blood stream. These air embolisms can then occlude blood vessels in the brain or the coronary circulation, and this can lead to death. Of course, the obvious necessity of balancing pressure in the ears, sinuses, and intestines must be realized.
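The quoted pressures follow from the rule of thumb that seawater adds about one atmosphere for every 33 ft of depth. A quick check of the arithmetic (my own sketch, not from the book):

```python
# Absolute pressure in seawater: 1 atm at the surface plus about
# 1 atm for every 33 ft of depth.
def pressure_atm(depth_ft):
    return 1.0 + depth_ft / 33.0

p_at_depth = pressure_atm(99.0)          # 4 atm in the lungs at 99 ft
p_at_surface = pressure_atm(0.0)         # 1 atm of ambient pressure at the surface
print(p_at_depth, p_at_depth - p_at_surface)   # a 3-atm overpressure if the breath is held
```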
Benedek and Villars also have a delightful description of the physiological effects of low air pressure experienced by balloonists. It is too long to reproduce here, but well worth reading.

In the coming weeks, I will discuss Benedek and Villars’ second and third volumes.

Friday, September 7, 2012

Are Backscatter X-ray Machines at Airports Safe?

Two competing devices are used in airports to obtain full-body images of passengers: backscatter x-ray scanners and millimeter wave scanners. Today I want to examine those scanners that use x-rays.

Backscatter x-ray scanners work by a different mechanism than the ordinary x-ray imaging used in medicine. Chapter 16 of the 4th edition of Intermediate Physics for Medicine and Biology discusses traditional medical imaging (see Fig. 16.14): x-rays are passed through the body, and the attenuation of the beam provides the signal that produces the image. Backscatter x-ray scanners are different. They record the x-rays scattered back toward the source by Compton scattering. This allows the use of very weak x-ray beams, resulting in a lower dose.

The dose (or, more accurately, the equivalent dose) from one backscatter x-ray scan is about 0.05 μSv. The sievert is defined in Chapter 16 of Intermediate Physics for Medicine and Biology as a joule per kilogram (the definition includes a weighting factor for different types of radiation; for x-rays this factor is equal to one). The average annual background dose we are all exposed to is about 3 mSv, or 3000 μSv, arising mainly from inhalation of the radioactive gas radon. Clearly the dose from a backscatter x-ray scanner is very low: 60,000 times smaller than the average yearly background dose.

Nevertheless, the use of x-rays for airport security remains controversial because of our uncertainty about the effects of low doses of radiation. Russ Hobbie and I address this issue in Section 16.13, The Risk of Radiation.
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear-nonthreshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.

If the excess probability of acquiring a particular disease is αH [where H is the equivalent dose in sieverts] in a population N, the average number of extra persons with the disease is

m = α N H.

The product NH, expressed in person-Sv, is called the collective dose. It is widely used in radiation protection, but it is meaningful only if the LNT assumption is correct [emphasis added].
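To see why the LNT assumption matters so much here, run the collective-dose arithmetic for the scanners. In the sketch below, the risk coefficient α is my assumed value (roughly the ICRP nominal figure of 5 percent per sievert), not a number from our book:

```python
# Collective dose and expected excess cancers under the LNT model,
# m = alpha * N * H.
alpha = 0.05       # excess cancer probability per sievert (assumed ICRP-like value)
H = 0.05e-6        # equivalent dose per backscatter scan, in sieverts
N = 1e9            # scans per year, the figure Brenner cites

collective_dose = N * H          # person-Sv
m = alpha * collective_dose      # expected excess cancers per year
print(collective_dose, m)        # 50 person-Sv, about 2.5 cases -- if LNT holds
```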
So, are backscatter x-ray scanners safe? This question was debated in a Point/Counterpoint article appearing in the August issue of Medical Physics, a leading journal published by the American Association of Physicists in Medicine. A Point/Counterpoint article is included in each issue of Medical Physics, providing insight into medical physics topics at a level just right for readers of Intermediate Physics for Medicine and Biology. The format is always the same: two leading medical physicists each defend one side or the other of a controversial proposition. In August, the proposition is “Backscatter X-ray Machines at Airports are Safe.” Elif Hindie of the University of Bordeaux, France argues for the proposition, and David Brenner of Columbia University argues against it.

Now let us see what Drs. Hindie and Brenner have to say about this idea. Hindie writes (references removed)
The LNT model postulates that every dose of radiation, no matter how small, increases the probability of getting cancer. This highly speculative hypothesis was introduced on the basis of flimsy scientific evidence more than 50 years ago, at a time when cellular biology was a largely unexplored field. Over the past decades, an ever-increasing number of scientific studies have consistently shown that the LNT model is incompatible with radiobiological and experimental data, especially for very low doses.

The LNT model was mainly intended as a tool to facilitate radioprotection regulations and, despite its biological implausibility, this may remain its raison d’être. However, the LNT model is now used in a misguided way. Investigators multiply infinitesimal doses by huge numbers of individuals in order to obtain the total number of hypothetical cancers induced in a population. This practice is explicitly condemned as “incorrect” and “not reasonable” by the International Commission on Radiological Protection, among others.
Brenner counters
Of course this individual risk estimate is exceedingly uncertain. Some have argued that the risk at very low doses is zero. Others have argued that phenomena such as tissue/organ microenvironment effects, bystander effects, and “sneaking through” immune surveillance, imply that low-dose radiation risks could be higher than anticipated. The bottom line is that individual risk estimates at very low doses are extremely uncertain.

But when extremely large populations are involved, with up to 10⁹ scans per year in this case, risk should also be viewed from the perspective of the entire exposed population. Population risk quantifies the number of adverse events expected in the exposed population as a result of a proposed practice, and so depends on both the individual risk and on the number of people exposed. Population risk is described by ICRP as “one input to . . . a broad judgment of what is reasonable,” and by NCRP as “one of the means for assessing the acceptability of a facility or practice.” Population risk is considered in many other policy areas where large populations are exposed to very small risks, such as nuclear waste disposal or vaccination.
The debate about the LNT model and the validity of the concept of collective dose is not merely academic. It gets to the heart of how we perceive, assess, and defend ourselves against the risk of radiation. Low doses of radiation pose a risk to a large population only if there is no threshold below which the risk falls to zero. Until the validity of the linear-nonthreshold model is settled, I suspect we will continue to witness passionate debates—and future Point/Counterpoint articles—about the safety of ionizing radiation.

Friday, August 31, 2012

Edward Purcell (1912–1997)

Yesterday, August 30, was the 100th anniversary of the birth of physicist Edward Purcell (1912–1997). Purcell appears several times in the 4th edition of Intermediate Physics for Medicine and Biology. He first shows up in Chapter 1, in the discussion of fluid dynamics and the Reynolds number.
When the Reynolds number is small, viscous effects are important. The fluid is not accelerated, and external forces that cause the flow are balanced by viscous forces. Since viscosity is a form of internal friction in the fluid, work done on the system by the external forces is transformed into thermal energy. The low-Reynolds number regime is so different from our everyday experience that the effects often seem counterintuitive. They are nicely described by Purcell (1977).
The reference is to Purcell’s wonderful American Journal of Physics paper “Life at Low Reynolds Number” (Volume 45, Pages 3–11, 1977). It is a classic that I always hand out to my students when I teach PHY 325, Biological Physics (a class based on the textbook… you guessed it… Intermediate Physics for Medicine and Biology).
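To get a feel for just how low “low Reynolds number” is, plug in numbers for a swimming bacterium. The cell size and speed below are my assumed round numbers, not values taken from Purcell’s paper:

```python
# Reynolds number Re = rho * v * L / eta for a bacterium in water.
rho = 1000.0     # density of water, kg/m^3
eta = 1.0e-3     # viscosity of water, Pa*s
L = 2.0e-6       # size of the cell, m (assumed)
v = 30.0e-6      # swimming speed, m/s (assumed)

Re = rho * v * L / eta
print(Re)        # about 6e-5: viscosity utterly dominates inertia
```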

Purcell makes his second appearance in Chapter 4
The analysis [of diffusion] can also be applied to the problem of bacterial chemotaxis—the movement of bacteria along concentration gradients. This problem has been discussed in detail by Berg and Purcell (1977).
In this case, the reference is to his article with Howard Berg
Berg, H. C., and E. M. Purcell (1977). “Physics of Chemoreception,” Biophysical Journal, Volume 20, Pages 193–219.
Electricity and Magnetism,
Volume 2 of the Berkeley Physics Course,
by Edward Purcell.
In Chapter 8, Russ Hobbie and I often cite Purcell’s excellent textbook Electricity and Magnetism (1985), which is Volume 2 of the renowned Berkeley Physics Course. Our Figure 8.10 is a reproduction of one of his figures. Purcell’s book is unusual for an introductory text in that it develops magnetism as an implication of special relativity. Russ and I write
We now know that magnetism results from electric forces that moving charges exert on other moving charges and that the appearance of the magnetic force is a consequence of special relativity. An excellent development of magnetism from this perspective is found in Purcell (1985).
I’m not sure that this is the best way to teach magnetostatics to college freshmen taking introductory physics, but it does provide exceptional insight into the ultimate origin of the magnetic force, especially when described in Purcell’s prose.

In Chapter 18, Russ and I describe the work that earned Purcell the Nobel Prize he shared with Felix Bloch: nuclear magnetic resonance. We describe the Carr-Purcell pulse sequence, a set of radio-frequency magnetic pulses that produce a series of spin echoes, allowing measurement of the NMR T2 relaxation time. We then consider the improved Carr-Purcell-Meiboom-Gill pulse sequence, which is like the Carr-Purcell sequence except that it avoids cumulative errors if the radio-frequency pulses do not have exactly the correct duration or amplitude.
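As a rough illustration of how an echo train measures T2, the sketch below evaluates the echo-envelope amplitude e^(−t/T2) at the echo times t = 2kτ. The T2 and τ values are arbitrary assumptions, and diffusion effects are ignored:

```python
# Spin-echo amplitudes in a Carr-Purcell train decay as exp(-t/T2),
# with echoes forming at t = 2*tau, 4*tau, 6*tau, ...
import math

T2 = 100.0e-3    # transverse relaxation time, s (assumed)
tau = 5.0e-3     # delay between the 90-degree and 180-degree pulses (assumed)

for k in range(1, 6):
    t = 2 * k * tau                   # time of the k-th echo
    amplitude = math.exp(-t / T2)     # echo envelope, ignoring diffusion
    print(k, round(t, 3), round(amplitude, 3))
```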

I’m a loyal reader of Time magazine, and to me it is impressive that Purcell appeared on the cover of Time when the magazine selected 15 scientists—including Purcell—as its Men of the Year for 1960.

You can learn more about Edward Purcell in an oral history transcript prepared by the Niels Bohr Library and Archives, part of the American Institute of Physics Center for History of Physics. Also, see his New York Times obituary here.