Friday, May 20, 2011

Non-Newtonian Fluids and the Rheology of Blood

In Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I explain the difference between a Newtonian fluid and a non-Newtonian fluid.
A fluid can support a viscous shear stress if the shear strain is changing. One way to create such a situation is to immerse two parallel plates, each of area S, in the fluid, and to move one parallel to the other … The variation of velocity between the plates gives rise to a velocity gradient dvx/dy

In order to keep the top plate moving and the bottom plate stationary, it is necessary to exert a force of magnitude F on each plate: to the right on the upper plate and to the left on the lower plate. The resulting shear stress or force per unit area is in many cases proportional to the velocity gradient:

F/S = η dvx/dy .   (1.33)

The constant η is called the coefficient of viscosity … Fluids that are described by Eq. 1.33 are called Newtonian fluids. Many fluids are not Newtonian.
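To get a feel for the magnitudes in Eq. 1.33, here is a quick numerical sketch. The plate speed, gap, and viscosity are illustrative values I have assumed; they are not numbers from the text.

```python
# Shear stress F/S = eta * dvx/dy for a Newtonian fluid (Eq. 1.33).
# All numerical values below are illustrative assumptions.
eta = 1.0e-3      # viscosity of water at room temperature, Pa s (approximate)
v_top = 0.01      # speed of the moving upper plate, m/s
gap = 1.0e-3      # separation between the plates, m

velocity_gradient = v_top / gap          # dvx/dy, in 1/s
shear_stress = eta * velocity_gradient   # F/S, in Pa

print(f"{shear_stress:.3f} Pa")  # 0.010 Pa
```

For a non-Newtonian fluid such as blood, η itself would change with the velocity gradient, so doubling the plate speed would not simply double the stress.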
At the end of the chapter, we give an example of a biologically important non-Newtonian fluid.
Blood is not a Newtonian fluid. The viscosity depends strongly on the fraction of volume occupied by red cells (the hematocrit).
An excellent review of blood’s fluid behavior can be found in the article “Rheology of Blood” by Edward Merrill (Physiological Reviews, Volume 49, Pages 863–888, 1969). Rheology is the part of fluid mechanics that deals with non-Newtonian fluids. Merrill explains clearly the difference between a Newtonian fluid with a high viscosity and a non-Newtonian fluid.
A Newtonian liquid is one in which the viscosity, at fixed temperature and pressure, is independent of the shear stress. Thus, a non-Newtonian liquid is one in which the viscosity depends on shear stress. Water and honey are Newtonian, but many aqueous suspensions of fine particulate matter such as water-base paint, plaster, and oil emulsions are non-Newtonian. The distinction is qualitatively obvious if one imagines two spoons, one in a pot of honey (Newtonian) and the other in a pot of mayonnaise (non-Newtonian emulsion). The honey is harder to stir (has a higher viscosity) than the mayonnaise, but when the spoons are removed and held above the pots, the honey continues to drizzle off its spoon, whereas the mayonnaise coating the other spoon clings indefinitely to it without flow, thus exhibiting “infinite” viscosity.
An important concept when discussing the rheology of blood is yield stress. Merrill explains
Blood … exhibits a “yield stress.” This means that, if …one increases from zero the stress, but keeps it less than a critical value, the response will be elastic … On removal of the stress, the shape of the blood film will be unaltered, i.e., no flow will have occurred. However, if the yield stress is exceeded, irreversible deformation will occur.
In other words, it acts like a solid at low stress, and a fluid at high stress. Merrill concludes by discussing the physiological significance of the non-Newtonian nature of blood.
In summary, the relevance of blood rheology to physiological fluid mechanics is to make stopping of flows easier, starting of flows more difficult, and slow flows more energy consuming than would be expected if blood were a simple, cell-less, micromolecular fluid of equal viscosity—and these effects are increasingly emphasized with increase of hematocrit and fibrinogen concentration.
Besides blood, another dramatic example of a non-Newtonian fluid is a mixture of corn starch and water. My Oakland University colleague Alberto Rojo (whose office is next door to mine) has made a fun video demonstrating how you can “walk on water” by taking advantage of this mixture’s non-Newtonian properties. The effect is fascinating.

Alberto Rojo walks on a mixture of corn starch and water.

Friday, May 13, 2011

Drawing Figures

Two of my favorite figures in the 4th edition of Intermediate Physics for Medicine and Biology are Fig. 7.13 (the extracellular potential produced by an action potential along a nerve axon) and Fig. 8.14 (the magnetic field produced by the same axon). John Wikswo and I prepared these figures when I was in graduate school at Vanderbilt University.
Fig. 7.13. The exterior potential calculated using the method of Clark and Plonsey.
From Intermediate Physics for Medicine and Biology, 4th edition.
Fig. 8.14. A three-dimensional plot of the magnetic field around the crayfish axon.
From Intermediate Physics for Medicine and Biology, 4th edition.
Soon after entering graduate school in 1982, I took a class taught by John based on Russ Hobbie’s first edition of Intermediate Physics for Medicine and Biology. Clearly, the book had a significant influence on my subsequent career. (I remember the bright yellow cover of the first edition: my office is probably one of the few places outside of Minnesota where all four editions of the book sit proudly, side-by-side, on a bookshelf.) When preparing the second edition, Russ added a chapter on biomagnetism, and asked John to contribute a figure showing the magnetic field produced by an axon. Of course, this is just the sort of work graduate students are good for, and I was given the task of preparing the figure (actually two figures, as we decided to make a similar figure for the extracellular potential). This was not a big job, because I already had access to the computer code that my friend Jim Woosley had written for his master’s thesis, and which I used when preparing our paper “The Magnetic Field of a Single Axon: A Volume Conductor Model,” (Woosley, Roth, and Wikswo, 1985, Mathematical Biosciences, Volume 76, Pages 1–36).

In the mid-1980s, three-dimensional graphics programs were not as common as they are now, but we had one and I was able to create the figure. What we didn’t have was a publication-quality printer or software to prepare and manipulate figures. Therefore, once I had the plots created, they went to the drafting room to be finished. John usually had one or more undergraduates hired for the sole task of preparing figures. I don’t remember exactly who worked on the two figures for the 2nd edition, but it may have been David Barach, son of Vanderbilt physics professor John Barach. The draftsman’s job was to retrace the figure, thereby providing a higher quality appearance than a dot-matrix printer could produce. As I recall, his job was also to remove hidden lines (I don’t think that our 3-d graphics program was “smart” enough to remove hidden lines on its own). He also labeled all the axes using some really neat rub-on letters that John was able to purchase in both Roman and Greek fonts (note the “μ” in μV in Fig. 7.13). I remember David working on figures at a large, slanted drafting table, using very high quality, vellum-like paper. He had rulers, triangles, and “French curves” of all types. First the drawing was done in pencil, and then traced with black ink. Once finished, additional copies were made photographically by a center in the Vanderbilt Medical School dedicated to such work. Before Photoshop, PowerPoint, and other such programs, that is the way figures were prepared. John had a policy that all graduate students had to get some experience at the drafting table, which I didn’t mind at all. At the risk of sounding like a Luddite nostalgic for the days of buggy whips, I think those figures have a little more personality and visual appeal than computer-generated figures drawn today.

The figures appeared in the second edition of Russ’s book, and have continued on through subsequent editions (including the 4th edition, on which I have the high honor of becoming a coauthor). Figures like that required much time and expense to prepare, and are difficult to edit. But my, it was more fun to really “draw” those figures than it is to churn out figures using graphics software.

Friday, May 6, 2011

Central Slice Theorem and Ronald Bracewell

Chapter 12 of the 4th edition of Intermediate Physics for Medicine and Biology deals with images and tomography. One of the key ideas in tomography is the “central slice theorem.” Russ Hobbie and I write in Section 12.4 that
The Fourier transform of the projection at angle θ is equal to the two-dimensional Fourier transform of the object, evaluated in the direction θ in Fourier transform space. This result is known as the projection theorem or the central slice theorem (Problem 17). The transforms of a set of projections at many different angles provide values of C and S [the cosine and sine parts of the 2-d Fourier transform] throughout the kxky plane [frequency space] that can be used in Eq. 12.9a [the definition of the 2-d Fourier transform] to calculate f(x,y).
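The theorem is easy to verify numerically. In the sketch below (the Gaussian test object and grid size are arbitrary choices of mine), the 1-D Fourier transform of the θ = 0 projection is compared against the ky = 0 slice of the object’s 2-D Fourier transform.

```python
import numpy as np

# Numerical check of the central slice theorem for the theta = 0
# projection: the 1-D Fourier transform of the projection over y
# equals the ky = 0 slice of the 2-D Fourier transform of f(x, y).
# The Gaussian test object is an arbitrary choice.
n = 64
x = np.arange(n) - n / 2
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2) / 50.0)        # test object f(x, y)

projection = f.sum(axis=1)               # project along y (theta = 0)
ft_projection = np.fft.fft(projection)   # 1-D transform of the projection

ft_2d = np.fft.fft2(f)                   # 2-D transform of the object
central_slice = ft_2d[:, 0]              # the ky = 0 line through the origin

print(np.allclose(ft_projection, central_slice))  # True
```

Repeating this for projections at many angles fills frequency space with slices through the origin, which is exactly the data a tomographic reconstruction needs.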
I consider the central slice theorem to be one of the most important concepts in medical imaging. How was this fundamental idea first developed? The answer to that question provides a fascinating example of how physics and engineering can contribute to medicine.

Ronald Bracewell first developed the central slice theorem while working in the field of radio astronomy. His 2007 New York Times obituary states
Ronald N. Bracewell, an astronomer and engineer who used radio telescopes to make early images of the Sun’s surface, in work that also led to advances in medical imaging, died on Aug. 12 at his home in Stanford, Calif. He was 86…

With his colleagues at Stanford University in the 1950s, Dr. Bracewell designed a specialized radio telescope, called a spectroheliograph, to receive and evaluate microwaves emitted by the Sun…

Later, in the 1970s, the techniques and a formula devised by Dr. Bracewell were applied by other scientists in developing X-ray imaging of tumors, called tomography, and other forms of medical imaging that scan electromagnetic and radio waves. Dr. Bracewell advised researchers at Stanford and other institutions, but did not conduct laboratory research in the field.
The Fourier Transform
and Its Applications,
by Ronald Bracewell.
Russ and I cite Bracewell’s 1990 paper “Numerical Transforms” (Science, Volume 248, Pages 697–704). The central slice theorem was published in 1956 in the Australian Journal of Physics (Volume 9, Pages 198–217). Early in his career Bracewell published a lot in that journal, which is now defunct but maintains a website with free access to all the papers. Bracewell also wrote a marvelous book: The Fourier Transform and Its Applications (originally published in 1965; the revised 2nd edition is published by McGraw-Hill, New York, 1986). When writing this blog entry, I checked this book out of Kresge Library here at Oakland University. Once I opened it, I realized it is an old friend. I am sure I read this book in graduate school. It contains many pictures that allow the student to gain an intuition about the Fourier transform, an extraordinarily valuable skill to develop. The introduction states
The present work began as a pictorial guide to Fourier transforms to complement the standard lists of pairs of transforms expressed mathematically. It quickly became apparent that the commentary would far outweigh the pictorial list in value, but the pictorial dictionary of transforms is nevertheless important, for a study of the entries reinforces the intuition, and many valuable and common types of function are included which, because of their awkwardness when expressed algebraically, do not occur in other lists.
The text also does a fine job describing convolutions.
Convolution is used a lot here. Experience shows that it is a fairly tricky concept when it is presented bluntly under its integral definition, but it becomes easy if the concept of a functional is first understood.
Many of the ideas that Russ and I present in Chapter 11 of Intermediate Physics for Medicine and Biology are examined in more detail in Bracewell’s book. I recommend it as a reference to keep at your side as you plow through the mathematics of Fourier analysis.

Finally, Bracewell’s view of homework problems, as stated in his Preface to the second edition, mirrors my own.
A good problem assigned at the right stage can be extremely valuable for the student, but a good problem is hard to compose. Among the collection of supplementary problems now included at the end of the book are several that go beyond being mathematical exercises by inclusion of technical background or by asking for opinions.

Friday, April 29, 2011

Bursting

Last week in this blog I talked briefly about bursting in pancreatic beta cells. A bursting cell fires several action potential spikes consecutively, followed by an extended quiescent period, followed again by another burst of action potentials, and so on. One of the first and best-known models for bursting was developed by James Hindmarsh and Malcolm Rose (“A Model of Neuronal Bursting Using Three Coupled First Order Differential Equations,” Proceedings of the Royal Society of London, B, Volume 221, Pages 87–102, 1984). Their analysis was an extension of the FitzHugh-Nagumo model, with an additional variable governed by a very slow time constant. Their system of equations is

dx/dt = y − x³ + 3x² − z + I

dy/dt = 1 − 5x² − y

dz/dt = 0.001 [4(x + 1.6) − z]

where x is the membrane potential (appropriately made dimensionless), y is a recovery variable (like a sodium channel inactivation gate), z is the slow bursting variable, and I is an external stimulus current. For some values of I, this model predicts bursting behavior.
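A minimal forward-Euler integration of these three equations can be sketched in Python. The stimulus current I = 2, the time step, and the initial conditions are my assumptions, chosen to land in a bursting regime; the paper explores a range of currents.

```python
import numpy as np

# Forward-Euler integration of the Hindmarsh-Rose equations above.
# I = 2, dt, t_max, and the initial conditions are assumptions chosen
# to produce bursting, not values taken from the paper.
def hindmarsh_rose(t_max=2000.0, dt=0.005, I=2.0):
    x, y, z = -1.6, 0.0, 0.0   # start near the resting potential
    xs = []
    for _ in range(int(t_max / dt)):
        dx = y - x**3 + 3 * x**2 - z + I
        dy = 1 - 5 * x**2 - y
        dz = 0.001 * (4 * (x + 1.6) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return np.array(xs)

v = hindmarsh_rose()
# x alternates between a spiking phase (x above zero) and a long
# quiescent phase (x near -1.6), driven by the slow variable z.
print(v.max(), v.min())
```

Plotting v against time shows the signature pattern: clusters of rapid spikes separated by long silent intervals, with the slow variable z ratcheting up during each burst and decaying between them.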

Bursting: The Genesis of
Rhythm in the Nervous System,
by Stephen Coombes and Paul Bressloff.
There is an entire book dedicated to this topic: Bursting: The Genesis of Rhythm in the Nervous System, by Stephen Coombes and Paul Bressloff (World Scientific Publishing Co., 2005). The first chapter, co-written by Hindmarsh, provides a little of the history behind the Hindmarsh-Rose model:
The collaboration that led to the Hindmarsh-Rose model began in 1979 shortly after Malcolm Rose joined Cardiff University. The particular project was to model the synchronization of firing of two snail neurons in a relatively simple way that did not use the full Hodgkin-Huxley equations... A natural choice at the time was to use equations of the FitzHugh [type]…

A problem with this choice was that these equations do not provide a very realistic description of the rapid firing of the neuron compared to the relatively long interval between firing. Attempts were made to achieve a more realistic description by making the time constants … voltage dependent. In particular so the rates of change of x and y were much smaller in the subthreshold or recovery phase. These were not convincing and it was not until Malcolm raised the question about whether the FitzHugh equations could account for “tail current reversal” that progress was made.

The modification of the FitzHugh equations to account for tail current reversal was crucial for the development of the Hindmarsh-Rose model.
For those not familiar with the FitzHugh-Nagumo model, see Problem 33 in Chapter 10 of the 4th edition of Intermediate Physics for Medicine and Biology, or see the Scholarpedia article by FitzHugh himself, written before he died in 2007. If you want to see some bursting patterns, check out this YouTube video. It is not great, but you will get the drift of what the model predicts.

My friend Artie Sherman also had a chapter in the bursting book, titled “Beyond Synchronization: Modulatory and Emergent Effects of Coupling in Square-Wave Bursting.” He has been working on bursting in pancreatic beta cells for years, as a member (and now chief) of the Laboratory of Biological Modeling in the Mathematical Research Branch of the National Institute of Diabetes and Digestive and Kidney Diseases, part of the National Institutes of Health. His work is the best I am aware of for modeling bursting.

Friday, April 22, 2011

Effects of Rapid Buffers on Ca2+ Diffusion and Ca2+ Oscillations

I enjoy taking a scientific paper and reducing it to a homework problem. For example, one of the new homework problems in the 4th edition of Intermediate Physics for Medicine and Biology is Problem 23 of Chapter 4 (Transport in an Infinite Medium), based on a paper by John Wagner and Joel Keizer.

Problem 23 Calcium ions diffuse inside cells. Their concentration is also controlled by a buffer:
Ca + B ⇌ CaB.
The concentrations of free calcium, unbound buffer, and bound buffer ([Ca], [B], and [CaB]) are governed, assuming the buffer is immobile, by the differential equations
∂[Ca]/∂t = D∇²[Ca] − k+[Ca][B] + k−[CaB],
∂[B]/∂t = −k+[Ca][B] + k−[CaB],
∂[CaB]/∂t = k+[Ca][B] − k−[CaB].
(a) What are the dimensions (units) of k+ and k− if the concentrations are measured in mole l−1 and time in s?
(b) Derive differential equations governing the total calcium and buffer concentrations, [Ca]T = [Ca] + [CaB] and [B]T = [B] + [CaB]. Show that [B]T is independent of time.
(c) Assume the calcium and buffer interact so rapidly that they are always in equilibrium:
[Ca][B]/[CaB] = K,
where K = k−/k+. Write [Ca]T in terms of [Ca], [B]T, and K (eliminate [B] and [CaB]).
(d) Differentiate your expression in (c) with respect to time and use it in the differential equation for [Ca]T found in (b). Show that [Ca] obeys a diffusion equation with an “effective” diffusion constant that depends on [Ca]:
Deff = D/(1 + K[B]T/(K + [Ca])²).
(e) If [Ca] ≪ K and [B]T = 100K (typical for the endoplasmic reticulum), determine Deff/D.
For more about diffusion with buffers, see Wagner and Keizer (1994).
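Part (e) can be checked numerically under its stated assumptions, [Ca] ≪ K and [B]T = 100K. The value of K below is an arbitrary choice of mine, since it cancels in the ratio Deff/D.

```python
# Part (e), checked numerically: with [Ca] << K and [B]_T = 100 K,
# Deff = D / (1 + K [B]_T / (K + [Ca])**2) reduces to about D/101.
# K = 1 is an arbitrary choice; it drops out of the ratio Deff/D.
K = 1.0
B_total = 100.0 * K
Ca = 1.0e-6 * K            # [Ca] << K

ratio = 1.0 / (1.0 + K * B_total / (K + Ca) ** 2)   # Deff / D
print(round(ratio, 4))     # 0.0099, i.e. Deff is about D/101
```

So an immobile buffer at this concentration slows the effective diffusion of calcium by roughly two orders of magnitude.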
The reference and abstract of the paper are given below.
John Wagner and Joel Keizer (1994) “Effects of Rapid Buffers on Ca2+ Diffusion and Ca2+ Oscillations,” Biophysical Journal, Volume 67, Pages 447–456.

Based on realistic mechanisms of Ca2+ buffering that include both stationary and mobile buffers, we derive and investigate models of Ca2+ diffusion in the presence of rapid buffers. We obtain a single transport equation for Ca2+ that contains the effects caused by both stationary and mobile buffers. For stationary buffers alone, we obtain an expression for the effective diffusion constant of Ca2+ that depends on local Ca2+ concentrations. Mobile buffers, such as fura-2, BAPTA, or small endogenous proteins, give rise to a transport equation that is no longer strictly diffusive. Calculations are presented to show that these effects can modify greatly the manner and rate at which Ca2+ diffuses in cells, and we compare these results with recent measurements by Allbritton et al. (1992). As a prelude to work on Ca2+ waves, we use a simplified version of our model of the activation and inhibition of the IP3 receptor Ca2+ channel in the ER membrane to illustrate the way in which Ca2+ buffering can affect both the amplitude and existence of Ca2+ oscillations.
John Wagner is currently with the Functional Genomics and Systems Biology Group of the IBM T. J. Watson Research Center. In the mid-1990s he was a research assistant with Joel Keizer.

Joel Keizer was a long-time faculty member at the University of California, Davis. A UC Davis website states
Joel’s scientific legacy encompassed several fields. Joel originally trained as a chemist at the University of Oregon under Terrell Hill, where he received his doctorate in theoretical physical chemistry, and did postdoctoral work in chemical physics at the Battelle Institute in Columbus, Ohio. He began his career in 1971 at the University of California, Davis, as an assistant professor of chemistry. He pioneered an approach to the thermodynamics of non-equilibrium steady states, which culminated in the monograph, Statistical Thermodynamics of Nonequilibrium Processes in 1987. By this time, he had over 60 journal publications to his credit.

In the 1980s, Joel gradually shifted his research program and focused his powerful intellect on problems within the biological sciences, first on mathematical models of insulin secretion, and later on intracellular calcium oscillations and diffusion. He subsequently transferred his appointment to the Division of Biological Sciences, where both theoreticians and empiricists respected and admired Joel for his strong modeling work and his insightful collaborations with experimental biologists.
I never met Joel Keizer, but I did know a couple of his collaborators, John Rinzel and Arthur Sherman, both at NIH when I was there in the early 1990s. They worked on bursting in pancreatic beta-cells, and published some influential papers with Keizer (for example, see Sherman, Rinzel, and Keizer (1988) “Emergence of Organized Bursting in Clusters of Pancreatic Beta-Cells by Channel Sharing,” Biophysical Journal, Volume 54, Pages 411–425).

Finally, the paper by Allbritton et al. cited in the Wagner and Keizer paper is:
Allbritton, Meyer, Stryer (1992) “Range of Messenger Action of Calcium-Ion and Inositol 1,4,5-Trisphosphate,” Science, Volume 258, Pages 1812–1815.

Friday, April 15, 2011

Superconductivity

This month marks the hundredth anniversary of the discovery of superconductivity. An article in the magazine IEEE Spectrum states
On April 8, 1911, physicist Heike Kamerlingh Onnes of Leiden University used an intricate glass cryostat to cool mercury down to just a few degrees above absolute zero. Then he scribbled down three words that ultimately marked the discovery of an entirely new physical phenomenon.

The phrase, jotted more than halfway down the page of a messy lab notebook, didn’t really match the occasion. What Kamerlingh Onnes wrote was “Mercury practically zero,” or, according to a more literal translation, “Quick[silver] near-enough dull.” But what he saw was the first evidence of superconductivity, the ability of some substances to conduct electricity with no resistance at all.
You can learn more about this landmark event in a Physics Today September 2010 article “The Discovery of Superconductivity,” by Dirk van Delft and Peter Kes, and the article “Superconductivity’s Smorgasbord of Insights: A Movable Feast,” in the April 8, 2011 issue of Science by Adrian Cho. Also, see the biography of Onnes on Nobelprize.org.

The Quest for Absolute Zero,
by Kurt Mendelssohn.
One of my favorite books is The Quest for Absolute Zero, by Kurt Mendelssohn. He starts his tale in 1877 with the liquefaction of oxygen and then tells the subsequent history of low temperature physics, including the fascinating story of how Onnes liquefied helium and his early superconductivity studies. According to Mendelssohn, the reason mercury was used for the first experiment is that it could be purified:
There was one other metal which might be obtained in an even purer state than gold, and that was mercury. Being a liquid at room temperatures, it can be distilled and re-distilled again and again until an extreme degree of purity is reached. The results were communicated to the Netherlands Royal Academy on the 28th April 1911, when Onnes reported that mercury, as well as a sample of very pure gold, had, at helium temperature, reached resistivities so low that his instruments had failed to detect them. He was particularly intrigued with the behavior of the mercury sample because it had still a fairly high resistance at liquid hydrogen temperatures and could also be recorded at the boiling point of liquid helium but then vanished at lower temperatures.
Russ Hobbie and I discuss superconductivity in Section 8.9 (Detection of Weak Magnetic Fields) of the 4th edition of Intermediate Physics for Medicine and Biology.
The [magnetic] signals from the body are weaker, and their measurement requires higher sensitivity and often special techniques to reduce noise. Hämäläinen et al. (1993) present a detailed discussion of the instrumentation problems. Sensitive detectors are constructed from superconducting materials. Some compounds, when cooled below a certain critical temperature, undergo a sudden transition and their electrical resistance falls to zero. A current in a loop of superconducting wire persists for as long as the wire is maintained in the superconducting state. The reason there is a superconducting state is a well-understood quantum-mechanical effect that we cannot go into here. It is due to the cooperative motion of many electrons in the superconductor [Eisberg and Resnick (1985), Sec. 14.1; Clarke (1994)].
We then go on to discuss superconducting quantum interference device (SQUID) magnetometers, which are often used to measure the small magnetic fields produced by the brain or the heart. Although not discussed in our book, superconductivity is also used in many MRI machines to produce the strong static magnetic field without losses due to heating of a copper coil.

The citations in the quote from our book are to:
Clarke, J. (1994) “SQUIDs,” Scientific American, Volume 271, Pages 46–53.

Eisberg, R., and R. Resnick (1985) Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles, 2nd ed. New York, Wiley.

Hämäläinen, M., R. Hari, R. J. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa (1993) “Magnetoencephalography—Theory, Instrumentation, and Applications to Noninvasive Studies of the Working Human Brain.” Reviews of Modern Physics, Volume 65, Pages 413–497.

Friday, April 8, 2011

Point/Counterpoint Revisited

In one of my first entries in this blog, I introduced readers to the Point/Counterpoint articles in the journal Medical Physics. I enjoy these articles immensely, and they provide valuable insight into important and controversial questions in the medical physics field. Each issue of Medical Physics contains one Point/Counterpoint, in which a proposition is stated and two prominent medical physicists debate it, one for and one against. They each have “opening statements” and then provide a “rebuttal” to their opponent’s claims. The articles make for more lively reading than a typical scientific paper full of jargon and technical content.

The September 2010 issue of Medical Physics contains a Point/Counterpoint article titled “Ultrasonography is Soon Likely to Become a Viable Alternative to X-Ray Mammography for Breast Cancer Screening.” Arguing for the proposition is Carri Glide-Hurst, a Senior Associate Physicist at Henry Ford Hospital in Detroit. Her opponent is Andrew Maidment, Associate Professor of Radiology at the University of Pennsylvania in Philadelphia. I am in favor of the proposition, for the silly reason that I always root for the home team. (Oakland University, where I work, is about 20 miles north of Detroit, and our medical physics program has several adjunct faculty at Henry Ford Hospital.)

The 4th edition of Intermediate Physics for Medicine and Biology provides much of the scientific background needed to understand this debate. Russ Hobbie and I added a new chapter to the 4th edition that describes ultrasound and its applications to medical imaging (Chapter 13, Sound and Ultrasound). After introducing the wave equation, we describe the decibel intensity scale, attenuation, medical uses of ultrasound, and the Doppler effect. Two chapters are dedicated to understanding x rays and x-ray imaging. In Chapter 15 (Interaction of Photons and Charged Particles with Matter) we analyze the basic mechanisms by which an x-ray photon affects tissue, including the photoelectric effect, Compton scattering, and pair production. Chapter 16 (Medical Use of X Rays) focuses on applications, including a section dedicated entirely to mammography.

Magnetic resonance imaging is another modality that produces no ionizing radiation. It is described in Chapter 18 of Intermediate Physics for Medicine and Biology. However, MRI is expensive, takes a long time, and cannot be used in some patients, such as those with surgical clips. Therefore the American Cancer Society recommends MRI only for a small group of patients.

Which side wins the debate? It’s always hard to say. I’m sure the moderator, Colin Orton, chooses only those questions that do not have obvious answers. Glide-Hurst concludes her opening statement by arguing
Ultrasound poses a practical and affordable solution for screening younger women with dense breasts, pregnant females, and those who do not meet the risk level requirements of breast MRI screening. Overall, whole-breast ultrasound is advantageous because it is volumetric, noninvasive, and nonionizing, and the current literature supports the routine implementation for breast cancer screening, particularly for women with dense breasts.
Maidment ends his opening statement by stating
Since ultrasound can distinguish solid tumors from fluid-filled cysts, it has a clear clinical role as a diagnostic tool in breast imaging. However, ultrasound does not appear useful for routine screening because of lower sensitivity and specificity compared to mammography, the suboptimal imaging of microcalcifications with ultrasonography, and the projected costs.
All things considered, I do know who wins the debate. The winner is the reader, who witnesses two experts carefully weighing the evidence, analyzing the physics, and predicting future trends. I encourage any student reading Intermediate Physics for Medicine and Biology to also browse through recent issues of Medical Physics. If, like me, you’re often short on time, skip the articles and just read Point/Counterpoint. You won’t regret it.

Friday, April 1, 2011

Fukushima Nuclear Reactors

Because of the scary events at the Fukushima nuclear reactors in Japan, the health hazards of radiation are in the news a lot. One place I turn to for authoritative information is the Health Physics Society. Here is what their website says:
As you are well aware, the Japanese experienced the worst earthquake in their history, followed by a devastating tsunami. These natural disasters have had a serious impact on several Japanese nuclear reactors, principally those at the Fukushima Daiichi site. The Health Physics Society is concerned about radiation exposures associated with these reactor problems and desires to keep our members and the concerned public advised on current events associated with the Japanese nuclear plants.

For information on the potential for radiation from the Japanese Nuclear Plants reaching the United States, see this Health Physics Society Ask the Experts FAQ. For information on radiation particle effects on food, read this Bloomberg FAQ.

Details of the status of the reactors at Fukushima are available in a document issued by the Japan Atomic Industrial Forum that is provided here. We will be updating this news item periodically to provide current information.
The Health Physics Society links to an interesting YouTube video: an interview with John Boice of Vanderbilt University. He says “the fear is out of proportion to the risk,” and claims this event is nowhere near the situation in the Chernobyl disaster. (Warning: The interview was on March 20, and events seem to change daily.)


The Health Physics Society website also links to the following statement:
RADIATION RISKS TO HEALTH
A Joint Statement from the American Association of Clinical Endocrinologists, the American Thyroid Association, The Endocrine Society, and the Society of Nuclear Medicine
March 18, 2011

The recent nuclear reactor accident in Japan due to the earthquake and tsunami has raised fears of radiation exposure to populations in North America from the potential plume of radioactivity crossing the Pacific Ocean. The principal radiation source of concern is radioactive iodine including iodine-131, a radioactive isotope that presents a special risk to health because iodine is concentrated in the thyroid gland and exposure of the thyroid to high levels of radioactive iodine may lead to development of thyroid nodules and thyroid cancer years later. During the Chernobyl nuclear plant accident in 1986, people in the surrounding region were exposed to radioactive iodine principally from intake of food and milk from contaminated farmlands. As demonstrated by the Chernobyl experience, pregnant women, fetuses, infants and children are at the highest risk for developing thyroid cancer whereas adults over age 20 are at negligible risk.

Radioiodine uptake by the thyroid can be blocked by taking potassium iodide (KI) pills or solution, most importantly in these sensitive populations. However, KI should not be taken in the absence of a clear risk of exposure to a potentially dangerous level of radioactive iodine because potassium iodide can cause allergic reactions, skin rashes, salivary gland inflammation, hyperthyroidism or hypothyroidism in a small percentage of people. Since radioactive iodine decays rapidly, current estimates indicate there will not be a hazardous level of radiation reaching the United States from this accident. When an exposure does warrant KI to be taken, it should be taken as directed by physicians or public health authorities until the risk for significant exposure to radioactive iodine dissipates, but probably for no more than 1-2 weeks. With radiation accidents, the greatest risk is to populations close to the radiation source. While some radiation may be detected in the United States and its territories in the Pacific as a result of this accident, current estimates indicate that radiation amounts will be little above baseline atmospheric levels and will not be harmful to the thyroid gland or general health.

We discourage individuals needlessly purchasing or hoarding of KI in the United States. Moreover, since there is not a radiation emergency in the United States or its territories, we do not support the ingestion of KI prophylaxis at this time. Our professional societies will continue to monitor potential risks to health from this accident and will issue amended advisories as warranted.
News sources have been reporting that higher-than-normal radiation levels were detected in the United States. These observations say more about our ability to detect small amounts of radiation than about any risk to Americans. People living in the United States are at no risk of health hazards from radiation exposure caused by the Fukushima reactors.

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the risk of radiation in Section 13 of Chapter 16 (Medical Use of X Rays). We introduce the unit of the sievert (Sv), one of the most important units used when discussing radiation risk.
Both the sievert and the gray are J kg⁻¹. Different names are used to emphasize the fact that they are quite different quantities. One is physical, and the other includes biological effects. An older unit … is the rem. 100 rem = 1 Sv.
We then analyze the natural background dose, which is about 3 mSv per year, and which arises from several sources, including cosmic radiation, terrestrial rocks, and inhalation of radon gas.
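A quick numeric sanity check of these unit relationships (my own sketch, not from the book):

```python
# Unit relationships from the text: 100 rem = 1 Sv (both Sv and Gy are J/kg).
SV_PER_REM = 0.01  # 1 rem = 0.01 Sv

background_dose_sv = 3e-3  # ~3 mSv/yr natural background, per the text

# Express the annual background dose in the older unit, the rem
background_dose_rem = background_dose_sv / SV_PER_REM
print(f"{background_dose_sv * 1e3:.0f} mSv/yr = {background_dose_rem:.1f} rem/yr")
```

So the ~3 mSv annual background dose corresponds to 0.3 rem per year.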

If you prefer learning from a video, watch Understanding the Reactor Meltdown in Fukushima, Japan from a Physics Perspective on YouTube.


Time will tell if this event turns into a full-scale disaster. At the moment, it is a serious situation, but does not appear to be a serious health hazard, except perhaps for the workers trying to repair the power plants.

Friday, March 25, 2011

Maxwell Equation Sesquicentennial

A Treatise on Electricity and Magnetism, by James Clerk Maxwell, superimposed on Intermediate Physics for Medicine and Biology.
A Treatise on
Electricity and Magnetism,
by James Clerk Maxwell.
I am a big James Clerk Maxwell fan. In fact, I have made my living applying Maxwell’s equations to biology and medicine. Yes, I own one of those tee shirts with Maxwell’s equations written on it. I keep a copy of Maxwell’s A Treatise on Electricity and Magnetism in my office (although I have never read it in its entirety…Oh how I wish Maxwell had access to modern vector notation!). I have read The Maxwellians (outstanding) and The Man Who Changed Everything: The Life of James Clerk Maxwell (good). So, this month I am celebrating with gusto the sesquicentennial of the publication of Maxwell’s famous equations. The March 17 issue of the journal Nature has a special section containing four articles about Maxwell’s equations. In an editorial titled “A Bold Unifying Leap” (Volume 471, Page 265) the editor writes
In this issue we celebrate the first expression of those equations by Scottish physicist Maxwell in the Philosophical Magazine 150 years ago. There he drew together several strands of understanding about the behaviour of electricity, of magnetism, of light, and of the ways in which these fundamental aspects of nature behave in matter. As Albert Einstein remarked, “so bold was the leap” of this work that it took decades for physicists to grasp its full significance. And although it was a wonderful expression of science at its purest, it was forged in the thoroughly practical culture of intellects at that time.
Russ Hobbie and I mention Maxwell’s equations in the 4th edition of Intermediate Physics for Medicine and Biology. We added a new homework problem to the 4th edition in Chapter 8 (Biomagnetism).
Problem 22 Write down in differential form (a) the Faraday induction law, (b) Ampere’s law including the displacement current term, (c) Gauss’s law, and (d) Eq. 8.7. … These four equations together constitute “Maxwell’s equations.” Together with the Lorentz force law (Eq. 8.2), Maxwell’s equations summarize all of electricity and magnetism.
All four of Maxwell’s equations are discussed in our book. Section 6.3 is dedicated to Gauss’s law, governing the electric field produced by a collection of charges, and we analyze the usual suspects: a line of charge and a charged sheet. Ampere’s law appears in Section 8.2 (The Magnetic Field of a Moving Charge or Current), and—in one of my favorite homework problems—we show in Problem 13 of Chapter 8 how “one can obtain a very different physical picture of the source of a magnetic field using the Biot Savart law than one gets using Ampere’s law, even though the field is the same.” Faraday’s law is presented in Section 8.6 on Electromagnetic Induction, followed by a discussion of magnetic stimulation of the brain. Even Gauss’s law for a magnetic field (Eq. 8.7, stating that the magnetic field has no divergence) is introduced. Maxwell’s great insight was to add the displacement current term to Ampere’s law. We show how the charging of a capacitor implies the existence of this additional term on page 207, and explore its role in biomagnetism (slight).
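For reference, the four equations that Problem 22 assembles can be collected in differential form (standard SI notation; the grouping and symbols here are mine, not necessarily the book’s):

```latex
\begin{align}
\nabla \cdot \mathbf{E} &= \rho/\epsilon_0 && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0 && \text{(Eq.~8.7: no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\,\partial \mathbf{B}/\partial t && \text{(Faraday's law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \epsilon_0\,\partial \mathbf{E}/\partial t && \text{(Ampere's law with displacement current)}
\end{align}
```

The last term in the final equation is the displacement current that Maxwell added.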

The Feynman Lectures on Physics, by Richard Feynman, superimposed on Intermediate Physics for Medicine and Biology.
The Feynman Lectures on Physics,
by Richard Feynman.
Russ and I never analyze what may be the greatest prediction of Maxwell’s equations: the wave nature of light. We state in Section 14.1 that “the velocity of light traveling in a vacuum is given by electromagnetic theory as c = 1/√(ε0 μ0)”, but we never derive this result from Maxwell’s equations. Many of the applications of electromagnetic waves—such as wave guides, antennas, diffraction, radiation, and all of optics—are barely mentioned, if mentioned at all, in our text. For those who want to learn these topics (and all students of physics should want to learn these topics), I suggest Griffiths’s Introduction to Electrodynamics (undergraduate) or Jackson’s Classical Electrodynamics (graduate). Richard Feynman introduces Maxwell’s equations in his celebrated book The Feynman Lectures on Physics. In Chapter 18 of Volume 2, he writes
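The quoted relation c = 1/√(ε0 μ0) is easy to check numerically; this little sketch (my own, using CODATA constant values) recovers the speed of light:

```python
import math

# CODATA values for the vacuum permittivity and permeability
epsilon_0 = 8.8541878128e-12  # F/m
mu_0 = 1.25663706212e-6       # H/m (approximately 4*pi*1e-7)

# Electromagnetic theory: the speed of light in vacuum
c = 1.0 / math.sqrt(epsilon_0 * mu_0)
print(f"c = {c:.4e} m/s")  # close to the defined value 2.99792458e8 m/s
```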
It was not customary in Maxwell’s time to think in terms of abstract fields. Maxwell discussed his ideas in terms of a model in which the vacuum was like an elastic solid. He also tried to explain the meaning of his new equation in terms of the mechanical model. There was much reluctance to accept his theory, first because of the model, and second because there was at first no experimental justification. Today, we understand better that what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false. This is answered by doing experiments, and untold numbers of experiments have confirmed Maxwell’s equations. If we take away the scaffolding he used to build it, we find that Maxwell’s beautiful edifice stands on its own. He brought together all of the laws of electricity and magnetism and made one complete and beautiful theory.
Anyone with a historical bent may want to read Maxwell’s original papers and accompanying commentary in Maxwell on the Electromagnetic Field: A Guided Study, by Thomas Simpson. The book contains a detailed analysis of Maxwell’s papers, including “On Physical Lines of Force,” which is the publication we celebrate this month. Simpson’s book is the best place I know of to learn about the “scaffolding” Maxwell used to build his theory.

I will close with one of my favorite quotes, again from The Feynman Lectures. At the end of his first chapter introducing electromagnetism, Feynman writes
From a long view of the history of mankind—seen from, say, ten thousand years from now—there can be little doubt that the most significant event of the 19th century will be judged as Maxwell’s discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade.

Friday, March 18, 2011

Murderous Microwaves

I have written previously on the topic of cell phone electromagnetic radiation and cancer, but the issue remains a concern among the general public. Kenneth Foster reviewed three new books about the risks associated with cell phones in the March issue of IEEE Spectrum (disclaimer: I have not read any of these books):
Disconnect: The Truth About Cell Phone Radiation, What the Industry Has Done to Hide It, and How to Protect Your Family, by Devra Davis;

Zapped: Why Your Cell Phone Shouldn’t Be Your Alarm Clock and 1268 Ways to Outsmart the Hazards of Electronic Pollution, by Ann Louise Gittleman;

Dirty Electricity: Electrification and the Diseases of Civilization, by Samuel Milham.
Foster writes
Do you feel zapped, disconnected, electronically polluted by electromagnetic fields in your homes and workplace? Are you fearful of your electricity? These three books will feed your fears.

But are such fears justified? Public debates have been going on for more than a century about the possible health hazards of electromagnetic fields from power lines and radio-frequency energy from broadcast transmitters—and now cellphones. At the same time, health agencies have repeatedly reviewed the scientific literature and found no clear evidence of a problem. How can these totally different perspectives be reconciled?
Foster ultimately concludes that these perspectives can’t be reconciled. He counters these alarmist books with exhaustive scientific studies, such as Exposure to High Frequency Electromagnetic Fields, Biological Effects and Health Consequences (100 kHz–300 GHz), Edited by Paolo Vecchia et al., International Commission on Non-Ionizing Radiation Protection, 2009; and Risk Analysis of Human Exposure to Electromagnetic Fields, by Zenon Sienkiewicz, Joachim Schüz, Aslak Harbo Poulsen, and Elisabeth Cardis, report of the European Health Risk Assessment Network on Electromagnetic Fields Exposure, 2010. The first report concludes that
In the last few years the epidemiologic evidence on mobile phone use and risk of brain and other tumors of the head has grown considerably. In our opinion, overall the studies published to date do not demonstrate a raised risk within approximately ten years of use for any tumor of the brain or any other head tumor. However, some key methodologic problems remain—for example, selective non-response and exposure misclassification. Despite these methodologic shortcomings and the still limited data on long latency and long-term use, the available data do not suggest a causal association between mobile phone use and fast-growing tumors such as malignant glioma in adults, at least those tumors with short induction periods. For slow-growing tumors such as meningioma and acoustic neuroma, as well as for glioma among long-term users, the absence of associations reported thus far is less conclusive because the current observation period is still too short. Currently data are completely lacking on the potential carcinogenic effect of exposures in childhood and adolescence.
In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I examine this topic in Section 9.10, Possible Effects of Weak External Electric and Magnetic Fields. We focus on power line (60 Hz) fields (another story….), but many of the same conclusions apply to cell phone (1 GHz) fields. A key factor is the energy of a microwave photon.
Radiated energy is in the form of discrete packets or photons, whose energy is related to the frequency of oscillation of the fields. The energy of each photon is E = hν, where h is Planck’s constant and ν the frequency. At room temperature, the energy of random thermal motion is kBT = 4 × 10−21 J. At 60 Hz, the energy in each photon is much smaller: 4 × 10−32 J. At 100 MHz it is 7 × 10−26 J.
Therefore, cell phone frequencies correspond to photon energies that are thousands of times smaller than thermal energies. Moreover, the energy required to break chemical bonds is hundreds of times greater than thermal energies. If cancer is caused by the breaking of bonds in DNA by photons, then cell phone photons are one million times too weak to cause cancer. If enough photons were present, the tissue temperature could rise, but there is no evidence of significant heating of the brain by cell phone fields; the fields are not that strong. We are left with no plausible mechanism connecting microwaves and cancer.
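The photon-energy comparison in the quoted passage can be reproduced with a few lines (my own sketch; the constant values are standard):

```python
# Compare photon energies E = h*nu to the thermal energy kT, following the text.
h = 6.626e-34   # Planck's constant, J s
kB = 1.381e-23  # Boltzmann constant, J/K
T = 293.0       # room temperature, K

kT = kB * T  # ~4e-21 J, matching the text's value for random thermal motion
for name, freq in [("power line (60 Hz)", 60.0),
                   ("FM radio (100 MHz)", 1e8),
                   ("cell phone (1 GHz)", 1e9)]:
    E = h * freq
    print(f"{name}: E = {E:.1e} J, kT is {kT / E:.0f} times larger")
```

Even at cell phone frequencies, a single photon carries thousands of times less energy than a typical thermal fluctuation.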

With weak epidemiological evidence and no mechanism, I remain a hard-boiled skeptic. In fact, my only reservation with Foster’s review is that his criticisms may have been too tame. My views are closer to physicist Bob Park, who is a vocal (and often sarcastic) critic of those who insist that cell phones cause cancer. Nevertheless, even Foster’s mild criticisms triggered a heated debate in the comments section following his review. Interestingly, most of the comments make emotional arguments, not scientific ones, indicating the need for a better understanding by the public of the basic physics of how electromagnetic fields interact with tissue. (At this point, I again plug our book, Intermediate Physics for Medicine and Biology, as the best source to learn the physics—although I admit on this one claim I may be slightly biased.)

So who should you believe in this debate? How about the National Cancer Institute? It is hard to think of a more unbiased or authoritative source of information. Their fact sheet provides a science-based analysis of the issue. But Ken Foster is a pretty reliable source of information too. He has spent nearly 40 years studying electricity and magnetism, much of that time analyzing the biological effects of electromagnetic fields. His 1989 paper “Dielectric-Properties of Tissues and Biological Materials: A Critical Review” (Critical Reviews in Biomedical Engineering, Volume 17, Pages 25–104), written with Herman Schwan, is a highly-cited classic. Foster’s article “Risk Management: Science and the Precautionary Principle” (Science, Volume 288, Pages 979–981, 2000) provides useful insight into the role of scientific evidence in evaluating risk. Russ and I cite several of Foster’s papers in the 4th edition of Intermediate Physics for Medicine and Biology, including
Foster, K. R. (1996) “Electromagnetic Field Effects and Mechanisms: In Search of an Anchor,” IEEE Engineering in Medicine and Biology, Volume 15, Pages 50–56.

Foster, K. R., and H. P. Schwan (1996) “Dielectric Properties of Tissues.” In C. Polk and E. Postow, eds. Handbook of Biological Effects of Electromagnetic Fields, Boca Raton, FL, CRC Press, Pages 25–102.

Moulder, J. E., and K. R. Foster (1995) “Biological Effects of Power-Frequency Fields as They Relate to Carcinogenesis,” Proceedings of the Society of Experimental Biology and Medicine, Volume 209, Pages 309–323.

Moulder, J. E., and K. R. Foster (1999) “Is There a Link Between Power-Frequency Electric Fields and Cancer?” IEEE Engineering in Medicine and Biology Magazine, Volume 18, Pages 109–116.
(Note: when preparing this blog entry, I found that we have the title to the last paper incorrect in our book. It should be "Is There a Link Between Exposure to Power-Frequency Electric Fields and Cancer?" I will correct that in the erratum, found at the book website.)

In conclusion, I don’t believe the evidence supports the hypothesis that cell phones cause cancer. Give me some convincing new evidence or a plausible mechanism, and I’ll reconsider.