Friday, July 27, 2012

Frank Netter, Medical Illustrator

The CIBA Collection of Medical Illustrations, by Frank Netter, with Intermediate Physics for Medicine and Biology.
When I started graduate school at Vanderbilt University, I had a strong background in physics but was weak in biology and medicine. One of the sources I used to learn some anatomy was the eight-volume CIBA Collection of Medical Illustrations by Frank Netter. I dearly loved browsing through his illustrations. Because of my interest in cardiac electrophysiology, I was particularly fond of Volume 5 about the heart. Some of his illustrations can be found online.

At http://www.netterimages.com you can learn much about Netter and his work. Some of Netter’s books have recently been updated and reissued. A video (below) about this reissue includes an interview with Netter, showing him at work on his drawings. Netter’s Atlas of Neuroscience was updated by David Felten, the Associate Dean for Research at the Oakland University William Beaumont School of Medicine. I often see students walking around the OU campus carrying Netter’s Atlas of Human Anatomy. You can even buy Netter flash cards.

The Netter Collection of Medical Illustrations.
The CIBA “green books” relaunched. 

The Society of Illustrators Hall of Fame contains an entry about Netter.
Frank H. Netter (1906–1991) was born in New York and grew up during the Golden Age of Illustration. He studied at the National Academy of Design, and later at the Art Students League. But his mother wanted him to be a doctor, and when she died suddenly, he resolved to give up art, and study medicine as she had wished. He graduated from City College of New York, BS 1927, and New York University Medical College, MD 1931. But the demand for his pictures far exceeded the demand for his surgery.
More about Netter’s life is described in his New York Times obituary. Also, see the article “Frank H. Netter, Medicine's Michelangelo: An Editorial Perspective,” by Rita Washko (Science Editor, Volume 29, Pages 16–18, 2006).

Readers of the 4th edition of Intermediate Physics for Medicine and Biology who need to brush up on the anatomy should take a look at Netter’s books.

Finally, listen to his daughter talk about Frank Netter's life and work.

Medicine’s Michelangelo: The Life and Art of Frank H. Netter, M.D. 

Friday, July 20, 2012

A Mechanism for Anisotropic Reentry in Electrically Active Tissue

1992 was a good year for me. My wife and I, who had been married for seven years, had two young daughters, and we had just bought a house in Kensington, Maryland. While working at the National Institutes of Health I published eight papers in 1992, mostly about magnetic stimulation of nerves. My favorite paper from that year, however, was one about the heart: “A Mechanism for Anisotropic Reentry in Electrically Active Tissue” (Journal of Cardiovascular Electrophysiology, Volume 3, Pages 558–566). The lead author was Joshua Saypol, an engineering undergraduate at Brown University who would come home to Maryland each summer and work at NIH. Josh was a big, strong fellow, and handy to have around when we had heavy things to move. But he was also smart and hard-working, and we ended up publishing three papers together. The cardiac paper was the last of these, and the least cited (indeed, according to the Web of Science the paper has not been cited by anyone other than me for the last ten years). You can get the gist of it from the abstract.
Introduction: Numerical simulations of wavefront propagation were performed using a two-dimensional sheet of tissue with different anisotropy ratios in the intracellular and extracellular spaces.
Methods and Results: The tissue was represented by the bidomain model, and the active properties of the membrane were described by the Hodgkin-Huxley equations. Two successive stimuli, delivered through a single point electrode, resulted in the formation of a reentrant wavefront when the second stimulus was delivered during the vulnerable period of the first wavefront.
Conclusion: The mechanism for the development of reentry was that the bidomain tissue responded to point cathodal stimulation by depolarizing the tissue under the electrode in the direction perpendicular to the fiber axis, and hyperpolarizing the tissue in the direction parallel to the fiber axis. Such a distribution of depolarization and hyperpolarization modifies the refractory period of the action potential differently in each direction, resulting in block in the direction perpendicular to the fiber axis and leading to reentry and the formation of stable, rotating wavefronts.
The paper arises from two previous lines of research. First is the calculation of the transmembrane potential induced by a point stimulus, performed by Nestor Sepulveda, John Wikswo and me (“Current Injection Into a Two Dimensional Bidomain,” Biophysical Journal, Volume 55, Pages 987–999, 1989), which I discussed in a previous blog entry. We found that cardiac tissue is depolarized (positive transmembrane potential) under a cathode, but hyperpolarized (negative transmembrane potential) a millimeter or two from the cathode in each direction along the cardiac fibers (at what are nowadays called “virtual anodes”). That paper used a passive steady-state membrane, but in a subsequent paper I derived an algorithm to solve the bidomain equations including time dependence and an active model for the membrane kinetics (“Action Potential Propagation in a Thick Strand of Cardiac Muscle,” Circulation Research, Volume 68, Pages 162–173, 1991). Having this algorithm, I decided to investigate what effect the virtual anodes had on propagation following an extracellular stimulus. Both of these papers were based on the bidomain model, which is a mathematical model of the electrical properties of cardiac tissue that Russ Hobbie and I describe in Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology.

Another reason I like the paper with Josh so much is that we collaborated with Arthur Winfree while writing it. I’ve written about our work with Winfree previously in this blog. Art had made a prediction about the induction of “reentry” (a cardiac arrhythmia) following a premature stimulus (a stimulus applied to tissue near the end of its refractory period). It provided the ideal hypothesis to test with our model. I remember vividly Josh coming to me with plots of the transmembrane potential showing the first signs of reentry, and how even after our discussions with Art we didn’t quite believe what we were seeing. Winfree helped Josh and me publish a preliminary letter about our calculation in the International Journal of Bifurcation and Chaos (Volume 1, Pages 927–928, 1991), which was just starting up. It has been almost ten years now since Art passed away, and I still miss him.

Astute readers (and I’m sure ALL readers of this blog are astute) will notice something odd in the abstract quoted above: “the active properties of the membrane were described by the Hodgkin-Huxley equations” (my italics). What are the Hodgkin-Huxley equations—which describe a squid nerve axon action potential (see Chapter 6 of Intermediate Physics for Medicine and Biology)—doing in a paper about cardiac tissue? This is a legitimate criticism, and one the reviewers raised, but surprisingly it didn’t prove fatal for publication of the article (although I was asked to change the title to the generic “...Electrically Active Tissue,” and you won’t find the word “cardiac” in the abstract). I used the Hodgkin-Huxley model because I didn’t know any better at the time. (In the paper, we claimed that “we used the Hodgkin-Huxley model instead of a myocardial membrane model because of a limitation of computer resources,” and that might be part of the reason too.) Models more appropriate for cardiac tissue (such as the Beeler-Reuter model that I used in later publications) were more complicated, and I wasn’t familiar with them. Besides, Josh and I were most interested in generic properties of reentry induction that would probably not be too sensitive to the membrane model (or so we told ourselves). I wonder now why we didn’t use the generic FitzHugh-Nagumo model, but we didn’t. Never again would reviewers let me get away with using the Hodgkin-Huxley model for cardiac tissue (and rightly so), and I suspect it is one of the reasons the paper is rarely cited anymore.

Nevertheless, the paper did make an important contribution to our understanding of the induction of reentry, which is why I like it so much. It was the first paper to present the idea of regions of hyperpolarization shortening the refractory period, and thereby creating regions of excitable tissue through which wave fronts can propagate. We state this clearly in the discussion.
The crucial point is that a premature stimulus causes an unusually-shaped transmembrane potential distribution that produces a directionally-dependent change of the refractory period, thereby creating a necessary condition for conduction block in one direction.
We go into more detail in the results:
The depolarization wavefront is followed by a front of refractoriness. During the refractory period, the sodium channel inactivation gate (the h gate) opens slowly, while the potassium channel [activation gate] (the n gate) closes slowly; the tissue remains refractory until these two gates have recovered sufficiently. If a hyperpolarizing current is applied to the tissue during the refractory period, it will cause the h gate to open and the n gate to close more quickly than they normally would, thereby shortening the refractory period. Thus, when the tissue is stimulated, the refractory period is shortened in the area of hyperpolarization along the x axis [parallel to the fibers]. In the area along the y axis [perpendicular to the fibers] that is depolarized by the stimulus, on the other hand, the h and n gates move away from their resting values, and therefore the refractory period is lengthened. If the second stimulus is timed just right, it can take the tissue along the x direction out of the refractory period, while along the y direction the tissue remains unexcitable. Thus, the action potential elicited by the large depolarization directly below the electrode can propagate only in the x direction.
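The gate kinetics described in that passage are easy to illustrate numerically. The Python sketch below (not the bidomain code from the paper) uses the standard textbook Hodgkin-Huxley rate functions for the sodium inactivation (h) gate, clamps the membrane at rest and at a hyperpolarized potential, and measures how long the gate takes to reopen after a spike. The starting value h = 0.1 and the 10 mV hyperpolarization are illustrative choices, not values from the paper.

```python
import math

# Textbook Hodgkin-Huxley rate constants for the h gate (V in mV,
# measured as depolarization from rest; rates in 1/ms)
def alpha_h(V): return 0.07 * math.exp(-V / 20.0)
def beta_h(V):  return 1.0 / (math.exp((30.0 - V) / 10.0) + 1.0)

def time_to_recover(V, h0=0.1, h_target=0.5, dt=0.01):
    """Integrate dh/dt = alpha*(1-h) - beta*h at a clamped potential V;
    return the time (ms) for the h gate to reopen from h0 to h_target."""
    h, t = h0, 0.0
    while h < h_target:
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        t += dt
        if t > 1000.0:  # safety guard: gate never recovers at this V
            return float('inf')
    return t

t_rest = time_to_recover(V=0.0)     # held at rest
t_hyper = time_to_recover(V=-10.0)  # held 10 mV hyperpolarized
```

The hyperpolarized case recovers in roughly half the time, which is the refractory-period shortening the quoted passage invokes (the n gate behaves analogously).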
This idea influenced subsequent work by me (“Nonsustained Reentry Following Successive Stimulation of Cardiac Tissue Through a Unipolar Electrode,” Journal of Cardiovascular Electrophysiology, Volume 8, Pages 768–778, 1997) and others (Efimov et al., “Virtual Electrode-Induced Phase Singularity: A Basic Mechanism of Defibrillation Failure,” Circulation Research, Volume 82, Pages 918–925, 1998), and now, twenty years later, lies at the heart of the concept of “virtual electrodes” and their role during defibrillation (see, for instance: Efimov, Gray, and Roth, “Virtual Electrodes and Deexcitation: New Insights into Fibrillation Induction and Defibrillation,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 339–353, 2000). After I left NIH, Marc Lin, Wikswo and I experimentally confirmed this mechanism of reentry induction (“Quatrefoil Reentry in Myocardium: An Optical Imaging Study of the Induction Mechanisms,” Journal of Cardiovascular Electrophysiology, Volume 10, Pages 574–586, 1999).

The acknowledgments section of the paper brings back many memories.
Acknowledgments: We thank Art Winfree for his many ideas and suggestions, Peter Basser for his careful reading of the manuscript and Barry Bowman for his editorial assistance. The calculations were performed on the NIH Convex C240 computer. We thank the staff of the NIH computer center for their support.
First, of course, we mentioned Art’s contributions. Peter Basser, the inventor of MRI Diffusion Tensor Imaging, was a friend of mine at NIH, and we used to read each other’s papers before submission to a journal. Barry Bowman also worked at NIH. He was a former high school English teacher, and I’d always give him drafts of my papers for polishing. Much of what I know about writing English well I learned from him. I suspect my current laptop computer can calculate faster than the Convex C240 supercomputer could, but it was fairly powerful for the time. In 1992, Josh and I did our programming in FORTRAN. Some things never change.

Friday, July 13, 2012

Magnetic Characterization of Isolated Candidate Vertebrate Magnetoreceptor Cells

Big news this week in the field of magnetoreception. A paper titled “Magnetic Characterization of Isolated Candidate Vertebrate Magnetoreceptor Cells” by Stephan Eder and his colleagues was published online (“early edition”) in the Proceedings of the National Academy of Sciences. One commentator went so far as to suggest that these magnetoreceptors are “the biological equivalent of the elusive Higgs boson” (an exaggeration, but a catchy quote with a grain of truth). The abstract to the paper is given below.
Over the past 50 y, behavioral experiments have produced a large body of evidence for the existence of a magnetic sense in a wide range of animals. However, the underlying sensory physiology remains poorly understood due to the elusiveness of the magnetosensory structures. Here we present an effective method for isolating and characterizing potential magnetite-based magnetoreceptor cells. In essence, a rotating magnetic field is employed to visually identify, within a dissociated tissue preparation, cells that contain magnetic material by their rotational behavior. As a tissue of choice, we selected trout olfactory epithelium that has been previously suggested to host candidate magnetoreceptor cells. We were able to reproducibly detect magnetic cells and to determine their magnetic dipole moment. The obtained values (4 to 100 fA m²) greatly exceed previous estimates (0.5 fA m²). The magnetism of the cells is due to a μm-sized intracellular structure of iron-rich crystals, most likely single-domain magnetite. In confocal reflectance imaging, these produce bright reflective spots close to the cell membrane. The magnetic inclusions are found to be firmly coupled to the cell membrane, enabling a direct transduction of mechanical stress produced by magnetic torque acting on the cellular dipole in situ. Our results show that the magnetically identified cells clearly meet the physical requirements for a magnetoreceptor capable of rapidly detecting small changes in the external magnetic field. This would also explain interference of ac powerline magnetic fields with magnetoreception, as reported in cattle.
The PNAS published a highlight about the article.
Identification of cells that sense Earth’s magnetic field
Researchers have isolated magnetic cells thought to underlie certain animals’ ability to navigate by Earth’s magnetic field. Behavioral studies have long provided evidence for the existence of a magnetic sense, but the identity of the specialized cells that comprise this internal compass has remained elusive. Stephan Eder and colleagues isolated the putative magnetic field-sensing cells that line the trout’s nasal cavity, and which contain iron-rich deposits of the magnetic material called magnetite. The authors placed a suspension of trout nasal tissue under a light microscope, and identified magnetic cells by their rotational motion in the presence of a slowly rotating external magnetic field. After siphoning off the rotating cells to characterize them in greater detail, the authors discovered that each cell contained reflective, iron-rich magnetic particles that were anchored to the cell membrane. The authors also determined that the cells are about 100 times more sensitive to magnetic fields than previously estimated. The findings suggest that the cells are capable of detecting magnetic north as well as small changes in the external magnetic field, and could form the basis of an accurate magnetic sensory system, according to the authors.
Russ Hobbie and I discuss the role of magnetic materials in biology in our chapter about biomagnetism in the 4th edition of Intermediate Physics for Medicine and Biology. We included in our book a photograph of magnetosomes (intracellular magnetite particles) in magnetotactic bacteria. In the photo, the magnetosomes are each about 0.05 μm on a side, and about 20 particles form a line roughly 1 μm long. Eder et al., on the other hand, find magnetic inclusions that are more spherical, and roughly 1–2 μm across. A trout cell has a really big magnetic moment when it contains such a large inclusion, but less than one cell in a thousand responds to the magnetic field and therefore presumably contains one. For a magnetotactic bacterium to have the same magnetic moment, it would need to be packed solid with magnetite.
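These comparisons are simple to check on the back of an envelope. In the Python sketch below, the saturation magnetization of single-domain magnetite, Ms ≈ 4.8 × 10⁵ A/m, is a standard handbook value that I am assuming here (it does not appear in the paper's abstract), and the chain geometry comes from the photo described above.

```python
Ms = 4.8e5  # saturation magnetization of magnetite, A/m (assumed handbook value)

# A chain of 20 magnetosomes, each a 0.05-um cube, as in the IPMB photo
cube_edge = 0.05e-6                      # m
chain_moment = 20 * cube_edge**3 * Ms    # ~1e-15 A m^2, i.e. ~1 fA m^2

# Volume of solid magnetite needed to produce the largest trout-cell
# moment reported by Eder et al. (100 fA m^2)
m_trout = 100e-15                        # A m^2
volume = m_trout / Ms                    # m^3
edge = volume ** (1.0 / 3.0)             # equivalent cube edge, ~0.6 um
```

The bacterial chain comes out near 1 fA m², consistent with the earlier estimates the paper cites, while the trout moment requires roughly 0.6 μm of solid magnetite on a side, consistent with a μm-sized (and not fully magnetite) inclusion.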

I find the PNAS paper to be fascinating, and the method to detect individual cells using a rotating magnetic field is clever. However, in my opinion the last sentence of the abstract is a bit speculative, given that typical residential 60 Hz magnetic fields are 10,000 times smaller than the 2 mT fields used by Eder et al., and the frequency is almost 200 times higher. Granted the large magnetic moment makes the idea of powerline field detection intriguing, but that hypothesis is far from proven and I remain skeptical.

One of the coauthors on the PNAS paper is Joseph Kirschvink, whose work we discuss extensively in Section 9.10, Possible Effects of Weak External Electric and Magnetic Fields. Kirschvink is the Nico and Marilyn Van Wingen Professor of Geobiology at Caltech. He has developed several fascinating and controversial hypotheses, such as the Snowball Earth concept and the idea that a meteorite found in 1984 contains evidence of life on Mars (he collaborated with my PhD advisor, John Wikswo, to make magnetic field measurements on that meteorite). Kirschvink received the William Gilbert Award from the American Geophysical Union in 2011 for his work on geomagnetism. In the citation for this award, Benjamin Weiss of MIT wrote that “Joe represents everything we are looking for in a William Gilbert awardee. He is an ‘ideas man,’ a gadfly, working at the edge of the crowd while the crowd chases after him!” Kirschvink has also won Caltech’s Feynman Prize for Excellence in Teaching.

The PNAS paper has triggered an avalanche of press reports, including those in Science News, the International Science Times, Science Daily, Phys.org, Live Science, and Discover Magazine.

Friday, July 6, 2012

Women in Medical Physics

Last week in this blog, I discussed the medical physicist Rosalyn Yalow, who was the second woman to win the Nobel Prize in Physiology or Medicine (the first was biochemist Gerty Cori), and who developed, with Solomon Berson, the radioimmunoassay technique. Her story reminds us of the important contributions of women to medical physics. I am particularly interested in this topic because Oakland University recently was awarded an ADVANCE grant from the National Science Foundation, with the goal of increasing the participation and advancement of women in academic science and engineering careers. I am on the leadership team of this project, and we are working hard to improve the environment for female STEM (science, technology, engineering, and math) faculty.

Of course the real reason I support increasing opportunities for women in the sciences is that I am certain many of the readers of the 4th edition of Intermediate Physics for Medicine and Biology are female. Medical physics provides several role models for women. For instance, Aminollah Sabzevari published an article in the Science Creative Quarterly titled “Women in Medical Physics.” Sabzevari begins
Traditionally, physics has been a male-dominated occupation. However, throughout history there have been exceptional women who have risen above society’s restrictions and contributed greatly to the advancement of physics. Women have played an important role in the creation, advancement and application of medical physics. As a frontier science, medical physics is less likely to be bound by society’s norms and less subject to the inherent glass ceiling limiting female participation. Women such as Marie Curie, Harriet Brooks, and Rosalind Franklin helped break through that ceiling, and their contributions are worth observing.
Another notable female medical physicist is Edith Hinkley Quimby, who established the first measurements of safe levels of radiation. The American Association of Physicists in Medicine named the Edith H. Quimby Lifetime Achievement Award in her honor.

On a related note (though having little to do with medical or biological physics), I recently read a fascinating biography of Sophie Germain (1776–1831), who did fundamental work in number theory and elasticity.

Finally, in my mind the greatest female physicist of all time (yes, greater than Marie Curie) is Lise Meitner, who first discovered nuclear fission. A great place to learn more about her life and work is Richard Rhodes’ masterpiece The Making of the Atomic Bomb.

One characteristic these women have in common is that they overcame great obstacles in order to become scientists. Their tenacity and determination inspire us all.

Friday, June 29, 2012

Rosalyn Yalow and the Radioimmunoassay

The radioimmunoassay is a sensitive technique for measuring tiny amounts of biologically important molecules, such as the hormone insulin in the blood. The basic idea is to tag insulin with a radioisotope such as I-125, and mix it with antibodies for insulin. Then, add to this mix the patient’s blood. The insulin in the blood competes with the tagged insulin for binding to the antibodies. Next, remove the antibodies and their bound insulin, leaving just the free insulin in the supernatant. The radioactivity of the supernatant provides a way to determine the concentration of insulin in the blood.
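The competition at the heart of the assay can be captured with a toy equilibrium model. The Python sketch below is an illustration of the principle only, not an actual assay calibration: it assumes a fixed number of antibody binding sites shared between tagged and unlabeled insulin in proportion to their concentrations (equal affinities), and all concentrations are in arbitrary units.

```python
def free_tracer_fraction(unlabeled, tracer=1.0, sites=0.5):
    """Toy competitive-binding model. A limited pool of antibody
    'sites' is divided between radio-tagged and unlabeled insulin in
    proportion to their concentrations; the function returns the
    fraction of tagged insulin left free (and hence counted in the
    supernatant). All quantities are in arbitrary units."""
    bound_tracer = sites * tracer / (tracer + unlabeled)
    return 1.0 - bound_tracer / tracer
```

The more insulin the patient's blood contains, the more tagged insulin is displaced from the antibodies, so the supernatant counts rise monotonically with the unknown concentration; comparing the counts against a standard curve then yields the insulin level.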

Russ Hobbie and I describe the basics of a radioimmunoassay in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology.
Four kinds of radioactivity measurements have proven useful in medicine. The first involves no administration of a radioactive substance to the patient. Rather, a sample from the patient (usually blood) is mixed with a radioactive substance in the laboratory, and the resulting chemical compounds are separated and counted. This is the basis of various competitive binding assays, such as those for measuring thyroid hormone and the availability of iron-binding sites. The most common competitive binding technique is called radioimmunoassay. A wide range of proteins are measured in this manner.
The radioimmunoassay was developed by Rosalyn Yalow and Solomon Berson. Yalow received the Nobel Prize in Physiology or Medicine for this work in 1977 (Berson had died by then, and the Nobel committee never awards a prize posthumously). For readers of Intermediate Physics for Medicine and Biology, Yalow is interesting because she started out as a physics major, getting her bachelor’s degree in physics from Hunter College, part of the City University of New York (CUNY) system. In 1945 she obtained a PhD in nuclear physics from the University of Illinois at Urbana-Champaign. Building on this physics background, and collaborating with Berson, in the 1950s she developed the radioimmunoassay. Interestingly, she and Berson refused to patent the method, wanting it to be freely available for use in medicine. Yalow died just over one year ago, at age 89.

You can learn more about Rosalyn Yalow and her inspiring life from her Nobel autobiography, her Physics Today obituary, her New York Times obituary, and the Jewish Women’s Archive. For those who prefer a video, interviews with her are available online. I have not read Eugene Straus’s book Rosalyn Yalow: Nobel Laureate: Her Life and Work in Science, but I am putting it on my list of things to do.

Television interview with Rosalyn Yalow.

I often like to finish a blog entry about a noteworthy scientist with their own words. Below are the opening paragraphs of Yalow’s Nobel Prize Lecture.
To primitive man the sky was wonderful, mysterious and awesome but he could not even dream of what was within the golden disk or silver points of light so far beyond his reach. The telescope, the spectroscope, the radiotelescope—all the tools and paraphernalia of modern science have acted as detailed probes to enable man to discover, to analyze and hence better to understand the inner contents and fine structure of these celestial objects.

Man himself is a mysterious object and the tools to probe his physiologic nature and function have developed only slowly through the millennia. Becquerel, the Curies and the Joliot-Curies with their discovery of natural and artificial radioactivity and Hevesy, who pioneered in the application of radioisotopes to the study of chemical processes, were the scientific progenitors of my career. For the past 30 years I have been committed to the development and application of radioisotopic methodology to analyze the fine structure of biologic systems.

From 1950 until his untimely death in 1972, Dr. Solomon Berson was joined with me in this scientific adventure and together we gave birth to and nurtured through its infancy radioimmunoassay, a powerful tool for determination of virtually any substance of biologic interest. Would that he were here to share this moment.

Friday, June 22, 2012

Mannitol

Elevated intracranial pressure often follows a traumatic brain injury. One way to lower pressure in the brain is to administer mannitol intravenously. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss mannitol.
The converse of this effect [removal of urea during renal dialysis] is to inject into the blood urea or mannitol, another molecule that does not readily cross the blood–brain barrier. This lowers the driving pressure of water within the blood, and water flows from the brain into the blood. Although the effects do not last long, this technique is sometimes used as an emergency treatment for cerebral edema.
Mannitol (C₆H₁₄O₆) has a similar size, structure, and chemical formula to glucose (C₆H₁₂O₆). It is metabolically inert in humans. A 10% solution consists of 100 g of mannitol per liter (1000 g) of water. Mannitol has a molecular weight of 182 g/mole, implying an osmolarity of (100 g/liter)/(182 g/mole) = 0.55 moles/liter, or 550 mosmole. Blood has an osmolarity of about 300 mosmole, so 10% mannitol is significantly hypertonic. Problem 3 in Chapter 5 asks you to calculate the osmotic pressure produced by mannitol.
Problem 3 Sometimes after trauma the brain becomes very swollen and distended with fluid, a condition known as cerebral edema. To reduce swelling, mannitol may be injected into the bloodstream. This reduces the driving force of water in the blood, and fluid flows from the brain into the blood. If 0.01 mol l−1 of mannitol is used, what will be the approximate osmotic pressure?
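For the curious (spoiler alert), a quick van't Hoff estimate (π = cRT, with body temperature assumed) gives the answer, and also reproduces the osmolarity arithmetic above:

```python
R = 8.314    # gas constant, J mol^-1 K^-1
T = 310.0    # body temperature, K

# Osmolarity of a 10% mannitol solution: 100 g per liter of water
molar_mass = 182.0             # g/mol
c_10pct = 100.0 / molar_mass   # mol/L, about 0.55 osmolar

# van't Hoff osmotic pressure for the 0.01 mol/L of Problem 3
c = 0.01 * 1000.0              # convert mol/L to mol/m^3
pi = c * R * T                 # Pa
pi_mmHg = pi / 133.3           # about 190 mmHg
```

Even a hundredth of a mole per liter of mannitol produces an osmotic pressure of roughly 26 kPa, large compared to typical capillary pressures, which is why the water moves so readily.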
Mannitol works best for short-term reduction of intracranial pressure. If it is administered continuously, eventually some of the mannitol may cross the blood-brain barrier, reducing its osmotic effect (Wakai et al., 2008). Then, when mannitol administration is discontinued, the mannitol that crossed into the brain can actually have the opposite effect of osmotically drawing water from the blood into the brain.

Interestingly, if you give a high enough concentration of mannitol, the osmotic shrinking of endothelial cells can disrupt the blood-brain barrier. Sometimes mannitol is used to ensure that certain drugs are able to pass from the blood into the brain.

Friday, June 15, 2012

The Heating of Metal Electrodes

Twenty years ago, I published a paper titled “The Heating of Metal Electrodes During Rapid-Rate Magnetic Stimulation: A Possible Safety Hazard” (Electroencephalography and Clinical Neurophysiology, Volume 85, Pages 116–123, 1992). My coauthors were Alvaro Pascual-Leone, Leonardo Cohen, and Mark Hallett, all working at the National Institutes of Health at that time. The paper motivated two new homework problems in Chapter 8 of the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 24 Suppose one is measuring the EEG when a time-dependent magnetic field is present (such as during magnetic stimulation). The EEG is measured using a disk electrode of radius a = 5 mm and thickness d = 1 mm, made of silver with conductivity σ = 63 × 10⁶ S m⁻¹. The magnetic field is uniform in space, is in a direction perpendicular to the plane of the electrode, and changes from zero to 1 T in 200 μs.
(a) Calculate the electric field and current density in the electrode due to Faraday induction.
(b) The rate of conversion of electrical energy to thermal energy per unit volume (Joule heating) is the product of the current density times the electric field. Calculate the rate of thermal energy production during the time the magnetic field is changing.
(c) Determine the total thermal energy change caused by the change of magnetic field.
(d) The specific heat of silver is 240 J kg⁻¹ °C⁻¹, and the density of silver is 10 500 kg m⁻³. Determine the temperature increase of the electrode due to Joule heating. The heating of metal electrodes can be a safety hazard during rapid (20 Hz) magnetic stimulation [Roth et al. (1992)].

Problem 25 Suppose that during rapid-rate magnetic stimulation, each stimulus pulse causes the temperature of a metal EEG electrode to increase by ΔT (see Prob. 24). The hot electrode then cools exponentially with a time constant τ (typically about 45 s). If N stimulation pulses are delivered starting at t = 0, with successive pulses separated by a time Δt, then the temperature at the end of the pulse train is T(N,Δt) = ΔT Σ e^(−iΔt/τ) [the sum goes from 0 to N−1]. Find a closed-form expression for T(N,Δt) using the summation formula for the geometric series: 1 + x + x^2 + ... + x^(n−1) = (1 − x^n)/(1 − x). Determine the limiting values of T(N,Δt) for NΔt ≪ τ and NΔt ≫ τ. [See Roth et al. (1992).]
Both problems walk you through parts of our paper. I like Problem 24 because it provides a nice example of Faraday’s law of induction, one of the topics discussed in Chapter 8 (Biomagnetism). Problem 25 could easily have been placed in Chapter 3 (Systems of Many Particles) because of its emphasis on thermal heating and Newton’s law of cooling.
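For readers who want to check their answers to Problem 24, here is a numerical sketch in Python. It assumes the setup stated in the problem: a spatially uniform dB/dt and the induced azimuthal field E(r) = (r/2) dB/dt inside the disk.

```python
import math

sigma = 63e6         # conductivity of silver, S/m
a = 5e-3             # electrode radius, m
d = 1e-3             # electrode thickness, m
rho = 10500.0        # density of silver, kg/m^3
c_p = 240.0          # specific heat of silver, J kg^-1 C^-1
dBdt = 1.0 / 200e-6  # field ramp: 1 T in 200 us, T/s
t_pulse = 200e-6     # duration of the ramp, s

# Integrate the Joule heating sigma*E^2 over the disk volume (thin
# annular shells of radius r) for the duration of the pulse
n = 10000
energy = 0.0
for i in range(n):
    r = (i + 0.5) * a / n
    E = 0.5 * r * dBdt                                   # part (a)
    energy += sigma * E**2 * (2 * math.pi * r * d) * (a / n) * t_pulse

mass = rho * math.pi * a**2 * d
dT = energy / (mass * c_p)   # part (d): temperature rise per pulse, C
```

Each pulse deposits a bit under a tenth of a joule and warms the electrode by a few tenths of a degree; a single pulse is harmless, but, as Problem 25 shows, a rapid train of them is not.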

Problem 25 also provides a physical example illustrating the mathematical expression for the summation of a geometric series. If you are not familiar with this sum, it is one that you don’t have to memorize because it is so easy to derive. Let S = 1 + x + x^2 + … + x^(N−1). If you multiply this expression by x, you get xS = x + x^2 + … + x^N. Now (and this is the clever trick), subtract xS from S, which gives (1 − x) S = 1 − x^N. Note that all the terms x + x^2 + … + x^(N−1) cancel out! Solving for S gives you the equation in Problem 25. If x is between −1 and 1, and N goes to infinity, you get for the infinite sum S = 1/(1 − x).

I should say a few words about my coauthors, who are all leaders in the field of transcranial magnetic stimulation. The project started when Alvaro Pascual-Leone, who had just arrived at NIH, mentioned to me that one patient of his had suffered a burn during rapid-rate magnetic stimulation (see: Pascual-Leone, A., Gates, J.R. and Dhuna, A.K. “Induction of Speech Arrest and Counting Errors with Rapid Transcranial Magnetic Stimulation,” Neurology, Volume 41, Pages 697–702, 1991). This motivated Alvaro and me to launch a study using a variety of metal disks made by the NIH machine shop. Alvaro is now the Director of the Berenson-Allen Center for Noninvasive Brain Stimulation. Leo Cohen and Mark Hallett were both studying magnetic stimulation when I arrived at NIH in 1988. I was lucky to start collaborating with them, providing physics expertise to augment their extensive clinical experience. Both continue at the Human Motor Control Section at NIH in Bethesda, Maryland.

Saturday, June 9, 2012

Law of Laplace

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I often introduce topics in the homework problems that we don’t have room to discuss fully in the text. For instance, Problem 18 in Chapter 1 asks the reader to derive the Law of Laplace, f = p R, a relationship between the pressure p inside a cylinder, its radius R, and its wall tension f.
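The scaling is easy to check numerically. Here is a minimal sketch with illustrative values (a typical arterial pressure of about 100 mmHg, and rough textbook radii; these are my numbers, not Vogel’s):

```python
# Law of Laplace for a cylinder: wall tension per unit length f = p * R
p = 100 * 133.3          # 100 mmHg converted to Pa (1 mmHg is about 133.3 Pa)
R_aorta = 0.0125         # aortic radius, m (illustrative)
R_arteriole = 15e-6      # arteriole radius, m (illustrative)

f_aorta = p * R_aorta            # wall tension, N/m
f_arteriole = p * R_arteriole    # wall tension, N/m
print(f_aorta, f_arteriole)      # the aortic wall bears over 800x more tension
```

The same pressure produces a tension proportional to the radius, which is why (as Vogel explains below) bigger vessels need thicker walls.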

Vital Circuits, by Steven Vogel.
Vital Circuits,
by Steven Vogel.
In his book Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, Steven Vogel explains the physiological significance of the Law of Laplace, particularly for blood vessels.
The wall of the aorta of a dog is about 0.6 millimeters thick, while the wall of the artery leading to the head is only half that. Pressure differences across the wall are the same…, but the aorta has an internal diameter three times as great. The bigger vessel simply needs a thicker wall. An arteriole is 100 times skinnier yet; its wall is fifteen times thinner than that of the artery going to the head. A capillary is eight times still skinnier, with walls another twenty times thinner… A general rule that wall thickness is proportional to vessel diameter is clearly evident, just the relationship expected from Laplace’s law.
In our Homework Problem 18, Russ and I write that “Sometimes a patient will have an aneurysm in which a portion of an artery will balloon out and possibly rupture. Comment on this phenomenon in light of the R dependence of the force per unit length.” The answer (spoiler alert) is explained by Vogel. He first examines what happens when inflating a balloon.
About the same pressure is needed throughout the inflation, except for an extra bit to get started and (if you persist) another extra bit just before the final explosion… Pressure gets more effective in generating tension—stretch—as the balloon gets bigger [p = f/R], automatically providing the extra force needed as the rubber is expanded.
What is the implication for an aneurysm?
Pressure, we noted, is more effective in generating tension in the walls of a big cylinder than in those of a small cylinder. Blow into a cylindrical balloon, and one part of the balloon will inflate almost fully before the remainder expands. Pressure inside at any instant is the same everywhere, but the responsive stretching is curiously irregular…any part partially inflated is easier to inflate further than any part not yet inflated at all.
Thus, the law of Laplace provides insight into aneurysms: just think of a bulge that develops when inflating a cylindrical balloon. Now the question can be turned on its head: why don’t all cylindrical vessels immediately develop aneurysms as soon as pressure is applied? In other words, why are we not all dead, killed by the Law of Laplace? Vogel addresses this point too. “The primary question isn’t why aneurysms sometimes occur, but why they don’t normally happen…Why an arterial wall...behaves in a much friendlier manner.”

Vogel’s answer is that arteries “should surely stretch strangely” (I love the alliteration). The walls of an artery are designed such that
a disproportionate force is needed for each incremental bit of stretch—the thing gets stiffer as it stretches further…As the vessels expand, pressure inside is increasingly effective at generating tension in their walls—that’s the unavoidable consequence of Laplace’s law. But that tension, the stress in the walls, is decreasingly effective in causing the walls to stretch. It all comes down to that curved, J-shaped line on the stress-strain graph, which means no aneurysm.
Thank goodness nature found a way to avoid the aneurysms predicted by the Law of Laplace!

There are many more biomedical applications of the Law of Laplace. In a review article, Jeffrey Basford describes the “Law of Laplace and Its Relevance to Contemporary Medicine and Rehabilitation” (Archives of Physical Medicine and Rehabilitation, Volume 83, Pages 1165–1170, 2002). Basford considers many examples, including bladder function, compressive wraps to treat peripheral edema, and the choice of where in the uterus to perform a Cesarean section. That is a lot of insight from one simple law relating pressure, radius, and wall tension.

Friday, June 1, 2012

Andrew Huxley (1917–2012)

Andrew Huxley, the greatest mathematical biologist of the 20th century, died on Wednesday, May 30. Huxley won the Nobel Prize for his groundbreaking work with Alan Hodgkin that explained electrical transmission in nerves.

In Chapter 6 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the Hodgkin-Huxley model of membrane current in a nerve axon.
Considerable work was done on nerve conduction in the late 1940s, culminating in a model that relates the propagation of the action potential to the changes in membrane permeability that accompany a change in voltage. The model [Hodgkin and Huxley (1952)] does not explain why the membrane permeability changes; it relates the shape and conduction speed of the impulse to the observed changes in membrane permeability. Nor does it explain all the changes in current…Nonetheless, the work was a triumph that led to the Nobel Prize for Alan Hodgkin and Andrew Huxley.
The paper we cite (“A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve,” Journal of Physiology, Volume 117, Pages 500–544, 1952) is one of my favorites. Whenever I teach biological physics, I assign this paper to my students as an example of mathematical modeling in biology at its best. In 1981 Hodgkin and Huxley wrote a “citation classic” article about their paper, which has now been cited over 9300 times. They concluded
Another reason why our paper has been widely read may be that it shows how a wide range of well-known, complicated, and variable phenomena in many excitable tissues can be explained quantitatively by a few fairly simple relations between membrane potential and changes of ion permeability—processes that are several steps away from the phenomena that are usually observed, so that the connections between them are too complex to be appreciated intuitively. There now seems little doubt that the main outlines of our explanation are correct, but we have always felt that our equations should be regarded only as a first approximation that needs to be refined and extended in many ways in the search for the actual mechanism of the permeability changes on the molecular scale.
As one who does mathematical modeling of bioelectric phenomena for a living, I can think of no better way to honor Huxley than to show you his equations.
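Written in the modern sign convention (rather than Hodgkin and Huxley’s original one), the model for a space-clamped patch of membrane is:

```latex
\begin{aligned}
C_m \frac{dV}{dt} &= -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}})
                     - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}})
                     - \bar{g}_{\mathrm{L}}\, (V - E_{\mathrm{L}}) + I_{\mathrm{stim}} \\
\frac{dm}{dt} &= \alpha_m(V)\,(1 - m) - \beta_m(V)\, m \\
\frac{dh}{dt} &= \alpha_h(V)\,(1 - h) - \beta_h(V)\, h \\
\frac{dn}{dt} &= \alpha_n(V)\,(1 - n) - \beta_n(V)\, n
\end{aligned}
```

Here V is the membrane potential, m, h, and n are the sodium and potassium gating variables, and the six functions α and β are the voltage-dependent rate constants.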


This set of four nonlinear ordinary differential equations, plus six expressions relating how the ion channel rate constants depend on voltage, not only describes the membrane of the squid giant nerve axon, but also is the starting point for models of all electrically active tissue. Russ and I consider this model to be so important that we dedicate six pages to exploring it, and present in our Fig. 6.38 a computer program to solve the equations. For anyone interested in electrophysiology, becoming familiar with the Hodgkin-Huxley model is job one, just as analyzing the Bohr model for hydrogen is the starting point for someone interested in atomic structure. Remarkably, 60 years ago Huxley solved these differential equations numerically using only a hand-crank adding machine.
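For readers who want to try such a calculation themselves, here is a minimal sketch (this is my own illustration, not the program from Fig. 6.38): forward Euler integration of the space-clamped equations, using the standard modern-convention parameter values.

```python
import math

def rates(V):
    """Voltage-dependent rate constants (1/ms); V in mV, rest near -65 mV."""
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(t_end=20.0, dt=0.01, I_amp=20.0, I_dur=1.0):
    """Forward Euler solution of the space-clamped Hodgkin-Huxley model.
    Units: mV, ms, uA/cm^2. Returns the membrane potential at each step."""
    gNa, gK, gL = 120.0, 36.0, 0.3           # peak conductances, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials, mV
    Cm = 1.0                                 # membrane capacitance, uF/cm^2
    V, m, h, n = -65.0, 0.053, 0.596, 0.318  # resting steady state
    Vs = []
    for i in range(int(t_end / dt)):
        t = i * dt
        am, bm, ah, bh, an, bn = rates(V)
        I_stim = I_amp if t < I_dur else 0.0  # brief stimulus pulse
        I_ion = (gNa * m**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I_stim - I_ion) / Cm
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        Vs.append(V)
    return Vs

Vs = simulate()
print(max(Vs))  # the action potential peak, a few tens of mV above zero
```

A brief 20 μA/cm² stimulus triggers an action potential that overshoots zero, then returns through an afterhyperpolarization toward rest, all emerging from those four equations.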

How can you learn more about this great man? First, the Nobel Prize website contains his biography, a transcript of his Nobel lecture, and a video of an interview. Another recent, more detailed interview is available on YouTube in two parts, part 1 and part 2. Huxley wrote a fascinating description of the many false leads during their nerve studies in a commemorative article celebrating the 50th anniversary of his famous paper. Finally, the Guardian published an obituary of Huxley yesterday.

An interview with Andrew Huxley, Part 1.
https://www.youtube.com/watch?v=WdL-81i3Qg4

An interview with Andrew Huxley, Part 2.
https://www.youtube.com/watch?v=qL3aTfljBXE

I will conclude by quoting the summary at the end of Hodgkin and Huxley’s 1952 paper, which was the last of a series of five articles describing their voltage clamp experiments on a squid axon.
SUMMARY
1. The voltage clamp data obtained previously are used to find equations which describe the changes in sodium and potassium conductance associated with an alteration of membrane potential. The parameters in these equations were determined by fitting solutions to the experimental curves relating sodium or potassium conductance to time at various membrane potentials.
2. The equations, given on pp. 518–19, were used to predict the quantitative behaviour of a model nerve under a variety of conditions which corresponded to those in actual experiments. Good agreement was obtained in the cases:
(a) The form, amplitude and threshold of an action potential under zero membrane current at two temperatures.
(b) The form, amplitude and velocity of a propagated action potential.
(c) The form and amplitude of the impedance changes associated with an action potential.
(d) The total inward movement of sodium ions and the total outward movement of potassium ions associated with an impulse.
(e) The threshold and response during the refractory period.
(f) The existence and form of subthreshold responses.
(g) The existence and form of an anode break response.
(h) The properties of the subthreshold oscillations seen in cephalopod axons.
3. The theory also predicts that a direct current will not excite if it rises sufficiently slowly.
4. Of the minor defects the only one for which there is no fairly simple explanation is that the calculated exchange of potassium ions is higher than that found in Sepia axons.
5. It is concluded that the responses of an isolated giant axon of Loligo to electrical stimuli are due to reversible alterations in sodium and potassium permeability arising from changes in membrane potential.

Friday, May 25, 2012

The Semiempirical Mass Formula

When revising Chapter 17 about Nuclear Medicine for the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I were tempted to include a discussion of the semiempirical mass formula, one of the fundamental concepts in nuclear physics. We finally decided that you can’t discuss everything in one book, but we did include the following footnote.
This parabola and the general behavior of the binding energy with Z and A can be explained remarkably well by the semiempirical mass formula [Evans (1955, Chapter 11); Eisberg and Resnick, (1985, p. 528)].
The semiempirical mass formula consists of five terms, which together predict the binding energy of a nucleus having atomic number Z and mass number A.
  1. The first term is negative, and arises from the binding caused by the short range nuclear force. It is proportional to A, which implies that it increases with the volume of the nucleus (this term assumes that the nuclear density is constant; the “liquid drop model”).
  2. The second term represents a positive correction caused by surface tension, arising because nucleons at the surface of the nucleus feel an attractive force from only one side (the nuclear interior). It is proportional to the surface area, or A^(2/3).
  3. All the positively charged protons repel each other, and this effect is accounted for by a positive term for the Coulomb energy, proportional to Z^2/A^(1/3).
  4. Everything else being equal, nuclei tend to be more stable if they have the same number of protons and neutrons. This behavior is reflected in an asymmetry term containing (Z − A/2)^2/A. It is zero if A = 2Z (an equal number of protons and neutrons) and is positive otherwise.
  5. Finally, a pairing term is negative if both the number of protons and neutrons is even, positive if both are odd, and zero if one is even and the other odd.
The sum of these five terms is the semiempirical mass formula, with the terms weighted by parameters determined by fitting the model to data.
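The formula is simple enough to evaluate directly. Here is a sketch using one common set of fitted coefficients (values in MeV; published fits differ slightly, and this is only an illustration):

```python
def binding_energy(Z, A):
    """Semiempirical mass formula: binding energy B in MeV.
    Coefficients (MeV) are one typical textbook fit."""
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    B = (a_v * A                         # volume term
         - a_s * A**(2/3)                # surface term
         - a_c * Z**2 / A**(1/3)         # Coulomb term
         - a_a * (A - 2*Z)**2 / A)       # asymmetry term
    if Z % 2 == 0 and N % 2 == 0:        # pairing term: even-even
        B += a_p / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd
        B -= a_p / A**0.5
    return B

print(binding_energy(26, 56) / 56)  # B/A for iron-56, roughly 8.7 MeV
```

With these coefficients the calculated B/A for iron-56 comes out near the measured value of about 8.8 MeV per nucleon, close to the peak of the curve in Fig. 17.3.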

What can this formula explain? One example is the plot of average binding energy per nucleon as a function of A given in Fig. 17.3. At low A, this function predicts a very low binding energy because of the surface term (very small nuclei have a large surface-to-volume ratio). As A increases, the surface term becomes less important, but the Coulomb term increases as the nucleus is packed with more and more positive charge. For nuclei above about A = 60, the Coulomb term causes the binding energy to decrease as A increases. Therefore, the binding energy per nucleon reaches a peak for isotopes of elements such as iron and nickel, the most stable of nuclei, because of a competition between the surface and Coulomb terms. Although Russ and I did not mention it in our book, the smooth curve that most of the data cluster about in Fig. 17.3 is the prediction of the semiempirical mass formula.

If you hold A constant, you can examine the binding energy as a function of Z. This case is important for beta decay (in which a neutron is converted to a proton and an electron) and positron decay (in which a proton is converted to a neutron and a positron). The two terms in the semiempirical mass formula containing Z—the Coulomb term and the asymmetry term—combine to give a quadratic shape for the binding energy, as shown in Fig. 17.6. For odd A, the resulting parabola predicts the stable isotope (Z) for that A. For even A, the pairing term results in two parabolas, one for even Z and one for odd Z (Fig. 17.7).
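The parabola is easy to see numerically: at fixed A, evaluate the two Z-dependent terms and find the Z that minimizes them. A sketch, using one typical set of fitted coefficients (illustrative, as any standard fit gives the same minimum here):

```python
def z_penalty(Z, A, a_c=0.714, a_a=23.2):
    """Z-dependent part of the nuclear energy (MeV): Coulomb + asymmetry.
    Coefficients are one typical fit; smaller means more tightly bound."""
    return a_c * Z**2 / A**(1/3) + a_a * (A - 2*Z)**2 / A

A = 127  # an odd-A isobar, so the pairing term plays no role
best_Z = min(range(1, A), key=lambda Z: z_penalty(Z, A))
print(best_Z)  # -> 53: iodine-127 is indeed the stable A = 127 isobar
```

The Coulomb term pushes the minimum below Z = A/2, which is why heavy stable nuclei have more neutrons than protons.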

Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, by Eisberg and Resnick, superimposed on Intermediate Physics for Medicine and Biology.
Quantum Physics of Atoms, Molecules,
Solids, Nuclei, and Particles,
by Eisberg and Resnick.
In their textbook, Eisberg and Resnick conclude that
The liquid drop model is the oldest, and most classical nuclear model. At the time the semiempirical mass formula was first developed, mass data was available, but not much else was known about nuclei. The parameters were purely empirical, and there was not even a qualitative understanding of the asymmetry and pairing terms. Nevertheless, the formula was significant because it described fairly accurately the masses of hundreds of nuclei in terms of only five parameters.