Friday, July 13, 2012

Magnetic Characterization of Isolated Candidate Vertebrate Magnetoreceptor Cells

Big news this week in the field of magnetoreception. A paper titled “Magnetic Characterization of Isolated Candidate Vertebrate Magnetoreceptor Cells” by Stephan Eder and his colleagues was published online (“early edition”) in the Proceedings of the National Academy of Sciences. One commentator went so far as to suggest that these magnetoreceptors are “the biological equivalent of the elusive Higgs boson” (an exaggeration, but a catchy quote with a grain of truth). The abstract to the paper is given below.
Over the past 50 y, behavioral experiments have produced a large body of evidence for the existence of a magnetic sense in a wide range of animals. However, the underlying sensory physiology remains poorly understood due to the elusiveness of the magnetosensory structures. Here we present an effective method for isolating and characterizing potential magnetite-based magnetoreceptor cells. In essence, a rotating magnetic field is employed to visually identify, within a dissociated tissue preparation, cells that contain magnetic material by their rotational behavior. As a tissue of choice, we selected trout olfactory epithelium that has been previously suggested to host candidate magnetoreceptor cells. We were able to reproducibly detect magnetic cells and to determine their magnetic dipole moment. The obtained values (4 to 100 fA m2) greatly exceed previous estimates (0.5 fA m2). The magnetism of the cells is due to a μm-sized intracellular structure of iron-rich crystals, most likely single-domain magnetite. In confocal reflectance imaging, these produce bright reflective spots close to the cell membrane. The magnetic inclusions are found to be firmly coupled to the cell membrane, enabling a direct transduction of mechanical stress produced by magnetic torque acting on the cellular dipole in situ. Our results show that the magnetically identified cells clearly meet the physical requirements for a magnetoreceptor capable of rapidly detecting small changes in the external magnetic field. This would also explain interference of ac powerline magnetic fields with magnetoreception, as reported in cattle.
The PNAS published a highlight about the article.
Identification of cells that sense Earth’s magnetic field
Researchers have isolated magnetic cells thought to underlie certain animals’ ability to navigate by Earth’s magnetic field. Behavioral studies have long provided evidence for the existence of a magnetic sense, but the identity of the specialized cells that comprise this internal compass has remained elusive. Stephan Eder and colleagues isolated the putative magnetic field-sensing cells that line the trout’s nasal cavity, and which contain iron-rich deposits of the magnetic material called magnetite. The authors placed a suspension of trout nasal tissue under a light microscope, and identified magnetic cells by their rotational motion in the presence of a slowly rotating external magnetic field. After siphoning off the rotating cells to characterize them in greater detail, the authors discovered that each cell contained reflective, iron-rich magnetic particles that were anchored to the cell membrane. The authors also determined that the cells are about 100 times more sensitive to magnetic fields than previously estimated. The findings suggest that the cells are capable of detecting magnetic north as well as small changes in the external magnetic field, and could form the basis of an accurate magnetic sensory system, according to the authors.
Russ Hobbie and I discuss the role of magnetic materials in biology in our chapter about biomagnetism in the 4th edition of Intermediate Physics for Medicine and Biology. We included in our book a photograph of magnetosomes (intracellular magnetite particles) in magnetotactic bacteria. In the photo, the magnetosomes are each about 0.05 μm on a side, and about 20 particles form a line roughly 1 μm long. Eder et al., on the other hand, find magnetic inclusions that are more spherical, and roughly 1–2 μm across. A trout cell containing such a large inclusion has a very large magnetic moment, but fewer than one cell in a thousand responds to the magnetic field and therefore presumably contains one. For a magnetotactic bacterium to have the same magnetic moment, it would need to be packed solid with magnetite.
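Out of curiosity, I estimated the two magnetic moments with a few lines of Python. The only material property assumed is the saturation magnetization of single-domain magnetite, roughly 4.8 × 10^5 A/m; the geometries are the rough ones quoted above.

```python
import math

M_S = 4.8e5  # saturation magnetization of magnetite, A/m (assumed value)

# Magnetotactic bacterium: a chain of ~20 cubic magnetosomes, 0.05 um on a side
chain_volume = 20 * (0.05e-6) ** 3             # m^3
chain_moment = M_S * chain_volume              # A m^2

# Trout inclusion: a sphere roughly 1 um across, treated as solid magnetite
sphere_volume = (math.pi / 6) * (1.0e-6) ** 3  # m^3
sphere_moment = M_S * sphere_volume            # A m^2

print(f"magnetosome chain: {chain_moment / 1e-15:.1f} fA m^2")
print(f"1-um solid sphere: {sphere_moment / 1e-15:.0f} fA m^2")
```

The chain comes out near the older 0.5 fA m^2 estimate, while a solid 1-μm magnetite sphere would exceed the 4–100 fA m^2 range that Eder et al. report, suggesting the trout inclusions are only partly magnetite.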

I find the PNAS paper fascinating, and the method of detecting individual cells using a rotating magnetic field is clever. However, in my opinion the last sentence of the abstract is a bit speculative, given that typical residential 60 Hz magnetic fields are 10,000 times smaller than the 2 mT fields used by Eder et al., and the frequency is almost 200 times higher. Granted, the large magnetic moment makes the idea of powerline field detection intriguing, but that hypothesis is far from proven and I remain skeptical.
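To see why the idea is intriguing yet unproven, here is a back-of-the-envelope comparison (my own rough numbers, not from the paper) of the maximum magnetic energy mB with the thermal energy kT, using the largest reported moment of 100 fA m^2 and an assumed residential field of 0.2 μT.

```python
K_B = 1.381e-23   # Boltzmann constant, J/K
T = 310.0         # body temperature, K
m = 100e-15       # cellular magnetic moment, A m^2 (upper end of Eder et al.)

B_LAB = 2e-3      # rotating field used in the experiment, T
B_HOME = 0.2e-6   # typical residential 60 Hz field, T (assumed value)

for label, b in [("2 mT lab field", B_LAB), ("0.2 uT residential field", B_HOME)]:
    # ratio of maximum magnetic orientation energy to thermal energy
    print(f"{label}: mB/kT = {m * b / (K_B * T):.1f}")
```

Even the residential field gives mB a few times kT for the biggest cells, which is why powerline detection cannot be dismissed out of hand; but energetically possible is a long way from demonstrated.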

One of the coauthors on the PNAS paper is Joseph Kirschvink, whose work we discuss extensively in Section 9.10, Possible Effects of Weak External Electric and Magnetic Fields. Kirschvink is the Nico and Marilyn Van Wingen Professor of Geobiology at Caltech. He has developed several fascinating and controversial hypotheses, such as the Snowball Earth concept and the idea that a meteor found in 1984 contains evidence of life on Mars (he collaborated with my PhD advisor, John Wikswo, to make magnetic field measurements on that meteor). Kirschvink received the William Gilbert Award from the American Geophysical Union in 2011 for his work on geomagnetism. In the citation for this award, Benjamin Weiss of MIT wrote that “Joe represents everything we are looking for in a William Gilbert awardee. He is an ‘ideas man,’ a gadfly, working at the edge of the crowd while the crowd chases after him!” Kirschvink has also won Caltech’s Feynman Prize for Excellence in Teaching.

The PNAS paper has triggered an avalanche of press reports, including those in Science News, the International Science Times, Science Daily, Phys.org, Live Science, and Discover Magazine.

Friday, July 6, 2012

Women in Medical Physics

Last week in this blog, I discussed the medical physicist Rosalyn Yalow, who was the second woman to win the Nobel Prize in Physiology or Medicine (the first was the biochemist Gerty Cori), and who developed, with Solomon Berson, the radioimmunoassay technique. Her story reminds us of the important contributions of women to medical physics. I am particularly interested in this topic because Oakland University recently was awarded an ADVANCE grant from the National Science Foundation, with the goal of increasing the participation and advancement of women in academic science and engineering careers. I am on the leadership team of this project, and we are working hard to improve the environment for female STEM (science, technology, engineering, and math) faculty.

Of course, the real reason I support increasing opportunities for women in the sciences is that I am certain many of the readers of the 4th edition of Intermediate Physics for Medicine and Biology are female. Medical physics provides several role models for women. For instance, Aminollah Sabzevari published an article in the Science Creative Quarterly titled “Women in Medical Physics.” Sabzevari begins
Traditionally, physics has been a male-dominated occupation. However, throughout history there have been exceptional women who have risen above society’s restrictions and contributed greatly to the advancement of physics. Women have played an important role in the creation, advancement and application of medical physics. As a frontier science, medical physics is less likely to be bound by society’s norms and less subject to the inherent glass ceiling limiting female participation. Women such as Marie Curie, Harriet Brooks, and Rosalind Franklin helped break through that ceiling, and their contributions are worth observing.
Another notable female medical physicist is Edith Hinkley Quimby, who established the first measurements of safe levels of radiation. The American Association of Physicists in Medicine named the Edith H. Quimby Lifetime Achievement Award in her honor.

On a related note (though having little to do with medical or biological physics), I recently read a fascinating biography of Sophie Germain (1776–1831), who did fundamental work in number theory and elasticity.

Finally, in my mind the greatest female physicist of all time (yes, greater than Marie Curie) is Lise Meitner, who, with her nephew Otto Frisch, first explained nuclear fission. A great place to learn more about her life and work is Richard Rhodes’ masterpiece The Making of the Atomic Bomb.

One characteristic these women have in common is that they overcame great obstacles in order to become scientists. Their tenacity and determination inspire us all.

Friday, June 29, 2012

Rosalyn Yalow and the Radioimmunoassay

The radioimmunoassay is a sensitive technique for measuring tiny amounts of biologically important molecules, such as the hormone insulin in the blood. The basic idea is to tag insulin with a radioisotope such as I-125, and mix it with antibodies for insulin. Then, add to this mix the patient’s blood. The insulin in the blood competes with the tagged insulin for binding to the antibodies. Next, remove the antibodies and their bound insulin, leaving just the free insulin in the supernatant. The radioactivity of the supernatant provides a way to determine the concentration of insulin in the blood.
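The competition at the heart of the method can be sketched in a few lines of Python. This is a toy model with made-up numbers (1 nM of antibody binding sites, 1 nM of labeled insulin, identical affinities, Kd = 1 nM), not a description of any actual assay.

```python
import math

def bound_ligand(sites, ligand, kd):
    """Total bound ligand at equilibrium for one class of identical
    binding sites (exact solution of the binding quadratic)."""
    b = sites + ligand + kd
    return (b - math.sqrt(b * b - 4 * sites * ligand)) / 2

# Hypothetical assay: 1 nM antibody sites, 1 nM labeled insulin, Kd = 1 nM
SITES, LABEL, KD = 1.0, 1.0, 1.0
for unlabeled in [0.0, 1.0, 10.0, 100.0]:
    total = LABEL + unlabeled
    # labeled and unlabeled insulin bind identically, so the label's share
    # of the bound pool equals its share of the total insulin
    bound_label = bound_ligand(SITES, total, KD) * LABEL / total
    print(f"unlabeled = {unlabeled:6.1f} nM -> bound label = {bound_label:.3f} nM")
```

The falling curve of bound label versus unlabeled insulin concentration is the standard curve that the radioactivity measurement reads off: the more insulin in the patient's blood, the less tagged insulin ends up on the antibodies.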

Russ Hobbie and I describe the basics of a radioimmunoassay in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology.
Four kinds of radioactivity measurements have proven useful in medicine. The first involves no administration of a radioactive substance to the patient. Rather, a sample from the patient (usually blood) is mixed with a radioactive substance in the laboratory, and the resulting chemical compounds are separated and counted. This is the basis of various competitive binding assays, such as those for measuring thyroid hormone and the availability of iron-binding sites. The most common competitive binding technique is called radioimmunoassay. A wide range of proteins are measured in this manner.
The radioimmunoassay was developed by Rosalyn Yalow and Solomon Berson. Yalow received the Nobel Prize in Physiology or Medicine for this work in 1977 (Berson had died by then, and the Nobel committee never gives a prize posthumously). For readers of Intermediate Physics for Medicine and Biology, Yalow is interesting because she started out as a physics major, getting her bachelor’s degree in physics from Hunter College, part of the City University of New York (CUNY) system. In 1945 she obtained a PhD in nuclear physics from the University of Illinois at Urbana-Champaign. Building on this physics background, and collaborating with Berson, in the 1950s she developed the radioimmunoassay. Interestingly, she and Berson refused to patent the method, wanting it to be freely available for use in medicine. Yalow died just over one year ago, at age 89.

You can learn more about Rosalyn Yalow and her inspiring life from her Nobel autobiography, her Physics Today obituary, her New York Times obituary, and the Jewish Women’s Archive. For those who prefer a video, click here. I have not read Eugene Straus’s book Rosalyn Yalow: Nobel Laureate: Her Life and Work in Science, but I am putting it on my list of things to do.

Television interview with Rosalyn Yalow.

I often like to finish a blog entry about a noteworthy scientist with their own words. Below are the opening paragraphs of Yalow’s Nobel Prize Lecture.
To primitive man the sky was wonderful, mysterious and awesome but he could not even dream of what was within the golden disk or silver points of light so far beyond his reach. The telescope, the spectroscope, the radiotelescope—all the tools and paraphernalia of modern science have acted as detailed probes to enable man to discover, to analyze and hence better to understand the inner contents and fine structure of these celestial objects.

Man himself is a mysterious object and the tools to probe his physiologic nature and function have developed only slowly through the millennia. Becquerel, the Curies and the Joliot-Curies with their discovery of natural and artificial radioactivity and Hevesy, who pioneered in the application of radioisotopes to the study of chemical processes, were the scientific progenitors of my career. For the past 30 years I have been committed to the development and application of radioisotopic methodology to analyze the fine structure of biologic systems.

From 1950 until his untimely death in 1972, Dr. Solomon Berson was joined with me in this scientific adventure and together we gave birth to and nurtured through its infancy radioimmunoassay, a powerful tool for determination of virtually any substance of biologic interest. Would that he were here to share this moment.

Friday, June 22, 2012

Mannitol

Elevated intracranial pressure often follows a traumatic brain injury. One way to lower pressure in the brain is to administer mannitol intravenously. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss mannitol.
The converse of this effect [removal of urea during renal dialysis] is to inject into the blood urea or mannitol, another molecule that does not readily cross the blood–brain barrier. This lowers the driving pressure of water within the blood, and water flows from the brain into the blood. Although the effects do not last long, this technique is sometimes used as an emergency treatment for cerebral edema.
Mannitol (C6H14O6) is similar in size, structure, and chemical formula to glucose (C6H12O6). It is metabolically inert in humans. A 10% solution consists of 100 g of mannitol per liter (1000 g) of water. Mannitol has a molecular weight of 182 g/mole, implying an osmolarity of (100 g/liter)/(182 g/mole) = 0.55 moles/liter, or 550 mosmole. Blood has an osmolarity of about 300 mosmole, so 10% mannitol is significantly hypertonic. Problem 3 in Chapter 5 asks you to calculate the osmotic pressure produced by mannitol.
Problem 3 Sometimes after trauma the brain becomes very swollen and distended with fluid, a condition known as cerebral edema. To reduce swelling, mannitol may be injected into the bloodstream. This reduces the driving force of water in the blood, and fluid flows from the brain into the blood. If 0.01 mol l−1 of mannitol is used, what will be the approximate osmotic pressure?
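As a rough, unofficial pass at this problem (spoiler alert; this is my quick check, not the book's solution), here is the arithmetic in Python using van't Hoff's law, π = cRT, at body temperature, along with the 10% mannitol osmolarity worked out above.

```python
R = 8.314    # gas constant, J mol^-1 K^-1
T = 310.0    # body temperature, K

# 10% mannitol: 100 g per liter of water, molecular weight 182 g/mole
osmolarity = 100.0 / 182.0            # moles/liter
print(f"10% mannitol: {1000 * osmolarity:.0f} mosmole")

# Problem 3: 0.01 mol/liter of mannitol; van't Hoff law pi = c R T
c = 0.01 * 1000.0                     # convert mol/liter to mol/m^3
pi = c * R * T                        # osmotic pressure, Pa
print(f"osmotic pressure: {pi / 1000:.1f} kPa (about {pi / 133.3:.0f} mmHg)")
```

An osmotic pressure of a couple hundred mmHg dwarfs the roughly 10 mmHg pressure of the cerebrospinal fluid, which is why even a dilute mannitol infusion can pull water out of the brain.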
Mannitol works best for short-term reduction of intracranial pressure. If it is administered continuously, eventually some of the mannitol may cross the blood–brain barrier, reducing its osmotic effect (Wakai et al., 2008). Then, when mannitol administration is discontinued, the mannitol that crossed into the brain can actually have the opposite effect of osmotically drawing water from the blood into the brain.

Interestingly, if you give a high enough concentration of mannitol, the osmotic shrinking of endothelial cells can disrupt the blood–brain barrier. Sometimes mannitol is used to ensure that certain drugs are able to pass from the blood into the brain.

Friday, June 15, 2012

The Heating of Metal Electrodes

Twenty years ago, I published a paper titled “The Heating of Metal Electrodes During Rapid-Rate Magnetic Stimulation: A Possible Safety Hazard” (Electroencephalography and Clinical Neurophysiology, Volume 85, Pages 116–123, 1992). My coauthors were Alvaro Pascual-Leone, Leonardo Cohen, and Mark Hallett, all working at the National Institutes of Health at that time. The paper motivated two new homework problems in Chapter 8 of the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 24 Suppose one is measuring the EEG when a time-dependent magnetic field is present (such as during magnetic stimulation). The EEG is measured using a disk electrode of radius a = 5 mm and thickness d = 1 mm, made of silver with conductivity σ = 63 × 106 S m−1. The magnetic field is uniform in space, is in a direction perpendicular to the plane of the electrode, and changes from zero to 1 T in 200 μs.
(a) Calculate the electric field and current density in the electrode due to Faraday induction.
(b) The rate of conversion of electrical energy to thermal energy per unit volume (Joule heating) is the product of the current density times the electric field. Calculate the rate of thermal energy production during the time the magnetic field is changing.
(c) Determine the total thermal energy change caused by the change of magnetic field.
(d) The specific heat of silver is 240 J kg−1 ◦C−1, and the density of silver is 10 500 kg m−3. Determine the temperature increase of the electrode due to Joule heating. The heating of metal electrodes can be a safety hazard during rapid (20 Hz) magnetic stimulation [Roth et al. (1992)].

Problem 25 Suppose that during rapid-rate magnetic stimulation, each stimulus pulse causes the temperature of a metal EEG electrode to increase by ΔT (see Prob. 24). The hot electrode then cools exponentially with a time constant τ (typically about 45 s). If N stimulation pulses are delivered starting at t = 0, with successive pulses separated by a time Δt, then the temperature at the end of the pulse train is T(N,Δt) = ΔT Σ e^(−iΔt/τ) [the sum goes from 0 to N−1]. Find a closed-form expression for T(N,Δt) using the summation formula for the geometric series: 1 + x + x^2 + ... + x^(n−1) = (1 − x^n)/(1 − x). Determine the limiting values of T(N,Δt) for NΔt [much less than] τ and NΔt [much greater than] τ. [See Roth et al. (1992).]
Both problems walk you through parts of our paper. I like Problem 24 because it provides a nice example of Faraday’s law of induction, one of the topics discussed in Chapter 8 (Biomagnetism). Problem 25 could easily have been placed in Chapter 3 (Systems of Many Particles) because of its emphasis on thermal heating and Newton’s law of cooling.
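Here is a rough numerical pass through Problem 24 in Python. The averaging step uses the fact that the mean of r^2 over a disk is a^2/2, so the disk-averaged heating rate is σ(dB/dt)^2 a^2/8; treat the result as my quick estimate, not the book's official solution.

```python
# Parameters from Problem 24: silver disk EEG electrode in a changing field
a = 5e-3                 # disk radius, m
sigma = 63e6             # conductivity of silver, S/m
rise_time = 200e-6       # field rises from 0 to 1 T in 200 us
dB_dt = 1.0 / rise_time  # 5000 T/s
c_silver = 240.0         # specific heat of silver, J kg^-1 C^-1
rho_silver = 10500.0     # density of silver, kg m^-3

# (a) Faraday induction gives E(r) = (r/2) dB/dt inside the disk
E_rim = (a / 2) * dB_dt                        # V/m at the rim

# (b) Joule heating sigma E^2, averaged over the disk (<r^2> = a^2/2)
p_avg = sigma * dB_dt ** 2 * a ** 2 / 8        # W/m^3

# (c), (d) energy deposited during the rise, and the temperature jump
energy_density = p_avg * rise_time             # J/m^3
dT = energy_density / (rho_silver * c_silver)  # deg C per pulse

print(f"E at the rim: {E_rim:.1f} V/m")
print(f"average heating rate: {p_avg:.2e} W/m^3")
print(f"temperature rise per pulse: {dT:.2f} C")
```

A few tenths of a degree per pulse sounds harmless, until you remember that rapid-rate stimulation delivers many pulses per second, which is where Problem 25 picks up.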

Problem 25 also provides a physical example illustrating the mathematical expression for the summation of a geometric series. If you are not familiar with this sum, it is one that you don’t have to memorize because it is so easy to derive. Let S = 1 + x + x^2 + … + x^(N−1). If you multiply this expression by x, you get xS = x + x^2 + … + x^N. Now (and this is the clever trick), subtract xS from S, which gives (1 – x) S = 1 – x^N. Note that all the terms x + x^2 + … + x^(N−1) cancel out! Solving for S gives you the equation in Problem 25. If x is between -1 and 1, and N goes to infinity, you get for the infinite sum S = 1/(1 – x).
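The sum and its closed form are easy to check numerically. In this sketch I assume a per-pulse rise ΔT of 0.4 °C (my own rough figure for Problem 24) together with the 20 Hz and τ = 45 s values quoted above.

```python
import math

dT_pulse = 0.4   # assumed per-pulse temperature rise, deg C
tau = 45.0       # cooling time constant, s
dt = 0.05        # pulse spacing for 20 Hz stimulation, s
N = 20           # number of pulses in the train

# Direct evaluation of T(N, dt) = dT * sum over i = 0..N-1 of exp(-i dt / tau)
direct = dT_pulse * sum(math.exp(-i * dt / tau) for i in range(N))

# Closed form from the geometric series with x = exp(-dt/tau)
x = math.exp(-dt / tau)
closed = dT_pulse * (1 - x ** N) / (1 - x)

print(f"direct sum:  {direct:.4f} C")
print(f"closed form: {closed:.4f} C")
# Since N*dt = 1 s is much less than tau, the train heats almost linearly
print(f"N * dT limit: {N * dT_pulse:.1f} C")
```

For a one-second train the electrode barely cools between pulses, so the temperature rises essentially as NΔT, which is exactly the NΔt much-less-than τ limit the problem asks about.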

I should say a few words about my coauthors, who are all leaders in the field of transcranial magnetic stimulation. The project started when Alvaro Pascual-Leone, who had just arrived at NIH, mentioned to me that one patient of his had suffered a burn during rapid-rate magnetic stimulation (see: Pascual-Leone, A., Gates, J.R. and Dhuna, A.K. “Induction of Speech Arrest and Counting Errors with Rapid Transcranial Magnetic Stimulation,” Neurology, Volume 41, Pages 697–702, 1991). This motivated Alvaro and me to launch a study using a variety of metal disks made by the NIH machine shop. Alvaro is now the Director of the Berenson-Allen Center for Noninvasive Brain Stimulation. Leo Cohen and Mark Hallett were both studying magnetic stimulation when I arrived at NIH in 1988. I was lucky to start collaborating with them, providing physics expertise to augment their extensive clinical experience. Both continue at the Human Motor Control Section at NIH in Bethesda, Maryland.

Saturday, June 9, 2012

Law of Laplace

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I often introduce topics in the homework problems that we don’t have room to discuss fully in the text. For instance, Problem 18 in Chapter 1 asks the reader to derive the Law of Laplace, f = p R, a relationship between the pressure p inside a cylinder, its radius R, and its wall tension f.

Vital Circuits, by Steven Vogel.
In his book Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, Steven Vogel explains the physiological significance of the Law of Laplace, particularly for blood vessels.
The wall of the aorta of a dog is about 0.6 millimeters thick, while the wall of the artery leading to the head is only half that. Pressure differences across the wall are the same…, but the aorta has an internal diameter three times as great. The bigger vessel simply needs a thicker wall. An arteriole is 100 times skinnier yet; its wall is fifteen times thinner than that of the artery going to the head. A capillary is eight times still skinnier, with walls another twenty times thinner… A general rule that wall thickness is proportional to vessel diameter is clearly evident, just the relationship expected from Laplace’s law.
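Vogel's rule of thumb is easy to quantify with the Law of Laplace. The pressures and radii below are rough textbook values I supplied for illustration, not numbers taken from Vital Circuits.

```python
# Law of Laplace for a cylinder: wall tension per unit length f = p * R
MMHG = 133.3  # Pa per mmHg

vessels = {
    # name: (transmural pressure in mmHg, radius in m) -- rough assumed values
    "aorta":     (100, 1.25e-2),
    "arteriole": (60,  1.5e-5),
    "capillary": (25,  4e-6),
}

tensions = {}
for name, (p_mmhg, radius) in vessels.items():
    tensions[name] = p_mmhg * MMHG * radius   # N/m
    print(f"{name:9s}: f = {tensions[name]:.2e} N/m")
```

The wall tension in a capillary comes out roughly four orders of magnitude below that in the aorta, which is why a capillary wall only one cell thick suffices.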
In our Homework Problem 18, Russ and I write that “Sometimes a patient will have an aneurysm in which a portion of an artery will balloon out and possibly rupture. Comment on this phenomenon in light of the R dependence of the force per unit length.” The answer (spoiler alert) is explained by Vogel. He first examines what happens when inflating a balloon.
About the same pressure is needed throughout the inflation, except for an extra bit to get started and (if you persist) another extra bit just before the final explosion… Pressure gets more effective in generating tension—stretch—as the balloon gets bigger [p = f/R], automatically providing the extra force needed as the rubber is expanded.
What is the implication for an aneurysm?
Pressure, we noted, is more effective in generating tension in the walls of a big cylinder than in those of a small cylinder. Blow into a cylindrical balloon, and one part of the balloon will inflate almost fully before the remainder expands. Pressure inside at any instant is the same everywhere, but the responsive stretching is curiously irregular…any part partially inflated is easier to inflate further than any part not yet inflated at all.
Thus, the law of Laplace provides insight into aneurysms: just think of a bulge that develops when inflating a cylindrical balloon. Now the question can be turned on its head: why don’t all cylindrical vessels immediately develop aneurysms as soon as pressure is applied? In other words, why are we not all dead, killed by the Law of Laplace? Vogel addresses this point too. “The primary question isn’t why aneurysms sometimes occur, but why they don’t normally happen…Why an arterial wall...behaves in a much friendlier manner.”

Vogel’s answer is that arteries “should surely stretch strangely” (I love the alliteration). The walls of an artery are designed such that
a disproportionate force is needed for each incremental bit of stretch—the thing gets stiffer as it stretches further…As the vessels expand, pressure inside is increasingly effective at generating tension in their walls—that’s the unavoidable consequence of Laplace’s law. But that tension, the stress in the walls, is decreasingly effective in causing the walls to stretch. It all comes down to that curved, J-shaped line on the stress-strain graph, which means no aneurysm.
Thank goodness nature found a way to avoid the aneurysms predicted by the Law of Laplace!

There are many more biomedical applications of the Law of Laplace. In a review article, Jeffrey Basford describes the “Law of Laplace and Its Relevance to Contemporary Medicine and Rehabilitation” (Archives of Physical Medicine and Rehabilitation, Volume 83, Pages 1165–1170, 2002). Basford considers many examples, including bladder function, compressive wraps to treat peripheral edema, and the choice of where in the uterus to perform a Cesarean section. That is a lot of insight from one simple law relating pressure, radius, and wall tension.

Friday, June 1, 2012

Andrew Huxley (1917–2012)

Andrew Huxley, the greatest mathematical biologist of the 20th century, died on Wednesday, May 30. Huxley won the Nobel Prize for his groundbreaking work with Alan Hodgkin that explained electrical transmission in nerves.

In Chapter 6 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the Hodgkin-Huxley model of membrane current in a nerve axon.
Considerable work was done on nerve conduction in the late 1940s, culminating in a model that relates the propagation of the action potential to the changes in membrane permeability that accompany a change in voltage. The model [Hodgkin and Huxley (1952)] does not explain why the membrane permeability changes; it relates the shape and conduction speed of the impulse to the observed changes in membrane permeability. Nor does it explain all the changes in current…Nonetheless, the work was a triumph that led to the Nobel Prize for Alan Hodgkin and Andrew Huxley.
The paper we cite (“A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve,” Journal of Physiology, Volume 117, Pages 500–544, 1952) is one of my favorites. Whenever I teach biological physics, I assign this paper to my students as an example of mathematical modeling in biology at its best. In 1981 Hodgkin and Huxley wrote a “citation classic” article about their paper, which has now been cited over 9300 times. They concluded
Another reason why our paper has been widely read may be that it shows how a wide range of well-known, complicated, and variable phenomena in many excitable tissues can be explained quantitatively by a few fairly simple relations between membrane potential and changes of ion permeability—processes that are several steps away from the phenomena that are usually observed, so that the connections between them are too complex to be appreciated intuitively. There now seems little doubt that the main outlines of our explanation are correct, but we have always felt that our equations should be regarded only as a first approximation that needs to be refined and extended in many ways in the search for the actual mechanism of the permeability changes on the molecular scale.
As one who does mathematical modeling of bioelectric phenomena for a living, I can think of no better way to honor Huxley than to show you his equations.


This set of four nonlinear ordinary differential equations, plus six expressions relating how the ion channel rate constants depend on voltage, not only describes the membrane of the squid giant nerve axon, but also is the starting point for models of all electrically active tissue. Russ and I consider this model to be so important that we dedicate six pages to exploring it, and present in our Fig. 6.38 a computer program to solve the equations. For anyone interested in electrophysiology, becoming familiar with the Hodgkin-Huxley model is job one, just as analyzing the Bohr model for hydrogen is the starting point for someone interested in atomic structure. Remarkably, 60 years ago Huxley solved these differential equations numerically using only a hand-crank adding machine.
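In modern notation, the four equations are C dV/dt = −gNa m^3 h (V − ENa) − gK n^4 (V − EK) − gL (V − EL) + Iext, plus dx/dt = αx(1 − x) − βx x for each gating variable x = m, h, and n. Below is a minimal forward-Euler solver in the same spirit as (but not copied from) the program in our Fig. 6.38; I use the standard parameter set shifted to a resting potential near −65 mV.

```python
import math

# Standard Hodgkin-Huxley parameters (modern convention: rest near -65 mV)
C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # peak conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

def _safe(x, s):
    """x / (1 - exp(-x/s)), with the x -> 0 limit handled."""
    return s if abs(x) < 1e-9 else x / (1.0 - math.exp(-x / s))

def alpha_m(v): return 0.1 * _safe(v + 40.0, 10.0)
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * _safe(v + 55.0, 10.0)
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(t_end=20.0, dt=0.01, i_stim=20.0, stim=(1.0, 2.0)):
    """Forward-Euler integration of the four HH equations; time in ms."""
    v = -65.0
    # gating variables start at their resting steady-state values
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace, t = [], 0.0
    while t < t_end:
        trace.append((t, v))
        i_ext = i_stim if stim[0] <= t < stim[1] else 0.0
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        t += dt
    return trace

trace = simulate()
peak = max(v for _, v in trace)
print(f"peak membrane potential: {peak:.1f} mV")
```

A brief 20 μA/cm^2 stimulus drives the membrane past threshold and the solution produces a full action potential that overshoots 0 mV, exactly the behavior Hodgkin and Huxley computed by hand crank.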

How can you learn more about this great man? First, the Nobel Prize website contains his biography, a transcript of his Nobel lecture, and a video of an interview. Another recent, more detailed interview is available on YouTube in two parts, part 1 and part 2. Huxley wrote a fascinating description of the many false leads during their nerve studies in a commemorative article celebrating the 50th anniversary of his famous paper. Finally, the Guardian published an obituary of Huxley yesterday.

An interview with Andrew Huxley, Part 1.
https://www.youtube.com/watch?v=WdL-81i3Qg4

An interview with Andrew Huxley, Part 2.
https://www.youtube.com/watch?v=qL3aTfljBXE

I will conclude by quoting the summary at the end of Hodgkin and Huxley’s 1952 paper, which was the last of a series of five articles describing their voltage clamp experiments on a squid axon.
SUMMARY
1. The voltage clamp data obtained previously are used to find equations which describe the changes in sodium and potassium conductance associated with an alteration of membrane potential. The parameters in these equations were determined by fitting solutions to the experimental curves relating sodium or potassium conductance to time at various membrane potentials.
2. The equations, given on pp. 518–19, were used to predict the quantitative behaviour of a model nerve under a variety of conditions which corresponded to those in actual experiments. Good agreement was obtained in the cases:
(a) The form, amplitude and threshold of an action potential under zero membrane current at two temperatures.
(b) The form, amplitude and velocity of a propagated action potential.
(c) The form and amplitude of the impedance changes associated with an action potential.
(d) The total inward movement of sodium ions and the total outward movement of potassium ions associated with an impulse.
(e) The threshold and response during the refractory period.
(f) The existence and form of subthreshold responses.
(g) The existence and form of an anode break response.
(h) The properties of the subthreshold oscillations seen in cephalopod axons.
3. The theory also predicts that a direct current will not excite if it rises sufficiently slowly.
4. Of the minor defects the only one for which there is no fairly simple explanation is that the calculated exchange of potassium ions is higher than that found in Sepia axons.
5. It is concluded that the responses of an isolated giant axon of Loligo to electrical stimuli are due to reversible alterations in sodium and potassium permeability arising from changes in membrane potential.

Friday, May 25, 2012

The Semiempirical Mass Formula

When revising Chapter 17 about Nuclear Medicine for the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I were tempted to include a discussion of the semiempirical mass formula, one of the fundamental concepts in nuclear physics. We finally decided that you can’t discuss everything in one book, but we did include the following footnote.
This parabola and the general behavior of the binding energy with Z and A can be explained remarkably well by the semiempirical mass formula [Evans (1955, Chapter 11); Eisberg and Resnick (1985, p. 528)].
The semiempirical mass formula consists of five terms, which together predict the binding energy of a nucleus having atomic number Z and mass number A.
  1. The first term is negative, and arises from the binding caused by the short range nuclear force. It is proportional to A, which implies that it increases with the volume of the nucleus (this term assumes that the nuclear density is constant; the “liquid drop model”).
  2. The second term represents a positive correction caused by surface tension, arising because nucleons at the surface of the nucleus feel an attractive force from only one side (the nuclear interior). It is proportional to surface area, or A^(2/3).
  3. All the positively charged protons repel each other, and this effect is accounted for by a positive term for the Coulomb energy, proportional to Z^2/A^(1/3).
  4. Everything else being equal, nuclei tend to be more stable if they have the same number of protons and neutrons. This behavior is reflected in an asymmetry term containing (Z − A/2)^2/A. It is zero if A = 2Z (an equal number of protons and neutrons) and is positive otherwise.
  5. Finally, a pairing term is negative if both the number of protons and neutrons is even, positive if both are odd, and zero if one is even and the other odd.
The sum of these five terms is the semiempirical mass formula, with the terms weighted by parameters determined by fitting the model to data.
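For readers who want to experiment, here is the formula in Python with one common set of coefficients. The values are my assumed numbers (fits in the literature differ slightly), not ones from our book.

```python
import math

# Semiempirical mass formula coefficients, MeV (one common fit)
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(z, a):
    """Total nuclear binding energy in MeV from the five SEMF terms."""
    n = a - z
    b = (A_V * a                        # volume (short-range nuclear force)
         - A_S * a ** (2 / 3)           # surface correction
         - A_C * z ** 2 / a ** (1 / 3)  # Coulomb repulsion
         - A_A * (a - 2 * z) ** 2 / a)  # asymmetry
    if z % 2 == 0 and n % 2 == 0:       # pairing: even-even is extra bound
        b += A_P / math.sqrt(a)
    elif z % 2 == 1 and n % 2 == 1:     # odd-odd is less bound
        b -= A_P / math.sqrt(a)
    return b

for z, a, label in [(6, 12, "C-12"), (26, 56, "Fe-56"), (92, 238, "U-238")]:
    print(f"{label:6s} B/A = {binding_energy(z, a) / a:.2f} MeV")
```

The peak in binding energy per nucleon near iron, and the slow Coulomb-driven decline toward uranium, fall right out of these five terms.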

What can this formula explain? One example is the plot of average binding energy per nucleon as a function of A given in Fig. 17.3. At low A, this function predicts a very low binding energy because of the surface term (very small nuclei have a large surface-to-volume ratio). As A increases, the surface term becomes less important, but the Coulomb term increases as the nucleus is packed with more and more positive charge. For nuclei above about A = 60, the Coulomb term causes the binding energy to decrease as A increases. Therefore, the binding energy per nucleon reaches a peak for isotopes of elements such as iron and nickel, the most stable of nuclei, because of a competition between the surface and Coulomb terms. Although Russ and I did not mention it in our book, the smooth curve that most of the data cluster about in Fig. 17.3 is the prediction of the semiempirical mass formula.

If you hold A constant, you can examine the binding energy as a function of Z. This case is important for beta decay (in which a neutron is converted to a proton and an electron) and positron decay (in which a proton is converted to a neutron and a positron). The two terms in the semiempirical mass formula containing Z (the Coulomb term and the asymmetry term) combine to give a quadratic dependence of the binding energy on Z, as shown in Fig. 17.6. For odd A, the resulting parabola predicts the stable isobar (the value of Z) for that A. For even A, the pairing term splits this into two parabolas, one for even Z and one for odd Z (Fig. 17.7).
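The parabola is easy to see numerically. The sketch below scans Z at fixed odd A; the coefficients (in MeV) are typical fitted values assumed for illustration, not numbers from the text.

```python
# Sketch: the beta-decay parabola for a fixed odd A.  Coefficients (MeV)
# are typical fitted values, assumed for illustration.
def B(A, Z):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    b = aV*A - aS*A**(2/3) - aC*Z**2/A**(1/3) - aA*(A - 2*Z)**2/A
    if A % 2 == 0:                   # pairing acts only when A is even
        b += aP/A**0.5 if Z % 2 == 0 else -aP/A**0.5
    return b

A = 101                              # an odd-A chain of isobars
energies = {Z: B(A, Z) for Z in range(35, 55)}
Z_best = max(energies, key=energies.get)
print(Z_best)                        # near 43-44; the observed stable isobar is Ru-101 (Z = 44)
```

The formula puts the top of the parabola within about one unit of the observed stable isobar, which is all a five-parameter fit to hundreds of nuclei can be asked to do.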

Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, by Eisberg and Resnick, superimposed on Intermediate Physics for Medicine and Biology.
In their textbook, Eisberg and Resnick conclude that
The liquid drop model is the oldest, and most classical nuclear model. At the time the semiempirical mass formula was first developed, mass data was available, but not much else was known about nuclei. The parameters were purely empirical, and there was not even a qualitative understanding of the asymmetry and pairing terms. Nevertheless, the formula was significant because it described fairly accurately the masses of hundreds of nuclei in terms of only five parameters.

Friday, May 18, 2012

Spin Echo

The spin-echo of nuclear magnetic resonance is one of those concepts that anyone interested in medical physics should know. Russ Hobbie and I discuss the spin-echo’s role in magnetic resonance imaging in Chapter 18 of the 4th edition of Intermediate Physics for Medicine and Biology.
The pulse sequence shown in Fig. 18.17 can be used to determine T2 [the true or non-recoverable spin-spin relaxation time] and T2* [the experimental spin-spin relaxation time]. Initially a π/2 [90°] pulse nutates M [the magnetization] about the x' axis so that all spins lie along the rotating y' axis. Figure 18.17(a) shows two such spins. Spin a continues to precess at the same frequency as the rotating coordinate system; spin b is subject to a slightly smaller magnetic field and precesses at a slightly lower frequency, so that at time TE/2 it has moved clockwise in the rotating frame by angle θ, as shown in Fig. 18.17(b). At this time a π [180°] pulse is applied that rotates all spins around the x' axis. Spin a then points along the –y' axis; spin b rotates to the angle shown in Fig. 18.17(c). If spin b still experiences the smaller magnetic field, it continues to precess clockwise in the rotating frame. At time TE both spins are in phase again, pointing along –y' as shown in Fig. 18.17(d). The resulting signal is called an echo, and the process for producing it is called a spin-echo sequence.
When I discuss this concept in class, I use the analogy of a footrace. Suppose all runners line up at the starting line, and at the sound of the starter’s gun they begin to run clockwise around a track. Because they all run at somewhat different speeds, the pack of runners spreads until eventually (after many laps) they are distributed nearly evenly, and seemingly randomly, around the track. At this time another gun is fired, commanding all runners to turn around and run counterclockwise. Now, the fast runners who were ahead of the others are suddenly behind, and the slow runners who were behind the others are miraculously ahead. As time goes on, the fast runners catch up to the slow ones, and eventually they all meet in one tight pack as they run past the starting line. This unexpected regrouping of the runners is the echo. The analogy is not perfect, because the spins always precess in the same direction. Nevertheless, the 180° pulse has the effect of placing the fast spinners behind the slow spinners, which is the essence of both the spin echo effect and the runner analogy.
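The refocusing in the passage above (and the footrace analogy) can be simulated in a few lines. The echo time and the spread of precession frequencies below are arbitrary choices of mine.

```python
# Sketch of a spin echo in the rotating frame.  Each spin's transverse
# magnetization is a complex number (real axis = x', imaginary axis = y').
import cmath
import random

TE = 0.01                                    # echo time (s), arbitrary
offsets = [random.uniform(-100, 100) for _ in range(1000)]   # off-resonance (rad/s)

spins = []
for dw in offsets:
    m = 1j                                   # after the 90° pulse: along +y'
    m *= cmath.exp(-1j * dw * TE / 2)        # dephase for TE/2
    m = m.conjugate()                        # 180° pulse about x' maps y' to -y'
    m *= cmath.exp(-1j * dw * TE / 2)        # precess for another TE/2
    spins.append(m)

# At t = TE every spin points along -y', whatever its frequency offset:
print(all(abs(m + 1j) < 1e-9 for m in spins))   # True
```

The conjugation step is what puts the fast runners behind the slow runners: a spin that accumulated phase +φ finds itself at −φ, and the same precession that dephased it now brings it back into line.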

The spin-echo was first observed by physicist Erwin Hahn. His paper “Spin Echoes” (Physical Review, Volume 80, Pages 580–594, 1950) has been cited over 3000 times. Hahn wrote a citation classic article about this paper, in which he describes how he made his great discovery by accident.
One day a strange signal appeared on the oscilloscope, in the absence of a pulse pedestal, so I kicked the apparatus and breathed a sigh of relief when the signal went away. A week later, the signal returned, and this time it checked out to be a real spontaneous spin echo nuclear signal from the test sample of protons in the glycerine being used. In about three weeks, I was able to predict mathematically what I suspected to be a constructive interference of precessing nuclear magnetism components by solving the Bloch nuclear induction equations. Here for the first time, a free precession signal in the absence of driving radiation was observed first, and predicted later. The spin echo began to yield information about the local atomic environment in terms of various amplitude and frequency memory beat effects, certainly not all understood in the beginning.

As I look back at this experience, it was an awesome adventure to be alone with the apparatus showing one new effect after another at a time when there was no one at Illinois experienced in NMR with whom I could talk.
You can learn more about Hahn and his discovery of the spin-echo from the transcript of an oral history interview published by the Niels Bohr Library and Archives, part of the American Institute of Physics.

For those of you who are visual learners, Wikipedia has a nice animation of the formation of a spin-echo. Another animation is at http://mrsrl.stanford.edu/~brian/mri-movies/spinecho.mpg.

You can find an excellent video about spin-echo NMR on YouTube, narrated by Sir Paul Callaghan, a New Zealand physicist (this is part of a series of videos that nicely support the discussion in Chapter 18 of Intermediate Physics for Medicine and Biology). Callaghan was a leader in MRI physics, and wrote Principles of Nuclear Magnetic Resonance Microscopy and, more recently, Translational Dynamics and Magnetic Resonance: Principles of Pulsed Gradient Spin Echo NMR. Tragically, Callaghan lost his battle with colon cancer this March.

Paul Callaghan discusses the spin echo.

Friday, May 11, 2012

Stopping Power and the Bragg Peak

Proton therapy is becoming a popular treatment for cancer. Russ Hobbie and I discuss proton therapy in Chapter 16 of the 4th edition of Intermediate Physics for Medicine and Biology.
Protons are also used to treat tumors. Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak in Fig. 16.51. … The edges of proton fields are much sharper than for x rays and electrons. This can provide better tissue sparing, but it also means that alignments must be much more precise [Moyers (2003)]. Sparing tissue reduces side effects immediately after treatment. It also reduces the incidence of radiation-induced second cancers many years later.
Stopping power and range are key concepts in describing how radiation interacts with matter, and are defined in Chapter 15.
It is convenient to speak of how much energy the charged particle loses per unit path length, the stopping power, and its range—roughly, the total distance it travels before losing all its energy. The stopping power is the expectation value of the amount of kinetic energy T lost by the projectile per unit path length. (The term power is historical. The units of stopping power are J m−1 not J s−1.)
To illustrate these concepts, I have devised a new homework problem. It’s a bit like Problem 31 in Chapter 16, but uses a simpler expression for the energy dependence of the stopping power, and focuses on how this leads to a Bragg peak. This problem occasionally appears on the qualifier exam taken by our Medical Physics graduate students at Oakland University.
Section 16.11

Problem 31 ½   Assume the stopping power of a particle, S = − dT/dx, as a function of kinetic energy, T, is S = C/T. 
(a) What are the units of C? 
(b) If the initial kinetic energy at x = 0 is T0, find T(x). 
(c) Determine the range R of the particle as a function of C and T0. 
(d) Plot S(x) versus x. Does this plot contain a Bragg peak? 
(e) Discuss the implications of the shape of S(x) for radiation treatment using this particle.
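Without giving away the closed-form answer, you can check your solution numerically. The sketch below integrates the equation of motion with a simple Euler scheme; all constants are arbitrary, consistent units of my choosing.

```python
# A numerical check on the problem above (arbitrary, consistent units).
# Integrate dT/dx = -S = -C/T and watch the stopping power along the track.
C, T0, dx = 1.0, 10.0, 1e-3

T, x = T0, 0.0
S_entry = C / T0                 # stopping power at the entrance
while T > 0.05 * T0:             # follow the particle until it has nearly stopped
    T -= (C / T) * dx            # energy lost in one step
    x += dx

print(x)                         # ~49.9, essentially the full range T0**2/(2*C) = 50
print((C / T) / S_entry)         # ~20: the stopping power soars near the end of range
```

The second printed number is the Bragg peak in miniature: nearly all the energy is deposited in the last few steps of the track.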
The stopping power often does fall as 1/T for large energies, as assumed in the above problem, but it rises as the square root of T for small energies (see Fig. 15.17 in Intermediate Physics for Medicine and Biology). To find a more accurate expression for S(x), try repeating this problem with

S(T) = C/(T + A/√T) .

Warning: I wasn’t able to find a simple analytical expression for S(x) in this case. Can you?
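Even without a closed form for S(x), a numerical integration (with arbitrary constants of my choosing) shows what happens. One nice feature: although T(x) is implicit, the range integral R = ∫ dT/S = (T0^2/2 + 2A√T0)/C can still be done exactly.

```python
# Numerical sketch for the modified stopping power S(T) = C/(T + A/sqrt(T)).
# All constants are arbitrary.
C, A, T0, dx = 1.0, 1.0, 10.0, 1e-3

def S(T):
    return C / (T + A / T**0.5)

T, x = T0, 0.0
peak, x_peak = 0.0, 0.0
while T > 1e-3:                  # march the particle down to (nearly) rest
    s = S(T)
    if s > peak:
        peak, x_peak = s, x      # remember where the stopping power is largest
    T -= s * dx
    x += dx

R = (T0**2 / 2 + 2 * A * T0**0.5) / C   # exact range from the integral above
print(x, R)                      # both ~56.3
print(S(T0) < peak)              # True: S rises to a Bragg peak, then falls
```

Because S now goes to zero as T goes to zero, the Bragg peak has a finite height and the dose drops back toward zero at the end of the range, rather than diverging as in the pure 1/T model.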

One can imagine a proton incident with such low energy that it lies entirely on the rising part of the stopping power versus energy curve. In that case, a good approximation for the stopping power would be simply

S(T) = B √T .

This case can be solved exactly. Integrating S = −dT/dx = B√T gives T(x) = (√T0 − Bx/2)^2, so the stopping power S(x) = B√T0 − B^2x/2 decreases linearly with depth and the particle comes to rest at the finite range R = 2√T0/B. For such a low energy particle there is no Bragg peak; the dose simply falls off with depth. In any event, once the particle’s energy is comparable to the thermal energy the entire model breaks down, so the behavior near the very end of the range should not be taken too seriously.

These considerations illustrate how we gain much insight by examining simple toy models. That tends to be the approach Russ and I adopt in our book, which is at odds with the traditional view of biologists and medical doctors, who relish the diversity and complexity of life.