Friday, May 17, 2013

The Lorenz equations and chaos

Fifty years ago, Edward Lorenz (1917–2008) published an analysis of Rayleigh-Bénard convection that began the study of a field of mathematics called chaos theory. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce chaos by analyzing the logistic map, which is an example of chaotic behavior or deterministic chaos. Deterministic chaos has four important characteristics:
1. The system is deterministic, governed by a set of equations that define the evolution of the system.
2. The behavior is bounded. It does not go off to infinity.
3. The behavior of the variables is aperiodic in the chaotic regime. The values never repeat.
4. The behavior depends very sensitively on the initial conditions.
The sensitivity to initial conditions is sometimes called the “butterfly effect,” a term coined by Lorenz. His model is a simplified description of the atmosphere, and has implications for weather prediction.

The mathematical model that Lorenz analyzed consists of three first-order coupled nonlinear ordinary differential equations. Because of their historical importance, I have written a new homework problem that introduces Lorenz’s equations. These particular equations don’t have any biological applications, but the general idea of chaos and nonlinear dynamics certainly does (see, for example, Glass and Mackey’s book From Clocks to Chaos).
Section 10.7

Problem 33 1/2. Edward Lorenz (1963) published a simple, three-variable (x, y, z) model of Rayleigh-Bénard convection
dx/dt = σ (y – x)
dy/dt = x (ρ – z) – y
dz/dt = xy – β z
where σ=10, ρ=28, and β=8/3.
(a) Which terms are nonlinear?
(b) Find the three equilibrium points for this system of equations.
(c) Write a simple program to solve these equations on the computer (see Sec. 6.14 for some guidance on how to solve differential equations numerically). Calculate and plot x(t) as a function of t for different initial conditions. Consider two initial conditions that are very similar, and compute how the solutions diverge as time goes by.
(d) Plot z(t) versus x(t), with t acting as a parameter of the curve.

Lorenz, E. N. (1963) “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, Volume 20, Pages 130–141.
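For readers who want a head start on part (c), here is a minimal sketch in Python (my own, not from the book), using a fixed-step fourth-order Runge-Kutta integrator; the time step, run length, and initial conditions are arbitrary choices, and any standard ODE solver would work just as well.

import numpy as np
import matplotlib.pyplot as plt

# Lorenz's parameters, as given in the problem.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(r):
    # Right-hand side of the Lorenz equations for the state r = (x, y, z).
    x, y, z = r
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(r, dt):
    # Advance the state by one time step dt using fourth-order Runge-Kutta.
    k1 = lorenz(r)
    k2 = lorenz(r + 0.5 * dt * k1)
    k3 = lorenz(r + 0.5 * dt * k2)
    k4 = lorenz(r + dt * k3)
    return r + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 0.001, 50000        # integrate from t = 0 to t = 50 (arbitrary choices)
t = np.arange(nsteps + 1) * dt

# Two nearly identical initial conditions, differing by one part per million in x.
r1 = np.array([1.0, 1.0, 20.0])
r2 = np.array([1.0 + 1e-6, 1.0, 20.0])
traj1, traj2 = [r1.copy()], [r2.copy()]
for _ in range(nsteps):
    r1, r2 = rk4_step(r1, dt), rk4_step(r2, dt)
    traj1.append(r1.copy())
    traj2.append(r2.copy())
traj1, traj2 = np.array(traj1), np.array(traj2)

plt.figure()
plt.plot(t, traj1[:, 0], label="x(t), original initial condition")
plt.plot(t, traj2[:, 0], label="x(t), perturbed initial condition")
plt.xlabel("t")
plt.ylabel("x")
plt.legend()

plt.figure()                      # part (d): z versus x, with t as a parameter
plt.plot(traj1[:, 0], traj1[:, 2])
plt.xlabel("x")
plt.ylabel("z")
plt.show()

Plotting the difference between the two x(t) curves on a semilog scale makes the roughly exponential divergence asked about in part (c) easy to see.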
If you want to examine chaos in more detail, see Steven Strogatz’s excellent book Nonlinear Dynamics and Chaos. He has an entire chapter (his Chapter 9) dedicated to the Lorenz equations.

The story of how Lorenz stumbled upon the sensitivity to initial conditions is a fascinating tale. Here is one version, from a National Academy of Sciences Biographical Memoir about Lorenz written by Kerry Emanuel.
At one point, in 1961, Ed had wanted to examine one of the solutions [to a preliminary version of his model that contained 12 equations] in greater detail, so he stopped the computer and typed in the 12 numbers from a row that the computer had printed earlier in the integration. He started the machine again and stepped out for a cup of coffee. When he returned about an hour later, he found that the new solution did not agree with the original one. At first he suspected trouble with the machine, a common occurrence, but on closer examination of the output, he noticed that the new solution was the same as the original for the first few time steps, but then gradually diverged until ultimately the two solutions differed by as much as any two randomly chosen states of the system. He saw that the divergence originated in the fact that he had printed the output to three decimal places, whereas the internal numbers were accurate to six decimal places. His typed-in new initial conditions were inaccurate to less than one part in a thousand.

“At this point, I became rather excited,” Ed relates. He realized at once that if the atmosphere behaved the same way, long-range weather prediction would be impossible owing to extreme sensitivity to initial conditions. During the following months, he persuaded himself that this sensitivity to initial conditions and the nonperiodic nature of the solutions were somehow related, and was eventually able to prove this under fairly general conditions. Thus was born the modern theory of chaos.
To learn more about the life of Edward Lorenz, see his obituary here and here. I have not read Chaos: Making a New Science by James Gleick, but I understand that he tells Lorenz’s story there.

Friday, May 10, 2013

Graduation

Today, my wife Shirley and I will attend the graduation of our daughter Kathy from Vanderbilt University. She is getting her undergraduate degree, with a double major in biology and history.

Kathy spent part of her time in college working with John Wikswo in the Department of Physics. As regular readers of this blog may know, Wikswo was my PhD advisor when I was a graduate student at Vanderbilt in the 1980s. Russ Hobbie and I often cite Wikswo’s work in the 4th edition of Intermediate Physics for Medicine and Biology, for his contributions to both cardiac electrophysiology and biomagnetism. You can see a picture of Kathy and John in an article about undergraduate research in Arts and Science, the magazine of Vanderbilt University’s College of Arts and Science. Interestingly, there are now publications out there with “Roth and Wikswo” among the authors that I had nothing to do with; for example, see poster A53 at the 6th q-bio conference (Santa Fe, New Mexico, 2012). You can watch and listen to Wikswo give his TEDxNashville talk here. Kathy also worked with Todd Graham of the Department of Cell and Developmental Biology at Vanderbilt. This fall, she plans to attend graduate school studying biology at Michigan State University.

Friday, May 3, 2013

Biodamage Via Shock Waves Initiated by Irradiation With Ions

Just two doors down the hall from my office in Hannah Hall of Science at Oakland University is the office of my friend Gene Surdutovich. Gene teaches physics at OU, and is an international expert on the interaction of heavy ions with biological tissue. Recently he and his collaborators have published a review describing their work, titled “Biodamage Via Shock Waves Initiated by Irradiation With Ions,” in the journal Scientific Reports (Volume 3, Article Number 1289). Their abstract states
Radiation damage following the ionising radiation of tissue has different scenarios and mechanisms depending on the projectiles or radiation modality. We investigate the radiation damage effects due to shock waves produced by ions. We analyse the strength of the shock wave capable of directly producing DNA strand breaks and, depending on the ion’s linear energy transfer, estimate the radius from the ion’s path, within which DNA damage by the shock wave mechanism is dominant. At much smaller values of linear energy transfer, the shock waves turn out to be instrumental in propagating reactive species formed close to the ion’s path to large distances, successfully competing with diffusion.
Except for the British spelling, I enjoyed reading this paper very much.

Russ Hobbie and I discuss the interaction of ions with tissue in the 4th edition of Intermediate Physics for Medicine and Biology, in the context of proton therapy.
Protons are also used to treat tumors. Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak in Fig. 16.51.
Surdutovich and his colleagues note that the Bragg peak occurs for heavier ions too, such as carbon. What is really fascinating about their work is one of the mechanisms they propose. They note that
while the Bragg peak location answers a question of where most of the damage occurs, we are raising a question of how the damage happens… We investigate the effects that stem from a large inhomogeneity of the dose distribution in the vicinity of the Bragg peak on biological damage.
Their hypothesis is that when a carbon ion interacts with tissue, energy is abruptly deposited
within a cylinder of about one nm radius, which is so small that the temperature within this cylinder increases by over 1000 K by 10⁻¹³ s (we will refer to it as the “hot cylinder”). This increase of temperature brings about a rapid increase of pressure (up to 1 GPa) compared to the atmospheric pressure outside the cylinder. Such circumstances cause the onset of a cylindrical shock wave described by the strong explosion scenario. The pressure rapidly increases on the wave front and then decreases in the wake.
I find this mechanism to be fascinating. The theory is noteworthy for its multiscale approach: analyzing events spanning a wide range of time, space, and energy scales. Interestingly, their theory also spans multiple chapters in Intermediate Physics for Medicine and Biology: Chapter 1 about pressure, Chapter 3 on temperature and heat, Chapter 4 on diffusion, Chapter 13 on sound, Chapter 15 about the interaction of radiation with tissue, and Chapter 16 about the medical use of radiation. They conclude
The notion of thermomechanical effects represents a paradigm shift in our understanding of radiation damage due to ions and requires re-evaluation of relative biological effectiveness because of collective transport effects for all ions and direct covalent bond breaking by shock waves for ions heavier than argon. These effects will also have to be considered for high-density ion beams, irradiation with intensive laser fields, and other conditions prone to causing high gradients of temperature and pressure on a nanometre scale.
This paper appears in an open access online journal, so you don’t need a subscription to read it. Be sure to look at the paper’s supplementary information, especially the movie showing a molecular dynamics simulation of the shock wave distorting a DNA molecule. And if you read very closely, you will find this nugget appearing in the acknowledgments: “We are grateful to … Bradley Roth who critically read the manuscript.”

By the way, this fall Gene is scheduled to teach PHY 325, Biological Physics, using the textbook….you guessed it….Intermediate Physics for Medicine and Biology.

Friday, April 26, 2013

Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid

Sixty years ago yesterday the journal Nature published a letter by James Watson and Francis Crick titled “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid” (Volume 171, Pages 737–738), announcing the discovery of the double helix structure of DNA.

The 4th edition of Intermediate Physics for Medicine and Biology doesn’t discuss the structure of DNA much. As Russ Hobbie and I say in the preface, “Molecular biophysics has been almost completely ignored: excellent texts already exist, and this is not our area of expertise.” Yet, we do mention DNA occasionally. In the first section of our book, about distances and sizes, we say
Genetic information is stored in long, helical strands of deoxyribonucleic acid (DNA). DNA is about 2.5 nm wide, and the helix completes a turn every 3.4 nm along its length.
Problem 2 in the first chapter analyzes the volume of DNA.
Problem 2 Our genetic information or genome is stored in the parts of the DNA molecule called base pairs. Our genome contains about 3 billion (3×109) base pairs, and there are two copies in each cell. Along the DNA molecule, there is one base pair every one-third of a nanometer. How long would the DNA helix from one cell be if it were stretched out in a line? If the entire DNA molecule were wrapped up into a sphere, what would be the diameter of that sphere?
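For a sense of scale (my own rough numbers, not the book’s official solution): two copies of 3×10^9 base pairs at one base pair per third of a nanometer comes to about 2 m of DNA per cell, and wrapping that length of a 2.5-nm-wide strand (the width quoted above) into a sphere gives a diameter of only a few micrometers.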
A problem in Chapter 3 considers errors in DNA in the context of the Boltzmann factor.
Problem 25 The DNA molecule consists of two intertwined linear chains. Sticking out from each monomer (link in the chain) is one of four bases: adenine (A), guanine (G), thymine (T), or cytosine (C). In the double helix, each base from one strand bonds to a base in the other strand. The correct matches, A–T and G–C, are more tightly bound than are the improper matches. The chain looks something like this, where the last bond shown is an “error.”

A  T  G  C  G
T  A  C  G  A (error)

The probability of an error at 300 K is about 10−9 per base pair. Assume that this probability is determined by a Boltzmann factor e−U/kBT, where U is the additional energy required for a mismatch.
(a) Estimate this excess energy.
(b) If such mismatches are the sole cause of mutations in an organism, what would the mutation rate be if the temperature were raised 20° C?
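Here is a quick numerical sketch (my own back-of-the-envelope estimate in Python, not the book’s official solution) of how the two parts of this problem work out:

import numpy as np

kB = 1.381e-23                        # Boltzmann constant (J/K)
p300 = 1e-9                           # error probability per base pair at 300 K

# (a) Infer the excess energy U of a mismatch from p = exp(-U/kB T) at 300 K.
U = -kB * 300 * np.log(p300)          # about 8.6e-20 J, or roughly 0.5 eV

# (b) Re-evaluate the Boltzmann factor with the same U at 320 K.
p320 = np.exp(-U / (kB * 320))

print(f"U = {U:.2e} J = {U / 1.602e-19:.2f} eV")
print(f"p(320 K) = {p320:.2e}, about {p320 / p300:.1f} times the 300 K rate")

In this simple picture, raising the temperature by 20 °C increases the mismatch rate roughly fourfold.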
We discuss DNA again in Chapter 16, when considering radiation damage to tissue.
Cellular DNA is organized into chromosomes. In order to understand radiation damage to DNA, we must recognize that there are four phases in the cycle of cell division

Figure 16.33 shows, at different magnifications, a strand of DNA, various intermediate structures which we will not discuss, and a chromosome as seen during the M phase of the cell cycle. The size goes from 2 nm for the DNA double helix to 1400 nm for the chromosome. In addition to cell survival curves one can directly measure chromosome damage. There is strong evidence that radiation, directly or indirectly, breaks a DNA strand. If only one strand is broken, there are efficient mechanisms that repair it over the course of a few hours using the other strand as a template. If both strands are broken, permanent damage results, and the next cell division produces an abnormal chromosome. Several forms of abnormal chromosomes are known, depending on where along the strand the damage occurred and how the damaged pieces connected or failed to connect to other chromosome fragments. Many of these chromosomal abnormalities are lethal: the cell either fails to complete its next mitosis, or it fails within the next few divisions. Other abnormalities allow the cell to continue to divide, but they may contribute to a multistep process that sometimes leads to cancer many cell generations later.
The Double Helix, by James Watson.
The story of how the structure of DNA was discovered is nearly as fascinating as the structure itself. James Watson provides a first-person account in his book The Double Helix. It begins
I have never seen Francis Crick in a modest mood. Perhaps in other company he is that way, but I have never had reason so to judge him. It has nothing to do with his present fame. Already he is much talked about, usually with reverence, and someday he may be considered in the category of Rutherford or Bohr. But this was not true when, in the fall of 1951, I came to the Cavendish Laboratory of Cambridge University to join a small group of physicists and chemists working on the three-dimensional structures of proteins. At that time he was thirty-five, yet almost totally unknown. Although some of his closest colleagues realized the value of his quick, penetrating mind and frequently sought his advice, he was often not appreciated, and most people thought he talked too much.
The Eighth Day of Creation, by Horace Freeland Judson.
The Double Helix is one of those iconic books that everyone should read for the insights it provides into how science is done, and for what is simply a fascinating story. The tale is also told, from a more impartial perspective, in Horace Freeland Judson’s book The Eighth Day of Creation: The Makers of the Revolution in Biology. Let us end with Judson’s discussion of Watson and Crick’s now 60-year-old letter.
The letter to Nature appeared in the April 25th issue. To those of its readers who were close to the questions, and who had not already heard the news, the letter must have gone off like a string of depth charges in a calm sea. “We wish to suggest a structure for the salt of deoxyribose nucleic acid (D.N.A.). This structure has novel features which are of considerable biological interest,” the letter began; at the end, “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” The last sentence has been called one of the most coy statements in the literature of science.

Friday, April 19, 2013

Hyperpolarized 129Xe MRI of the Human Lung

Chapter 18 of the 4th edition of Intermediate Physics for Medicine and Biology is devoted to magnetic resonance imaging. Russ Hobbie and I discuss many aspects of MRI, including functional MRI and diffusion tensor imaging. One topic we do not cover is using hyperpolarized spins to study lung function. Fortunately, a clearly written review article, “Hyperpolarized 129Xe MRI of the Human Lung,” by John Mugler and Talissa Altes, appeared recently in the Journal of Magnetic Resonance Imaging (Volume 37, Pages 313–331, 2013). Below I reproduce excerpts from their introduction, with citations removed.
CLINICAL MAGNETIC RESONANCE IMAGING (MRI) of the lung is challenging due to the lung’s low proton density, which is roughly one-third that of muscle, and the inhomogeneous magnetic environment within the lung created by numerous air–tissue interfaces, which lead to a T2* value on the order of 1 msec at 1.5 T. Although advances continue with techniques such as ultrashort echo time (UTE) imaging of the lung parenchyma, conventional proton-based MRI is at a fundamental disadvantage for pulmonary applications because it cannot directly image the lung airspaces. This disadvantage of proton MRI can be overcome by turning to a gaseous contrast agent, such as the noble gas helium-3 (3He) or xenon-129 (129Xe), which upon inhalation permits direct visualization of lung airspaces in an MR image. With these agents, the low density of gas compared to that of solid tissue can be compensated by using a standalone, laser-based polarization device to increase the nuclear polarization to be roughly five orders of magnitude (10,000 times) higher than the corresponding thermal equilibrium polarization would be in the magnet of a clinical MR scanner. As a result, the MR signal from these hyperpolarized noble gases is increased by a proportionate amount and is easily detected using an MR scanner tuned to the appropriate resonance frequency. MRI using hyperpolarized gases has led to the development of numerous unique strategies for evaluating the structure and function of the human lung that provide advantages relative to current clinically available methods. For example, as compared with nuclear-medicine ventilation scintigraphy scans using 133Xe or aerosolized technetium-99m DTPA, hyperpolarized-gas ventilation MR images provide improved temporal and spatial resolution, expose the patient to no ionizing radiation, and can be repeated multiple times in a single day if desired. Although inhaled xenon has also been used as a contrast agent with computed tomography (CT), which can provide high spatial and temporal resolution, the high radiation dose and low contrast on the resulting ventilation images has dampened enthusiasm for the CT-based technique.

Although the first hyperpolarized-gas MR images were obtained using hyperpolarized 129Xe, and images of the human lung were acquired with hyperpolarized 129Xe only a few years later, the vast majority of work in humans has been performed using hyperpolarized 3He instead. This occurred primarily because 3He provided a stronger MR signal, due to its larger nuclear magnetic moment (and hence larger gyromagnetic ratio) compared to 129Xe and historically high levels of polarization (greater than 30%) achieved for 3He, and because there are no significant safety concerns associated with inhaled helium. However, in the years following the terrorist attacks of 9/11 there was a surge in demand for 3He for use in neutron detectors for port and border security, and this demand far exceeded the replenishment rate from the primary source, the decay of tritium used in nuclear warheads. As a result, 3He prices skyrocketed and availability plummeted. Currently, the U.S. government is regulating the supply of 3He, allocating this precious resource among users whose research or applications depend on 3He’s unique physical properties. This includes an annual allocation for medical imaging, which allows research on hyperpolarized 3He MRI of the lung to continue. Nonetheless, unless a new source for 3He is found it is clear that insufficient 3He is available to permit hyperpolarized 3He MRI of the lung to translate from the research community to a clinical tool.

In contrast to 3He, 129Xe is naturally abundant on Earth and its cost is relatively low. Thus, 129Xe is the obvious potential alternative to 3He as an inhaled contrast agent for MRI of the lung. While the 3He availability crisis has accelerated efforts to develop and evaluate hyperpolarized 129Xe for human applications, it is important to understand that 129Xe is not just a lower-signal alternative to 3He, forced upon us by practical necessity. In particular, the relatively high solubility of xenon in biological tissues and an exquisite sensitivity to its environment, which results in an enormous range of chemical shifts upon solution, make hyperpolarized 129Xe particularly attractive for exploring certain characteristics of lung function, such as gas exchange and uptake, that cannot be accessed using hyperpolarized 3He. The quantitative characteristics of gas exchange and uptake are determined by parameters of physiologic relevance, including the thickness of the blood–gas barrier, and thus measurements that quantify this process offer a potential wealth of information on the functional status of the healthy and diseased lung.

Historically, polarization levels for liter-quantities of hyperpolarized 129Xe have been roughly 10%, while those for similar quantities of hyperpolarized 3He have been greater than 30%. (Recall that the thermal equilibrium polarization of water protons at 1.5T is 0.0005%—four to five orders of magnitude lower.) Given 129Xe’s lower nuclear magnetic moment, this situation has put hyperpolarized 129Xe at a distinct disadvantage relative to 3He. A recent, key advance for 129Xe is the development of systems that can deliver liter quantities of hyperpolarized 129Xe with polarization on the order 50%. This now puts 129Xe on a competitive footing with 3He, positioning MRI of the human lung using hyperpolarized 129Xe to advance quickly in the immediate future, and making hyperpolarized 129Xe MRI of interest to the broader radiology and medical-imaging communities.
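As a quick check (my own back-of-the-envelope calculation, not from the paper), the quoted 0.0005% figure follows from the high-temperature limit of the proton polarization, P ≈ ħγB/(2kBT), evaluated at 1.5 T and body temperature:

hbar = 1.055e-34       # reduced Planck constant (J s)
kB = 1.381e-23         # Boltzmann constant (J/K)
gamma = 2.675e8        # proton gyromagnetic ratio (rad/(s T))
B = 1.5                # field strength (T)
T = 310.0              # body temperature (K)

P = hbar * gamma * B / (2 * kB * T)    # high-temperature (Boltzmann) limit
print(f"thermal polarization = {100 * P:.4f} %")   # about 0.0005 %

Dividing a hyperpolarized level of 10% to 50% by this number gives the four-to-five orders of magnitude enhancement mentioned in the excerpt.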
This idea of gas hyperpolarization is fascinating. How does one hyperpolarize the gas? Mugler and Altes explain:
Although it is possible to image either 129Xe or 3He by simply placing the gas (in a suitable container) in the magnet of an MR scanner, the low density of gas compared to that of solid tissue results in a signal that is too low to be of practical use for imaging the human lung… Nonetheless, the nuclear polarization can be increased dramatically compared to that produced by the magnet of the MR scanner by using a method called optical pumping and spin exchange (OPSE), which was originally developed for nuclear-physics experiments many years before being applied to medical imaging.

As its name implies, OPSE is, in concept, a two-step process. The first step, optical pumping, involves using a laser to generate electron-spin polarization in a vapor of an alkali metal. This process takes place within a glass container, called an optical cell…positioned within a magnetic field... A small amount of the alkali metal, typically rubidium, is placed in the cell, which is heated…during the polarization process to create rubidium vapor. The optical cell is illuminated with circularly polarized laser light…at a specific wavelength (795 nm) to optically pump the rubidium atoms. This pumping preferentially populates one of the two spin states for the valence electron, thereby polarizing the associated electron spins and resulting in electron-spin polarization approaching 100%. In the second step of OPSE, collisions between spin-polarized rubidium atoms and noble-gas (129Xe or 3He) atoms within the cell result in spin exchange—the transfer of polarization from rubidium electrons to noble-gas nuclei...
To learn more, you can hear John Mugler discuss hyperpolarized gas MRI in the lung on YouTube.

John Mugler discusses hyperpolarized gas MRI in the lung.

Friday, April 12, 2013

Radon

The largest source of natural background radiation is radon gas. In Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss radon.
The naturally occurring radioactive nuclei are either produced continuously by cosmic γ ray bombardment, or they are the products in a decay chain from a nucleus whose half-life is comparable to the age of the earth. Otherwise they would have already decayed. There are three naturally occurring radioactive decay chains near the high-Z end of the periodic table. One of these is the decay products from 238U, shown in Fig. 17.27. The half-life of 238U is 4.5 × 10⁹ yr, which is about the same as the age of the earth. A series of α and β decays lead to radium, 226Ra, which undergoes α decay with a half-life of 1620 yr to radon, 222Rn.

Uranium, and therefore radium and radon, are present in most rocks and soil. Radon, a noble gas, percolates through grainy rocks and soil and enters the air and water in different concentrations. Although radon is a noble gas, its decay products have different chemical properties and attach to dust or aerosol droplets which can collect in the lungs. High levels of radon products in the lungs have been shown by both epidemiological studies of uranium miners and by animal studies to cause lung cancer.
In Chapter 16 we consider radon in the context of the risk to the general population from low levels of background radiation.
The question of a hormetic effect or a threshold effect [as opposed to the linear no-threshold model of radiation exposure] has received a great deal of attention for the case of radon, where remediation at fairly low radon levels has been proposed. Radon is produced naturally in many types of rock. It is a noble gas, but its radioactive decay products can become lodged in the lung. An excess of lung cancer has been well documented in uranium miners, who have been exposed to fairly high radon concentrations as well as high dust levels and tobacco smoke. Radon at lower concentrations seeps from the soil into buildings and contributes up to 55% of the exposure to the general population.
Building Blocks of the Universe, by Isaac Asimov.
Given the high profile of radon in our book, I thought readers might be interested in a bit of the history of this element. A brief discussion of the discovery of radon can be found in Isaac Asimov’s book Building Blocks of the Universe. After Asimov describes the Curies’ discoveries of radium and polonium from uranium ore in the late 1890s, he writes
When the radium atom breaks up, it forms an atom of radon, element No. 86. Radon is a gas, a radioactive gas! It fits into the inert gas column of the periodic table, right under xenon, and has all the chemical characteristics of the other inert gases.

Radon was first discovered in 1900 by a chemist named F. E. Dorn, and he called it radium emanation because it emanated from (that is, was given off by) radium. [William] Ramsay and R. Whytlaw-Gray collected the gas in 1908, and they called it niton from a Greek word meaning “shining.” In 1923, though, the official name became “radon” to show that the gas arose from radium…

Other gases arise from the breakdown of thorium and actinium … and have been called thoron and actinon, respectively. These are, as it turns out, varieties [isotopes] of radon. However, there have been suggestions that the element be named emanon (from “emanation”) since it does not arise from the breakdown of radium only, as “radon” implies.
Asimov's Biographical Encyclopedia of Science and Technology, by Isaac Asimov.
Asimov’s Biographical Encyclopedia of Science and Technology describes the scientist who discovered radon in more detail.
Dorn, Friedrich Ernst, German physicist
Born: Guttstadt (now Dobre Miasto, Poland), East Prussia, July 27, 1848
Died: Halle, June 13, 1916

Dorn was educated at the University of Konigsberg and taught physics at the universities of Darmstadt and Halle. He turned to the study of radioactivity in the wake of Madame Curie’s discoveries and in 1900 showed that radium not only produced radioactive radiations, but also gave off a gas that was itself radioactive.

This gas eventually received the name radon and turned out to be the final member of Ramsay’s family of inert gases. It was the first clear-cut demonstration that in the process of giving off radioactive radiation, one element was transmuted (shades of the alchemists!) to another. This concept was carried further by Boltwood and Soddy.

Friday, April 5, 2013

Leon Glass wins Winfree Prize

From Clocks to Chaos, by Glass and Mackey.
Leon Glass was honored recently by the Society for Mathematical Biology. Their website states
The Society for Mathematical Biology is pleased to announce that this year’s recipient of the Arthur T. Winfree prize is Prof. Leon Glass of McGill University. Awarded every two years to a scientist whose work has “led to significant new biological understanding affecting observation/experiments,” this prize commemorates the creativity, imagination and intellectual breadth of Arthur T. Winfree.

Beginning with simple and brilliantly chosen experiments, Leon launched the study of chaos in biology. Among the applications he and his many collaborators and students pursued was the novel idea of “dynamical disease” and the better understanding of pathologies like Parkinson’s disease and cardiac arrhythmias. His elegant work (with Michael Guevara and Alvin Shrier) on periodic stimulation of heart cells demonstrated and explained how the interaction of nonlinearities with oscillations create complex dynamics and chaos.

The book From Clocks to Chaos, which he co-authored with Michael Mackey, was an instant classic that illuminated this difficult subject for a whole generation of mathematical biologists. His combination of imagination, experimental and mathematical insight, and ability to communicate fundamental principles has launched new fields of research and inspired researchers ranging from applied mathematicians to medical researchers.
Leon Glass is the Isadore Rosenfeld Chair in Cardiology at McGill University. Russ Hobbie and I cite From Clocks to Chaos (discussed previously in this blog) in the 4th edition of Intermediate Physics for Medicine and Biology, especially in Chapter 10 when discussing nonlinear dynamics. According to Google Scholar, the book has been cited 1800 times. Even more highly cited (over 2600 times) is Mackey and Glass’s paper “Oscillation and Chaos in Physiological Control Systems” (Science, Volume 197, Pages 287–289, 1977), which Russ and I also cite.

Other books and papers mentioned in IPMB include
Bub, G., A. Shrier, and L. Glass (2002) “Spiral wave generation in heterogeneous excitable media,” Phys. Rev. Lett., Volume 88, Article Number 058101.

Glass, L., Y. Nagai, K. Hall, M. Talajic, and S. Nattel (2002) “Predicting the entrainment of reentrant cardiac waves using phase resetting curves,” Phys. Rev. E, Volume 65, Article Number 021908.

Guevara, M. R., L. Glass, and A. Shrier (1981) “Phase locking, period-doubling bifurcations and irregular dynamics in periodically stimulated cardiac cells,” Science, Volume 214, Pages 1350–1353.

Glass, L. (2001) “Synchronization and rhythmic processes in physiology,” Nature, Volume 410, Pages 277–284.

Kaplan, D., and L. Glass (1995) Understanding Nonlinear Dynamics. New York, Springer-Verlag.
You can listen to Glass talk about cardiac arrhythmias below.

Leon Glass talks about cardiac arrhythmias.

Friday, March 29, 2013

1932: A Watershed Year in Nuclear Physics

I should have posted this article last year, to mark the 80th anniversary of the annus mirabilis of nuclear physics. Unfortunately, I didn’t think of it until I read “1932: A Watershed Year in Nuclear Physics” by Joseph Reader and Charles Clark, which appeared in the March 2013 issue of Physics Today. The article describes four major discoveries that changed nuclear physics forever.

 

Deuterium

The first landmark result was Harold Urey’s paper of January 1, 1932, in which he reported his discovery of deuterium, the isotope 2H. Russ Hobbie and I mention deuterium in Homework Problem 45 of Chapter 4 in the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 45 Using the definitions in Problem 44, write the diffusion constant in terms of λ and vrms. By how much do you expect the diffusion constant for “heavy water” (water in which the two hydrogen atoms are deuterium, 2H) to differ from the diffusion constant for water? Assume the mean free path is independent of mass.
Unlike many elements, for which the various stable isotopes differ in mass by only a few percent, deuterium has twice the mass of normal hydrogen. Even when deuterium is incorporated into water, the H2O molecule’s mass increases by a significant 11%. Heavy water has been used as a non-radioactive biological tracer.
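A quick estimate (mine, not the book’s solution): with the mean free path held fixed, the diffusion constant scales as vrms, which goes as 1/√m, so for heavy water D should be smaller by roughly sqrt(18/20) ≈ 0.95, about a 5% reduction.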

The Neutron

A second advance of 1932, and in my opinion the most important, was the discovery of the neutron by James Chadwick. Of course, the idea of a neutron is central to nuclear physics, and you cannot make sense of isotopes without it (I wonder how Urey interpreted deuterium before the neutron was discovered). Russ and I discuss neutrons throughout Chapter 17 on Nuclear Medicine, and in particular we describe boron neutron capture therapy in Chapter 16.
Boron neutron capture therapy (BNCT) is based on a nuclear reaction which occurs when the stable isotope 10B is irradiated with neutrons, leading to the nuclear reaction (in the notation of Chapter 17)
¹⁰₅B + ¹₀n → ⁴₂α + ⁷₃Li
... Both the alpha particle and lithium are heavily ionizing and travel only about one cell diameter. BNCT has been tried since the 1950s; success requires boron-containing drugs that accumulate in the tumor.

The Positron

Discovery number three is the positron, the first example of antimatter. Carl Anderson found evidence of this positive electron in cosmic ray tracks in cloud chambers. Positrons appear in IPMB in two key places. In Chapter 15 (The Interaction of Photons and Charged Particles with Matter) positrons are key to pair production.
A photon with energy above 1.02 MeV can produce a particle–antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mₑc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mₑc² = 1.02 MeV.
The positron appears again in our discussion of β+ decay in Chapter 17.
Two modes of decay allow a nucleus to approach the stable line. In beta (or electron) decay, a neutron is converted into a proton. This keeps A [mass number] constant, lowering N [neutron number] by one and raising Z [atomic number] by one. In positron (β+) decay, a proton is converted into a neutron. Again A remains unchanged, Z decreases and N increases by 1. We find β+ decay for nuclei above the line of stability and β− decay for nuclei below the line.
Isotopes that undergo β+ decay are used in positron emission tomography.
If a positron emitter is used as the radionuclide, the positron comes to rest and annihilates an electron, emitting two annihilation photons back to back. In positron emission tomography (PET) these are detected in coincidence….

PET can provide a functional image with information about metabolic activity. A very common positron agent is 18F fluorodeoxyglucose, glucose in which a hydroxyl group has been replaced with 18F. The PET signal is largest in those cells that have taken up the 18F because they are actively metabolizing glucose. PET has become particularly important in studies of brain function, where active neurons are detected by an increase in their metabolism, and in locating metastatic cancer.

Accelerators

The last of the four great developments of 1932 was the first use of accelerators to study nuclear reactions. John Cockcroft and Ernest Walton built an accelerator to produce high-energy protons, which smashed into 7Li to produce two alpha particles. Their work was soon followed by the invention of the cyclotron by Ernest Lawrence, which is now the main tool for producing the unstable isotopes used in PET. Russ and I explain that
Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals.
The Making of the Atomic Bomb, by Richard Rhodes.

Soon after the miraculous year of 1932, Hitler came to power in Germany, and nuclear physics became much more than a scientific curiosity. The story of how the discoveries of Urey, Chadwick, Anderson, Cockcroft and Walton led relentlessly to the Manhattan Project is told masterfully in Richard Rhodes’ book The Making of the Atomic Bomb.

I have a few personal connections to this watershed year. My father, Ron Roth, now retired and living in Lenexa, Kansas, was born in 1932, proving that we are not so far removed from that historic time. In addition, my academic genealogy goes back to James Chadwick and Ernest Rutherford (whose lab Cockcroft and Walton worked in). Finally, Carl Anderson worked under the supervision of Robert Millikan, who was born in Morrison, Illinois, the small town I grew up in.

Friday, March 22, 2013

Barouh Berkovits (1926–2012)

When my March 2013 issue of the journal Heart Rhythm arrived this week, I found in it an obituary for Barouh Berkovits, who died last year.
Barouh Vojtec Berkovits passed away on October 23, 2012, at the age of 86 years. Berkovits was a master of science and an electrical engineer. Born in 1926 in Lucenec, Czechoslovakia (today Slovakia), he worked as a technician behind the enemy lines. He escaped the Holocaust, but his parents and sister Eva perished in Auschwitz, Poland. In 1949 he immigrated to Israel and in 1956 to the United States… Berkovits invented and patented the first demand pacemaker capable of sensing the R wave… For his contributions to the treatment of cardiac arrhythmias, Berkovits received the “Distinguished Scientist Award” in 1982 by the Heart Rhythm Society.
Machines in Our Hearts, by Kirk Jeffrey.
The story of how Berkovits invented the demand pacemaker is told in Machines in Our Hearts, by Kirk Jeffrey.
Barouh V. Berkovits (b. 1924), an engineer at the American Optical Company, was already well known as the inventor of the DC defibrillator and the cardioverter, a device that interrupts a rapid heart rate (tachycardia) with low-energy shocks. He knew that when the cardioverter discharged randomly into the tachycardia, it would “occasionally not only not stop the tachyarrhythmia…but would produce ventricular fibrillation.” Cardioversion has to be synchronized to fall within the QRS complex and avoid the vulnerable period of the heartbeat. In 1963, Berkovits applied this principle to cardiac pacing. To solve the problem of competition [between the SA node and the artificial pacemaker], Berkovits in 1963 designed a sensing capability into the pacemaker. His invention behaved exactly like an asynchronous pacer until it detected a naturally occurring R wave, the indication of a ventricular contraction. This event would reset the timing circuit of the pacemaker, and the countdown to the next stimulus would begin anew. Thus the pacer stimulated the heart only when the ventricles failed to contract. It worked only “on demand.” As an added benefit, non-competitive pacing extended the life of the battery.
The 4th edition of Intermediate Physics for Medicine and Biology does not mention Berkovits by name, but Homework Problem 45 in Chapter 7 does analyze the demand pacemaker.
Problem 45 A patient with “intermittent heart block” has an AV node which functions normally most of the time with occasional episodes of block, lasting perhaps several hours. Design a pacemaker to treat the patient. Ideally, your design will not stimulate the heart when it is functioning normally. Describe
(a) whether you will stimulate the atria or ventricles
(b) which chambers you will monitor with a recording electrode
(c) what logic your pacemaker will use to determine when to stimulate. Your design may be similar to a “demand pacemaker” described in Jeffrey (2001), p. 132.
Of course, the reference is to Machines in Our Hearts. Berkovits’s phenomenal career is yet another example of how knowledge of engineering and physics can allow you to contribute to medicine and biology.

Friday, March 15, 2013

The Technology of Medicine

In Chapter 5 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the artificial kidney as an example of the use of physics and engineering to solve a medical problem. Rather than delving into the engineering details, today we consider the larger question of technology in medicine. Russ and I write
The reader should also be aware that this “high-technology” solution to the problem of chronic renal disease is not an entirely satisfactory one… The distinction between a high-technology treatment and a real conquest of a disease has been underscored by Thomas (1974, pp. 31–36).
The Lives of a Cell, by Lewis Thomas.
The citation is to the book The Lives of a Cell by Lewis Thomas (physician, poet, essayist, administrator). To me, his essays—written 40 years ago—sound surprisingly modern. For instance, the introduction to his essay about “The Technology of Medicine” is relevant today as we struggle with the role of expensive technology in the ever-increasing cost of health care.
Technology assessment has become a routine exercise for the scientific enterprises on which the country is obliged to spend vast sums for its needs. Brainy committees are continually evaluating the effectiveness and cost of doing various things in space, defense, energy, transportation, and the like, to give advice about prudent investments for the future. Somehow medicine, for all the $80-odd billion that it is said to cost the nation [$2-something trillion in 2013], has not yet come in for much of this analytical treatment. It seems taken for granted that the technology of medicine simply exists, take it or leave it, and the only major technologic problem which policy-makers are interested in is how to deliver today's kind of health care, with equity, to all the people.

When, as is bound to happen sooner or later, the analysts get around to the technology of medicine itself, they will have to face the problem of measuring the relative cost and effectiveness of all the things that are done in the management of disease. They make their living at this kind of thing, and I wish them well, but I imagine they will have a bewildering time… 
The analysts have finally gotten around to it. As our nation spends 15% of our Gross Domestic Product on health care, the costs of technology in medicine are no longer taken for granted. The Affordable Care Act (a.k.a. Obamacare) focuses on research into the comparative effectiveness of treatments, often measured by the incremental cost-effectiveness ratio. And as Thomas predicted, the analysts are having a bewildering time dealing with it.

Thomas divides “technology” into three types. His first type is not really technology at all (“no-technology”). It is “supportive therapy” that “tides patients over through diseases that are not, by and large, understood.” There is not much physics in this category, so we will move on.

The second level of technology, which he calls “halfway technology,” represents “the kinds of things that must be done after the fact, in efforts to compensate for the incapacitating effects of certain diseases whose course one is unable to do very much about. It is a technology designed to make up for disease, or to postpone death.” The artificial kidney, as well as kidney transplants, fall into this category, and “almost everything offered today for the treatment of heart disease is at this level of technology, with the transplanted and artificial hearts as ultimate examples.” There is lots of physics in this category. Yet, Thomas sees it as, at best, an intermediate—and expensive—type of medical solution. “In fact, this level of technology is, by its nature, at the same time highly sophisticated and profoundly primitive. It is the kind of thing that one must continue to do until there is a genuine understanding of the mechanisms involved in disease.”

The third type of technology is based on a complete understanding of the causes of disease. Thomas writes that it “is the kind that is so effective that it seems to attract the least public notice; it has come to be taken for granted. This is the genuinely decisive technology of modern medicine, exemplified best by modern methods for immunization against diphtheria, pertussis, and the childhood virus diseases, and the contemporary use of antibiotics and chemotherapy for bacterial infections.”

Is there physics in this third category? I think so. Biological mechanisms will be based, ultimately, on the constraints of physical laws, and one can’t hope to understand biology without physics (at least, this is what I believe). Perhaps we can say that physics and engineering are essential for the second type of technology, whereas physics and biology are essential for the third type.

Thomas clearly favors the third category. He concludes
The point to be made about this kind [the third type] of technology—the real high technology of medicine—is that it comes as the result of a genuine understanding of disease mechanisms, and when it becomes available, it is relatively inexpensive, and relatively easy to deliver.

Offhand, I cannot think of any important human disease for which medicine possesses the outright capacity to prevent or cure where the cost of the technology is itself a major problem. The price is never as high as the cost of managing the same diseases during the earlier stages of no-technology or halfway technology…

It is when physicians are bogged down by their incomplete technologies, by the innumerable things they are obliged to do in medicine when they lack a clear understanding of disease mechanisms, that the deficiencies of the health-care system are most conspicuous. If I were a policy-maker, interested in saving money for health care over the long haul, I would regard it as an act of high prudence to give high priority to a lot more basic research in biologic science. This is the only way to get the full mileage that biology owes to the science of medicine, even though it seems, as used to be said in the days when the phrase still had some meaning, like asking for the moon.
As we face the looming crisis of budget sequestration, with its devastating cutbacks in funding for the National Institutes of Health and the National Science Foundation, and as the calls for translational medical research increase, I urge our legislators to heed Thomas’s advice and “give high priority to a lot more basic research.”