Friday, April 26, 2013

Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid

Sixty years ago yesterday the journal Nature published a letter by James Watson and Francis Crick titled “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid” (Volume 171, Pages 737–738), announcing the discovery of the double helix structure of DNA.

The 4th edition of Intermediate Physics for Medicine and Biology doesn’t discuss the structure of DNA much. As Russ Hobbie and I say in the preface, “Molecular biophysics has been almost completely ignored: excellent texts already exist, and this is not our area of expertise.” Yet, we do mention DNA occasionally. In the first section of our book, about distances and sizes, we say
Genetic information is stored in long, helical strands of deoxyribonucleic acid (DNA). DNA is about 2.5 nm wide, and the helix completes a turn every 3.4 nm along its length.
Problem 2 in the first chapter analyzes the volume of DNA
Problem 2 Our genetic information or genome is stored in the parts of the DNA molecule called base pairs. Our genome contains about 3 billion (3 × 10⁹) base pairs, and there are two copies in each cell. Along the DNA molecule, there is one base pair every one-third of a nanometer. How long would the DNA helix from one cell be if it were stretched out in a line? If the entire DNA molecule were wrapped up into a sphere, what would be the diameter of that sphere?
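One way to check the numbers (not from the book; just the figures quoted above, treating the stretched-out molecule as a 2.5-nm-wide cylinder):

```python
import math

base_pairs = 2 * 3e9             # two copies of ~3 billion base pairs per cell
spacing = 1e-9 / 3               # one base pair every third of a nanometer (m)
length = base_pairs * spacing    # stretched-out length: about 2 m

radius = 2.5e-9 / 2              # DNA is about 2.5 nm wide
volume = length * math.pi * radius**2          # volume of the cylinder
diameter = (6 * volume / math.pi) ** (1 / 3)   # sphere of the same volume

print(length)     # about 2 m
print(diameter)   # a few micrometers
```

Two meters of DNA packed into a sphere a few micrometers across: roughly the size of a cell nucleus.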
A problem in Chapter 3 considers errors in DNA in the context of the Boltzmann factor.
Problem 25 The DNA molecule consists of two intertwined linear chains. Sticking out from each monomer (link in the chain) is one of four bases: adenine (A), guanine (G), thymine (T), or cytosine (C). In the double helix, each base from one strand bonds to a base in the other strand. The correct matches, A–T and G–C, are more tightly bound than are the improper matches. The chain looks something like this, where the last bond shown is an “error.”

A  T  G  C  G
T  A  C  G  A (error)

The probability of an error at 300 K is about 10⁻⁹ per base pair. Assume that this probability is determined by a Boltzmann factor e^(−U/kBT), where U is the additional energy required for a mismatch.
(a) Estimate this excess energy.
(b) If such mismatches are the sole cause of mutations in an organism, what would the mutation rate be if the temperature were raised 20 °C?
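A sketch of the estimate the problem asks for, assuming only the Boltzmann-factor model quoted above (the numerical values below are my arithmetic, not figures from the book):

```python
import math

kB = 1.381e-23                  # Boltzmann constant, J/K
p300 = 1e-9                     # error probability per base pair at 300 K
U = -kB * 300 * math.log(p300)  # invert p = exp(-U/(kB*T)) for the excess energy
U_eV = U / 1.602e-19            # roughly half an electron volt

p320 = math.exp(-U / (kB * 320))  # same U, temperature raised 20 degrees
# the error rate rises by roughly a factor of 3 to 4
```

The half-electron-volt scale is comparable to a few hydrogen bonds, which is why the Boltzmann-factor picture is at least plausible.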
We discuss DNA again in Chapter 16, when considering radiation damage to tissue.
Cellular DNA is organized into chromosomes. In order to understand radiation damage to DNA, we must recognize that there are four phases in the cycle of cell division…

Figure 16.33 shows, at different magnifications, a strand of DNA, various intermediate structures which we will not discuss, and a chromosome as seen during the M phase of the cell cycle. The size goes from 2 nm for the DNA double helix to 1400 nm for the chromosome. In addition to cell survival curves, one can directly measure chromosome damage. There is strong evidence that radiation, directly or indirectly, breaks a DNA strand. If only one strand is broken, there are efficient mechanisms that repair it over the course of a few hours using the other strand as a template. If both strands are broken, permanent damage results, and the next cell division produces an abnormal chromosome. Several forms of abnormal chromosomes are known, depending on where along the strand the damage occurred and how the damaged pieces connected or failed to connect to other chromosome fragments. Many of these chromosomal abnormalities are lethal: the cell either fails to complete its next mitosis, or it fails within the next few divisions. Other abnormalities allow the cell to continue to divide, but they may contribute to a multistep process that sometimes leads to cancer many cell generations later.
The Double Helix, by James Watson, superimposed on Intermediate Physics for Medicine and Biology.
The Double Helix,
by James Watson.
The story of how the structure of DNA was discovered is nearly as fascinating as the structure itself. James Watson provides a first-person account in his book The Double Helix. It begins
I have never seen Francis Crick in a modest mood. Perhaps in other company he is that way, but I have never had reason so to judge him. It has nothing to do with his present fame. Already he is much talked about, usually with reverence, and someday he may be considered in the category of Rutherford or Bohr. But this was not true when, in the fall of 1951, I came to the Cavendish Laboratory of Cambridge University to join a small group of physicists and chemists working on the three-dimensional structures of proteins. At that time he was thirty-five, yet almost totally unknown. Although some of his closest colleagues realized the value of his quick, penetrating mind and frequently sought his advice, he was often not appreciated, and most people thought he talked too much.
The Eighth Day of Creation, by Horace Freeland Judson, superimposed on Intermediate Physics for Medicine and Biology.
The Eighth Day of Creation,
by Horace Freeland Judson.
The Double Helix is one of those iconic books that everyone should read for the insights it provides into how science is done, and for what is simply a fascinating story. The tale is also told from a more unbiased perspective in Horace Freeland Judson’s book The Eighth Day of Creation: The Makers of the Revolution in Biology. Let us end with Judson’s discussion of Watson and Crick’s now 60-year-old letter.
The letter to Nature appeared in the April 25th issue. To those of its readers who were close to the questions, and who had not already heard the news, the letter must have gone off like a string of depth charges in a calm sea. “We wish to suggest a structure for the salt of deoxyribose nucleic acid (D.N.A.). This structure has novel features which are of considerable biological interest,” the letter began; at the end, “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” The last sentence has been called one of the most coy statements in the literature of science.

Friday, April 19, 2013

Hyperpolarized 129Xe MRI of the Human Lung

Chapter 18 of the 4th edition of Intermediate Physics for Medicine and Biology is devoted to magnetic resonance imaging. Russ Hobbie and I discuss many aspects of MRI, including functional MRI and diffusion tensor imaging. One topic we do not cover is using hyperpolarized spins to study lung function. Fortunately, a clearly written review article, “Hyperpolarized 129Xe MRI of the Human Lung,” by John Mugler and Talissa Altes, appeared recently in the Journal of Magnetic Resonance Imaging (Volume 37, Pages 313–331, 2013). Below I reproduce excerpts from their introduction, with citations removed.
CLINICAL MAGNETIC RESONANCE IMAGING (MRI) of the lung is challenging due to the lung’s low proton density, which is roughly one-third that of muscle, and the inhomogeneous magnetic environment within the lung created by numerous air–tissue interfaces, which lead to a T2* value on the order of 1 msec at 1.5 T. Although advances continue with techniques such as ultrashort echo time (UTE) imaging of the lung parenchyma, conventional proton-based MRI is at a fundamental disadvantage for pulmonary applications because it cannot directly image the lung airspaces. This disadvantage of proton MRI can be overcome by turning to a gaseous contrast agent, such as the noble gas helium-3 (3He) or xenon-129 (129Xe), which upon inhalation permits direct visualization of lung airspaces in an MR image. With these agents, the low density of gas compared to that of solid tissue can be compensated by using a standalone, laser-based polarization device to increase the nuclear polarization to be roughly five orders of magnitude (100,000 times) higher than the corresponding thermal equilibrium polarization would be in the magnet of a clinical MR scanner. As a result, the MR signal from these hyperpolarized noble gases is increased by a proportionate amount and is easily detected using an MR scanner tuned to the appropriate resonance frequency. MRI using hyperpolarized gases has led to the development of numerous unique strategies for evaluating the structure and function of the human lung that provide advantages relative to current clinically available methods. For example, as compared with nuclear-medicine ventilation scintigraphy scans using 133Xe or aerosolized technetium-99m DTPA, hyperpolarized-gas ventilation MR images provide improved temporal and spatial resolution, expose the patient to no ionizing radiation, and can be repeated multiple times in a single day if desired.
Although inhaled xenon has also been used as a contrast agent with computed tomography (CT), which can provide high spatial and temporal resolution, the high radiation dose and low contrast on the resulting ventilation images has dampened enthusiasm for the CT-based technique.

Although the first hyperpolarized-gas MR images were obtained using hyperpolarized 129Xe, and images of the human lung were acquired with hyperpolarized 129Xe only a few years later, the vast majority of work in humans has been performed using hyperpolarized 3He instead. This occurred primarily because 3He provided a stronger MR signal, due to its larger nuclear magnetic moment (and hence larger gyromagnetic ratio) compared to 129Xe and historically high levels of polarization (greater than 30%) achieved for 3He, and because there are no significant safety concerns associated with inhaled helium. However, in the years following the terrorist attacks of 9/11 there was a surge in demand for 3He for use in neutron detectors for port and border security, and this demand far exceeded the replenishment rate from the primary source, the decay of tritium used in nuclear warheads. As a result, 3He prices skyrocketed and availability plummeted. Currently, the U.S. government is regulating the supply of 3He, allocating this precious resource among users whose research or applications depend on 3He’s unique physical properties. This includes an annual allocation for medical imaging, which allows research on hyperpolarized 3He MRI of the lung to continue. Nonetheless, unless a new source for 3He is found it is clear that insufficient 3He is available to permit hyperpolarized 3He MRI of the lung to translate from the research community to a clinical tool.

In contrast to 3He, 129Xe is naturally abundant on Earth and its cost is relatively low. Thus, 129Xe is the obvious potential alternative to 3He as an inhaled contrast agent for MRI of the lung. While the 3He availability crisis has accelerated efforts to develop and evaluate hyperpolarized 129Xe for human applications, it is important to understand that 129Xe is not just a lower-signal alternative to 3He, forced upon us by practical necessity. In particular, the relatively high solubility of xenon in biological tissues and an exquisite sensitivity to its environment, which results in an enormous range of chemical shifts upon solution, make hyperpolarized 129Xe particularly attractive for exploring certain characteristics of lung function, such as gas exchange and uptake, that cannot be accessed using hyperpolarized 3He. The quantitative characteristics of gas exchange and uptake are determined by parameters of physiologic relevance, including the thickness of the blood–gas barrier, and thus measurements that quantify this process offer a potential wealth of information on the functional status of the healthy and diseased lung.

Historically, polarization levels for liter-quantities of hyperpolarized 129Xe have been roughly 10%, while those for similar quantities of hyperpolarized 3He have been greater than 30%. (Recall that the thermal equilibrium polarization of water protons at 1.5 T is 0.0005%—four to five orders of magnitude lower.) Given 129Xe’s lower nuclear magnetic moment, this situation has put hyperpolarized 129Xe at a distinct disadvantage relative to 3He. A recent, key advance for 129Xe is the development of systems that can deliver liter quantities of hyperpolarized 129Xe with polarization on the order of 50%. This now puts 129Xe on a competitive footing with 3He, positioning MRI of the human lung using hyperpolarized 129Xe to advance quickly in the immediate future, and making hyperpolarized 129Xe MRI of interest to the broader radiology and medical-imaging communities.
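The 0.0005% figure Mugler and Altes quote is easy to verify: for a spin-1/2 nucleus the thermal equilibrium polarization is tanh(μB/kBT). A quick check, assuming room temperature and the standard value of the proton magnetic moment:

```python
import math

mu_p = 1.41e-26    # proton magnetic moment, J/T
kB = 1.381e-23     # Boltzmann constant, J/K
B, T = 1.5, 300.0  # clinical field strength (tesla) and temperature (kelvin)

P = math.tanh(mu_p * B / (kB * T))  # thermal spin-1/2 polarization
# P is about 5e-6, i.e. 0.0005%; hyperpolarizing to ~50%
# is therefore a gain of roughly five orders of magnitude
```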
This idea of gas hyperpolarization is fascinating. How does one hyperpolarize the gas? Mugler and Altes explain:
Although it is possible to image either 129Xe or 3He by simply placing the gas (in a suitable container) in the magnet of an MR scanner, the low density of gas compared to that of solid tissue results in a signal that is too low to be of practical use for imaging the human lung… Nonetheless, the nuclear polarization can be increased dramatically compared to that produced by the magnet of the MR scanner by using a method called optical pumping and spin exchange (OPSE), which was originally developed for nuclear-physics experiments many years before being applied to medical imaging.

As its name implies, OPSE is, in concept, a two-step process. The first step, optical pumping, involves using a laser to generate electron-spin polarization in a vapor of an alkali metal. This process takes place within a glass container, called an optical cell…positioned within a magnetic field... A small amount of the alkali metal, typically rubidium, is placed in the cell, which is heated…during the polarization process to create rubidium vapor. The optical cell is illuminated with circularly polarized laser light…at a specific wavelength (795 nm) to optically pump the rubidium atoms. This pumping preferentially populates one of the two spin states for the valence electron, thereby polarizing the associated electron spins and resulting in electron-spin polarization approaching 100%. In the second step of OPSE, collisions between spin-polarized rubidium atoms and noble-gas (129Xe or 3He) atoms within the cell result in spin exchange—the transfer of polarization from rubidium electrons to noble-gas nuclei...
To learn more, you can hear John Mugler discuss hyperpolarized gas MRI in the lung on YouTube.

John Mugler discusses hyperpolarized gas MRI in the lung.

Friday, April 12, 2013

Radon

The largest source of natural background radiation is radon gas. In Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss radon.
The naturally occurring radioactive nuclei are either produced continuously by cosmic ray bombardment, or they are the products in a decay chain from a nucleus whose half-life is comparable to the age of the earth. Otherwise they would have already decayed. There are three naturally occurring radioactive decay chains near the high-Z end of the periodic table. One of these is the decay products from 238U, shown in Fig. 17.27. The half-life of 238U is 4.5 × 10⁹ yr, which is about the same as the age of the earth. A series of α and β decays lead to radium, 226Ra, which undergoes α decay with a half-life of 1620 yr to radon, 222Rn.

Uranium, and therefore radium and radon, are present in most rocks and soil. Radon, a noble gas, percolates through grainy rocks and soil and enters the air and water in different concentrations. Although radon is a noble gas, its decay products have different chemical properties and attach to dust or aerosol droplets which can collect in the lungs. High levels of radon products in the lungs have been shown by both epidemiological studies of uranium miners and by animal studies to cause lung cancer.
In Chapter 16 we consider radon in the context of the risk to the general population from low levels of background radiation.
The question of a hormetic effect or a threshold effect [as opposed to the linear no-threshold model of radiation exposure] has received a great deal of attention for the case of radon, where remediation at fairly low radon levels has been proposed. Radon is produced naturally in many types of rock. It is a noble gas, but its radioactive decay products can become lodged in the lung. An excess of lung cancer has been well documented in uranium miners, who have been exposed to fairly high radon concentrations as well as high dust levels and tobacco smoke. Radon at lower concentrations seeps from the soil into buildings and contributes up to 55% of the exposure to the general population.
Building Blocks of the Universe, by Isaac Asimov, superimposed on Intermediate Physics for Medicine and Biology.
Building Blocks of the Universe,
by Isaac Asimov.
Given the high profile of radon in our book, I thought readers might be interested in a bit of the history of this element. A brief discussion of the discovery of radon can be found in Isaac Asimov’s book Building Blocks of the Universe. After Asimov describes the Curies’ discoveries of radium and polonium from uranium ore in the late 1890s, he writes
When the radium atom breaks up, it forms an atom of radon, element No. 86. Radon is a gas, a radioactive gas! It fits into the inert gas column of the periodic table, right under xenon, and has all the chemical characteristics of the other inert gases.

Radon was first discovered in 1900 by a chemist named F. E. Dorn, and he called it radium emanation because it emanated from (that is, was given off by) radium. [William] Ramsay and R. Whytlaw-Gray collected the gas in 1908, and they called it niton from a Greek word meaning “shining.” In 1923, though, the official name became “radon” to show that the gas arose from radium…

Other gases arise from the breakdown of thorium and actinium … and have been called thoron and actinon, respectively. These are, as it turns out, varieties [isotopes] of radon. However, there have been suggestions that the element be named emanon (from “emanation”) since it does not arise from the breakdown of radium only, as “radon” implies.
Asimov's Biographical Encyclopedia of Science and Technology, by Isaac Asimov, superimposed on Intermediate Physics for Medicine and Biology.
Asimov's Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
Asimov’s Biographical Encyclopedia of Science and Technology describes the scientist who discovered radon in more detail.
Dorn, Friedrich Ernst, German physicist
Born: Guttstadt, East Prussia (now Dobre Miasto, Poland), July 27, 1848
Died: Halle, June 13, 1916

Dorn was educated at the University of Konigsberg and taught physics at the universities of Darmstadt and Halle. He turned to the study of radioactivity in the wake of Madame Curie’s discoveries and in 1900 showed that radium not only produced radioactive radiations, but also gave off a gas that was itself radioactive.

This gas eventually received the name radon and turned out to be the final member of Ramsay’s family of inert gases. It was the first clear-cut demonstration that in the process of giving off radioactive radiation, one element was transmuted (shades of the alchemists!) to another. This concept was carried further by Boltwood and Soddy.

Friday, April 5, 2013

Leon Glass wins Winfree Prize

From Clocks to Chaos, by Glass and Mackey, superimposed on Intermediate Physics for Medicine and Biology.
From Clocks to Chaos,
by Glass and Mackey.
Leon Glass was honored recently by the Society for Mathematical Biology. Their website states
The Society for Mathematical Biology is pleased to announce that this year’s recipient of the Arthur T. Winfree prize is Prof. Leon Glass of McGill University. Awarded every two years to a scientist whose work has “led to significant new biological understanding affecting observation/experiments,” this prize commemorates the creativity, imagination and intellectual breadth of Arthur T. Winfree.

Beginning with simple and brilliantly chosen experiments, Leon launched the study of chaos in biology. Among the applications he and his many collaborators and students pursued was the novel idea of “dynamical disease” and the better understanding of pathologies like Parkinson’s disease and cardiac arrhythmias. His elegant work (with Michael Guevara and Alvin Shrier) on periodic stimulation of heart cells demonstrated and explained how the interaction of nonlinearities with oscillations creates complex dynamics and chaos.

The book From Clocks to Chaos, which he co-authored with Michael Mackey, was an instant classic that illuminated this difficult subject for a whole generation of mathematical biologists. His combination of imagination, experimental and mathematical insight, and ability to communicate fundamental principles has launched new fields of research and inspired researchers ranging from applied mathematicians to medical researchers.
Leon Glass is the Isadore Rosenfeld Chair in Cardiology at McGill University. Russ Hobbie and I cite From Clocks to Chaos (discussed previously in this blog) in the 4th edition of Intermediate Physics for Medicine and Biology, especially in Chapter 10 when discussing nonlinear dynamics. According to Google Scholar, the book has been cited 1800 times. Even more highly cited (over 2600 times) is Mackey and Glass’s paper “Oscillation and Chaos in Physiological Control Systems” (Science, Volume 197, Pages 287–289, 1977), which Russ and I also cite.
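That 1977 Science paper introduced what is now called the Mackey–Glass equation, a delay differential equation whose solutions become chaotic at long delays. A minimal Euler-integration sketch (the parameter values are the commonly used chaotic set, not numbers taken from IPMB):

```python
# Mackey-Glass: dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
beta, gamma, n, tau = 0.2, 0.1, 10, 17.0   # classic chaotic parameter set
dt = 0.1
delay = round(tau / dt)        # number of time steps in one delay

x = [0.5] * (delay + 1)        # constant history before t = 0
for _ in range(50000):
    x_tau = x[-1 - delay]      # the delayed value x(t - tau)
    dx = beta * x_tau / (1 + x_tau**n) - gamma * x[-1]
    x.append(x[-1] + dt * dx)
# x now holds a bounded, aperiodic trajectory oscillating around 1
```

Plotting x against the delayed value x(t − τ) traces out the strange attractor that made this equation a standard test case in nonlinear dynamics.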

Other books and papers mentioned in IPMB include
Bub, G., A. Shrier, and L. Glass (2002) “Spiral wave generation in heterogeneous excitable media,” Phys. Rev. Lett., Volume 88, Article Number 058101.

Glass, L., Y. Nagai, K. Hall, M. Talajic, and S. Nattel (2002) “Predicting the entrainment of reentrant cardiac waves using phase resetting curves,” Phys. Rev. E, Volume 65, Article Number 021908.

Guevara, M. R., L. Glass, and A. Shrier (1981) “Phase locking, period-doubling bifurcations and irregular dynamics in periodically stimulated cardiac cells,” Science, Volume 214, Pages 1350–1353.

Glass, L. (2001) “Synchronization and rhythmic processes in physiology,” Nature, Volume 410, Pages 277–284.

Kaplan, D., and L. Glass (1995) Understanding Nonlinear Dynamics. New York, Springer-Verlag.
You can listen to Glass talk about cardiac arrhythmias below.

Leon Glass talks about cardiac arrhythmias.

Friday, March 29, 2013

1932: A Watershed Year in Nuclear Physics

I should have posted this article last year, to mark the 80th anniversary of the annus mirabilis of nuclear physics. Unfortunately, I didn’t think of it until I read “1932: A Watershed Year in Nuclear Physics” by Joseph Reader and Charles Clark, which appeared in the March 2013 issue of Physics Today. The article describes four major discoveries that changed nuclear physics forever.

 

Deuterium

The first landmark result, published by Harold Urey on January 1, 1932, was the discovery of deuterium, the isotope 2H. Russ Hobbie and I mention deuterium in Homework Problem 45 of Chapter 4 in the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 45 Using the definitions in Problem 44, write the diffusion constant in terms of λ and v_rms. By how much do you expect the diffusion constant for “heavy water” (water in which the two hydrogen atoms are deuterium, 2H) to differ from the diffusion constant for water? Assume the mean free path is independent of mass.
Unlike many elements, for which the various stable isotopes differ in mass by only a few percent, deuterium has twice the mass of normal hydrogen. Even when deuterium is incorporated into water, the H2O molecule’s mass increases by a significant 11%. Heavy water has been used as a non-radioactive biological tracer.
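Since the diffusion constant goes as λ·v_rms and v_rms scales as 1/√m at fixed temperature, the mass ratio alone fixes the expected change; a one-line estimate:

```python
m_H2O, m_D2O = 18.0, 20.0          # molecular masses, atomic mass units
ratio = (m_H2O / m_D2O) ** 0.5     # D_heavy / D_light at fixed mean free path
# ratio is about 0.95: heavy water should diffuse roughly 5% more slowly
```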

The Neutron

A second advance of 1932, and in my opinion the most important, was the discovery of the neutron by James Chadwick. Of course, the idea of the neutron is central to nuclear physics, and you cannot make sense of isotopes without it (I wonder how Urey interpreted deuterium before the neutron was discovered). Russ and I discuss neutrons throughout Chapter 17 on Nuclear Medicine, and in particular we describe boron neutron capture therapy in Chapter 16
Boron neutron capture therapy (BNCT) is based on a nuclear reaction which occurs when the stable isotope 10B is irradiated with neutrons, leading to the nuclear reaction (in the notation of Chapter 17)
¹⁰₅B + ¹₀n → ⁴₂α + ⁷₃Li
... Both the alpha particle and lithium are heavily ionizing and travel only about one cell diameter. BNCT has been tried since the 1950s; success requires boron-containing drugs that accumulate in the tumor.

The Positron

Discovery number three is the positron, the first example of antimatter. Carl Anderson found evidence of this positive electron in cosmic ray tracks in cloud chambers. Positrons appear in IPMB in two key places. In Chapter 15 (The Interaction of Photons and Charged Particles with Matter) positrons are key to pair production.
A photon with energy above 1.02 MeV can produce a particle–antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mₑc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mₑc² = 1.02 MeV.
The positron appears again in our discussion of β+ decay in Chapter 17.
Two modes of decay allow a nucleus to approach the stable line. In beta (β−, or electron) decay, a neutron is converted into a proton. This keeps A [mass number] constant, lowering N [neutron number] by one and raising Z [atomic number] by one. In positron (β+) decay, a proton is converted into a neutron. Again A remains unchanged, Z decreases and N increases by 1. We find β+ decay for nuclei above the line of stability and β− decay for nuclei below the line.
Isotopes that undergo β+ decay are used in positron emission tomography.
If a positron emitter is used as the radionuclide, the positron comes to rest and annihilates an electron, emitting two annihilation photons back to back. In positron emission tomography (PET) these are detected in coincidence….

PET can provide a functional image with information about metabolic activity. A very common positron agent is 18F fluorodeoxyglucose, glucose in which a hydroxyl group has been replaced with 18F. The PET signal is largest in those cells that have taken up the 18F because they are actively metabolizing glucose. PET has become particularly important in studies of brain function, where active neurons are detected by an increase in their metabolism, and in locating metastatic cancer.

Accelerators

The last of the four great developments of 1932 is the first use of accelerators to study nuclear reactions. John Cockcroft and Ernest Walton built an accelerator to produce high energy protons, which smashed into 7Li to produce two alpha particles. Their work was soon followed by the invention of the cyclotron by Ernest Lawrence, which is now the main tool for producing the unstable isotopes used in PET. Russ and I explain that
Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals.
The Making of the Atomic Bomb, by Richard Rhodes, superimposed on Intermediate Physics for Medicine and Biology.
The Making of the Atomic Bomb,
by Richard Rhodes.

Soon after the miraculous year of 1932 Hitler came to power in Germany, and nuclear physics became much more than a scientific curiosity. The story of how the discoveries of Urey, Chadwick, Anderson, Cockcroft and Walton led relentlessly to the Manhattan Project is told masterfully in Richard Rhodes’ book The Making of the Atomic Bomb.

I have a few personal connections to this watershed year. My father Ron Roth, now retired and living in Lenexa, Kansas, was born in 1932, proving that we are not so far removed from that historic time. In addition, my academic genealogy goes back to James Chadwick and Ernest Rutherford (whose lab Cockcroft and Walton worked in). Finally, Carl Anderson worked under the supervision of Robert Millikan, who was born in Morrison, Illinois, the small town I grew up in.

Friday, March 22, 2013

Barouh Berkovits (1926–2012)

When my March 2013 issue of the journal Heart Rhythm arrived this week, I found in it an obituary for Barouh Berkovits, who died last year.
Barouh Vojtec Berkovits passed away on October 23, 2012, at the age of 86 years. Berkovits was a master of science and an electrical engineer. Born in 1926 in Lucenec, Czechoslovakia (today Czech Republic), he worked as a technician behind the enemy lines. He escaped the Holocaust, but his parents and sister Eva perished in Auschwitz, Poland. In 1949 he immigrated to Israel and in 1956 to the United States… Berkovits invented and patented the first demand pacemaker capable of sensing the R wave…For his contributions to the treatment of cardiac arrhythmias, Berkovits received the “Distinguished Scientist Award” in 1982 by the Heart Rhythm Society.
Machines in Our Hearts, by Kirk Jeffrey, superimposed on Intermediate Physics for Medicine and Biology.
Machines in Our Hearts,
by Kirk Jeffrey.
The story of how Berkovits invented the demand pacemaker is told in Machines in Our Hearts, by Kirk Jeffrey.
Barouh V. Berkovits (b. 1924), an engineer at the American Optical Company, was already well known as the inventor of the DC defibrillator and the cardioverter, a device that interrupts a rapid heart rate (tachycardia) with low-energy shocks. He knew that when the cardioverter discharged randomly into the tachycardia, it would “occasionally not only not stop the tachyarrhythmia…but would produce ventricular fibrillation.” Cardioversion has to be synchronized to fall within the QRS complex and avoid the vulnerable period of the heartbeat. In 1963, Berkovits applied this principle to cardiac pacing. To solve the problem of competition [between the SA node and the artificial pacemaker], Berkovits in 1963 designed a sensing capability into the pacemaker. His invention behaved exactly like an asynchronous pacer until it detected a naturally occurring R wave, the indication of a ventricular contraction. This event would reset the timing circuit of the pacemaker, and the countdown to the next stimulus would begin anew. Thus the pacer stimulated the heart only when the ventricles failed to contract. It worked only “on demand.” As an added benefit, non-competitive pacing extended the life of the battery.
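The timing logic Jeffrey describes, an escape-interval countdown reset by every sensed R wave, can be sketched in a few lines of Python (a hypothetical illustration of the principle, not any real device's firmware; the beat times and one-second escape interval are made up):

```python
def demand_pace(r_waves, escape_interval, duration):
    """Return stimulus times: pace only when no natural R wave
    arrives within one escape interval of the last event."""
    stimuli, last_event, i = [], 0.0, 0
    r_waves = sorted(r_waves)
    while last_event < duration:
        deadline = last_event + escape_interval
        if i < len(r_waves) and r_waves[i] <= deadline:
            last_event = r_waves[i]   # sensed beat: reset the countdown
            i += 1
        else:
            stimuli.append(deadline)  # countdown expired: stimulate
            last_event = deadline
    return stimuli

# natural beats for 2.4 s, then heart block: pacing begins only on demand
print(demand_pace([0.8, 1.6, 2.4], escape_interval=1.0, duration=6.0))
```

As long as the natural rate beats the countdown, the device stays silent, which is exactly the non-competitive behavior that avoided stimulating during the vulnerable period.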
The 4th edition of Intermediate Physics for Medicine and Biology does not mention Berkovits by name, but Homework Problem 45 in Chapter 7 does analyze the demand pacemaker.
Problem 45 A patient with “intermittent heart block” has an AV node which functions normally most of the time with occasional episodes of block, lasting perhaps several hours. Design a pacemaker to treat the patient. Ideally, your design will not stimulate the heart when it is functioning normally. Describe
(a) whether you will stimulate the atria or ventricles
(b) which chambers you will monitor with a recording electrode
(c) what logic your pacemaker will use to determine when to stimulate. Your design may be similar to a “demand pacemaker” described in Jeffrey (2001), p. 132.
Of course, the reference is to Machines in Our Hearts. Berkovits’s phenomenal career is yet another example of how knowledge of engineering and physics can allow you to contribute to medicine and biology.

Friday, March 15, 2013

The Technology of Medicine

In Chapter 5 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the artificial kidney as an example of the use of physics and engineering to solve a medical problem. Rather than delving into the engineering details, today we consider the larger question of technology in medicine. Russ and I write
The reader should also be aware that this “high-technology” solution to the problem of chronic renal disease is not an entirely satisfactory one… The distinction between a high-technology treatment and a real conquest of a disease has been underscored by Thomas (1974, pp. 31–36).
The Lives of a Cell, by Lewis Thomas, superimposed on Intermediate Physics for Medicine and Biology.
The Lives of a Cell,
by Lewis Thomas.
The citation is to the book The Lives of a Cell by Lewis Thomas (physician, poet, essayist, administrator). To me, his essays—written 40 years ago—sound surprisingly modern. For instance, the introduction to his essay about “The Technology of Medicine” is relevant today as we struggle with the role of expensive technology in the ever-increasing cost of health care.
Technology assessment has become a routine exercise for the scientific enterprises on which the country is obliged to spend vast sums for its needs. Brainy committees are continually evaluating the effectiveness and cost of doing various things in space, defense, energy, transportation, and the like, to give advice about prudent investments for the future. Somehow medicine, for all the $80-odd billion that it is said to cost the nation [$2-something trillion in 2013], has not yet come in for much of this analytical treatment. It seems taken for granted that the technology of medicine simply exists, take it or leave it, and the only major technologic problem which policy-makers are interested in is how to deliver today's kind of health care, with equity, to all the people.

When, as is bound to happen sooner or later, the analysts get around to the technology of medicine itself, they will have to face the problem of measuring the relative cost and effectiveness of all the things that are done in the management of disease. They make their living at this kind of thing, and I wish them well, but I imagine they will have a bewildering time… 
The analysts have finally gotten around to it. As our nation spends 15% of our Gross Domestic Product on health care, the costs of technology in medicine are no longer taken for granted. The Affordable Care Act (a.k.a. Obamacare) focuses on research into the comparative effectiveness of treatments, often measured by the incremental cost-effectiveness ratio. And as Thomas predicted, the analysts are having a bewildering time dealing with it.

Thomas divides “technology” into three types. His first type is not really technology at all (“no-technology”). It is “supportive therapy”, that “tides patients over through diseases that are not, by and large, understood.” There is not much physics in this category, so we will move on.

The second level of technology, which he calls “halfway technology,” represents “the kinds of things that must be done after the fact, in efforts to compensate for the incapacitating effects of certain diseases whose course one is unable to do very much about. It is a technology designed to make up for disease, or to postpone death.” Both the artificial kidney and kidney transplants fall into this category, and “almost everything offered today for the treatment of heart disease is at this level of technology, with the transplanted and artificial hearts as ultimate examples.” There is lots of physics in this category. Yet, Thomas sees it as, at best, an intermediate—and expensive—type of medical solution. “In fact, this level of technology is, by its nature, at the same time highly sophisticated and profoundly primitive. It is the kind of thing that one must continue to do until there is a genuine understanding of the mechanisms involved in disease.”

The third type of technology is based on a complete understanding of the causes of disease. Thomas writes that it “is the kind that is so effective that it seems to attract the least public notice; it has come to be taken for granted. This is the genuinely decisive technology of modern medicine, exemplified best by modern methods for immunization against diphtheria, pertussis, and the childhood virus diseases, and the contemporary use of antibiotics and chemotherapy for bacterial infections.”

Is there physics in this third category? I think so. Biological mechanisms will be based, ultimately, on the constraints of physical laws, and one can’t hope to understand biology without physics (at least, this is what I believe). Perhaps we can say that physics and engineering are essential for the second type of technology, whereas physics and biology are essential for the third type.

Thomas clearly favors the third category. He concludes
The point to be made about this kind [the third type] of technology—the real high technology of medicine—is that it comes as the result of a genuine understanding of disease mechanisms, and when it becomes available, it is relatively inexpensive, and relatively easy to deliver.

Offhand, I cannot think of any important human disease for which medicine possesses the outright capacity to prevent or cure where the cost of the technology is itself a major problem. The price is never as high as the cost of managing the same diseases during the earlier stages of no-technology or halfway technology…

It is when physicians are bogged down by their incomplete technologies, by the innumerable things they are obliged to do in medicine when they lack a clear understanding of disease mechanisms, that the deficiencies of the health-care system are most conspicuous. If I were a policy-maker, interested in saving money for health care over the long haul, I would regard it as an act of high prudence to give high priority to a lot more basic research in biologic science. This is the only way to get the full mileage that biology owes to the science of medicine, even though it seems, as used to be said in the days when the phrase still had some meaning, like asking for the moon.
As we face the looming crisis of budget sequestration, with its devastating cutbacks in funding for the National Institutes of Health and the National Science Foundation, and as the calls for translational medical research increase, I urge our legislators to heed Thomas’s advice and “give high priority to a lot more basic research.”

Friday, March 8, 2013

Helium Shortage!

A recent article in the New York Times discusses the looming shortage of helium.
A global helium shortage has turned the second-most abundant element in the universe (after hydrogen) into a sought-after scarcity, disrupting its use in everything from party balloons and holiday parade floats to M.R.I. machines and scientific research….

Experts say the shortage has many causes. Because helium is a byproduct of natural gas extraction, a drop in natural gas prices has reduced the financial incentives for many overseas companies to produce helium. In addition, suppliers’ ability to meet the growing demand for helium has been strained by production problems around the world. Helium plants that are being built or are already operational in Qatar, Algeria, Wyoming and elsewhere have experienced a series of construction delays or maintenance troubles.
One medical use of helium is discussed in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 8, Russ Hobbie and I write about the role of helium in magnetoencephalography—the biomagnetic measurement of electrical activity in the brain—using Superconducting Quantum Interference Device (SQUID) magnetometers.
The SQUID must be operated at temperatures where it is superconducting. It used to be necessary to keep a SQUID in a liquid-helium bath, which is expensive to operate because of the high evaporation rate of liquid helium. With the advent of high-temperature superconductors, SQUIDS have the potential to operate at liquid-nitrogen temperatures, where the cooling problems are much less severe [for additional information, see here].
A more widespread use of helium in medicine is in magnetic resonance imaging. Chapter 18 of our book discusses MRI, but it does not describe how the strong, static magnetic field required by MRI is created. In a clinical MRI system, a magnetic field (typically 2 to 4 T) must exist over a large volume. Producing such a magnetic field with permanent magnets, if it were possible at all, would require giant, massive, expensive structures. A more practical approach is to use coils carrying a large current. One way to eliminate the resulting Joule heating losses in the coils is to make them from superconducting wire, which must be cooled cryogenically. An article on the Time Magazine online newsfeed states
Liquid helium has an extremely low boiling point—minus 452.1 degrees Fahrenheit, close to absolute zero—which makes it a perfect substance for cooling the superconducting magnets found in MRI machines. Hospitals are generally the first in line for helium, so the shortage isn’t affecting them yet. But prices for hospital-grade helium may continue to go up, leading to higher health-care costs or, in the worst-case scenario, the need for a backup plan for cooling MRI machines.
More detail about the use of helium in MRI can be found in an online book titled The Basics of MRI by Joseph Hornak. Below I quote some of the text, but you will need to go to the book’s website to see the pictures and animations.
The imaging magnet is the most expensive component of the magnetic resonance imaging system. Most magnets are of the superconducting type. This is a picture of a first generation 1.5 Tesla superconducting magnet from a magnetic resonance imager. A superconducting magnet is an electromagnet made of superconducting wire. Superconducting wire has a resistance approximately equal to zero when it is cooled to a temperature close to absolute zero (−273.15° C or 0 K) by immersing it in liquid helium. Once current is caused to flow in the coil it will continue to flow as long as the coil is kept at liquid helium temperatures. (Some losses do occur over time due to infinitely small resistance of the coil. These losses can be on the order of a ppm of the main magnetic field per year.)

The length of superconducting wire in the magnet is typically several miles. The coil of wire is kept at a temperature of 4.2 K by immersing it in liquid helium. The coil and liquid helium is kept in a large dewar. The typical volume of liquid Helium in an MRI magnet is 1700 liters. In early magnet designs, this dewar was typically surrounded by a liquid nitrogen (77.4 K) dewar which acts as a thermal buffer between the room temperature (293 K) and the liquid helium. See the animation window for a cross sectional view of a first generation superconducting imaging magnet.

In later magnet designs, the liquid nitrogen region was replaced by a dewar cooled by a cryocooler or refrigerator. There is a refrigerator outside the magnet with cooling lines going to a coldhead in the liquid helium. This design eliminates the need to add liquid nitrogen to the magnet, and increases the liquid helium hold time to 3 to 4 years. The animation window contains a cross sectional view of this type of magnet. Researchers are working on a magnet that requires no liquid helium.
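The “ppm of the main magnetic field per year” loss quoted above implies an astonishingly small effective coil resistance. Here is a back-of-the-envelope estimate; the coil inductance of 100 H is an assumed, representative value, not a figure from the quote.

```python
# Persistent-mode decay: I(t) = I0 * exp(-t/tau), with tau = L/R.
# A loss of 1 ppm per year means 1 yr/tau ~ 1e-6, so tau ~ 1e6 years.
year = 3.156e7            # seconds in a year
tau = 1e6 * year          # effective decay time constant, in seconds

L = 100.0                 # assumed coil inductance, henries (typical order of magnitude)
R = L / tau               # equivalent circuit resistance, ohms
print(f"tau = {tau:.2e} s, R = {R:.1e} ohm")   # R ~ 3e-12 ohm
```

A resistance of a few picoohms—twelve orders of magnitude below an ordinary copper winding—is why the magnet can run for years without an external power supply, as long as the helium keeps it superconducting.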
With the discovery of high-temperature superconductivity (HTS), MRI magnets operated at higher temperatures, avoiding the need for liquid helium, are possible. The ideal solution to the helium shortage would be superconducting coils cooled with liquid nitrogen. Nitrogen makes up 80% of our atmosphere, so it is cheap and virtually limitless. However, a 2010 article by scientists at the MIT Francis Bitter Magnet Laboratory (FBML) suggests that a more practical solution might be to use solid nitrogen to reach temperatures of 20 K, for which superconducting materials such as magnesium diboride (MgB2) exist that have the properties required for magnet coils.
A tremendous progress achieved in the past decade and is continuing today has transformed selected HTS materials into “magnet-grade” conductors, i.e., meet rigorous magnet specifications and are readily available from commercial wire manufacturers [1]. We are now at the threshold of a new era in which HTS will play a key role in a number of applications— here MgB2 (Tc=39 K) is classified as an HTS. The HTS offers opportunities and challenges to a number of applications for superconductivity. In this paper we briefly describe three NMR/MRI magnets programs currently being developed at FBML that would be impossible without HTS: 1) a 1.3 GHz NMR magnet; 2) a compact NMR magnet assembled from YBCO [yttrium barium copper oxide] annuli; and 3) a persistent-mode, fully-protected MgB2 0.5-T/800-mm whole-body MRI magnet.
Even if new MRI magnets using solid nitrogen or some other abundant substance as the coolant were developed, there are thousands of existing MRI devices that still would require liquid helium and would be very expensive to replace. Congress is currently considering legislation to address the helium shortage (see article here). We urgently need to preserve our helium supply to ensure its availability for important medical devices.

P.S. I saw this article just a few days ago. High temperature superconductors for MRI may be just around the corner!

Friday, March 1, 2013

Magnetoacoustic Tomography with Magnetic Induction

Magnetoacoustic tomography with magnetic induction is a new method to image the distribution of electrical conductivity in tissue. Bin He, the director of the Institute for Engineering in Medicine at the University of Minnesota, developed this technique with his student Yuan Xu in a 2005 publication (Physics in Medicine and Biology, Volume 50, Pages 5175–5187). They describe MAT-MI in their introduction.
We have developed a new approach called magnetoacoustic tomography with magnetic induction (MAT-MI) by combining ultrasound and magnetism. In this method, the object is in a static magnetic field and a time-varying (μs) magnetic field... The time-varying magnetic field induces an eddy current in the object. Consequently, the object will emit ultrasonic waves through the Lorentz force produced by the combination of the eddy current and the static magnetic field. The ultrasonic waves are then collected by the detectors located around the object for image reconstruction. MAT-MI combines the good contrast of EIT [electrical impedance tomography] with the good spatial resolution of sonography.
One nice feature of MAT-MI is that it fits so well into the 4th edition of Intermediate Physics for Medicine and Biology, in which Russ Hobbie and I analyze both eddy currents caused by Faraday induction (Chapter 8) and ultrasound imaging (Chapter 13). Another characteristic of MAT-MI is that the physics is simple enough that it can be summarized in a homework problem. So, dear reader, here is a new problem that will help you understand MAT-MI.
Section 8.6

Problem 25 ½  Assume a sheet of tissue having conductivity σ is placed perpendicular to a uniform, strong, static magnetic field B0, and a weaker spatially uniform but temporally oscillating magnetic field B1(t).
(a) Derive an expression for the electric field E induced by the oscillating magnetic field. It will depend on the distance r from the center of the sheet and the rate of change of the magnetic field.
(b) Determine an expression for the current density J by multiplying the electric field by the conductivity.
(c) The force per unit volume, F, is given by the Lorentz force, J×B0 (ignore the weak B1). Find an expression for F.
(d) The source of the ultrasonic pressure waves can be expressed as the divergence of the Lorentz force. Derive an expression for ∇ · F.
(e) Draw a picture showing the directions of J, B0, and F.
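If you want to check your answers to parts (a)–(d), here is a quick numerical sanity check. The parameter values are assumed, illustrative numbers (not from the problem); the key result is that the divergence comes out constant, independent of r.

```python
# Sketch of the MAT-MI homework problem with assumed parameter values.
sigma = 0.2        # tissue conductivity, S/m (assumed)
B0 = 2.0           # static magnetic field, T (assumed)
dB1dt = 1.0e6      # rate of change of the oscillating field B1, T/s (assumed)

def E(r):          # (a) Faraday's law on a circle of radius r:
    return -r * dB1dt / 2.0          # 2*pi*r*E = -pi*r**2 * dB1/dt

def J(r):          # (b) Ohm's law: azimuthal eddy-current density
    return sigma * E(r)

def F(r):          # (c) Lorentz force per unit volume, radial: J x B0
    return J(r) * B0

def divF(r, h=1e-6):   # (d) divergence of a radial field: (1/r) d(r*F_r)/dr
    return ((r + h) * F(r + h) - (r - h) * F(r - h)) / (2 * h * r)

# The divergence is the constant -sigma * B0 * dB1/dt at every radius.
for r in (0.01, 0.05, 0.1):
    print(r, divF(r))
```

The fact that ∇ · F is uniform for uniform σ is exactly why this simple geometry produces no conductivity image, as discussed below.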
While this example is simple enough to serve as a homework problem, it does not illustrate imaging of conductivity; the conductivity is uniform, so there is no variation to image. As Xu and He explain, if the conductivity varies with position, that variation also contributes to ∇ · F and therefore influences the radiated ultrasonic wave. Thus, information about the conductivity distribution σ(x,y) is contained in the pressure. Subsequent papers by He and his colleagues explore methods for extracting σ(x,y) from the ultrasonic signal. Potential applications include using MAT-MI to image breast cancer tumors.

I’ve worked on MAT-MI a little bit. University of Michigan student Kayt Brinker and I published a paper describing MAT-MI in anisotropic tissue like skeletal muscle, where the conductivity is much higher parallel to the muscle fibers than perpendicular to them [Brinker, K. and B. J. Roth (2008) “The effect of electrical anisotropy during magneto-acoustic tomography with magnetic induction,” IEEE Transactions on Biomedical Engineering, Volume 55, Pages 1637–1639]. For some reason the figures published by the journal were not of high quality, so here I reproduce a better version of Figure 6, which shows the pressure wave produced during MAT-MI.

Figure 6 from Brinker and Roth (2008) shows the pressure at 20, 40, 60, and 80 μs in isotropic and anisotropic tissue.
Fig. 6. Pressure at 20, 40, 60, and 80 μs in isotropic and anisotropic tissue.
Each panel represents a 400 mm by 400 mm area.
In isotropic tissue, the wave propagates outward, the same in all directions. In electrically anisotropic tissue, the pressure is greater in the direction perpendicular to the fiber axis (vertical) than parallel to it (horizontal). The main difference between our calculation and that in the new homework problem given above is that Kayt and I restricted the oscillating magnetic field B1 to a small region (40 mm radius) at the center of the tissue sheet.

Friday, February 22, 2013

The Response of a Spherical Heart to a Uniform Electric Field

In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the bidomain model of cardiac tissue.
Myocardial cells are typically about 10 μm in diameter and 100 μm long. They have the added complication that they are connected to one another by gap junctions, as shown schematically in Fig. 7.27. This allows currents to flow directly from one cell to another without flowing in the extracellular medium. The bidomain (two-domain) model is often used to model this situation [Henriquez (1993)]. It considers a region, small compared to the size of the heart, that contains many cells and their surrounding extracellular fluid.
The citation is to the 20-year-old-but-still-useful review article by Craig Henriquez of Duke University.
Henriquez, C. S. (1993) “Simulating the electrical behavior of cardiac tissue using the bidomain model,” Crit. Rev. Biomed. Eng., Volume 21, Pages 1–77.
According to Google Scholar, this landmark paper has been cited over 450 times (including a citation on page 202 of IPMB).

During the early 1990s I collaborated with another researcher from Duke, Natalia Trayanova. Our goal was to apply the bidomain model to the study of defibrillation of the heart. In the same year that Craig’s review appeared, Trayanova, her student Lisa Malden, and I published an article in the IEEE Transactions on Biomedical Engineering titled “The Response of the Spherical Heart to a Uniform Electric Field: A Bidomain Analysis of Cardiac Stimulation” (Volume 40, Pages 899–908). I’m fond of this paper for several reasons:
  • Like most physicists, I like simple models that highlight and clarify basic mechanisms. Our spherical heart model had that simplicity.
  • The article was the first to show that fiber curvature provides a mechanism for polarization of cardiac tissue in response to an electrical shock. Since our paper, researchers have appreciated the importance of the fiber geometry in the heart when modeling electrical stimulation.
  • The model emphasizes the role of unequal anisotropy ratios in the bidomain model. In cardiac tissue, both the intracellular and extracellular spaces are anisotropic (the electrical conductivity is different parallel to the myocardial fibers than perpendicular to them), but the intracellular space is more anisotropic than the extracellular space. Fiber curvature will only result in polarization deep in the heart wall if the tissue has unequal anisotropy ratios.
  • The calculation has important clinical implications. Fibrillation of the heart is a leading cause of death in the United States, and the only way to treat a fibrillating heart is to apply a strong electric shock: defibrillation. I’ve performed a lot of numerical simulations in my career, but none have the potential impact for medicine as my work on defibrillation.
  • The IEEE TBME publishes brief bios of the authors. Back in those days I published in this journal often, and my goal was to have my entire CV included, bit by bit, in these small bios. The one in this paper read “Bradley J Roth was raised in Morrison, Illinois. He received the BS degree in physics from the University of Kansas in 1982, and the PhD in physics from Vanderbilt University in 1987. His PhD dissertation research was performed in the Living State Physics Laboratory under the direction of Dr. J. Wikswo. He is now a Senior Staff Fellow with the Biomedical Engineering and Instrumentation Program, National Institutes of Health, Bethesda, MD. One of his research interests is the mathematical modeling of the electrical behavior of the heart. He is also interested in the production and interactions of magnetic fields with biological tissue, e.g. biomagnetism, magnetic stimulation, and magnetoacoustic imaging.”
  • The acknowledgments state “the authors thank B. Bowman for his assistance in editing the manuscript.” Barry was a great help to me in improving my writing skills during my years at NIH, and I’m glad that we mentioned him.
  • The paper cites several of my favorite books, including When Time Breaks Down by Art Winfree, Classical Electrodynamics by John David Jackson, and Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, by Abramowitz and Stegun.
  • The paper has been fairly influential. It’s been cited 97 times, which is small potatoes compared to Henriquez’s review, but not too shabby nevertheless; an average of almost five citations a year for 20 years.
  • It was a pleasure to collaborate with Natalia Trayanova, with whom I would work again seven years later on another study of cardiac electrical behavior (Lindblom, Roth, and Trayanova, Journal of Cardiovascular Electrophysiology, Volume 11, Pages 274–285, 2000).
  • The paper led to subsequent simulations of defibrillation that are much more realistic and sophisticated than our simple spherical model of twenty years ago. Trayanova has led the way in this research, first at Duke, then at Tulane, and now at Johns Hopkins. You can listen to her discuss her research here. If you have a subscription to the Journal of Visualized Experiments you can hear more here. For a recent review, see Trayanova et al. (2012). Also, see this article recently put out by Johns Hopkins University. 
Listen to Natalia Trayanova discuss developing computer simulations to improve arrhythmia treatments.
Cardiac Bioelectric Therapy: Mechanisms and Practical Implications with Intermediate Physics for Medicine and Biology.
Cardiac Bioelectric Therapy:
Mechanisms and Practical Implications.
To learn more about how physics and engineering can help us understand defibrillation, consult the book Cardiac Bioelectric Therapy: Mechanisms and Practical Implications, which has chapters by Trayanova and many of the other leading researchers in the field (including yours truly).