Friday, March 29, 2013

1932: A Watershed Year in Nuclear Physics

I should have posted this article last year, to mark the 80th anniversary of the annus mirabilis of nuclear physics. Unfortunately, I didn’t think of it until I read “1932: A Watershed Year in Nuclear Physics” by Joseph Reader and Charles Clark, which appeared in the March 2013 issue of Physics Today. The article describes four major discoveries that changed nuclear physics forever.


Deuterium

The first landmark result was Harold Urey's discovery of deuterium, the isotope 2H, reported in a paper published on January 1, 1932. Russ Hobbie and I mention deuterium in Homework Problem 45 of Chapter 4 in the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 45 Using the definitions in Problem 44, write the diffusion constant in terms of λ and vrms. By how much do you expect the diffusion constant for “heavy water” (water in which the two hydrogen atoms are deuterium, 2H) to differ from the diffusion constant for water? Assume the mean free path is independent of mass.
Unlike many elements, for which the various stable isotopes differ in mass by only a few percent, deuterium has twice the mass of normal hydrogen. Even when deuterium is incorporated into water, the molecule's mass increases by a significant 11%. Heavy water has been used as a non-radioactive biological tracer.
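A quick numerical check of this problem (my own sketch, not the book's solution): with the mean free path held fixed, D is proportional to v_rms = √(3kT/m), so the diffusion constant scales as 1/√m.

```python
import math

# D ~ lambda * v_rms and v_rms = sqrt(3kT/m), so with a
# mass-independent mean free path, D scales as 1/sqrt(m).
m_H2O = 18.0  # molecular mass of ordinary water (atomic mass units)
m_D2O = 20.0  # molecular mass of heavy water

ratio = math.sqrt(m_H2O / m_D2O)
print(f"D(D2O)/D(H2O) = {ratio:.3f}")  # about 0.95
print(f"Heavy water diffuses about {100*(1 - ratio):.0f}% more slowly.")
```

So despite deuterium doubling the mass of each hydrogen atom, the diffusion constant of heavy water is only about 5% smaller than that of ordinary water.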

The Neutron

The second advance of 1932, and in my opinion the most important, was the discovery of the neutron by James Chadwick. Of course, the idea of the neutron is central to nuclear physics, and you cannot make sense of isotopes without it (I wonder how Urey interpreted deuterium before the neutron was discovered). Russ and I discuss neutrons throughout Chapter 17 on Nuclear Medicine, and in particular we describe boron neutron capture therapy in Chapter 16.
Boron neutron capture therapy (BNCT) is based on a nuclear reaction which occurs when the stable isotope 10B is irradiated with neutrons, leading to the nuclear reaction (in the notation of Chapter 17)
¹⁰₅B + ¹₀n → ⁴₂α + ⁷₃Li
... Both the alpha particle and lithium are heavily ionizing and travel only about one cell diameter. BNCT has been tried since the 1950s; success requires boron-containing drugs that accumulate in the tumor.
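The kinematics of this reaction make a nice back-of-the-envelope exercise. A thermal neutron is essentially at rest, so the alpha particle and the lithium nucleus recoil with equal and opposite momenta, sharing the reaction energy Q in inverse proportion to their masses. The sketch below is my own illustration (not from IPMB), using Q ≈ 2.31 MeV for the dominant branch, which leaves the 7Li in an excited state that then emits a 0.48 MeV gamma.

```python
# Energy sharing in 10B(n,alpha)7Li for capture of a thermal neutron.
# Equal and opposite momenta (p_alpha = p_Li) with E = p^2/2m gives
# E_alpha / E_Li = m_Li / m_alpha.
Q = 2.31       # MeV released in the dominant branch (assumed value)
m_alpha = 4.0  # mass numbers are accurate enough for this estimate
m_Li = 7.0

E_alpha = Q * m_Li / (m_alpha + m_Li)
E_Li = Q * m_alpha / (m_alpha + m_Li)
print(f"E_alpha = {E_alpha:.2f} MeV, E_Li = {E_Li:.2f} MeV")
# About 1.47 and 0.84 MeV; both particles stop within about one cell diameter.
```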

The Positron

Discovery number three is the positron, the first example of antimatter. Carl Anderson found evidence of this positive electron in cosmic ray tracks in cloud chambers. Positrons appear in IPMB in two key places. In Chapter 15 (The Interaction of Photons and Charged Particles with Matter) positrons are key to pair production.
A photon with energy above 1.02 MeV can produce a particle–antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mₑc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mₑc² = 1.02 MeV.
The positron appears again in our discussion of β+ decay in Chapter 17.
Two modes of decay allow a nucleus to approach the stable line. In beta (or electron) decay, a neutron is converted into a proton. This keeps A [mass number] constant, lowering N [neutron number] by one and raising Z [atomic number] by one. In positron (β+) decay, a proton is converted into a neutron. Again A remains unchanged, Z decreases and N increases by 1. We find β+ decay for nuclei above the line of stability and β- decay for nuclei below the line.
Isotopes that undergo β+ decay are used in positron emission tomography.
If a positron emitter is used as the radionuclide, the positron comes to rest and annihilates an electron, emitting two annihilation photons back to back. In positron emission tomography (PET) these are detected in coincidence….

PET can provide a functional image with information about metabolic activity. A very common positron agent is 18F fluorodeoxyglucose, glucose in which a hydroxyl group has been replaced with 18F. The PET signal is largest in those cells that have taken up the 18F because they are actively metabolizing glucose. PET has become particularly important in studies of brain function, where active neurons are detected by an increase in their metabolism, and in locating metastatic cancer.
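Coincidence detection defines a line of response joining the two detectors; time-of-flight PET scanners go a step further and use the difference in photon arrival times to localize the annihilation along that line. A minimal sketch of the geometry (my own illustration, not from IPMB):

```python
C = 3.0e8  # speed of light (m/s)

def offset_along_line_of_response(t1, t2):
    """Offset of the annihilation point from the midpoint of the
    detector pair (m), given photon arrival times t1 and t2 (s)."""
    return C * (t2 - t1) / 2.0

# A 300 ps timing difference localizes the event to within ~4.5 cm,
# which is why fast detectors sharpen time-of-flight PET images.
print(offset_along_line_of_response(0.0, 300e-12))  # 0.045 m
```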

Accelerators

The last of the four great developments of 1932 was the first use of accelerators to study nuclear reactions. John Cockcroft and Ernest Walton built an accelerator to produce high-energy protons, which smashed into 7Li to produce two alpha particles. Their work was soon followed by the invention of the cyclotron by Ernest Lawrence, which is now the main tool for producing the unstable isotopes used in PET. Russ and I explain that
Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals.
The Making of the Atomic Bomb, by Richard Rhodes.

Soon after the miraculous year of 1932, Hitler came to power in Germany, and nuclear physics became much more than a scientific curiosity. The story of how the discoveries of Urey, Chadwick, Anderson, Cockcroft and Walton led relentlessly to the Manhattan Project is told masterfully in Richard Rhodes' book The Making of the Atomic Bomb.

I have a few personal connections to this watershed year. My father Ron Roth, now retired and living in Lenexa, Kansas, was born in 1932, proving that we are not so far removed from that historic time. In addition, my academic genealogy goes back to James Chadwick and Ernest Rutherford (whose lab Cockcroft and Walton worked in). Finally, Carl Anderson worked under the supervision of Robert Millikan, who was born in Morrison, Illinois, the small town I grew up in.

Friday, March 22, 2013

Barouh Berkovits (1926-2012)

When my March 2013 issue of the journal Heart Rhythm arrived this week, I found in it an obituary for Barouh Berkovits, who died last year.
Barouh Vojtec Berkovits passed away on October 23, 2012, at the age of 86 years. Berkovits was a master of science and an electrical engineer. Born in 1926 in Lucenec, Czechoslovakia (today in Slovakia), he worked as a technician behind the enemy lines. He escaped the Holocaust, but his parents and sister Eva perished in Auschwitz, Poland. In 1949 he immigrated to Israel and in 1956 to the United States… Berkovits invented and patented the first demand pacemaker capable of sensing the R wave…For his contributions to the treatment of cardiac arrhythmias, Berkovits received the “Distinguished Scientist Award” in 1982 by the Heart Rhythm Society.
Machines in Our Hearts, by Kirk Jeffrey.
The story of how Berkovits invented the demand pacemaker is told in Machines in Our Hearts, by Kirk Jeffrey.
Barouh V. Berkovits (b. 1924), an engineer at the American Optical Company, was already well known as the inventor of the DC defibrillator and the cardioverter, a device that interrupts a rapid heart rate (tachycardia) with low-energy shocks. He knew that when the cardioverter discharged randomly into the tachycardia, it would “occasionally not only not stop the tachyarrhythmia…but would produce ventricular fibrillation.” Cardioversion has to be synchronized to fall within the QRS complex and avoid the vulnerable period of the heartbeat. In 1963, Berkovits applied this principle to cardiac pacing. To solve the problem of competition [between the SA node and the artificial pacemaker], Berkovits in 1963 designed a sensing capability into the pacemaker. His invention behaved exactly like an asynchronous pacer until it detected a naturally occurring R wave, the indication of a ventricular contraction. This event would reset the timing circuit of the pacemaker, and the countdown to the next stimulus would begin anew. Thus the pacer stimulated the heart only when the ventricles failed to contract. It worked only “on demand.” As an added benefit, non-competitive pacing extended the life of the battery.
The 4th edition of Intermediate Physics for Medicine and Biology does not mention Berkovits by name, but Homework Problem 45 in Chapter 7 does analyze the demand pacemaker.
Problem 45 A patient with “intermittent heart block” has an AV node which functions normally most of the time with occasional episodes of block, lasting perhaps several hours. Design a pacemaker to treat the patient. Ideally, your design will not stimulate the heart when it is functioning normally. Describe
(a) whether you will stimulate the atria or ventricles
(b) which chambers you will monitor with a recording electrode
(c) what logic your pacemaker will use to determine when to stimulate. Your design may be similar to a “demand pacemaker” described in Jeffrey (2001), p. 132.
Of course, the reference is to Machines in Our Hearts. Berkovits’s phenomenal career is yet another example of how knowledge of engineering and physics can allow you to contribute to medicine and biology.
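The logic the problem asks for in part (c) fits in a few lines. Below is a minimal sketch of demand-pacemaker behavior in the spirit of Berkovits's design (a toy model with made-up parameter names, certainly not a medical device algorithm): a timer counts up toward an escape interval, any sensed R wave resets it, and a stimulus is delivered only when the timer expires.

```python
class DemandPacemaker:
    """Toy model of 'on demand' ventricular pacing (illustrative only)."""

    def __init__(self, escape_interval_s=1.0):
        self.escape_interval = escape_interval_s  # longest allowed gap between beats
        self.time_since_beat = 0.0

    def step(self, dt, r_wave_sensed):
        """Advance the clock by dt seconds; return True if a stimulus is delivered."""
        if r_wave_sensed:
            self.time_since_beat = 0.0  # native beat: inhibit output, reset timer
            return False
        self.time_since_beat += dt
        if self.time_since_beat >= self.escape_interval:
            self.time_since_beat = 0.0  # ventricle failed to contract: pace it
            return True
        return False

# During normal sinus rhythm (R waves arriving every ~0.8 s) the pacer
# stays silent; during an episode of block it fires once per escape interval.
```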

Friday, March 15, 2013

The Technology of Medicine

In Chapter 5 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the artificial kidney as an example of the use of physics and engineering to solve a medical problem. Rather than delving into the engineering details, today we consider the larger question of technology in medicine. Russ and I write
The reader should also be aware that this “high-technology” solution to the problem of chronic renal disease is not an entirely satisfactory one… The distinction between a high-technology treatment and a real conquest of a disease has been underscored by Thomas (1974, pp. 31–36).
The Lives of a Cell, by Lewis Thomas.
The citation is to the book The Lives of a Cell by Lewis Thomas (physician, poet, essayist, administrator). To me, his essays—written 40 years ago—sound surprisingly modern. For instance, the introduction to his essay about “The Technology of Medicine” is relevant today as we struggle with the role of expensive technology in the ever-increasing cost of health care.
Technology assessment has become a routine exercise for the scientific enterprises on which the country is obliged to spend vast sums for its needs. Brainy committees are continually evaluating the effectiveness and cost of doing various things in space, defense, energy, transportation, and the like, to give advice about prudent investments for the future. Somehow medicine, for all the $80-odd billion that it is said to cost the nation [$2-something trillion in 2013], has not yet come in for much of this analytical treatment. It seems taken for granted that the technology of medicine simply exists, take it or leave it, and the only major technologic problem which policy-makers are interested in is how to deliver today's kind of health care, with equity, to all the people.

When, as is bound to happen sooner or later, the analysts get around to the technology of medicine itself, they will have to face the problem of measuring the relative cost and effectiveness of all the things that are done in the management of disease. They make their living at this kind of thing, and I wish them well, but I imagine they will have a bewildering time… 
The analysts have finally gotten around to it. As our nation spends 15% of our Gross Domestic Product on health care, the costs of technology in medicine are no longer taken for granted. The Affordable Care Act (a.k.a. Obamacare) focuses on research into the comparative effectiveness of treatments, often measured by the incremental cost-effectiveness ratio. And as Thomas predicted, the analysts are having a bewildering time dealing with it.
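For readers who haven't met the metric, the incremental cost-effectiveness ratio (ICER) is simply the extra cost of a new treatment divided by the extra health benefit it provides, usually expressed in dollars per quality-adjusted life year (QALY). A minimal sketch with purely hypothetical numbers:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio, in dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical treatments: the new one costs $40,000 more and adds
# half a quality-adjusted life year.
print(icer(cost_new=60_000, cost_old=20_000, qaly_new=6.5, qaly_old=6.0))
# 80000.0 dollars per QALY gained
```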

Thomas divides “technology” into three types. His first type is not really technology at all (“no-technology”). It is “supportive therapy,” which “tides patients over through diseases that are not, by and large, understood.” There is not much physics in this category, so we will move on.

The second level of technology, which he calls “halfway technology,” represents “the kinds of things that must be done after the fact, in efforts to compensate for the incapacitating effects of certain diseases whose course one is unable to do very much about. It is a technology designed to make up for disease, or to postpone death.” The artificial kidney, as well as kidney transplants, fall into this category, and “almost everything offered today for the treatment of heart disease is at this level of technology, with the transplanted and artificial hearts as ultimate examples.” There is lots of physics in this category. Yet, Thomas sees it as, at best, an intermediate—and expensive—type of medical solution. “In fact, this level of technology is, by its nature, at the same time highly sophisticated and profoundly primitive. It is the kind of thing that one must continue to do until there is a genuine understanding of the mechanisms involved in disease.”

The third type of technology is based on a complete understanding of the causes of disease. Thomas writes that it “is the kind that is so effective that it seems to attract the least public notice; it has come to be taken for granted. This is the genuinely decisive technology of modern medicine, exemplified best by modern methods for immunization against diphtheria, pertussis, and the childhood virus diseases, and the contemporary use of antibiotics and chemotherapy for bacterial infections.”

Is there physics in this third category? I think so. Biological mechanisms will be based, ultimately, on the constraints of physical laws, and one can’t hope to understand biology without physics (at least, this is what I believe). Perhaps we can say that physics and engineering are essential for the second type of technology, whereas physics and biology are essential for the third type.

Thomas clearly favors the third category. He concludes
The point to be made about this kind [the third type] of technology—the real high technology of medicine—is that it comes as the result of a genuine understanding of disease mechanisms, and when it becomes available, it is relatively inexpensive, and relatively easy to deliver.

Offhand, I cannot think of any important human disease for which medicine possesses the outright capacity to prevent or cure where the cost of the technology is itself a major problem. The price is never as high as the cost of managing the same diseases during the earlier stages of no-technology or halfway technology…

It is when physicians are bogged down by their incomplete technologies, by the innumerable things they are obliged to do in medicine when they lack a clear understanding of disease mechanisms, that the deficiencies of the health-care system are most conspicuous. If I were a policy-maker, interested in saving money for health care over the long haul, I would regard it as an act of high prudence to give high priority to a lot more basic research in biologic science. This is the only way to get the full mileage that biology owes to the science of medicine, even though it seems, as used to be said in the days when the phrase still had some meaning, like asking for the moon.
As we face the looming crisis of budget sequestration, with its devastating cutbacks in funding for the National Institutes of Health and the National Science Foundation, and as the calls for translational medical research increase, I urge our legislators to heed Thomas’s advice and “give high priority to a lot more basic research.”

Friday, March 8, 2013

Helium Shortage!

A recent article in the New York Times discusses the looming shortage of helium.
A global helium shortage has turned the second-most abundant element in the universe (after hydrogen) into a sought-after scarcity, disrupting its use in everything from party balloons and holiday parade floats to M.R.I. machines and scientific research….

Experts say the shortage has many causes. Because helium is a byproduct of natural gas extraction, a drop in natural gas prices has reduced the financial incentives for many overseas companies to produce helium. In addition, suppliers’ ability to meet the growing demand for helium has been strained by production problems around the world. Helium plants that are being built or are already operational in Qatar, Algeria, Wyoming and elsewhere have experienced a series of construction delays or maintenance troubles.
One medical use of helium is discussed in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 8, Russ Hobbie and I write about the role of helium in magnetoencephalography—the biomagnetic measurement of electrical activity in the brain—using Superconducting Quantum Interference Device (SQUID) magnetometers.
The SQUID must be operated at temperatures where it is superconducting. It used to be necessary to keep a SQUID in a liquid-helium bath, which is expensive to operate because of the high evaporation rate of liquid helium. With the advent of high-temperature superconductors, SQUIDS have the potential to operate at liquid-nitrogen temperatures, where the cooling problems are much less severe [for additional information, see here].
A more widespread use of helium in medicine is during magnetic resonance imaging. Chapter 18 of our book discusses MRI, but it does not describe how the strong, static magnetic field required by MRI is created. In a clinical MRI system, a magnetic field (typically 2 to 4 T) must exist over a large volume. Producing such a field with permanent magnets would, if it were possible at all, require giant, massive, expensive structures. A more practical method is to use coils carrying a large current. One way to avoid the resulting Joule heating losses in the coils is to make them out of superconducting wire, which must be cooled cryogenically. An article on the Time Magazine online newsfeed states
Liquid helium has an extremely low boiling point—minus 452.1 degrees Fahrenheit, close to absolute zero—which makes it a perfect substance for cooling the superconducting magnets found in MRI machines. Hospitals are generally the first in line for helium, so the shortage isn’t affecting them yet. But prices for hospital-grade helium may continue to go up, leading to higher health-care costs or, in the worst-case scenario, the need for a backup plan for cooling MRI machines.
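A rough estimate shows the scale involved (my own back-of-the-envelope numbers, not from the Time article). Treating the magnet as an ideal solenoid, B = μ0nI, a 1.5 T field requires a current of a few hundred amps, and if the winding were ordinary copper that current would dissipate megawatts of heat.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (T m/A)

# Ideal-solenoid estimate, B = mu0 * (turns/length) * I, illustrative numbers:
B = 1.5         # target field (T)
turns = 8000.0  # total number of turns (assumed)
length = 2.0    # solenoid length (m)
I = B / (MU0 * turns / length)
print(f"Current: {I:.0f} A")  # roughly 300 A

# The same current in a resistive copper winding:
rho_cu = 1.7e-8      # resistivity of copper (ohm m)
wire_length = 1.0e4  # "several miles" of wire (m), assumed
area = 5.0e-6        # wire cross section (m^2), assumed
R = rho_cu * wire_length / area
print(f"Joule heating in copper: {I**2 * R / 1e6:.1f} MW")  # about 3 MW
```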
More detail about the use of helium in MRI can be found in an online book titled The Basics of MRI by Joseph Hornak. Below I quote some of the text, but you will need to go to the book's website to see the pictures and animations.
The imaging magnet is the most expensive component of the magnetic resonance imaging system. Most magnets are of the superconducting type. This is a picture of a first generation 1.5 Tesla superconducting magnet from a magnetic resonance imager. A superconducting magnet is an electromagnet made of superconducting wire. Superconducting wire has a resistance approximately equal to zero when it is cooled to a temperature close to absolute zero (−273.15° C or 0 K) by immersing it in liquid helium. Once current is caused to flow in the coil it will continue to flow as long as the coil is kept at liquid helium temperatures. (Some losses do occur over time due to the infinitesimally small resistance of the coil. These losses can be on the order of a ppm of the main magnetic field per year.)

The length of superconducting wire in the magnet is typically several miles. The coil of wire is kept at a temperature of 4.2 K by immersing it in liquid helium. The coil and liquid helium are kept in a large dewar. The typical volume of liquid helium in an MRI magnet is 1700 liters. In early magnet designs, this dewar was typically surrounded by a liquid nitrogen (77.4 K) dewar which acted as a thermal buffer between the room temperature (293 K) and the liquid helium. See the animation window for a cross sectional view of a first generation superconducting imaging magnet.

In later magnet designs, the liquid nitrogen region was replaced by a dewar cooled by a cryocooler or refrigerator. There is a refrigerator outside the magnet with cooling lines going to a coldhead in the liquid helium. This design eliminates the need to add liquid nitrogen to the magnet, and increases the liquid helium hold time to 3 to 4 years. The animation window contains a cross sectional view of this type of magnet. Researchers are working on a magnet that requires no liquid helium.
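Hornak's "ppm of the main magnetic field per year" figure implies an astonishingly small residual resistance. Modeling the magnet as an L–R circuit, the current decays as exp(−tR/L), so a loss of 1 ppm per year corresponds to a time constant of a million years. A sketch with an assumed inductance:

```python
SECONDS_PER_YEAR = 3.15e7

# Persistent-mode decay: I(t) = I0 * exp(-t * R / L),
# so a loss of 1 ppm per year means R/L = 1e-6 per year.
decay_per_year = 1e-6
L = 10.0  # magnet inductance in henries (assumed order of magnitude)

R = L * decay_per_year / SECONDS_PER_YEAR
print(f"Time constant: {1 / decay_per_year:.0e} years")  # 1e+06 years
print(f"Residual resistance: {R:.1e} ohm")               # ~3e-13 ohm
```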
With the discovery of high-temperature superconductivity (HTS), MRI magnets that operate at higher temperatures, avoiding the need for liquid helium, become possible. The ideal solution to the helium shortage would be superconducting coils cooled with liquid nitrogen. Nitrogen makes up 80% of our atmosphere, so it is free and virtually limitless. However, a 2010 article by scientists at the MIT Francis Bitter Magnet Laboratory (FBML) suggests that a more practical solution might be to use solid nitrogen to reach temperatures of 20 K, where superconducting materials such as magnesium diboride (MgB2) have the properties required for magnet coils.
The tremendous progress achieved in the past decade, and continuing today, has transformed selected HTS materials into “magnet-grade” conductors, i.e., they meet rigorous magnet specifications and are readily available from commercial wire manufacturers [1]. We are now at the threshold of a new era in which HTS will play a key role in a number of applications—here MgB2 (Tc = 39 K) is classified as an HTS. The HTS offers opportunities and challenges to a number of applications for superconductivity. In this paper we briefly describe three NMR/MRI magnet programs currently being developed at FBML that would be impossible without HTS: 1) a 1.3 GHz NMR magnet; 2) a compact NMR magnet assembled from YBCO [yttrium barium copper oxide] annuli; and 3) a persistent-mode, fully-protected MgB2 0.5-T/800-mm whole-body MRI magnet.
Even if new MRI magnets using solid nitrogen or some other abundant substance as the coolant were developed, there are thousands of existing MRI devices that still would require liquid helium and would be very expensive to replace. Congress is currently considering legislation to address the helium shortage (see article here). We urgently need to preserve our helium supply to ensure its availability for important medical devices.

P.S. I saw this article just a few days ago. High temperature superconductors for MRI may be just around the corner!

Friday, March 1, 2013

Magnetoacoustic Tomography with Magnetic Induction

Magnetoacoustic tomography with magnetic induction is a new method to image the distribution of electrical conductivity in tissue. Bin He, the director of the Institute for Engineering in Medicine at the University of Minnesota, developed this technique with his student Yuan Xu in a 2005 publication (Physics in Medicine and Biology, Volume 50, Pages 5175–5187). They describe MAT-MI in their introduction.
We have developed a new approach called magnetoacoustic tomography with magnetic induction (MAT-MI) by combining ultrasound and magnetism. In this method, the object is in a static magnetic field and a time-varying (μs) magnetic field... The time-varying magnetic field induces an eddy current in the object. Consequently, the object will emit ultrasonic waves through the Lorentz force produced by the combination of the eddy current and the static magnetic field. The ultrasonic waves are then collected by the detectors located around the object for image reconstruction. MAT-MI combines the good contrast of EIT [electrical impedance tomography] with the good spatial resolution of sonography.
One nice feature of MAT-MI is that it fits so well into the 4th edition of Intermediate Physics for Medicine and Biology, in which Russ Hobbie and I analyze both eddy currents caused by Faraday induction (Chapter 8) and ultrasound imaging (Chapter 13). Another characteristic of MAT-MI is that the physics is simple enough that it can be summarized in a homework problem. So, dear reader, here is a new problem that will help you understand MAT-MI.
Section 8.6

Problem 25 ½  Assume a sheet of tissue having conductivity σ is placed perpendicular to a uniform, strong, static magnetic field B0, and a weaker spatially uniform but temporally oscillating magnetic field B1(t).
(a) Derive an expression for the electric field E induced by the oscillating magnetic field. It will depend on the distance r from the center of the sheet and the rate of change of the magnetic field.
(b) Determine an expression for the current density J by multiplying the electric field by the conductivity.
(c) The force per unit volume, F, is given by the Lorentz force, J×B0 (ignore the weak B1). Find an expression for F.
(d) The source of the ultrasonic pressure waves can be expressed as the divergence of the Lorentz force. Derive an expression for ∇ · F.
(e) Draw a picture showing the directions of J, B0, and F.
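For readers who want to check their work, here is a sketch of the solution (my own working, not an official answer), using cylindrical coordinates with B0 along z:

```latex
\begin{align*}
\text{(a)}\;& \oint \mathbf{E}\cdot d\boldsymbol{\ell} = -\frac{d\Phi}{dt}
   \;\Rightarrow\; E_\phi = -\frac{r}{2}\,\frac{dB_1}{dt},\\
\text{(b)}\;& J_\phi = \sigma E_\phi = -\frac{\sigma r}{2}\,\frac{dB_1}{dt},\\
\text{(c)}\;& \mathbf{F} = \mathbf{J}\times B_0\hat{\mathbf{z}}
   = J_\phi B_0\,(\hat{\boldsymbol{\phi}}\times\hat{\mathbf{z}})
   = -\frac{\sigma r B_0}{2}\,\frac{dB_1}{dt}\,\hat{\mathbf{r}},\\
\text{(d)}\;& \nabla\cdot\mathbf{F}
   = \frac{1}{r}\,\frac{\partial\,(r F_r)}{\partial r}
   = -\sigma B_0\,\frac{dB_1}{dt}.
\end{align*}
```

For part (e): the eddy current J circulates azimuthally, B0 points out of the sheet, and the force F is radial, directed inward or outward depending on the sign of dB1/dt.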
While this example is simple enough to serve as a homework problem, it does not illustrate imaging of conductivity; the conductivity is uniform, so there is no variation to image. As Xu and He explain, if the conductivity varies with position, it also contributes to ∇ · F and therefore influences the radiated ultrasonic wave. Thus, information about the conductivity distribution σ(x,y) is contained in the pressure. Subsequent papers by He and his colleagues explore methods for extracting σ(x,y) from the ultrasonic signal. Potential applications include using MAT-MI to image breast cancer tumors.

I’ve worked on MAT-MI a little bit. University of Michigan student Kayt Brinker and I published a paper describing MAT-MI in anisotropic tissue like skeletal muscle, where the conductivity is much higher parallel to the muscle fibers than perpendicular to them [Brinker, K. and B. J. Roth (2008) “The effect of electrical anisotropy during magneto-acoustic tomography with magnetic induction,” IEEE Transactions on Biomedical Engineering, Volume 55, Pages 1637–1639]. For some reason the figures published by the journal were not of high quality, so here I reproduce a better version of Figure 6, which shows the pressure wave produced during MAT-MI.

Fig. 6. Pressure at 20, 40, 60, and 80 μs in isotropic and anisotropic tissue. Each panel represents a 400 mm by 400 mm area.
In isotropic tissue, the wave propagates outward, the same in all directions. In electrically anisotropic tissue, the pressure is greater in the direction perpendicular to the fiber axis (vertical) than parallel to it (horizontal). The main difference between our calculation and that in the new homework problem given above is that Kayt and I restricted the oscillating magnetic field B1 to a small region (40 mm radius) at the center of the tissue sheet.