Friday, March 26, 2010

Erwin Neher

I subscribe to a monthly magazine, The Scientist, which was founded by Eugene Garfield (who also was a founder of the Science Citation Index). It provides print and online coverage about biomedical research, technology and business. I’m not sure what I did to deserve it, but I get a paper copy delivered to my office for free, and I can tell you for certain that the magazine is worth the price. Seriously, it is a valuable resource, and the articles are general enough that I can follow them without having to consult my physiology and biochemistry textbooks. The online site contains many of the articles for free, and also has career information for young scientists. I recommend it.

The March 2010 issue of The Scientist contains a profile of Nobel Prize winner Erwin Neher, the developer of the patch clamp technique. Russ Hobbie and I discuss patch-clamp recording in Chapter 9 of the 4th edition of Intermediate Physics for Medicine and Biology.
The next big advance was patch-clamp recording [Neher and Sakmann (1976)]. Micropipettes were sealed against a cell membrane that had been cleaned of connective tissue by treatment with enzymes. A very-high-resistance seal resulted [(2–3) × 10⁷ Ohm] that allowed one to see the opening and closing of individual channels. For this work Erwin Neher and Bert Sakmann received the Nobel Prize in Physiology or Medicine in 1991. Around 1980, Neher’s group found a way to make even higher-resistance (10¹⁰–10¹¹ Ohm) seals that reduced the noise even further and allowed patches of membrane to be torn from the cell while adhering to the pipette [Hamill et al. (1981)]…
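The link between seal resistance and noise follows from the Johnson (thermal) noise formula: the RMS current noise through a resistance R in a bandwidth B is √(4kTB/R), so going from a ~25 MΩ seal to a ~10 GΩ gigaseal cuts the noise by a factor of about 20, bringing it below the picoamp currents of single channels. A minimal sketch (the 1 kHz bandwidth and room temperature are assumed, illustrative values):

```python
import math

def johnson_current_noise(R, T=295.0, bandwidth=1e3):
    """RMS thermal (Johnson) current noise in amperes through a
    resistance R (ohms) at temperature T (kelvin) over the given
    bandwidth (hertz): i_rms = sqrt(4 k T B / R)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k * T * bandwidth / R)

# Seal resistances before and after the gigaseal (Hamill et al. 1981)
for R in (2.5e7, 1e10):
    print(f"R = {R:.0e} ohm -> i_rms = {johnson_current_noise(R)*1e12:.2f} pA")
```

At 25 MΩ the thermal noise (~0.8 pA RMS) is comparable to a single-channel current; at 10 GΩ it drops to ~0.04 pA, which is why the gigaseal made single channels "suddenly easy" to see.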
The profile in The Scientist provides some insight into how this research began.
[Neher’s former postdoc Fred] Sigworth remembers it well. “I came into lab that Monday morning and Erwin said, with a twinkle in his eye, ‘I think I know how you’re going to see sodium channels,’” he says. These channels—essential to neural communication—had proven elusive because they produce such small currents and remain open for such a short time. But thanks to the team’s new “patch-clamp” technique—and in particular, the formation of an incredibly tight seal, or “gigaseal,” between the pipette tip and the cell membrane—“seeing sodium channels suddenly became really easy,” says Sigworth, who, along with Neher, published these observations (and the first description of the tight-seal patch-clamp technique) in Nature in 1980.
I always enjoy reading about the quirks and odd twists of fate that often accompany scientific advance. The profile in The Scientist provides an entertaining anecdote.
You also needed to suck. “You had to apply a little bit of suction in order to pull some membrane into the orifice of the pipette,” says Neher. “If you did it the right way, it worked.” At least for Neher. “There was a weird period where we could no longer get gigaseals,” recalls [Owen] Hamill [a postdoc at the time]. “Then Bert suggested you have to blow before you suck.” Gently blowing a solution through the pipette as it approaches the surface of the cell keeps the tip from picking up debris during the descent. Between the blowing and the sucking, Hamill says, “our efficiency went up to 99 percent.”
I found it interesting that Neher’s undergraduate degree was in physics, and it was only after he arrived at the University of Wisconsin on a Fulbright Scholarship that he began studying biophysics. In his Nobel autobiography he describes his early motivation for studying biological physics.
At the age of 10, I entered the “Maristenkolleg” at Mindelheim [...] the local “Gymnasium” is operated by a catholic congregation, the “Maristenschulbrüder.” The big advantage of this school was that our teachers—both those belonging to the congregation and others—were very dedicated and were open not only to the subject matter but also to personal issues. During my years at the Gymnasium (1954 to 1963) I found out that, next to my interest in living things, I also could immerse myself in technical and analytical problems. In fact, pretty soon, physics and mathematics became my favourite subjects. At the same time, however, new concepts unifying these two areas had seeped into the literature, which was accessible to me. I eagerly read about cybernetics, which was a fashionable word at that time, and studied everything in my reach on the “Hodgkin-Huxley theory” of nerve excitation. By the time of my Abitur—the examination providing access to university—it was clear to me that I should become a “biophysicist.” My plan was to study physics, and later on add biology.
Neher provides a classic example of how a strong background in physics can lead to advances in biology and medicine, a major theme underlying Intermediate Physics for Medicine and Biology.

Friday, March 19, 2010

How Should We Teach Physics to Future Life Scientists and Physicians?

The American Physical Society publishes a monthly newspaper, the APS News, and the back page of each issue contains an editorial that goes under the name—you guessed it—“The Back Page.” Readers of the 4th edition of Intermediate Physics for Medicine and Biology will want to read The Back Page in the March 2010 issue, subtitled “Physics for Future Physicians and Life Scientists: A Moment of Opportunity.” This excellent editorial—written by Catherine Crouch, Robert Hilborn, Suzanne Amador Kane, Timothy McKay, and Mark Reeves—champions many of the ideas that underlie our textbook. The editorial begins
How should we teach physics to future life scientists and physicians? The physics community has an exciting and timely opportunity to reshape introductory physics courses for this audience. A June 2009 report from the Association of American Medical Colleges (AAMC) and the Howard Hughes Medical Institute (HHMI), as well as the National Research Council’s Bio2010 report, clearly acknowledge the critical role physics plays in the contemporary life sciences. They also issue a persuasive call to enhance our courses to serve these students more effectively by demonstrating the foundational role of physics for understanding biological phenomena and by making it an explicit goal to develop in students the sophisticated scientific skills characteristic of our discipline. This call for change provides an opportunity for the physics community to play a major role in educating future physicians and future life science researchers.

A number of physics educators have already reshaped their courses to better address the needs of life science and premedical students, and more are actively doing so. Here we describe what these reports call for, their import for the physics community, and some key features of these reshaped courses. Our commentary is based on the discussions at an October 2009 conference (www.gwu.edu/~ipls), at which physics faculty engaged in teaching introductory physics for the life sciences (IPLS), met with life scientists and representatives of NSF, APS, AAPT, and AAMC, to take stock of these calls for change and possible responses from the physics community. Similar discussion on IPLS also took place at the 2009 APS April Meeting, the 2009 AAPT Summer Meeting, and the February 2010 APS/AAPT Joint Meeting.
One key distinction between our textbook and the work described in The Back Page editorial is that our book is aimed toward an intermediate level, while the IPLS movement is aimed at the introductory level. Like it or not, premedical students have a difficult time fitting additional physics courses into their undergraduate curriculum. I know that here at Oakland University, I’ve been able to entice only a handful of premed students to take my PHY 325 (Biological Physics) and PHY 326 (Medical Physics) classes, despite my best efforts to attract them and despite OU’s large number of students hoping to attend medical school (these classes have our two-semester introductory physics sequence as a prerequisite). So, I think there’s merit in revising the introductory physics class, which premedical students are required to take, if your goal is to influence premedical education. As The Back Page editorial states, “the challenge is to offer courses that cultivate general quantitative and scientific reasoning skills, together with a firm grounding in basic physical principles and the ability to apply those principles to living systems, all without increasing the number of courses needed to prepare for medical school.” The Back Page editorial also cites the “joint AAMC-HHMI committee … report, Scientific Foundations for Future Physicians (SFFP). This report calls for removing specific course requirements for medical school admission and focusing instead on a set of scientific and mathematical ‘competencies.’ Physics plays a significant role…”

How do you fit all the biomedical applications of physics into an already full introductory class? The Back Page editorial gives some suggestions. For instance, “an extended discussion of kinematics and projectile motion could be replaced by more study of fluids and continuum mechanics... [and] topics such as diffusion and open systems could replace the current focus on heat engines and equilibrium thermal situations.” I agree, especially with adding fluid dynamics (Chapter 1 in our book) and diffusion (Chapter 4), which I believe are absolutely essential for understanding biology. I have my own suggestions. Although Newton’s universal law of gravity, Kepler’s laws of planetary motion, and the behavior of orbiting satellites are fascinating and beautiful topics, a premed student may benefit more from the study of osmosis (Chapter 5) and sound (Chapter 13, including ultrasound). Electricity and magnetism remains a cornerstone of introductory physics (usually in a second semester of a two-semester sequence), but the emphasis could be different. For instance, Faraday’s law of induction can be illustrated using magnetic stimulation of the brain, Ampere’s law by the magnetic field around a nerve axon, and the dipole approximation by the electrocardiogram. In a previous post to this blog, I discussed how Intermediate Physics for Medicine and Biology addresses many of these issues. Russ Hobbie will be giving an invited paper about medical physics and premed students at the July 2010 meeting of the American Association of Physics Teachers. When he gives the talk it will be posted on the book website.
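The nerve-axon example of Ampere’s law is easy to make quantitative: for a long, straight axon carrying axial current I, the field at distance r is B = μ₀I/(2πr). A quick sketch with illustrative numbers (the 1 μA current and 1 mm distance are my assumptions, chosen only to show the order of magnitude):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T·m/A

def field_around_axon(I, r):
    """Ampere's-law magnetic field (tesla) at distance r (m) from a long,
    straight axon carrying axial current I (amperes)."""
    return MU0 * I / (2 * math.pi * r)

# A ~1 microamp action current viewed from 1 mm away
B = field_around_axon(1e-6, 1e-3)
print(f"{B*1e9:.1f} nT")  # → 0.2 nT
```

A field of ~0.2 nT is hundreds of thousands of times weaker than Earth’s field, which makes clear to students why biomagnetic measurements demand special detectors.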

One way to shift the focus of an introductory physics class toward the life sciences is to create new homework problems that use elementary physics to illustrate biological applications. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I constructed many interesting homework problems about biomedical topics. While some of these may be too advanced for an introductory class, others may (with some modification) be very useful. Indeed, teaching a traditional introductory physics class but using a well-crafted set of homework problems may go a long way toward achieving the goals set out by The Back Page editorial.

Let me finish this blog entry by quoting the eloquent final paragraph of The Back Page editorial. Notice that the editorial ends with the same central question that began it. It is the question that motivated Russ Hobbie to write the first edition of Intermediate Physics for Medicine and Biology (published in 1978) and it is the key question that Russ and I struggled with when working on the 4th edition.
The physics community faces a challenging opportunity as it addresses the issues surrounding IPLS courses. A sizable community we serve has articulated a clear set of skills and competencies that students should master as a result of their physics education. We have for a number of decades incorporated engineering examples into our physics classes. The SFFP report asks us to respond to another important constituency. Are we ready to develop courses that will teach our students how to apply basic physical principles to the life sciences? The challenges of making significant changes in IPLS courses are daunting if we each individually try to take on the task. But with a community-wide effort, we should be able to meet this challenge. The physics community is already moving to develop and implement changes in IPLS courses, and the motivations for change are strong. The life science and medical school communities stress that a working knowledge of physical principles is essential to success in all areas of life science including the practice of medicine. Thus we see significant teaching and learning opportunities as we work to answer the question that opened our discussion: how should we teach physics to future physicians and life scientists?

Friday, March 12, 2010

The Strangest Man

The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom, by Graham Farmelo.
I recently read The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom, by Graham Farmelo, a fascinating biography of the Nobel Prize winning physicist Paul Adrien Maurice Dirac. One thing I did not find in the book was biological or medical physics. Nevertheless, Russ Hobbie and I mention Dirac in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology, in connection with the Dirac delta function.
The δ function can be thought of as a rectangle of width a and height 1/a in the limit [as a goes to zero]… The δ function is not like the usual function in mathematics because of its infinite discontinuity at the origin. It is one of a class of “generalized functions” whose properties have been rigorously developed by mathematicians since they were first used by the physicist P. A. M. Dirac.
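The rectangle definition can be checked numerically: for any smooth test function f, the integral of f(x) times a rectangle of width a and height 1/a tends to f(0) as a → 0. A simple sketch using the trapezoid rule:

```python
import math

def delta_rect(x, a):
    """Rectangle of width a and height 1/a centered at the origin."""
    return 1.0 / a if abs(x) < a / 2 else 0.0

def integrate_against(f, a, n=100000, lo=-1.0, hi=1.0):
    """Trapezoid-rule integral of f(x) * delta_rect(x, a) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(x) * delta_rect(x, a) * h
    return total

# As a shrinks, the integral of cos(x) * delta(x) approaches cos(0) = 1
for a in (0.5, 0.1, 0.01):
    print(a, integrate_against(math.cos, a))
```

As a decreases, the result converges toward cos(0) = 1, which is the sifting property ∫f(x)δ(x)dx = f(0) emerging in the limit.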
The Principles of Quantum Mechanics, by Paul Dirac.
Dirac won his Nobel Prize for contributions to quantum mechanics. I bought a copy of his famous textbook The Principles of Quantum Mechanics when I was an undergraduate at the University of Kansas. Farmelo describes it as “never out of print, it remains the most insightful and stylish introduction to quantum mechanics and is still a powerful source of inspiration for the most able young theoretical physicists. Of all the textbooks they use, none presents the theory with such elegance and with such relentless logic.”

One of Dirac’s greatest contributions was the prediction of positive electrons, or positrons, a type of antimatter. His prediction arose from the relativistic wave equation for the electron, now called the Dirac equation. An interesting feature of the Dirac equation is that it implies negative energy states. The only time these negative states are observable is when an electron is missing from one of the states: a hole. Farmelo writes
The bizarre upshot of the theory is that the entire universe is pervaded by an infinite number of negative-energy electrons – what might be thought of as a “sea.” Dirac argued that this sea has a constant density everywhere, so that experimenters can observe only departures from this perfect uniformity… Only a disturbance in Dirac’s sea—a bursting bubble, for example—would be observable. He envisaged just this when he foresaw that there would be some vacant states in the sea of negative-energy electrons, causing tiny departures from the otherwise perfect uniformity. Dirac called these unoccupied states “holes”... Each hole has positive energy and positive charge—the properties of the proton, the only other subatomic particle known at that time [1929]. So Dirac made the simplest possible assumption by suggesting that a hole is a proton.
We now know that these holes are not protons but are positrons, discovered experimentally in 1932 by Carl Anderson. Positrons are vital for understanding how x-rays interact with matter, as Russ and I describe in Section 15.6 of Intermediate Physics for Medicine and Biology
A photon with energy above 1.02 MeV can produce a particle-antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mc² = 1.02 MeV.

One can show, using E = pc for the photon, that momentum is not conserved by the positron and electron if Eq. 15.23 [conservation of energy] is satisfied. However, pair production always takes place in the Coulomb field of another particle (usually a nucleus) that recoils to conserve momentum.
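That momentum argument can be verified numerically. In units where energies are in MeV and momenta in MeV/c, the photon carries momentum p = E, while each member of the pair carries p = √(E² − (mc²)²) < E, so even with both particles moving straight ahead the pair’s total momentum falls short of the photon’s, and a recoiling nucleus is required. A sketch (the energy splits below are arbitrary choices, and nuclear recoil is ignored):

```python
import math

MC2 = 0.511  # electron rest energy in MeV

def pair_momentum_deficit(E_photon, fraction):
    """For a photon of energy E_photon (MeV) splitting its energy between
    positron (fraction) and electron (1 - fraction), return the gap between
    the photon momentum and the maximum total pair momentum (both particles
    moving forward), in MeV/c.  A positive gap means momentum cannot
    balance without a recoiling third body."""
    E_pos = fraction * E_photon
    E_ele = (1 - fraction) * E_photon
    p_photon = E_photon  # E = pc for a photon, so p = E/c (units of MeV/c)
    p_pair = math.sqrt(E_pos**2 - MC2**2) + math.sqrt(E_ele**2 - MC2**2)
    return p_photon - p_pair

# Even well above threshold the photon carries more momentum than the pair
for E in (2.0, 5.0, 50.0):
    print(E, pair_momentum_deficit(E, 0.5))
```

The deficit is positive at every energy tried, shrinking as the energy grows but never reaching zero, which is exactly why the Coulomb field of a nucleus must absorb the excess momentum.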
In Sec. 17.14, Russ and I describe the crucial role positrons play in medical imaging.
If a positron emitter is used as the radionuclide, the positron comes to rest and annihilates an electron, emitting two annihilation photons back to back. In positron emission tomography (PET) these are detected in coincidence. This simplifies the attenuation correction, because the total attenuation for both photons is the same for all points of emission along each gamma ray through the body (see Problem 54). Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals. Patterson and Mosley (2005) found that 97% of the people in the United States live within 75 miles of a clinical PET facility.
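The attenuation simplification mentioned above (and explored in Problem 54) is easy to demonstrate: for a decay at depth L1 along a chord of total length L through tissue with uniform attenuation coefficient μ, the probability that both back-to-back photons escape is e^(−μL1)·e^(−μ(L−L1)) = e^(−μL), independent of where along the line the decay occurred. A toy illustration (the uniform μ and the 20 cm chord are my simplifying assumptions; ~0.096 cm⁻¹ is roughly the attenuation coefficient of water at 511 keV):

```python
import math

def coincidence_attenuation(mu, L, emission_point):
    """Probability that both back-to-back photons escape, for a decay at
    emission_point (cm, measured from one end) along a chord of length L
    through tissue with uniform attenuation coefficient mu (1/cm)."""
    L1 = emission_point
    L2 = L - emission_point
    return math.exp(-mu * L1) * math.exp(-mu * L2)

# The product depends only on the total path length L, not on the
# position of the decay along the line
mu, L = 0.096, 20.0
values = [coincidence_attenuation(mu, L, x) for x in (1.0, 5.0, 10.0, 19.0)]
print(values)  # all equal to exp(-mu * L)
```

Because the correction factor is the same for every emission point on a given line of response, a single transmission measurement along that line suffices, which is the simplification PET enjoys over single-photon imaging.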
Another famous prediction of Dirac’s was magnetic monopoles. Russ and I only mention monopoles in passing in Section 8.8.1: “Since there are no known magnetic charges (monopoles), we must consider the effect of magnetic fields on current loops or magnetic dipoles.” Dirac predicted that magnetic monopoles could exist. Farmelo tells the story.
In Cambridge, during the spring of 1931, Dirac happened upon a rich new seam of ideas that would crystallize into one of his most famous contributions to science… As usual, Dirac appears to have said nothing of this to anyone, even to his close friends. In the early months of 1931, a quiet time for his fellow theoreticians, he was working on the most promising new theory he had conceived for years. The theory broke new ground in magnetism. For centuries, it had been a commonplace of science that magnetic poles come only in pairs, labeled north and south: if one pole is spotted, then the opposite one will be close by. Dirac had found that quantum theory is compatible with the existence of single magnetic poles. During a talk at the Kapitza Club, he dubbed them magnons, but the name never caught on in this context; the particles became known as magnetic monopoles.
Physicists have searched for magnetic monopoles, and once they even thought they found one. In 1982, physicist Blas Cabrera observed a signal consistent with the experimental signature of a monopole (Physical Review Letters, Volume 48, Pages 1378–1381), but it now appears to have been an artifact, as the result has never been reproduced. I have my own remote (indeed, very remote) connection with this experiment (and thus to Dirac). Cabrera’s PhD advisor, William Fairbank, was John Wikswo’s PhD advisor, and Wikswo was in turn my PhD advisor. Thus, academically speaking, I am one of Cabrera’s scientific nephews.

Dirac was known for saying little and behaving rather oddly (the title of the book is, after all, “The Strangest Man”), and Farmelo suggests a possible reason: Dirac may have been autistic.
[Dirac] always attributed his extreme taciturnity and stunted emotions to his father’s disciplinarian regime; but there is another, quite different explanation, namely that he was autistic. Two of Dirac’s younger colleagues confided in me that they had concluded this, each of them making their disclosure in sotto voce, as if they were imparting a shameful secret. Both refused to be quoted… There is not nearly enough detail in her [Dirac’s mother’s] comments or in reports of Dirac’s behaviour in school to justify a diagnosis that he was then autistic. His behaviour as an adult, however, had all the characteristics that almost every autistic person has to some degree—reticence, passivity, aloofness, literal-mindedness, rigid patterns of activity, physical ineptitude, self-centredness and, above all, a narrow range of interests and a marked inability to empathise with other human beings.
Whatever the cause of Dirac’s unusual behavior, he was a great physicist. Farmelo sums up Dirac’s enduring legacy at the end of his book.
There is no doubt that Dirac was a great scientist, one of the few who deserves a place just below Einstein in the pantheon of modern physicists. Along with Heisenberg, Jordan, Pauli, Schrödinger and Born, Dirac was one of the group of theoreticians who discovered quantum mechanics. Yet his contribution was special. In his heyday, between 1925 and 1933, he brought a uniquely clear vision to the development of a new branch of science: the book of nature often seemed to be open in front of him.

Friday, March 5, 2010

Magnetic Measurements of Peripheral Nerve Function Using a Neuromagnetic Current Probe

Section 8.9 in the 4th Edition of Intermediate Physics for Medicine and Biology describes how a toroidal probe can be used to measure the magnetic field of a nerve.
If the signal [a weak biomagnetic field] is strong enough, it can be detected with conventional coils and signal-averaging techniques that are described in Chapter 11. Barach et al. (1985) used a small detector through which a single axon was threaded. The detector consisted of a toroidal magnetic core wound with many turns of fine wire (Fig. 8.26). Current passing through the hole in the toroid generated a magnetic field that was concentrated in the ferromagnetic material of the toroid. When the field changed, a measurable voltage was induced in the surrounding coil.
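A back-of-the-envelope estimate shows why many turns and a high-permeability core are needed. The current I threading the toroid produces a field B ≈ μ₀μᵣI/(2πr) in the core, and an N-turn winding of cross-sectional area A develops an EMF N·A·dB/dt. A rough sketch (the turn count, permeability, core dimensions, and rise time below are assumed, order-of-magnitude values, not the actual probe specifications):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T·m/A

def toroid_emf(dI_dt, N=100, mu_r=1000.0, r=1e-3, area=1e-6):
    """EMF (volts) induced in an N-turn winding on a ferrite toroid of
    relative permeability mu_r, mean radius r (m), and cross-sectional
    area (m^2), when the current threading it changes at dI_dt (A/s)."""
    dB_dt = MU0 * mu_r * dI_dt / (2 * math.pi * r)
    return N * area * dB_dt

# A ~1 microamp action current rising in ~0.1 ms
emf = toroid_emf(dI_dt=1e-6 / 1e-4)
print(f"{emf*1e9:.1f} nV")  # → 200.0 nV
```

A signal of a few hundred nanovolts is measurable but tiny, which is why the probe needed a dedicated low-noise amplifier, and why every one of those painstakingly wound turns mattered.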
My friend Ranjith Wijesinghe, of the Department of Physics at Ball State University, recently published the definitive review of this research in the journal Experimental Biology and Medicine, titled “Magnetic Measurements of Peripheral Nerve Function Using a Neuromagnetic Current Probe” (Volume 235, Pages 159–169).
The progress made during the last three decades in mathematical modeling and technology development for the recording of magnetic fields associated with cellular current flow in biological tissues has provided a means of examining action currents more accurately than that of using traditional electrical recordings. It is well known to the biomedical research community that the room-temperature miniature toroidal pickup coil called the neuromagnetic current probe can be employed to measure biologically generated magnetic fields in nerve and muscle fibers. In contrast to the magnetic resonance imaging technique, which relies on the interaction between an externally applied magnetic field and the magnetic properties of individual atomic nuclei, this device, along with its room-temperature, low-noise amplifier, can detect currents in the nano-Ampere range. The recorded magnetic signals using neuromagnetic current probes are relatively insensitive to muscle movement since these probes are not directly connected to the tissue, and distortions of the recorded data due to changes in the electrochemical interface between the probes and the tissue are minimal. Contrary to the methods used in electric recordings, these probes can be employed to measure action currents of tissues while they are lying in their own natural settings or in saline baths, thereby reducing the risk associated with elevating and drying the tissue in the air during experiments. This review primarily describes the investigations performed on peripheral nerves using the neuromagnetic current probe. Since there are relatively few publications on these topics, a comprehensive review of the field is given. First, magnetic field measurements of isolated nerve axons and muscle fibers are described. One of the important applications of the neuromagnetic current probe to the intraoperative assessment of damaged and reconstructed nerve bundles is summarized. 
The magnetic signals of crushed nerve axons and the determination of the conduction velocity distribution of nerve bundles are also reviewed. Finally, the capabilities and limitations of the probe and the magnetic recordings are discussed.
Ranjith and I were both involved in this research when we were graduate students working in John Wikswo’s laboratory at Vanderbilt University. I remember the painstaking process of making those toroids; just winding the wire onto the ferrite core was a challenge. Wikswo built this marvelous contraption that held the core at one spot under a dissection microscope but at the same time allowed the core to be rotated around multiple axes (he’s very good at that sort of thing). When Russ Hobbie and I wrote about “many turns of fine wire” we were not exaggerating. The insulated copper wire was often as thin as 40-gauge (80 microns diameter), which is only slightly thicker than a human hair. With wire that thin, the slightest kink causes a break. We sometimes wound up to 100 turns on one toroid. It was best to wind the toroid when no one else was around (I preferred early morning): if someone startled you when you were winding, the result was usually a broken wire, requiring you to start over. We potted the toroid and its winding in epoxy, which itself was a job requiring several steps. We machined a mold out of Teflon, carefully positioned the toroid in the mold, soldered the ends of those tiny wires to a coaxial cable, and then injected epoxy into the mold under vacuum to avoid bubbles. If all went as planned, you ended up with a “neuromagnetic current probe” to use in your experiments. Often, all didn’t go as planned.

Ranjith’s review describes the work of several of my colleagues from graduate school days. Frans Gielen (who now works at the Medtronic Bakken Research Centre in Maastricht, the Netherlands) was a postdoc who used toroids to record action currents in skeletal muscle. Ranjith studied compound action currents in the frog sciatic nerve for his PhD dissertation. My research was mostly on crayfish axons. Jan van Egeraat was the first to measure action currents in a single muscle fiber, did experiments on squid giant axons, and studied how the magnetic signal changed near a region of the nerve that was damaged. Jan obtained his PhD from Vanderbilt a few years after I did, and then tragically died of cancer just as his career was taking off. I recall that when my wife Shirley and I moved from Tennessee to Maryland to start my job at the National Institutes of Health, Jan and his wife drove the rented truck with all our belongings while we drove our car with our 1-month old baby. They got a free trip to Washington DC out of the deal, and we got a free truck driver. John Barach was a Vanderbilt professor who originally studied plasma physics, but changed to biological physics later in his career when collaborating with Wikswo. I always have admired Barach’s ability to describe complex phenomena in a very physical and intuitive way (see, for instance, Problem 13 in Chapter 8 of our textbook). Of course, we were all led by Wikswo, whose energy and drive are legendary, and whose grant writing skills kept us all in business. For his work developing the Neuromagnetic Current Probe, Wikswo earned an IR-100 Award in 1984, presented by R&D Magazine to recognize the 100 most technologically significant new products and processes of the year.

Friday, February 26, 2010

All the News That’s Fit to Print

Newspaper articles may not provide the most authoritative information about science and medicine, but they are probably the primary source of news about medical physics for the layman. Today, I will discuss some recent articles from one of the leading newspapers in the United States: the venerable New York Times.


Last week Russ Hobbie sent me a copy of an article in the February 16 issue of the NYT, titled “New Source of an Isotope in Medicine is Found.” It describes the continuing shortage of technetium-99m, a topic I have discussed before in this blog.
Just as the worldwide shortage of a radioactive isotope used in millions of medical procedures is about to get worse, officials say a new source for the substance has emerged: a nuclear reactor in Poland.

The isotope, technetium 99, is used to measure blood flow in the heart and to help diagnose bone and breast cancers. Almost two-thirds of the world’s supply comes from two reactors; one, in Ontario, has been shut for repairs for nine months and is not expected to reopen before April, and the other, in the Netherlands, will close for six months starting Friday.

Radiologists say that as a result of the shortage, their treatment of some patients has had to revert to inferior materials and techniques they stopped using 20 years ago.

But on Wednesday, Covidien, a company in St. Louis that purifies the material created in the reactor and packages it in a form usable by radiologists, will announce that it has signed a contract with the operators of the Maria reactor, near Warsaw, one of the world’s most powerful research reactors.
I doubt that relying on a Polish reactor is a satisfactory long-term solution to our 99mTc shortage, but it may provide crucial help with the short term crisis. A more promising permanent solution is described in a January 26 article on medicalphysicsweb.
GE Hitachi Nuclear Energy (GEH) announced today it has been selected by the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to help develop a U.S. supply of a radioisotope used in more than 20 million diagnostic medical procedures in the United States each year.
More information can be found at the American Association of Physicists in Medicine website. Let’s hope that this new initiative will prove successful.


The second topic I want to discuss today was called to my attention by my former student Phil Prior (PhD in Biomedical Sciences: Medical Physics, Oakland University, 2008). On January 26, the NYT published Walt Bogdanich’s article “As Technology Surges, Radiation Safeguards Lag.”
In New Jersey, 36 cancer patients at a veterans hospital in East Orange were overradiated—and 20 more received substandard treatment—by a medical team that lacked experience in using a machine that generated high-powered beams of radiation… In Louisiana, Landreaux A. Donaldson received 38 straight overdoses of radiation, each nearly twice the prescribed amount, while undergoing treatment for prostate cancer… In Texas, George Garst now wears two external bags—one for urine and one for fecal matter—because of severe radiation injuries he suffered after a medical physicist who said he was overworked failed to detect a mistake.

These mistakes and the failure of hospitals to quickly identify them offer a rare look into the vulnerability of patient safeguards at a time when increasingly complex, computer-controlled devices are fundamentally changing medical radiation, delivering higher doses in less time with greater precision than ever before.

Serious radiation injuries are still infrequent, and the new equipment is undeniably successful in diagnosing and fighting disease. But the technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.
A related article by the same author, “Radiation Offers New Cures, and Ways to Do Harm,” was also published in the Gray Lady a few days earlier. These articles discuss recent medical mistakes in which patients have received much more radiation than originally intended. While somewhat sensational, the articles reinforce the importance of quality control in medical physics.

The NYT articles triggered a response from the American Association of Physicists in Medicine on January 28.
The American Association of Physicists in Medicine (AAPM) has issued a statement today in the wake of several recent articles in the New York Times yesterday and earlier in the week that discuss a number of rare but tragic events in the last decade involving people undergoing radiation therapy.

While it does not specifically comment on the details of these events, the statement acknowledges their gravity. It reads in part: “The AAPM and its members deeply regret that these events have occurred, and we continue to work hard to reduce the likelihood of similar events in the future.” The full statement appears here.

Today's statement also seeks to reassure the public on the safety of radiation therapy, which is safely and effectively used to treat hundreds of thousands of people with cancer and other diseases every year in the United States. Medical physicists in hospitals and clinics across the United States are board-certified professionals who play a key role in assuring quality during these treatments because they are directly responsible for overseeing the complex technical equipment used.

Taken together, the articles I’ve discussed today highlight some of the challenges that face the field of medical physics. For those who want additional background about the underlying physics and its applications to medicine, I recommend—you guessed it—the 4th edition of Intermediate Physics for Medicine and Biology.

Friday, February 19, 2010

The Electron Microscope

Intermediate Physics for Medicine and Biology does not discuss one of the most important instruments in modern biology: the electron microscope. If I were to add a very brief introduction to the electron microscope, I would put it right after Sec. 14.1, The Nature of Light: Waves Versus Photons. It would look something like this.
14.1 ½ De Broglie Wavelength and the Electron Microscope

Like light, matter can have both wave and particle properties. The French physicist Louis de Broglie proposed a quantum mechanical relationship between a particle’s momentum p and wavelength λ

λ = h/p     (14.6 ½)

[Eisberg and Resnick (1985)]. For example, a 100 eV electron has a speed of 5.9 × 106 m s−1 (about 2% the speed of light), a momentum of 5.4 × 10−24 kg m s−1, and a wavelength of 0.12 nm.
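As a quick check of these numbers, here is a short Python sketch (my own, using rounded values of the physical constants) that reproduces the speed, momentum, and wavelength quoted above from the nonrelativistic relations E = p²/2m and λ = h/p.

```python
import math

# Physical constants (SI units, rounded)
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron rest mass, kg
eV = 1.602e-19     # joules per electron volt

def de_broglie(E_eV):
    """Nonrelativistic de Broglie wavelength (m) of an electron
    with kinetic energy E_eV, from E = p^2/(2m) and lambda = h/p."""
    p = math.sqrt(2 * m_e * E_eV * eV)   # momentum, kg m/s
    return h / p

E = 100.0                       # eV, as in the text
p = math.sqrt(2 * m_e * E * eV)
v = p / m_e                     # ~2% of c, so nonrelativistic is fine
lam = de_broglie(E)
print(f"speed      = {v:.2e} m/s")      # ~5.9e6 m/s
print(f"momentum   = {p:.2e} kg m/s")   # ~5.4e-24 kg m/s
print(f"wavelength = {lam * 1e9:.2f} nm")  # ~0.12 nm
```

(At the 100 keV energies typical of electron microscopes the calculation must be done relativistically, but the wavelength remains far smaller than an atom.)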

The electron microscope takes advantage of the short wavelength of electrons to produce exquisite pictures of very small objects. Diffraction limits the spatial resolution of an image to about a wavelength. For a visible light microscope, this resolution is on the order of 500 nm (Table 14.2). For the electron microscope, however, the wavelength of the electron sets the diffraction limit. A typical electron energy used for imaging is about 100 keV, implying a wavelength much smaller than an atom (in practice, aberrations in the magnetic lenses usually restrict the resolution to about 1 nm). Table 1.2 shows that viruses appear as blurry smears in a light microscope, but can be resolved with considerable detail in an electron microscope. In 1986, Ernst Ruska shared the Nobel Prize in Physics “for his fundamental work in electron optics, and for the design of the first electron microscope.”

Electron microscopes come in two types. In a transmission electron microscope (TEM), electrons pass through a thin sample. In a scanning electron microscope (SEM), a fine beam of electrons is raster scanned across the sample and secondary electrons emitted by the surface are imaged. In both cases, the image is formed in vacuum and the electron beam is focused using a magnetic lens.
To learn more, you can watch a YouTube video about the electron microscope. Nice collections of electron microscope images can be found at http://www.denniskunkel.com, http://www5.pbrc.hawaii.edu/microangela and http://www.mos.org/sln/SEM.

Structure and function of the electron microscope. 

Friday, February 12, 2010

Biomagnetism and Medicalphysicsweb

Medicalphysicsweb is an excellent website for news and articles related to medical physics. Several articles that have appeared recently are related to the field of biomagnetism, a topic Russ Hobbie and I cover in Chapter 8 of the 4th edition of Intermediate Physics for Medicine and Biology. I have followed the biomagnetism field ever since graduate school, when I made some of the first measurements of the magnetic field of an isolated nerve axon. Below I describe four recent articles from medicalphysicsweb.

A February 2 article titled “Magnetometer Eases Cardiac Diagnostics” discusses a novel type of magnetometer for measuring the magnetic field of the heart. In Section 8.9 of our book, Russ and I discuss Superconducting Quantum Interference Device (SQUID) magnetometers, which have long been used to measure the small (100 pT) magnetic fields produced by cardiac activity. Another way to measure weak magnetic fields is to determine the Zeeman splitting of energy levels in a rubidium gas. The energy difference between levels depends on the external magnetic field, and is measured by detecting the frequency of optical light that is in resonance with this energy difference. Ben Varcoe, of the University of Leeds, has applied this technology to the heart by separating the magnetometer from the pickup coil:
The magnetic field detector—a rubidium vapour gas cell—is housed within several layers of magnetic shielding that reduce the Earth’s field about a billion-fold. The sensor head, meanwhile, is external to this shielding and contained within a handheld probe.
I haven’t been able to find many details about this device (such as whether the pickup coil is superconducting, or how it avoids coupling noise from the unshielded measurement area into the shielded detector), but Varcoe believes the device is a breakthrough in the way researchers can measure biomagnetic fields.
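To get a feel for why that billion-fold shielding matters, here is a back-of-the-envelope estimate (my own, not from the article): the Zeeman resonance frequency of the rubidium ground state is f = g_F μ_B B/h, with g_F about 1/2 for ⁸⁷Rb, or roughly 7 Hz per nanotesla.

```python
# Order-of-magnitude estimate of the Zeeman resonance frequency
# of the 87Rb ground state: f = g_F * mu_B * B / h.
mu_B = 9.274e-24   # Bohr magneton, J/T
h = 6.626e-34      # Planck's constant, J s
g_F = 0.5          # Lande g-factor for the 87Rb F=2 ground state

def zeeman_freq(B):
    """Zeeman (Larmor) frequency in Hz for magnetic field B in tesla."""
    return g_F * mu_B * B / h

print(f"{zeeman_freq(50e-6):.3g} Hz")    # Earth's field (~50 uT): ~350 kHz
print(f"{zeeman_freq(100e-12):.3g} Hz")  # cardiac field (~100 pT): ~0.7 Hz
```

The cardiac signal shifts the resonance by less than a hertz, while the unshielded Earth’s field would shift it by hundreds of kilohertz, which is why the vapor cell sits inside heavy magnetic shielding.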

Another recent (February 10) article on medicalphysicsweb is about the effect of magnetic resonance imaging scans on pregnant women. As described in Chapter 18 of Intermediate Physics for Medicine and Biology, MRI uses a radio-frequency magnetic field to flip the proton spins so that their relaxation can be measured, producing the magnetic resonance signal. This radio-frequency field induces eddy currents in the body that heat the tissue. Heating is a particular concern if the MRI is performed on a pregnant woman, as it could affect fetal development.
Medical physicists at Hammersmith Hospital, Imperial College London, UK, have now developed a more sophisticated model of thermal transport between mother and baby to assess how MRI can affect the foetal temperature (Phys. Med. Biol. 55 913)… This latest analysis takes account of heat transport through the blood vessels in the umbilical cord, an important mechanism that was ignored in previous models. It also includes the fact that the foetus is typically half a degree warmer than the mother – another key piece of information overlooked in earlier work.
Russ and I discuss these issues in Sec. 14.10: Heating Tissue with Light, where we derive the bioheat equation. The authors of the study, Jeff Hand and his colleagues, found that under normal conditions, fetal heating wasn’t a concern, but if exposed to 7.5 minutes of continuous RF field (unlikely during MRI) the temperature increase could be significant.
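A crude zero-dimensional estimate (my own back-of-the-envelope, not the Hand et al. model) shows why 7.5 minutes of continuous RF is the interesting regime: if we neglect conduction and perfusion entirely, the bioheat equation reduces to dT/dt = SAR/c.

```python
# Worst-case heating estimate: ignore heat conduction and blood perfusion,
# so all absorbed RF power goes into raising the tissue temperature,
# dT/dt = SAR / c.
SAR = 2.0        # W/kg, the normal-mode whole-body SAR limit for MRI
c = 3500.0       # J/(kg K), typical specific heat of soft tissue
t = 7.5 * 60.0   # s, the continuous exposure considered in the study

dT = SAR / c * t
print(f"temperature rise ~ {dT:.2f} K")  # ~0.26 K
```

A quarter of a degree is comparable to the half-degree mother–fetus temperature difference mentioned above, so the perfusion and umbilical-cord transport terms that the Hammersmith model adds really do matter.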

In a January 27 article, researchers from the University of Minnesota (Russ’s institution) used magnetoencephalography to diagnose post-traumatic stress disorder.
Post-traumatic stress disorder (PTSD) is a difficult condition to diagnose definitively from clinical evidence alone. In the absence of a reliable biomarker, patients’ descriptions of flashbacks, worry and emotional numbness are all doctors have to work with. Researchers from the University of Minnesota Medical School (Minneapolis, MN) have now shown how magnetoencephalography (MEG) could identify genuine PTSD sufferers with high confidence and without the need for patients to relive painful past memories (J. Neural Eng. 7 016011).
The magnetoencephalogram is discussed in Sec. 8.5 of Intermediate Physics for Medicine and Biology. The data for the Minnesota study were obtained using a 248-channel SQUID magnetometer. The researchers analyzed data from 74 patients with post-traumatic stress disorder known to the VA hospital in Minneapolis, and 250 healthy controls. The authors claim that the accuracy of the test is over 90%.

Finally, a February 8 article describes a magnetic navigation system installed in Taiwan by the company Stereotaxis.
The Stereotaxis System is designed to enable physicians to complete more complex interventional procedures by providing image guided delivery of catheters and guidewires through the blood vessels and chambers of the heart to treatment sites. This is achieved using computer-controlled, externally applied magnetic fields that govern the motion of the working tip of the catheter or guidewire, resulting in improved navigation, shorter procedure time and reduced x-ray exposure.
The system works by having ferromagnetic material in a catheter tip, and an applied magnetic field that can be adjusted to “steer” the catheter through the blood vessels. We discuss magnetic forces in Sec. 8.1 of Intermediate Physics for Medicine and Biology, and ferromagnetic materials in Sec. 8.8.
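For a sense of scale (my own illustrative numbers, not Stereotaxis specifications), a small permanent-magnet tip in a uniform applied field feels a torque τ = m × B that rotates it into alignment with the field, which is the steering mechanism described above.

```python
# Illustrative estimate of the torque on a magnetic catheter tip.
# These numbers are my own rough choices, not the vendor's specs.
M = 1.0e6    # A/m, magnetization typical of a strong permanent magnet
V = 1.0e-9   # m^3, roughly a 1 mm cube of magnetic material in the tip
B = 0.08     # T, order of magnitude of an externally applied steering field

m = M * V            # magnetic dipole moment, A m^2
tau_max = m * B      # maximum torque, when the tip is perpendicular to B
print(f"dipole moment = {m:.1e} A m^2")
print(f"max torque    = {tau_max:.1e} N m")
```

Rotating the external field direction rotates the equilibrium orientation of the tip, so the catheter can be aimed without any mechanical pull-wires.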

Although I believe medicalphysicsweb is extremely useful for keeping up-to-date on developments in medical physics, I find that their articles often either invoke specialized physics concepts that the layman may not understand or, more often, don’t address the underlying physics at all. Yet one can’t understand modern medicine without mastery of the basic physics concepts. My browsing through medicalphysicsweb convinced me once again of the importance of learning how physics can be applied to medicine and biology. Perhaps I am biased, but I think that studying from the 4th edition of Intermediate Physics for Medicine and Biology is a great way to master these important topics.

Friday, February 5, 2010

Beta Decay and the Neutrino

In Section 17.4 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss beta decay and the neutrino.
The emission of a beta-particle is accompanied by the emission of a neutrino… [which] has no charge and no rest mass… [and] hardly interacts with matter at all… A particle that seemed originally to be an invention to conserve energy and angular momentum now has a strong experimental basis.
Understanding Physics: The Electron, Proton, and Neutron, by Isaac Asimov, superimposed on Intermediate Physics for Medicine and Biology.
Our wording implies there is a story behind this particle “that seemed originally to be an invention to conserve energy.” Indeed, that is the case. I will let Isaac Asimov tell this tale. (Asimov's books, which I read in high school, influenced me to become a scientist.) The excerpt below is from Chapter 14 of his book Understanding Physics: The Electron, Proton, and Neutron.
In Chapter 11, disappearance in mass during the course of nuclear reactions was described as balanced by an appearance of energy in accordance with Einstein’s equation, E = mc². This balance also held in the case of the total annihilation of a particle by its anti-particle, or the production of a particle/anti-particle pair from energy.
Nevertheless, although in almost all such cases the mass-energy equivalence was met exactly, there was one notable exception in connection with radioactive radiations.

Alpha radiation behaves in satisfactory fashion. When a parent nucleus breaks down spontaneously to yield a daughter nucleus and an alpha particle, the sum of the mass of the two products does not quite equal the mass of the original nucleus. The difference appears in the form of energy—specifically, as the kinetic energy of the speeding alpha particle. Since the same particles appear as products at every breakdown of a particular parent nucleus, the mass-difference should always be the same, and the kinetic energy of the alpha particles should also always be the same. In other words, the beam of alpha particles should be monoenergetic. This was, in essence, found to be the case…

It was to be expected that the same considerations would hold for a parent nucleus breaking down to a daughter nucleus and a beta particle. It would seem reasonable to suppose that the beta particles would form a monoenergetic beam too…

Instead, as early as 1900, Becquerel indicated that beta particles emerged with a wide spread of kinetic energies. By 1914, the work of James Chadwick demonstrated the “continuous beta particle spectrum” to be undeniable.

The kinetic energy calculated for a beta particle on the basis of mass loss turned out to be a maximum kinetic energy that very few obtained. (None surpassed it, however; physicists were not faced with the awesome possibility of energy appearing out of nowhere.)

Most beta particles fell short of the expected kinetic energy by almost any amount up to the maximum. Some possessed virtually no kinetic energy at all. All told, a considerable portion of the energy that should have been present, wasn’t present, and through the 1920’s this missing energy could not be detected in any form.

Disappearing energy is as insupportable, really, as appearing energy, and though a number of physicists, including, notably, Niels Bohr, were ready to abandon the law of conservation of energy at the subatomic level, other physicists sought desperately for an alternative.

In 1931, an alternative was suggested by Wolfgang Pauli. He proposed that whenever a beta particle was produced, a second particle was also produced, and that the energy that was lacking in the beta particle was present in the second particle.

The situation demanded certain properties of the hypothetical particle. In the emission of beta particles, electric charge was conserved; that is, the net charge of the particles produced after emission was the same as that of the original particle. Pauli’s postulated particle therefore had to be uncharged. This made additional sense since, had the particle possessed a charge, it would have produced ions as it sped along and would therefore have been detectable in a cloud chamber, for instance. As a matter of fact, it was not detectable.

In addition, the total energy of Pauli’s projected particle was very small—only equal to the missing kinetic energy of the electron. The total energy of the particle had to include its mass, and the possession of so little energy must signify an exceedingly small mass. It quickly became apparent that the new particle had to have a mass of less than 1 percent of the electron and, in all likelihood, was altogether massless.

Enrico Fermi, who interested himself in Pauli’s theory at once, thought of calling the new particle a “neutron,” but Chadwick, at just about that time, discovered the massive, uncharged particle that came to be known by that name. Fermi therefore employed an Italian diminutive suffix and named the projected particle the neutrino (“little neutral one”), and it is by that name that it is known.

Friday, January 29, 2010

William Albert Hugh Rushton

This semester, I am teaching a graduate class at Oakland University on Bioelectric Phenomena (PHY 530). Rather than using a textbook, I require the students to read original papers, thereby providing insights into the history of the subject and many opportunities to learn about the structure and content of original research articles.

We began with a paper by Alan Hodgkin and Bernard Katz (“The Effect of Sodium Ions on the Electrical Activity of the Giant Axon of the Squid,” Journal of Physiology, Volume 108, Pages 37–77, 1949) that tests the hypothesis that the nerve membrane becomes selectively permeable to sodium during an action potential. We then moved on to Alan Hodgkin and Andrew Huxley’s monumental 1952 paper in which they present the Hodgkin-Huxley model of the squid nerve axon (“A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve,” Journal of Physiology, Volume 117, Pages 500–544, 1952). In order to provide a more modern view of the ion channels that underlie Hodgkin and Huxley’s model, we next read an article by Roderick MacKinnon and his group (“The Structure of the Potassium Channel: Molecular Basis of K+ Conduction and Selectivity,” Science, Volume 280, Pages 69–77, 1998). Then we read a paper by Erwin Neher, Bert Sakmann and their colleagues that described patch clamp recordings of single ion channels (“Improved Patch-Clamp Techniques for High-Resolution Current Recordings from Cells and Cell-Free Membrane Patches,” Pflügers Archiv, Volume 391, Pages 85–100, 1981).

This week I wanted to cover one-dimensional cable theory, so I chose one of my favorite papers, by Alan Hodgkin and William Rushton (“The Electrical Constants of a Crustacean Nerve Fibre,” Proceedings of the Royal Society of London, B, Volume 133, Pages 444–479, 1946). I recall reading this lovely article during my first summer as a graduate student at Vanderbilt University (where my daughter Kathy is now attending college). My mentor, John Wikswo, had notebook after notebook full of research papers about nerve electrophysiology, and I set out to read them all. Learning a subject by reading the original literature is an interesting experience. It is less efficient than learning from a textbook, but you pick up many insights that are lost when the research is presented in a condensed form. Hodgkin and Rushton’s paper contains the fascinating quote
Electrical measurements were made by applying rectangular pulses of current and recording the potential response photographically. About fifteen sets of film were obtained in May and June 1939, and a preliminary analysis was started during the following months. The work was then abandoned and the records and notes stored for six years [my italics]. A final analysis was made in 1945 and forms the basis of this paper.
During those six years, the authors were preoccupied with a little issue called World War II.

Sometimes I like to provide my students with biographical information about the authors of these papers, and I had already talked about my hero, the Nobel Prize-winning Alan Hodgkin, earlier in the semester. So, I did some research on Rushton, with whom I was less familiar. It turns out he is known primarily for his work on vision. William Albert Hugh Rushton (1901–1980) has only a short Wikipedia entry, which does not even discuss his work on nerves. (Footnote: Several months ago, after reading—or rather listening to while walking my dog Suki—The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia by Andrew Lih, I became intensely interested in Wikipedia and started updating articles related to my areas of expertise. This obsession lasted for only about a week or two. I rarely make edits anymore, but I may update Rushton’s entry.) Rushton was a professor of physiology at Trinity College, Cambridge. He became a Fellow of the Royal Society in 1948, and received the Royal Medal from that society in 1970.

Horace Barlow wrote an obituary for Rushton in the Biographical Memoirs of Fellows of the Royal Society (Volume 32, Pages 423–459, 1986). It begins
William Rushton first achieved scientific recognition for his work on the excitability of peripheral nerve where he filled the gap in the Cambridge succession between Lord Adrian, whose last paper on peripheral nerve appeared in 1922, and Alan Hodgkin, whose first paper was published in 1937. It was on the strength of this work that he was elected as a fellow of the Royal Society in 1948, but then Rushton started his second scientific career, in vision, and for the next 30 years he was dominant in a field that was advancing exceptionally fast. In whatever he was engaged he cut a striking and influential figure, for he was always interested in a new idea and had the knack of finding the critical argument or experiment to test it. He was argumentative, and often an enormously successful showman, but he also exerted much influence from the style of his private discussions and arguments. He valued the human intellect and its skillful use above everything else, and he successfully transmitted this enthusiasm to a large number of students and disciples.
Another of my favorite papers by Rushton is “A Theory of the Effects of Fibre Size in Medullated Nerve” (Journal of Physiology, Volume 115, Pages 101–122, 1951). Here, he correctly predicts many of the properties of myelinated nerve axons, such as the ratio of the inner and outer diameters of the myelin, from first principles.

Both of the Rushton papers I have cited here are also referenced in the 4th edition of Intermediate Physics for Medicine and Biology. Problem 34 in Chapter 6 is based on the Hodgkin-Rushton paper. It examines their analytical solution to the one-dimensional cable equation, which involves error functions. Was it Hodgkin or Rushton who was responsible for this elegant piece of mathematics gracing the Journal of Physiology? I can’t say for sure, but in Hodgkin’s Nobel Prize autobiography he claims he learned about cable theory from Rushton (who was 13 years older than him).
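For readers who want to see that error-function solution in action, here is a short sketch (my own normalization, following the standard textbook form of the Hodgkin-Rushton result) of the membrane potential along an infinite one-dimensional cable after a step of current is injected at the origin, in dimensionless units X = x/λ and T = t/τ.

```python
from math import erf, erfc, exp, sqrt

def cable_step(X, T):
    """Normalized potential V(X,T)/V(0,inf) of an infinite 1-D cable
    after a current step at X=0, in the error-function form derived
    by Hodgkin and Rushton (1946). X = x/lambda, T = t/tau."""
    if T <= 0:
        return 0.0
    a = X / (2 * sqrt(T))
    return 0.5 * (exp(-X) * erfc(a - sqrt(T))
                  - exp(X) * erfc(a + sqrt(T)))

# At the injection site the response rises as erf(sqrt(T)):
print(cable_step(0.0, 1.0), erf(1.0))    # both ~0.843
# At long times the spatial decay approaches exp(-X):
print(cable_step(1.0, 50.0), exp(-1.0))  # both ~0.368
```

Two familiar limits fall out of this one expression: the charging curve at the injection site goes as erf(√T) rather than the exponential of a space-clamped membrane, and at long times the voltage decays with distance as e^(−X), defining the space constant λ.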

William Rushton provides yet another example of how a scientist with a firm grasp of basic physics can make fundamental contributions to biology.

Friday, January 22, 2010

Summer Internships

Many readers of the 4th edition of Intermediate Physics for Medicine and Biology are undergraduate majors in science or engineering. This time of the year, these students are searching for summer internships. I have a few suggestions.

My first choice is the NIH Summer Internship Program in Biomedical Research. The intramural campus of the National Institutes of Health in Bethesda, Maryland is the best place in the world to do biomedical research. My years working there in the 1990s were wonderful. Apply now. The deadline is March 1.

The National Science Foundation supports Research Experiences for Undergraduates (REU) programs throughout the US. Click here for a list (it is long, but probably incomplete). Often NSF requires schools not just to select from their own undergraduates, but also to open some positions in their REU program to students from throughout the country. You might also try Googling “REU” and see what you come up with. Each program has different deadlines and eligibility requirements. For several years Oakland University, where I work, had an REU program run by the physics department. We have applied for funding again, but have not yet heard whether we were successful. If we were, we will run the program this summer, with a somewhat later deadline than most.

Last year, as part of the federal government’s stimulus package, the National Institutes of Health encouraged researchers supported by NIH grants to apply for a supplement to fund undergraduate students in the summer. Most of these supplements were for two years, and this will be the second summer. Therefore, I expect there will be extra opportunities for undergraduate students to do biomedical research in the coming months. Strike while the iron’s hot! The stimulus program is scheduled to end next year.

Finally, one of the best ways for undergraduate students to find opportunities to do research in the summer is to ask around among your professors. Get a copy of your department’s annual report and find out which professors have funding. Attend department seminars and colloquia to find out who is doing research that interests you. Or just show up at a faculty member’s door and ask (first read what you can about his or her research, and have your resume in hand). If you can manage it financially, consider working without pay for the first summer, just to get your foot in the door.

When I look back on my undergraduate education at the University of Kansas, one of the most valuable experiences was doing research in Professor Wes Unruh’s lab. I learned more from Unruh and his graduate students than I did in my classes. But such opportunities don’t just fall into your lap. You need to look for them. Ask around, knock on some doors, and keep your eyes open. And start now, because many of the formal internship programs have deadlines coming up soon.

If, dear reader, you are fortunate enough to get an internship this summer, but it’s far from home, then don’t forget to pack your copy of Intermediate Physics for Medicine and Biology when you go. After working all day in the lab, you can relax with it in the evening!

Good luck.