Friday, April 9, 2010

Galileo's Daughter

Galileo's Daughter,
by Dava Sobel.
As is my habit, I listen to recorded books when I walk my dog Suki each day. Recently, I listened to the book Galileo’s Daughter, by Dava Sobel. I was surprised how touching I found this story (like Galileo, I have two daughters). It is a biography of Galileo Galilei (1564–1642), the famous Italian scientist, but also tells the parallel story of Sister Maria Celeste (1600–1634), Galileo’s daughter who was a nun at the San Matteo convent near Florence. The book quotes Maria Celeste’s letters to Galileo, which Sobel herself translated from Italian. (Unfortunately, Galileo’s replies are lost.) Maria Celeste comes across as a loving, intelligent and extremely loyal daughter who played a central role in Galileo’s life. “She alone of Galileo’s three children mirrored his own brilliance, industry, and sensibility, and by virtue of these qualities became his confidante.”

I tend to see biological physics everywhere, and I found some in this story. Late in his life, Galileo published his final book, Two New Sciences. One of these sciences was the motion of projectiles, and the other was what we would now call the strength of materials. In the part about materials, Galileo addressed the issue of scaling in animals. I quote Sobel, who quotes Galileo:
I have sketched a bone whose natural length has been increased three times and whose thickness has been multiplied until, for a correspondingly large animal, it would perform the same function which the small bone performs for its small animal. From the figures here shown you can see how out of proportion the enlarged bone appears. Clearly then if one wishes to maintain in a great giant the same proportion of limb as that found in an ordinary man he must either find a harder and stronger material for making the bones, or he must admit a diminution of strength in comparison with men of medium stature.
Scaling: Why is Animal
Size so Important?
by Knut Schmidt-Nielsen.
(You can find the picture of the two bones here.) This example of how the strength of bones must scale with animal size did not make it into the 4th edition of Intermediate Physics for Medicine and Biology, although I sometimes discuss it when I teach PHY 325 (Biological Physics) at Oakland University. It serves as an excellent example of how physics can constrain the structure of animals. I won’t hold it against Galileo that he didn’t get his drawing of the bones quite right; it was the 17th century after all. According to Knut Schmidt-Nielsen (Scaling: Why is Animal Size so Important?):
The need for a disproportionate increase in the size of supporting bones with increasing body size was understood by Galileo Galilei (1637), who probably was the first scientist to publish a discussion of the effects of body size on the size of the skeleton. In his Dialogues [Two New Sciences was written in the form of a dialogue] he mentioned that the skeleton of a large animal must be strong enough to support the weight of the animal as it increases with the third power of the linear dimensions. Galileo used a drawing to show how a large bone is disproportionately thicker than a small bone. (Incidentally, judging from the drawing, Galileo made an arithmetical mistake. The larger bone, which is three times as long as the shorter, shows a 9-fold increase in diameter, which is a greater distortion than required. A three-fold increase in linear dimensions should give a 27-fold increase in mass, and the cross-sectional area of the bone should be increased 27-fold, and its diameter therefore by the square root of 27 (i.e., 5.2 instead of 9)).
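Schmidt-Nielsen’s arithmetic is easy to check with a few lines of Python. This is just a sketch of his argument for Galileo’s factor of three, not anything from the book:

```python
def scaled_bone_diameter(diameter, length_factor):
    """Diameter needed so a geometrically longer bone still carries the
    animal's weight. Mass (and so the load) grows as length_factor**3,
    so the bone's cross-sectional area must grow by the same factor,
    and its diameter by the square root: d' = d * length_factor**1.5."""
    return diameter * length_factor ** 1.5

# Galileo's example: a bone three times as long.
factor = scaled_bone_diameter(1.0, 3.0)
print(factor)  # sqrt(27) ≈ 5.2, not the roughly 9-fold increase Galileo drew
```

The square root of 27 is about 5.2, confirming Schmidt-Nielsen’s point that Galileo’s drawing overdid the distortion.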
Russ Hobbie and I discuss the issue of scaling in Chapter 2 of Intermediate Physics for Medicine and Biology. In Problem 28 of Chapter 2, we ask the reader to calculate the falling speed of animals of different sizes, taking into account air friction. The solution to the problem indicates that large animals, with their smaller surface-to-volume ratio, have a larger terminal speed (the speed of descent in steady state, once the acceleration drops to zero) than smaller animals. We end the problem with one of my favorite quotes, by J. B. S. Haldane:
You can drop a mouse down a thousand-yard mine shaft; and arriving at the bottom, it gets a slight shock and walks away. A rat is killed, a man is broken, a horse splashes.
When listening to Galileo’s Daughter, I was surprised to hear Galileo’s own words on this same subject, which are similar and written centuries earlier.
Who does not know that a horse falling from a height of three or four braccia will break his bones, while a dog falling from the same height or a cat from eight or ten, or even more, will suffer no injury? Equally harmless would be the fall of a grasshopper from a tower or the fall of an ant from the distance of the Moon.
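The physics behind Haldane’s quip and Galileo’s observation is the terminal-speed calculation of Problem 28. Here is a rough numerical sketch; the masses and frontal areas below are my own illustrative guesses, not values from the book:

```python
import math

def terminal_speed(mass_kg, area_m2, rho_air=1.2, c_d=1.0, g=9.8):
    """Steady-state falling speed, from balancing weight against drag:
    m g = (1/2) rho c_d A v^2  ->  v = sqrt(2 m g / (rho c_d A))."""
    return math.sqrt(2 * mass_kg * g / (rho_air * c_d * area_m2))

# Rough, illustrative numbers:
v_mouse = terminal_speed(mass_kg=0.03, area_m2=0.003)   # ~30 g mouse
v_horse = terminal_speed(mass_kg=500.0, area_m2=1.0)    # ~500 kg horse
print(round(v_mouse), round(v_horse))  # ≈ 13 m/s versus ≈ 90 m/s
```

Because mass scales as length cubed while area scales only as length squared, terminal speed grows roughly as the square root of body length: the mouse walks away, the horse splashes.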
Of course, the climax of Galileo’s Daughter is the great scientist’s trial by the Catholic Church for publishing a book supporting the Copernican view that the earth travels around the sun. Although I was familiar with this trial, I had never read the transcript, which Sobel quotes extensively. Listening to the elderly Galileo being forced into a humiliating recantation of his scientific views almost made me nauseous.

Sobel is a fine writer. Years ago I read her most famous book, Longitude, about finding a method to measure longitude at sea. Galileo himself contributed to the solution of this problem by introducing a method based on the orbits of the moons of Jupiter, which he of course discovered. However, the longitude problem was not definitively solved until clocks that could keep time on a rolling ship were invented by John Harrison. I have also listened to Sobel’s book The Planets, which I enjoyed but, in my opinion, isn’t as good as Longitude and Galileo’s Daughter. I hope Sobel continues writing books. As soon as a new one comes out (and arrives at the Rochester Hills Public Library, because I’m too cheap to buy these audio books), Suki and I plan on taking some long walks. I can’t wait.

Friday, April 2, 2010

Kids: Don’t Try This At Home

Russ Hobbie and I included a new chapter on sound and ultrasound in the 4th edition of Intermediate Physics for Medicine and Biology. In that chapter, we discuss how to calculate the speed of sound from the compressibility and the density of the tissue (Eq. 13.11). We then go on to describe, among other things, hearing, ultrasonic imaging, and the Doppler effect. One topic we do not mention is the behavior of objects moving faster than the speed of sound. Such a discussion, often found in physics and engineering books, usually is based on the Mach number, defined as the speed of an object divided by the speed of sound. If the Mach number is greater than one, the speed is supersonic and a shock wave develops. When an airplane travels faster than the speed of sound, people on the ground can hear the shock wave as a “sonic boom.” This is all very interesting, but it has nothing to do with biology and medicine, right?
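As an aside, both calculations mentioned above fit in a few lines of Python. The compressibility and density below are textbook values for water (a fair stand-in for soft tissue), used here only as an illustration:

```python
import math

def speed_of_sound(compressibility, density):
    """Speed of sound from compressibility and density,
    c = 1 / sqrt(kappa * rho), the form of Eq. 13.11."""
    return 1.0 / math.sqrt(compressibility * density)

def mach_number(speed, sound_speed):
    """Mach number: object speed divided by the local speed of sound."""
    return speed / sound_speed

# Water-like tissue: kappa ≈ 4.6e-10 Pa^-1, rho ≈ 1000 kg/m^3
c = speed_of_sound(4.6e-10, 1000.0)
print(round(c))               # ≈ 1470 m/s, the familiar value for water
print(mach_number(2 * c, c))  # Mach 2: supersonic, so a shock wave forms
```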

Guess again. A recent article in the New York Times describes the plans of Felix Baumgartner, who intends to be the first human to break the sound barrier. I know, dear readers, that some of you are now saying “No, Chuck Yeager was the first to break the sound barrier, and that happened over 60 years ago.” Well, Yeager broke the sound barrier when flying in a plane, whereas Baumgartner plans to break the sound barrier while in free fall! The Times article states
But now Fearless Felix, as his fans call him, has something more difficult on the agenda: jumping from a helium balloon in the stratosphere at least 120,000 feet above Earth. Within about half a minute, he figures, he would be going 690 miles per hour and become the first skydiver to break the speed of sound. After a free fall lasting five and a half minutes, his parachute would open and land him about 23 miles below the balloon.
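The Times’s numbers are easy to sanity-check. Ignoring air drag (a fair first approximation in the thin stratosphere) and assuming a stratospheric temperature of about 220 K, a back-of-the-envelope check looks like this:

```python
import math

g = 9.8             # m/s^2
v = 690 * 0.44704   # 690 mph in m/s, about 308 m/s

# Drag-free time to reach v from rest:
t = v / g
print(round(t))     # ≈ 31 s: the Times's "about half a minute"

# Speed of sound in an ideal gas: c = sqrt(gamma R T / M_molar),
# with gamma = 1.4 and M = 0.029 kg/mol for air, T ≈ 220 K (assumed):
c = math.sqrt(1.4 * 8.314 * 220 / 0.029)
print(round(c))     # ≈ 297 m/s, so 308 m/s is indeed just past Mach 1
```

In the cold, thin stratosphere the speed of sound is lower than at sea level, which is partly why 690 mph suffices to go supersonic.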
No one is certain what will happen to a human near the sound barrier. The NYT article says that turbulence may set in, causing havoc. Turbulence is a subject Russ and I discuss briefly in Chapter 1 of Intermediate Physics for Medicine and Biology, when introducing the Reynolds number. Most fluid in the body flows at low Reynolds number, with no danger of turbulence, although blood flow in the heart and aorta can get so fast that some turbulence may develop. Of course, any animal that flies in air will experience turbulence, which includes birds, bats, pterodactyls, and, in Baumgartner’s case, humans.
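The Reynolds-number distinction is quantitative. Here is a small sketch with illustrative values for blood (my numbers, not the book’s):

```python
def reynolds(density, speed, length, viscosity):
    """Reynolds number Re = rho * v * L / mu. In pipe flow, turbulence
    typically sets in above Re of roughly 2000-3000."""
    return density * speed * length / viscosity

# Blood in the aorta: rho ≈ 1060 kg/m^3, v ≈ 0.5 m/s,
# diameter ≈ 0.02 m, viscosity ≈ 3.5e-3 Pa·s (illustrative values)
print(round(reynolds(1060, 0.5, 0.02, 3.5e-3)))  # ≈ 3000: turbulence possible

# A capillary: v ≈ 1e-3 m/s, diameter ≈ 8e-6 m
print(reynolds(1060, 1e-3, 8e-6, 3.5e-3))        # far below 1: laminar flow
```

The aorta sits right near the turbulent threshold, while flow in a capillary is deep in the low-Reynolds-number regime.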

Physics With Illustrative Examples
From Medicine and Biology, Volume 1,
by Benedek and Villars.
These high altitude exploits remind me of a delightful section in the textbook Physics With Illustrative Examples from Medicine and Biology, by George Benedek and Felix Villars. In their Volume 1 on Mechanics, they describe balloon ascensions and the physiological effects of air pressure. After reviewing the medical implications of a lack of oxygen at high altitudes, they present the fascinating tale of a 19th century balloon ascension.
These symptoms are shown very clearly in the tragic balloon ascent of the “Zenith” carrying the balloon pioneers Tissandier, Sivel, and Crocé-Spinelli on April 15, 1875. During this ascent Sivel and Crocé-Spinelli died. The balloon’s maximum elevation as recorded on their instruments was 8600 m. Though gas bags carrying 70% oxygen were carried by the balloonists, the rapid and insidious effects of hypoxia reduced their judgment and muscular control and prevented their use of the oxygen when it was most needed. Though these balloonists were indeed trying to establish an altitude record, their account shows clearly that their judgment was severely impaired during critical moments during the maximum tolerable altitude.
Then Benedek and Villars present a 3-page extended quote from the account of the surviving member of the trio, Gaston Tissandier. It makes for fascinating reading. However, concerns about oxygen depletion aren’t relevant for Fearless Felix, because he will be wearing a space suit during his jump, with its own oxygen supply.

The New York Times story ends with the following quote. One wonders if we should admire Baumgartner’s pluck, or commit him to an insane asylum.
Private adventurers have more freedom to take their own risks. The Stratos medical director, Dr. Jonathan Clark, who formerly oversaw the health of space shuttle crews at NASA, says that the spirit of this project reminds him of stories from the first days of the space age.

“This is really risky stuff, putting someone up there in that extreme environment and breaking the sound barrier,” Dr. Clark said. “It’s going to be a major technical feat. It’s like early NASA, this heady feeling that we don’t know what we’re up against but we’re going to do everything we can to overcome it.”

Friday, March 26, 2010

Erwin Neher

I subscribe to a monthly magazine, The Scientist, which was founded by Eugene Garfield (who also was a founder of the Science Citation Index). It provides print and online coverage about biomedical research, technology and business. I’m not sure what I did to deserve it, but I get a paper copy delivered to my office for free, and I can tell you for certain that the magazine is worth the price. Seriously, it is a valuable resource, and the articles are general enough that I can follow them without having to consult my physiology and biochemistry textbooks. The online site contains many of the articles for free, and also has career information for young scientists. I recommend it.

The March 2010 issue of The Scientist contains a profile of Nobel Prize winner Erwin Neher, the developer of the patch clamp technique. Russ Hobbie and I discuss patch-clamp recording in Chapter 9 of the 4th edition of Intermediate Physics for Medicine and Biology.
The next big advance was patch-clamp recording [Neher and Sakmann (1976)]. Micropipettes were sealed against a cell membrane that had been cleaned of connective tissue by treatment with enzymes. A very-high-resistance seal resulted [(2–3) × 10⁷ Ω] that allowed one to see the opening and closing of individual channels. For this work Erwin Neher and Bert Sakmann received the Nobel Prize in Physiology or Medicine in 1991. Around 1980, Neher’s group found a way to make even higher-resistance (10¹⁰–10¹¹ Ω) seals that reduced the noise even further and allowed patches of membrane to be torn from the cell while adhering to the pipette [Hamill et al. (1981)]…
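Why does a higher seal resistance reduce the noise? The thermal (Johnson) current noise of a resistor falls as the resistance rises. This sketch compares the old megaohm-range seal with a gigaseal; the 1 kHz recording bandwidth is an assumed value for illustration:

```python
import math

K_B = 1.38e-23  # Boltzmann constant, J/K

def johnson_current_noise(resistance_ohm, bandwidth_hz, temp_k=295.0):
    """RMS thermal (Johnson) current noise of a seal resistance:
    i = sqrt(4 k T B / R). A larger R means less current noise."""
    return math.sqrt(4 * K_B * temp_k * bandwidth_hz / resistance_ohm)

B = 1e3  # assumed 1 kHz recording bandwidth
print(johnson_current_noise(1e7, B))   # ~1e-12 A: swamps a ~pA channel current
print(johnson_current_noise(1e10, B))  # ~4e-14 A: single channels stand out
```

Single-channel currents are on the order of a picoampere, so the gigaseal was what made them cleanly resolvable.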
The profile in The Scientist provides some insight into how this research began.
[Neher’s former postdoc Fred] Sigworth remembers it well. “I came into lab that Monday morning and Erwin said, with a twinkle in his eye, ‘I think I know how you’re going to see sodium channels,’” he says. These channels—essential to neural communication—had proven elusive because they produce such small currents and remain open for such a short time. But thanks to the team’s new “patch-clamp” technique—and in particular, the formation of an incredibly tight seal, or “gigaseal,” between the pipette tip and the cell membrane—“seeing sodium channels suddenly became really easy,” says Sigworth, who, along with Neher, published these observations (and the first description of the tight-seal patch-clamp technique) in Nature in 1980.
I always enjoy reading about the quirks and odd twists of fate that often accompany scientific advance. The profile in The Scientist provides an entertaining anecdote.
You also needed to suck. “You had to apply a little bit of suction in order to pull some membrane into the orifice of the pipette,” says Neher. “If you did it the right way, it worked.” At least for Neher. “There was a weird period where we could no longer get gigaseals,” recalls [Owen] Hamill [a postdoc at the time]. “Then Bert suggested you have to blow before you suck.” Gently blowing a solution through the pipette as it approaches the surface of the cell keeps the tip from picking up debris during the descent. Between the blowing and the sucking, Hamill says, “our efficiency went up to 99 percent.”
I found it interesting that Neher’s undergraduate degree was in physics, and it was only after he arrived at the University of Wisconsin on a Fulbright Scholarship that he began studying biophysics. In his Nobel autobiography he describes his early motivation for studying biological physics.
At the age of 10, I entered the “Maristenkolleg” at Mindelheim [...] the local “Gymnasium” is operated by a catholic congregation, the “Maristenschulbrüder.” The big advantage of this school was that our teachers—both those belonging to the congregation and others—were very dedicated and were open not only to the subject matter but also to personal issues. During my years at the Gymnasium (1954 to 1963) I found out that, next to my interest in living things, I also could immerse myself in technical and analytical problems. In fact, pretty soon, physics and mathematics became my favourite subjects. At the same time, however, new concepts unifying these two areas had seeped into the literature, which was accessible to me. I eagerly read about cybernetics, which was a fashionable word at that time, and studied everything in my reach on the “Hodgkin-Huxley theory” of nerve excitation. By the time of my Abitur—the examination providing access to university—it was clear to me that I should become a “biophysicist.” My plan was to study physics, and later on add biology.
Neher provides a classic example of how a strong background in physics can lead to advances in biology and medicine, a major theme underlying Intermediate Physics for Medicine and Biology.

Friday, March 19, 2010

How Should We Teach Physics to Future Life Scientists and Physicians?

The American Physical Society publishes a monthly newspaper, the APS News, and the back page of each issue contains an editorial that goes under the name—you guessed it—“The Back Page.” Readers of the 4th edition of Intermediate Physics for Medicine and Biology will want to read The Back Page in the March 2010 issue, subtitled “Physics for Future Physicians and Life Scientists: A Moment of Opportunity.” This excellent editorial—written by Catherine Crouch, Robert Hilborn, Suzanne Amador Kane, Timothy McKay, and Mark Reeves—champions many of the ideas that underlie our textbook. The editorial begins
How should we teach physics to future life scientists and physicians? The physics community has an exciting and timely opportunity to reshape introductory physics courses for this audience. A June 2009 report from the American Association of Medical Colleges (AAMC) and the Howard Hughes Medical Institute (HHMI), as well as the National Research Council’s Bio2010 report, clearly acknowledge the critical role physics plays in the contemporary life sciences. They also issue a persuasive call to enhance our courses to serve these students more effectively by demonstrating the foundational role of physics for understanding biological phenomena and by making it an explicit goal to develop in students the sophisticated scientific skills characteristic of our discipline. This call for change provides an opportunity for the physics community to play a major role in educating future physicians and future life science researchers.

A number of physics educators have already reshaped their courses to better address the needs of life science and premedical students, and more are actively doing so. Here we describe what these reports call for, their import for the physics community, and some key features of these reshaped courses. Our commentary is based on the discussions at an October 2009 conference (www.gwu.edu/~ipls), at which physics faculty engaged in teaching introductory physics for the life sciences (IPLS), met with life scientists and representatives of NSF, APS, AAPT, and AAMC, to take stock of these calls for change and possible responses from the physics community. Similar discussion on IPLS also took place at the 2009 APS April Meeting, the 2009 AAPT Summer Meeting, and the February 2010 APS/AAPT Joint Meeting.
One key distinction between our textbook and the work described in The Back Page editorial is that our book is aimed toward an intermediate level, while the IPLS movement is aimed at the introductory level. Like it or not, premedical students have a difficult time fitting additional physics courses into their undergraduate curriculum. I know that here at Oakland University, I’ve been able to entice only a handful of premed students to take my PHY 325 (Biological Physics) and PHY 326 (Medical Physics) classes, despite my best efforts to attract them and despite OU’s large number of students hoping to attend medical school (these classes have our two-semester introductory physics sequence as a prerequisite). So, I think there’s merit in revising the introductory physics class, which premedical students are required to take, if your goal is to influence premedical education. As The Back Page editorial states, “the challenge is to offer courses that cultivate general quantitative and scientific reasoning skills, together with a firm grounding in basic physical principles and the ability to apply those principles to living systems, all without increasing the number of courses needed to prepare for medical school.” The Back Page editorial also cites the “joint AAMC-HHMI committee … report, Scientific Foundations for Future Physicians (SFFP). This report calls for removing specific course requirements for medical school admission and focusing instead on a set of scientific and mathematical ‘competencies.’ Physics plays a significant role…”

How do you fit all the biomedical applications of physics into an already full introductory class? The Back Page editorial gives some suggestions. For instance, “an extended discussion of kinematics and projectile motion could be replaced by more study of fluids and continuum mechanics... [and] topics such as diffusion and open systems could replace the current focus on heat engines and equilibrium thermal situations.” I agree, especially with adding fluid dynamics (Chapter 1 in our book) and diffusion (Chapter 4), which I believe are absolutely essential for understanding biology. I have my own suggestions. Although Newton’s universal law of gravity, Kepler’s laws of planetary motion, and the behavior of orbiting satellites are fascinating and beautiful topics, a premed student may benefit more from the study of osmosis (Chapter 5) and sound (Chapter 13, including ultrasound). Electricity and magnetism remains a cornerstone of introductory physics (usually in a second semester of a two-semester sequence), but the emphasis could be different. For instance, Faraday’s law of induction can be illustrated using magnetic stimulation of the brain, Ampere’s law by the magnetic field around a nerve axon, and the dipole approximation by the electrocardiogram. In a previous post to this blog, I discussed how Intermediate Physics for Medicine and Biology addresses many of these issues. Russ Hobbie will be giving an invited paper about medical physics and premed students at the July 2010 meeting of the American Association of Physics Teachers. When he gives the talk it will be posted on the book website.
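As an example of the kind of biological illustration I have in mind, Ampère’s law gives the magnetic field around a nerve axon in one line. The current and distance below are illustrative round numbers of my own choosing:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T·m/A

def ampere_field(current_a, r_m):
    """Ampere's law for a long straight current: B = mu0 I / (2 pi r)."""
    return MU0 * current_a / (2 * math.pi * r_m)

# A nerve action current of ~1 uA, measured 1 mm away (illustrative):
b_nerve = ampere_field(1e-6, 1e-3)
print(b_nerve)         # 2e-10 T
print(b_nerve / 5e-5)  # a few millionths of the Earth's ~5e-5 T field
```

That the biomagnetic field is so many orders of magnitude below the Earth’s field makes for a memorable classroom discussion of why measuring it is hard.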

One way to shift the focus of an introductory physics class toward the life sciences is to create new homework problems that use elementary physics to illustrate biological applications. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I constructed many interesting homework problems about biomedical topics. While some of these may be too advanced for an introductory class, others may (with some modification) be very useful. Indeed, teaching a traditional introductory physics class but using a well-crafted set of homework problems may go a long way toward achieving the goals set out by The Back Page editorial.

Let me finish this blog entry by quoting the eloquent final paragraph of The Back Page editorial. Notice that the editorial ends with the same central question that began it. It is the question that motivated Russ Hobbie to write the first edition of Intermediate Physics for Medicine and Biology (published in 1978) and it is the key question that Russ and I struggled with when working on the 4th edition.
The physics community faces a challenging opportunity as it addresses the issues surrounding IPLS courses. A sizable community we serve has articulated a clear set of skills and competencies that students should master as a result of their physics education. We have for a number of decades incorporated engineering examples into our physics classes. The SFFP report asks us to respond to another important constituency. Are we ready to develop courses that will teach our students how to apply basic physical principles to the life sciences? The challenges of making significant changes in IPLS courses are daunting if we each individually try to take on the task. But with a community-wide effort, we should be able to meet this challenge. The physics community is already moving to develop and implement changes in IPLS courses, and the motivations for change are strong. The life science and medical school communities stress that a working knowledge of physical principles is essential to success in all areas of life science including the practice of medicine. Thus we see significant teaching and learning opportunities as we work to answer the question that opened our discussion: how should we teach physics to future physicians and life scientists?

Friday, March 12, 2010

The Strangest Man

The Strangest Man:
The Hidden Life of Paul Dirac,
Mystic of the Atom,
by Graham Farmelo.
I recently read The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom, by Graham Farmelo, a fascinating biography of the Nobel Prize winning physicist Paul Adrien Maurice Dirac. One thing I did not find in the book was biological or medical physics. Nevertheless, Russ Hobbie and I mention Dirac in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology, in connection with the Dirac delta function.
The δ function can be thought of as a rectangle of width a and height 1/a in the limit [as a goes to zero]… The δ function is not like the usual function in mathematics because of its infinite discontinuity at the origin. It is one of a class of “generalized functions” whose properties have been rigorously developed by mathematicians since they were first used by the physicist P. A. M. Dirac.
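The rectangle picture of the delta function is easy to play with numerically. This sketch (mine, not the book’s) shows that as the rectangle narrows, its integral against a smooth function approaches the function’s value at the origin:

```python
import math

def rect_delta(x, a):
    """Rectangle of width a and height 1/a centered at the origin;
    as a -> 0 this approaches the Dirac delta function."""
    return 1.0 / a if abs(x) < a / 2 else 0.0

def integral_against(f, a, lo=-1.0, hi=1.0, n=100000):
    """Crude midpoint-rule integral of rect_delta(x, a) * f(x)."""
    dx = (hi - lo) / n
    return sum(rect_delta(lo + (i + 0.5) * dx, a) * f(lo + (i + 0.5) * dx) * dx
               for i in range(n))

# As the rectangle narrows, the integral of delta(x) f(x) approaches f(0):
for a in (0.5, 0.1, 0.01):
    print(a, integral_against(math.cos, a))  # approaches cos(0) = 1
```

The area under the rectangle is always exactly 1 (width a times height 1/a), which is the defining property that survives the limit.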
The Principles of Quantum Mechanics,
by Paul Dirac.
Dirac won his Nobel Prize for contributions to quantum mechanics. I bought a copy of his famous textbook The Principles of Quantum Mechanics when I was an undergraduate at the University of Kansas. Farmelo describes it as “never out of print, it remains the most insightful and stylish introduction to quantum mechanics and is still a powerful source of inspiration for the most able young theoretical physicists. Of all the textbooks they use, none presents the theory with such elegance and with such relentless logic.”

One of Dirac’s greatest contributions was the prediction of positive electrons, or positrons, a type of antimatter. His prediction arose from the relativistic wave equation for the electron, now called the Dirac equation. An interesting feature of the Dirac equation is that it implies negative energy states. The only time these negative states are observable is when an electron is missing from one of the states: a hole. Farmelo writes
The bizarre upshot of the theory is that the entire universe is pervaded by an infinite number of negative-energy electrons – what might be thought of as a “sea.” Dirac argued that this sea has a constant density everywhere, so that experimenters can observe only departures from this perfect uniformity… Only a disturbance in Dirac’s sea—a bursting bubble, for example—would be observable. He envisaged just this when he foresaw that there would be some vacant states in the sea of negative-energy electrons, causing tiny departures from the otherwise perfect uniformity. Dirac called these unoccupied states “holes”... Each hole has positive energy and positive charge—the properties of the proton, the only other subatomic particle known at that time [1929]. So Dirac made the simplest possible assumption by suggesting that a hole is a proton.
We now know that these holes are not protons but are positrons, discovered experimentally in 1932 by Carl Anderson. Positrons are vital for understanding how x-rays interact with matter, as Russ and I describe in Section 15.6 of Intermediate Physics for Medicine and Biology
A photon with energy above 1.02 MeV can produce a particle-antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mc² = 1.02 MeV.

One can show, using E = pc for the photon, that momentum is not conserved by the positron and electron if Eq. 15.23 [conservation of energy] is satisfied. However, pair production always takes place in the Coulomb field of another particle (usually a nucleus) that recoils to conserve momentum.
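The threshold arithmetic is simple enough to express directly. A minimal sketch:

```python
# Pair-production threshold: the photon must supply the rest energy
# of both the electron and the positron it creates.
MC2_ELECTRON = 0.511  # electron rest energy in MeV

def pair_production_possible(photon_energy_mev):
    """Energetically allowed only above 2 m c^2 = 1.022 MeV."""
    return photon_energy_mev > 2 * MC2_ELECTRON

print(2 * MC2_ELECTRON)               # 1.022 MeV threshold
print(pair_production_possible(0.8))  # False: below threshold
print(pair_production_possible(1.5))  # True: pair production allowed
```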
In Sec. 17.14, Russ and I describe the crucial role positrons play in medical imaging.
If a positron emitter is used as the radionuclide, the positron comes to rest and annihilates an electron, emitting two annihilation photons back to back. In positron emission tomography (PET) these are detected in coincidence. This simplifies the attenuation correction, because the total attenuation for both photons is the same for all points of emission along each gamma ray through the body (see Problem 54). Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals. Patterson and Mosley (2005) found that 97% of the people in the United States live within 75 miles of a clinical PET facility.
Another famous prediction of Dirac’s was magnetic monopoles. Russ and I only mention monopoles in passing in Section 8.8.1: “Since there are no known magnetic charges (monopoles), we must consider the effect of magnetic fields on current loops or magnetic dipoles.” Dirac predicted that magnetic monopoles could exist. Farmelo tells the story.
In Cambridge, during the spring of 1931, Dirac happened upon a rich new seam of ideas that would crystallize into one of his most famous contributions to science… As usual, Dirac appears to have said nothing of this to anyone, even to his close friends. In the early months of 1931, a quiet time for his fellow theoreticians, he was working on the most promising new theory he had conceived for years. The theory broke new ground in magnetism. For centuries, it had been a commonplace of science that magnetic poles come only in pairs, labeled north and south: if one pole is spotted, then the opposite one will be close by. Dirac had found that quantum theory is compatible with the existence of single magnetic poles. During a talk at the Kapitza Club, he dubbed them magnons, but the name never caught on in this context; the particles became known as magnetic monopoles.
Physicists have searched for magnetic monopoles, and once they even thought they found one. In 1982, physicist Blas Cabrera observed a signal consistent with the experimental signature of a monopole (Physical Review Letters, Volume 48, Pages 1378–1381), but it now appears to have been an artifact, as the result has never been reproduced. I have my own remote (indeed, very remote) connection with this experiment (and thus to Dirac). Cabrera’s PhD advisor, William Fairbank, was John Wikswo’s PhD advisor, and Wikswo was in turn my PhD advisor. Thus, academically speaking, I am one of Cabrera’s scientific nephews.

Dirac was known for saying little and behaving rather oddly (the title of the book is, after all, “The Strangest Man”), and Farmelo suggests a possible reason: Dirac may have been autistic.
[Dirac] always attributed his extreme taciturnity and stunted emotions to his father’s disciplinarian regime; but there is another, quite different explanation, namely that he was autistic. Two of Dirac’s younger colleagues confided in me that they had concluded this, each of them making their disclosure in sotto voce, as if they were imparting a shameful secret. Both refused to be quoted… There is not nearly enough detail in her [Dirac’s mother’s] comments or in reports of Dirac’s behaviour in school to justify a diagnosis that he was then autistic. His behavior as an adult, however, had all the characteristics that almost every autistic person has to some degree—reticence, passivity, aloofness, literal-mindedness, rigid patterns of activity, physical ineptitude, self-centredness and, above all, a narrow range of interests and a marked inability to empathise with other human beings.
Whatever the cause of Dirac’s unusual behavior, he was a great physicist. Farmelo sums up Dirac’s enduring legacy at the end of his book.
There is no doubt that Dirac was a great scientist, one of the few who deserves a place just below Einstein in the pantheon of modern physicists. Along with Heisenberg, Jordan, Pauli, Schrodinger and Born, Dirac was one of the group of theoreticians who discovered quantum mechanics. Yet his contribution was special. In his heyday, between 1925 and 1933, he brought a uniquely clear vision to the development of a new branch of science: the book of nature often seemed to be open in front of him.

Friday, March 5, 2010

Magnetic Measurements of Peripheral Nerve Function Using a Neuromagnetic Current Probe

Section 8.9 in the 4th Edition of Intermediate Physics for Medicine and Biology describes how a toroidal probe can be used to measure the magnetic field of a nerve.
If the signal [a weak biomagnetic field] is strong enough, it can be detected with conventional coils and signal-averaging techniques that are described in Chapter 11. Barach et al. (1985) used a small detector through which a single axon was threaded. The detector consisted of a toroidal magnetic core wound with many turns of fine wire (Fig. 8.26). Current passing through the hole in the toroid generated a magnetic field that was concentrated in the ferromagnetic material of the toroid. When the field changed, a measurable voltage was induced in the surrounding coil.
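The physics in this excerpt is easy to sketch numerically. In the toy calculation below, every parameter value (core dimensions, ferrite permeability, number of turns, size and rise time of the action current) is my own illustrative assumption, not a value taken from Barach et al.; the point is only to show why such a probe needs a low-noise amplifier.

```python
import math

# Hypothetical toroid parameters, chosen only for illustration
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T m/A
mu_r = 1e4                 # relative permeability of a ferrite core (assumed)
a, b = 0.5e-3, 1.0e-3      # inner and outer radii of the core, m (assumed)
h = 0.5e-3                 # core height, m (assumed)
N = 100                    # turns of fine wire

def flux(I):
    """Flux through the core from an axial current I threading the toroid.

    Ampere's law gives B = mu0*mu_r*I/(2*pi*r) inside the core;
    integrate over the rectangular cross section to get the flux.
    """
    return mu0 * mu_r * h * math.log(b / a) * I / (2 * math.pi)

# An action current of ~1 microamp rising in ~0.1 ms (typical orders of magnitude)
I_peak = 1e-6       # A
rise_time = 1e-4    # s
emf = N * flux(I_peak) / rise_time   # magnitude of V = -N dPhi/dt

print(f"peak flux through core: {flux(I_peak):.2e} Wb")
print(f"induced EMF: {emf * 1e6:.2f} microvolts")
```

Even with a high-permeability core and 100 turns, the induced signal comes out below a microvolt, which is why the probe's room-temperature, low-noise amplifier is as essential as the toroid itself.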
My friend Ranjith Wijesinghe, of the Department of Physics at Ball State University, recently published the definitive review of this research in the journal Experimental Biology and Medicine, titled “Magnetic Measurements of Peripheral Nerve Function Using a Neuromagnetic Current Probe” (Volume 235, Pages 159–169).
The progress made during the last three decades in mathematical modeling and technology development for the recording of magnetic fields associated with cellular current flow in biological tissues has provided a means of examining action currents more accurately than that of using traditional electrical recordings. It is well known to the biomedical research community that the room-temperature miniature toroidal pickup coil called the neuromagnetic current probe can be employed to measure biologically generated magnetic fields in nerve and muscle fibers. In contrast to the magnetic resonance imaging technique, which relies on the interaction between an externally applied magnetic field and the magnetic properties of individual atomic nuclei, this device, along with its room-temperature, low-noise amplifier, can detect currents in the nano-Ampere range. The recorded magnetic signals using neuromagnetic current probes are relatively insensitive to muscle movement since these probes are not directly connected to the tissue, and distortions of the recorded data due to changes in the electrochemical interface between the probes and the tissue are minimal. Contrary to the methods used in electric recordings, these probes can be employed to measure action currents of tissues while they are lying in their own natural settings or in saline baths, thereby reducing the risk associated with elevating and drying the tissue in the air during experiments. This review primarily describes the investigations performed on peripheral nerves using the neuromagnetic current probe. Since there are relatively few publications on these topics, a comprehensive review of the field is given. First, magnetic field measurements of isolated nerve axons and muscle fibers are described. One of the important applications of the neuromagnetic current probe to the intraoperative assessment of damaged and reconstructed nerve bundles is summarized. The magnetic signals of crushed nerve axons and the determination of the conduction velocity distribution of nerve bundles are also reviewed. Finally, the capabilities and limitations of the probe and the magnetic recordings are discussed.
Ranjith and I were both involved in this research when we were graduate students working in John Wikswo’s laboratory at Vanderbilt University. I remember the painstaking process of making those toroids; just winding the wire onto the ferrite core was a challenge. Wikswo built this marvelous contraption that held the core at one spot under a dissection microscope but at the same time allowed the core to be rotated around multiple axes (he’s very good at that sort of thing). When Russ Hobbie and I wrote about “many turns of fine wire” we were not exaggerating. The insulated copper wire was often as thin as 40-gauge (80 microns diameter), which is only slightly thicker than a human hair. With wire that thin, the slightest kink causes a break. We sometimes wound up to 100 turns on one toroid. It was best to wind the toroid when no one else was around (I preferred early morning): if someone startled you when you were winding, the result was usually a broken wire, requiring you to start over. We potted the toroid and its winding in epoxy, which itself was a job requiring several steps. We machined a mold out of Teflon, carefully positioned the toroid in the mold, soldered the ends of those tiny wires to a coaxial cable, and then injected epoxy into the mold under vacuum to avoid bubbles. If all went as planned, you ended up with a “neuromagnetic current probe” to use in your experiments. Often, all didn’t go as planned.

Ranjith’s review describes the work of several of my colleagues from graduate school days. Frans Gielen (who now works at the Medtronic Bakken Research Centre in Maastricht, the Netherlands) was a postdoc who used toroids to record action currents in skeletal muscle. Ranjith studied compound action currents in the frog sciatic nerve for his PhD dissertation. My research was mostly on crayfish axons. Jan van Egeraat was the first to measure action currents in a single muscle fiber, did experiments on squid giant axons, and studied how the magnetic signal changed near a region of the nerve that was damaged. Jan obtained his PhD from Vanderbilt a few years after I did, and then tragically died of cancer just as his career was taking off. I recall that when my wife Shirley and I moved from Tennessee to Maryland to start my job at the National Institutes of Health, Jan and his wife drove the rented truck with all our belongings while we drove our car with our 1-month-old baby. They got a free trip to Washington, DC out of the deal, and we got a free truck driver. John Barach was a Vanderbilt professor who originally studied plasma physics, but changed to biological physics later in his career when collaborating with Wikswo. I have always admired Barach’s ability to describe complex phenomena in a very physical and intuitive way (see, for instance, Problem 13 in Chapter 8 of our textbook). Of course, we were all led by Wikswo, whose energy and drive are legendary, and whose grant-writing skills kept us all in business. For his work developing the Neuromagnetic Current Probe, Wikswo earned an IR-100 Award in 1984, presented by R&D Magazine to recognize the 100 most technologically significant new products and processes of the year.

Friday, February 26, 2010

All the News That’s Fit to Print

Newspaper articles may not provide the most authoritative information about science and medicine, but they are probably the primary source of news about medical physics for the layman. Today, I will discuss some recent articles from one of the leading newspapers in the United States: the venerable New York Times.


Last week Russ Hobbie sent me a copy of an article in the February 16 issue of the NYT, titled “New Source of an Isotope in Medicine is Found.” It describes the continuing shortage of technetium-99m, a topic I have discussed before in this blog.
Just as the worldwide shortage of a radioactive isotope used in millions of medical procedures is about to get worse, officials say a new source for the substance has emerged: a nuclear reactor in Poland.

The isotope, technetium 99, is used to measure blood flow in the heart and to help diagnose bone and breast cancers. Almost two-thirds of the world’s supply comes from two reactors; one, in Ontario, has been shut for repairs for nine months and is not expected to reopen before April, and the other, in the Netherlands, will close for six months starting Friday.

Radiologists say that as a result of the shortage, their treatment of some patients has had to revert to inferior materials and techniques they stopped using 20 years ago.

But on Wednesday, Covidien, a company in St. Louis that purifies the material created in the reactor and packages it in a form usable by radiologists, will announce that it has signed a contract with the operators of the Maria reactor, near Warsaw, one of the world’s most powerful research reactors.
I doubt that relying on a Polish reactor is a satisfactory long-term solution to our 99mTc shortage, but it may provide crucial help with the short term crisis. A more promising permanent solution is described in a January 26 article on medicalphysicsweb.
GE Hitachi Nuclear Energy (GEH) announced today it has been selected by the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to help develop a U.S. supply of a radioisotope used in more than 20 million diagnostic medical procedures in the United States each year.
More information can be found at the American Association of Physicists in Medicine website. Let’s hope that this new initiative will prove successful.
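Part of why this shortage is so hard to manage is radioactive decay itself: neither the molybdenum-99 generator parent nor technetium-99m can be stockpiled. A quick back-of-envelope calculation with their published half-lives (roughly 66 hours for Mo-99 and 6.01 hours for Tc-99m; the shipping delays below are my own illustrative assumptions) makes the point:

```python
import math

# Approximate published half-lives
T_HALF_MO99 = 66.0    # hours, molybdenum-99 (the generator parent)
T_HALF_TC99M = 6.01   # hours, technetium-99m

def fraction_remaining(t_hours, t_half):
    """Fraction of a radioactive sample surviving after t_hours."""
    return math.exp(-math.log(2) * t_hours / t_half)

# Fraction of a Mo-99 shipment surviving a hypothetical 3-day (72 h) transport
f_mo = fraction_remaining(72, T_HALF_MO99)
# Fraction of a Tc-99m dose surviving a hypothetical 12 h delay
f_tc = fraction_remaining(12, T_HALF_TC99M)

print(f"Mo-99 after 72 h:  {f_mo:.0%}")
print(f"Tc-99m after 12 h: {f_tc:.0%}")
```

Roughly half of a Mo-99 shipment is gone after three days in transit, and a Tc-99m dose loses about three quarters of its activity in half a day, so production interruptions propagate to clinics almost immediately.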


The second topic I want to discuss today was called to my attention by my former student Phil Prior (PhD in Biomedical Sciences: Medical Physics, Oakland University, 2008). On January 26, the NYT published Walt Bogdanich’s article “As Technology Surges, Radiation Safeguards Lag.”
In New Jersey, 36 cancer patients at a veterans hospital in East Orange were overradiated—and 20 more received substandard treatment—by a medical team that lacked experience in using a machine that generated high-powered beams of radiation… In Louisiana, Landreaux A. Donaldson received 38 straight overdoses of radiation, each nearly twice the prescribed amount, while undergoing treatment for prostate cancer… In Texas, George Garst now wears two external bags—one for urine and one for fecal matter—because of severe radiation injuries he suffered after a medical physicist who said he was overworked failed to detect a mistake.

These mistakes and the failure of hospitals to quickly identify them offer a rare look into the vulnerability of patient safeguards at a time when increasingly complex, computer-controlled devices are fundamentally changing medical radiation, delivering higher doses in less time with greater precision than ever before.

Serious radiation injuries are still infrequent, and the new equipment is undeniably successful in diagnosing and fighting disease. But the technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.
A related article by the same author, “Radiation Offers New Cures, and Ways to Do Harm,” was also published in the Gray Lady a few days earlier. These articles discuss recent medical mistakes in which patients have received much more radiation than originally intended. While somewhat sensational, the articles reinforce the importance of quality control in medical physics.

The NYT articles triggered a response from the American Association of Physicists in Medicine on January 28.
The American Association of Physicists in Medicine (AAPM) has issued a statement today in the wake of several recent articles in the New York Times yesterday and earlier in the week that discuss a number of rare but tragic events in the last decade involving people undergoing radiation therapy.

While it does not specifically comment on the details of these events, the statement acknowledges their gravity. It reads in part: “The AAPM and its members deeply regret that these events have occurred, and we continue to work hard to reduce the likelihood of similar events in the future.” The full statement appears here.

Today's statement also seeks to reassure the public on the safety of radiation therapy, which is safely and effectively used to treat hundreds of thousands of people with cancer and other diseases every year in the United States. Medical physicists in hospitals and clinics across the United States are board-certified professionals who play a key role in assuring quality during these treatments because they are directly responsible for overseeing the complex technical equipment used.

Taken together, the articles I’ve discussed today highlight some of the challenges that face the field of medical physics. For those who want additional background about the underlying physics and its applications to medicine, I recommend—you guessed it—the 4th edition of Intermediate Physics for Medicine and Biology.

Friday, February 19, 2010

The Electron Microscope

Intermediate Physics for Medicine and Biology does not discuss one of the most important instruments in modern biology: the electron microscope. If I were to add a very brief introduction about the electron microscope to Intermediate Physics for Medicine and Biology, I would put it right after Sec. 14.1, The Nature of Light: Waves Versus Photons. It would look something like this.
14.1 ½ De Broglie Wavelength and the Electron Microscope

Like light, matter can have both wave and particle properties. The French physicist Louis de Broglie derived a quantum mechanical relationship between a particle’s momentum p and wavelength λ

λ = h/p     (14.6 ½)

[Eisberg and Resnick (1985)]. For example, a 100 eV electron has a speed of 5.9 × 10⁶ m s⁻¹ (about 2% the speed of light), a momentum of 5.4 × 10⁻²⁴ kg m s⁻¹, and a wavelength of 0.12 nm.

The electron microscope takes advantage of the short wavelength of electrons to produce exquisite pictures of very small objects. Diffraction limits the spatial resolution of an image to about a wavelength. For a visible light microscope, this resolution is on the order of 500 nm (Table 14.2). For the electron microscope, however, the wavelength of the electron limits the resolution. A typical electron energy used for imaging is about 100 keV, implying a wavelength much smaller than an atom (although in practice aberrations in the electron optics typically limit the resolution to about 1 nm). Table 1.2 shows that viruses appear as blurry smears in a light microscope, but can be resolved with considerable detail in an electron microscope. In 1986, Ernst Ruska shared the Nobel Prize in Physics “for his fundamental work in electron optics, and for the design of the first electron microscope.”
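The numbers quoted above are easy to verify from λ = h/p. A sketch: at 100 eV the electron is slow enough to treat non-relativistically, while at 100 keV it is cleaner to use the relativistic momentum, pc = √(E² − (mc²)²) with E the total energy.

```python
import math

# Physical constants (SI)
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron rest mass, kg
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron volt

# 100 eV electron: non-relativistic (v is only ~2% of c)
E1 = 100 * eV
v1 = math.sqrt(2 * E1 / m_e)   # from E = (1/2) m v^2
p1 = m_e * v1
lam1 = h / p1                  # de Broglie wavelength

# 100 keV electron: relativistic momentum, pc = sqrt(E^2 - (mc^2)^2)
KE2 = 100e3 * eV
E2 = KE2 + m_e * c**2                        # total energy
p2 = math.sqrt(E2**2 - (m_e * c**2)**2) / c
lam2 = h / p2

print(f"100 eV:  v = {v1:.2e} m/s, lambda = {lam1 * 1e9:.2f} nm")
print(f"100 keV: lambda = {lam2 * 1e12:.1f} pm")
```

This reproduces the 0.12 nm wavelength cited for a 100 eV electron, and gives a wavelength of a few picometers at 100 keV, consistent with the claim that the imaging wavelength is much smaller than an atom.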

Electron microscopes come in two types. In a transmission electron microscope (TEM), electrons pass through a thin sample. In a scanning electron microscope (SEM), a fine beam of electrons is raster scanned across the sample and secondary electrons emitted by the surface are imaged. In both cases, the image is formed in vacuum and the electron beam is focused using a magnetic lens.
To learn more, you can watch a YouTube video about the electron microscope. Nice collections of electron microscope images can be found at http://www.denniskunkel.com, http://www5.pbrc.hawaii.edu/microangela and http://www.mos.org/sln/SEM.


Friday, February 12, 2010

Biomagnetism and Medicalphysicsweb

Medicalphysicsweb is an excellent website for news and articles related to medical physics. Several articles that have appeared recently are related to the field of biomagnetism, a topic Russ Hobbie and I cover in Chapter 8 of the 4th edition of Intermediate Physics for Medicine and Biology. I have followed the biomagnetism field ever since graduate school, when I made some of the first measurements of the magnetic field of an isolated nerve axon. Below I describe four recent articles from medicalphysicsweb.

A February 2 article titled “Magnetometer Eases Cardiac Diagnostics” discusses a novel type of magnetometer for measuring the magnetic field of the heart. In Section 8.9 of our book, Russ and I discuss Superconducting Quantum Interference Device (SQUID) magnetometers, which have long been used to measure the small (100 pT) magnetic fields produced by cardiac activity. Another way to measure weak magnetic fields is to determine the Zeeman splitting of energy levels of a rubidium gas. The energy difference between levels depends on the external magnetic field, and is measured by detecting the frequency of optical light that is in resonance with this energy difference. Ben Varcoe, of the University of Leeds, has applied this technology to the heart by separating the magnetometer from the pickup coil:
The magnetic field detector—a rubidium vapour gas cell—is housed within several layers of magnetic shielding that reduce the Earth’s field about a billion-fold. The sensor head, meanwhile, is external to this shielding and contained within a handheld probe.
I haven’t been able to find many details about this device (such as if the pickup coils are superconducting or not, and why the pickup coil doesn’t transport the noise from the unshielded measurement area to the shielded detector), but Varcoe believes the device is a breakthrough in the way researchers can measure biomagnetic fields.
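To get a feel for the scales involved, compare the Larmor (Zeeman) precession frequency the rubidium atoms would have in the Earth's field with that produced by the heart. The gyromagnetic ratio below (about 7 Hz per nanotesla for the rubidium-87 ground state) is an approximate textbook value, and both field strengths are round numbers for illustration:

```python
# Zeeman splitting scales in a rubidium magnetometer (illustrative numbers)
GAMMA_OVER_2PI = 7.0e9   # Hz per tesla, Rb-87 ground state (approximate)

B_earth = 50e-6     # T, a typical geomagnetic field
B_heart = 100e-12   # T, the peak cardiac field quoted above

f_earth = GAMMA_OVER_2PI * B_earth   # precession frequency in Earth's field
f_heart = GAMMA_OVER_2PI * B_heart   # frequency shift due to the heart

print(f"Larmor frequency in Earth's field: {f_earth / 1e3:.0f} kHz")
print(f"Frequency shift from the heart:    {f_heart:.1f} Hz")
print(f"ratio: {f_earth / f_heart:.0e}")
```

The cardiac signal shifts the resonance by less than a hertz while the unshielded Earth's field sets it at hundreds of kilohertz, about a factor of a billion in field strength, which is presumably why the vapor cell sits inside that billion-fold magnetic shielding.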

Another recent (February 10) article on medicalphysicsweb is about the effect of magnetic resonance imaging scans on pregnant women. As described in Chapter 18 of Intermediate Physics for Medicine and Biology, MRI uses a radio-frequency magnetic field to flip the proton spins so their decay can be measured, resulting in the magnetic resonance signal. This radio-frequency field induces eddy currents in the body that heat the tissue. Heating is a particular concern if the MRI is performed on a pregnant woman, as it could affect fetal development.
Medical physicists at Hammersmith Hospital, Imperial College London, UK, have now developed a more sophisticated model of thermal transport between mother and baby to assess how MRI can affect the foetal temperature (Phys. Med. Biol. 55 913)… This latest analysis takes account of heat transport through the blood vessels in the umbilical cord, an important mechanism that was ignored in previous models. It also includes the fact that the foetus is typically half a degree warmer than the mother – another key piece of information overlooked in earlier work.
Russ and I discuss these issues in Sec. 14.10: Heating Tissue with Light, where we derive the bioheat equation. The authors of the study, Jeff Hand and his colleagues, found that under normal conditions fetal heating wasn’t a concern, but that if the fetus were exposed to 7.5 minutes of continuous RF field (unlikely during an MRI scan), the temperature increase could be significant.
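The competition at the heart of the bioheat equation, RF power deposition versus heat carried away by blood perfusion, can be caricatured with a zero-dimensional energy balance. Every parameter below is an assumption I chose for illustration (the SAR value is the order of the normal-mode whole-body limit; the perfusion time constant is a guess); the Hammersmith model is, of course, far more detailed:

```python
# Lumped (0-D) caricature of the Pennes bioheat balance:
#   c * dT/dt = SAR - c * T_rise / tau
# where tau is an effective perfusion cooling time constant.
SAR = 2.0     # W/kg, RF power deposition (order of the normal-mode limit)
c = 3600.0    # J/(kg K), specific heat of tissue (typical value)
tau = 500.0   # s, effective perfusion time constant (assumed)

dt = 1.0      # s, integration time step
T_rise = 0.0  # K, temperature elevation above baseline
for _ in range(int(7.5 * 60 / dt)):      # 7.5 minutes of continuous RF
    T_rise += (SAR / c - T_rise / tau) * dt   # deposition minus perfusion

print(f"temperature rise after 7.5 min: {T_rise:.2f} K")
print(f"steady-state rise would be:     {SAR * tau / c:.2f} K")
```

With these made-up but plausible numbers the rise after 7.5 minutes is a few tenths of a kelvin, which is exactly the regime where a half-degree difference between fetus and mother, and heat flow through the umbilical cord, start to matter.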

In a January 27 article, researchers from the University of Minnesota (Russ’s institution) used magnetoencephalography to diagnose post-traumatic stress disorder.
Post-traumatic stress disorder (PTSD) is a difficult condition to diagnose definitively from clinical evidence alone. In the absence of a reliable biomarker, patients’ descriptions of flashbacks, worry and emotional numbness are all doctors have to work with. Researchers from the University of Minnesota Medical School (Minneapolis, MN) have now shown how magnetoencephalography (MEG) could identify genuine PTSD sufferers with high confidence and without the need for patients to relive painful past memories (J. Neural Eng. 7 016011).
The magnetoencephalogram is discussed in Sec. 8.5 of Intermediate Physics for Medicine and Biology. The data for the Minnesota study was obtained using a 248-channel SQUID magnetometer. The researchers analyzed data from 74 patients with post-traumatic stress disorder known to the VA hospital in Minneapolis, and 250 healthy controls. The authors claim that the accuracy of the test is over 90%.

Finally, a February 8 article describes a magnetic navigation system installed in Taiwan by the company Stereotaxis.
The Stereotaxis System is designed to enable physicians to complete more complex interventional procedures by providing image guided delivery of catheters and guidewires through the blood vessels and chambers of the heart to treatment sites. This is achieved using computer-controlled, externally applied magnetic fields that govern the motion of the working tip of the catheter or guidewire, resulting in improved navigation, shorter procedure time and reduced x-ray exposure.
The system works by having ferromagnetic material in a catheter tip, and an applied magnetic field that can be adjusted to “steer” the catheter through the blood vessels. We discuss magnetic forces in Sec. 8.1 of Intermediate Physics for Medicine and Biology, and ferromagnetic materials in Sec. 8.8.
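The steering principle is the familiar torque on a magnetic dipole, τ = m × B, which rotates the tip until its moment aligns with the applied field. The rough numbers below are entirely my own assumptions (they are not Stereotaxis specifications), just to show the order of magnitude:

```python
# Order-of-magnitude torque on a ferromagnetic catheter tip.
# All values are assumptions for illustration only.
M_SAT = 1.4e6    # A/m, saturation magnetization of soft iron (approximate)
V_TIP = 1e-9     # m^3, i.e. 1 mm^3 of magnetic material in the tip (assumed)
B = 0.08         # T, applied steering field (assumed order of magnitude)

m = M_SAT * V_TIP          # magnetic moment of the saturated tip, A m^2
torque_max = m * B         # |tau| = m B sin(theta), maximum at 90 degrees

print(f"magnetic moment of tip: {m:.1e} A m^2")
print(f"peak aligning torque:   {torque_max:.1e} N m")
```

A torque of order 10⁻⁴ N·m acting on a millimeter-scale tip is ample to bend a flexible guidewire, and reorienting the external field reorients the tip without any mechanical linkage.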

Although I believe medicalphysicsweb is extremely useful for keeping up-to-date on developments in medical physics, I find that often their articles either have specialized physics concepts that the layman may not understand or, more often, don’t address the underlying physics at all. Yet, one can’t understand modern medicine without mastery of the basic physics concepts. My browsing through medicalphysicsweb convinced me once again about the importance of learning how physics can be applied to medicine and biology. Perhaps I am biased, but I think that studying from the 4th edition of Intermediate Physics for Medicine and Biology is a great way to master these important topics.

Friday, February 5, 2010

Beta Decay and the Neutrino

In Section 17.4 in the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss beta decay and the neutrino.
The emission of a beta-particle is accompanied by the emission of a neutrino… [which] has no charge and no rest mass… [and] hardly interacts with matter at all… A particle that seemed originally to be an invention to conserve energy and angular momentum now has a strong experimental basis.
Understanding Physics: The Electron, Proton, and Neutron, by Isaac Asimov, superimposed on Intermediate Physics for Medicine and Biology.
Understanding Physics:
The Electron, Proton, and Neutron,
by Isaac Asimov.
Our wording implies there is a story behind this particle “that seemed originally to be an invention to conserve energy.” Indeed, that is the case. I will let Isaac Asimov tell this tale. (Asimov's books, which I read in high school, influenced me to become a scientist.) The excerpt below is from Chapter 14 of his book Understanding Physics: The Electron, Proton, and Neutron.
In Chapter 11, disappearance in mass during the course of nuclear reactions was described as balanced by an appearance of energy in accordance with Einstein’s equation, e = mc². This balance also held in the case of the total annihilation of a particle by its anti-particle, or the production of a particle/anti-particle pair from energy.
Nevertheless, although in almost all such cases the mass-energy equivalence was met exactly, there was one notable exception in connection with radioactive radiations.

Alpha radiation behaves in satisfactory fashion. When a parent nucleus breaks down spontaneously to yield a daughter nucleus and an alpha particle, the sum of the mass of the two products does not quite equal the mass of the original nucleus. The difference appears in the form of energy—specifically, as the kinetic energy of the speeding alpha particle. Since the same particles appear as products at every breakdown of a particular parent nucleus, the mass-difference should always be the same, and the kinetic energy of the alpha particles should also always be the same. In other words, the beam of alpha particles should be monoenergetic. This was, in essence, found to be the case…

It was to be expected that the same considerations would hold for a parent nucleus breaking down to a daughter nucleus and a beta particle. It would seem reasonable to suppose that the beta particles would form a monoenergetic beam too…

Instead, as early as 1900, Becquerel indicated that beta particles emerged with a wide spread of kinetic energies. By 1914, the work of James Chadwick demonstrated the “continuous beta particle spectrum” to be undeniable.

The kinetic energy calculated for a beta particle on the basis of mass loss turned out to be a maximum kinetic energy that very few obtained. (None surpassed it, however; physicists were not faced with the awesome possibility of energy appearing out of nowhere.)

Most beta particles fell short of the expected kinetic energy by almost any amount up to the maximum. Some possessed virtually no kinetic energy at all. All told, a considerable portion of the energy that should have been present, wasn’t present, and through the 1920’s this missing energy could not be detected in any form.

Disappearing energy is as insupportable, really, as appearing energy, and though a number of physicists, including, notably, Niels Bohr, were ready to abandon the law of conservation of energy at the subatomic level, other physicists sought desperately for an alternative.

In 1931, an alternative was suggested by Wolfgang Pauli. He proposed that whenever a beta particle was produced, a second particle was also produced, and that the energy that was lacking in the beta particle was present in the second particle.

The situation demanded certain properties of the hypothetical particle. In the emission of beta particles, electric charge was conserved; that is, the net charge of the particles produced after emission was the same as that of the original particle. Pauli’s postulated particle therefore had to be uncharged. This made additional sense since, had the particle possessed a charge, it would have produced ions as it sped along and would therefore have been detectable in a cloud chamber, for instance. As a matter of fact, it was not detectable.

In addition, the total energy of Pauli’s projected particle was very small—only equal to the missing kinetic energy of the electron. The total energy of the particle had to include its mass, and the possession of so little energy must signify an exceedingly small mass. It quickly became apparent that the new particle had to have a mass of less than 1 percent of the electron and, in all likelihood, was altogether massless.

Enrico Fermi, who interested himself in Pauli’s theory at once, thought of calling the new particle a “neutron,” but Chadwick, at just about that time, discovered the massive, uncharged particle that came to be known by that name. Fermi therefore employed an Italian diminutive suffix and named the projected particle the neutrino (“little neutral one”), and it is by that name that it is known.
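Pauli's bookkeeping can be made concrete with a specific decay. For the beta decay of carbon-14 to nitrogen-14, the atomic mass difference fixes the maximum (endpoint) energy that the electron can carry; in any individual decay, whatever the electron lacks, the neutrino takes. The atomic masses below are standard tabulated values:

```python
# Energy bookkeeping in C-14 beta decay: Q-value from atomic masses.
# For beta-minus decay of neutral atoms the electron masses cancel,
# so Q is simply the atomic mass difference times c^2.
U_TO_MEV = 931.494      # MeV per atomic mass unit

M_C14 = 14.0032420      # u, atomic mass of carbon-14
M_N14 = 14.0030740      # u, atomic mass of nitrogen-14

Q = (M_C14 - M_N14) * U_TO_MEV   # MeV available to electron + neutrino

print(f"endpoint (maximum electron) energy: {Q * 1000:.0f} keV")
# Whatever the electron doesn't carry, the neutrino does:
for E_beta_keV in (0, 50, 150):
    print(f"  electron {E_beta_keV:3d} keV -> neutrino ~{Q * 1000 - E_beta_keV:.0f} keV")
```

The computed endpoint of about 156 keV is exactly the "maximum kinetic energy that very few obtained" in Asimov's account; electrons observed anywhere below it are not violating conservation of energy, they are sharing it with an undetected neutrino.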