Friday, March 16, 2012

Henry Moseley

Henry Moseley was an English physicist who developed x-ray methods to assign a unique atomic number Z to each element. He appears in Problem 3 of Chapter 16 in the 4th edition of Intermediate Physics for Medicine and Biology.
Problem 3 Henry Moseley first assigned atomic numbers to elements by discovering that the square root of the frequency of the Kα photon is linearly related to Z. Solve Eq. 16.2 for Z and show that this is true. Plot Z vs the square root of the frequency and compare it to data you look up.
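If you want to check the linearity numerically, here is a short sketch. I am not reproducing Eq. 16.2 itself; I assume the common hydrogen-like form ν = (3/4)cR(Z − 1)² for the Kα line, with the screening constant of 1 and the Rydberg frequency as standard values.

```python
# Sketch of Moseley's law for K-alpha x-rays, assuming the hydrogen-like
# form nu = (3/4) * c * R * (Z - 1)^2; the screening constant of 1 and the
# Rydberg frequency below are standard values, not taken from Eq. 16.2.
import math

c_R = 3.29e15  # Rydberg frequency c*R_infinity, in Hz

def kalpha_freq(Z):
    """Predicted K-alpha photon frequency (Hz) for atomic number Z."""
    return 0.75 * c_R * (Z - 1) ** 2

def Z_from_freq(nu):
    """Solve for Z: it is linear in the square root of the frequency."""
    return 1 + math.sqrt(nu / (0.75 * c_R))

# The square root of the frequency rises by the same step per unit of Z,
# which is the linearity Moseley discovered:
for Z in (20, 26, 29, 42):  # Ca, Fe, Cu, Mo
    print(Z, math.sqrt(kalpha_freq(Z)), round(Z_from_freq(kalpha_freq(Z))))
```

Plotting Z against the square root of measured Kα frequencies (for instance from the NIST x-ray data tables) should then give a straight line with intercept near 1.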
Asimov’s Biographical Encyclopedia of Science and Technology, by Isaac Asimov.
Asimov’s Biographical Encyclopedia of Science and Technology (Second Revised Edition) describes Moseley.
For a time [Moseley] did research under Ernest Rutherford where he was the youngest and most brilliant of Rutherford’s brilliant young men . . .

This discovery [that each element could be assigned an atomic number] led to a major improvement of Mendeleev’s periodic table. Mendeleev had arranged his table of elements in order of atomic weight, but this order had had to be slightly modified in a couple of instances to keep the table useful. Moseley showed that if it was arranged in order of nuclear charge (that is, according to the number of protons in the nucleus, a quantity that came to be known as the atomic number) no modifications were necessary . . . Furthermore, Moseley’s X-ray technique could locate all the holes in the table representing still-undiscovered elements, and exactly seven such holes remained in 1914, the year Moseley developed the concept of the atomic number.
Moseley died when he was only 27 years old. Asimov tells the story.
World War I had broken out at this time and Moseley enlisted at once as a lieutenant of the Royal Engineers. Nations were still naïve in their understanding of the importance of scientists to human society and there seemed no reason not to expose Moseley to the same chances of death to which millions of other soldiers were being exposed. Rutherford tried to get Moseley assigned to scientific labors but failed. On June 13, 1915, Moseley shipped to Turkey and two months later he was killed at Gallipoli as part of a thoroughly useless and badly bungled campaign, his death having brought Great Britain and the world no good . . . In view of what he might still have accomplished (he was only twenty-seven when he died), his death might well have been the most costly single death of the war to mankind generally.

Had Moseley lived it seems as certain as anything can be in the uncertain world of scientific history, that he would have received a Nobel Prize in physics . . .
The Making of the Atomic Bomb, by Richard Rhodes.
To learn more about Moseley, I recommend Chapter 4 (The Long Grave Already Dug) in Richard Rhodes’ classic The Making of the Atomic Bomb. Rhodes writes that “When he heard of Moseley’s death, the American physicist Robert A. Millikan wrote in public eulogy that his loss alone made the war ‘one of the most hideous and most irreparable crimes in history.’”

Friday, March 9, 2012

Glimpses of Creatures in Their Physical Worlds

Glimpses of Creatures in Their Physical Worlds, by Steven Vogel.
I have recently finished reading Steven Vogel’s book Glimpses of Creatures in Their Physical Worlds, which is a collection of twelve essays previously published in the Journal of Biosciences. The Preface begins
The dozen essays herein look at bits of biology, bits that reflect the physical world in which organisms find themselves. Evolution can do wonders, but it cannot escape its earthy context—a certain temperature range, a particular gravitational acceleration, the physical properties of air and water, and so forth. Nor can it tamper with mathematics. Thus the design of organisms—the level of organization at which natural selection acts most directly as well as the focus here—must reflect that physical context. The baseline it provides both imposes constraints and affords opportunities, the co-stars in what follows….
The first essay is titled “Two Ways to Move Material,” and the two ways it discusses are diffusion and flow. To compare the two quantitatively, Vogel uses the Péclet number, Pe, defined as Pe = VL/D, where V is the flow speed, L the distance, and D the diffusion coefficient. As I read his analysis I suddenly got a sinking feeling: Russ Hobbie and I discussed just such a dimensionless number in Problem 37 of Chapter 4 in the 4th edition of Intermediate Physics for Medicine and Biology, but we called it the Sherwood number, not the Péclet number. Were we wrong?

Edward Purcell, in his well-known article “Life at Low Reynolds Number,” introduced the quantity VL/D, which he called simply S with no other name. However, in a footnote at the end of the article he wrote “I’ve recently discovered that its official name is the Sherwood number, so S is appropriate after all!” Mark Denny, in his book Air and Water, states that VL/D is the Sherwood number, but in his Encyclopedia of Tide Pools and Rocky Shores he calls it the Péclet number. Vogel, in his earlier book Life in Moving Fluids, introduces VL/D as the Péclet number but adds parenthetically “sometimes known as the Sherwood number.”

Some articles report a more complicated relationship between the Péclet and Sherwood numbers, implying they can’t be the same. For instance, consider the paper “Nutrient Uptake by a Self-Propelled Steady Squirmer,” by Vanesa Magar, Tomonobu Goto, and T. J. Pedley (Quarterly Journal of Mechanics and Applied Mathematics, Volume 56, Pages 65–91, 2003), in which they write “We find the relationship between the Sherwood number (Sh), a measure of the mass transfer across the surface, and the Péclet number (Pe), which indicates the relative effect of convection versus diffusion”. Similarly, Fumio Takemura and Akira Yabe (“Gas Dissolution Process of Spherical Rising Gas Bubbles,” Chemical Engineering Science, Volume 53, Pages 2691–2699, 1998) define the Péclet number as VL/D, but define the Sherwood number as αL/D, where α is the mass transfer coefficient at an interface (having, by necessity, the same units as speed, m/s). After reviewing these and other sources, I conclude that Vogel is probably right: VL/D should properly be called the Péclet number and not the Sherwood number, although the distinction is not always clear in the literature.

Now that we have cleared up this Péclet/Sherwood unpleasantness, let’s return to Vogel’s lovely essay about two ways to move material. He calculated the Péclet number for capillaries using a speed of 0.7 mm/s (close to the 1 mm/s listed in Table 1.4 of Intermediate Physics for Medicine and Biology), a capillary radius of 3 microns (we use 4 microns in Table 1.4), and an oxygen diffusion coefficient of 1.8 × 10−9 m²/s (2 × 10−9 m²/s in Intermediate Physics for Medicine and Biology), and found a Péclet number of 1.2 (if you use the data in our book, you get 2). Vogel then argues that the optimum size for capillaries occurs when the Péclet number is approximately one, so that evolution has produced a nearly optimized system. The argument, in my words, is that oxygen transport changes from convection to diffusion in the capillaries. If the Péclet number were much smaller than one, diffusion would dominate and we would be better off with larger capillaries that are farther apart and faster blood flow to improve convection. If the Péclet number were much larger than one, convection would dominate and our circulatory system would be improved by smaller capillaries closer together, even if that means slower blood flow, to improve diffusion. A Péclet number of about one seems to be the happy medium.
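For readers who want to reproduce the arithmetic, here is the capillary estimate in a few lines of Python, using Vogel's numbers and then the values from our Table 1.4:

```python
# Peclet number Pe = V*L/D for a capillary, first with Vogel's values and
# then with the values from Table 1.4 of Intermediate Physics for
# Medicine and Biology.
V_vogel, L_vogel, D_vogel = 0.7e-3, 3e-6, 1.8e-9  # m/s, m, m^2/s
Pe_vogel = V_vogel * L_vogel / D_vogel
print(Pe_vogel)  # about 1.2

V_book, L_book, D_book = 1e-3, 4e-6, 2e-9
Pe_book = V_book * L_book / D_book
print(Pe_book)  # 2
```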

The third essay, “Getting Up to Speed,” is also relevant to readers of Intermediate Physics for Medicine and Biology. Our Problem 43 of Chapter 2 is about how high animals can jump.
Problem 43 Let’s examine how high animals can jump [Schmidt-Nielsen (1984), pp. 176-179]. Assume that the energy output of the jumping muscle is proportional to the body mass, M. The gravitational potential energy gained upon jumping to a height h is Mgh (g = 9.8 m s−2). If a 3 g locust can jump 60 cm, how high can a 70 kg human jump? Use scaling arguments.
In the next exercise, Problem 44, Russ and I ask the reader to calculate the acceleration of the jumper, which, if you solve the problem, you will find varies inversely with length.
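The scaling argument in these two problems can be sketched in a few lines. The leg-extension distances below are my own illustrative guesses, not numbers from the book.

```python
# Sketch of the scaling in Problems 43 and 44. Muscle energy scales with
# body mass M, and Mgh = E, so the jump height h = (E/M)/g is independent
# of size. The takeoff distances below are illustrative guesses.
g = 9.8  # m/s^2

h = 0.60  # m; the locust's jump fixes the energy per unit mass, E/M = g*h
E_per_mass = g * h
print("human jump height:", E_per_mass / g)  # same 0.60 m as the locust

# Takeoff speed satisfies v^2 = 2*g*h, gained over a leg-extension distance
# d roughly proportional to body length L, so the acceleration a = v^2/(2d)
# varies inversely with length (Problem 44).
v_squared = 2 * g * h
for name, d in (("locust", 0.04), ("human", 0.5)):  # assumed d, in meters
    a = v_squared / (2 * d)
    print(name, "takeoff acceleration:", a / g, "g")
```

Note how the small jumper needs a takeoff acceleration more than ten times larger than the big one, even though both reach the same height.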

Vogel analyzes this same topic, but digs a little deeper. He examines all sorts of jumpers, including seeds and spores that are hurled upward without the help of muscles at all. He finds that the scaling laws from Problems 43 and 44 do indeed hold, but that the traditional reasoning behind them is flawed.
The diversity of cases for which the scaling rule works ought to raise a flag of suspicion. Why should an argument based on muscle work for systems that do their work with other engines? . . . Something else must be afoot—again, the original argument presumed isometrically built muscle-powered jumpers. In short, the fit of the far more diverse projectiles demands a more general argument for the scaling of projectile performance. . .
Vogel goes on to show that for small animals, muscles would have to work unrealistically fast in order to produce the accelerations required to jump to a fixed height.
The old argument has crashed and burned. The work relative to mass of a contracting muscle deteriorates as animals get smaller rather than holding constant—a consequence of the requisite rise in intrinsic speed. Muscle need not and commonly does not power jumps in real time—elastic energy storage in tendons of collagen, in apodemes of chitin, and in pads of resilin provides power amplification. Finally, muscle powers none of those seed and tiny fungal projectiles. Yet acceleration persists in scaling as the classic argument anticipates. . .
So how does Vogel explain the scaling law?
A possible alternative emerges if we reexamine the relationship between force and acceleration defined by Newton’s second law. If acceleration indeed scales inversely with length and mass directly with the cube of length, then force should scale with the square of the length. Or, put another way, force divided by the square of the length should remain constant. Force over the square of length corresponds to stress, so we’re saying that stress should be constant. Perhaps our empirical finding that acceleration varies with length tells us that stress in some manner limits the systems.
Vogel’s book is full of these sorts of physical insights. I recommend it as supplemental reading for those studying from Intermediate Physics for Medicine and Biology.

Friday, March 2, 2012

Odds and Ends

It’s time to catch up on topics discussed previously in this blog.

The Technetium-99m Shortage

Several times I have written about the Technetium-99m shortage facing the United States (see here, here, here, and here). Russ Hobbie and I discuss 99mTc in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology.
The most widely used isotope is 99mTc. As its name suggests, it does not occur naturally on earth, since it has no stable isotopes. We consider it in some detail to show how an isotope is actually used. Its decay scheme has been discussed above. There is a nearly monoenergetic 140-keV γ ray. Only about 10% of the energy is in the form of nonpenetrating radiation. The isotope is produced in the hospital from the decay of its parent, 99Mo, which is a fission product of 235U and can be separated from about 75 other fission products. The 99Mo decays to 99mTc.
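The parent-daughter kinetics behind the hospital generator (the "moly cow") can be sketched with the two-isotope Bateman equation. This is my own illustration, not a calculation from the book; I use the standard half-lives (66 h for 99Mo, 6 h for 99mTc) and, for simplicity, ignore the branching fraction of 99Mo decays that feed the metastable state.

```python
# 99Mo -> 99mTc generator kinetics: daughter activity from the two-isotope
# Bateman equation, normalized to unit parent activity at t = 0.
import math

lam_Mo = math.log(2) / 66.0  # 99Mo decay constant, 1/h (66 h half-life)
lam_Tc = math.log(2) / 6.0   # 99mTc decay constant, 1/h (6 h half-life)

def tc_activity(t, A_Mo0=1.0):
    """99mTc activity at t hours after elution, for parent activity A_Mo0."""
    return A_Mo0 * lam_Tc / (lam_Tc - lam_Mo) * (
        math.exp(-lam_Mo * t) - math.exp(-lam_Tc * t))

# The daughter activity peaks roughly a day after each elution, which is
# why generators are "milked" on a daily schedule:
t_max = math.log(lam_Tc / lam_Mo) / (lam_Tc - lam_Mo)
print(t_max)  # about 23 hours
```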
An interesting article by Matthew Wald about the supply of 99mTc appeared in the February 6 issue of the New York Times. Wald writes
For years, scientists and policy makers have been trying to address two improbably linked problems that hinge on a single radioactive isotope: how to reduce the risk of nuclear weapons proliferation, and how to assure supplies of a material used in thousands of heart, kidney and breast procedures a year. . .

The isotope is technetium 99m, or tech 99 for short. It is useful in diagnostic tests because it throws off an easy-to-detect gamma ray; also, because it breaks down very quickly, it gives only a small dose of radiation to the patient.

But the recipe for tech 99 requires another isotope, molybdenum 99, that is now made in nuclear reactors using weapon-grade uranium. In May 2009, a Canadian reactor that makes most of the North American supply of moly 99 was shut because of a safety problem. A second reactor, in the Netherlands, was simultaneously closed for repairs.

The 54-year-old Canadian reactor, Chalk River in Ontario, is running now, but its license expires in four years. Canada built two replacement reactors, but even though they turned out to be unusable, their construction discouraged potential competitors ...
One solution to the 99mTc shortage may be to produce 99Mo in a cyclotron. The New York Times article discussed this solution briefly, and more detail is supplied by a report written by Hamish Johnston and published on the website medicalphysicsweb.org (all readers of Intermediate Physics for Medicine and Biology should become familiar with medicalphysicsweb.org). The gist of the method is to bombard 100Mo with protons in a cyclotron. Recently, researchers have made progress in developing this method. Johnston writes
Scientists in Canada are the first to make commercial quantities of the medical isotope technetium-99m using medical cyclotrons. The material is currently made in just a few ageing nuclear reactors worldwide, and recent reactor shutdowns have highlighted the current risk to the global supply of this important isotope.
See also an article in the Canadian newspaper, The Globe and Mail.

The Linear-No-Threshold Model

Another topic addressed recently in this blog is the risk of low levels of radiation, discussed in Chapter 16 of Intermediate Physics for Medicine and Biology.
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear-nonthreshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
On February 21, medicalphysicsweb.org published an article asking “Does LNT model overestimate cancer risk?” Science writer Jude Dineley reports
An in vitro study has demonstrated that DNA repair mechanisms respond more effectively when exposed to low doses of ionizing radiation, compared to high doses. The observations potentially contradict the benchmark for radiation-induced cancer risk estimation, the linear-no-threshold (LNT) model, and if so, could have large implications for cancer risk estimation (PNAS 109 443).
The Proceedings of the National Academy of Sciences paper that Dineley cites is titled “Evidence for Formation of DNA Repair Centers and Dose-Response Nonlinearity in Human Cells,” and is written by a team of researchers at the Lawrence Berkeley National Laboratory. The abstract is given below.
The concept of DNA 'repair centers' and the meaning of radiation-induced foci (RIF) in human cells have remained controversial. RIFs are characterized by the local recruitment of DNA damage sensing proteins such as p53 binding protein (53BP1). Here, we provide strong evidence for the existence of repair centers. We used live imaging and mathematical fitting of RIF kinetics to show that RIF induction rate increases with increasing radiation dose, whereas the rate at which RIFs disappear decreases. We show that multiple DNA double-strand breaks (DSBs) 1 to 2 μm apart can rapidly cluster into repair centers. Correcting mathematically for the dose dependence of induction/resolution rates, we observe an absolute RIF yield that is surprisingly much smaller at higher doses: 15 RIF/Gy after 2 Gy exposure compared to approximately 64 RIF/Gy after 0.1 Gy. Cumulative RIF counts from time lapse of 53BP1-GFP in human breast cells confirmed these results. The standard model currently in use applies a linear scale, extrapolating cancer risk from high doses to low doses of ionizing radiation. However, our discovery of DSB clustering over such large distances casts considerable doubts on the general assumption that risk to ionizing radiation is proportional to dose, and instead provides a mechanism that could more accurately address risk dose dependency of ionizing radiation.
PNAS published an editorial by Lynn Hlatky, titled “Double-Strand Break Motions Shift Radiation Risk Notions,” accompanying the article. Also, see the Lawrence Berkeley lab press release.

See Russ Hobbie Demonstrate MacDose

Finally, my coauthor Russ Hobbie is now on iTunes! His video “Photon Interactions: A Simulation Study with MacDose” can be downloaded free from iTunes, and provides much insight into how radiation interacts with tissue. The description on iTunes states
This 26-minute video uses a computer simulation to demonstrate how x-ray photons interact in the body through coherent scattering, the photoelectric effect, Compton scattering, and pair production. It emphasizes the statistical nature of photon attenuation and energy absorption. The viewer should be able to distinguish between the quantities energy transferred, energy absorbed, Kerma, and absorbed dose, describe the effect of secondary photons on energy transferred and absorbed dose, and understand the effect of photons of different energy when used for radiation therapy.
You can also find Russ’s video on YouTube, included below.

Russell Hobbie Demonstrates MacDose, Part 1

Russell Hobbie Demonstrates MacDose, Part 2 

Russell Hobbie Demonstrates MacDose, Part 3

Friday, February 24, 2012

The Hodgkin and Huxley Macarena

Last week, Oakland University had the honor of hosting James Keener, Distinguished Professor of Mathematics at the University of Utah. He gave a delightful talk as part of our Quantitative Biology lecture series. His book Mathematical Physiology won the 1998 Association of American Publishers award for the Best New Title in Mathematics. Somehow, Russ Hobbie and I failed to cite this book in the 4th edition of Intermediate Physics for Medicine and Biology. We did, however, cite Keener’s work with Sasha Panfilov on the three-dimensional propagation of electrical activity in the heart. You can learn more about Keener's career and research in a Society of Mathematical Biology newsletter.

One of my favorite features of Keener’s website is his instructions on how to do the Hodgkin-Huxley Macarena. A photograph shows a large group of researchers doing this dance at the Cold Spring Harbor Laboratory last summer. To make sense of the HH Macarena, imagine that the left arm is the sodium channel “m” gate, and the right arm is the “h” gate, as discussed in Chapter 6 of Intermediate Physics for Medicine and Biology. (Note: I assume the picture on Keener's website shows a person facing us, so that her left arm is on my right side). Initially h is open (right arm vertical) and m is closed (left arm horizontal). During an action potential, m opens (steps 2 and 3) and then h closes (steps 4 and 5) and the “nerve” becomes refractory. Since the h gate is slower than the m gate, perhaps you should imagine having a lead weight wrapped around your right wrist as you do the HH Macarena. Unfortunately, Keener does not yet have a video posted (with music), but perhaps we can encourage him to make one. If readers of Intermediate Physics for Medicine and Biology know only one dance, it should be the Hodgkin-Huxley Macarena (although the ECG dance is a close second).

Note added in 2019: Watch Keener lead the HH Macarena on YouTube!

Watch James Keener do the Hodgkin-Huxley Macarena. 

Friday, February 17, 2012

Measurement of Blood Pressure

Last week I was in the hospital with pneumonia. I’m fine now, thank you, but I was there three days, and last week’s blog entry was posted from my hospital bed (doesn’t everyone bring their laptop with them to the hospital?).

A hospital is a rich environment for a lover of physics applied to medicine. One thing that particularly caught my eye was the way they measure blood pressure. I got interested when, after a cuff was inflated around my arm, instead of feeling the familiar slow steady release of pressure as the nurse listened to my arm (that’s the way they still do it at the blood drive), this cuff started gripping and ungripping my arm in a strange and almost belligerent way. I had several opportunities to observe the measurement of blood pressure, and I decided that it would be a good topic for this blog.

First, a bit about the basic physics and physiology. In Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
As the heart beats, the pressure in the blood leaving the heart rises and falls. The maximum pressure during the cardiac cycle is the systolic pressure. The minimum is the diastolic pressure. (A blood pressure reading is in the form systolic/diastolic, measured in torr.)
The Human Body, by Isaac Asimov.
To explain how blood pressure is measured traditionally, I will turn to my hero Isaac Asimov’s book The Human Body (I quote from my 1963 paperback copy).
When blood is forced into the aorta, it exerts a pressure against the walls that is referred to as blood pressure. This pressure is measured by a device called a sphygmomanometer (sfig’ moh-ma-nom’ i-ter; “to measure the pressure of the pulse” [Greek]), an instrument which, next to the stethoscope, is surely the darling of the general practitioner. The sphygmomanometer consists of a flat rubber bag some 5 inches wide and 8 inches long. This is kept in a cloth bag that can be wrapped snugly about the upper arm, just over the elbow. The interior of the rubber bag is pumped up with air by means of a little rubber bulb fitted with a one-way valve. As the bag is pumped up, the pressure within it increases and that pressure is measured by a small mercury manometer to which the interior of the bag is connected by a second tube.

As the bag is pumped up, the arm is compressed until, at a certain point, the pressure of the bag against the arm equals the blood pressure. At that point, the main artery of the arm is pinched closed and the pulse in the lower arm (where the physician is listening with a stethoscope) ceases.

Now air is allowed to escape from the bag and, as it does so, the level of mercury in the manometer begins to fall and blood begins to make its way through the gradually opening artery. The person measuring the blood pressure can hear the first weak beats and the reading of the manometer at that point is the systolic pressure, for those first beats can be heard only during systole, when the blood pressure is highest. As the air continues to escape and the mercury level to fall, there comes a characteristic quality of the beat that indicates the diastolic pressure; the pressure when the heart is relaxed.
What I experienced in the hospital was different from Asimov’s explanation, and more automated. I’m having a difficult time finding good technical literature about automated blood pressure monitors, so I’m going out on a limb here to guess how they work. In the hospital there was an inflatable cuff around my forearm, but no one listening with a stethoscope. That person is replaced by an optical device (similar to a pulse oximeter; see Section 14.6, Biological Applications of Infrared Scattering, in Intermediate Physics for Medicine and Biology) clipped to my finger, which presumably can detect flow. The cuff inflates and flow is measured, the cuff changes the pressure to a new level and flow is measured again, and so on. This all happens rapidly; each new cuff pressure level was maintained for at most one second, implying that a single heartbeat sufficed to make the flow measurement. The cuff and optical device are attached to a computer, which decided when to increase or decrease the pressure, and what values to use. It seemed to be doing some sort of binary search, first going above and then below the level that allows flow. The algorithm slowed as the threshold level was approached, and I suspect that in such cases several heartbeats were required to determine accurately whether flow occurred. The device also output the pulse rate and, if my memory serves me well, the blood oxygenation level. Both were recorded after the cuff had completed its measurement of blood pressure.
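To make my guess concrete, here is a toy version of the search I think the machine performs. The flow model and the pressure bounds are invented for illustration; a real monitor (most use the oscillometric method) is surely more sophisticated.

```python
# Toy sketch of an automated cuff doing a binary search for systolic
# pressure. The "sensor" below stands in for the optical flow detector
# clipped to the finger; all numbers are invented for illustration.

def flow_detected(cuff_pressure, systolic=110.0):
    """Pretend sensor: flow reaches the finger only when the cuff
    pressure falls below the (hidden) systolic pressure, in torr."""
    return cuff_pressure < systolic

def find_systolic(lo=0.0, hi=300.0, tol=1.0):
    """Binary search for the lowest cuff pressure that blocks flow."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flow_detected(mid):
            lo = mid   # flow present: cuff is below systolic pressure
        else:
            hi = mid   # flow blocked: cuff is at or above systolic pressure
    return 0.5 * (lo + hi)

print(find_systolic())  # close to 110
```

Halving a 300 torr interval takes only about nine "heartbeats" to reach 1 torr resolution, consistent with how quickly the real device settled.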

I like this method. It does not depend on someone carefully listening for delicate blood flow noises (also known as Korotkoff sounds). In fact, while the blood pressure was being measured, the nurse was usually checking my IV line or doing some other task; the method is truly automated. One time, I did a little experiment and fidgeted with the finger clip during the measurement. The nurse got a fright when she saw my blood pressure up around 220/150. But a quick repeat measurement (during which I behaved myself) revealed that my blood pressure was actually normal (about 110/70). I suspect that the use of the binary search and pulse oximeter provides a more accurate measurement than does the traditional method, although I have no evidence to support that opinion. Automated blood pressure recording is an excellent example of how physics and engineering can contribute to medicine and biology.

Friday, February 10, 2012

Decay Plus Input at a Constant Rate

Section 2.7 of the 4th edition of Intermediate Physics for Medicine and Biology is titled Decay Plus Input at a Constant Rate. When I taught Biological Physics last fall (using for my textbook—you guessed it—Intermediate Physics for Medicine and Biology), I found that we kept coming back to this section over and over. Russ Hobbie and I write
Suppose that in addition to the removal of y from the system at a rate –by, y enters the system at a constant rate a, independent of y and t. The net rate of change of y is given by

dy/dt = a – by…  (2.24)

The solution is

y = (a/b)(1 − e^(−bt)).
One of the first applications of this equation is to the speed of an animal falling under the force of gravity and air friction (Chapter 2, Problem 28). One can show that the terminal speed of the animal is a/b. If further one proves that the gravitational force (a) is proportional to volume, and the frictional force (−by) is proportional to surface area, then the implication is that larger animals fall faster than smaller ones. This led to Haldane’s famous quote “You can drop a mouse down a thousand-yard mine shaft: and arriving at the bottom, it gets a slight shock and walks away. A rat is killed, a man is broken, a horse splashes.”
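Here is a short numerical check of Eq. 2.24 for the falling animal, where a plays the role of g and b the drag per unit mass; the value of b below is illustrative, not from the book.

```python
# Forward-Euler integration of dy/dt = a - b*y, compared with the analytic
# solution y = (a/b)*(1 - exp(-b*t)). Here a = g and b is an assumed drag
# coefficient per unit mass, so a/b is the terminal speed.
import math

a, b = 9.8, 2.0        # m/s^2 and 1/s; terminal speed a/b = 4.9 m/s
dt, t, y = 1e-4, 0.0, 0.0
while t < 5.0:
    y += (a - b * y) * dt   # one Euler step of dy/dt = a - b*y
    t += dt

exact = (a / b) * (1 - math.exp(-b * t))
print(y, exact, a / b)  # both essentially at the terminal speed 4.9
```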

We see this equation again in Chapter 3 when analyzing Newton’s law of cooling (Problem 45). The surrounding temperature plays the role of a, and the exponential cooling from convection is represented by a term like −by. The solution to the resulting differential equation is just the exponential solution presented in Sec. 2.7.

In Section 5.7 about the artificial kidney, the equation arises again in governing the concentration of solute in the blood when the concentration of the solute in the dialysis fluid is a constant. As with the animal falling, the ratio of blood volume V to membrane area S is a key parameter.

The equation appears twice in Chapter 6 (Impulses in Nerve and Muscle Cells). First, the gate variables m, h, and n in the Hodgkin and Huxley model obey this same differential equation. In the voltage-clamp case, the gates approach their steady-state values exponentially. Then, Problem 35 analyzes electrical stimulation of a space-clamped passive axon using a constant current, and finds that the transmembrane potential also approaches its steady-state value exponentially. This result is used to derive two quantities with colorful names—rheobase and chronaxie—that are important in neural stimulation.
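The rheobase/chronaxie result can be sketched directly from the exponential solution. The membrane time constant below is an assumed value, not a number from the book.

```python
# Sketch of the strength-duration result behind Problem 35: if a constant
# current charges a passive membrane as V(t) = V_max*(1 - exp(-t/tau)),
# the current needed to just reach threshold in a pulse of duration t is
# I(t) = I_rh / (1 - exp(-t/tau)), where I_rh is the rheobase (the minimum
# current that works for an arbitrarily long pulse). tau is assumed.
import math

tau = 1e-3   # membrane time constant, s (assumed)
I_rh = 1.0   # rheobase current, arbitrary units

def threshold_current(t):
    """Stimulus current needed to reach threshold with pulse duration t."""
    return I_rh / (1 - math.exp(-t / tau))

# Chronaxie: the duration at which the threshold current is twice rheobase.
chronaxie = tau * math.log(2)
print(chronaxie, threshold_current(chronaxie))  # threshold equals 2*I_rh
```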

By Chapter 10, some of the students have almost forgotten the equation when it appears again in the study of feedback loops (particularly Section 10.4). I am sure that the equation would appear even more times, except my one-semester class ended with Chapter 10.

Some might wonder why Intermediate Physics for Medicine and Biology contains an entire chapter (Chapter 2) about exponential growth and decay. I believe that the way we are constantly returning to the concepts introduced in Chapter 2 justifies why we organize the material the way we do. In fact, Chapter 2 has always been one of my favorite chapters in Intermediate Physics for Medicine and Biology.

Friday, February 3, 2012

Charles Dickens: Medical Physicist?

The 200th anniversary of the birth of Charles Dickens occurs this week (he was born February 7, 1812). I am a big Dickens fan, so I had to fit him into this week’s blog entry somehow. It is not easy, since there is little overlap between Dickens’ novels and the 4th edition of Intermediate Physics for Medicine and Biology. But let us try.

Great Expectations, by Charles Dickens.
Dickens’ life spanned an incredibly productive era of Victorian science in England. He was born the same year that English chemist Humphry Davy published his Elements of Chemical Philosophy. Dickens’ birth fell almost exactly halfway between the births of the two greatest British physicists of the 19th century: Michael Faraday (September 1791) and James Maxwell (November 1831). Just as Maxwell was publishing his eponymous Maxwell’s equations, Dickens was publishing Great Expectations. The physician John Snow was born one year after Dickens. It was Snow who famously traced the source of the 1854 cholera epidemic to the Broad Street pump in London (read more about this story in The Ghost Map by Steven Johnson). Dickens was born just two years after the birth of Charles Darwin, and On The Origin of Species appeared almost simultaneously with Dickens’ masterpiece A Tale of Two Cities. William Thomson (Lord Kelvin) was 12 years younger than Dickens. He formulated his version of the second law of thermodynamics in 1851, soon after Dickens published David Copperfield.

David Copperfield, by Charles Dickens.
A young Charles was working long hours at Warren’s Blacking Warehouse when the first issue of the British medical journal The Lancet appeared in 1823. Dickens published his first story the year after Faraday proposed his vision of electric and magnetic fields, and he got married and published his first novel (The Pickwick Papers) in 1836, the year Darwin returned from his voyage on the Beagle. Martin Chuzzlewit came out in 1843, the same year James Joule determined the mechanical equivalent of heat. Dickens traveled to France and Italy the year that the prominent English chemist John Dalton died in England. His last complete novel, Our Mutual Friend, appeared in 1865, the year before the first transatlantic telegraph cable was laid (for more about this fascinating story, read A Thread Across the Ocean by John Gordon). Kelvin developed the cable equation to govern the transmission of signals over this telegraph line, and the same equation is used nowadays to describe nerve axons. An elderly Dickens came to the United States for a reading tour in 1867, the year English surgeon Joseph Lister pioneered the use of antiseptic to sterilize surgical instruments. Charles Dickens died of a stroke in 1870—leaving the Mystery of Edwin Drood unfinished—just a few months before the birth of the greatest experimental physicist since Faraday, Ernest Rutherford.

A Tale of Two Cities, by Charles Dickens, superimposed on Intermediate Physics for Medicine and Biology.
A Tale of Two Cities,
by Charles Dickens.
The medical literature contains several studies of how medicine was portrayed in Dickens’ books. Howard Markel writes about “Charles Dickens and the Art of Medicine” in the Annals of Internal Medicine (Volume 101, Pages 408–411, 1984).
Charles Dickens, the novelist, humanist, and social reformer, was a keen observer of all the characteristics of the people in his novels. Dickens observed physicians and visited hospitals so that he could record various illnesses and diseases of people he met during his life. Dickens also worked for many public health reforms in Victorian England. The author used his observations of sick people in many of his novels and produced several accurate descriptions of disease, including Ménière's disorder and acute leukemia.
Bleak House, by Charles Dickens, superimposed on Intermediate Physics for Medicine and Biology.
Bleak House,
by Charles Dickens.
The article analyzes how, in Bleak House, Phil Squod (who was always “shouldering his way along walls”) demonstrated symptoms consistent with “dysfunction of the vestibular nerve…most likely Meniere’s disorder.” In Dombey and Son, the symptoms of young Paul Dombey “resemble those of a child with an acute form of leukemia.”

Kerrie Schoffer and John O’Sullivan focus on movement disorders in their study of Charles Dickens in the Journal of Clinical Neuroscience (Volume 13, Pages 898–901, 2006).
Nineteenth-century Victorian novelists played an important role in developing our understanding of medicine and illness. With the eye of an expert clinician, Charles Dickens provided several detailed accounts of movement disorders in his literary works, many of which predated medical descriptions. His gift for eloquence, imagery, and precision attest not only to the importance of careful clinical observation, but also provide an insightful and entertaining perspective on movement disorders for modern students of neuroscience.
Pickwick Papers, by Charles Dickens, superimposed on Intermediate Physics for Medicine and Biology.
Pickwick Papers,
by Charles Dickens.
So is Dickens a medical physicist? I guess not. But he was a great writer. My favorite Dickens book is A Christmas Carol; I read it every Christmas. I reread A Tale of Two Cities during my Paris trip two years ago (“…It is a far, far better thing that I do, than I have ever done; it is a far, far better rest that I go to than I have ever known.”). I enjoyed Bleak House a few years ago, although it took me a long time to plow through that 818-page tome. I love Dickens’ characters, like the Artful Dodger in Oliver Twist, and Wilkins Micawber in David Copperfield. What will be my next Dickens book? I haven’t read Nicholas Nickleby yet; I think it will be next.

Friday, January 27, 2012

The Intermediate Physics for Medicine and Biology Tourist

A map of the path of an Intermediate Physics for Medicine and Biology tourist.
Over the Christmas break I was browsing through the Guidebook for the Scientific Traveler: Visiting Physics and Chemistry Sites Across America, and it got me wondering what sites a reader of the 4th edition of Intermediate Physics for Medicine and Biology might want to visit. Apparently having too much time on my hands, I devised a trip through the United States for our readers. (Perhaps I’ll prepare an international edition later.) The trip starts and ends in Rochester, Michigan, where I work, but the path consists of a large circle and you can begin anywhere. I have not visited all these places, but I know enough about them to suspect you would enjoy them all. Tell me if I have forgotten any important sites. Happy travels!

Friday, January 20, 2012

Radiation Risks from Medical Imaging Procedures

On December 13, 2011, the American Association of Physicists in Medicine issued a position statement (PP 25-A) about radiation risks from medical imaging procedures. It is brief, and I will quote it in its entirety:
The American Association of Physicists in Medicine (AAPM) acknowledges that medical imaging procedures should be appropriate and conducted at the lowest radiation dose consistent with acquisition of the desired information. Discussion of risks related to radiation dose from medical imaging procedures should be accompanied by acknowledgement of the benefits of the procedures. Risks of medical imaging at effective doses below 50 mSv for single procedures or 100 mSv for multiple procedures over short time periods are too low to be detectable and may be nonexistent. Predictions of hypothetical cancer incidence and deaths in patient populations exposed to such low doses are highly speculative and should be discouraged. These predictions are harmful because they lead to sensationalistic articles in the public media that cause some patients and parents to refuse medical imaging procedures, placing them at substantial risk by not receiving the clinical benefits of the prescribed procedures.

AAPM members continually strive to improve medical imaging by lowering radiation levels and maximizing benefits of imaging procedures involving ionizing radiation.
News articles discussing this position statement appeared on the Inside Science and Physics Central websites.

The 4th edition of Intermediate Physics for Medicine and Biology discusses the risk of radiation in Section 16.13. Dose is the energy deposited by radiation in tissue per unit mass, and its unit, the gray, is equal to one joule per kilogram. A sievert is also a J/kg, but it differs from a gray in that it includes a weighting factor measuring the relative biological effectiveness of the radiation, and it is used to quantify the equivalent dose (although often, including in the remainder of this blog entry, people get a little sloppy and just say “dose” when they really mean “equivalent dose”). A sievert is a rather large dose of radiation, so when discussing medical imaging or background radiation exposure, scientists often use the millisievert (mSv).

Table 16.7 of Intermediate Physics for Medicine and Biology lists typical radiation doses for many medical imaging procedures. For example, a simple chest X ray has a dose of about 0.06 mSv, and a CT scan is 1–10 mSv. The average radiation dose from all natural (background) sources is given in Table 16.6 as 3 mSv per year (primarily from exposure to radon gas). A pilot logging 1000 hours in the air per year receives on the order of 7 mSv annually.
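To put these numbers in perspective, here is a short back-of-the-envelope calculation using the representative values quoted above (the CT figure is taken as the midpoint of the 1–10 mSv range; these are illustrative, not precise clinical doses):

```python
# Rough comparison of representative radiation doses, all in millisieverts.
# Values are the approximate figures quoted above, not precise clinical doses.

CHEST_XRAY_MSV = 0.06          # typical chest x ray
CT_SCAN_MSV = 5.0              # midpoint of the 1-10 mSv range for CT
BACKGROUND_MSV_PER_YEAR = 3.0  # average natural background dose

# How many chest x rays deliver the same dose as one year of background?
xrays_per_background_year = BACKGROUND_MSV_PER_YEAR / CHEST_XRAY_MSV
print(f"One year of background = about {xrays_per_background_year:.0f} chest x rays")

# Express a single CT scan as months of natural background exposure.
ct_in_background_months = 12 * CT_SCAN_MSV / BACKGROUND_MSV_PER_YEAR
print(f"One CT scan = about {ct_in_background_months:.0f} months of background")
```

In other words, a chest x ray adds only about a week’s worth of natural background dose, while a mid-range CT scan is comparable to a year or two of background.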

Perhaps the most interesting sentence in the AAPM position statement is “Risks of medical imaging at effective doses below 50 mSv for single procedures or 100 mSv for multiple procedures over short time periods are too low to be detectable and may be nonexistent.” To me, the phrase “may be nonexistent” seems to cast doubt on the linear nonthreshold model often used when discussing the risk of low-dose radiation. Russ Hobbie and I discuss this model in Intermediate Physics for Medicine and Biology.
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear-nonthreshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
We also consider other ideas, such as a threshold model for radiation effects and even hormesis, the idea that very low doses of radiation may be beneficial. The controversy over the biological effects of low-dose radiation is fascinating, but as best I can tell the validity of each of these models remains uncertain; getting accurate data when measuring tiny effects is difficult. I assume this is what motivates the word “may” in the phrase “may be nonexistent” from the position statement (although, I hasten to add, I have no inside information about the intent of the authors of the position statement—I’m just guessing). In our book, Russ and I come to a conclusion that is fairly consistent with the AAPM position statement.
Some investigators feel that there is evidence for a threshold dose, and that the LNT model overestimates the risk [Kathren (1996); Kondo (1993); Cohen (2002)]. Mossman (2001) argues against hormesis but agrees that the LNT model has led to ‘enormous problems in radiation protection practice’ and unwarranted fears about radiation.
Although I find the AAPM position statement to have a slightly condescending tone, I applaud it primarily as an antidote for those “unwarranted fears about radiation.” My impression is that many in the general public have a fear of the word radiation that borders on the irrational, stemming from a lack of knowledge about the basic physics governing how radiation interacts with tissue, and a poor understanding of risk analysis. I hope the AAPM position statement (and, immodestly, our textbook) helps change those concerns from irrational fears to reasoned and fact-based assessment. I would never discourage analysis of public safety, but I do insist that the analysis be intelligent and scientific.
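The difference between the competing models is easy to state quantitatively. A minimal sketch, in which the slope (a nominal risk coefficient of roughly 5% excess cancer risk per sievert, a commonly quoted order-of-magnitude figure) and the 100 mSv threshold are assumptions chosen purely for illustration, not values from the AAPM statement or our textbook:

```python
# Illustrative comparison of the linear-nonthreshold (LNT) model and a
# simple threshold model for excess cancer risk. The slope and threshold
# below are assumed round numbers for illustration only.

RISK_PER_SV = 0.05   # assumed LNT slope: ~5% excess risk per sievert
THRESHOLD_SV = 0.1   # assumed threshold dose: 100 mSv

def lnt_risk(dose_sv):
    """Excess risk under the LNT model: proportional to dose, no threshold."""
    return RISK_PER_SV * dose_sv

def threshold_risk(dose_sv):
    """Excess risk under a threshold model: zero below the threshold dose."""
    if dose_sv <= THRESHOLD_SV:
        return 0.0
    return RISK_PER_SV * (dose_sv - THRESHOLD_SV)

for dose_msv in (10, 50, 100, 500):
    d = dose_msv / 1000.0  # convert mSv to Sv
    print(f"{dose_msv:4d} mSv: LNT risk {lnt_risk(d):.4f}, "
          f"threshold-model risk {threshold_risk(d):.4f}")
```

At the doses typical of medical imaging (tens of mSv), the two models disagree completely: LNT predicts a small but nonzero excess risk, while the threshold model predicts none. That is exactly the regime where, as the AAPM notes, the risks are too low to be detectable, so the data cannot yet decide between them.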

Friday, January 13, 2012

Open Access

The journal Medical Physics is one of the leading publications in the field of physics applied to medicine. Recently, many articles in Medical Physics have become free to everyone (open access) (see the editorial here). This is great news to those readers of the 4th edition of Intermediate Physics for Medicine and Biology who do not have a personal or institutional subscription to Medical Physics. Some of the articles that can now be downloaded for free are the ever-popular point/counterpoint debates, review papers, award papers, and something called the “editor’s picks.” Also available free are the special 50th anniversary articles published as part of the celebration of half a century of contributions by the American Association of Physicists in Medicine in 2008. Several of these were cited by Russ Hobbie and me in our American Journal of Physics “Resource Letter MP-2: Medical Physics” (Volume 77, Pages 967–978, 2009). To access this wealth of free material, just go to the home page of the Medical Physics website and click on the Open Access tab.

Open Access publishing is becoming more common, and has been championed by many leading scientists, such as former NIH director and Nobel laureate Harold Varmus (listen to Varmus talk about open access here). Nevertheless, the topic is hotly debated. For instance, see the point/counterpoint discussion in the November 2005 issue of Medical Physics, titled “Results of Publicly Funded Scientific Research Should Be Immediately Available Without Cost to the Public.” Additional debate can be found in the journal Nature and at physicsworld.com.

 Harold Varmus discussing open access publishing.
http://www.youtube.com/watch?v=MD-OP7YScr0

Open Access to journal articles should benefit readers of Intermediate Physics for Medicine and Biology, because it will allow those readers immediate access to cutting-edge papers that otherwise would require a journal subscription. Another source of open access papers is BioMed Central:
BioMed Central is an independent publishing house committed to providing immediate open access to peer-reviewed biomedical research. All original research articles published by BioMed Central are made freely and permanently accessible online immediately upon publication. BioMed Central views open access to research as essential in order to ensure the rapid and efficient communication of research findings.
BioMed Central journals that will be of interest to readers of Intermediate Physics for Medicine and Biology are BMC Medical Physics, Biomedical Engineering Online, and Radiation Oncology.

A third source of papers is the Public Library of Science. Specific journals are PLoS One (the flagship journal, covering all areas of science), PLoS Medicine, PLoS Biology, and especially PLoS Computational Biology. Also of interest is PLoS Blogs.

The Open Access movement continues, slowly but steadily, to remake scientific publication. There are now hundreds of Open Access journals. Even some of the most prestigious publishers are getting into the act: the American Physical Society recently initiated the open access, all-online journal Physical Review X to go along with its other Physical Review journals.

In the spirit of Open Access, I’m pleased to announce that the 4th edition of Intermediate Physics for Medicine and Biology will now be given away, free of cha... just kidding. Maybe someday the Open Access movement will reach to textbooks, but not yet. At least this blog is free. ;)