Friday, December 19, 2014

A Theoretical Physicist’s Journey into Biology

Many physicists have shifted their research to biology, but rarely do we learn how they make this transition or, more importantly, why. The recent article “A Theoretical Physicist’s Journey into Biology: From Quarks and Strings to Cells and Whales” by Geoffrey West (Physical Biology, Volume 11, Article Number 053013, 2014) lets us see what is involved in changing fields, and the motivation for doing it. Readers of the 4th edition of Intermediate Physics for Medicine and Biology will remember West from Chapter 2, where Russ Hobbie and I discuss his work on Kleiber’s law. West writes
Biology will almost certainly be the predominant science of the twenty-first century but, for it to become successfully so, it will need to embrace some of the quantitative, analytic, predictive culture that has made physics so successful. This includes the search for underlying principles, systemic thinking at all scales, the development of coarse-grained models, and closer ongoing collaboration between theorists and experimentalists. This article presents a personal, slightly provocative, perspective of a theoretical physicist working in close collaboration with biologists at the interface between the physical and biological sciences.
On Growth and Form, by D’Arcy Thompson.
West describes his own path to biology, which included reading some classic texts such as D’Arcy Thompson’s On Growth and Form. He learned biology during intense free-for-all discussions with his collaborator James Brown and Brown’s student Brian Enquist.
The collaboration, begun in 1995, has been enormously productive, extraordinarily exciting and tremendous fun. But, like all excellent and fulfilling relationships, it has also been a huge challenge, sometimes frustrating and sometimes maddening. Jim, Brian and I met every Friday beginning around 9:00 am and finishing around 3:00 pm with only short breaks for necessities. This was a huge commitment since we both ran large groups elsewhere. Once the ice was broken and some of the cultural barriers crossed, we created a refreshingly open atmosphere where all questions and comments, no matter how “elementary,” speculative or “stupid,” were encouraged, welcomed and treated with respect. There were lots of arguments, speculations and explanations, struggles with big questions and small details, lots of blind alleys and an occasional aha moment, all against a backdrop of a board covered with equations and hand-drawn graphs and illustrations. Jim and Brian generously and patiently acted as my biology tutors, exposing me to the conceptual world of natural selection, evolution and adaptation, fitness, physiology and anatomy, all of which were embarrassingly foreign to me. Like many physicists, however, I was horrified to learn that there were serious scientists who put Darwin on a pedestal above Newton and Einstein.
West’s story reminds me of the collaboration between physicist Joe Redish and biologist Todd Cooke that I discussed previously in this blog, or Jane Kondev’s transition from basic physics to biological physics while he was an assistant professor at Brandeis (an awkward time in your career to make such a dramatic change).

I made my own shift from physics to biology much earlier in my career—in graduate school. Changing fields is not such a big deal when you are young, but I think all of us who make this transition have to cross that cultural barrier and make that huge commitment to learning a new field. I remember spending much of my first summer at Vanderbilt University reading papers by Hodgkin, Huxley, Rushton, and others, slowly learning how nerves work. Certainly my years at the National Institutes of Health provided a liberal education in biology.

I will give West the last word. He concludes by writing
Many of us recognize that there is a cultural divide between biology and physics, sometimes even extending to what constitutes a scientific explanation as encapsulated, for example, in the hegemony of statistical regression analyses in biology versus quantitative mechanistic explanations characteristic of physics. Nevertheless, we are witnessing an enormously exciting period as the two fields become more closely integrated, leading to new inter-disciplinary sub-fields such as biological physics and systems biology. The time seems right for revisiting D’Arcy Thompson’s challenge: “How far even then mathematics will suffice to describe, and physics to explain, the fabric of the body, no man can foresee. It may be that all the laws of energy, and all the properties of matter, all… chemistry… are as powerless to explain the body as they are impotent to comprehend the soul. For my part, I think it is not so.” Many would agree with the spirit of this remark, though new tools and concepts including closer collaboration may well be needed to accomplish his lofty goal.

Friday, December 12, 2014

In Vitro Evaluation of a 4-leaf Coil Design for Magnetic Stimulation of Peripheral Nerve

In the comments to last week’s blog entry, Frankie asks if there is a way to “safely, reversibly block nerve conduction (first in the lab, then in the clinic) with an exogenously applied E and M signal?” This is a fascinating question, and I may have an answer.

When working at the National Institutes of Health in the early 1990’s, Peter Basser and I analyzed magnetic stimulation of a peripheral nerve. The mechanism of excitation is similar to the one Frank Rattay developed for stimulating a nerve axon with an extracellular electrode. You can find Rattay’s method described in Problems 38–41 of Chapter 7 in the 4th edition of Intermediate Physics for Medicine and Biology. The bottom line is that excitation occurs where the spatial derivative of the electric field is largest. I have already recounted how Peter and I derived and tested our model, so I won’t repeat it today.
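If you want to play with this idea yourself, here is a minimal sketch of Rattay’s activating function in Python. The field profile below is invented for illustration—it is not the model Peter and I published:

```python
import numpy as np

# Toy axial electric field along an axon (x in meters, Ex in V/m).
# A Gaussian Ex mimics the field under the center of a figure-of-eight
# coil; the shape and scale here are made up for illustration only.
x = np.linspace(-0.05, 0.05, 1001)      # positions along the axon
Ex = -10.0 * np.exp(-(x / 0.01) ** 2)   # induced field, peak under coil center

# Rattay-style activating function: f = -dEx/dx.
# Depolarization is strongest where f is largest.
f = -np.gradient(Ex, x)

print(f"Peak |Ex| at x = {x[np.argmin(Ex)] * 100:.1f} cm")
print(f"Predicted excitation site at x = {x[np.argmax(f)] * 100:.1f} cm")
```

For a field that peaks at the coil center, the maximum of −dEx/dx sits off to one side—exactly the figure-of-eight problem described below. A four-leaf coil is designed so that the derivative itself peaks at the center.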

If you accept the hypothesis that excitation occurs where the electric field derivative is large, then the traditional coil design for magnetic stimulation—a figure-of-eight coil—has a problem: the axon is not excited directly under the center of the coil (where the electric field is largest), but a few centimeters from the center (where the electric field gradient is largest). What a nuisance. Doctors want a simple design like a crosshair: excitation should occur under the center. X marks the spot.

As I pondered this problem, I realized that we could build a coil just like the doctor ordered. It wouldn’t have a figure-of-eight design. Rather, it would be two figure-of-eights side by side. I called this the four-leaf coil. With this design, excitation occurs directly under the center.

An x-ray of a four-leaf coil used for magnetic stimulation of nerves.
John Cadwell of Cadwell Labs built a prototype of this coil; an x-ray of it is shown above. We wanted to test the coil in a well-controlled animal experiment, so we sent it to Paul Maccabee at the State University of New York Health Science Center in Brooklyn. Paul did the experiments, and we published the results in the journal Electroencephalography and Clinical Neurophysiology (Volume 93, Pages 68–74, 1994). The paper begins
Magnetic stimulation is used extensively for non-invasive activation of human brain, but is not used as widely for exciting limb peripheral nerves because of both the uncertainty about the site of stimulation and the difficulty in obtaining maximal responses. Recently, however, mathematical models have provided insight into one mechanism of peripheral nerve stimulation: peak depolarization occurs where the negative derivative of the component of the induced electric field parallel to nerve fibers is largest (Durand et al. 1989; Roth and Basser 1990). Both in vitro (Maccabee et al. 1993) and in vivo (Nilsson et al. 1992) experiments support this hypothesis for uniform, straight nerves. Based on these results, a 4-leaf magnetic coil (MC) design has been suggested that would provide a well defined site of stimulation directly under the center of the coil (Roth et al. 1990). In this note, we perform in vitro studies which test the performance of this new coil design during magnetic stimulation of a mammalian peripheral nerve.
Maccabee’s experiments showed that the coil worked as advertised. In the discussion of the paper we concluded that “the 4-leaf coil design provides a well defined stimulus site directly below the center of the coil.”

This is a nice story, but it’s all about exciting an action potential. What does it have to do with Frankie’s goal of blocking an action potential? Well, if you flip the polarity of the coil current, instead of depolarizing the nerve under the coil center, you hyperpolarize it. A strong enough hyperpolarization should block propagation. We wrote
In a final type of experiment, performed on 3 nerves, the action potential was elicited electrically, and a hyperpolarizing magnetic stimulus was applied between the stimulus and recording sites at various times. The goal was to determine if a precisely timed stimulus could affect action potential propagation. Using induced hyperpolarizing current at the coil center, with a strength that was approximately 3 times greater than that needed to excite by depolarization at that location, we never observed a block of the action potential. Moreover, no significant effect on the latency of the action potential propagating to the recording site was observed… Our magnetic stimulator was able to deliver stimuli with strengths up to only 2 or 3 times the threshold strength, and therefore the magnetic stimuli were probably too weak to block propagation. It is possible that such phenomena might be observed using a more powerful stimulator.
Frankie, I have good news and bad news. The good news is that you should be able to reversibly block nerve conduction with magnetic stimulation using a four-leaf coil. The bad news is that it didn’t work with Paul’s stimulator; perhaps a stronger stimulator would do the trick. Give it a try.

Friday, December 5, 2014

The Bubble Experiment

When I was a graduate student, my mentor John Wikswo assigned me the job of measuring the magnetic field of a nerve axon. This experiment required me to dissect the ventral nerve cord out of a crayfish, thread it through a wire-wound ferrite-core toroid, immerse the nerve and toroid in saline, stimulate one end of the nerve, and record the magnetic field produced by the propagating action currents. One day as I was lowering the instrument into the saline bath, a bubble got stuck in the gap between the nerve and the inner surface of the toroid. “Drat,” I thought as I searched for a needle to remove it. But before I could poke it out I wondered, “How will the bubble affect the magnetic signal?”

A wire-wound, ferrite-core toroid, used to measure the magnetic field of a nerve axon.

To answer this question, we need to review some magnetism. Ampere’s law states that the line integral of the magnetic field around a closed path is proportional to the net current passing through a surface bounded by that path. For my experiment, that meant the magnetic signal depended on the net current passing through the toroid. The net current is the sum of the current inside the nerve axon and that fraction of the current in the saline bath that threads the toroid—the return current. In general, these currents flow in opposite directions and partially cancel. One of the difficulties I faced when interpreting my data was determining how much of the signal was from intracellular current and how much was from return current.
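In symbols, the measurement boils down to Ampere’s law (a sketch; here $I_i$ is the intracellular current and $I_r$ the return current threading the toroid):

$$ \oint \mathbf{B} \cdot d\mathbf{s} = \mu_0 \left( I_i + I_r \right) . $$

Because $I_i$ and $I_r$ have opposite signs, they partially cancel, and the toroid signal is proportional to their sum. If the return current could somehow be blocked, the signal would be proportional to $I_i$ alone, and the difference between the two recordings would give $I_r$.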

I struggled with this question for months. I calculated the return current with a mathematical model involving Fourier transforms and Bessel functions, but the calculation was based on many assumptions and required values for several parameters. Could I trust it? I wanted a simpler way to find the return current.

Then along came the bubble, plugging the toroid like Pooh stuck in Rabbit’s front door. The bubble blocked the return current, so the magnetic signal arose from only the intracellular current. I recorded the magnetic signal with the bubble, and then—as gently as possible—I removed the bubble and recorded the signal again. This was not easy, because surface tension makes a small bubble in water sticky, so it stuck to the toroid as if glued in place. But I eventually got rid of it without stabbing the nerve and ending the experiment.

To my delight, the magnetic field with the bubble was much larger than when it was absent. The problem of estimating the return current was solved; it’s the difference of the signal with and without the bubble. I reported this result in one of my first publications (Roth, B. J., J. K. Woosley and J. P. Wikswo, Jr., 1985, “An Experimental and Theoretical Analysis of the Magnetic Field of a Single Axon,” In: Biomagnetism: Applications and Theory, Weinberg, Stroink and Katila, Eds., Pergamon Press, New York, pp. 78–82.).
When taking data from a crayfish nerve, the toroid and axon were lifted out of the bath for a short time. […] When again placed in the bath an air bubble was trapped in the center of the toroid, filling the space between the axon and the toroid inner surface. […] Taking advantage of this fortunate occurrence, data were taken with and without the bubble present. […] The magnetic field with the bubble present […] is narrower and larger than the field with the toroid filled with saline.
The magnetic field of a nerve axon with and without a bubble trapped between the nerve and toroid.
On the day of the bubble experiment I was lucky. I didn’t plan the experiment. I wasn’t wise enough or thoughtful enough to realize in advance that a bubble was the ideal way to eliminate the return current. But when I looked through the dissecting microscope and saw the bubble stuck there, I was bright enough to appreciate my opportunity. “Chance favors the prepared mind.”

I have a habit of turning all my stories into homework problems. You will find the bubble story in the 4th edition of Intermediate Physics for Medicine and Biology, Problem 39 of Chapter 8. Focus on part (b).
Problem 39 A coil on a magnetic toroid as in Problem 38 is being used to measure the magnetic field of a nerve axon.
(a) If the axon is suspended in air, with only a thin layer of extracellular fluid clinging to its surface, use Ampere’s law to determine the magnetic field, B, recorded by the toroid.
(b) If the axon is immersed in a large conductor such as a saline bath, B is proportional to the sum of the intracellular current plus that fraction of the extracellular current that passes through the toroid (see Problem 13). Suppose that during an experiment an air bubble is trapped between the axon and the inner radius of the toroid. How is the magnetic signal affected by the bubble? See Roth et al. (1985).

Friday, November 28, 2014

The Bowling Ball and the Feather

Dropping a feather and a ball in a vacuum to show that they fall at the same rate is a classic physics demonstration. We have a version of this demo at Oakland University, but it is not very effective. A small ball and a feather are in a tube about 1 meter long and a few centimeters in diameter. We have a vacuum pump to remove the air, but it is difficult to see the objects from the back of the room, and often they bump into the wall of the tube, slowing them down. I have never found it useful. Yet, the physical principle being demonstrated is fundamental. The gravitational mass in Newton’s universal law of gravity and the inertial mass in Newton’s second law of motion cancel out, so that all objects fall downward with acceleration g = 9.8 m/s2.

This result is unexpected because in everyday life we experience air friction. When you include air friction, objects do not all fall at the same rate. Russ Hobbie and I illustrate this point in Problem 28 of Chapter 2 in the 4th edition of Intermediate Physics for Medicine and Biology.

Problem 28 When an animal of mass m falls in air, two forces act on it: gravity, mg, and a force due to air friction. Assume that the frictional force is proportional to the speed v.
(a) Write a differential equation for v based on Newton’s second law, F = m(dv/dt).
(b) Solve this differential equation. (Hint: Compare your equation with Eq. 2.24.)
(c) Assume that the animal is spherical, with radius a and density ρ. Also, assume that the frictional force is proportional to the surface area of the animal. Determine the terminal speed (speed of descent in steady state) as a function of a.
(d) Use your result in part (c) to interpret the following quote by J. B. S. Haldane [1985]: “You can drop a mouse down a thousand-yard mine shaft; and arriving at the bottom, it gets a slight shock and walks away. A rat is killed, a man is broken, a horse splashes.”
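For part (c), here is a minimal sketch of where the algebra leads (with b the drag coefficient; the numerical factors are left for the problem):

$$ m\frac{dv}{dt} = mg - bv \qquad\Rightarrow\qquad v_{\mathrm{term}} = \frac{mg}{b} . $$

With $m = \tfrac{4}{3}\pi a^3 \rho$ and $b$ proportional to the surface area $4\pi a^2$, the terminal speed grows in proportion to the radius $a$: the mouse drifts down, the horse splashes.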
If we ignore air friction, v = gt; the acceleration is g and does not depend on mass. With air friction, objects reach a terminal velocity that depends on their mass. We are all so used to seeing a feather float downward with its motion dominated by air friction that it is difficult to believe it could ever fall as fast as a ball. To persuade students that this behavior does indeed happen, to convince them that in a vacuum a feather drops like a rock, we need a powerful demonstration. The result is so significant, and so nonintuitive, that the demo must be dramatic and memorable.

Now we have it. Watch this amazing video with British Physics Professor Brian Cox. He found the biggest vacuum chamber in the world—a large room used by NASA to test space vehicles—and inside it he dropped a bowling ball and a feather simultaneously from the same height. When the room was filled with air, the feather slowly fluttered to the ground. When the room was evacuated, the feather stayed right beside the bowling ball all the way down. The visual effect is stunning. Cox has a fine sense of drama, building the tension until the final sensational experiment. The video is less than five minutes long. You’ve gotta see it.

Friday, November 21, 2014

The MCAT and IPMB

The Medical College Admission Test, famously known as the MCAT, is an exam taken by students applying to medical school. The Association of American Medical Colleges will introduce a new version of the MCAT next year, focusing on competencies rather than on prerequisite classes. How well does the 4th edition of Intermediate Physics for Medicine and Biology prepare premed students for the MCAT?

The new MCAT will be divided into four sections, and the one most closely related to IPMB deals with the chemical and physical foundations of biological systems. Within that section are two foundational concepts, of which one is about how “complex living organisms transport materials, sense their environment, process signals, and respond to changes that can be understood in terms of physical principles.” This concept is further subdivided into five categories. Below, I review the topics included in these categories and indicate what chapter in IPMB addresses each.

MCAT: Translational motion, forces, work, energy, and equilibrium in living systems

IPMB: Chapter 1 discusses mechanics, including forces and torques, with applications to biomechanics. Work and energy are introduced in Chapter 1, and analyzed in more detail in Chapter 3 on statistical mechanics and thermodynamics (parts of thermodynamics are included under another foundational concept dealing mostly with chemistry). Periodic motion is covered in Chapter 11, which discusses the amplitude, frequency and phase of an oscillator. Waves are analyzed in Chapter 13 about sound and ultrasound.

MCAT: Importance of fluids for the circulation of blood, gas movement, and gas exchange

IPMB: Chapter 1 analyzes fluids, including buoyancy, hydrostatic pressure, viscosity, Poiseuille flow, turbulence, and the circulatory system. Much of this material is not covered in a typical introductory physics class. Chapter 3 introduces absolute temperature, the ideal gas law, heat capacity, and Boltzmann’s constant.

MCAT: Electrochemistry and electrical circuits and their elements

IPMB: Chapters 6 and 7 cover electrostatics, including charge, the electric field, current, voltage, Ohm’s law, resistors, capacitors, and nerve conduction. Chapter 8 discusses the magnetic field and magnetic forces.

MCAT: How light and sound interact with matter

IPMB: Sound is analyzed in Chapter 13, including the speed of sound, the decibel, attenuation, reflection, the Doppler effect, ultrasound, and the ear. Chapter 14 covers light, photon energy, color, interference, and the eye. This chapter also describes absorption of light in the infrared, visible, and ultraviolet. Chapter 18 analyzes nuclear magnetic resonance.

MCAT: Atoms, nuclear decay, electronic structure, and atomic chemical behavior

IPMB: Chapter 17 is about nuclear physics and nuclear medicine, covering isotopes, radioactive decay, and half life. Atoms and atomic energy levels are explained in Chapter 14.

MCAT: General mathematical concepts and techniques

IPMB: Chapter 1 and many other chapters require students to estimate numerically. Chapter 2 covers linear, semilog, and log-log plots, and exponential growth. Metric units and dimensional analysis are used everywhere. Probability concepts are discussed in Chapter 3 and other chapters. Basic math skills such as exponentials, logarithms, scientific notation, trigonometry, and vectors are reinforced throughout the book and in the homework problems, and are reviewed in the Appendices.

The MCAT section about biological and biochemical foundations of living systems includes diffusion and osmosis (discussed in Chapters 4 and 5 of IPMB), membrane ion channels (covered in Chapter 9), and feedback regulation (analyzed in Chapter 10).

Overall, Intermediate Physics for Medicine and Biology covers many of the topics tested on the MCAT. A biological or medical physics class based on IPMB would prepare a student for the exam, and would reinforce problem solving skills and teach the physical principles underlying medicine, resulting in better physicians.

I’m a realist, however. I know premed students take lots of classes, and they don’t want to take more physics beyond a two-semester introduction, especially if the class might lower their grade point average. I have tried to recruit premed students into my Biological Physics (PHY 325) and Medical Physics (PHY 326) classes here at Oakland University, with little success. Perhaps if they realized how closely the topics and skills required for the MCAT correspond to those covered by Intermediate Physics for Medicine and Biology they would reconsider.

To learn more about how to prepare for the physics competencies on the MCAT, see Robert Hilborn’s article “Physics and the Revised Medical College Admission Test,” published in the American Journal of Physics last summer (Volume 82, Pages 428–433, 2014).

Friday, November 14, 2014

Faraday, Maxwell, and the Electromagnetic Field

Faraday, Maxwell, and the Electromagnetic Field, by Nancy Forbes and Basil Mahon.
Michael Faraday and James Clerk Maxwell are two of my scientific heroes. So, when I saw the book Faraday, Maxwell, and the Electromagnetic Field displayed in the new book section of the Rochester Hills Public Library, I had to check it out. In their introduction, Nancy Forbes and Basil Mahon write
It is almost impossible to overstate the scale of Faraday and Maxwell’s achievement in bringing the concept of the electromagnetic field into human thought. It united electricity, magnetism, and light into a single, compact theory; changed our way of life by bringing us radio, television, radar, satellite navigation, and mobile phones; inspired Einstein’s special theory of relativity; and introduced the idea of field equations, which became the standard form used by today’s physicists to model what goes on in the vastness of space and inside atoms.
I have read previous biographies of both Faraday and Maxwell, so their story was familiar to me. But one anecdote about Faraday I had never heard before.
The Royal Institution’s Friday Evening Discourses had by now become an institution in their own right. The lecture on April 3, 1846, turned out to be a historic occasion, although none of the audience recognized it as such and the whole thing happened by chance in a rather bizarre fashion. Charles Wheatstone was to have been the latest in a long line of distinguished speakers, but he panicked and ran away just as he was due to make his entrance[!]. Although amply confident in his professional dealings as a scientist, inventor, and businessman, Wheatstone was notoriously shy of speaking in public, and Faraday had taken a gamble when engaging him to talk about his latest invention, the electromagnetic chronoscope—a device for measuring small time intervals, like the duration of a spark. The gamble had failed, and Faraday was left with the choice of sending disappointed customers home or giving the talk himself. He chose to talk, but he ran out of things to say on the advertised topic well before the allotted hour was up.

Caught off-guard, he did what he had never done before and gave the audience a glimpse into his private meditations on matter, lines of force, and light. In doing so, he drew an extraordinarily prescient outline of the electromagnetic theory of light, as it would be developed over the next sixty years….
Writing about Maxwell’s electromagnetic theory, Forbes and Mahon say
The theory’s construction had been an immense creative effort, sustained over a decade and inspired, from first to last, by the work of Michael Faraday. Thanks to Faraday’s meticulous recording of his findings and thoughts in his Experimental Researches in Electricity, Maxwell had been able to see the world as Faraday did, and, by bringing together Faraday’s vision with the power of Newtonian mathematics, to give us a new concept of physical reality, using the power of mathematics. But mathematics would not have been enough without Maxwell’s own near-miraculous intuition; witness the displacement current, which gave the theory its wonderful completeness. The theory belongs to both Maxwell and Faraday.
Russ Hobbie and I discuss electricity and magnetism in the 4th edition of Intermediate Physics for Medicine and Biology. Chapters 6 and 7 show how electrostatics can be used to describe how nerves and muscles behave. Chapter 8 discusses magnetism and electromagnetism. Chapter 9 examines in more detail how electromagnetic fields interact with the body, and Chapter 18 describes how magnetism leads to magnetic resonance imaging. So, it’s safe to say that IPMB has Maxwell and Faraday’s influence throughout.

If you want to learn more about Maxwell’s work, I suggest Maxwell on the Electromagnetic Field: A Guided Study by Thomas K. Simpson. He reproduces Maxwell’s three landmark papers, and provides the necessary context and background to understand them. Forbes and Mahon talk briefly at the end of their book about the scientists who came after Maxwell and firmly established his theory. For more on this topic, read The Maxwellians, one of the best histories of science I know. I enjoyed Faraday, Maxwell, and the Electromagnetic Field. It provides a great introduction to a fascinating story in the history of science.

Friday, November 7, 2014

Low Reynolds Number Flows

In Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Reynolds number.
The importance of turbulence (nonlaminar) flow is determined by a dimensionless number characteristic of the system called the Reynolds number NR. It is defined by NR = LVρ/η where L is a length characteristic of the problem, V a velocity characteristic of the problem, ρ the density, and η the viscosity of the fluid. When NR is greater than a few thousand, turbulence usually occurs….

When NR is large, inertial effects are important. External forces accelerate the fluid. This happens when the mass is large and the viscosity is small. As the viscosity increases (for fixed L, V, and ρ) the Reynolds number decreases. When the Reynolds number is small, viscous effects are important. The fluid is not accelerated, and external forces that cause the flow are balanced by viscous forces. Since viscosity is a form of internal friction in the fluid, work done on the system by the external forces is transformed into thermal energy. The low-Reynolds-number regime is so different from our everyday experience that the effects often seem counterintuitive. They are nicely described by Purcell (1977).
“Life at Low Reynolds Number,” by Edward Purcell.
Edward Purcell’s 1977 paper in the American Journal of Physics provides much insight into low Reynolds number flow, and is a classic. But to learn from this paper you have to read it. Nowadays, students often want to learn from videos rather than reading text (don’t get me started...). Fortunately, a good video exists to explain low-Reynolds-number flow, and it has been around for many years. Click here to watch G. I. Taylor illustrate low-Reynolds-number flow. Sir Geoffrey Ingram Taylor (1886–1975) was an English physicist and an expert in fluid dynamics. He contributed to the Manhattan Project by analyzing the hydrodynamics of implosion needed to develop a plutonium bomb. Among his many contributions is the description of Taylor-Couette flow between two rotating cylinders.
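To get a feel for the numbers, here is a back-of-the-envelope sketch; the characteristic lengths and speeds are rough, order-of-magnitude guesses:

```python
# Reynolds number N_R = L * V * rho / eta for swimmers in water.
# rho = 1000 kg/m^3 and eta = 1e-3 Pa s (water); L and V are rough guesses.
rho, eta = 1000.0, 1.0e-3

swimmers = {
    "human":    (1.0, 1.0),      # L = 1 m,   V = 1 m/s
    "goldfish": (0.05, 0.1),     # L = 5 cm,  V = 10 cm/s
    "E. coli":  (2e-6, 30e-6),   # L = 2 um,  V = 30 um/s
}

for name, (L, V) in swimmers.items():
    print(f"{name:10s} N_R = {L * V * rho / eta:.1e}")
```

A swimming person operates at a Reynolds number near a million; a bacterium lives near 10−5, deep in the viscous regime that Purcell and Taylor describe.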

The video shows a beautiful example of the reversibility of low Reynolds number flow. A blob of dye is placed into the fluid between the cylinders, one of the cylinders is rotated, and the dye spreads throughout the fluid. The rotation is then reversed, and the dye eventually returns to its original localized blob. This demonstration always reminds me of the formation of a spin echo during magnetic resonance imaging (see Chapter 18 of IPMB), where all spins begin in phase after a 90 degree radio-frequency magnetic field pulse. Then, because of slight heterogeneities in the static magnetic field, the spins dephase as they all rotate at slightly different Larmor frequencies. If you reverse their phases using a 180 degree RF pulse, the spins eventually return to their original configuration, all in phase (the echo). When you think about it, the formation of spin echoes during MRI is nearly as “magical” as the reformation of the dye blob in Taylor’s cylinder.


Taylor also analyzes how small machines can “swim” at low Reynolds number. He even built small devices: one a machine that zooms through water but just sits there in a viscous fluid, and another that has a helix for a tail and swims—slowly but steadily—through the viscous fluid. This example reminds me of both Purcell’s article and the research of Howard Berg, who studies how E. coli bacteria swim.

To learn more about Taylor’s life and work, watch Katepalli Sreenivasan’s lecture, or read The Life and Legacy of G. I. Taylor by George Batchelor.


Friday, October 31, 2014

The Biological Risk of Ultraviolet Light From the Sun

In Section 14.9 (Blue and Ultraviolet Radiation) of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the biological impact of ultraviolet radiation from the sun. Figure 14.28 in IPMB illustrates a remarkable fact about UV light: only a very narrow slice of the spectrum presents a risk for damaging our DNA. Why? For shorter wavelengths, the UV light incident upon the earth’s atmosphere is almost entirely absorbed by ozone and never reaches the earth’s surface. For longer wavelengths, the UV photons do not have enough energy to damage DNA. Only what is known as UVB radiation (wavelengths of 280–315 nm) poses a major risk.

This does not mean that other wavelengths of UV light are always harmless. If the source of UV radiation is, say, a tanning booth rather than the sun, then you are not protected by miles of ozone-containing atmosphere, and the amount of dangerous short-wavelength UV light depends on the details of the light source and the light’s ability to penetrate our body. Also, UVA radiation (315–400 nm) is not entirely harmless. UVA can penetrate into the dermis, and it can cause skin cancer by other mechanisms besides direct DNA damage, such as production of free radicals or suppression of the body’s immune system. Nevertheless, Fig. 14.28 shows that UVB light from the sun is particularly effective at harming our genes.

Russ and I obtained Fig. 14.28 from a book chapter by Sasha Madronich (“The Atmosphere and UV-B Radiation at Ground Level,” In Environmental UV Photobiology, Plenum, New York, 1993). Madronich begins
Ultraviolet (UV) radiation emanating from the sun travels unaltered until it enters the earth’s atmosphere. Here, absorption and scattering by various gases and particles modify the radiation profoundly, so that by the time it reaches the terrestrial and oceanic biospheres, the wavelengths which are most harmful to organisms have been largely filtered out. Human activities are now changing the composition of the atmosphere, raising serious concerns about how this will affect the wavelength distribution and quantity of ground-level UV radiation.
Madronich wrote his article in the early 1990s, when scientists were concerned about the development of an “ozone hole” in the atmosphere over Antarctica. Laws limiting the release of chlorofluorocarbons, which catalyze the breakdown of ozone, have resulted in a slow improvement in the ozone situation. Yet, the risk of skin cancer continues to be quite sensitive to ozone concentration in the atmosphere.

Our exposure to UV light is also sensitive to the angle of the sun overhead. Figure 14.28 suggests that at noon in lower latitudes, when the sun is directly overhead, the UV exposure is about three times greater than when the sun is at an angle of 45 degrees (here in Michigan, this would be late afternoon in June, or noon in September; the sun never makes it to 45 degrees in December). The moral of this story: Beware of exposure to UV light when frolicking on the Copacabana beach at noon on a sunny day in January!

Friday, October 24, 2014

A Log-Log Plot of the Blackbody Spectrum

Section 14.7 (Thermal Radiation) of the 4th edition of Intermediate Physics for Medicine and Biology contains one of my favorite illustrations: Figure 14.24, which compares the blackbody spectrum as a function of wavelength λ and as a function of frequency ν. One interesting feature of the blackbody spectrum is that its peak (the wavelength or frequency for which the most thermal radiation is emitted) is different depending on whether you plot it as a function of wavelength (Wλ(λ,T), in units of W m−3) or of frequency (Wν(ν,T), in units of W s m−2). The units make more sense if we express the units of Wλ as W m−2 per m, and the units of Wν as W m−2 per Hz.
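The relationship between the two spectra follows from requiring that the same power be emitted in corresponding intervals, with ν = c/λ:

$$ W_\nu(\nu,T)\,|d\nu| = W_\lambda(\lambda,T)\,|d\lambda| \qquad\Rightarrow\qquad W_\nu = W_\lambda \left|\frac{d\lambda}{d\nu}\right| = \frac{\lambda^2}{c}\, W_\lambda . $$

The Jacobian factor λ2/c is why the two curves peak at different places in the spectrum.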

A few weeks ago I discussed the book The First Steps in Seeing, in which the blackbody spectrum was plotted using a log-log scale. This got me to thinking, “I wonder how Fig. 14.24 would look if all axes were logarithmic?” The answer is shown below.

Figure 14.24 from Intermediate Physics for Medicine and Biology, plotted using a log-log scale.
The caption for Fig. 14.24 is “The transformation from Wλ(λ,T) to Wν(ν,T) is such that the same amount of power per unit area is emitted in wavelength interval (λ, dλ) and the corresponding frequency interval (ν, dν). The spectrum shown is for a blackbody at 3200 K.” I corrected the wrong temperature T in the caption as printed in the 4th edition.

The bottom right panel of the above figure is a plot of Wλ versus λ. For this temperature the spectrum peaks just a bit below λ = 1 μm. At longer wavelengths, it falls off approximately as λ−4 (shown as the dashed line, known as the Rayleigh-Jeans approximation). At short wavelengths, the spectrum cuts off abruptly, falling exponentially.

The top left panel contains a plot of Wν versus ν. The spectrum peaks at a frequency just below 200 THz. At low frequencies it increases approximately as ν2 (again, the Rayleigh-Jeans approximation). At high frequencies the spectrum falls dramatically and exponentially.

The connection between these two plots is illustrated in the upper right panel, which plots the relationship ν = c/λ. This equation has nothing to do with blackbody radiation, but merely shows a general relationship between frequency, wavelength, and the speed of light for electromagnetic radiation.

Why is it useful to show these functions in a log-log plot? First, it reinforces the concepts Russ Hobbie and I introduced in Chapter 2 of IPMB (Exponential Growth and Decay). In a log-log plot, power laws appear as straight lines. Thus, in the book’s version of Fig. 14.24 the equation ν = c/λ is a hyperbola, but in the log-log version this is a straight line with a slope of negative one. Furthermore, the Rayleigh-Jeans approximation implies a power-law relationship, which is nicely illustrated on a log-log plot by the dashed line. In the book’s version of the figure, Wλ falls off at both large and small wavelengths, and at first glance the rates of fall off look similar. You don’t really see the difference until you look at very small values of Wλ, which are difficult to see in a linear plot but are apparent in a logarithmic plot. The falloff at short wavelengths is very abrupt while the decay at long wavelengths is gradual. This difference is even more striking in the book’s plot of Wν. The curve doesn’t even go all the way to zero frequency in Fig. 14.24, making its limiting behavior difficult to judge. The log-log plot clearly shows that at low frequencies Wν rises as ν2.

Both the book’s version and the log-log version illustrate how the two functions peak at different regions of the electromagnetic spectrum, but for this point the book’s linear plot may be clearer. Another advantage of the linear plot is that I have an easier time estimating the area under the curve, which is important for determining the total power emitted by the blackbody and the Stefan-Boltzmann law. Perhaps there is some clever way to estimate areas under a curve on a log-log plot, but it seems to me the log plot exaggerates the area under the small frequency section of the curve and understates the area under the large frequencies (just as on a map the Mercator projection magnifies the area of Greenland and Antarctica). If you want to understand how these functions behave completely, look at both the linear and log plots.

Yet another way to plot these functions would be on a semilog plot. The advantage of semilog is that an exponential falloff shows up as a straight line. I will leave that plot as an exercise for the reader.
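For anyone who wants to try that exercise, or to regenerate the log-log figure above, here is a minimal matplotlib sketch, assuming the standard Planck expressions for the spectral exitance:

```python
import numpy as np
import matplotlib.pyplot as plt

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI units
T = 3200.0                                  # temperature in kelvin

lam = np.logspace(-7, -3, 500)   # wavelength, 0.1 um to 1 mm
nu = np.logspace(11, 16, 500)    # frequency, 0.1 THz to 10 PHz

# Spectral exitance per wavelength (W m^-2 per m) and per frequency (W m^-2 per Hz)
W_lam = (2 * np.pi * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1)
W_nu = (2 * np.pi * h * nu**3 / c**2) / (np.exp(h * nu / (kB * T)) - 1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.loglog(nu, W_nu)    # change to ax1.semilogy for the semilog exercise
ax1.set_xlabel("frequency (Hz)")
ax1.set_ylabel("W_nu (W m^-2 Hz^-1)")
ax2.loglog(lam, W_lam)
ax2.set_xlabel("wavelength (m)")
ax2.set_ylabel("W_lam (W m^-2 m^-1)")
plt.tight_layout()
plt.show()
```

Swapping loglog for semilogy turns the exponential Wien tails into straight lines, which is the point of the semilog exercise.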

For those who want to learn about the derivation and history of the blackbody spectrum, I recommend Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (although any good modern physics book should discuss this topic). A less mathematical but very intuitive description of why Wλ and Wν peak at different parts of the spectrum is given in The Optics of Life. For a plot of photon number (rather than energy radiated) as a function of λ or ν, see The First Steps in Seeing.

Friday, October 17, 2014

A Theoretical Model of Magneto-Acoustic Imaging of Bioelectric Currents

Twenty years ago, I became interested in magneto-acoustic imaging, primarily influenced by the work of Bruce Towe that was called to my attention by my dissertation advisor and collaborator John Wikswo. (See, for example, Towe and Islam, “A Magneto-Acoustic Method for the Noninvasive Measurement of Bioelectric Currents,” IEEE Trans. Biomed. Eng., Volume 35, Pages 892–894, 1988). The result was a paper by Wikswo, Peter Basser, and myself (“A Theoretical Model of Magneto-Acoustic Imaging of Bioelectric Currents,” IEEE Trans. Biomed. Eng., Volume 41, Pages 723–728, 1994). This was my first foray into biomechanics, a subject that has become increasingly interesting to me, to the point where now it is the primary focus of my research (but that’s another story; for a preview look here).

A Treatise on the Mathematical Theory of Elasticity, by A. E. H. Love.
I started learning about biomechanics mainly through my friend Peter Basser. We both worked at the National Institutes of Health in the early 1990s. Peter used continuum models in his research a lot, and owned a number of books on the subject. He also loved to travel, and would often use his leftover use-or-lose vacation days at the end of the year to take trips to exotic places like Kathmandu. When he was out of town on these adventures, he left me access to his personal library, and I spent many hours in his office reading classic references like Schlichting’s Boundary Layer Theory and Love’s A Treatise on the Mathematical Theory of Elasticity. Peter and I also would read each other’s papers, and I learned much continuum mechanics from his work. (NIH had a rule that someone had to sign a form saying they read and approved a paper before it could be submitted for publication, so I would give my papers to Peter to read and he would give his to me.) In this way, I became familiar enough with biomechanics to analyze magneto-acoustic imaging. Interestingly, we published our paper in the same year Basser began publishing his research on MRI diffusion tensor imaging, for which he is now famous (see here).

As with much of my research, our paper on magneto-acoustic imaging addressed a simple “toy model”: an electric dipole in the center of an elastic, conducting sphere exposed to a uniform magnetic field. We were able to calculate the tissue displacement and pressure analytically for the cases of a magnetic field parallel and perpendicular to the dipole. One of my favorite results in the paper was that we found a close relationship between magneto-acoustic imaging and biomagnetism.
“Magneto-acoustic pressure recordings and biomagnetic measurements image action currents in an equivalent way: they both have curl J [the curl of the current density] as their source.”
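The connection can be seen in one line of vector calculus (a sketch, not the full derivation in the paper): the Lorentz body force on the tissue is $\mathbf{f} = \mathbf{J} \times \mathbf{B}$, and its divergence acts as the source of pressure. For a uniform static field $\mathbf{B}$,

$$ \nabla\cdot(\mathbf{J}\times\mathbf{B}) = \mathbf{B}\cdot(\nabla\times\mathbf{J}) - \mathbf{J}\cdot(\nabla\times\mathbf{B}) = \mathbf{B}\cdot(\nabla\times\mathbf{J}) , $$

the same curl of the current density that is the source in biomagnetic measurements.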
For about ten years, our paper had little impact. A few people cited it, including Amalric Montalibet and later Han Wen, who each developed a method to use ultrasound and the Lorentz force to generate electrical current in tissue. I’ve described this work in a review article about the role of magnetic forces in medicine and biology, which I have mentioned previously in this blog. Then, in 2005 Bin He began citing our work in a long list of papers about magnetoacoustic tomography with magnetic induction, which again I’ve written about previously. His work generated so much interest in our paper that in 2010 alone it was cited 19 times, according to Google Scholar. Of course, it is gratifying to see your work have an impact.

But the story continues with a more recent study by Pol Grasland-Mongrain of INSERM in France. Building on Montalibet’s work, Grasland-Mongrain uses an ultrasonic pulse and the Lorentz force to induce a voltage that he can detect with electrodes. The resulting electrical data can then be analyzed to determine the distribution of electrical conductivity (see Ammari, Grasland-Mongrain, et al. for one way to do this mathematically). In many ways, their technique is in competition with Bin He’s MAT-MI as a method to image conductivity.

Grasland-Mongrain also publishes his own blog about medical imaging. (Warning: The website is in French, and I have to rely on Google Translate to read it. It is my experience that Google has a hard time translating technical writing.) There he discusses his most recent paper, about imaging shear waves using the Lorentz force. Interestingly, shear waves in tissue are one of the topics Russ Hobbie and I added to the 5th edition of Intermediate Physics for Medicine and Biology, due out next year. Grasland-Mongrain’s work has been highlighted in Physics World and Physics Focus, and a paper about it appeared this year in Physical Review Letters, the most prestigious of all physics journals (and one I’ve never published in, to my chagrin).

I am amazed by what can happen in twenty years.


As a postscript, let me add a plug for toy models. Russ and I use a lot of toy models in IPMB. Even though such simple models have their limitations, I believe they provide tremendous insight into physical phenomena. I recently reviewed a paper in which the authors had developed a very sophisticated and complex model of a phenomenon, but examination of a toy model would have told them that the signal they calculated was far, far too small to be observable. Do the toy model first. Then, once you have the insight, make your model more complex.

Friday, October 10, 2014

John H Hubbell

In the references at the end of Chapter 15 (Interaction of Photons and Charged Particles with Matter) in the 4th edition of Intermediate Physics for Medicine and Biology, you will find a string of publications authored by John H. Hubbell (1925–2007), covering a 27-year period from 1969 until 1996. Data from his publications are plotted in Fig. 15.2 (Total cross section for the interactions of photons with carbon vs. photon energy), Fig. 15.3 (Cross sections for the photoelectric effect and incoherent and coherent scattering from lead), Fig. 15.8 (The coherent and incoherent differential cross sections as a function of angle for 100-keV photons scattering from carbon, calcium, and lead), Fig. 15.14 (Fluorescence yields from K-, L-, and M-shell vacancies as a function of atomic number Z), and Fig. 15.16 (Coherent and incoherent attenuation coefficients and the mass energy absorption coefficient for water).

Hubbell’s 1982 paper “Photon Mass Attenuation and Energy-Absorption Coefficients from 1 keV to 20 MeV” (International Journal of Applied Radiation and Isotopes, Volume 33, Pages 1269–1290) has been cited 976 times according to the Web of Science. It has been cited so many times that it was selected as a citation classic, and Hubbell was invited to write a one-page reminiscence about the paper. It began modestly
Some papers become highly cited due to the creativity, genius, and vision of the authors, presenting seminal work stimulating and opening up new and multiplicative lines of research. Another, more pedestrian class of papers is “house-by-the-side-of-the-road” works, highly cited simply because these papers provide tools required by a substantial number of researchers in a single discipline or perhaps in several diverse disciplines, as is here the case.
At the time of his death, the International Radiation Physics Society Bulletin published the following obituary
The International Radiation Physics Society (IRPS) lost one of its major founding members, and the field of radiation physics one of its advocates and contributors of greatest impact, with the death this spring of John Hubbell.

John was born in Michigan in 1925, served in Europe in World War II [he received a Bronze Star], and graduated from the University of Michigan with a BSE (physics) in 1949 and MS (physics) in 1950. He then joined the National Bureau of Standards (NBS), later NIST, where he worked during his entire career. He married Jean Norford in 1955, and they had three children. He became best known and cited for National Standards Reference Data Series Report 29 (1969), “Photon Cross Sections, Attenuation Coefficients, and Energy Absorption Coefficients from 10 keV to 100 GeV.” He was one of the two leading founding members of the International Radiation Physics Society in 1985, and he served as its President 1994–97. While he retired from NIST in 1988, he remained active there and in the affairs of IRPS, until the stroke that led to his death this year.
Learn more about John Hubbell here and here.

Friday, October 3, 2014

Update on the 5th edition of IPMB

A few weeks ago, Russ Hobbie and I submitted the 5th edition of Intermediate Physics for Medicine and Biology to our publisher. We are not done yet; page proofs should arrive in a few months. The publisher is predicting a March publication date. I suppose whether we meet that target will depend on how fast Russ and I can edit the page proofs, but I am nearly certain that the 5th edition will be available for fall 2015 classes (for summer 2015, I am hopeful but not so sure). In the meantime, use the 4th edition of IPMB.

What is in store for the 5th edition? No new chapters; the table of contents will look similar to the 4th edition. But there are hundreds—no, thousands—of small changes, additions, improvements, and upgrades. We’ve included many new up-to-date references, and lots of new homework problems. Regular readers of this blog may see some familiar additions, which were introduced here first. We tried to cut as well as add material to keep the book the same length. We won’t know for sure until we see the page proofs, but we think we did a good job keeping the size about constant.

We found several errors in the 4th edition when preparing the 5th. This week I updated the errata for the 4th edition, to include these mistakes. You can find the errata at the book website, https://sites.google.com/view/hobbieroth. I won’t list here the many small typos we uncovered, and all the misspellings of names are just too embarrassing to mention in this blog. You can see the errata for those. But let me provide some important corrections that readers will want to know about, especially if using the book for a class this fall or next winter (here in Michigan we call the January–April semester "winter"; those in warmer climates often call it spring).
  • Page 78: In Problem 61, we dropped a key minus sign: “90 mV” should be “−90 mV”. This was correct in the 3rd edition (Hobbie), but somehow the error crept into the 4th (Hobbie and Roth). I can’t figure out what was different between the 3rd and 4th editions that could cause such mistakes to occur.
  • Page 137: The 4th edition claimed that at a synapse the neurotransmitter crosses the synaptic cleft and “enters the next cell.” Generally a neurotransmitter doesn’t “enter” the downstream cell, but is sensed by a receptor in the membrane that triggers some response.
  • Page 338: I have already told you about the mistake in the Bessel function identity in Problem 10 of Chapter 12. For me, this was THE MOST ANNOYING of all the errors we have found. 
  • Page 355: In Problem 12 about sound and hearing, I used an unrealistic value for the threshold of pain, 10−4 W m−2. Table 13.1 had it about right, 1 W m−2. The value varies between people, and sometimes I see it quoted as high as 10 W m−2. I suggest we use 1 W m−2 in the homework problem. Warning: the solution manual (available to instructors who contact Russ or me) is based on the problem as written in the 4th edition, not on what it would be with the corrected value.
  • Page 355: Same page, another screw-up. Problem 16 is supposed to show how during ultrasound imaging a coupling medium between the transducer and the tissue can improve transmission. Unfortunately, in the problem I used a value for the acoustic impedance that is about a factor of a thousand lower than is typical for tissue. I should have used Ztissue = 1.5 × 106 Pa s m−1. This should have been obvious from the very low transmission coefficient that results from the impedance mismatch caused by my error. Somehow, the mistake didn’t sink in until recently. Again, the solution manual is based on the problem as written in the 4th edition.
  • Page 433: Problem 30 in Chapter 15 is messed up. It contains two problems, one about the Thomson scattering cross section, and another (parts a and b) about the fraction of energy due to the photoelectric effect. Really, the second problem should be Problem 31. But making that change would require renumbering all subsequent problems, which would be a nuisance. I suggest calling the second part of Problem 30 “Problem 30½.”
  • Page 523: When discussing a model for the T1 relaxation time in magnetic resonance imaging, we write “At long correlation times T1 is proportional to the Larmor frequency, as can be seen from Eq. 18.34.” Well, a simple inspection of Eq. 18.34 reveals that T1 is proportional to the SQUARE of the Larmor frequency in that limit (a short sketch of why appears after this list). This is also obvious from Fig. 18.12, where a change in Larmor frequency of about a factor of three results in a change in T1 of nearly a factor of ten.
  • Page 535: In Chapter 18 we discuss how the blood flow speed v, the repetition time TR, and the slice thickness Δz give rise to flow effects in MRI. Following Eq. 18.56, we take the limit when v is much greater than “TR/Δz.” I always stress to my students that units are their friends. They can spot errors by analyzing if their equation is dimensionally correct. CHECK IF THE UNITS WORK! Clearly, I didn’t take my own advice in this case: the ratio should be Δz/TR, which at least has the units of a speed.
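Regarding the T1 erratum above, here is a minimal sketch, assuming the standard single-correlation-time form that expressions like Eq. 18.34 take:

$$ \frac{1}{T_1} \propto \frac{\tau_C}{1+\omega_0^2\,\tau_C^2} \;\longrightarrow\; \frac{1}{\omega_0^2\,\tau_C} \qquad (\omega_0\tau_C \gg 1), $$

so in the long-correlation-time limit T1 grows as the square of the Larmor frequency ω0.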
For all these errors, I humbly apologize. Russ and I redoubled our effort to remove mistakes from the 5th edition, and we will redouble again when the page proofs arrive. In the meantime, if you find still more errors in the 4th edition, please let us know. If the mistake is in the 4th edition, it could well carry over to the 5th edition if we don’t root it out immediately.

Friday, September 26, 2014

The First Steps in Seeing

The First Steps in Seeing, by Robert Rodieck.
Russ Hobbie and I discuss the eye and vision in Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology. But we just barely begin to describe the complexities of how we perceive light. If you want to learn more, read The First Steps in Seeing, by Robert Rodieck. This excellent book explains how the eye works. The preface states
This book is about the eyes—how they capture an image and convert it to neural messages that ultimately result in visual experience. An appreciation of how the eyes work is rooted in diverse areas of science—optics, photochemistry, biochemistry, cellular biology, neurobiology, molecular biology, psychophysics, psychology, and evolutionary biology. This gives the study of vision a rich mixture of breadth and depth.

The findings related to vision from any one of these fields are not difficult to understand in themselves, but in order to be clear and precise, each discipline has developed its own set of words and conceptual relations—in effect its own language—and for those wanting a broad introduction to vision, these separate languages can present more of an impediment to understanding than an aid. Yet what lies beneath the words usually has a beautiful simplicity.

My aim in this book is to describe how we see in a manner understandable to all. I’ve attempted to restrict the number of technical terms, to associate the terms that are used with a picture or icon that visually expresses what they mean, and to develop conceptual relations according to arrangements of these icons, or by other graphical means. Experimental findings have been recast in the natural world whenever possible, and broad themes attempt to bring together different lines of thought that are usually treated separately.

The main chapters provide a thin thread that can be read without reference to other books. They are followed by some additional topics that explore certain areas in greater depth, and by notes that link the chapters and topics to the broader literature.

My intent is to provide you with a framework for understanding what is known about the first steps in seeing by building upon what you already know.
Rodieck explains things in a quantitative, almost “physicsy” way. For instance, he imagines a person staring at the star Polaris, and estimates the number of photons (5500) arriving at the eye each tenth of a second (approximately the time required for visual perception), then determines their distribution on the retina, finds how many are at each wavelength, and how many per cone cell.

Color vision is analyzed, as are the mechanisms of how rhodopsin responds to a photon, how the photoreceptor transduces that response into a change in membrane polarization, how the retina responds with such a large dynamic range (“the range of vision extends from a catch rate of about one photon per photoreceptor per hour to a million per second”), and how eye movements hold an image steady on the retina. There’s even a discussion of photometry, with a table similar to the one I presented last week in this blog. I learned that the unit of retinal illuminance is the troland (td), defined as the luminance (in candelas per square meter) times the pupil area (in square millimeters); for example, a display with a luminance of 100 cd per square meter viewed through a pupil of area 10 square millimeters produces a retinal illuminance of 1000 td.

Like IPMB, Rodieck ends his book with several appendices, the first of which is about angles. His appendix on blackbody radiation includes a figure showing the Planck function versus frequency plotted on log-log paper (I’ve always seen it plotted on linear axes, but the log-log plot helps clarify the behavior at very large and very small frequencies). The photon emission from the surface of a blackbody as a function of temperature is 1.52 × 10¹⁵ T³ photons per second per square meter (Rodieck does everything in terms of the number of photons). The factor of temperature cubed is not a typo: Stefan’s law contains T³ rather than T⁴ when written in terms of photon number, because the mean energy per photon is itself proportional to T. A lovely appendix analyzes the Poisson distribution, and another compares frequency and wavelength distributions.
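You can check Rodieck’s constant yourself. Integrating the Planck spectrum in photon (rather than energy) units over all frequencies gives a photon emittance of (2π/c²)(k/h)³ · 2ζ(3) · T³. Here is a short back-of-the-envelope check (my calculation, not Rodieck’s derivation):

    import math
    from scipy.constants import c, h, k   # speed of light, Planck, Boltzmann

    zeta3 = 1.2020569   # Riemann zeta(3), Apery's constant

    # Photon emittance prefactor from integrating the Planck spectrum
    # in photon units: (2*pi/c**2) * (k/h)**3 * 2*zeta(3)
    prefactor = (2 * math.pi / c**2) * (k / h) ** 3 * 2 * zeta3
    print(f"{prefactor:.3g} T^3")   # about 1.52e15, matching Rodieck's value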

The best feature of The First Steps in Seeing is the illustrations. This is a beautiful book. I suspect Rodieck read Edward Tufte’s The Visual Display of Quantitative Information, because his figures and plots elegantly make their points with little superfluous clutter. I highly recommend this book.

Friday, September 19, 2014

Lumens, Candelas, Lux, and Nits

In Chapter 14 (Atoms and Light) of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss photometry, the measurement of electromagnetic radiation and its ability to produce a human visual sensation. I find photometry interesting mainly because of all the unusual units.

Let’s start by assuming you have a source of light emitting a certain amount of energy per second, or in other words with a certain power in watts. This is called the radiant power or radiant flux, and it is a fundamental concept in radiometry. But how do we perceive such a source of light? That is a question in photometry. Our perception depends on the wavelength of the light. If the light is all in the infrared or ultraviolet, we won’t see anything; if it is in the visible spectrum, our perception varies with wavelength. In fact, the situation is even more complicated than this, because our perception depends on whether we are using the cones in the retina of our eye to see bright light in color (photopic vision) or the rods to see dim light in black and white (scotopic vision). Moreover, our ability to see varies among individuals. The usual convention is to assume photopic vision, and to say that a source radiating a power of one watt of light at a wavelength of 555 nm (green light, the wavelength the eye is most sensitive to) has a luminous flux of 683 lumens.
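To make the conversion concrete, here is a little Python sketch for monochromatic light. The V values are a few approximate points from the CIE photopic luminosity curve, entered by hand:

    # Sketch of the radiant-to-luminous conversion for monochromatic light.
    # V maps wavelength (nm) to the approximate photopic luminosity function.
    V = {450: 0.038, 500: 0.323, 555: 1.000, 600: 0.631, 650: 0.107}

    def luminous_flux(power_watts, wavelength_nm):
        """Luminous flux in lumens of a monochromatic source."""
        return 683.0 * V[wavelength_nm] * power_watts

    print(luminous_flux(1.0, 555))   # 683 lm, by definition
    print(luminous_flux(1.0, 650))   # about 73 lm: red light yields far fewer lumens per watt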

The light source may emit different amounts of light in different directions. In radiometry, the radiant intensity is the power emitted per solid angle, in units of watts per steradian. We can define an analogous photometric unit for the luminous intensity, the lumen per steradian, also called the candela. The candela is one of seven “SI base units” (the others are the kilogram, meter, second, ampere, mole, and kelvin). Russ and I mention the candela in Table 14.6, a large table that compares radiometric, photometric, and actinometric quantities. We also define it in the text, using the old-fashioned name “candle” rather than candela.

Often you want to know the power incident per unit area. In radiometry, this is the irradiance, measured in watts per square meter. In photometry, the analogous illuminance is measured in lumens per square meter, also called the lux.

Finally, the radiance of a surface is the radiant power per solid angle per unit surface area (W sr−1 m−2). The analogous photometric quantity is the luminance, which is measured in units of lumen sr−1 m−2, or candela m−2, or lux sr−1, or nit. The brightness of a computer display is measured in nits.

In summary, below is an abbreviated version of Table 14.6 in IPMB:

    Radiometry                    Photometry
    Radiant power (W)             Luminous flux (lumen)
    Radiant intensity (W sr−1)    Luminous intensity (candela)
    Irradiance (W m−2)            Illuminance (lux)
    Radiance (W sr−1 m−2)         Luminance (nit)
Where did the relationship between 1 W and 683 lumens come from? Before electric lights, a candle was a major source of light, and a typical candle emits about 1 candela. The relationship between the watt and the lumen is somewhat analogous to the relationship between absolute temperature and thermal energy, or between a mole and a number of molecules. That puts the conversion factor of 683 lumens per watt in the same class as Boltzmann’s constant (1.38 × 10⁻²³ J per K) and Avogadro’s number (6.02 × 10²³ molecules per mole).

Friday, September 12, 2014

More about the Stopping Power and the Bragg Peak

The Bragg peak is a key concept when studying the interaction of protons with tissue. In Chapter 16 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
Protons are also used to treat tumors. Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak in Fig. 16.51 [see a similar figure here]. Placing an absorber in the proton beam before it strikes the patient moves the Bragg peak closer to the surface. Various techniques, such as rotating a variable-thickness absorber in the beam, are used to shape the field by spreading out the Bragg peak (Fig. 16.52) [see a similar figure here].
Figure 16.52 is very interesting, because it shows a nearly uniform dose throughout a region of tissue produced by a collection of Bragg peaks, each reaching a maximum at a different depth because the protons have different initial energies. The obvious question is: how many protons should one use for each energy to produce a uniform dose in some region of tissue? I have discussed the Bragg peak before in this blog, when I presented a new homework problem to derive an analytical expression for the stopping power as a function of depth. An extension of this problem can be used to answer this question. Russ and I considered including this extended problem in the 5th edition of IPMB (which is nearing completion), but it didn’t make the cut. Discarded scraps from the cutting room floor make good blog material, so I present you, dear reader, with a new homework problem.
Problem 31 3/4 A proton of kinetic energy T is incident on the tissue surface (x = 0). Assume its stopping power s(x) at depth x is given by
$$ s(x) = \frac{C}{\sqrt{T^2 - 2Cx}}, \qquad x < \frac{T^2}{2C}, $$
and s(x) = 0 at greater depths, where C is a constant characteristic of the tissue.
(a) Plot s(x) versus x. Where does the Bragg peak occur?
(b) Now, suppose you have a distribution of N protons. Let the number with incident energy between T and T+dT be A(T)dT, where
$$ A(T) = \frac{BT}{\sqrt{T_2^2 - T^2}}, \qquad T_1 < T < T_2, $$
and A(T) = 0 otherwise. Determine the constant B by requiring
$$ \int_{T_1}^{T_2} A(T)\, dT = N. $$
Plot A(T) vs T.
(c) If x is greater than T₂²/2C, what is the total stopping power S(x)? Hint: think before you calculate; how many particles can reach a depth greater than T₂²/2C?

(d) If x is between T₁²/2C and T₂²/2C, only particles with energy from √(2Cx) to T₂ contribute to the stopping power at x, so
$$ S(x) = \int_{\sqrt{2Cx}}^{T_2} A(T)\, \frac{C}{\sqrt{T^2 - 2Cx}}\, dT. $$
Evaluate this integral. Hint: let u = T² − (2Cx + T₂²)/2.
(e) If x is less than T₁²/2C, all the particles contribute to the stopping power at x, so
$$ S(x) = \int_{T_1}^{T_2} A(T)\, \frac{C}{\sqrt{T^2 - 2Cx}}\, dT. $$
Evaluate this integral.

(f) Plot S(x) versus x. Compare your plot with that found in part (a), and with Fig. 16.52.
One reason this problem didn’t make the cut is that it is rather difficult. Let me know if you need the solution. The bottom line: this homework problem does a pretty good job of explaining the results in Fig. 16.52, and provides insight into how proton therapy is applied to a large tumor.
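If you would rather watch the spread-out Bragg peak emerge numerically than evaluate the integrals, here is a short Python sketch of the model in the problem. The parameter values T1, T2, and C are arbitrary choices for illustration, not realistic tissue numbers; a flat plateau should appear between depths T₁²/2C and T₂²/2C:

    import numpy as np
    import matplotlib.pyplot as plt

    # Arbitrary illustrative parameters: energies in MeV, C in MeV^2/cm,
    # so the ranges T^2/(2C) come out in cm.
    T1, T2, C = 80.0, 100.0, 250.0
    B = 1.0 / np.sqrt(T2**2 - T1**2)   # normalization from part (b), with N = 1

    # Energy grid, excluding the endpoint T2 where A(T) diverges
    T = np.linspace(T1, T2, 2000)[:-1]
    dT = T[1] - T[0]

    x = np.linspace(0.0, 1.1 * T2**2 / (2 * C), 500)   # depth grid (cm)
    S = np.zeros_like(x)
    for i, xi in enumerate(x):
        # Only protons whose range exceeds xi contribute; the small offset
        # keeps the square root away from zero on the discrete grid.
        reach = T**2 - 2 * C * xi > 1e-6
        S[i] = np.sum(B * T[reach] / np.sqrt(T2**2 - T[reach]**2)
                      * C / np.sqrt(T[reach]**2 - 2 * C * xi)) * dT

    plt.plot(x, S)
    plt.xlabel("depth x (cm)")
    plt.ylabel("total stopping power S(x)")
    plt.title("Spread-out Bragg peak from the distribution A(T)")
    plt.show()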