Friday, September 10, 2021

Is Shot Noise Also White Noise?

In Chapters 9 and 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss shot noise.

9.8.1 Shot Noise

The first (and smallest) limitation [on our ability to measure current] is called shot noise. It is due to the fact that the charge is transported by ions that move randomly and independently through the channels....

11.16.2 Shot Noise

Chapter 9 also mentioned shot noise, which occurs because the charge carriers have a finite charge, so the number of them passing a given point in a circuit in a given time fluctuates about an average value. One can show that shot noise is also white noise [my italics].
Introduction to Membrane Noise,
by Louis DeFelice.
How does one show that shot noise is white noise (independent of frequency)? I’m going to follow Lou DeFelice’s explanation in his book Introduction to Membrane Noise (cited in IPMB). I won’t give a rigorous proof. Instead, I’ll first state Campbell’s theorem (without proving it), and then show that the whiteness of shot noise is a consequence of that theorem.

Campbell’s Theorem

To start, I’ll quote DeFelice, but I will change the names of a few variables.
Suppose N impulses i(t) arrive randomly in the time interval T. The sum of these will result in a random noise signal I(t). This is shown qualitatively in Figure 78.1.

Below is my version of Fig. 78.1.

A diagram illustrating the sum of N impulses, i(t), each shown in red, arriving randomly in the time interval T. The blue curve represents their sum, I(t), and the green dashed line represents the average, <I(t)>. Adapted from Fig. 78.1 in Introduction to Membrane Noise by Louis DeFelice.

DeFelice shows that the average of I(t), which I’ll denote <I(t)>, is

$$\langle I(t) \rangle = \frac{N}{T} \int_{-\infty}^{\infty} i(t)\, dt .$$
Here he lets T and N both be large, but their ratio (the average rate at which the impulses arrive) remains finite.

He then shows that the variance of I(t), called σI², is

$$\sigma_I^2 = \frac{N}{T} \int_{-\infty}^{\infty} i^2(t)\, dt .$$
Finally, he writes

In order to calculate the spectral density of I(t) from i(t) we need Rayleigh’s theorem [also known as Parseval’s theorem]…
$$\int_{-\infty}^{\infty} i^2(t)\, dt = \int_{-\infty}^{\infty} |\hat{i}(f)|^2\, df ,$$
where î(f) is the Fourier transform of i(t) [and f is the frequency].

He concludes that the spectral density SI(f) is given by

$$S_I(f) = \frac{2N}{T}\, |\hat{i}(f)|^2 .$$

These three results (for the average, the variance, and the spectral density) constitute Campbell’s theorem.

Shot Noise

Now, let’s analyze shot noise by using Campbell’s theorem assuming the impulse is a delta function (zero everywhere except at t = 0 where it’s infinite). Set i(t) = q δ(t), where q is the charge of each discrete charge carrier.

First, the average <I(t)> is simply Nq/T, or the total charge divided by the total time. 

Second, the variance is the integral of the delta function squared. When any function is multiplied by a delta function and then integrated over time, you get that function evaluated at time zero. So, the integral of the square of the delta function gives the delta function itself evaluated at zero, which is infinity. Yikes! The variance of shot noise is infinite.
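In symbols (a sketch in my notation, using the Campbell’s-theorem expression for the variance given above):

$$\sigma_I^2 = \frac{N}{T} \int_{-\infty}^{\infty} \left[ q\, \delta(t) \right]^2 dt = \frac{N}{T}\, q^2\, \delta(0) \rightarrow \infty .$$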

Third, to get the spectral density of shot noise we need the Fourier transform of the delta function, which is simply a constant: î(f) = q. Therefore

$$S_I(f) = \frac{2N}{T}\, q^2 = 2 q \langle I \rangle .$$
The key point is that SI(f) is independent of frequency; it’s white.

DeFelice ends with

This [the expression for the spectral density] is the formula for shot noise first derived by Schottky (1918, pp. 541-567) in 1918. Evidently, the variance defined as
$$\sigma_I^2 = \int_0^{\infty} S_I(f)\, df$$
is again infinite; this is a consequence of the infinitely small width of the delta function.
As DeFelice reminds us, shot noise is white because the delta function is infinitely narrow. As soon as you assume i(t) has some width (perhaps the time it takes for a charge to cross the membrane), the spectrum will fall off at high frequencies, the variance won’t be infinite (thank goodness!), and the noise won’t be white. The bottom line is that shot noise is white because the Fourier transform of a delta function is a constant.
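If you’d like to see this numerically, below is a minimal Python sketch (my own, not from DeFelice; the rate, charge, and time step are arbitrary choices). It builds a shot-noise current from Poisson-distributed impulse counts, estimates the one-sided spectral density of the fluctuations, and compares it with the Schottky value 2q⟨I⟩.

import numpy as np

# Arbitrary illustrative parameters (my choices, not DeFelice's)
rate = 1.0e6   # average impulse rate N/T (impulses per second)
q = 1.6e-19    # charge per impulse (coulombs)
dt = 1.0e-8    # time step (s); each impulse occupies one sample of height q/dt
T = 0.01       # total simulated time (s)

n = int(T / dt)
rng = np.random.default_rng(seed=0)

# The number of impulses landing in each time bin is Poisson distributed
counts = rng.poisson(rate * dt, size=n)
I = q * counts / dt                   # the shot-noise current I(t)

# One-sided spectral density of the fluctuations (periodogram estimate)
fluct = I - I.mean()
S = 2 * np.abs(np.fft.rfft(fluct))**2 * dt / n

print(f"mean current <I> = {I.mean():.3e} A (expect {rate * q:.3e} A)")
print(f"Schottky 2q<I>   = {2 * q * I.mean():.3e} A^2/Hz")
print(f"S_I at low f     = {S[1:100].mean():.3e} A^2/Hz")
print(f"S_I at high f    = {S[-100:].mean():.3e} A^2/Hz")

The low- and high-frequency averages should both land near 2q⟨I⟩; the spectrum is roughly flat. Note that the finite time step dt acts like a finite impulse width: it cuts the spectrum off at the Nyquist frequency 1/(2dt), which is exactly why the simulated variance stays finite.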

Conclusion

Perhaps you’re thinking I haven’t helped you all that much. I merely changed your question from “why is shot noise white” to “how do I prove Campbell’s theorem.” You have a point. Maybe proving Campbell’s theorem can be the story of another post.

I met Lou DeFelice in 1984, when I was a graduate student at Vanderbilt University and he came to give a talk. In the summer of 1986, my PhD advisor John Wikswo and I traveled to Emory University to visit DeFelice and Robert DeHaan. During that trip, Wikswo and I were walking across the Emory campus when Wikswo decided he knew a short cut (he didn’t). He left the sidewalk and entered a forest, with me following behind him. After what seemed like half an hour of wandering through a thicket, we emerged from the woods at a back entrance to the Yerkes Primate Research Center. We’re lucky we weren’t arrested.

DeFelice joined the faculty at Vanderbilt in 1995, and we both worked there in the late 1990s. He was a physicist by training, but spent most of his career studying electrophysiology. Sadly, in 2016 he passed away.

Friday, September 3, 2021

The Unit of Vascular Resistance: A Naming Opportunity

The metric system describes mechanics using three fundamental units: the kilogram (kg, mass), the meter (m, distance), and the second (s, time). Often a combination of these three is given a name (called a derived unit), usually honoring a famous scientist. For example, a newton, the unit of force named after the English physicist and mathematician Isaac Newton (1642 – 1727), is a kg m s−2; a joule, the unit of energy named for English physicist James Joule (1818 – 1889), is a kg m2 s−2; a pascal, the unit of pressure named for French mathematician and physicist Blaise Pascal (1623 – 1662), is a kg m−1 s−2; a watt, the unit of power named for Scottish engineer James Watt (1736 – 1819), is a kg m2 s−3; and a rayl, the unit of acoustic impedance named for English physicist John Strutt (1842 – 1919), who is also known as Lord Rayleigh, is a kg m−2 s−1.

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the human circulatory system. We talk about blood pressure, p, which is usually expressed in mmHg or torr, but in the metric system is given in pascals. We also analyze blood flow or cardiac output, i, sometimes expressed in milliliters per minute, but properly given in m3 s−1. Then Russ and I introduce the vascular resistance.

We define the vascular resistance R in a pipe or a segment of the circulatory system as the ratio of pressure difference across the pipe or segment to the flow through it:

R = Δp/i .                  (1.58)

The units are Pa m−3 s. Physiologists use the peripheral resistance unit (PRU), which is torr ml−1 min.

What name is given to the Pa m−3 s, or equivalently the kg m−4 s−1? Sometimes it’s called the “acoustic ohm,” stressing its analogy to the electrical unit of the ohm (a volt per amp). If the unit for electrical resistance can honor a scientist, German physicist Georg Ohm (1789 – 1854), why can’t the unit for mechanical resistance do the same? Let’s name the unit for vascular resistance!

I know what you’re thinking: we already have a name, the peripheral resistance unit. True, but I see three disadvantages with the PRU. First, it’s based on oddball units (pressure in torr? time in minutes?), so it’s not standard metric. Second, sometimes it’s defined using the second rather than the minute, so it’s confusing and you always must be on your toes to avoid making a mistake. Third, it wastes the chance to honor a scientist. We can do better.

My first inclination was to name this unit after the French physician Jean Poiseuille (1797 – 1869). He is the hero of Sec. 1.17 in IPMB. His equation relating the pressure drop and flow through a tube—often called the Poiseuille law—explains much about blood flow. However, Poiseuille already has a unit. The coefficient of dynamic viscosity has units of kg m−1 s−1, which is sometimes called a poiseuille. It’s not used much, but it would be confusing to adopt it for vascular resistance in addition to viscosity. Moreover, the old cgs unit for viscosity, g cm−1 s−1, is also named for Poiseuille; it’s called the poise, and it is commonly used. With two units already, Poiseuille is out.

Henry Darcy (1803 – 1858) was a French engineer who made important contributions to hydraulics, including Darcy’s law for flow in a porous medium. However, an older unit of hydraulic permeability is the darcy. Having another unit named after Darcy (even if it’s an mks unit instead of an oddball obsolete unit) would complicate things. So, no to Mr. Darcy.

The Irish physicist and mathematician George Stokes (1819 – 1903) helped develop the theoretical justification for the Poiseuille law. I’m a big fan of Stokes. He seems like a perfect candidate. However, the cgs unit of kinematic viscosity, the cm2 s−1, is called the stokes. He’s taken.

The Poiseuille law is sometimes called the Hagen-Poiseuille law, after the German scientist Gotthilf Hagen (1797 – 1884). He would be a good candidate for the unit, and some might choose to call a kg m−4 s−1 a hagen. Why am I not satisfied with this choice? Hagen appears to be more of a hydraulic engineer than a biomedical scientist, and one theme in IPMB is to celebrate researchers who work at the interface between physics and physiology. Nope.

A portrait of William Harvey,
downloaded from wikipedia
(public domain).

My vote goes to William Harvey (1578 – 1657), the English physician who first discovered the circulation of blood. I can find no units already named for Harvey. He didn’t have a physics education, but he did make quantitative estimates of blood flow to help establish his hypothesis of blood circulation (such numerical calculations were uncommon in his day, but are in the spirit of IPMB). Harvey is a lot easier to pronounce than Poiseuille. Moreover, my favorite movie is Harvey.

We can name the kg m−4 s−1 as the harvey (Ha), and 1 Ha = 1.25 × 10−10 PRU (we may end up using gigaharveys when analyzing the peripheral resistance in people). One final advantage of the harvey: for those of you who disagree with me, you can claim that “Ha” actually stands for Hagen.
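Here’s the arithmetic behind that conversion, as a few lines of Python (a quick sketch; the 100 torr pressure drop and 5 L/min cardiac output at the end are my own round numbers for a typical person, not values from IPMB):

# Convert the peripheral resistance unit (PRU = torr min ml^-1) to harveys (Pa s m^-3)
torr = 133.322   # pascals
minute = 60.0    # seconds
ml = 1.0e-6      # cubic meters

PRU = torr * minute / ml            # 1 PRU expressed in Pa s m^-3 (harveys)
print(f"1 PRU = {PRU:.3e} Ha, so 1 Ha = {1 / PRU:.3e} PRU")

# Rough total peripheral resistance: ~100 torr drop at ~5 L/min cardiac output
R = (100 * torr) / (5.0e-3 / minute)
print(f"R = {R:.2e} Ha, or about {R / 1.0e9:.2f} gigaharveys")

The first line reproduces the 1.25 × 10−10 conversion factor, and the second shows that the whole circulation has a resistance of a few tenths of a gigaharvey.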

Ceaseless Motion: William Harvey’s Experiments in Circulation.

 
The trailer from the 1950 film Harvey,
starring James Stewart as Elwood P. Dowd.

Friday, August 27, 2021

Can Induced Electric Fields Explain Biological Effects of Power-Line Magnetic Fields?

Sometimes proponents of pseudoscience embrace nonsense, but other times they propose plausible-sounding ideas that are wrong because the numbers don’t add up. For example, suppose you are discussing the biological effects of power-line magnetic fields with a friend. Your friend might say something like this:
“You keep claiming that magnetic fields don’t have any biological effects. But suppose it’s not the magnetic field itself, but the electric field induced by the changing magnetic field that causes the effect. We know an electric field can stimulate nerves. Perhaps power-line effects operate like transcranial magnetic stimulation, by inducing electric fields.”
Well, there’s nothing absurd about this hypothesis. Transcranial magnetic stimulation does work by creating electric fields in the body via electromagnetic induction, and these electric fields can stimulate nerves. The qualitative idea is reasonable. But does it work quantitatively? If you do the calculation, the answer is no. The electric field induced by a power line is less than the endogenous electric field associated with the electrocardiogram. You don’t have to perform a difficult, detailed calculation to show this. A back-of-the-envelope estimation suffices. Below is a new homework problem showing you how to make such an estimate.
Section 9.10

Problem 36 ½. Estimate the electric field induced in the body by a power-line magnetic field, and compare it to the endogenous electric field in the body associated with the electrocardiogram.

(a) Use Eq. 8.25 to estimate the induced electric field, E. The magnetic field in the vicinity of a power line can be as strong as 5 μT (Possible Health Effects of Exposure to Residential Electric and Magnetic Fields, 1997, Page 32), and it oscillates at 60 Hz. The radius, a, of the current loop in our body is difficult to estimate, but take it as fairly large (say, half a meter) to ensure you do not underestimate the induced electric field.

(b) Estimate the endogenous electric field in the torso from the electrocardiogram, using Figures 7.19 and 7.23.

(c) Compare the electric fields found in parts (a) and (b). Which is larger? Explain how an induced electric field could have an effect if it is smaller than the electric fields already existing in the body.
Let’s go through the solution to this new problem. First, part (a). The amplitude of the magnetic field is 0.000005 T. The field oscillates with a period of 1/60 s, or about 0.017 s. The peak rate of change occurs during only a fraction of this period, and a reasonable approximation is to divide the period by 2π, giving a characteristic time of 0.0027 s over which the magnetic field changes. Thus, the rate of change dB/dt is 0.000005 T/0.0027 s, or about 0.002 T/s. Now use Eq. 8.25, E = (a/2) dB/dt (ignore the minus sign in the equation, which merely indicates the phase), with a = 0.5 m, to get E = 0.0005 V/m.

Now part (b). Figure 7.23 indicates that the QRS complex in the electrocardiogram has a magnitude of about ΔV = 0.001 V (one millivolt). Figure 7.19 shows that the distance between leads is on the order of Δr = 0.5 m. The magnitude of the electric field is approximately ΔV/Δr = 0.002 V/m.

In part (c) you compare the electric field induced by a power line, 0.0005 V/m, to the electric field in the body caused by the electrocardiogram, 0.002 V/m. The field produced by the ECG is four times larger. So, how can the induced electric field have a biological effect if we are constantly exposed to larger electric fields produced by our own body? I don’t know. It seems to me that would be difficult.
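For anyone who wants to check the arithmetic, here is the whole estimate as a short Python script (it simply automates the back-of-the-envelope numbers from parts (a) and (b)):

import math

# Part (a): field induced by a power-line magnetic field (Eq. 8.25)
B = 5.0e-6   # peak magnetic field (T)
f = 60.0     # frequency (Hz)
a = 0.5      # radius of the current loop in the body (m)
dBdt = 2 * math.pi * f * B          # peak rate of change of B (T/s)
E_induced = (a / 2) * dBdt
print(f"induced field    E = {E_induced:.1e} V/m")

# Part (b): endogenous field from the ECG
dV = 1.0e-3  # QRS amplitude (V)
dr = 0.5     # distance between leads (m)
E_endog = dV / dr
print(f"endogenous field E = {E_endog:.1e} V/m")
print(f"ratio endogenous/induced = {E_endog / E_induced:.1f}")

The ratio comes out close to four, matching the estimate above.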

Hart and Gandhi (1998)
Phys. Med. Biol.
43:3083–3099.
But wait! Our calculation in part (b) is really rough. Perhaps we should do a more detailed calculation. Rodney Hart and Om Gandhi did just that (“Comparison of cardiac-induced endogenous fields and power frequency induced exogenous fields in an anatomical model of the human body,”  Physics in Medicine & Biology, Volume 43, Pages 3083–3099, 1998). They found that during the QRS complex the endogenous electric field varied throughout the body, but it is usually larger than what we estimated. It’s giant in the heart itself, about 3 V/m. All through the torso it’s more than ten times what we found; for instance, in the intestines it’s 0.04 V/m. Even in the brain the field strength (0.014 V/m) is seven times larger than our estimate (0.002 V/m).

Moreover, the heart isn’t the only source of endogenous fields (although it’s the strongest). The brain, peripheral nerves, skeletal muscle, and the gut all produce electric fields. In addition, our calculation of the induced electric field is evaluated at the edge of the body, where the current loop is largest. Deeper within the torso, the field will be less. Finally, our value of 5 μT is extreme. Magnetic fields associated with power lines are usually about one tenth of this. In other words, in all our estimates we took values that favor the induced electric field over the endogenous electric field, and the endogenous electric field is still four times larger.

What do we conclude? The qualitative mechanism proposed by your friend is not ridiculous, but it doesn’t work when you do the calculation. The induced electric field would be swamped by the endogenous electric field.

The moral of the story is that proposed mechanisms must work both qualitatively and quantitatively. Doing the math is not an optional step to refine your hypothesis and make it more precise. You have to do at least an approximate calculation to decide if your idea is reasonable. That’s why Russ Hobbie and I emphasize solving toy problems and estimation in Intermediate Physics for Medicine and Biology. Without estimating how big effects are, you may go around saying things that sound reasonable but just aren’t true.

Friday, August 20, 2021

The Central Slice Theorem: An Example

The central slice theorem is key to understanding tomography. In Intermediate Physics for Medicine and Biology, Russ Hobbie and I ask the reader to prove the central slice theorem in a homework problem. Proofs are useful for their generality, but I often understand a theorem better by working an example. In this post, I present a new homework problem that guides you through every step needed to verify the central slice theorem. This example contains a lot of math, but once you get past the calculation details you will find it provides much insight.

The central slice theorem states that taking a one-dimensional Fourier transform of a projection is equivalent to taking the two-dimensional Fourier transform and evaluating it along one direction in frequency space. Our “object” will be a mathematical function (representing, say, the x-ray attenuation coefficient as a function of position). Here is a summary of the process, cast as a homework problem.

Section 12.4 

Problem 21½. Verify the central slice theorem for the object

(a) Calculate the projection of the object using Eq. 12.29,

Then take a one-dimensional Fourier transform of the projection using Eq. 11.59,
 
(b) Calculate the two-dimensional Fourier transform of the object using Eq. 12.11a,
Then transform (kx, ky) to (θ, k) by converting from Cartesian to polar coordinates in frequency space.
(c) Compare your answers to parts (a) and (b). Are they the same?


I’ll outline the solution to this problem, and leave it to the reader to fill in the missing steps. 

 
Fig. 12.12 from IPMB, showing how to do a projection.

The Projection 

Figure 12.12 shows that the projection is an integral of the object along various lines in the direction θ, as a function of displacement perpendicular to each line, x'. The integral becomes


Note that you must replace x and y by the rotated coordinates x' and y':

$$x = x' \cos\theta - y' \sin\theta , \qquad y = x' \sin\theta + y' \cos\theta .$$
You can verify that x² + y² = x'² + y'².

After some algebra, you’re left with integrals involving e−by'² (Gaussian integrals) such as those analyzed in Appendix K of IPMB. The three you’ll need are


The resulting projection is


Think of the projection as a function of x', with the angle θ being a parameter.

 

The One-Dimensional Fourier Transform

The next step is to evaluate the one-dimensional Fourier transform of the projection

The variable k is the spatial frequency. This integral isn’t as difficult as it appears. The trick is to complete the square of the exponent


Then make a variable substitution u = x' + ik/(2b). Finally, use those Gaussian integrals again. You get


This is our big result: the one-dimensional Fourier transform of the projection. Our next goal is to show that it’s equal to the two-dimensional Fourier transform of the object evaluated in the direction θ.

Two-Dimensional Fourier Transform

To calculate the two-dimensional Fourier transform, we must evaluate the double integral


The variables kx and ky are again spatial frequencies, and they make up a two-dimensional domain we call frequency space.

You can separate this double integral into the product of an integral over x and an integral over y. Solving these requires—guess what—a lot of algebra, completing the square, and Gaussian integrals. But the process is straightforward, and you get


Select One Direction in Frequency Space

If we want to focus on one direction in frequency space, we must convert to polar coordinates: kx = k cosθ and ky = k sinθ. The result is 

This is exactly the result we found before! In other words, we can take the one-dimensional Fourier transform of the projection, or the two-dimensional Fourier transform of the object evaluated in the direction θ in frequency space, and we get the same result. The central slice theorem works.

I admit, the steps I left out involve a lot of calculations, and not everyone enjoys math (why not?!). But in the end you verify the central slice theorem for a specific example. I hope this helps clarify the process, and provides insight into what the central slice theorem is telling us.
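If you’d rather let a computer do the checking, here is a numerical sketch in Python (my own; it is not part of the homework problem). It uses a shifted Gaussian as a stand-in object (not the object from the problem) and arbitrary values of θ and k; any well-behaved object should work:

import numpy as np

# Stand-in object: a shifted Gaussian (my arbitrary choice, not the problem's object)
b, x0, y0 = 2.0, 0.3, -0.2
def obj(x, y):
    return np.exp(-b * ((x - x0)**2 + (y - y0)**2))

theta = 0.7   # direction in frequency space (radians)
k = 3.0       # spatial frequency at which to test the theorem

s = np.linspace(-6, 6, 1201)          # integration grid
ds = s[1] - s[0]
XP, YP = np.meshgrid(s, s)            # (x', y') grid

# Projection: integrate the object along y', using the rotated coordinates
x = XP * np.cos(theta) - YP * np.sin(theta)
y = XP * np.sin(theta) + YP * np.cos(theta)
proj = obj(x, y).sum(axis=0) * ds     # projection as a function of x'

# One-dimensional Fourier transform of the projection, evaluated at k
ft1d = (proj * np.exp(-1j * k * s)).sum() * ds

# Two-dimensional Fourier transform of the object at (kx, ky) = (k cos, k sin)
kx, ky = k * np.cos(theta), k * np.sin(theta)
ft2d = (obj(XP, YP) * np.exp(-1j * (kx * XP + ky * YP))).sum() * ds**2

print(ft1d, ft2d)   # the two complex numbers should agree closely

Both transforms here use the same e−ikx convention; whatever convention your Eqs. 11.59 and 12.11a use, the two sides agree as long as you apply it consistently.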

Friday, August 13, 2021

John Schenck and the First Brain Selfie

Schenck, J. F. (2005)
Prog. Biophys. Mol. Biol.
87:185–204.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss biomagnetism, magnetic resonance imaging, and the biological effects of electromagnetic fields. We don’t, however, talk about the safety of static magnetic fields. If you want to learn more about that topic, I suggest an article by John Schenck:
Schenck, J. F. (2005) “Physical interactions of static magnetic fields with living tissues,” Prog. Biophys. Mol. Biol. Volume 87, Pages 185–204.
This paper appeared in a special issue of the journal Progress in Biophysics and Molecular Biology analyzing the health effects of magnetic fields. The abstract states:
Clinical magnetic resonance imaging (MRI) was introduced in the early 1980s and has become a widely accepted and heavily utilized medical technology. This technique requires that the patients being studied be exposed to an intense magnetic field of a strength not previously encountered on a wide scale by humans. Nonetheless, the technique has proved to be very safe and the vast majority of the scans have been performed without any evidence of injury to the patient. In this article the history of proposed interactions of magnetic fields with human tissues is briefly reviewed and the predictions of electromagnetic theory on the nature and strength of these interactions are described. The physical basis of the relative weakness of these interactions is attributed to the very low magnetic susceptibility of human tissues and the lack of any substantial amount of ferromagnetic material normally occurring in these tissues. The presence of ferromagnetic foreign bodies within patients, or in the vicinity of the scanner, represents a very great hazard that must be scrupulously avoided. As technology and experience advance, ever stronger magnetic field strengths are being brought into service to improve the capabilities of this imaging technology and the benefits to patients. It is imperative that vigilance be maintained as these higher field strengths are introduced into clinical practice to assure that the high degree of patient safety that has been associated with MRI is maintained.
The article discusses magnetic forces due to tissue susceptibility differences, magnetic torques caused by anisotropic susceptibilities, flow or motion-induced currents, magnetohydrodynamic pressure, and magnetic excitation of sensory receptors.

On the lighter side, below are excerpts from a 2015 General Electric press report that describes one of Schenck’s claims to fame: his brain was the first one imaged using a clinical 1.5 T MRI scanner.

Heady Times: This Scientist Took the First Brain Selfie and Helped Revolutionize Medical Imaging

Early one October morning 30 years ago, GE scientist John Schenck was lying on a makeshift platform inside a GE lab in upstate New York. The [lab itself] was put together with special non-magnetic nails because surrounding his body was a large magnet, 30,000 times stronger than the Earth’s magnetic field. Standing at his side were a handful of colleagues and a nurse. They were there to peer inside Schenck’s head and take the first magnetic resonance scan (MRI) of the brain…

[In the 1970s] GE imaging pioneer Rowland “Red” Redington… hired Schenck, a bright young medical doctor with a PhD in physics [to work on MRI]... Schenck spent days inside Redington’s lab researching giant magnets and nights and weekends tending to emergency room patients. “This was an exciting time,” Schenck remembers….

It took Schenck and the team two years to obtain a magnet strong enough to… achieve useful high-resolution images. The magnet... arrived in Schenck’s lab in the spring of 1982. Since there was very little research about the effects of such [a] strong magnetic field on humans, Schenck turned it on, asked a nurse to monitor his vitals, and went inside it for ten minutes.

The field did Schenck no harm and the team spent that summer building the first MRI prototype using [a] high-strength magnetic field. By October 1982 they were ready to image Schenck’s brain.

Many scientists at the time thought that at 1.5 tesla, signals from deep tissue would be absorbed by the body before they could be detected. “We worried that there would only be a big black hole in the center” of the image, Schenck says. But the first MRI imaging test was a success. “We got to see my whole brain,” Schenck says. “It was kind of exciting.”…

Schenck, now 76, still works at his GE lab and works on improving the machine. He’s been scanning his brain every year and looking for changes… “When we started, we didn’t know whether there would be a future,” he says. “Now there is an MRI machine in every hospital.”

Friday, August 6, 2021

Two-Semester Intermediate Course Sequence in Physics for the Life Sciences

This week I spoke at the American Association of Physics Teachers 2021 Summer Meeting. Getting to the meeting was easy; I just logged onto a website. Because of the Covid-19 pandemic, the entire conference was virtual and all the talks were prerecorded. A video of my talk—“Two-Semester Intermediate Course Sequence in Physics for the Life Sciences”—is posted below. If you want a powerpoint of the slides, you can find it here. As readers of this blog might suspect, the courses I describe are based on the textbook Intermediate Physics for Medicine and Biology

“Two-Semester Intermediate Course Sequence in Physics for the Life Sciences,” delivered at the AAPT 2021 Virtual Summer Meeting on August 2, 2021. https://www.youtube.com/watch?v=_1b9OdQktrI

Redish, E. F. (2021)
“Using Math in Physics: Overview,”
The Physics Teacher, 59:314–318.
In my lecture, I emphasize the role of toy models in developing insight, and the importance of connecting math to physics and biology. After the talk, I had a chat with Ed Redish (who I’ve mentioned in this blog before), and he referred me to a series of articles he’s publishing in The Physics Teacher. The first is titled “Using Math in Physics: Overview” (Volume 59, Pages 314–318, 2021). Redish and I seem to be singing the same song, although his lyrics are better. What he says about math in physics describes what Russ Hobbie and I try to do in IPMB. Redish begins

The key difference between math as math and math in science is that in science we blend our physical knowledge with our knowledge of math. This blending changes the way we put meaning to math and even the way we interpret mathematical equations. Learning to think about physics with math instead of just calculating involves a number of general scientific thinking skills that are often taken for granted [my italics] (and rarely taught) in physics classes. In this paper, I give an overview of my analysis of these additional skills. I propose specific tools for helping students develop these skills in subsequent papers.
He makes other good points, such as
• Math in math classes tends to be about numbers. Math in science is not. Math in science blends physics conceptual knowledge with mathematical symbols
and my favorite
• In introductory math, equations are almost always about solving and calculating. In physics [they’re] often about explaining! [his italics, my exclamation point].
The Art of Insight in Science and Engineering,
by Sanjoy Mahajan.
I like to paraphrase Richard Hamming and say “the purpose of equations is insight, not numbers.” Redish’s article reminds me of Sanjoy Mahajan’s book The Art of Insight in Science and Engineering. Both are superb.

In subsequent articles in The Physics Teacher (some already published, some in the works), Redish discusses skills every student needs to master.

  • Dimensional Analysis 
  • Estimation 
  • Anchor Equations 
  • Toy Models 
  • Functional Dependence 
  • Reading the Physics in a Graph 
  • Telling the Story

I like to think that IPMB reinforces these skills. They certainly are ones that I try to emphasize in my “Biological Physics” and “Medical Physics” classes, and that Russ and I attempt to reinforce in our homework problems.

Screenshot of the
Living Physics Portal.
Finally, a valuable resource for teachers of physics-for-the-life-sciences was noted during the Q&A: the Living Physics Portal.

The Living Physics Portal is an online environment for physics faculty to share and discuss free curricular resources for teaching introductory physics for life sciences (IPLS). The objective of the Portal is to improve the education of the next generation of medical professionals and biologists by making physics classes more relevant for life sciences students. We do this by supporting physics instructors in finding and creating curricular materials and engaging in community discussions with other instructors to improve their courses.
Although IPMB is not intended to be used in an introductory course, I believe many materials on the Living Physics Portal would be useful to instructors teaching from IPMB. Conversely, much of the information you find in IPMB, and on this blog, could be helpful to introductory teachers. 
 
If you’re preparing to teach a class based on Intermediate Physics for Medicine and Biology, I suggest first looking at the materials on the book’s website, then scanning through the book’s blog (especially those posts marked “useful for instructors”), next reading Redish’s The Physics Teacher articles, and finally browsing the Living Physics Portal. Then you’ll be ready to teach physics for the life sciences at any level.

Friday, July 30, 2021

tDCS Peripheral Nerve Stimulation: A Neglected Mode of Action?

In the November 13, 2020 episode of Shark Tank (Season 12, Episode 5), two earnest entrepreneurs, Ken and Allyson, try to persuade five investors, the “sharks,” to buy into their company. The entrepreneurs sell LIFTiD, a device that applies a small steady current to the forehead. Ken said it’s supposed to improve “productivity, focus, and performance.” Allyson claimed it’s a “smarter way to get a… boost of energy.”

The device is based on transcranial direct current stimulation (tDCS). In 2009 I published an editorial in the journal Clinical Neurophysiology to accompany a paper appearing in the same issue by Pedro Miranda and his colleagues (Clin. Neurophysiol., Volume 120, Pages 1183–1187, 2009), in which they calculated the electric field in the brain caused by a 1 mA current applied to the scalp. I wrote
Although Miranda et al.’s paper is useful and enlightening, one crucial issue is not addressed: the mechanism of tDCS. In other words, how does the electric field interact with the neurons to modulate their excitability? Miranda et al. calculate a current density in the brain on the order of 0.01 mA/cm2, which corresponds to an electric field of about 0.3 V/m (a magnitude that is consistent with other studies (Wagner et al., 2007)). Such a small electric field should polarize a neuron only slightly. Hause’s model of a single neuron predicts that a 10 V/m electric field would induce a transmembrane potential of 6–8 mV (Hause, 1975), implying that the 0.3 V/m electric field during tDCS should produce a transmembrane potential of less than 1 mV. Can such a small polarization significantly influence neuron excitability? If so, how? These questions perplex me, yet answers are essential for understanding tDCS. Detailed models of the cortical geometry and brain heterogeneities may be necessary to address this issue (Silva et al., 2008), but ultimately the response of the neuron (or network of neurons) to the electric field must be included in the model in order to unravel the mechanism. Moreover, because the effect of tDCS can last for up to an hour after the current turns off (Nitsche et al., 2008), the mechanism is likely to be more complicated than just neural polarization.
van Boekholdt et al. (2021)
“tDCS peripheral nerve stimulation: a neglected mode of action?”
Mol. Psychiatry 26:456–461.
My participation in the field of transcranial direct current stimulation started and ended with writing this editorial. However, I still follow the literature, and was fascinated by a recent article by Luuk van Boekholdt and his coworkers in Molecular Psychiatry (Volume 26, Pages 456–461, 2021). Their abstract says
Transcranial direct current stimulation (tDCS) is a noninvasive neuromodulation method widely used by neuroscientists and clinicians for research and therapeutic purposes. tDCS is currently under investigation as a treatment for a range of psychiatric disorders. Despite its popularity, a full understanding of tDCS’s underlying neurophysiological mechanisms is still lacking. tDCS creates a weak electric field in the cerebral cortex which is generally assumed to cause the observed effects. Interestingly, as tDCS is applied directly on the skin, localized peripheral nerve endings are exposed to much higher electric field strengths than the underlying cortices. Yet, the potential contribution of peripheral mechanisms in causing tDCS’s effects has never been systemically investigated. We hypothesize that tDCS induces arousal and vigilance through peripheral mechanisms. We suggest that this may involve peripherally-evoked activation of the ascending reticular activating system, in which norepinephrine is distributed throughout the brain by the locus coeruleus. Finally, we provide suggestions to improve tDCS experimental design beyond the standard sham control, such as topical anesthetics to block peripheral nerves and active controls to stimulate non-target areas. Broad adoption of these measures in all tDCS experiments could help disambiguate peripheral from true transcranial tDCS mechanisms.

When the sharks tried the LIFTiD device, they each could feel a tingling shock on their scalp. If van Boekholdt et al.’s suggestion is correct, the titillation and annoyance caused by that shock might be responsible for the effects associated with tDCS. In that case, the method would work even if you could somehow make the skull a perfect insulator, so no current whatsoever could enter the brain. I like how van Boekholdt suggests specific, simple experiments that could test their hypothesis.

If you’re trying to buy a device to improve brain performance, you might not care if it works by directly stimulating the brain or just by exciting peripheral nerves. In fact, you might be able to save money by hiring someone to poke you in the back every few seconds. Do whatever it takes to focus your attention.

None of the sharks invested in LIFTiD. My favorite shark, Mark Cuban, claimed the entrepreneurs “tried to sell science without using science.” I couldn’t have said it better myself. 

LIFTiD Neurostimulation Personal Brain Stimulator; https://www.youtube.com/watch?v=hFzihXprRUM

Friday, July 23, 2021

Currents of Fear: In Which Power Lines Are Suspected of Causing Cancer

Voodoo Science,
by Robert Park.

These days—when so many people believe crazy conspiracy theories, refuse life-saving vaccines, promote alternative medicine, fret about perceived 5G cell phone hazards, and postulate implausible microwave weapons to explain the Havana Syndrome—we need to understand better how science interacts with society. In particular, we should examine past controversies to see what we can learn. In this post, I review the power line/cancer debate of the 1980s and 90s. I remember it well, because it raged during my graduate school days. The dispute centered on the physics Russ Hobbie and I describe in Chapter 9 of Intermediate Physics for Medicine and Biology

To tell this tale, I’ve selected excerpts from Robert Park’s book Voodoo Science: The Road from Foolishness to Fraud. The story has important lessons for today. Enjoy!

Currents of Fear: In Which Power Lines Are Suspected of Causing Cancer

In 1979, an unemployed epidemiologist named Nancy Wertheimer obtained the addresses of childhood leukemia patients in Denver and drove about the city looking for some common environmental factor that might be responsible. What she noticed was that many of the homes of victims seemed to be near power transformers. Could it be that fields from the electric power distribution system were linked to leukemia? She teamed up with a physicist named Ed Leeper, who devised a “wiring code” based on the size and proximity of power lines to estimate the strength of the magnetic fields. Together they eventually produced a paper relating childhood leukemia to the fields from power lines…

In June of 1989, The New Yorker carried a new three-part series of highly sensational articles by Paul Brodeur… on the hazards of power-line fields…. The series reached an affluent, educated, environmentally concerned audience. Suddenly, Brodeur was everywhere: the Today show on NBC, Nightline on ABC, This Morning on CBS, and, of course, Larry King Live on CNN. In the fall, Brodeur published the New Yorker series as a book with the lurid title Currents of Death. A new generation of environmental activists, led by mothers who feared for their children’s lives, demanded government action…

By [1995], sixteen years had passed since Nancy Wertheimer took her historic drive around Denver. An entire industry had grown up around the power-line controversy. Armies of epidemiologists conducted ever larger studies; activists organized campaigns to relocate power lines away from schools; the courts were clogged with damage suits; a half dozen newsletters were devoted to reporting on EMF [electromagnetic fields]; a brisk business had developed in measuring 60 Hz magnetic fields in homes and workplaces; fraudulent devices of every sort were being marketed to protect against EMF; and, of course, Paul Brodeur’s books were selling well…

It was into this climate that the Stevens Report was released by the National Academy of Sciences in 1996 with its unanimous conclusion that “the current body of evidence does not show that exposure to these fields presents a human health hazard.”… The chair of the review panel, Charles Stevens, a distinguished neurobiologist with the Salk Institute, [explained] the difficulty of trying to identify weak environmental hazards. Scientists had labored for seventeen years to evaluate the hazards of power-line fields; they had conducted epidemiological studies, laboratory research, and computational analysis. “Our committee evaluated over five hundred studies,” Stevens said, “and in the end all we can say is that the evidence doesn’t point to these fields as being a health risk…”

On July 2, 1997, the National Cancer Institute (NCI) finally announced the results of its exhaustive epidemiological study, “Residential Exposure to Magnetic Fields and Acute Lymphoblastic Leukemia in Children”… It was the most unimpeachable epidemiological study of the connection between power lines and cancer yet undertaken. Every conceivable source of investigator bias was eliminated. There were 638 children under age fifteen with acute lymphoblastic leukemia enrolled in the study along with 620 carefully matched controls, ensuring reliable statistics. All measurements were double blind… [The study] concluded that any link between acute lymphoblastic leukemia in children and magnetic fields is too weak to detect or to be concerned about. But the most surprising result had to do with the proximity of power lines to the homes of leukemia victims: the study found no association at all. The supposed association between proximity to power lines and childhood leukemia, which had kept the controversy alive all these years, was spurious—just an artifact of the statistical analysis. As is so often the case with voodoo science, with every improved study the effect had gotten smaller. Now, after eighteen years, it was gone completely.

 

 

Friday, July 16, 2021

The Bragg Peak (Continued)

In last week’s post, I discussed the Bragg peak: protons passing through tissue lose most of their energy near the end of their path. In Chapter 16 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I present a homework problem in which the student calculates the stopping power (energy lost per distance traveled), S, as a function of depth, x, given a relationship between stopping power and energy, T. This problem is a toy model illustrating the physical origin of the Bragg peak. Often it’s helpful to have two such exercises: one to assign as homework and one to work in class (or put on an exam). Here’s a new homework problem similar to the one in IPMB, but with a different assumption about how stopping power depends on energy.

Section 16.10

Problem 31 ½. Assume the stopping power of a particle, S = −dT/dx, as a function of kinetic energy, T, is S = S0 exp(−T/T0). 
(a) What are the units of S0 and T0? 
(b) If the initial kinetic energy at x = 0 is Ti, calculate T(x). 
(c) Determine the range R of the particle as a function of T0, S0, and Ti. 
(d) Plot S(x) vs. x. Does this plot contain a Bragg peak? 
(e) Discuss the implications of the shape of S(x) for radiation treatment using this particle.

The answer to part (d) is difficult, because your conclusion differs depending on the relative magnitude of Ti and T0. You might consider adding a part (f):

(f) Plot T(x), S(x), and R(Ti) for Ti >> T0 and for Ti << T0.

The case Ti >> T0 has a conspicuous Bragg peak; the case Ti << T0 doesn’t.
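If you’d like to see those two regimes without grinding through the algebra, here’s a short Python sketch. The closed forms in the comments come from my own integration of dT/dx = −S0 exp(−T/T0), so treat them as a check on (not a substitute for) your answers to parts (b) and (c):

import numpy as np
import matplotlib.pyplot as plt

# From separating variables in dT/dx = -S0 exp(-T/T0) (my derivation):
#   T(x) = T0 ln( exp(Ti/T0) - S0 x / T0 )
#   S(x) = S0 / ( exp(Ti/T0) - S0 x / T0 )
#   R    = (T0/S0) (exp(Ti/T0) - 1)
S0, T0 = 1.0, 1.0   # arbitrary units

for Ti, label in [(5.0, "Ti >> T0"), (0.2, "Ti << T0")]:
    R = (T0 / S0) * (np.exp(Ti / T0) - 1.0)
    x = np.linspace(0.0, 0.9999 * R, 500)
    S = S0 / (np.exp(Ti / T0) - S0 * x / T0)
    plt.plot(x / R, S / S0, label=label)

plt.xlabel("x / R")
plt.ylabel("S / S0")
plt.legend()
plt.show()

The Ti >> T0 curve stays small over most of the range and then spikes near x = R (a Bragg peak); the Ti << T0 curve rises only gently, with no peak.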

The homework problem in IPMB is more realistic than this new one, because Fig. 15.17 indicates that the stopping power decreases as 1/T (assumed in the original problem) rather than exponentially (assumed in the new problem). This changes the particle’s behavior, particularly at low energies (near the end of its range, in the Bragg peak). Nevertheless, having multiple versions of the problem is useful.

The answer to part (e) is given in IPMB.

Protons are also used to treat tumors... Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally.

 Enjoy!

Friday, July 9, 2021

The Bragg Peak

In Chapter 16 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Bragg peak.
Protons are also used to treat tumors (Khan 2010, Ch. 26; Goitein 2008). Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak in Fig.16.47.
Energy loss versus depth for a 150 MeV proton beam in water, with and without straggling (fluctuations in the range). The Bragg peak enhances the energy deposition at the end of the proton range. Adapted from Fig. 16.47 in Intermediate Physics for Medicine and Biology.

William Henry Bragg, discoverer of the Bragg peak.
Sir William Henry Bragg (1862 – 1942) was an English scientist who shared the 1915 Nobel Prize in Physics with his son Lawrence Bragg for their analysis of crystal structure using X-rays. In 2004, Andrew Brown and Herman Suit published an article commemorating “The Centenary of the Discovery of the Bragg Peak” (Radiotherapy and Oncology, Volume 73, Pages 265–268).
In December 1904, William Henry Bragg, Professor of Mathematics and Physics at the University of Adelaide, and his assistant Richard Kleeman published in the Philosophical Magazine (London) novel observations on radioactivity. Their paper, “On the ionization curves of radium,” gave measurements of the ionization produced in air by alpha particles, at varying distances from a very thin source of radium salt. The recorded ionization curves “brought to light a fact, which we believe to have been hitherto unobserved. It is, that the alpha particle is a more efficient ionizer towards the extreme end of its course.” This was promptly followed by further results in the Philosophical Magazine in 1905. Their finding was contrary to the accepted wisdom of the day, viz. that the ionizations produced by alpha particles decrease exponentially with range. From theoretical considerations, they concluded that an alpha particle possesses a definite range in air, determined by its initial energy, and produces increasing ionization density near the end of its range due to its diminishing speed.
Although Bragg discovered the Bragg peak for alpha particles, the same behavior is found for other heavy charged particles such as protons. It is the key concept underlying the development of proton therapy. Brown and Suit conclude
The first patient treatment by charged particle therapy occurred within a decade of Wilson’s paper [which first proposed using protons in therapy, published in 1946]. Since then, the radiation oncology community has been evaluating various particle beams for clinical use. By December 2004, a century after Bragg’s original publication, the approximate number of patients treated by proton–neon beams is 47,000 (Personal communication, Janet Sisterson, Editor, Particles) [over 170,000 today]. There have been several clear clinical gains. None of these would have been possible, were it not for the demonstration that radically different depth dose curves were feasible.