Friday, October 25, 2024

A Toy Model of Climate Change

Introduction

A screenshot of the online book
Math for the People.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I make use of toy models. Such mathematical descriptions are not intended to be accurate or realistic. Rather, they’re simple models that capture the main idea without getting bogged down in the details. Today, I present an example of a toy model. It’s not related to medicine or biology, but instead describes climate change. I didn’t originally derive this model. Much of the analysis below comes from other sources, such as the online book Math for the People, written by Mark Branson and Whitney George.

Earth Without an Atmosphere

First, consider the earth with no atmosphere. We will balance the energy coming into the earth from the sun with the energy from the earth that is radiated out into space. Our goal will be to calculate the earth’s temperature, T.

The power density (energy per unit time per unit area, in watts per square meter) emitted by the sun is called the solar constant, S. It depends on how far you are from the sun, but at the earth’s orbit S = 1360 W/m². To get the total power impinging on our planet, we must multiply S by the area subtended by the earth, which is πR², where R is the earth’s radius (R = 6.4 × 10⁶ meters). This gives SπR² = 1.8 × 10¹⁷ W, or nearly 200,000 TW (T, or tera-, means one trillion). That’s a lot of power. The total average power consumption by humanity is only about 20 TW, so there’s plenty of energy from the sun.

We often prefer to talk about the energy loss or gain per unit area of the earth’s surface. The surface area of the earth is 4πR² (the factor of four comes from the total surface area of the spherical earth, in contrast to the area subtended by the earth when viewed from the sun). The power per unit area of the earth’s surface is therefore SπR²/(4πR²), or S/4.

Not all of this energy is absorbed by the earth; some is reflected back into space. The albedo, a, is a dimensionless number that indicates the fraction of the sun’s energy that is reflected. The power absorbed per unit area is then (1 – a)S/4. About 30% of the sun’s energy is reflected (a = 0.3), so the power of sunlight absorbed by the earth per unit of surface area is 238 W/m².

What happens to that energy? The sun heats the earth to a temperature T. Any hot object radiates energy. Such thermal radiation is analyzed in Section 14.8 of Intermediate Physics for Medicine and Biology. The radiated power per unit area is equal to eσT⁴. The symbol σ is the Stefan-Boltzmann constant, σ = 5.7 × 10⁻⁸ W/(m² K⁴). As stated earlier, T is the earth’s temperature. When raising the temperature to the fourth power, T must be expressed as the absolute temperature measured in kelvin (K). Sometimes it’s convenient at the end of a calculation to convert kelvin to the more familiar degrees Celsius (°C), where 0°C = 273 K. But remember, all calculations of T⁴ must use kelvin. Finally, e is the emissivity of the earth, which is a measure of how well the earth absorbs and emits radiation. The emissivity is another dimensionless number ranging between zero and one. The earth is an excellent emitter and absorber, so e = 1. From now on, I’ll not even bother including e in our equations, in which case the power density emitted is just σT⁴.

Let’s assume the earth is in steady state, meaning the temperature is not increasing or decreasing. Then the power in must equal the power out, so 

(1 – a)S/4 = σT⁴

Solving for the temperature gives

T = ∜[(1 – a)S/(4σ)] .

Because we know a, S, and σ, we can calculate the temperature. It is T = 254 K = –19°C. That’s really cold (remember, in the Celsius scale water freezes at 0°C). Without an atmosphere, the earth would be a frozen wasteland.
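
Readers who want to check the arithmetic can do so in a few lines. Here is a minimal sketch in Python, using the values of S, a, and σ given above:

```python
# Toy climate model: Earth without an atmosphere.
# Balance absorbed sunlight, (1 - a)S/4, against emitted thermal
# radiation, sigma*T^4, and solve for the steady-state temperature.

S = 1360.0      # solar constant, W/m^2
a = 0.3         # albedo, the fraction of sunlight reflected
sigma = 5.7e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

absorbed = (1 - a) * S / 4      # ~238 W/m^2 of surface
T = (absorbed / sigma) ** 0.25  # steady state: absorbed = sigma*T^4

print(f"absorbed power density: {absorbed:.0f} W/m^2")
print(f"temperature: {T:.0f} K = {T - 273:.0f} degrees C")
```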

Earth With an Atmosphere

Often we can learn much from a toy model by adding in complications, one by one. Now, we’ll include an atmosphere around earth. We must keep track of the power into and out of both the earth and the atmosphere. The earth has temperature TE and the atmosphere has temperature TA.

First, let’s analyze the atmosphere. Sunlight passes right through the air without being absorbed because it’s mainly visible light and our atmosphere is transparent in the visible part of the spectrum. The main source of thermal (or infrared) radiation (for which the atmosphere is NOT transparent) is from the earth. We already know how much that is, σTE⁴. The atmosphere only absorbs a fraction of the earth’s radiation, eA, so the power per unit area absorbed by the atmosphere is eAσTE⁴.

Just like the earth, the atmosphere will heat up to a temperature TA and emit its own thermal radiation. The emitted power per unit area is eAσTA⁴. However, the atmosphere has upper and lower surfaces, and we’ll assume they both emit equally well. So the total power emitted by the atmosphere per unit area is 2eAσTA⁴.

If we balance the power in and out of the atmosphere, we get 

eAσTE⁴ = 2eAσTA⁴

Interestingly, the fraction of radiation absorbed by the atmosphere, eA, cancels out of our equation (a good emitter is also a good absorber). The Stefan-Boltzmann constant σ also cancels, and we just get TE⁴ = 2TA⁴. If we take the fourth root of each side of the equation, we find that TA = 0.84 TE. The atmosphere is somewhat cooler than the earth.

Next, let’s reanalyze the power into and out of the earth when surrounded by an atmosphere. The sunlight power per unit area impinging on earth is still (1 – a)S/4. The radiation emitted by the earth is still σTE⁴. However, the thermal radiation produced by the atmosphere that is aimed inward toward the earth is all absorbed by the earth (since the emissivity of the earth is one, eE = 1), so this provides another contribution of eAσTA⁴. Balancing power in and out gives

(1 – a)S/4 + eAσTA⁴ = σTE⁴ .

Notice that if eA were zero, this would be the same relationship as we found when there was no atmosphere: (1 – a)S/4 = σTE⁴. The atmosphere provides additional heating, warming the earth.

We found earlier that TE⁴ = 2TA⁴. If we rewrite this as TA⁴ = TE⁴/2 and plug that into our energy balance equation, we get

(1 – a)S/4 + eAσTE⁴/2 = σTE⁴ .

With a bit of algebra, we find

(1 – a)S/4 = σTE⁴ (1 – eA/2) .

Solving for the earth’s temperature gives

TE = ∜[(1 – a)S/(4σ)] ∜[1/(1 – eA/2)] .

If eA were zero, this would be exactly the relationship we had for no atmosphere. The fraction of energy absorbed by the atmosphere is not zero, however, but is approximately eA = 0.8. The atmosphere provides a dimensionless correction factor of ∜[1/(1 – eA/2)]. The temperature we found previously, 254 K, is corrected by this factor, 1.136. We get TE = 288.5 K = 15.5 °C. This is approximately the average temperature of the earth. Our atmosphere raised the earth’s temperature from –19°C to +15.5°C, a change of 34.5°C.
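
The two-layer calculation is just as easy to verify numerically. A short sketch, using the value eA = 0.8 quoted above:

```python
# Toy climate model: Earth with an absorbing atmosphere.
# The atmosphere's energy balance gives TE^4 = 2 TA^4, and the earth's
# gives TE = [(1 - a)S/(4 sigma)]^(1/4) * [1/(1 - eA/2)]^(1/4).

S, a, sigma = 1360.0, 0.3, 5.7e-8
eA = 0.8  # fraction of the earth's thermal radiation absorbed by the atmosphere

T_bare = ((1 - a) * S / (4 * sigma)) ** 0.25  # ~254 K, no atmosphere
factor = (1 / (1 - eA / 2)) ** 0.25           # greenhouse correction, ~1.136
TE = T_bare * factor                          # earth's temperature
TA = TE / 2 ** 0.25                           # atmosphere is cooler: TA = 0.84 TE

print(f"correction factor: {factor:.3f}")
print(f"earth: {TE:.1f} K = {TE - 273:.1f} C, atmosphere: {TA:.1f} K")
```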

Climate Change

To understand climate change, we need to look more deeply into the meaning of the factor eA, the fraction of energy absorbed by the atmosphere. The main constituents of the atmosphere—oxygen and nitrogen—are transparent to both visible and thermal radiation, so they don’t contribute to eA. Thermal energy is primarily absorbed by greenhouse gases. Examples of such gases are water vapor, carbon dioxide, and methane. Methane is an excellent absorber of thermal radiation, but its concentration in the atmosphere is low. Water vapor is a good absorber, but water vapor is in equilibrium with liquid water, so it isn’t changing much. Carbon dioxide is a good absorber, has a relatively high concentration, and is being produced by burning fossil fuels, so a lot of our discussion about climate change focuses on carbon dioxide.

The key to understanding climate change is that greenhouse gases like carbon dioxide affect the fraction of energy absorbed, eA. Suppose an increase in the carbon dioxide concentration in the atmosphere increased eA slightly, from 0.80 to 0.81. The correction factor ∜[1/(1 – eA/2)] would increase from 1.1362 to 1.1386, raising the temperature by about 0.6 K. Because changes in temperature are the same if expressed in kelvin or Celsius, this is a 0.6°C rise. A small change in eA causes a significant change in the earth’s temperature. The more carbon dioxide in the atmosphere, the greater the temperature rise: Global warming.
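
This sensitivity is easy to explore numerically. Carrying the correction factors at full precision (rather than rounding to three decimal places) gives a warming of about 0.6 K:

```python
# Sensitivity of the earth's temperature to the absorbed fraction eA:
# nudge eA from 0.80 to 0.81 and see how much the model earth warms.

S, a, sigma = 1360.0, 0.3, 5.7e-8
T_bare = ((1 - a) * S / (4 * sigma)) ** 0.25  # ~254 K, no atmosphere

def earth_temp(eA):
    """Steady-state earth temperature (K) for absorbed fraction eA."""
    return T_bare * (1 / (1 - eA / 2)) ** 0.25

dT = earth_temp(0.81) - earth_temp(0.80)
print(f"warming for eA = 0.80 -> 0.81: {dT:.2f} K")
```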

Feedback

We have assumed the earth’s albedo, a, is a constant, but that is not strictly true. The albedo depends on how much snow and ice cover the earth. More snow and ice means more reflection, a larger albedo, a smaller amount of sunlight absorbed by the earth, and a lower temperature. But a lower temperature means more snow and ice. We have a vicious cycle: more snow and ice leads to a lower temperature, which leads to more snow and ice, which leads to an even lower temperature, and so on. Intermediate Physics for Medicine and Biology dedicates an entire chapter to feedback, but it focuses mainly on negative feedback that tends to maintain a system in equilibrium. A vicious cycle is an example of positive feedback, which can lead to explosive change. An example from biology is the upstroke of a nerve action potential: an increase in the electrical voltage inside a nerve cell leads to an opening of sodium channels in the cell membrane, which lets positively charged sodium ions enter the cell, which causes the voltage inside the cell to increase even more. The earth’s climate has many such feedback loops. They are one of the reasons why climate modeling is so complicated.

Conclusion

Today I presented a simple description of the earth’s temperature and the impact of climate change. Many things were left out of this toy model. I ignored differences in temperature over the earth’s surface and within the atmosphere. I neglected the ocean currents and jet stream that move heat around the globe. I did not account for seasonal variations, or for other greenhouse gases such as methane and water vapor, or how the amount of water vapor changes with temperature, or how clouds affect the albedo, and a myriad of other factors. Climate modeling is a complex subject. But toy models like the one I presented today provide insight into the underlying physical mechanisms. For that reason, they are crucial for understanding complex phenomena such as climate change.

Friday, October 18, 2024

A Continuum Model for Volume and Solute Transport in a Pore

As Gene Surdutovich and I prepare the 6th edition of Intermediate Physics for Medicine and Biology, we have to make many difficult decisions. We want to streamline the book, making it shorter and more focused on key concepts, with fewer digressions. Yet, what one instructor may view as “fat” another may consider part of the “meat.” One of these tough choices involves Section 5.9 (A Continuum Model for Volume and Solute Transport in a Pore).

Neither Gene nor I cover the rather long Sec. 5.9 when we teach our Biological Physics class; there just isn’t enough time. So, at the moment this section has been axed from the 6th edition. It now lies abandoned on the cutting room floor. (But, using LaTeX’s “comment” feature we could reinstate it in a moment; there’s always hope.) Russ Hobbie would probably object, because I know he was fond of that material. Today, I want to revisit that section once more, for old times’ sake.

The section develops a model of solute flow through pores in a membrane. One key parameter it derives is the “reflection coefficient,” σ, which accounts for the size of the solute particle. If the solute radius, a, is small compared to the pore radius, Rp, then solute can easily pass through and almost none is “reflected,” or excluded from passing through the pore. In that case, the reflection coefficient goes to zero. If the solute radius is larger than the pore radius, the solute can’t pass through (it’s too big!); it’s completely blocked and the reflection coefficient is one. The transition from σ = 0 to σ = 1 for medium-sized solute particles depends on the pore model.

The fifth edition of IPMB presents two models to calculate how the reflection coefficient varies with solute radius. The figure below summarizes them. It is similar to Fig. 5.15 in IPMB, but is drawn with Mathematica as many of the figures in the 6th edition will be. 

The blue curve shows σ as a function of ξ = a/Rp, and represents the “steric factor” 2ξ – ξ². It arises from a model that assumes there is plug flow of solvent (usually water) through the pore; the flow velocity does not depend on position. The maize curve shows a more complex model that accounts for Poiseuille flow in the pore (no flow at the pore edge and a parabolic flow distribution that peaks in the pore center), and gives the reflection coefficient as 4ξ² – 4ξ³ + ξ⁴. (Is it a coincidence that I use the University of Michigan’s school colors, blue and maize, for the two curves? Actually, it is.) Both vary between zero and one.

You can consult the textbook for the mathematical derivations of these functions. Today, I want to see if we can understand them qualitatively. For plug flow, reflection occurs if the solute is within one particle radius of the pore edge. In that case, the number of particles that reflect grows linearly with particle radius. The steric factor 2ξ – ξ² has this behavior. For Poiseuille flow, the size of the particle relative to the pore radius similarly plays a role. However, the flow is zero near the pore wall. Therefore, tiny particles adjacent to the edge do not contribute much to the flow anyway, so making them slightly larger does not make much difference. The reflection coefficient grows quadratically near ξ = 0, because as the particle radius increases you have more particles that would be blocked by the pore edge, and because the larger size of the particle means that it experiences a greater flow of solvent as you move radially in from the pore edge. So, the relative behavior of the two curves for small radius makes sense. Indeed, for small values of ξ the two functions are quite different. At ξ = 0.1, the blue curve is over five times larger than the maize curve.
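
To make the comparison concrete, both formulas are easy to evaluate. Here is a sketch in Python (the figure itself was drawn with Mathematica):

```python
# Reflection coefficient vs. xi = a/Rp for the two pore models:
# plug flow (the steric factor) and Poiseuille flow.

def sigma_plug(xi):
    """Steric factor for plug flow: 2*xi - xi**2 (linear near xi = 0)."""
    return 2 * xi - xi ** 2

def sigma_poiseuille(xi):
    """Poiseuille-flow model: 4*xi**2 - 4*xi**3 + xi**4 (quadratic near 0)."""
    return 4 * xi ** 2 - 4 * xi ** 3 + xi ** 4

for xi in (0.1, 0.5, 1.0):
    print(f"xi = {xi}: plug = {sigma_plug(xi):.4f}, "
          f"Poiseuille = {sigma_poiseuille(xi):.4f}")

# Both models give sigma = 1 when the particle just fills the pore (xi = 1),
# and at xi = 0.1 the plug-flow value is more than five times larger.
```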

I find it more difficult to explain what is happening for ξ approximately equal to one. For plug flow, when the solute particle is just slightly smaller than the pore radius, it barely fits. But for Poiseuille flow, the particle not only barely fits, it also blocks all the fast flow near the pore center, leaving only a contribution from the slow flow near the edge. This makes the maize curve more sensitive than the blue curve to what is happening near ξ = 1. I don’t find this explanation as intuitively obvious as the one in the previous paragraph, but it highlights an approximation that becomes important near ξ = 1. The model does not account for how the flow of solvent adjusts when the solute particle is relatively large and disrupts the flow. This can’t really be true. If a particle almost plugs a pore, it must affect the flow distribution. I suspect that the Poiseuille model is most useful for small values of ξ, and the behavior at large ξ (near one) should be taken with a grain of salt.

I find that it’s useful to force yourself (or your student) to provide physical interpretations of mathematical expressions, even when they’re not so obvious. Remember, the goal of doing these analytical toy models is to gain insight.

For those of you who might be disappointed to see Section 5.9 go, my advice is don’t toss out your 5th edition when you buy the 6th (and I’m assuming all of my dear readers will indeed buy the 6th edition). Stash the 5th edition away in your auxiliary bookshelf (or donate it to your school library), and pull it out if you really want a good continuum model for volume and solute transport in a pore.

Friday, October 11, 2024

Extracellular Magnetic Measurements to Determine the Transmembrane Action Potential and the Membrane Conduction Current in a Single Giant Axon

Forty years ago today I was attending my first scientific meeting: The Society for Neuroscience 14th Annual Meeting, held in Anaheim, California (October 10–15, 1984). As a 24-year-old graduate student in the Department of Physics at Vanderbilt University, I presented a poster based on the abstract shown below: “Extracellular Magnetic Measurements to Determine the Transmembrane Action Potential and the Membrane Conduction Current in a Single Giant Axon.”

I can’t remember much about the meeting. I’m sure I flew to California from Nashville, Tennessee, but I can’t recall if my PhD advisor John Wikswo went with me (his name is not listed on any meeting abstract except the one we presented). I believe the meeting was held at the Anaheim Convention Center. I remember walking along the sidewalk outside of Disneyland, but I didn’t go in (I had visited there with my parents as a child).

Neuroscience Society meetings are huge. This one had over 300 sessions and more than 4000 abstracts submitted. In the Oct. 11, 1984 entry in my research notebook, I wrote “My poster session went OK. Several people were quite enthusiastic.” I took notes from talks I listened to, including James Hudspeth discussing hearing, a Presidential Symposium by Gerald Fischbach, and a talk about synaptic biology and learning by Eric Kandel. I was there when Theodore Bullock and Susumu Hagiwara were awarded the Ralph W. Gerard Prize in Neuroscience.

The research Wikswo and I reported in our abstract was eventually published in my first two peer-reviewed journal articles:

Barach, J. P., B. J. Roth and J. P. Wikswo, Jr., 1985, Magnetic Measurements of Action Currents in a Single Nerve Axon: A Core-Conductor Model. IEEE Transactions on Biomedical Engineering, Volume 32, Pages 136-140.

Roth, B. J. and J. P. Wikswo, Jr., 1985, The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment. Biophysical Journal, Volume 48, Pages 93-109.

Both are cited in Chapter 8 of Intermediate Physics for Medicine and Biology.

This neuroscience abstract was not my first publication. I was listed as a coauthor on an abstract to the 1983 March Meeting of the American Physical Society, based on some research I helped with as an undergraduate physics major at the University of Kansas. But I didn’t attend that meeting. In my CV, I have only one publication listed for 1983 and one again in 1984. Then in 1985, they started coming fast and furious. 

Four decades is a long time, but it seems like yesterday.

Friday, October 4, 2024

The Difference between Traditional Magnetic Stimulation and Microcoil Stimulation: Threshold and the Electric Field Gradient

In Chapter 7 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss electrical stimulation of nerves. In particular, we describe how neural excitation depends on the duration of the stimulus pulse, leading to the strength-duration curve.
The strength-duration curve for current was first described by Lapicque (1909) as

i = iR (1 + tc/t)

where i is the current required for stimulation, iR is the rheobase [the minimum current required for a long stimulus pulse], t is the duration of the pulse, and tc is the chronaxie, the duration of the pulse that requires twice the rheobase current.

An axon is difficult to excite using a brief pulse, and you have to apply a strong current. This behavior arises because the axon has its own characteristic time, τ (about 1 ms), which is basically the resistance-capacitance (RC) time constant of the cell membrane. If the stimulus duration is much shorter than this time constant, the stimulus strength must increase.
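
This behavior can be sketched numerically. Assuming Lapicque’s hyperbolic form, i = iR(1 + tc/t), with illustrative values (arbitrary current units):

```python
# Lapicque's hyperbolic strength-duration curve, i = iR*(1 + tc/t):
# the threshold current rises sharply for pulses shorter than the chronaxie.
# The values below are illustrative (arbitrary current units).

i_rheobase = 1.0   # rheobase: threshold current for a very long pulse
t_chronaxie = 1.0  # chronaxie, in ms (comparable to the membrane time constant)

def threshold_current(t):
    """Threshold current for a stimulus pulse of duration t (ms)."""
    return i_rheobase * (1 + t_chronaxie / t)

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} ms: threshold = {threshold_current(t):.1f} x rheobase")

# At t equal to the chronaxie, the threshold is exactly twice the rheobase.
```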

A nerve axon not only has a time constant τ, but also a space constant λ. Is there a similar spatial behavior when exciting a nerve? This is the question my graduate student Mohammed Alzahrani and I addressed in our recent article “The Difference between Traditional Magnetic Stimulation and Microcoil Stimulation: Threshold and the Electric Field Gradient” (Applied Sciences, Volume 14, Article 8349, 2024). The question becomes important during magnetic stimulation with a microcoil. Magnetic stimulation occurs when a pulse of current is passed through a coil held near the head. The changing magnetic field induces an electric field in the brain, and this electric field excites neurons. Recently, researchers have proposed performing magnetic stimulation using tiny “microcoils” that would be implanted in the brain. (Will such microcoils really work? That’s a long story, see here and here.) If the coil is only 100 microns in size, the induced electric field distribution will be quite localized. In fact, it may exist over a distance that’s short compared to the typical space constant of a nerve axon (about 1 mm). Mohammed and I calculated the response of a nerve to the electric field from a microcoil, and found that for a localized field the stimulus strength required for excitation is large.

Figure 6 of our article, reproduced below, plots the gradient of the induced electric field, dEx/dx (which, in this case, is the stimulus strength), versus the parameter b (which characterizes the spatial width of the electric field distribution). Note that unlike the plot of the strength-duration curve above, Fig. 6 is a log-log plot.

Figure 6 from Alzahrani and Roth, Appl. Sci., 14:8349, 2024

We wrote

Our strength-spatial extent curve in Figure 6 for magnetic stimulation is analogous to the strength-duration curve for electrical stimulation if we replace the stimulus duration [t] by the spatial extent of the stimulus b and the time constant τ by the [space] constant λ. Our results in Figure 6 have a “spatial rheobase” dEx/dx value (1853 mV/cm²) for large values of spatial extent b. At small values of b, the value of dEx/dx rises. If we wanted to define a “spatial chronaxie” (the value of b for which the threshold value of dEx/dx rises by a factor of two), it would be about half a centimeter.
To learn more about this effect you can read our paper, which was published open access, so it’s available free to everyone. Some researchers have used a value of dEx/dx found when stimulating with a large coil held outside the head to estimate the threshold stimulus strength for a microcoil. We ended the paper with this warning:
In conclusion, our results show that even in the case of long, straight nerve fibers, you should not use a threshold value of dEx/dx in a microcoil experiment that was obtained from a traditional magnetic stimulation experiment with a large coil. The threshold value must be scaled to account for the spatial extent of the dEx/dx distribution. Magnetic stimulation is inherently more difficult for microcoils than for traditional large coils, and for the same reason, electrical stimulation is more difficult for short-duration stimulus pulses than for long-duration pulses. The nerve axon has its own time and space constants, and if the pulse duration or the extent of the dEx/dx distribution is smaller than these constants, the threshold stimulation will rise. For microcoil stimulation, the increase can be dramatic.

Friday, September 27, 2024

Taylor Diffusion

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Poiseuille flow: the flow of a viscous fluid in a pipe. Consider laminar flow of a fluid, having viscosity η, through a long pipe with radius R and length Δx. The flow is driven by a pressure difference Δp across its ends. 

The velocity of the fluid in the pipe is

v(r) = Δp (R² – r²)/(4ηΔx) ,

where r is the distance from the center of the pipe. Figure 1.26 in IPMB includes a plot of the velocity profile, which is a parabola: large at the center of the pipe (r = 0) and zero at the wall (r = R) because of the no-slip boundary condition.
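
The parabolic profile is easy to tabulate. The sketch below assumes the standard Poiseuille result, v(r) = Δp(R² – r²)/(4ηΔx), with made-up but plausible numbers for water in a small pipe:

```python
# Poiseuille velocity profile, v(r) = dp*(R**2 - r**2)/(4*eta*dx):
# parabolic, fastest on the axis, zero at the wall (no-slip).
# The numbers below are illustrative.

eta = 1.0e-3  # viscosity, Pa*s (water)
R = 1.0e-3    # pipe radius, m
dx = 0.1      # pipe length, m
dp = 10.0     # pressure difference across the pipe, Pa

def v(r):
    """Fluid speed (m/s) at distance r from the pipe axis."""
    return dp * (R ** 2 - r ** 2) / (4 * eta * dx)

print(f"center: v(0) = {v(0):.4f} m/s")
print(f"wall:   v(R) = {v(R):.4f} m/s")  # zero, by the no-slip condition
```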

 
In most mechanics problems, not only is the velocity important but also the displacement. Yet, somehow until recently I never stopped to consider what the displacement of the fluid looks like during Poiseuille flow. Let’s say that at time t = 0 you somehow mark a thin layer of the fluid uniformly across the pipe’s cross section (the light blue line on the left in the figure below). Perhaps you do this by injecting dye or using magnetic resonance imaging to tag the spins. How does the fluid move?

At time t = Δt the displacement also forms a parabola, with the fluid at the center moving a ways down the pipe to the right and the fluid at the wall not moving at all. As time marches on, the fluid keeps flowing down the pipe, with the parabola getting stretched longer and longer. Eventually, the marked fluid will extend the entire length of the pipe.

Poiseuille flow is laminar, meaning the fluid moves smoothly along streamlines. Laminar flow is typical of fluid motion when viscosity dominates so the Reynolds number is small. Now let’s consider how the marked or tagged fluid gets mixed with the normal fluid. In laminar flow, there is no turbulent mixing, because there are no eddies to stir the fluid. In fact, there is no component of the fluid velocity in the radial direction at all. There is no mixing, except by diffusion.

Diffusion is discussed in Chapter 4 of IPMB. It is the random movement of particles from a region of higher concentration to a region of lower concentration. Let’s consider what would happen to the marked fluid if flow was turned off (for instance, if we set Δp = 0) and only diffusion occurs. The originally narrow light blue band would no longer drift downstream but it would spread with time, rapidly at first and then more slowly later. In reality the concentration of marked fluid would change continuously in a Gaussian-like way, with a higher concentration at the center and gradually lower concentration in the periphery, but drawing that picture would be difficult, so I’ll settle for showing a uniform band getting wider in time. 

Now, what happens if drift and diffusion happen together? You get something like this: 

The parabola stretched out along the pipe is still there, but it gets wider and wider with time because of diffusion.

What happens as even more time goes by? Eventually the marked fluid will have enough time to diffuse radially across the entire cross section of the pipe. If we look a ways downstream, the situation will be something like shown below.

The parabola disappears as the marked fluid becomes locally smeared out. Now, here’s the interesting thing: The spreading of the marked fluid is greater than you would expect from pure diffusion. It’s as if Poiseuille flow increased the diffusion. This effect is called Taylor diffusion: an effective diffusion on a large scale arising from Poiseuille flow on a small scale. The flow stretches that parabola axially and then diffusion spreads the marked fluid radially. This phenomenon is named after British physicist Geoffrey Ingram Taylor (1886–1975). Although the derivation is a bit too difficult for a blog post, you can show (see the Wikipedia article about Taylor dispersion) that the long-time, large-scale behavior is a combination of drift plus diffusion with an effective diffusion constant, Deff, given by

Deff = D + R²v²/(48D) ,
where v is the mean flow speed (equal to one half the flow speed at the center of the tube). As the flow goes to zero (v = 0) the effective diffusion constant goes to Deff = D and Taylor diffusion disappears; it’s just plain old diffusion. If the flow speed is large, then Deff is larger than D by a factor of R²v²/(48D²). The quantity Rv/D is the Péclet number (see Homework Problem 43 in Chapter 4 of IPMB), which is a dimensionless ratio of transport by convection to transport by diffusion. Taylor diffusion is particularly important when the Péclet number is large, meaning the drift caused by Poiseuille flow is greater than the spreading caused by diffusion. This enhanced diffusion can be important in some applications. For instance, if you are trying to mix two liquids using microfluidics, you would ordinarily have to wait a long time for diffusion to do its thing. Taylor diffusion can speed that mixing along.
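
To see how large the effect can be, here is a quick numerical example with illustrative values (a small molecule, D ≈ 10⁻⁹ m²/s, carried by slow flow in a half-millimeter tube):

```python
# Taylor dispersion: Deff = D*(1 + Pe**2/48), with Peclet number Pe = R*v/D.
# Illustrative values for a small molecule in water, in a thin tube.

D = 1.0e-9  # molecular diffusion constant, m^2/s
R = 0.5e-3  # tube radius, m
v = 1.0e-3  # mean flow speed, m/s

Pe = R * v / D                 # Peclet number: convection vs. diffusion
Deff = D * (1 + Pe ** 2 / 48)  # effective axial diffusion constant

print(f"Peclet number: {Pe:.0f}")
print(f"Deff/D = {Deff / D:.0f}")  # dispersion enhanced thousands-fold
```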

You can call this phenomenon “Taylor diffusion” if you want. Some people use the term “Taylor dispersion.” I call it “diffusion (Taylor’s version).”

Taylor Swift singing Shake It Off (Taylor’s Version)

Friday, September 20, 2024

Transitioning to Environmentally Sustainable, Climate-Smart Radiation Oncology Care

“Transitioning to Environmentally
Sustainable Climate-Smart
Radiation Oncology Care,”
by Lichter et al.,
IJROBP, 113:915–924, 2022.
Loyal readers of this blog may have noticed an increasing number of posts related to climate change, and the intersection of global warming with health care and medical physics. This is not an accident. I’m growing increasingly worried about the impact of climate change on our society. One way I act to oppose climate change is to write about it (here, here, here). So, I was delighted to read Katie Lichter and her team’s editorial about “Transitioning to Environmentally Sustainable, Climate-Smart Radiation Oncology Care” (International Journal of Radiation Oncology Biology Physics, Volume 113, Pages 915–924, 2022). Their introduction begins (references removed)

Climate change is among the most pressing global threats. Action now and in the coming decades is critical. Rising temperatures exacerbate the frequency and intensity of extreme weather events, including wildfires, hurricanes, floods, and droughts. Such events threaten not only our ecosystems, but also our health. Climate change’s negative effects on human health are slowly becoming better understood and are projected to increase if emissions mitigation remains inadequate. Emerging research notes a disproportionate effect of climate change on vulnerable populations (e.g., older populations, children, low-income populations, ethnic minorities, and patients with chronic conditions, including cancer) who are the least equipped to deal with these outsized effects.
Then Lichter and her coauthors get specific about radiation oncology.
More than half of cancer patients will require radiation therapy (RT) during the course of their illness. As most RT courses are delivered using fractionated external beam radiation (EBRT), patients undergoing EBRT are vulnerable to treatment disruptions from climate events. Notably, disruption of RT treatments due to severe weather events has been shown to affect patient treatment and survival. As radiation oncologists, it is imperative to recognize and further investigate the effects of climate change on health and cancer outcomes and understand the specific vulnerabilities of patients receiving RT to the effects of climate change. We must also advance our understanding of the contribution of radiation oncology as a specialty to greenhouse gas (GHG) emissions, and what measures may be taken in our daily practices to join the international efforts in reducing our negative environmental impact.
Next the authors present their four R’s to address oncology care: reduce, reuse, recycle, rethink. This is sort of an inside joke among radiation biologists, because radiation biology famously has its own four R’s: repair, reassortment, reoxygenation, and repopulation. Lichter et al.’s four R’s explain how to lower radiation oncology’s effect on the climate.

  1. Reduce means to lower the energy needs for imaging and therapeutic devices, and to minimize medical waste.
  2. Reuse means to favor reusable equipment and supplies (such as surgical gowns) whenever possible.
  3. Recycle means to recycle any single-use supplies that cannot be reused. Much of this waste now finds its way to urban landfills rather than to recycling centers.
  4. Rethink means to reconsider all medical radiation oncology processes and procedures in light of climate change. Can some things be done by telemedicine? Can we reduce the number of fractions of radiation a patient receives so fewer visits to the hospital are required? Can some professional conferences be held virtually rather than in person? Sometimes the answer may be yes and sometimes no, but all these issues need to be reexamined.

Lichter’s editorial concludes (my italics)

The health care system contributes significantly to today’s climate health crisis. All efforts addressing the crisis are important due to their direct emissions reduction potential, and the example they set for the health care system and the patients who need the care. Although the effects of increasing global temperatures on human health are well studied, the effects of health care, and specifically oncology and radiation treatments, on contributing to climate change are not. The radiation oncology community has a unique opportunity to use our technological expertise and awareness to assess and minimize the environmental impact of our care and set the standard for sustainable health care practices for other specialties to emulate. 

Thank you Katie Lichter and your whole team for all the important work that you are doing to fight climate change! Your four R’s—reduce, reuse, recycle, and rethink—apply beyond radiation oncology, and even beyond health care, to all of our society’s activities. Perhaps writers of textbooks such as Intermediate Physics for Medicine and Biology need to reduce, reuse, recycle, and especially rethink how our books impact, and are impacted by, global warming.

 
Listen to Katie Lichter talk about her climate journey.

Friday, September 13, 2024

The Million Person Study: Whence It Came and Why

A screenshot of the article "How Sound is the Model Used to Establish Safe Radiation Levels?" on the website physicsworld.com, superimposed on the cover of Intermediate Physics for Medicine and Biology.
A screenshot of the article
“How Sound is the Model Used to
Establish Safe Radiation Levels?”
on the website physicsworld.com.
Last fall, physicsworld.com published an editorial by Robert Crease asking “How Sound is the Model Used to Establish Safe Radiation Levels?” This question is addressed in Chapter 16 of Intermediate Physics for Medicine and Biology, and I have discussed it before in this blog. Crease begins
Ionizing radiation can damage living organisms, that’s clear. But there are big questions over the validity of the linear no-threshold model (LNT), which essentially states that the risk of cancer from radiation and carcinogens always increases linearly with dose. The LNT model implies, in other words, that any amount of radiation is always dangerous and that zero risk is present only at zero dose.
Crease notes that alternative models include the threshold model, in which there is a minimum dose below which there is no risk, and the hormesis model, which says that small doses are beneficial because they trigger repair mechanisms. He explains that adopting a position as conservative as the linear no-threshold model may cause unforeseen negative consequences.
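To see how differently these assumptions behave, here is a little toy comparison in Python. The functional forms and parameter values are my own hypothetical choices, made up only to contrast the three models at low dose; they are not taken from Crease’s editorial or from any radiation protection standard.

```python
import math

# Toy dose-response models for excess cancer risk versus radiation dose.
# All parameters are hypothetical, chosen only to contrast the models.

def lnt_risk(dose, slope=0.05):
    """Linear no-threshold: risk rises linearly from zero dose."""
    return slope * dose

def threshold_risk(dose, threshold=0.1, slope=0.05):
    """Threshold model: no excess risk below a minimum dose."""
    return slope * (dose - threshold) if dose > threshold else 0.0

def hormesis_risk(dose, slope=0.05, benefit=0.1, d0=0.2):
    """Hormesis: a J-shaped curve, slightly beneficial at low dose."""
    return slope * dose - benefit * dose * math.exp(-dose / d0)

# At low dose the models disagree qualitatively; at high dose they agree.
for d in (0.05, 0.5, 2.0):
    print(d, lnt_risk(d), threshold_risk(d), hormesis_risk(d))
```

At a small dose the LNT model predicts a small positive risk, the threshold model predicts none, and the hormesis model predicts a slight benefit; at large doses all three converge to the same linear behavior. That low-dose region is exactly where the epidemiology is hardest.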

What sort of negative consequences? One of the most urgent and dire health hazards faced by humanity is climate change. Addressing the danger of a warming climate, with all its implications, must be our top priority. Climate change is caused primarily by the emission of greenhouse gases such as carbon dioxide that result from the burning of fossil fuels to generate electricity, warm our homes, power our vehicles, or make steel and concrete. One alternative to burning fossil fuels is to use nuclear energy. But nuclear energy is feared by many, in part because of the linear no-threshold model, which implies that any exposure to ionizing radiation is dangerous. If, in fact, the linear no-threshold model is not valid at the low doses associated with nuclear power plants and nuclear waste disposal, then the public might be more accepting of nuclear power, which may help us in the battle against climate change. Crease concludes
One of the many reasons for the need to study the validity of LNT is that convictions of its accuracy continue to be used as an argument against nuclear power plants, in connection with their operation as well as their spent fuel rods. Nuclear power may be undesirable for reasons other than this. But the critical need to find a workable alternative to fossil fuels for energy production requires an honest ability to assess the validity of this model.
In my opinion, determining if the linear no-threshold model is valid at low doses is one of the greatest challenges of medical physics today. It’s a critical example of how physics interacts with medicine and biology. We need to figure this out. But how?

Screenshot of The Million Person Study website, superimposed on the cover of Intermediate Physics for Medicine and Biology.
Screenshot of The Million
Person Study website.
One way is to conduct an epidemiological study of low-dose radiation exposure. But such a study would have to be huge, because it’s looking for a tiny effect influencing an enormous population. What you need is something like The Million Person Study. Yes, medical physics has its own “big science” large-scale collaboration. The Million Person Study’s website states
There is a major gap in epidemiological understanding, however, of the health effects experienced by populations exposed to radiation at lower doses, gradually over time.

The foundation of the Million Person Study is to fill that gap, using epidemiological methods of assessing rate and quality of mortality on a study group of one million persons exposed to this type of radiation.
The website notes that there are many reasons to assess the risk of low doses of radiation, including determining 1) the side effects of medical imaging procedures such as computed tomography, 2) the danger of nuclear accidents or terrorism (dirty bombs), 3) the safety of occupations that expose workers to a slight radiation dose, 4) the hazards of environmental exposure such as from radon in homes, and 5) the uncertainty of space and high altitude travel such as when sending astronauts to Mars. The Million Person Study focuses not only on the level of exposure but also on its duration: was it a brief exposure, as from a nuclear accident, or a low dose delivered over a long time?

The cover of a special issue of the International Journal of Radiation Biology about The Million Person Study, superimposed on the cover of Intermediate Physics for Medicine and Biology.
The cover of a special issue of the
International Journal of Radiation Biology
about The Million Person Study.
Want to learn more about The Million Person Study? See the paper by John Boice, Sarah Cohen, Michael Mumma, and Elisabeth Ellis titled “The Million Person Study: Whence it Came and Why,” published in the International Journal of Radiation Biology in 2022 (Volume 98, Pages 537–550). Its abstract is printed below.
Purpose: The study of low dose and low-dose rate exposure is of immeasurable value in understanding the possible range of health effects from prolonged exposures to radiation. The Million Person Study (MPS) of low-dose health effects was designed to evaluate radiation risks among healthy American workers and veterans who are more representative of today’s populations than are the Japanese atomic bomb survivors exposed briefly to high-dose radiation in 1945. A million persons were needed for statistical reasons to evaluate low-dose and dose-rate effects, rare cancers, intakes of radioactive elements, and differences in risks between women and men.

Methods and Materials: The MPS consists of five categories of workers and veterans exposed to radiation from 1939 to the present. The U.S. Department of Energy (DOE) Health and Mortality study began over 40 years ago and is the source of ∼360,000 workers. Over 25 years ago, the National Cancer Institute (NCI) collaborated with the U.S. Nuclear Regulatory Commission (NRC) to effectively create a cohort of nuclear power plant workers (∼150,000) and industrial radiographers (∼130,000). For over 30 years, the Department of Defense (DoD) collected data on aboveground nuclear weapons test participants (∼115,000). At the request of NCI in 1978, Landauer, Inc., (Glenwood, IL) saved their dosimetry databases which became the source of a cohort of ∼250,000 medical and other workers.

Results: Overall, 29 individual cohorts comprise the MPS of which 21 have been or are under active study (∼810,000 persons). The remaining eight cohorts (∼190,000 persons) will be studied as resources become available. The MPS is a national effort with critical support from the NRC, DOE, National Aeronautics and Space Administration (NASA), DoD, NCI, the Centers for Disease Control and Prevention (CDC), the Environmental Protection Agency (EPA), Landauer, Inc., and national laboratories.

Conclusions: The MPS is designed to address the major unanswered question in radiation risk understanding: What is the level of health effects when exposure is gradual over time and not delivered briefly. The MPS will provide scientific understandings of prolonged exposure which will improve guidelines to protect workers and the public; improve compensation schemes for workers, veterans and the public; provide guidance for policy and decision makers; and provide evidence for or against the continued use of the linear nonthreshold dose-response model in radiation protection.

Lead on, Million Person Study, and thank you for your effort. We need those results!

Friday, September 6, 2024

Black Carbon and Radon

Drawdown
In a previous post, I reviewed the book Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming. Sometimes I visit the book’s associated website, drawdown.org, because it has so much to teach me about climate change. Recently, I read one of their publications about Reducing Black Carbon. The executive summary begins:
Black carbon—also referred to as soot—is a particulate matter that results from the incomplete combustion of fossil fuels and biomass. As a major air and climate pollutant, black carbon (BC) emissions have widespread adverse effects on human health and climate change. Globally, exposure to unhealthy levels of particulate matter, including BC, is estimated to cause between three and six million excess deaths every year. These health impacts—and the related economic losses—are felt disproportionately by those living in low- and middle-income countries. Furthermore, BC is a potent greenhouse gas with a short-term global warming potential well beyond carbon dioxide and methane. Worse still, it is often deposited on sea ice and glaciers, reducing reflectivity and accelerating melting, particularly in the Arctic and Himalayas.

Therefore, reducing BC emissions results in a triple win, mitigating climate change, improving the lives of more than two billion people currently exposed to unclean air, and saving trillions of dollars in economic losses.
As I learned more, I found that black carbon is only one type of fine particle in the air. I began to wonder, “Where have I heard about the risk of particulate matter before?” Then it hit me: Section 17.12 of Intermediate Physics for Medicine and Biology, which is about radon. Russ Hobbie and I wrote
Uranium, and therefore radium and radon, are present in most rocks and soil. Radon, a noble gas, percolates through grainy rocks and soil and enters the air and water in different concentrations. Although radon is a noble gas, its decay products have different chemical properties and attach to dust or aerosol droplets which can collect in the lungs. High levels of radon products in the lungs have been shown by both epidemiological studies of uranium miners and by animal studies to cause lung cancer.

Aha! Perhaps black carbon is an effective carrier of radon decay products into the lungs. This is just a hypothesis, but I did find a reference that supported the idea (Wang et al., “Particle Radioactivity from Radon Decay Products and Reduced Pulmonary Function Among Chronic Obstructive Pulmonary Disease Patients,” Environmental Research, Volume 216, Article Number 114492, 2023). Below I present part of their introduction (references removed)

Consistent with the existing literature on ambient particulate matter (PM) exposure, our previous studies found that indoor PM was associated with increased systemic inflammation and oxidative stress and reduced pulmonary function among [chronic obstructive pulmonary disease] patients in Eastern Massachusetts. It has recently been recognized that an attribute of PM with potential to promote pulmonary damage after inhalation is radionuclides attached to PM, referred to as particle radioactivity (PR). Though ionizing radiation has many sources (e.g., cosmic radiation and medical procedures), the majority of natural background radiation (and, thus, of PR) is from radon (222Rn), which decays into α-, β-, and γ-emitting decay products. Although radon gas itself is rapidly exhaled, freshly generated radon decay products (also referred to a radon progeny) can rapidly attach to particles in the ambient and indoor air and be inhaled into the airways. After deposition, particles continue to emit radiation in the lungs with a residence time that can range from several days to months. Compared to β- and γ-emissions from radionuclides, α-emitting particles are considered the most toxic due to their high energy and large mass. Since α-radiation cannot penetrate the intact epidermis, inhalation is the predominant route of exposure, and evidence that α-radiation may cause pulmonary damage is suggested by its effects on inducing inflammation and reactive oxygen species in human lung fibroblasts as well as up-regulating gene pathways in human pulmonary epithelial cells associated with inflammatory and respiratory diseases.

I didn’t find any mention of radon in Drawdown’s publication Reducing Black Carbon or in the World Health Organization’s publication Health Effects of Black Carbon. I don’t know if radon is an important part of the mechanism by which black carbon causes health hazards. Yet, I wonder. I know that radon is a more serious hazard among smokers than among nonsmokers, and smoking should have similarities to breathing soot. This black carbon/radon hypothesis raises some interesting questions. Is black carbon more effective than other types of particulate matter in transporting radon decay products? Does global warming increase lung cancer? Is black carbon more dangerous in areas with high radon concentrations? Is black carbon more hazardous for people living in poorly ventilated buildings than for those in well-ventilated buildings or outdoors?

Soot is clearly bad news. As drawdown.org says, it’s a triple threat: climate, health, and well-being. They offer several ideas for reducing black carbon:

  1. Urgently implement clean cooking solutions
  2. Target transportation to reduce current—and prevent future—emissions
  3. Reduce BC from the shipping industry
  4. Regulate air quality
  5. Include BC in nationally determined contributions and the United Nations Framework Convention on Climate Change
  6. Improve BC measurements and estimates

The item about regulating air quality makes me wonder whether a positive feedback loop could underlie the impact of black carbon on the climate: soot in the air increases global warming; increased global warming increases the number of forest fires; and an increased number of forest fires increases the amount of soot in the air. Again, this is just a hypothesis, and I don’t know whether it’s true. But I do know that in my 25 years living in Michigan, the only serious problem with air pollution and soot I’ve experienced was caused by last summer’s Canadian forest fires, and such fires appear, at least to me, to be related to global warming.

 

Black carbon may be one of the places where climate change and IPMB intersect. It’s an important topic and deserves closer study.

Friday, August 30, 2024

Joe Redish (1943–2024)

Edward “Joe” Redish, a University of Maryland physics professor, died August 24 of cancer. Joe has been mentioned many times in this blog (here, here, here, and here). He was deeply interested in how students—and in particular biology students—learn physics, an interest with obvious relevance to Intermediate Physics for Medicine and Biology.

Redish, E. F.,  “Using Math in Physics: 7. Telling the Story,” Phys. Teach., 62: 5–11, 2024, on the cover of Intermediate Physics for Medicine and Biology.
Redish, E. F., “Using Math in Physics:
7. Telling the Story,” Phys. Teach.,
62: 5–11, 2024.
I knew Joe, and valued his friendship. Rather than writing about him myself,  I’ll share some of his thoughts in his own words. He had a wonderful series of papers in The Physics Teacher about using math in physics. The last of the series (published this year) was about using math to tell a story (Redish, E. F., “Using Math in Physics: 7. Telling the Story,” Phys. Teach., Volume 62, Pages 5–11, 2024). He wrote

Even if students can make the blend—interpret physics correctly in mathematical symbology and graphs—they still need to be able to apply that knowledge in productive and coherent ways. As instructors, we can show our solutions to complex problems in class. We can give complex problems to students as homework. But our students are likely to still have trouble because they are missing a key element of making sense of how we think about physics: How to tell the story of what’s happening.

We use math in physics differently than it’s used in math classes. In math classes, students manipulate equations with abstract symbols that usually have no physical meaning. In physics, we blend conceptual physics knowledge with mathematical symbology. This changes the way that we use math and what we can do with it.

We use these blended mental structures to create stories about what’s happening (mechanism) and stabilize them with fundamental physical laws (synthesis).
In an oral history interview with the American Institute of Physics, Joe talked about using simple toy models when teaching physics to biology students.
One of the problems that students run into, that teachers of physics run into teaching biology students, is we use all these trivial toy models, right? Frictionless vacuum. Ignore air resistance. Treat it as a point mass. And the biology students come in and they look at this and they say, “These are not relevant. This is not the real world.” And they know in biology, that if you simplify a system, it dies. You can’t do that. In physics we do this all the time. Simple models are kind of a core epistemological resource for us. You find the simplest example you possibly can and you beat it to death. It illustrates the principle. Then you see how the mathematics goes with the physics. The whole issue of finding simple models is where a lot of the creative art is in physics.
Redish and Cooke, “Learning Each Other’s Ropes: Negotiating Interdisciplinary Authenticity” CBE—Life Sciences Education, 12:175–186, 2013, on the cover of Intermediate Physics for Medicine and Biology.
Redish and Cooke, “Learning Each Other’s Ropes:
Negotiating Interdisciplinary Authenticity,”
CBE—Life Sciences Education, 12: 175–186, 2013.
My favorite of Joe’s papers is “Learning Each Other’s Ropes: Negotiating Interdisciplinary Authenticity” which he coauthored with biologist Todd Cooke (CBE—Life Sciences Education, Volume 12, Pages 175–186, 2013).
From our extended conversations, both with each other and with other biologists, chemists, and physicists, we conclude that, “science is not just science.” Scientists in each discipline employ a tool kit of different types of scientific reasoning. A particular discipline is not characterized by the exclusive use of a set of particular reasoning types, but each discipline is characterized by the tendency to emphasize some types more than others and to value different kinds of knowledge differently. The physicist’s enthusiasm for characterizing an object as a disembodied point mass can make a biologist uncomfortable, because biologists find in biology that function is directly related to structure. Yet similar sorts of simplified structures can be very powerful in some biological analyses. The enthusiasm that some biologists feel toward our students learning physics is based not so much on the potential for students to learn physics knowledge, but rather on the potential for them to learn the types of reasoning more often experienced in physics classes. They do not want their students to think like physicists. They want them to think like biologists who have access to many of the tools and skills physicists introduce in introductory physics classes… We conclude that the process is significantly more complex than many reformers working largely within their discipline often assume. But the process of learning each other’s ropes—at least to the extent that we can understand each other’s goals and ask each other challenging questions—can be both enlightening and enjoyable. And much to our surprise, we each feel that we have developed a deeper understanding of our own discipline as a result of our discussions.

You can listen to Joe talk about physics education research on the Physics Alive podcast.

We’ll miss ya, Joe.

Friday, August 23, 2024

The Song of the Dodo

The Song of the Dodo,
by David Quammen.
One of my favorite science writers is David Quammen. I’ve discussed several of his books in this blog before, such as Breathless, Spillover, and The Tangled Tree. A copy of one of his earlier books—The Song of the Dodo: Island Biogeography in an Age of Extinctions—has sat on my bookshelf for a while, but only recently have I had a chance to read it. I shouldn’t have waited so long. It’s my favorite.

Quammen is not surprised that the central idea of biology, natural selection, was proposed by two scientists who studied islands: Charles Darwin and the Galapagos, and Alfred Russel Wallace and the Malay Archipelago. The book begins by telling Wallace’s story. Quammen calls him “the man who knew islands.” Wallace was the founder of the science of biogeography: the study of how species are distributed throughout the world. For example, Wallace’s line lies between two islands in Indonesia that are only 20 miles apart: Bali (with plants and animals similar to those native to Asia) and Lombok (with flora and fauna more like those found in Australia). Because islands are so isolated, they are excellent laboratories for studying speciation (the creation of new species through evolution) and extinction (the disappearance of existing species).

Quammen is the best writer about evolution since Stephen Jay Gould. I would say that Gould was better at penning essays and Quammen is better at authoring books. Much of The Song of the Dodo deals with the history of science. I would rank it up there with my favorite history of science books: The Making of the Atomic Bomb by Richard Rhodes, The Eighth Day of Creation by Horace Freeland Judson, and The Maxwellians by Bruce Hunt.

Yet, The Song of the Dodo is more than just a history. It’s also an amazing travelogue. Quammen doesn’t merely write about islands. He visits them, crawling through rugged jungles to see firsthand animals such as the Komodo Dragon (a giant man-eating lizard), the Madagascan Indri (a type of lemur), and the Thylacine (a marsupial also known as the Tasmanian tiger). A few parts of The Song of the Dodo are one comic sidekick away from sounding like a travel book Tony Horwitz might have written. Quammen talks with renowned scientists and takes part in their research. He reminds me of George Plimpton, sampling different fields of science instead of trying out various sports.

Although I consider myself a big Quammen fan, he does have one habit that bugs me. He hates math and assumes his readers hate it too. In fact, if Quammen’s wife Betsy wanted to get rid of her husband, she would only need to open Intermediate Physics for Medicine and Biology to a random page and flash its many mathematical equations in front of his face. It would put him into shock, and he probably wouldn’t last the hour. In his book, Quammen presents only one equation and apologizes profusely for it. It’s a power law relationship

S = cAⁿ.

This is the same equation that Russ Hobbie and I analyze in Chapter 2 of IPMB, when discussing log-log plots and scaling. How do you determine the dimensionless exponent n for a particular case? As is my wont, I’ll show you in a new homework problem.
Section 2.11

Problem 40½. In island biogeography, the number of species on an island, S, is related to the area of the island, A, by the species-area relationship: S = cAⁿ, where c and n are constants. Philip Darlington counted the number of reptile and amphibian species on several islands in the Antilles. He found that when the island area increased by a factor of ten, the number of species doubled. Determine the value of n.
Let me explain to mathophobes like Quammen how to solve the problem. Assume that on one island there are S₀ species and the area is A₀. On another island, there are 2S₀ species and an area of 10A₀. Put these values into the power law to find S₀ = cA₀ⁿ and 2S₀ = c(10A₀)ⁿ. Now divide the second equation by the first (c, S₀, and A₀ all cancel) to find 2 = 10ⁿ. Take the logarithm of both sides, so log(2) = log(10ⁿ), or using a property of logarithms log(2) = n log(10). So n = log(2)/log(10) = 0.3. Note that n is positive, as it should be, since increasing the area increases the number of species.
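If you would rather let a computer do the arithmetic, here is a minimal Python sketch of the same calculation (the function name is my own invention):

```python
import math

def species_area_exponent(area_ratio, species_ratio):
    """Solve species_ratio = area_ratio**n for the exponent n
    in the species-area relationship S = c*A**n."""
    return math.log(species_ratio) / math.log(area_ratio)

# Darlington's observation: a tenfold increase in island area
# doubles the number of reptile and amphibian species.
n = species_area_exponent(10, 2)
print(round(n, 3))  # prints 0.301
```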

When I finished the main text of The Song of the Dodo, I thumbed through the glossary and found an entry for logarithm. “Aww,” I thought, “Quammen was only joking; he likes math after all.” Then I read his definition: “logarithm. A mathematical thing. Never mind.”

About halfway through, the book makes a remarkable leap from island biogeography—interesting for its history and relevance to exotic tropical isles—to mainland ecology, relevant to critical conservation efforts. Natural habitats on the continents are being broken up into patches, a process called fragmentation. The expansion of towns and farms creates small natural reserves surrounded by inhospitable homes and fields. The few remaining native regions tend to be small and isolated, making them similar to islands. A small natural reserve cannot support the species diversity that a large continent can (S = cAⁿ). Extinctions inevitably follow.
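A hypothetical back-of-the-envelope calculation (mine, not Quammen’s) shows how steep that price is. Using the exponent n = 0.3 found from Darlington’s data, a reserve containing only 10% of the original habitat is expected to retain about half of the original species:

```python
def surviving_species_fraction(area_fraction, n=0.3):
    """Fraction of species expected to survive when habitat shrinks to
    area_fraction of its original area, from S = c*A**n. The constant c
    cancels in the ratio; n = 0.3 is the exponent from Darlington's data."""
    return area_fraction ** n

print(round(surviving_species_fraction(0.10), 2))  # prints 0.5
```

The constant c cancels when you take the ratio of the two species counts, so only the exponent matters; the weak (less than linear) dependence of S on A is exactly why extinctions lag so far behind habitat loss.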

The Song of the Dodo also provides insight into how science is done. For instance, the species-area relationship was derived by Robert MacArthur and Edward Wilson. While it’s a valuable contribution to island biogeography, scientists disagree on its applicability to fragmented continents, and in particular they argue about its relevance to applied conservation. Is a single large reserve better than several small ones? In the 1970s a scientific battle raged, with Jared Diamond supporting a narrow interpretation of the species-area relationship and Dan Simberloff advocating for a more nuanced and less dogmatic view. As in any science, the key is to get data to test your hypothesis. Thomas Lovejoy performed an experiment in the Amazon to test the species-area relationship. Parts of the rainforest were being cleared for agriculture or other uses, but the Brazilian government insisted on preserving some of the native habitat. Lovejoy obtained permission to create many different protected rainforest reserves, each a different size. His team monitored the reserves before and after they became isolated from adjacent lands, and tracked the number of species supported in each of these “islands” over time. While the results are complicated, there is a correlation between species diversity and reserve size. Area matters.

One theme that runs through the story is extinction. If you read the book, you better have your hanky ready when you reach the part where Quammen imagines the death of the last Dodo bird. Conservation efforts are featured throughout the text, such as the quest to save the Mauritius kestrel.  
 
The Song of the Dodo concludes with a mix of optimism and pessimism. Near the end of the book, when writing about his trip to Aru (an island in eastern Indonesia) to observe a rare Bird of Paradise, Quammen writes
The sad, dire things that have happened elsewhere, in so many parts of the world—biological imperialism, massive habitat destruction, fragmentation, inbreeding depression, loss of adaptability, decline of wild populations to unviable population levels, ecosystem decay, trophic cascades, extinction, extinction, extinction—haven’t yet happened here. Probably they soon will. Meanwhile, though, there’s still time. If time is hope, there’s still hope.

An interview with David Quammen, by www.authorsroad.com

https://www.youtube.com/watch?v=Quq7PNH1zWM