Friday, November 8, 2024

International Day of Medical Physics Poster

Yesterday was the International Day of Medical Physics. This event is organized by the International Organization for Medical Physics, and is held each year on November 7, the birthday of Marie Curie. This year’s theme is “Inspiring the Next Generation of Medical Physicists.”

The IOMP held a poster design contest to celebrate the event. The winning poster was created by Lavanya Murugan from Rajiv Gandhi Government General Hospital and Madras Medical College in Chennai, India. IDMP coordinator Ibrahim Duhaini (who works right here in Michigan at Wayne State University) wrote that “Her artwork beautifully captures the theme and spirit of this year’s IDMP and will continuously serve as an inspiration to others… Let us all commit to being beacons of inspiration for the next generation.” I couldn’t have said it better (but maybe Randy Travis could).

The award-winning poster, a masterpiece, is shown below. In case you can’t read it, the quote in the center is by Curie: “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” Never has this quote been more relevant than now, as we face the dire health threats generated by climate change. I can identify many of the famous physicists and medical physicists in the poster. Can you? By the way, that little sticky note on the upper left of the frame contains a conversion factor indicating that one roentgen deposits 0.877 rads in dry air.

The winning poster of the design contest associated with the
International Day of Medical Physics 2024.

Lavanya sent me her thoughts about the design of the poster.

Inspiration: Once, I gave up my dream of becoming an artist to pursue a career in Medical Physics. This piece of art is a reflection of my study wall and myself, inspired by the world around me.
Technique: It’s a digital art piece.
This artwork portrays a young girl immersed in her studies, surrounded by images of great scientists who have contributed to the field of radiation. The wall features news clips about Roentgen’s groundbreaking discovery and a picture of Marie Curie’s notebook, symbolizing the power of radiating knowledge. Everyone experiences uncertainty about their knowledge, future and career at some point. Believing in ourselves is the first step to achieving our goals. The individuals whose photos adorn the wall were once in our shoes, grappling with doubts and questioning their abilities. Yet, they persevered, never giving up and ultimately inspiring us in the field of radiation. Today, we proudly serve healthcare and humanity as Medical Physicists, standing on their shoulders.
I have included one of my favourite quotes from Marie Curie, a female scientist who has been inspiring women in research: “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.”
Everyone fears radiation and its impact on mankind, but people like us choose to be radiation professionals regardless of the risks involved. This quote inspires us to understand the risks for the betterment of this field.
The message I wanted to convey through this art is to inspire the next generation of Medical Physicists to contribute their best to our field, following in the footsteps of the great minds of our past.

Lavanya is a medical physicist with over eight years of clinical experience in radiotherapy, nuclear medicine, and radiology. She excels in treatment planning, quality assurance, and treatment delivery. She’s also an artist, creating artwork under the pseudonym “Nivi.” You can find many of her pieces at her Instagram account. Below I show a few that are related to medical physics. 

Lavanya calls this a “boredom doodle.” You can see a tiny version of it
to the right of the Curie quote in her award-winning poster.

 
This radiotherapy picture features many of the topics
discussed in Intermediate Physics for Medicine and Biology.

Lavanya at work as a medical physicist.






Randy Travis singing “Point of Light.”

https://www.youtube.com/watch?v=w3a8i2F1Mf0

 

 
International Day of Medical Physics 2024 message by Raymond Wu

https://www.youtube.com/watch?v=rKxZZEFv0Bo

 

Friday, November 1, 2024

Why Are Oxygen and Nitrogen Not Greenhouse Gases But Carbon Dioxide and Water Vapor Are?

In last week’s blog post about A Toy Model of Climate Change, I wrote
“The main constituents of the atmosphere—oxygen and nitrogen—are transparent to both visible and thermal radiation, so they don’t contribute to eA [the fraction of the earth’s infrared radiation that the atmosphere absorbs]. Thermal energy is primarily absorbed by greenhouse gases. Examples of such gases are water vapor, carbon dioxide, and methane.”

I never discussed why oxygen and nitrogen are not greenhouse gases, although water vapor and carbon dioxide are. Today, I’ll address this question.

Below is a list of gases in our atmosphere and their abundance.

    Nitrogen (N2)          78%
    Oxygen (O2)            21%
    Argon (Ar)              1%
    Carbon dioxide (CO2)    0.03%
    Water vapor (H2O)       0–4%
    Neon (Ne)              18 ppm
    Helium (He)             5 ppm
    Methane (CH4)           2 ppm
    Krypton (Kr)            1 ppm
    Sulfur dioxide (SO2)    1 ppm
    Hydrogen (H2)           0.5 ppm
    Nitrous oxide (N2O)     0.5 ppm

In order to absorb infrared radiation, a molecule must have a dipole moment that can oscillate with the same frequency as the infrared electromagnetic wave. Let’s look at these molecules case by case.

Nitrogen

Nitrogen (N2) is diatomic; it consists of two nitrogen atoms bound together. Because the two atoms are the same, they share the electron charge equally. If there is no charge separation, then there is no dipole moment to oscillate at the frequency of the infrared radiation. Therefore, diatomic nitrogen—by far the most abundant molecule in our atmosphere, with nearly four out of every five molecules being N2—does not absorb infrared radiation. It’s not a greenhouse gas.

Oxygen

About one out of every five molecules in the atmosphere is oxygen (O2), which is also diatomic with two identical atoms. Like nitrogen, oxygen can’t absorb infrared radiation. 

Argon

Almost one out of every hundred molecules in the atmosphere is argon (Ar). Argon is a nonreactive noble gas, so it consists of individual atoms. A single atom cannot have a dipole moment, so argon can’t absorb infrared radiation. Neither can the other noble gases: neon, helium, and krypton.

Carbon dioxide

The next most abundant gas is carbon dioxide (CO2), which makes up less than one tenth of one percent of the atmosphere. The above table lists the abundance of carbon dioxide as 0.03%, which corresponds to 300 parts per million (ppm). I must have gotten the 300 ppm value from an old source. Its concentration is now over 400 ppm and is increasing every year. The main cause of global warming is the rapidly increasing carbon dioxide concentration.

The carbon dioxide molecule has a linear structure; it has a central carbon atom surrounded by two oxygen atoms, one on each side, so the molecule forms a straight line. Perhaps instead of writing it as CO2 we should write OCO. The electrons of this molecule are more attracted to the oxygen atoms than the carbon atom, so the carbon carries a partially positive charge and the two oxygen atoms each are partially negative. But because of its linear structure, at equilibrium there is no net dipole moment. You can think of it as consisting of two dipoles with equal strength but oriented in opposite directions, so they cancel out.

Carbon dioxide has three types of “vibrational modes” (see the video at the end of this post). One is a symmetric stretch, where the two oxygen atoms move together outward or inward from the central carbon atom. This makes the OCO molecule first get longer and then shorter, but it still consists of two equal but opposite dipoles that add to zero. Thus, this mode does not produce a dipole, so it cannot absorb infrared radiation. 

Carbon dioxide can also undergo an asymmetric vibration, in which one of the oxygen atoms is moving inward or outward, and the other is moving outward or inward. In this case, the molecule maintains the same length, but the positions of the oxygen atoms oscillate back and forth, with first one and then the other being closer to the carbon atom. Now the two dipoles don’t cancel, so there’s a net dipole moment. (Think of the dipole moment as the charge times the distance; even if the partial charge on each atom does not change, the different distances of each oxygen atom from the central carbon atom will alter the net dipole moment.) So, this mode of vibration will absorb infrared radiation. Carbon dioxide is a greenhouse gas.

Just for completeness, CO2 also has bending modes, in which the two oxygen atoms move back and forth perpendicular to the axis of the molecule (see the video). Again, these modes induce a dipole that can oscillate in synchrony with infrared radiation, and they are therefore greenhouse active. Carbon dioxide is the primary contributor to climate change.

The earth is lucky that carbon dioxide has such a low concentration in its atmosphere. I wonder what would happen if most of our atmosphere consisted of CO2 instead of oxygen and nitrogen. Oh, wait… we don’t have to wonder. The atmosphere of Venus is 96% CO2, and Venus has an average surface temperature of 464°C (well above the boiling point of water). Wow! 

Water vapor

Water vapor (H2O) is a special case. Its abundance in the atmosphere is not constant; it can vary from nearly zero to about 4%, depending on the humidity. A molecule of water also differs from carbon dioxide because it is not linear. Figure 6.18 in Intermediate Physics for Medicine and Biology shows the structure of a water molecule, with its oxygen atom having a partial negative charge and its hydrogen atoms being partially positive. Even at rest, a molecule of water has a dipole moment. The water molecule has several vibrational modes, all of which cause this dipole moment to change, and it’s therefore an absorber of infrared radiation.

Fig. 6.18 from Intermediate Physics for Medicine and Biology, showing the structure of a water molecule.

In the last post, I mentioned that feedback loops affect the climate. Water vapor provides an example. As the atmosphere heats up, it can hold more water vapor (see Homework Problems 65 and 66 in Chapter 3 of IPMB). More water vapor means more infrared absorption. More infrared absorption means more heating of the atmosphere, which means the atmosphere can hold more water vapor, which means more infrared absorption and heating, and so on. A positive feedback loop is sometimes called a vicious cycle.

Some of the water in the atmosphere is in the form of clouds. Clouds play a complex role in climate change. They can block the sunlight and therefore contribute to cooling. But it’s complicated.

Methane

Methane (CH4) is a very active infrared absorber. The methane molecule consists of a central carbon atom with a partial negative charge, surrounded by a tetrahedron of four hydrogen atoms, each with a partial positive charge. Like carbon dioxide, when in equilibrium methane has no net dipole moment. However, methane has many complicated rotational and vibrational modes, in part because it consists of so many atoms. Many of those modes result in a changing dipole moment, similar to what we saw for carbon dioxide. So, methane can absorb infrared radiation and is an important greenhouse gas. Molecule for molecule, methane is a much stronger greenhouse gas than carbon dioxide. The only reason it doesn’t contribute more to global warming is that its concentration is so low.

Sulfur dioxide

A molecule of sulfur dioxide (SO2) is a lot like a molecule of water, with a bent shape. In this case, the central sulfur atom carries a partial positive charge and the two oxygen atoms are partially negative. Water is a stable molecule but sulfur dioxide is chemically reactive. If it is present in a high concentration it’s hazardous to your health. In that case, its contribution as a greenhouse gas will be the least of your problems. It’s often emitted when burning fossil fuels (especially coal), and is considered an air pollutant. 

Sulfur dioxide can interact with water vapor to form tiny droplets called aerosols. These aerosols can remain in the air for years and reflect incoming sunlight (somewhat like clouds do). In this way, sulfur dioxide can have a cooling effect in addition to its greenhouse gas warming effect. On the whole, the aerosol cooling dominates, so sulfur dioxide cools the earth. It’s often released during volcanic eruptions, which can lead to cooler summers and colder winters for a few years.

Hydrogen

There is a tiny bit of hydrogen gas (H2) in the atmosphere, but like oxygen and nitrogen it’s diatomic so it doesn’t absorb infrared radiation. 

Nitrous oxide

Finally, nitrous oxide (laughing gas, N2O) is similar in structure to sulfur dioxide and water. Like sulfur dioxide, it’s a form of air pollution and can be a greenhouse gas too (although its concentration is so small that it doesn’t make much contribution to global warming). Our atmosphere consists mostly of nitrogen and oxygen. We are fortunate that the most common forms these elements take in the atmosphere are diatomic N2 and O2. Imagine what would happen if chemistry were slightly different, so that a large fraction of our atmosphere were N2O instead of N2 and O2. Yikes!

 Gases in the earth's atmosphere.

https://www.youtube.com/watch?v=BPdfKxS3rUc

 


 Carbon dioxide vibration modes.

https://www.youtube.com/watch?v=AauIOanNaWk

 

The normal modes of methane.

https://www.youtube.com/watch?v=v3QPe6-37bk

 

Friday, October 25, 2024

A Toy Model of Climate Change

Introduction

A screenshot of the online book
Math for the People.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I make use of toy models. Such mathematical descriptions are not intended to be accurate or realistic. Rather, they’re simple models that capture the main idea without getting bogged down in the details. Today, I present an example of a toy model. It’s not related to medicine or biology, but instead describes climate change. I didn’t originally derive this model. Much of the analysis below comes from other sources, such as the online book Math for the People by Mark Branson and Whitney George.

Earth Without an Atmosphere

First, consider the earth with no atmosphere. We will balance the energy coming into the earth from the sun with the energy from the earth that is radiated out into space. Our goal will be to calculate the earth’s temperature, T.

The power density (energy per unit time per unit area, in watts per square meter) emitted by the sun is called the solar constant, S. It depends on how far you are from the sun, but at the earth’s orbit S = 1360 W/m2. To get the total power impinging on our planet, we must multiply S by the area subtended by the earth, which is πR2, where R is the earth’s radius (R = 6.4 × 10^6 meters). This gives SπR2 = 1.8 × 10^17 W, or nearly 200,000 TW (T, or tera-, means one trillion). That’s a lot of power. The total average power consumption by humanity is only about 20 TW, so there’s plenty of energy from the sun.

We often prefer to talk about the energy loss or gain per unit area of the earth’s surface. The surface area of the earth is 4πR2 (the factor of four comes from the total surface area of the spherical earth, in contrast to the area subtended by the earth when viewed from the sun). The power per unit area of the earth’s surface is therefore SπR2/4πR2, or S/4.

Not all of this energy is absorbed by the earth; some is reflected back into space. The albedo, a, is a dimensionless number that indicates the fraction of the sun’s energy that is reflected. The power absorbed per unit area is then (1 – a)S/4. About 30% of the sun’s energy is reflected (a = 0.3), so the power of sunlight absorbed by the earth per unit of surface area is 238 W/m2.

What happens to that energy? The sun heats the earth to a temperature T. Any hot object radiates energy. Such thermal radiation is analyzed in Section 14.8 of Intermediate Physics for Medicine and Biology. The radiated power per unit area is equal to eσT4. The symbol σ is the Stefan-Boltzmann constant, σ = 5.7 × 10^-8 W/(m2 K4). As stated earlier, T is the earth’s temperature. When raising the temperature to the fourth power, T must be expressed as the absolute temperature measured in kelvin (K). Sometimes it’s convenient at the end of a calculation to convert kelvin to the more familiar degrees Celsius (°C), where 0°C = 273 K. But remember, all calculations of T4 must use kelvin. Finally, e is the emissivity of the earth, which is a measure of how well the earth absorbs and emits radiation. The emissivity is another dimensionless number ranging between zero and one. The earth is an excellent emitter and absorber, so e = 1. From now on, I’ll not even bother including e in our equations, in which case the power density emitted is just σT4.

Let’s assume the earth is in steady state, meaning the temperature is not increasing or decreasing. Then the power in must equal the power out, so 

(1 – a)S/4 = σT4

Solving for the temperature gives

T = ∜[(1 – a)S/4σ] .

Because we know a, S, and σ, we can calculate the temperature. It is T = 254 K = –19°C. That’s really cold (remember, in the Celsius scale water freezes at 0°C). Without an atmosphere, the earth would be a frozen wasteland.
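For readers who want to check the arithmetic, here is a minimal Python sketch of this energy balance (the numerical values are the ones quoted above; the variable names are mine):

```python
# Equilibrium temperature of an airless earth: (1 - a) S / 4 = sigma T^4
S = 1360.0       # solar constant (W/m^2)
a = 0.3          # albedo (dimensionless)
sigma = 5.7e-8   # Stefan-Boltzmann constant (W/(m^2 K^4))

absorbed = (1 - a) * S / 4         # sunlight absorbed per unit surface area (W/m^2), about 238
T = (absorbed / sigma) ** 0.25     # balance against the emitted power sigma T^4

print(f"Absorbed power density: {absorbed:.0f} W/m^2")
print(f"Temperature without an atmosphere: {T:.0f} K ({T - 273:.0f} deg C)")   # about 254 K, -19 C
```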

Earth With an Atmosphere

Often we can learn much from a toy model by adding in complications, one by one. Now, we’ll include an atmosphere around earth. We must keep track of the power into and out of both the earth and the atmosphere. The earth has temperature TE and the atmosphere has temperature TA.

First, let’s analyze the atmosphere. Sunlight passes right through the air without being absorbed because it’s mainly visible light and our atmosphere is transparent in the visible part of the spectrum. The main source of thermal (or infrared) radiation (for which the atmosphere is NOT transparent) is from the earth. We already know how much that is, σTE4. The atmosphere only absorbs a fraction of the earth’s radiation, eA, so the power per unit area absorbed by the atmosphere is eAσTE4.

Just like the earth, the atmosphere will heat up to a temperature TA and emit its own thermal radiation. The emitted power per unit area is eAσTA4. However, the atmosphere has upper and lower surfaces, and we’ll assume they both emit equally well. So the total power emitted by the atmosphere per unit area is 2eAσTA4.

If we balance the power in and out of the atmosphere, we get 

eAσTE4 = 2eAσTA4

Interestingly, the fraction of radiation absorbed by the atmosphere, eA, cancels out of our equation (a good emitter is also a good absorber). The Stefan-Boltzmann constant σ also cancels, and we just get TE4 = 2TA4. If we take the fourth root of each side of the equation, we find that TA = 0.84 TE. The atmosphere is somewhat cooler than the earth.

Next, let’s reanalyze the power into and out of the earth when surrounded by an atmosphere. The sunlight power per unit area impinging on earth is still (1 – a)S/4. The radiation emitted by the earth is still σTE4. However, the thermal radiation produced by the atmosphere that is aimed inward toward the earth is all absorbed by the earth (since the emissivity of the earth is one, eE = 1), so this provides another factor of eAσTA4. Balancing power in and out gives

(1 – a)S/4 + eAσTA4 = σTE4 .

Notice that if eA were zero, this would be the same relationship as we found when there was no atmosphere: (1 – a)S/4 = σTE4. The atmosphere provides additional heating, warming the earth.

We found earlier that TE4 = 2TA4. If we rewrite this as TA4 = TE4/2 and plug that into our energy balance equation, we get

(1 – a)S/4 + eAσTE4/2 = σTE4 .

With a bit of algebra, we find

(1 – a)S/4 = σTE4 (1 – eA/2) .

Solving for the earth’s temperature gives

TE = ∜[(1 – a)S/4σ] ∜[1/(1 – eA/2) ] .

If eA were zero, this would be exactly the relationship we had for no atmosphere. The fraction of energy absorbed by the atmosphere is not zero, however, but is approximately eA = 0.8. The atmosphere provides a dimensionless correction factor of ∜[1/(1 – eA/2)]. The temperature we found previously, 254 K, is corrected by this factor, 1.136. We get TE = 288.5 K = 15.5 °C. This is approximately the average temperature of the earth. Our atmosphere raised the earth’s temperature from –19°C to +15.5°C, a change of 34.5°C.
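Continuing the sketch from above, the atmospheric correction can be folded in like this (again just a numerical check of the algebra, with eA = 0.8 as assumed in the text):

```python
# Earth with an absorbing atmosphere: (1 - a) S / 4 = sigma T_E^4 (1 - eA / 2)
S, a, sigma = 1360.0, 0.3, 5.7e-8
eA = 0.8                                        # fraction of the earth's infrared absorbed by the atmosphere

T_bare = ((1 - a) * S / (4 * sigma)) ** 0.25    # about 254 K, as before
factor = (1 / (1 - eA / 2)) ** 0.25             # dimensionless correction, about 1.136
T_earth = T_bare * factor                       # roughly 288-289 K (about 15 deg C)
T_atm = T_earth / 2 ** 0.25                     # T_A = 0.84 T_E, roughly 243 K

print(f"Correction factor: {factor:.3f}")
print(f"Earth temperature: {T_earth:.1f} K, atmosphere temperature: {T_atm:.1f} K")
```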

Climate Change

To understand climate change, we need to look more deeply into the meaning of the factor eA, the fraction of energy absorbed by the atmosphere. The main constituents of the atmosphere—oxygen and nitrogen—are transparent to both visible and thermal radiation, so they don’t contribute to eA. Thermal energy is primarily absorbed by greenhouse gases. Examples of such gases are water vapor, carbon dioxide, and methane. Methane is an excellent absorber of thermal radiation, but its concentration in the atmosphere is low. Water vapor is a good absorber, but water vapor is in equilibrium with liquid water, so it isn’t changing much. Carbon dioxide is a good absorber, has a relatively high concentration, and is being produced by burning fossil fuels, so a lot of our discussion about climate change focuses on carbon dioxide.

The key to understanding climate change is that greenhouse gases like carbon dioxide affect the fraction of energy absorbed, eA. Suppose an increase in the carbon dioxide concentration in the atmosphere increased eA slightly, from 0.80 to 0.81. The correction factor ∜[1/(1 – eA/2)] would increase from 1.1362 to 1.1386, raising the earth’s temperature by about 0.6 K (from roughly 288.6 K to 289.2 K). Because changes in temperature are the same whether expressed in kelvin or Celsius, this is a 0.6°C rise. A small change in eA causes a significant change in the earth’s temperature. The more carbon dioxide in the atmosphere, the greater the temperature rise: global warming.
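Rerunning the same calculation with eA increased from 0.80 to 0.81 reproduces this sensitivity (a minimal check, using the 254 K airless-earth temperature found earlier):

```python
# Sensitivity of the earth's temperature to a small change in eA
for eA in (0.80, 0.81):
    T = 254.0 * (1 / (1 - eA / 2)) ** 0.25   # airless-earth temperature times the atmospheric correction
    print(f"eA = {eA:.2f}: T = {T:.1f} K")
# eA = 0.80: T = 288.6 K
# eA = 0.81: T = 289.2 K  (about a 0.6 K warming from a 0.01 increase in eA)
```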

Feedback

We have assumed the earth’s albedo, a, is a constant, but that is not strictly true. The albedo depends on how much snow and ice cover the earth. More snow and ice means more reflection, a larger albedo, a smaller amount of sunlight absorbed by the earth, and a lower temperature. But a lower temperature means more snow and ice. We have a vicious cycle: more snow and ice leads to a lower temperature, which leads to more snow and ice, which leads to an even lower temperature, and so on. Intermediate Physics for Medicine and Biology dedicates an entire chapter to feedback, but it focuses mainly on negative feedback that tends to maintain a system in equilibrium. A vicious cycle is an example of positive feedback, which can lead to explosive change. An example from biology is the upstroke of a nerve action potential: an increase in the electrical voltage inside a nerve cell leads to an opening of sodium channels in the cell membrane, which lets positively charged sodium ions enter the cell, which causes the voltage inside the cell to increase even more. The earth’s climate has many such feedback loops. They are one of the reasons why climate modeling is so complicated.

Conclusion

Today I presented a simple description of the earth’s temperature and the impact of climate change. Many things were left out of this toy model. I ignored differences in temperature over the earth’s surface and within the atmosphere. I neglected ocean currents and the jet stream that move heat around the globe. I did not account for seasonal variations, or for other greenhouse gases such as methane and water vapor, or how the amount of water vapor changes with temperature, or how clouds affect the albedo, and a myriad of other factors. Climate modeling is a complex subject. But toy models like the one I presented today provide insight into the underlying physical mechanisms. For that reason, they are crucial for understanding complex phenomena such as climate change.

Friday, October 18, 2024

A Continuum Model for Volume and Solute Transport in a Pore

As Gene Surdutovich and I prepare the 6th edition of Intermediate Physics for Medicine and Biology, we have to make many difficult decisions. We want to streamline the book, making it shorter and more focused on key concepts, with fewer digressions. Yet, what one instructor may view as “fat” another may consider part of the “meat.” One of these tough choices involves Section 5.9 (A Continuum Model for Volume and Solute Transport in a Pore).

Neither Gene nor I cover the rather long Sec. 5.9 when we teach our Biological Physics class; there just isn’t enough time. So, at the moment this section has been axed from the 6th edition. It now lies abandoned on the cutting room floor. (But, using LaTeX’s “comment” feature we could reinstate it in a moment; there’s always hope.) Russ Hobbie would probably object, because I know he was fond of that material. Today, I want to revisit that section once more, for old times’ sake.

The section develops a model of solute flow through a pore in a membrane. One key parameter it derives is the “reflection coefficient,” σ, which accounts for the size of the solute particle. If the solute radius, a, is small compared to the pore radius, Rp, then the solute can easily pass through and almost none is “reflected,” or excluded from passing through the pore. In that case, the reflection coefficient goes to zero. If the solute radius is larger than the pore radius, the solute can’t pass through (it’s too big!); it’s completely blocked and the reflection coefficient is one. The transition from σ = 0 to σ = 1 for medium-sized solute particles depends on the pore model.

The fifth edition of IPMB presents two models to calculate how the reflection coefficient varies with solute radius. The figure below summarizes them. It is similar to Fig. 5.15 in IPMB, but is drawn with Mathematica as many of the figures in the 6th edition will be. 

The blue curve shows σ as a function of ξ = a/Rp, and represents the “steric factor” 2ξ – ξ2. It arises from a model that assumes there is plug flow of solvent (usually water) through the pore; the flow velocity does not depend on position. The maize curve shows a more complex model that accounts for Poiseuille flow in the pore (no flow at the pore edge and a parabolic flow distribution that peaks in the pore center), and gives the reflection coefficient as 4ξ2 – 4ξ3 + ξ4. (Is it a coincidence that I use the University of Michigan’s school colors, blue and maize, for the two curves? Actually, it is.) Both vary between zero and one.

You can consult the textbook for the mathematical derivations of these functions. Today, I want to see if we can understand them qualitatively. For plug flow, reflection occurs if the center of the solute particle is within one particle radius of the pore edge. In that case, the number of particles that reflect grows linearly with particle radius. The steric factor 2ξ – ξ2 has this behavior. For Poiseuille flow, the size of the particle relative to the pore radius similarly plays a role. However, the flow is zero near the pore wall. Therefore, tiny particles adjacent to the edge don’t contribute much to the flow anyway, so making them slightly larger does not make much difference. The reflection coefficient grows quadratically near ξ = 0, because as the particle radius increases you have more particles that would be blocked by the pore edge, and because the larger size of the particle means that it experiences a greater flow of solvent as you move radially in from the pore edge. So, the relative behavior of the two curves for small radius makes sense. In fact, for small values of ξ the two functions are quite different: at ξ = 0.1, the blue curve is over five times larger than the maize curve.
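As a quick numerical check of this comparison, here is a short sketch that evaluates both curves (sigma_plug and sigma_poiseuille are simply my names for the two expressions):

```python
def sigma_plug(xi):
    """Steric factor for plug flow: 2 xi - xi^2."""
    return 2 * xi - xi**2

def sigma_poiseuille(xi):
    """Reflection coefficient for Poiseuille flow: 4 xi^2 - 4 xi^3 + xi^4."""
    return 4 * xi**2 - 4 * xi**3 + xi**4

for xi in (0.1, 0.5, 0.9, 1.0):
    print(f"xi = {xi:.1f}: plug = {sigma_plug(xi):.4f}, Poiseuille = {sigma_poiseuille(xi):.4f}")
# At xi = 0.1 the plug-flow value (0.19) is more than five times the Poiseuille value (0.0361);
# both curves rise to 1 at xi = 1.
```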

I find it more difficult to explain what is happening for ξ approximately equal to one. For plug flow, when the solute particle is just slightly smaller than the pore radius, it barely fits. But for Poiseuille flow, the particle not only barely fits, it also blocks all the fast flow near the pore center, and you only get a contribution from the slow flow near the edge. This causes the maize curve to be more sensitive to what is happening near ξ = 1 than the blue curve is. I don’t find this explanation as intuitively obvious as the one in the previous paragraph, but it highlights an approximation that becomes important near ξ = 1. The model does not account for adjustment of the flow of solvent when the solute particle is relatively large and disrupts the flow. That can’t really be right: if a particle almost plugs a pore, it must affect the flow distribution. I suspect that the Poiseuille model is most useful for small values of ξ, and the behavior at large ξ (near one) should be taken with a grain of salt.

I find that it’s useful to force yourself (or your student) to provide physical interpretations of mathematical expressions, even when they’re not so obvious. Remember, the goal of doing these analytical toy models is to gain insight.

For those of you who might be disappointed to see Section 5.9 go, my advice is don’t toss out your 5th edition when you buy the 6th (and I’m assuming all of my dear readers will indeed buy the 6th edition). Stash the 5th edition away in your auxiliary bookshelf (or donate it to your school library), and pull it out if you really want a good continuum model for volume and solute transport in a pore.

Friday, October 11, 2024

Extracellular Magnetic Measurements to Determine the Transmembrane Action Potential and the Membrane Conduction Current in a Single Giant Axon

Forty years ago today I was attending my first scientific meeting: The Society for Neuroscience 14th Annual Meeting, held in Anaheim, California (October 10–15, 1984). As a 24-year-old graduate student in the Department of Physics at Vanderbilt University, I presented a poster based on the abstract shown below: “Extracellular Magnetic Measurements to Determine the Transmembrane Action Potential and the Membrane Conduction Current in a Single Giant Axon.”

I can’t remember much about the meeting. I’m sure I flew to California from Nashville, Tennessee, but I can’t recall if my PhD advisor John Wikswo went with me (his name is not listed on any meeting abstract except the one we presented). I believe the meeting was held at the Anaheim Convention Center. I remember walking along the sidewalk outside of Disneyland, but I didn’t go in (I had visited there with my parents as a child).

Neuroscience Society meetings are huge. This one had over 300 sessions and more than 4000 abstracts submitted. In the Oct. 11, 1984 entry in my research notebook, I wrote “My poster session went OK. Several people were quite enthusiastic.” I took notes from talks I listened to, including James Hudspeth discussing hearing, a Presidential Symposium by Gerald Fischbach, and a talk about synaptic biology and learning by Eric Kandel. I was there when Theodore Bullock and Susumu Hagiwara were awarded the Ralph W. Gerard Prize in Neuroscience.

The research Wikswo and I reported in our abstract was eventually published in my first two peer-reviewed journal articles:

Barach, J. P., B. J. Roth and J. P. Wikswo, Jr., 1985, Magnetic Measurements of Action Currents in a Single Nerve Axon: A Core-Conductor Model. IEEE Transactions on Biomedical Engineering, Volume 32, Pages 136–140.

Roth, B. J. and J. P. Wikswo, Jr., 1985, The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment. Biophysical Journal, Volume 48, Pages 93–109.

Both are cited in Chapter 8 of Intermediate Physics for Medicine and Biology.

This neuroscience abstract was not my first publication. I was listed as a coauthor on an abstract to the 1983 March Meeting of the American Physical Society, based on some research I helped with as an undergraduate physics major at the University of Kansas. But I didn’t attend that meeting. In my CV, I have only one publication listed for 1983 and one again in 1984. Then in 1985, they started coming fast and furious. 

Four decades is a long time, but it seems like yesterday.

Friday, October 4, 2024

The Difference between Traditional Magnetic Stimulation and Microcoil Stimulation: Threshold and the Electric Field Gradient

In Chapter 7 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss electrical stimulation of nerves. In particular, we describe how neural excitation depends on the duration of the stimulus pulse, leading to the strength-duration curve.
The strength-duration curve for current was first described by Lapicque (1909) as

i = iR (1 + tc/t),

where i is the current required for stimulation, iR is the rheobase [the minimum current required for a long stimulus pulse], t is the duration of the pulse, and tc is chronaxie, the duration of the pulse that requires twice the rheobase current.

An axon is difficult to excite using a brief pulse, and you have to apply a strong current. This behavior arises because the axon has its own characteristic time, τ (about 1 ms), which is basically the resistance-capacitance (RC) time constant of the cell membrane. If the stimulus duration is much shorter than this time constant, the stimulus strength must increase.
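A short numerical illustration of the Lapicque relationship (a sketch only, assuming a representative chronaxie of 1 ms):

```python
# Threshold current, relative to rheobase, from the Lapicque strength-duration curve: i/iR = 1 + tc/t
tc = 1.0                              # chronaxie (ms), an assumed representative value
for t in (0.1, 0.5, 1.0, 5.0, 20.0):  # pulse durations (ms)
    ratio = 1 + tc / t                # i / iR
    print(f"t = {t:5.1f} ms: threshold = {ratio:.2f} x rheobase")
# Pulses much shorter than the chronaxie require currents many times the rheobase;
# long pulses approach the rheobase.
```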

A nerve axon not only has a time constant τ, but also a space constant λ. Is there a similar spatial behavior when exciting a nerve? This is the question my graduate student Mohammed Alzahrani and I addressed in our recent article “The Difference between Traditional Magnetic Stimulation and Microcoil Stimulation: Threshold and the Electric Field Gradient” (Applied Sciences, Volume 14, Article 8349, 2024). The question becomes important during magnetic stimulation with a microcoil. Magnetic stimulation occurs when a pulse of current is passed through a coil held near the head. The changing magnetic field induces an electric field in the brain, and this electric field excites neurons. Recently, researchers have proposed performing magnetic stimulation using tiny “microcoils” that would be implanted in the brain. (Will such microcoils really work? That’s a long story, see here and here.) If the coil is only 100 microns in size, the induced electric field distribution will be quite localized. In fact, it may exist over a distance that’s short compared to the typical space constant of a nerve axon (about 1 mm). Mohammed and I calculated the response of a nerve to the electric field from a microcoil, and found that for a localized field the stimulus strength required for excitation is large.

Figure 6 of our article, reproduced below, plots the gradient of the induced electric field dEx/dx (which, in this case, is the stimulus strength) versus the parameter b (which characterizes the spatial width of the electric field distribution). Note that unlike the plot of the strength-duration curve above, Fig. 6 is a log-log plot.

Figure 6 from Alzahrani and Roth, Appl. Sci., 14:8349, 2024

We wrote

Our strength-spatial extent curve in Figure 6 for magnetic stimulation is analogous to the strength-duration curve for electrical stimulation if we replace the stimulus duration [t] by the spatial extent of the stimulus b and the time constant τ by the [space] constant λ. Our results in Figure 6 have a “spatial rheobase” dEx/dx value (1853 mV/cm2) for large values of spatial extent b. At small values of b, the value of dEx/dx rises. If we wanted to define a “spatial chronaxie” (the value of b for which the threshold value of dEx/dx rises by a factor of two), it would be about half a centimeter.
To learn more about this effect you can read our paper, which was published open access, so it’s available free to everyone. Some researchers have used a value of dEx/dx found when stimulating with a large coil held outside the head to estimate the threshold stimulus strength using a microcoil. We ended the paper with this warning:
In conclusion, our results show that even in the case of long, straight nerve fibers, you should not use a threshold value of dEx/dx in a microcoil experiment that was obtained from a traditional magnetic stimulation experiment with a large coil. The threshold value must be scaled to account for the spatial extent of the dEx/dx distribution. Magnetic stimulation is inherently more difficult for microcoils than for traditional large coils, and for the same reason, electrical stimulation is more difficult for short-duration stimulus pulses than for long-duration pulses. The nerve axon has its own time and space constants, and if the pulse duration or the extent of the dEx/dx distribution is smaller than these constants, the threshold stimulation will rise. For microcoil stimulation, the increase can be dramatic.

Friday, September 27, 2024

Taylor Diffusion

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Poiseuille flow: the flow of a viscous fluid in a pipe. Consider laminar flow of a fluid, having viscosity η, through a long pipe with radius R and length Δx. The flow is driven by a pressure difference Δp across its ends. 

The velocity of the fluid in the pipe is

v = Δp (R2 – r2) / (4 η Δx) ,

where r is the distance from the center of the pipe. Figure 1.26 in IPMB includes a plot of the velocity profile, which is a parabola: large at the center of the pipe (r = 0) and zero at the wall (r = R) because of the no-slip boundary condition.

 
In most mechanics problems, not only is the velocity important but also the displacement. Yet, somehow until recently I never stopped to consider what the displacement of the fluid looks like during Poiseuille flow. Let’s say that at time t = 0 you somehow mark a thin layer of the fluid uniformly across the pipe’s cross section (the light blue line on the left in the figure below). Perhaps you do this by injecting dye or using magnetic resonance imaging to tag the spins. How does the fluid move?

At time t = Δt the displacement profile also forms a parabola, with the fluid at the center having moved a ways down the pipe to the right and the fluid at the wall not moving at all. As time marches on, the fluid keeps flowing down the pipe, with the parabola getting stretched longer and longer. Eventually, the marked fluid will extend the entire length of the pipe.
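To make the stretched parabola concrete, here is a small sketch of the displacement of the marked layer after a time Δt (the pipe radius, pressure gradient, and viscosity are made-up but plausible values, not taken from IPMB):

```python
# Displacement of a marked fluid layer after time dt in Poiseuille flow
R = 1e-3        # pipe radius (m)
dp_dx = 1e3     # pressure drop per unit length, Delta p / Delta x (Pa/m)
eta = 1e-3      # viscosity of water (Pa s)
dt = 1.0        # elapsed time (s)

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):       # radial position r/R
    r = frac * R
    v = dp_dx * (R**2 - r**2) / (4 * eta)      # parabolic velocity profile
    print(f"r/R = {frac:.2f}: displacement = {1e3 * v * dt:.0f} mm")
# Maximum displacement at the center (r = 0), zero at the wall (r = R): the layer stretches into a parabola.
```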

Poiseuille flow is laminar, meaning the fluid moves smoothly along streamlines. Laminar flow is typical of fluid motion when viscosity dominates so the Reynolds number is small. Now let’s consider how the marked or tagged fluid gets mixed with the normal fluid. In laminar flow, there is no turbulent mixing, because there are no eddies to stir the fluid. In fact, there is no component of the fluid velocity in the radial direction at all. There is no mixing, except by diffusion.

Diffusion is discussed in Chapter 4 of IPMB. It is the random movement of particles from a region of higher concentration to a region of lower concentration. Let’s consider what would happen to the marked fluid if flow was turned off (for instance, if we set Δp = 0) and only diffusion occurs. The originally narrow light blue band would no longer drift downstream but it would spread with time, rapidly at first and then more slowly later. In reality the concentration of marked fluid would change continuously in a Gaussian-like way, with a higher concentration at the center and gradually lower concentration in the periphery, but drawing that picture would be difficult, so I’ll settle for showing a uniform band getting wider in time. 

Now, what happens if drift and diffusion happen together? You get something like this: 

The parabola stretched out along the pipe is still there, but it gets wider and wider with time because of diffusion. 

What happens as even more time goes by? Eventually the marked fluid will have enough time to diffuse radially across the entire cross section of the pipe. If we look a ways downstream, the situation will be something like shown below.

The parabola disappears as the marked fluid becomes locally smeared out. Now, here’s the interesting thing: The spreading of the marked fluid is greater than you would expect from pure diffusion. It’s as if Poiseuille flow increased the diffusion. This effect is called Taylor diffusion: an effective diffusion on a large scale arising from Poiseuille flow on a small scale. The flow stretches that parabola axially and then diffusion spreads the marked fluid radially. This phenomenon is named after British physicist Geoffrey Ingram Taylor (1886–1975). Although the derivation is a bit too difficult for a blog post, you can show (see the Wikipedia article about Taylor diffusion) that the long-time, large-scale behavior is a combination of drift plus diffusion with an effective diffusion constant, Deff, given by

Deff = D (1 + R2v2/48D2) ,

where v is the mean flow speed (equal to one half the flow speed at the center of the tube). As the flow goes to zero (v = 0) the effective diffusion constant goes to Deff = D and Taylor diffusion disappears; it’s just plain old diffusion. If the flow speed is large, then Deff is larger than D by a factor of R2v2/48D2. The quantity Rv/D is the Péclet number (see Homework Problem 43 in Chapter 4 of IPMB), which is a dimensionless ratio of transport by convection to transport by diffusion. Taylor diffusion is particularly important when the Péclet number is large, meaning the drift caused by Poiseuille flow is greater than the spreading caused by diffusion. This enhanced diffusion can be important in some applications. For instance, if you are trying to mix two liquids using microfluidics, you would ordinarily have to wait a long time for diffusion to do its thing. Taylor diffusion can speed that mixing along.
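Here is a small numerical illustration of how dramatic this enhancement can be (a sketch with made-up but plausible microfluidic numbers):

```python
# Taylor dispersion: effective axial diffusion constant in Poiseuille flow
R = 100e-6   # channel radius (m), e.g. a 100-micron microfluidic channel
v = 1e-3     # mean flow speed (m/s)
D = 1e-9     # molecular diffusion constant (m^2/s), typical for a small molecule in water

Pe = R * v / D                                 # Peclet number (dimensionless)
D_eff = D * (1 + (R * v)**2 / (48 * D**2))     # Taylor's result

print(f"Peclet number: {Pe:.0f}")              # 100
print(f"D_eff / D: {D_eff / D:.0f}")           # about 209: axial spreading is enhanced roughly 200-fold
```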

You can call this phenomenon “Taylor diffusion” if you want. Some people use the term “Taylor dispersion.” I call it “diffusion (Taylor’s version).”

 Taylor Swift singing Shake It Off (Taylor’s Version)

 


 

Friday, September 20, 2024

Transitioning to Environmentally Sustainable, Climate-Smart Radiation Oncology Care

“Transitioning to Environmentally
Sustainable Climate-Smart
Radiation Oncology Care,”
by Lichter et al.,
IJROBP, 113:915–924, 2022.
Loyal readers of this blog may have noticed an increasing number of posts related to climate change, and the intersection of global warming with health care and medical physics. This is not an accident. I’m growing increasingly worried about the impact of climate change on our society. One way I act to oppose climate change is to write about it (here, here, here). So, I was delighted to read Katie Lichter and her team’s editorial about “Transitioning to Environmentally Sustainable, Climate-Smart Radiation Oncology Care” (International Journal of Radiation Oncology Biology Physics, Volume 113, Pages 915–924, 2022). Their introduction begins (references removed)

Climate change is among the most pressing global threats. Action now and in the coming decades is critical. Rising temperatures exacerbate the frequency and intensity of extreme weather events, including wildfires, hurricanes, floods, and droughts. Such events threaten not only our ecosystems, but also our health. Climate change’s negative effects on human health are slowly becoming better understood and are projected to increase if emissions mitigation remains inadequate. Emerging research notes a disproportionate effect of climate change on vulnerable populations (e.g., older populations, children, low-income populations, ethnic minorities, and patients with chronic conditions, including cancer) who are the least equipped to deal with these outsized effects.
Then Lichter and her coauthors get specific about radiation oncology.
More than half of cancer patients will require radiation therapy (RT) during the course of their illness. As most RT courses are delivered using fractionated external beam radiation (EBRT), patients undergoing EBRT are vulnerable to treatment disruptions from climate events. Notably, disruption of RT treatments due to severe weather events has been shown to affect patient treatment and survival. As radiation oncologists, it is imperative to recognize and further investigate the effects of climate change on health and cancer outcomes and understand the specific vulnerabilities of patients receiving RT to the effects of climate change. We must also advance our understanding of the contribution of radiation oncology as a specialty to green house gas (GHG) emissions, and what measures may be taken in our daily practices to join the international efforts in reducing our negative environmental impact.
Next the authors present their four R’s to address oncology care: reduce, reuse, recycle, rethink. This is sort of an inside joke among radiation biologists, because radiation biology famously has its own four R’s: repair, reassortment, reoxygenation, and repopulation. Lichter et al.’s four R’s explain how to lower radiation oncology’s effect on the climate.

  1. Reduce means to lower the energy needs for imaging and therapeutic devices, and to minimize medical waste.
  2. Reuse means to favor reusable equipment and supplies (such as surgical gowns) whenever possible.
  3. Recycle means to recycle any single-use supplies that cannot be reused. Much of this material now finds its way to urban landfills rather than to recycling centers.
  4. Rethink means to reconsider all medical radiation oncology processes and procedures in light of climate change. Can some things be done by telemedicine? Can we reduce the number of fractions of radiation a patient receives so fewer visits to the hospital are required? Can some professional conferences be held virtually rather than in person? Sometimes the answer may be yes and sometimes no, but all these issues need to be reexamined.

Lichter’s editorial concludes (my italics)

The health care system contributes significantly to today’s climate health crisis. All efforts addressing the crisis are important due to their direct emissions reduction potential, and the example they set for the health care system and the patients who need the care. Although the effects of increasing global temperatures on human health are well studied, the effects of health care, and specifically oncology and radiation treatments, on contributing to climate change are not. The radiation oncology community has a unique opportunity to use our technological expertise and awareness to assess and minimize the environmental impact of our care and set the standard for sustainable health care practices for other specialties to emulate. 

Thank you Katie Lichter and your whole team for all the important work that you are doing to fight climate change! Your four R’s—reduce, reuse, recycle, and rethink—apply beyond radiation oncology, and even beyond health care, to all of our society’s activities. Perhaps writers of textbooks such as Intermediate Physics for Medicine and Biology need to reduce, reuse, recycle, and especially rethink how our books impact, and are impacted by, global warming.

 
Listen to Katie Lichter talk about her climate journey.

Friday, September 13, 2024

The Million Person Study: Whence It Came and Why

A screenshot of the article “How Sound is the Model Used to
Establish Safe Radiation Levels?” on the website physicsworld.com.
Last fall, physicsworld.com published an editorial by Robert Crease asking “How Sound is the Model Used to Establish Safe Radiation Levels?” This question is addressed in Chapter 16 of Intermediate Physics for Medicine and Biology, and I have discussed it before in this blog. Crease begins
Ionizing radiation can damage living organisms, that’s clear. But there are big questions over the validity of the linear no-threshold model (LNT), which essentially states that the risk of cancer from radiation and carcinogens always increases linearly with dose. The LNT model implies, in other words, that any amount of radiation is always dangerous and that zero risk is present only at zero dose.
Crease notes that alternative models are the threshold model in which there is a minimum dose below which there is no risk, and the hormesis model which says that small doses are beneficial by triggering repair mechanisms. He explains that by adopting such a conservative position as the linear no-threshold model we may cause unforeseen negative consequences.

What sort of negative consequences? One of the most urgent and dire health hazards faced by humanity is climate change. Addressing the danger of a warming climate, with all its implications, must be our top priority. Climate change is caused primarily by the emission of greenhouse gases such as carbon dioxide that result from the burning of fossil fuels to generate electricity, warm our homes, power our vehicles, or make steel and concrete. One alternative to burning fossil fuels is to use nuclear energy. But nuclear energy is feared by many, in part because of the linear no-threshold model, which implies that any exposure to ionizing radiation is dangerous. If, in fact, the linear no-threshold model is not valid at the low doses associated with nuclear power plants and nuclear waste disposal, then the public might be more accepting of nuclear power, which may help us in the battle against climate change. Crease concludes
One of the many reasons for the need to study the validity of LNT is that convictions of its accuracy continue to be used as an argument against nuclear power plants, in connection with their operation as well as their spent fuel rods. Nuclear power may be undesirable for reasons other than this. But the critical need to find a workable alternative to fossil fuels for energy production requires an honest ability to assess the validity of this model.
In my opinion, determining if the linear no-threshold model is valid at low doses is one of the greatest challenges of medical physics today. It’s a critical example of how physics interacts with medicine and biology. We need to figure this out. But how?

Screenshot of The Million Person Study website.
One way is to conduct an epidemiological study of low-dose radiation exposure. But such a study would have to be huge, because it’s looking for a tiny effect influencing an enormous population. What you need is something like The Million Person Study. Yes, medical physics has its own “big science” large-scale collaboration. The Million Person Study’s website states
There is a major gap in epidemiological understanding, however, of the health effects experienced by populations exposed to radiation at lower doses, gradually over time.

The foundation of the Million Person Study is to fill that gap, using epidemiological methods of assessing rate and quality of mortality on a study group of one million persons exposed to this type of radiation.
The website notes that there are many reasons to assess the risk of low doses of radiation, including determining 1) the side effects of medical imaging procedures such as computed tomography, 2) the danger of nuclear accidents or terrorism (dirty bombs), 3) the safety of occupations that expose workers to a slight radiation dose, 4) the hazards of environmental exposure such as from radon in homes, and 5) the uncertainty of space and high-altitude travel such as when sending astronauts to Mars. The Million Person Study not only focuses on the level of exposure, but also on the duration: was it a brief exposure, as from a nuclear accident, or a low dose delivered over a long time?

The cover of a special issue of the International Journal of Radiation Biology
about The Million Person Study.
Want to learn more about The Million Person Study? See the paper by John Boice, Sarah Cohen, Michael Mumma, and Elisabeth Ellis titled “The Million Person Study: Whence it Came and Why,” published in the International Journal of Radiation Biology in 2022 (Volume 98, Pages 537–550). Its abstract is printed below.
Purpose: The study of low dose and low-dose rate exposure is of immeasurable value in understanding the possible range of health effects from prolonged exposures to radiation. The Million Person Study (MPS) of low-dose health effects was designed to evaluate radiation risks among healthy American workers and veterans who are more representative of today’s populations than are the Japanese atomic bomb survivors exposed briefly to high-dose radiation in 1945. A million persons were needed for statistical reasons to evaluate low-dose and dose-rate effects, rare cancers, intakes of radioactive elements, and differences in risks between women and men.

Methods and Materials: The MPS consists of five categories of workers and veterans exposed to radiation from 1939 to the present. The U.S. Department of Energy (DOE) Health and Mortality study began over 40 years ago and is the source of ∼360,000 workers. Over 25 years ago, the National Cancer Institute (NCI) collaborated with the U.S. Nuclear Regulatory Commission (NRC) to effectively create a cohort of nuclear power plant workers (∼150,000) and industrial radiographers (∼130,000). For over 30 years, the Department of Defense (DoD) collected data on aboveground nuclear weapons test participants (∼115,000). At the request of NCI in 1978, Landauer, Inc., (Glenwood, IL) saved their dosimetry databases which became the source of a cohort of ∼250,000 medical and other workers.

Results: Overall, 29 individual cohorts comprise the MPS of which 21 have been or are under active study (∼810,000 persons). The remaining eight cohorts (∼190,000 persons) will be studied as resources become available. The MPS is a national effort with critical support from the NRC, DOE, National Aeronautics and Space Administration (NASA), DoD, NCI, the Centers for Disease Control and Prevention (CDC), the Environmental Protection Agency (EPA), Landauer, Inc., and national laboratories.

Conclusions: The MPS is designed to address the major unanswered question in radiation risk understanding: What is the level of health effects when exposure is gradual over time and not delivered briefly. The MPS will provide scientific understandings of prolonged exposure which will improve guidelines to protect workers and the public; improve compensation schemes for workers, veterans and the public; provide guidance for policy and decision makers; and provide evidence for or against the continued use of the linear nonthreshold dose-response model in radiation protection.

Lead on, Million Person Study, and thank you for your effort. We need those results!