Friday, September 12, 2025

Why Are There So Few Aerial Plankton?

Air and Water, by Mark Denny.
In his book Air and Water, Mark Denny asks an oddball question: Why are there so few aerial plankton? In my mind, this question transforms into: Why are there no flying blue whales, sucking in mouthfuls of air and filtering out tiny organisms for food? Here is what Denny says:
A general characteristic of aquatic (especially marine) environments is the presence of planktonic life. A cubic meter of water taken from virtually anywhere in a stream, lake, or ocean is teeming with small, suspended organisms. In fact, the concentration of these plants and animals is such that many kinds of invertebrates, including clams, mussels, anemones, polychaete worms, and bryozoans, can reliably use planktonic particles as their sole source of food. In contrast, air is relatively devoid of suspended matter. A cubic meter of air might contain a few bacteria, a pollen grain or two, and very occasionally a flying insect or wind-borne seed. Air is so depauperate compared to the aquatic ‘soup’ that few terrestrial animals manage to make a living by straining their food from the surrounding fluid. Web-building spiders are the only example that comes to mind.
To understand why, my first inclination is to examine the balance between gravity, thermal motion, and concentration. You can use a Boltzmann factor, exp(-mgh/kBT), to determine how the concentration changes with height h, assuming particles of mass m are in contact with a fluid at temperature T (g is the acceleration of gravity and kB is Boltzmann’s constant). But there’s one problem: the Péclet number is often large, meaning advection dominates diffusion. In other words, air or water currents are more effective than diffusion at mixing (like in a blender). In many cases the problem is even worse: the flow is turbulent, which tends to mix materials much more rapidly than diffusion does. In some ways the analysis of turbulent mixing is similar to the case of diffusion. The flux of particles is proportional to the concentration gradient, but the constant of proportionality is not the diffusion constant but instead the turbulent diffusivity. Denny does the analysis in more detail than I can go into here (turbulent flow is always complex), but he states his conclusion clearly and simply:
The sinking rates of particles in air are just too high to allow them to remain passively suspended and as a result, aerial plankton are sparse. In water, slow sinking speeds insure that many particles are suspended, and the plankton is plentiful. The abundance of aquatic suspension feeders and the scarcity of terrestrial ones, can therefore be thought of as a direct consequence of the differences in density and viscosity between air and water.

One other factor plays a role here: buoyancy. If the small organisms have a density approximately equal to that of water, then tiny aquatic animals are almost neutrally buoyant, so they’re easy to keep suspended. In air, however, buoyancy plays almost no role, so these little animals “seem” much denser.
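To put rough numbers on this, here is a minimal sketch (my own illustration, not a calculation from Denny’s book) that uses Stokes’ law with a buoyancy correction to estimate the terminal sinking speed of a small particle in air and in water. The particle radius and density are illustrative assumptions.

```python
# Terminal sinking speed of a small sphere, from Stokes' law with buoyancy:
#   v = 2 a^2 g (rho_particle - rho_fluid) / (9 eta)
# Assumed example: a 10-micron-radius particle with the density of water.

g = 9.8            # acceleration of gravity (m/s^2)
a = 10e-6          # particle radius (m), assumed
rho_p = 1000.0     # particle density (kg/m^3), assumed equal to water

fluids = {
    "air":   {"rho": 1.2,    "eta": 1.8e-5},   # density (kg/m^3), viscosity (Pa s)
    "water": {"rho": 1000.0, "eta": 1.0e-3},
}

for name, f in fluids.items():
    v = 2 * a**2 * g * (rho_p - f["rho"]) / (9 * f["eta"])
    print(f"sinking speed in {name}: {v:.2e} m/s")
```

For this water-density particle the buoyancy term eliminates sinking in water entirely, while in air the same particle falls about a centimeter per second. Even plankton slightly denser than water sink only slowly, because in water the density difference is small and the viscosity is relatively large.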

It looks like I should abandon my search for a giant flying suspension feeder, resembling a blimp with a big mouth to suck in large amounts of air that it filters to extract food. Too bad. I was looking forward to befriending one, if the physics had only allowed it.


 https://www.youtube.com/watch?v=NivDXM88oCo

Friday, September 5, 2025

Does Stokes’ Law Hold for a Bubble?

In Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Stokes’ law. When a small sphere of radius a moves with speed U through a fluid of viscosity η, the drag force is D = 6πηaU. This result is well known, but does it apply to a gas bubble moving in water?

Life in Moving Fluids, by Steven Vogel, superimposed on Intermediate Physics for Medicine and Biology.
I was reading through Life in Moving Fluids, by Steven Vogel, when I came across the answer. Vogel considers a fluid sphere moving in a fluid medium. His Eq. 15.8 gives the drag on such a sphere as

D = 6πηextaU [1 + (2/3)(ηext/ηint)] / [1 + ηext/ηint].

Here, ηext is the viscosity of the external fluid and ηint is the viscosity of the internal fluid.

Suppose you have a sphere of water in air (say, a raindrop falling from the sky toward earth). Then ηint = ηwater = 10^-3 Pa s and ηext = ηair = 2 × 10^-5 Pa s. Thus ηext/ηint = 0.02. For our purposes, this is nearly zero, and the drag force reduces to Stokes’ law, D = 6πηextaU.

Now, consider a sphere of air in water (say, a bubble rising toward the surface of a lake). Then ηint = ηair = 2 × 10^-5 Pa s and ηext = ηwater = 10^-3 Pa s. Thus ηext/ηint = 50. For our purposes, this is nearly infinity, and the drag force becomes D = 4πηextaU. Yikes! Stokes’ law does not hold for a bubble. Who knew? (Vogel knew.)

Apparently when the sphere is a fluid, internal motion occurs, as shown in Vogel’s picture below.
 

Note that at the edge of the sphere, the internal and external flows are in the same direction. This changes the boundary condition at the surface. A rigid sphere would obey the no-slip condition, but a fluid sphere does not because the internal fluid is moving.

Although Vogel doesn’t address this, I wonder what the drag force is on a sphere of water in water. Does this even make sense? Perhaps we would be better off considering a droplet of some liquid that has the same viscosity as water moving through water (I can imagine this happening in a microfluidics apparatus). In that case the drag force becomes D = 5πηextaU. I must confess, I’m not sure if the derivation of the general equation is valid in this case, but I don’t see why it shouldn’t be.
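As a quick check of these three limiting cases, here is a small sketch (my own, not from Vogel’s book) that evaluates the factor multiplying πηextaU in the drag formula above for the raindrop, the bubble, and the equal-viscosity droplet.

```python
# Drag on a fluid sphere of radius a moving at speed U through an external fluid:
#   D = 6*pi*eta_ext*a*U * (1 + (2/3)*r) / (1 + r),  where r = eta_ext / eta_int.
# The prefactor runs from 6*pi (rigid-sphere limit, r -> 0) to 4*pi (bubble, r -> infinity).

from math import pi

def drag_prefactor(eta_int, eta_ext):
    """Return the multiple of eta_ext*a*U appearing in the drag force."""
    r = eta_ext / eta_int
    return 6 * pi * (1 + (2 / 3) * r) / (1 + r)

eta_air, eta_water = 2e-5, 1e-3   # viscosities in Pa s, as in the post

cases = {
    "raindrop (water in air)":  (eta_water, eta_air),
    "bubble (air in water)":    (eta_air, eta_water),
    "droplet (water in water)": (eta_water, eta_water),
}

for name, (eta_int, eta_ext) in cases.items():
    factor = drag_prefactor(eta_int, eta_ext) / pi
    print(f"{name}: D = {factor:.2f} pi eta_ext a U")
```

The raindrop comes out at 5.96π (essentially Stokes’ 6π), the bubble at 4.04π, and the equal-viscosity droplet at exactly 5π.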

There are all kinds of little jewels inside Vogel’s book. I sure wish he were still around.

Friday, August 29, 2025

Approximate Analytical Solutions of the Bidomain Equations for Electrical Stimulation of Cardiac Tissue With Curving Fibers

Russ Hobbie and I stress qualitative thinking, in addition to quantitative thinking, in Intermediate Physics for Medicine and Biology. What’s the difference? Qualitative thinking is the ability to guess approximately what a solution will look like. As an example, let me share the first part of one of my publications that I’ve always liked. It is from the article “Approximate analytical solutions of the bidomain equations for electrical stimulation of cardiac tissue with curving fibers,” published in Physical Review E (Volume 67, Article number 051925) in 2003. I wrote the paper with my graduate student Debbie Langrill Beaudoin.

In the first paragraph, we wrote “Fig. 1 shows the fiber geometry throughout a sheet of tissue and the direction of the applied electric field. Can you look at Fig. 1 and predict where the tissue will be depolarized and where it will be hyperpolarized?” Figure 1 is shown below. 


Warning: Debbie and I cooked up a way to avoid polarization at the boundaries. Ordinarily you would expect a big hyperpolarization on the left and a depolarization on the right, both restricted to only a few length constants from the edge. Ignore this effect. Just consider the polarization caused by the fiber curvature.

At this point, dear reader, I ask you to stop reading and guess the distribution of polarization. Take a piece of paper, sketch the fiber distribution in Fig. 1, and then mark which areas of the tissue are depolarized and which are hyperpolarized. If you have some colored pencils handy, just color the depolarized region red and the hyperpolarized region blue. Go ahead. I’ll wait... 


 

Okay, let’s see how you did. Below is Fig. 6 of our paper, which gives the result. Depolarization is in red, hyperpolarization is in blue.


Did you get anything like this?

To help explain the polarization distribution, I’ve created two new figures not in the paper. In both, the short gray line segments show the fiber geometry. The purple arrows indicate the direction of the applied electric field. Green shows a component of the intracellular current density. The red D’s and blue H’s indicate depolarization and hyperpolarization. The first of the two figures is shown below, and illustrates what I’ll call “Mechanism 1.” 

Mechanism 1

In the region where the fibers point along the applied electric field (like along the left edge of the tissue), the current is divided approximately equally between the intracellular and extracellular spaces because they have similar conductivities in that direction. So, the green intracellular current density arrows are relatively large there. In the region where the fibers point perpendicular to the applied electric field (like in the center), the intracellular current density is less than the extracellular current density because the intracellular conductivity is smaller than the extracellular conductivity in that direction, so the green intracellular current density arrows are relatively small. Somewhere between these two regions, current has to leave the intracellular space and pass out through the membrane, entering the extracellular space. This outward membrane current depolarizes the tissue (red D’s). If the fiber direction then changes back to being parallel to the electric field (like along the right edge of the tissue), some extracellular current must recross the membrane and reenter the cell, which hyperpolarizes the tissue (blue H’s). This behavior is shown in the upper small plot to the left of Fig. 6a.

The next new figure illustrates “Mechanism 2.” Consider what happens where the fibers are oriented at an angle of about 45 degrees to the electric field. In that case, even though the electric field may be horizontal, the anisotropy rotates the intracellular current density to be more nearly parallel to the fibers (the direction with the highest conductivity). In other words, the electric field is horizontal, but the intracellular current density rotates counterclockwise. The extracellular current density also rotates, but not as much, because the intracellular space is more anisotropic than the extracellular space. Thus, you pick up a component of the intracellular current perpendicular to the applied electric field (shown by the green arrows). If the fibers change direction so they are either parallel or perpendicular to the field, you get no rotation of the current density there, so there is no component of the intracellular current perpendicular to the electric field. At the head of one of those green arrows, the intracellular current density vector ends, so the intracellular current must cross the membrane and enter the extracellular space, depolarizing the tissue. At the tail of a green arrow, the intracellular current density vector begins, so the extracellular current must cross the membrane and enter the intracellular space, hyperpolarizing the tissue. This results in a somewhat complicated pattern of polarization (H’s and D’s), which resembles the pattern shown in the lower small plot to the left of Fig. 6a.

Mechanism 2

Both of these mechanisms operate simultaneously, so the net polarization is the sum of those two small plots. This results in the Yin-Yang pattern of depolarization and hyperpolarization of Fig. 6a. (Stare at Fig. 6a long enough and you’ll see that this is correct.) Below it, in Fig. 6b, is the result you get if you just mindlessly solve the bidomain equations numerically. (Actually, Debbie solved them, and she did nothing mindlessly, but you know what I mean.) The two are qualitatively the same, although there are quantitative differences.
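For readers who want to experiment with this qualitative reasoning, below is a minimal numerical sketch (my own, not the analytical method from the paper). It treats the applied field as uniform, builds an intracellular conductivity tensor from an assumed, made-up fiber-angle map (not the geometry of Fig. 1), and uses the divergence of the intracellular current density as a proxy for polarization: wherever current is forced out of the intracellular space the tissue depolarizes, and wherever it re-enters the tissue hyperpolarizes, just as in Mechanisms 1 and 2 above.

```python
import numpy as np

# Qualitative sketch: polarization of a 2D sheet of cardiac tissue with curving
# fibers in a uniform applied field E (pointing in x). Where the intracellular
# current J_i = sigma_i(theta) . E has negative divergence, current must exit
# across the membrane (depolarization, +); positive divergence means current
# re-enters (hyperpolarization, -). Conductivities and fiber map are assumptions.

sigma_L, sigma_T = 0.2, 0.02   # intracellular conductivities (S/m), along/across fibers (assumed)
E = np.array([1.0, 0.0])       # uniform applied field, arbitrary units

n = 200
x = np.linspace(-1, 1, n)
y = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, y, indexing="xy")
dx = x[1] - x[0]

# Made-up curving-fiber geometry (NOT the fiber field of the paper's Fig. 1)
theta = 0.5 * np.pi * np.exp(-(X**2 + Y**2))   # fiber angle relative to the x axis

c, s = np.cos(theta), np.sin(theta)
# Intracellular conductivity tensor components for fibers at angle theta
sxx = sigma_L * c**2 + sigma_T * s**2
sxy = (sigma_L - sigma_T) * c * s
syy = sigma_L * s**2 + sigma_T * c**2

# Intracellular current density J_i = sigma_i . E
Jx = sxx * E[0] + sxy * E[1]
Jy = sxy * E[0] + syy * E[1]

# Polarization proxy: minus the divergence of J_i
dJx_dx = np.gradient(Jx, dx, axis=1)
dJy_dy = np.gradient(Jy, dx, axis=0)
polarization = -(dJx_dx + dJy_dy)   # + = depolarized (red D), - = hyperpolarized (blue H)

print("max depolarization proxy:", polarization.max())
print("max hyperpolarization proxy:", polarization.min())
```

Plotting the `polarization` array with a diverging colormap gives a rough map of depolarized and hyperpolarized regions; with a fiber field like the one in Fig. 1, you would expect something resembling the Yin-Yang pattern of Fig. 6.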

So, how many of you guessed the Yin-Yang pattern? To tell you the truth, I’m not sure I did when Debbie and I first started this analysis. It’s difficult. But at least now I have a way to understand this pretty but nonintuitive pattern. I’ve found that being able to do these hand-waving types of explanations is useful. It lets you understand what is going on, rather than just putting a calculation into a black-box computer program and getting out an answer with no insight. Remember: the purpose of computing is insight, not numbers!

Finally, I really enjoyed starting a research paper off with a puzzle like that in Fig. 1 and ending it with the solution like in Fig. 6. I think you should consider using this trick in your next article.

Friday, August 22, 2025

The Cardiac Bidomain Model in Twelve Publications

Recently I wrote a review of the bidomain model of cardiac tissue. Russ Hobbie and I discuss the bidomain model in Section 7.9 of Intermediate Physics for Medicine and Biology. It’s a mathematical description of heart muscle that keeps track of the voltages and currents both inside and outside the myocardial cells. What I wrote is not really an academic review article, nor a history, nor a memoir. To tell you the truth, I’m not sure what it is. I originally thought I’d try to publish it, but I’m not sure who would accept such an unusual article. So, I decided it would be best to distribute it on my blog. There is little I can do for my dear readers, but I can give them this review.

The format is to describe the bidomain model by considering twelve publications. Below is a list of the articles I chose. Each article is meant to feature one researcher, whose name is listed in bold.

Tung L (1978) A bi-domain model for describing ischemic myocardial dc potentials. PhD Dissertation, Massachusetts Institute of Technology.

Plonsey R, Barr RC (1984) Current flow patterns in two-dimensional anisotropic bisyncytia with normal and extreme conductivities. Biophys J 45:557–571.

Sepulveda NG, Roth BJ, Wikswo JP Jr (1989) Current injection into a two-dimensional anisotropic bidomain. Biophys J 55:987–999.

Henriquez CS, Plonsey R (1990b) Simulation of propagation along a cylindrical bundle of cardiac tissue. II. Results of the simulation. IEEE Trans Biomed Eng 37:861–875.

Neu JC, Krassowska W (1993) Homogenization of syncytial tissue. Crit Rev Biomed Eng 21:137–199.

Wikswo JP Jr, Lin SF, Abbas RA (1995) Virtual electrodes in cardiac tissue: A common mechanism for anodal and cathodal stimulation. Biophys J 69:2195–2210.

Trayanova N, Skouibine K, Aguel F (1998) The role of cardiac tissue structure in defibrillation. Chaos 8:221–233.

Knisley SB, Trayanova N, Aguel F (1999) Roles of electric field and fiber structure in cardiac electric stimulation. Biophys J 77:1404–1417.

Efimov IR, Cheng Y, van Wagoner DR, Mazgalev T, Tchou PJ (1998) Virtual electrode-induced phase singularity: A basic mechanism of defibrillation failure. Circ Res 82:918–925.

Entcheva E, Eason J, Efimov IR, Cheng Y, Malkin R, Claydon F (1998) Virtual electrode effects in transvenous defibrillation-modulation by structure and interface: Evidence from bidomain simulations and optical mapping. J Cardiovasc Electrophysiol 9:949–961.

Rodriguez B, Li L, Eason JC, Efimov IR, Trayanova NA (2005) Differences between left and right ventricular chamber geometry affect cardiac vulnerability to electric shocks. Circ Res 97:168–175.

Bishop MJ, Boyle PM, Plank G, Welsh DG, Vigmond EJ (2010) Modeling the role of the coronary vasculature during external field stimulation. IEEE Trans Biomed Eng 57:2335–2345.

My biggest worry is that I’ve left too much out. For instance, I could easily have featured other researchers, such as Rick Gray, Jamey Eason, Roger Barr, Marc Lin, Felipe Aguel, David Geselowitz, and others. Also, I suspect there are many researchers who, if they read this review, will be hurt because they are completely ignored. All I can say is, I’m sorry. I tried to relate the story as best I can remember it, but I may have remembered some things wrong.

You can download my review here. I hope you enjoy reading the article as much as I enjoyed writing it. It was an honor to work on this topic with so many outstanding scientists. As Randy Travis sings, these scientists are my heroes and friends.

 
 
“Heroes and Friends,” by Randy Travis

Friday, August 15, 2025

Lutetium-177

When preparing the 6th edition of Intermediate Physics for Medicine and Biology, I like to scan the literature for new medical advances. While revising the chapter on nuclear medicine, I found some fascinating information about an isotope that was not mentioned in the 5th edition of IPMB: lutetium-177.

First, the physics. Lutetium (pronounced loo-tee-shee-uhm) is element 71 in the periodic table. Below are the energy level and decay data. The primary mechanism of decay is emitting a beta particle (an electron), transmuting into a stable isotope of hafnium. The maximum energy of this electron is about 500 keV. Two other possibilities (each happening in about one out of every ten decays) are beta decay of 177Lu to one of two excited levels of 177Hf, followed by gamma decay. The two most common gamma photons have energies of 113 and 208 keV. Lutetium-177 produces few internal conversion or Auger electrons. The average energy of all the emitted electrons is about 150 keV, which corresponds to a range of about 0.25 mm. The half-life of 177Lu is roughly a week.
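Because the half-life governs how long the dose is delivered, here is a tiny sketch of the decay (my own illustration; the 6.6-day half-life is the standard value, and the initial activity is an arbitrary assumed number).

```python
# Exponential decay of lutetium-177: A(t) = A0 * exp(-ln(2) * t / T_half)
from math import exp, log

T_half = 6.6     # half-life in days ("roughly a week")
A0 = 7.4         # initial activity in GBq (arbitrary illustrative value)

for t in [0, 1, 7, 14, 28]:      # days after administration
    A = A0 * exp(-log(2) * t / T_half)
    print(f"day {t:2d}: activity = {A:5.2f} GBq ({100 * A / A0:5.1f}% of initial)")
```

After one half-life (about a week) half the activity remains; after a month, only a few percent is left.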

Next, the biology and medicine. Lutetium can be used for imaging (using the gamma rays) or therapy (using the electrons). While the dose arising from all the electrons does not make this isotope ideal for pure imaging studies (technetium-99m might be a better choice), the gammas do provide a way to monitor 177Lu during therapy (in this way it is similar to iodine-131, used in thyroid cancer therapy and imaging). Such a combined function allows the physician to do “theranostics” (a combination of therapy and diagnostics), a term I don’t care for, but it is what it is. 177Lu can be bound to other molecules to improve its ability to target a tumor. For instance, it is sometimes attached to a molecule that binds specifically to prostate-specific membrane antigen (PSMA). PSMA is over-expressed in prostate tumors, so this allows the 177Lu to target prostate tumor cells. One advantage of using 177Lu in this way—rather than, say, using radiotherapy with x-rays directed at the prostate—is that the 177Lu will seek out and irradiate any metastasizing cancer cells as well as the main tumor. Clinical trials show that it can prolong the life of those suffering from prostate cancer.


 Lutetium-177: PSMA Guided Treatment

https://www.youtube.com/watch?v=Th42pFOx0Fs

Friday, August 8, 2025

Push Back Hard

Last November, right after the Presidential election, I wrote a blog post about trusted information on public health. In that post, I featured the science communication efforts by Katelyn Jetelina (Your Local Epidemiologist) and Andrea Love (Immunologic). I didn’t realize at the time just how much I would come to rely on these two science advocates for trustworthy information, especially related to vaccines.

Today, I recommend several more science communicators. The first is Skeptical Science. That website focuses primarily on climate science. The current Republican administration has denied and mocked the very idea of climate change, describing it as a “hoax.” Skeptical Science has a simple mission: “debunk climate misinformation.” This is extraordinarily important, as climate change may be the most important issue of our time. Check out their website www.skepticalscience.com, and follow them on Facebook. I just signed up for their Cranky Uncle app on my phone. I learned about Skeptical Science from my Climate Reality mentor, John Forslin. For those more interested in doing rather than reading and listening, I recommend The Climate Reality Project (Al Gore’s group). Take their training. I did. Oh, and don’t forget Katharine Hayhoe’s website https://www.katharinehayhoe.com.

I recently learned about the Center for Infectious Disease Research and Policy, which operates out of the University of Minnesota (Russ Hobbie, the main author of Intermediate Physics for Medicine and Biology, worked there for most of his career). I can’t tell you too much about it, except that its director is Michael Osterholm, a leading and widely respected vaccine expert and advocate.

Want to know more about science funding, especially to the National Institutes of Health? Check out Unbreaking. They’re documenting all the bad stuff happening to science these days. I learned about Unbreaking from Liz Neeley's weekly newsletter Meeting the Moment. Liz is married to Ed Yong, who I have written about before.

My next recommendation is Angela Rasmussen, a virologist who publishes at the site Rasmussen Retorts on Substack. What I like about Rasmussen is that she tells it like it is, and doesn’t worry if her salty language offends anyone. I must confess, as I experience more and more of what I call the Republican War on Science, I get angrier and angrier. Rasmussen’s retorts reflect my rage. She writes “Oh, also, I swear sometimes. It’s not the most professional behavior but I believe in calling things what they are and sometimes nothing besides ‘asshole’ is accurate.” Give ’em hell, Angie! Here are the concluding two paragraphs of her August 5 post:

There’s always a ton of talk about how public health and science have lost trust. A lot of people like to tell me that it’s our fault. Scientists didn’t show enough humility or acknowledge uncertainty during the COVID pandemic. We were wrong about masks or vaccines or variants or whatever. We didn’t communicate clearly. We overclaimed and underdelivered. I reject these arguments.

The public didn’t lose trust in science because experts are wrong sometimes, and are imperfect human beings who make mistakes. They lost trust because people like [Robert F. Kennedy, Jr.] constantly lied about science. He is constantly lying still. He’s eliminating experts so that he and his functionaries on ACIP [The CDC’s Advisory Committee on Immunization Practices] will be able to continue lying without any inconvenient pushback. We need to recognize this and push back hard.
What am I doing to push back hard? Regular readers of this blog may recall my post from this April in which I imagined what Bob Park’s newsletter What’s New would look like today. Well, I’ve made that a weekly thing. You can find them published on my Medium account (https://medium.com/@bradroth). I’ll link a few of the updates below.
https://medium.com/@bradroth/bob-parks-what-s-new-august-1-2025-5cf2c5bfc598

https://medium.com/@bradroth/bob-parks-what-s-new-july-25-2025-bc10a841cc28

https://medium.com/@bradroth/bob-parks-what-s-new-july-18-2025-eca27626c79b

https://medium.com/@bradroth/bob-parks-what-s-new-july-11-2025-68c5943218d7

You will also find these IPMB blog posts republished there, plus a few other rants. When I started writing my updated version of What’s New, I (ha, ha)… I thought (ha, ha, ha!)... I thought that I might run out of things to talk about. That hasn’t been a problem. But writing a weekly newsletter in addition to my weekly IPMB blog posts takes time, and it makes me appreciate all the more the heroic efforts of Katelyn, Andrea, Liz, and Angela. I hope they all know how much we appreciate their effort.

Is there anything else on the horizon? The book Science Under Siege, by Michael Mann and Peter Hotez, is out next month. As soon as I can get my hands on a copy and read it, I will post a review on this blog. In the meantime, I’ll keep my powder dry, waiting until RFK Jr starts in on microwave health effects (Y’all know it’s coming). Now that’s physics applied to medicine and biology, right up my alley!

“Don’t Choose Extinction.” This is one of John Forslin’s favorite videos. Enjoy!

https://www.youtube.com/watch?v=3DOcQRl9ASc

Friday, August 1, 2025

The History of the Linear No-Threshold Model and Recommendations for a Path Forward

As Gene Surdutovich and I were preparing the 6th edition of Intermediate Physics for Medicine and Biology, we decided to update the discussion about the linear no-threshold model of radiation risk. In the 5th edition of IPMB, Russ Hobbie and I had written
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear no-threshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
In our update, we added a citation to a paper by John Cardarelli, Barbara Hamrick, Dan Sowers, and Brett Burk titled “The History of the Linear No-Threshold Model and Recommendations for a Path Forward” (Health Physics, Volume 124, Pages 131–135, 2022). When I looked over the paper, I found that there is a video series accompanying it. I said to myself: “Brad, that sounds like just the sort of thing readers of your blog might enjoy.” I found all the videos on the Health Physics YouTube channel, and I have added links to them below.

Wow! This is not a dry, technical discussion. It is IPMB meets 60 Minutes. This is a hard-hitting investigation into scientific error and even scientific fraud. It’s amazing, fascinating, and staggering.

John Cardarelli, the president of the Health Physics Society when the videos were filmed, acts as the host, introducing and concluding each of the 22 episodes. The heart of the video series is Barbara Hamrick, past president of the Health Physics Society, interviewing Edward Calabrese, a leading toxicologist and a champion of the hormesis model (low doses of radiation are beneficial).

Calabrese claims that our use of the linear no-threshold model is based on “severe scientific, ethical, and policy problems.” He reviews the history of the LNT model, starting with the work of the Nobel Prize winner Hermann Muller on the genetics of fruit flies. He reviews the evidence to support his contention that Muller and other scientists were biased in favor of the LNT model, and sometimes carried that bias to extreme lengths. At first I said to myself “this is interesting, but it’s all ancient history.” But as the video series progressed, it approached closer and closer to the present, and I began to appreciate how these early studies impact our current safety and regulatory standards.

I watched every minute of this gripping tale. (OK, I admit I watched it at a 2x playback speed, and I skipped Cardarelli’s introductions and conclusions after the first couple videos; there is only so much time in a day.) Anyone interested in the linear no-threshold model needs to watch this. I have to confess, I can offer no independent confirmation of Calabrese’s claims. I’m not a toxicologist, and my closest approach to radiobiology is being a coauthor on IPMB. Still, if Calabrese’s claims are even half true then the LNT assumption is based on weak data, to put it mildly.

Watch these videos. Maybe you’ll agree with them and maybe not, but I bet you’ll enjoy them. You may be surprised and even astounded by them.



https://www.youtube.com/watch?v=G5FjhgcnMjU

Episode 1: Who Is Dr. Edward Calabrese?


https://www.youtube.com/watch?v=slIylnAZsDY

Episode 2: LNT Beginnings—Extrapolation From ~100,000,000 x Background?


https://www.youtube.com/watch?v=4UxqcscXHWE

Episode 3: Muller Creates a Revolution


https://www.youtube.com/watch?v=E2WCE30_o3s

Episode 4: Muller: How Ambition Affects Science


https://www.youtube.com/watch?v=LP_eIQDa6rY

Episode 5: The Big Challenge


https://www.youtube.com/watch?v=PMCOejiERbQ

Episode 6: The Birth of the LNT Single-Hit Theory


https://www.youtube.com/watch?v=srDKPtbiLhI

Episode 7: Pursuit to Be the First to Discover Gene Mutation


https://www.youtube.com/watch?v=7hTfVMDPrcY

Episode 8: "Fly in the Ointment"


https://www.youtube.com/watch?v=34nNwqwIcbU

Episode 9: Why the First Human Risk Assessment Was Based on Flawed Fruit-Fly Research


https://www.youtube.com/watch?v=D2Tmvc8awZQ

Episode 10: The Birth of LNT Activism


https://www.youtube.com/watch?v=7f99cSK0lQc

Episode 11: Creation of the Biological Effects of Atomic Radiation (BEAR) I Committee


https://www.youtube.com/watch?v=JaDfua6mRIw

Episode 12: Was There Scientific Misconduct Among the BEAR Genetics Committee Members?


https://www.youtube.com/watch?v=GMhPFpeqjG8

Episode 13: Is Lower Always Better?


https://www.youtube.com/watch?v=i5ixKEHTFKE

Episode 14: Should the Genetics Panel Science Paper Be Retracted?


https://www.youtube.com/watch?v=paRx3SFfKXM

Episode 15: Follow the Money Trail: "We Are Just All Conspirators Here Together"


https://www.youtube.com/watch?v=NNdF1-K6my4

Episode 16: The Most Important Paper in Cancer Risk Assessment That Affects Policy in the US



https://www.youtube.com/watch?v=yHdLe5hileI

Episode 17: Studies With a Surprising Low-Dose Health Effect


https://www.youtube.com/watch?v=_CzS5I8DK6k

Episode 18: Ideology Trumps Science, Precautionary Principle Saves the LNT


https://www.youtube.com/watch?v=rdrKwVUuLGc

Episode 19: Genetic Repair Acknowledged


https://www.youtube.com/watch?v=892prKIMjvg

Episode 20: BEIR I Acknowledges Repair but Keeps LNT. Why?


https://www.youtube.com/watch?v=ZZx9SiY7wuI

Episode 21: BEIR I Mistake Revealed, LNT Challenged, Threshold Supported


https://www.youtube.com/watch?v=L3ZfL4vTPPM

Episode 22: Making Sense of History and a Path Forward by Dr. Calabrese

Friday, July 25, 2025

Everything Is Tuberculosis

Everything Is Tuberculosis,
by John Green.

Recently I read the current bestseller Everything Is Tuberculosis: The History and Persistence of Our Deadliest Infection, by John Green. Tuberculosis is the deadliest infectious disease worldwide. According to Green,

Just in the last two centuries, tuberculosis [TB] caused over a billion human deaths. One estimate, from Frank Ryan’s Tuberculosis: The Greatest Story Never Told, maintains that TB has killed around one in seven people who’ve ever lived. Covid-19 displaced tuberculosis as the world’s deadliest infectious disease from 2020 through 2022, but in 2023, TB regained the status it has held for most of what we know of human history: Killing 1,250,000 people, TB once again became our deadliest infection. What’s different now from 1804 or 1904 is that tuberculosis is curable, and has been since the mid-1950s. We know how to live in a world without tuberculosis. But we choose not to live in that world…
Some of the symptoms of tuberculosis are difficulty breathing, coughing up blood, night sweats, and weight loss. It is a slowly progressing disease, which led to its now-archaic nickname “consumption.” Green writes
Some patients will recover without treatment. Some will survive for decades but with permanent disability, including lung problems, devastating fatigue, and painful bone deformities. But if left untreated, most people who develop active TB will eventually die of the disease.
In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I stress the importance of understanding the sizes of things. Tuberculosis is caused by bacteria, each a couple of microns long and about half a micron wide. But the body reacts to these bacteria by surrounding them with white blood cells and T cells of the immune system, “creating a ball of calcifying tissue known as a tubercle.” Tubercles vary in size, from a few tenths of a millimeter to a centimeter. That’s too big to pass through capillaries in the bloodstream and too big to fit into a single alveolus in the lungs.

IPMB only mentions tuberculosis twice. Russ and I write
Spontaneous pneumothorax [air between the lung and the chest wall] can occur in any pulmonary disease that causes an alveolus (air sac) on the surface of the lung to rupture: most commonly emphysema, asthma, or tuberculosis….

Some pathologic conditions can be identified by the deposition of calcium salts. Such dystrophic (defective) calcification occurs in any form of tissue injury, particularly if there has been tissue necrosis (cell death). It is found in necrotizing tumors (particularly carcinomas), atherosclerotic blood vessels, areas of old abscess formation, tuberculous foci, and damaged heart valves, among others.

The history of tuberculosis as a disease is fascinating. Green writes that in eighteenth-century Europe “the disease became not just the leading cause of human death, but overwhelmingly the leading cause of human death.” Oddly, it became romanticized. People like the poet John Keats and the pianist Frederic Chopin died of tuberculosis, and the illness came to be linked with creativity. It also became associated with female beauty, as the thin, wide-eyed, rosy-cheeked appearance of a woman with tuberculosis became fashionable. Later, the disease was stigmatized, being tied to race and a lack of moral virtue. When a person suffered from tuberculosis, they often went to a sanatorium for rest and treatment, and usually died there.

The German microbiologist Robert Koch isolated Mycobacterium tuberculosis in 1882. Koch was a rival of Frenchman Louis Pasteur, and both worked on treatments. I was surprised to learn that author Arthur Conan Doyle—famous for his Sherlock Holmes stories—also played a role in developing treatments for the disease. Tuberculosis remains latent in people until it’s activated by some other problem, such as malnutrition or an immune system disease like AIDS. Many infectious diseases attack children or the elderly, but TB is common in young adults. Physicist Richard Feynman’s 25-year-old wife Arline died of tuberculosis.

Green explains that 

in the decades after the discovery of Koch’s bacillus, small improvements emerged. Better diagnostics meant the disease could be identified and treated earlier, especially once chest X-rays emerged as a diagnostic tool.

The main impact of medical physics on tuberculosis is the development of radiography. X-rays weren’t even discovered until 1895, a decade after Koch isolated the tuberculosis bacterium. They arrived just in time. The often-decaying bacteria at the center of a tubercle accumulate calcium. For low x-ray energies, when the photoelectric effect is the dominant mechanism determining how x-ray photons interact with tissue, the cross section for x-ray attenuation varies as the fourth power of the atomic number. Because calcium has a relatively high atomic number (Z = 20) compared to hydrogen, carbon, nitrogen, and oxygen (Z = 1, 6, 7, 8, respectively), and because lung tissue in general attenuates weakly owing to the low density of the air it contains, tubercles show up on a chest x-ray with a great deal of contrast.
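To see roughly why that calcium matters, here is a back-of-the-envelope sketch (my own, using only the Z⁴ scaling mentioned above and ignoring photon energy, density, and Compton scattering) comparing the per-atom photoelectric cross sections of calcium and the light elements of soft tissue.

```python
# Crude per-atom photoelectric comparison using the Z^4 scaling quoted above.
# This ignores photon energy, electron density, and Compton scattering, so it
# only illustrates why a little calcium adds a lot of x-ray contrast.

elements = {"H": 1, "C": 6, "N": 7, "O": 8, "Ca": 20}

reference = elements["O"] ** 4          # compare everything to oxygen
for name, Z in elements.items():
    relative = Z**4 / reference
    print(f"{name:2s} (Z = {Z:2d}): photoelectric cross section ~ {relative:9.4f} x oxygen")
```

By this crude estimate a calcium atom is roughly 40 times more likely than an oxygen atom to absorb a low-energy photon photoelectrically, which, together with the low attenuation of air-filled lung, is why calcified tubercles stand out so clearly.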

The primary treatment for tuberculosis nowadays is antibiotics. The first one to be used for TB, streptomycin, was discovered in the 1940s. By the mid 1950s, several antibiotics made TB curable. I was born in 1960, just after the threat of tuberculosis subsided dramatically in the United States. I can still remember us kids getting those TB skin tests in our forearms, which we all had to have before entering school. But I don’t remember being very worried about TB as a child. The threat was over by then.

A vaccine exists for tuberculosis (the Bacillus Calmette–Guérin, or BCG, vaccine), but it’s mainly effective when given to children, and isn’t used widely in the United States, where tuberculosis is rare. In poorer countries, however, the vaccine saves millions of lives. Currently, mRNA vaccines are being developed against TB. This crucial advance is happening just as Robert F. Kennedy, Jr. is leading his crazy anti-science crusade against vaccines in general, and mRNA vaccines in particular. The vaccine alliance GAVI is hoping to introduce new vaccines for tuberculosis, and this effort will certainly be hurt by the United States defunding GAVI. The World Health Organization has an “end TB strategy” that, again, will be slowed by America’s withdrawal from the WHO and the dismantling of USAID. Green’s book was published in 2025, but I suspect it was written in 2024, before the Trump administration’s conspiracy-theory-laden effort to oppose vaccines and deny vaccine science got underway.

Many of these world-wide efforts to eliminate TB depend on access to new drugs that can overcome drug-resistant TB. Unfortunately, such drugs are expensive, and are difficult to afford or even obtain in poorer countries.

In the final pages of Everything is Tuberculosis, Green writes eloquently

...TB [tuberculosis] in the twenty-first century is not really caused by a bacteria that we know how to kill. TB in the twenty-first century is really caused by those social determinants of health, which at their core are about human-built systems for extracting and allocating resources. The real cause of contemporary tuberculosis is, for lack of a better term, us...

We cannot address TB only with vaccines and medications. We cannot address it only with comprehensive STP [Search, Treat, Prevent] programs. We must also address the root cause of tuberculosis, which is injustice. In a world where everyone can eat, and access healthcare, and be treated humanely, tuberculosis has no chance. Ultimately, we are the cause.

We must also be the cure.

Green serves on the board of trustees for the global health non-profit Partners In Health. To anyone wanting to join the worldwide fight against tuberculosis, I suggest starting at https://www.pih.org.

 
John Green reads the first chapter of Everything Is Tuberculosis.

https://www.youtube.com/watch?v=CCbDdk8Wz-8



John Green discusses Everything Is Tuberculosis on the Daily Show

https://www.youtube.com/watch?v=2uppLo4lZRc


Friday, July 18, 2025

Millikan and the Magnetic Field of a Single Axon

“The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment,” superimposed on Intermediate Physics for Medicine and Biology.
Forty years ago this month, I published one of my first scientific papers. “The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment” appeared in the July, 1985 issue of the Biophysical Journal (Volume 48, Pages 93–109). I was a graduate student at Vanderbilt University at the time, and my coauthor was my PhD advisor John Wikswo. When discussing the paper below, I will write “I did this…” and “I thought that…” because I was the one in the lab doing the experiments, but of course it was really Wikswo and I together writing the paper and analyzing the results.

Selected Papers of Great American Physicists, superimposed on the cover of Intermediate Physics for Medicine and Biology.
In those days I planned to be an experimentalist (like Wikswo). About the time I was writing “The Magnetic Field of a Single Axon,” I read “On the Elementary Electrical Charge and the Avogadro Constant” by Robert Millikan (Physical Review, Volume 2, Pages 109–143, 1913). It had been reprinted in the book Selected Papers of Great American Physicists, published by the American Institute of Physics.

If you are reading this blog, you’re probably familiar with Millikan’s oil drop experiment. He measured the speed of small droplets of oil suspended in air and placed in gravitational and electric fields, and was able to determine the charge of a single electron. I remember doing this experiment as an undergraduate physics major at the University of Kansas. I was particularly impressed by the way Millikan analyzed his experiment for possible systematic errors: He worried about deviations of the frictional force experienced by the drops from Stokes’ law and corrected for them; he analyzed the possible changes to the density of the oil in small drops; he checked that his 5300-volt battery was calibrated correctly and supplied a constant voltage; and he fussed about convection currents in the air influencing his results. He was especially concerned about his value for the viscosity of air, which he estimated was known to about one part in a thousand. Rooting out systematic errors is a hallmark of a good experimentalist. I wanted to be like Millikan, so I analyzed my magnetic field measurement for a variety of systematic errors.

The first type of error in my experiment was in the parameters used to calculate the magnetic field (so I could compare it to the measured field). I estimated that my largest source of error was in my measurement of the axon radius. This was done using a reticle in the dissecting microscope eyepiece. I only knew the radius to 10% accuracy, in part because I could see that it was not altogether uniform along the axon, and because I could not be sure the axon’s cross section was circular. This error dominated the calculated magnitude of the magnetic field, because the field varies as the axon’s cross-sectional area, which is proportional to the radius squared.
Figure 1 from "The Magnetic Field of a Single Axon."
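A one-line sanity check (my own, not from the paper) shows how that 10% radius uncertainty propagates: because the calculated field scales as the radius squared, the relative error roughly doubles.

```python
# Propagating the radius uncertainty into the calculated magnetic field.
# If B is proportional to a^2, then (for small errors) dB/B ~ 2 * da/a.

relative_error_radius = 0.10                       # 10% uncertainty in the axon radius
relative_error_field = 2 * relative_error_radius   # because B ~ a^2
print(f"resulting uncertainty in the calculated field: ~{100 * relative_error_field:.0f}%")
```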

I measured the magnetic field by threading the axon through a wire-wound ferrite-core toroid (I’ve written about these toroid measurements before in this blog). I assumed the axon was at the center of the toroid, but this was not always the case. I performed calculations assuming the toroid averaged the magnetic field for an off-axis axon, and was able to set an upper limit on this error of about 2%. The magnetic field was not measured at a point but was averaged over the cross-sectional area of the ferrite core. More numerical analysis suggested that I could account for the core area to within about 1%. I was able to show that inductive effects from the toroid were utterly negligible. Finally, I assumed the high permeability ferrite did not affect the magnetic field distribution. This should be true if the axon is concentric with the toroid and aligned properly. I didn’t have a good way to estimate the size of this error.

Figure 2 from "The Magnetic Field of a Single Axon."
The toroid and axon were suspended in a saline bath (technically, Van Harreveld’s solution), and this bath gave rise to other sources of error. I analyzed the magnetic field for different sized baths (the default assumption was an unbounded bath), and for when the bath had a planar insulating boundary. I could test this directly by measuring the magnetic field as we raised and lowered the volume of fluid in the bath. The effect was negligible. I spent a lot of time worrying about the heterogeneity caused by the axon being embedded in a nerve bundle. I didn’t really know the conductivity of the surrounding nerve bundle, but for reasonable assumptions it didn’t seem to have much effect. Perhaps the biggest heterogeneity in our experiment was the “giant” (~1 mm inner radius, 2 mm outer radius, 1 mm thick) toroid, which was embedded in an insulating epoxy coating. This big chunk of epoxy certainly influenced the current density in the surrounding saline. I had to develop a new way of calculating the extracellular current entirely numerically to estimate this effect. The calculation was so complicated that Wikswo and I didn’t describe it in our paper, but instead cited another paper that we listed as “in preparation” but that in fact never was published. I concluded that the toroid did not have a big effect on my nerve axon measurements, although it seemed to be more important when I later studied strands of cardiac tissue.

Figure 3 of "The Magnetic Field of a Single Axon."
Other miscellaneous potential sources of error included capacitive effects in the saline and an uncertainty in the action potential conduction velocity (measured using a second toroid). I determined the transmembrane potential by taking the difference between the intracellular potential (measured by a glass microelectrode; see more here) and the extracellular potential (measured by a metal electrode). However, I could not position the two electrodes very accurately, and the extracellular potential varies considerably over small distances from the axon, so my resulting transmembrane potential certainly had a little bit of error. Measurement of the intracellular potential using the microelectrode was susceptible to capacitive coupling to the surrounding saline bath. I used a “frequency compensator” to supply “negative capacitance” and correct for this coupling, but I could not be sure the correction was accurate enough to avoid introducing any error. One of my goals was to calculate the magnetic field from the transmembrane potential, so any systematic errors in my voltage measurements were concerning. Finally, I worried about cell damage when I pushed the glass microelectrode into the axon. I could check this by putting a second glass microelectrode in nearby, and I didn’t see any significant effect, but such things are difficult to be sure about.

All of this analysis of systematic errors, and more, went into our rather long Biophysical Journal paper. It remains one of my favorite publications. I hope Millikan would have been proud. If you want to learn more, see Chapter 8, about biomagnetism, in Intermediate Physics for Medicine and Biology.

Forty years is a long time, but to this old man it seems like just yesterday.

Friday, July 11, 2025

David Cohen: The Father of MEG

David Cohen: The Father of MEG, by Gary Boas, superimposed on the cover of Intermediate Physics for Medicine and Biology.

Gary Boas recently published a short biography of David Cohen, known as the father of magnetoencephalography (MEG). The book begins with Cohen’s childhood in Winnipeg, Canada, including the influence of his uncle, who introduced him to electronics and crystal radios. It then describes his college days and his graduate studies at the University of California, Berkeley. He was a professor at the University of Illinois Chicago, where he built his first magnetically shielded room, in which he hoped to measure the magnetic fields of the body. Unfortunately, Cohen didn’t get tenure there, mainly for political reasons (and a bias against applied research related to biology and medicine). However, he found a new professorship at the Massachusetts Institute of Technology, where he built an even bigger shielded room. The climax of several years of work came in 1969, when he combined the SQUID magnetometer and his shielded room to make groundbreaking biomagnetic recordings. Boas describes the big event this way:
To address this problem [of noise in his copper-coil based magnetic field detector drowning out the signal], he [David Cohen] turned to James Zimmerman, who had invented a superconducting quantum interference device (SQUID) several years before… The introduction came by way of Ed Edelsack, a U.S. Navy funding officer… In a 2024 retrospective about his biomagnetism work in Boston, David described what happened next.

“Ed put me in touch with Jim, and it was arranged that Jim would bring one of his first SQUIDs to my lab at MIT, to look for biomagnetic signals in the shielded room. Jim arrived near the end of December, complete with SQUID, electronics, and nitrogen-shielded glass dewar. It took a few days to set up his system in the shielded room, and for Jim to tune the SQUID. Finally, we were ready to look at the easiest biomagnetic signal: the signal from the human heart, because it was large and regular. Jim stripped down to his shorts, and it was his heart that we first looked at.”

The results were nothing short of astounding; in terms of the signal measured, they were light years beyond anything David had seen with the copper-coil based detector. By combining the highly sensitive SQUID with the shielded room, which successfully eliminated outside magnetic disturbances, the two researchers were able to produce, for the first time, clear, unambiguous signals showing the magnetic fields produced by various organs of the human body. The implications of this were far reaching, with potential for a wide range of both basic science and clinical applications. David didn’t quite realize this at the time, but he and Zimmerman had just launched a new field of study, biomagnetism.

Having demonstrated the efficacy of the new approach… David switched off the lights in the lab and he and Zimmerman went out to celebrate. It was December 31, 1969. The thrill of possibility hung in the air as they joined other revelers to ring in a new decade—indeed, a new era.

“Biomagnetism: The First Sixty Years,” superimposed on the cover of Intermediate Physics for Medicine and Biology.
The biography is an interesting read. I always enjoy stories illustrating how physicists become interested in biology and medicine. Russ Hobbie and I discuss the MEG in Chapter 8 of Intermediate Physics for Medicine and Biology. You can also learn more about Cohen's contributions in my review article “Biomagnetism: The First Sixty Years.”

Today Cohen is 97 years old and still active in the field of biomagnetism. The best thing about Boas’s biography is you can read it for free at https://meg.martinos.org/david-cohen-the-father-of-meg. Enjoy! 


The Birth of the MEG: A Brief History
 https://www.youtube.com/watch?v=HxQ8D4cPIHI