Friday, July 25, 2025

Everything Is Tuberculosis

Everything Is Tuberculosis,
by John Green.

Recently I read the current bestseller Everything Is Tuberculosis: The History and Persistence of Our Deadliest Infection, by John Green. Tuberculosis is the deadliest infectious disease worldwide. According to Green,

Just in the last two centuries, tuberculosis [TB] caused over a billion human deaths. One estimate, from Frank Ryan’s Tuberculosis: The Greatest Story Never Told, maintains that TB has killed around one in seven people who’ve ever lived. Covid-19 displaced tuberculosis as the world’s deadliest infectious disease from 2020 through 2022, but in 2023, TB regained the status it has held for most of what we know of human history: Killing 1,250,000 people, TB once again became our deadliest infection. What’s different now from 1804 or 1904 is that tuberculosis is curable, and has been since the mid-1950s. We know how to live in a world without tuberculosis. But we choose not to live in that world…
Some of the symptoms of tuberculosis are difficulty breathing, coughing up blood, night sweats, and weight loss. It is a slowly progressing disease, which led to its now-archaic nickname “consumption.” Green writes
Some patients will recover without treatment. Some will survive for decades but with permanent disability, including lung problems, devastating fatigue, and painful bone deformities. But if left untreated, most people who develop active TB will eventually die of the disease.
In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I stress the importance of understanding the sizes of things. Tuberculosis is caused by bacteria, which are each a couple of microns long and about half a micron wide. But the body reacts to these bacteria by surrounding them with white blood cells and T cells of the immune system, “creating a ball of calcifying tissue known as a tubercle.” Tubercles vary in size, from a few tenths of a millimeter to a centimeter. That’s too big to pass through capillaries in the bloodstream and too big to fit into a single alveolus in the lungs.

IPMB only mentions tuberculosis twice. Russ and I write
Spontaneous pneumothorax [air between the lung and the chest wall] can occur in any pulmonary disease that causes an alveolus (air sac) on the surface of the lung to rupture: most commonly emphysema, asthma, or tuberculosis….

Some pathologic conditions can be identified by the deposition of calcium salts. Such dystrophic (defective) calcification occurs in any form of tissue injury, particularly if there has been tissue necrosis (cell death). It is found in necrotizing tumors (particularly carcinomas), atherosclerotic blood vessels, areas of old abscess formation, tuberculous foci, and damaged heart valves, among others.

The history of tuberculosis as a disease is fascinating. Green writes that in eighteenth-century Europe “the disease became not just the leading cause of human death, but overwhelmingly the leading cause of human death.” Oddly, it became romanticized. People like the poet John Keats and the pianist Frédéric Chopin died of tuberculosis, and the illness came to be linked with creativity. It also became associated with female beauty, as the thin, wide-eyed, rosy-cheeked appearance of a woman with tuberculosis became fashionable. Later, the disease was stigmatized, being tied to race and a lack of moral virtue. When a person suffered from tuberculosis, they often went to a sanatorium for rest and treatment, and usually died there.

The German microbiologist Robert Koch isolated Mycobacterium tuberculosis in 1882. Koch was a rival of Frenchman Louis Pasteur, and both worked on treatments. I was surprised to learn that author Arthur Conan Doyle—famous for his Sherlock Holmes stories—also played a role in developing treatments for the disease. Tuberculosis remains latent in people until it’s activated by some other problem, such as malnutrition or an immune system disease like AIDS. Many infectious diseases attack children or the elderly, but TB is common in young adults. Physicist Richard Feynman’s 25-year-old wife Arline died of tuberculosis.

Green explains that 

in the decades after the discovery of Koch’s bacillus, small improvements emerged. Better diagnostics meant the disease could be identified and treated earlier, especially once chest X-rays emerged as a diagnostic tool.

The main impact of medical physics on tuberculosis is the development of radiography. X-rays weren’t even discovered until 1895, a decade after Koch isolated the tuberculosis bacterium. They arrived just in time. The often-decaying bacteria at the center of a tubercle accumulate calcium. For low x-ray energies, when the photoelectric effect is the dominant mechanism determining how x-ray photons interact with tissue, the cross section for x-ray attenuation varies as the fourth power of the atomic number. Because calcium has a relatively high atomic number (Z = 20) compared to hydrogen, carbon, nitrogen, and oxygen (Z = 1, 6, 7, 8, respectively), and because lung tissue in general has a low attenuation because of the low density of air, tubercles show up on a chest x-ray with a great deal of contrast.
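To get a feel for the numbers, here is a rough back-of-the-envelope sketch (illustrative only, not a calculation from IPMB or Green’s book) of the per-atom photoelectric cross sections implied by the Z⁴ scaling mentioned above:

```python
# Rough per-atom photoelectric cross-section comparison using the Z^4 scaling
# described above. Illustrative only: the true exponent is closer to 4-5, and
# a tissue's attenuation also depends on its composition and density.
elements = {"hydrogen": 1, "carbon": 6, "nitrogen": 7, "oxygen": 8, "calcium": 20}

for name, Z in elements.items():
    relative = (Z / 8) ** 4   # normalized to oxygen (Z = 8)
    print(f"{name:10s} Z = {Z:2d}   relative photoelectric cross section ~ {relative:9.4f}")

# Calcium comes out roughly 40 times oxygen which, together with the low density
# of air-filled lung, is why calcified tubercles stand out on a chest x-ray.
```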

The primary treatment for tuberculosis nowadays is antibiotics. The first one to be used for TB, streptomycin, was discovered in the 1940s. By the mid-1950s, several antibiotics made TB curable. I was born in 1960, just after the threat of tuberculosis subsided dramatically in the United States. I can still remember us kids getting those TB skin tests in our forearms, which we all had to have before entering school. But I don’t remember being very worried about TB as a child. The threat was over by then.

A vaccine exists for tuberculosis (the Bacillus Calmette–Guérin, or BCG, vaccine), but it’s mainly effective when given to children, and isn’t used widely in the United States, where tuberculosis is rare. In poorer countries, however, the vaccine saves millions of lives. Currently, mRNA vaccines are being developed against TB. This crucial advance is happening just as Robert F. Kennedy, Jr. is leading his crazy anti-science crusade against vaccines in general, and mRNA vaccines in particular. The vaccine alliance GAVI is hoping to introduce new vaccines for tuberculosis, and this effort will certainly be hurt by the United States defunding GAVI. The World Health Organization has an “end TB strategy” that, again, will be slowed by America’s withdrawal from the WHO and the dismantling of USAID. Green’s book was published in 2025, but I suspect it was written in 2024, before the Trump administration’s conspiracy-theory-laden effort to oppose vaccines and deny vaccine science got underway.

Many of these world-wide efforts to eliminate TB depend on access to new drugs that can overcome drug-resistant TB. Unfortunately, such drugs are expensive, and are difficult to afford or even obtain in poorer countries.

In the final pages of Everything Is Tuberculosis, Green writes eloquently

...TB [tuberculosis] in the twenty-first century is not really caused by a bacteria that we know how to kill. TB in the twenty-first century is really caused by those social determinants of health, which at their core are about human-built systems for extracting and allocating resources. The real cause of contemporary tuberculosis is, for lack of a better term, us...

We cannot address TB only with vaccines and medications. We cannot address it only with comprehensive STP [Search, Treat, Prevent] programs. We must also address the root cause of tuberculosis, which is injustice. In a world where everyone can eat, and access healthcare, and be treated humanely, tuberculosis has no chance. Ultimately, we are the cause.

We must also be the cure.

Green serves on the board of trustees for the global health non-profit Partners In Health. To anyone wanting to join the worldwide fight against tuberculosis, I suggest starting at https://www.pih.org.

 
John Green reads the first chapter of Everything Is Tuberculosis.

https://www.youtube.com/watch?v=CCbDdk8Wz-8



John Green discusses Everything Is Tuberculosis on the Daily Show

https://www.youtube.com/watch?v=2uppLo4lZRc


Friday, July 18, 2025

Millikan and the Magnetic Field of a Single Axon

“The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment” superimposed on Intermediate Physics for Medicine and Biology.

Forty years ago this month, I published one of my first scientific papers. “The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment” appeared in the July 1985 issue of the Biophysical Journal (Volume 48, Pages 93–109). I was a graduate student at Vanderbilt University at the time, and my coauthor was my PhD advisor John Wikswo. When discussing the paper below, I will write “I did this…” and “I thought that…” because I was the one in the lab doing the experiments, but of course it was really Wikswo and I together who wrote the paper and analyzed the results.

Selected Papers of Great American Physicists superimposed on the cover of Intermediate Physics for Medicine and Biology.
In those days I planned to be an experimentalist (like Wikswo). About the time I was writing “The Magnetic Field of a Single Axon,” I read “On the Elementary Electrical Charge and The Avogadro Constant” by Robert Millikan (Physical Review, Volume 11, Pages 109–143, 1913). It had been reprinted in the book Selected Papers of Great American Physicists, published by the American Institute of Physics.

If you are reading this blog, you’re probably familiar with Millikan’s oil drop experiment. He measured the speed of small droplets of oil suspended in air and placed in gravitational and electric fields, and was able to determine the charge of a single electron. I remember doing this experiment as an undergraduate physics major at the University of Kansas. I was particularly impressed by the way Millikan analyzed his experiment for possible systematic errors: He worried about deviations of the frictional force experienced by the drops from Stokes’ law and corrected for it; he analyzed the possible changes to the density of the oil in small drops; he checked that his 5300 volt battery was calibrated correctly and supplied a constant voltage; and he fussed about convection currents in the air influencing his results. He was especially concerned about his value of the viscosity of air, which he estimated was known to about one part in a thousand. Rooting out systematic errors is a hallmark of a good experimentalist. I wanted to be like Millikan, so I analyzed my magnetic field measurement for a variety of systematic errors.
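For readers who have never worked through the oil-drop arithmetic, here is a minimal sketch of the calculation; the drop velocities, plate spacing, and resulting charge below are made-up illustrative numbers, not Millikan’s data:

```python
import math

# Minimal sketch of the oil-drop charge calculation (illustrative numbers only,
# not Millikan's data). A drop falls at terminal velocity v_fall with the field
# off, then rises at v_rise with the field on.
eta = 1.8e-5        # viscosity of air (Pa s)
rho_oil = 920.0     # density of oil (kg/m^3)
rho_air = 1.2       # density of air (kg/m^3)
g = 9.8             # gravitational acceleration (m/s^2)
V = 5300.0          # plate voltage (V), the value quoted above
d = 0.016           # plate separation (m), illustrative
E = V / d           # electric field (V/m)

v_fall = 2.0e-4     # fall speed, field off (m/s), illustrative
v_rise = 3.0e-4     # rise speed, field on (m/s), illustrative

# Stokes' law during free fall gives the drop radius:
#   (4/3) pi a^3 (rho_oil - rho_air) g = 6 pi eta a v_fall
a = math.sqrt(9 * eta * v_fall / (2 * g * (rho_oil - rho_air)))

# With the field on, qE balances gravity plus drag at the rise speed:
#   qE = 6 pi eta a (v_fall + v_rise)
q = 6 * math.pi * eta * a * (v_fall + v_rise) / E

e = 1.602e-19
print(f"drop radius = {a:.2e} m")
print(f"drop charge = {q:.2e} C  (about {q / e:.1f} elementary charges)")
```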

The first type of error in my experiment was in the parameters used to calculate the magnetic field (so I could compare it to the measured field). I estimated that my largest source of error was in my measurement of the axon radius. This was done using a reticle in the dissecting microscope eyepiece. I only knew the radius to 10% accuracy, in part because I could see that it was not altogether uniform along the axon, and because I could not be sure the axon’s cross section was circular. It was my biggest source of error for calculating the magnitude of the magnetic field, because the field varied as the axon cross-sectional area, which is proportional to the radius squared.
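As a quick back-of-the-envelope note (not from the paper itself), if the calculated field scales with the cross-sectional area πa², the 10% radius uncertainty propagates like this:

```latex
% Error propagation, assuming the calculated field B scales with the
% cross-sectional area pi a^2 (a back-of-the-envelope note, not from the paper):
\[
  B \propto a^{2}
  \quad\Longrightarrow\quad
  \frac{\Delta B}{B} \;\approx\; 2\,\frac{\Delta a}{a}
  \;\approx\; 2 \times 10\% \;=\; 20\% .
\]
```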
Figure 1 from "The Magnetic Field of a Single Axon."
Figure 1 from "The Magnetic
Field of a Single Axon."

I measured the magnetic field by threading the axon through a wire-wound ferrite-core toroid (I’ve written about these toroid measurements before in this blog). I assumed the axon was at the center of the toroid, but this was not always the case. I performed calculations assuming the toroid averaged the magnetic field for an off-axis axon, and was able to set an upper limit on this error of about 2%. The magnetic field was not measured at a point but was averaged over the cross-sectional area of the ferrite core. More numerical analysis suggested that I could account for the core area to within about 1%. I was able to show that inductive effects from the toroid were utterly negligible. Finally, I assumed the high permeability ferrite did not affect the magnetic field distribution. This should be true if the axon is concentric with the toroid and aligned properly. I didn’t have a good way to estimate the size of this error.
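Here is a rough numerical sketch of the kind of cross-section averaging described above (an illustration, not the paper’s calculation; the current is arbitrary and the toroid dimensions are the approximate values quoted later in the post):

```python
import math

# Rough sketch: compare the field averaged over a toroid's rectangular core
# cross section with the field at the core's mean radius, for a thin axial
# current I. A long straight current gives B(r) = mu0 * I / (2 * pi * r),
# so only the radial average of 1/r matters.
mu0 = 4e-7 * math.pi
I = 1e-6                          # axial current (A), illustrative
r_inner, r_outer = 1e-3, 2e-3     # approximate core radii (m) quoted later in the post

# <1/r> over the core's radial extent (uniform core height):
inv_r_avg = math.log(r_outer / r_inner) / (r_outer - r_inner)
B_avg = mu0 * I / (2 * math.pi) * inv_r_avg

# Field at the mean radius, for comparison:
r_mean = 0.5 * (r_inner + r_outer)
B_center = mu0 * I / (2 * math.pi * r_mean)

print(f"B averaged over core = {B_avg:.3e} T")
print(f"B at mean radius     = {B_center:.3e} T")
print(f"difference           = {100 * (B_avg / B_center - 1):.1f} %")
# With these dimensions the raw averaging correction is about 4%, so it must be
# computed (or calibrated away) rather than ignored.
```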

Figure 2 from "The Magnetic Field of a Single Axon."
Figure 2 from "The Magnetic
Field of a Single Axon."
The toroid and axon were suspended in a saline bath (technically, Van Harreveld's solution), and this bath gave rise to other sources of error. I analyzed the magnetic field for different-sized baths (the default assumption was an unbounded bath), and for when the bath had a planar insulating boundary. I could also do the experiment of measuring the magnetic field as we raised and lowered the fluid level in the bath. The effect was negligible. I spent a lot of time worrying about the heterogeneity caused by the axon being embedded in a nerve bundle. I didn’t really know the conductivity of the surrounding nerve bundle, but for reasonable assumptions it didn’t seem to have much effect. Perhaps the biggest heterogeneity in our experiment was the “giant” (~1 mm inner radius, 2 mm outer radius, 1 mm thick) toroid, which was embedded in an insulating epoxy coating. This big chunk of epoxy certainly influenced the current density in the surrounding saline. I had to develop a new way of calculating the extracellular current entirely numerically to estimate this effect. The calculation was so complicated that Wikswo and I didn’t describe it in our paper, but instead cited another paper that we listed as “in preparation” but that in fact was never published. I concluded that the toroid did not have a big effect on my nerve axon measurements, although it seemed to be more important when I later studied strands of cardiac tissue.

Figure 3 of "The Magnetic Field of a Single Axon."
Figure 3 of "The Magnetic
Field of a Single Axon."
Other miscellaneous potential sources of error include capacitive effects in the saline and an uncertainty in the action potential conduction velocity (measured using a second toroid). I determined the transmembrane potential by taking the difference between the intracellular potential (measured by a glass microelectrode, see more here) and the extracellular potential (measured by a metal electrode). However, I could not position the two electrodes very accurately, and the extracellular potential varies considerably over small distances from the axon, so my resulting transmembrane potential certainly had a little bit of error. Measurement of the intracellular potential using the microelectrode was susceptible to capacitive coupling to the surrounding saline bath. I used a “frequency compensator” to supply “negative capacitance” and correct for this coupling, but I could not be sure the correction was accurate enough to avoid introducing any error. One of my goals was to calculate the magnetic field from the transmembrane potential, so any systematic errors in my voltage measurements were concerning. Finally, I worried about cell damage when I pushed the glass microelectrode into the axon. I could check this by inserting a second glass microelectrode nearby, and I didn’t see any significant effect, but such things are difficult to be sure about.

All of this analysis of systematic errors, and more, went into our rather long Biophysical Journal paper. It remains one of my favorite publications. I hope Millikan would have been proud. If you want to learn more, see Chapter 8 about Biomagnetism in Intermediate Physics for Medicine and Biology.

Forty years is a long time, but to this old man it seems like just yesterday.

Friday, July 11, 2025

David Cohen: The Father of MEG

David Cohen: The Father of MEG, by Gary Boas, superimposed on the cover of Intermediate Physics for Medicine and Biology.
Gary Boas recently published a short biography of David Cohen, known as the father of magnetoencephalography (MEG). The book begins with Cohen’s childhood in Winnipeg, Canada, including the influence of his uncle who introduced him to electronics and crystal radios. It then describes his college days and his graduate studies at the University of California, Berkeley. He was a professor at the University of Illinois Chicago, where he built his first magnetically shielded room in which he hoped to measure the magnetic fields of the body. Unfortunately, Cohen didn’t get tenure there, mainly for political reasons (and a bias against applied research related to biology and medicine). However, he found a new professorship at the Massachusetts Institute of Technology, where he built an even bigger shielded room. The climax of several years of work came in 1969, when he combined the SQUID magnetometer and his shielded room to make groundbreaking biomagnetic recordings. Boas describes the big event this way:
To address this problem [of noise in his copper-coil based magnetic field detector drowning out the signal], he [David Cohen] turned to James Zimmerman, who had invented a superconducting quantum interference device (SQUID) several years before… The introduction came by way of Ed Edelsack, a U.S. Navy funding officer… In a 2024 retrospective about his biomagnetism work in Boston, David described what happened next.

“Ed put me in touch with Jim, and it was arranged that Jim would bring one of his first SQUIDs to my lab at MIT, to look for biomagnetic signals in the shielded room. Jim arrived near the end of December, complete with SQUID, electronics, and nitrogen-shielded glass dewar. It took a few days to set up his system in the shielded room, and for Jim to tune the SQUID. Finally, we were ready to look at the easiest biomagnetic signal: the signal from the human heart, because it was large and regular. Jim stripped down to his shorts, and it was his heart that we first looked at.”

The results were nothing short of astounding; in terms of the signal measured, they were light years beyond anything David had seen with the copper-coil based detector. By combining the highly sensitive SQUID with the shielded room, which successfully eliminated outside magnetic disturbances, the two researchers were able to produce, for the first time, clear, unambiguous signals showing the magnetic fields produced by various organs of the human body. The implications of this were far reaching, with potential for a wide range of both basic science and clinical applications. David didn’t quite realize this at the time, but he and Zimmerman had just launched a new field of study, biomagnetism.

Having demonstrated the efficacy of the new approach… David switched off the lights in the lab and he and Zimmerman went out to celebrate. It was December 31, 1969. The thrill of possibility hung in the air as they joined other revelers to ring in a new decade—indeed, a new era.

“Biomagnetism: The First Sixty Years” superimposed on the cover of Intermediate Physics for Medicine and Biology.
The biography is an interesting read. I always enjoy stories illustrating how physicists become interested in biology and medicine. Russ Hobbie and I discuss the MEG in Chapter 8 of Intermediate Physics for Medicine and Biology. You can also learn more about Cohen's contributions in my review article “Biomagnetism: The First Sixty Years.”

Today Cohen is 97 years old and still active in the field of biomagnetism. The best thing about Boas’s biography is you can read it for free at https://meg.martinos.org/david-cohen-the-father-of-meg. Enjoy! 


The Birth of the MEG: A Brief History
 https://www.youtube.com/watch?v=HxQ8D4cPIHI
 
 

Friday, July 4, 2025

An Alternative to the Linear-Quadratic Model

In Section 16.9 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the linear-quadratic model.

The linear-quadratic model is often used to describe cell survival curves… We use it as a simplified model for DNA damage from ionizing radiation.
Suppose you plate cells in a culture and then expose them to x-rays. In the linear-quadratic model, the probability of cell survival, P, is

P = e^(−αD − βD²)

where D is the dose (in grays) and α and β are constants. At large doses, the quadratic term dominates and P falls as P = e^(−βD²). In some experiments, however, at large doses P falls exponentially. It turns out that there is another simple model—called the multi-target single-hit (MTSH) model—describing how P depends on D in survival curves,

P = 1 − (1 − e^(−α'D))^N

Let’s compare and contrast these curves. They both have two parameters: α and β for the linear-quadratic model, and α' and N for the MTSH model. Both give P = 1 if D is zero (as they must). They both fall off more slowly at small doses and then faster at large doses. However, while the linear-quadratic model falls off at large doses as e^(−βD²), the MTSH model falls off exponentially (linearly in a semilog plot).

If α'D is large, then the exponential is small. We can expand the polynomial using (1 − x)^N = 1 − Nx + …, keep only the first two terms, and then use some algebra to show that at large doses P = N e^(−α'D). If you extrapolate this large-dose behavior back to zero dose, you get P = N, which provides a simple way to determine N.
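Spelling out that bit of algebra (a short derivation that follows directly from the MTSH expression above):

```latex
% Large-dose limit of the MTSH model, P = 1 - (1 - e^{-\alpha' D})^N.
% When \alpha' D \gg 1, the exponential x = e^{-\alpha' D} is small, so
% (1 - x)^N \approx 1 - N x, and
\[
  P \;=\; 1 - \bigl(1 - e^{-\alpha' D}\bigr)^{N}
    \;\approx\; 1 - \bigl(1 - N e^{-\alpha' D}\bigr)
    \;=\; N\, e^{-\alpha' D}.
\]
```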

Below is a plot of both curves. The blue curve is the linear-quadratic model with α = 0.1 Gy⁻¹ and β = 0.1 Gy⁻². The gold curve is the MTSH model with α' = 1.2 Gy⁻¹ and N = 10. The dashed gold line is the extrapolation of the large-dose behavior back to zero dose to get N.
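For anyone who wants to reproduce the figure, here is a short plotting sketch (not the original figure’s code, but it uses the parameter values listed above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Reproduce the survival-curve comparison described above (a sketch).
D = np.linspace(0, 10, 500)          # dose (Gy)

alpha, beta = 0.1, 0.1               # linear-quadratic parameters (Gy^-1, Gy^-2)
alpha_p, N = 1.2, 10                 # MTSH parameters (Gy^-1, dimensionless)

P_lq = np.exp(-alpha * D - beta * D**2)          # linear-quadratic model
P_mtsh = 1 - (1 - np.exp(-alpha_p * D))**N       # multi-target single-hit model
P_extrap = N * np.exp(-alpha_p * D)              # large-dose extrapolation, intercept N

plt.semilogy(D, P_lq, label="linear-quadratic")
plt.semilogy(D, P_mtsh, label="MTSH")
plt.semilogy(D, P_extrap, "--", label="MTSH large-dose extrapolation")
plt.xlabel("dose D (Gy)")
plt.ylabel("surviving fraction P")
plt.ylim(1e-6, 20)
plt.legend()
plt.show()
```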



If the survival curve falls off exponentially at large doses, use the MTSH model. If it falls off quadratically at large doses, use the linear-quadratic model. Sometimes the data doesn’t fit either of these simple toy models. Moreover, often P is difficult to measure when it’s very small, so the large dose behavior is unclear. The two models are based on different assumptions, none of which may apply to your data. Choosing which model to use is not always easy. That’s what makes it so fun.

Friday, June 27, 2025

A Toy Model for Radiation Damage

Sometimes a toy model (a simple model that strips away all the detail to expose the underlying mechanisms more clearly) can be useful. Today I present a new homework problem that contains a toy model for understanding equivalent dose.
Section 16.12

Problem 34 ½. Consider two scenarios.
Scenario 1: N* particles are distributed evenly in a volume V*, so the concentration is C* = N*/V*.
Scenario 2: The volume V* is divided into two noninteracting regions of volume V1 and V2, where V* = V1 + V2. All N* particles are placed in V2. Therefore, the concentration of particles in V1 is C1 = 0, and the concentration in V2 is C2 = N*/V2 = (N*/V*)[(V1 + V2)/V2] = C*[(V1 + V2)/V2].

Now examine two cases for how, in a local region, the cellular damage, D, relates to the concentration C.

Case 1: Damage is proportional to the concentration. In other words, D = αC, where α is a constant of proportionality.
Case 2: Damage is proportional to the square of the concentration. In other words, D = βC², where β is another constant of proportionality.

For both cases and both scenarios (a total of four different situations), average the damage over the entire volume V* to get the average damage D̄. Find how D̄ is related to C*.

Stop! To get the most out of this blog post, stop reading and solve this homework problem yourself...


...Okay, so you solved it and now you’re back. Help me explain it to that fellow who didn’t bother to solve it for himself.

Case 1 (damage proportional to concentration)

Scenario 1: The concentration is uniform throughout V*. Averaging the local relation D = αC over V* simply gives D̄ = αC*. The average relationship is the same as the local relationship.

Scenario 2: Locally, D1 = 0 because all the particles are in V2, so C1 = 0. Moreover, D2 = αC2 = αC*[(V1 + V2)/V2]. Now, average the damage over the volume V*. You get D̄ = [V1/(V1 + V2)] (0) + [V2/(V1 + V2)] αC*[(V1 + V2)/V2]. But all those complicated factors cancel out, and you get simply D̄ = αC*. This is the same result as in scenario 1. The average damage is proportional to C*.
Case 2 (damage proportional to concentration squared)
Scenario 1: Again, the concentration is uniform throughout V*. So you just get D̄ = βC*². All that matters is the average concentration, C*.

Scenario 2: Locally, D1 = 0 and D2 = βC2² = βC*²[(V1 + V2)/V2]². Now average over the volume V*. You get D̄ = βC*²[(V1 + V2)/V2]. If V2 is much less than V1, then D̄ is much greater than βC*². It is as if the average damage is supercharged by the concentration being, well, concentrated. In this scenario, the average damage depends on both C* and the ratio V1/V2.
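A quick numerical check of these four results (a sketch with arbitrary illustrative values for V1, V2, and N*):

```python
# Numerical check of the toy model's four cases (illustrative values).
V1, V2 = 9.0, 1.0          # noninteracting sub-volumes (arbitrary units)
Vstar = V1 + V2
Nstar = 10.0               # total number of particles
Cstar = Nstar / Vstar      # average concentration C*
alpha, beta = 1.0, 1.0     # proportionality constants

# Scenario 1: uniform concentration; Scenario 2: everything in V2.
C_scen1 = [Cstar, Cstar]               # local concentrations in (V1, V2)
C_scen2 = [0.0, Nstar / V2]

def volume_average(values):
    """Average a locally defined quantity over the whole volume V*."""
    return (V1 * values[0] + V2 * values[1]) / Vstar

for label, C in [("scenario 1", C_scen1), ("scenario 2", C_scen2)]:
    D_case1 = volume_average([alpha * c for c in C])        # damage ~ C
    D_case2 = volume_average([beta * c**2 for c in C])      # damage ~ C^2
    print(f"{label}:  case 1 average = {D_case1:.2f}   case 2 average = {D_case2:.2f}")

# Expected: case 1 gives alpha*C* = 1.0 in both scenarios, while case 2 gives
# beta*C*^2 = 1.0 in scenario 1 but beta*C*^2*(V1 + V2)/V2 = 10.0 in scenario 2.
```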

This is all interesting, but what does it mean? It means that if you deposit energy locally, then the concentration (or "dose") alone may not tell the whole story. It depends on how the damage depends on the concentration. What is an example of when the damage would be proportional to the square of the concentration? Suppose we are talking about damage to DNA. The concentration might refer to the number of “breaks” in the DNA strand caused by radiation. Now suppose further that DNA has a repair mechanism that can fix breaks as long as they are far apart. That is, as long as they are isolated. But if you get two breaks near each other, then the repair mechanism is overwhelmed and doesn’t work. So, you need two “breaks” close together or you get no damage (in the jargon of radiobiology, you need double-strand breaks instead of just single-strand breaks). The concentration squared tells you something about having two events happen at the same place. You need a “break” to happen at some target spot along the DNA (proportional to the concentration) and then you need another “break” to happen nearby (again, proportional to the concentration), so the probability of getting two breaks near the target spot is proportional to the concentration squared.

Now let’s compare x-rays and alpha particles. Suppose you irradiate tissue so that the energy deposited in the tissue is the same for both. Then, the “dose” (energy per unit mass, analogous to C) is the same in both cases. But the alpha particles (scenario 2) deposit all their energy along a few thin tracks, whereas x-rays (scenario 1) deposit their energy all over the place randomly. You might say: well, for alpha particles the energy has a high density along the path, but everywhere else there is nothing, so on average those effects balance out. That’s true if damage is proportional to concentration (case 1 above). But if damage is proportional to concentration squared (case 2), it’s not true. The average damage caused by alpha particles is more extensive than for x-rays, even if the energy deposited into the tissue (the dose) is the same. The “equivalent dose” (another term for “damage”) is higher for the alpha particles than for the x-rays.

Intermediate Physics for Medicine and Biology.

In Section 16.12 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce the concept of equivalent dose. To find the equivalent dose, the dose is multiplied by a dimensionless weighting factor (in the jargon, called the “relative biological effectiveness”), which is one for x-rays and twenty for alpha particles. The equivalent dose even has its own unit, the sievert (as opposed to the gray, the unit of the dose). Both the sievert and the gray are abbreviations for joules per kilogram, but the sievert includes the weighting factor. Alpha particles just do more damage than x-rays for a given dose. This is because alpha particles deposit their energy in a smaller volume, and damage depends on DNA being hit twice close together. In other words, damage depends on the concentration squared. In our toy model, the weighting factor is (V1 + V2)/V2.

Our whole story about DNA repair mechanisms is reasonable and most likely true. But any other mechanism that results in the damage depending on the concentration (or dose) squared would give the same behavior. This result is not limited to DNA repair processes.

In general, case 1 (damage proportional to concentration) and case 2 (damage proportional to the square of the concentration) are not mutually exclusive. For instance, instead of DNA repair mechanisms being perfect for single-strand breaks and being useless for double-strand breaks, perhaps they are 90% effective for single-strand breaks and only 10% effective for double-strand breaks. In Section 16.9 of IPMB, Russ and I show that cell survival curves typically have two terms, one proportional to the dose and one proportional to the dose squared. At low doses the linear term dominates, but at high doses the quadratic one does.

The goal of toy models is to provide insight. I hope that even though the model in this new homework problem is oversimplified and artificial, it helps you get an intuitive feel for the equivalent dose.

Friday, June 20, 2025

A Toy Model for Straggling

One of the homework problems in Intermediate Physics for Medicine and Biology (Problem 31 in Chapter 16) introduces a toy model for the Bragg peak. I won’t review that entire problem, but students derive an equation for the stopping power, S (the energy per unit distance deposited in tissue by a high-energy ion), as a function of the depth x below the tissue surface:

S = S0/√(1 − x/R),

where S0 is the ion’s stopping power at the surface (x = 0) and R is the ion’s range. At a glance you can see how the Bragg peak arises—the denominator goes to zero at x = R so the stopping power goes to infinity. That, in fact, is why proton therapy for cancer is becoming so popular: Energy is deposited primarily at one spot well below the tissue surface where a tumor is located, with only a small dose to upstream healthy tissue. 

One topic that comes up when discussing the Bragg peak is straggling. The idea is that the range is not a single parameter. Instead, protons have a distribution of ranges. When preparing the 6th edition of Intermediate Physics for Medicine and Biology, I thought I would try to develop a toy model in a new homework problem to illustrate straggling. 

Section 16.10 

Problem 31 ½. Consider a beam of protons incident on a tissue. Assume the stopping power S for a single proton as a function of depth x below the tissue surface is

S = S0/√(1 − x/R)   for x < R   (and S = 0 beyond the range).


Furthermore assume that instead of all the protons having the same range R, the protons have a uniform distribution of ranges between R – δ/2 and R + δ/2, and no protons have a range outside this interval. Calculate the average stopping power by integrating S(x) over this distribution of ranges. 

This calculation is a little more challenging than I had expected. We have to consider three possibilities for x:

x < R − δ/2

In this case, all of the protons contribute so the average stopping power is

We need to solve the integral 

First, let

With a little analysis, you can show that

So the integral becomes

This new integral I can look up in my integral table

Finally, after a bit of algebra, I get

Well, that was a lot of work and the result is not very pretty. And we are not even done yet! We still have the other two cases. 

R − δ/2 < x < R + δ/2

In this case, if the range is less than x there is no contribution to the stopping power, but if the range is greater than x there is. So, we must solve the integral

I’m not going to go through all those calculations again (I’ll leave it to you, dear reader, to check). The result is 

x > R + δ/2

This is the easy case. None of the protons make it to x, so the stopping power is zero. 
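For readers retracing the calculation, this is the general setup under the toy-model form assumed above, S = S0/√(1 − x/r) for a proton of range r (treating S0 as the same for every proton); the antiderivative is a standard table integral:

```latex
% Average stopping power for a uniform spread of ranges, assuming a proton of
% range r contributes S_0 (1 - x/r)^{-1/2} for x < r and zero for x > r.
% For x > R + delta/2 the average is zero.
\[
  \bar{S}(x) \;=\; \frac{S_0}{\delta}
  \int_{\max\left(x,\;R-\delta/2\right)}^{\,R+\delta/2}
  \sqrt{\frac{r}{\,r-x\,}}\; dr ,
  \qquad x < R + \tfrac{\delta}{2},
\]
% using the antiderivative
\[
  \int \sqrt{\frac{r}{\,r-x\,}}\; dr
  \;=\; \sqrt{r\,(r-x)} \;+\; x \ln\!\bigl(\sqrt{r} + \sqrt{r-x}\bigr) \;+\; C .
\]
```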

Well, I can’t look at these functions and tell what the plot will look like. All I can do is ask Mr. Mathematica to make the plot (he’s much smarter than I am). Here’s what he said: 


The peak of the “pure” (single value for the range) curve (the red one) goes to infinity at x = R, and is zero for any x greater than R. As you begin averaging, you start getting some stopping power past the original range, out to R + δ/2. To me the most interesting thing is that for x = R − δ/2, the stopping power is larger than for the pure case. The curves all overlap for x > R + δ/2 (of course, they are all zero there), and for fairly small values of x (in these cases, about x < 0.5) the curves are all nearly equal (indistinguishable in the plot). Even for a small value of δ (in this case, a spread of ranges equal to one tenth of the pure range), the peak of the stopping power curve is suppressed.
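For readers without Mathematica, here is a short numerical sketch that makes this kind of plot; it assumes the toy-model form S = S0/√(1 − x/r), with S0 the same for every proton, and averages it numerically over the uniform spread of ranges:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy-model stopping power for a single proton of range r (assumed form):
# S(x) = S0 / sqrt(1 - x/r) for x < r, and 0 for x >= r.
def S_single(x, r, S0=1.0):
    with np.errstate(divide="ignore", invalid="ignore"):
        s = S0 / np.sqrt(1.0 - x / r)
    return np.where(x < r, s, 0.0)

def S_straggled(x, R=1.0, delta=0.1, S0=1.0, n=2000):
    """Average S over ranges uniformly distributed in [R - delta/2, R + delta/2]."""
    ranges = np.linspace(R - delta / 2, R + delta / 2, n)
    return np.mean([S_single(x, r, S0) for r in ranges], axis=0)

x = np.linspace(0.0, 1.2, 1200)
plt.plot(x, S_single(x, r=1.0), "r", label="pure (single range R)")
for delta in (0.05, 0.1, 0.2):
    plt.plot(x, S_straggled(x, delta=delta), label=f"straggled, delta = {delta}")
plt.ylim(0, 8)
plt.xlabel("depth x (in units of R)")
plt.ylabel("stopping power (in units of S0)")
plt.legend()
plt.show()
```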

The curves for straggling that you see in most textbooks are much smoother, but I suspect that’s because they assume a smoother distribution of range values, such as a normal distribution. In this example, I wanted something simple enough to get an analytical solution, so I took a uniform distribution over a width δ.

Will this new homework problem make it into the 6th edition? I’m not sure. It’s definitely a candidate. However, the value of toy models is that they illustrate the physical phenomenon and describe it in simple equations. I found the equations in this example to be complicated and not illuminating. There is still some value, but if you are not gaining a lot of insight from your toy model, it may not be worth doing. I’ll leave the decision of including it in the 6th edition to my new coauthor, Gene Surdutovich. After all, he’s the expert in the interaction of ions with tissue.

Friday, June 13, 2025

Photobiomodulation

Harvest with her copy of Intermediate Physics for Medicine and Biology.
My Treeing Walker Coonhound Harvest is getting older and having some trouble with arthritis. The vet says she’s showing signs of hip dysplasia, but it’s not too severe yet. I want to nip this problem in the bud, so we have started a treatment regimen that includes oral supplements, pain medication, moderate exercise, weight control, and massage. We’re also trying photobiomodulation, sometimes called low-level laser therapy or cold laser therapy.

Russ Hobbie and I don’t mention photobiomodulation in Intermediate Physics for Medicine and Biology. Is it for real? That’s what I want to discuss in today’s blog post. I’ll give you a hint: my answer will be “maybe.”

Harvest getting photobiomodulation treatment.
We bought a device called Lumasoothe 2 Light Therapy for Pets (lumasoothe.com). I use it in its IR Deep Treatment Mode, which shines three wavelengths of light—infrared (940 nm), red (650 nm), and green (520 nm)—from an array of light emitting diodes. I doubt the green light can penetrate to the hip, but red and especially infrared are not attenuated as much. In IPMB, Russ and I talk about how red light is highly scattered, and you can see that by noticing how the red spreads out to the sides of the applicator (kind of like when you hold a flashlight up to your mouth and your cheeks glow red). The light is delivered in pulses that come at a frequency of about 2.5 Hz (I used the metronome that sits atop my piano to estimate the frequency). I can’t imagine any advantage to pulsing the light, and suspect it’s done simply for the visual effect. I apply the light to Harvest’s hips, about 15 minutes on each side.

Mechanisms and Applications of the Anti-Inflammatory Effects of Photobiomodulation.
When we first purchased the device, I assumed it worked by heating tissue. But researchers and device manufacturers insist the mechanism is not thermal. So how does it work? To explore that and other issues, I searched the literature, and found a particularly clear open-access review article by Michael Hamblin, then with the Harvard-MIT Division of Health Sciences and Technology: “Mechanisms and applications of the anti-inflammatory effects of photobiomodulation” (AIMS Biophysics, Volume 4, Pages 337–361, 2017). Hamblin has a long history of research on photodynamic therapy (analyzed in Chapter 14 of IPMB), and his more recent work has focused on photobiomodulation.

Hamblin begins (with references removed),
Photobiomodulation (PBM) was discovered almost 50 years ago by Endre Mester in Hungary. For most of this time PBM was known as “low-level laser therapy” as ruby laser (694 nm) and HeNe lasers (633 nm) were the first devices used. Recently a consensus decision was taken to use the terminology “PBM” since the term “low-level” was very subjective, and it is now known that actual lasers are not required, as non-coherent light-emitting diodes (LEDs) work equally well. For much of this time the mechanism of action of PBM was unclear, but in recent years much progress has been made in elucidating chromophores and signaling pathways.

Any time you are talking about a therapy, the dose is crucial. According to a study by medcovet, the output of Lumasoothe is 0.225 J/cm² per minute (it’s advertised at 6.4). I don’t know which of these values to use, so I’ll just pick something in the middle: 1 J/cm² per minute. If we divide by 60 seconds, this converts to about 0.017 W/cm². The intensity of sunlight that reaches the earth’s surface is about 0.1 W/cm², so the device puts out less than the intensity of sunlight (at noon, at the equator, with no clouds). The advertised intensity would be similar to the intensity of sunlight. Of course, sunlight includes a wide band of frequencies, while the Lumasoothe emits just three.

There seems to be an optimum dose, as is often found in toxicology. Hamblin explains

The “biphasic dose response” describes a situation in which there is an optimum value of the “dose” of PBM most often defined by the energy density (J/cm²). It has been consistently found that when the dose of PBM is increased a maximum response is reached at some value, and if the dose in increased beyond that maximal value, the response diminishes, disappears and it is even possible that negative or inhibitory effects are produced at very high fluences.
Joules per square centimeter per minute may not be the best unit to assess heating effects of the Lumasoothe. Let’s assume that 0.017 W/cm² of light penetrates into the tissue about one centimeter (a guess). This means that the device dumps 0.017 watts into a cubic centimeter of tissue. That volume of tissue has a density of about that of water: 1 g/cm³. So the specific absorption rate should be about 0.017 W/g, or 17 W/kg. That’s not negligible. A person’s metabolism generates only about 1.5 W/kg. Diathermy to heat tissues uses about 20 W/kg. I don’t think we can rule out some heating using this device. (However, I shined it on my forearm for about two minutes and didn’t feel any obvious warming.)
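Here is that back-of-the-envelope estimate written out as a few lines of arithmetic (a sketch using the same guesses as above):

```python
# Back-of-the-envelope SAR estimate for the light-therapy device (same guesses as above).
fluence_rate = 1.0 / 60.0      # assumed output: 1 J/cm^2 per minute -> W/cm^2
penetration_depth = 1.0        # guessed penetration depth (cm)
density = 1.0                  # tissue density, roughly that of water (g/cm^3)

power_per_volume = fluence_rate / penetration_depth          # W/cm^3
sar = power_per_volume / density * 1000.0                    # W/kg (1000 g per kg)

print(f"intensity            ~ {fluence_rate:.3f} W/cm^2")
print(f"specific absorption  ~ {sar:.0f} W/kg")
print("compare: resting metabolism ~ 1.5 W/kg, diathermy ~ 20 W/kg")
```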

Hamblin believes there are non-thermal mechanisms involved.
Cytochrome c oxidase (CCO) is unit IV in the mitochondrial electron transport chain. It transfers one electron (from each of four cytochrome c molecules), to a single oxygen molecule, producing two molecules of water. At the same time the four protons required, are translocated across the mitochondrial membrane, producing a proton gradient that the ATP synthase enzyme needs to synthesize ATP. CCO has two heme centers (a and a3) and two copper centers (CuA and CuB). Each of these metal centers can exist in an oxidized or a reduced state, and these have different absorption spectra, meaning CCO can absorb light well into the NIR [near infrared] region (up to 950 nm). Tiina Karu from Russia was the first to suggest that the action spectrum of PBM effects matched the absorption spectrum of CCO, and this observation was confirmed by Wong-Riley et al in Wisconsin. The assumption that CCO is a main target of PBM also explains the wide use of red/NIR wavelengths as these longer wavelengths have much better tissue penetration than say blue or green light which are better absorbed by hemoglobin. The most popular theory to explain exactly why photon absorption by CCO could led [sic] to increase of the enzyme activity, increased oxygen consumption, and increased ATP production is based on photodissociation of inhibitory nitric oxide (NO). Since NO is non-covalently bound to the heme and Cu centers and competitively blocks oxygen at a ratio of 1:10, a relatively low energy photon can kick out the NO and allow a lot of respiration to take place.
That’s a considerable amount of biochemistry, which I’m not an expert in. I’ll assume Hamblin knows a lot more about it than I do. I worry, however, when he writes “the assumption that…” and “the most popular theory…” It makes me wonder how well this mechanism is established. He goes on to suggest other mechanisms, such as the production of reactive oxygen species and a reduction in inflammation.

Hamblin concludes
The clinical applications of PBM have been increasing apace in recent years. The recent adoption of inexpensive large area LED arrays, that have replaced costly, small area laser beams with a risk of eye damage, has accelerated this increase in popularity. Advances in understanding of PBM mechanisms of action at a molecular and cellular level, have provided a scientific rationale for its use for multiple diseases. Many patients have become disillusioned with traditional pharmaceutical approaches to a range of chronic conditions, with their accompanying distressing side-effects and have turned to complementary and alternative medicine for more natural remedies. PBM has an almost complete lack of reported adverse effects, provided the parameters are understood at least at a basic level. The remarkable range of medical benefits provided by PBM, has led some to suggest that it may be “too good to be true”. However one of the most general benefits of PBM that has recently emerged, is its pronounced anti-inflammatory effects. While the exact cellular signaling pathways responsible for this anti-inflammatory action are not yet completely understood, it is becoming clear that both local and systemic mechanisms are operating. The local reduction of edema, and reductions in markers of oxidative stress and pro-inflammatory cytokines are well established. However there also appears to be a systemic effect whereby light delivered to the body, can positively benefit distant tissues and organs.
I have to admit that Hamblin makes a strong case. But there is another side to the question. Hamblin himself uses that worrisome phrase “complementary and alternative medicine.” I have to wonder about thermal effects. We know that temperature can influence healing (that’s why people often use a heating pad). If photobiomodulation causes even a little heating, this might explain some of its effect.

I’ve talked a lot in this blog about websites or groups that debunk alternative medicine. Stephen Barrett of quackwatch looked at Low Level Laser Therapy in 2018, and concluded that “At this writing, the bottom line appears to be that LLLT devices may bring about temporary relief of some types of pain, but there’s no reason to believe that they will influence the course of any ailment or are more effective than standard forms of heat delivery.” Mark Crislip writing for Science Based Medicine in 2012 concluded “I suspect that time and careful studies on the efficacy of low level laser will have the same results as the last decade of acupuncture studies: there is no there there.” Jonathan Jarry wrote about “The Hype Around Photobiomodulation,” saying “That is not to say that all of PBM’s applications are hogwash or that future research will never produce more effective applications of it. But given biomedical research’s modest success rate these days and the ease of coming up with a molecular pathway that fits our wishes, we’re going to need more than mice studies and a plausible mechanism of action to see photobiomodulation in a more favourable light. A healthy skepticism is needed here, especially when it comes to claims of red light improving dementia.” 

What about clinical trials? An interesting one titled “Photobiomodulation Therapy is Not Better Than Placebo in Patients with Chronic Nonspecific Low Back Pain: A Randomised Placebo-Controlled Trial” was published in the journal PAIN in 2021 (Volume 162, Pages 1612–1620). It concluded “Photobiomodulation therapy was not better than placebo to reduce pain and disability in patients with chronic nonspecific LBP [low back pain].” The importance of a randomized, controlled study with an effective placebo is crucial. We need more of these types of studies.

Are Electromagnetic Fields Making Me Ill?

So, what’s the bottom line? In my book Are Electromagnetic Fields Making Me Ill?, I divided different medical devices, procedures, and hypotheses into three categories: Firmly Established, Questionable, and Improbable (basically: yes, maybe, and no). I would put photobiomodulation therapy in the maybe category, along with transcutaneous electrical nerve stimulation, bone healing using electromagnetic fields, and transcranial direct current stimulation. As a scientist, I’m skeptical about photobiomodulation therapy. But as a dog lover, I’m using it every day to try and help Harvest’s hip dysplasia. This probably says more about how much I love Harvest than about my confidence in the technique. My advice is to not get your hopes up, and to follow your vet’s advice about traditional and better-established treatments. The good news: I don’t see much potential for side effects. Is it worth the money to purchase the device? My wife and I were willing to take a moderately expensive bet on a low-probability outcome for Harvest’s sake, because she’s the goodest gurl.

Mechanisms & History of Photobiomodulation with Dr. Michael Hamblin

https://www.youtube.com/watch?v=udnRpZ8l1_0

Friday, June 6, 2025

Mechanisms of the FLASH Effect: Current Insights and Advances

I’ve written about FLASH radiotherapy previously in this blog (here and here). FLASH is when you apply radiation in a single brief pulse rather than slowly or in several fractions. It’s one of the most important developments in radiation therapy in the last decade, but no one is sure why FLASH works better than conventional methods. (Skeptics might say no one is sure if FLASH works better than conventional methods, but I’ll assume in this post that it’s better.) FLASH is too new for Russ Hobbie and me to mention it in the 5th edition of Intermediate Physics for Medicine and Biology, but Gene Surdutovich and I will add a discussion of it to the 6th edition.

The article "Mechanisms of the FLASH Effect: Current Insights and Advances," by Giulia Rosini, Esther Ciarrocchi, and Beatrice D’Orse, superimposed on Intermediate Physics for Medicine and Biology.
Mechanisms of the FLASH Effect:
Current Insights and Advances,”
by Giulia Rosini, Esther Ciarrocchi,
and Beatrice D’Orse
I recently read a fascinating mini review in Frontiers in Cell and Developmental Biology by Giulia Rosini, Esther Ciarrocchi, and Beatrice D’Orsi of the Institute of Neuroscience in Pisa, Italy. They’re trying to address that why question. Their article, titled “Mechanisms of the FLASH Effect: Current Insights and Advances,” is well worth reading. (Some scientific leaders in the United States claim that modern medicine focuses on treating symptoms rather than addressing underlying causes. This article shows that scientists do just the opposite: They search for basic mechanisms. Bravo! At least in Italy science is still alive.)

Below I reproduce their introduction (references removed and Wikipedia links added). If you want more detail, I suggest reading the review in its entirety (it’s open access, so you don’t need a subscription to the journal).
Radiotherapy is one of the most effective treatments for cancer, used in more than 60% of cancer patients during their oncological care to eliminate/reduce the size of the tumor. Currently, conventional radiotherapy (CONV-RT) remains the standard in clinical practice but has limitations, including the risk of damage to surrounding healthy tissues. A recent innovation, FLASH radiotherapy (FLASH-RT), employs ultra-high-dose rate (UHDR) irradiation to selectively spare healthy tissue while maintaining its therapeutic effect on tumors. However, the precise radiobiological mechanism behind this protective “FLASH effect” remains unclear. To understand the FLASH effect, several hypotheses have been proposed, focusing on the differential responses of normal and tumor tissues to UHDR irradiation: (i) Oxygen depletion: FLASH-RT may rapidly deplete oxygen in normal tissues, creating transient hypoxia that reduces oxygen-dependent DNA damage; (ii) Radical-radical interaction: The rapid production of reactive oxygen species (ROS) during UHDR irradiation may lead to radical recombination, preventing oxidative damage to healthy tissues; (iii) Mitochondrial preservation: FLASH-RT appears to preserve mitochondrial integrity and ATP production in normal tissues, minimizing oxidative stress. Conversely, FLASH-RT may promote oxidative damage and apoptosis in tumor cells, potentially improving therapeutic efficacy; (iv) DNA damage and repair: The differential response of normal and tumor tissues may result from variations in DNA damage formation and repair. Normal cells rely on highly conserved repair mechanisms, while tumor cells often exhibit dysregulated repair pathways; and (v) Immune response: FLASH-RT may better preserve circulating immune cells and reduce inflammation in normal tissues compared to CONV-RT. In this mini-review, we summarize the current insights into the cellular mechanisms underlying the FLASH effect. Preclinical studies in animal models have demonstrated the FLASH effect, and early-phase clinical trials are now underway to evaluate its safety and efficacy in human patients. While FLASH-RT holds great promise for improving the balance between tumor control and normal tissue sparing in cancer treatment, continued research is necessary to fully elucidate its mechanisms, optimize its clinical application, and minimize potential side effects. Understanding these mechanisms will pave the way for safer and more effective radiotherapy strategies.

I’ll take advantage of this paper being open access to reproduce Rosini et al.’s Figure 1, which is a beautiful summary of their article. 

Figure 1 from “Mechanisms of the FLASH Effect: Current Insights and Advances,” by Giulia Rosini, Esther Ciarrocchi, and Beatrice D’Orsi.

If I were a betting man, I’d put my money on the radical-radical interaction mechanism. But don’t trust me, because I’m not an expert in this field. Read this well-written review yourself and draw your own conclusion.

I’ll end by giving Rosini, Ciarrocchi, and D’Orsi the final word. Their conclusion is quoted below.

FLASH-RT has emerged as a promising alternative to CONV-RT, offering potential advantages in reducing normal tissues toxicity while maintaining or even potentially enhancing tumor control. However, the underlying mechanisms remain incompletely understood. Oxygen depletion, radical recombination, mitochondrial preservation, DNA repair and immune response modulation, have all been proposed as contributing factors… but no single mechanism fully explains the FLASH effect. This further highlights the complex interplay between physical, biological, and immunological factors that might behind the FLASH effect. Importantly, combining FLASH-RT with adjuvant therapies, such as radioprotectors, immunotherapy or nanotechnology, could synergize with these mechanisms to further widen the therapeutic window. FLASH-RT’s ability to reduce inflammation, preserve immune function, and minimize damage to healthy tissues contrasts sharply with CONV-RT, which often induces significant toxicity. However, despite promising preclinical findings, critical questions remain regarding the precise mechanisms driving the FLASH effect and its clinical applicability. Continued research is essential to fully elucidate these mechanisms, optimize FLASH-RT delivery, and translate its benefits into safe and effective clinical applications. By addressing these challenges, FLASH-RT has the potential to significantly improve therapeutic outcomes for cancer patients, offering a paradigm shift in radiation oncology.