Friday, August 15, 2025

Lutetium-177

When preparing the 6th edition of Intermediate Physics for Medicine and Biology, I like to scan the literature for new medical advances. While revising the chapter on nuclear medicine, I found some fascinating information about an isotope that was not mentioned in the 5th edition of IPMB: lutetium-177.

First, the physics. Lutetium (pronounced loo-tee-shee-uhm) is element 71 in the periodic table. The primary mechanism of decay of 177Lu is emitting a beta particle (an electron), transmuting into a stable isotope of hafnium, 177Hf. The maximum energy of this electron is about 500 keV. Two other possibilities (each happening in about one out of every ten decays) are beta decay of 177Lu to one of two excited levels of 177Hf, followed by gamma decay. The two most common gamma photons have energies of 113 and 208 keV. Lutetium-177 produces few internal conversion or Auger electrons. The average energy of all the emitted electrons is about 150 keV; such electrons have a range in tissue of about 0.25 mm. The half-life of 177Lu is roughly a week.
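Because the half-life sets the treatment timeline, here is a minimal Python sketch of the decay (my own illustration; I'm assuming a half-life of 6.6 days, consistent with "roughly a week"):

```python
import math

half_life = 6.6                  # days (assumed value for 177Lu)
lam = math.log(2) / half_life    # decay constant, per day

# Fraction of the original 177Lu activity remaining after t days
for t in (1, 7, 14, 28):
    print(f"after {t:2d} days: {math.exp(-lam * t):.2f} of the activity remains")
```

After a week about half the activity remains; after a month, only about five percent.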

Next, the biology and medicine. Lutetium-177 can be used for imaging (using the gamma rays) or therapy (using the electrons). While the dose arising from all the electrons does not make this isotope ideal for pure imaging studies (technetium-99m might be a better choice), the gammas do provide a way to monitor 177Lu during therapy (in this way it is similar to iodine-131, used in thyroid cancer therapy and imaging). Such a combined function allows the physician to do “theranostics” (a combination of therapy and diagnostics), a term I don’t care for, but it is what it is. 177Lu can be bound to other molecules to improve its ability to target a tumor. For instance, it is sometimes attached to a molecule that binds specifically to prostate-specific membrane antigen (PSMA). The PSMA molecule is over-expressed in a tumor, so this allows the 177Lu to target prostate tumor cells. One advantage of using 177Lu in this way—rather than, say, using radiotherapy with x-rays directed at the prostate—is that the 177Lu will seek out and irradiate any metastasizing cancer cells as well as the main tumor. Clinical trials show that it can prolong the life of those suffering from prostate cancer.


 Lutetium-177: PSMA Guided Treatment

https://www.youtube.com/watch?v=Th42pFOx0Fs

Friday, August 8, 2025

Push Back Hard

Last November, right after the Presidential election, I wrote a blog post about trusted information on public health. In that post, I featured the science communication efforts by Katelyn Jetelina (Your Local Epidemiologist) and Andrea Love (Immunologic). I didn’t realize at the time just how much I would come to rely on these two science advocates for trustworthy information, especially related to vaccines.

Today, I recommend several more science communicators. The first is Skeptical Science. That website focuses primarily on climate science. The current Republican administration has denied and mocked the very idea of climate change, describing it as a “hoax.” Skeptical Science has a simple mission: “debunk climate misinformation.” This is extraordinarily important, as climate change may be the most important issue of our time. Check out their website www.skepticalscience.com, and follow them on Facebook. I just signed up for their Cranky Uncle app on my phone. I learned about Skeptical Science from my Climate Reality mentor, John Forslin. For those more interested in doing rather than reading and listening, I recommend The Climate Reality Project (Al Gore’s group). Take their training. I did. Oh, and don’t forget Katharine Hayhoe’s website https://www.katharinehayhoe.com.

I recently learned about the Center for Infectious Disease Research and Policy that operates out of the University of Minnesota (Russ Hobbie, the main author of Intermediate Physics for Medicine and Biology, worked there for most of his career). I can’t tell you too much about it, except that its director is Michael Osterholm, a leading and widely respected vaccine expert and advocate.

Want to know more about science funding, especially to the National Institutes of Health? Check out Unbreaking. They’re documenting all the bad stuff happening to science these days. I learned about Unbreaking from Liz Neeley's weekly newsletter Meeting the Moment. Liz is married to Ed Yong, who I have written about before.

My next recommendation is Angela Rasmussen, a virologist who publishes at the site Rasmussen Retorts on Substack. What I like about Rasmussen is that she tells it like it is, and doesn’t worry if her salty language offends anyone. I must confess, as I experience more and more of what I call the Republican War on Science, I get angrier and angrier. Rasmussen’s retorts reflect my rage. She writes, “Oh, also, I swear sometimes. It’s not the most professional behavior but I believe in calling things what they are and sometimes nothing besides ‘asshole’ is accurate.” Give ’em hell, Angie! Here are the concluding two paragraphs of her August 5 post:

There’s always a ton of talk about how public health and science have lost trust. A lot of people like to tell me that it’s our fault. Scientists didn’t show enough humility or acknowledge uncertainty during the COVID pandemic. We were wrong about masks or vaccines or variants or whatever. We didn’t communicate clearly. We overclaimed and underdelivered. I reject these arguments.

The public didn’t lose trust in science because experts are wrong sometimes, and are imperfect human beings who make mistakes. They lost trust because people like [Robert F. Kennedy, Jr.] constantly lied about science. He is constantly lying still. He’s eliminating experts so that he and his functionaries on ACIP [The CDC’s Advisory Committee on Immunization Practices] will be able to continue lying without any inconvenient pushback. We need to recognize this and push back hard.

What am I doing to push back hard? Regular readers of this blog may recall my post from this April in which I imagined what Bob Park’s newsletter What’s New would look like today. Well, I’ve made that a weekly thing. You can find the updates published on my Medium account (https://medium.com/@bradroth). I’ll link a few of them below.
https://medium.com/@bradroth/bob-parks-what-s-new-august-1-2025-5cf2c5bfc598

https://medium.com/@bradroth/bob-parks-what-s-new-july-25-2025-bc10a841cc28

https://medium.com/@bradroth/bob-parks-what-s-new-july-18-2025-eca27626c79b

https://medium.com/@bradroth/bob-parks-what-s-new-july-11-2025-68c5943218d7

You will also find these IPMB blog posts republished there, plus a few other rants. When I started writing my updated version of What’s New, I (ha, ha)… I thought (ha, ha, ha!)... I thought that I might run out of things to talk about. That hasn’t been a problem. But writing a weekly newsletter in addition to my weekly IPMB blog posts takes time, and it makes me appreciate all the more the heroic efforts of Katelyn, Andrea, Liz, and Angela. I hope they all know how much we appreciate their effort.

Is there anything else on the horizon? The book Science Under Siege, by Michael Mann and Peter Hotez, is out next month. As soon as I can get my hands on a copy and read it, I will post a review on this blog. In the meantime, I’ll keep my powder dry, waiting until RFK Jr starts in on microwave health effects (Y’all know it’s coming). Now that’s physics applied to medicine and biology, right up my alley!

“Don’t Choose Extinction.” This is one of John Forslin’s favorite videos. Enjoy!

https://www.youtube.com/watch?v=3DOcQRl9ASc

Friday, August 1, 2025

The History of the Linear No-Threshold Model and Recommendations for a Path Forward

As Gene Surdutovich and I were preparing the 6th edition of Intermediate Physics for Medicine and Biology, we decided to update the discussion about the linear no-threshold model of radiation risk. In the 5th edition of IPMB, Russ Hobbie and I had written
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear no-threshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
In our update, we added a citation to a paper by John Cardarelli, Barbara Hamrick, Dan Sowers, and Brett Burk titled “The History of the Linear No-Threshold Model and Recommendations for a Path Forward” (Health Physics, Volume 124, Pages 131–135, 2022). When I looked over the paper, I found that there is a video series accompanying it. I said to myself: “Brad, that sounds like just the sort of thing readers of your blog might enjoy.” I found all the videos on the Health Physics Society’s YouTube channel, and I have added links to them below.

Wow! This is not a dry, technical discussion. It is IPMB meets 60 Minutes. This is a hard-hitting investigation into scientific error and even scientific fraud. It’s amazing, fascinating, and staggering.

John Cardarelli, the president of the Health Physics Society when the videos were filmed, acts as the host, introducing and concluding each of the 22 episodes. The heart of the video series is Barbara Hamrick, past president of the Health Physics Society, interviewing Edward Calabrese, a leading toxicologist and a champion of the hormesis model (low doses of radiation are beneficial).

Calabrese claims that our use of the linear no-threshold model is based on “severe scientific, ethical, and policy problems.” He reviews the history of the LNT model, starting with the work of the Nobel Prize winner Hermann Muller on the genetics of fruit flies. He reviews the evidence to support his contention that Muller and other scientists were biased in favor of the LNT model, and sometimes carried that bias to extreme lengths. At first I said to myself, “this is interesting, but it’s all ancient history.” But as the video series progressed, it approached closer and closer to the present, and I began to appreciate how these early studies impact our current safety and regulatory standards.

I watched every minute of this gripping tale. (OK, I admit I watched it at 2x playback speed, and I skipped Cardarelli’s introductions and conclusions after the first couple of videos; there is only so much time in a day.) Anyone interested in the linear no-threshold model needs to watch this. I have to confess, I can offer no independent confirmation of Calabrese’s claims. I’m not a toxicologist, and my closest approach to radiobiology is being a coauthor on IPMB. Still, if Calabrese’s claims are even half true, then the LNT assumption is based on weak data, to put it mildly.

Watch these videos. Maybe you’ll agree with them and maybe not, but I bet you’ll enjoy them. You may be surprised and even astounded by them.



https://www.youtube.com/watch?v=G5FjhgcnMjU

Episode 1: Who Is Dr. Edward Calabrese?


https://www.youtube.com/watch?v=slIylnAZsDY

Episode 2: LNT Beginnings—Extrapolation From ~100,000,000 x Background?


https://www.youtube.com/watch?v=4UxqcscXHWE

Episode 3: Muller Creates a Revolution


https://www.youtube.com/watch?v=E2WCE30_o3s

Episode 4: Muller: How Ambition Affects Science


https://www.youtube.com/watch?v=LP_eIQDa6rY

Episode 5: The Big Challenge


https://www.youtube.com/watch?v=PMCOejiERbQ

Episode 6: The Birth of the LNT Single-Hit Theory


https://www.youtube.com/watch?v=srDKPtbiLhI

Episode 7: Pursuit to Be the First to Discover Gene Mutation


https://www.youtube.com/watch?v=7hTfVMDPrcY

Episode 8: "Fly in the Ointment"


https://www.youtube.com/watch?v=34nNwqwIcbU

Episode 9: Why the First Human Risk Assessment Was Based on Flawed Fruit-Fly Research


https://www.youtube.com/watch?v=D2Tmvc8awZQ

Episode 10: The Birth of LNT Activism


https://www.youtube.com/watch?v=7f99cSK0lQc

Episode 11: Creation of the Biological Effects of Atomic Radiation (BEAR) I Committee


https://www.youtube.com/watch?v=JaDfua6mRIw

Episode 12: Was There Scientific Misconduct Among the BEAR Genetics Committee Members?


https://www.youtube.com/watch?v=GMhPFpeqjG8

Episode 13: Is Lower Always Better?


https://www.youtube.com/watch?v=i5ixKEHTFKE

Episode 14: Should the Genetics Panel Science Paper Be Retracted?


https://www.youtube.com/watch?v=paRx3SFfKXM

Episode 15: Follow the Money Trail: "We Are Just All Conspirators Here Together"


https://www.youtube.com/watch?v=NNdF1-K6my4

Episode 16: The Most Important Paper in Cancer Risk Assessment That Affects Policy in the US



https://www.youtube.com/watch?v=yHdLe5hileI

Episode 17: Studies With a Surprising Low-Dose Health Effect


https://www.youtube.com/watch?v=_CzS5I8DK6k

Episode 18: Ideology Trumps Science, Precautionary Principle Saves the LNT


https://www.youtube.com/watch?v=rdrKwVUuLGc

Episode 19: Genetic Repair Acknowledged


https://www.youtube.com/watch?v=892prKIMjvg

Episode 20: BEIR I Acknowledges Repair but Keeps LNT. Why?


https://www.youtube.com/watch?v=ZZx9SiY7wuI

Episode 21: BEIR I Mistake Revealed, LNT Challenged, Threshold Supported


https://www.youtube.com/watch?v=L3ZfL4vTPPM

Episode 22: Making Sense of History and a Path Forward by Dr. Calabrese

Friday, July 25, 2025

Everything Is Tuberculosis

Everything Is Tuberculosis,
by John Green.

Recently I read the current bestseller Everything Is Tuberculosis: The History and Persistence of Our Deadliest Infection, by John Green. Tuberculosis is the deadliest infectious disease worldwide. According to Green,

Just in the last two centuries, tuberculosis [TB] caused over a billion human deaths. One estimate, from Frank Ryan’s Tuberculosis: The Greatest Story Never Told, maintains that TB has killed around one in seven people who’ve ever lived. Covid-19 displaced tuberculosis as the world’s deadliest infectious disease from 2020 through 2022, but in 2023, TB regained the status it has held for most of what we know of human history: Killing 1,250,000 people, TB once again became our deadliest infection. What’s different now from 1804 or 1904 is that tuberculosis is curable, and has been since the mid-1950s. We know how to live in a world without tuberculosis. But we choose not to live in that world…
Some of the symptoms of tuberculosis are difficulty breathing, coughing up blood, night sweats, and weight loss. It is a slowly progressing disease, which led to its now-archaic nickname “consumption.” Green writes
Some patients will recover without treatment. Some will survive for decades but with permanent disability, including lung problems, devastating fatigue, and painful bone deformities. But if left untreated, most people who develop active TB will eventually die of the disease.
In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I stress the importance of understanding the sizes of things. Tuberculosis is caused by bacteria, which are each a couple of microns long and about half a micron wide. But the body reacts to these bacteria by surrounding them with white blood cells and T cells of the immune system, “creating a ball of calcifying tissue known as a tubercle.” Tubercles vary in size, from a few tenths of a millimeter to a centimeter. That’s too big to pass through capillaries in the bloodstream and too big to fit into a single alveolus in the lungs.

IPMB only mentions tuberculosis twice. Russ and I write
Spontaneous pneumothorax [air between the lung and the chest wall] can occur in any pulmonary disease that causes an alveolus (air sac) on the surface of the lung to rupture: most commonly emphysema, asthma, or tuberculosis….

Some pathologic conditions can be identified by the deposition of calcium salts. Such dystrophic (defective) calcification occurs in any form of tissue injury, particularly if there has been tissue necrosis (cell death). It is found in necrotizing tumors (particularly carcinomas), atherosclerotic blood vessels, areas of old abscess formation, tuberculous foci, and damaged heart valves, among others.

The history of tuberculosis as a disease is fascinating. Green writes that in eighteenth-century Europe “the disease became not just the leading cause of human death, but overwhelmingly the leading cause of human death.” Oddly, it became romanticized. People like the poet John Keats and the pianist Frédéric Chopin died of tuberculosis, and the illness came to be linked with creativity. It also became associated with female beauty, as the thin, wide-eyed, rosy-cheeked appearance of a woman with tuberculosis became fashionable. Later, the disease was stigmatized, being tied to race and a lack of moral virtue. When a person suffered from tuberculosis, they often went to a sanatorium for rest and treatment, and usually died there.

The German microbiologist Robert Koch isolated Mycobacterium tuberculosis in 1882. Koch was a rival of Frenchman Louis Pasteur, and both worked on treatments. I was surprised to learn that author Arthur Conan Doyle—famous for his Sherlock Holmes stories—also played a role in developing treatments for the disease. Tuberculosis remains latent in people until it’s activated by some other problem, such as malnutrition or an immune system disease like AIDS. Many infectious diseases attack children or the elderly, but TB is common in young adults. Physicist Richard Feynman’s 25-year-old wife Arline died of tuberculosis.

Green explains that 

in the decades after the discovery of Koch’s bacillus, small improvements emerged. Better diagnostics meant the disease could be identified and treated earlier, especially once chest X-rays emerged as a diagnostic tool.

The main impact of medical physics on tuberculosis is the development of radiography. X-rays weren’t even discovered until 1895, a decade after Koch isolated the tuberculosis bacterium. They arrived just in time. The often-decaying bacteria at the center of a tubercle accumulate calcium. At low x-ray energies, when the photoelectric effect is the dominant mechanism determining how x-ray photons interact with tissue, the cross section for x-ray attenuation varies as the fourth power of the atomic number. Because calcium has a relatively high atomic number (Z = 20) compared to hydrogen, carbon, nitrogen, and oxygen (Z = 1, 6, 7, and 8, respectively), and because lung tissue in general attenuates weakly owing to the low density of air, tubercles show up on a chest x-ray with a great deal of contrast.
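To get a feel for why calcified tubercles stand out, here is a toy Python comparison of the Z⁴ factor (my own illustration, not from the book; it compares bare Z⁴ values only and ignores density, electrons per gram, and the energy dependence):

```python
# Relative per-atom photoelectric cross sections, using the rough Z^4 scaling
elements = {"H": 1, "C": 6, "N": 7, "O": 8, "Ca": 20}

for name, Z in elements.items():
    print(f"{name:>2} (Z = {Z:2d}): Z^4 = {Z**4:6d}, relative to oxygen: {Z**4 / 8**4:5.1f}")
```

By this crude measure, calcium's photoelectric cross section is roughly forty times oxygen's, which is why calcified tissue is so conspicuous when the photoelectric effect dominates.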

The primary treatment for tuberculosis nowadays is antibiotics. The first one to be used for TB, streptomycin, was discovered in the 1940s. By the mid-1950s, several antibiotics made TB curable. I was born in 1960, just after the threat of tuberculosis subsided dramatically in the United States. I can still remember us kids getting those TB skin tests in our forearms, which we all had to have before entering school. But I don’t remember being very worried about TB as a child. The threat was over by then.

A vaccine exists for tuberculosis (the Bacillus Calmette–Guérin, or BCG, vaccine), but it’s mainly effective when given to children, and isn’t used widely in the United States, where tuberculosis is rare. In poorer countries, however, the vaccine saves millions of lives. Currently, mRNA vaccines are being developed against TB. This crucial advance is happening just as Robert F. Kennedy, Jr. is leading his crazy anti-science crusade against vaccines in general, and mRNA vaccines in particular. The vaccine alliance GAVI is hoping to introduce new vaccines for tuberculosis, and this effort will certainly be hurt by the United States defunding GAVI. The World Health Organization has an “end TB strategy” that, again, will be slowed by America’s withdrawal from the WHO and the dismantling of USAID. Green’s book was published in 2025, but I suspect it was written in 2024, before the Trump administration’s conspiracy-theory-laden effort to oppose vaccines and deny vaccine science got underway.

Many of these world-wide efforts to eliminate TB depend on access to new drugs that can overcome drug-resistant TB. Unfortunately, such drugs are expensive, and are difficult to afford or even obtain in poorer countries.

In the final pages of Everything is Tuberculosis, Green writes eloquently

...TB [tuberculosis] in the twenty-first century is not really caused by a bacteria that we know how to kill. TB in the twenty-first century is really caused by those social determinants of health, which at their core are about human-built systems for extracting and allocating resources. The real cause of contemporary tuberculosis is, for lack of a better term, us...

We cannot address TB only with vaccines and medications. We cannot address it only with comprehensive STP [Search, Treat, Prevent] programs. We must also address the root cause of tuberculosis, which is injustice. In a world where everyone can eat, and access healthcare, and be treated humanely, tuberculosis has no chance. Ultimately, we are the cause.

We must also be the cure.

Green serves on the board of trustees for the global health non-profit Partners In Health. To anyone wanting to join the worldwide fight against tuberculosis, I suggest starting at https://www.pih.org.

 
John Green reads the first chapter of Everything Is Tuberculosis.

https://www.youtube.com/watch?v=CCbDdk8Wz-8



John Green discusses Everything Is Tuberculosis on the Daily Show

https://www.youtube.com/watch?v=2uppLo4lZRc


Friday, July 18, 2025

Millikan and the Magnetic Field of a Single Axon

“The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment,” superimposed on Intermediate Physics for Medicine and Biology.

Forty years ago this month, I published one of my first scientific papers. “The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment” appeared in the July 1985 issue of the Biophysical Journal (Volume 48, Pages 93–109). I was a graduate student at Vanderbilt University at the time, and my coauthor was my PhD advisor, John Wikswo. When discussing the paper below, I will write “I did this…” and “I thought that…” because I was the one in the lab doing the experiments, but of course it was really Wikswo and I together writing the paper and analyzing the results.

Selected Papers of Great American Physicists, superimposed on the cover of Intermediate Physics for Medicine and Biology.
In those days I planned to be an experimentalist (like Wikswo). About the time I was writing “The Magnetic Field of a Single Axon,” I read “On the Elementary Electrical Charge and The Avogadro Constant” by Robert Millikan (Physical Review, Volume 11, Pages 109–143, 1913). It had been reprinted in the book Selected Papers of Great American Physicists, published by the American Institute of Physics.

If you are reading this blog, you’re probably familiar with Millikan’s oil drop experiment. He measured the speed of small droplets of oil suspended in air and placed in gravitational and electric fields, and was able to determine the charge of a single electron. I remember doing this experiment as an undergraduate physics major at the University of Kansas. I was particularly impressed by the way Millikan analyzed his experiment for possible systematic errors: He worried about deviations of the frictional force experienced by the drops from Stokes’ law and corrected for it; he analyzed the possible changes to the density of the oil in small drops; he checked that his 5300 volt battery was calibrated correctly and supplied a constant voltage; and he fussed about convection currents in the air influencing his results. He was especially concerned about his value of the viscosity of air, which he estimated was known to about one part in a thousand. Rooting out systematic errors is a hallmark of a good experimentalist. I wanted to be like Millikan, so I analyzed my magnetic field measurement for a variety of systematic errors.

The first type of error in my experiment was in the parameters used to calculate the magnetic field (so I could compare it to the measured field). I estimated that my largest source of error was in my measurement of the axon radius. This was done using a reticle in the dissecting microscope eyepiece. I only knew the radius to 10% accuracy, in part because I could see that it was not altogether uniform along the axon, and because I could not be sure the axon’s cross section was circular. It was my biggest source of error for calculating the magnitude of the magnetic field, because the field varied as the axon cross-sectional area, which is proportional to the radius squared.
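Since the calculated field scales as the radius squared, that 10% radius uncertainty doubles when propagated to the field. As a worked step (my own addition, just spelling out the error propagation):

ΔB/B ≈ 2 Δa/a = 2 × (10%) = 20%.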
Figure 1 from “The Magnetic Field of a Single Axon.”

I measured the magnetic field by threading the axon through a wire-wound ferrite-core toroid (I’ve written about these toroid measurements before in this blog). I assumed the axon was at the center of the toroid, but this was not always the case. I performed calculations assuming the toroid averaged the magnetic field for an off-axis axon, and was able to set an upper limit on this error of about 2%. The magnetic field was not measured at a point but was averaged over the cross-sectional area of the ferrite core. More numerical analysis suggested that I could account for the core area to within about 1%. I was able to show that inductive effects from the toroid were utterly negligible. Finally, I assumed the high permeability ferrite did not affect the magnetic field distribution. This should be true if the axon is concentric with the toroid and aligned properly. I didn’t have a good way to estimate the size of this error.

Figure 2 from “The Magnetic Field of a Single Axon.”
The toroid and axon were suspended in a saline bath (technically, Van Harreveld’s solution), and this bath gave rise to other sources of error. I analyzed the magnetic field for different sized baths (the default assumption was an unbounded bath), and for when the bath had a planar insulating boundary. I could do the experiment of measuring the magnetic field as we raised and lowered the volume of fluid in the bath. The effect was negligible. I spent a lot of time worrying about the heterogeneity caused by the axon being embedded in a nerve bundle. I didn’t really know the conductivity of the surrounding nerve bundle, but for reasonable assumptions it didn’t seem to have much effect. Perhaps the biggest heterogeneity in our experiment was the “giant” (~1 mm inner radius, 2 mm outer radius, 1 mm thick) toroid, which was embedded in an insulating epoxy coating. This big chunk of epoxy certainly influenced the current density in the surrounding saline. I had to develop a new way of calculating the extracellular current entirely numerically to estimate this effect. The calculation was so complicated that Wikswo and I didn’t describe it in our paper, but instead cited another paper that we listed as “in preparation” but that in fact never was published. I concluded that the toroid was not a big effect for my nerve axon measurements, although it seemed to be more important when I later studied strands of cardiac tissue.

Figure 3 of “The Magnetic Field of a Single Axon.”
Other miscellaneous potential sources of error include capacitive effects in the saline and an uncertainty in the action potential conduction velocity (measured using a second toroid). I determined the transmembrane potential by taking the difference between the intracellular potential (measured by a glass microelectrode, see more here) and the potential from a metal extracellular electrode. However, I could not position the two electrodes very accurately, and the extracellular potential varies considerably over small distances from the axon, so my resulting transmembrane potential certainly had a little bit of error. Measurement of the intracellular potential using the microelectrode was susceptible to capacitive coupling to the surrounding saline bath. I used a “frequency compensator” to supply “negative capacitance” and correct for this coupling, but I could not be sure the correction was accurate enough to avoid introducing any error. One of my goals was to calculate the magnetic field from the transmembrane potential, so any systematic errors in my voltage measurements were concerning. Finally, I worried about cell damage when I pushed the glass microelectrode into the axon. I could check this by inserting a second glass microelectrode nearby, and I didn’t see any significant effect, but such things are difficult to be sure about.

All of this analysis of systematic errors, and more, went into our rather long Biophysical Journal paper. It remains one of my favorite publications. I hope Millikan would have been proud. If you want to learn more, see Chapter 8 about Biomagnetism in Intermediate Physics for Medicine and Biology.

Forty years is a long time, but to this old man it seems like just yesterday.

Friday, July 11, 2025

David Cohen: The Father of MEG

David Cohen: The Father of MEG, by Gary Boas.

Gary Boas recently published a short biography of David Cohen, known as the father of magnetoencephalography (MEG). The book begins with Cohen’s childhood in Winnipeg, Canada, including the influence of his uncle, who introduced him to electronics and crystal radios. It then describes his college days and his graduate studies at the University of California, Berkeley. He was a professor at the University of Illinois Chicago, where he built his first magnetically shielded room, in which he hoped to measure the magnetic fields of the body. Unfortunately, Cohen didn’t get tenure there, mainly for political reasons (and a bias against applied research related to biology and medicine). However, he found a new professorship at the Massachusetts Institute of Technology, where he built an even bigger shielded room. The climax of several years of work came in 1969, when he combined the SQUID magnetometer and his shielded room to make groundbreaking biomagnetic recordings. Boas describes the big event this way:
To address this problem [of noise in his copper-coil based magnetic field detector drowning out the signal], he [David Cohen] turned to James Zimmerman, who had invented a superconducting quantum interference device (SQUID) several years before… The introduction came by way of Ed Edelsack, a U.S. Navy funding officer… In a 2024 retrospective about his biomagnetism work in Boston, David described what happened next.

“Ed put me in touch with Jim, and it was arranged that Jim would bring one of his first SQUIDs to my lab at MIT, to look for biomagnetic signals in the shielded room. Jim arrived near the end of December, complete with SQUID, electronics, and nitrogen-shielded glass dewar. It took a few days to set up his system in the shielded room, and for Jim to tune the SQUID. Finally, we were ready to look at the easiest biomagnetic signal: the signal from the human heart, because it was large and regular. Jim stripped down to his shorts, and it was his heart that we first looked at.”

The results were nothing short of astounding; in terms of the signal measured, they were light years beyond anything David had seen with the copper-coil based detector. By combining the highly sensitive SQUID with the shielded room, which successfully eliminated outside magnetic disturbances, the two researchers were able to produce, for the first time, clear, unambiguous signals showing the magnetic fields produced by various organs of the human body. The implications of this were far reaching, with potential for a wide range of both basic science and clinical applications. David didn’t quite realize this at the time, but he and Zimmerman had just launched a new field of study, biomagnetism.

Having demonstrated the efficacy of the new approach… David switched off the lights in the lab and he and Zimmerman went out to celebrate. It was December 31, 1969. The thrill of possibility hung in the air as they joined other revelers to ring in a new decade—indeed, a new era.

“Biomagnetism: The First Sixty Years,” superimposed on the cover of Intermediate Physics for Medicine and Biology.

The biography is an interesting read. I always enjoy stories illustrating how physicists become interested in biology and medicine. Russ Hobbie and I discuss the MEG in Chapter 8 of Intermediate Physics for Medicine and Biology. You can also learn more about Cohen’s contributions in my review article “Biomagnetism: The First Sixty Years.”

Today Cohen is 97 years old and still active in the field of biomagnetism. The best thing about Boas’s biography is you can read it for free at https://meg.martinos.org/david-cohen-the-father-of-meg. Enjoy! 


The Birth of the MEG: A Brief History
 https://www.youtube.com/watch?v=HxQ8D4cPIHI
 
 

Friday, July 4, 2025

An Alternative to the Linear-Quadratic Model

In Section 16.9 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the linear-quadratic model.

The linear-quadratic model is often used to describe cell survival curves… We use it as a simplified model for DNA damage from ionizing radiation.
Suppose you plate cells in a culture and then expose them to x-rays. In the linear-quadratic model, the probability of cell survival, P, is

P = e^(−αD − βD²)

where D is the dose (in grays) and α and β are constants. At large doses, the quadratic term dominates and P falls as P = e^(−βD²). In some experiments, however, at large doses P falls exponentially. It turns out that there is another simple model—called the multi-target single-hit (MTSH) model—describing how P depends on D in survival curves,

P = 1 − (1 − e^(−α′D))^N

Let’s compare and contrast these curves. They both have two parameters: α and β for the linear-quadratic model, and α′ and N for the MTSH model. Both give P = 1 if D is zero (as they must). They both fall off more slowly at small doses and then faster at large doses. However, while the linear-quadratic model falls off at large dose as e^(−βD²), the MTSH model falls off exponentially (linearly in a semilog plot).

If α′D is large, then the exponential is small. We can expand the polynomial using (1 − x)^N = 1 − Nx + …, keep only the first two terms, and then use some algebra to show that at large doses P = N e^(−α′D). If you extrapolate this large-dose behavior back to zero dose, you get P = N, which provides a simple way to determine N.
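Written out, the expansion goes (a small worked step, nothing beyond the algebra just described):

P = 1 − (1 − e^(−α′D))^N ≈ 1 − [1 − N e^(−α′D)] = N e^(−α′D).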

Below is a plot of both curves. The blue curve is the linear-quadratic model with α = 0.1 Gy⁻¹ and β = 0.1 Gy⁻². The gold curve is the MTSH model with α′ = 1.2 Gy⁻¹ and N = 10. The dashed gold line is the extrapolation of the large-dose behavior back to zero dose to get N.



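For readers who want to reproduce the figure, here is a minimal Python/matplotlib sketch using the parameter values quoted above (the dose range and axis limits are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

alpha, beta = 0.1, 0.1      # linear-quadratic parameters (Gy^-1, Gy^-2)
alpha_p, N = 1.2, 10        # MTSH parameters (Gy^-1, dimensionless)

D = np.linspace(0, 8, 400)                   # dose, in Gy
P_lq = np.exp(-alpha*D - beta*D**2)          # linear-quadratic survival
P_mtsh = 1 - (1 - np.exp(-alpha_p*D))**N     # multi-target single-hit survival
P_ext = N*np.exp(-alpha_p*D)                 # large-dose extrapolation, P = N e^(-a'D)

plt.semilogy(D, P_lq, color="tab:blue", label="linear-quadratic")
plt.semilogy(D, P_mtsh, color="goldenrod", label="MTSH")
plt.semilogy(D, P_ext, "--", color="goldenrod", label="MTSH extrapolation")
plt.xlabel("dose D (Gy)")
plt.ylabel("survival probability P")
plt.ylim(1e-4, 20)   # the intercept of the dashed line at D = 0 is N
plt.legend()
plt.show()
```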
If the survival curve falls off exponentially at large doses, use the MTSH model. If it falls off quadratically (in a semilog plot) at large doses, use the linear-quadratic model. Sometimes the data don’t fit either of these simple toy models. Moreover, P is often difficult to measure when it’s very small, so the large-dose behavior is unclear. The two models are based on different assumptions, none of which may apply to your data. Choosing which model to use is not always easy. That’s what makes it so fun.

Friday, June 27, 2025

A Toy Model for Radiation Damage

Sometimes a toy model (a simple model that strips away all the detail to expose the underlying mechanisms more clearly) can be useful. Today I present a new homework problem that contains a toy model for understanding equivalent dose.
Section 16.12

Problem 34½. Consider two scenarios.

Scenario 1: N* particles are distributed evenly in a volume V*, so the concentration is C* = N*/V*.

Scenario 2: The volume V* is divided into two noninteracting regions of volume V1 and V2, where V* = V1 + V2. All N* particles are placed in V2. Therefore, the concentration of particles in V1 is C1 = 0, and the concentration in V2 is C2 = N*/V2 = (N*/V*)[(V1 + V2)/V2] = C*(V1 + V2)/V2.

Now, examine two cases for how, in a local region, the cellular damage D relates to the concentration C.

Case 1: Damage is proportional to the concentration. In other words, D = αC, where α is a constant of proportionality.
Case 2: Damage is proportional to the square of the concentration. In other words, D = βC2, where β is another constant of proportionality.

For both cases and both scenarios (a total of four different situations), average the damage over the entire volume V* to get the average damage ⟨D⟩. Find how ⟨D⟩ is related to C*.

Stop! To get the most out of this blog post, stop reading and solve this homework problem yourself...


...Okay, so you solved it and now you’re back. Help me explain it to that fellow who didn’t bother to solve it for himself.

Case 1 (damage proportional to concentration)

Scenario 1: The concentration is uniform throughout V*. Averaging the local relation D = αC over V* simply gives ⟨D⟩ = αC*. The average relationship is the same as the local relationship.

Scenario 2: Locally, D1 = 0 because all the particles are in V2, so C1 = 0. Moreover, D2 = αC2 = αC*(V1 + V2)/V2. Now, average the damage over the volume V*. You get ⟨D⟩ = [V1/(V1 + V2)](0) + [V2/(V1 + V2)] αC*(V1 + V2)/V2. But all those complicated factors cancel out, and you get simply ⟨D⟩ = αC*. This is the same result as in scenario 1. The average damage is proportional to C*.

Case 2 (damage proportional to concentration squared)

Scenario 1: Again, the concentration is uniform throughout V*. So you just get ⟨D⟩ = βC*². All that matters is the average concentration, C*.

Scenario 2: Locally, D1 = 0 and D2 = βC2² = βC*²[(V1 + V2)/V2]². Now average over the volume V*. You get ⟨D⟩ = βC*²(V1 + V2)/V2. If V2 is much less than V1, then ⟨D⟩ is much greater than βC*². It is as if the average damage is supercharged by the concentration being, well, concentrated. In this scenario, the average damage depends on both C* and the ratio V1/V2.
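Here is a quick numerical check of all four situations (a sketch with made-up numbers; the values of V1, V2, N*, α, and β are arbitrary choices of mine):

```python
V1, V2 = 9.0, 1.0            # V2 much smaller than V1
Vstar = V1 + V2
Nstar = 100.0
Cstar = Nstar / Vstar        # scenario 1: uniform concentration
C2 = Nstar / V2              # scenario 2: everything crowded into V2
alpha = beta = 1.0           # arbitrary proportionality constants

# Case 1: damage proportional to C, averaged over V*
print(alpha * Cstar)                          # scenario 1: 10.0
print((V1 * 0 + V2 * alpha * C2) / Vstar)     # scenario 2: 10.0 (the same!)

# Case 2: damage proportional to C^2, averaged over V*
print(beta * Cstar**2)                        # scenario 1: 100.0
print((V1 * 0 + V2 * beta * C2**2) / Vstar)   # scenario 2: 1000.0, larger by (V1+V2)/V2
```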

This is all interesting, but what does it mean? It means that if you deposit energy locally, then the concentration (or "dose") alone may not tell the whole story. It depends on how the damage depends on the concentration. What is an example of when the damage would be proportional to the square of the concentration? Suppose we are talking about damage to DNA. The concentration might refer to the number of “breaks” in the DNA strand caused by radiation. Now suppose further that DNA has a repair mechanism that can fix breaks as long as they are far apart. That is, as long as they are isolated. But if you get two breaks near each other, then the repair mechanism is overwhelmed and doesn’t work. So, you need two “breaks” close together or you get no damage (in the jargon of radiobiology, you need double-strand breaks instead of just single-strand breaks). The concentration squared tells you something about having two events happen at the same place. You need a “break” to happen at some target spot along the DNA (proportional to the concentration) and then you need another “break” to happen nearby (again, proportional to the concentration), so the probability of getting two breaks near the target spot is proportional to the concentration squared.
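Here is a small Monte Carlo sketch of that last argument (my own toy simulation, not from IPMB): scatter “breaks” at random along a strand and count how often two land close together. Doubling the number of breaks should roughly quadruple the count.

```python
import numpy as np

rng = np.random.default_rng(0)

def close_pairs(n_breaks, d=0.001, trials=5000):
    # Average number of adjacent break pairs closer than d on a unit-length strand
    count = 0
    for _ in range(trials):
        gaps = np.diff(np.sort(rng.random(n_breaks)))
        count += np.sum(gaps < d)
    return count / trials

for n in (10, 20, 40):
    print(f"{n:2d} breaks: {close_pairs(n):.3f} close pairs on average")
```

The count of close pairs grows roughly fourfold each time the number of breaks doubles, which is the concentration-squared behavior of case 2.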

Now let’s compare x-rays and alpha particles. Suppose you irradiate tissue so that the energy deposited in the tissue is the same for both. Then, the “dose” (energy per unit mass, analogous to C) is the same in both cases. But the alpha particles (scenario 2) deposit all their energy along a few thin tracks, whereas x-rays (scenario 1) deposit their energy all over the place randomly. You might say: well, for alpha particles the energy has a high density along the path, but everywhere else there is nothing, so on average those effects balance out. That’s true if damage is proportional to concentration (case 1 above). But if damage is proportional to concentration squared (case 2), it’s not true. The average damage caused by alpha particles is more extensive than for x-rays, even if the energy deposited into the tissue (the dose) is the same. The “equivalent dose” (another term for “damage”) is higher for the alpha particles than for the x-rays.

Intermediate Physics for Medicine and Biology.

In Section 16.12 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce the concept of equivalent dose. To find the equivalent dose, the dose is multiplied by a dimensionless weighting factor (in the jargon, called the “relative biological effectiveness”), which is one for x-rays and twenty for alpha particles. The equivalent dose even has its own unit, the sievert (as opposed to the gray, the unit of the dose). Both the sievert and the gray are abbreviations for joules per kilogram, but the sievert includes the weighting factor. Alpha particles just do more damage than x-rays for a given dose. This is because alpha particles deposit their energy in a smaller volume, and damage depends on DNA being hit twice close together. In other words, damage depends on the concentration squared. In our toy model, the weighting factor is (V1 + V2)/V2.

Our whole story about DNA repair mechanisms is reasonable and most likely true. But any other mechanism that results in the damage depending on the concentration (or dose) squared would give the same behavior. This result is not limited to DNA repair processes.

In general, case 1 (damage proportional to concentration) and case 2 (damage proportional to the square of the concentration) are not mutually exclusive. For instance, instead of DNA repair mechanisms being perfect for single-strand breaks and being useless for double-strand breaks, perhaps they are 90% effective for single-strand breaks and only 10% effective for double-strand breaks. In Section 16.9 of IPMB, Russ and I show that cell survival curves typically have two terms, one proportional to the dose and one proportional to the dose squared. At low doses the linear term dominates, but at high doses the quadratic one does.

The goal of toy models is to provide insight. I hope that even though the model in this new homework problem is oversimplified and artificial, it helps you get an intuitive feel for the equivalent dose.

Friday, June 20, 2025

A Toy Model for Straggling

One of the homework problems in Intermediate Physics for Medicine and Biology (Problem 31 in Chapter 16) introduces a toy model for the Bragg peak. I won’t review that entire problem, but students derive an equation for the stopping power, S (the energy per unit distance deposited in tissue by a high-energy ion), as a function of the depth below the tissue surface, x:

S(x) = S0/√(1 − x/R),

where S0 is the ion’s stopping power at the surface (x = 0) and R is the ion’s range. At a glance you can see how the Bragg peak arises—the denominator goes to zero at x = R so the stopping power goes to infinity. That, in fact, is why proton therapy for cancer is becoming so popular: Energy is deposited primarily at one spot well below the tissue surface where a tumor is located, with only a small dose to upstream healthy tissue.

One topic that comes up when discussing the Bragg peak is straggling. The idea is that the range is not a single parameter. Instead, protons have a distribution of ranges. When preparing the 6th edition of Intermediate Physics for Medicine and Biology, I thought I would try to develop a toy model in a new homework problem to illustrate straggling. 

Section 16.10 

Problem 31½. Consider a beam of protons incident on a tissue. Assume the stopping power S for a single proton as a function of depth x below the tissue surface is

S(x) = S0/√(1 − x/R).

Furthermore, assume that instead of all the protons having the same range R, the protons have a uniform distribution of ranges between R − δ/2 and R + δ/2, and no protons have a range outside this interval. Calculate the average stopping power ⟨S⟩ by integrating S(x) over this distribution of ranges.

This calculation is a little more challenging than I had expected. We have to consider three possibilities for x:

x < R − δ/2

In this case, all of the protons contribute, so the average stopping power is

⟨S⟩ = (S0/δ) ∫ dR′/√(1 − x/R′),   integrated from R′ = R − δ/2 to R′ = R + δ/2.

We need to solve the integral

∫ dR′/√(1 − x/R′) = ∫ √R′ dR′/√(R′ − x).

First, let

u = √(R′ − x).

With a little analysis, you can show that

√R′ = √(u² + x)   and   dR′ = 2u du.

So the integral becomes

2 ∫ √(u² + x) du.

This new integral I can look up in my integral table,

∫ √(u² + a²) du = (u/2)√(u² + a²) + (a²/2) ln(u + √(u² + a²)).

Finally, after a bit of algebra, I get

⟨S⟩ = (S0/δ) [√(R′(R′ − x)) + x ln(√R′ + √(R′ − x))],   evaluated between R′ = R − δ/2 and R′ = R + δ/2.
Well, that was a lot of work and the result is not very pretty. And we are not even done yet! We still have the other two cases. 

R − δ/2 < x ≤ R + δ/2

In this case, if the range is less than x there is no contribution to the stopping power, but if the range is greater than x there is. So, we must solve the integral

⟨S⟩ = (S0/δ) ∫ dR′/√(1 − x/R′),   now integrated from R′ = x to R′ = R + δ/2.

I’m not going to go through all those calculations again (I’ll leave it to you, dear reader, to check). The result is the same expression as before,

⟨S⟩ = (S0/δ) [√(R′(R′ − x)) + x ln(√R′ + √(R′ − x))],   but now evaluated between R′ = x and R′ = R + δ/2.
x ≥ R + δ/2

This is the easy case. None of the protons make it to x, so the stopping power is zero. 

Well, I can’t look at these functions and tell what the plot will look like. All I can do is ask Mr. Mathematica to make the plot (he’s much smarter than I am). Here’s what he said: 


The peak of the “pure” (single value for the range) curve (the red one) goes to infinity at x = R, and is zero for any x greater than R. As you begin averaging, you start getting some stopping power past the original range, out to R + δ/2. To me the most interesting thing is that for x = R − δ/2, the stopping power is larger than for the pure case. The curves all overlap for x ≥ R + δ/2 (of course, they are all zero), and for fairly small values of x (in these cases, about x < 0.5) the curves are all nearly equal (indistinguishable in the plot). Even for a small value of δ (in this case, a spread of ranges equal to one tenth of the pure range), the peak of the stopping power curve is suppressed.
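If you don’t have Mr. Mathematica handy, here is a Python sketch that reproduces the same kind of plot numerically, using the antiderivative worked out above (the values S0 = R = 1 and the two widths δ = 0.1 and 0.2 are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

S0, R = 1.0, 1.0   # arbitrary units

def F(r, x):
    # Antiderivative of 1/sqrt(1 - x/r) with respect to the range r (valid for r >= x)
    return np.sqrt(r * (r - x)) + x * np.log(np.sqrt(r) + np.sqrt(r - x))

def S_avg(x, delta):
    # Stopping power at depth x, averaged over ranges uniform on [R - d/2, R + d/2]
    lo, hi = R - delta / 2, R + delta / 2
    if x >= hi:
        return 0.0              # no proton reaches this depth
    lo = max(lo, x)             # ranges shorter than x contribute nothing
    return S0 * (F(hi, x) - F(lo, x)) / delta

x = np.linspace(0.0, 1.2, 601)
pure = np.where(x < R, S0 / np.sqrt(np.clip(1 - x / R, 1e-9, None)), 0.0)
plt.plot(x, pure, "r", label="pure (single range R)")
for delta in (0.1, 0.2):
    plt.plot(x, [S_avg(xi, delta) for xi in x], label=f"δ = {delta}")
plt.xlabel("depth x")
plt.ylabel("stopping power S")
plt.ylim(0, 6)
plt.legend()
plt.show()
```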

The curves for straggling that you see in most textbooks are much smoother, but that’s because I suspect they assume a smoother distribution of range values, such as a normal distribution. In this example, I wanted something simple enough to get an analytical solution, so I took a uniform distribution over a width δ.

Will this new homework problem make it into the 6th edition? I’m not sure. It’s definitely a candidate. However, the value of toy models is that they illustrate the physical phenomenon and describe it in simple equations. I found the equations in this example to be complicated and not illuminating. There is still some value, but if you are not gaining a lot of insight from your toy model, it may not be worth doing. I’ll leave the decision of including it in the 6th edition to my new coauthor, Gene Surdutovich. After all, he’s the expert in the interaction of ions with tissue.