The book is fascinating. It was published in 1959, the year before I was born. All three coauthors were British, and all are famous enough to have Wikipedia pages. Cecil Frank Powell was a particle physicist who received the Nobel Prize in 1950 for developing the photographic method for studying nuclear processes, and for using this method to discover the pion. He trained at the Cavendish Laboratory working with Rutherford. He died in 1969. Peter Howard Fowler was a student of Powell’s who worked on cosmic radiation. He was a radar officer with the Royal Air Force during World War II, and was able to detect German radar jamming and identify its source, leading to the destruction of the responsible German radar station. He was married to physicist Rosemary Fowler, who discovered the kaon. His grandfather was Ernest Rutherford. Fowler died in 1996. Donald Hill Perkins discovered the negative pion. He studied proton decay, and found early evidence of neutrino oscillations. In 1961, Perkins and Fowler were the first to suggest using pion beams as therapy for cancer (the use of pions in medicine hasn’t panned out). Perkins died at the ripe old age of 97 in 2022.
When skimming through the book, I noticed an interesting illustration of a trident track produced by high energy electrons. It looks something like this:
A trident arises from the process of bremsstrahlung followed by pair production, both of which are described in IPMB. A fast electron interacts with an atomic nucleus, decelerating the electron and emitting a bremsstrahlung photon. This photon, if it has high enough energy (at least twice the electron rest energy, or 1.022 MeV), can then interact with an atomic nucleus to create an electron-positron pair. The intermediate photon can be “virtual,” existing only fleetingly. The end result is three particles: the original electron plus the pair. I gather that this requires a very high energy electron, and its cross section is small, so it seems to contribute little to the dose in medical physics. The authors talk about the production of tridents at energies of more than a BeV, which is an old-fashioned way of saying a GeV, equivalent to 1000 MeV.
I’m glad Russ and I included figures from The Study of Elementary Particles by the Photographic Method. I hope I can figure out the permissions situation (the authors are all dead, and the publisher was sold to another company) and we can continue to include the figures in the 6th edition.
Most mornings I take a walk to keep myself in shape. Usually I listen to an audiobook while walking, but for some reason my earbuds didn’t recharge properly overnight and this morning they didn’t work right. So, I had to take my constitutional in silence.
It so happens that yesterday I was revising Appendix H (The Binomial Probability Distribution) for the 6th edition of Intermediate Physics for Medicine and Biology. (Yes, you’re right, Gene Surdutovich and I are getting close to being done if we’re already up to the appendices.) As I was reviewing the material, I thought “it sure would be nice to have some more nontrivial but not too complicated word problems for this appendix.” So, as I hiked I came up with this:
Appendix H
Problem 6. You are a young college student who wants to make a little extra cash for living expenses. You also are an occasional Dungeons and Dragons player, so you have a twenty-sided die in the top drawer of your desk. You decide to set up what you call the “Dollar and Dime” game. Any student in your dormitory can come to you and pay you a dollar and a dime, and you will take out your twenty-sided die and roll it once. If it rolls a one, you hand the student a crisp, new twenty-dollar bill. If it rolls a two through twenty, the student walks away empty handed. You’re pretty happy with the game. On average, the dollars earned cover the required payouts, and the dimes are all profit. The game becomes popular among your dormmates, and people stop by to play dozens of times each day.
A page from the Rodgers and Hammerstein Song Book.
John comes to you late one Friday afternoon. He has invited Jane to attend the school musical Oklahoma! with him that evening (Jane loves musicals, especially those by Rodgers and Hammerstein), but two tickets will cost him $40, and all he has is $11. It’s too late to find a part-time job or to beg funds from his parents. His only chance to avoid reneging on the theater date is to get the needed cash by playing the Dollar and Dime game. John slaps the eleven bucks down on your desk and says “I wanna play ten times.”
Your first thought is to tell John to go to the bank and exchange the ten dollar bill for ten ones and the one dollar bill for ten dimes, so he can play the game properly. But John is on the school wrestling team, is six foot three, and weighs 270 pounds, so you decide to waive this technicality. You accept his $11, get out the twenty-sided die, and start rolling.
Ordinarily when playing this game you relax, knowing that in the long run you will make a profit. However, today you’re a bit nervous because you only have three portraits of Andrew Jackson in the envelope where you store the cash for your game. Earlier in the day, you told your wealthy roommate Peter about your situation, hoping he could cover you if needed (he declined). Now, if John wins the game four or more times, he’s gonna be upset that you can’t pay him what you owe him, and John is not the kind of guy you want to make mad.
(a) What is the probability that John wins enough money to take Jane to Oklahoma!?
(b) What is the probability that you get clobbered by John?
(c) How do all these results change if Peter (who is annoyed that you converted your dorm room to a casino with people coming and going and noisily rolling that silly icosahedron at all hours of the night) loans John an extra $22, interest free?
Consider an experiment with two mutually exclusive outcomes, which is repeated N times, with each repetition being independent of every other one. One of the outcomes is labeled “success”, the other is called “failure.”
The binomial distribution is given by Eq. H.2, P(n; N, p) = [N!/(n! (N − n)!)] p^n (1 − p)^(N−n),
where N is the number of tries (John has $11 so he can play the game ten times, N = 10), p is the probability of success for each try (it is a twenty-sided die, so p = 0.05), and n is the number of successes (rolling a one). John will make 20n dollars by playing the Dollar and Dime game. The key question is, what’s the probability P that John gets n wins?
The probability that John never rolls a one and leaves broke is P(0) = (0.95)^10 = 0.599.
Yikes! He has 3:2 odds of losing everything. Next, the probability that John wins only once is P(1) = 10 × (0.05) × (0.95)^9 = 0.315.
Only one win will make John twenty bucks, so after paying $11 to play he’ll be nine dollars ahead, but that still isn’t enough to take Jane to see Curly give Laurey that ride in his surrey, which, as you will recall, costs $40. He needs at least two wins for that. We now have enough information to answer part (a). The probability that John takes Jane to the show is one minus the probability that he earns less than $40. So, John can avoid an unpleasant call to Jane (or, worse, escape being a no-show) with a probability of 1 − 0.599 − 0.315 = 0.086. That means the odds are about 11:1 against making Jane happy. Looks like John’s in trouble.
John’s best chance is to win the Dollar and Dime game twice and earn the $40 needed for tickets. The probability of that is P(2) = 45 × (0.05)^2 × (0.95)^8 = 0.075.
Boy, that would be great. But if John is really lucky, he’ll win a third time, earning enough for the tickets plus some extra cash for a large popcorn and two medium soft drinks (which together cost $18.49). The probability of exactly three wins is P(3) = 120 × (0.05)^3 × (0.95)^7 = 0.010.
There’s only a one percent chance of Jane getting her popcorn.
But wait. If John wins four or more times, you won’t have the cash to cover his winnings. Either he’ll thrash you, or (more likely) you’ll be forced to make a deal where you pay John all that you have, $60, and promise to return his original investment of $11, and grovel before him begging for mercy. That would be good news for John. He would walk away with at least $71, and perhaps more if he knows how to drive a hard bargain (after all, you don’t want to end up daid, like poor Jud). What are the odds he’ll bust the bank? The probability of exactly four wins is P(4) = 210 × (0.05)^4 × (0.95)^6 = 0.001.
We should also add in the chance that John will win five times, or six, or more, but those will be very small (calculate them yourself if you don’t believe me). So, the probability of a disaster (for you, not for John) is about one part per thousand, or a tenth of a percent. The odds are small, but the consequences would be dire (with you possibly ending up in the hospital), so you’re still nervous until John finishes all ten of his rolls.
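If you’d rather let a computer grind through this arithmetic, here is a minimal Python sketch (my own, not from IPMB) that reproduces the numbers above using only the standard library.

```python
from math import comb

N, p = 10, 0.05          # ten rolls, each with a 1-in-20 chance of winning

def P(n):
    """Probability of exactly n wins, from the binomial distribution (Eq. H.2)."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

print(f"P(0) = {P(0):.3f}")                       # 0.599: John leaves broke
print(f"P(1) = {P(1):.3f}")                       # 0.315: up $9, but no show for Jane
print(f"P(2 or more) = {1 - P(0) - P(1):.3f}")    # 0.086: Jane gets to see Oklahoma!
print(f"P(4 or more) = {sum(P(n) for n in range(4, N + 1)):.4f}")  # about 0.001: the bank is busted
```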
Now, consider the final twist to the story. Imagine that when your so-called “friend” Peter sees John arrive, he pulls him aside, gives him a wink, and loans him another $22. (Pete could easily have just lent the $29 John needs to cover the cost of his date with Jane, but that would defeat his purpose, wouldn’t it?) Now John has $33 to spend on the Dollar and Dime game. The only thing that changes is that N increases from ten to thirty. How does that change the probabilities? You can work out the details. I’ll just state the results in the table below.
n     P
0     0.215
1     0.339
2     0.259
3     0.127
4     0.045
The chance of John taking Jane to the musical is now 0.446, so the odds are approaching 50-50. Still not great odds, but much better than before. John’s starting to dream that after he takes Jane to Oklahoma! “people will say we’re in love.” More importantly for you (and for that evil Peter), the probability of busting the bank is now about 6%. So, at no cost to himself, Peter just increased the odds of shutting down the hated casino by a factor of sixty. Win or lose, you vow to start looking for another roommate, one who doesn’t know as much math.
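Here is the same sketch with N increased to thirty (again, my own illustrative code); it reproduces the table and the two summary probabilities.

```python
from math import comb

N, p = 30, 0.05          # thirty rolls after Peter's interest-free loan

def P(n):
    """Probability of exactly n wins, from the binomial distribution (Eq. H.2)."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

for n in range(5):                                   # reproduce the table above
    print(f"{n}  {P(n):.3f}")
print(f"P(2 or more) = {1 - P(0) - P(1):.3f}")       # 0.446: off to the musical
print(f"P(4 or more) = {1 - sum(P(n) for n in range(4)):.3f}")  # about 0.06: the bank is busted
```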
By the time I came up with this homework problem, I had just about finished my walk. The problem has no biology or medicine in it, so it probably won’t make it into the revised sixth edition. With any luck, tomorrow I’ll be back to the audiobook (and, oh, what a beautiful morning that will be). By the way, our goal is to submit the 6th edition of IPMB to our publisher, Springer, before the end of the year. It’s gonna be close, but we just might make it.
One of my scientific heroes is James Clerk Maxwell. Maxwell (1831–1879) was a Scottish physicist known for developing Maxwell’s equations of electricity and magnetism, and for his work on statistical mechanics. But Maxwell also studied the eye and was an early researcher of color vision. In their paper about Maxwell’s Spot, Misson and his coauthors write
Normally sighted individuals can perceive a short-lived darkened spot at the point of fixation while viewing a plain white surface through a dichroic filter transmitting a mixture of long- and short-wavelength lights. This entoptic phenomenon, known as Maxwell’s spot (MS), was first described in detail by James Clerk Maxwell in 1856.
One of my goals today is simply to explain what some unfamiliar words mean. First, what is a “dichroic filter”? A dichroic filter uses interference rather than absorption to filter light. Interference was discussed only briefly in Intermediate Physics for Medicine and Biology, when Russ Hobbie and I described optical coherence tomography. A dichroic filter is typically made up of many thin layers, each of which can reflect light. It creates colors in much the same way that a thin film of oil on the surface of water reflects some colors and not others: it depends on whether the light reflected or transmitted from the different layers interferes constructively or destructively. One advantage of a dichroic filter is that it can be very selective about what light is transmitted.
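To make the interference idea concrete, here is a minimal sketch (my own illustration, not from IPMB) for the simplest case I can think of: a single free-standing film in air at normal incidence, where light reflected from the front and back surfaces interferes and reflection maxima occur when 2nd = (m + ½)λ (the extra half wavelength accounts for the phase reversal at the front surface). A real dichroic filter stacks many such layers; the refractive index and thickness below are values I made up for illustration.

```python
# Which visible wavelengths does a single thin film reflect most strongly?
# Assumes normal incidence and a free-standing film surrounded by air.
n_film = 1.38      # assumed refractive index of the film
d = 500.0          # assumed film thickness, in nanometers

for m in range(6):
    wavelength = 2 * n_film * d / (m + 0.5)   # condition 2*n*d = (m + 1/2)*wavelength
    if 380 <= wavelength <= 750:              # keep only visible wavelengths
        print(f"m = {m}: reflection maximum near {wavelength:.0f} nm")
```

Change the thickness and the reflected colors shift, which is exactly the oil-on-water effect; stacking many layers of carefully chosen thicknesses sharpens the selection.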
Next is “entoptic.” An entoptic phenomenon is a visual effect that arises from a source or structure within the eye itself. Examples of vision phenomena that are NOT entoptic include hallucinations (arising in the brain) and mirages (arising from the refraction of light in the atmosphere). Entoptic phenomena include floaters, arising from the shadows of tiny objects in the vitreous humor of the eye, and phosphenes, arising from mechanical or electrical excitation of the retina. Whatever Maxwell’s Spot is, it happens because of something within the eye itself.
Misson et al. continue
The most widely accepted hypothesis proposed for the origin of the peripheral zones in [Maxwell’s Spot], and its documented perceptual variations, is absorption of blue light by macular pigments that result in a reduction of foveal photoreceptor illumination.
First, what is the “macula”? It is an oval-shaped region in the center of the retina, about 5 mm in diameter, where there is a high density of the cone cells responsible for color vision. At the center of the macula is a region about 1.5 mm in diameter, called the fovea, that has the highest density of cone cells.
To fully understand this phenomenon, you need to realize that humans have what’s called an inverted retina. That is, the light-sensing rods and cones are at the back of the retina, behind the retina’s neurons and capillaries, so light must pass through these other structures before reaching the light-sensing cells. You may ask, why does the retina have this seemingly backwards structure? I’ll tell you. I don’t know. But it does.
The macula also contains pigments that absorb light. Like the neurons and capillaries, these pigments (at least some of them) are located in front of the rods and cones. Pigments are molecules that absorb certain wavelengths of light. Two of the main macular pigments are called lutein and zeaxanthin. These are carotenoids, the pigments that give color to pumpkins, carrots, and daffodils. In general, carotenoids absorb blue and violet light. So, the reason a carrot is orange is that when white light shines on it the carotenoids absorb much of the blue light, and the light reflected by the carrot (which is the light you see) is mainly the red and orange light that was not absorbed. This is also why the macula itself looks yellow when viewed with an ophthalmoscope.
Figure 3 from Misson et al. (2003). The image was obtained using optical coherence tomography. Light comes in from above. The bright areas on the top left and right are the macular pigments. There is also a lot of pigment below the retina, but that does not play a key role in producing Maxwell’s Spot.
So, now we get to the cause of Maxwell’s Spot. Misson and his coauthors write
The results of this study support the theory that the principal mechanism of [Maxwell’s Spot] generation is pre-receptoral screening by macular pigment.
In other words, the pigments in front of the macula (through which the incoming light must pass to reach the rods and cones) filter out some of the blue light. As white light enters the eye, the macular pigments remove some of the blue, but the rest of the retina, which does not have these pigments, lets all the light through. So, a white screen appears white except at the center of the visual field, right where the eye is fixated with its highest spatial resolution and best color vision, and that spot is darker and reddish. That dark, reddish region is Maxwell’s Spot. Note that this phenomenon arises because of the distribution of pigments within the eye; it is entoptic. Because everyone’s pigment distribution and macular arrangement can vary, so can everyone’s perception of Maxwell’s Spot.
Maxwell’s Spot was not Maxwell’s only contribution to vision physiology. He was one of the founders of the theory of trichromatic color vision, which states that there are three types of cone cells in the retina (red, green, and blue) that are responsible for our perception of color. There is no telling how much more Maxwell might have contributed to both physics and physiology if he had not died of cancer at the tragically young age of 48.
In his book Air and Water, Mark Denny asks an oddball question: Why are there so few aerial plankton? In my mind, this question transforms into: Why are there no flying blue whales, sucking in mouthfuls of air and filtering out tiny organisms for food? Here is what Denny says:
A general characteristic of aquatic (especially marine) environments is the presence of planktonic life. A cubic meter of water taken from virtually anywhere in a stream, lake, or ocean is teeming with small, suspended organisms. In fact, the concentration of these plants and animals is such that many kinds of invertebrates, including clams, mussels, anemones, polychaete worms, and bryozoans, can reliably use planktonic particles as their sole source of food. In contrast, air is relatively devoid of suspended matter. A cubic meter of air might contain a few bacteria, a pollen grain or two, and very occasionally a flying insect or wind-borne seed. Air is so depauperate compared to the aquatic ‘soup’ that few terrestrial animals manage to make a living by straining their food from the surrounding fluid. Web-building spiders are the only example that comes to mind.
To understand why, my first inclination is to examine the balance between gravity, thermal motion, and concentration. You can use a Boltzmann factor, exp(−mgh/kBT), to determine how the concentration changes with height h, assuming particles of mass m are in contact with a fluid at temperature T (g is the acceleration of gravity and kB is Boltzmann’s constant). But there’s one problem: the Peclet number is often large, meaning advection dominates diffusion. In other words, air or water currents are more effective than diffusion for mixing (like in a blender). In many cases the problem is even worse: the flow is turbulent, which tends to mix materials much more rapidly than diffusion does. In some ways the analysis of turbulent mixing is similar to the case of diffusion: the flux of particles is proportional to the concentration gradient, but the constant of proportionality is not the diffusion constant but instead the turbulent diffusivity. Denny does the analysis in more detail than I can go into here (turbulent flow is always complex), but he states his conclusion clearly and simply
The sinking rates of particles in air are just too high to allow them to remain passively suspended and as a result, aerial plankton are sparse. In water, slow sinking speeds insure that many particles are suspended, and the plankton is plentiful. The abundance of aquatic suspension feeders and the scarcity of terrestrial ones, can therefore be thought of as a direct consequence of the differences in density and viscosity between air and water.
One other factor plays a role here: buoyancy. If small organisms have a density approximately equal to that of water, then tiny aquatic animals are almost neutrally buoyant, so they’re easy to suspend. In air, however, buoyancy plays almost no role, so these little animals “seem” much denser.
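Just to put numbers on the Boltzmann-factor argument above, here is a short sketch that computes the scale height kBT/(m_eff g), the height over which the equilibrium concentration falls by a factor of e, using a buoyancy-corrected effective mass. The organism size and density are illustrative assumptions of mine, not Denny’s numbers.

```python
import math

kB = 1.38e-23        # Boltzmann's constant (J/K)
T = 293.0            # temperature (K)
g = 9.8              # acceleration of gravity (m/s^2)

radius = 0.5e-6                        # assume a 1-micrometer-diameter organism
volume = (4 / 3) * math.pi * radius**3
rho_organism = 1050.0                  # assumed density of the organism (kg/m^3)

for medium, rho_fluid in [("air", 1.2), ("water", 1000.0)]:
    m_eff = volume * (rho_organism - rho_fluid)   # buoyancy-corrected mass
    scale_height = kB * T / (m_eff * g)
    print(f"{medium}: scale height = {scale_height * 1e6:.1f} micrometers")
```

Even for such a tiny organism both scale heights come out microscopic, which is a reminder that advection and turbulence, not thermal agitation, do the real work of keeping plankton suspended; the buoyancy correction is also why the value in water is so much larger than in air.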
It looks like I should abandon my search for a giant flying suspension feeder, resembling a blimp with a big mouth to suck in large amounts of air that it filters to extract food. Too bad. I was looking forward to befriending one, if the physics had only allowed it.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Stokes’ law. When a small sphere, of radius a, moves with speed U through a fluid having a viscosity η, the drag force D is 6πηaU. This result is well known, but does it apply to a gas bubble moving in water?
I was reading through Life in Moving Fluids, by Steven Vogel, when I came across the answer. Vogel considers a fluid sphere moving in a fluid medium. His Eq. 15.8, for the drag on a fluid sphere, can be written D = 6πηext aU [1 + (2/3)(ηext/ηint)] / [1 + ηext/ηint].
Here, ηext is the viscosity of the external fluid (the medium) and ηint is the viscosity of the internal fluid (the sphere).
Suppose you have a sphere of water in air (say, a raindrop falling from the sky toward earth). Then ηint = ηwater = 10⁻³ Pa s and ηext = ηair = 2 × 10⁻⁵ Pa s. Thus ηext/ηint = 0.02. For our purposes, this is nearly zero, and the drag force reduces to Stokes’ law, D = 6πηext aU.
Now, consider a sphere of air in water (say, a bubble rising toward the surface of a lake). Then ηint = ηair = 2 × 10⁻⁵ Pa s and ηext = ηwater = 10⁻³ Pa s. Thus ηext/ηint = 50. For our purposes, this is nearly infinity, and the drag force becomes D = 4πηext aU. Yikes! Stokes’ law does not hold for a bubble. Who knew? (Vogel knew.)
Apparently when the sphere is a fluid, internal motion occurs, as shown in Vogel’s picture below.
Note that at the edge of the sphere, the internal and external flows are in the same direction. This changes the boundary condition at the surface. A rigid sphere would obey the no-slip condition, but a fluid sphere does not because the internal fluid is moving.
Although Vogel doesn’t address this, I wonder what the drag force would be on a sphere of water in water. Does this even make sense? Perhaps we would be better off considering a droplet of some liquid that has the same viscosity as water moving through water (I can imagine this might happen in a microfluidics apparatus). In that case the drag force becomes D = 5πηext aU. I must confess, I’m not sure if the derivation of the general equation is valid in this case, but I don’t see why it shouldn’t be.
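Here is a quick sketch that evaluates the fluid-sphere drag formula quoted above (as I’ve written it in terms of the viscosity ratio) for the three cases: a raindrop in air, an air bubble in water, and a water drop in water. The drag is reported as a multiple of πηext aU.

```python
def drag_prefactor(eta_ext, eta_int):
    """Return c such that drag = c * pi * eta_ext * a * U for a fluid sphere."""
    r = eta_ext / eta_int
    return 6 * (1 + (2 / 3) * r) / (1 + r)

eta_air, eta_water = 2e-5, 1e-3        # viscosities in Pa s

cases = [("water drop in air (raindrop)", eta_air,   eta_water),
         ("air bubble in water",          eta_water, eta_air),
         ("water drop in water",          eta_water, eta_water)]

for name, eta_ext, eta_int in cases:
    c = drag_prefactor(eta_ext, eta_int)
    print(f"{name}: drag = {c:.2f} pi eta_ext a U")
```

The three prefactors come out very close to 6, 4, and 5, matching the limits discussed above.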
There are all kinds of little jewels inside Vogel’s book. I sure wish he were still around.
In the first paragraph, we wrote “Fig. 1 shows the fiber geometry throughout a sheet of tissue and the direction of the applied electric field. Can you look at Fig. 1 and predict where the tissue will be depolarized and where it will be hyperpolarized?” Figure 1 is shown below.
Warning: Debbie and I cooked up a way to avoid polarization at the boundaries. Ordinarily you would expect a big hyperpolarization on the left and a depolarization on the right, both restricted to only a few length constants from the edge. Ignore this effect. Just consider the polarization caused by the fiber curvature.
At this point, dear reader, I ask you to stop reading and guess the distribution of polarization. Take a piece of paper, sketch the fiber distribution in Fig. 1, and then mark which areas of the tissue are depolarized and which are hyperpolarized. If you have some colored pencils handy, just color the depolarized region red and the hyperpolarized region blue. Go ahead. I’ll wait...
Okay, let’s see how you did. Below is Fig. 6 of our paper, which gives the result. Depolarization is in red, hyperpolarization is in blue.
Did you get anything like this?
To help explain the polarization distribution, I’ve created two new figures not in the paper. In both, the short gray line segments show the fiber geometry. The purple arrows indicate the direction of the applied electric field. Green shows a component of the intracellular current density. The red D’s and blue H’s indicate depolarization and hyperpolarization. The first of the two figures is shown below, and illustrates what I’ll call “Mechanism 1.”
Mechanism 1
In the region where the fibers point along the applied electric field (like along the left edge of the tissue), the current is divided approximately equally between the intracellular and extracellular spaces because they have similar conductivities in that direction. So, the green intracellular current density arrows are relatively large there. In the region where the fibers point perpendicular to the applied electric field (like in the center), the intracellular current density is less than the extracellular current density because the intracellular conductivity is smaller than the extracellular conductivity in that direction, so the green intracellular current density arrows are relatively small. Somewhere between these two regions the current has to leave the intracellular space and pass out through the membrane, entering the extracellular space. This outward membrane current depolarizes the tissue (red D’s). If the fiber direction then changes back to being parallel to the electric field (like along the right edge of the tissue), some extracellular current must recross the membrane and reenter the cell, which hyperpolarizes the tissue (blue H’s). This behavior is shown in the upper small plot to the left of Fig. 6a.
The next new figure illustrates “Mechanism 2.” Consider what happens where the fibers are oriented at an angle of about 45 degrees to the electric field. In that case, even though the electric field may be horizontal, the anisotropy rotates the intracellular current density to be more nearly parallel to the fibers (the direction with the highest conductivity). In other words, the electric field is horizontal, but the intracellular current density rotates counterclockwise. The extracellular current density also rotates, but not as much, because the intracellular space is more anisotropic than the extracellular space. Thus, you pick up a component of the intracellular current perpendicular to the applied electric field (shown by the green arrows). If the fibers change direction so they are either parallel or perpendicular to the field, there is no rotation of the current density there, so there is no component of the intracellular current perpendicular to the electric field. At the head of one of those green arrows, the intracellular current density vector ends, so the intracellular current must cross the membrane and enter the extracellular space, depolarizing the tissue. At the tail of a green arrow, the intracellular current density vector begins, so the extracellular current must cross the membrane and enter the intracellular space, hyperpolarizing the tissue. This results in a somewhat complicated pattern of polarization (H’s and D’s), which resembles the pattern shown in the lower small plot to the left of Fig. 6a.
Mechanism 2
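If you want to check the rotation argument of Mechanism 2 with numbers, here is a small sketch. It builds intracellular and extracellular conductivity tensors for fibers oriented at 45 degrees, applies a horizontal electric field, and reports how far each current density is rotated toward the fiber axis. The conductivity values are typical bidomain numbers I picked for illustration, not the values from our paper.

```python
import numpy as np

# Conductivities along and across the fibers (S/m); representative values only.
sigma_i = np.diag([0.2, 0.02])   # intracellular space: strongly anisotropic
sigma_e = np.diag([0.2, 0.08])   # extracellular space: less anisotropic

theta = np.radians(45)           # fiber direction, measured from the applied field
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotate the conductivity tensors from fiber coordinates into lab coordinates.
Si = R @ sigma_i @ R.T
Se = R @ sigma_e @ R.T

E = np.array([1.0, 0.0])         # applied electric field, along x (arbitrary units)
Ji = Si @ E                      # intracellular current density
Je = Se @ E                      # extracellular current density

angle = lambda J: np.degrees(np.arctan2(J[1], J[0]))
print(f"intracellular current rotated {angle(Ji):.0f} degrees toward the fibers")
print(f"extracellular current rotated {angle(Je):.0f} degrees toward the fibers")
```

With these numbers the intracellular current swings about 39 degrees toward the fibers while the extracellular current swings only about 23 degrees; that mismatch is the perpendicular intracellular component (the green arrows) that must ultimately cross the membrane.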
Both of these mechanisms operate simultaneously, so the net polarization is the sum of those two small plots. This results in the Yin-Yang pattern of depolarization and hyperpolarization of Fig. 6a. (Stare at Fig. 6a until you convince yourself this is correct.) Below it, in Fig. 6b, is the result you get if you just mindlessly solve the bidomain equations numerically. (Actually, Debbie solved them, and she did nothing mindlessly, but you know what I mean.)
The two are qualitatively the same, although there are quantitative differences.
So, how many of you guessed the Yin-Yang pattern? To tell you the truth, I’m not sure I did when Debbie and I first started this analysis. It’s difficult. But at least now I have a way to understand this pretty but nonintuitive pattern. I’ve found that being able to do these hand-waving types of explanations is useful. It lets you understand what is going on, rather than just putting a calculation into a black-box computer program and getting out an answer with no insight. Remember: the purpose of computing is insight, not numbers!
Finally, I really enjoyed starting a research paper off with a puzzle like that in Fig. 1 and ending it with the solution like in Fig. 6. I think you should consider using this trick in your next article.
Recently I wrote a review of the bidomain model of cardiac tissue. Russ Hobbie and I discuss the bidomain model in Section 7.9 of Intermediate Physics for Medicine and Biology. It’s a mathematical description of heart muscle that keeps track of the voltages and currents both inside and outside the myocardial cells. What I wrote is not really an academic review article, it’s not a history, and it’s not a memoir. To tell you the truth, I’m not sure what it is. I originally thought I’d try and publish it, but I’m not sure who would accept such an unusual article. So, I decided it would be best to distribute it on my blog. There is little I can do for my dear readers, but I can give them this review.
The format is to describe the bidomain model by considering twelve publications. Below is a list of the articles I chose. Each article is meant to feature one researcher, whose name is listed in bold.
Tung L (1978) A bi-domain model for describing ischemic myocardial dc potentials. PhD Dissertation, Massachusetts Institute of Technology.
Plonsey R, Barr RC (1984) Current flow patterns in two-dimensional anisotropic bisyncytia with normal and extreme conductivities. Biophys J 45:557–571.
Sepulveda NG, Roth BJ, Wikswo JP Jr (1989) Current injection into a two-dimensional anisotropic bidomain. Biophys J 55:987–999.
Henriquez CS, Plonsey R (1990b) Simulation of propagation along a cylindrical bundle of cardiac tissue. II. Results of the simulation. IEEE Trans Biomed Eng 37:861–875.
Neu JC, Krassowska W (1993) Homogenization of syncytial tissue. Crit Rev Biomed Eng 21:137–199.
Wikswo JP Jr, Lin SF, Abbas RA (1995) Virtual electrodes in cardiac tissue: A common mechanism for anodal and cathodal stimulation. Biophys J 69:2195–2210.
Trayanova N, Skouibine K, Aguel F (1998) The role of cardiac tissue structure in defibrillation. Chaos 8:221–233.
Knisley SB, Trayanova N, Aguel F (1999) Roles of electric field and fiber structure in cardiac electric stimulation. Biophys J 77:1404–1417.
Efimov IR, Cheng Y, van Wagoner DR, Mazgalev T, Tchou PJ (1998) Virtual electrode-induced phase singularity: A basic mechanism of defibrillation failure. Circ Res 82:918–925.
Entcheva E, Eason J, Efimov IR, Cheng Y, Malkin R, Claydon F (1998) Virtual electrode effects in transvenous defibrillation-modulation by structure and interface: Evidence from bidomain simulations and optical mapping. J Cardiovasc Electrophysiol 9:949–961.
Rodriguez B, Li L, Eason JC, Efimov IR, Trayanova NA (2005) Differences between left and right ventricular chamber geometry affect cardiac vulnerability to electric shocks. Circ Res 97:168–175.
Bishop MJ, Boyle PM, Plank G, Welsh DG, Vigmond EJ (2010) Modeling the role of the coronary vasculature during external field stimulation. IEEE Trans Biomed Eng 57:2335–2345.
My biggest worry is that I’ve left too much out. For instance, I could easily have featured other researchers, such as Rick Gray, Jamey Eason, Roger Barr, Marc Lin, Felipe Aguel, and David Geselowitz. Also, I suspect there are many researchers who, if they read this review, will be hurt because they are completely ignored. All I can say is, I’m sorry. I tried to relate the story as best I can remember it, but I may have remembered some things wrong.
You can download my review here. I hope you enjoy reading the article as much as I enjoyed writing it. It was an honor to work on this topic with so many outstanding scientists. As Randy Travis sings, these scientists are my heroes and friends.
First, the physics. Lutetium (pronounced loo-tee-shee-uhm) is element 71 in the periodic table. Below are the energy level and decay data. The primary mechanism of decay is emission of a beta particle (an electron), transmuting 177Lu into a stable isotope of hafnium, 177Hf. The maximum energy of this electron is about 500 keV. Two other possibilities (each happening in about one out of every ten decays) are beta decay of 177Lu to one of two excited levels of 177Hf followed by gamma decay. The two most common gamma photons have energies of 113 and 208 keV. Lutetium-177 produces few internal conversion or Auger electrons. The average energy of all the emitted electrons is about 150 keV, corresponding to a range of about 0.25 mm in tissue. The half-life of 177Lu is roughly a week.
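To put the half-life in perspective, here is a minimal sketch of the radioactive decay, assuming a half-life of 6.6 days (my number for “roughly a week”).

```python
import math

half_life = 6.6                        # days, assumed half-life of Lu-177
decay_constant = math.log(2) / half_life

for t in [1, 7, 14, 28]:               # days after the dose is administered
    fraction = math.exp(-decay_constant * t)
    print(f"day {t:2d}: {100 * fraction:.0f}% of the activity remains")
```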
Next, the biology and medicine. Lutetium can be used for imaging (using the gamma rays) or therapy (using the electrons). While the dose arising from all the electrons does not make this isotope ideal for pure imaging studies (technetium-99m might be a better choice), the gammas do provide a way to monitor 177Lu during therapy (in this way it is similar to iodine-131 used in thyroid cancer therapy and imaging). Such a combined function allows the physician to do “theranostics” (a combination of therapy and diagnostics), a term I don’t care for but it is what it is. 177Lu can be bound to other molecules to improve its ability to target a tumor. For instance, it is sometimes attached to a molecule that binds specifically to prostate specific membrane antigen. The PSMA molecule is over-expressed in a tumor, so this allows the 177Lu to target prostate tumor cells. One advantage of using 177Lu in this way—rather than, say, using radiotherapy with x-rays directed at the prostate—is that the 177Lu will seek out and irradiate any metastasizing cancer cells as well as the main tumor. Clinical trials show that it can prolong the life of those suffering from prostate cancer.
Last November, right after the Presidential election, I wrote a blog post about trusted information on public health. In that post, I featured the science communication efforts by Katelyn Jetelina (Your Local Epidemiologist) and Andrea Love (Immunologic). I didn’t realize at the time just how much I would come to rely on these two science advocates for trustworthy information, especially related to vaccines.
Today, I recommend several more science communicators. The first is Skeptical Science. That website focuses primarily on climate science. The current Republican administration has denied and mocked the very idea of climate change, describing it as a “hoax.” Skeptical Science has a simple mission: “debunk climate misinformation.” This is extraordinarily important, as climate change may be the most important issue of our time. Check out their website www.skepticalscience.com, and follow them on Facebook. I just signed up for their Cranky Uncle app on my phone. I learned about Skeptical Science from my Climate Reality mentor, John Forslin. For those more interested in doing rather than reading and listening, I recommend The Climate Reality Project (Al Gore’s group). Take their training. I did. Oh, and don’t forget Katharine Hayhoe’s website https://www.katharinehayhoe.com.
Want to know more about science funding, especially to the National Institutes of Health? Check out Unbreaking.
They’re documenting all the bad stuff happening to science these days.
I learned about Unbreaking from Liz Neeley's weekly newsletter Meeting the Moment. Liz is married to Ed Yong, who I have written about before.
My next recommendation is Angela Rasmussen, a virologist who publishes at the site Rasmussen Retorts on Substack. What I like about Rasmussen is that she tells it like it is, and doesn’t worry if her salty language offends anyone. I must confess, as I experience more and more of what I call the Republican War on Science, I get angrier and angrier. Rasmussen’s retorts reflect my rage. She writes “Oh, also, I swear sometimes. It’s not the most professional behavior but I believe in calling things what they are and sometimes nothing besides ‘asshole’ is accurate.” Give ’em hell, Angie! Here are the concluding two paragraphs of her August 5 post:
There’s always a ton of talk about how public health and science have lost trust. A lot of people like to tell me that it’s our fault. Scientists didn’t show enough humility or acknowledge uncertainty during the COVID pandemic. We were wrong about masks or vaccines or variants or whatever. We didn’t communicate clearly. We overclaimed and underdelivered. I reject these arguments.
The public didn’t lose trust in science because experts are wrong sometimes, and are imperfect human beings who make mistakes. They lost trust because people like [Robert F. Kennedy, Jr.] constantly lied about science. He is constantly lying still. He’s eliminating experts so that he and his functionaries on ACIP [The CDC’s Advisory Committee on Immunization Practices] will be able to continue lying without any inconvenient pushback. We need to recognize this and push back hard.
What am I doing to push back hard? Regular readers of this blog may recall my post from this April in which I imagined what Bob Park’s newsletter What’s New would look like today. Well, I’ve made that a weekly thing. You can find them published on my Medium account (https://medium.com/@bradroth). I’ll link a few of the updates below.
You will also find these IPMB blog posts republished there, plus a few other rants. When I started writing my updated version of What’s New, I (ha, ha)… I thought (ha, ha, ha!)... I thought that I might run out of things to talk about. That hasn’t been a problem. But writing a weekly newsletter in addition to my weekly IPMB blog posts takes time, and it makes me appreciate all the more the heroic efforts of Katelyn, Andrea, Liz, and Angela. I hope they all know how much we appreciate their effort.
Is there anything else on the horizon? The book Science Under Siege, by Michael Mann and Peter Hotez, is out next month. As soon as I can get my hands on a copy and read it, I will post a review on this blog. In the meantime, I’ll keep my powder dry, waiting until RFK Jr starts in on microwave health effects (Y’all know it’s coming). Now that’s physics applied to medicine and biology, right up my alley!
“Don’t Choose Extinction.” This is one of John Forslin’s favorite videos. Enjoy!
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear no-threshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
Wow! This is not a dry, technical discussion. It is IPMB meets 60 Minutes. This is a hard-hitting investigation into scientific error and even scientific fraud. It’s amazing, fascinating, and staggering.
John Cardarelli, the president of the Health Physics Society when the videos were filmed, acts as the host, introducing and concluding each of the 22 episodes. The heart of the video series is Barbara Hamrick, past president of the Health Physics Society, interviewing Edward Calabrese, a leading toxicologist and a champion of the hormesis model (low doses of radiation are beneficial).
Calabrese claims that our use of the linear no-threshold model is based on “severe scientific, ethical, and policy problems.” He reviews the history of the LNT model, starting with the work of the Nobel Prize winner Hermann Muller on the genetics of fruit flies. He reviews the evidence to support his contention that Muller and other scientists were biased in favor of the LNT model, and sometimes carried that bias to extreme lengths. At first I said to myself “this is interesting, but it’s all ancient history.” But as the video series progressed, it approached closer and closer to the present, and I began to appreciate how these early studies impact our current safety and regulatory standards.
I watched every minute of this gripping tale. (OK, I admit I watched it at a 2x playback speed, and I skipped Cardarelli’s introductions and conclusions after the first couple videos; there is only so much time in a day.) Anyone interested in the linear no-threshold model needs to watch this. I have to confess, I can offer no independent confirmation of Calabrese’s claims. I’m not a toxicologist, and my closest approach to radiobiology is being a coauthor on IPMB. Still, if Calabrese’s claims are even half true then the LNT assumption is based on weak data, to put it mildly.
Watch these videos. Maybe you’ll agree with them and maybe not, but I bet you’ll enjoy them. You may be surprised and even astounded by them.
I am an emeritus professor of physics at Oakland University, and coauthor of the textbook Intermediate Physics for Medicine and Biology. The purpose of this blog is specifically to support and promote my textbook, and in general to illustrate applications of physics to medicine and biology.