One of the homework problems in Intermediate Physics for Medicine and Biology (Problem 31 in Chapter 16) introduces a toy model for the Bragg peak. I won’t review the entire problem, but in it students derive an equation for the stopping power S (the energy per unit distance deposited in tissue by a high-energy ion) as a function of the depth x below the tissue surface,

$$S(x) = \frac{S_0}{\sqrt{1 - x/R}}\, ,$$

where S0 is the ion’s stopping power at the surface (x = 0) and R is the ion’s range. At a glance you can see how the Bragg peak arises: the denominator goes to zero at x = R, so the stopping power goes to infinity. That, in fact, is why proton therapy for cancer is becoming so popular. Energy is deposited primarily at one spot well below the tissue surface where a tumor is located, with only a small dose to upstream healthy tissue.
One topic that comes up when discussing the Bragg peak is straggling. The idea is that the range is not a single parameter. Instead, protons have a distribution of ranges. When preparing the 6th edition of Intermediate Physics for Medicine and Biology, I thought I would try to develop a toy model in a new homework problem to illustrate straggling.
Section 16.10
Problem 31 ½. Consider a beam of protons incident on tissue. Assume the stopping power S for a single proton as a function of depth x below the tissue surface is

$$S(x) = \frac{S_0}{\sqrt{1 - x/R}} \quad \text{for } x < R, \qquad S(x) = 0 \quad \text{for } x > R,$$

where R is the proton’s range. Furthermore, assume that instead of all the protons having the same range R, the protons have a uniform distribution of ranges between R − δ/2 and R + δ/2, and that no protons have a range outside this interval. Calculate the average stopping power by integrating S(x) over this distribution of ranges.
This calculation is a little more challenging than I had expected. We have to consider three possibilities for x.
x < R − δ/2
In this case all of the protons contribute, so the average stopping power is

$$\langle S \rangle = \frac{S_0}{\delta}\int_{R-\delta/2}^{R+\delta/2} \left(1 - \frac{x}{R'}\right)^{-1/2} dR' .$$

We need to solve the integral

$$\int \left(1 - \frac{x}{R'}\right)^{-1/2} dR' .$$

First, let

$$u = \sqrt{1 - \frac{x}{R'}}\, .$$

With a little analysis, you can show that

$$R' = \frac{x}{1-u^2} \qquad \text{and} \qquad dR' = \frac{2xu}{(1-u^2)^2}\, du .$$

So the integral becomes

$$2x \int \frac{du}{(1-u^2)^2}\, .$$

This new integral I can look up in my integral table,

$$\int \frac{du}{(1-u^2)^2} = \frac{u}{2(1-u^2)} + \frac{1}{4}\ln\!\frac{1+u}{1-u}\, .$$

Finally, after a bit of algebra (substituting back for u), I get

$$\langle S \rangle = \frac{S_0}{\delta}\left[\, R'\sqrt{1 - \frac{x}{R'}} + \frac{x}{2}\ln\!\frac{1+\sqrt{1-x/R'}}{1-\sqrt{1-x/R'}}\, \right]_{R' = R-\delta/2}^{R' = R+\delta/2} .$$
Well, that was a lot of work and the result is not very pretty. And we are not even done yet! We still have the other two cases.
R − δ/2 < x < R + δ/2
In this case, if a proton’s range is less than x it makes no contribution to the stopping power, but if its range is greater than x it does. So we must solve the integral

$$\langle S \rangle = \frac{S_0}{\delta}\int_{x}^{R+\delta/2} \left(1 - \frac{x}{R'}\right)^{-1/2} dR' .$$

I’m not going to go through all those calculations again (I’ll leave it to you, dear reader, to check). The antiderivative is the same as before, and the lower limit R′ = x contributes nothing (both terms vanish at u = 0). The result is

$$\langle S \rangle = \frac{S_0}{\delta}\left[\left(R+\frac{\delta}{2}\right)\sqrt{1 - \frac{x}{R+\delta/2}} + \frac{x}{2}\ln\!\frac{1+\sqrt{1-x/(R+\delta/2)}}{1-\sqrt{1-x/(R+\delta/2)}}\right] .$$
x > R + δ/2
This is the easy case. None of the protons make it to x, so the stopping power is zero.
Well, I can’t look at these functions and tell what the plot will look like. All I can do is ask Mr. Mathematica to make the plot (he’s much smarter than I am). Here’s what he said:
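(If you don’t have Mathematica handy, here is a minimal Python sketch based on the expressions reconstructed above; it isn’t what I actually used. The values S0 = R = 1 and the three values of δ are arbitrary choices for illustration.)

```python
import numpy as np
import matplotlib.pyplot as plt

S0, R = 1.0, 1.0   # surface stopping power and mean range (arbitrary units)

def F(x, Rp):
    # antiderivative of S0*(1 - x/R')^(-1/2) with respect to R', evaluated at R' = Rp
    u = np.sqrt(1.0 - x / Rp)
    return S0 * (Rp * u + 0.5 * x * np.log((1.0 + u) / (1.0 - u)))

def S_avg(x, delta):
    # average stopping power for ranges uniform on [R - delta/2, R + delta/2]
    lo, hi = R - 0.5 * delta, R + 0.5 * delta
    if x >= hi:                              # no proton reaches this depth
        return 0.0
    if x <= lo:                              # every proton contributes
        return (F(x, hi) - F(x, lo)) / delta
    return F(x, hi) / delta                  # only ranges R' > x contribute; F(x, x) = 0

x = np.linspace(0.01, 1.2, 1000)
pure = np.where(x < R, S0 / np.sqrt(np.maximum(1.0 - x / R, 1e-12)), 0.0)
plt.plot(x, pure, "r", label="pure (single range)")
for d in (0.1, 0.2, 0.4):
    plt.plot(x, [S_avg(xi, d) for xi in x], label=f"delta = {d}")
plt.ylim(0, 5)
plt.xlabel("depth x"); plt.ylabel("stopping power")
plt.legend(); plt.show()
```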
The peak of the “pure” curve (a single value for the range; the red one) goes to infinity at x = R and is zero for any x greater than R. As you begin averaging, you start getting some stopping power past the original range, out to R + δ/2. To me the most interesting thing is that at x = R − δ/2 the stopping power is larger than in the pure case. The curves all overlap for x > R + δ/2 (of course, there they are all zero), and for fairly small values of x (in these cases, about x < 0.5) the curves are all nearly equal (indistinguishable in the plot). Even for a small value of δ (in this case, a spread of ranges equal to one tenth of the pure range), the peak of the stopping power curve is suppressed.
The straggling curves you see in most textbooks are much smoother, but I suspect that’s because they assume a smoother distribution of ranges, such as a normal distribution. In this example, I wanted something simple enough to get an analytical solution, so I took a uniform distribution of width δ.
Will this new homework problem make it into the 6th edition? I’m not sure. It’s definitely a candidate. However, the value of toy models is that they illustrate the physical phenomenon and describe it in simple equations. I found the equations in this example to be complicated and not illuminating. There is still some value, but if you are not gaining a lot of insight from your toy model, it may not be worth doing. I’ll leave the decision of including it in the 6th edition to my new coauthor, Gene Surdutovich. After all, he’s the expert in the interaction of ions with tissue.
My Treeing Walker Coonhound Harvest is getting older and having some trouble with arthritis. The vet says she’s showing signs of hip dysplasia, but it’s not too severe yet. I want to nip this problem in the bud, so we have started a treatment regimen that includes oral supplements, pain medication, moderate exercise, weight control, and massage. We’re also trying photobiomodulation, sometimes called low-level laser therapy or cold laser therapy.
We bought a device called Lumasoothe 2 Light Therapy for Pets (lumasoothe.com). I use it in its IR Deep Treatment Mode, which shines three wavelengths of light—infrared (940 nm), red (650 nm), and green (520 nm)—from an array of light emitting diodes. I doubt the green light can penetrate to the hip, but red and especially infrared are not attenuated as much. In IPMB, Russ and I talk about how red light is highly scattered, and you can see that by noticing how the red spreads out to the sides of the applicator (kind of like when you hold a flashlight up to your mouth and your cheeks glow red). The light is delivered in pulses that come at a frequency of about 2.5 Hz (I used the metronome that sits atop my piano to estimate the frequency). I can’t imagine any advantage to pulsing the light, and suspect it’s done simply for the visual effect. I apply the light to Harvest’s hips, for about 15 minutes on each side.
Mechanisms and Applications of the Anti-Inflammatory Effects of Photobiomodulation.
Photobiomodulation (PBM) was discovered almost 50 years ago by Endre Mester in Hungary. For most of this time PBM was known as “low-level laser therapy” as ruby laser (694 nm) and HeNe lasers (633 nm) were the first devices used. Recently a consensus decision was taken to use the terminology “PBM” since the term “low-level” was very subjective, and it is now known that actual lasers are not required, as non-coherent light-emitting diodes (LEDs) work equally well. For much of this time the mechanism of action of PBM was unclear, but in recent years much progress has been made in elucidating chromophores and signaling pathways.
Any time you are talking about a therapy, the dose is crucial. According to a study by medcovet, the output of the Lumasoothe is 0.225 J/cm² per minute (it’s advertised at 6.4). I don’t know which of these values to use, so I’ll just pick something in the middle: 1 J/cm² per minute. Dividing by 60 seconds, this converts to about 0.017 W/cm². The intensity of sunlight that reaches the earth’s surface is about 0.1 W/cm², so the device puts out less than the intensity of sunlight (at noon, at the equator, with no clouds). The advertised intensity would be similar to that of sunlight. Of course, sunlight includes a wide band of frequencies, while the Lumasoothe emits just three.
There seems to be an optimum dose, as is often found in toxicology. Hamblin explains
The “biphasic dose response” describes a situation in which there is an optimum value of the “dose” of PBM most often defined by the energy density (J/cm²). It has been consistently found that when the dose of PBM is increased a maximum response is reached at some value, and if the dose is increased beyond that maximal value, the response diminishes, disappears and it is even possible that negative or inhibitory effects are produced at very high fluences.
Joules per square centimeter per minute may not be the best unit to assess heating effects of the Lumasoothe. Let’s assume that 0.017 W/cm² of light penetrates into the tissue about one centimeter (a guess). This means that the device dumps 0.017 watts into a cubic centimeter of tissue. That volume of tissue has a density of about that of water: 1 g/cm³. So the specific absorption rate should be about 0.017 W/g or 17 W/kg. That’s not negligible. A person’s metabolism generates only about 1.5 W/kg. Diathermy to heat tissues uses about 20 W/kg. I don’t think we can rule out some heating using this device. (However, I shined it on my forearm for about two minutes and didn’t feel any obvious warming.)
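Spelling out the arithmetic (using my middle-of-the-road dose of 1 J/cm² per minute and my guessed 1 cm penetration depth):

$$\frac{1\ \mathrm{J/cm^2}}{60\ \mathrm{s}} \approx 0.017\ \mathrm{W/cm^2}, \qquad \frac{0.017\ \mathrm{W/cm^2}}{(1\ \mathrm{cm})(1\ \mathrm{g/cm^3})} = 0.017\ \mathrm{W/g} = 17\ \mathrm{W/kg}.$$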
Hamblin believes there are non-thermal mechanisms involved.
Cytochrome c oxidase (CCO) is unit IV in the mitochondrial electron transport chain. It transfers one electron (from each of four cytochrome c molecules), to a single oxygen molecule, producing two molecules of water. At the same time the four protons required are translocated across the mitochondrial membrane, producing a proton gradient that the ATP synthase enzyme needs to synthesize ATP. CCO has two heme centers (a and a3) and two copper centers (CuA and CuB). Each of these metal centers can exist in an oxidized or a reduced state, and these have different absorption spectra, meaning CCO can absorb light well into the NIR [near infrared] region (up to 950 nm). Tiina Karu from Russia was the first to suggest that the action spectrum of PBM effects matched the absorption spectrum of CCO, and this observation was confirmed by Wong-Riley et al in Wisconsin. The assumption that CCO is a main target of PBM also explains the wide use of red/NIR wavelengths as these longer wavelengths have much better tissue penetration than say blue or green light which are better absorbed by hemoglobin. The most popular theory to explain exactly why photon absorption by CCO could led [sic] to increase of the enzyme activity, increased oxygen consumption, and increased ATP production is based on photodissociation of inhibitory nitric oxide (NO). Since NO is non-covalently bound to the heme and Cu centers and competitively blocks oxygen at a ratio of 1:10, a relatively low energy photon can kick out the NO and allow a lot of respiration to take place.
That’s a considerable amount of biochemistry, which I’m not an expert in. I’ll assume Hamblin knows a lot more about it than I do. I worry, however, when he writes “the assumption that…” and “the most popular theory…” It makes me wonder how well this mechanism is established. He goes on to suggest other mechanisms, such as the production of reactive oxygen species and a reduction in inflammation.
Hamblin concludes
The clinical applications of PBM have been increasing apace in recent years. The recent adoption of inexpensive large area LED arrays, that have replaced costly, small area laser beams with a risk of eye damage, has accelerated this increase in popularity. Advances in understanding of PBM mechanisms of action at a molecular and cellular level, have provided a scientific rationale for its use for multiple diseases. Many patients have become disillusioned with traditional pharmaceutical approaches to a range of chronic conditions, with their accompanying distressing side-effects and have turned to complementary and alternative medicine for more natural remedies. PBM has an almost complete lack of reported adverse effects, provided the parameters are understood at least at a basic level. The remarkable range of medical benefits provided by PBM, has led some to suggest that it may be “too good to be true”. However one of the most general benefits of PBM that has recently emerged, is its pronounced anti-inflammatory effects. While the exact cellular signaling pathways responsible for this anti-inflammatory action are not yet completely understood, it is becoming clear that both local and systemic mechanisms are operating. The local reduction of edema, and reductions in markers of oxidative stress and pro-inflammatory cytokines are well established. However there also appears to be a systemic effect whereby light delivered to the body, can positively benefit distant tissues and organs.
I have to admit that Hamblin makes a strong case. But there is another side to the question. Hamblin himself uses that worrisome phrase “complementary and alternative medicine.” I have to wonder about thermal effects. We know that temperature can influence healing (that’s why people often use a heating pad). If photobiomodulation causes even a little heating, this might explain some of its effect.
I’ve talked a lot in this blog about websites or groups that debunk alternative medicine. Stephen Barrett of Quackwatch looked at low-level laser therapy in 2018 and concluded that “At this writing, the bottom line appears to be that LLLT devices may bring about temporary relief of some types of pain, but there’s no reason to believe that they will influence the course of any ailment or are more effective than standard forms of heat delivery.”
Mark Crislip, writing for Science-Based Medicine in 2012, concluded “I suspect that time and careful studies on the efficacy of low level laser will have the same results as the last decade of acupuncture studies: there is no there there.” Jonathan Jarry wrote about “The Hype Around Photobiomodulation,” saying
“That is not to say that all of PBM’s applications are hogwash or that future research will never produce more effective applications of it. But given biomedical research’s modest success rate these days and the ease of coming up with a molecular pathway that fits our wishes, we’re going to need more than mice studies and a plausible mechanism of action to see photobiomodulation in a more favourable light. A healthy skepticism is needed here, especially when it comes to claims of red light improving dementia.”
So, what’s the bottom line? In my book Are Electromagnetic Fields Making Me Ill?, I divided different medical devices, procedures, and hypotheses into three categories: Firmly Established, Questionable, and Improbable (basically: yes, maybe, and no). I would put photobiomodulation therapy in the maybe category, along with transcutaneous electrical nerve stimulation, bone healing using electromagnetic fields, and transcranial direct current stimulation. As a scientist, I’m skeptical about photobiomodulation therapy. But as a dog lover, I’m using it every day to try and help Harvest’s hip dysplasia. This probably says more about how much I love Harvest than about my confidence in the technique. My advice is to not get your hopes up, and to follow your vet’s advice about traditional and better-established treatments. The good news: I don’t see much potential for side effects. Is it worth the money to purchase the device? My wife and I were willing to take a moderately expensive bet on a low probability outcome for Harvest’s sake, because she’s the goodest gurl.
Mechanisms & History of Photobiomodulation with Dr. Michael Hamblin
I’ve written about FLASH radiotherapy previously in this blog (here and here). FLASH is when you apply radiation in a single brief pulse rather than slowly or in several fractions. It’s one of the most important developments in radiation therapy in the last decade, but no one is sure why FLASH works better than conventional methods. (Skeptics might say no one is sure if FLASH works better than conventional methods, but I’ll assume in this post that it’s better.)
FLASH is too new for Russ Hobbie and me to have mentioned it in the 5th edition of Intermediate Physics for Medicine and Biology, but Gene Surdutovich and I will add a discussion of it to the 6th edition.
“Mechanisms of the FLASH Effect: Current Insights and Advances,” by Giulia Rosini, Esther Ciarrocchi, and Beatrice D’Orsi
I recently read a fascinating mini review in Frontiers in Cell and Developmental Biology by Giulia Rosini, Esther Ciarrocchi, and Beatrice D’Orsi of the Institute of Neuroscience in Pisa, Italy. They’re trying to address that why question. Their article, titled “Mechanisms of the FLASH Effect: Current Insights and Advances,” is well worth reading. (Some scientific leaders in the United States claim that modern medicine focuses on treating symptoms rather than addressing underlying causes. This article shows that scientists do just the opposite: They search for basic mechanisms. Bravo! At least in Italy science is still alive.)
Below I reproduce their introduction (references removed and Wikipedia links added). If you want more detail, I suggest reading the review in its entirety (it’s open access, so you don’t need a subscription to the journal).
Radiotherapy is one of the most effective treatments for cancer, used in more than 60% of cancer patients during their oncological care to eliminate/reduce the size of the tumor. Currently, conventional radiotherapy (CONV-RT) remains the standard in clinical practice but has limitations, including the risk of damage to surrounding healthy tissues. A recent innovation, FLASH radiotherapy (FLASH-RT), employs ultra-high-dose rate (UHDR) irradiation to selectively spare healthy tissue while maintaining its therapeutic effect on tumors. However, the precise radiobiological mechanism behind this protective “FLASH effect” remains unclear. To understand the FLASH effect, several hypotheses have been proposed, focusing on the differential responses of normal and tumor tissues to UHDR irradiation: (i) Oxygen depletion: FLASH-RT may rapidly deplete oxygen in normal tissues, creating transient hypoxia that reduces oxygen-dependent DNA damage; (ii) Radical-radical interaction: The rapid production of reactive oxygen species (ROS) during UHDR irradiation may lead to radical recombination, preventing oxidative damage to healthy tissues; (iii) Mitochondrial preservation: FLASH-RT appears to preserve mitochondrial integrity and ATP production in normal tissues, minimizing oxidative stress. Conversely, FLASH-RT may promote oxidative damage and apoptosis in tumor cells, potentially improving therapeutic efficacy; (iv) DNA damage and repair: The differential response of normal and tumor tissues may result from variations in DNA damage formation and repair. Normal cells rely on highly conserved repair mechanisms, while tumor cells often exhibit dysregulated repair pathways; and (v) Immune response: FLASH-RT may better preserve circulating immune cells and reduce inflammation in normal tissues compared to CONV-RT. In this mini-review, we summarize the current insights into the cellular mechanisms underlying the FLASH effect. Preclinical studies in animal models have demonstrated the FLASH effect, and early-phase clinical trials are now underway to evaluate its safety and efficacy in human patients. While FLASH-RT holds great promise for improving the balance between tumor control and normal tissue sparing in cancer treatment, continued research is necessary to fully elucidate its mechanisms, optimize its clinical application, and minimize potential side effects. Understanding these mechanisms will pave the way for safer and more effective radiotherapy strategies.
I’ll take advantage of this paper being open access to reproduce Rosini et al.’s Figure 1, which is a beautiful summary of their article.
Figure 1 from “Mechanisms of the FLASH Effect: Current Insights and Advances,” by Giulia Rosini, Esther Ciarrocchi and Beatrice D’Orsi
If I were a betting man, I’d put my money on the radical-radical interaction mechanism. But don’t trust me, because I’m not an expert in this field. Read this well-written review yourself and draw your own conclusion.
I’ll end by giving Rosini, Ciarrocchi, and D’Orsi the final word. Their conclusion is quoted below.
FLASH-RT has emerged as a promising alternative to CONV-RT, offering potential advantages in reducing normal tissue toxicity while maintaining or even potentially enhancing tumor control. However, the underlying mechanisms remain incompletely understood. Oxygen depletion, radical recombination, mitochondrial preservation, DNA repair and immune response modulation have all been proposed as contributing factors… but no single mechanism fully explains the FLASH effect. This further highlights the complex interplay between physical, biological, and immunological factors that might be behind the FLASH effect. Importantly, combining FLASH-RT with adjuvant therapies, such as radioprotectors, immunotherapy or nanotechnology, could synergize with these mechanisms to further widen the therapeutic window. FLASH-RT’s ability to reduce inflammation, preserve immune function, and minimize damage to healthy tissues contrasts sharply with CONV-RT, which often induces significant toxicity. However, despite promising preclinical findings, critical questions remain regarding the precise mechanisms driving the FLASH effect and its clinical applicability. Continued research is essential to fully elucidate these mechanisms, optimize FLASH-RT delivery, and translate its benefits into safe and effective clinical applications. By addressing these challenges, FLASH-RT has the potential to significantly improve therapeutic outcomes for cancer patients, offering a paradigm shift in radiation oncology.
The article begins with a discussion of Hodgkin and Huxley’s research on a nerve axon. Russ Hobbie and I describe this work in Section 6.13 of IPMB (“The Hodgkin–Huxley Model for Membrane Current”). We focus on their 1952 papers in the Journal of Physiology, and especially the fifth one, which developed their mathematical model in detail. It’s a wonderful paper, and when I used to teach my graduate bioelectricity class at Oakland University, the students were assigned to read it. I thought I was familiar with the story behind Hodgkin and Huxley’s research, but I learned something new from Catacuzzeno et al. They write (with citations removed)
We wish to recall, in Hodgkin’s words, how in the summer of 1949, in about a month, they managed to complete all the experiments used in the five papers published in 1952, as a special lesson for today’s times, when everything seems to move so fast and often with little thought behind it: “I think that we were able to do this so quickly and without leaving too many gaps because we had spent so long thinking and making calculations about the kind of system which might produce an action potential of the kind seen in squid nerve. We also knew what we had to measure in order to reconstruct an action potential.”
One of the interesting features of the Hodgkin and Huxley work is that they did not know about ion channels. I find it hard to even begin teaching the subject without talking about ion channels, yet they presented all their results without referring to them. Catacuzzeno et al. seem to share my surprise.
Notably, in none of their papers did Hodgkin and Huxley ever mention “ion channels,” only ion currents and conductance. In fact, the concept of an ion channel, as we know it today, did not even exist at the time. Carriers [now known as “transporters”] were more in vogue in the scientific community, also in association with membrane excitation. In the last of their 1952 papers, commenting on the Na+ inward current, Hodgkin and Huxley wrote that it could not be excluded “the possibility that Na+ ions cross the membrane in combination with a lipoid soluble carrier.” This shows how strongly rooted the concept of the carrier was at the time, and how far removed the concept of the ion channel was.
The main purpose of Catacuzzeno et al.’s article is to explain how the existence of ion channels was established. The heroes of the story are Bertil Hille and Clay Armstrong. Russ and I refer to Hille in IPMB when we cite his wonderful book Ion Channels of Excitable Membranes. I’m embarrassed to say that we don’t mention Armstrong at all. The “crucial decade” in the title of the article is the ten years from roughly 1965 to 1975.
One nice thing about this review is that it really helps the reader see the scientific method in action. It presents the hypotheses that Hille and Armstrong introduced, and then explains how they designed experiments to test them. Sometimes this perspective gets lost in textbooks like IPMB, but I like how it’s highlighted in Catacuzzeno et al.’s more qualitative and historical review.
Catacuzzeno et al. claim that one of the key pieces of evidence supporting the idea of ion channels that are selective for different ions is the existence of chemicals that block a particular type of channel: tetrodotoxin (TTX) for a sodium channel and tetraethylammonium (TEA) for a potassium channel. Selectivity became a key issue. They write
Classic biophysical experiments beginning in the mid-1960s, which showed distinct conduction properties for different ions, began to provide the first clues as to the architecture and basic physico-chemical properties of the conduction pores and the mechanisms underlying ion permeation and selectivity. Hille focused his efforts on investigating the selectivity properties of these membrane pores with the idea that it would perhaps lead to something instructive regarding their structural and chemical properties. A few studies had already addressed this topic but not in a systematic way as Hille had in mind. By selectively blocking either pathway [sodium or potassium], his studies showed that at least 10 cations could easily permeate through the Na+ pores and four through the K+ pores of Ranvier’s node [in a myelinated nerve axon]. Considering the size of the ions tested and the length of hydrogen bonds, Hille estimated the Na+ pores to have a size, in its most constricted portion, which he called the “selectivity filter,” of 3.1 × 5.1 Å and assumed to be lined by oxygen dipoles that would establish hydrogen bonds with the permeating cations...

Most interesting was another observation: all cations with a methyl group were impermeant, regardless of their size. In other words, large hydroxy guanidinium could go through the Na+ pore, while small methylammonium could not. These results provided experimental support for previous proposals that permeant ions interact with the pore wall and that this interaction contributes to the membrane’s permeation properties; in other words, the membrane or the pores in it do not merely select by ion size, as if they were simple physical sieves.
Bertil Hille received his PhD from the Rockefeller University in 1967. It was during his graduate studies that he began his collaboration with Clay Armstrong (both of them were young during the crucial decade when ion channels were established). He did a postdoc with Alan Hodgkin, of Hodgkin and Huxley fame, and then became a professor for many years at the University of Washington School of Medicine.

Clay Armstrong, who is six years older than Hille, is a former student of Andrew Huxley. He received his MD degree from the Washington University School of Medicine in 1960. He is currently an emeritus professor of physiology at the University of Pennsylvania.
Catacuzzeno et al. describe his research in their review.
Other findings of that period, in particular Armstrong’s experiments with TEA+ derivatives on the outward K+ current of the squid giant axon, strengthened the notion that the membrane pores were at least partly made up of protein. Years earlier [Ichiji] Tasaki and [Susumu] Hagiwara had obtained action potentials with a long-lasting plateau, like the cardiac action potential, when they perfused internally the squid giant axon with TEA+. These data were interpreted as being due to a TEA+-dependent block of the outward K+ current (which they called anomalous rectification) and resulting failure of K+ current-dependent repolarization. Armstrong and [Leonard] Binstock continued their investigation with TEA+ by probing the drug on the K+ current under voltage clamp, thinking that these compounds could disclose new mechanisms and the pore architecture. First, they found that internal TEA+ eliminated the outward K+ current, whereas it was totally ineffective when applied from the outside. However, the most interesting results came when Armstrong began probing a series of TEA+ derivatives made by replacing one of the four ethyl groups by a progressively longer hydrophobic chain and found that the efficacy of block increased with the chain length. Using C9+ (nonyl triethylammonium ion) from the inside, he found that the K+ current no longer reached a steady state level during the voltage step but inactivated in a manner quite like the Na+ current.
I love how Catacuzzeno et al. include anecdotes that highlight the human side of science. For instance, they demonstrate the initial resistance to the idea of ion channels with this story:
To further represent the general sentiment on the subject at that time, it may also be helpful to recount what happened at the 1966 Biophysical Society meeting, when Armstrong and Hille presented two separate abstracts, both with the word “channel” in the title. As Hille recalls in a recent retrospective, “the Chair of the session, Toshio Narahashi, began by announcing that the word ‘channel’ could not be used in the session. After our vigorous objection, he allowed us to use the word ‘provided it did not imply any mechanism!’”.
I highly recommend the article by Catacuzzeno et al. as ancillary reading when studying from Intermediate Physics for Medicine and Biology. It’s wonderfully written, informative, and fascinating. They conclude
Asked why a skeptical medical student would take an interest in the study of ion channels, Clay Armstrong, upon receiving the Albert Lasker Basic Medical Research Award in November 1999 [along with Hille and Roderick MacKinnon], gave the following answer: “I think that ion channels are the most important single class of proteins that exist in the human body or any body for that matter.” Undoubtedly, Armstrong knows well that all proteins of the body are crucial and that we cannot do without most of them; undoubtedly, Armstrong is biased in favor of ion channels after a lifetime spent with them. Yet, if he says that ion channels are of outstanding importance, then there must be something very special around them.
Intermediate Physics for Medicine and Biology doesn’t analyze diffraction. It’s mentioned a few times—in the chapters about images (Chap. 12), sound (Chap. 13), and light (Chap. 14)—but it’s never investigated in detail. In this post, I want to take a deeper dive into diffraction. In particular, we will examine a specific example that highlights many features of this effect: diffraction from a knife edge.
Assume a plane wave of light, with intensity I0 and wavelength λ, is incident on a half-infinite opaque screen (the knife edge, shown below with the incident light coming from the bottom upward). If optics were entirely geometrical (light traveling in straight lines, with no effect of its wavelength), the screen would cast a sharp shadow: all the light for x < 0 would continue propagating upward (in the y direction), while all the light for x > 0 would be blocked. But that’s not what really happens. Instead, light diffracts off the screen, causing fringes to appear in the region x < 0 and some light to enter the shadow region x > 0.
Optics, by Hecht and Zajac.
Knife-edge diffraction is one of the few diffraction problems that we can solve analytically. I’ll follow the solution given in Hecht and Zajac’s textbook Optics (1979), which I used in my optics class when I was an undergraduate physics major at the University of Kansas. The solution to the knife-edge problem involves the Fresnel sine and cosine integrals,

$$S(\xi) = \int_0^{\xi} \sin\!\left(\frac{\pi t^2}{2}\right) dt, \qquad C(\xi) = \int_0^{\xi} \cos\!\left(\frac{\pi t^2}{2}\right) dt.$$
I’ve plotted C and S in the top panel of the figure below. Both are odd functions that approach one half as ξ approaches infinity, albeit with many oscillations along the way. There’s lots of interesting mathematics behind these integrals, like the Cornu spiral, but this post is long enough that we don’t have time for that digression.
If we solve for the intensity distribution beyond the screen (y > 0), we get

$$I(x,y) = \frac{I_0}{2}\left\{\left[\frac{1}{2} - C(u)\right]^2 + \left[\frac{1}{2} - S(u)\right]^2\right\}, \qquad u = x\sqrt{\frac{2}{\lambda y}}\, .$$

(As a check: far to the left, u → −∞, so C and S → −1/2 and I → I0; far to the right, u → +∞, so C and S → +1/2 and I → 0.)
This is an interesting function. When I first saw this solution plotted, I noticed oscillations on the left (x < 0) but none in the shadow region on the right (x > 0).
But C and S are both odd, so they oscillate on the right and left. The middle two panels in the figure above show how this happens. Taking one half minus C and one half minus S just flips the two functions and adds a constant, so the functions vary from roughly zero to one instead of minus a half to plus a half. When you square these functions, the oscillations that are nearly equal to zero get really small (a small number like one tenth, when squared, gets very small) while the oscillations that are nearly equal to one are preserved (one squared is just one). There are still some small oscillations in the shadow region (x > 0), but somehow when you add the cosine and sine parts even they go away, and you end up with the classic solution in the bottom panel, which you see in all the textbooks.
I was curious to know how this function behaves for different wavelengths. Diffraction effects are most important for long wavelengths, but for short wavelengths you expect to recover plain old geometrical optics. Interestingly, the wavelength λ appears in our solution only as a scaling factor for x, so changing λ merely stretches or contracts the function along the x axis. The figure below shows the solution for three different wavelengths. Note that the argument of the Fresnel sine and cosine functions is dimensionless: x has units of distance, but so do the wavelength λ and the distance past the opaque screen y, and they appear as a product under a square root. Therefore, we don’t need to worry about the units of x, y, and λ; as long as we use consistent units we are fine. As the wavelength gets small, the distribution gets crowded together close to x = 0. The red curve is the geometrical optics limit (λ = 0). The case of λ = 0.1 approaches this limit in a funny way: the amplitude of the oscillations does not change, but they fall off more quickly, so they basically exist only very near the knife edge. You wouldn’t notice them unless you looked with very fine spatial resolution. The intensity does seep into the shadow region, and this is more pronounced at large wavelengths than at small ones.
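(If you want to reproduce these curves yourself, here is a minimal Python sketch based on the expression above; the three wavelengths and the unit distance y = 1 are the illustrative choices used in the plot. SciPy’s fresnel function uses the same convention as our C and S, returning S first.)

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import fresnel

I0, y = 1.0, 1.0                     # incident intensity and distance past the edge
x = np.linspace(-5.0, 5.0, 2000)

for lam in (1.0, 0.1, 0.01):         # wavelengths, in the same units as x and y
    S, C = fresnel(x * np.sqrt(2.0 / (lam * y)))   # note: scipy returns (S, C)
    I = 0.5 * I0 * ((0.5 - C)**2 + (0.5 - S)**2)
    plt.plot(x, I, label=f"wavelength = {lam}")

plt.axvline(0.0, color="gray", lw=0.5)             # geometric edge of the shadow
plt.xlabel("x"); plt.ylabel("I / I0")
plt.legend(); plt.show()
```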
The picture above was plotted for one distance, y = 1. It’s what you would get if you put a projector screen just behind the knife edge so you could observe the intensity pattern there. What happens if you move the screen farther back (increase y)? Since y and λ enter our solution only as their product, changing y is much like changing λ. Below is a plot of intensity versus x at three different values of y, for a single wavelength. Making this plot was simple: I just changed the labels on the previous plot from λ to y, and from y to λ. As you get farther away (y gets larger), the distribution spreads out. But the spreading is not linear: because of that square root, the spreading slows down (but never stops) at large y.
You can see the full pattern of intensity in the color plots below. Remember, x is horizontal with the opaque screen on the right, y is vertical with the opaque screen at the bottom, and color indicates intensity. Yellow is the incident intensity I0, and blue is zero intensity (darkness). The geometrical limit would be a strip of blue on the right (the shadow) and yellow on the left (the unobstructed incident wave). The case of λ = 0.01 closely approximates this. The case of a really long wavelength (λ = 100) is interesting: the light spreads out all over, giving a more uniform distribution. For long wavelengths, the light “bends” around the opaque screen. This is why you can still hear music even if there is an obstacle between you and the band: the sound wave diffracts because of its long wavelength (especially the low-pitched notes).
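(The same sketch extends to a two-dimensional map of the intensity; again, the wavelength here is an arbitrary illustrative choice.)

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import fresnel

lam = 0.1                                        # wavelength (illustrative choice)
X, Y = np.meshgrid(np.linspace(-5.0, 5.0, 400),  # x: horizontal, knife edge at x = 0
                   np.linspace(0.05, 5.0, 400))  # y: distance past the screen (avoid y = 0)
S, C = fresnel(X * np.sqrt(2.0 / (lam * Y)))
I = 0.5 * ((0.5 - C)**2 + (0.5 - S)**2)          # intensity relative to I0
plt.pcolormesh(X, Y, I, shading="auto")
plt.colorbar(label="I / I0")
plt.xlabel("x"); plt.ylabel("y"); plt.show()
```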
Diffraction is fascinating, but it’s a bit too complicated to be included in IPMB. Nevertheless, it’s a part of the physics behind visual acuity, microscope resolution, ultrasound transducers, and many other applications.
The title of this week’s post is ironic, because with all the events of the last few months I often suspect that the Age of Reason is coming to a close. The title comes from volume seven of Will and Ariel Durant’s The Story of Civilization. After I retired from Oakland University, I set about reading the entire eleven-volume series. The subtitle of The Age of Reason Begins is: A History of European Civilization in the Period of Shakespeare, Bacon, Montaigne, Rembrandt, Galileo, and Descartes: 1558–1648.
Today I want to focus on Francis Bacon, who is probably the central figure in the Durants’ book (his picture was their choice for gracing the book’s cover). They introduce him this way.
Francis Bacon, who was destined to have more influence on European thought than any other Elizabethan, had been born (1561) in the very aura of the court, at York House, official residence of the Lord Keeper of the Great Seal, who was his father, Sir Nicholas; Elizabeth called the boy ‘the young Lord Keeper.’ His frail constitution drove him from sports to studies; his agile intellect grasped knowledge hungrily; soon his erudition was among the wonders of those ‘spacious times.’
Why bring up Bacon now? Well, the last few months have seen unprecedented attacks on science and scientists: budget cuts to the National Institutes of Health and the National Science Foundation, climate change denial and vaccine hesitancy, conspiracy theories, political requirements for government funding, the demonization of scientists such as Anthony Fauci, and more. It seems like something horrible happens every day. This makes me wonder: what is the key feature of science that must be preserved above all else? What one thing must we save? I can think of many possibilities. Science drives our economy and prosperity. Scientific discoveries have led to amazing advances in human health. Educating and providing opportunities for our young scientists is a critical investment in our future. Yet, as important as these things are, they aren’t the central issue. They aren’t what we must save lest all be lost. It’s this key element of science, its essence, that brings me to Francis Bacon.
Bacon was an early promoter of the scientific method. The Durants write
Bacon felt that the old Organon [of Aristotle] had kept science stagnant by its stress on theoretical thought rather than practical observation. His Novum Organum proposed a new organ and system of thought—the inductive study of nature itself through experience and experiment. Though this book too was left incomplete, it is, with all its imperfections, the most brilliant production in English philosophy, the first clear call for an Age of Reason.
Let me explain (and perhaps expand on) Bacon’s idea in my own words. How do we know what is true and what is not? By evidence. By experiment. By data. By comparing our ideas to what we can measure happening in the world. By accepting as true only those hypotheses that survive our best efforts to disprove them. By submitting our conclusions to rigorous peer review from our fellow scientists. Yet the current Republican administration seems to have its own ideas of what is true, regardless of the evidence. This is the very opposite of science. It is anti-science.
For example, the reality of climate change and humanity’s impact on global warming is backed by an enormous body of data. We have records of temperature, carbon dioxide concentration, and increasingly violent storms. We have sophisticated mathematical models with which we can conduct numerical experiments to predict what will happen in the future. The evidence is truly overwhelming. Yet, many—including President Trump—don’t care about the evidence. They claim climate change is a “hoax.” They don’t back these claims with facts. They don’t approach the topic as an inductive study based on experience and experiment. They believe things for their own reasons that have nothing to do with evidence or science.
Another example is vaccines. There are so many clinical studies showing that vaccines don’t cause autism. Again, the evidence is overwhelming. Yet people like Health and Human Services Secretary Robert F. Kennedy, Jr. believe just the opposite: that autism is caused by vaccines. They don’t support such claims by presenting new evidence. While they occasionally drag up discredited studies or cherry-pick data, they don’t systematically examine all the evidence and weigh both sides. They don’t try to falsify their hypotheses. They don’t subject their ideas to peer-review.
Still another example is the source of covid. The evidence is uncertain enough that we cannot say definitively how the covid pandemic arose. Yet, the data points strongly in one direction: Spillover from an animal to a human. Nevertheless, the government’s covid.gov website now claims that the “lab leak” hypothesis has been proven, and asserts that covid arose from sinister events in a lab in China. No, we don’t know that. While we can’t yet be certain, the evidence suggests that the cause was not a lab leak. Just because some politicians want the source of covid to be a lab leak doesn’t make it so.
I would love to be proved wrong, and shown that, say, climate change is actually not happening. That would truly be wonderful, and millions of lives would be saved. But you have to prove that using evidence. You can’t just declare it. My dad was born in Kansas City and he used to say “I’m from Missouri and you have to show me!” That’s the gist of what it means to be a scientist. You have to show me, not tell me. Convince me with the data.
So, what is the feature of science that is essential? What aspect, if we lose it, means we no longer have science at all? I would say the belief that evidence matters; that experiments are how we determine what is true and what is not. If we give that up, all is lost and we’re back to the age of faith. Not religious faith necessarily, but an age where truth is determined not by evidence but by what is consistent with your personal beliefs, your friends and family, your wishful thinking, your fears, or your politics. The supremacy of evidence is where we must focus our resistance. That must be our line in the sand. That must be the hill from which we defend against the onslaught of the Republican War on Science, so that the Age of Reason can resume.
Because he [Bacon] expressed the noblest passion of his age—for the betterment of life through the extension of knowledge—posterity raised to his memory a living monument of influence. Scientists were stirred and invigorated not by his method but by his spirit. How refreshing, after centuries of minds imprisoned in their roots or caught in webs of their own wistful weaving, to come upon a man who loved the sharp tang of fact, the vitalizing air of seeking and finding, the zest of casting lines of doubt into the deepest pools of ignorance, superstition, and fear!...
…[Bacon] repudiated the reliance upon traditions and authorities; he required rational and natural explanations instead of emotional presumptions, supernatural interventions, and popular mythology. He raised a banner for all the sciences, and drew to it the most eager minds of the succeeding centuries.
Another candidate is Riitta Hari (born 1948). Russ and I included the biomagnetism researcher Matti Hämäläinen in the IPMB100 list, and Hari comes from the same Finnish research group and has made contributions similar to Hämäläinen’s. In fact, she was the second author, after Hämäläinen, on the definitive review article about magnetoencephalography (MEG) that Russ and I cite. We cite another paper by Hari when discussing the clinical applications of MEG.
I knew Carri Glide-Hurst (born 1979) when she was at Wayne State University here in southeast Michigan. She’s currently associated with the famous University of Wisconsin medical physics program. In IPMB Russ and I cite her Point/Counterpoint article in the journal Medical Physics, which examines the use of ultrasound for breast cancer screening.
Elizabeth Cherry (born 1975?) at Georgia Tech is a biomedical engineer who works on modeling cardiac electrophysiology. She is a co-author on a landmark paper looking at ways to perform low-energy defibrillation.
The contributions of three of my graduate students—Marcella Woods, Debbie Janks, and Debbie Langrill Beaudoin—are honored in IPMB by having their research turned into homework problems. And Russ’s daughter Sarah is cited for her studies on fitting ecological data using exponentials.
So yes, many women have contributed to IPMB and could easily have been included in the IPMB100. You could probably name even more. I suspect that future editions of IPMB will feature many more women.
I am an emeritus professor of physics at Oakland University, and coauthor of the textbook Intermediate Physics for Medicine and Biology. The purpose of this blog is specifically to support and promote my textbook, and in general to illustrate applications of physics to medicine and biology.