Friday, May 31, 2024

Can the Microwave Auditory Effect Be ‘Weaponized’?

“Can the Microwave Auditory
Effect Be Weaponized?”
I was recently reading Ken Foster, David Garrett, and Marvin Ziskin’s paper “Can the Microwave Auditory Effect Be Weaponized?” (Frontiers in Public Health, Volume 9, 2021). It analyzed whether microwave weapons could be used to “attack” diplomats and thereby cause the Havana syndrome. While I am interested in the Havana syndrome (I discussed it in my book Are Electromagnetic Fields Making Me Ill?), today I merely want to better understand Foster et al.’s proposed mechanism by which an electromagnetic wave can induce an acoustic wave in tissue.

As is my wont, I will present this mechanism as a homework problem at a level you might find in Intermediate Physics for Medicine and Biology. I’ll assign the problem to Chapter 13 about Sound and Ultrasound, although it draws from several chapters.

Foster et al. represent the wave as decaying exponentially as it enters the tissue, with a skin depth λ. To keep things simple and to focus on the mechanism rather than the details, I’ll assume the energy in the electromagnetic wave is absorbed uniformly in a thin layer of thickness λ, ignoring the exponential behavior.

Section 13.4
Problem 17 ½. Assume an electromagnetic wave of intensity I0 (W/m²) with area A (m²) and duration τ (s) is incident on tissue. Furthermore, assume all its energy is absorbed in a depth λ (m).

(a) Derive an expression for the energy E (J) dissipated in the tissue.

(b) Derive an expression for the tissue’s increase in temperature ΔT (°C), using E = C ΔT, where C (J/°C) is the heat capacity. Then express C in terms of the specific heat capacity c (J/(kg °C)), the density ρ (kg/m³), and the volume where the energy was deposited V (m³). (For a discussion of the heat capacity, see Sec. 3.11).

(c) Derive an expression for the fractional increase in volume, ΔV/V, caused by the increase in temperature, ΔV/V = αΔT, where α (1/°C) is the tissue’s coefficient of thermal expansion.

(d) Derive an expression for the change in pressure, ΔP (Pa), caused by this fractional change in volume, ΔP = B ΔV/V, where B (Pa) is the tissue’s bulk modulus. (For a discussion of the bulk modulus, see Sec. 1.14).

(e) Your expression in part (d) should contain a factor Bα/(ρc). Show that this factor is dimensionless. It is called the Grüneisen parameter.

(f) Assume α = 0.0003 1/°C, B = 2 × 10⁹ Pa, c = 4200 J/(kg °C), and ρ = 1000 kg/m³. Evaluate the Grüneisen parameter. Calculate the change in pressure ΔP if the intensity is 10 W/m², the skin depth is 1 mm, and the duration is 1 μs.

I won’t solve the entire problem for you, but the answer for part d is

                            ΔP = I0 (τ/λ) [Bα/(ρc)] .

I should stress that this calculation is approximate. I ignored the exponential falloff. Some of the incident energy could be reflected rather than absorbed. It is unclear if I should use the linear coefficient of thermal expansion or the volume coefficient. The tissue may be heterogeneous. You can probably identify other approximations I’ve made. 

Interestingly, the pressure induced in the tissue varies inversely with the skin depth, which is not what I intuitively expected. As the skin depth gets smaller, the energy is dumped into a smaller volume, which means the temperature increase within this smaller volume is larger. The pressure increase is proportional to the temperature increase, so a thinner skin depth means a larger pressure.

You might be thinking: wait a minute. Heat diffuses. Do we know if the heat would diffuse away before it could change the pressure? The diffusion constant of heat (the thermal diffusivity) D for tissue is about 10⁻⁷ m²/s. From Chapter 4 in IPMB, the time to diffuse a distance λ is λ²/D. For λ = 1 mm, this diffusion time is 10 s. For pulses much shorter than this, we can ignore thermal diffusion.

Perhaps you’re wondering how big the temperature rise is? For the parameters given, it’s really small: ΔT = 2 × 10⁻⁹ °C. This means the fractional change in volume is around 10⁻¹². It’s not a big effect.
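If you want to check these numbers yourself, here is a minimal Python sketch (my own, not part of the homework solution or of Foster et al.’s paper) that plugs the part (f) values into the expressions derived above:

```python
I0    = 10.0     # W/m^2, incident intensity (the cell phone exposure limit)
tau   = 1e-6     # s, pulse duration
lam   = 1e-3     # m, skin depth
alpha = 3e-4     # 1/degC, coefficient of thermal expansion
B     = 2e9      # Pa, bulk modulus
c     = 4200.0   # J/(kg degC), specific heat capacity
rho   = 1000.0   # kg/m^3, density

Gamma = B * alpha / (rho * c)        # Grueneisen parameter (dimensionless)
dT    = I0 * tau / (lam * rho * c)   # temperature rise
dV_V  = alpha * dT                   # fractional volume change
dP    = Gamma * I0 * tau / lam       # pressure increase

print(Gamma)   # about 0.14
print(dT)      # about 2.4e-9 degC
print(dV_V)    # about 7e-13
print(dP)      # about 1.4e-3 Pa
```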

The Grüneisen parameter is a dimensionless number. I’m used to thinking of such numbers as being the ratio of two quantities with the same units. For instance, the Reynolds number is the ratio of an inertial force to a viscous force, and the Péclet number is the ratio of transport by drift to transport by diffusion. I’m having trouble interpreting the Grüneisen parameter in this way. Perhaps it has something to do with the ratio of thermal energy to elastic energy, but the details are not obvious, at least not to me.

What does this all have to do with the Havana syndrome? Not much, I suspect. First, we don’t know if the Havana syndrome is caused by microwaves. As far as I know, no one has ever observed microwaves associated with one of these “attacks” (perhaps the government has but keeps the information classified). This means we don’t know what intensity, frequency (and thus, skin depth), and pulse duration to assume. We also don’t know what pressure would be required to explain the “victims’” symptoms.

In part f of the problem, I used for the intensity the upper limit allowed for a cell phone, the skin depth corresponding approximately to a microwave frequency of about ten gigahertz, and a pulse duration of one microsecond. The resulting pressure of 0.0014 Pa is much weaker than is used during medical ultrasound imaging, which is known to be safe. The acoustic pressure would have to increase dramatically to pose a hazard, which implies very large microwave intensities.

Are Electromagnetic Fields Making Me Ill? superimposed on the cover of Intermediate Physics for Medicine and Biology.
Are Electromagnetic Fields
Making Me Ill?

That such a high-intensity electromagnetic wave could be present without being noticeable seems farfetched to me. Perhaps very low pressures could have harmful effects, but I doubt it. I think I’ll stick with my conclusion from Are Electromagnetic Fields Making Me Ill?

Microwave weapons and the Havana Syndrome: I am skeptical about microwave weapons, but so little evidence exists that I want to throw up my hands in despair. My guess: the cause is psychogenic. But if anyone detects microwaves during an attack, I will reconsider.

Friday, May 24, 2024

Magnetoelectrics For Biomedical Applications

“Magnetoelectrics for Biomedical Applications: 130 years Later, Bridging Materials, Energy, and Life” superimposed on Intermediate Physics for Medicine and Biology.
Magnetoelectrics for
Biomedical Applications:
130 Years Later, Bridging
Materials, Energy, and Life
I’m always looking for new ways physics can be applied to medicine and biology. Recently, I read the article “Magnetoelectrics for Biomedical Applications: 130 Years Later, Bridging Materials, Energy, and Life” by Pedro Martins and his colleagues (Nano Energy, in press).

The “130 years” in the title refers to the year 1894 when Pierre Curie conjectured that in some materials there could be a coupling between their magnetic and electric properties. While there are some single-phase magnetoelectric materials, most modern ones are composites: piezoelectric and magnetostrictive phases are coupled through mechanical strain. In this way, an applied magnetic field can produce an electric field, and vice versa.

Martins et al. outline many possible applications of magnetoelectric materials to medicine. I will highlight three.

  1. Chapter 7 of Intermediate Physics for Medicine and Biology mentions deep brain stimulators to treat Parkinson’s disease. Normally deep brain stimulation requires implanting a pacemaker-like device connected by wires inserted into the brain. A magnetoelectric stimulator could be small and wireless, using power delivered by a time-dependent magnetic field. The magnetic field would induce an electric field in the magnetoelectric material, and this electric field could act like an electrode, activating a neuron.
  2. Chapter 8 of IPMB discusses ways to measure the tiny magnetic field produced by the heart: the magnetocardiogram. The traditional way to record the field is to use a superconducting quantum interference device (SQUID) magnetometer, which must be kept at cryogenic temperatures. Martins et al. describe how a weak magnetic field would produce a measurable voltage using a room-temperature magnetoelectric sensor.
  3. Magnetoelectric materials could be used for drug delivery. Martins et al. describe an amazing magnetoelectrical “nanorobot” that could be made to swim using a slowly rotating magnetic field. After the nanorobot reached its target, it could be made to release a cancer-fighting drug to the tissue by applying a rapidly changing magnetic field that produces a local electric field strong enough to cause electroporation in the target cell membrane, allowing delivery of the drug.

What I have supplied is just a sample of the many applications of magnetoelectric materials. Martins et al. describe many more, and also provide a careful analysis of the limitations of these techniques.

The third example, related to drug delivery, surprised me. Electroporation? Really? That requires a huge electric field. In Chapter 9 of IPMB, Russ Hobbie and I say that for electroporation the electric field in the membrane should be about 10⁸ volts per meter. Later in that chapter, we analyze an example of a spherical cell in an electric field. To get a 10⁸ V/m electric field in the membrane, the electric field applied to the cell as a whole should be on the order of 10⁸ V/m times the thickness of the membrane (about 10⁻⁸ m) divided by the radius of the cell (about 10⁻⁵ m), or 10⁵ V/m.

The material used for drug delivery had a magnetoelectric coefficient of about 100 volts per centimeter per oersted, which means 10⁴ V/(m Oe). The oersted is really a unit of the magnetic field intensity H rather than of the magnetic field B. In most biological materials, the magnetic permeability is about that of a vacuum, so 1 Oe corresponds to 1 gauss, or 10⁻⁴ tesla. Therefore, the magnetoelectric coefficient is 10⁸ (V/m)/T. Martins et al. say that a magnetic field of about 1000 Oe (0.1 T) was used in these experiments. So, the electric field produced by the material was on the order of 10⁷ V/m. Cells adjacent to the magnetoelectric particle should experience an electric field of about this strength. We found earlier that electroporation requires an electric field applied to the cell of around 10⁵ V/m. That gives us about a factor of 100 more electric field strength than is needed. It should work, even if the cell is a little bit distant from the particle. Wow!
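For readers who want to follow the arithmetic, here is a quick back-of-the-envelope check in Python (a sketch of my own, using only the order-of-magnitude numbers quoted above):

```python
# Field needed across the whole cell to electroporate its membrane
E_membrane = 1e8                 # V/m, field required in the membrane
d_membrane = 1e-8                # m, membrane thickness
r_cell     = 1e-5                # m, cell radius
E_needed   = E_membrane * d_membrane / r_cell    # about 1e5 V/m

# Field produced by the magnetoelectric particle
alpha_ME  = 100 * 100 / 1e-4     # 100 V/(cm Oe) converted to V/(m T), since 1 Oe ~ 1e-4 T
B_applied = 0.1                  # T, the 1000 Oe field used in the experiments
E_produced = alpha_ME * B_applied                # about 1e7 V/m

print(E_needed, E_produced, E_produced / E_needed)   # the factor-of-100 margin
```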

I’ll close with my favorite paragraph of the article, found near the end and summarizing the field.

The historical trajectory of ME [magnetoelectric] materials, spanning from Pierre Curie's suggestion in 1894 to recent anticancer activities in 2023, has been characterized by significant challenges and breakthroughs that have shaped their biomedical feasibility. Initially, limited understanding of the ME phenomenon and the absence of suitable materials posed critical obstacles. However, over the decades, intensive research efforts led to the discovery and synthesis of ME compounds, including novel composite structures and multiferroic materials with enhanced magnetoelectric coupling. These advancements, coupled with refinements in material synthesis and characterization techniques, propelled ME materials into the realm of biomedical applications. Additionally, piezoelectric polymers have been incorporated into ME composites, enhancing processing, biocompatibility, integration, and flexibility while maintaining or even improving the ME properties of the composite materials. In the 21st century, the exploration of ME materials for biomedical purposes gained momentum, particularly in anticancer activities. Breakthroughs in targeted drug delivery, magnetic hyperthermia therapy, and real-time cancer cell imaging showcased the therapeutic potential of ME materials. Despite these advancements, challenges such as ensuring biocompatibility, stability in physiological environments, and scalability for clinical translation persist. Ongoing research aims to optimize ME material properties for specific cancer types, enhance targeting efficiency, and address potential cytotoxicity concerns, with the ultimate goal of harnessing the full potential of ME materials to revolutionize cancer treatment and diagnosis.

Friday, May 17, 2024

FLASH Radiotherapy: Newsflash, or Flash in the Pan?

FLASH Radiotherapy: Newsflash or Flash in the Pan? (Med. Phys. 46:4287–4290, 2019) superimposed on the cover of Intermediate Physics for Medicine and Biology.
“FLASH Radiotherapy: Newsflash
or Flash in the Pan?” (Med. Phys.
46:4287–4290, 2019).
I’ve always been a fan of the Point/Counterpoint articles published in the journal Medical Physics. Today I will discuss one titled “FLASH Radiotherapy: Newsflash or Flash in the Pan?” (Volume 46, Pages 4287–4290, 2019). The title is clever, but doesn’t really fit. A point/counterpoint article is supposed to have a title in the form of a proposition that you can argue for or against. Perhaps “FLASH Radiotherapy: Newsflash, Not a Flash in the Pan” would have been better.

The format for any Point/Counterpoint article is a debate between two leading experts. Each provides an opening statement and then they finish with rebuttals. In this case, Peter Maxim argues for the proposition (Newsflash!), and Paul Keall argues against it (Flash in the Pan). The moderator, Jing Cai, provides an introductory overview:
Ionizing radiation with ultra-high dose rates (>40 Gy/s), known as FLASH, has drawn great attention since its introduction in 2014. It has been shown to markedly reduce radiation toxicity to normal healthy tissues while inhibiting tumor growth with similar efficiency as compared to conventional dose rate irradiation in pre-clinical models. Some believe that FLASH irradiation holds great promise and is perhaps the biggest finding in recent radiotherapy history. However, others remain skeptical about the replication of FLASH efficacy in cancer patients with concerns about technical complexity, lack of understanding of its molecular radiobiological underpinnings, and reliability. This is the premise debated in this month’s Point/Counterpoint.

I find it interesting that the mechanism for FLASH remains unknown. In his opening statement, Maxim says “we have barely scratched the surface of potential mechanistic pathways.” After citing several animal studies, he concludes that “these data provide strong evidence that the observed FLASH effect across multiple species and organ systems is real, which makes this dramatic finding the biggest 'Newsflash' in recent radiotherapy history.”

In his opening statement, Keall says that “FLASH therapy is an interesting concept… However, before jumping on the FLASH bandwagon, we should ask some questions.” His questions include “Does FLASH delivery technology exist for humans?” (No), “Will FLASH be cost effective?” (No), “Will treatment times be reduced with FLASH therapy?” (Possibly), and “Are the controls questionable in FLASH experiments?” (Sometimes). He concludes by asking “Am I on the FLASH bandwagon? No. I remain an interested spectator.”

Maxim, in his rebuttal, claims that while FLASH is not currently available for treatment of humans, he sees a pathway for clinical translation in the foreseeable future, based on something called Pluridirectional High-Energy Agile Scanning Electronic Radiotherapy (PHASER). Moreover, he anticipates that PHASER will have an overall cost similar to conventional therapy. He notes that one motivation for adopting the FLASH technique is to reduce uncertainty caused by organ motion. Maxim concludes that “FLASH promised to be a paradigm shift in curative radiation therapy with preclinical evidence of fundamentally improved therapeutic index. If this remarkable finding is translatable to humans, the switch to the PHASER technology will become mandatory.”

Keall, in his rebuttal, points out weaknesses in the preclinical FLASH studies. In particular, studies so far have looked at only early biological effects, but late effects (perhaps years after treatment) are unknown. He also states that “FLASH works against one of the four R’s of radiobiology, reoxygenation.” Traditionally, a tumor has a hypoxic core, meaning it has a low level of oxygen at its center, and this makes it resistant to radiation damage. When radiation is delivered in several fractions, there is enough time for a tumor to reoxygenate between fractions. This, in fact, is the primary motivation for using fractions. FLASH happens so fast there is no time for reoxygenation. This is why the mechanism of FLASH remains unclear: it goes against conventional ideas. Keall concludes “The scientists, authors, reviewers and editors involved with FLASH therapy need to carefully approach the subject and acknowledge the limitations of their studies. Overcoming these limitations will drive innovation. I will watch this space with interest.”

So what do I make of all this? From the point of view of a textbook writer, we really need to figure out the mechanism underlying FLASH. Otherwise, textbooks hardly know how to describe the technique, and optimizing it for the clinic will be difficult. Nevertheless, the new edition of Intermediate Physics for Medicine and Biology will have something to say about FLASH.

FLASH seems promising enough that we should certainly explore it further. But as I get older, I seem to be getting more conservative, so I tend to side with Keall. I would love to see the method work on patients, but I remain a skeptic until I see more evidence. I guess it depends on whether you are a cup-half-full or a cup-half-empty kind of guy. I do know one thing: Point/Counterpoint articles help me understand the pros and cons of such difficult and technical issues.

Friday, May 10, 2024

Numerical Solution of the Quadratic Formula

Homework Problem 7 in Chapter 11 of Intermediate Physics for Medicine and Biology examines fitting data to a straight line. In that problem, the four data points to be fit are (100, 4004), (101, 4017), (102, 4039), and (103, 4063). The goal is to fit the data to the line y = ax + b. For this data, you must perform the calculation of a and b to high precision or else you get large errors. The solution manual (available to instructors upon request) says that
This problem illustrates how students can run into numerical problems if they are not careful. With modern calculators that carry many significant figures, this may seem like a moot point. But the idea is still important and can creep subtly into computer computations and cause unexpected, difficult-to-debug errors.
Numerical Recipes
Are there other examples of numerical errors creeping into calculations? Yes. You can find one discussed in Numerical Recipes that involves the quadratic formula.

We all know the quadratic formula from high school. If you have a quadratic equation of the form

ax² + bx + c = 0 ,

the solution is

x = [−b ± √(b² − 4ac)] / (2a) .

For example,

x² − 3x + 2 = 0

has two solutions

x = [3 ± √(9 − 8)] / 2 = (3 ± 1) / 2 ,
so x = 1 or 2.

Now, suppose the coefficient b is larger,

x² − 300x + 2 = 0 .

The solution is

x = [300 ± √(90000 − 8)] / 2 = (300 ± 299.98667) / 2 ,
so x = 300 or 0.00667.

This calculation is susceptible to numerical error. For instance, suppose all numerical calculations are performed to only four significant figures. Then when you reach the step

√(90000 − 8) ,
you must subtract 8 from 90,000. You get 89992, which to four significant figures becomes 89990, which has a square root of (again to four significant figures) 300.0. The solutions are therefore x = 300 or 0. The large solution (300) is correct, but the small solution (0 instead of 0.00667) is completely wrong. The main reason is that when using the minus sign for ± you must subtract two numbers that are almost the same (in this case, 300 – 299.98667) to get a much smaller number.

You might say “so what! Who uses only four significant figures in their calculations?” Okay, try solving

x² − 3000x + 2 = 0 ,
where I increased b from 300 to 3000. You’ll find that using even six significant figures gives one nonsense solution (try it). As you make b larger and larger, the calculation becomes more and more difficult. The situation can cause unexpected, difficult-to-debug errors.

What’s the moral to this story? Is it simply that you must use high precision when doing calculations? No. We can do better. Notice that the solution is fine when using the plus sign in the quadratic formula. We need make no changes. It’s the negative sign that gives the problem,

x = [−b − √(b² − 4ac)] / (2a) .
Let’s try a trick; multiply the expression by a very special form of one:

x = {[−b − √(b² − 4ac)] / (2a)} × {[−b + √(b² − 4ac)] / [−b + √(b² − 4ac)]} .
Simplifying, we get

x = 2c / [−b + √(b² − 4ac)] .
Voilà! The denominator has the plus sign in front of the square root, so it is not susceptible to numerical error. The numerator is simplicity itself. Try solving x² − 300x + 2 = 0 using math to four significant figures,

x = 2(2) / [300 + √(90000 − 8)] = 4 / (300 + 300.0) = 4 / 600.0 = 0.006667 .

No error, even with just four sig figs. The problem is fixed!

I should note that the problem is fixed only for negative values of b. If b is positive, you can use an analogous approach to get a slightly different form of the solution (I’ll leave that as an exercise for the reader).
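If you want to experiment with this yourself, here is a short Python sketch (my own illustration, not code from Numerical Recipes) that mimics a four-significant-figure calculator and compares the textbook formula with the rearranged one for x² − 300x + 2 = 0 (where b is negative):

```python
import math

def sig(x, n=4):
    """Round x to n significant figures, mimicking a low-precision calculator."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

def roots_textbook(a, b, c):
    """Standard quadratic formula, with intermediate results rounded to 4 sig figs."""
    d = sig(math.sqrt(sig(sig(b * b) - sig(4 * a * c))))
    return sig((-b + d) / (2 * a)), sig((-b - d) / (2 * a))

def roots_rearranged(a, b, c):
    """For negative b: the small root from 2c / (-b + sqrt(b^2 - 4ac)), avoiding cancellation."""
    d = sig(math.sqrt(sig(sig(b * b) - sig(4 * a * c))))
    return sig((-b + d) / (2 * a)), sig(2 * c / sig(-b + d))

print(roots_textbook(1, -300, 2))    # (300.0, 0.0)      -- the small root is lost
print(roots_rearranged(1, -300, 2))  # (300.0, 0.006667) -- the small root is recovered
```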

So, the moral of the story is: if you find that your numerical calculation is susceptible to numerical error, fix it! Look for a trick that eliminates the problem. Often you can find one.

Friday, May 3, 2024

The Well-Tempered Clavichord

“Prelude No. 1,” from The Well-Tempered Clavichord, by Johann Sebastian Bach.
I played it (or tried to play it) back when I was 15 years old.

Most of my blog posts are about physics applied to medicine and biology, but today I want to talk about music. This topic may not seem relevant to Intermediate Physics for Medicine and Biology, but I would argue that it is. Music is, after all, as much about the human perception of sound as about sound itself. So, let’s talk about how we sort the different frequencies into notes.

Below I show a musical keyboard, like you would find on a piano.

A piano keyboard.

Each key corresponds to a different pitch, or note. I want to discuss the relationships between the different notes. We have to start somewhere, so let’s take the lowest note on the keyboard and call it C. It will have some frequency, which we’ll call our base frequency. On the piano, this frequency is about 33 Hz, but for our purposes that doesn’t matter. We will consider all frequencies as being multiples of this base frequency, and take our C as having a frequency of 1.
 

When you double the frequency, our ear perceives that change as going up an octave. So, one octave above the first C is another C, with frequency 2.
 

Of course, that means there’s another C with frequency 4, and another with frequency 8, and so on. We get a whole series of C’s.
 

Now, if you pluck a string held down at both ends, it can produce many frequencies. In general, it produces frequencies that are multiples of a fundamental frequency f, so you get frequency f plus “overtone” frequencies 2f, 3f, 4f, 5f, etc. As we noted earlier, we don’t care about the frequency itself but only how different frequencies are related. If the fundamental frequency is a C with frequency 1, the first overtone is one octave up (with a frequency 2), another C. The second overtone has a frequency 3. That corresponds to a different note on our keyboard, which we’ll call G.


You could raise or lower G by octaves and still have the same note (like we did with C), so you have a whole series of G’s, including 3/2 which is between C’s corresponding to frequencies 1 and 2. When two notes have frequencies such that the upper frequency is 3/2 times the lower frequency (a 3:2 ratio), musicians call that a “fifth,” so G is a fifth above C.
 


Let’s keep going. The next overtone is 4, which is two octaves above the fundamental, so it’s one of the C’s. But the following overtone, 5, gives us a new note, E. 

 

As always, you can go up or down by octaves, so we get a whole series of E’s.

 

C and E are related by a ratio of 5:4 (that is, E has a frequency 5/4 times the C below it), which musicians call a “third.” The notes C, E, and G make up the “C major chord.”

The next overtone would be 6, but we already know 6 is a G. The overtone 7 doesn’t work. Apparently a frequency ratio of 7 is not one that we find pleasant (at least, not to those of us who have been trained on western music), so we’ll skip it. Overtone 8 is another C, but we get a new note with overtone 9 (and all its octaves up and down, which I’ll stop repeating again and again). We’ll call this note D, because it seems to fit nicely between C and E. The D right next to our base note C has a frequency of 9/8.

Next is overtone 10 (an E), then 11 (like 7, it doesn’t work), 12 (a G), 13 (nope), 14 (no because it’s an octave up from 7), and finally 15, a new note we’ll call B. 

We could go on, but we don’t perceive many of the higher overtones as harmonious, so let’s change tack. There’s nothing special about our base note, the C on the far left of our keyboard. Suppose we wanted to use a different base note. What note would we use if we wanted it to be a fifth below C? If we started with a frequency of 2/3, then a fifth above that frequency would be 2/3 times 3/2, or 1, giving us C. We’ll call that new base frequency F. It’s off our keyboard to the left, but its octaves appear, including 4/3, 8/3, etc.


What if we want to build a major chord based on F? We already have C as a fifth above F. What note is a third above F? In other words, start at 2/3 and multiply by 5/4 (a third) to get 10/12, which simplifies to 5/6. That’s off the keyboard too, but its octaves 5/3, 10/3, 20/3, etc. appear. Let’s call it A. So a major chord in the key of F is F, A, and C.

Does this work for other base frequencies? Try G (3/2). Go up a fifth from G and you get 9/4, which is a D. Go up a third from G and you get 15/8, which is a B. So G, B, and D make up a major chord in the key of G. It works again!

So now it looks like we’re done. We’ve given names and frequencies to all the notes: C (1), D (9/8), E (5/4), F (4/3), G (3/2), A (5/3), and B (15/8). This collection of frequencies is called “just intonation,” with “just” used in the sense of fair and honest. If you play a song in the key of C, you use only those notes and frequencies and it sounds just right.

What about those strange black notes between some, but not all, of the white notes? How do we determine their frequencies? For example, start at D (9/8) for your base note and build a major chord. First you go up a third and get 9/8 times 5/4, or 45/32. That note, corresponding to the black key just to the right of F, is F-sharp (or F♯). To express the frequency as a decimal, 45/32 = 1.406, which is midway between F (4/3 = 1.333) and G (3/2 = 1.500). We could continue working out all the frequencies for the various sharps and flats, but we won’t. It gets tedious, and there is an even more interesting and surprising feature to study.

To complete our D major chord, we need to determine what note is a fifth above D. You get D (9/8) times a fifth (3/2), or 27/16 = 1.688. That is almost the same as A (5/3 = 1.667), but not quite. It’s too close to A to correspond to A-sharp. It’s simply an out-of-tune A. In other words, using the frequencies we have worked out above, if you start with C as your base (that is, you play in the key of C) your A (a sixth above C) corresponds to a frequency ratio of 5/3 = 1.667. If you play, however, using D as your base (you play in the key of D), your A (what should be a fifth above D) has a frequency ratio of 27/16 = 1.688. Different keys have different intonations. Yikes! This is not a problem with only the key of D. It happens again and again for other keys. The intonation is all messed up. You either play in the key of C, or you play out of tune.

To avoid this problem, nowadays instruments are tuned so that there are 12 steps between octaves (the steps include both the white and black keys), where each step corresponds to a frequency ratio of 2^(1/12) = 1.0595. A fifth (seven steps) is then 2^(7/12) = 1.498, which is not exactly 3/2 = 1.500 but is pretty close and—importantly—is the same for all keys. A third (four steps) is 2^(4/12) = 1.260, which is not 5/4 = 1.250 but is not too bad. A keyboard with frequencies adjusted in this way is called “well-tempered.” It means that all the keys sound the same, although each is slightly out of tune compared to just intonation. You don’t have to have your piano tuned every time you change keys.
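To see how close well-tempered tuning comes to just intonation, here is a little Python sketch (my own illustration; the semitone counts are the standard number of steps above C for each white key):

```python
# Just-intonation ratios from the discussion above, and each note's number of
# equal-tempered steps (semitones) above C
just      = {"C": 1, "D": 9/8, "E": 5/4, "F": 4/3, "G": 3/2, "A": 5/3, "B": 15/8}
semitones = {"C": 0, "D": 2,   "E": 4,   "F": 5,   "G": 7,   "A": 9,   "B": 11}

for note, ratio in just.items():
    tempered = 2 ** (semitones[note] / 12)
    print(f"{note}: just = {ratio:.4f}, tempered = {tempered:.4f}, "
          f"off by {100 * (tempered / ratio - 1):+.2f}%")
```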

Johann Sebastian Bach wrote a lovely set of piano pieces called The Well-Tempered Clavichord that showed off the power of well-tempered tuning. My copy of the most famous of these pieces is shown in the photo at the top of this post. Listen to it and other glorious music by Bach below.

 
Bach’s “Prelude No. 1” from The Well-Tempered Clavichord, played by Alexandre Tharaud.

https://www.youtube.com/watch?v=iWoI8vmE8bI


Bach’s “Cello Suite No. 1,” played by Yo Yo Ma.

https://www.youtube.com/watch?v=Rx_IibJH4rA


Bach’s “Toccata and Fugue in D minor.”

https://www.youtube.com/watch?v=erXG9vnN-GI


Bach’s “Jesu, Joy of Man’s Desiring,” performed by Daniil Trifonov.

https://www.youtube.com/watch?v=wEJruV9SPao


Bach’s “Air on the G String.”

https://www.youtube.com/watch?v=pzlw6fUux4o


Bach and Gounod’s “Ave Maria,” sung by Andrea Bocelli.

https://www.youtube.com/watch?v=YR_bGloUJNo

Friday, April 26, 2024

Thinking is Power

Are Electromagnetic Fields
Making Me Ill?

by Brad Roth.

Recently I stumbled on a YouTube video about critical thinking in education, featuring Melanie Trecek-King (of the website “Thinking is Power”), Bertha Vazquez (with the Center for Inquiry) and Daniel Reed (the West Virginia Skeptics Society). I wasn’t expecting much, but as I listened I became enthralled. I highly recommend you spend some time exploring Thinking is Power, and perhaps follow it on Facebook. Trecek-King’s mantra is “Teach Skills Not Facts,” and she is trying to teach the skills of critical thinking. As I listened, I began to ask myself whether I am doing all I can to teach critical thinking. In particular, are critical thinking skills emphasized in Intermediate Physics for Medicine and Biology, and in my popular science book Are Electromagnetic Fields Making Me Ill? I decided to use Are Electromagnetic Fields Making Me Ill? as a test case.

One of the key ideas in my book is the clinical trial. Critical thinking lies at the heart of such trials. In the chapter about the health effects of magnets, I discuss the importance of clinical trials being double blind, randomized, and placebo controlled. Why are these features crucial? They keep you from fooling yourself. In particular, a study being double blind (meaning that “not only the patient, but also the physician, does not know who is in the placebo or treatment group”) is vital to prevent a doctor from inadvertently signalling to the patient which group they are in. One of Trecek-King’s favorite sayings is the quote by Richard Feynman that “you must not fool yourself—and you are the easiest person to fool.” That sums up why double blinding is so important.

Placebos are discussed several times in my book. My favorite example of a placebo comes from a clinical trial to evaluate a new drug. “If a medication is being tested, the placebo is a sugar pill with the same size, shape, color, and taste as that of the drug.” One reason I dwell on placebos is that sometimes they are difficult to design. When testing if permanent magnets can reduce pain, “this means that some patients received treatment with real magnets, and others were treated with objects that resembled magnets but produced a much weaker magnetic field or no magnetic field at all.” It is hard to make a “fake magnet” or a “mock transcranial direct current stimulator.” Yet, designing the placebo is exactly a situation where critical thinking skills are essential.

Critical thinking overlaps with the scientific method, with its emphasis on examining the evidence. In Are Electromagnetic Fields Making Me Ill?, my goal was to present the evidence and then let the reader decide what to believe. But that’s hard. For instance, the experimental laboratory studies about the biological effects of cell phone radiation are a mixed bag. Some studies see effects, and some don’t. You can argue either way depending on what studies you emphasize. I tried to rely on critical reviews to sort all this out (after all, where better to find critical thinking than in a critical review). But even the critical reviews are not unanimous. I probably should’ve examined each article individually and weighed its pros and cons, but that would have taken years (the literature on this topic is vast).

Trecek-King often discusses the importance of finding reliable sources of information. I agree, but this too is not always easy. For instance, what could be more authoritative than a report produced by the National Academy of Sciences? In Are Electromagnetic Fields Making Me Ill? I laud the Stevens report published in the 1990s about the health hazards (or should I say lack of hazards) from powerline magnetic fields. Yet, I’m skeptical about the National Academies report published in 2020 regarding microwave weapons being responsible for the Havana Syndrome. What do I conclude? Sometimes deferring to authority is useful, but not always. You can’t delegate critical thinking.

I have found that one useful tool for teaching and illustrating critical thinking is the Point/Counterpoint articles published in the journal Medical Physics. In Are Electromagnetic Fields Making Me Ill? I cite three such articles, on magnets reducing pain, on cell phone radiation causing cancer, and on the safety of airport backscatter radiation scanners. Each of these articles is in the form of a debate, and any lack of critical thinking will be exposed and debunked in the rebuttals. I wrote

When I taught medical physics to college students, we spent 20 minutes each Friday afternoon discussing a point/counterpoint article. One feature of these articles that makes them such an outstanding teaching tool is that there exists no right answer, only weak or strong arguments. Science does not proceed by proclaiming universal truths, but by accumulating evidence that allows us to be more or less confident in our hypotheses. Conclusions beginning with “the evidence suggests…” are the best science has to offer.
One skill I emphasized in my teaching using IPMB, but which I don’t see mentioned by Trecek-King, is estimation. For instance, when discussing the potential health benefits or hazards of static magnetic fields, I calculated the energy of an electron in a magnetic field and compared it to its thermal energy. Such a simple order-of-magnitude estimate shows that thermal energy is vastly greater than magnetic energy, implying that static magnetic fields should have no effect on chemical reactions. Similarly, in my chapter about powerline magnetic fields, I estimated the electric field induced in the body by a 60 Hz magnetic field and compared it to endogenous electric fields due mainly to the heart’s electrical activity. Finally, in my discussion about cell phone radiation I compared the energy of a single radio-frequency photon to the energy of a chemical bond to prove that cell phones cannot cause cancer by directly disrupting DNA. This ability to estimate is crucial, and I believe it should be included under the umbrella of critical thinking skills.
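Here is what a couple of those estimates look like in a few lines of Python (a sketch of my own; the 1 tesla field and 1 GHz photon frequency are illustrative values, not numbers taken from the book):

```python
h, k, muB, eV = 6.63e-34, 1.38e-23, 9.27e-24, 1.60e-19   # SI units

# Magnetic energy of an electron in a strong 1 T static field vs thermal energy at 310 K
E_magnetic = muB * 1.0      # about 9e-24 J
E_thermal  = k * 310        # about 4e-21 J
print(E_thermal / E_magnetic)   # thermal energy is hundreds of times larger

# Energy of one radio-frequency photon (1 GHz) vs a roughly 1 eV chemical bond
E_photon = h * 1e9          # about 7e-25 J
E_bond   = 1.0 * eV         # about 1.6e-19 J
print(E_bond / E_photon)    # the bond energy is several hundred thousand times larger
```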

In the video I watched, Trecek-King discussed the idea of consensus, and the different use of this term among scientists and nonscientists. When I analyzed transcranial direct current stimulation, I bemoaned the difficulty in finding a consensus among different research groups.
Finding the truth does not come from a eureka moment, but instead from a slow slog ultimately leading to a consensus among scientists.
I probably get closest to what scientists mean by consensus at the close of my chapter on the relationship (actually, the lack of relationship) between 5G cell phone radiation and COVID-19:
Scientific consensus arises when a diverse group of scientists openly scrutinizes claims and critically evaluates evidence.
Consensus is only valuable if it arises from individuals independently examining a body of evidence, debating an issue with others, and coming to their own conclusion. Peer review, so important in science, is one way scientists thrash out a consensus. I wrote
The reason for peer review is to force scientists to convince other scientists that their ideas and data are sound.
Perhaps the biggest issue in critical thinking is bias. One difficulty is that bias comes in many forms. One example is publication bias: “the tendency for only positive results to be published.” Another is recall bias that can infect a case-control epidemiological study. But the really thorny type of bias arises from prior beliefs that scientists may be reluctant to abandon. In Are Electromagnetic Fields Making Me Ill? I tell the story of how Robert Tucker and Otto Schmitt performed an experiment to determine if people could detect 60 Hz magnetic fields. They spent five years examining their experiment for possible systematic errors, and eventually concluded that 60 Hz fields are not detectable. I wrote “One reason the bioelectric literature is filled with inconsistent results may be that not all experimenters are as diligent as Robert Tucker and Otto Schmitt.”

After listening to Trecek-King’s video, I began to wonder if the Tucker and Schmitt experiment might alternatively be viewed as a cautionary tale about bias. Was their long effort a heroic example of detecting and eliminating systematic error, or was it a bias marathon where they slaved away until they finally came to the conclusion they wanted? I side with the heroic interpretation, but it does make me wonder about the connection between bias and experimental design. The hallmark of a good experimental scientist is the ability to identify and remove systematic errors from an experiment. Yet one must be careful to root out all systematic errors, not just those that affect the results in one direction. The conclusion: science is difficult, and you must be constantly on guard about fooling yourself.

I reexamined Are Electromagnetic Fields Making Me Ill? to search for signs of my own biases, and came away a little worried. For instance, when talking about 5G cell phone radiation risks, I wrote
The 5G cell phone debate strikes me as déjà vu. First Mesmer’s “animal magnetism” treatments ascended in popularity and then declined. Next the use of magnets for therapy rose and fell. Then came the power line debate; a crescendo followed by a diminuendo. Later the dispute over traditional cell phones came and went. Now, we are doing it all over again for 5G.

After listening to Trecek-King’s video, I am nervous that this was an inadvertent confession of bias. Do my past experiences predispose me to reject claims about electromagnetic fields being dangerous? Or am I merely stating a hard-earned opinion based on experience? Or are those the same thing? Is it bias to believe that Lucy will pull that football away from Charlie Brown at the last second?

I tried to focus my book on the evidence and not on personal opinions, but can we ever be sure? If I was a proponent of the idea that cell phones cause cancer, I might point to the above déjà vu quote as evidence that the author of Are Electromagnetic Fields Making Me Ill? was biased. Yet, if you asked me now if I still believed what I wrote in that quote, I would say “you betcha I do.” Does my statement have relevance to the 5G cell phone debate? I think it does, although it’s no substitute for hard evidence. Can we ever truly free ourselves from our biases? Perhaps not, but at least we can be aware of them, so as to be on guard.

All this discussion about critical thinking and bias is related to the claims of pseudoscience and alternative medicine. At the end of Are Electromagnetic Fields Making Me Ill? I ponder the difficulty of debunking false claims.

The study of biological effects of weak electric and magnetic fields attracts pseudoscientists and cranks. Sometimes I have a difficult time separating the charlatans from the mavericks. The mavericks—those holding nonconformist views based on evidence (sometimes a cherry-picked selection of the evidence)—can be useful to science, even if they are wrong. The charlatans—those snake-oil salesmen out to make a quick buck—either fool themselves or fool others into believing silly ideas or conspiracy theories. We should treat the mavericks with respect and let peer review correct their errors. We should treat the charlatans with disdain. I wish for the wisdom to tell them apart.
I’ll give Trecek-King the last word. Another of her mantras, which to me sums up why we care about critical thinking, is:
I am not saying that all of our problems can be solved with critical thinking. I’m saying that it is our best chance.

Critical Thinking in Education, featuring Melanie Trecek-King, Bertha Vazquez, and Daniel Reed

https://www.youtube.com/watch?v=QkPtC3gn6JE


Lucy, Charlie Brown, and the football.

https://www.youtube.com/watch?v=mC5MzvgE4c0


And another.

https://www.youtube.com/watch?v=ddmXM-96-no


 A Life Preserver for Staying Afloat in a Sea of Misinformation.

https://www.youtube.com/watch?v=JkGsxtbetts

Friday, April 19, 2024

Good Vibrations

In Chapter 10 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss negative feedback loops. Feedback is often used to maintain an important variable nearly constant. This idea underlies homeostasis and is fundamental to physiology.
"Are Physiological Oscillations Physiological?" by Ivy Xiong and Alan Garfinkel, superimposed on Intermediate Physics for Medicine and Biology.
"Are Physiological Oscillations Physiological?"
by Ivy Xiong and Alan Garfinkel.

So imagine my surprise when I read Ivy Xiong and Alan Garfinkel’s topical review in The Journal of Physiology titled “Are Physiological Oscillations Physiological?” (In Press), which changed the way I look at homeostasis and feedback. Its abstract states
Despite widespread and striking examples of physiological oscillations, their functional role is often unclear. Even glycolysis, the paradigm example of oscillatory biochemistry, has seen questions about its oscillatory function. Here, we take a systems approach to argue that oscillations play critical physiological roles, such as enabling systems to avoid desensitization, to avoid chronically high and therefore toxic levels of chemicals, and to become more resistant to noise. Oscillation also enables complex physiological systems to reconcile incompatible conditions such as oxidation and reduction, by cycling between them, and to synchronize the oscillations of many small units into one large effect. In pancreatic β-cells, glycolytic oscillations synchronize with calcium and mitochondrial oscillations to drive pulsatile insulin release, critical for liver regulation of glucose. In addition, oscillation can keep biological time, essential for embryonic development in promoting cell diversity and pattern formation. The functional importance of oscillatory processes requires a re-thinking of the traditional doctrine of homeostasis, holding that physiological quantities are maintained at constant equilibrium values, a view that has largely failed in the clinic. A more dynamic approach will initiate a paradigm shift in our view of health and disease. A deeper look into the mechanisms that create, sustain and abolish oscillatory processes requires the language of nonlinear dynamics, well beyond the linearization techniques of equilibrium control theory. Nonlinear dynamics enables us to identify oscillatory (‘pacemaking’) mechanisms at the cellular, tissue and system levels.
In their introduction, Xiong and Garfinkel get straight to the point. Homeostasis examines an equilibrium stabilized by negative feedback loops. Such systems are studied by linearizing the system around the equilibrium point. Oscillatory systems, on the other hand, correspond to limit cycle attractors in a nonlinear system. The regulatory mechanism must both create the oscillation and stabilize it.

In Russ’s and my defense, we do talk about oscillations in our chapter on feedback. One source of oscillations is when a feedback loop has two time constants (Section 10.6), but these aren’t what Xiong and Garfinkel are talking about because those oscillations are transient and only affect the approach to equilibrium. A true oscillation is more like the case of negative feedback plus a time delay (Sec. 10.10 of IPMB). Russ and I mention that such a model can lead to sustained oscillations, but in light of Xiong and Garfinkel’s review I wish now we had stressed that observation more. We analyzed the specific case but missed the big picture; the paradigm shift.

Another mechanism Xiong and Garfinkel highlight is what they call “negative resistance.” They use the FitzHugh-Nagumo model (analyzed in Problem 35 of Chapter 10 in IPMB) as an example of “short-range destabilizing positive feedback, making the equilibrium point inherently unstable.” Although they don’t mention it, I think another example is the Hodgkin and Huxley model of the squid axon with an added constant leakage current destabilizing the resting potential (Section 6.18 of IPMB). Xiong and Garfinkel do discuss oscillations in the heart’s sinoatrial node, which operate by a mechanism similar to the Hodgkin-Huxley model with that extra current.
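For readers who want to see such a limit cycle emerge, here is a minimal Python sketch of the FitzHugh-Nagumo equations (the parameter values are standard illustrative choices, not taken from Xiong and Garfinkel’s review):

```python
import numpy as np

# FitzHugh-Nagumo model: fast "voltage" variable v, slow recovery variable w
a, b, eps, I = 0.7, 0.8, 0.08, 0.5    # illustrative parameters
dt, t_end = 0.01, 400.0
n = int(t_end / dt)
v, w = np.empty(n), np.empty(n)
v[0], w[0] = -1.0, 1.0                # an arbitrary starting point

for i in range(n - 1):                # simple Euler integration
    v[i + 1] = v[i] + dt * (v[i] - v[i]**3 / 3 - w[i] + I)
    w[i + 1] = w[i] + dt * eps * (v[i] + a - b * w[i])

# After the initial transient, v keeps swinging between roughly -2 and +2
# instead of settling to a fixed equilibrium: a limit cycle, not a set point.
print(v[n // 2:].min(), v[n // 2:].max())
```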

A third mechanism generating oscillations is illustrated by premature ventricular contractions in the heart caused by “the repolarization gradient that manifests as the T wave.” As a cardiac modeling guy, I am embarrassed that I’m not more aware of this fascinating idea. To learn about it, I recommend you (and I) start with Xiong and Garfinkel’s review (we have no excuse; it’s open access) and then examine some of their references.

Even more interesting is Xiong and Garfinkel’s contention that “oscillations are not an unwanted product of negative feedback regulation. Rather, they represent an essential design feature of nearly all physiological systems.” They’re a feature, not a bug. Presumably evolution has selected for oscillating systems. I wonder how. (Xiong already has a background in molecular biology, bioinformatics, and mathematical modeling; perhaps while she’s at it she should add evolutionary biology.) Let me stress that Xiong and Garfinkel are not merely speculating; they provide many examples and cite many publications supporting this hypothesis.

A key paragraph in their paper introduces a new term: “homeodynamics.”
“If oscillatory processes are central to physiology, then we will need to take a fresh look at the doctrine of homeostasis. We suggest that the concept of homeostasis needs to be parsed into two separate components. The first component is the idea that physiological processes are regulated and must respond to environmental changes. This is obviously critical and true. The second component is that this physiological regulation takes the form of control to a static equilibrium point, a view which we believe is largely mistaken. ‘Homeodynamics’ is a term that has been used in the past to try to combine regulation with the idea that physiological processes are oscillatory... It may be time to revive this terminology.”

This topical review reminds me of Leon Glass and Michael Mackey’s wonderful book From Clocks to Chaos (cited in IPMB), which introduced the idea of a “dynamical disease.” Like Glass and Mackey, Xiong and Garfinkel present a convincing argument in support of a new paradigm in biology, rooted in the mathematics of nonlinear dynamics. 

In their cleverly titled section on “Bad Vibrations,” Xiong and Garfinkel note that “not all oscillations serve a positive physiological function. Oscillations in physiology can also be pathological and dysfunctional.” I suspect their section title is a sly reference to the Beach Boys hit “Good Vibrations.” After all, Xiong and Garfinkel are both from California and their review can be summed up as the mathematical modeling of good vibrations.

From Clocks to Chaos, by Glass and Mackey, superimposed on the cover of Intermediate Physics for Medicine and Biology.
From Clocks to Chaos,
by Glass and Mackey.

You know what: I think the upcoming 6th edition of Intermediate Physics for Medicine and Biology needs more emphasis on nonlinear dynamics. I’m lucky Xiong and Garfinkel’s article came out just as Gene Surdutovich and I were revising and updating the 5th edition of IPMB. God Only Knows that as I sit here In My Room working on the revision I’ll have Fun, Fun, Fun. Don’t Worry Baby, now that it’s spring and we have The Warmth of the Sun, Gene and I should make good progress. Unfortunately, because we’re working here in Michigan, we can’t occasionally take a break to Catch A Wave. Wouldn’t It be Nice if we could go on a Surfin’ Safari? (Sorry, I got a little carried away.)

So the answer to the question in the title—are physiological oscillations physiological?—is a resounding “Yes!” I’ll conclude with Xiong and Garfinkel’s final sentence, which I wholeheartedly agree with (except for the annoying British spelling of “center”):

It is time to bring this conception of physiological oscillations to the centre of biological discourse.

“Good Vibrations” by the Beach Boys.

https://www.youtube.com/watch?v=apBWI6xrbLY

 


 Ivy Xiong discussing mathematical modeling of biological oscillations.

https://www.youtube.com/watch?v=AUmpgrDpT08&list=PLreJ534rlE5XZdRbtftkW9gD1u6w-jTZt&index=6&t=44s