Friday, June 28, 2024

Could Ocean Acidification Deafen Dolphins?

In Chapter 13 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the attenuation of sound.
Water transmits sound better than air, but its attenuation is an even stronger function of frequency. It also depends on the salt content. At 1000 Hz, sound attenuates in fresh water by about 4 × 10⁻⁴ dB km⁻¹. The attenuation in sea water is about a factor of ten times higher (Lindsay and Beyer 1989). The low attenuation of sound in water (especially at low frequencies) allows aquatic animals to communicate over large distances (Denny 1993).
“Ocean Acidification and the Increasing Transparency of the Ocean to Low-Frequency Sound,” Oceanography, 22: 86–93, 2009, superimposed on the cover of Intermediate Physics for Medicine and Biology.
To explore the attenuation of sound in seawater further—and especially to examine that mysterious comment that “it also depends on the salt content”—I will quote from an article by Peter Brewer and Keith Hester, titled “Ocean Acidification and the Increasing Transparency of the Ocean to Low-Frequency Sound” (Oceanography, Volume 22, Pages 86–93, 2009). The abstract is given below.
As the ocean becomes more acidic, low-frequency (~1–3 kHz and below) sound travels much farther due to changes in the amounts of pH-dependent species such as dissolved borate and carbonate ions, which absorb acoustic waves. The effect is quite large; a decline in pH of only 0.3 causes a 40% decrease in the intrinsic sound absorption properties of surface seawater. Because acoustic properties are measured on a logarithmic scale, and neglecting other losses, sound at frequencies important for marine mammals and for naval and industrial interests will travel some 70% farther with the ocean pH change expected from a doubling of CO2. This change will occur in surface ocean waters by mid century. The military and environmental consequences of these changes have yet to be fully evaluated. The physical basis for this effect is well known: if a sound wave encounters a charged molecule such as a borate ion that can be “squeezed” into a lower-volume state, a resonance can occur so that sound energy is lost, after which the molecule returns to its normal state. Ocean acousticians recognized this pH-sound linkage in the early 1970s, but the connection to global change and environmental science is in its infancy. Changes in pH in the deep sound channel will be large, and very-low-frequency sound originating there can travel far. In practice, it is the frequency range of ~ 300 Hz–10 kHz and the distance range of ~ 200–900 km that are of interest here.
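
To see where that “70% farther” figure comes from, here is a minimal back-of-the-envelope sketch (my own check, not the authors’ calculation): for a fixed total attenuation in decibels, the distance a sound wave can travel scales inversely with the absorption coefficient, so a 40% drop in absorption stretches the range by a factor of 1/0.6.

# Rough check of the abstract's numbers (my own sketch, not Brewer and Hester's model).
# For a fixed allowable loss in dB, range scales as 1/alpha (absorption coefficient).
alpha_drop = 0.40                      # 40% decrease in absorption when pH falls by 0.3
range_factor = 1.0 / (1.0 - alpha_drop)
print(f"Range increases by a factor of {range_factor:.2f} "
      f"({(range_factor - 1) * 100:.0f}% farther)")   # ~1.67, i.e. roughly 70% farther
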
To get additional insight, let us examine the structure of the negatively charged borate ion. It consists of a central boron atom surrounded by four hydroxyl (OH) groups in a tetrahedral structure: B(OH)₄⁻. Also of interest is boric acid, which is uncharged and has the boron atom attached to three OH groups in a planar structure: B(OH)₃. In water, the two are in equilibrium

B(OH)₄⁻ + H⁺ ⇌ B(OH)₃ + H₂O .

The equilibrium depends on pH and pressure. Brewer and Hester write
Boron exists in seawater in two forms—the B(OH)₄⁻ ion and the un-ionized form B(OH)₃; their ratio is set by the pH of bulk seawater, and as seawater becomes more acidic, the fraction of the ionized B(OH)₄⁻ form decreases. Plainly, the B(OH)₄⁻ species is a bigger molecule than B(OH)₃ and, because of its charge, also carries with it associated water molecules as a loose assemblage. This weakly associated complex can be temporarily compressed into a lower-volume form by the passage of a sound wave; there is just enough energy in a sound wave to do it. This compression takes work and thus robs the sound wave of some of its energy. Once the wave front has passed by, the B(OH)₄⁻ molecules return to their original volumes. Thus, in a more acidic ocean with fewer of the larger borate ions to absorb sound energy, sound waves will travel farther.
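
How sensitive is the borate fraction to pH? Here is a minimal sketch, treating the boric acid–borate system with the usual acid–base equilibrium expression and assuming a nominal pK of about 8.6 for boric acid in seawater (the exact value depends on temperature and salinity, so treat it as an illustrative assumption).

# Fraction of boron present as borate, B(OH)4-, versus pH.
# Minimal sketch: simple acid-base equilibrium with an assumed nominal pK of ~8.6
# for boric acid in seawater (an illustrative value, not taken from the paper).
pK = 8.6

def borate_fraction(pH):
    """Fraction of total boron in the ionized B(OH)4- form."""
    return 1.0 / (1.0 + 10.0 ** (pK - pH))

for pH in (8.1, 7.8):   # roughly today's surface ocean vs. a pH lower by 0.3
    print(f"pH {pH}: borate fraction = {borate_fraction(pH):.2f}")

With this assumed pK, the borate fraction falls by roughly 40% when the pH drops by 0.3, in line with the change in sound absorption quoted in the abstract.
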
As sound waves travel farther, the oceans could become noisier. This behavior has even led one blogger to ask “could ocean acidification deafen dolphins?” 

Researchers at the Woods Hole Oceanographic Institution are skeptical of a dramatic change in sound wave propagation. In an article asking “Will More Acidic Oceans be Noisier?” science reporter Cherie Winner describes modeling studies by Woods Hole scientists such as Tim Duda. Winner explains
Results of the three models varied slightly in their details, but all told the same tale: The maximum increase in noise level due to more acidic seawater was just 2 decibels by the year 2100—a barely perceptible change compared to noise from natural events such as passing storms and big waves.
Duda said the main factor controlling how far sound travels in the seas will be the same in 100 years as it is today: geometry. Most sound waves will hit the ocean bottom and be absorbed by sediments long before they could reach whales thousands of kilometers away.
The three teams published their results in three papers in the September 2010 issue of the Journal of the Acoustical Society of America.
“We did these studies because of the misinformation going around,” said Duda. “Some papers implied, ‘Oh my gosh, the sound absorption will be cut in half, therefore the sound energy will double, and the ocean will be really noisy.’ Well, no, it doesn’t work that way.” 
So I guess we shouldn’t be too concerned about deafening those dolphins, but this entire subject is fascinating and highlights the role of physics in understanding medicine and biology.

Friday, June 21, 2024

Patrick Blackett and Pair Production

Patrick Blackett.
In Chapter 15 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe pair production.
A photon… can produce a particle-antiparticle pair: a negative electron and a positron… Since the rest energy (mₑc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mₑc² = 1.02 MeV…

Pair production always takes place in the Coulomb field of another particle (usually a nucleus) that recoils to conserve momentum.
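
As a quick aside (my own numerical sketch, not part of IPMB’s text), the requirement that the nucleus recoil to conserve momentum raises the threshold only negligibly above 2mₑc²:

# Pair-production threshold: the photon must supply 2*me*c^2 plus a tiny bit of
# recoil kinetic energy for the nucleus. The recoil formula below is the standard
# conservation-of-energy-and-momentum result; the carbon nucleus is an illustrative choice.
me_c2 = 0.511              # electron rest energy, MeV
M_c2 = 12 * 931.5          # rest energy of a carbon-12 nucleus, MeV (approximate)

threshold = 2 * me_c2 * (1 + me_c2 / M_c2)
print(f"Threshold in the field of a carbon nucleus: {threshold:.5f} MeV")  # ~1.02205 MeV
# The correction is only about 50 eV here, which is why the text quotes simply 1.02 MeV.
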

I often wonder how surprising or unexpected phenomena are discovered. Pair production was first observed by English physicist Patrick Blackett. Here is part of the entry about Blackett in Asimov’s Biographical Encyclopedia of Science & Technology.

Asimov’s Biographical Encyclopedia of Science & Technology.
BLACKETT, Patrick Maynard
English physicist
Born: London, November 18, 1897
Died: London, July 13, 1974

Blackett entered a naval school in 1910, at thirteen, to train as a naval officer. The outbreak of World War I came just in time to make use of him and he was at sea throughout the war, taking part in the Battle of Jutland.

With the war over, however, he resigned from the navy and went to Cambridge, where he studied under Ernest Rutherford and obtained his master’s degree in 1923. In 1933 he became professor of physics at the University of London, moving on to Manchester in 1937.

It was Blackett who first turned to the wholesale use of the Wilson cloud chamber [a box containing moist air which produces a visible track when an ion passes through it, condensing the moisture into tiny droplets]…

In 1935 Blackett showed that gamma rays, on passing through lead, sometimes disappear, giving rise to a positron and an electron. This was the first clear-cut case of the conversion of energy into matter. This confirmed the famous E = mc² equation of Einstein as precisely as did the more numerous examples, earlier observed, of the conversion of matter to energy (and even more dramatically).

During World War II, Blackett worked on the development of radar and the atomic bomb… After the war, however, he was one of those most vociferously concerned with the dangers of nuclear warfare. In 1948 he was awarded the Nobel Prize in physics for his work with and upon the Wilson cloud chamber.

More detail about the discovery of pair production specifically can be found at the Linda Hall Library website.

In 1929, Paul Dirac had predicted the possibility of antimatter, specifically anti-electrons, or positrons, as they would eventually be called. His prediction was purely a result of his relativistic quantum mechanics, and had no experimental basis, so Blackett (with the help of an Italian visitor, Giuseppe Occhialini), went looking, again with the help of a modified cloud chamber. Blackett suspected that the newly discovered cosmic rays were particles, and not gamma rays (as Robert Millikan at Caltech maintained). Blackett thought that a cosmic particle traveling very fast might have the energy to strike a nucleus and create an electron-positron pair, as Dirac predicted. They installed a magnet around the cloud chamber, to make the particles curve, and rigged the cloud chamber to a Geiger counter, so that the camera was triggered only when the Geiger counter detected an interaction. As a result, their photographs showed interactions nearly every time. They took thousands, and by 1932, 8 of those showed what appeared to be a new particle with the mass of an electron but curving in the direction of a positively charged particle. They had discovered the positron. But while Blackett, a very careful experimenter, checked and double-checked the results, a young American working for Millikan, Carl Anderson, detected positive electrons in his cloud chamber at Caltech in August of 1932, and he published his results first, in 1933. Anderson's discovery was purely fortuitous – he did not even know of Dirac's prediction. But in 1936, Anderson received the Nobel Prize in Physics, and Blackett and Occhialini did not, which irritated the British physics community no end, although Blackett never complained or showed any concern. His own Nobel Prize would come in 1948, when he was finally recognized for his break-through work in particle physics.
If you watched last summer’s hit movie Oppenheimer, you might recall a scene where a young Oppenheimer tried to poison his supervisor with an apple injected with cyanide. That supervisor was Patrick Blackett.

The Nobel Prize committee always summarizes a recipient’s contribution in a brief sentence. I’ll end with Blackett’s summary:
"for his development of the Wilson cloud chamber method, and his discoveries therewith in the fields of nuclear physics and cosmic radiation"


The scene from Oppenheimer when Oppie poisons Blackett’s apple.

https://www.youtube.com/watch?v=P8RBHS8zrfk

 

 

Patrick Blackett—Draw My Life. From the Operational Research Society.

https://www.youtube.com/watch?v=jg5IQ5Hf2G0

Friday, June 14, 2024

Bernard Leonard Cohen (1924–2012)

The Nuclear Energy Option: An Alternative for the 90s, by Bernard Cohen, superimposed on Intermediate Physics for Medicine and Biology.
Today is the one hundredth anniversary of the birth of American nuclear physicist Bernard Cohen. In Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Cohen mainly in the context of his work on the risk of low levels of ionizing radiation and his opposition to the linear no threshold model. Today, I will examine another aspect of his work: his advocacy for nuclear power. In particular, I will review his 1990 book The Nuclear Energy Option: An Alternative for the 90s.

Why read a 35-year-old book about a rapidly changing technology like energy? I admit, the book is in some ways obsolete. Cohen insists on using rems as his unit of radiation effective dose, rather than the more modern sievert (Sv). He discusses the problem of greenhouse gases and global warming, although in a rather hypothetical way as just one of the many problems with burning fossil fuels. He was optimistic about the future of nuclear energy, but we know now that in the decades following the book’s publication nuclear power in the United States did not do well (the average age of our nuclear power plants is over 40 years). Yet other features of the book have withstood the test of time. As our world now faces the dire consequences of climate change, the option of nuclear energy is an urgent consideration. Should we reconsider nuclear power as an alternative to coal/oil/natural gas? I suspect Cohen would say yes.

In Chapter 4 of The Nuclear Energy Option Cohen writes
We have seen that we will need more power plants in the near future, and that fueling them with coal, oil, or gas leads to many serious health, environmental, economic, and political problems. From the technological points of view, the obvious way to avoid these problems is to use nuclear fuels. They cause no greenhouse effect, no acid rain, no pollution of the air with sulfur dioxide, nitrogen oxides, or other dangerous chemicals, no oil spills, no strain on our economy from excessive imports, no dependence on unreliable foreign sources, no risk of military ventures. Nuclear power almost completely avoids all the problems associated with fossil fuels. It does have other impacts on our health and environment, which we will discuss in later chapters, but you will see that they are relatively minor.
He then compares the safety and economics of nuclear energy with other options, including solar and coal-powered plants for generating electricity. Some of the conclusions are surprising. For instance, you might think that energy conservation is always good (who roots for waste?). But Cohen writes
Another energy conservation strategy is to seal buildings more tightly to reduce the escape of heat, but this traps unhealthy materials like radon inside. Tightening buildings to reduce air leakage in accordance with government recommendations would give the average American an LLE [loss of life expectancy] of 20 days due to increased radon exposure, making conservation by far the most dangerous energy strategy from the standpoint of radiation exposure!
His Chapter 8 on Understanding Risk is a classic. He begins
One of the worst stumbling blocks in gaining widespread public acceptance of nuclear power is that the great majority of people do not understand and quantify the risks we face. Most of us think and act as though life is largely free of risk. We view taking risks as foolhardy, irrational, and assiduously to be avoided….

Unfortunately, life is not like that. Everything we do involves risk.

He then makes a catalog of risks, in which he converts risk to the average expected loss of life expectancy for each case. This LLE is really just a measure of probability. For instance, if getting a certain disease shortens your life by ten years, but there is only one chance out of a hundred of contracting that disease, it would correspond to an LLE of 0.1 years, or 36 days. In his catalog, the riskiest activity is living in poverty, which has an LLE of 3500 days (almost ten years). Smoking cigarettes results in an LLE of 2300 days. Being 30 pounds overweight is 900 days. Reducing the speed limit on our highways from 65 to 55 miles per hour would reduce traffic accidents and give us an extra 40 days. At the bottom of his list is living near a nuclear reactor, with a risk of only 0.4 days (less than ten hours). He makes a compelling case that nuclear power is extraordinarily safe.
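
The LLE bookkeeping is easy to reproduce. Here is a minimal sketch of the arithmetic (the catalog values are the ones quoted above from Cohen; the disease example is the hypothetical one in the text):

# Loss of life expectancy (LLE) = probability of the outcome x years lost.
def lle_days(probability, years_lost):
    return probability * years_lost * 365

# Hypothetical example from the text: 1-in-100 chance of a disease that costs ten years.
print(f"Disease example: {lle_days(0.01, 10):.0f} days")   # ~36 days

# A few entries from Cohen's catalog (days), as quoted above.
catalog = {
    "living in poverty": 3500,
    "smoking cigarettes": 2300,
    "being 30 pounds overweight": 900,
    "living near a nuclear reactor": 0.4,
}
for activity, days in catalog.items():
    print(f"{activity}: {days} days")
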

Cohen summarizes these risks in a classic figure, shown below.

Figure 1 from Chapter 8 of The Nuclear Energy Option, superimposed on Intermediate Physics for Medicine and Biology.

Our poor risk perception causes us (and our government) to spend money foolishly. He translates society’s efforts to reduce risk into the cost in dollars to save one life.

The $2.5 billion we spend to save a single life in making nuclear power safer could save many thousands of lives if spent on radon programs, cancer screening, or transportation safety. This means that many thousands of people are dying unnecessarily every year because we are spending this money in the wrong way.
He concludes
The failure of the American public to understand and quantify risk must rate as one of the most serious and tragic problems for our nation.
I agree.

Cohen believes that Americans have a warped view of the risk of nuclear energy.

The public has become irrational over fear of radiation. Its understanding of radiation dangers has virtually lost all contact with the actual dangers as understood by scientists.
Apparently conspiracy theories are not just a problem we face today; they were a problem decades ago as well, when the scientific establishment was accused of hiding the “truth” about radiation risks. Cohen counters
To believe that such highly reputable scientists conspired to practice deceit seems absurd, if for no other reason than that it would be easy to prove that they had done so and the consequences to their scientific careers would be devastating. All of them had such reputations that they could easily obtain a variety of excellent and well-paying academic positions independent of government or industry financing, so they were not vulnerable to economic pressures.

But above all, they are human beings who have chosen careers in a field dedicated to protection of the health of their fellow human beings; in fact, many of them are M.D.’s who have foregone financially lucrative careers in medical practice to become research scientists. To believe that nearly all of these scientists were somehow involved in a sinister plot to deceive the public indeed challenges the imagination.
To me, these words sound as if Cohen were talking now about vaccine hesitancy or climate change denial, rather than opposition to nuclear energy. 

What do I think? I would love to have solar and wind supply all our energy needs. But until they can, I vote for increasing our use of nuclear energy over continuing to burn fossil fuels (especially coal). Global warming is already bad and getting worse. It is a dire threat to us all and to our future generations. We should not rule out nuclear energy as one way to address climate change.

Happy birthday, Bernard Cohen! I think if you had lived to be 100 years old, you would have found so many topics to write about today. How we need your rational approach to risk assessment. 

 Firing Line with William F. Buckley Jr.: The Crisis of Nuclear Energy.

https://www.youtube.com/watch?v=ipOrGaXn-r4&list=RDCMUC9lqW3pQDcUuugXLIpzcUdA&start_radio=1&rv=ipOrGaXn-r4&t=52

Friday, June 7, 2024

The Magnetocardiogram

I recently published a review in the American Institute of Physics journal Biophysics Reviews about the magnetocardiogram (Volume 5, Article 021305, 2024).

The magnetic field produced by the heart’s electrical activity is called the magnetocardiogram (MCG). The first twenty years of MCG research established most of the concepts, instrumentation, and computational algorithms in the field. Additional insights into fundamental mechanisms of biomagnetism were gained by studying isolated hearts or even isolated pieces of cardiac tissue. Much effort has gone into calculating the MCG using computer models, including solving the inverse problem of deducing the bioelectric sources from biomagnetic measurements. Recently, most magnetocardiographic research has focused on clinical applications, driven in part by new technologies to measure weak biomagnetic fields.

This graphical abstract sums the article up. 


Let me highlight one paragraph of the review, about some of my own work on the magnetic field produced by action potential propagation in a slab of cardiac tissue.

The bidomain model led to two views of how an action potential wave front propagating through cardiac muscle produces a magnetic field [58]. The first view (Fig. 7a) is the traditional one. It shows a depolarization wave front and its associated impressed current propagating to the left (in the x direction) through a slab of tissue. The extracellular current returns through the superfusing saline bath above and below the slab. This geometry generates a magnetic field in the negative y direction, like that for the nerve fiber shown in Fig. 5. This mechanism for producing the magnetic field does not require anisotropy. The second view (Fig. 7b) removes the superfusing bath. If the tissue were isotropic (or anisotropic with equal anisotropy ratios) the intracellular currents would exactly cancel the equal and opposite interstitial currents, producing no net current and no magnetic field. If, however, the tissue has unequal anisotropy ratios and the wave front is propagating at an angle to the fiber axis, the intracellular current will be rotated toward the fiber axis more than the interstitial current, forming a net current flowing in the y direction, perpendicular to the direction of propagation [59–63]. This line of current generates an associated magnetic field. These two views provide different physical pictures of how the magnetic field is produced in cardiac tissue. In one case, the intracellular current forms current dipoles in the direction parallel to propagation, and in the other it forms lines of current in the direction perpendicular to propagation. Holzer et al. recorded the magnetic field created by a wave front in cardiac muscle with no superfusing bath present, and observed a magnetic field distribution consistent with Fig. 7b [64]. In general, both mechanisms for producing the magnetic field operate simultaneously.
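
To make the second mechanism concrete, here is a minimal sketch (not from the review) that assumes nominal bidomain conductivities and, for illustration, the same potential gradient driving both current densities, oriented at 45° to the fiber axis. It shows that the intracellular current is rotated toward the fibers more than the interstitial current, so the two cannot cancel and a net current remains.

import math

# Minimal sketch of the unequal-anisotropy-ratio mechanism.
# Assumed nominal bidomain conductivities (S/m); values are illustrative only.
sigma_i = (0.2, 0.02)   # intracellular: (along fibers, across fibers)
sigma_e = (0.2, 0.08)   # interstitial:  (along fibers, across fibers)

# Unit potential gradient at 45 degrees to the fiber (x) axis.
theta = math.radians(45)
grad = (math.cos(theta), math.sin(theta))

def current_angle(sigma, grad):
    """Angle (degrees from the fiber axis) of J = sigma . grad."""
    jx, jy = sigma[0] * grad[0], sigma[1] * grad[1]
    return math.degrees(math.atan2(jy, jx))

print(f"Intracellular current: {current_angle(sigma_i, grad):.1f} deg from fibers")  # ~5.7
print(f"Interstitial current:  {current_angle(sigma_e, grad):.1f} deg from fibers")  # ~21.8
# Because the two angles differ, equal-and-opposite cancellation is impossible,
# leaving a net current with a component perpendicular to propagation.
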

 

FIG. 7. Two mechanisms for how cardiac tissue produces a magnetic field.

This figure is a modified (and colorized) version of an illustration that appeared in our paper in the Journal of Applied Physics.

58. R. A. Murdick and B. J. Roth, “A comparative model of two mechanisms from which a magnetic field arises in the heart,” J. Appl. Phys. 95, 5116–5122 (2004). 

59. B. J. Roth and M. C. Woods, “The magnetic field associated with a plane wave front propagating through cardiac tissue,” IEEE Trans. Biomed. Eng. 46, 1288–1292 (1999). 

60. C. R. H. Barbosa, “Simulation of a plane wavefront propagating in cardiac tissue using a cellular automata model,” Phys. Med. Biol. 48, 4151–4164 (2003). 

61. R. Weber dos Santos, F. Dickstein, and D. Marchesin, “Transversal versus longitudinal current propagation on cardiac tissue and its relation to MCG,” Biomed. Tech. 47, 249–252 (2002). 

62. R. Weber dos Santos, O. Kosch, U. Steinhoff, S. Bauer, L. Trahms, and H. Koch, “MCG to ECG source differences: Measurements and a two-dimensional computer model study,” J. Electrocardiol. 37, 123–127 (2004). 

63. R. Weber dos Santos and H. Koch, “Interpreting biomagnetic fields of planar wave fronts in cardiac muscle,” Biophys. J. 88, 3731–3733 (2005). 

64. J. R. Holzer, L. E. Fong, V. Y. Sidorov, J. P. Wikswo, and F. Baudenbacher, “High resolution magnetic images of planar wave fronts reveal bidomain properties of cardiac tissue,” Biophys. J. 87, 4326–4332 (2004).

The first author is Ryan Murdick, an Oakland University graduate student who analyzed the mechanism of magnetic field production in the heart for his master’s degree. He then went to Michigan State University for a PhD in physics and now works for Renaissance Scientific in Boulder, Colorado. I’ve always thought Ryan’s thesis topic about the two mechanisms is underappreciated, and I’m glad I had the opportunity to reintroduce it to the biomagnetism community in my review. It’s hard to believe it has been twenty years since we published that paper. It seems like yesterday.

Tuesday, June 4, 2024

Yesterday’s Attack on Dr. Anthony Fauci During his Testimony at the Congressional Select Subcommittee on the Coronavirus Pandemic Angers Me

Yesterday’s attack on Dr. Anthony Fauci during his testimony at the Congressional Select Subcommittee on the Coronavirus Pandemic angers me. I like Dr. Fauci and I like other vaccine scientists such as Peter Hotez and writers such as David Quammen who tell their tales. But, it isn’t really about them. What upsets me most is the attack on science itself; on the idea that evidence matters; on the idea that science is the way to determine the truth; on the idea that truth is important. I’m a scientist; it’s an attack on me. We must call it out for what it is: a war on science. #StandWithScience

Friday, May 31, 2024

Can the Microwave Auditory Effect Be ‘Weaponized’?

“Can the Microwave Auditory Effect Be ‘Weaponized’?”
I was recently reading Ken Foster, David Garrett, and Marvin Ziskin’s paper “Can the Microwave Auditory Effect Be Weaponized?” (Frontiers in Public Health, Volume 9, 2021). It analyzed whether microwave weapons could be used to “attack” diplomats and thereby cause the Havana syndrome. While I am interested in the Havana syndrome (I discussed it in my book Are Electromagnetic Fields Making Me Ill?), today I merely want to better understand Foster et al.’s proposed mechanism by which an electromagnetic wave can induce an acoustic wave in tissue.

As is my wont, I will present this mechanism as a homework problem at a level you might find in Intermediate Physics for Medicine and Biology. I’ll assign the problem to Chapter 13 about Sound and Ultrasound, although it draws from several chapters.

Foster et al. represent the wave as decaying exponentially as it enters the tissue, with a skin depth λ. To keep things simple and to focus on the mechanism rather than the details, I’ll assume the energy in the electromagnetic wave is absorbed uniformly in a thin layer of thickness λ, ignoring the exponential behavior.

Section 13.4
Problem 17 ½. Assume an electromagnetic wave of intensity I₀ (W/m²) with area A (m²) and duration τ (s) is incident on tissue. Furthermore, assume all its energy is absorbed in a depth λ (m).

(a) Derive an expression for the energy E (J) dissipated in the tissue.

(b) Derive an expression for the tissue’s increase in temperature ΔT (°C), using E = C ΔT, where C (J/°C) is the heat capacity. Then express C in terms of the specific heat capacity c (J/(kg °C)), the density ρ (kg/m³), and the volume V (m³) where the energy was deposited. (For a discussion of the heat capacity, see Sec. 3.11).

(c) Derive an expression for the fractional increase in volume, ΔV/V, caused by the increase in temperature, ΔV/V = αΔT, where α (1/°C) is the tissue’s coefficient of thermal expansion.

(d) Derive an expression for the change in pressure, ΔP (Pa), caused by this fractional change in volume, ΔP = B ΔV/V, where B (Pa) is the tissue’s bulk modulus. (For a discussion of the bulk modulus, see Sec. 1.14).

(e) Your expression in part (d) should contain a factor Bα/(cρ). Show that this factor is dimensionless. It is called the Grüneisen parameter.

(f) Assume α = 0.0003 1/°C, B = 2 × 10⁹ Pa, c = 4200 J/kg °C, and ρ = 1000 kg/m³. Evaluate the Grüneisen parameter. Calculate the change in pressure ΔP if the intensity is 10 W/m², the skin depth is 1 mm, and the duration is 1 μs.

I won’t solve the entire problem for you, but the answer for part d is

                            ΔP ≈ I₀ (τ/λ) [Bα/(cρ)] .
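
Plugging in the numbers from part (f) is a one-screen calculation; here is a minimal sketch (the values are the ones given in the problem, not measurements):

# Evaluate the Grüneisen parameter and the pressure from part (f) of the problem.
alpha = 3e-4      # thermal expansion coefficient, 1/degC
B = 2e9           # bulk modulus, Pa
c = 4200.0        # specific heat capacity, J/(kg degC)
rho = 1000.0      # density, kg/m^3

I0 = 10.0         # intensity, W/m^2
tau = 1e-6        # pulse duration, s
lam = 1e-3        # skin depth, m

gruneisen = B * alpha / (c * rho)          # dimensionless, ~0.14
dT = I0 * tau / (c * rho * lam)            # temperature rise, ~2e-9 degC
dP = gruneisen * I0 * tau / lam            # pressure, ~0.0014 Pa

print(f"Grueneisen parameter = {gruneisen:.2f}")
print(f"Temperature rise     = {dT:.1e} degC")
print(f"Pressure change      = {dP:.1e} Pa")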

I should stress that this calculation is approximate. I ignored the exponential falloff. Some of the incident energy could be reflected rather than absorbed. It is unclear if I should use the linear coefficient of thermal expansion or the volume coefficient. The tissue may be heterogeneous. You can probably identify other approximations I’ve made. 

Interestingly, the pressure induced in the tissue varies inversely with the skin depth, which is not what I intuitively expected. As the skin depth gets smaller, the energy is dumped into a smaller volume, which means the temperature increase within this smaller volume is larger. The pressure increase is proportional to the temperature increase, so a thinner skin depth means a larger pressure.

You might be thinking: wait a minute. Heat diffuses. Do we know if the heat would diffuse away before it could change the pressure? The diffusion constant of heat (the thermal diffusivity) D for tissue is about 10⁻⁷ m²/s. From Chapter 4 in IPMB, the time to diffuse a distance λ is λ²/D. For λ = 1 mm, this diffusion time is 10 s. For pulses much shorter than this, we can ignore thermal diffusion. 
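
For completeness, the same kind of quick check in code (again just evaluating the numbers in the text):

# Time for heat to diffuse across the skin depth: t = lambda^2 / D.
D = 1e-7       # thermal diffusivity of tissue, m^2/s
lam = 1e-3     # skin depth, m
print(f"Thermal diffusion time = {lam**2 / D:.0f} s")   # 10 s, far longer than a 1 microsecond pulse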

Perhaps you’re wondering how big the temperature rise is? For the parameters given, it’s really small: ΔT = 2 × 10⁻⁹ °C. This means the fractional change in volume is around 10⁻¹². It’s not a big effect.

The Grüneisen parameter is a dimensionless number. I’m used to thinking of such numbers as being the ratio of two quantities with the same units. For instance, the Reynolds number is the ratio of an inertial force to a viscous force, and the Péclet number is the ratio of transport by drift to transport by diffusion. I’m having trouble interpreting the Grüneisen parameter in this way. Perhaps it has something to do with the ratio of thermal energy to elastic energy, but the details are not obvious, at least not to me.

What does this all have to do with the Havana syndrome? Not much, I suspect. First, we don’t know if the Havana syndrome is caused by microwaves. As far as I know, no one has ever observed microwaves associated with one of these “attacks” (perhaps the government has but they keep the information classified). This means we don’t know what intensity, frequency (and thus, skin depth), and pulse duration to assume. We also don’t know what pressure would be required to explain the “victim’s” symptoms. 

In part f of the problem, I used for the intensity the upper limit allowed for a cell phone, the skin depth corresponding approximately to a microwave frequency of about ten gigahertz, and a pulse duration of one microsecond. The resulting pressure of 0.0014 Pa is much weaker than is used during medical ultrasound imaging, which is known to be safe. The acoustic pressure would have to increase dramatically to pose a hazard, which implies very large microwave intensities.

Are Electromagnetic Fields Making Me Ill? superimposed on the cover of Intermediate Physics for Medicine and Biology.

That such a large intensity electromagnetic wave could be present without being noticeable seems farfetched to me. Perhaps very low pressures could have harmful effects, but I doubt it. I think I’ll stick with my conclusion from Are Electromagnetic Fields Making Me Ill?

Microwave weapons and the Havana Syndrome: I am skeptical about microwave weapons, but so little evidence exists that I want to throw up my hands in despair. My guess: the cause is psychogenic. But if anyone detects microwaves during an attack, I will reconsider.

Friday, May 24, 2024

Magnetoelectrics For Biomedical Applications

“Magnetoelectrics for Biomedical Applications: 130 Years Later, Bridging Materials, Energy, and Life,” superimposed on Intermediate Physics for Medicine and Biology.
I’m always looking for new ways physics can be applied to medicine and biology. Recently, I read the article “Magnetoelectrics for Biomedical Applications: 130 Years Later, Bridging Materials, Energy, and Life” by Pedro Martins and his colleagues (Nano Energy, in press).

The “130 years” in the title refers to the year 1894 when Pierre Curie conjectured that in some materials there could be a coupling between their magnetic and electric properties. While there are some single-phase magnetoelectric materials, most modern ones are composites: piezoelectric and magnetostrictive phases are coupled through mechanical strain. In this way, an applied magnetic field can produce an electric field, and vice versa.

Martins et al. outline many possible applications of magnetoelectric materials to medicine. I will highlight three.

  1. Chapter 7 of Intermediate Physics for Medicine and Biology mentions deep brain stimulators to treat Parkinson’s disease. Normally deep brain stimulation requires implanting a pacemaker-like device connected by wires inserted into the brain. A magnetoelectric stimulator could be small and wireless, using power delivered by a time-dependent magnetic field. The magnetic field would induce an electric field in the magnetoelectric material, and this electric field could act like an electrode, activating a neuron.
  2. Chapter 8 of IPMB discusses ways to measure the tiny magnetic field produced by the heart: The magnetocardiogram. The traditional way to record the field is to use a superconducting quantum interference device (SQUID) magnetometer, which must be kept at cryogenic temperatures. Martins et al. describe how a weak magnetic field would produce a measurable voltage using a room-temperature magnetoelectric sensor.
  3. Magnetoelectric materials could be used for drug delivery. Martins et al. describe an amazing magnetoelectrical “nanorobot” that could be made to swim using a slowly rotating magnetic field. After the nanorobot reached its target, it could be made to release a cancer-fighting drug to the tissue by applying a rapidly changing magnetic field that produces a local electric field strong enough to cause electroporation in the target cell membrane, allowing delivery of the drug.

What I have supplied is just a sample of the many applications of magnetoelectric materials. Martins et al. describe many more, and also provide a careful analysis of the limitations of these techniques. 

The third example related to drug delivery surprised me. Electroporation? Really? That requires a huge electric field. In Chapter 9 of IPMB, Russ Hobbie and I say that for electroporation the electric field in the membrane should be about 10⁸ volts per meter. Later in that chapter, we analyze an example of a spherical cell in an electric field. To get a 10⁸ V/m electric field in the membrane, the electric field applied to the cell as a whole should be on the order of 10⁸ V/m times the thickness of the membrane (about 10⁻⁸ m) divided by the radius of the cell (about 10⁻⁵ m), or 10⁵ V/m. The material used for drug delivery had a magnetoelectric coefficient of about 100 volts per centimeter per oersted, which means 10⁴ V/(m Oe). The oersted is really a unit of the magnetic field intensity H rather than of the magnetic field B. In most biological materials, the magnetic permeability is about that of a vacuum, so 1 Oe corresponds to 1 gauss, or 10⁻⁴ tesla. Therefore, the magnetoelectric coefficient is 10⁸ (V/m)/T. Martins et al. say that a magnetic field of about 1000 Oe (0.1 T) was used in these experiments. So, the electric field produced by the material was on the order of 10⁷ V/m. The electric field that cells adjacent to the magnetoelectric particle experience should be about this strength. We found earlier that electroporation requires an electric field applied to the cell of around 10⁵ V/m. That means we should have about a factor of 100 more electric field strength than is needed. It should work, even if the target cell is a little bit distant from the device. Wow!
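
Here is the same estimate as a short script, so you can see where the factor of 100 comes from (all the input numbers are the order-of-magnitude values quoted above):

# Order-of-magnitude estimate: can a magnetoelectric particle electroporate a nearby cell?
E_membrane_needed = 1e8    # V/m, field in the membrane required for electroporation
d_membrane = 1e-8          # m, membrane thickness
r_cell = 1e-5              # m, cell radius

# Field that must be applied to the whole cell to reach E_membrane_needed in the membrane.
E_applied_needed = E_membrane_needed * d_membrane / r_cell     # ~1e5 V/m

# Magnetoelectric coefficient: 100 V/(cm Oe) = 1e4 V/(m Oe) = 1e8 (V/m)/T, since 1 Oe ~ 1e-4 T.
ME_coefficient = 1e8       # (V/m) per tesla
B_applied = 0.1            # T (about 1000 Oe)

E_produced = ME_coefficient * B_applied                        # ~1e7 V/m
print(f"Field needed at the cell: {E_applied_needed:.0e} V/m")
print(f"Field produced:           {E_produced:.0e} V/m")
print(f"Margin:                   about a factor of {E_produced / E_applied_needed:.0f}")
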

I’ll close with my favorite paragraph of the article, found near the end and summarizing the field.

The historical trajectory of ME [magnetoelectric] materials, spanning from Pierre Curie's suggestion in 1894 to recent anticancer activities in 2023, has been characterized by significant challenges and breakthroughs that have shaped their biomedical feasibility. Initially, limited understanding of the ME phenomenon and the absence of suitable materials posed critical obstacles. However, over the decades, intensive research efforts led to the discovery and synthesis of ME compounds, including novel composite structures and multiferroic materials with enhanced magnetoelectric coupling. These advancements, coupled with refinements in material synthesis and characterization techniques, propelled ME materials into the realm of biomedical applications. Additionally, piezoelectric polymers have been incorporated into ME composites, enhancing processing, biocompatibility, integration, and flexibility while maintaining or even improving the ME properties of the composite materials. In the 21st century, the exploration of ME materials for biomedical purposes gained momentum, particularly in anticancer activities. Breakthroughs in targeted drug delivery, magnetic hyperthermia therapy, and real-time cancer cell imaging showcased the therapeutic potential of ME materials. Despite these advancements, challenges such as ensuring biocompatibility, stability in physiological environments, and scalability for clinical translation persist. Ongoing research aims to optimize ME material properties for specific cancer types, enhance targeting efficiency, and address potential cytotoxicity concerns, with the ultimate goal of harnessing the full potential of ME materials to revolutionize cancer treatment and diagnosis.

Friday, May 17, 2024

FLASH Radiotherapy: Newsflash, or Flash in the Pan?

“FLASH Radiotherapy: Newsflash or Flash in the Pan?” (Med. Phys. 46:4287–4290, 2019), superimposed on the cover of Intermediate Physics for Medicine and Biology.
I’ve always been a fan of the Point/Counterpoint articles published in the journal Medical Physics. Today I will discuss one titled “FLASH Radiotherapy: Newsflash or Flash in the Pan?” (Volume 46, Pages 4287–4290, 2019). The title is clever, but doesn’t really fit. A point/counterpoint article is supposed to have a title in the form of a proposition that you can argue for or against. Perhaps “FLASH Radiotherapy: Newsflash, Not a Flash in the Pan” would have been better.

The format for any Point/Counterpoint article is a debate between two leading experts. Each provides an opening statement and then they finish with rebuttals. In this case, Peter Maxim argues for the proposition (Newsflash!), and Paul Keall argues against it (Flash in the Pan). The moderator, Jing Cai, provides an introductory overview:
Ionizing radiation with ultra-high dose rates (>40 Gy/s), known as FLASH, has drawn great attention since its introduction in 2014. It has been shown to markedly reduce radiation toxicity to normal healthy tissues while inhibiting tumor growth with similar efficiency as compared to conventional dose rate irradiation in pre-clinical models. Some believe that FLASH irradiation holds great promise and is perhaps the biggest finding in recent radiotherapy history. However, others remain skeptical about the replication of FLASH efficacy in cancer patients with concerns about technical complexity, lack of understanding of its molecular radiobiological underpinnings, and reliability. This is the premise debated in this month’s Point/Counterpoint.

I find it interesting that the mechanism for FLASH remains unknown. In his opening statement, Maxim says “we have barely scratched the surface of potential mechanistic pathways.” After citing several animal studies, he concludes that “these data provide strong evidence that the observed FLASH effect across multiple species and organ systems is real, which makes this dramatic finding the biggest 'Newsflash' in recent radiotherapy history.”

In his opening statement, Keall says that “FLASH therapy is an interesting concept… However, before jumping on the FLASH bandwagon, we should ask some questions.” His questions include “Does FLASH delivery technology exist for humans?” (No), “Will FLASH be cost effective?” (No), “Will treatment times be reduced with FLASH therapy?” (Possibly), and “Are the controls questionable in FLASH experiments?” (Sometimes). He concludes by asking “Am I on the FLASH bandwagon? No. I remain an interested spectator.”

Maxim, in his rebuttal, claims that while FLASH is not currently available for treatment of humans, he sees a pathway for clinical translation in the foreseeable future, based on something called Pluridirectional High-Energy Agile Scanning Electronic Radiotherapy (PHASER). Moreover, he anticipates that PHASER will have an overall cost similar to conventional therapy. He notes that one motivation for adopting the FLASH technique is to reduce uncertainty caused by organ motion. Maxim concludes that “FLASH promised to be a paradigm shift in curative radiation therapy with preclinical evidence of fundamentally improved therapeutic index. If this remarkable finding is translatable to humans, the switch to the PHASER technology will become mandatory.”

Keall, in his rebuttal, points out weaknesses in the preclinical FLASH studies. In particular, studies so far have looked at only early biological effects, but late effects (perhaps years after treatment) are unknown. He also states that “FLASH works against one of the four R’s of radiobiology, reoxygenation.” Traditionally, a tumor has a hypoxic core, meaning it has a low level of oxygen at its center, and this makes it resistant to radiation damage. When radiation is delivered in several fractions, there is enough time for a tumor to reoxygenate between fractions. This, in fact, is the primary motivation for using fractions. FLASH happens so fast there is no time for reoxygenation. This is why the mechanism of FLASH remains unclear: it goes against conventional ideas. Keall concludes “The scientists, authors, reviewers and editors involved with FLASH therapy need to carefully approach the subject and acknowledge the limitations of their studies. Overcoming these limitations will drive innovation. I will watch this space with interest.”

So what do I make of all this? From the point of view of a textbook writer, we really need to figure out the mechanism underlying FLASH. Otherwise, textbooks hardly know how to describe the technique, and optimizing it for the clinic will be difficult. Nevertheless, the new edition of Intermediate Physics for Medicine and Biology will have something to say about FLASH.

FLASH seems promising enough that we should certainly explore it further. But as I get older, I seem to be getting more conservative, so I tend to side with Keall. I would love to see the method work on patients, but I remain a skeptic until I see more evidence. I guess it depends on whether you are a cup-half-full or a cup-half-empty kind of guy. I do know one thing: Point/Counterpoint articles help me understand the pros and cons of such difficult and technical issues.

Friday, May 10, 2024

Numerical Solution of the Quadratic Formula

Homework Problem 7 in Chapter 11 of Intermediate Physics for Medicine and Biology examines fitting data to a straight line. In that problem, the four data points to be fit are (100, 4004), (101, 4017), (102, 4039), and (103, 4063). The goal is to fit the data to the line y = ax + b. For these data, you must perform the calculation of a and b to high precision or else you get large errors. The solution manual (available to instructors upon request) says that
This problem illustrates how students can run into numerical problems if they are not careful. With modern calculators that carry many significant figures, this may seem like a moot point. But the idea is still important and can creep subtly into computer computations and cause unexpected, difficult-to-debug errors.
Numerical Recipes
Are there other examples of numerical errors creeping into calculations? Yes. You can find one discussed in Numerical Recipes that involves the quadratic formula.

We all know the quadratic formula from high school. If you have a quadratic equation of the form

ax² + bx + c = 0 ,

the solution is

x = [−b ± √(b² − 4ac)] / (2a) .

For example,

x² − 3x + 2 = 0

has two solutions

x = [3 ± √(9 − 8)] / 2 = (3 ± 1)/2 ,

so x = 1 or 2.

Now, suppose the coefficient b is larger,

x² − 300x + 2 = 0 .

The solution is

x = [300 ± √(90,000 − 8)] / 2 = (300 ± 299.98667)/2 ,

so x = 300 or 0.00667.

This calculation is susceptible to numerical error. For instance, suppose all numerical calculations are performed to only four significant figures. Then when you reach the step

√(90,000 − 8)

you must subtract 8 from 90,000. You get 89,992, which to four significant figures becomes 89,990, which has a square root of (again to four significant figures) 300.0. The solutions are therefore x = 300 or 0. The large solution (300) is correct, but the small solution (0 instead of 0.00667) is completely wrong. The main reason is that when using the minus sign for ± you must subtract two numbers that are almost the same (in this case, 300 − 299.98667) to get a much smaller number.

You might say “so what! Who uses only four significant figures in their calculations?” Okay, try solving

x² − 3000x + 2 = 0

where I increased b from 300 to 3000. You’ll find that using even six significant figures gives one nonsense solution (try it). As you make b larger and larger, the calculation becomes more and more difficult. The situation can cause unexpected, difficult-to-debug errors.

What’s the moral to this story? Is it simply that you must use high precision when doing calculations? No. We can do better. Notice that the solution is fine when using the plus sign in the quadratic equation. We need make no changes. It’s the negative sign that gives the problem,

x = [−b − √(b² − 4ac)] / (2a) .

Let’s try a trick; multiply the expression by a very special form of one:

x = {[−b − √(b² − 4ac)] / (2a)} × {[−b + √(b² − 4ac)] / [−b + √(b² − 4ac)]} .
Simplifying, we get

x = 2c / [−b + √(b² − 4ac)] .

Voilà! The denominator has the plus sign in front of the square root, so it is not susceptible to numerical error. The numerator is simplicity itself. Try solving x² − 300x + 2 = 0 using math to four significant figures,

x = 2(2) / [300 + √(90,000 − 8)] = 4 / (300 + 300.0) = 4/600.0 = 0.006667 .

No error, even with just four sig figs. The problem is fixed!

I should note that the problem is fixed only for negative values of b. If b is positive, you can use an analogous approach to get a slightly different form of the solution (I’ll leave that as an exercise for the reader).

So, the moral of the story is: if you find that your numerical calculation is susceptible to numerical error, fix it! Look for a trick that eliminates the problem. Often you can find one.
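
If you want to see the trick in code, here is a minimal sketch in the spirit of the approach described in Numerical Recipes (it handles both signs of b, so it also gives away the exercise above):

import math

def stable_quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, avoiding the subtraction of
    nearly equal numbers that plagues the textbook formula."""
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("complex roots; this sketch handles real roots only")
    # q carries the sign of b, so the square root is always added, never subtracted.
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    return q / a, c / q

print(stable_quadratic_roots(1.0, -300.0, 2.0))   # ~ (299.99333, 0.0066668)
print(stable_quadratic_roots(1.0, -3000.0, 2.0))  # ~ (2999.99933, 0.00066667)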

Friday, May 3, 2024

The Well-Tempered Clavichord

“Prelude No. 1,” from The Well-Tempered Clavichord, by Johann Sebastian Bach.
I played it (or tried to play it) back when I was 15 years old.

Most of my blog posts are about physics applied to medicine and biology, but today I want to talk about music. This topic may not seem relevant to Intermediate Physics for Medicine and Biology, but I would argue that it is. Music is, after all, as much about the human perception of sound as about sound itself. So, let’s talk about how we sort the different frequencies into notes.

Below I show a musical keyboard, like you would find on a piano.

A piano keyboard

Each key corresponds to a different pitch, or note. I want to discuss the relationships between the different notes. We have to start somewhere, so let’s take the lowest note on the keyboard and call it C. It will have some frequency, which we’ll call our base frequency. On the piano, this frequency is about 33 Hz, but for our purposes that doesn’t matter. We will consider all frequencies as being multiples of this base frequency, and take our C as having a frequency of 1.
 

When you double the frequency, our ear perceives that change as going up an octave. So, one octave above the first C is another C, with frequency 2.
 

Of course, that means there’s another C with frequency 4, and another with frequency 8, and so on. We get a whole series of C’s.
 

Now, if you pluck a string held down at both ends, it can produce many frequencies. In general, it produces frequencies that are multiples of a fundamental frequency f, so you get frequency f plus “overtone” frequencies 2f, 3f, 4f, 5f, etc. As we noted earlier, we don’t care about the frequency itself but only how different frequencies are related. If the fundamental frequency is a C with frequency 1, the first overtone is one octave up (with a frequency 2), another C. The second overtone has a frequency 3. That corresponds to a different note on our keyboard, which we’ll call G.


You could raise or lower G by octaves and still have the same note (like we did with C), so you have a whole series of G’s, including 3/2 which is between C’s corresponding to frequencies 1 and 2. When two notes have frequencies such that the upper frequency is 3/2 times the lower frequency (a 3:2 ratio), musicians call that a “fifth,” so G is a fifth above C.
 


Let’s keep going. The next overtone is 4, which is two octaves above the fundamental, so it’s one of the C’s. But the following overtone, 5, gives us a new note, E. 

 

As always, you can go up or down by octaves, so we get a whole series of E’s.

 

C and E are related by a ratio of 5:4 (that is, E has a frequency 5/4 times the C below it), which musicians call a “third.” The notes C, E, and G make up the “C major chord.”

The next overtone would be 6, but we already know 6 is a G. The overtone 7 doesn’t work. Apparently a frequency ratio of 7 is not one that we find pleasant (at least, not to those of us who have been trained on western music), so we’ll skip it. Overtone 8 is another C, but we get a new note with overtone 9 (and all its octaves up and down, which I’ll stop repeating again and again). We’ll call this note D, because it seems to fit nicely between C and E. The D right next to our base note C has a frequency of 9/8.

Next is overtone 10 (an E), then 11 (like 7, it doesn’t work), 12 (a G), 13 (nope), 14 (no because it’s an octave up from 7), and finally 15, a new note we’ll call B. 

We could go on, but we don’t perceive many of the higher overtones as harmonious, so let’s change track. There’s nothing special about our base note, the C on the far left of our keyboard. Suppose we wanted to use a different base note. What note would we use if we wanted it to be a fifth below C? If we started with a frequency of 2/3, then a fifth above that frequency would be 2/3 times 3/2 or 1, giving us C. We’ll call that new base frequency F. It’s off our keyboard to the left, but its octaves appear, including 4/3, 8/3, etc.  


What if we want to build a major chord based on F? We already have C as a fifth above F. What note is a third above F? In other words, start at 2/3 and multiply by 5/4 (a third), to get 10/12, which simplifies to 5/6. That’s off the keyboard too, but its octaves 5/3, 10/3, 20/3, etc. appear. Let’s call it A. So a major chord in the key of F is F, A, and C.

Does this work for other base frequencies? Try G (3/2). Go up a fifth from G and you get 9/4, which is a D. Go up a third from G and you get 15/8, which is a B. So G, B, and D make up a major chord in the key of G. It works again!

So now it looks like we’re done. We’ve given names and frequencies to all the notes: C (1), D (9/8), E (5/4), F (4/3), G (3/2), A (5/3), and B (15/8). This collection of frequencies is called “just intonation,” with “just” used in the sense of fair and honest. If you play a song in the key of C, you use only those notes and frequencies and it sounds just right.

What about those strange black notes between some, but not all, of the white notes? How do we determine their frequencies? For example, start at D (9/8) for your base note and build a major chord. First go up a third and get 9/8 times 5/4, or 45/32. That note, corresponding to the black key just to the right of F, is F-sharp (or F♯). To express the frequency as a decimal, 45/32 = 1.406, which is midway between F (4/3 = 1.333) and G (3/2 = 1.500). We could continue working out all the frequencies for the various sharps and flats, but we won’t. It gets tedious, and there is an even more interesting and surprising feature to study.

To complete our D major chord, we need to determine what note is a fifth above D. You get D (9/8) times a fifth (3/2), or 27/16 = 1.688. That is almost the same as A (5/3 = 1.667), but not quite. It’s too close to A to correspond to A-sharp. It’s simply an out-of-tune A. In other words, using the frequencies we have worked out above, the A key is tuned to a ratio of 5/3 = 1.667 (a just sixth above C). If you play using D as your base (you play in the key of D), your A (what should be a fifth above D) needs a frequency ratio of 27/16 = 1.688. Different keys have different intonations. Yikes! This is not a problem with only the key of D. It happens again and again for other keys. The intonation is all messed up. You either play in the key of C, or you play out of tune.

To avoid this problem, nowadays instruments are tuned so that there are 12 steps between octaves (the steps include both the white and black keys), where each step corresponds to a frequency ratio of 2^(1/12) = 1.0595. A fifth (seven steps) is then 2^(7/12) = 1.498, which is not exactly 3/2 = 1.500 but is pretty close and—importantly—is the same for all keys. A third is 2^(4/12) = 1.260, which is not 5/4 = 1.250 but is not too bad. A keyboard with frequencies adjusted in this way is called “well-tempered.” It means that all the keys sound the same, although each is slightly out of tune compared to just intonation. You don’t have to have your piano tuned every time you change keys.
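
Here is a minimal sketch that compares the just-intonation ratios worked out above with their equal-tempered (“well-tempered,” in the sense used in this post) counterparts:

# Compare just-intonation ratios (from the discussion above) with equal temperament,
# where each of the 12 semitones is a factor of 2**(1/12).
just = {"C": 1, "D": 9/8, "E": 5/4, "F": 4/3, "G": 3/2, "A": 5/3, "B": 15/8}
semitones = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

for note in just:
    equal = 2 ** (semitones[note] / 12)
    print(f"{note}: just = {just[note]:.4f}, equal-tempered = {equal:.4f}")

# The out-of-tune fifth above D discussed above:
print(f"Fifth above D (just, relative to C) = {9/8 * 3/2:.4f}  vs.  A = {5/3:.4f}")
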

Johann Sebastian Bach wrote a lovely set of piano pieces called The Well-Tempered Clavichord that showed off the power of well-tempered tuning. My copy of the most famous of these pieces is shown in the photo at the top of this post. Listen to it and other glorious music by Bach below.

 
Bach’s “Prelude No. 1” from The Well-Tempered Clavichord, played by Alexandre Tharaud.

https://www.youtube.com/watch?v=iWoI8vmE8bI


Bach’s “Cello Suite No. 1,” played by Yo Yo Ma.

https://www.youtube.com/watch?v=Rx_IibJH4rA


Bach’s “Toccata and Fugue in D minor.”

https://www.youtube.com/watch?v=erXG9vnN-GI


Bach’s “Jesu, Joy of Man’s Desiring,” performed by Daniil Trifonov.

https://www.youtube.com/watch?v=wEJruV9SPao


Bach’s “Air on the G String.”

https://www.youtube.com/watch?v=pzlw6fUux4o


Bach and Gounod’s “Ave Maria,” sung by Andrea Bocelli.

https://www.youtube.com/watch?v=YR_bGloUJNo