Friday, May 31, 2024

Can the Microwave Auditory Effect Be ‘Weaponized’?

“Can the Microwave Auditory
Effect Be Weaponized?”
I was recently reading Ken Foster, David Garrett, and Marvin Ziskin’s paper “Can the Microwave Auditory Effect Be Weaponized?” (Frontiers in Public Health, Volume 9, 2021). It analyzed whether microwave weapons could be used to “attack” diplomats and thereby cause the Havana syndrome. While I am interested in the Havana syndrome (I discussed it in my book Are Electromagnetic Fields Making Me Ill?), today I merely want to better understand Foster et al.’s proposed mechanism by which an electromagnetic wave can induce an acoustic wave in tissue.

As is my wont, I will present this mechanism as a homework problem at a level you might find in Intermediate Physics for Medicine and Biology. I’ll assign the problem to Chapter 13 about Sound and Ultrasound, although it draws from several chapters.

Foster et al. represent the wave as decaying exponentially as it enters the tissue, with a skin depth λ. To keep things simple and to focus on the mechanism rather than the details, I’ll assume the energy in the electromagnetic wave is absorbed uniformly in a thin layer of thickness λ, ignoring the exponential behavior.

Section 13.4
Problem 17 ½. Assume an electromagnetic wave of intensity I₀ (W/m²), with area A (m²) and duration τ (s), is incident on tissue. Furthermore, assume all its energy is absorbed in a depth λ (m).

(a) Derive an expression for the energy E (J) dissipated in the tissue.

(b) Derive an expression for the tissue’s increase in temperature ΔT (°C), E = C ΔT, where C (J/°C) is the heat capacity. Then express C in terms of the specific heat capacity c (J/(kg °C)), the density ρ (kg/m³), and the volume V (m³) where the energy was deposited. (For a discussion of the heat capacity, see Sec. 3.11.)

(c) Derive an expression for the fractional increase in volume, ΔV/V, caused by the increase in temperature, ΔV/V = αΔT, where α (1/°C) is the tissue’s coefficient of thermal expansion.

(d) Derive an expression for the change in pressure, ΔP (Pa), caused by this fractional change in volume, ΔP = B ΔV/V, where B (Pa) is the tissue’s bulk modulus. (For a discussion of the bulk modulus, see Sec. 1.14).

(e) Your expression in part d should contain a factor Bα/(cρ). Show that this factor is dimensionless. It is called the Grüneisen parameter.

(f) Assume α = 0.0003 1/°C, B = 2 × 10⁹ Pa, c = 4200 J/(kg °C), and ρ = 1000 kg/m³. Evaluate the Grüneisen parameter. Calculate the change in pressure ΔP if the intensity is 10 W/m², the skin depth is 1 mm, and the duration is 1 μs.

I won’t solve the entire problem for you, but the answer for part d is

                            ΔP = [Bα/(cρ)] I₀ (τ/λ).

I should stress that this calculation is approximate. I ignored the exponential falloff. Some of the incident energy could be reflected rather than absorbed. It is unclear if I should use the linear coefficient of thermal expansion or the volume coefficient. The tissue may be heterogeneous. You can probably identify other approximations I’ve made. 

Interestingly, the pressure induced in the tissue varies inversely with the skin depth, which is not what I intuitively expected. As the skin depth gets smaller, the energy is dumped into a smaller volume, which means the temperature increase within this smaller volume is larger. The pressure increase is proportional to the temperature increase, so a thinner skin depth means a larger pressure.

You might be thinking: wait a minute. Heat diffuses. Would the heat diffuse away before it could change the pressure? The diffusion constant of heat (the thermal diffusivity) D for tissue is about 10⁻⁷ m²/s. From Chapter 4 in IPMB, the time to diffuse a distance λ is λ²/D. For λ = 1 mm, this diffusion time is 10 s. For pulses much shorter than this, we can ignore thermal diffusion.

Perhaps you’re wondering how big the temperature rise is. For the parameters given, it’s really small: ΔT = 2 × 10⁻⁹ °C. This means the fractional change in volume is around 10⁻¹². It’s not a big effect.
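If you’d like to check the arithmetic in part f, along with the diffusion-time estimate above, here is a minimal Python sketch (the variable names are mine):

```python
# Quick check of the numbers in part (f), using the values given above.
alpha = 3e-4    # coefficient of thermal expansion (1/degC)
B = 2e9         # bulk modulus (Pa)
c = 4200        # specific heat capacity (J/(kg degC))
rho = 1000      # density (kg/m^3)
I0 = 10         # intensity (W/m^2)
lam = 1e-3      # skin depth (m)
tau = 1e-6      # pulse duration (s)
D = 1e-7        # thermal diffusivity (m^2/s)

gruneisen = B * alpha / (c * rho)    # dimensionless Gruneisen parameter
dT = I0 * tau / (c * rho * lam)      # temperature rise (degC)
dP = gruneisen * I0 * tau / lam      # pressure rise (Pa)
t_diff = lam**2 / D                  # thermal diffusion time (s)

print(gruneisen, dT, dP, t_diff)     # about 0.14, 2.4e-9 degC, 0.0014 Pa, 10 s
```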

The Grüneisen parameter is a dimensionless number. I’m used to thinking of such numbers as being the ratio of two quantities with the same units. For instance, the Reynolds number is the ratio of an inertial force to a viscous force, and the Péclet number is the ratio of transport by drift to transport by diffusion. I’m having trouble interpreting the Grüneisen parameter in this way. Perhaps it has something to do with the ratio of thermal energy to elastic energy, but the details are not obvious, at least not to me.

What does this all have to do with the Havana syndrome? Not much, I suspect. First, we don’t know if the Havana syndrome is caused by microwaves. As far as I know, no one has ever observed microwaves associated with one of these “attacks” (perhaps the government has, but it keeps the information classified). This means we don’t know what intensity, frequency (and thus skin depth), and pulse duration to assume. We also don’t know what pressure would be required to explain the “victims’” symptoms.

In part f of the problem, I used the upper limit allowed for a cell phone as the intensity, a skin depth corresponding to a microwave frequency of about ten gigahertz, and a pulse duration of one microsecond. The resulting pressure of 0.0014 Pa is much weaker than is used during medical ultrasound imaging, which is known to be safe. The acoustic pressure would have to increase dramatically to pose a hazard, which implies very large microwave intensities.

Are Electromagnetic Fields Making Me Ill? superimposed on the cover of Intermediate Physics for Medicine and Biology.
Are Electromagnetic Fields
Making Me Ill?

That such a high-intensity electromagnetic wave could be present without being noticed seems farfetched to me. Perhaps very low pressures could have harmful effects, but I doubt it. I think I’ll stick with my conclusion from Are Electromagnetic Fields Making Me Ill?

Microwave weapons and the Havana Syndrome: I am skeptical about microwave weapons, but so little evidence exists that I want to throw up my hands in despair. My guess: the cause is psychogenic. But if anyone detects microwaves during an attack, I will reconsider.

Friday, May 24, 2024

Magnetoelectrics For Biomedical Applications

“Magnetoelectrics for Biomedical Applications: 130 years Later, Bridging Materials, Energy, and Life” superimposed on Intermediate Physics for Medicine and Biology.
Magnetoelectrics for
Biomedical Applications:
130 Years Later, Bridging
Materials, Energy, and Life
I’m always looking for new ways physics can be applied to medicine and biology. Recently, I read the article “Magnetoelectrics for Biomedical Applications: 130 Years Later, Bridging Materials, Energy, and Life” by Pedro Martins and his colleagues (Nano Energy, in press).

The “130 years” in the title refers to the year 1894 when Pierre Curie conjectured that in some materials there could be a coupling between their magnetic and electric properties. While there are some single-phase magnetoelectric materials, most modern ones are composites: piezoelectric and magnetostrictive phases are coupled through mechanical strain. In this way, an applied magnetic field can produce an electric field, and vice versa.

Martins et al. outline many possible applications of magnetoelectric materials to medicine. I will highlight three.

  1. Chapter 7 of Intermediate Physics for Medicine and Biology mentions deep brain stimulators to treat Parkinson’s disease. Normally deep brain stimulation requires implanting a pacemaker-like device connected by wires inserted into the brain. A magnetoelectric stimulator could be small and wireless, using power delivered by a time-dependent magnetic field. The magnetic field would induce an electric field in the magnetoelectric material, and this electric field could act like an electrode, activating a neuron.
  2. Chapter 8 of IPMB discusses ways to measure the tiny magnetic field produced by the heart: the magnetocardiogram. The traditional way to record the field is to use a superconducting quantum interference device (SQUID) magnetometer, which must be kept at cryogenic temperatures. Martins et al. describe how a weak magnetic field would produce a measurable voltage in a room-temperature magnetoelectric sensor.
  3. Magnetoelectric materials could be used for drug delivery. Martins et al. describe an amazing magnetoelectric “nanorobot” that could be made to swim using a slowly rotating magnetic field. After the nanorobot reached its target, it could be made to release a cancer-fighting drug into the tissue by applying a rapidly changing magnetic field, which produces a local electric field strong enough to cause electroporation of the target cell membrane, allowing delivery of the drug.

What I have supplied is just a sample of the many applications of magnetoelectric materials. Martins et al. describe many more, and also provide a careful analysis of the limitations of these techniques.

The third example, related to drug delivery, surprised me. Electroporation? Really? That requires a huge electric field. In Chapter 9 of IPMB, Russ Hobbie and I say that for electroporation the electric field in the membrane should be about 10⁸ volts per meter. Later in that chapter, we analyze an example of a spherical cell in an electric field. To get a 10⁸ V/m electric field in the membrane, the electric field applied to the cell as a whole should be on the order of 10⁸ V/m times the thickness of the membrane (about 10⁻⁸ m) divided by the radius of the cell (about 10⁻⁵ m), or 10⁵ V/m.

The material used for drug delivery had a magnetoelectric coefficient of about 100 volts per centimeter per oersted, which means 10⁴ V/(m Oe). The oersted is really a unit of the magnetic field intensity H rather than of the magnetic field B. In most biological materials, the magnetic permeability is about that of a vacuum, so 1 Oe corresponds to 1 gauss, or 10⁻⁴ tesla. Therefore, the magnetoelectric coefficient is 10⁸ (V/m)/T. Martins et al. say that a magnetic field of about 1000 Oe (0.1 T) was used in these experiments. So, the electric field produced by the material was on the order of 10⁷ V/m. Cells adjacent to the magnetoelectric particle should experience an electric field of about this strength. We found earlier that electroporation requires an electric field applied to the cell of around 10⁵ V/m. That means we have about a factor of 100 more electric field strength than is needed. It should work, even if the target cell is a little bit distant from the device. Wow!
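Here is a short Python sketch of that back-of-the-envelope estimate (the rounded values are taken from the discussion above, not from the paper’s raw data):

```python
# Field needed in the membrane for electroporation, and cell geometry:
E_membrane = 1e8     # V/m, from Chapter 9 of IPMB
d_membrane = 1e-8    # membrane thickness (m)
r_cell = 1e-5        # cell radius (m)

# Field that must be applied to the cell as a whole:
E_applied = E_membrane * d_membrane / r_cell    # = 1e5 V/m

# Magnetoelectric coefficient: 100 V/cm/Oe = 1e4 V/(m Oe); with
# 1 Oe corresponding to 1e-4 T in tissue, that is 1e8 (V/m)/T.
coeff = 1e4 / 1e-4   # (V/m) per tesla
B_applied = 0.1      # applied field: 1000 Oe = 0.1 T

E_produced = coeff * B_applied                  # = 1e7 V/m

print(E_applied, E_produced, E_produced / E_applied)  # 1e5, 1e7, factor of 100
```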

I’ll close with my favorite paragraph of the article, found near the end and summarizing the field.

The historical trajectory of ME [magnetoelectric] materials, spanning from Pierre Curie's suggestion in 1894 to recent anticancer activities in 2023, has been characterized by significant challenges and breakthroughs that have shaped their biomedical feasibility. Initially, limited understanding of the ME phenomenon and the absence of suitable materials posed critical obstacles. However, over the decades, intensive research efforts led to the discovery and synthesis of ME compounds, including novel composite structures and multiferroic materials with enhanced magnetoelectric coupling. These advancements, coupled with refinements in material synthesis and characterization techniques, propelled ME materials into the realm of biomedical applications. Additionally, piezoelectric polymers have been incorporated into ME composites, enhancing processing, biocompatibility, integration, and flexibility while maintaining or even improving the ME properties of the composite materials. In the 21st century, the exploration of ME materials for biomedical purposes gained momentum, particularly in anticancer activities. Breakthroughs in targeted drug delivery, magnetic hyperthermia therapy, and real-time cancer cell imaging showcased the therapeutic potential of ME materials. Despite these advancements, challenges such as ensuring biocompatibility, stability in physiological environments, and scalability for clinical translation persist. Ongoing research aims to optimize ME material properties for specific cancer types, enhance targeting efficiency, and address potential cytotoxicity concerns, with the ultimate goal of harnessing the full potential of ME materials to revolutionize cancer treatment and diagnosis.

Friday, May 17, 2024

FLASH Radiotherapy: Newsflash, or Flash in the Pan?

FLASH Radiotherapy: Newsflash or Flash in the Pan? (Med. Phys. 46:4287–4290, 2019) superimposed on the cover of Intermediate Physics for Medicine and Biology.
“FLASH Radiotherapy: Newsflash
or Flash in the Pan?” (Med. Phys.
46:4287–4290, 2019).
I’ve always been a fan of the Point/Counterpoint articles published in the journal Medical Physics. Today I will discuss one titled “FLASH Radiotherapy: Newsflash or Flash in the Pan?” (Volume 46, Pages 4287–4290, 2019). The title is clever, but doesn’t really fit. A Point/Counterpoint article is supposed to have a title in the form of a proposition that you can argue for or against. Perhaps “FLASH Radiotherapy: Newsflash, Not a Flash in the Pan” would have been better.

The format for any Point/Counterpoint article is a debate between two leading experts. Each provides an opening statement, and then they finish with rebuttals. In this case, Peter Maxim argues for the proposition (Newsflash!), and Paul Keall argues against it (Flash in the Pan). The moderator, Jing Cai, provides an introductory overview:
Ionizing radiation with ultra-high dose rates (>40 Gy/s), known as FLASH, has drawn great attention since its introduction in 2014. It has been shown to markedly reduce radiation toxicity to normal healthy tissues while inhibiting tumor growth with similar efficiency as compared to conventional dose rate irradiation in pre-clinical models. Some believe that FLASH irradiation holds great promise and is perhaps the biggest finding in recent radiotherapy history. However, others remain skeptical about the replication of FLASH efficacy in cancer patients with concerns about technical complexity, lack of understanding of its molecular radiobiological underpinnings, and reliability. This is the premise debated in this month’s Point/Counterpoint.

I find it interesting that the mechanism for FLASH remains unknown. In his opening statement, Maxim says “we have barely scratched the surface of potential mechanistic pathways.” After citing several animal studies, he concludes that “these data provide strong evidence that the observed FLASH effect across multiple species and organ systems is real, which makes this dramatic finding the biggest 'Newsflash' in recent radiotherapy history.”

In his opening statement, Keall says that “FLASH therapy is an interesting concept… However, before jumping on the FLASH bandwagon, we should ask some questions.” His questions include “Does FLASH delivery technology exist for humans?” (No), “Will FLASH be cost effective?” (No), “Will treatment times be reduced with FLASH therapy?” (Possibly), and “Are the controls questionable in FLASH experiments?” (Sometimes). He concludes by asking “Am I on the FLASH bandwagon? No. I remain an interested spectator.”

Maxim, in his rebuttal, claims that while FLASH is not currently available for treatment of humans, he sees a pathway for clinical translation in the foreseeable future, based on something called Pluridirectional High-Energy Agile Scanning Electronic Radiotherapy (PHASER). Moreover, he anticipates that PHASER will have an overall cost similar to conventional therapy. He notes that one motivation for adopting the FLASH technique is to reduce uncertainty caused by organ motion. Maxim concludes that “FLASH promised to be a paradigm shift in curative radiation therapy with preclinical evidence of fundamentally improved therapeutic index. If this remarkable finding is translatable to humans, the switch to the PHASER technology will become mandatory.”

Keall, in his rebuttal, points out weaknesses in the preclinical FLASH studies. In particular, studies so far have looked at only early biological effects, but late effects (perhaps years after treatment) are unknown. He also states that “FLASH works against one of the four R’s of radiobiology, reoxygenation.” Traditionally, a tumor has a hypoxic core, meaning it has a low level of oxygen at its center, and this makes it resistant to radiation damage. When radiation is delivered in several fractions, there is enough time for a tumor to reoxygenate between fractions. This, in fact, is the primary motivation for using fractions. FLASH happens so fast there is no time for reoxygenation. This is why the mechanism of FLASH remains unclear: it goes against conventional ideas. Keall concludes, “The scientists, authors, reviewers and editors involved with FLASH therapy need to carefully approach the subject and acknowledge the limitations of their studies. Overcoming these limitations will drive innovation. I will watch this space with interest.”

So what do I make of all this? From the point of view of a textbook writer, we really need to figure out the mechanism underlying FLASH. Otherwise, textbooks hardly know how to describe the technique, and optimizing it for the clinic will be difficult. Nevertheless, the new edition of Intermediate Physics for Medicine and Biology will have something to say about FLASH.

FLASH seems promising enough that we should certainly explore it further. But as I get older, I seem to be getting more conservative, so I tend to side with Keall. I would love to see the method work on patients, but I remain a skeptic until I see more evidence. I guess it depends on whether you are a cup-half-full or a cup-half-empty kind of guy. I do know one thing: Point/Counterpoint articles help me understand the pros and cons of such difficult and technical issues.

Friday, May 10, 2024

Numerical Solution of the Quadratic Formula

Homework Problem 7 in Chapter 11 of Intermediate Physics for Medicine and Biology examines fitting data to a straight line. In that problem, the four data points to be fit are (100, 4004), (101, 4017), (102, 4039), and (103, 4063). The goal is to fit the data to the line y = ax + b. For these data, one must perform the calculation of a and b to high precision or else large errors result. The solution manual (available to instructors upon request) says that
This problem illustrates how students can run into numerical problems if they are not careful. With modern calculators that carry many significant figures, this may seem like a moot point. But the idea is still important and can creep subtly into computer computations and cause unexpected, difficult-to-debug errors.
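To see how this plays out, here is a minimal Python sketch (my own, not from the solution manual) that mimics a calculator carrying only six significant figures while evaluating the standard least-squares formula for the slope:

```python
def sig(v, n=6):
    """Round v to n significant figures, mimicking a low-precision calculator."""
    return float(f"{v:.{n-1}e}")

x = [100, 101, 102, 103]
y = [4004, 4017, 4039, 4063]
N = len(x)

Sx, Sy = sig(sum(x)), sig(sum(y))
Sxx = sig(sum(u * u for u in x))
Sxy = sig(sum(u * v for u, v in zip(x, y)))

# The numerator and denominator each subtract two nearly equal numbers.
a = (sig(N * Sxy) - sig(Sx * Sy)) / (sig(N * Sxx) - sig(Sx ** 2))
print(a)    # 19.0, but the true slope is 19.9
```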
Numerical Recipes
Are there other examples of numerical errors creeping into calculations? Yes. You can find one discussed in Numerical Recipes that involves the quadratic formula.

We all know the quadratic formula from high school. If you have a quadratic equation of the form

ax² + bx + c = 0,

the solution is

x = [−b ± √(b² − 4ac)] / 2a.
For example,

x² − 3x + 2 = 0

has two solutions,

x = [3 ± √(9 − 8)] / 2 = (3 ± 1)/2,

so x = 1 or 2.

Now, suppose the coefficient b is larger,

x² − 300x + 2 = 0.

The solution is

x = [300 ± √(90000 − 8)] / 2 = (300 ± 299.98667)/2,

so x = 300 or 0.00667.

This calculation is susceptible to numerical error. For instance, suppose all numerical calculations are performed to only four significant figures. Then when you reach the step

x = [300 ± √(90000 − 8)] / 2,

you must subtract 8 from 90,000. You get 89,992, which to four significant figures becomes 89,990, which has a square root of (again to four significant figures) 300.0. The solutions are therefore x = 300 or 0. The large solution (300) is correct, but the small solution (0 instead of 0.00667) is completely wrong. The reason is that when using the minus sign for ± you must subtract two numbers that are almost the same (in this case, 300 − 299.98667) to get a much smaller number.

You might say “So what! Who uses only four significant figures in their calculations?” Okay, try solving

x² − 3000x + 2 = 0,

where I increased b from 300 to 3000. You’ll find that even using six significant figures gives one nonsense solution (try it). As you make b larger and larger, the calculation becomes more and more difficult. This situation can cause unexpected, difficult-to-debug errors.

What’s the moral to this story? Is it simply that you must use high precision when doing calculations? No. We can do better. Notice that the solution is fine when using the plus sign in the quadratic formula; we need make no changes. It’s the negative sign that gives the problem,

x = [−b − √(b² − 4ac)] / 2a.
Let’s try a trick: multiply the expression by a very special form of one,

x = ( [−b − √(b² − 4ac)] / 2a ) × ( [−b + √(b² − 4ac)] / [−b + √(b² − 4ac)] ).
Simplifying, we get

x = 2c / [−b + √(b² − 4ac)].
Voilà! The denominator has the plus sign in front of the square root, so it is not susceptible to numerical error. The numerator is simplicity itself. Try solving x² − 300x + 2 = 0 using math to four significant figures,

x = 2(2) / [300 + √(90000 − 8)] = 4 / (300 + 300.0) = 4/600.0 = 0.006667.

No error, even with just four significant figures. The problem is fixed!

I should note that the problem is fixed only for negative values of b. If b is positive, you can use an analogous approach to get a slightly different form of the solution (I’ll leave that as an exercise for the reader).
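To see the trick in code, here is a minimal Python sketch (my own; the sign-based version is essentially the recipe given in Numerical Recipes, and it handles both signs of b at once):

```python
import math

def naive_roots(a, b, c):
    """Textbook quadratic formula; one root loses accuracy when b*b >> 4*a*c."""
    d = math.sqrt(b * b - 4 * a * c)    # assumes real roots
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def stable_roots(a, b, c):
    """Avoid cancellation: compute the larger-magnitude root first, then
    get the other from the product of the roots, x1 * x2 = c/a."""
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))    # sign chosen so magnitudes add
    return q / a, c / q

print(naive_roots(1, -1e9, 2))    # the small root comes out exactly 0
print(stable_roots(1, -1e9, 2))   # (1e9, 2e-9), both roots accurate
```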

So, the moral of the story is: if you find that your numerical calculation is susceptible to numerical error, fix it! Look for a trick that eliminates the problem. Often you can find one.

Friday, May 3, 2024

The Well-Tempered Clavichord

“Prelude No. 1,” from The Well-Tempered Clavichord, by Johann Sebastian Bach.
I played it (or tried to play it) back when I was 15 years old.

Most of my blog posts are about physics applied to medicine and biology, but today I want to talk about music. This topic may not seem relevant to Intermediate Physics for Medicine and Biology, but I would argue that it is. Music is, after all, as much about the human perception of sound as about sound itself. So, let’s talk about how we sort the different frequencies into notes.

Below I show a musical keyboard, like you would find on a piano.

A piano keyboard

Each key corresponds to a different pitch, or note. I want to discuss the relationships between the different notes. We have to start somewhere, so let’s take the lowest note on the keyboard and call it C. It will have some frequency, which we’ll call our base frequency. On the piano, this frequency is about 33 Hz, but for our purposes that doesn’t matter. We will consider all frequencies as being multiples of this base frequency, and take our C as having a frequency of 1.
 

When you double the frequency, our ear perceives that change as going up an octave. So, one octave above the first C is another C, with frequency 2.
 

Of course, that means there’s another C with frequency 4, and another with frequency 8, and so on. We get a whole series of C’s.
 

Now, if you pluck a string held down at both ends, it can produce many frequencies. In general, it produces frequencies that are multiples of a fundamental frequency f, so you get frequency f plus “overtone” frequencies 2f, 3f, 4f, 5f, etc. As we noted earlier, we don’t care about the frequency itself but only how different frequencies are related. If the fundamental frequency is a C with frequency 1, the first overtone is one octave up (with a frequency 2), another C. The second overtone has a frequency 3. That corresponds to a different note on our keyboard, which we’ll call G.


You could raise or lower G by octaves and still have the same note (as we did with C), so you have a whole series of G’s, including 3/2, which lies between the C’s with frequencies 1 and 2. When two notes have frequencies such that the upper frequency is 3/2 times the lower frequency (a 3:2 ratio), musicians call that a “fifth,” so G is a fifth above C.
 


Let’s keep going. The next overtone is 4, which is two octaves above the fundamental, so it’s one of the C’s. But the following overtone, 5, gives us a new note, E. 

 

As always, you can go up or down by octaves, so we get a whole series of E’s.

 

C and E are related by a ratio of 5:4 (that is, E has a frequency 5/4 times the C below it), which musicians call a “third.” The notes C, E, and G make up the “C major chord.”

The next overtone would be 6, but we already know 6 is a G. The overtone 7 doesn’t work. Apparently a frequency ratio of 7 is not one that we find pleasant (at least, not to those of us who have been trained on western music), so we’ll skip it. Overtone 8 is another C, but we get a new note with overtone 9 (and all its octaves up and down, which I’ll stop repeating again and again). We’ll call this note D, because it seems to fit nicely between C and E. The D right next to our base note C has a frequency of 9/8.

Next is overtone 10 (an E), then 11 (like 7, it doesn’t work), 12 (a G), 13 (nope), 14 (no because it’s an octave up from 7), and finally 15, a new note we’ll call B. 
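If you want to experiment, here is a little Python sketch (mine, not from any textbook) that slides each overtone down by octaves until its ratio lands between 1 and 2:

```python
from fractions import Fraction

def within_octave(n):
    """Divide an overtone by 2 until its ratio lies in [1, 2)."""
    r = Fraction(n)
    while r >= 2:
        r /= 2
    return r

for n in [2, 3, 5, 9, 15]:         # the overtones that gave us C, G, E, D, B
    print(n, within_octave(n))     # 1, 3/2, 5/4, 9/8, 15/8
```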

We could go on, but we don’t perceive many of the higher overtones as harmonious, so let’s change tack. There’s nothing special about our base note, the C on the far left of our keyboard. Suppose we wanted to use a different base note. What note would we use if we wanted it to be a fifth below C? If we started with a frequency of 2/3, then a fifth above that frequency would be 2/3 times 3/2, or 1, giving us C. We’ll call that new base frequency F. It’s off our keyboard to the left, but its octaves appear, including 4/3, 8/3, etc.


What if we want to build a major chord based on F? We already have C as a fifth above F. What note is a third above F? In other words, start at 2/3 and multiply by 5/4 (a third) to get 10/12, which simplifies to 5/6. That’s off the keyboard too, but its octaves 5/3, 10/3, 20/3, etc. appear. Let’s call it A. So a major chord in the key of F is F, A, and C.

Does this work for other base frequencies? Try G (3/2). Go up a fifth from G and you get 9/4, which is a D. Go up a third from G and you get 15/8, which is a B. So G, B, and D make up a major chord in the key of G. It works again!

So now it looks like we’re done. We’ve given names and frequencies to all the notes: C (1), D (9/8), E (5/4), F (4/3), G (3/2), A (5/3), and B (15/8). This collection of frequencies is called “just intonation,” with “just” used in the sense of fair and honest. If you play a song in the key of C, you use only those notes and frequencies and it sounds just right.

What about those strange black notes between some, but not all, of the white notes? How do we determine their frequencies? For example, start at D (9/8) as your base note and build a major chord. First you go up a third and get 9/8 times 5/4, or 45/32. That note, corresponding to the black key just to the right of F, is F-sharp (F♯). To express the frequency as a decimal, 45/32 = 1.406, which is midway between F (4/3 = 1.333) and G (3/2 = 1.500). We could continue working out all the frequencies for the various sharps and flats, but we won’t. It gets tedious, and there is an even more interesting and surprising feature to study.

To complete our D major chord, we need to determine what note is a fifth above D. You get D (9/8) times a fifth (3/2), or 27/16 = 1.688. That is almost the same as A (5/3 = 1.667), but not quite. It’s too close to A to correspond to A-sharp; it’s simply an out-of-tune A. In other words, using the frequencies we worked out above, if you start with C as your base (that is, you play in the key of C), your A corresponds to a frequency ratio of 5/3 = 1.667. If you play, however, using D as your base (you play in the key of D), your A (which should be a fifth above D) has a frequency ratio of 27/16 = 1.688. Different keys have different intonations. Yikes! This is not a problem with only the key of D; it happens again and again for other keys. The intonation is all messed up. You either play in the key of C, or you play out of tune.

To avoid this problem, nowadays instruments are tuned so that there are 12 steps between octaves (the steps include both the white and black keys), where each step corresponds to a frequency ratio of 2^(1/12) = 1.0595. A fifth (seven steps) is then 2^(7/12) = 1.498, which is not exactly 3/2 = 1.500 but is pretty close and—importantly—is the same for all keys. A third is 2^(4/12) = 1.260, which is not 5/4 = 1.250 but is not too bad. A keyboard with frequencies adjusted in this way is called “well-tempered.” It means that all the keys sound the same, although each is slightly out of tune compared to just intonation. You don’t have to have your piano tuned every time you change keys.
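Here is a brief Python sketch comparing the just-intonation ratios worked out above with their well-tempered counterparts:

```python
just = {"C": 1, "D": 9/8, "E": 5/4, "F": 4/3, "G": 3/2, "A": 5/3, "B": 15/8}
steps = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}  # semitones above C

for note, ratio in just.items():
    tempered = 2 ** (steps[note] / 12)    # well-tempered frequency ratio
    print(f"{note}: just {ratio:.4f}, well-tempered {tempered:.4f}")
```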

Johann Sebastian Bach wrote a lovely set of piano pieces called The Well-Tempered Clavichord that showed off the power of well-tempered tuning. My copy of the most famous of these pieces is shown in the photo at the top of this post. Listen to it and other glorious music by Bach below.

 
Bach’s “Prelude No. 1” from The Well-Tempered Clavichord, played by Alexandre Tharaud.

https://www.youtube.com/watch?v=iWoI8vmE8bI


Bach’s “Cello Suite No. 1,” played by Yo-Yo Ma.

https://www.youtube.com/watch?v=Rx_IibJH4rA


Bach’s “Toccata and Fugue in D minor.”

https://www.youtube.com/watch?v=erXG9vnN-GI


Bach’s “Jesu, Joy of Man’s Desiring,” performed by Daniil Trifonov.

https://www.youtube.com/watch?v=wEJruV9SPao


Bach’s “Air on the G String.”

https://www.youtube.com/watch?v=pzlw6fUux4o


Bach and Gounod’s “Ave Maria,” sung by Andrea Bocelli.

https://www.youtube.com/watch?v=YR_bGloUJNo