Friday, March 12, 2021

The Rest of the Story 2

Hermann von Helmholtz in 1848.

The Rest of the Story

Hermann was born in 1821 in Potsdam, Germany. He was often sick as a child, suffering from illnesses such as scarlet fever, and started school late. He was hampered by a poor memory for disconnected facts, making subjects like languages and history difficult, so his interest turned to science. His father loaned him some cheap glass lenses that he used to build optical instruments. He wanted to become a physicist, but his family couldn’t afford to send him to college. Instead, he studied hard to pass an exam that won him a place in a government medical school in Berlin, where his education would be free if he served in the military for five years after he graduated.

The seventeen-year-old Hermann moved to Berlin in 1838. He brought his piano with him, on which he loved to play Mozart and Beethoven. He became friends with his fellow students Ernst von Brücke and Emil du Bois-Reymond, began doing scientific research under the direction of physiologist Johannes Müller, and taught himself higher mathematics in his spare time. By 1843 he graduated and began his required service as an army surgeon.

Life in the army required long hours, and Hermann was isolated from the scientific establishment in Berlin. But with the help of Brücke and du Bois-Reymond he somehow continued his research. His constitution was still delicate, and sometimes he would take time off to restore his health. Near the end of his five-year commitment to the army he fell in love with Olga von Velten, who would sing while he accompanied her on the piano. They became engaged, but he knew they could not marry until he found an academic job after his military service ended, and for that he needed to establish himself as a first-rank scientist. This illness-prone, cash-strapped, overworked army doctor with a poor memory and a love for music needed to find a research project that would propel him to the top of German science.

Hermann rose to the challenge. He began a careful study of the balance between muscle metabolism and contraction. Using both experiments and mathematics he established the conservation of energy, and in the process showed that no vital force was needed to explain life. On July 23, 1847, he announced his discovery at a meeting of the German Physical Society.

This research led to a faculty position in Berlin and his marriage to Olga. His career took off, and he later made contributions to the study of vision, hearing, nerve conduction, and ophthalmology. Today, the largest Association of German Research Centers bears his name. Many consider Hermann von Helmholtz to be the greatest biological physicist of all time.

And now you know THE REST OF THE STORY.

Good day! 

_____________________________________________________________

This blog post was written in the style of Paul Harvey’s “The Rest of the Story” radio program. The content is based on a biography of Helmholtz written by his friend and college roommate Leo Koenigsberger. You can read about nerve conduction and Helmholtz’s first measurement of its propagation speed in Chapter 6 of Intermediate Physics for Medicine and Biology. This August we will celebrate the 200th anniversary of Hermann von Helmholtz’s birth. 

Click here for another IPMB “The Rest of the Story” post.

 
Charles Osgood pays tribute to the master storyteller Paul Harvey.

Friday, March 5, 2021

Estimating the Properties of Water

Water
from: www.middleschoolchemistry.com
 
I found a manuscript on the arXiv by Andrew Lucas about estimating macroscopic properties of materials using just a few microscopic parameters. I decided to try a version of this analysis myself. It’s based on Lucas’s work, with a few modifications. I focus exclusively on water because of its importance for biological physics, and make order-of-magnitude calculations like those Russ Hobbie and I discuss in the first section of Intermediate Physics for Medicine and Biology.

My goal is to estimate the properties of water using three numbers: the size, mass, and energy associated with water molecules. We take the size to be the center-to-center distance between molecules, which is about 3 Å, or 3 × 10−10 m. The mass of a water molecule is 18 (the molecular weight) times the mass of a proton, or about 3 × 10−26 kg. The energy associated with one hydrogen bond between water molecules is about 0.2 eV, or 3 × 10−20 J. This is roughly eight times the thermal energy kT at body temperature, where k is Boltzmann’s constant (1.4 × 10−23 J K−1) and T is the absolute temperature (310 K). A water molecule has about four hydrogen bonds with neighboring molecules.

Density

Estimating the density of water, ρ, is Homework Problem 4 in Chapter 1 of IPMB. Density is mass divided by volume, and volume is distance cubed

ρ = (3 × 10−26 kg)/(3 × 10−10 m)3 = 1100 kg m−3 = 1.1 g cm−3.

The accepted value is ρ = 1.0 g cm−3, so our calculation is about 10% off; not bad for an order-of-magnitude estimate.
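The arithmetic is easy to check with a few lines of Python (a sketch; the variable names are mine, not notation from IPMB):

```python
# Back-of-the-envelope density of water from molecular mass and spacing.
m = 3e-26   # mass of one water molecule (kg)
d = 3e-10   # center-to-center molecular spacing (m)

rho = m / d**3   # density = mass / volume, with volume ~ d^3
print(rho)       # roughly 1100 kg/m^3
```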

Compressibility

The compressibility of water, κ, is a measure of how the volume of water decreases with increasing pressure. It has dimensions of inverse pressure. The pressure is typically thought of as force per unit area, but we can multiply numerator and denominator by distance and express it as energy per unit volume. Therefore, the compressibility is approximately distance cubed over the total energy of the four hydrogen bonds

κ = (3 × 10−10 m)3/[4(3 × 10−20 J)] = 0.25 × 10−9 Pa−1 = 0.25 GPa−1 ,

implying a bulk modulus, B (the reciprocal of the compressibility), of 4 GPa. Water has a bulk modulus of about B = 2.2 GPa, so our estimate is within a factor of two.
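The same two microscopic inputs give the compressibility and bulk modulus (again just a sketch of the estimate above):

```python
# Compressibility of water as volume over the energy of four hydrogen bonds.
d = 3e-10   # molecular spacing (m)
E = 3e-20   # energy per hydrogen bond (J)

kappa = d**3 / (4 * E)   # compressibility (1/Pa), roughly 0.2 GPa^-1
B = 1 / kappa            # bulk modulus (Pa), roughly 4 GPa
```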

Speed of Sound

Once you know the density and compressibility, you can calculate the speed of sound, c, as (see Eq. 13.11 in IPMB)

c = (ρ κ)−1/2 = 1/√[(1100 kg m−3) (0.25 × 10−9 Pa−1)] = 1900 m s−1 = 1.9 km s−1.

The measured value of the speed of sound in water is about c = 1.5 km s−1, which is pretty close for a back-of-the-envelope estimate.
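Feeding our two previous estimates into Eq. 13.11 takes one line (the inputs are our estimated values, not the measured ones):

```python
# Speed of sound from the estimated density and compressibility.
rho = 1100       # estimated density (kg/m^3)
kappa = 0.25e-9  # estimated compressibility (1/Pa)

c = (rho * kappa) ** -0.5   # speed of sound (m/s), roughly 1.9 km/s
```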

Latent Heat

A homework problem about vapor pressure in Chapter 3 of IPMB uses water’s latent heat of vaporization, L, which is the energy required to boil water per kilogram. We estimate it as

L = 4(3 × 10−20 J)/(3 × 10−26 kg) = 4.0 × 106 J kg−1 = 4 MJ kg−1.

The known value is L = 2.5 MJ kg−1. Not great, but not bad.
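As a sketch, the latent heat estimate is one line of arithmetic:

```python
# Latent heat: energy to break four hydrogen bonds per molecule, per kilogram.
m = 3e-26   # molecular mass (kg)
E = 3e-20   # energy per hydrogen bond (J)

L = 4 * E / m   # latent heat of vaporization (J/kg), roughly 4 MJ/kg
```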

Surface Tension

The surface tension, γ, is typically expressed as force per unit length, which is equivalent to the energy per unit area. At a surface, we estimate one of the four hydrogen bonds is missing, so

γ = (3 × 10−20 J)/(3 × 10−10 m)2 = 0.33 J m−2 .

The measured value is γ = 0.07 J m−2, which is about five times less than our calculation. This is a bigger discrepancy than I’d like for an order-of-magnitude estimate, but it’s not horrible.
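The surface tension estimate, sketched in the same style:

```python
# Surface tension: one missing hydrogen bond's energy per molecular area.
d = 3e-10   # molecular spacing (m)
E = 3e-20   # energy per hydrogen bond (J)

gamma = E / d**2   # surface tension (J/m^2), roughly 0.33
```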

Viscosity

The coefficient of viscosity, η, has units of kg m−1 s−1. We can use the mass of the water molecule in kilograms, and the distance between molecules in meters, but we don’t have a time scale. However, energy has units of kg m2 s−2, so we can take the square root of mass times distance squared over energy and get a unit of time, τ

τ = √[(3 × 10−26 kg) (3 × 10−10 m)2/4(3 × 10−20 J)] = 0.15 × 10−12 s = 0.15 ps.

We can think of this as a time characterizing the vibrations about equilibrium of the molecules. 
 
The viscosity of water should therefore be on the order of

η = (3 × 10−26 kg)/[(3 × 10−10 m) (0.15 × 10−12 s)] = 0.67 × 10−3 kg m−1 s−1.

Water has a viscosity coefficient of about η = 1 × 10−3 kg m−1 s−1. I admit this analysis provides little insight into the mechanism underlying viscous effects, and it doesn’t explain the enormous temperature dependence of η, but it gets the right order of magnitude.
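The two-step estimate above, first a vibrational time scale and then the viscosity, can be sketched as:

```python
import math

# Viscosity of water from a dimensional-analysis time scale.
m, d, E = 3e-26, 3e-10, 3e-20   # molecular mass (kg), spacing (m), bond energy (J)

tau = math.sqrt(m * d**2 / (4 * E))   # vibrational time scale (s), ~0.15 ps
eta = m / (d * tau)                   # viscosity (kg m^-1 s^-1), ~0.67e-3
```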

Specific Heat

The heat capacity is the energy needed to raise the temperature of water by one degree. Thermodynamics implies that the heat capacity is typically equal to Boltzmann’s constant times the number of degrees of freedom per molecule times the number of molecules. The number of degrees of freedom is a subtle thermodynamic concept, but we can approximate it as the number of hydrogen bonds per molecule; about four. Often heat capacity is expressed as the specific heat, C, which is the heat capacity per unit mass. In that case, the specific heat is

C = 4 (1.4 × 10−23 J K−1)/(3 × 10−26 kg) = 1900 J K−1 kg−1.

The measured value is C = 4200 J K−1 kg−1, which is more than a factor of two larger than our estimate. I’m not sure why our value is so low, but probably there are rotational degrees of freedom in addition to the four vibrational modes we counted.
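A sketch of the specific heat estimate, assuming one factor of k per hydrogen bond:

```python
# Specific heat: Boltzmann's constant per hydrogen bond, per kilogram.
k = 1.4e-23   # Boltzmann's constant (J/K)
m = 3e-26     # molecular mass (kg)

C = 4 * k / m   # specific heat (J K^-1 kg^-1), roughly 1900
```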

Diffusion

The self-diffusion constant of water can be estimated using the Stokes-Einstein equation relating diffusion and viscosity, D = kT/(6πηa), where a is the size of the molecule. The thermal energy kT is about one eighth of the energy of a hydrogen bond. Therefore,

D = [(3 × 10−20 J)/8]/[(6)(3.14)(0.67 × 10−3 kg m−1 s−1)(3 × 10−10 m)] = 10−9 m2 s−1.

Figure 4.11 in IPMB suggests the measured diffusion constant is about twice this estimate: D = 2 × 10−9 m2 s−1.
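The Stokes-Einstein estimate, using our estimated viscosity rather than the measured value:

```python
import math

# Self-diffusion constant of water from the Stokes-Einstein relation.
E = 3e-20       # hydrogen-bond energy (J); kT is about E/8 at 310 K
eta = 0.67e-3   # our estimated viscosity (kg m^-1 s^-1)
a = 3e-10       # molecular size (m)

D = (E / 8) / (6 * math.pi * eta * a)   # diffusion constant (m^2/s), ~1e-9
```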

 
Air and Water,
by Mark Denny.
We didn’t do too badly. Three microscopic parameters, plus the temperature, gave us estimates of density, compressibility, speed of sound, latent heat, surface tension, viscosity, specific heat, and the diffusion constant. This is almost all the properties of water discussed in Mark Denny’s wonderful book Air and Water. Fermi would be proud.

Friday, February 26, 2021

Bridging Physics and Biology Teaching Through Modeling

In this blog, I often stress the value of toy models. I’m not the only one who feels this way. Anne-Marie Hoskinson and her colleagues suggest that modeling is an important tool for teaching at the interface of physics and biology (“Bridging Physics and Biology Teaching Through Modeling,” American Journal of Physics, Volume 82, Pages 434–441, 2014). They write
While biology and physics might appear quite distinct to students, as scientific disciplines they both rely on observations and measurements to explain or to make predictions about the natural world. As a shared scientific practice, modeling is fundamental to both biology and physics. Models in these two disciplines serve to explain phenomena of the natural world; they make predictions that drive hypothesis generation and data collection, or they explain the function of an entity. While each discipline may prioritize different types of representations (e.g., diagrams vs mathematical equations) for building and depicting their underlying models, these differences reflect merely alternative uses of a common modeling process. Building on this foundational link between the disciplines, we propose that teaching science courses with an overarching emphasis on scientific practices, particularly modeling, will help students achieve an integrated and coherent understanding that will allow them to drive discovery in the interdisciplinary sciences.
One of their examples is the cardiac cycle, which they compare and contrast with the thermodynamic Carnot cycle. The cardiac cycle is best described graphically, using a pressure-volume diagram. Russ Hobbie and I present a PV plot of the left ventricle in Figure 1.34 of Intermediate Physics for Medicine and Biology. Below, I modify this plot, trying to capture its essence while simplifying it for easier analysis. As is my wont, I present this toy model as a new homework problem.
Sec. 1.19

Problem 38 ½. Consider a toy model for the behavior of the heart’s left ventricle, as expressed in the pressure-volume diagram

(a) Which sections of the cycle (AB, BC, CD, DA) correspond to relaxation, contraction, ejection, and filling?

(b) Which points during the cycle (A, B, C, D) correspond to the aortic valve opening, the aortic valve closing, the mitral valve opening, and the mitral valve closing?

(c) Plot the pressure versus time and the volume versus time (use a common horizontal time axis, but individual vertical pressure and volume axes).

(d) What is the systolic pressure (in mm Hg)?

(e) Calculate the stroke volume (in ml). 

(f) If the heart rate is 70 beats per minute, calculate the cardiac output (in m3 s–1).

(g) Calculate the work done per beat (in joules).

(h) If the heart rate is 70 beats per minute, calculate the average power output (in watts).

(i) Describe in words the four phases of the cardiac cycle.

(j) What are some limitations of this toy model?

The last two parts of the problem are crucial. Many students can analyze equations or plots, but have difficulty relating them to physical events and processes. Translation between words, pictures, and equations is an essential skill. 
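Parts (e) through (h) reduce to simple arithmetic once values are read off the diagram. Here is a sketch with hypothetical numbers (the 70 ml stroke volume and 100 mmHg mean ejection pressure are placeholders of mine, not values from the toy model):

```python
# Cardiac output, work per beat, and average power from hypothetical values.
SV = 70e-6      # stroke volume (m^3); 70 ml, hypothetical
P = 100 * 133   # mean ejection pressure (Pa); 1 mmHg is about 133 Pa
HR = 70 / 60    # heart rate (beats per second)

CO = SV * HR    # cardiac output (m^3/s)
W = P * SV      # work per beat ~ pressure times volume change (J)
power = W * HR  # average power output (W), on the order of 1 watt
```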

All toy models are simplifications; one of their primary uses is to point the way toward more realistic—albeit more complex—descriptions. Many scientific papers contain a paragraph in the discussion section describing the approximations and assumptions underlying the research.

Below is a Wiggers diagram from Wikipedia, which illustrates just how complex cardiac physiology can be. Yet, our toy model captures many general features of the diagram.


A Wiggers diagram summarizing cardiac physiology.
Source: adh30 revised work by DanielChangMD who revised original work of DestinyQx;
Redrawn as SVG by xavax, CC BY-SA 4.0, via Wikimedia Commons

I’ll give Hoskinson and her coworkers the last word.

“We have provided a complementary view to transforming undergraduate science courses by illustrating how physics and biology are united in their underlying use of scientific models and by describing how this practice can be leveraged to bridge the teaching of physics and biology.”

The Wiggers diagram explained in three minutes!
https://www.youtube.com/watch?v=0sogXvxxV0E

Friday, February 19, 2021

Magnetic Coil Stimulation of Straight and Bent Amphibian and Mammalian Peripheral Nerve in Vitro: Locus of Excitation

In this blog, I like to highlight important journal articles. One of my favorites is “Magnetic Coil Stimulation of Straight and Bent Amphibian and Mammalian Peripheral Nerve in Vitro: Locus of Excitation” by Paul Maccabee and his colleagues (Journal of Physiology, Volume 460, Pages 201–219, 1993). This paper isn’t cited in Intermediate Physics for Medicine and Biology, but it should be. It’s related to Homework Problem 32 in Chapter 8, about magnetic stimulation of a peripheral nerve.

The best part of Maccabee’s article is the pictures. I reproduce three of them below, somewhat modified from the originals. 

The electric field and its derivative produced by magnetic stimulation using a figure-of-eight coil. Based on an illustration in Maccabee et al. (1993).
Fig. 1. The electric field and its derivative produced by magnetic stimulation using a figure-of-eight coil. Based on an illustration in Maccabee et al. (1993).

The main topic of the paper was how an electric field induced in tissue during magnetic stimulation could excite a nerve. The first order of business was to map the induced electric field. Figure 1 shows the measured y-component of the electric field, Ey, and its derivative dEy/dy, in a plane below a figure-of-eight coil. The electric field was strongest under the center of the coil, while the derivative had a large positive peak about 2 cm from the center, with a large negative peak roughly 2 cm in the other direction. Maccabee et al. included the derivative of the electric field in their figure because cable theory predicted that if you placed a nerve below the coil parallel to the y axis, the nerve would be excited where −dEy/dy was largest. 

An experiment to show how the stimulus location changes with the stimulus polarity. Based on an illustration in Maccabee et al. (1993).
Fig. 2. An experiment to show how the stimulus location changes with the stimulus polarity. Based on an illustration in Maccabee et al. (1993).

The most important experiment is shown in Figure 2. The goal was to test the prediction that the nerve is excited where −dEy/dy is largest. The method was to stimulate the nerve using one polarity and then the other, and determine whether the location of excitation shifted by about 4 cm, as Figure 1 suggests.

A bullfrog sciatic nerve (green) was dissected out of the animal and placed in a bath containing saline (light blue). An electrode (dark blue dot) recorded the action potential as it reached the end of the nerve. A figure-of-eight coil (red) was placed under the bath. First Maccabee et al. stimulated with one polarity so the stimulation site was to the right of the coil center, relatively close to the recording electrode. The recorded signal (yellow) consisted of a large, brief stimulus artifact followed by an action potential that propagated down the nerve with a speed of 40.5 m/s. Then, they reversed the stimulus polarity. As we saw in Fig. 1, this shifted the location of excitation to another point to the left of the coil center. The recorded signal (purple) again consisted of a stimulus artifact followed by an action potential. The action potential, however, arrived 0.9 ms later because it started from the left side of the coil and therefore had to travel farther to reach the recording electrode. They could determine the distance between the stimulation sites by multiplying the speed by the latency shift: (40.5 m/s)(0.9 ms) = 3.6 cm. This was almost the same as the distance between the two peaks in the plot of dEy/dy in Figure 1. The cable theory prediction was confirmed.

The effect of insulating obstacles on the site of magnetic stimulation. Based on an illustration in Maccabee et al. (1993).
Fig. 3. The effect of insulating obstacles on the site of magnetic stimulation. Based on an illustration in Maccabee et al. (1993).

In another experiment, Maccabee and his coworkers further tested the theory (Fig. 3). The electric field induced during magnetic stimulation was perturbed by an obstruction. They placed two insulating lucite cylinders (yellow) on either side of the nerve, which forced the induced current to pass through the narrow opening between them. This increased the strength of the electric field (green), and caused the negative and positive peaks of the derivative of the electric field (dark blue) to move closer together. Cable theory predicted that if the cylinders were not present the latency shift upon change in polarity would be relatively long, while with the cylinders the latency shift would be relatively short. The experiment found a long latency (1.2 ms) without the cylinders and a short latency (0.3 ms) with them, confirming the prediction. This behavior might be important when stimulating, say, the median nerve as it passes between two bones in the arm.

In addition, Maccabee examined nerves containing bends, which created “hot spots” where excitation preferentially occurred. They also examined polyphasic stimuli, which caused excitation at both the negative and positive peaks of dEy/dy nearly simultaneously. I won’t reproduce all their figures, but I recommend you download a copy of the paper and see them for yourself.

Why do I like this paper so much? For several reasons.

  • It’s an elegant example of how theory suggests an experiment, which once confirmed leads to additional predictions, resulting in even more experiments, and so on: a virtuous cycle.
  • Their illustrations are informative and clear (although I do like the color in my versions). You should be able to get the main point of a scientific paper by merely looking through the figures, and you can do that with Maccabee et al.’s article.
  • In vitro experiments (nerve in a dish) are nice because they strip away all the confounding details of in vivo (nerve in an arm) experiments. You can manipulate the system (say, by adding a couple lucite cylinders) and determine how the nerve responds. Of course, some would say in vivo experiments are better because they include all the complexities of an actual arm. As you might guess, I prefer the simplicity and elegance of in vitro experiments. 
  • If you want a coil that stimulates a peripheral nerve below its center, as opposed to off to one side, you can use a four-leaf coil.
  • Finally, I like this article because Peter Basser and I were the ones who made the theoretical prediction that magnetic stimulation should occur where dEy/dy, not Ey, is maximum (Roth and Basser, “Model of the Stimulation of a Nerve Fiber by Electromagnetic Induction,” IEEE Transactions on Biomedical Engineering, Volume 37, Pages 588-597, 1990). I always love to see my own predictions verified. 

I’ve lost track of my friend Paul Maccabee, but I can assure you that he did good work studying magnetic stimulation of nerves. His article is well worth reading.

Friday, February 12, 2021

A Mechanism for the Dip in the Strength-Interval Curve During Anodal Stimulation of Cardiac Tissue

Scientific articles aren’t published until they’ve undergone peer review. When a manuscript is submitted to a scientific journal, the editor asks several experts to read it and provide their recommendation. All my papers were reviewed and most were accepted and published, although usually after a revision. Today, I’ll tell you about one of my manuscripts that did not survive peer review. I’m glad it didn’t.

In the early 1990s, I was browsing in the library at the National Institutes of Health—where I worked—and stumbled upon an article by Egbert Dekker about the dip in the anodal strength-interval curve.

Dekker, E. (1970)  “Direct Current Make and Break Thresholds for Pacemaker Electrodes on the Canine Ventricle,” Circulation Research, Volume 27, Pages 811–823.
In Dekker’s experiment, he stimulated a dog heart twice: first (S1) to excite an action potential, and then again (S2) during or after the refractory period. You expect that for a short interval between S1 and S2 the tissue is still refractory, or unexcitable, and you’ll get no response to S2. Wait a little longer and the tissue is partially refractory; you’ll excite a second action potential if S2 is strong enough. Wait longer still and the tissue will have returned to rest; a weak S2 will excite it. So, a plot of S2 threshold strength versus S1-S2 interval (the strength-interval curve) ought to decrease.

Dekker observed that the strength-interval curve behaved as expected when S2 was provided by a cathode (an electrode having a negative voltage). A positive anode, however, produced a strength-interval curve containing a dip. In other words, there was an oddball section of the anodal curve that increased with the interval. 

The cathodal and anodal strength-interval curves.

Moreover, Dekker observed two types of excitation: make and break. Make occurred after a stimulus pulse began, and break after it ended. Both anodal and cathodal stimuli could cause make and break excitation. (For more about make and break, see my previous post.)

I decided to examine make and break excitation and the dip in the anodal strength-interval curve using a computer simulation. The bidomain model (see Section 7.9 in Intermediate Physics for Medicine and Biology) represented the anisotropic electrical properties of cardiac tissue. The introduction of the resulting paper stated

In this study, my primary goal is to present a hypothesis for the mechanism of the dip in the anodal strength-interval curve: The dip arises from a complex interaction between anode-break and anode-make excitation. This hypothesis is explored in detail and supported by numerical calculations using the bidomain model. The same mechanism may explain the no-response phenomenon. I also consider the induction of periodic responses [a cardiac arrhythmia] from a premature anodal stimulus. The bidomain model was used previously to investigate the cathodal strength-interval curve; in this study, these calculations are extended to investigate anodal stimulation.
When I submitted this manuscript to a journal, it was rejected! Why? It contained a fatal flaw. To represent how the membrane ion channels opened and closed, I had used the Hodgkin and Huxley model, appropriate for a nerve axon. Yet, the nerve and cardiac action potentials are different. For example, the action potential in the heart lasts a hundred times longer than in a nerve.

After swearing and pouting, I calmed down and redid the calculation using an ion channel model more appropriate for cardiac tissue, and then published a series of papers that are among my best.
Roth, B. J. (1995) “A Mathematical Model of Make and Break Electrical Stimulation of Cardiac Tissue by a Unipolar Anode or Cathode,” IEEE Transactions on Biomedical Engineering, Volume 42, Pages 1174-1184.

Roth, B. J. (1996) “Strength-Interval Curves for Cardiac Tissue Predicted Using the Bidomain Model,” Journal of Cardiovascular Electrophysiology, Volume 7, Pages 722-737.

Roth, B. J. (1997) “Nonsustained Reentry Following Successive Stimulation of Cardiac Tissue Through a Unipolar Electrode,” Journal of Cardiovascular Electrophysiology, Volume 8, Pages 768-778.
I kept a copy of the rejected paper (you can download it here). It’s interesting for what it got right, and what it got wrong.
 
The response of cardiac tissue to S1/S2 stimulation, for a cathode (top) and anode (bottom).
"Strength" is the S2 strength, and "Interval" is the S1-S2 interval.
"No Response" (N) means S2 did not excite an action potential,
"Make" means an action potential was excited after S2 turned on,
"Break" means an action potential was excited after S2 turned off, and
"E" means one (gray) or more (black) extra action potentials were triggered by S2 (reentry).
Beware! These calculations were from my rejected paper.

What it got right: The paper identified make and break regions of the strength-interval curve, predicted a dip in the anodal curve but not the cathodal curve, and produced reentry for strong stimuli near the make/break transition. It even reproduced the no-response phenomenon, in which a strong stimulus excites an action potential but an even stronger stimulus does not.

What it got wrong: Cathode-break excitation was missing. The mechanism for anode-break excitation was incorrect. The Hodgkin-Huxley model predicts that anode-break excitation arises from the ion channel kinetics (for the cognoscenti, hyperpolarization removes sodium channel inactivation). This type of anode-break excitation doesn’t happen in the heart but did occur in my simulations, leading me astray. This wrong anode-break mechanism led to wrong explanations for the dip in the anodal strength-interval curve and the no-response phenomenon. (For the correct mechanism, look here.)

Below I reproduce the final paragraph of the manuscript, with the parts that were wrong in red.
“What useful conclusions result from these simulations?” is a fair question, given the limitations of the model. I believe the primary contribution is a hypothetical mechanism for the dip in the anodal strength-interval curve. The dip may arise from a complex interaction of anode-break and anode-make stimulation: A nonpropagating active response at the virtual cathode raises the threshold for anode-break stimulation under the anode. The same interaction could explain the no-response phenomenon. A second contribution is a hypothesis for the mechanism generating periodic responses to strong anodal stimuli: Anode-make stimulation cannot propagate back toward the anode because of the strong hyperpolarization, and the subsequent excitation of the tissue under the anode occurs with sufficient delay that a reentrant loop arises. This hypothesis is related to, but not the same as, the one presented by Saypol and Roth for cathodally induced periodic responses. These mechanisms are suggested by my numerical simulation using a simplified model; whether they play a role in the behavior of real cardiac tissue is unknown. Hopefully, my results will encourage more accurate simulations and, even more importantly, additional experimental measurements of the spatial-temporal distribution of transmembrane potential around the stimulating electrode during premature stimulation of cardiac tissue.
Even though this manuscript was flawed, it foreshadowed much of my research program for the mid 1990s; it was all there, in the rough. Moreover, in this case the reviewers were right and I was wrong. At the time, I was angry that anyone would reject my paper. Now, in retrospect, I realize they did me a favor; I benefited from their advice. For any young scientist who might be reading this post, don’t be too discouraged by critical reviews and rejection. Give yourself a day to whine and fuss, then fix the problems that need fixing and move on. That’s the way peer review works.

Friday, February 5, 2021

The Spectrum of Scattered X-Rays

Chapter 15 of Intermediate Physics for Medicine and Biology discusses Compton Scattering. In this process, an x-ray photon scatters off a free electron, creating a scattered photon and a recoiling electron. The wavelength shift between the incident and scattered photons, Δλ, is given by the Compton scattering formula (Eq. 15.11 in IPMB)

Δλ = (h/mc)(1 − cos θ) ,

where h is Planck’s constant, c is the speed of light, m is the mass of the electron, and θ is the scattering angle. The quantity h/mc is called the Compton wavelength of the electron.
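The Compton wavelength follows directly from these constants; a quick numerical check:

```python
# Compute the Compton wavelength of the electron, h/mc.
h = 6.626e-34   # Planck's constant (J s)
m = 9.109e-31   # electron mass (kg)
c = 2.998e8     # speed of light (m/s)

lam_C = h / (m * c)   # Compton wavelength (m), about 2.43e-12 m = 0.00243 nm
# At theta = 90 degrees, 1 - cos(theta) = 1, so the shift equals lam_C.
```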

I enjoy studying experiments that first measure fundamental quantities like the Compton wavelength. Such an experiment is described in Arthur Compton’s article
Compton, A. H. (1923) The Spectrum of Scattered X-Rays. Physical Review, Volume 22, Pages 409–413.
Compton’s x-ray source (emitting the Kα line from molybdenum) irradiated a scatterer (graphite). He performed his experiment for different scattering angles θ. For each angle, he first collimated the scattered beam (using small holes in lead sheets) and then reflected it from a crystal (calcite). The purpose of the crystal was to determine the wavelength of the scattered photon by x-ray diffraction. Russ Hobbie and I don’t analyze x-ray diffraction in IPMB. The wavelength λ of the diffracted photon is given by Bragg’s law

λ = 2 d sinϕ ,

where d is the spacing of atomic planes (for calcite, d = 3.036 Å) and ϕ is the angle between the x-ray beam and the crystal surface. For fixed θ, Compton would rotate the crystal, thereby scanning ϕ and analyzing the beam as a function of wavelength. The intensity of the beam would be recorded by a detector (an ionization chamber).
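Bragg's law turns each measured crystal angle into a wavelength. Here is a sketch of that conversion; the two angles below are invented for illustration and are not Compton's measured values:

```python
import math

# Convert crystal angles to wavelengths via Bragg's law for calcite.
d = 3.036   # calcite lattice spacing (angstroms)

def bragg(phi_deg):
    """Wavelength (angstroms) diffracted at crystal angle phi (degrees)."""
    return 2 * d * math.sin(math.radians(phi_deg))

lam_unmodified = bragg(6.7)   # hypothetical angle for the coherent peak
lam_modified = bragg(7.0)     # hypothetical angle for the shifted peak
shift = lam_modified - lam_unmodified   # wavelength shift (angstroms)
```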

Let’s analyze Compton’s experiment in a new homework problem.
Section 15.4

Problem 7½. Use the data below to calculate the Compton wavelength of the electron (in nm). Estimate the uncertainty in your value. Compton’s experiment detected x-rays at both the incident wavelength (coherent scattering) and at a modified or shifted wavelength (Compton scattering).
A drawing of Compton's data he used to determine the Compton wavelength of the electron.

I like this exercise because it requires the reader to do many things: 

  • Decide which spectral line is coherent scattering and which is Compton scattering. 
  • Choose which angle θ to analyze. 
  • Estimate the angle ϕ of each spectral peak from the data. 
  • Approximate the uncertainty in the estimation of ϕ. 
  • Convert the values of ϕ from degrees/minutes to decimal degrees. 
  • Determine the wavelength for each angle using Bragg’s law. 
  • Calculate the wavelength shift. 
  • Relate the wavelength shift to the Compton wavelength. 
  • Compute the Compton wavelength. 
  • Propagate the uncertainty. 
  • Convert from Ångstroms to nanometers.
If you can do all that, you know what you’re doing.

Compton’s experiment played a key role in establishing the quantum theory of light and wave-particle duality. He was awarded the 1927 Nobel Prize in Physics for this research. Let’s give him the last word. Here are the final two sentences of his paper.
This satisfactory agreement between the experiments and the theory gives confidence in the quantum formula for the change in wave-length due to scattering. There is, indeed, no indication of any discrepancy whatever, for the range of wave-length investigated, when this formula is applied to the wave-length of the modified ray.

Friday, January 29, 2021

Stable Nuclei

Fig. 17.2 in Intermediate Physics for Medicine and Biology.
In Figure 17.2 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I show a plot of all the stable nuclei. The vertical axis is the number of protons, Z (the atomic number), and the horizontal axis is the number of neutrons, N (the neutron number). The mass number A equals Z + N. Each tiny black box in the figure corresponds to a stable isotope.

This figure summarizes a tremendous amount of information about nuclear physics. Unfortunately, the drawing is too small to show much detail. We must magnify part of the drawing to tell what boxes correspond to what isotopes. In this post, I provide several such magnified views.

Figure 17.2, with part of the drawing magnified.

Let’s begin by magnifying the bottom left corner, corresponding to the lightest nuclei. The most abundant isotope of hydrogen consists of a single proton. In general, an isotope is denoted using the chemical symbol with a left subscript Z and a left superscript A. The chemical symbol and atomic number are redundant, so we’ll drop the subscript. A proton is written as 1H.

The nucleus to the right of 1H is 2H, called a deuteron, consisting of one proton and one neutron. Deuterium exists in nature but is rare. The isotope 3H (tritium) also exists but isn’t stable (its half-life is 12 years), so it’s not included in the drawing.

The row above hydrogen is helium, with two stable isotopes: 3He and 4He. You probably know the nucleus 4He by another name: the alpha particle. As you move up to higher rows you find the elements lithium, beryllium, and boron (10B is used for boron neutron capture therapy). Light isotopes tend to cluster around the dashed line Z = N.

Figure 17.2, with part of the drawing magnified.

Moving up and to the right brings us to essential elements for life: carbon, nitrogen, and oxygen. Certain values of Z and N, called magic numbers, lead to particularly stable nuclei: 2, 8, 20, 28, 50, 82, and 126. Oxygen is element eight, and Z = 8 is magic, so it has three stable isotopes. The isotope 16O is doubly magic (Z = 8 and N = 8) and is therefore the most abundant isotope of oxygen.

Figure 17.2, with part of the drawing magnified.

The next drawing shows the region around 40Ca, which is also doubly magic (Z = 20, N = 20). It is the heaviest stable nuclide with Z = N. Heavier isotopes need extra neutrons to overcome the Coulomb repulsion of the protons, so the region of stable isotopes bends downward as it extends to the right. The dashed line indicating Z = N won’t appear in later figures; it lies well to the left of the magnified regions. Four stable isotopes (37Cl, 38Ar, 39K, and 40Ca) have a magic number of neutrons, N = 20. Calcium, with its magic number of protons, has five stable isotopes. No stable isotopes correspond to N = 19 or 21. In general, you’ll find more stable isotopes with even values of Z and N than with odd.
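The magic numbers lend themselves to a quick check. Here is a small sketch classifying a few of the nuclei mentioned above (the carbon-12 entry is my own added example of a non-magic nucleus):

```python
# Magic numbers of protons or neutrons, from the text.
MAGIC = {2, 8, 20, 28, 50, 82, 126}

def magicness(Z, N):
    """Count how many magic numbers a nucleus has: 0, 1 (magic), or 2 (doubly magic)."""
    return (Z in MAGIC) + (N in MAGIC)

# (Z, N) pairs for a few nuclei discussed in the post:
examples = {"16O": (8, 8), "40Ca": (20, 20), "39K": (19, 20), "12C": (6, 6)}
labels = {name: magicness(Z, N) for name, (Z, N) in examples.items()}
```

Both 16O and 40Ca score 2 (doubly magic), 39K scores 1, and ordinary 12C scores 0.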

Figure 17.2, with part of the drawing magnified.

Next we move up to the region ranging from Z = 42 (molybdenum) to Z = 44 (ruthenium). No stable isotopes exist for Z = 43 (technetium); a blank row stretches across the region of stability. As discussed in Chapter 17 of IPMB, the unstable 99Mo decays to the metastable state 99mTc (half-life = 6 hours), which plays a crucial role in medical imaging.

Figure 17.2, with part of the drawing magnified.

Tin has a magic number of protons (Z = 50), resulting in ten stable isotopes, the most of any element.

Figure 17.2, with part of the drawing magnified.

As we move to the right, the region of stability ends. The heaviest element with stable isotopes is lead (Z = 82), which has a magic number of protons and four stable isotopes. Above that, nothing. There are unstable isotopes with half-lives so long that they are still found on earth. Scientists used to think that an isotope of bismuth, 209Bi (Z = 83, N = 126), was stable, but now we know its half-life is 2 × 10¹⁹ years. Uranium (Z = 92) has two unstable isotopes with half-lives similar to the age of the earth, but that’s another story.
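A half-life of 2 × 10¹⁹ years is so long that essentially no primordial 209Bi has decayed. A quick estimate (taking the earth's age as roughly 4.5 billion years, my assumption):

```python
import math

HALF_LIFE_BI209 = 2.0e19  # years, from the text
AGE_OF_EARTH = 4.5e9      # years (approximate; my assumption)

# Fraction of a primordial 209Bi sample that has decayed since the earth formed.
decay_constant = math.log(2.0) / HALF_LIFE_BI209
fraction_decayed = 1.0 - math.exp(-decay_constant * AGE_OF_EARTH)
```

The fraction works out to about one part in ten billion, which is why 209Bi passed as stable for so long.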

If you want to find information about all the stable isotopes, and other isotopes that are unstable, search the web for “table of the isotopes.” Here’s my favorite: https://www-nds.iaea.org/relnsd/vcharthtml/VChartHTML.html.

Friday, January 22, 2021

Oh Happy Day!

Eric Lander
Russ Hobbie and I hope that Intermediate Physics for Medicine and Biology will inspire young scientists to study at the interface between physics and physiology, and to work at the boundary between mathematics and medicine. But what sort of job can you get with such a multidisciplinary background? How about Presidential Science Advisor and Director of the White House Office of Science and Technology Policy! This week President Biden nominated Eric Lander—mathematician and geneticist—to that important position.

Lander is no amateur in mathematics. He obtained a PhD in the field from Oxford, which he attended as a Rhodes Scholar. Later his attention turned to molecular biology, and he reinvented himself as a geneticist. He received a MacArthur “genius” grant in 1987 and co-led the Human Genome Project. Now he’ll be part of the Biden administration, making him the most prominent scientist to hold a cabinet-level position since biological physicist Steven Chu.

I’m overjoyed that respect for science has returned to national politics. As we face critical issues such as climate change and the COVID-19 pandemic, input from scientists will be crucial. I’m especially excited because our new president is not the only one who respects science: as I wrote in a letter to the editor in the Oakland Press last October, the Congressional representative from my own district, Elissa Slotkin, also understands and appreciates science. During her fall campaign, I volunteered to write postcards, one of which you can read below.

The last four years have been grim, but the times they are a-changin’. #Scienceisback.

Oh happy day!

Pioneer in Science: Eric Lander -- The Genesis of Genius

https://www.youtube.com/watch?v=IH4rn50arSY&lc=z13awxxhimyzinzc1231zfxwxpq1cnmqz04


Friday, January 15, 2021

Projections and Filtered Projections of a Square

Chapter 12 of Intermediate Physics for Medicine and Biology describes tomography. Russ Hobbie and I write

The reconstruction problem can be stated as follows. A function f(x,y) exists in two dimensions. Measurements are made that give projections: the integrals of f(x,y) along various lines as a function of displacement perpendicular to each line. For example, integration parallel to the y axis gives a function of x,

F(0, x) = ∫ f(x, y) dy,

as shown in Fig. 12.12. The scan is repeated at many different angles θ with the x axis, giving a set of functions F(θ, x'), where x' is the distance along the axis at angle θ with the x axis.

One example in IPMB is the projection of a simple object: the circular top-hat

The projection can be calculated analytically

It’s independent of θ; it looks the same in every direction.
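Here's a numerical check of that statement. Assuming a circular top-hat of radius a = 1 and height 1 (the text leaves the constants general), integrating the indicator function along y reproduces the chord-length formula 2√(a² − x'²):

```python
import math

A = 1.0  # radius of the circular top-hat (my choice; the constants are general)

def projection_numeric(x, n=20000):
    """Integrate f(x, y) = 1 (inside radius A) along y by the midpoint rule."""
    total = 0.0
    dy = 2.0 * A / n
    for i in range(n):
        y = -A + (i + 0.5) * dy
        if x * x + y * y < A * A:
            total += dy
    return total

def projection_analytic(x):
    """Chord length of a circle of radius A: 2*sqrt(A^2 - x^2) for |x| < A."""
    return 2.0 * math.sqrt(A * A - x * x) if abs(x) < A else 0.0
```

Because the object is rotationally symmetric, there is no θ dependence to check: the same chord-length function is the projection in every direction.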

Let’s consider a slightly more complicated object: the square top-hat

This projection can be found using high school geometry and trigonometry (evaluating it is equivalent to finding the lengths of lines passing through the square at different angles). I leave the details to you. If you get stuck, email me (roth@oakland.edu) and I’ll send a picture of some squares and triangles that explains it.

The plot below shows the projections at four angles. For θ = 0° the projection is a rectangle; for θ = 45° it’s a triangle, and for intermediate angles (θ = 15° and 30°) it’s a trapezoid. Unlike the circular top-hat, the projection of the square top-hat depends on the direction.

The projections of a square top-hat, at different angles.
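Here is a short numerical check, assuming a square of side 2a (with a = 1) centered at the origin: the line integral through the square should match a plateau-and-ramp trapezoid for 0° ≤ θ ≤ 45°. The piecewise formula below is my own reconstruction, not copied from the text.

```python
import math

A = 1.0  # half-width of the square (side 2A, centered at the origin; my assumption)

def projection_numeric(theta_deg, xp, n=20000):
    """Length of the line through the square at angle theta and offset x',
    computed by the midpoint rule along the perpendicular coordinate y'."""
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    L = 2.0 * A                       # integration range safely covers the square
    dyp = 2.0 * L / n
    total = 0.0
    for i in range(n):
        yp = -L + (i + 0.5) * dyp
        x = xp * c - yp * s           # rotate (x', y') back to (x, y)
        y = xp * s + yp * c
        if abs(x) < A and abs(y) < A:
            total += dyp
    return total

def projection_analytic(theta_deg, xp):
    """Trapezoidal projection for 0 <= theta <= 45 degrees (my reconstruction)."""
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    plateau, edge = A * (c - s), A * (c + s)
    if abs(xp) <= plateau:
        return 2.0 * A / c                  # flat top of the trapezoid
    if abs(xp) < edge:
        return (edge - abs(xp)) / (c * s)   # sloping side
    return 0.0
```

At θ = 0° the trapezoid degenerates to a rectangle (the plateau spans the whole support), and at θ = 45° it degenerates to a triangle of peak height 2√2 a, matching the plot.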

What I just described is the forward problem of tomography: calculating the projections from the object. As Russ and I wrote, usually the measuring device records projections, so you don’t have to calculate them. The central goal of tomography is the inverse problem: calculating the object from the projections. One way to perform such a reconstruction is a two-step procedure known as filtered back projection: first high-pass filter the projections and then back project them. In a previous post, I went through this entire procedure analytically for a circular top-hat. Today, I go through the filtering process analytically, obtaining an expression for the filtered projection of a square top-hat. 
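The two-step procedure can be sketched numerically with numpy, applied to the circular top-hat from that previous post (radius 0.5 and height 1, both my choices). A caveat on conventions: the text's filter is |k|/2π in angular frequency, while numpy's fftfreq works in ordinary frequency, which absorbs the 2π; this is a rough sketch, not the text's derivation.

```python
import numpy as np

a = 0.5                   # top-hat radius (my choice)
n, n_theta = 256, 90      # samples per projection, number of angles
xp = np.linspace(-1.0, 1.0, n, endpoint=False)
tau = xp[1] - xp[0]       # sample spacing along x'

# Analytic projection of the circular top-hat (the same at every angle):
proj = np.where(np.abs(xp) < a,
                2.0 * np.sqrt(np.maximum(a**2 - xp**2, 0.0)), 0.0)

# Step 1: high-pass (ramp) filter each projection in the frequency domain.
freqs = np.fft.fftfreq(n, d=tau)
filtered = np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)).real

# Step 2: back project over angles from 0 to pi (all projections are
# identical here because of the circular symmetry).
thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
x = y = 0.0               # reconstruct the object at the center
f_center = 0.0
for th in thetas:
    t = x * np.cos(th) + y * np.sin(th)
    f_center += np.interp(t, xp, filtered)
f_center *= np.pi / n_theta
```

With this discretization the reconstructed value at the center comes out close to the true height of 1, with a few percent of error from truncating the ramp filter at the Nyquist frequency.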

Here we go. I warn you, there’s lots of math. To perform the filtering, we first calculate the Fourier transform of the projection, CF(θ,k). Because the top-hat is even, we can use the cosine transform

where k is the spatial frequency.

Next, place the expression for F(θ,x') into the integral and evaluate it. There’s plenty of bookkeeping, but the projection is either constant or linear in x', so the integrals are straightforward. I leave the details to you; if you work it out yourself, you’ll be delighted to find that many terms cancel, leaving the simple result 

To high-pass filter CF(θ,k), multiply it by |k|/2π to get the Fourier transform of the filtered projection CG(θ,k)

Finally, take the inverse Fourier transform to obtain the filtered projection G(θ,x')


Inserting our expression for CG(θ,k), we find

This integral is not trivial, but with some help from WolframAlpha I found

where Ci is the cosine integral. I admit, this is a complicated expression. The cosine integral goes to zero for large arguments, so the upper limit vanishes. It goes to negative infinity logarithmically as its argument approaches zero. We’re in luck, however, because the four cosine integrals conspire to cancel all the infinities, allowing us to obtain an analytical expression for the filtered projection
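You can watch that cancellation happen numerically. For small arguments, Ci(x) behaves like γ + ln x, so differences of cosine integrals stay finite even as each term diverges. Here is a quick sketch using the integral representation Ci(x) = γ + ln x + ∫₀ˣ (cos t − 1)/t dt (my own construction, evaluated by the midpoint rule, since the integrand is well behaved at t = 0):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def Ci(x, n=20000):
    """Cosine integral via Ci(x) = gamma + ln(x) + integral_0^x (cos t - 1)/t dt."""
    total = 0.0
    dt = x / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += (math.cos(t) - 1.0) / t * dt
    return EULER_GAMMA + math.log(x) + total

# The logarithmic divergences cancel in differences:
# Ci(2*eps) - Ci(eps) -> ln 2 as eps -> 0, even though each term blows up.
eps = 1e-4
diff = Ci(2 * eps) - Ci(eps)
```

The same mechanism is at work in the four-term combination above: the logarithms combine into finite ratios.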

We did it! Below are plots of the filtered projections at four angles. 

The filtered projections of a square top-hat, at different angles.

The last thing to do is back project G(θ,x') to get the object f(x,y). Unfortunately, I see no hope of back projecting this function analytically; it’s too complicated. If you can do it, let me know.

Why must we analyze all this math? Because solving a simple example analytically provides insight into filtered back projection. You can do tomography using canned computer code, but you won’t experience the process like you will by slogging through each step by hand. If you don’t buy that argument, then another reason for doing the math is: it’s fun!