Friday, March 26, 2021

Cooling by Radiation

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss thermal radiation. If you’re a black body, the net power you radiate, wtot, is given by Eq. 14.41

wtot = S σSB (T⁴ − Ts⁴) ,                (14.41)

where S is the surface area, σSB is the Stefan-Boltzmann constant (5.67 × 10−8 W m−2 K−4), T is the absolute temperature of your body (about 310 K), and Ts is the temperature of your surroundings. The T⁴ term is the radiation you emit, and the Ts⁴ term is the radiation you absorb.

The fourth power that appears in this expression is annoying. It means we must use absolute temperature in kelvins (K); you get the wrong answer if you use temperature in degrees Celsius (°C). It also means the expression is nonlinear; wtot is not proportional to the temperature difference T − Ts.

On the absolute temperature scale, the difference between the temperature of your body (310 K) and the temperature of your surroundings (say, 293 K at 20 °C) is only about 5%. In this case, we simplify the expression for wtot by linearizing it. To see what I mean, try Homework Problem 14.32 in IPMB.
Section 14.9 
Problem 32. Show that an approximation to Eq. 14.41 for small temperature differences is wtot = S Krad (T − Ts). Deduce the value of Krad at body temperature. Hint: Factor T⁴ − Ts⁴ = (T − Ts)(…). You should get Krad = 6.76 W m−2 K−1.
The constant Krad has the same units as a convection coefficient (see Homework Problem 51 in Chapter 3 of IPMB). Think of it as an effective convection coefficient for radiative heat loss. Once you determine Krad, you can use either the kelvin or Celsius temperature scales for T − Ts, so you can write its units as W m−2 °C−1.
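The linearization is easy to check numerically. Expanding T⁴ − Ts⁴ ≈ 4T³(T − Ts) gives Krad = 4 σSB T³. A minimal sketch, using only the constants quoted above:

```python
# Linearized radiative cooling: K_rad = 4 * sigma * T^3.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 310.0         # body temperature, K
Ts = 293.0        # surroundings, K (20 degrees C)

K_rad = 4 * sigma * T**3             # effective convection coefficient
w_exact = sigma * (T**4 - Ts**4)     # exact net power per unit area
w_linear = K_rad * (T - Ts)          # linearized approximation

print(f"K_rad  = {K_rad:.2f} W m^-2 K^-1")   # about 6.76
print(f"exact  = {w_exact:.0f} W m^-2")
print(f"linear = {w_linear:.0f} W m^-2")
```

The linear estimate overshoots the exact result by roughly 9% here, because Krad is evaluated at T = 310 K rather than at a temperature between T and Ts; for smaller temperature differences the agreement improves.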
 
Air and Water,
by Mark Denny.
In Air and Water, Mark Denny analyzes the convection coefficient. In a stagnant fluid, the convection coefficient depends only on the fluid’s thermal conductivity and the body’s size. For a sphere, it is inversely proportional to the diameter, meaning that small bodies are more effective at convective cooling per unit surface area than large bodies. If the body undergoes free or forced convection (in both cases the surrounding fluid is moving), the expression for the convection coefficient is more complicated, and depends on factors such as the Reynolds number and Prandtl number of the fluid flow. Denny gives values for the convection coefficient as a function of body size for both air and water. Usually, these values are greater than the 6.76 W m−2 °C−1 for radiation. However, for large bodies in air, radiation can compete with convection as the dominant mechanism. For people, radiation is an important mechanism for cooling. For a dolphin or mouse, it isn’t. Elephants probably make good use of radiative cooling.
 
Finally, our analysis implies that when the difference between the temperatures of the body and the surroundings is small, a body whose primary mechanism for getting rid of heat is radiation will cool exponentially following Newton’s law of cooling.

Friday, March 19, 2021

The Carr-Purcell-Meiboom-Gill Pulse Sequence

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny...” 

Isaac Asimov

In Section 18.8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the Carr-Purcell pulse sequence, used in magnetic resonance imaging.

When a sequence of π [180° radio-frequency] pulses that nutate M [the magnetization vector] about the x' axis are applied at TE/2, 3TE/2, 5TE/2, etc., a sequence of echoes are formed [in the Mx signal], the amplitudes of which decay with relaxation time T2. This is shown in Fig. 18.19.
Fig. 18.19  The Carr-Purcell pulse sequence.
All π pulses nutate about the x' axis.
The envelope of echoes decays as e^(−t/T2).

Russ and I then discuss the Carr-Purcell-Meiboom-Gill pulse sequence.
One disadvantage of the CP [Carr-Purcell] sequence is that the π pulse must be very accurate or a cumulative error builds up in the successive pulses. The Carr-Purcell-Meiboom-Gill sequence overcomes this problem. The initial π/2 [90° radio-frequency] pulse nutates M about the x' axis as before, but the subsequent [π] pulses are shifted a quarter cycle in time, which causes them to rotate about the y' axis.
 
Fig. 18.21  The Carr-Purcell-Meiboom-Gill pulse sequence.

Meiboom, S. and Gill, D. (1958)
“Modified Spin-Echo Method for
Measuring Nuclear Relaxation Times.”
Rev. Sci. Instr.
29:688–691.
Students might enjoy reading the abstract of Saul Meiboom and David Gill’s 1958 article published in the Review of Scientific Instruments (Volume 29, Pages 688-691).
A spin echo method adapted to the measurement of long nuclear relaxation times (T2) in liquids is described. The pulse sequence is identical to the one proposed by Carr and Purcell, but the rf [radio-frequency] of the successive pulses is coherent, and a phase shift of 90° is introduced in the first pulse. Very long T2 values can be measured without appreciable effect of diffusion.
This short paper is so highly cited that it was featured in a 1980 Citation Classic commentary, in which Meiboom reflected on the significance of the research.
The work leading to this paper was done nearly 25 years ago at the Weizmann Institute of Science, Rehovot, Israel. David Gill, who was then a graduate student… , set out to measure NMR T2-relaxation times in liquids, using the well-known Carr-Purcell pulse train scheme. He soon found that at high pulse repetition rates adjustments became very critical, and echo decays, which ideally should be exponential, often exhibited beats and other irregularities. But he also saw that on rare and unpredictable occasions a beautiful exponential decay was observed... Somehow the recognition emerged that the chance occurrence of a 90° phase shift of the nuclear polarization [magnetization] must underlie the observations. It became clear that in the presence of such a shift a stable, self-correcting state of the nuclear polarization is produced, while the original scheme results in an unstable state, for which deviations are cumulative. From here it was an easy step to the introduction of an intentional phase shift in the applied pulse train, and the consistent production of good decays.
The key point is that the delay between the initial π/2 pulse (to flip the spins into the xy plane) and the string of π pulses (to create the echoes) must be timed carefully (the pulses must be coherent). Even adding a delay corresponding to a quarter of a single oscillation changes everything. In a two-tesla MRI scanner, the Larmor frequency is about 85 MHz, so one period is 12 nanoseconds. Therefore, if the timing is off by just a few nanoseconds, the method won’t work.
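As a quick check of these numbers, here is a short sketch. It assumes the standard proton gyromagnetic ratio of about 42.58 MHz per tesla (a value not quoted above):

```python
# Larmor frequency and period for protons in a 2 T scanner.
gamma_bar = 42.58e6   # proton gyromagnetic ratio / 2*pi, Hz per tesla
B = 2.0               # magnetic field strength, tesla

f = gamma_bar * B     # Larmor frequency, Hz
period = 1.0 / f      # one oscillation period, s

print(f"f = {f/1e6:.0f} MHz, period = {period*1e9:.0f} ns")
```

A quarter cycle is then only about 3 ns, which is why the pulse timing is so critical.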

Initially Gill didn’t worry about timing the pulses precisely, so usually he was using the error-prone Carr-Purcell sequence. Occasionally he got lucky and the timing was just right; he was using what’s now called the Carr-Purcell-Meiboom-Gill sequence. Meiboom and Gill “somehow” were able to deduce what was happening and fix the problem. Meiboom believes their paper is cited so often because it was the first to recognize the importance of maintaining phase relations between the different pulses in an MRI pulse sequence.

In his commentary, Meiboom notes that
Although in hindsight the 90° phase shift seems the logical and almost obvious thing to do, its introduction was triggered by a chance observation, rather than by clever a priori reasoning. I suspect (though I have no proof) that this applies to many scientific developments, even if the actual birth process of a new idea is seldom described in the introduction to the relevant paper.
If you’re a grad student working on a difficult experiment that’s behaving oddly, don’t be discouraged if you hear yourself saying “that’s funny...” A discovery might be sitting right in front of you!  

Friday, March 12, 2021

The Rest of the Story 2

Hermann von Helmholtz in 1848.

The Rest of the Story

Hermann was born in 1821 in Potsdam, Germany. He was often sick as a child, suffering from illnesses such as scarlet fever, and started school late. He was hampered by a poor memory for disconnected facts, making subjects like languages and history difficult, so his interest turned to science. His father loaned him some cheap glass lenses that he used to build optical instruments. He wanted to become a physicist, but his family couldn’t afford to send him to college. Instead, he studied hard to pass an exam that won him a place in a government medical school in Berlin, where his education would be free if he served in the military for five years after he graduated.

The seventeen-year-old Hermann moved to Berlin in 1838. He brought his piano with him, on which he loved to play Mozart and Beethoven. He became friends with his fellow students Ernst von Brücke and Emil du Bois-Reymond, began doing scientific research under the direction of physiologist Johannes Müller, and taught himself higher mathematics in his spare time. In 1843 he graduated and began his required service as an army surgeon.

Life in the army required long hours, and Hermann was isolated from the scientific establishment in Berlin. But with the help of Brücke and du Bois-Reymond he somehow continued his research. His constitution was still delicate, and sometimes he would take time off to restore his health. Near the end of his five-year commitment to the army he fell in love with Olga von Velten, who would sing while he accompanied her on the piano. They became engaged, but he knew they could not marry until he found an academic job after his military service ended, and for that he needed to establish himself as a first-rank scientist. This illness-prone, cash-strapped, over-worked army doctor with a poor memory and a love for music needed to find a research project that would propel him to the top of German science.

Hermann rose to the challenge. He began a careful study of the balance between muscle metabolism and contraction. Using both experiments and mathematics he established the conservation of energy, and in the process showed that no vital force was needed to explain life. On July 23, 1847 he announced his discovery at a meeting of the German Physical Society.  

This research led to a faculty position in Berlin and his marriage to Olga. His career took off, and he later made contributions to the study of vision, hearing, nerve conduction, and ophthalmology. Today, the largest Association of German Research Centers bears his name. Many consider Hermann von Helmholtz to be the greatest biological physicist of all time.

And now you know THE REST OF THE STORY.

Good day! 

_____________________________________________________________

This blog post was written in the style of Paul Harvey’s “The Rest of the Story” radio program. The content is based on a biography of Helmholtz written by his friend and college roommate Leo Koenigsberger. You can read about nerve conduction and Helmholtz’s first measurement of its propagation speed in Chapter 6 of Intermediate Physics for Medicine and Biology. This August we will celebrate the 200th anniversary of Hermann von Helmholtz’s birth. 

Click here for another IPMB “The Rest of the Story” post.

 
Charles Osgood pays tribute to the master storyteller Paul Harvey.

Friday, March 5, 2021

Estimating the Properties of Water

Water
from: www.middleschoolchemistry.com
 
I found a manuscript on the arXiv by Andrew Lucas about estimating macroscopic properties of materials using just a few microscopic parameters. I decided to try a version of this analysis myself. It’s based on Lucas’s work, with a few modifications. I focus exclusively on water because of its importance for biological physics, and make order-of-magnitude calculations like those Russ Hobbie and I discuss in the first section of Intermediate Physics for Medicine and Biology.

My goal is to estimate the properties of water using three numbers: the size, mass, and energy associated with water molecules. We take the size to be the center-to-center distance between molecules, which is about 3 Å, or 3 × 10−10 m. The mass of a water molecule is 18 (the molecular weight) times the mass of a proton, or about 3 × 10−26 kg. The energy associated with one hydrogen bond between water molecules is about 0.2 eV, or 3 × 10−20 J. This is roughly eight times the thermal energy kT at body temperature, where k is Boltzmann’s constant (1.4 × 10−23 J K−1) and T is the absolute temperature (310 K). A water molecule has about four hydrogen bonds with neighboring molecules.

Density

Estimating the density of water, ρ, is Homework Problem 4 in Chapter 1 of IPMB. Density is mass divided by volume, and volume is distance cubed

ρ = (3 × 10−26 kg)/(3 × 10−10 m)3 = 1100 kg m−3 = 1.1 g cm−3.

The accepted value is ρ = 1.0 g cm−3, so our calculation is about 10% off; not bad for an order-of-magnitude estimate.

Compressibility

The compressibility of water, κ, is a measure of how the volume of water decreases with increasing pressure. It has dimensions of inverse pressure. The pressure is typically thought of as force per unit area, but we can multiply numerator and denominator by distance and express it as energy per unit volume. Therefore, the compressibility is approximately distance cubed over the total energy of the four hydrogen bonds

κ = (3 × 10−10 m)3/[4(3 × 10−20 J)] = 0.25 × 10−9 Pa−1 = 0.25 GPa−1 ,

implying a bulk modulus, B (the reciprocal of the compressibility), of 4 GPa. Water has a bulk modulus of about B = 2.2 GPa, so our estimate is within a factor of two.

Speed of Sound

Once you know the density and compressibility, you can calculate the speed of sound, c, as (see Eq. 13.11 in IPMB)

c = (ρ κ)−1/2 = 1/√[(1100 kg m−3) (0.25 × 10−9 Pa−1)] = 1900 m s−1 = 1.9 km s−1.

The measured value of the speed of sound in water is about c = 1.5 km s−1, which is pretty close for a back-of-the-envelope estimate.

Latent Heat

A homework problem about vapor pressure in Chapter 3 of IPMB uses water’s latent heat of vaporization, L, which is the energy required to boil water per kilogram. We estimate it as

L = 4(3 × 10−20 J)/(3 × 10−26 kg) = 4.0 × 106 J kg−1 = 4 MJ kg−1.

The known value is L = 2.5 MJ kg−1. Not great, but not bad.

Surface Tension

The surface tension, γ, is typically expressed as force per unit length, which is equivalent to the energy per unit area. At a surface, we estimate one of the four hydrogen bonds is missing, so

γ = (3 × 10−20 J)/(3 × 10−10 m)2 = 0.33 J m−2 .

The measured value is γ = 0.07 J m−2, which is about five times less than our calculation. This is a bigger discrepancy than I’d like for an order-of-magnitude estimate, but it’s not horrible.

Viscosity

The coefficient of viscosity, η, has units of kg m−1 s−1. We can use the mass of the water molecule in kilograms, and the distance between molecules in meters, but we don’t have a time scale. However, energy has units of kg m2 s−2, so we can take the square root of mass times distance squared over energy and get a unit of time, τ

τ = √[(3 × 10−26 kg) (3 × 10−10 m)2/4(3 × 10−20 J)] = 0.15 × 10−12 s = 0.15 ps.

We can think of this as a time characterizing the vibrations about equilibrium of the molecules. 
 
The viscosity of water should therefore be on the order of

η = (3 × 10−26 kg)/[(3 × 10−10 m) (0.15 × 10−12 s)] = 0.67 × 10−3 kg m−1 s−1.

Water has a viscosity coefficient of about η = 1 × 10−3 kg m−1 s−1. I admit this analysis provides little insight into the mechanism underlying viscous effects, and it doesn’t explain the enormous temperature dependence of η, but it gets the right order of magnitude.

Specific Heat

The heat capacity is the energy needed to raise the temperature of water by one degree. The equipartition theorem implies that the heat capacity is roughly Boltzmann’s constant times the number of degrees of freedom per molecule times the number of molecules. The number of degrees of freedom is a subtle thermodynamic concept, but we can approximate it as the number of hydrogen bonds per molecule; about four. Often heat capacity is expressed as the specific heat, C, which is the heat capacity per unit mass. In that case, the specific heat is

C = 4 (1.4 × 10−23 J K−1)/(3 × 10−26 kg) = 1900 J K−1 kg−1.

The measured value is C = 4200 J K−1 kg−1, which is more than a factor of two larger than our estimate. I’m not sure why our value is so low, but probably there are rotational degrees of freedom in addition to the four vibrational modes we counted.

Diffusion

The self-diffusion constant of water can be estimated using the Stokes-Einstein equation relating diffusion and viscosity, D = kT/(6πηa), where a is the size of the molecule. The thermal energy kT is about one eighth of the energy of a hydrogen bond. Therefore,

D = [(3 × 10−20 J)/8]/[(6)(3.14)(0.67 × 10−3 kg m−1 s−1)(3 × 10−10 m)] = 10−9 m2 s−1.

Figure 4.11 in IPMB suggests the measured diffusion constant is about twice this estimate: D = 2 × 10−9 m2 s−1
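The whole chain of estimates can be reproduced in a few lines. This sketch uses only the three microscopic parameters plus the temperature; the numerical results differ slightly from the rounded values above:

```python
# Order-of-magnitude properties of water from three microscopic parameters.
from math import sqrt, pi

a = 3e-10      # center-to-center spacing of molecules, m
m = 3e-26      # mass of one water molecule, kg
eps = 3e-20    # energy of one hydrogen bond, J
n = 4          # hydrogen bonds per molecule
k = 1.38e-23   # Boltzmann constant, J/K

rho = m / a**3                     # density, kg/m^3          (~1100)
kappa = a**3 / (n * eps)           # compressibility, 1/Pa    (~2.3e-10)
c = 1 / sqrt(rho * kappa)          # speed of sound, m/s      (~2000)
L = n * eps / m                    # latent heat, J/kg        (~4e6)
gamma = eps / a**2                 # surface tension, J/m^2   (~0.33)
tau = sqrt(m * a**2 / (n * eps))   # vibrational time, s      (~1.5e-13)
eta = m / (a * tau)                # viscosity, kg/(m s)      (~6.7e-4)
C = n * k / m                      # specific heat, J/(K kg)  (~1800)
D = (eps / 8) / (6 * pi * eta * a) # diffusion constant, m^2/s (~1e-9)
```

Each line mirrors one of the estimates worked out above; changing any one of the three input parameters propagates through all eight properties.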

 
Air and Water,
by Mark Denny.
We didn’t do too bad. Three microscopic parameters, plus the temperature, gave us estimates of density, compressibility, speed of sound, latent heat, surface tension, viscosity, specific heat, and the diffusion constant. This is almost all the properties of water discussed in Mark Denny’s wonderful book Air and Water. Fermi would be proud.

Friday, February 26, 2021

Bridging Physics and Biology Teaching Through Modeling

In this blog, I often stress the value of toy models. I’m not the only one who feels this way. Anne-Marie Hoskinson and her colleagues suggest that modeling is an important tool for teaching at the interface of physics and biology (“Bridging Physics and Biology Teaching Through Modeling,” American Journal of Physics, Volume 82, Pages 434–441, 2014). They write
While biology and physics might appear quite distinct to students, as scientific disciplines they both rely on observations and measurements to explain or to make predictions about the natural world. As a shared scientific practice, modeling is fundamental to both biology and physics. Models in these two disciplines serve to explain phenomena of the natural world; they make predictions that drive hypothesis generation and data collection, or they explain the function of an entity. While each discipline may prioritize different types of representations (e.g., diagrams vs mathematical equations) for building and depicting their underlying models, these differences reflect merely alternative uses of a common modeling process. Building on this foundational link between the disciplines, we propose that teaching science courses with an overarching emphasis on scientific practices, particularly modeling, will help students achieve an integrated and coherent understanding that will allow them to drive discovery in the interdisciplinary sciences.
One of their examples is the cardiac cycle, which they compare and contrast with the thermodynamic Carnot cycle. The cardiac cycle is best described graphically, using a pressure-volume diagram. Russ Hobbie and I present a PV plot of the left ventricle in Figure 1.34 of Intermediate Physics for Medicine and Biology. Below, I modify this plot, trying to capture its essence while simplifying it for easier analysis. As is my wont, I present this toy model as a new homework problem.
Sec. 1.19

Problem 38 ½. Consider a toy model for the behavior of the heart’s left ventricle, as expressed in the pressure-volume diagram

(a) Which sections of the cycle (AB, BC, CD, DA) correspond to relaxation, contraction, ejection, and filling?

(b) Which points during the cycle (A, B, C, D) correspond to the aortic valve opening, the aortic valve closing, the mitral valve opening, and the mitral valve closing?

(c) Plot the pressure versus time and the volume versus time (use a common horizontal time axis, but individual vertical pressure and volume axes).

(d) What is the systolic pressure (in mm Hg)?

(e) Calculate the stroke volume (in ml). 

(f) If the heart rate is 70 beats per minute, calculate the cardiac output (in m3 s–1).

(g) Calculate the work done per beat (in joules).

(h) If the heart rate is 70 beats per minute, calculate the average power output (in watts).

(i) Describe in words the four phases of the cardiac cycle.

(j) What are some limitations of this toy model?
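The numbers for parts (e)–(h) depend on the pressure-volume diagram, which isn’t reproduced here. With typical textbook values (a hypothetical rectangular loop with a systolic pressure of 120 mmHg and a stroke volume of 70 ml; these are assumptions, not values read from the figure), the arithmetic runs roughly like this:

```python
# Rough numbers for the cardiac-cycle problem, assuming a rectangular
# P-V loop with typical values (NOT the values from the figure).
p_sys = 120 * 133.3   # systolic pressure, Pa (1 mmHg = 133.3 Pa)
dV = 70e-6            # stroke volume, m^3 (70 ml)
rate = 70 / 60.0      # heart rate, beats per second

output = dV * rate    # cardiac output, m^3/s   (~8e-5)
work = p_sys * dV     # work per beat ~ loop area, J (~1.1)
power = work * rate   # average power output, W (~1.3)
```

The work per beat is approximated by the loop area, taken here as systolic pressure times stroke volume (i.e., assuming the filling pressure is small); the resulting ~1 J per beat and ~1 W average power are the right order of magnitude for the left ventricle.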

The last two parts of the problem are crucial. Many students can analyze equations or plots, but have difficulty relating them to physical events and processes. Translation between words, pictures, and equations is an essential skill. 

All toy models are simplifications; one of their primary uses is to point the way toward more realistic—albeit more complex—descriptions. Many scientific papers contain a paragraph in the discussion section describing the approximations and assumptions underlying the research.

Below is a Wiggers diagram from Wikipedia, which illustrates just how complex cardiac physiology can be. Yet, our toy model captures many general features of the diagram.


A Wiggers diagram summarizing cardiac physiology.
Source: adh30 revised work by DanielChangMD who revised original work of DestinyQx;
Redrawn as SVG by xavax, CC BY-SA 4.0, via Wikimedia Commons

I’ll give Hoskinson and her coworkers the last word.

“We have provided a complementary view to transforming undergraduate science courses by illustrating how physics and biology are united in their underlying use of scientific models and by describing how this practice can be leveraged to bridge the teaching of physics and biology.”

The Wiggers diagram explained in three minutes!
https://www.youtube.com/watch?v=0sogXvxxV0E

Friday, February 19, 2021

Magnetic Coil Stimulation of Straight and Bent Amphibian and Mammalian Peripheral Nerve in Vitro: Locus of Excitation

In this blog, I like to highlight important journal articles. One of my favorites is “Magnetic Coil Stimulation of Straight and Bent Amphibian and Mammalian Peripheral Nerve in Vitro: Locus of Excitation” by Paul Maccabee and his colleagues (Journal of Physiology, Volume 460, Pages 201–219, 1993). This paper isn’t cited in Intermediate Physics for Medicine and Biology, but it should be. It’s related to Homework Problem 32 in Chapter 8, about magnetic stimulation of a peripheral nerve.

The best part of Maccabee’s article is the pictures. I reproduce three of them below, somewhat modified from the originals. 

Fig. 1. The electric field and its derivative produced by magnetic stimulation using a figure-of-eight coil. Based on an illustration in Maccabee et al. (1993).

The main topic of the paper was how an electric field induced in tissue during magnetic stimulation could excite a nerve. The first order of business was to map the induced electric field. Figure 1 shows the measured y-component of the electric field, Ey, and its derivative dEy/dy, in a plane below a figure-of-eight coil. The electric field was strongest under the center of the coil, while the derivative had a large positive peak about 2 cm from the center, with a large negative peak roughly 2 cm in the other direction. Maccabee et al. included the derivative of the electric field in their figure because cable theory predicted that if you placed a nerve below the coil parallel to the y axis, the nerve would be excited where −dEy/dy was largest. 

Fig. 2. An experiment to show how the stimulus location changes with the stimulus polarity. Based on an illustration in Maccabee et al. (1993).

The most important experiment is shown in Figure 2. The goal was to test the prediction that the nerve was excited where −dEy/dy is largest. The method was to stimulate the nerve using one polarity and then the other, and determine if the location where the nerve is stimulated shifted by about 4 cm, as Figure 1 suggests.

A bullfrog sciatic nerve (green) was dissected out of the animal and placed in a bath containing saline (light blue). An electrode (dark blue dot) recorded the action potential as it reached the end of the nerve. A figure-of-eight coil (red) was placed under the bath. First Maccabee et al. stimulated with one polarity so the stimulation site was to the right of the coil center, relatively close to the recording electrode. The recorded signal (yellow) consisted of a large, brief stimulus artifact followed by an action potential that propagated down the nerve with a speed of 40.5 m/s. Then, they reversed the stimulus polarity. As we saw in Fig. 1, this shifted the location of excitation to another point to the left of the coil center. The recorded signal (purple) again consisted of a stimulus artifact followed by an action potential. The action potential, however, arrived 0.9 ms later because it started from the left side of the coil and therefore had to travel farther to reach the recording electrode. They could determine the distance between the stimulation sites by multiplying the speed by the latency shift: (40.5 m/s)(0.9 ms) ≈ 3.6 cm. This was almost the same as the distance between the two peaks in the plot of dEy/dy in Figure 1. The cable theory prediction was confirmed. 
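The separation between the two stimulation sites is simply the conduction velocity times the latency shift. A minimal check:

```python
# Distance between stimulation sites = conduction velocity x latency shift.
v = 40.5       # conduction velocity, m/s
dt = 0.9e-3    # latency shift between the two polarities, s

d = v * dt     # separation between stimulation sites, m
print(f"{d*100:.1f} cm")
```

This works out to about 3.6 cm, comparable to the roughly 4 cm separation between the peaks of dEy/dy under the figure-of-eight coil.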

Fig. 3. The effect of insulating obstacles on the site of magnetic stimulation. Based on an illustration in Maccabee et al. (1993).

In another experiment, Maccabee and his coworkers further tested the theory (Fig. 3). The electric field induced during magnetic stimulation was perturbed by an obstruction. They placed two insulating lucite cylinders (yellow) on either side of the nerve, which forced the induced current to pass through the narrow opening between them. This increased the strength of the electric field (green), and caused the negative and positive peaks of the derivative of the electric field (dark blue) to move closer together. Cable theory predicted that if the cylinders were not present the latency shift upon change in polarity would be relatively long, while with the cylinders the latency shift would be relatively short. The experiment found a long latency (1.2 ms) without the cylinders and a short latency (0.3 ms) with them, confirming the prediction. This behavior might be important when stimulating, say, the median nerve as it passes between two bones in the arm.

In addition, Maccabee examined nerves containing bends, which created “hot spots” where excitation preferentially occurred. They also examined polyphasic stimuli, which caused excitation at both the negative and positive peaks of dEy/dy nearly simultaneously. I won’t reproduce all their figures, but I recommend you download a copy of the paper and see them for yourself.

Why do I like this paper so much? For several reasons.

  • It’s an elegant example of how theory suggests an experiment, which once confirmed leads to additional predictions, resulting in even more experiments, and so on; a virtuous cycle.
  • Their illustrations are informative and clear (although I do like the color in my versions). You should be able to get the main point of a scientific paper by merely looking through the figures, and you can do that with Maccabee et al.’s article.
  • In vitro experiments (nerve in a dish) are nice because they strip away all the confounding details of in vivo (nerve in an arm) experiments. You can manipulate the system (say, by adding a couple lucite cylinders) and determine how the nerve responds. Of course, some would say in vivo experiments are better because they include all the complexities of an actual arm. As you might guess, I prefer the simplicity and elegance of in vitro experiments. 
  • If you want a coil that stimulates a peripheral nerve below its center, as opposed to off to one side, you can use a four-leaf-coil.
  • Finally, I like this article because Peter Basser and I were the ones who made the theoretical prediction that magnetic stimulation should occur where dEy/dy, not Ey, is maximum (Roth and Basser, “Model of the Stimulation of a Nerve Fiber by Electromagnetic Induction,” IEEE Transactions on Biomedical Engineering, Volume 37, Pages 588-597, 1990). I always love to see my own predictions verified. 

I’ve lost track of my friend Paul Maccabee, but I can assure you that he did good work studying magnetic stimulation of nerves. His article is well worth reading.

Friday, February 12, 2021

A Mechanism for the Dip in the Strength-Interval Curve During Anodal Stimulation of Cardiac Tissue

Scientific articles aren’t published until they’ve undergone peer review. When a manuscript is submitted to a scientific journal, the editor asks several experts to read it and provide their recommendation. All my papers were reviewed and most were accepted and published, although usually after a revision. Today, I’ll tell you about one of my manuscripts that did not survive peer review. I’m glad it didn’t.

In the early 1990s, I was browsing in the library at the National Institutes of Health—where I worked—and stumbled upon an article by Egbert Dekker about the dip in the anodal strength-interval curve.

Dekker, E. (1970)  “Direct Current Make and Break Thresholds for Pacemaker Electrodes on the Canine Ventricle,” Circulation Research, Volume 27, Pages 811–823.
In Dekker’s experiment, he stimulated a dog heart twice: first (S1) to excite an action potential, and then again (S2) during or after the refractory period. You expect that for a short interval between S1 and S2 the tissue is still refractory, or unexcitable, and you’ll get no response to S2. Wait a little longer and the tissue is partially refractory; you’ll excite a second action potential if S2 is strong enough. Wait longer still and the tissue will have returned to rest; a weak S2 will excite it. So, a plot of S2 threshold strength versus S1-S2 interval (the strength-interval curve) ought to decrease.

Dekker observed that the strength-interval curve behaved as expected when S2 was provided by a cathode (an electrode having a negative voltage). A positive anode, however, produced a strength-interval curve containing a dip. In other words, there was an oddball section of the anodal curve that increased with the interval. 

The cathodal and anodal strength-interval curves.

Moreover, Dekker observed two types of excitation: make and break. Make occurred after a stimulus pulse began, and break after it ended. Both anodal and cathodal stimuli could cause make and break excitation. (For more about make and break, see my previous post.)

I decided to examine make and break excitation and the dip in the anodal strength-interval curve using a computer simulation. The bidomain model (see Section 7.9 in Intermediate Physics for Medicine and Biology) represented the anisotropic electrical properties of cardiac tissue. The introduction of the resulting paper stated

In this study, my primary goal is to present a hypothesis for the mechanism of the dip in the anodal strength-interval curve: The dip arises from a complex interaction between anode-break and anode-make excitation. This hypothesis is explored in detail and supported by numerical calculations using the bidomain model. The same mechanism may explain the no-response phenomenon. I also consider the induction of periodic responses [a cardiac arrhythmia] from a premature anodal stimulus. The bidomain model was used previously to investigate the cathodal strength-interval curve; in this study, these calculations are extended to investigate anodal stimulation.
When I submitted this manuscript to a journal, it was rejected! Why? It contained a fatal flaw. To represent how the membrane ion channels opened and closed, I had used the Hodgkin and Huxley model, which is appropriate for a nerve axon. Yet nerve and cardiac action potentials are different. For example, the action potential in the heart lasts a hundred times longer than in a nerve.

After swearing and pouting, I calmed down and redid the calculation using an ion channel model more appropriate for cardiac tissue, and then published a series of papers that are among my best.
Roth, B. J. (1995) “A Mathematical Model of Make and Break Electrical Stimulation of Cardiac Tissue by a Unipolar Anode or Cathode,” IEEE Transactions on Biomedical Engineering, Volume 42, Pages 1174-1184.

Roth, B. J. (1996) “Strength-Interval Curves for Cardiac Tissue Predicted Using the Bidomain Model,” Journal of Cardiovascular Electrophysiology, Volume 7, Pages 722-737.

Roth, B. J. (1997) “Nonsustained Reentry Following Successive Stimulation of Cardiac Tissue Through a Unipolar Electrode,” Journal of Cardiovascular Electrophysiology, Volume 8, Pages 768-778.
I kept a copy of the rejected paper (you can download it here). It’s interesting for what it got right, and what it got wrong.
 
The response of cardiac tissue to S1/S2 stimulation, for a cathode (top) and anode (bottom).
"Strength" is the S2 strength, and "Interval" is the S1-S2 interval.
"No Response" (N) means S2 did not excite an action potential,
"Make" means an action potential was excited after S2 turned on,
"Break" means an action potential was excited after S2 turned off, and
"E" means one (gray) or more (black) extra action potentials were triggered by S2 (reentry).
Beware! These calculations were from my rejected paper.

What it got right: The paper identified make and break regions of the strength-interval curve, predicted a dip in the anodal curve but not the cathodal curve, and produced reentry for strong stimuli near the make/break transition. It even reproduced the no-response phenomenon, in which a strong stimulus excites an action potential but an even stronger stimulus does not.

What it got wrong: Cathode-break excitation was missing. The mechanism for anode-break excitation was incorrect. The Hodgkin-Huxley model predicts that anode-break excitation arises from the ion channel kinetics (for the cognoscenti, hyperpolarization removes sodium channel inactivation). This type of anode-break excitation doesn’t happen in the heart but did occur in my simulations, leading me astray. This wrong anode-break mechanism led to wrong explanations for the dip in the anodal strength-interval curve and the no-response phenomenon. (For the correct mechanism, look here.)

Below I reproduce the final paragraph of the manuscript, with the parts that were wrong in red.
“What useful conclusions result from these simulations?” is a fair question, given the limitations of the model. I believe the primary contribution is a hypothetical mechanism for the dip in the anodal strength-interval curve. The dip may arise from a complex interaction of anode-break and anode-make stimulation: A nonpropagating active response at the virtual cathode raises the threshold for anode-break stimulation under the anode. The same interaction could explain the no-response phenomenon. A second contribution is a hypothesis for the mechanism generating periodic responses to strong anodal stimuli: Anode-make stimulation cannot propagate back toward the anode because of the strong hyperpolarization, and the subsequent excitation of the tissue under the anode occurs with sufficient delay that a reentrant loop arises. This hypothesis is related to, but not the same as, the one presented by Saypol and Roth for cathodally induced periodic responses. These mechanisms are suggested by my numerical simulation using a simplified model; whether they play a role in the behavior of real cardiac tissue is unknown. Hopefully, my results will encourage more accurate simulations and, even more importantly, additional experimental measurements of the spatial-temporal distribution of transmembrane potential around the stimulating electrode during premature stimulation of cardiac tissue.
Even though this manuscript was flawed, it foreshadowed much of my research program for the mid-1990s; it was all there, in the rough. Moreover, in this case the reviewers were right and I was wrong. At the time, I was angry that anyone would reject my paper. Now, in retrospect, I realize they did me a favor; I benefited from their advice. For any young scientist who might be reading this post, don’t be too discouraged by critical reviews and rejection. Give yourself a day to whine and fuss, then fix the problems that need fixing and move on. That’s the way peer review works.

Friday, February 5, 2021

The Spectrum of Scattered X-Rays

Chapter 15 of Intermediate Physics for Medicine and Biology discusses Compton Scattering. In this process, an x-ray photon scatters off a free electron, creating a scattered photon and a recoiling electron. The wavelength shift between the incident and scattered photons, Δλ, is given by the Compton scattering formula (Eq. 15.11 in IPMB)

Δλ = (h/mc) (1 − cos θ) ,

where h is Planck’s constant, c is the speed of light, m is the mass of the electron, and θ is the scattering angle. The quantity h/mc is called the Compton wavelength of the electron.
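Plugging SI values for h, m, and c into this expression gives the Compton wavelength directly. The few lines of Python below are my own numerical check (not part of Compton’s paper); they also evaluate the maximum shift, which occurs at backscatter.

```python
# Numerical check of the Compton wavelength h/mc using SI constants.
import math

h = 6.626e-34   # Planck's constant (J s)
m = 9.109e-31   # electron mass (kg)
c = 2.998e8     # speed of light (m/s)

compton_wavelength = h / (m * c)     # meters
print(compton_wavelength * 1e12)     # ≈ 2.43 pm = 0.00243 nm

# The shift is largest at backscatter (theta = 180°), where it is 2h/mc.
theta = math.pi
shift = compton_wavelength * (1 - math.cos(theta))
print(shift * 1e12)                  # ≈ 4.85 pm
```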

I enjoy studying experiments that first measure fundamental quantities like the Compton wavelength. Such an experiment is described in Arthur Compton’s article
Compton, A. H. (1923) “The Spectrum of Scattered X-Rays,” Physical Review, Volume 22, Pages 409-413.
Compton’s x-ray source (emitting the Kα line from molybdenum) irradiated a scatterer (graphite). He performed his experiment for different scattering angles θ. For each angle, he first collimated the scattered beam (using small holes in lead sheets) and then reflected it from a crystal (calcite). The purpose of the crystal was to determine the wavelength of the scattered photon by x-ray diffraction. Russ Hobbie and I don’t analyze x-ray diffraction in IPMB. The wavelength λ of the diffracted photon is given by Bragg’s law

λ = 2 d sinϕ ,

where d is the spacing of atomic planes (for calcite, d = 3.036 Å) and ϕ is the angle between the x-ray beam and the crystal surface. For fixed θ, Compton would rotate the crystal, thereby scanning ϕ and analyzing the beam as a function of wavelength. The intensity of the beam would be recorded by a detector (an ionization chamber).

Let’s analyze Compton’s experiment in a new homework problem.
Section 15.4

Problem 7½. Use the data below to calculate the Compton wavelength of the electron (in nm). Estimate the uncertainty in your value. Compton’s experiment detected x-rays at both the incident wavelength (coherent scattering) and at a modified or shifted wavelength (Compton scattering).
A drawing of the data Compton used to determine the Compton wavelength of the electron.

I like this exercise because it requires the reader to do many things: 

  • Decide which spectral line is coherent scattering and which is Compton scattering. 
  • Choose which angle θ to analyze. 
  • Estimate the angle ϕ of each spectral peak from the data. 
  • Approximate the uncertainty in the estimation of ϕ. 
  • Convert the values of ϕ from degrees/minutes to decimal degrees. 
  • Determine the wavelength for each angle using Bragg’s law. 
  • Calculate the wavelength shift. 
  • Relate the wavelength shift to the Compton wavelength. 
  • Compute the Compton wavelength. 
  • Propagate the uncertainty. 
  • Convert from Ångstroms to nanometers.
If you can do all that, you know what you’re doing.
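The whole recipe can be sketched in a few lines of Python. The two peak angles below are illustrative readings, roughly what one would expect for θ = 90° with molybdenum Kα x-rays; they are not values read from Compton’s figure, and the 2-arcminute reading uncertainty is my assumption.

```python
# Worked sketch of the homework recipe.  The peak angles are
# illustrative (approximately right for theta = 90°), NOT values
# taken from Compton's published data.
import math

d = 3.036                    # calcite plane spacing (angstroms)
theta = math.radians(90.0)   # scattering angle analyzed

def dms_to_deg(deg, minutes):
    """Convert degrees/minutes to decimal degrees."""
    return deg + minutes / 60.0

def bragg_wavelength(phi_deg):
    """First-order Bragg's law: lambda = 2 d sin(phi)."""
    return 2.0 * d * math.sin(math.radians(phi_deg))

phi_unmodified = dms_to_deg(6, 43)   # coherent (unshifted) peak
phi_modified   = dms_to_deg(6, 57)   # Compton (shifted) peak
dphi = 2.0 / 60.0                    # assumed reading uncertainty (degrees)

lam0 = bragg_wavelength(phi_unmodified)
lam1 = bragg_wavelength(phi_modified)
delta_lam = lam1 - lam0                        # wavelength shift (angstroms)
lam_compton = delta_lam / (1 - math.cos(theta))

# Propagate the angle uncertainty through Bragg's law,
# d(lambda) = 2 d cos(phi) d(phi), combining the two peaks in quadrature.
dlam = 2.0 * d * math.cos(math.radians(phi_unmodified)) * math.radians(dphi)
dlam_compton = math.sqrt(2) * dlam / (1 - math.cos(theta))

# Convert angstroms to nanometers (1 angstrom = 0.1 nm).
print(lam_compton / 10.0, "+/-", dlam_compton / 10.0, "nm")
```

With these assumed readings the result comes out near 0.0024 nm, consistent with h/mc; the uncertainty shows why Compton needed such careful collimation.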

Compton’s experiment played a key role in establishing the quantum theory of light and wave-particle duality. He was awarded the 1927 Nobel Prize in Physics for this research. Let’s give him the last word. Here are the final two sentences of his paper.
This satisfactory agreement between the experiments and the theory gives confidence in the quantum formula for the change in wave-length due to scattering. There is, indeed, no indication of any discrepancy whatever, for the range of wave-length investigated, when this formula is applied to the wave-length of the modified ray.

Friday, January 29, 2021

Stable Nuclei

Fig. 17.2 in Intermediate Physics for Medicine and Biology.
Fig. 17.2 in Intermediate
Physics for Medicine and Biology
.
In Figure 17.2 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I show a plot of all the stable nuclei. The vertical axis is the number of protons, Z (the atomic number), and the horizontal axis is the number of neutrons, N (the neutron number). The mass number A equals Z + N. Each tiny black box in the figure corresponds to a stable isotope.

This figure summarizes a tremendous amount of information about nuclear physics. Unfortunately, the drawing is too small to show much detail. We must magnify parts of the drawing to tell which boxes correspond to which isotopes. In this post, I provide several such magnified views.

Figure 17.2, with part of the drawing magnified.

Let’s begin by magnifying the bottom left corner, corresponding to the lightest nuclei. The most abundant isotope of hydrogen consists of a single proton. In general, an isotope is denoted using the chemical symbol with a left subscript Z and a left superscript A. The chemical symbol and atomic number are redundant, so we’ll drop the subscript. A proton is written as 1H.

The nucleus to the right of 1H is 2H, called a deuteron, consisting of one proton and one neutron. Deuterium exists in nature but is rare. The isotope 3H also exists but isn’t stable (its half life is 12 years), so it’s not included in the drawing.

The row above hydrogen is helium, with two stable isotopes: 3He and 4He. You probably know the nucleus 4He by another name: the alpha particle. As you move up to higher rows you find the elements lithium, beryllium, and boron (10B is used for boron neutron capture therapy). Light isotopes tend to cluster around the dashed line Z = N.

Figure 17.2, with part of the drawing magnified.

Moving up and to the right brings us to essential elements for life: carbon, nitrogen, and oxygen. Certain values of Z and N, called magic numbers, lead to particularly stable nuclei: 2, 8, 20, 28, 50, 82, and 126. Oxygen is element eight, and Z = 8 is magic, so it has three stable isotopes. The isotope 16O is doubly magic (Z = 8 and N = 8) and is therefore the most abundant isotope of oxygen.

Figure 17.2, with part of the drawing magnified.

The next drawing shows the region around 40Ca, which is also doubly magic (Z = 20, N = 20). It is the heaviest isotope having Z = N. Heavier isotopes need extra neutrons to overcome the Coulomb repulsion of the protons, so the region of stable isotopes bends down as it moves right. The dashed line indicating Z = N won’t appear in later figures; it’ll be way left of the magnified region. Four stable isotopes (37Cl, 38Ar, 39K, and 40Ca) have a magic number of neutrons N = 20. Calcium, with its magic number of protons, has five stable isotopes. No stable isotopes correspond to N = 19 or 21. In general, you’ll find more stable isotopes with even values of Z and N than odd.
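The downward bend of the stability line can be made quantitative. Minimizing the semi-empirical mass formula at fixed A gives the standard textbook estimate Z ≈ A/(1.98 + 0.0155 A^(2/3)). This expression isn’t part of IPMB’s figure, and its coefficients vary slightly between texts, but it reproduces the trend nicely.

```python
# Estimate the most-stable atomic number Z for a given mass number A,
# using the standard result from minimizing the semi-empirical mass
# formula: Z ~ A / (1.98 + 0.0155 A^(2/3)).  Coefficients differ
# slightly from one textbook to another.
def stable_z(a):
    """Most-stable atomic number for mass number A (rounded)."""
    return round(a / (1.98 + 0.0155 * a ** (2.0 / 3.0)))

for a in (16, 118, 208):
    print(a, stable_z(a))
```

For light nuclei the formula gives Z ≈ A/2 (the line Z = N), while for A = 208 it predicts Z = 82, lead, exactly where the region of stability ends.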

Figure 17.2, with part of the drawing magnified.

Next we move up to the region ranging from Z = 42 (molybdenum) to Z = 44 (ruthenium). No stable isotopes exist for Z = 43 (technetium); a blank row stretches across the region of stability. As discussed in Chapter 17 of IPMB, the unstable 99Mo decays to the metastable state 99mTc (half life = 6 hours), which plays a crucial role in medical imaging.

Figure 17.2, with part of the drawing magnified.

Tin has a magic number of protons (Z = 50), resulting in ten stable isotopes, the most of any element.

Figure 17.2, with part of the drawing magnified.

As we move to the right, the region of stability ends. The heaviest stable element is lead (Z = 82), which has a magic number of protons and four stable isotopes. Above that, nothing. There are unstable isotopes with half lives so long that they are still found on earth. Scientists used to think that an isotope of bismuth, 209Bi (Z = 83, N = 126), was stable, but now we know its half life is 2 × 10^19 years. Uranium (Z = 92) has two unstable isotopes with half lives similar to the age of the earth, but that’s another story.

If you want to find information about all the stable isotopes, and other isotopes that are unstable, search the web for “table of the isotopes.” Here’s my favorite: https://www-nds.iaea.org/relnsd/vcharthtml/VChartHTML.html.

Friday, January 22, 2021

Oh Happy Day!

Eric Lander
Russ Hobbie and I hope that Intermediate Physics for Medicine and Biology will inspire young scientists to study at the interface between physics and physiology, and to work at the boundary between mathematics and medicine. But what sort of job can you get with such a multidisciplinary background? How about Presidential Science Advisor and Director of the White House Office of Science and Technology Policy! This week President Biden nominated Eric Lander—mathematician and geneticist—to that important position.

Lander is no amateur in mathematics. He obtained a PhD in the field from Oxford, which he attended as a Rhodes Scholar. Later his attention turned to molecular biology, and he reinvented himself as a geneticist. He received a MacArthur “genius” grant in 1987 and co-led the Human Genome Project. Now he’ll join the Biden administration as the most prominent scientist to hold a cabinet-level position since biological physicist Steven Chu.

I’m overjoyed that respect for science has returned to national politics. As we face critical issues, such as climate change and the covid-19 pandemic, input from scientists will be crucial. I’m especially excited because not only does our new president respect science, but also—as I wrote in a letter to the editor in the Oakland Press last October—the Congressional representative from my own district, Elissa Slotkin, understands and appreciates science. During her fall campaign, I volunteered to write postcards, one of which you can read below.

The last four years have been grim, but the times they are a-changin’. #Scienceisback.

Oh happy day!

Pioneer in Science: Eric Lander -- The Genesis of Genius

https://www.youtube.com/watch?v=IH4rn50arSY&lc=z13awxxhimyzinzc1231zfxwxpq1cnmqz04