Friday, April 30, 2021

A Dozen Electrocardiograms Everyone Should Know

In Chapter 7 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the electrocardiogram. Today, I present twelve ECGs that everyone should know. I’ve drawn them in a stylized and schematic way, ignoring differences between individuals, changes from beat to beat, and noise.

When I taught medical physics at Oakland University, my lecture on ECGs was preceded by a lesson on cardiac anatomy. If some anatomical terms in this post are unfamiliar, I suggest reviewing the Texas Heart Institute website.

1. Normal Heartbeat

The electrocardiogram (ECG).

Electrocardiograms are plotted on graph paper consisting of large squares, each divided into a five-by-five grid of small squares. The horizontal axis is time and the vertical axis is voltage. Each large square (or box) corresponds to a fifth of a second (0.2 s) in time and half a millivolt (0.5 mV) in voltage.

The normal ECG contains three deflections: a small P wave associated with depolarization of the atria, a large narrow QRS complex associated with depolarization of the ventricles, and a T wave associated with repolarization of the ventricles.

A normal heart rate ranges from 60 to 100 beats per minute. The ECG below repeats every four boxes, so the time between beats is 0.8 seconds or 0.0133 minutes, which is equivalent to a heart rate of 75 beats per minute. 

An ECG of a normal heartbeat.
Normal heartbeat.
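If you like to check such numbers yourself, here is a minimal Python sketch (my own illustration, not from the original post) that converts the number of large boxes between beats into a heart rate.

```python
# Each large box on ECG paper spans 0.2 seconds.

def heart_rate_bpm(boxes_between_beats):
    """Return the heart rate in beats per minute."""
    seconds_per_beat = 0.2 * boxes_between_beats
    return 60.0 / seconds_per_beat

print(heart_rate_bpm(4))    # 75.0  -> normal rhythm
print(heart_rate_bpm(6))    # 50.0  -> sinus bradycardia (slower than 60)
print(heart_rate_bpm(2.5))  # 120.0 -> sinus tachycardia (faster than 100)
```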

2. Sinus Bradycardia

A sinus bradycardia is a slow heartbeat (slower than 60 beats per minute, or more than five boxes between beats). The term sinus means the sinoatrial node is setting the slow rate. This node in the right atrium is the heart’s natural pacemaker. Other than its slow rate, the ECG looks normal. A sinus bradycardia may need to be treated with drugs or an implantable pacemaker, or it may represent the healthy heartbeat of a trained athlete who pumps a large amount of blood with each beat.

An ECG of a sinus bradycardia.
Sinus bradycardia.

3. Sinus Tachycardia

A sinus tachycardia is the opposite of a sinus bradycardia: a fast heartbeat (faster than 100 beats per minute, or fewer than three boxes between beats). The rapid rate arises because the sinoatrial node paces the heart too quickly. A sinus tachycardia may trigger other, more severe arrhythmias that we discuss later.

An ECG of a sinus tachycardia.
Sinus tachycardia.

4. Atrial Flutter

During atrial flutter a reentrant circuit exists in the atria. That is, a wave front propagates in a loop, constantly chasing its tail. The nearly sinusoidal, small-magnitude signal in the ECG originates in the atria. Every few rotations around the circuit, the wave front passes through the atrioventricular node—the only connection between the atria and the ventricles—giving rise to a normal-looking QRS complex (the T wave is buried in the atrial signal).

An ECG of atrial flutter.
Atrial flutter.

5. Atrial Fibrillation

Atrial fibrillation is similar to atrial flutter, except the wave fronts in the atria are not as well organized, propagating in complicated, chaotic patterns resembling turbulence. The part of the ECG arising from the atria looks like noise. Occasionally the wave front passes through the atrioventricular node and produces a normal QRS complex. During atrial fibrillation the ventricles do not fill with blood effectively because the atria and ventricles are not properly synchronized.

An ECG of atrial fibrillation.
Atrial fibrillation.

6. First-Degree Atrioventricular Block

In first-degree atrioventricular block, the time between the end of the P wave and the start of the QRS complex is longer than it should be. Otherwise, the ECG appears normal. This rarely results in a problem for the patient, but does imply that the atrioventricular node is not healthy and trouble with it may develop in the future.

An ECG of first-degree atrioventricular block.
First-degree atrioventricular block.

7. Second-Degree Atrioventricular Block

In second-degree atrioventricular block, the P waves appear like clockwork. The signal often passes through the atrioventricular node to the ventricles, but occasionally it does not. Different types of second-degree block exist. Sometimes the delay between the P wave and the QRS complex gets progressively longer with each beat until propagation through the atrioventricular node fails. Other times, the node periodically drops a beat; for example, if every second beat fails, you have 2:1 AV block. Still other times, the node fails sporadically.

An ECG of second-degree atrioventricular block.
Second-degree atrioventricular block.

8. Third-Degree Atrioventricular Block

If your atrioventricular node stops working entirely, you have third-degree atrioventricular block (also known as complete heart block). A ventricular beat exists because some other part of the conduction system (say, the Bundle of His or the Purkinje fibers) serves as the pacemaker. In this case, the P wave and QRS complex are unsynchronized, like two metronomes set to different rates. In complete heart block, the ventricles typically fire at a slow rate. Sometimes a patient will intermittently go in and out of third-degree block, causing fainting spells.

An ECG of third-degree atrioventricular block.
Third-degree atrioventricular block.

9. Premature Ventricular Contraction

Sometimes the heart will beat with a normal ECG, and then sporadically have an extra ventricular beat: a premature ventricular contraction. Usually these beats originate within the ventricles, so they are not distributed through the cardiac conduction system and therefore produce a wide, bizarre QRS complex. As long as these premature contractions are rare they are not too dangerous, but they risk triggering more severe ventricular arrhythmias.

An ECG of premature ventricular contraction.
Premature ventricular contraction.

10. Ventricular Tachycardia

A ventricular tachycardia (VT) is a rapid heartbeat arising from a reentrant circuit in the ventricles. The ECG looks like a series of premature ventricular contractions following one after another. The VT signal typically has a large amplitude, and the atrial signal is often too small to be seen. This is a serious arrhythmia, because at such a fast rate the heart doesn’t pump blood effectively. It’s not lethal itself, but can become deadly if it decays into ventricular fibrillation.

An ECG of ventricular tachycardia.
Ventricular tachycardia.

11. Ventricular Fibrillation

In ventricular fibrillation (VF), different parts of the ventricles contract out of sync, so the heart quivers rather than beats. A heart in VF does not pump blood, and the patient will die within ten to fifteen minutes unless defibrillated by a strong electric shock. Ventricular fibrillation is the most common cause of sudden cardiac death.

An ECG of ventricular fibrillation.
Ventricular fibrillation.

12. Asystole

In asystole, the heart has no electrical activity, so the ECG is a flat line. Asystole is the end stage of ventricular fibrillation, when the chaotic electrical activity dies away and nothing remains.

An ECG of asystole.
Asystole.


Once you master these twelve ECGs, you’ll be on your way to understanding the electrical behavior of the heart. If you want to learn more, I suggest trying the SkillStat six-second ECG game, which includes 27 ECGs. It’s fun.

Friday, April 23, 2021

Electric and Magnetic Fields From Two-Dimensional Anisotropic Bisyncytia


Page 223 of Intermediate Physics for Medicine and Biology.

Figure 8.18 on page 223 of Intermediate Physics for Medicine and Biology contains a plot of the magnetic field produced by action currents in a slice of cardiac tissue. The measured magnetic field contours have approximately a four-fold symmetry. The experiment by Staton et al. that produced this data was a tour de force, demonstrating the power of high-spatial-resolution biomagnetic techniques. 
 
Sepulveda NG, Wikswo JP (1987) “Electric and Magnetic Fields From Two-Dimensional Anisotropic Bisyncytia,” Biophysical Journal, Volume 51, Pages 557-568, superimposed on Intermediate Physics for Medicine and Biology.
Sepulveda and Wikswo (1987).
In this post, I discuss the theoretical prediction by Nestor Sepulveda and John Wikswo in the mid 1980s that preceded and motivated the experiment.
Sepulveda NG, Wikswo JP (1987) “Electric and Magnetic Fields From Two-Dimensional Anisotropic Bisyncytia,” Biophysical Journal, Volume 51, Pages 557-568.
Their abstract is presented below.
Cardiac tissue can be considered macroscopically as a bidomain, anisotropic conductor in which simple depolarization wavefronts produce complex current distributions. Since such distributions may be difficult to measure using electrical techniques, we have developed a mathematical model to determine the feasibility of magnetic localization of these currents. By applying the finite element method to an idealized two-dimensional bisyncytium [a synonym for bidomain] with anisotropic conductivities, we have calculated the intracellular and extracellular potentials, the current distributions, and the magnetic fields for a circular depolarization wavefront. The calculated magnetic field 1 mm from the tissue is well within the sensitivity of a SQUID magnetometer. Our results show that complex bisyncytial current patterns can be studied magnetically, and these studies should provide valuable insight regarding the electrical anisotropy of cardiac tissue.
Sepulveda and Wikswo assumed the tissue was excited by a brief stimulus through an electrode (purple dot in the illustration below), resulting in a circular wave front propagating outward. The transmembrane potential contours at one instant are shown in red. The assumption of a circular wave front is odd, because cardiac tissue is anisotropic. A better assumption would have been an elliptical wave front with its long axis parallel to the fibers. Nevertheless, the circular wave front captures the essential features of the problem.

If the tissue were isotropic, the intracellular current density would point radially outward and the extracellular current density would point radially inward. The intracellular and extracellular currents would exactly cancel, so the net current (their sum) would be zero. Moreover, the net current would vanish if the tissue were anisotropic but had equal anisotropy ratios. That is, if the ratios of the electrical conductivities parallel and perpendicular to the fibers were the same in the intracellular and extracellular spaces. The only way to produce a net current (shown as the blue loops in the illustration below) is if the tissue has unequal anisotropy ratios. In that case, the loops are four-fold symmetric, rotating clockwise in two quadrants and counterclockwise in the other two.

Current loops produce magnetic fields. The right-hand rule implies that the magnetic field points up out of the plane in the top-right and bottom-left quadrants, and down into the plane in the other two. The contours of magnetic field are green in the illustration below, and the peak magnitude for a 1 mm thick sheet of tissue is about one fourth of a nanotesla.

Just for fun, I superimposed the transmembrane potential, net current density, and magnetic field plots in the picture below.

Notes:
  1. Measuring the magnetic field acts as a null detector for unequal anisotropy ratios. In other words, in tissue with equal anisotropy ratios the magnetic field vanishes, so the mere existence of a magnetic field implies the anisotropy ratios are unequal. The condition of unequal anisotropy ratios has many implications for cardiac tissue. One is discussed in Homework Problem 50 in Chapter 7 of IPMB.
  2. If the sheet of cardiac tissue is superfused by a saline bath, the magnetic field distribution changes.
  3. Wikswo was a pioneer in the field of biomagnetism. In particular, he developed small scanning magnetometers that had sub-millimeter spatial resolution. He was in a unique position of being able to measure the magnetic fields that he and Sepulveda predicted, which led to the figure included in IPMB.
  4. I was a graduate student in Wikswo’s laboratory when Sepulveda and Wikswo wrote this article. Sepulveda, a delightful Colombian biomedical engineer and a good friend of mine, worked as a research scientist in Wikswo’s lab. He was an expert on the finite element method—the numerical technique used in his paper with Wikswo—and had written his own finite element code that no one else in the lab understood. He died a decade ago, and I miss him.
  5. Sepulveda and Wikswo were building on a calculation published in 1984 by Robert Plonsey and Roger Barr (“Current Flow Patterns in Two Dimensional Anisotropic Bisyncytia With Normal and Extreme Conductivities,” Biophys. J., 45:557-571). Wikswo heard either Plonsey or Barr give a talk about their results at a scientific meeting. He realized immediately that their predicted current loops implied a biomagnetic field. When Wikswo returned to the lab, he described Plonsey and Barr’s current loops at a group meeting. As I listened, I remember thinking “Wikswo’s gone mad,” but he was right.
  6. Two years after their magnetic field article, Sepulveda and Wikswo (now with me included as a coauthor) calculated the transmembrane potential produced when cardiac tissue is stimulated by a point electrode. But that’s another story.
I’ll give Sepulveda and Wikswo the last word. Below is the concluding paragraph of their article, which looks forward to the experimental measurement of the magnetic field pattern that was shown in IPMB.
The bidomain model of cardiac tissue provides a tool that can be explored and used to study and explain features of cardiac conduction. However, it should be remembered that “a model is valid when it measures what it is intended to measure” (31). Thus, experimental data must be used to evaluate the validity of the bidomain model. This evaluation must involve comparison of the model's predictions not only with measured intracellular and extracellular potentials but also with the measured magnetic fields. When the applicability of the bidomain model to a particular cardiac preparation and the validity and reliability of our calculations have been determined experimentally, this mathematical approach should then provide a new technique for analyzing normal and pathological cardiac activation.
Members of Wikswo’s lab at Vanderbilt University in the mid 1980s: John Wikswo is on the phone, Nestor Sepulveda is in the white shirt and gray pants, and I am on the far right. The other people are Frans Gielen (the tall guy with arms crossed on the left), Ranjith Wijesinghe (between Gielen and Sepulveda), Peng Zhang (between Wikswo and Sepulveda), and Pat Henry (kneeling).

Friday, April 16, 2021

The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race

The Code Breaker, by Walter Isaacson, superimposed on Intermediate Physics for Medicine and Biology.
The Code Breaker,
by Walter Isaacson

My favorite authors are Simon Winchester, David Quammen, and Walter Isaacson. This week I read Isaacson’s latest book: The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race. I would place it alongside The Eighth Day of Creation and The Making of the Atomic Bomb as one of the best books about the history of science.

In his introduction, Isaacson writes

The invention of CRISPR and the plague of COVID will hasten our transition to the third great revolution of modern times. These revolutions arose from the discovery, beginning just over a century ago, of the three fundamental kernels of our existence: the atom, the bit, and the gene.

The first half of the twentieth century, beginning with Albert Einstein’s 1905 papers on relativity and quantum theory, featured a revolution driven by physics. In the five decades following his miracle year, his theories led to atom bombs and nuclear power, transistors and spaceships, lasers and radar.

The second half of the twentieth century was an information-technology era, based on the idea that all information could be encoded by binary digits—known as bits—and all logical processes could be performed by circuits with on-off switches. In the 1950s, this led to the development of the microchip, the computer, and the internet. When these three innovations were combined, the digital revolution was born.

Now we have entered a third and even more momentous era, a life-science revolution. Children who study digital coding will be joined by those who study genetic code.
Early in the book, Isaacson describes Francisco Mojica’s discovery that bacteria have “CRISPR spacer sequences”: strands of DNA that serve as an immune system protecting them from viruses.
As we humans struggle to fight off novel strains of viruses, it’s useful to note that bacteria have been doing this for about three billion years, give or take a few million centuries. Almost from the beginning of life on this planet, there’s been an intense arms race between bacteria, which developed elaborate methods of defending against viruses, and the ever-evolving viruses, which sought ways to thwart those defenses.

Mojica found that bacteria with CRISPR spacer sequences seemed to be immune from infection by a virus that had the same sequence. But bacteria without the spacer did get infected. It was a pretty ingenious defense system, but there was something even cooler: it appeared to adapt to new threats. When new viruses came along, the bacteria that survived were able to incorporate some of that virus’s DNA and thus create, in its progeny, an acquired immunity to that new virus. Mojica recalls being so overcome by emotion at this realization that he got tears in his eyes. The beauty of nature can sometimes do that to you.

The Code Breaker focuses on the life and work of Jennifer Doudna, who won the 2020 Nobel Prize in Chemistry. However, the star of the book is not Doudna, nor Emmanuelle Charpentier (who shared the prize with Doudna), nor Mojica, nor any of the other scientific heroes. The star is RNA, the molecule that carries genetic information from DNA in the nucleus to the cytoplasm where proteins are produced.

By 2008, scientists had discovered a handful of enzymes produced by genes that are adjacent to the CRISPR sequences in a bacteria’s DNA. These CRISPR-associated (Cas) enzymes enable the system to cut and paste new memories of viruses that attack the bacteria. They also create short segments of RNA, known as CRISPR RNA (crRNA), that can guide a scissors-like enzyme to a dangerous virus and cut up its genetic material. Presto! That’s how the wily bacteria create an adaptive immune system!
Doudna and Charpentier’s Nobel Prize resulted from their developing the CRISPR-Cas9 system into a powerful technique for gene editing.
The study of CRISPR would become a vivid example of the call-and-response duet between basic science and translational medicine. At the beginning it was driven by the pure curiosity of microbe-hunters who wanted to explain an oddity they had stumbled upon when sequencing the DNA of offbeat bacteria. Then it was studied in an effort to protect the bacteria in yogurt cultures from attacking viruses. That led to a basic discovery about the fundamental workings of biology. Now a biochemical analysis was pointing the way to the invention of a tool with potential practical uses. “Once we figured out the components of the CRISPR-Cas9 assembly, we realized that we could program it on our own,” Doudna says. “In other words, we could add a different crRNA and get it to cut any different DNA sequence we chose.”

Several other themes appear throughout The Code Breaker:

  • The role of competition and collaboration in science, 
  • How industry partnerships and intellectual property affect scientific discovery, 
  • The ethics of gene editing, and
  • The epic scientific response to the COVID-19 pandemic.

I’m amazed that Isaacson’s book is so up-to-date. I received my second dose of the Pfizer-BioNTech vaccine last Saturday and then read The Code Breaker in a three-day marathon. My arm was still sore while reading the chapter near the end of the book about RNA Covid vaccines like Pfizer’s.

There’s a lot of biology and medicine in The Code Breaker, but not much physics. Yet, some of the topics discussed in Intermediate Physics for Medicine and Biology appear briefly. Doudna uses x-ray diffraction to decipher the structure of RNA. Electroporation helps get vaccines and drugs into cells. Electrophoresis, microfluidics, and electron microscopy are mentioned. I wonder if injecting more physics and math into this field would supercharge its progress. 

CRISPR isn’t the first gene-editing tool, but it increases the precision of the technique. As Winchester noted in The Perfectionists, precision is a hallmark of technology in the modern world. Quammen’s book Spillover suggests that humanity may be doomed by an endless flood of viral pandemics, but The Code Breaker offers hope that science will provide the tools needed to prevail over the viruses.

I will close with my favorite passage from The Code Breaker: Isaacson’s paean to curiosity-driven scientific research.

The invention of easily reprogrammable RNA vaccines was a lightning-fast triumph of human ingenuity, but it was based on decades of curiosity-driven research into one of the most fundamental aspects of life on planet earth: how genes encoded by DNA are transcribed into snippets of RNA that tell cells what proteins to assemble. Likewise, CRISPR gene-editing technology came from understanding the way that bacteria use snippets of RNA to guide enzymes to chop up dangerous viruses. Great inventions come from understanding basic science. Nature is beautiful that way.

 

“How CRISPR lets us edit our DNA,” a TED talk by Jennifer Doudna. 

Nobel Lecture, Jennifer Doudna, 2020 Nobel Prize in Chemistry.

Friday, April 9, 2021

The Vitamin D Questions: How Much Do You Need and How Should You Get It?

A bottle of vitamin D capsules, on Intermediate Physics for Medicine and Biology.
Vitamin D
Two years ago my doctor recommended I start taking vitamin D. I’m annoyed at having to take a supplement every day, but I do what the doc says.

Your body needs exposure to ultraviolet light to produce its own vitamin D, but too much UV light causes skin cancer. Russ Hobbie and I address this trade-off in Section 14.10 of Intermediate Physics for Medicine and Biology.

There has been an alarming increase in the use of tanning parlors by teenagers and young adults. These emit primarily UVA, which can cause melanoma. Exposure rates are two to three times greater than solar radiation at the equator at noon (Schmidt 2012). Many states now prohibit minors from using tanning parlors. Proponents of tanning parlors point out that UVB promotes the synthesis of vitamin D; however, the exposure to UVB in a tanning parlor is much higher than needed by the body for vitamin D production. Tanning as a source of Vitamin D is no longer recommended at any age level (Barysch et al. 2010).
To learn more about how much vitamin D people need, I read an article by Deon Wolpowitz and Barbara Gilchrest.
Wolpowitz D, Gilchrest BA (2006) “The Vitamin D Questions: How Much Do You Need and How Should You Get It?” Journal of the American Academy of Dermatology, Volume 54, Pages 301–317.

Below are excerpts from their conclusion.

Given the scarcity of naturally occurring vit [vitamin] D in many otherwise adequate diets, human beings may once have depended on unprotected exposure to natural sunlight as the primary environmental source of vit D, at least during those periods of the year when sunlight can produce [previtamin] D3 in the skin... However, chronic unprotected exposure to carcinogenic UV radiation in sunlight not only results in photoaging, but also greatly increases the risk of skin cancer. This risk is further exacerbated by the extended lifespan of human beings in the 21st century. Fortunately, there is a noncarcinogenic alternative—intestinal absorption of vit D-fortified foods and/or dietary supplements…

All available evidence indicates that younger, lighter-skinned individuals easily maintain desirable serum 25-OH vit D levels year-round by incidental protected sun exposure and customary diet. Daily intake of two 8-oz glasses of fortified milk or orange juice or one standard vit or incidental protected exposure of the face and backs of hands to 0.25% minimum erythema [reddening of the skin] dose of UVB radiation 3 times weekly each generates adequate serum 25-OH levels by classic criteria. Dietary supplementation of vit D is efficient, efficacious, and safe. Thus, it is prudent for those at high statistical risk for vit D deficiency, such as patients who are highly protected against the sun [or old folks like me], to take daily supplemental vit D (200-1000 IU) with concurrent dietary calcium to meet current and future RDA [recommended daily allowance] levels.
So, maybe my physician was right when she put me on vitamin D. But I decided to double my dose in the winter. Is that a good idea? A recent paper out of South Korea has some interesting data.
Park SS, Lee YG, Kim M, Kim J, Koo J-H, Kim CK, Um J, Yoon J (2019) “Simulation of Threshold UV Exposure Time for Vitamin D Synthesis in South Korea,” Advances in Meteorology, Volume 2019, Article 4328151.
Sang Seo Park and his colleagues analyze UV exposure, and estimate the threshold exposure time at noon to sunlight in Seoul for vitamin D synthesis (blue) and erythema (red). 

Threshold exposure time to sunlight in Seoul for vitamin D synthesis (blue) and erythema (red). From Park et al. (2019).

This is a semilog plot; the difference between July and January is about a factor of ten. I think increasing my vitamin D dose in the winter is, if anything, conservative. I could probably do without the supplement in the summer. Rochester (42° latitude) is farther north than Seoul (38°), so the seasonal effect might be even greater where I live.

In conclusion, I think Russ and I are correct to stress the harmful effects of UV light in IPMB. If you’re worried about not getting enough vitamin D, take a supplement in the winter.

Friday, April 2, 2021

The Boltzmann Distribution Applied to a Harmonic Oscillator

In Chapter 3 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Boltzmann distribution. If you have a system with energy levels of energy En populated by particles in thermal equilibrium, the Boltzmann distribution gives the probability of finding a particle in the nth level.

A classic example of the Boltzmann distribution is for the energy levels of a harmonic oscillator. These levels are equally spaced starting from a ground state and increasing without bound. To see the power of the Boltzmann distribution, solve this new homework problem.
Section 3.7

Problem 29½. Suppose the energy levels, En, of a system are given by

En = n ε,     for  n = 0, 1, 2, 3, …

where ε is the energy difference between adjacent levels. Assume the probability of a particle occupying the nth energy level, Pn, obeys the Boltzmann distribution

Pn = A exp(−En/kT) ,

where A is a constant, T is the absolute temperature, and k is the Boltzmann constant.

(a) Determine A in terms of ε, k, and T. (Hint: the sum of the probabilities over all levels is one.)
(b) Find the average energy E of the particles. (Hint: E = ∑PnEn.)
(c) Calculate the heat capacity C of a system of N such particles. (Hint: U = N E and C = dU/dT.)
(d) What is the limiting value of C for high temperatures (kT >> ε)? (Hint: use the Taylor series of the exponential.)
(e) What is the limiting value of C for low temperatures (kT << ε)?
(f) Sketch a plot of C versus T.
You may need these infinite series

1 + x + x² + x³ + ⋯ = 1/(1 − x) 
x + 2x² + 3x³ + ⋯ = x/(1 − x)²

This is a somewhat advanced problem in statistical mechanics, so I gave several hints to guide the reader. The calculation contains much interesting physics. For instance, the answer to part (e) is known as the third law of thermodynamics. Albert Einstein was the first to calculate the heat capacity of a collection of harmonic oscillators (a good model for a crystalline solid). There’s more physics than biology in this problem, because most of the interesting behavior occurs at cold temperatures but biology operates at hot temperatures.

If you’re having difficulty solving this problem, here’s one more hint:

exp(−nx) = [exp(−x)]ⁿ
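If you would rather check your answer numerically, below is a short Python sketch (my addition; the level spacing ε = 0.1 eV is an arbitrary illustrative choice) that sums the Boltzmann series directly and evaluates C = dU/dT by a finite difference, so you can watch the high- and low-temperature limits emerge.

```python
# Numerically explore the heat capacity of a ladder of equally spaced levels,
# without relying on any closed-form result.
import numpy as np

eps = 0.1 * 1.602e-19   # level spacing (J); arbitrary illustrative value
k = 1.381e-23           # Boltzmann constant (J/K)
n = np.arange(2000)     # enough levels for the temperatures sampled below

def average_energy(T):
    w = np.exp(-n * eps / (k * T))        # unnormalized Boltzmann factors
    return np.sum(n * eps * w) / np.sum(w)

def heat_capacity(T, dT=1e-3):
    # C = dU/dT per particle, evaluated by a centered finite difference
    return (average_energy(T + dT) - average_energy(T - dT)) / (2 * dT)

for T in [50.0, 300.0, 3000.0, 30000.0]:   # kT/eps from about 0.04 to 26
    print(f"T = {T:7.0f} K   C/k = {heat_capacity(T) / k:.3f}")
# C/k approaches 0 at low temperature (the third law) and 1 when kT >> eps.
```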

Enjoy!

Friday, March 26, 2021

Cooling by Radiation

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss thermal radiation. If you’re a black body, the net power you radiate, wtot, is given by Eq. 14.41

wtot = S σSB (T⁴ − Ts⁴) ,                (14.41)

where S is the surface area, σSB is the Stefan-Boltzmann constant (5.67 × 10−8 W m−2 K−4), T is the absolute temperature of your body (about 310 K), and Ts is the temperature of your surroundings. The T⁴ term is the radiation you emit, and the Ts⁴ term is the radiation you absorb.

The fourth power that appears in this expression is annoying. It means we must use absolute temperature in kelvins (K); you get the wrong answer if you use temperature in degrees Celsius (°C). It also means the expression is nonlinear; wtot is not proportional to the temperature difference T − Ts.

On the absolute temperature scale, the difference between the temperature of your body (310 K) and the temperature of your surroundings (say, 293 K at 20 °C) is only about 5%. In this case, we simplify the expression for wtot by linearizing it. To see what I mean, try Homework Problem 14.32 in IPMB.
Section 14.9 
Problem 32. Show that an approximation to Eq. 14.41 for small temperature differences is wtot = S Krad (T − Ts). Deduce the value of Krad at body temperature. Hint: Factor T⁴ − Ts⁴ = (T − Ts)(…). You should get Krad = 6.76 W m−2 K−1.
The constant Krad has the same units as a convection coefficient (see Homework Problem 51 in Chapter 3 of IPMB). Think of it as an effective convection coefficient for radiative heat loss. Once you determine Krad, you can use either the kelvin or Celsius temperature scales for T − Ts, so you can write its units as W m−2 °C−1.
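Here is a quick numerical sketch (my addition) of the linearization. The surface area of 1.8 m² is an assumed value for a typical adult, not a number from the book; for a 17-degree temperature difference the linearized power agrees with the exact expression to within about ten percent.

```python
# Compare the exact Stefan-Boltzmann expression (Eq. 14.41) with its
# linearized form, w = S * K_rad * (T - Ts).

sigma_SB = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 1.8              # assumed body surface area (m^2)
T = 310.0            # body temperature (K)
Ts = 293.0           # temperature of the surroundings (K)

K_rad = 4 * sigma_SB * T**3                # linearized coefficient, about 6.76
w_exact = S * sigma_SB * (T**4 - Ts**4)    # about 190 W
w_linear = S * K_rad * (T - Ts)            # about 207 W

print(f"K_rad = {K_rad:.2f} W m^-2 K^-1")
print(f"exact = {w_exact:.0f} W, linearized = {w_linear:.0f} W")
```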
 
Air and Water, by Mark Denny, superimposed on Intermediate Physics for Medicine and Biology.
Air and Water,
by Mark Denny.
In Air and Water, Mark Denny analyzes the convection coefficient. In a stagnant fluid, the convection coefficient depends only on the fluid’s thermal conductivity and the body’s size. For a sphere, it is inversely proportional to the diameter, meaning that small bodies are more effective at convective cooling per unit surface area than large bodies. If the body undergoes free convection or forced convection (for both cases the surrounding fluid is moving), the expression for the convection coefficient is more complicated, and depends on factors such as the Reynolds number and Prandtl number of the fluid flow. Denny gives values for the convection coefficient as a function of body size for both air and water. Usually, these values are greater than the 6.76 W m−2 °C−1 for radiation. However, for large bodies in air, radiation can compete with convection as the dominant mechanism. For people, radiation is an important mechanism for cooling. For a dolphin or mouse, it isn’t. Elephants probably make good use of radiative cooling.
 
Finally, our analysis implies that when the difference between the temperatures of the body and the surroundings is small, a body whose primary mechanism for getting rid of heat is radiation will cool exponentially following Newton’s law of cooling.

Friday, March 19, 2021

The Carr-Purcell-Meiboom-Gill Pulse Sequence

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny...” 

Isaac Asimov

In Section 18.8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the Carr-Purcell pulse sequence, used in magnetic resonance imaging.

When a sequence of π [180° radio-frequency] pulses that nutate M [the magnetization vector] about the x' axis are applied at TE/2, 3TE/2, 5TE/2, etc., a sequence of echoes are formed [in the Mx signal], the amplitudes of which decay with relaxation time T2. This is shown in Fig. 18.19.
The Carr-Purcell pulse sequence, as shown in Fig. 18.19 of Intermediate Physics for Medicine and Biology.
Fig. 18.19  The Carr-Purcell pulse sequence.
All π pulses nutate about the x' axis.
The envelope of echoes decays as exp(−t/T2).

Russ and I then discuss the Carr-Purcell-Meiboom-Gill pulse sequence.
One disadvantage of the CP [Carr-Purcell] sequence is that the π pulse must be very accurate or a cumulative error builds up in the successive pulses. The Carr-Purcell-Meiboom-Gill sequence overcomes this problem. The initial π/2 [90° radio-frequency] pulse nutates M about the x' axis as before, but the subsequent [π] pulses are shifted a quarter cycle in time, which causes them to rotate about the y' axis.
 
The Carr-Purcell-Meiboom-Gill pulse sequence, as shown in Fig. 18.21 of Intermediate Physics for Medicine and Biology.
Fig. 18.21  The Carr-Purcell-Meiboom-Gill pulse sequence.

Meiboom S, Gill D (1958) “Modified Spin-Echo Method for Measuring Nuclear Relaxation Times.” Review of Scientific Instruments, Volume 29, Pages 688–691.
Students might enjoy reading the abstract of Saul Meiboom and David Gill’s 1958 article published in the Review of Scientific Instruments (Volume 29, Pages 688-691).
A spin echo method adapted to the measurement of long nuclear relaxation times (T2) in liquids is described. The pulse sequence is identical to the one proposed by Carr and Purcell, but the rf [radio-frequency] of the successive pulses is coherent, and a phase shift of 90° is introduced in the first pulse. Very long T2 values can be measured without appreciable effect of diffusion.
This short paper is so highly cited that it was featured in a 1980 Citation Classic commentary, in which Meiboom reflected on the significance of the research.
The work leading to this paper was done nearly 25 years ago at the Weizmann Institute of Science, Rehovot, Israel. David Gill, who was then a graduate student… , set out to measure NMR T2-relaxation times in liquids, using the well-known Carr-Purcell pulse train scheme. He soon found that at high pulse repetition rates adjustments became very critical, and echo decays, which ideally should be exponential, often exhibited beats and other irregularities. But he also saw that on rare and unpredictable occasions a beautiful exponential decay was observed... Somehow the recognition emerged that the chance occurrence of a 90° phase shift of the nuclear polarization [magnetization] must underlie the observations. It became clear that in the presence of such a shift a stable, self-correcting state of the nuclear polarization is produced, while the original scheme results in an unstable state, for which deviations are cumulative. From here it was an easy step to the introduction of an intentional phase shift in the applied pulse train, and the consistent production of good decays.
The key point is that the delay between the initial π/2 pulse (to flip the spins into the xy plane) and the string of π pulses (to create the echoes) must be timed carefully (the pulses must be coherent). Even adding a delay corresponding to a quarter of a single oscillation changes everything. In a two-tesla MRI scanner, the proton Larmor frequency is about 85 MHz, so one period is about 12 nanoseconds. Therefore, if the timing is off by just a few nanoseconds, the method won’t work.
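As a quick check of that arithmetic (my addition, using the standard proton gyromagnetic ratio of 42.58 MHz per tesla):

```python
gamma_over_2pi = 42.58e6         # proton gyromagnetic ratio / 2*pi (Hz per tesla)
B = 2.0                          # magnetic field strength (T)
f_larmor = gamma_over_2pi * B    # about 85 MHz
period_ns = 1e9 / f_larmor       # about 12 ns
print(f"{f_larmor / 1e6:.1f} MHz, {period_ns:.1f} ns per cycle")
```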

Initially Gill didn’t worry about timing the pulses precisely, so usually he was using the error-prone Carr-Purcell sequence. Occasionally he got lucky and the timing was just right; he was using what’s now called the Carr-Purcell-Meiboom-Gill sequence. Meiboom and Gill “somehow” were able to deduce what was happening and fix the problem. Meiboom believes their paper is cited so often because it was the first to recognize the importance of maintaining phase relations between the different pulses in an MRI pulse sequence.

In his commentary, Meiboom notes that
Although in hindsight the 90° phase shift seems the logical and almost obvious thing to do, its introduction was triggered by a chance observation, rather than by clever a priori reasoning. I suspect (though I have no proof) that this applies to many scientific developments, even if the actual birth process of a new idea is seldom described in the introduction to the relevant paper.
If you’re a grad student working on a difficult experiment that’s behaving oddly, don’t be discouraged if you hear yourself saying “that’s funny...” A discovery might be sitting right in front of you!  

Friday, March 12, 2021

The Rest of the Story 2

Hermann in 1848.

The Rest of the Story

Hermann was born in 1821 in Potsdam, Germany. He was often sick as a child, suffering from illnesses such as scarlet fever, and started school late. He was hampered by a poor memory for disconnected facts, making subjects like languages and history difficult, so his interest turned to science. His father loaned him some cheap glass lenses that he used to build optical instruments. He wanted to become a physicist, but his family couldn’t afford to send him to college. Instead, he studied hard to pass an exam that won him a place in a government medical school in Berlin, where his education would be free if he served in the military for five years after he graduated.

The seventeen-year-old Hermann moved to Berlin in 1838. He brought his piano with him, on which he loved to play Mozart and Beethoven. He became friends with his fellow students Ernst von Brücke and Emil du Bois-Reymond, began doing scientific research under the direction of physiologist Johannes Müller, and taught himself higher mathematics in his spare time. By 1843 he graduated and began his required service as an army surgeon.

Life in the army required long hours, and Hermann was isolated from the scientific establishment in Berlin. But with the help of Brücke and du Bois-Reymond he somehow continued his research. His constitution was still delicate, and sometimes he would take time off to restore his health. Near the end of his five-year commitment to the army he fell in love with Olga von Velten, who would sing while he accompanied her on the piano. They became engaged, but he knew they could not marry until he found an academic job after his military service ended, and for that he needed to establish himself as a first-rank scientist. This illness-prone, cash-strapped, over-worked army doctor with a poor memory and a love for music needed to find a research project that would propel him to the top of German science.

Hermann rose to the challenge. He began a careful study of the balance between muscle metabolism and contraction. Using both experiments and mathematics he established the conservation of energy, and in the process showed that no vital force was needed to explain life. On July 23, 1847 he announced his discovery at a meeting of the German Physical Society.  

This research led to a faculty position in Berlin and his marriage to Olga. His career took off, and he later made contributions to the study of vision, hearing, nerve conduction, and ophthalmology. Today, the largest Association of German Research Centers bears his name. Many consider Hermann von Helmholtz to be the greatest biological physicist of all time.

And now you know THE REST OF THE STORY.

Good day! 

_____________________________________________________________

This blog post was written in the style of Paul Harvey’s “The Rest of the Story” radio program. The content is based on a biography of Helmholtz written by his friend and college roommate Leo Koenigsberger. You can read about nerve conduction and Helmholtz’s first measurement of its propagation speed in Chapter 6 of Intermediate Physics for Medicine and Biology. This August we will celebrate the 200th anniversary of Hermann von Helmholtz’s birth. 

Click here for another IPMBThe Rest of the Story” post.

 
Charles Osgood pays tribute to the master storyteller Paul Harvey.

Friday, March 5, 2021

Estimating the Properties of Water

Water
from: www.middleschoolchemistry.com
 
I found a manuscript on the arXiv by Andrew Lucas about estimating macroscopic properties of materials using just a few microscopic parameters. I decided to try a version of this analysis myself. It’s based on Lucas’s work, with a few modifications. I focus exclusively on water because of its importance for biological physics, and make order-of-magnitude calculations like those Russ Hobbie and I discuss in the first section of Intermediate Physics for Medicine and Biology.

My goal is to estimate the properties of water using three numbers: the size, mass, and energy associated with water molecules. We take the size to be the center-to-center distance between molecules, which is about 3 Å, or 3 × 10−10 m. The mass of a water molecule is 18 (the molecular weight) times the mass of a proton, or about 3 × 10−26 kg. The energy associated with one hydrogen bond between water molecules is about 0.2 eV, or 3 × 10−20 J. This is roughly eight times the thermal energy kT at body temperature, where k is Boltzmann’s constant (1.4 × 10−23 J K−1) and T is the absolute temperature (310 K). A water molecule has about four hydrogen bonds with neighboring molecules.

Density

Estimating the density of water, ρ, is Homework Problem 4 in Chapter 1 of IPMB. Density is mass divided by volume, and volume is distance cubed

ρ = (3 × 10−26 kg)/(3 × 10−10 m)³ = 1100 kg m−3 = 1.1 g cm−3.

The accepted value is ρ = 1.0 g cm−3, so our calculation is about 10% off; not bad for an order-of-magnitude estimate.

Compressibility

The compressibility of water, κ, is a measure of how the volume of water decreases with increasing pressure. It has dimensions of inverse pressure. The pressure is typically thought of as force per unit area, but we can multiply numerator and denominator by distance and express it as energy per unit volume. Therefore, the compressibility is approximately distance cubed over the total energy of the four hydrogen bonds

κ = (3 × 10−10 m)³/[4(3 × 10−20 J)] = 0.25 × 10−9 Pa−1 = 0.25 GPa−1 ,

implying a bulk modulus, B (the reciprocal of the compressibility), of 4 GPa. Water has a bulk modulus of about B = 2.2 GPa, so our estimate is within a factor of two.

Speed of Sound

Once you know the density and compressibility, you can calculate the speed of sound, c, as (see Eq. 13.11 in IPMB)

c = 1/√(ρκ) = 1/√[(1100 kg m−3) (0.25 × 10−9 Pa−1)] = 1900 m s−1 = 1.9 km s−1.

The measured value of the speed of sound in water is about c = 1.5 km s−1, which is pretty close for a back-of-the-envelope estimate.

Latent Heat

A homework problem about vapor pressure in Chapter 3 of IPMB uses water’s latent heat of vaporization, L, which is the energy required to boil water per kilogram. We estimate it as

L = 4(3 × 10−20 J)/(3 × 10−26 kg) = 4.0 × 10⁶ J kg−1 = 4 MJ kg−1.

The known value is L = 2.5 MJ kg−1. Not great, but not bad.

Surface Tension

The surface tension, γ, is typically expressed as force per unit length, which is equivalent to the energy per unit area. At a surface, we estimate one of the four hydrogen bonds is missing, so

γ = (3 × 10−20 J)/(3 × 10−10 m)² = 0.33 J m−2 .

The measured value is γ = 0.07 J m−2, which is about five times less than our calculation. This is a bigger discrepancy than I’d like for an order-of-magnitude estimate, but it’s not horrible.

Viscosity

The coefficient of viscosity, η, has units of kg m−1 s−1. We can use the mass of the water molecule in kilograms, and the distance between molecules in meters, but we don’t have a time scale. However, energy has units of kg m2 s−2, so we can take the square root of mass times distance squared over energy and get a unit of time, τ

τ = √[(3 × 10−26 kg)(3 × 10−10 m)²/(4 × 3 × 10−20 J)] = 0.15 × 10−12 s = 0.15 ps.

We can think of this as a time characterizing the vibrations about equilibrium of the molecules. 
 
The viscosity of water should therefore be on the order of

η = (3 × 10−26 kg)/[(3 × 10−10 m) (0.15 × 10−12 s)] = 0.67 × 10−3 kg m−1 s−1.

Water has a viscosity coefficient of about η = 1 × 10−3 kg m−1 s−1. I admit this analysis provides little insight into the mechanism underlying viscous effects, and it doesn’t explain the enormous temperature dependence of η, but it gets the right order of magnitude.

Specific Heat

The heat capacity is the energy needed to raise the temperature of water by one degree. Thermodynamics implies that the heat capacity is typically equal to Boltzmann’s constant times the number of degrees of freedom per molecule times the number of molecules. The number of degrees of freedom is a subtle thermodynamic concept, but we can approximate it as the number of hydrogen bonds per molecule; about four. Often heat capacity is expressed as the specific heat, C, which is the heat capacity per unit mass. In that case, the specific heat is

C = 4 (1.4 × 10−23 J K−1)/(3 × 10−26 kg) = 1900 J K−1 kg−1.

The measured value is C = 4200 J K−1 kg−1, which is more than a factor of two larger than our estimate. I’m not sure why our value is so low, but probably there are rotational degrees of freedom in addition to the four vibrational modes we counted.

Diffusion

The self-diffusion constant of water can be estimated using the Stokes-Einstein equation relating diffusion and viscosity, D = kT/(6πηa), where a is the size of the molecule. The thermal energy kT is about one eighth of the energy of a hydrogen bond. Therefore,

D = [(3 × 10−20 J)/8]/[(6)(3.14)(0.67 × 10−3 kg m−1 s−1)(3 × 10−10 m)] = 10−9 m2 s−1.

Figure 4.11 in IPMB suggests the measured diffusion constant is about twice this estimate: D = 2 × 10−9 m2 s−1.
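For readers who want to play with these numbers, here is a compact Python sketch (my addition) that reproduces the estimates above from the three microscopic parameters. Small differences from the values quoted in the text come from rounding, and the diffusion estimate uses kT at 310 K directly rather than the one-eighth-of-a-bond shortcut.

```python
# Order-of-magnitude properties of water from three microscopic parameters.
import math

a  = 3e-10    # intermolecular spacing (m)
m  = 3e-26    # mass of a water molecule (kg)
eb = 3e-20    # energy of one hydrogen bond (J)
nb = 4        # hydrogen bonds per molecule
k  = 1.4e-23  # Boltzmann constant (J/K)
T  = 310.0    # body temperature (K)

rho   = m / a**3                          # density
kappa = a**3 / (nb * eb)                  # compressibility
c     = 1.0 / math.sqrt(rho * kappa)      # speed of sound
L     = nb * eb / m                       # latent heat of vaporization
gamma = eb / a**2                         # surface tension (one missing bond)
tau   = math.sqrt(m * a**2 / (nb * eb))   # vibrational time scale
eta   = m / (a * tau)                     # viscosity
C     = nb * k / m                        # specific heat
D     = k * T / (6 * math.pi * eta * a)   # self-diffusion (Stokes-Einstein)

print(f"density          {rho:10.3g} kg/m^3")
print(f"compressibility  {kappa:10.3g} 1/Pa")
print(f"speed of sound   {c:10.3g} m/s")
print(f"latent heat      {L:10.3g} J/kg")
print(f"surface tension  {gamma:10.3g} J/m^2")
print(f"viscosity        {eta:10.3g} kg/(m s)")
print(f"specific heat    {C:10.3g} J/(K kg)")
print(f"diffusion        {D:10.3g} m^2/s")
```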

 
Air and Water,
by Mark Denny.
We didn’t do too bad. Three microscopic parameters, plus the temperature, gave us estimates of density, compressibility, speed of sound, latent heat, surface tension, viscosity, specific heat, and the diffusion constant. This is almost all the properties of water discussed in Mark Denny’s wonderful book Air and Water. Fermi would be proud.