Friday, April 23, 2021

Electric and Magnetic Fields From Two-Dimensional Anisotropic Bisyncytia


Page 223 of Intermediate Physics for Medicine and Biology.

Figure 8.18 on page 223 of Intermediate Physics for Medicine and Biology contains a plot of the magnetic field produced by action currents in a slice of cardiac tissue. The measured magnetic field contours have approximately a four-fold symmetry. The experiment by Staton et al. that produced this data was a tour de force, demonstrating the power of high-spatial-resolution biomagnetic techniques. 
 
Sepulveda and Wikswo (1987).
In this post, I discuss the theoretical prediction by Nestor Sepulveda and John Wikswo in the mid 1980s that preceded and motivated the experiment.
Sepulveda NG, Wikswo JP (1987) “Electric and Magnetic Fields From Two-Dimensional Anisotropic Bisyncytia,” Biophysical Journal, Volume 51, Pages 557-568.
Their abstract is presented below.
Cardiac tissue can be considered macroscopically as a bidomain, anisotropic conductor in which simple depolarization wavefronts produce complex current distributions. Since such distributions may be difficult to measure using electrical techniques, we have developed a mathematical model to determine the feasibility of magnetic localization of these currents. By applying the finite element method to an idealized two-dimensional bisyncytium [a synonym for bidomain] with anisotropic conductivities, we have calculated the intracellular and extracellular potentials, the current distributions, and the magnetic fields for a circular depolarization wavefront. The calculated magnetic field 1 mm from the tissue is well within the sensitivity of a SQUID magnetometer. Our results show that complex bisyncytial current patterns can be studied magnetically, and these studies should provide valuable insight regarding the electrical anisotropy of cardiac tissue.
Sepulveda and Wikswo assumed the tissue was excited by a brief stimulus through an electrode (purple dot in the illustration below), resulting in a circular wave front propagating outward. The transmembrane potential contours at one instant are shown in red. The assumption of a circular wave front is odd, because cardiac tissue is anisotropic. A better assumption would have been an elliptical wave front with its long axis parallel to the fibers. Nevertheless, the circular wave front captures the essential features of the problem.

If the tissue were isotropic, the intracellular current density would point radially outward and the extracellular current density would point radially inward. The intracellular and extracellular currents would exactly cancel, so the net current (their sum) would be zero. Moreover, the net current would vanish if the tissue were anisotropic but had equal anisotropy ratios. That is, if the ratios of the electrical conductivities parallel and perpendicular to the fibers were the same in the intracellular and extracellular spaces. The only way to produce a net current (shown as the blue loops in the illustration below) is if the tissue has unequal anisotropy ratios. In that case, the loops are four-fold symmetric, rotating clockwise in two quadrants and counterclockwise in the other two.

Current loops produce magnetic fields. The right-hand rule implies that the magnetic field points up out of the plane in the top-right and the bottom-left quadrants, and down into the plane in the other two. The contours of magnetic field are green in the illustration below, and the peak magnitude for a 1 mm thick sheet of tissue is about one fourth of a nanotesla.

Just for fun, I superimposed the transmembrane potential, net current density, and magnetic field plots in the picture below.

Notes:
  1. Measuring the magnetic field acts as a null detector of unequal anisotropy ratios. In other words, in tissue with equal anisotropy ratios the magnetic field vanishes, so the mere existence of a magnetic field implies that the anisotropy ratios are unequal. The condition of unequal anisotropy ratios has many implications for cardiac tissue. One is discussed in Homework Problem 50 in Chapter 7 of IPMB.
  2. If the sheet of cardiac tissue is superfused by a saline bath, the magnetic field distribution changes.
  3. Wikswo was a pioneer in the field of biomagnetism. In particular, he developed small scanning magnetometers that had sub-millimeter spatial resolution. He was in a unique position of being able to measure the magnetic fields that he and Sepulveda predicted, which led to the figure included in IPMB.
  4. I was a graduate student in Wikswo’s laboratory when Sepulveda and Wikswo wrote this article. Sepulveda, a delightful Colombian biomedical engineer and a good friend of mine, worked as a research scientist in Wikswo’s lab. He was an expert on the finite element method—the numerical technique used in his paper with Wikswo—and had written his own finite element code that no one else in the lab understood. He died a decade ago, and I miss him. 
  5. Sepulveda and Wikswo were building on a calculation published in 1984 by Robert Plonsey and Roger Barr (“Current Flow Patterns in Two Dimensional Anisotropic Bisyncytia With Normal and Extreme Conductivities,” Biophys. J., 45:557-571). Wikswo heard either Plonsey or Barr give a talk about their results at a scientific meeting. He realized immediately that their predicted current loops implied a biomagnetic field. When Wikswo returned to the lab, he described Plonsey and Barr’s current loops at a group meeting. As I listened, I remember thinking “Wikswo’s gone mad,” but he was right.
  6. Two years after their magnetic field article, Sepulveda and Wikswo (now with me included as a coauthor) calculated the transmembrane potential produced when cardiac tissue is stimulated by a point electrode. But that’s another story.
I’ll give Sepulveda and Wikswo the last word. Below is the concluding paragraph of their article, which looks forward to the experimental measurement of the magnetic field pattern that was shown in IPMB.
The bidomain model of cardiac tissue provides a tool that can be explored and used to study and explain features of cardiac conduction. However, it should be remembered that “a model is valid when it measures what it is intended to measure” (31). Thus, experimental data must be used to evaluate the validity of the bidomain model. This evaluation must involve comparison of the model's predictions not only with measured intracellular and extracellular potentials but also with the measured magnetic fields. When the applicability of the bidomain model to a particular cardiac preparation and the validity and reliability of our calculations have been determined experimentally, this mathematical approach should then provide a new technique for analyzing normal and pathological cardiac activation.
Members of Wikswo's lab at Vanderbilt University in the mid 1980s: John Wikswo is on the phone, Nestor Sepulveda is in the white shirt and gray pants, and I am on the far right. The other people are Frans Gielen (the tall guy with arms crossed on the left), Ranjith Wijesinghe (between Gielen and Sepulveda), Peng Zhang (between Wikswo and Sepulveda), and Pat Henry (kneeling).

Friday, April 16, 2021

The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race

The Code Breaker,
by Walter Isaacson

My favorite authors are Simon Winchester, David Quammen, and Walter Isaacson. This week I read Isaacson’s latest book: The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race. I would place it alongside The Eighth Day of Creation and The Making of the Atomic Bomb as one of the best books about the history of science.

In his introduction, Isaacson writes

The invention of CRISPR and the plague of COVID will hasten our transition to the third great revolution of modern times. These revolutions arose from the discovery, beginning just over a century ago, of the three fundamental kernels of our existence: the atom, the bit, and the gene.

The first half of the twentieth century, beginning with Albert Einstein’s 1905 papers on relativity and quantum theory, featured a revolution driven by physics. In the five decades following his miracle year, his theories led to atom bombs and nuclear power, transistors and spaceships, lasers and radar.

The second half of the twentieth century was an information-technology era, based on the idea that all information could be encoded by binary digits—known as bits—and all logical processes could be performed by circuits with on-off switches. In the 1950s, this led to the development of the microchip, the computer, and the internet. When these three innovations were combined, the digital revolution was born.

Now we have entered a third and even more momentous era, a life-science revolution. Children who study digital coding will be joined by those who study genetic code.
Early in the book, Isaacson describes Francisco Mojica’s discovery that bacteria have “CRISPR spacer sequences”: strands of DNA that serve as an immune system protecting them from viruses.
As we humans struggle to fight off novel strains of viruses, it’s useful to note that bacteria have been doing this for about three billion years, give or take a few million centuries. Almost from the beginning of life on this planet, there’s been an intense arms race between bacteria, which developed elaborate methods of defending against viruses, and the ever-evolving viruses, which sought ways to thwart those defenses.

Mojica found that bacteria with CRISPR spacer sequences seemed to be immune from infection by a virus that had the same sequence. But bacteria without the spacer did get infected. It was a pretty ingenious defense system, but there was something even cooler: it appeared to adapt to new threats. When new viruses came along, the bacteria that survived were able to incorporate some of that virus’s DNA and thus create, in its progeny, an acquired immunity to that new virus. Mojica recalls being so overcome by emotion at this realization that he got tears in his eyes. The beauty of nature can sometimes do that to you.

The Code Breaker focuses on the life and work of Jennifer Doudna, who won the 2020 Nobel Prize in Chemistry. However, the star of the book is not Doudna, nor Emmanuelle Charpentier (who shared the prize with Doudna), nor Mojica, nor any of the other scientific heroes. The star is RNA, the molecule that carries genetic information from DNA in the nucleus to the cytoplasm where proteins are produced.

By 2008, scientists had discovered a handful of enzymes produced by genes that are adjacent to the CRISPR sequences in a bacteria’s DNA. These CRISPR-associated (Cas) enzymes enable the system to cut and paste new memories of viruses that attack the bacteria. They also create short segments of RNA, known as CRISPR RNA (crRNA), that can guide a scissors-like enzyme to a dangerous virus and cut up its genetic material. Presto! That’s how the wily bacteria create an adaptive immune system!
Doudna and Charpentier’s Nobel Prize resulted from their developing the CRISPR-Cas9 system into a powerful technique for gene editing.
The study of CRISPR would become a vivid example of the call-and-response duet between basic science and translational medicine. At the beginning it was driven by the pure curiosity of microbe-hunters who wanted to explain an oddity they had stumbled upon when sequencing the DNA of offbeat bacteria. Then it was studied in an effort to protect the bacteria in yogurt cultures from attacking viruses. That led to a basic discovery about the fundamental workings of biology. Now a biochemical analysis was pointing the way to the invention of a tool with potential practical uses. “Once we figured out the components of the CRISPR-Cas9 assembly, we realized that we could program it on our own,” Doudna says. “In other words, we could add a different crRNA and get it to cut any different DNA sequence we chose.”

Several other themes appear throughout The Code Breaker:

  • The role of competition and collaboration in science, 
  • How industry partnerships and intellectual property affect scientific discovery, 
  • The ethics of gene editing, and
  • The epic scientific response to the COVID-19 pandemic.

I’m amazed that Isaacson’s book is so up-to-date. I received my second dose of the Pfizer-BioNTech vaccine last Saturday and then read The Code Breaker in a three-day marathon. My arm was still sore while reading the chapter near the end of the book about RNA Covid vaccines like Pfizer’s.

There’s a lot of biology and medicine in The Code Breaker, but not much physics. Yet, some of the topics discussed in Intermediate Physics for Medicine and Biology appear briefly. Doudna uses x-ray diffraction to decipher the structure of RNA. Electroporation helps get vaccines and drugs into cells. Electrophoresis, microfluidics, and electron microscopy are mentioned. I wonder if injecting more physics and math into this field would supercharge its progress. 

CRISPR isn’t the first gene-editing tool, but it increases the precision of the technique. As Winchester noted in The Perfectionists, precision is a hallmark of technology in the modern world. Quammen’s book Spillover suggests that humanity may be doomed by an endless flood of viral pandemics, but The Code Breaker offers hope that science will provide the tools needed to prevail over the viruses.

I will close with my favorite passage from The Code Breaker: Isaacson’s paean to curiosity-driven scientific research.

The invention of easily reprogrammable RNA vaccines was a lightning-fast triumph of human ingenuity, but it was based on decades of curiosity-driven research into one of the most fundamental aspects of life on planet earth: how genes encoded by DNA are transcribed into snippets of RNA that tell cells what proteins to assemble. Likewise, CRISPR gene-editing technology came from understanding the way that bacteria use snippets of RNA to guide enzymes to chop up dangerous viruses. Great inventions come from understanding basic science. Nature is beautiful that way.

 

“How CRISPR lets us edit our DNA,” a TED talk by Jennifer Doudna. 

Nobel Lecture, Jennifer Doudna, 2020 Nobel Prize in Chemistry.

Friday, April 9, 2021

The Vitamin D Questions: How Much Do You Need and How Should You Get It?

Vitamin D
Two years ago my doctor recommended I start taking vitamin D. I’m annoyed at having to take a supplement every day, but I do what the doc says.

Your body needs exposure to ultraviolet light to produce its own vitamin D, but too much UV light causes skin cancer. Russ Hobbie and I address this trade-off in Section 14.10 of Intermediate Physics for Medicine and Biology.

There has been an alarming increase in the use of tanning parlors by teenagers and young adults. These emit primarily UVA, which can cause melanoma. Exposure rates are two to three times greater than solar radiation at the equator at noon (Schmidt 2012). Many states now prohibit minors from using tanning parlors. Proponents of tanning parlors point out that UVB promotes the synthesis of vitamin D; however, the exposure to UVB in a tanning parlor is much higher than needed by the body for vitamin D production. Tanning as a source of Vitamin D is no longer recommended at any age level (Barysch et al. 2010).
To learn more about how much vitamin D people need, I read an article by Deon Wolpowitz and Barbara Gilchrest.
Wolpowitz D, Gilchrest BA (2006) “The Vitamin D Questions: How Much Do You Need and How Should You Get It?” Journal of the American Academy of Dermatology, Volume 54, Pages 301–317.

Below are excerpts from their conclusion.

Given the scarcity of naturally occurring vit [vitamin] D in many otherwise adequate diets, human beings may once have depended on unprotected exposure to natural sunlight as the primary environmental source of vit D, at least during those periods of the year when sunlight can produce [previtamin] D3 in the skin... However, chronic unprotected exposure to carcinogenic UV radiation in sunlight not only results in photoaging, but also greatly increases the risk of skin cancer. This risk is further exacerbated by the extended lifespan of human beings in the 21st century. Fortunately, there is a noncarcinogenic alternative—intestinal absorption of vit D-fortified foods and/or dietary supplements…

All available evidence indicates that younger, lighter-skinned individuals easily maintain desirable serum 25-OH vit D levels year-round by incidental protected sun exposure and customary diet. Daily intake of two 8-oz glasses of fortified milk or orange juice or one standard vit or incidental protected exposure of the face and backs of hands to 0.25% minimum erythema [reddening of the skin] dose of UVB radiation 3 times weekly each generates adequate serum 25-OH levels by classic criteria. Dietary supplementation of vit D is efficient, efficacious, and safe. Thus, it is prudent for those at high statistical risk for vit D deficiency, such as patients who are highly protected against the sun [or old folks like me], to take daily supplemental vit D (200-1000 IU) with concurrent dietary calcium to meet current and future RDA [recommended daily allowance] levels.
So, maybe my physician was right when she put me on vitamin D. But I decided to double my dose in the winter. Is that a good idea? A recent paper out of South Korea has some interesting data.
Park SS, Lee YG, Kim M, Kim J, Koo J-H, Kim CK, Um J, Yoon J (2019) “Simulation of Threshold UV Exposure Time for Vitamin D Synthesis in South Korea,” Advances in Meteorology, Volume 2019, Article 4328151.
Sang Seo Park and his colleagues analyze UV exposure, and estimate the threshold exposure time at noon to sunlight in Seoul for vitamin D synthesis (blue) and erythema (red). 

Threshold exposure time to sunlight in Seoul for vitamin D synthesis (blue) and erythema (red). From Park et al. (2019).

This is a semilog plot, so the difference between July and January is about a factor of ten. I think increasing my vitamin D dose in the winter is, if anything, conservative. I could probably do without the supplement in the summer. Rochester (42° latitude) is farther north than Seoul (38°), so the seasonal effect might be even greater where I live.

In conclusion, I think Russ and I are correct to stress the harmful effects of UV light in IPMB. If you’re worried about not getting enough vitamin D, take a supplement in the winter.

Friday, April 2, 2021

The Boltzmann Distribution Applied to a Harmonic Oscillator

In Chapter 3 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Boltzmann distribution. If you have a system with energy levels En populated by particles in thermal equilibrium, the Boltzmann distribution gives the probability of finding a particle in the nth level.

A classic example of the Boltzmann distribution is for the energy levels of a harmonic oscillator. These levels are equally spaced starting from a ground state and increasing without bound. To see the power of the Boltzmann distribution, solve this new homework problem.
Section 3.7

Problem 29½. Suppose the energy levels, En, of a system are given by

En = n ε,     for  n = 0, 1, 2, 3, …

where ε is the energy difference between adjacent levels. Assume the probability of a particle occupying the nth energy level, Pn, obeys the Boltzmann distribution

Pn = A e−En/kT ,

where A is a constant, T is the absolute temperature, and k is the Boltzmann constant.

(a) Determine A in terms of ε, k, and T. (Hint: the sum of the probabilities over all levels is one.)
(b) Find the average energy E of the particles. (Hint: E = ∑PnEn.)
(c) Calculate the heat capacity C of a system of N such particles. (Hint: U = N E and C = dU/dT.)
(d) What is the limiting value of C for high temperatures (kT >> ε)? (Hint: use the Taylor series of the exponential.)
(e) What is the limiting value of C for low temperatures (kT << ε)?
(f) Sketch a plot of C versus T.
You may need these infinite series:

1 + x + x2 + x3 + ⋯ = 1/(1 − x) 
x + 2x2 + 3x3 + ⋯ = x/(1 − x)2 

This is a somewhat advanced problem in statistical mechanics, so I gave several hints to guide the reader. The calculation contains much interesting physics. For instance, the answer to part (e) is known as the third law of thermodynamics. Albert Einstein was the first to calculate the heat capacity of a collection of harmonic oscillators (a good model for a crystalline solid). There’s more physics than biology in this problem, because most of the interesting behavior occurs at cold temperatures but biology operates at hot temperatures.

If you’re having difficulty solving this problem, here’s one more hint:

e−nx = (e−x)n
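
And if you solve the problem and want to check your algebra, here is a little Python script (my own sketch, not from IPMB) that sums the Boltzmann series numerically and compares the heat capacity to the closed-form Einstein-model expression, C/Nk = x2 ex/(ex − 1)2 with x = ε/kT, which is what part (c) should give you.

# Numerical check of the harmonic-oscillator problem (my sketch, not from IPMB).
# <E> is summed directly from the Boltzmann distribution, and C = d<E>/dT is
# compared with the Einstein-model formula. Energies are in units of eps, so
# kT below means kT/eps, and the heat capacity is per particle, in units of k.
import numpy as np

def avg_E(kT, nmax=2000):
    n = np.arange(nmax)
    P = np.exp(-n/kT)
    P /= P.sum()            # normalizing fixes the constant A (part a)
    return (P*n).sum()      # <E> = sum of Pn En (part b)

def heat_capacity(kT, h=1e-4):
    # central difference for C = dU/dT, per particle (part c)
    return (avg_E(kT*(1+h)) - avg_E(kT*(1-h)))/(2*h*kT)

for kT in [0.1, 0.5, 1.0, 5.0, 20.0]:
    x = 1.0/kT
    einstein = x**2*np.exp(x)/np.expm1(x)**2   # closed-form result
    print(f"kT/eps = {kT:5.1f}: numeric C/k = {heat_capacity(kT):.4f}, "
          f"Einstein formula = {einstein:.4f}")
# the high-temperature values approach 1 (part d); the low-temperature
# values approach 0 (part e)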

Enjoy!

Friday, March 26, 2021

Cooling by Radiation

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss thermal radiation. If you’re a black body, the net power you radiate, wtot, is given by Eq. 14.41

wtot = S σSB (T4 − Ts4) ,                (14.41)

where S is the surface area, σSB is the Stefan-Boltzmann constant (5.67 × 10−8 W m−2 K−4), T is the absolute temperature of your body (about 310 K), and Ts is the temperature of your surroundings. The T4 term is the radiation you emit, and the Ts4 term is the radiation you absorb.

The fourth power that appears in this expression is annoying. It means we must use absolute temperature in kelvins (K); you get the wrong answer if you use temperature in degrees Celsius (°C). It also means the expression is nonlinear; wtot is not proportional to the temperature difference T − Ts.

On the absolute temperature scale, the difference between the temperature of your body (310 K) and the temperature of your surroundings (say, 293 K at 20 °C) is only about 5%. In this case, we simplify the expression for wtot by linearizing it. To see what I mean, try Homework Problem 14.32 in IPMB.
Section 14.9 
Problem 32. Show that an approximation to Eq. 14.41 for small temperature differences is wtot = S Krad (T − Ts). Deduce the value of Krad at body temperature. Hint: Factor T4 − Ts4 =  (T − Ts)(…). You should get Krad = 6.76 W m−2 K−1.
The constant Krad has the same units as a convection coefficient (see Homework Problem 51 in Chapter 3 of IPMB). Think of it as an effective convection coefficient for radiative heat loss. Once you determine Krad, you can use either the kelvin or Celsius temperature scales for T − Ts, so you can write its units as W m−2 °C−1.
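
You can verify the value of Krad with a few lines of Python (my own check, not from the book). The linearization gives Krad = 4 σSB T3, and the script also shows how well the linear form approximates the exact fourth-power law for room-temperature surroundings.

# Sanity check of the linearized radiation law (my own sketch, not from IPMB).
# For Ts near T, T^4 - Ts^4 is approximately 4 T^3 (T - Ts), so Krad = 4 sigma T^3.
sigma_SB = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
T  = 310.0           # body temperature, K
Ts = 293.0           # surroundings at 20 C, K

Krad = 4*sigma_SB*T**3
print(f"Krad = {Krad:.2f} W m-2 K-1")   # about 6.76

exact  = sigma_SB*(T**4 - Ts**4)   # exact net emission per unit area, W m-2
linear = Krad*(T - Ts)             # linearized version
print(f"exact = {exact:.0f} W m-2, linearized = {linear:.0f} W m-2")  # agree within ~10%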
 
Air and Water,
by Mark Denny.
In Air and Water, Mark Denny analyzes the convection coefficient. In a stagnant fluid, the convection coefficient depends only on the fluid’s thermal conductivity and the body’s size. For a sphere, it is inversely proportional to the diameter, meaning that small bodies are more effective at convective cooling per unit surface area than large bodies. If the body undergoes free convection or forced convection (for both cases the surrounding fluid is moving), the expression for the convection coefficient is more complicated, and depends on factors such as the Reynolds number and Prandtl number of the fluid flow. Denny gives values for the convection coefficient as a function of body size for both air and water. Usually, these values are greater than the 6.76 W m−2 °C−1 for radiation. However, for large bodies in air, radiation can compete with convection as the dominant mechanism. For people, radiation is an important mechanism for cooling. For a dolphin or mouse, it isn’t. Elephants probably make good use of radiative cooling.
 
Finally, our analysis implies that when the difference between the temperatures of the body and the surroundings is small, a body whose primary mechanism for getting rid of heat is radiation will cool exponentially following Newton’s law of cooling.

Friday, March 19, 2021

The Carr-Purcell-Meiboom-Gill Pulse Sequence

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny...” 

Isaac Asimov

In Section 18.8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the Carr-Purcell pulse sequence, used in magnetic resonance imaging.

When a sequence of π [180° radio-frequency] pulses that nutate M [the magnetization vector] about the x' axis are applied at TE/2, 3TE/2, 5TE/2, etc., a sequence of echoes are formed [in the Mx signal], the amplitudes of which decay with relaxation time T2. This is shown in Fig. 18.19.
Fig. 18.19  The Carr-Purcell pulse sequence.
All π pulses nutate about the x' axis.
The envelope of echoes decays as e−t/T2.

Russ and I then discuss the Carr-Purcell-Meiboom-Gill pulse sequence.
One disadvantage of the CP [Carr-Purcell] sequence is that the π pulse must be very accurate or a cumulative error builds up in the successive pulses. The Carr-Purcell-Meiboom-Gill sequence overcomes this problem. The initial π/2 [90° radio-frequency] pulse nutates M about the x' axis as before, but the subsequent [π] pulses are shifted a quarter cycle in time, which causes them to rotate about the y' axis.
 
Fig. 18.21  The Carr-Purcell-Meiboom-Gill pulse sequence.

Meiboom, S. and Gill, D. (1958)
“Modified Spin-Echo Method for
Measuring Nuclear Relaxation Times.”
Rev. Sci. Instr.
29:688–691.
Students might enjoy reading the abstract of Saul Meiboom and David Gill’s 1958 article published in the Review of Scientific Instruments (Volume 29, Pages 688-691).
A spin echo method adapted to the measurement of long nuclear relaxation times (T2) in liquids is described. The pulse sequence is identical to the one proposed by Carr and Purcell, but the rf [radio-frequency] of the successive pulses is coherent, and a phase shift of 90° is introduced in the first pulse. Very long T2 values can be measured without appreciable effect of diffusion.
This short paper is so highly cited that it was featured in a 1980 Citation Classic commentary, in which Meiboom reflected on the significance of the research.
The work leading to this paper was done nearly 25 years ago at the Weizmann Institute of Science, Rehovot, Israel. David Gill, who was then a graduate student… , set out to measure NMR T2-relaxation times in liquids, using the well-known Carr-Purcell pulse train scheme. He soon found that at high pulse repetition rates adjustments became very critical, and echo decays, which ideally should be exponential, often exhibited beats and other irregularities. But he also saw that on rare and unpredictable occasions a beautiful exponential decay was observed... Somehow the recognition emerged that the chance occurrence of a 90° phase shift of the nuclear polarization [magnetization] must underlie the observations. It became clear that in the presence of such a shift a stable, self-correcting state of the nuclear polarization is produced, while the original scheme results in an unstable state, for which deviations are cumulative. From here it was an easy step to the introduction of an intentional phase shift in the applied pulse train, and the consistent production of good decays.
The key point is that the delay between the initial π/2 pulse (to flip the spins into the xy plane) and the string of π pulses (to create the echoes) must be timed carefully (the pulses must be coherent). Even adding a delay corresponding to a quarter of a single oscillation changes everything. In a two-tesla MRI scanner, the Larmor frequency is about 85 MHz, so one period is 12 nanoseconds. Therefore, if the timing is off by just a few nanoseconds, the method won’t work.

Initially Gill didn’t worry about timing the pulses precisely, so usually he was using the error-prone Carr-Purcell sequence. Occasionally he got lucky and the timing was just right; he was using what’s now called the Carr-Purcell-Meiboom-Gill sequence. Meiboom and Gill “somehow” were able to deduce what was happening and fix the problem. Meiboom believes their paper is cited so often because it was the first to recognize the importance of maintaining phase relations between the different pulses in an MRI pulse sequence.
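
You can watch this self-correction happen in a toy simulation. The Python sketch below (my own, not from Meiboom and Gill’s paper) tracks a few hundred dephasing spins with rotation matrices, applies π pulses that are all 5% too strong, and prints the echo amplitudes. Refocusing about x (Carr-Purcell) lets the error accumulate; refocusing about y (CPMG) does not.

# Toy demonstration of why CPMG tolerates imperfect pi pulses (my sketch).
# Spins dephase, are refocused by pi pulses with a 5% flip-angle error,
# and the echo amplitude is recorded. Axis 'x' = Carr-Purcell, 'y' = CPMG.
import numpy as np

def rot(axis, angle):
    c, s = np.cos(angle), np.sin(angle)
    return {'x': np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
            'y': np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
            'z': np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])}[axis]

rng = np.random.default_rng(1)
phases = rng.uniform(-np.pi, np.pi, 400)  # off-resonance phase per TE/2 interval
error = 0.05                              # 5% error in every pi pulse

for axis in ['x', 'y']:
    # magnetization after an ideal 90-degree pulse about x: along -y
    M = np.tile([0.0, -1.0, 0.0], (len(phases), 1))
    R_pi = rot(axis, np.pi*(1 + error))
    echoes = []
    for _ in range(10):                      # ten echoes
        for i, phi in enumerate(phases):
            R_z = rot('z', phi)
            M[i] = R_z @ R_pi @ R_z @ M[i]   # dephase, refocus, rephase
        Mx, My = M[:, 0].mean(), M[:, 1].mean()
        echoes.append(np.hypot(Mx, My))      # echo amplitude
    name = 'CP   (pi about x)' if axis == 'x' else 'CPMG (pi about y)'
    print(name, np.round(echoes, 2))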

In his commentary, Meiboom notes that
Although in hindsight the 90° phase shift seems the logical and almost obvious thing to do, its introduction was triggered by a chance observation, rather than by clever a priori reasoning. I suspect (though I have no proof) that this applies to many scientific developments, even if the actual birth process of a new idea is seldom described in the introduction to the relevant paper.
If you’re a grad student working on a difficult experiment that’s behaving oddly, don’t be discouraged if you hear yourself saying “that’s funny...” A discovery might be sitting right in front of you!  

Friday, March 12, 2021

The Rest of the Story 2

Hermann in 1848.

The Rest of the Story

Hermann was born in 1821 in Potsdam, Germany. He was often sick as a child, suffering from illnesses such as scarlet fever, and started school late. He was hampered by a poor memory for disconnected facts, making subjects like languages and history difficult, so his interest turned to science. His father loaned him some cheap glass lenses that he used to build optical instruments. He wanted to become a physicist, but his family couldn’t afford to send him to college. Instead, he studied hard to pass an exam that won him a place in a government medical school in Berlin, where his education would be free if he served in the military for five years after he graduated.

The seventeen-year-old Hermann moved to Berlin in 1838. He brought his piano with him, on which he loved to play Mozart and Beethoven. He became friends with his fellow students Ernst von Brücke and Emil du Bois-Reymond, began doing scientific research under the direction of physiologist Johannes Müller, and taught himself higher mathematics in his spare time. By 1843 he graduated and began his required service as an army surgeon.

Life in the army required long hours, and Hermann was isolated from the scientific establishment in Berlin. But with the help of Brücke and du Bois-Reymond he somehow continued his research. His constitution was still delicate, and sometimes he would take time off to restore his health. Near the end of his five-year commitment to the army he fell in love with Olga von Velten, who would sing while he accompanied her on the piano. They became engaged, but he knew they could not marry until he found an academic job after his military service ended, and for that he needed to establish himself as a first-rank scientist. This illness-prone, cash-strapped, over-worked army doctor with a poor memory and a love for music needed to find a research project that would propel him to the top of German science.

Hermann rose to the challenge. He began a careful study of the balance between muscle metabolism and contraction. Using both experiments and mathematics he established the conservation of energy, and in the process showed that no vital force was needed to explain life. On July 23, 1847 he announced his discovery at a meeting of the German Physical Society.  

This research led to a faculty position in Berlin and his marriage to Olga. His career took off, and he later made contributions to the study of vision, hearing, nerve conduction, and ophthalmology. Today, the Helmholtz Association of German Research Centers, the largest scientific organization in Germany, bears his name. Many consider Hermann von Helmholtz to be the greatest biological physicist of all time.

And now you know THE REST OF THE STORY.

Good day! 

_____________________________________________________________

This blog post was written in the style of Paul Harvey’s “The Rest of the Story” radio program. The content is based on a biography of Helmholtz written by his friend Leo Koenigsberger. You can read about nerve conduction and Helmholtz’s first measurement of its propagation speed in Chapter 6 of Intermediate Physics for Medicine and Biology. This August we will celebrate the 200th anniversary of Hermann von Helmholtz’s birth. 

Click here for another IPMB “The Rest of the Story” post.

 
Charles Osgood pays tribute to the master storyteller Paul Harvey.

Friday, March 5, 2021

Estimating the Properties of Water

Water
from: www.middleschoolchemistry.com
 
I found a manuscript on the arXiv by Andrew Lucas about estimating macroscopic properties of materials using just a few microscopic parameters. I decided to try a version of this analysis myself. It’s based on Lucas’s work, with a few modifications. I focus exclusively on water because of its importance for biological physics, and make order-of-magnitude calculations like those Russ Hobbie and I discuss in the first section of Intermediate Physics for Medicine and Biology.

My goal is to estimate the properties of water using three numbers: the size, mass, and energy associated with water molecules. We take the size to be the center-to-center distance between molecules, which is about 3 Å, or 3 × 10−10 m. The mass of a water molecule is 18 (the molecular weight) times the mass of a proton, or about 3 × 10−26 kg. The energy associated with one hydrogen bond between water molecules is about 0.2 eV, or 3 × 10−20 J. This is roughly eight times the thermal energy kT at body temperature, where k is Boltzmann’s constant (1.4 × 10−23 J K−1) and T is the absolute temperature (310 K). A water molecule has about four hydrogen bonds with neighboring molecules.

Density

Estimating the density of water, ρ, is Homework Problem 4 in Chapter 1 of IPMB. Density is mass divided by volume, and volume is distance cubed

ρ = (3 × 10−26 kg)/(3 × 10−10 m)3 = 1100 kg m−3 = 1.1 g cm−3.

The accepted value is ρ = 1.0 g cm−3, so our calculation is about 10% off; not bad for an order-of-magnitude estimate.

Compressibility

The compressibility of water, κ, is a measure of how the volume of water decreases with increasing pressure. It has dimensions of inverse pressure. The pressure is typically thought of as force per unit area, but we can multiply numerator and denominator by distance and express it as energy per unit volume. Therefore, the compressibility is approximately distance cubed over the total energy of the four hydrogen bonds

κ = (3 × 10−10 m)3/[4(3 × 10−20 J)] = 0.25 × 10−9 Pa−1 = 0.25 GPa−1 ,

implying a bulk modulus, B (the reciprocal of the compressibility), of 4 GPa. Water has a bulk modulus of about B = 2.2 GPa, so our estimate is within a factor of two.

Speed of Sound

Once you know the density and compressibility, you can calculate the speed of sound, c, as (see Eq. 13.11 in IPMB)

c = (ρ κ)−1/2 = 1/√[(1100 kg m−3) (0.25 × 10−9 Pa−1)] = 1900 m s−1 = 1.9 km s−1.

The measured value of the speed of sound in water is about c = 1.5 km s−1, which is pretty close for a back-of-the-envelope estimate.

Latent Heat

A homework problem about vapor pressure in Chapter 3 of IPMB uses water’s latent heat of vaporization, L, which is the energy required to boil water per kilogram. We estimate it as

L = 4(3 × 10−20 J)/(3 × 10−26 kg) = 4.0 × 106 J kg−1 = 4 MJ kg−1.

The known value is L = 2.5 MJ kg−1. Not great, but not bad.

Surface Tension

The surface tension, γ, is typically expressed as force per unit length, which is equivalent to the energy per unit area. At a surface, we estimate one of the four hydrogen bonds is missing, so

γ = (3 × 10−20 J)/(3 × 10−10 m)2 = 0.33 J m−2 .

The measured value is γ = 0.07 J m−2, which is about five times less than our calculation. This is a bigger discrepancy than I’d like for an order-of-magnitude estimate, but it’s not horrible.

Viscosity

The coefficient of viscosity, η, has units of kg m−1 s−1. We can use the mass of the water molecule in kilograms, and the distance between molecules in meters, but we don’t have a time scale. However, energy has units of kg m2 s−2, so we can take the square root of mass times distance squared over energy and get a unit of time, τ

τ = √[(3 × 10−26 kg) (3 × 10−10 m)2/4(3 × 10−20 J)] = 0.15 × 10−12 s = 0.15 ps.

We can think of this as a time characterizing the vibrations about equilibrium of the molecules. 
 
The viscosity of water should therefore be on the order of

η = (3 × 10−26 kg)/[(3 × 10−10 m) (0.15 × 10−12 s)] = 0.67 × 10−3 kg m−1 s−1.

Water has a viscosity coefficient of about η = 1 × 10−3 kg m−1 s−1. I admit this analysis provides little insight into the mechanism underlying viscous effects, and it doesn’t explain the enormous temperature dependence of η, but it gets the right order of magnitude.

Specific Heat

The heat capacity is the energy needed to raise the temperature of water by one degree. Thermodynamics implies that the heat capacity is typically equal to Boltzmann’s constant times the number of degrees of freedom per molecule times the number of molecules. The number of degrees of freedom is a subtle thermodynamic concept, but we can approximate it as the number of hydrogen bonds per molecule; about four. Often heat capacity is expressed as the specific heat, C, which is the heat capacity per unit mass. In that case, the specific heat is

C = 4 (1.4 × 10−23 J K−1)/(3 × 10−26 kg) = 1900 J K−1 kg−1.

The measured value is C = 4200 J K−1 kg−1, which is more than a factor of two larger than our estimate. I’m not sure why our value is so low, but probably there are rotational degrees of freedom in addition to the four vibrational modes we counted.

Diffusion

The self-diffusion constant of water can be estimated using the Stokes-Einstein equation relating diffusion and viscosity, D = kT/(6πηa), where a is the size of the molecule. The thermal energy kT is about one eighth of the energy of a hydrogen bond. Therefore,

D = [(3 × 10−20 J)/8]/[(6)(3.14)(0.67 × 10−3 kg m−1 s−1)(3 × 10−10 m)] = 10−9 m2 s−1.

Figure 4.11 in IPMB suggests the measured diffusion constant is about twice this estimate: D = 2 × 10−9 m2 s−1.
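
Because every estimate follows from the same three microscopic numbers, the whole calculation fits in a short Python script. Here is my version of the arithmetic above (all values are order-of-magnitude, as before).

# Order-of-magnitude properties of water from three microscopic numbers
# (a restatement of the estimates above; everything is approximate).
import numpy as np

a    = 3e-10          # center-to-center molecular spacing, m
m    = 3e-26          # mass of a water molecule, kg
eps  = 3e-20          # energy of one hydrogen bond, J
k, T = 1.4e-23, 310.0 # Boltzmann constant, J/K, and body temperature, K
bonds = 4             # hydrogen bonds per molecule

rho   = m/a**3                       # density, kg m-3            (~1100)
kappa = a**3/(bonds*eps)             # compressibility, Pa-1      (~0.25 GPa-1)
c     = 1/np.sqrt(rho*kappa)         # speed of sound, m s-1      (~2000)
Lvap  = bonds*eps/m                  # latent heat, J kg-1        (~4e6)
gamma = eps/a**2                     # surface tension, J m-2     (~0.33)
tau   = np.sqrt(m*a**2/(bonds*eps))  # vibrational time, s        (~0.15 ps)
eta   = m/(a*tau)                    # viscosity, kg m-1 s-1      (~0.67e-3)
Csp   = bonds*k/m                    # specific heat, J K-1 kg-1  (~1900)
D     = (eps/8)/(6*np.pi*eta*a)      # diffusion constant, m2 s-1 (~1e-9)

for name, value in [("density", rho), ("compressibility", kappa),
                    ("sound speed", c), ("latent heat", Lvap),
                    ("surface tension", gamma), ("viscosity", eta),
                    ("specific heat", Csp), ("diffusion", D)]:
    print(f"{name:16s} {value:.2e}")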

 
Air and Water,
by Mark Denny.
We didn’t do too bad. Three microscopic parameters, plus the temperature, gave us estimates of density, compressibility, speed of sound, latent heat, surface tension, viscosity, specific heat, and the diffusion constant. This is almost all the properties of water discussed in Mark Denny’s wonderful book Air and Water. Fermi would be proud.

Friday, February 26, 2021

Bridging Physics and Biology Teaching Through Modeling

In this blog, I often stress the value of toy models. I’m not the only one who feels this way. Anne-Marie Hoskinson and her colleagues suggest that modeling is an important tool for teaching at the interface of physics and biology (“Bridging Physics and Biology Teaching Through Modeling,” American Journal of Physics, Volume 82, Pages 434–441, 2014). They write
While biology and physics might appear quite distinct to students, as scientific disciplines they both rely on observations and measurements to explain or to make predictions about the natural world. As a shared scientific practice, modeling is fundamental to both biology and physics. Models in these two disciplines serve to explain phenomena of the natural world; they make predictions that drive hypothesis generation and data collection, or they explain the function of an entity. While each discipline may prioritize different types of representations (e.g., diagrams vs mathematical equations) for building and depicting their underlying models, these differences reflect merely alternative uses of a common modeling process. Building on this foundational link between the disciplines, we propose that teaching science courses with an overarching emphasis on scientific practices, particularly modeling, will help students achieve an integrated and coherent understanding that will allow them to drive discovery in the interdisciplinary sciences.
One of their examples is the cardiac cycle, which they compare and contrast with the thermodynamic Carnot cycle. The cardiac cycle is best described graphically, using a pressure-volume diagram. Russ Hobbie and I present a PV plot of the left ventricle in Figure 1.34 of Intermediate Physics for Medicine and Biology. Below, I modify this plot, trying to capture its essence while simplifying it for easier analysis. As is my wont, I present this toy model as a new homework problem.
Sec. 1.19

Problem 38 ½. Consider a toy model for the behavior of the heart’s left ventricle, as expressed in the pressure-volume diagram

(a) Which sections of the cycle (AB, BC, CD, DA) correspond to relaxation, contraction, ejection, and filling?

(b) Which points during the cycle (A, B, C, D) correspond to the aortic valve opening, the aortic valve closing, the mitral valve opening, and the mitral valve closing?

(c) Plot the pressure versus time and the volume versus time (use a common horizontal time axis, but individual vertical pressure and volume axes).

(d) What is the systolic pressure (in mm Hg)?

(e) Calculate the stroke volume (in ml). 

(f) If the heart rate is 70 beats per minute, calculate the cardiac output (in m3 s–1).

(g) Calculate the work done per beat (in joules).

(h) If the heart rate is 70 beats per minute, calculate the average power output (in watts).

(i) Describe in words the four phases of the cardiac cycle.

(j) What are some limitations of this toy model?

The last two parts of the problem are crucial. Many students can analyze equations or plots, but have difficulty relating them to physical events and processes. Translation between words, pictures, and equations is an essential skill. 
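
If you want to check your answers to parts (e)–(h), here is a sketch of the arithmetic in Python. Since the toy diagram isn’t reproduced here, I’ve plugged in hypothetical round numbers (filling and ejection pressures of 10 and 100 mmHg, end-systolic and end-diastolic volumes of 50 and 120 ml); read the actual values off the pressure-volume loop.

# Worked numbers for parts (e)-(h), using hypothetical values in place of
# the diagram; substitute the pressures and volumes read off the plot.
mmHg = 133.0   # pascals per mmHg
ml   = 1e-6    # cubic meters per milliliter

P_fill, P_eject = 10*mmHg, 100*mmHg  # assumed filling and ejection pressures
V_es, V_ed      = 50*ml, 120*ml      # assumed end-systolic/end-diastolic volumes
rate = 70/60.0                       # heart rate, beats per second

stroke_volume  = V_ed - V_es                       # (e) 70 ml
cardiac_output = stroke_volume*rate                # (f) ~8.2e-5 m3/s
work_per_beat  = (P_eject - P_fill)*stroke_volume  # (g) loop area, ~0.8 J
power          = work_per_beat*rate                # (h) ~1 W

print(f"stroke volume  = {stroke_volume/ml:.0f} ml")
print(f"cardiac output = {cardiac_output:.1e} m3/s")
print(f"work per beat  = {work_per_beat:.2f} J")
print(f"average power  = {power:.2f} W")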

All toy models are simplifications; one of their primary uses is to point the way toward more realistic—albeit more complex—descriptions. Many scientific papers contain a paragraph in the discussion section describing the approximations and assumptions underlying the research.

Below is a Wiggers diagram from Wikipedia, which illustrates just how complex cardiac physiology can be. Yet, our toy model captures many general features of the diagram.


A Wiggers diagram summarizing cardiac physiology.
Source: adh30 revised work by DanielChangMD who revised original work of DestinyQx;
Redrawn as SVG by xavax, CC BY-SA 4.0, via Wikimedia Commons

I’ll give Hoskinson and her coworkers the last word.

“We have provided a complementary view to transforming undergraduate science courses by illustrating how physics and biology are united in their underlying use of scientific models and by describing how this practice can be leveraged to bridge the teaching of physics and biology.”

The Wiggers diagram explained in three minutes!
https://www.youtube.com/watch?v=0sogXvxxV0E

Friday, February 19, 2021

Magnetic Coil Stimulation of Straight and Bent Amphibian and Mammalian Peripheral Nerve in Vitro: Locus of Excitation

In this blog, I like to highlight important journal articles. One of my favorites is “Magnetic Coil Stimulation of Straight and Bent Amphibian and Mammalian Peripheral Nerve in Vitro: Locus of Excitation” by Paul Maccabee and his colleagues (Journal of Physiology, Volume 460, Pages 201–219, 1993). This paper isn’t cited in Intermediate Physics for Medicine and Biology, but it should be. It’s related to Homework Problem 32 in Chapter 8, about magnetic stimulation of a peripheral nerve.

The best part of Maccabee’s article is the pictures. I reproduce three of them below, somewhat modified from the originals. 

Fig. 1. The electric field and its derivative produced by magnetic stimulation using a figure-of-eight coil. Based on an illustration in Maccabee et al. (1993).

The main topic of the paper was how an electric field induced in tissue during magnetic stimulation could excite a nerve. The first order of business was to map the induced electric field. Figure 1 shows the measured y-component of the electric field, Ey, and its derivative dEy/dy, in a plane below a figure-of-eight coil. The electric field was strongest under the center of the coil, while the derivative had a large positive peak about 2 cm from the center, with a large negative peak roughly 2 cm in the other direction. Maccabee et al. included the derivative of the electric field in their figure because cable theory predicted that if you placed a nerve below the coil parallel to the y axis, the nerve would be excited where −dEy/dy was largest. 
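
You can reproduce the geometry of Fig. 1 with a one-dimensional toy model. Suppose the induced field along the nerve is approximated by a Gaussian, Ey(y) = E0 exp(−y2/2σ2) with σ = 2 cm (an assumed shape for illustration, not Maccabee et al.’s measured field). Then −dEy/dy peaks one σ on either side of the coil center, about 4 cm apart.

# Toy activating-function calculation: a Gaussian stand-in for the measured
# Ey under a figure-of-eight coil (assumed shape, not the paper's data).
import numpy as np

E0, sigma = 100.0, 0.02           # peak field (V/m) and width (m), both assumed
y  = np.linspace(-0.08, 0.08, 1601)
Ey = E0*np.exp(-y**2/(2*sigma**2))
neg_dEy = -np.gradient(Ey, y)     # cable theory: excitation where -dEy/dy peaks

print(f"-dEy/dy is largest at y = {y[np.argmax(neg_dEy)]*100:+.1f} cm")
print(f"and most negative at y = {y[np.argmin(neg_dEy)]*100:+.1f} cm")
# the peaks fall near +/- 2 cm, about 4 cm apart, matching the latency-shift test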

Fig. 2. An experiment to show how the stimulus location changes with the stimulus polarity. Based on an illustration in Maccabee et al. (1993).

The most important experiment is shown in Figure 2. The goal was to test the prediction that the nerve was excited where −dEy/dy is maximum. The method was to stimulate the nerve using one polarity and then the other, and determine if the location where the nerve is stimulated shifted by about 4 cm, as Figure 1 suggests.

A bullfrog sciatic nerve (green) was dissected out of the animal and placed in a bath containing saline (light blue). An electrode (dark blue dot) recorded the action potential as it reached the end of the nerve. A figure-of-eight coil (red) was placed under the bath. First Maccabee et al. stimulated with one polarity so the stimulation site was to the right of the coil center, relatively close to the recording electrode. The recorded signal (yellow) consisted of a large, brief stimulus artifact followed by an action potential that propagated down the nerve with a speed of 40.5 m/s. Then, they reversed the stimulus polarity. As we saw in Fig. 1, this shifted the location of excitation to another point to the left of the coil center. The recorded signal (purple) again consisted of a stimulus artifact followed by an action potential. The action potential, however, arrived 0.9 ms later because it started from the left side of the coil and therefore had to travel farther to reach the recording electrode. They could determine the distance between the stimulation sites by multiplying the speed by the latency shift: (40.5 m/s) × (0.9 ms) ≈ 3.6 cm. This was close to the distance between the two peaks in the plot of dEy/dy in Figure 1. The cable theory prediction was confirmed. 

Fig. 3. The effect of insulating obstacles on the site of magnetic stimulation. Based on an illustration in Maccabee et al. (1993).

In another experiment, Maccabee and his coworkers further tested the theory (Fig. 3). The electric field induced during magnetic stimulation was perturbed by an obstruction. They placed two insulating lucite cylinders (yellow) on either side of the nerve, which forced the induced current to pass through the narrow opening between them. This increased the strength of the electric field (green), and caused the negative and positive peaks of the derivative of the electric field (dark blue) to move closer together. Cable theory predicted that if the cylinders were not present the latency shift upon change in polarity would be relatively long, while with the cylinders the latency shift would be relatively short. The experiment found a long latency (1.2 ms) without the cylinders and a short latency (0.3 ms) with them, confirming the prediction. This behavior might be important when stimulating, say, the median nerve as it passes between two bones in the arm.

In addition, Maccabee and his coworkers examined nerves containing bends, which created “hot spots” where excitation preferentially occurred. They also examined polyphasic stimuli, which caused excitation at both the negative and positive peaks of dEy/dy nearly simultaneously. I won’t reproduce all their figures, but I recommend you download a copy of the paper and see them for yourself.

Why do I like this paper so much? For several reasons.

  • It’s an elegant example of how theory suggests an experiment, which once confirmed leads to additional predictions, resulting in even more experiments, and so on; a virtuous cycle.
  • Their illustrations are informative and clear (although I do like the color in my versions). You should be able to get the main point of a scientific paper by merely looking through the figures, and you can do that with Maccabee et al.’s article.
  • In vitro experiments (nerve in a dish) are nice because they strip away all the confounding details of in vivo (nerve in an arm) experiments. You can manipulate the system (say, by adding a couple lucite cylinders) and determine how the nerve responds. Of course, some would say in vivo experiments are better because they include all the complexities of an actual arm. As you might guess, I prefer the simplicity and elegance of in vitro experiments. 
  • If you want a coil that stimulates a peripheral nerve below its center, as opposed to off to one side, you can use a four-leaf-coil.
  • Finally, I like this article because Peter Basser and I were the ones who made the theoretical prediction that magnetic stimulation should occur where dEy/dy, not Ey, is maximum (Roth and Basser, “Model of the Stimulation of a Nerve Fiber by Electromagnetic Induction,” IEEE Transactions on Biomedical Engineering, Volume 37, Pages 588-597, 1990). I always love to see my own predictions verified. 

I’ve lost track of my friend Paul Maccabee, but I can assure you that he did good work studying magnetic stimulation of nerves. His article is well worth reading.