Friday, January 29, 2021

Stable Nuclei

Fig. 17.2 in Intermediate Physics for Medicine and Biology.
In Figure 17.2 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I show a plot of all the stable nuclei. The vertical axis is the number of protons, Z (the atomic number), and the horizontal axis is the number of neutrons, N (the neutron number). The mass number A equals Z + N. Each tiny black box in the figure corresponds to a stable isotope.

This figure summarizes a tremendous amount of information about nuclear physics. Unfortunately, the drawing is too small to show much detail. We must magnify part of the drawing to tell what boxes correspond to what isotopes. In this post, I provide several such magnified views.

Figure 17.2, with part of the drawing magnified.

Let’s begin by magnifying the bottom left corner, corresponding to the lightest nuclei. The most abundant isotope of hydrogen consists of a single proton. In general, an isotope is denoted using the chemical symbol with a left subscript Z and a left superscript A. The chemical symbol and atomic number are redundant, so we’ll drop the subscript. A proton is written as 1H.

The nucleus to the right of 1H is 2H, called a deuteron, consisting of one proton and one neutron. Deuterium exists in nature but is rare. The isotope 3H also exists but isn’t stable (its half life is 12 years) so it’s not included in the drawing.

The row above hydrogen is helium, with two stable isotopes: 3He and 4He. You probably know the nucleus 4He by another name: the alpha particle. As you move up to higher rows you find the elements lithium, beryllium, and boron (10B is used for boron neutron capture therapy). Light isotopes tend to cluster around the dashed line Z = N.

Figure 17.2, with part of the drawing magnified.

Moving up and to the right brings us to essential elements for life: carbon, nitrogen, and oxygen. Certain values of Z and N, called magic numbers, lead to particularly stable nuclei: 2, 8, 20, 28, 50, 82, and 126. Oxygen is element eight, and Z = 8 is magic, so it has three stable isotopes. The isotope 16O is doubly magic (Z = 8 and N = 8) and is therefore the most abundant isotope of oxygen.

Figure 17.2, with part of the drawing magnified.

The next drawing shows the region around 40Ca, which is also doubly magic (Z = 20, N = 20). It is the heaviest isotope having Z = N. Heavier isotopes need extra neutrons to overcome the Coulomb repulsion of the protons, so the region of stable isotopes bends down as it moves right. The dashed line indicating Z = N won’t appear in later figures; it’ll be way left of the magnified region. Four stable isotopes (37Cl, 38Ar, 39K, and 40Ca) have a magic number of neutrons N = 20. Calcium, with its magic number of protons, has five stable isotopes. No stable isotopes correspond to N = 19 or 21. In general, you’ll find more stable isotopes with even values of Z and N than odd.
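The magic-number bookkeeping above is easy to check in a few lines of code. Here's a minimal Python sketch; the isotopes listed are a small illustrative sample, and the describe helper is my own, not part of any nuclear data library.

```python
# The nuclear magic numbers are a standard physics fact; the isotope
# examples below are a small hand-picked sample, not a complete table.
MAGIC = {2, 8, 20, 28, 50, 82, 126}

def describe(symbol, Z, A):
    """Classify an isotope by how many of its nucleon counts are magic."""
    N = A - Z  # neutron number
    magic_count = (Z in MAGIC) + (N in MAGIC)
    label = {0: "not magic", 1: "magic", 2: "doubly magic"}[magic_count]
    return f"{A}{symbol} (Z={Z}, N={N}): {label}"

for iso in [("He", 2, 4), ("O", 8, 16), ("Ca", 20, 40), ("Sn", 50, 118)]:
    print(describe(*iso))
```

Running this confirms, for example, that 16O and 40Ca come out doubly magic while 118Sn is magic in Z only.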

Figure 17.2, with part of the drawing magnified.

Next we move up to the region ranging from Z = 42 (molybdenum) to Z = 44 (ruthenium). No stable isotopes exist for Z = 43 (technetium); a blank row stretches across the region of stability. As discussed in Chapter 17 of IPMB, the unstable 99Mo decays to the metastable state 99mTc (half life = 6 hours), which plays a crucial role in medical imaging.

Figure 17.2, with part of the drawing magnified.

Tin has a magic number of protons (Z = 50), resulting in ten stable isotopes, the most of any element.

Figure 17.2, with part of the drawing magnified.

As we move to the right, the region of stability ends. The heaviest element with stable isotopes is lead (Z = 82), which has a magic number of protons and four stable isotopes. Above that, nothing. There are unstable isotopes with long half lives; so long that they are still found on earth. Scientists used to think that an isotope of bismuth, 209Bi (Z = 83, N = 126), was stable, but now we know its half life is 2×10¹⁹ years. Uranium (Z = 92) has two unstable isotopes with half lives similar to the age of the earth, but that’s another story.
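To see just how close to stable 209Bi is, here's a quick back-of-the-envelope calculation (the fraction_decayed helper is my own; the half life and the age of the earth are round figures):

```python
import math

def fraction_decayed(t_years, half_life_years):
    """Fraction of an initial sample that has decayed after time t."""
    return 1.0 - math.exp(-math.log(2) * t_years / half_life_years)

# 209Bi: half life about 2e19 years; age of the earth about 4.5e9 years.
f = fraction_decayed(4.5e9, 2e19)
print(f"Fraction of 209Bi decayed over the earth's history: {f:.2e}")
```

Only about one part in ten billion of the primordial bismuth has decayed, which is why it passed for stable for so long.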

If you want to find information about all the stable isotopes, and other isotopes that are unstable, search the web for “table of the isotopes.” Here’s my favorite: https://www-nds.iaea.org/relnsd/vcharthtml/VChartHTML.html.

Friday, January 22, 2021

Oh Happy Day!

Eric Lander
Russ Hobbie and I hope that Intermediate Physics for Medicine and Biology will inspire young scientists to study at the interface between physics and physiology, and to work at the boundary between mathematics and medicine. But what sort of job can you get with such a multidisciplinary background? How about Presidential Science Advisor and Director of the White House Office of Science and Technology Policy! This week President Biden nominated Eric Lander—mathematician and geneticist—to that important position.

Lander is no amateur in mathematics. He obtained a PhD in the field from Oxford, which he attended as a Rhodes Scholar. Later his attention turned to molecular biology and he reinvented himself as a geneticist. He received a MacArthur “genius” grant in 1987, and co-led the Human Genome Project. Now he’ll be part of the Biden administration: the most prominent scientist to hold a cabinet-level position since biological physicist Steven Chu.

I’m overjoyed that respect for science has returned to national politics. As we face critical issues, such as climate change and the COVID-19 pandemic, input from scientists will be crucial. I’m especially excited because not only does our new president respect science, but also—as I wrote in a letter to the editor in the Oakland Press last October—the Congressional representative from my own district, Elissa Slotkin, understands and appreciates science. During her fall campaign, I volunteered to write postcards, one of which you can read below.

The last four years have been grim, but the times they are a-changin’. #Scienceisback.

Oh happy day!

Pioneer in Science: Eric Lander -- The Genesis of Genius

https://www.youtube.com/watch?v=IH4rn50arSY


Friday, January 15, 2021

Projections and Filtered Projections of a Square

Chapter 12 of Intermediate Physics for Medicine and Biology describes tomography. Russ Hobbie and I write

The reconstruction problem can be stated as follows. A function f(x,y) exists in two dimensions. Measurements are made that give projections: the integrals of f(x,y) along various lines as a function of displacement perpendicular to each line. For example, integration parallel to the y axis gives a function of x,
as shown in Fig. 12.12. The scan is repeated at many different angles θ with the x axis, giving a set of functions F(θ, x'), where x' is the distance along the axis at angle θ with the x axis.

One example in IPMB is the projection of a simple object: the circular top-hat

The projection can be calculated analytically

It’s independent of θ; it looks the same in every direction.

Let’s consider a slightly more complicated object: the square top-hat

This projection can be found using high school geometry and trigonometry (evaluating it is equivalent to finding the length of lines passing through the square at different angles). I leave the details to you. If you get stuck, email me (roth@oakland.edu) and I’ll send a picture of some squares and triangles that explains it.

The plot below shows the projections at four angles. For θ = 0° the projection is a rectangle; for θ = 45° it’s a triangle, and for intermediate angles (θ = 15° and 30°) it’s a trapezoid. Unlike the circular top-hat, the projection of the square top-hat depends on the direction.

The projections of a square top-hat, at different angles.
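The forward problem can also be checked numerically. Below is a rough Python sketch that projects a square top-hat by brute force; the grid, the half-width, and the project helper are my own choices, and rotating the sample points then summing along one axis is only a crude approximation to the line integrals.

```python
import numpy as np

def project(f, xs, theta):
    """Numerically compute F(theta, x'): integrate f along lines
    perpendicular to the x' axis, which makes angle theta with x."""
    X, Y = np.meshgrid(xs, xs)          # X plays x', Y plays y'
    Xr = X * np.cos(theta) - Y * np.sin(theta)
    Yr = X * np.sin(theta) + Y * np.cos(theta)
    dx = xs[1] - xs[0]
    return f(Xr, Yr).sum(axis=0) * dx   # sum over y' for each x'

# Square top-hat: f = 1 inside a square of half-width a, 0 outside.
a = 1.0
square = lambda x, y: ((np.abs(x) <= a) & (np.abs(y) <= a)).astype(float)

xs = np.linspace(-4, 4, 801)
dx = xs[1] - xs[0]
for deg in (0, 15, 30, 45):
    F = project(square, xs, np.radians(deg))
    # The area under every projection equals the area of the square, 4a^2.
    print(f"theta = {deg:2d} deg: integral of F = {F.sum() * dx:.3f}")
```

A handy sanity check: whatever the angle, the integral of the projection equals the area of the object, and the θ = 0° profile is a rectangle of height 2a while the 45° profile peaks at 2√2 a.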

What I just described is the forward problem of tomography: calculating the projections from the object. As Russ and I wrote, usually the measuring device records projections, so you don’t have to calculate them. The central goal of tomography is the inverse problem: calculating the object from the projections. One way to perform such a reconstruction is a two-step procedure known as filtered back projection: first high-pass filter the projections and then back project them. In a previous post, I went through this entire procedure analytically for a circular top-hat. Today, I go through the filtering process analytically, obtaining an expression for the filtered projection of a square top-hat. 

Here we go. I warn you, there’s lots of math. To perform the filtering, we first calculate the Fourier transform of the projection, C_F(θ,k). Because the top-hat is even, we can use the cosine transform

where k is the spatial frequency.

Next, place the expression for F(θ,x') into the integral and evaluate it. There’s plenty of bookkeeping, but the projection is either constant or linear in x', so the integrals are straightforward. I leave the details to you; if you work it out yourself, you’ll be delighted to find that many terms cancel, leaving the simple result 

To high-pass filter C_F(θ,k), multiply it by |k|/2π to get the Fourier transform of the filtered projection, C_G(θ,k)

Finally, take the inverse Fourier transform to obtain the filtered projection G(θ,x')


Inserting our expression for C_G(θ,k), we find

This integral is not trivial, but with some help from WolframAlpha I found

where Ci is the cosine integral. I admit, this is a complicated expression. The cosine integral goes to zero for large argument, so the upper limit vanishes. It goes to negative infinity logarithmically at zero argument. We’re in luck, however, because the four cosine integrals conspire to cancel all the infinities, allowing us to obtain an analytical expression for the filtered projection

We did it! Below are plots of the filtered projections at four angles. 

The filtered projections of a square top-hat, at different angles.
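For readers who would rather check the filtering numerically than slog through cosine integrals, here is a rough FFT-based sketch of the ramp filter applied to the θ = 0° projection (a rectangle of height 2a). The grid size and extent are arbitrary choices of mine, and the discrete FFT only approximates the continuous transform.

```python
import numpy as np

# Unfiltered theta = 0 projection of a square top-hat of half-width a:
# a rectangle of height 2a, sampled on a wide grid to limit wrap-around.
a = 1.0
N = 4096
L = 40.0
x = (np.arange(N) - N // 2) * (L / N)
dx = L / N
F = np.where(np.abs(x) <= a, 2 * a, 0.0)

# Multiply the Fourier transform by the ramp filter |k|/(2*pi).
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular spatial frequency
G = np.fft.ifft(np.fft.fft(F) * np.abs(k) / (2 * np.pi)).real

# The ramp filter kills the k = 0 (DC) component, so the filtered
# projection must integrate to zero: positive core, negative side lobes.
print(f"integral of G = {G.sum() * dx:.2e}")
```

The zero integral is a useful check on any filtered projection, analytical or numerical: high-pass filtering removes the mean.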

The last thing to do is back project G(θ,x') to get the object f(x,y). Unfortunately, I see no hope of back-projecting this function analytically; it’s too complicated. If you can do it, let me know.

Why must we analyze all this math? Because solving a simple example analytically provides insight into filtered back projection. You can do tomography using canned computer code, but you won’t experience the process like you will by slogging through each step by hand. If you don’t buy that argument, then another reason for doing the math is: it’s fun!

Friday, January 8, 2021

A Portable Scanner for Magnetic Resonance Imaging of the Brain

A Portable Scanner for Magnetic Resonance Imaging of the Brain, superimposed on Intermediate Physics for Medicine and Biology.
Cooley et al., Nat. Biomed. Eng., 2020

Chapter 18 of Intermediate Physics for Medicine and Biology describes magnetic resonance imaging. MRI machines are usually heavy, expensive devices installed in hospitals and clinics. A recent article by Clarissa Cooley and her colleagues in Nature Biomedical Engineering, however, describes a portable MRI scanner. The abstract states

Access to scanners for magnetic resonance imaging (MRI) is typically limited by cost and by infrastructure requirements. Here, we report the design and testing of a portable prototype scanner for brain MRI that uses a compact and lightweight permanent rare-earth magnet with a built-in readout field gradient. The 122-kg low-field (80 mT) magnet has a Halbach cylinder design that results in a minimal stray field and requires neither cryogenics nor external power. The built-in magnetic field gradient reduces the reliance on high-power gradient drivers, lowering the overall requirements for power and cooling, and reducing acoustic noise. Imperfections in the encoding fields are mitigated with a generalized iterative image reconstruction technique that leverages previous characterization of the field patterns. In healthy adult volunteers, the scanner can generate T1-weighted, T2-weighted and proton density-weighted brain images with a spatial resolution of 2.2 × 1.3 × 6.8 mm3. Future versions of the scanner could improve the accessibility of brain MRI at the point of care, particularly for critically ill patients.
Cooley et al.’s design has four attributes.
  1. It’s designed for imaging the head only. Most critical care MRIs are of the brain, so focusing on imaging the head is not as limiting as you might think. By restricting the device to the head they are able to reduce the weight of their prototype to 230 kg (about 500 pounds); not something you could carry in your pocket, but light enough to be transported in an ambulance or wheeled on a cart. The power required, about 1.7 kW, is far less than for a traditional MRI device, so the portable scanner can be operated from a standard wall outlet.
  2. The static magnetic field is produced by permanent magnets. Typical MRI scanners create a static field of a few Tesla using a superconductor, which must be kept cold. Cooley et al.’s device avoids cryogenics completely by using room-temperature, permanent neodymium magnets in a Halbach configuration, producing a static magnetic field of 0.08 T. The lower field strength reduces the signal-to-noise ratio, so advanced MRI techniques such as echo-planar, functional, or diffusion tensor imaging are not feasible. However, many emergency MRIs are used to diagnose traumatic brain injury and don’t rely on these more advanced techniques. The Halbach design results in a small magnetic field outside the scanner, which minimizes safety hazards associated with iron-containing objects being sucked into the scanner.
  3. The readout gradient is static. In IPMB, Russ Hobbie and I describe how magnetic field gradients are used to map the Larmor frequency to position. Usually the readout gradient of an MRI pulse sequence is turned on and off as needed. By making this gradient static, Cooley and her collaborators eliminate the need for a power supply to drive it. Most MRI pulse sequences require gradients in three directions, and in Cooley et al.’s device the gradients in the other two directions must still be switched on and off in the traditional way. One side effect of the reduced gradient switching is that this MRI scanner is quieter than a traditional device. This may seem like a minor advantage, but try having your head imaged in a typical MRI scanner with its gradient switching causing a deafening racket.
  4. Much of the signal analysis is switched from hardware to software. Because of nonlinearities in the gradient magnetic field, traditional Fourier transform algorithms to convert from spatial frequency to position produce artifacts, and iterative methods that correct for these errors are needed.
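A quick calculation shows what the 80 mT field means for the operating frequency. The Larmor relation is f = (γ/2π)B, and γ/2π ≈ 42.58 MHz/T for protons (a standard value); the short script below comparing the portable scanner to a typical clinical magnet is my own illustration.

```python
# Larmor frequency f = (gamma/2*pi) * B for protons.
GAMMA_BAR = 42.58e6  # Hz per tesla, proton gyromagnetic ratio / (2*pi)

for label, B in [("portable (80 mT)", 0.080), ("clinical (3 T)", 3.0)]:
    f = GAMMA_BAR * B
    print(f"{label}: Larmor frequency = {f / 1e6:.2f} MHz")
```

The portable scanner therefore works at about 3.4 MHz rather than the roughly 128 MHz of a 3 T machine, which is why its signal-to-noise ratio is so much lower.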
Cooley et al.’s article fascinated me because of its educational value; the challenges they face force readers to think carefully about the design parameters and limitations of traditional MRI. If you want to learn more about normal MRI scanners, read this article to see how researchers had to modify the traditional design to overcome its limitations. 

Low-cost MRI systems for brain imaging. by Clarissa Cooley.

https://www.youtube.com/watch?v=bZz3-lmWv4I

Friday, January 1, 2021

An Assessment of Illness in U.S. Government Employees and their Families at Overseas Embassies

“An Assessment of Illness in U.S. Government Employees and their Families at Overseas Embassies” (2020), The National Academies Press.
Recently, a National Academies report examined the illness of staff at overseas embassies.
National Academies of Sciences, Engineering, and Medicine. 2020. “An Assessment of Illness in U.S. Government Employees and their Families at Overseas Embassies.” Washington, DC: The National Academies Press. 
The summary of the report begins
In late 2016, U.S. Embassy personnel in Havana, Cuba, began to report the development of an unusual set of symptoms and clinical signs. For some of these patients, their case began with the sudden onset of a loud noise, perceived to have directional features, and accompanied by pain in one or both ears or across a broad region of the head, and in some cases, a sensation of head pressure or vibration, dizziness, followed in some cases by tinnitus, visual problems, vertigo, and cognitive difficulties. Other personnel attached to the U.S. Consulate in Guangzhou, China, reported similar symptoms and signs to varying degrees, beginning in the following year. As of June 2020, many of these personnel continue to suffer from these and/or other health problems. Multiple hypotheses and mechanisms have been proposed to explain these clinical cases, but evidence has been lacking, no hypothesis has been proven, and the circumstances remain unclear. The Department of State (DOS), as part of its effort to inform government employees more effectively about health risks at posts abroad, ascertain potential causes of the illnesses, and determine best medical practices for screening, prevention, and treatment for both short and long-term health problems, asked the National Academies of Sciences, Engineering, and Medicine (the National Academies) to provide independent, expert guidance.
Then, under the heading of “Plausible Mechanisms,” the summary states
The committee found the unusual presentation of acute, directional or location-specific early phase signs, symptoms and observations reported by DOS employees to be consistent with the effects of directed, pulsed radio frequency (RF) energy.
I’m reluctant to disagree with a report from the National Academies. I wasn’t a member of the committee, I wasn’t consulted by them, and I haven’t analyzed the data myself. Moreover, I’m disturbed by the recent tendency of political leaders to ignore science (for example, on the COVID-19 pandemic and climate change), so I hesitate to reject a review by some of the nation’s top scientists and medical doctors. Nevertheless, I don’t find this report convincing. As I’ve written before in this blog, I’m skeptical that radio-frequency or microwave radiation can explain these effects.

The report states that
Low-level RF exposures typically deposit energy below the threshold for significant heating (often called “nonthermal” effects), while high-level RF exposures can provide enough energy for significant heating (“thermal” effects) or even burns, and for stimulation of nervous and muscle tissues (“shock” effects)... While much of the general public discussion on RF biological effects has focused on cancer, there is a growing amount of data demonstrating a variety of non-cancer effects as well, in addition to those associated with thermal heating.

The absence of certain observed phenomena can also help to constrain potential RF source characteristics. For example, the absence of reporting of a heating sensation or internal thermal damage may exclude certain types of high-level RF energy.
Microwaves affect the body if they’re strong enough to heat tissue (like in a microwave oven). But low-level radio-frequency and microwave effects are less well established (to put it politely). I don’t think there is a growing amount of reliable data demonstrating effects from RF electromagnetic radiation. The same issue arises when discussing the safety of cellular phone radiation (I fear this report will be used by some to support dubious claims of 5G hazards). The report twice cites studies by Martin Pall. As I said previously in this blog “Pall’s central hypothesis is that cell phone radiation affects calcium ion channels, which if true could trigger a cascade of biological effects…. I don’t agree with Pall’s claims.” Neither do Ken Foster and John Moulder (both cited frequently in Intermediate Physics for Medicine and Biology), who wrote that
Despite some level of public controversy and an ongoing stream of reports of highly variable quality of biological effects of RF energy... health agencies consistently conclude that there are no proven hazards from exposure to RF fields within current exposure limits.
The National Academies report emphasizes the Frey effect, in which pulsed electromagnetic radiation causes slight local heating, resulting in tiny transient changes in pressure that can be sensed by the inner ear. In a previous post, I wrote
I am no expert on thermoelastic effects, but it seems plausible that they could be responsible for the perception of sound by embassy workers in Cuba. By modifying the shape and frequency of the microwave pulses, you might even induce sounds more distinct than vague clicks. However, I don’t know how you get from little noises to brain damage and cognitive dysfunction. My brain isn’t damaged by listening to clicky sounds.
The most ridiculous paragraph in the report relates transcranial magnetic stimulation—a technique I worked on while at the National Institutes of Health—to the Frey effect.
If a Frey-like effect can be induced on central nervous system tissue responsible for space and motion information processing, it likely would induce similarly idiosyncratic responses. More general neuropsychiatric effects from electromagnetic stimuli are well-known and are being used increasingly to treat psychiatric and neurologic disorders. In 2008, the Food and Drug Administration (FDA) approved transcranial magnetic stimulation (TMS) to treat major depression in adults who do not respond to antidepressant medications... Ten years later, the FDA approved office-based TMS as a treatment for obsessive compulsive disorder (OCD)... and portable TMS to treat migraine.

Magnetic stimulation uses large (about 1 Tesla) magnetic fields, and works at low frequencies (1–10 kHz, the frequencies at which nerves operate, well below the microwave range). Much higher frequencies are unlikely to activate nerves, and such strong magnetic fields will have tell-tale signs. For instance, one way to demonstrate the power of TMS is to place a quarter at the center of a coil and deliver a single pulse. The quarter shoots up into the air, sometimes denting a ceiling tile (I admit, this wasn’t the safest parlor trick I’ve ever witnessed). TMS only activates the brain when the coil is held within a few centimeters of the head. Trying to generate magnetic stimulation remotely would require Herculean magnetic fields that would certainly be noticed (anything metallic would start flying around and heating up). The reason the Frey effect works with weak fields is the extreme sensitivity of the human ear for detecting minuscule pressure oscillations. If the authors of the report think citing transcranial magnetic stimulation is appropriate as evidence to support the plausibility of a Frey-like effect, then they don’t understand the basic physics of how electric and magnetic fields interact with the body.

My view is much closer to that expressed in the article “Havana Syndrome Skepticism” by Robert Bartholomew, which was published in eSkeptic, the email newsletter of the Skeptics Society.

The Frey effect is named after Allan Frey, a pioneer in radiation research. But there are many problems with the explanation. It is highly speculative, and none of the panel members appeared to be experts on the biological impact of microwaves and the Frey effect. Someone who is a specialist on the effect, University of Pennsylvania bioengineer Kenneth Foster, is critical of the report, observing that there is no evidence that the Frey effect can cause injuries. Furthermore, the effect requires a tremendous amount of energy to create a sound that is barely audible... Foster should know, in 1974, he and Edward Finch were the first scientists to describe the mechanism involved in the effect while working at the Naval Medical Research Institute in Maryland...

Foster views any link between his eponymous effect and Havana Syndrome as pure fantasy. “It is just a totally incredible explanation for what happened to these diplomats…. It’s just not possible. The idea that someone could beam huge amounts of microwave energy at people and not have it be obvious defies credibility...” The former head of the Electromagnetics division of the Environmental Protection Agency, Ric Tell, also views the microwave link as science fiction. Tell spent decades working on standards for safe exposure to electromagnetic radiation, including microwaves. “If a guy is standing in front of a high-powered radio antenna — and it’s got to be high, really high — then he could experience his body getting warmer,” Tell said. “But to cause brain-tissue damage, you would have to impart enough energy to heat it up to the point where it’s cooking. I don’t know how you could do that, especially if you were trying to transmit through a wall. It’s just not plausible,” he said.
At the risk of sounding like a grumpy old curmudgeon, I don’t believe the National Academies report. I don’t agree that directed radio-frequency or microwave energy is the most likely explanation for the “Havana Syndrome.” I don’t have a better explanation, but I don’t accept theirs. It’s not consistent with what we know about how electromagnetic fields interact with biological tissue.

Friday, December 25, 2020

The Mathematical Approach to Physiological Problems

In Chapter 2 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I analyze decay with multiple half-lives, and practice fitting data to exponentials. Near the end of this discussion, we write (really, Russ wrote these words, since they go back to his solo first edition of IPMB)
Estimating the parameters [governing exponential decay] for the longest-lived term may be difficult because of the potentially large error bars associated with the data for small values of y. For a discussion of this problem, see Riggs (1970, pp. 146–163).

The Mathematical Approach to Physiological Problems, by Douglas S. Riggs, superimposed on Intermediate Physics for Medicine and Biology.
The citation is to the book The Mathematical Approach to Physiological Problems, by Douglas Riggs. I wanted to take a look, so I went to the Oakland University library and checked out the 1963 first edition.

It’s a gold mine. I particularly like the beginning of the preface. Riggs starts with what seems like an odd digression about hiking in the mountains, but then in the second paragraph he skillfully brings us back to math.

Before settling down in the village of Shepreth, in Cambridgeshire, England, to start working in earnest upon this book, I had the unusual pleasure of taking my family on a month-long youth hosteling trip through Northern England and the Scottish Highlands. For the most part, we bicycled, but occasionally we would make an excursion on foot up the steep hillsides and along the rocky ridges where no bicycle could go. We soon learned that the British discriminate carefully between “hill walking” and “mountain climbing.” To be a mountain climber, you must coil 100 feet of nylon rope around you slantwise from shoulder to waist, and have a few pitons dangling somewhere about. You are then entitled to adopt an ever-so-faintly condescending attitude toward any hill-walkers whom you may encounter along the trail, even if you meet them where the trail is practically level. Hill-walkers, on the other hand, remain hill-walkers even when the “walk” turns into a hands-and-knees job up a 40° slope of talus which is barely anchored to the mountain by a few wisps of grass and a clump or two of scraggly heather.

Mathematically speaking, this is a hill-walking book. It is necessarily so, since I myself have never learned the ropes of higher mathematics. But I do believe that the amount of wandering I have done on the lower slopes, the number of sorry hours I have spent lost in a mathematical fog, and the miles I have stumbled down false trails have made me a kind of backwoods expert on mathematical pitfalls, and have given me some practical knowledge of how to plan a safe mathematical ascent of the more accessible physiological hills.
One theme I stress in this blog is the value of simple models. Riggs agrees.
Precisely because living systems are so very complex, one can never expect to achieve anything like a complete mathematical description of their behavior. Before the mathematical analysis itself is begun, it is therefore invariably necessary to reduce the complexity of the real system by making various simplifying assumptions about how it behaves. In effect, these assumptions allow us to replace the actual biological system by an imaginary model system which is simple enough to be described mathematically. The results of our mathematical analysis will then be rigorously applicable to the model. But they will be applicable to the original biological system only to the extent that our underlying assumptions are reasonable. Hence, the ultimate value of our mathematical labors will be determined in large part by our choice of simplifying assumptions.
He then advocates for an approach that I call “think before you calculate.”
An investigator who publishes an erroneous equation has no place to hide! It is therefore prudent to check each calculation, each algebraic manipulation, and each transcription from a table of figures before going on to the next step. Above all, whenever you are engaged in mathematical work you should keep asking yourself over and over and over again, “Does this make sense?” and “Is this of the correct magnitude?”
Riggs’s introduction ends with some wise advice.
All too frequently, students are willing to accept on faith whatever mathematical formulations they encounter in their reading. And why not? After all, mathematics is the exact science, and presumably an author would not express his theories or his conclusions mathematically without due regard for mathematical rigor and precision. It is only by bitter experience that we learn never to trust a published mathematical statement or equation, particularly in a biological publication, unless we ourselves have checked it to see whether or not it makes sense… Misprints are common. Copying errors are common. Blunders are common. Editors rarely have the time or training to check mathematical derivations. The author may be ignorant of mathematical laws, or he may use ambiguous notation. His basic premises may be fallacious even though he uses impressive mathematical expressions to formulate his conclusions… Caveat lector! Let the reader beware!
What about the problem of fitting multiple exponentials to data, which is why Russ and I cite Riggs in the first place? After analyzing several specific examples, Riggs concludes
Page 157 of The Mathematical Approach to Physiological Problems, showing a fit to a double exponential, superimposed on Intermediate Physics for Medicine and Biology.
These examples warn us not to take too seriously any particular set of coefficients and rate constants which we may get by plotting data on semi-logarithmic paper and ferreting out the exponential terms in the fashion described above. The need for such a warning is all too evident from the preposterously elaborate exponential equations which are sometimes published. The technique of “peeling off” successive terms is so deceptively easy! Fit a straight line, subtract, plot the differences, fit another straight line, subtract, plot the differences. How solid and impressive the resulting sum of exponentials looks! And how remarkably well the curve agrees with the observations. Surely the investigator can be pardoned a certain self-satisfaction for having so clearly identified the individual components which were contributing to the overall change. Yet, the examples discussed above show how groundless his satisfaction may be. It is undoubtedly true that the particular sum of exponentials which he happened to pick, plot, and publish fits the points with gratifying accuracy. But so also may other equations of the same general form but with quite different parameters. It is no great trick to have found one such equation. Even with a single exponential declining from a known value at time zero toward an unknown constant asymptote there are two parameters—the rate constant and the asymptote—to be fitted to the data. The effect of a considerable change in either may be largely offset by making a compensatory change in the other… Add a second exponential term with two more parameters to be estimated from the data, and the number and variety of “closely fitting” equations becomes truly bewildering. Worst of all, there are no simple statistical measures of the precision with which any of the parameters have been estimated. 
These considerations do not destroy the value of fitting an exponential equation to experimental data when it is suggested by some underlying theory or when it provides a convenient empirical way of summarizing a group of observations mathematically… But they make very clear the danger of using an empirical exponential equation to predict what may happen beyond the period actually covered by the observations. It is equally clear that we must be exceedingly skeptical when attempts are made to match the individual terms of an empirical exponential equation with supposedly corresponding processes or regions of the body.
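Riggs’s warning is easy to verify numerically. A classic illustration, usually attributed to Cornelius Lanczos, is a sum of three exponentials that a completely different sum of two exponentials fits almost perfectly. Here is a short Python sketch of that ambiguity (the comparison grid and tolerance are my own choices):

```python
import math

# A "true" curve built from THREE exponential components.
def three_term(t):
    return 0.0951 * math.exp(-t) + 0.8607 * math.exp(-3 * t) + 1.5576 * math.exp(-5 * t)

# A TWO-exponential curve with entirely different amplitudes and rate
# constants that nevertheless matches the first to about two decimal places.
def two_term(t):
    return 0.305 * math.exp(-1.58 * t) + 2.202 * math.exp(-4.45 * t)

ts = [i / 100 for i in range(101)]  # t from 0 to 1
max_diff = max(abs(three_term(t) - two_term(t)) for t in ts)
# max_diff is below 0.01, while the curves themselves start near 2.5:
# two "closely fitting" equations, wildly different components.
```

Peeling straight lines off semilog paper would confidently report two components with rate constants 1.58 and 4.45, even though the underlying curve contains three entirely different ones.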
Riggs’s book examines many of the same topics that appear in the first half of IPMB, such as exponential growth, diffusion, and feedback. He has a wonderful chapter suggesting questions you should ask when checking the validity of an equation. Is it dimensionally correct? How does it behave when variables approach zero or infinity? Does it give reasonable answers after numerical substitution?
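The limiting-behavior and substitution checks can even be scripted (dimensional correctness you still verify by eye). Below is a minimal Python illustration applied to the single-exponential-with-asymptote equation mentioned in the Riggs excerpt; the particular numbers y0 = 10, A = 2, k = 0.5 are my own:

```python
import math

# Candidate equation: exponential decline from y0 at t = 0 toward an
# asymptote A, with rate constant k (say t in min, k in 1/min).
def y(t, y0=10.0, A=2.0, k=0.5):
    return A + (y0 - A) * math.exp(-k * t)

# Check: how does it behave when the variable approaches zero?
assert abs(y(0.0) - 10.0) < 1e-12          # recovers the initial value

# Check: how does it behave as the variable approaches infinity?
assert abs(y(1e6) - 2.0) < 1e-9            # settles onto the asymptote

# Check: does numerical substitution give a reasonable answer?
# After one time constant (t = 1/k = 2 min) the deviation from the
# asymptote should fall to 1/e of its initial value.
assert abs((y(2.0) - 2.0) / (10.0 - 2.0) - math.exp(-1)) < 1e-12
```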

I’m impressed that Riggs, a professor and head of a Department of Pharmacology at SUNY Buffalo, could write so insightfully about mathematics. I give the book two thumbs up.

Friday, December 18, 2020

Life at Low Reynolds Number

The first page of "Life at Low Reynolds Number," by Edward Purcell, superimposed on Intermediate Physics for Medicine and Biology.

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I cite Edward Purcell’s wonderful article “Life at Low Reynolds Number” (American Journal of Physics, Volume 45, Pages 3–11, 1977). This paper is a transcript of a talk Purcell gave to honor physicist Victor Weisskopf. The transcript captures the casual tone of the talk, and the hand-drawn figures are charming. Below I quote excerpts from the article, including my own versions of a couple of the drawings. Notice how his words emphasize insight. As Purcell says, “some essential hand-waving could not be reproduced.” Enjoy!

I’m going to talk about a world which, as physicists, we almost never think about. The physicist hears about viscosity in high school when he’s repeating Millikan’s oil drop experiment and never hears about it again, at least not in what I teach. And Reynolds’s number, of course, is something for the engineers. And the low Reynolds number regime most engineers aren’t even interested in… But I want to take you into the world of very low Reynolds number—a world which is inhabited by the overwhelming majority of the organisms in this room. This world is quite different from the one that we have developed our intuitions in…

Based on Figure 1 from
“Life at Low Reynolds Number.”

In Fig. 1, you see an object which is moving through a fluid with velocity v. It has dimension a,… η and ρ are the viscosity and density of the fluid. The ratio of the inertial forces to the viscous forces, as Osborne Reynolds pointed out slightly less than a hundred years ago, is given by avρ/η or av/ν, where ν is called the kinematic viscosity. It’s easier to remember its dimensions; for water, ν = 10⁻² cm²/sec. The ratio is called the Reynolds number and when that number is small the viscous forces dominate… Now consider things that move through a liquid... The Reynolds number for a man swimming in water might be 10⁴, if we put in reasonable dimensions. For a goldfish or a tiny guppy it might get down to 10². For the animals that we’re going to be talking about, as we’ll see in a moment, it’s about 10⁻⁴ or 10⁻⁵. For these animals inertia is totally irrelevant. We know F = ma, but they could scarcely care less. I’ll show you a picture of the real animals in a bit but we are going to be talking about objects which are the order of a micron in size… In water where the kinematic viscosity is 10⁻² cm²/sec these things move around with a typical speed of 10 μm/sec. If I have to push that animal to move it, and suddenly I stop pushing, how far will it coast before it slows down? The answer is, about 0.1 Å. And it takes it about 0.6 μsec to slow down. I think this makes it clear what low Reynolds number means. Inertia plays no role whatsoever. If you are at very low Reynolds number, what you are doing at the moment is entirely determined by the forces that are exerted on you at that moment, and by nothing in the past…
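Purcell’s numbers are easy to reproduce on the back of an envelope. Here is a Python sketch in CGS units; modeling the organism as a Stokes sphere of radius 1 μm is my own assumption, so the prefactors (and hence the exact coast distance) differ somewhat from Purcell’s figures, but the orders of magnitude come out the same:

```python
import math

a = 1e-4        # size, cm (1 micron)
v = 10e-4       # speed, cm/sec (10 microns/sec)
nu = 1e-2       # kinematic viscosity of water, cm^2/sec
rho = 1.0       # density of water, g/cm^3
eta = nu * rho  # dynamic viscosity, g/(cm sec)

# Reynolds number: about 1e-5, deep in the viscous regime.
Re = a * v / nu

# Coasting after the pushing stops, for a Stokes sphere of radius a:
# drag = 6*pi*eta*a*v, so the velocity decays with time constant
# tau = m / (6*pi*eta*a), a few tenths of a microsecond.
m = (4 / 3) * math.pi * a**3 * rho
tau = m / (6 * math.pi * eta * a)

# Coast distance v*tau: a few hundredths of an angstrom (1 Å = 1e-8 cm),
# the same sub-angstrom scale Purcell quotes.
coast = v * tau
```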

Based on Figure 18 from
“Life at Low Reynolds Number.”

Diffusion is important because of [a] very peculiar feature of the world at low Reynolds number, and that is, stirring isn’t any good… At low Reynolds number you can’t shake off your environment. If you move, you take it along; it only gradually falls behind. We can use elementary physics to look at this in a very simple way. The time for transporting anything a distance ℓ by stirring is about ℓ divided by the stirring speed v. Whereas, for transport by diffusion, it’s ℓ² divided by D, the diffusion constant. The ratio of those two times is a measure of the effectiveness of stirring versus that of diffusion for any given distance and diffusion constant. I’m sure this ratio has someone’s name but I don’t know the literature and I don’t know whose number that’s called. Call it S for stirring number. It’s just ℓv/D. You’ll notice by the way that the Reynolds number was ℓv/ν. ν is the kinematic viscosity in cm²/sec, and D is the diffusion constant in cm²/sec, for whatever it is that we are interested in following—let’s say a nutrient molecule in water. Now, in water the diffusion constant is pretty much the same for every reasonably sized molecule, something like 10⁻⁵ cm²/sec. In the size domain that we’re interested in, of micron distances, we find that the stirring number S is 10⁻², for the velocities that we are talking about (Fig. 18). In other words, this bug can’t do anything by stirring its local surroundings. It might as well wait for things to diffuse, either in or out. The transport of wastes away from the animal or food to the animal is entirely controlled locally by diffusion. You can thrash around a lot, but the fellow who just sits there quietly waiting for stuff to diffuse will collect just as much.
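The stirring number is just as easy to check. Using the same micron-scale numbers as above (the 10 μm/sec speed is taken from the quoted passage), a quick Python calculation:

```python
# Purcell's "stirring number" S compares transport by stirring
# (time ~ distance/speed) with transport by diffusion (time ~ distance^2/D).
l = 1e-4    # distance, cm (1 micron)
v = 10e-4   # stirring speed, cm/sec (10 microns/sec)
D = 1e-5    # diffusion constant of a small molecule in water, cm^2/sec

t_stir = l / v      # ~0.1 sec to stir something across a micron
t_diff = l**2 / D   # ~0.001 sec for it to diffuse the same distance

# S = l*v/D = t_diff/t_stir = 0.01: diffusion beats stirring by a
# factor of a hundred at this scale, so stirring accomplishes nothing.
S = l * v / D
```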

 

The Physics of Life: Life at Low Reynolds Number
https://www.youtube.com/watch?v=gZk2bMaqs1E

 

 Edward Purcell
https://www.youtube.com/watch?v=0uATCx7WMMs

Friday, December 11, 2020

Selig Hecht (1892-1947)

Selig Hecht,
History of the Marine Biological Laboratory,
http://hpsrepository.asu.edu/handle/10776/3269.
In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I analyze the “classic experiment on scotopic vision” by Hecht, Shlaer, and Pirenne. George Wald wrote an obituary about Selig Hecht in 1948 (Journal of General Physiology, Volume 32, Pages 1–16). He writes that Hecht was
Intensely interested in the relation of light quanta (photons) to vision. Reexamining earlier measurements of the minimum threshold for human rod vision, he and his colleagues confirmed that vision requires only fifty to 150 photons. When all allowances had been made for surface reflections, the absorption of light by ocular tissues, and the absorption by rhodopsin (which alone is an effective stimulant), it emerged that the minimum visual sensation corresponds to the absorption in the rods of, at most, five to fourteen photons. An entirely independent statistical analysis suggested that an absolute threshold involves about five to seven photons. Both procedures, then, confirmed the estimation of the minimum visual stimulus at five to fourteen photons. Since the test field in which these measurements were performed contained about 500 rods, it was difficult to escape the conclusion that one rod is stimulated by a single photon.
Wald also describes the coauthor on the study, Shlaer.
Among Hecht’s first students was Simon Shlaer, who became Hecht’s assistant in his first year at Columbia and continued as his associate for twenty years thereafter. A man infinitely patient with things and impatient with people, Shlaer gave Hecht his entire devotion. He was a master of instrumentation, and though he also had a keen grasp of theory, he devoted himself by choice to the development of new technical devices. Hecht and Shlaer built a succession of precise instruments for visual measurement, among them an adaptometer and an anomaloscope that have since gone into general use. The entire laboratory came to rely on Shlaer’s ingenuity and skill. “I am like a man who has lost his right arm,” remarked Hecht on leaving Columbia—and Shlaer—in 1947, “and his right leg.”

In his Columbia laboratory, Hecht instituted investigations of human dark adaptation, brightness discrimination, visual acuity, the visual response to flickered light, the mechanism of the visual threshold, and normal and anomalous color vision. His lab also made important contributions regarding the biochemistry of visual pigments, the relation of night blindness to vitamin A deficiency in humans, the spectral sensitivities of man and other animals, and the light reactions of plants—phototropism, photosynthesis, and chlorophyll formation.
Hecht and Shlaer both contributed to the war effort during the Second World War.
Throughout the late years of World War II, Hecht devoted his energies and the resources of his laboratory to military problems. He and Shlaer developed a special adaptometer for night-vision testing that was adopted as standard equipment by several Allied military services. Hecht also directed a number of visual projects for the Army and Navy and was consultant and advisor on many others. He was a member of the National Research Council Committee on Visual Problems and of the executive board of the Army-Navy Office of Scientific Research and Development Vision Committee.

Explaining the Atom, by Selig Hecht, superimposed on Intermediate Physics for Medicine and Biology.
Hecht straddled the fields of physics and physiology, and was comfortable with both math and medicine. He entered college studying mathematics. After World War II ended, he wrote the book Explaining the Atom, which Wald described as “a lay approach to atomic theory and its recent developments that the New York Times (in a September 20, 1947, editorial) called ‘by far the best so far written for the multitude.’”

An obituary in Nature by Maurice Henri Pirenne concludes

The death of Prof. Selig Hecht in New York on September 18, 1947, at the age of fifty-five, deprives the physiology of vision of one of its most outstanding workers. Hecht was born in Austria and was brought to the United States as a child. He studied and worked in the United States, in England, Germany and Italy. After a broad biological training, he devoted his life to the study of the mechanisms of vision, considered as a branch of general physiology. He became professor of biophysics at Columbia University and made his laboratory an international centre of visual research.

Friday, December 4, 2020

Role of Virtual Electrodes in Arrhythmogenesis: Pinwheel Experiment Revisited

The Journal of Cardiovascular Electrophysiology, with a figure from Lindblom et al. on the cover, superimposed on Intermediate Physics for Medicine and Biology.

Twenty years ago, I published an article with Natalia Trayanova and her student Annette Lindblom about initiating an arrhythmia in cardiac muscle (“Role of Virtual Electrodes in Arrhythmogenesis: Pinwheel Experiment Revisited,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 274-285, 2000). We performed computer simulations based on the bidomain model, which Russ Hobbie and I discuss in Section 7.9 of Intermediate Physics for Medicine and Biology. A key feature of a bidomain is anisotropy: the electrical conductivity varies with direction relative to the axis of the myocardial fibers.

Our results are summarized in the figure below (Fig. 14 of our article). An initial stimulus (S1) launched a planar wavefront through the tissue, propagating either parallel (longitudinal, L) or perpendicular (transverse, T) to the fibers, which run horizontally. As the tissue recovered from the first wavefront, we applied a second stimulus (S2) to a point cathodal electrode (C), inducing a complicated pattern of depolarization under the cathode and two regions of hyperpolarization (virtual anodes) adjacent to the cathode along the fiber axis (see my previous blog post for more about how cardiac tissue responds to a point stimulus). In some simulations, we reversed the polarity of S2 so the electrode was an anode (A). This pair of stimuli (S1-S2) underlies the “pinwheel experiment” that has been studied by many investigators, but never before using the anisotropic bidomain model.

Fig. 14 from Lindblom et al. (2000).

We found a variety of behaviors, depending on the direction of the S1 wave front, the polarity of the S2 stimulus, and the time between S1 and S2, known as the coupling interval (CI). In some cases, we induced a figure-of-eight reentrant circuit: an arrhythmia consisting of two spiral waves, one rotating clockwise and the other counterclockwise. In other cases, we induced quatrefoil reentry: an arrhythmia consisting of four spiral waves (see my previous post for more about the difference between these two behaviors).

I began working on these calculations in the winter of 1999, shortly after I arrived at Oakland University as an Assistant Professor. The photograph below is of a page from my research notebook on March 5 showing initial results, including my first observation of quatrefoil reentry in the pinwheel experiment (look for “Quatrefoil!”).

The March 5, 1999 entry from my research notebook,
showing my first observation of quatrefoil reentry
induced during the pinwheel experiment.

A few weeks later I got a call from my friend Natalia (see my previous post about an earlier collaboration with her). She was organizing a session for the IEEE Engineering in Medicine and Biology Society conference, to be held in Atlanta that October, and asked me to give a talk. We got to chatting and she started to describe simulations she and Lindblom were doing. They were the same calculations I was analyzing! I told her about my results, and we decided to collaborate on the project, which ultimately led to our Journal of Cardiovascular Electrophysiology paper.

Our article was full of beautiful color figures showing the different types of arrhythmias. Below is a photo of two pages of the article. Those familiar with my previous publications will notice that the color scheme representing the transmembrane potential is different from what I usually used. Lindblom and Trayanova had their own color scale, and we decided to adopt it rather than mine. One of the figures was featured on the cover of the March 2000 issue of the journal. Lindblom made some lovely movies to go along with these figures, but they’re now lost in antiquity. I later discovered that a simple cellular automata model could reproduce many of these results (see my previous post for details).

Two pages from Lindblom et al. (2000),
showing some of the color figures.

The editor asked Art Winfree to write an editorial to go along with our article (see my previous post about Winfree). I especially like his closing remarks.

This is clearly a landmark event in cardiac electrophysiology at the end of our century. It is sure to have major implications for clinical electrophysiologic work and for defibrillator design.
In retrospect, he was overly optimistic; the paper was an incremental contribution, not a landmark event of the 20th century. But I appreciated his kind words.

Friday, November 27, 2020

Defibrillation Mechanisms: The Parable of the Blind Men and the Elephant

“Defibrillation Mechanisms:
The Parable of the Blind Men
and the Elephant,”
by Ideker, Chattipakorn, and Gray.

I’ve read many scientific papers, but only one began with an eight-stanza poem about an elephant. Twenty years ago, Ray Ideker, Nipon Chattipakorn, and Rick Gray published “Defibrillation Mechanisms: The Parable of the Blind Men and the Elephant” in the Journal of Cardiovascular Electrophysiology (Volume 11, Pages 1008-1013, 2000). The opening poem by John Godfrey Saxe is reproduced below.

The purpose of the article was to review the different hypotheses that explain defibrillation of the heart. Russ Hobbie and I discuss defibrillation in Chapter 7 of Intermediate Physics for Medicine and Biology.

Ventricular fibrillation occurs when the ventricles contain many interacting reentrant wavefronts that propagate chaotically… During fibrillation the ventricles no longer contract properly, blood is no longer pumped through the body, and the patient dies in a few minutes. Implantable defibrillators are similar to pacemakers, but are slightly larger. An implanted defibrillator continually measures the [electrocardiogram]. When a signal indicating fibrillation is sensed, it delivers a much stronger shock that can eliminate the reentrant wavefronts and restore normal heart rhythm.
Ideker et al. discuss several possible mechanisms that explain how an electrical shock terminates fibrillation. This is a difficult problem, and I’ve spent much of my career trying to figure it out (I guess I’m one of the blind men).
It is possible that most of the electrical and optical mapping studies and the associated hypotheses about the mechanism of interaction of electrical stimuli with myocardium are all valid. It may be that shocks of low strength do not halt the activation fronts of fibrillation; and shocks of higher strength, depending on the circumstances, cause polarization critical points, field-recovery critical points, and/or action potential prolongation; whereas still stronger shocks slightly below the defibrillation threshold cause activation that appears focal on the epicardium either by intramural reentry, by reentry involving the Purkinje fibers, or by true focal activity, perhaps caused by delayed or early afterdepolarization… If so, then just as in the parable of the blind men and elephant, most of the reported studies and proposed defibrillation mechanisms all may be partially correct, yet all may be partially wrong because they are incomplete.
Defibrillation is a fine example of how a knowledge of physics can help solve a critical problem in medicine. Apparently a knowledge of poetry helps too.