Monday, March 16, 2020

Visual Acuity

The coronavirus has led to events being canceled, people being isolated, and classes being disrupted. What can I do to help? I plan to post to this blog more often, so students can learn how physics is applied to medicine and biology. I can’t promise daily posts, but I’ll do what I can.

Russ Hobbie and I discuss visual acuity—how sharp your vision is—in Chapter 14 of Intermediate Physics for Medicine and Biology.
The maximum photopic (bright-light) resolution of the eye is limited by four effects: diffraction of the light passing through the circular aperture of the pupil (5–8 μm), spacing of the receptors (≈ 3 μm), chromatic and spherical aberrations (10–20 μm), and noise in eyeball aim (a few micrometers)… The total standard deviation is (6² + 3² + 15² + 5²)^(1/2) = 17 μm in the image on the retina. Since the diameter of the eyeball is about 2 cm, this corresponds to an angular size... of (17 × 10⁻⁶)/(2 × 10⁻²) = 8.5 × 10⁻⁴ rad = 0.048° = 2.9 min of arc.
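That quadrature sum is easy to check. Here is a minimal Python sketch using the values quoted above:

```python
import math

# Blur contributions in the retinal image, in micrometers (values quoted above)
diffraction = 6.0    # diffraction through the pupil
receptors   = 3.0    # spacing of the receptors
aberrations = 15.0   # chromatic and spherical aberrations
aim_noise   = 5.0    # noise in eyeball aim

# Independent blur sources add in quadrature
total = math.sqrt(diffraction**2 + receptors**2 + aberrations**2 + aim_noise**2)

# Convert blur on the retina to an angular size, using a 2 cm eyeball diameter
angle_rad = (total * 1e-6) / 2e-2
arcmin = math.degrees(angle_rad) * 60

print(f"total blur = {total:.0f} um")                        # ~17 um
print(f"angle = {angle_rad:.1e} rad = {arcmin:.1f} arcmin")  # ~8.5e-4 rad, ~2.9 arcmin
```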
Let’s examine the factors contributing to acuity, one by one.

Diffraction

The Rayleigh criterion specifies the minimum angular separation, θmin, of two objects that can just be resolved. The criterion can be expressed as θmin = 1.22 λ/D, where λ is the wavelength of light and D is the diameter of the pupil. If we use light from the center of the visible spectrum—say green light with wavelength 550 nm—and a pupil diameter of 2.5 mm, we get θmin = 0.00027 radians, which is 0.015° or 0.93 minutes of arc. If we take the eyeball diameter to be 2 cm, that translates into a minimum separation on the retina of 5.4 μm.
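Here is the same calculation as a short Python sketch, using the values from the paragraph above:

```python
import math

wavelength = 550e-9   # green light, m
pupil = 2.5e-3        # pupil diameter, m
eyeball = 2e-2        # eyeball diameter, m

theta_min = 1.22 * wavelength / pupil      # Rayleigh criterion, rad
arcmin = math.degrees(theta_min) * 60      # minutes of arc
separation = theta_min * eyeball           # minimum separation on the retina, m

print(f"theta_min = {theta_min:.2e} rad = {arcmin:.2f} arcmin")  # ~2.7e-4 rad, 0.92'
print(f"retinal separation = {separation * 1e6:.1f} um")         # ~5.4 um
```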

The Spacing of Receptors

According to The First Steps in Seeing, by Robert Rodieck, in the fovea cones have a density of about 0.1 per square micron. Since the spacing between receptors is roughly one over the square root of their density, that translates into about a 3 micron separation between cones (1/√0.1 ≈ 3.2 μm). The cone density is down by a factor of ten in other parts of the retina.

Chromatic and Spherical Aberration

The First Steps in Seeing, by Robert Rodieck.
Chromatic aberration arises because the index of refraction of the eye, including the lens, depends on the wavelength of the light. Therefore, different colors form images at different locations. Spherical aberration arises because a spherical lens is not ideal for forming images; off-axis rays have a different focal point than on-axis rays. The eye and its lens, however, are not truly spherical, so when we speak of spherical aberration in the context of vision, we mean heterogeneities in the imaging system that cause the image to be blurred. Rodieck says
At night the pupil is fully open, and the spread of photons is due mainly to the optical imperfections of the eye; the effects of these imperfections increase rapidly with pupil size. The other factor that contributes to the spread of photons is intrinsic to the nature of how photons go from place to place, and is termed diffraction. This factor is not significant here, but in daylight, when the pupil is small, the spread of photons in the retinal image is due mainly to diffraction.

Noise in Eyeball Aim

Rodieck explains how your gaze is always moving, even when staring at a stationary object.
Gazing at a stationary object also involves smooth eye movements. This is because your head is always in slight motion as the muscles of your body and neck attempt to maintain your posture. Thus when you look as steadily as possible at some small stationary object, such as a pebble on the ground, your slight head movements cause the image of the pebble to move on your retina.
This motion has some noise, which limits our visual acuity.

A Snellen chart for testing visual acuity.

A Snellen chart is the traditional way to measure visual acuity. You stand 6 meters (20 feet) from the chart and read the letters with one eye. If you plan to print out this chart, you need to make sure it is the correct size; the topmost “E” should be 87.3 mm tall. In that case, the 20/20 row corresponds to letters that subtend 5 minutes of arc.
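That 87.3 mm value follows from simple geometry: a 20/20 letter subtends 5 minutes of arc at 6 meters, and the top “E” is the 20/200 line, ten times larger. A quick Python check:

```python
import math

distance = 6.0                   # viewing distance, m
arcmin = math.radians(1 / 60)    # one minute of arc, in radians

# A 20/20 letter subtends 5 arcmin at 6 m (small-angle approximation)
h_2020 = distance * 5 * arcmin
print(f"20/20 letter: {h_2020 * 1000:.2f} mm")        # ~8.7 mm

# The top "E" is the 20/200 line: ten times larger
print(f"20/200 letter: {h_2020 * 1000 * 10:.1f} mm")  # 87.3 mm
```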

The Big Dipper. The second star in the handle is a double.
Another test of visual acuity arises because the second star from the end of the handle of the Big Dipper is actually a double star: Mizar and Alcor. They are separated by 12 minutes of arc. George Bohigian published an article in Survey of Ophthalmology (Volume 53, Pages 536-539, 2008) about “An Ancient Eye Test—Using the Stars.” He begins
A common vision test in ancient Persia used the double star of the Big Dipper in the constellation Ursa Major or the Big Bear. This vision evaluation test was given to elite warriors in the ancient Persian army and was called “the test” or “the riddle.” The desert Arabs, especially the Bedouins, used the separation of Mizar and Alcor as a test of good vision. The separation of these two stars is known as the Arab Eye Test, and has been used in antiquity to test children's eyesight. This article explores the origin, history, and practicality of this eye test and how it correlates with the present-day Snellen visual acuity test.
He concludes
The Arab Eye Test using the double star of Mizar and Alcor remains a practical test of visual acuity and visual function as it was over 1000 years ago. This test is somewhat equivalent to the 20/20 in the Snellen visual acuity nomenclature. This is the first report that correlates the Mizar–Alcor naked eye test with the current Snellen visual acuity test. With the spread of Islam from Spain to Central Asia, the Arabs brought their knowledge of astronomy mixed with the traditions of Greece, India, Babylonia, and Persia to Western civilization.

Throughout our history the stars have been a constant guide to navigation, to measure the seasons, to divine the future, and to measure eyesight. The Arab Eye Test is an example of how a natural phenomenon has been used for a practical purpose.

Friday, March 13, 2020

Arguing With Zombies

Arguing With Zombies: Economics, Politics, and the Fight for a Better Future, by Paul Krugman.
Recently I read Arguing With Zombies: Economics, Politics, and the Fight for a Better Future, by Paul Krugman. The book is a collection of editorials and blog posts Krugman wrote for the New York Times, plus a few other previously published articles. I enjoy Krugman’s writings, but what do they have to do with biological physics or medical physics? Based on the first 390 pages of his book, the answer is: nothing. But near the end was a 1993 article that appeared in The American Economist titled “How I Work” that is relevant to Intermediate Physics for Medicine and Biology. One of the features I like best about IPMB is its emphasis on deriving simple “toy models” that provide insight. Simple models aren’t in fashion in biomedical research, but I like them and so does Krugman.

“How I Work” lists Krugman’s four basic rules governing his research. You can read excerpts of his analysis below. Whenever he starts applying his rules specifically to economics, just replace all the financial talk with illustrations from physics applied to medicine and biology.

Listen to the Gentiles

What I mean by this rule is “Pay attention to what intelligent people are saying, even if they do not have your customs or speak your analytical language.”…

I am a strong believer in the importance of models, which are to our minds what spear-throwers were to stone age arms: they greatly extend the power and range of our insight. In particular, I have no sympathy for those people who criticize the unrealistic simplifications of model builders, and imagine that they achieve greater sophistication by avoiding stating their assumptions clearly. The point is to realize that economic models are metaphors, not truth.
For a physicist working in medicine and biology, the “gentiles” would be the biologists and medical doctors. They have much to tell us. For example, when I worked at the National Institutes of Health I learned a lot about magnetic stimulation of nerves from Mark Hallett and Leo Cohen, even if sometimes they mixed up their electricity and magnetism.

I like Krugman’s emphasis on using models to extend our insight. Models may not be as common in pure physics, where you can deduce things from fundamental principles, but biology is so complicated that you can rarely start from Schrödinger's equation and get anywhere. You build models to make sense of biological complexity.

Question the Question

If people in a field have bogged down on questions that seem very hard, it is a good idea to ask whether they are really working on the right questions. Often some other question is not only easier to answer but actually more interesting!
Organisms are so complex that often the right questions aren’t obvious. It’s hard to teach a student how to ask better questions, but we must try.

Dare to be Silly

If you want to publish a paper in economic theory, there is a safe approach: make a conceptually minor but mathematically difficult extension to some familiar model. Because the basic assumptions of the model are already familiar, people will not regard them as strange; because you have done something technically difficult, you will be respected for your demonstration of firepower. Unfortunately, you will not have added much to human knowledge.

What I found myself doing in the new trade theory was pretty much the opposite. I found myself using assumptions that were unfamiliar, and doing very simple things with them. Doing this requires a lot of self-confidence, because initially people (especially referees) are almost certain not simply to criticize your work but to ridicule it….

The age of creative silliness is not past. Virtue, as an economic theorist, does not consist in squeezing the last drop of blood out of assumptions that have come to seem natural because they have been used in a few hundred earlier papers. If a new set of assumptions seems to yield a valuable set of insights, then never mind if they seem strange.
Throughout my career, I have developed simple models. For example, one of my favorite publications is “How to Explain Why Unequal Anisotropy Ratios is Important Using Pictures but No Mathematics.” It consists of some almost silly hand-waving that is amazingly effective at explaining how electric fields interact with cardiac tissue. Another example is my paper “Virtual Electrodes Made Simple” in which I use a trivial little cellular automaton to explain how certain cardiac arrhythmias begin.
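To give a flavor of how simple such a model can be, here is a minimal sketch of a generic three-state excitable-medium cellular automaton (Greenberg–Hastings style). To be clear, these are not the rules from “Virtual Electrodes Made Simple”; they only illustrate the kind of toy model I mean:

```python
import numpy as np

# A generic excitable-medium cellular automaton with three states:
# resting (0), excited (1), refractory (2).

def step(grid):
    new = grid.copy()
    excited = grid == 1
    # Does a cell have an excited neighbor? (von Neumann neighborhood)
    nbrs = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
            np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
    new[(grid == 0) & nbrs] = 1   # a resting cell with an excited neighbor fires
    new[grid == 1] = 2            # excited cells become refractory
    new[grid == 2] = 0            # refractory cells recover
    return new

grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = 1                  # a point stimulus
for _ in range(5):
    grid = step(grid)             # a wave front expands outward
print(grid)
```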

Biomedical engineers are doing some incredibly sophisticated calculations to simulate how our bodies work, and these studies are necessary and valuable. But I believe that for those of us who apply physics to medicine and biology, the age of creative silliness expressed through simple models is not yet over. That’s why Russ Hobbie and I stress building models in Intermediate Physics for Medicine and Biology.

Simplify, Simplify

The injunction to dare to be silly is not a license to be undisciplined. In fact, doing really innovative theory requires much more intellectual discipline than working in a well-established literature. What is really hard is to stay on course: since the terrain is unfamiliar, it is all too easy to find yourself going around in circles…. And it is also crucial to express your ideas in a way that other people, who have not spent the last few years wrestling with your problems and are not eager to spend the next few years wrestling with your answers, can understand without too much effort.

Fortunately, there is a strategy that does double duty: it both helps you keep control of your own insights, and makes those insights accessible to others. The strategy is: always try to express your ideas in the simplest possible model. The act of stripping down to this minimalist model will force you to get to the essence of what you are trying to say….

The downside of this strategy is, of course, that many of your colleagues will tend to assume that an insight that can be expressed in a cute little model must be trivial and obvious—it takes some sophistication to realize that simplicity may be the result of years of hard thinking…. There is a special delight in managing not only to boldly go where no economist has gone before, but to do so in a way that seems after the fact to be almost child’s play.
Physicists working in medicine share some of the frustrations that Krugman experiences. Reviewers of papers—and especially reviewers of grant proposals for the National Institutes of Health—often don’t appreciate simple models. My simulations of cardiac electrophysiology have always lacked the particular ion channel that the referee believed was critical, and my biomechanics models tend to use simplifications such as linear strains that trigger objections. (A referee for one of my National Science Foundation applications claimed “this proposal should never have been submitted.”😮)

I often discard biological realism in order to focus on the one or two new features of a model. I’m not asserting that the discarded behavior is unimportant. Rather, I want a simple model so I can highlight the new feature that I’m studying. I don’t want my message to be frittered away by detail. Like Thoreau, Krugman and I strive to simplify, simplify! I hope students using Intermediate Physics for Medicine and Biology learn to appreciate the value of a simple model.

Read “How I Work” online for free.

Listen to Paul Krugman explain how he revolutionized trade theory.
He and I are both big Asimov fans.

Friday, March 6, 2020

The American Physical Society March Meeting: A Victim of the Coronavirus

I planned to devote this blog post to a discussion of the American Physical Society March Meeting, which was to be held in Denver this week. Unfortunately, the APS cancelled the meeting because of concerns about the coronavirus.
Email from the American Physical Society cancelling the March Meeting because of the coronavirus.
I learned of the cancellation eight hours before I was to leave for the airport. I’m not angry with the APS; I understand the difficult situation the organizers faced. Frankly, I was worried about contracting the virus at the meeting, and then carrying it back to southeast Michigan. Nevertheless, the last-minute cancellation was frustrating.

On the bright side, this blog offers me an opportunity to share what I was going to say at my presentation. The talk was, in fact, closely related to Intermediate Physics for Medicine and Biology. Phil Nelson—author of the trilogy Biological Physics, Physical Models of Living Systems, and From Photon to Neuron—organized a session about “Bringing Together Biology, Medicine, and Physics in Education,” and invited me to speak.
The session “Bringing Together Biology, Medicine, and Physics in Education” that was supposed to be held at the American Physical Society March Meeting.
Below is my abstract.
The Purpose of Homework Problems is Insight, Not Numbers:
Crafting Exercises for an Intermediate Biological Physics Class

Bradley Roth 
Oakland University 

Richard Hamming famously said “The purpose of computing is insight, not numbers.” This view is true also for homework problems in an intermediate-level physics class. I constantly tell my students “an equation is not something you plug numbers into to get other numbers; it tells a story.” I will use examples from courses in Biological Physics and Medical Physics to illustrate this idea. A well-formed homework problem must balance brevity with storytelling. Often the problem is constructed by creating a “toy model” of an important biological system, and analysis of the toy model reveals some important idea or insight. A collection of such problems becomes a short-course in mathematical modeling as applied to medicine and biology, which is a skill that needs to be cultivated in biology majors, pre-med students, and anyone interested in using physical and mathematical tools to study biology and medicine.
If you want to hear more, download the PowerPoint presentation at the book’s website: https://sites.google.com/view/hobbieroth/home.

Russ Hobbie and I are proud of the homework problems in Intermediate Physics for Medicine and Biology. We hope you will gain much insight from them.

***************************************************

The pile of books that I used as props during my online talk, including Intermediate Physics for Medicine and Biology.
That’s how the post ended when I wrote it Sunday evening. Then a miracle happened. Physicists began spontaneously organizing an online version of the APS March Meeting! By Tuesday I was listening to Leon Glass give a wonderful talk about cardiac dynamics. On Wednesday I heard Harry McNamara give a fascinating lecture about stimulating and recording electrical activity using light. On Thursday afternoon all the speakers (including myself) in the “Bringing Together Biology, Medicine, and Physics in Education” session presented our talks remotely. I greatly enjoyed it. Over 35 people listened online; I wonder if we would have had that many in Denver? Because I was sitting in my office, I was able to use many of the textbooks that I mentioned in my PowerPoint as props. A video was made of each talk, and I’ll post a link to it in the comments when it’s available.

Phil Nelson is a hero of this story. He led the effort in the APS Division of Biological Physics, exhorting us that
Although we are all reeling from the abrupt cancellation of the March Meeting, it’s time for resilience. Science continues despite big bumps in the road, because science is important and it’s what we do.
 Amen!

Friday, February 28, 2020

Magnetoencephalography: Theory, Instrumentation, and Applications to Noninvasive Studies of the Working Human Brain

Hämäläinen et al. (1993).
In Chapter 8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I cite one of my favorite review papers: “Magnetoencephalography: Theory, Instrumentation, and Applications to Noninvasive Studies of the Working Human Brain,” by Matti Hämäläinen, Riitta Hari, Risto Ilmoniemi, Jukka Knuutila and Olli Lounasmaa (Reviews of Modern Physics, Volume 65, Pages 413-497, 1993). The authors worked in the Low Temperature Laboratory at Helsinki University of Technology in Espoo, Finland. Even though this review is over 25 years old, it remains an excellent introduction to recording biomagnetic fields of the brain. According to Google Scholar, this classic 84-page reference has been cited over 4500 times. Below is the abstract.
Magnetoencephalography (MEG) is a noninvasive technique for investigating neuronal activity in the living human brain. The time resolution of the method is better than 1 ms and the spatial discrimination is, under favorable circumstances, 2–3 mm for sources in the cerebral cortex. In MEG studies, the weak 10 fT–1 pT magnetic fields produced by electric currents flowing in neurons are measured with multichannel SQUID (superconducting quantum interference device) gradiometers. The sites in the cerebral cortex that are activated by a stimulus can be found from the detected magnetic-field distribution, provided that appropriate assumptions about the source render the solution of the inverse problem unique. Many interesting properties of the working human brain can be studied, including spontaneous activity and signal processing following external stimuli. For clinical purposes, determination of the locations of epileptic foci is of interest. The authors begin with a general introduction and a short discussion of the neural basis of MEG. The mathematical theory of the method is then explained in detail, followed by a thorough description of MEG instrumentation, data analysis, and practical construction of multi-SQUID devices. Finally, several MEG experiments performed in the authors’ laboratory are described, covering studies of evoked responses and of spontaneous activity in both healthy and diseased brains. Many MEG studies by other groups are discussed briefly as well.
Russ and I mention this review several times in IPMB. When we want to show typical MEG data, we reproduce their Figure 47, showing the auditory magnetic response evoked by listening to words (our Fig. 8.20). Below is a version of the figure with some color added.

Effect of attention on responses evoked by auditorily presented words. The subject was either ignoring the stimuli by reading (solid trace) or listening to the sounds during a word categorization task (dotted trace); the mean duration of the words is given by the bar on the time axis. The field maps are shown during the N100m deflection and the sustained field for both conditions. The contours are separated by 20 fT and the dots illustrate the measurement locations. The origin of the coordinate system, shown on the schematic head, is 7 cm backwards from the eye corner, and the x axis forms a 45° angle with the line connecting the ear to the eye. Adapted from Hämäläinen et al. (1993).
We also refer to the article when discussing SQUID gradiometers, which they discuss in detail. Russ and I have a figure in IPMB showing two types of gradiometers; here I show a color version of this figure adapted from Hämäläinen et al.
Red: a magnetometer. Green: a planar gradiometer. Blue: an axial gradiometer. Purple: a second-order gradiometer. Adapted from Hämäläinen et al. (1993).
In Chapter 11 of IPMB, Russ and I reproduce my favorite figure from Hämäläinen et al.: their Fig. 8, showing the spectrum of several magnetic noise sources. Earlier in our book, Russ and I warn readers to beware of log-log plots in which the distance spanned by a decade is not the same on the vertical and horizontal axes. Below I redraw Hämäläinen et al.’s figure with the same scaling for each axis. The advantage of this version is that you can easily estimate the power law relating noise to frequency from the slope of the curve. The disadvantage is that you get a tall, skinny illustration.
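If you want to enforce this equal-decade scaling in your own plots, matplotlib makes it a one-liner. A minimal sketch (the 1/√f spectrum below is made up for illustration, not data from the paper):

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up 1/sqrt(f) noise spectrum, just to illustrate the plotting point
f = np.logspace(-1, 3, 50)       # 0.1 Hz to 1 kHz
noise = 30.0 / np.sqrt(f)        # appears as a slope of -1/2 on a log-log plot

fig, ax = plt.subplots()
ax.loglog(f, noise)
ax.set_aspect('equal')           # a decade spans the same distance on both axes
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Field noise (arbitrary units)')
plt.show()
```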

Peak amplitudes (arrows) and spectral densities of fields due to typical biomagnetic and noise sources. Adapted from Hämäläinen et al. (1993).
I like many things about Hämäläinen et al.’s review. They present some lovely pictures of neurons drawn by Ramón y Cajal. There’s a detailed discussion of the magnetic inverse problem, and a long analysis of evoked magnetic fields. In IPMB, Russ and I mention using a magnetically shielded room to reduce the noise in MEG data, but don’t give details. Hämäläinen et al. describe their shielded room:
The room is a cube of 2.4-m inner dimensions with three layers of μ-metal, which are effective for shielding at low frequencies of the external magnetic noise spectrum (particularly important for biomagnetic measurements), and three layers of aluminum, which attenuate very well the high-frequency band. The shielding factor is 10³–10⁴ for low-frequency magnetic fields and about 10⁵ for fields of 10 Hz and above.
They show a nice photo of a subject having her MEG measured in this room; I hope she’s not claustrophobic.

The authors were members of a leading biomagnetism group in the 1990s. Matti Hämäläinen is now with the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital and is a professor at Harvard. Riitta Hari is an emeritus professor at Aalto University (formerly the Helsinki University of Technology). Risto Ilmoniemi is now head of the Department of Neuroscience and Biomedical Engineering at Aalto. Olli Lounasmaa (1930–2002), the leader of this impressive group, was known for his research in low temperature physics. In 2012 the Low Temperature Laboratory at Aalto was renamed the O. V. Lounasmaa Laboratory in his honor.

What do I like best about the Finns’ landmark review? They cite me! In particular, the experiment I performed as a graduate student working with John Wikswo to measure the magnetic field of a single axon.
Wikswo et al. (1980) reported the first measurements of the magnetic field of a peripheral nerve. They used the sciatic nerve in the hip of a frog; the fiber was threaded through a toroid in a saline bath. When action potentials were triggered in the nerve, a biphasic magnetic signal of about 1 ms duration was detected. Later, the magnetic field of an action potential propagating in a single giant crayfish axon was recorded as well (Roth and Wikswo, 1985). The measured transmembrane potential closely resembled that calculated from the observed magnetic field. From these two sets of data, it was possible to determine the intracellular conductivity.
The videos below, presented by several of the authors, augment the discussion of biomagnetism in Intermediate Physics for Medicine and Biology, and provide a short course in magnetoencephalography. Enjoy!

Matti Hämäläinen: MEG and EEG Signals and Their Sources, 2014.


Riitta Hari: How Does a Neuroscientist View Signals and Noise in MEG Recordings, 2015.

Interview with Risto Ilmoniemi, Helsinki, 2015.

Friday, February 21, 2020

Replacement of the Axoplasm of Giant Nerve Fibres with Artificial Solutions

When discussing the electrophysiology of nerve and muscle fibers in Section 6.1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
The axoplasm has been squeezed out of squid giant axons and replaced by an electrolyte solution without altering appreciably the propagation of the impulses—for a while, until the ion concentrations change significantly.
Really? The axoplasm can be squeezed out of an axon like toothpaste? Who does that?

Baker et al. (1962).
The technique is described in an article by Baker, Hodgkin and Shaw (“Replacement of the Axoplasm of Giant Nerve Fibres with Artificial Solutions,” Journal of Physiology, Volume 164, Pages 330-354, 1962). The second author is Alan Hodgkin, Nobel Prize winner with Andrew Huxley “for their discoveries concerning the ionic mechanisms involved in excitation… of the nerve cell membrane.”

Below is my color version of Baker et al.’s Figure 1.
Internal perfusion of an axon. Adapted from Figure 1D of Baker, Hodgkin and Shaw, J. Physiol., 164:330-354, 1962.
Their methods section (with my color coding in brackets) states
A cannula [red] filled with perfusion fluid [baby blue] was tied [green] into the distal end of a giant axon [black] of length 6-8 cm. The axon was placed on a rubber pad [dark blue] and axoplasm [yellow] was extruded by passing a rubber-covered roller [purple] over it in a series of sweeps…
I like how a little mound of axoplasm piles up at the end of the fiber (yellow, right). They continue
After an axon had been extruded and perfused it was tied at the lower end, filled with perfusion fluid and impaled with an internal electrode by almost exactly the same method as that used with an intact axon…

One might suppose that this would be disastrous and axons were occasionally damaged by the internal electrode. However, in many instances we recorded action potentials of 100-110 mV for several hours.
This experiment is a tour de force. I can think of no better way to demonstrate that the action potential is a property of the nerve membrane, not the axoplasm.

You may already know Hodgkin, but who is Baker?

Hodgkin was coauthor of an obituary of Peter Frederick Baker (1939-1987), published in the Biographical Memoirs of Fellows of the Royal Society. After describing Baker’s childhood, Hodgkin wrote that he met the undergraduate Baker
when he had just obtained a first class in biochemistry part II. Partly at the suggestion of Professor F. G. Young, Peter decided that he would like to join Hodgkin’s group in the Physiological Laboratory in Cambridge. He also welcomed the suggestion that he should divide his time between Cambridge and the Laboratory of the Marine Biological Association at Plymouth, where there were many experiments to be done on the giant nerve fibres of the squid.
Hodgkin then describes Baker's experiments on internal perfusion of nerve axons.
Peter started work at Plymouth with Trevor Shaw in September 1960 and almost immediately the pair struck gold by showing that after the protoplasm had been squeezed out of a giant nerve fibre, conduction of impulses could be restored by perfusing the remaining membrane and sheath with an appropriate solution... Later, Baker, Hodgkin and Shaw… spent some time working out the best method of changing internal solutions while recording electrical phenomena with an internal electrode. It turned out that it did not matter much what solution was inside the nerve fibre as long as it contained potassium and little sodium. Provided that this condition is satisfied, a perfused nerve fibre is able to conduct nearly a million impulses without the intervention of any biochemical process. ATP is needed to pump out sodium and reabsorb potassium but not for the actual conduction of impulse.

There were also several unexpected findings of which perhaps the most interesting was that reducing the internal ionic strength caused a dramatic shift in the operating voltage characteristic of the membrane... This effect, which finds a straightforward explanation in terms of the potential gradients generated by charged groups on the inside of the membrane, helped to explain several unexpected results that were sometimes thought to be inconsistent with the ionic theory of nerve conduction.
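Why does conduction depend on having potassium, and little sodium, inside? The Nernst equation makes the point. Here is a small Python sketch using typical textbook squid-axon concentrations (illustrative values, not Baker et al.’s measurements):

```python
import math

R, T, F = 8.314, 291.0, 96485.0   # gas constant, temperature (18 C), Faraday constant

def nernst(c_out, c_in, z=1):
    """Nernst equilibrium potential, in millivolts."""
    return 1000 * (R * T) / (z * F) * math.log(c_out / c_in)

# Approximate squid-axon concentrations in mM (textbook values)
print(f"E_K  = {nernst(c_out=20, c_in=400):.0f} mV")   # about -75 mV
print(f"E_Na = {nernst(c_out=440, c_in=50):.0f} mV")   # about +55 mV
```

With potassium concentrated inside, the membrane can rest near E_K and fire toward E_Na; swap the ions and the machinery fails, which is just what the perfusion experiments showed.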
Baker went on to perform an impressive list of research projects (his obituary cites nearly 200 publications). Unfortunately, he died young. Hodgkin concludes
Peter Baker’s sudden death from a heart attack at the early age of 47 has deprived British science of one of its most gifted and versatile biologists. He was at the height of his scientific powers and had many ideas for new lines of research, particularly in the borderland between molecular biology and physiology.
Both Baker and Hodgkin appear in this video. They are demonstrating voltage clamping, not internal perfusion.

Watch Alan Hodgkin and Peter Baker demonstrate voltage clamping.

Friday, February 14, 2020

Titan: The Life of John D. Rockefeller

Titan: The Life of John D. Rockefeller, Sr., by Ron Chernow.
Recently I listened to an audio recording of Titan: The Life of John D. Rockefeller, Sr. by Ron Chernow. Rockefeller reminds me of Bill Gates: corporate corruption, fantastic fortune, and phenomenal philanthropy. Chernow says that Rockefeller’s “good side was every bit as good as his bad side was bad. Seldom has history produced such a contradictory figure.”

Rockefeller intersects with Intermediate Physics for Medicine and Biology through the Rockefeller Foundation and The Rockefeller Institute for Medical Research, now known as the Rockefeller University. Russ Hobbie and I mention the name “Rockefeller” once in IPMB, a Chapter 6 reference to a report written by neuroscientist Rafael Lorente de Nó, who worked at Rockefeller University for decades.
Davis L Jr, Lorente de Nó R (1947) Contribution to the mathematical theory of the electrotonus. Stud Rockefeller Inst Med Res Repr 131(Part 1):442–496.
Lady Luck, by Warren Weaver.
In Chapter 3 we cite Lady Luck by Warren Weaver, the director of the Division of Natural Sciences at the Rockefeller Foundation. Their website states that in 1932
Warren Weaver comes to the Foundation and during his 27-year association becomes the principal architect of programs in the natural sciences. He sees his task as being “to encourage the application of the whole range of scientific tools and techniques, and specially those which had been so superbly developed in the physical sciences, to the problems of living matter.”
He sounds like an IPMB kind of guy.

Ion Channels of Excitable Membranes, by Bertil Hille.
In Chapter 9, Russ and I discuss Roderick MacKinnon, who first determined the structure of the potassium channel. MacKinnon leads a laboratory at Rockefeller University, located in Manhattan along the East River about a mile north of the United Nations headquarters. Nowadays students earn graduate degrees from Rockefeller University. Bertil Hille, author of Ion Channels of Excitable Membranes, is an alum.

Rockefeller University hosts the Center for Studies in Physics and Biology, whose website states
The Center for Studies in Physics and Biology was conceived by physicists and biologists to increase communication between their disciplines, with the goal of developing innovative solutions to biological questions. Much of the work at the center aims to understand how physical laws govern the operation of biochemical machinery and the processing of information inside cells. To this end, researchers study both the basic physical properties of biological systems (such as elasticity of DNA and DNA-protein interactions) and the application of physical techniques to the modeling of neural, genetic, and metabolic networks.
Although the research is more microscopic than you would typically find in our book, the Center epitomizes the goal of Intermediate Physics for Medicine and Biology: apply physics and mathematics to research in medicine and biology. I sometimes see the Center advertising for fellows, and suspect it would be an interesting place to work.

John D. Rockefeller, from Wikipedia.

John D. Rockefeller was one of the greatest philanthropists of all time. Besides Rockefeller University and the Rockefeller Foundation, he helped found both the University of Chicago and Spelman College. His family has carried on his philanthropic tradition. Three years ago, Rockefeller’s grandson David Rockefeller passed away. The university website said
The entire Rockefeller University community deeply mourns the loss of David Rockefeller, our beloved friend and benefactor, Honorary Chairman, and Life Trustee. During its long and storied history, no single individual had a more profound influence on the University than David. His inspired leadership, extraordinary vision, and immense generosity have been essential factors in the University’s success. His integrity, strength, wisdom, and judgment—and especially his unequivocal commitment to excellence—shaped the University and made it the powerhouse of biomedical discovery it is today.
One of the greatest philanthropists of our time, as well as one of the world’s foremost leaders in the spheres of finance, international relations, and public service, David Rockefeller dedicated his life to improving the world and the lives of all who share our planet. David was born in New York City in 1915, the youngest of Abby Aldrich Rockefeller and John D. Rockefeller, Jr.’s six children and a grandson of John D. Rockefeller.
One of my favorite parts of Titan is the story about Rockefeller’s dad, William Rockefeller, a bigamist, con artist, and snake oil salesman. Chernow isn’t fond of Ida Tarbell, the muckraking journalist who wrote influential articles in McClure’s Magazine condemning Standard Oil, the company founded by Rockefeller. Tarbell’s articles led the trust buster Teddy Roosevelt to break up the monopoly.

Ron Chernow is an excellent writer who’s written fine books about Grant and Washington. He’s best known for his wonderful biography of Alexander Hamilton, which inspired Lin-Manuel Miranda’s musical masterpiece Hamilton.

Listen to Ron Chernow talk about John D. Rockefeller.
https://www.youtube.com/watch?v=-PkYARGlj_Y


My favorite song from Hamilton: “It’s Quiet Uptown.”

Friday, February 7, 2020

Influence of a Perfusing Bath on the Foot of the Cardiac Action Potential

Roth BJ, “Influence of a Perfusing Bath on the Foot of the Cardiac Action Potential,” Circ. Res., 86:e19-e22, 2000.
Twenty years ago this week, I published a Research Commentary in Circulation Research about the “Influence of a Perfusing Bath on the Foot of the Cardiac Action Potential” (Volume 86, Pages e19-e22, 2000). I like this article for several reasons: it’s short and to the point; it’s a theoretical paper closely tied to data; it’s well written; and it challenges a widely-accepted interpretation of an experiment by a major figure in cardiac electrophysiology.

Back in my more pugnacious days, I wouldn’t hesitate to take on senior scientists when I disagreed with them. In this case, I critiqued the work of Madison Spach, a Professor at Duke University and a towering figure in the field. In 1981, Spach led an all-star team that measured cardiac action potentials propagating either parallel to or perpendicular to the myocardial fibers.
Spach MS, Miller WT III, Geselowitz DB, Barr RC, Kootsey JM, Johnson EA. “The Discontinuous Nature of Propagation in Normal Canine Cardiac Muscle: Evidence for Recurrent Discontinuities of Intracellular Resistance that Affect the Membrane Currents.” Circulation Research, Volume 48, Pages 39–45, 1981.
Spach et al. (1981).
They found that the rate-of-rise of the action potential and the time constant of the action potential foot depend on the direction of propagation. Continuous cable theory predicts that the rate-of-rise and time constant should be the same, regardless of direction. Therefore, they concluded, cardiac tissue is not continuous. Instead, they claimed that their experiment revealed the tissue’s discrete structure.
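In cable theory the transmembrane potential rises exponentially just ahead of the upstroke, Vm ∝ exp(t/τfoot), so τfoot is found by fitting a straight line to ln Vm versus time. A minimal sketch with synthetic data (the 0.5 ms time constant is made up for illustration):

```python
import numpy as np

# Synthetic action potential foot: Vm grows exponentially before the upstroke.
tau_true = 0.5                       # ms (made up for illustration)
t = np.linspace(0.0, 2.0, 50)        # ms
vm = 0.1 * np.exp(t / tau_true)      # mV above rest

# Fit a straight line to ln(Vm) versus t; the slope is 1/tau_foot
slope, intercept = np.polyfit(t, np.log(vm), 1)
print(f"tau_foot = {1.0 / slope:.2f} ms")   # recovers 0.50 ms
```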

To be sure, cardiac tissue is discrete in a sense. It’s made of individual cells, coupled by intercellular junctions to form a “syncytium.” Often, however, you can average over the cellular structure and treat the tissue as a continuum, just as you can often treat a material as a continuum even though it’s made from discrete atoms. For example, the bidomain model is a continuous description of the electrical properties of a microscopically heterogeneous tissue (See Section 7.9 of Intermediate Physics for Medicine and Biology for more about the bidomain model).

I’m skeptical of Spach’s interpretation of his data, and I’m not convinced that his observations imply the tissue’s discrete nature. I didn’t waste any time making this point in my article; I mention Spach by name in the first sentence of the Introduction. (In all quotes, I don’t include the references.)
In 1981, Spach et al observed a smaller maximum rate of rise of the action potential, V̇max, and a larger time constant of the action potential foot, τfoot, during propagation parallel to the myocardiac [sic] fibers (longitudinal) than during propagation perpendicular to the fibers (transverse). They attributed these differences to the discrete cellular structure of the myocardium. Their research has been cited widely and is often taken as evidence for discontinuous propagation in cardiac tissue.

Several researchers have suggested that the observations of Spach et al may be caused by the bath perfusing the tissue rather than the discrete nature of the tissue itself... The purpose of this commentary is to model the experiment of Spach et al using a numerical simulation and to show that the perfusing bath plays an important role in determining the time course of the action potential foot.
I performed a computer simulation of wave fronts propagating through a slab of cardiac tissue that is perfused by a tissue bath. The tissue is represented as a bidomain, so its discrete nature was not incorporated into the model. I found that the rate-of-rise of the action potential is slower when propagation is parallel to the fibers compared to perpendicular to the fibers, just as Spach et al. observed. However, when I eliminated the perfusing bath this effect disappeared and the rate-of-rise was the same in both directions.

My favorite part of the article is in the Discussion, where I summarize my conclusion using a syllogism.
The data of Spach et al are cited widely as evidence for discontinuous propagation in cardiac tissue. Their hypothesis of discontinuous propagation is supported by the following logic: (1) During 1-dimensional propagation in a tissue with continuous electrical properties, the time course of the action potential (including V̇max and τfoot) does not depend on the intracellular and interstitial conductivities; (2) experiments indicate that in cardiac tissue V̇max and τfoot differ with the direction of propagation and therefore with conductivity; and (3) therefore, the conductivity of cardiac tissue is not continuous. A flaw exists in this line of reasoning: when a conductive bath perfuses the tissue, the propagation is not 1-dimensional. The extracellular conductivity is higher for the tissue near the surface (adjacent to the bath) than it is for the tissue far from the surface (deep within the bulk). Therefore, gradients in Vm exist not only in the direction of propagation, but also in the direction perpendicular to the tissue surface. Reasoning based on the 1-dimensional cable model (such as used in the first premise of the syllogism above) is not applicable.
In biology and medicine, the main purpose of computer simulations is to suggest new experiments, so I proposed one.
One way to distinguish between the 2 mechanisms ([the discrete structure] versus perfusing bath) would be to repeat the experiments of Spach et al with and without a perfusing bath present. The tissue would have to be kept alive when the perfusing bath was absent, perhaps by arterial perfusion. The results … indicate that when the bath is eliminated, the action potential foot should become exponential, with no differences between longitudinal and transverse propagation. Furthermore, the maximum rate of rise of the action potential should increase and become independent of propagation direction. Although this experiment is easy to conceive, it would be susceptible to several sources of error. If Vm were measured optically, the data would represent an average over a depth of a few hundred microns. Because the model predicts that Vm changes dramatically over such distances, the data would be difficult to interpret. Microelectrode measurements, on the other hand, are sensitive to capacitative [sic] coupling to the perfusing bath, and the degree of such coupling depends on the bath depth. The rapid depolarization phase of the action potential is particularly sensitive to electrode capacitance. Although it is possible to correct the data for the influence of electrode capacitance, these corrections would be crucial when comparing data measured at different bath depths.
A later paper by Oleg Sharifov and Vladimir Fast (Heart Rhythm, Volume 3, Pages 1063-1073, 2006) suggests a better way to perform this experiment: use optical mapping but with the membrane dye introduced through the perfusing bath so it stains only the surface tissue. In this case, there is no capacitive coupling (no microelectrode) and little averaging over depth (the optical signal arises from only surface tissue). This would be an important experiment, but it hasn’t been performed yet. Until it is, we can’t resolve the debate over discrete versus continuous behavior. 

The last paragraph in the paper sums it all up. I particularly like the final sentence.
We cannot conclude from our study that [discrete structures] are not important during action potential propagation. Nor can we conclude that discontinuous propagation does not occur (particularly in diseased tissue). These factors may well play a role in propagation. We can conclude, however, that the influence of a perfusing bath must be taken into account when interpreting data showing differences in the shape of the action potential foot with propagation direction... Therefore, differences in action potential shape with direction cannot be taken as definitive evidence supporting discontinuous propagation... if a perfusing bath is present. Finally, without additional experiments, we cannot exclude the possibility that in healthy tissue the difference in the shape of the action potential upstroke with propagation direction is simply an artifact of the way the tissue was perfused.
Has my commentary had much impact? Nope. Compared to other papers I’ve written, this one is a citation dud. It has been cited only 27 times (22 if you remove self-citations); barely once a year. Spach’s 1981 paper has over 800 citations; over 20 per year. Even a response by Spach and Barr (Circ. Res., Volume 86, Pages e23-e28, 2000) to my commentary has almost twice as many citations as my original commentary. Does this difference in citation rate arise because I’m wrong and Spach’s right? Maybe. The only way to know is to do the experiment.

Friday, January 31, 2020

The Future of Low Dose Radiation Research in the United States

In Section 16.12 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the risk of radiation. In recent posts I’ve considered the risk of low-frequency electromagnetic radiation (such as microwaves), but today I’m talking about ionizing radiation (x-rays, gamma rays, and charged particles). A central concept for assessing risk is the linear no-threshold model.
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear no-threshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
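Mathematically the LNT model could hardly be simpler: the excess risk is assumed proportional to dose, all the way down to zero. A minimal sketch (the 5% per sievert coefficient is a commonly quoted nominal population value, used here only for illustration):

```python
def lnt_excess_risk(dose_sv, risk_per_sv=0.05):
    """Excess cancer risk under the linear no-threshold assumption.

    The 0.05 per sievert coefficient is a commonly quoted nominal
    population value, used here only for illustration.
    """
    return dose_sv * risk_per_sv

# Example: a 10 mSv CT scan under the LNT assumption
risk = lnt_excess_risk(0.010)
print(f"excess risk = {risk:.4f}")   # 0.0005, roughly 1 in 2000
```

The entire controversy is over whether that straight line remains valid at low doses, where direct epidemiological evidence is weak.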
The National Academies Press website where you can download The Future of Low Dose Radiation Research in the United States.
Recently, the National Academies Press published the proceedings of a symposium about The Future of Low Dose Radiation Research in the United States. You can download a pdf copy for free, or purchase a paper copy. It begins
Exposures at low doses of radiation, generally taken to mean doses below 100 millisieverts, are of primary interest for setting standards for protecting individuals against the adverse effects of ionizing radiation. However, there are considerable uncertainties associated with current best estimates of risks and gaps in knowledge on critical scientific issues that relate to low dose radiation. Nevertheless, in the United States there is no program that is dedicated to advancing knowledge on low dose radiation exposures. Starting in 1999, the Department of Energy’s (DOE’s) Low Dose Radiation Research Program funded experimental research on cellular and molecular responses to low dose radiation but was terminated in 2016 after ramping down funding over several years. Since then, Congress attempted to re-establish a low dose radiation research program in the United States but negotiations within the government have not yet resulted in its establishment.

The Nuclear and Radiation Studies Board of the National Academies hosted the symposium on The Future of Low Dose Radiation Research in the United States on May 8 and 9, 2019. The goal of the symposium was to provide an open forum for a national discussion on the need for a long-term strategy to guide a low dose radiation research program in the United States.
My favorite part of the symposium was a talk by David Brenner, Director of the Columbia University Center for Radiological Research. His remarks emphasize why the risk of low doses of radiation is an important question. He cites seven specific instances where the validity of the linear no-threshold model impacts public health decisions.
Dr. Brenner argued that there are significant health, social, and economic consequences for both under- and overprotecting against radiation. He and others provided examples of how uncertainties regarding the appropriate level of protection are affecting decisions of national and global significance:

1. Protective action guidelines during the 2011 Fukushima nuclear power plant accident were based on incomplete knowledge about radiation risks at low doses. The differing recommendations for evacuation during the accident issued by the United States and Japanese governments caused confusion and stress, and a number of people died because of the evacuation process. Also, many evacuees still remain displaced or have chosen not to return to areas that have been declared safe for habitation, citing radiation fears.

2. The true health effects of the Fukushima nuclear power plant accident have not been assessed due to incomplete information about radiation risks at low doses. Dr. Brenner said that various attempts to quantify health risks from the accident have reached different conclusions, ranging from no predicted future cancer deaths to hundreds of deaths attributed to the releases from the accident.

3. Cleanup activities at sites that were utilized for nuclear weapons production and testing in the United States are estimated to cost more than $377 billion and take longer than 50 years to complete. DOE has committed to cleaning these sites to below background radiation levels and this commitment is based on incomplete scientific understanding of risks at those levels.

4. Planning for high-level radioactive waste disposal and constructing a deep geological repository is impeded by current requirements for protecting future generations from low dose radiation risks.

5. A global move toward phasing out nuclear power is the result of concerns about the environmental and health consequences of nuclear power plant accidents and the lack of planning for long-term storage of high-level radioactive waste.

6. Risks from radon exposure in homes are uncertain, and better estimates could provide support (or not) for reducing radon exposure by mitigation strategies.

7. Risks associated with medical procedures such as CT scans are not fully understood and therefore a balanced consideration of probable benefits and probable risks is not always possible.
I can think of other examples:
8. I often hear about plans for a manned mission to Mars. Any months-long space mission would expose astronauts to radiation. Are such missions justified given the risks?
9. Health care providers receive small radiation exposures when administering nuclear medicine procedures such as positron emission tomography or single-photon emission computed tomography. At what point does the risk to the doctor or nurse outweigh the benefit to the patient?
10. How much should we worry about, and defend against, terrorist attacks involving wide-spread, low-dose radiation; for instance contamination of a municipal water supply?
11. What is the risk to distant neutral countries during a small-scale nuclear war (for example, the risk to the United States resulting from wind-blown radioactive fallout following a limited nuclear war between India and Pakistan)?
12. The lingering risks of the Chernobyl accident are unclear. First responders at the site received lethal doses of radiation during and immediately following the accident, but people living far away, or working near the site long after the accident, suffer from low-dose exposure, and governments must decide how much effort and expense are justified to mitigate these risks.
Assessing the effect of low doses of radiation is a critical issue. You can’t weigh the benefits against the risks if you don’t know the risks. Intermediate Physics for Medicine and Biology introduces readers to this topic, and the National Academies symposium adds more depth. I know it is a cliché to always whine that “we need more research,” but in this case we really do.

 David Brenner talking about “Living with Uncertainty About Low Dose Radiation Risks” in 2013.

Friday, January 24, 2020

Takuo Aoyagi and the Discovery of Pulse Oximetry

A pulse oximeter. Photograph by Rama, Wikimedia Commons, CC BY-SA 2.0 FR.
The pulse oximeter is among the most significant applications of physics and engineering to medicine and biology. In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe oximetry.
Near-infrared light in the range 600-1000 nm is used to measure the oxygenation of the blood as a function of time by determining the absorption at two different wavelengths…

Pulse oximeters that fit over a finger are widely used… The basic feature is that arterial blood flow is pulsatile, not continuous. Therefore, measuring the time-varying (AC) signal selectively monitors arterial blood and eliminates the contribution from venous blood and tissue.
Takuo Aoyagi, from the Engineering and Technology History Wiki.
The pulse oximeter has a long history, but an important milestone was reached by Takuo Aoyagi, a Japanese engineer. He tells his story in “Pulse Oximetry: Its Invention, Theory, and Future” (Journal of Anesthesia, Volume 17, Pages 259-266, 2003). Below I present excerpts from the article. As you read, notice how Aoyagi transforms an annoying artifact into a breakthrough.
In 1958, I graduated from Niigata University and was employed by Shimadzu Corporation at Kyoto. There, I became interested in patient monitoring. In 1969, I attended the summer school of physiology and measurement organized by H.A. Hoff and L.A. Geddes held at Baylor University, Houston, TX, USA. This was a very valuable experience for me. After that, I visited several institutions to see patient monitoring systems in the USA. Based on these experiences I came to have a belief that the final goal of patient monitoring must be the automatic control of patient treatment…

Just after I was employed by Shimadzu Corporation, I read a report on an interview with Dr. Yoshio Ogino, founder of Nihon Kohden Corporation, in a newspaper. I was deeply impressed by his words: “A skilled physician can treat only a limited number of patients. But an excellent medical instrument can treat countless patients in the world…”

The first order made by our Research and Development division manager, Mr. S. Ouchi, was “Develop something unique.” And he made me leader of a group of several members newly assigned to the division. In those days, research on automatic control of artificial ventilation was being carried out at Tokyo University in the Department of Anesthesiology by Professor H. Yamamura. I was very interested in this project and visited Professor Yamamura’s group. Assistant Professor M. Kamiyama explained the system and told me that, “To make this system a practical product, a reliable continuous measurement of arterial O2 (SaO2) and CO2 is indispensable…”

As a theme of our research group I decided to develop a high-accuracy noninvasive dye densitometer for cardiac output measurement. My new idea was to adopt the principle of Wood’s earpiece oximeter to improve the accuracy of previous earpiece dye densitometers... In Wood’s oximeter, the blood in the ear is expelled pneumatically before the measurement, and light transmitted through the blood is measured and the value is stored as a reference. Next, the blood is readmitted to the ear. After that, the optical density of the blood is calculated continuously against the reference value. Two light wavelengths, red and infrared, are used. The ratio of the optical densities at the two wavelengths is calculated and converted to SaO2 by using an empirical calibration curve…

I appointed Mr. K. Yamaguchi chief of this project. An experimental model was constructed. For animal experiments, secondhand monitors and instruments were brought into an old hut… Just after starting the experiments, we noticed a pulsatile variation in the tissue optical density caused by arterial pulsation. This phenomenon made us anxious…

At this point… I thought as follows:

(1) If the optical density of the pulsating portion is measured at two appropriate wavelengths and the ratio of the optical densities is obtained, the result must be equivalent to Wood’s ratio.

(2) In this method, the arterial blood is selectively measured, and the venous blood does not affect the measurement. Therefore, the probe site is not restricted to the ear.

(3) In this method, the reference for optical density calculation is set for each pulse. Therefore, an accidental shift of probe location introduces a short artifact and quick return to normal measurements.

This was my conception of the pulse oximeter principle... It was December 1972.
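Aoyagi’s principle survives in what engineers now call the “ratio of ratios.” A minimal sketch of the idea (the AC/DC normalization below is one common simplified form, and all the numbers are made up; a real oximeter converts R to SaO2 through an empirical calibration curve):

```python
import numpy as np

def ratio_of_ratios(red, infrared):
    """Compare the pulsatile fraction of the detected light at two wavelengths."""
    def ac_over_dc(signal):
        return (signal.max() - signal.min()) / signal.mean()
    return ac_over_dc(red) / ac_over_dc(infrared)

# Synthetic detected intensities over one cardiac cycle (made-up numbers)
t = np.linspace(0, 1, 100)                  # time, s
red = 1.00 + 0.020 * np.sin(2 * np.pi * t)  # 2% arterial pulsation
ir  = 1.00 + 0.015 * np.sin(2 * np.pi * t)  # 1.5% arterial pulsation

R = ratio_of_ratios(red, ir)
print(f"R = {R:.2f}")   # a pulse oximeter maps R to SaO2 empirically
```

Because only the pulsatile part of the signal enters the ratio, venous blood and tissue drop out, exactly as point (2) of Aoyagi’s list anticipates.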
In the 2007 article “Takuo Aoyagi: Discovery of Pulse Oximetry” (Anesthesia and Analgesia, Volume 105, Pages S1-S4), John Severinghaus writes
Greatness in science often, as here, comes from the well-prepared mind turning a chance observation into a major discovery. “One man’s noise is another man’s signal” commented the respiratory physiologist Jere Mead half a century ago.
Severinghaus concludes
Introduction of pulse oximetry coincided with a 90% reduction in anesthesia-related fatalities. Takuo Aoyagi’s invention was serendipitous. Although he could use the infrared signal to cancel pulsatile “noise” in the dye decay optical signal, hypoxic desaturation spoiled the smooth dye curve. In that noise, he recognized a useful signal—oximetry—because his mind was well prepared to understand what he saw happen. The process of turning his insight into more accurate, convenient and inexpensive saturation monitors still continues in dozens of laboratories and firms, while he continues to innovate.
Intermediate Physics for Medicine and Biology can’t teach readers how to make creative leaps leading to innovations and discoveries. But perhaps it can prepare the mind, so when you encounter a chance observation you can recognize it as an opportunity.