Friday, August 28, 2020

An Advanced Undergraduate Laboratory in Living State Physics

One weakness of Intermediate Physics for Medicine and Biology is that it doesn’t have an associated laboratory. Students need to learn how to perform experiments and use instruments.

An Advanced Undergraduate Laboratory in Living State Physics,
by Wikswo, Vickery, and Venable.

Fortunately, instructors wanting to develop a lab don’t need to start from scratch. My PhD advisor, John Wikswo, and his colleagues Barbara Vickery and John Venable created An Advanced Undergraduate Laboratory in Living State Physics at Vanderbilt University around 1980. I didn’t take this lab class, but my wife Shirley did (she obtained a master’s degree in physics from Vanderbilt), and she still has the lab manual.

Wikswo obtained a grant from the National Science Foundation to support the development of the lab. He collaborated with John Venable, a biologist on the Vanderbilt faculty. When I was a graduate student, Venable was the Associate Dean of the College of Arts and Sciences. Barbara Vickery was a Vanderbilt undergraduate biomedical engineering major.

The lab wasn’t designed for any particular textbook, but Wikswo was an early adopter of Russ Hobbie’s Intermediate Physics for Medicine and Biology, and I think I can see its influence. I don’t have an electronic copy of the 250-page lab manual; you would have to contact Wikswo for that. Below I quote parts of it.

1.1 An Introduction to the Living State Physics Laboratory

The undergraduate physics curriculum at a typical university might include an introductory class in biophysics or medical physics in addition to the more traditional curriculum of mechanics, electricity and magnetism, light and sound, thermodynamics, and modern physics. While introductory and advanced laboratory classes cover these standard fields of physics, generally there has been little opportunity for an undergraduate student to gain laboratory experience in biophysics or medical physics. The need for such experience is particularly acute today for preprofessional and scientifically oriented students. Of these students, physics majors are not being exposed to an important area of experimental physics, and pre-medical students and majors in other departments such as Molecular Biology, Chemistry, and Biomedical Engineering are presently receiving only a minimal exposure to modern biophysical techniques and instrumentation. Thus by introducing an advanced undergraduate laboratory in physics applied to living systems, we expect to broaden the experience in experimental physics for physics majors and non-majors alike.

Several options were available to us in designing this laboratory. We could, for example, have structured the laboratory to emphasize applications of physics to certain living systems such as the nervous system, the cardiovascular system, and the special senses. Rather than take this system-oriented approach, we have chosen to organize the course by areas of physics. The course will draw on techniques and ideas from the whole breadth of physics (mechanics, electricity, thermodynamics, optics, etc.) and apply these to topics of biophysical interest [the same approach as IPMB]. Since we will study intact living systems such as people and frogs, as well as isolated living preparations and inanimate molecules and models, this laboratory will use physics to study topics conventionally identified with both biophysics and medical physics, as well as with electrophysiology, physical chemistry, biomedical engineering and molecular biology. Because of the intended breadth of the planned experiments and their organization by area of physics rather than by biological system, we have chosen to title this laboratory “An Advanced Undergraduate Laboratory in Living State Physics”. The generality of the term “Living State Physics” is intended to parallel the generality of the term “solid state physics”, which as an experimental discipline utilizes the complete spectrum of physical concepts and techniques...

1.2 Summary of Experiments

a. Introduction to Bioelectric Phenomena. The first of the three experiments in this section is an exercise with an oscilloscope and an electronic stimulator which will allow the student to obtain a familiarity with the use of these instruments. In the second and third experiments, the Thornton Modular Plug-In System is used to provide familiarity with the basic physics describing the electromyogram and the electroencephalogram…

b. The Heart Experiments. This section should enable the student to gain an understanding of the basic principles of cardiac physiology. In the laboratory, the student will measure the frog and the human electrocardiogram…

c. Nerve Action Potential… [Students perform an] in-depth study of the properties of nerve propagation in the isolated sciatic nerve of a frog. In both experiments, from extracellular recordings of the nerve action potential it will be possible to demonstrate the graded response of the nerve bundle, the strength-duration relationship of stimuli producing a threshold response, bi-directional conduction, and the monophasic response…

d. Nerve Modeling. In the first experiment, the passive cable properties of the nerve are studied by using a resistor-capacitor network that represents a section of a nerve axon… The active properties of the nerve are investigated in the second experiment. An electronic nerve model which has a design based on a system of equations similar to those developed by Hodgkin and Huxley is used…

e. Skeletal Muscle. The first of the two experiments in this section is an introduction to the active and passive mechanical properties of skeletal muscle using the frog gastrocnemius muscle. The experiment includes measurement of the muscle twitch, the ability of the muscle to do work, and the maximum tension developed by the muscle at different lengths, as well as demonstration of the phenomena of temporal summation and the graded response of muscle. The second experiment involves characterization of the mechanical properties of muscle in its resting and contractile states…

f. Diffusion. In this experiment, a Cenco model is used for qualitative demonstration of the transport phenomenon of diffusion, showing the exponential approach to equilibrium and how the relative sizes of molecules and pores affect diffusion rates.

g. Compartmental Modeling. The usefulness of compartmental modeling in analysis of some systems is demonstrated by constructing one- and two-compartment models for several open and closed thermal systems. The theoretical models are analyzed mathematically…

h. The Physical Aspects of Vision. The minimum number of photons that the human eye can detect in a single detectable flash is the minimum number of photons whose absorption by photoreceptor cells in the eye leads to the firing of an impulse in the brain. This threshold value is determined by recording the fraction of detected flashes as a function of relative intensity of the flashes… by utilizing Poisson statistics.

i. Ultrasound… The experiments introduce the physics of mechanical waves by using ultrasound transducers, a two-dimensional ultrasound target, and an existing ultrasound scanner and transient analyzer to demonstrate wave propagation, attenuation, reflection, refraction, pulse-echo principles, piezoelectric crystals and the concepts of cross-section and spatial resolution.
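
An aside on item h: the vision experiment is a classic Hecht–Shlaer–Pirenne-style measurement. If the number of photons absorbed per flash is Poisson distributed, the probability of seeing a flash is the probability that at least K photons are absorbed, and the steepness of the frequency-of-seeing curve on a logarithmic intensity axis reveals K. Here is a minimal sketch of that calculation (the thresholds and mean photon numbers are illustrative choices, not values from the lab manual):

# Frequency-of-seeing curve: probability that at least K photons are
# absorbed when the mean number absorbed per flash is a (Poisson model).
from math import exp, factorial

def p_see(a, K):
    """P(N >= K) for N ~ Poisson(a): one minus the sum of P(N = k) for k < K."""
    return 1.0 - sum(exp(-a) * a**k / factorial(k) for k in range(K))

# Hecht, Shlaer, and Pirenne estimated a threshold of roughly 5-8 photons.
for K in (2, 6, 10):
    print(f"K = {K:2d}:", [round(p_see(a, K), 3) for a in (1, 2, 5, 10, 20)])

The larger K is, the steeper the curve; matching the measured steepness is how the threshold photon number is extracted from the data.
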
The first time I ever saw my wife was when she was in Wikswo's office asking a question about one of the lab exercises. I needed to talk to him about some very important issue related to my research, and she was in the way! Well, one thing led to another and....

I recall how Shirley and my friend Ranjith Wijesinghe were lab partners doing the vision experiment. It required sitting in a small, dark enclosure for about half an hour while their eyes became adapted to the dark. I had only recently met Shirley, and I recall being jealous of Ranjith for getting to spend such a private time with her! 

One of the most memorable parts of the lab was the pithing of the frog. None of the students liked doing that. Wikswo had a fun way of demonstrating the fight-or-flight response during the electrocardiogram lab. He would measure the ECG on one of the students, and then take out a giant syringe and say something like “now watch what happens to her heart rate when I inject her with this adrenaline.” Of course no one ever got injected, but the student was always so startled that her heart rate would jump dramatically.

If you are considering developing your own laboratory for Intermediate Physics for Medicine and Biology, you could start with Wikswo’s lab, and then add some of the experiments discussed in these American Journal of Physics papers. Good luck!

J. D. Prentice and K. G. McNeill (1962) “Measurement of the Beta Spectrum of I128 in an Undergraduate Laboratory,” American Journal of Physics, Volume 30, Pages 66–67.
Peter J. Limon and Robert H. Webb (1964) “A Magnetic Resonance Experiment for the Undergraduate Laboratory,” American Journal of Physics, Volume 32, Pages 361–364.
L. J. Bruner (1979) “Cardiovascular Simulator for the Undergraduate Physics Laboratory,” American Journal of Physics, Volume 47, Pages 608–611.
H. W. White, P. E. Chumbley, R. L. Berney, and V. H. Barredo (1982) “Undergraduate Laboratory Experiment to Measure the Threshold of Vision,” American Journal of Physics, Volume 50, Pages 448–450.
Colin Delaney and Juan Rodriguez (2002) “A Simple Medical Physics Experiment Based on a Laser Pointer,” American Journal of Physics, Volume 70, Pages 1068–1070.
Danny G. Miles Jr. and David W. Bushman (2005) “Protein Gel Electrophoresis in the Undergraduate Physics Laboratory,” American Journal of Physics, Volume 73, Pages 1186–1189.
Luis Peralta (2006) “A Simple Electron-Positron Pair Production Experiment,” American Journal of Physics, Volume 74, Pages 457–461.
Joseph Peidle, Chris Stokes, Robert Hart, Melissa Franklin, Ronald Newburgh, Joon Pahk, Wolfgang Rueckner, and Aravi Samuel (2009) “Inexpensive Microscopy for Introductory Laboratory Courses,” American Journal of Physics, Volume 77, Pages 931–938.
Timothy A. Stiles (2014) “Ultrasound Imaging as an Undergraduate Physics Laboratory Exercise,” American Journal of Physics, Volume 82, Pages 490–501.
Elliot Mylott, Ellynne Kutschera, and Ralf Widenhorn (2014) “Bioelectrical Impedance Analysis as a Laboratory Activity: At the Interface of Physics and the Body,” American Journal of Physics, Volume 82, Pages 521–528.
Alexander Hyde and Oleg Batishchev (2015) “Undergraduate Physics Laboratory: Electrophoresis in Chromatography Paper,” American Journal of Physics, Volume 83, Pages 1003–1011.
Owen Paetkau, Zachary Parsons, and Mark Paetkau (2017) “Computerized Tomography Platform Using Beta Rays,” American Journal of Physics, Volume 85, Pages 896–900.

Friday, August 21, 2020

Heaps of Precessing Protons

Spin Dynamics,
by Malcolm Levitt.

Last week’s post quoted from Spin Dynamics: Basics of Nuclear Magnetic Resonance, by Malcolm Levitt. This week I’ll talk more about this excellent textbook. Russ Hobbie and I cite Spin Dynamics in Intermediate Physics for Medicine and Biology when relating the proton relaxation time constants T1 and T2 to the correlation time τc. Our Fig. 18.12 shows this relationship in a log-log plot.

Fig. 18.12  Plot of T1 and T2 vs correlation time of the fluctuating magnetic field at the nucleus. The dashed lines are for a Larmor frequency of 29 MHz; the solid lines are for 10 MHz. Experimental points are shown for water (open dot) and ice (solid dots).

What do we mean by the “correlation time”? Levitt explains.

The parameter τc is called the correlation time of the fluctuations. Rapid fluctuations have a small value of τc, while slow fluctuations have a large value of τc. For rotating molecules in a liquid, τc is in the range of tens of picoseconds to several nanoseconds.

Qualitatively, the correlation time indicates how long it takes before the random field changes sign.

In practice, the correlation time depends on the physical parameters of the system, such as the temperature. Generally, correlation times are decreased by warming the sample, since an increase in temperature corresponds to more rapid molecular motion. Conversely, correlation times are increased by cooling the sample.

Levitt presents a plot similar to Fig. 18.12 in IPMB, except on linear-linear rather than log-log axes. 

Adapted from Fig. 16.16 of Spin Dynamics. The T1 relaxation time as a function of the correlation time for random field fluctuations.

His curve is calculated for a static magnetic field of 11.74 T, which corresponds to a Larmor frequency, fLarmor, of 500 MHz (a considerably stronger magnetic field than in our Fig. 18.12). The minimum of the curve is when τc equals the reciprocal of 2πfLarmor, or about 0.32 ns. Levitt writes

It is a fortuitous circumstance that the most common experimental situation in solution NMR, namely medium-size molecules in non-viscous solutions near room temperature, falls close to the T1 minimum. The small values of T1 permit more rapid averaging of NMR signals, and hence a relatively high signal-to-noise ratio within a given experimental time. 

Think of the correlation time as a measure of the molecule’s rotation or tumbling time, characteristic of the molecular environment. One reason magnetic resonance imaging provides such excellent soft tissue contrast is because the relaxation times T1 and T2 are so sensitive to their surroundings. Relaxation happens most quickly when the tumbling time is similar to the period of precession, just as spin flipping is most effective when the radiofrequency field is in resonance with the precessing protons.
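
To make the relationship concrete, here is a small sketch that locates the T1 minimum using the simplified single-term spectral density J(ω) = τc/(1 + (ωτc)²) underlying Fig. 18.12 (Levitt’s full treatment adds a term at 2ω, which shifts the minimum slightly; the point here is only that T1 is shortest when ωτc = 1):

import math

# T1 is minimal where the spectral density J(w) = tau/(1 + (w*tau)^2) is
# maximal, which occurs at w*tau = 1, i.e. tau = 1/(2*pi*f_Larmor).
for f in (10e6, 29e6, 500e6):            # Larmor frequencies from the text, Hz
    tau = 1 / (2 * math.pi * f)          # correlation time at the T1 minimum
    print(f"f = {f/1e6:5.0f} MHz -> tau_c at the T1 minimum = {tau*1e9:6.2f} ns")
# At 500 MHz this gives 0.32 ns, the value quoted above; at 10 and 29 MHz
# (the frequencies of Fig. 18.12) the minima fall at 15.9 and 5.5 ns.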

I like Spin Dynamics, in part because it has its own sound track. Russ and I have a lot of auxiliary stuff associated with Intermediate Physics for Medicine and Biology, but we don’t have a sound track. I’ll have to work on that.

To close, I quote from Levitt’s lyrical introduction to Spin Dynamics. Enjoy!

Commonplace as such experiments have become in our laboratories, I have not yet lost that sense of wonder, and of delight, that this delicate motion should reside in all ordinary things around us, revealing itself only to him who looks for it.
E. M. Purcell, Nobel Lecture, 1952
In December 1945, Purcell, Torrey and Pound detected weak radiofrequency signals generated by the nuclei of atoms in ordinary matter (in fact, about 1 kg of paraffin wax). Almost simultaneously, Bloch, Hansen and Packard independently performed a different experiment in which they observed radio signals from the atomic nuclei in water. These two experiments were the birth of the field we now know as Nuclear Magnetic Resonance (NMR).

Before then, physicists knew a lot about atomic nuclei, but only through experiments on exotic states of matter, such as those found in particle beams, or through energetic collisions in accelerators. How amazing to detect atomic nuclei using nothing more sophisticated than a few army surplus electronic components, a rather strong magnet, and a block of wax!

In his Nobel prize address, Purcell was moved to the poetic description of his feeling of wonder, cited above. He went on to describe how
“in the winter of our first experiments… looking on snow with new eyes. There the snow lay around my doorstep—great heaps of protons quietly precessing in the Earth’s magnetic field. To see the world for a moment as something rich and strange is the private reward for many a discovery…”
In this book, I want to provide the basic theoretical and conceptual equipment for understanding these amazing experiments. At the same time, I want to reinforce Purcell’s beautiful vision—the heaps of snow, concealing innumerable nuclear magnets, in constant precessional motion. The years since 1945 have shown us that Purcell was right. Matter really is like that. My aim in this book is to communicate the rigorous theory of NMR, which is necessary for really understanding NMR experiments, but without losing sight of Purcell’s heaps of precessing protons.

Friday, August 14, 2020

Can T2 Be Longer Than T1?

In Chapter 18 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss magnetic resonance imaging. A key process in MRI is when the magnetization vector M is rotated away from the static magnetic field and is then allowed to relax back to equilibrium. To be specific, let’s assume that the static field is in the z direction, and the magnetization is rotated into the x-y plane. The magnetization Mz along the static field returns to its equilibrium value M0 exponentially with time constant T1. The Mx and My components relax to zero with time constant T2. Russ and I write

The transverse relaxation time [T2] is always shorter than T1. Here is why. A change of Mz requires an exchange of energy with the [thermal] reservoir. This is not necessary for changes confined to the xy plane... Mx and My can change as Mz changes, but they can also change by other mechanisms, such as when individual spins precess at slightly different frequencies, a process known as dephasing.

Is T2 always less than T1? Let me start by giving you the bottom line: T2 is usually less than T1, and for most purposes we can assume T2 < T1. But Russ and I wrote “always,” meaning no exceptions. It’s not always true that T2 < T1.

“Relaxation: Can T2 Be Longer Than T1?”
by Daniel Traficante.

To see why, look at the 1991 article by Daniel Traficante in the journal Concepts in Magnetic Resonance (Volume 3, Pages 171–177), “Relaxation: Can T2 Be Longer Than T1?” Traficante begins by analyzing the relaxation equations introduced in Section 18.4 of IPMB,

      dMx/dt = −Mx/T2     dMy/dt = −My/T2     dMz/dt = (M0 − Mz)/T1 .

If we start at t = 0 with Mx = M0 and My = Mz = 0 (the situation after a 90° radiofrequency pulse), the magnetization is

       Mx = M0 e−t/T2          My = 0                    Mz = M0 (1 − e−t/T1) .

(For the experts, this is correct in the frame of reference rotating with the Larmor frequency.) We are particularly interested in how the magnitude of the magnetization vector |M| changes (or, to avoid taking a square root, how the square of the magnetization changes, M2 = Mx2 + My2 + Mz2). In our example, we find

                M2/M02 = e−2t/T2 + (1 − e−t/T1)2.

Traficante claims that many researchers mistakenly believe that |M| is equal to M0 at all times; the vector simply rotates in the x-z plane, with its tip following the blue dashed arc in each figure below. Figure 18.5 in IPMB proves that Russ and I did not make that mistake. For the usual case when T2 << T1, the x-component decays quickly, while the z-component grows slowly, so |M| starts at M0, quickly shrinks to a small value, and then slowly rises back to M0. In the x-z plane, the tip of M follows the red path shown below. Clearly |M| is always less than M0 (the red curve is well under the blue arc).

The path of the tip of M, for T2 << T1.

If T2 equals T1, Traficante shows that in the x-z plane the tip of M follows a straight line, and again |M| is less than M0.

The path of the tip of M, for T2 = T1.

What if T2 >> T1? Then Mz would rapidly rise to its equilibrium value M0 while Mx would slowly fall to zero. 

The path of the tip of M, for T2 >> T1.

In this case, |M| would become larger than M0 (the red curve passes outside of the blue arc). Traficante argues that an increase in |M| above M0 would be unphysical (I suspect it would violate one of the laws of thermodynamics), so T2 cannot be much larger than T1.

Can T2 be just a little larger than T1? The straight-line plot for T2 = T1 suggests that |M| stays less than M0 with room to spare. I tried to make a new homework problem asking you to find the relation between T1 and T2 that would prevent |M| from ever rising above M0. The analysis was more complicated than I expected, so I skipped the homework problem. Below is my hand-waving argument to find the largest allowed value of T2.

You can use a Taylor series analysis to show that |M| is less than M0 for small times (corresponding to the lower right corner of the plots above), regardless of the values of T1 and T2. For longer times, I’ll suppose that |M| might become larger than M0, but it can’t oscillate back-and-forth, going from smaller to larger to smaller and so on (I haven’t proven this, hence the hand waving). So, what we need to focus on is how |M| (or, equivalently, M2) behaves as t goes to infinity (corresponding to the upper left corner of the plots). If M2 is less than M02 at large times, then it should be less than M02 at all times and we have not violated any laws of physics. If M2 is greater than M02 at large times, then we have a problem.

A little algebra applied to our previous equation gives

                       M2/M02 = 1 + e−2t/T2 + e−2t/T1 − 2e−t/T1 .

At long times, the term with –2t/T1 in the exponent must be smaller than the term with –t/T1, so we can ignore it. That leaves two terms to compete, a positive term with –2t/T2 in the exponent and a negative one with –t/T1. The term with the smaller decay constant will ultimately win, so M2 will never become greater than M02 if T2 < 2T1.

I admit, my argument is complicated. If you see an easier way to prove this, let me know.
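
If you prefer a numerical check to hand waving, a few lines of Python will evaluate M2/M02 = e−2t/T2 + (1 − e−t/T1)2 over a long time grid for several T2/T1 ratios. (The grid length is an arbitrary choice; the excess above 1 just beyond the boundary ratio is tiny, so fine sampling at long times matters.)

import numpy as np

# Evaluate M^2/M0^2 = exp(-2t/T2) + (1 - exp(-t/T1))^2 and find its maximum.
T1 = 1.0                                 # arbitrary units
t = np.linspace(0, 50, 500_001)          # long, finely sampled time grid
for ratio in (0.5, 1.0, 1.9, 2.0, 2.1, 3.0):
    T2 = ratio * T1
    M2 = np.exp(-2 * t / T2) + (1 - np.exp(-t / T1))**2
    print(f"T2/T1 = {ratio:3.1f}: max of M^2/M0^2 = {M2.max():.9f}")
# The maximum stays at or below 1 for T2/T1 <= 2 and creeps above 1 beyond
# that, consistent with the T2 < 2T1 limit argued above.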

Traficante concludes

It is a common misconception that after a pulse, the net magnetization vector simply tips backwards toward the z axis, while maintaining a constant length. Instead, under the normal conditions when T2* [for now, let’s ignore the difference between T2 and T2*] is less than T1, the resultant first shrinks, and then grows back toward its initial value as it tips back toward the z axis. This behavior is clearly shown by examining the basic equations that describe both the decay of the magnetization in the xy plane and its growth up along the z axis. From these equations, the magnitudes of the xy and z components, as well as their [vector] sums, can be calculated as a function of time. This same behavior is demonstrated even when T2* is equal to T1—the resultant still does not maintain a constant value of 1.0 as it tips back. 
The resultant does not exceed 1.0 at any time during the relaxation if the T2/T1 ratio does not exceed 2. However, experimental evidence has been obtained that shows that the ratio can be greater than 1.

Spin Dynamics,
by Malcolm Levitt.

Malcolm Levitt, in his book Spin Dynamics: Basics of Nuclear Magnetic Resonance, comes to the same conclusion.

The following relationship holds absolutely

        T2 < 2 T1 (theoretical limit).

In most cases, however, it is usually found that T2 is less than, or equal to, T1:

        T2 < T1 (usual practical limit).

The case where 2T1 > T2 > T1 is possible but rarely encountered.

In a footnote, Levitt expands on this idea.

The case where T2 > T1 is encountered when the spin relaxation is caused by fluctuating microscopic fields which are predominately transverse rather than longitudinal.

I would like to thank Steven Morgan for calling this issue to my attention. Russ and I now address it in the errata. In general, we appreciate readers finding mistakes in Intermediate Physics for Medicine and Biology. If you find something in our book that looks wrong, please let us know.

Friday, August 7, 2020

The SI Logo

Intermediate Physics for Medicine and Biology uses the metric system. On page 1, Russ Hobbie and I write
“The metric system is officially called the SI system (systeme internationale). It used to be called the MKS (meter kilogram second) system.”
In 2018, the International Bureau of Weights and Measures changed how the seven SI base units are defined. They are now based on seven defining constants. This change is summarized in the SI logo.

The SI logo, produced by the
International Bureau of Weights and Measures.

First let’s see where the seven base units appear in IPMB. Then we’ll examine the seven defining constants.

kilogram

The most basic units of the SI system are so familiar that Russ and I don’t bother defining them. The kilogram (mass, kg) appears throughout IPMB, but especially in Chapter 1, where density plays a major role in our analysis of fluid dynamics.

meter

We define the meter (distance, m) in Chapter 1 when discussing distances and scales: “The basic unit of length in the metric system is the meter (m): about the height of a 3-year-old child.” Both the meter and the kilogram are critical when discussing scaling in Chapter 2.

second

The second (time, s) is another unit that’s so basic Russ and I take it for granted. It plays a particularly large role in Chapter 10 when discussing nonlinear dynamics.

ampere

The SI system becomes more complicated when you add electrical units. IPMB defines the ampere (electrical current, A) in Section 6.8 about current and Ohm’s law: “The units of the current are C s−1 [C is the unit of charge, a coulomb] or amperes (A) (sometimes called amps).”

kelvin

The unit for absolute temperature—the kelvin (temperature, K)—plays a central role in Chapter 3 of IPMB, when describing thermodynamics.

mole

The mole (number of molecules, mol) appears in Chapter 3 when relating microscopic quantities (Boltzmann’s constant, elementary charge) to macroscopic quantities (the gas constant, the Faraday). John Wikswo and I have introduced a name for a mole of differential equations (the leibniz), but the International Bureau of Weights and Measures inexplicably did not add it to their logo.

candela

Russ and I introduce the candela (luminous intensity, cd) in Section 14.12 of IPMB, when comparing radiometry to photometry: “The number of lumens per steradian is the luminous intensity, in lm sr−1. The lumen per steradian is also called the candela.” The steradian (the unit of solid angle) used to play a more central role in the SI system, but appears to have been demoted.
Now we examine the seven constants that define these units.

Planck’s constant

In IPMB, the main role of Planck’s constant (h, 6.626 × 10−34 J s) is to relate the frequency and energy of a photon. Quantum mechanics doesn’t play a major role in IPMB, so Planck’s constant appears less often than you might expect.

speed of light

Like quantum mechanics, relativity does not take center stage in IPMB, so the speed of light (c, 2.998 × 108 m s−1) appears rarely. We use it in Chapter 14 when relating the frequency of light to its wavelength, and in Chapter 17 when relating the mass of an elementary particle to its energy.

cesium hyperfine frequency

The cesium hyperfine frequency (Δν, 9.192 × 109 Hz) defines the second. It never appears in IPMB. Why cesium? Why this particular atomic transition? I don’t know.

elementary charge

The elementary charge (e, 1.602 × 10−19 C) is used throughout IPMB, but is particularly important in Chapter 6 about bioelectricity.

Boltzmann’s constant

Boltzmann’s constant (kB, 1.381 × 10−23 J K−1) appears primarily in Chapter 3 of IPMB, but also anytime Russ and I mention the Boltzmann factor.

Avogadro’s number

Like Boltzmann’s constant, Avogadro’s number (NA, 6.022 × 1023 mol−1) shows up first in Chapter 3.

luminous efficacy

The luminous efficacy (Kcd, 683 lm W−1) appears in Chapter 14 of IPMB: “The ratio Pv/P at 555 nm is the luminous efficacy for photopic vision, Km = 683 lm W−1.” I find this constant to be different from all the others. It’s a prime number specified to only three digits. Suppose a society of intelligent beings evolved on another planet. Their physicists would probably measure a set of constants similar to ours, and once we figured out how to convert units we would get the same values for six of the constants. The luminous efficacy, however, would depend on the physiology of their eyes (assuming they even have eyes). Perhaps I make too much about this. Perhaps the luminous efficacy merely defines the candela, just as Avogadro’s number defines the mole and Boltzmann’s constant defines the kelvin. Still, to me it has a different feel.
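
For reference, the redefinition fixes these seven constants to exact values. Here they are gathered in one place; the dictionary layout is just for illustration, but the numbers are the exact defined ones.

# The seven exact defining constants of the revised SI.
SI_DEFINING_CONSTANTS = {
    "cesium hyperfine frequency": (9_192_631_770,     "Hz"),
    "speed of light":             (299_792_458,       "m s^-1"),
    "Planck constant":            (6.626_070_15e-34,  "J s"),
    "elementary charge":          (1.602_176_634e-19, "C"),
    "Boltzmann constant":         (1.380_649e-23,     "J K^-1"),
    "Avogadro constant":          (6.022_140_76e23,   "mol^-1"),
    "luminous efficacy":          (683,               "lm W^-1"),
}

for name, (value, unit) in SI_DEFINING_CONSTANTS.items():
    print(f"{name:27s} = {value} {unit}")
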
You can learn more about the SI units and constants in the International Bureau of Weights and Measures’ SI brochure. I’m fond of the SI logo, which reminds me of the circle of fifths. If you’re new to the metric system, you might want to paste the logo into your copy of Intermediate Physics for Medicine and Biology; I suggest placing it in the white space on page 1, just above Table 1.1.

Page 1 of Intermediate Physics for Medicine and Biology,
with the SI Logo added at the top.

Friday, July 31, 2020

Free Convection and the Origin of Life

Free convection is an important process in fluid dynamics. Yet Russ Hobbie and I rarely discuss it in Intermediate Physics for Medicine and Biology. It appears only once, in a homework problem analyzing Rayleigh–Bénard convection cells.

How does free convection work? If water is heated from below, it expands as it becomes hotter, reducing its density. Less dense water is buoyant and rises. As the water moves away from the source of heat, it cools, becomes denser, and sinks. The process then repeats. The fluid flow caused by all this rising, sinking, heating, and cooling is what’s known as free convection. One reason Russ and I don’t dwell on this topic is that the body is nearly isothermal, and you need a temperature gradient to drive convection.

“Thermal Habitat for RNA Amplification and Accumulation,”
by Salditt et al. (Phys. Rev. Lett., 125:048104, 2020).
Is free convection ever important in biology? According to a recent article in Physical Review Letters (Volume 125, Article Number 048104) by Annalena Salditt and her coworkers (“Thermal Habitat for RNA Amplification and Accumulation”), free convection may be responsible for the origin of life!

Many scientists believe early life was based on ribonucleic acid, or RNA, rather than DNA and proteins. RNA replication is aided by temperature oscillations, which allow the double-stranded RNA to separate and make complementary copies (hot), and then accumulate without being immediately degraded (cold). Molecules moving with water during free convection undergo such a periodic heating and cooling. One more process is needed, called thermophoresis, which causes long strands of RNA to move from hot to cold regions preferentially compared to short strands. Salditt et al. write
The interplay of convective and thermophoretic transport resulted in a length-dependent net transport of molecules away from the warm temperature spot. The efficiency of this transport increased for longer RNAs, stabilizing them against cleavage that would occur at higher temperatures.
Where does free convection happen? Around hydrothermal vents at the bottom of the ocean.
A natural setting for such a heat flow could be the dissipation of heat across volcanic or hydrothermal rocks. This leads to temperature differences over porous structures of various shapes and lengths.
The authors conclude
The search for the origin of life implies finding a location for informational molecules to replicate and undergo Darwinian evolution against entropic obstacles such as dilution and spontaneous degradation. The experiments described here demonstrate how a heat flow across a millimeter-sized, water-filled porous rock can lead to spatial separation of molecular species resulting in different reaction conditions for different species. The conditions inside such a compartment can be tuned according to the requirements of the partaking molecules due to the scalable nature of this setting. A similar setting could have driven both the accumulation and RNA-based replication in the emergence of life, relying only on thermal energy, a plausible geological energy source on the early Earth. Current forms of RNA polymerase ribozymes can only replicate very short RNA strands. However, the observed thermal selection bias toward long RNA strands in this system could guide molecular evolution toward longer strands and higher complexity.
You can learn more about this research from a focus article in Physics, an online magazine published by the American Physical Society.

Salditt et al.’s article provides yet another example of why I find the interface of physics and biology so fascinating.

Friday, July 24, 2020

Tests for Human Perception of 60 Hz Moderate Strength Magnetic Fields

The first page of “Tests for Human Perception
of 60 Hz Moderate Strength Magnetic Fields,”
by Tucker and Schmitt (IEEE Trans. Biomed. Eng.
25:509-518, 1978).
In Chapter 9 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss possible effects of weak external electric and magnetic fields on the body. In a footnote, we write
Foster (1996) reviewed many of the laboratory studies and described cases where subtle cues meant the observers were not making truly “blind” observations. Though not directly relevant to the issue under discussion here, a classic study by Tucker and Schmitt (1978) at the University of Minnesota is worth noting. They were seeking to detect possible human perception of 60-Hz magnetic fields. There appeared to be an effect. For 5 years they kept providing better and better isolation of the subject from subtle auditory clues. With their final isolation chamber, none of the 200 subjects could reliably perceive whether the field was on or off. Had they been less thorough and persistent, they would have reported a positive effect that does not exist.
In this blog, I like to revisit articles that we cite in IPMB.
Robert Tucker and Otto Schmitt (1978) “Tests for Human Perception of 60 Hz Moderate Strength Magnetic Fields.” IEEE Transactions on Biomedical Engineering, Volume 25, Pages 509-518.
The abstract of their paper states
After preliminary experiments that pointed out the extreme cleverness with which perceptive individuals unintentionally used subtle auxiliary clues to develop impressive records of apparent magnetic field detection, we developed a heavy, tightly sealed subject chamber to provide extreme isolation against such false detection. A large number of individuals were tested in this isolation system with computer randomized sequences of 150 trials to determine whether they could detect when they were, and when they were not, in a moderate (7.5-15 gauss rms) alternating magnetic field, or could learn to detect such fields by biofeedback training. In a total of over 30,000 trials on more than 200 persons, no significantly perceptive individuals were found, and the group performance was compatible, at the 0.5 probability level, with the hypothesis that no real perception occurred.
The Tucker-Schmitt study illustrates how observing small effects can be a challenge. Their lesson is valuable, because many weak-field experiments are subject to systematic errors that provide an illusion of a positive result. Near the start of their article, Tucker and Schmitt write
We quickly learned that some individuals are incredibly skillful at sensing auxiliary non-magnetic clues, such as coil hum associated with field, so that some “super perceivers” were found who seemed to sense the fields with a statistical probability as much as 10−30 against happening by chance. A vigorous campaign had then to be launched technically to prevent the subject from sensing “false” clues while leaving him completely free to exert any real magnetic perceptiveness he might have.
Few authors are as forthright as Tucker and Schmitt when recounting early, unsuccessful experiments. Yet, their tale shows how experimental scientists work.
Early experiments, in which an operator visible to the test subject controlled manually, according to a random number table, whether a field was to be applied or not, alerted us to the necessity for careful isolation of the test subject from unintentional clues from which he could consciously, or subconsciously, deduce the state of coil excitation. No poker face is good enough to hide, statistically, knowledge of a true answer, and even such feeble clues as changes in building light, hums, vibrations and relay clatter are converted into low but significant statistical biases.
IPMB doesn’t teach experimental methods, but all scientists must understand the difference between systematic and random errors. Uncertainty from random errors is suppressed by taking additional data, but eliminating systematic errors may require you to redesign your experiment.
In a first round of efforts to prevent utilization of such clues, the control was moved to a remote room and soon given over to a small computer. A “fake” air-core coil system, remotely located but matched in current drain and phase angle to the real large coil system was introduced as a load in the no-field cases. An acoustically padded cabinet was introduced to house the experimental subject, to isolate him from sound and vibration. Efforts were also made to silence the coils by clamping them every few centimeters with plastic ties and by supporting them on air pocket packing material. We tried using masking sound and vibrations, but soon realized that this might also mask real perception of magnetic fields.
Designing experiments is fun; you get to build stuff in a machine shop! I imagine Tucker and Schmitt didn’t expect they would have this much fun. Their initial efforts being insufficient, they constructed an elaborate cabinet to perform their experiments in.
This cabinet was fabricated with four layers of 2 in plywood, full contact epoxy glued and surface coated into a monolithic structure with interleaved corners and fillet corner reinforcement to make a very rigid heavy structure weighing, in total, about 300 kg. The structure was made without ferrous metal fastening and only a few slender brass screws were used. The door was of similar epoxyed 4-ply construction but faced with a thin bonded melamine plastic sheet. The door was hung on two multi-tongue bakelite hinges with thin brass pins. The door seals against a thin, closed-cell foam-rubber gasket, and is pressure sealed with over a metric ton of force by pumping a mild vacuum inside the chamber by means of a remote acoustically silenced hose-connected large vacuum-cleaner blower. The subject received fresh air through a small acoustic filter inlet leak that also assures sufficient air flow to cool the blower. The chosen “cabin altitude” at about 2500 ft above ambient presented no serious health hazard and was fail-safe protected.
An experimental scientist must be persistent. I remember learning that lesson as a graduate student when I tried for weeks to measure the magnetic field of a single nerve axon. I scrutinized every part of the experiment and fixed every problem I could find, but I still couldn’t measure an action current. Finally, I realized the coaxial cable connecting the nerve to the stimulator was defective. It was a rookie mistake, but I was tenacious and ultimately figured it out. Tucker and Schmitt personify tenacity.
As still more isolation seemed necessary to guarantee practically complete exclusion of auxiliary acoustic and mechanical clues, an extreme effort was made to improve, even further, the already good isolation. The cabinet was now hung by aircraft “Bungee” shock cord running through the ceiling to roof timbers. The cabinet was prevented from swinging as a pendulum by four small non-load-bearing lightly inflated automotive type inner tubes placed between the floor and the cabinet base. Coils already compliantly mounted to isolate intercoil force vibration were very firmly reclamped to discourage intracoil “buzzing.” The cabinet was draped inside with sound absorbing material and the chair for the subject shock-mounted with respect to the cabinet floor. The final experiments, in which minimal perception was found, were done with this system.
Once Tucker and Schmitt heroically eliminated even the most subtle cues about the presence of a magnetic field, subjects could no longer detect whether or not a magnetic field was present. People can’t perceive 60-Hz, 0.0015-T magnetic fields.

Russ and I relegate this tale to a footnote, but it’s an important lesson when analyzing the effects of weak electric and magnetic fields. Small systematic errors abound in these experiments, both when studying humans and when recording from cells in a dish. Experimentalists must ruthlessly design controls that can compensate for or eliminate confounding effects. The better the experimentalist, the more doggedly they root out systematic errors. One reason the literature on the biological effects of weak fields is so mixed may be that few experimentalists take the time to eradicate all sources of error.

Tucker and Schmitt’s experiment is a lesson for us all.

Friday, July 17, 2020

Physics World: Medical Physics

I subscribe to a weekly newsletter from Physics World about medical physics. This newsletter and its associated website (physicsworld.com/c/medical-physics) replace what used to be medicalphysicsweb.org. Like medicalphysicsweb, the newsletter is edited by Tami Freeman, which means the quality remains high. It’s one of the best ways to learn what’s new in medical physics.

On the website you find videos, podcasts, research updates, webinars, interviews, career advice, and job ads related to medical physics. You may find it almost as useful as hobbieroth.blogspot.com! Seriously, it has more and better content than this blog, but I suspect it has more resources behind it. In any event, both cost you the same: nothing. Sign up for an account at Physics World, then subscribe to the medical physics weekly newsletter. You won’t regret it.

Below is a sampler; some videos from Physics World that readers of Intermediate Physics for Medicine and Biology might find useful or interesting. Enjoy!

What are the benefits of proton therapy?

Reality check: Covid-19 and UV disinfection.

How neutrons can help in the Covid-19 pandemic.

The curious case of the porpoises and the wind farm.

Faces of physics: human organs on a chip.

Friday, July 10, 2020

An S1 Gradient of Refractoriness is Not Essential for Reentry Induction by an S2 Stimulus

Sometimes the shortest papers are my favorites. Take, for example, an article that I published twenty years ago last month: a two-page communication in the IEEE Transactions on Biomedical Engineering titled “An S1 Gradient of Refractoriness is Not Essential for Reentry Induction by an S2 Stimulus” (Volume 47, Pages 820–821, 2000). It analyzes the electrical stimulation of cardiac tissue, and focuses on the mechanism for inducing an arrhythmia.

The introduction is two short paragraphs (a mere hundred words). The first puts the work in context.
Successive stimulation (S1, then S2) of cardiac tissue can induce reentry. In many cases, an S1 stimulus triggers a propagating action potential that creates a gradient of refractoriness. The S2 stimulus then interacts with this S1 refractory gradient, causing reentry. Many theoretical and experimental studies of reentry induction are variations on this theme [1]–[9].
When I wrote this communication, the critical point hypothesis was a popular explanation for how to induce reentry in cardiac tissue. I cited nine papers discussing this hypothesis, but I associate it primarily with the books of Art Winfree and the experiments of Ray Ideker.
The critical point hypothesis. Top: the S1 wave front just before the S2 stimulus. Bottom: the tissue just after the S2 stimulus, and the resulting reentry.
The figure above illustrates the critical point hypothesis. A first (S1) stimulus is applied to the right edge of the tissue, launching a planar wavefront that propagates to the left (arrow). By the time of the upper snapshot, the tissue on the right (purple) has returned to rest and recovered excitability, while the tissue on the left (red) remains refractory. The green line represents the boundary between refractory and excitable regions: the line of critical refractoriness.

The lower snapshot is immediately after a second (S2) stimulus is applied through a central cathode (black dot). The tissue near the cathode experiences a strong stimulus above threshold (yellow), while the remaining tissue experiences a weak stimulus below threshold. The green curve represents the boundary between the above-threshold and below-threshold regions: the circle of critical stimulus. S2 only excites tissue that is excitable and has a stimulus above threshold (inside the circle on the right). It launches a wave front that propagates to the right, but cannot propagate to the left because of refractoriness. Only when the refractory tissue recovers excitability will the wave front begin to propagate leftward (curved arrow). Critical points (blue dots) are located where the line of critical refractoriness intersects the circle of critical stimulus. Two spiral waves—a type of cardiac arrhythmia where a wave front circles around a critical point, chasing its tail—rotate clockwise on the bottom and counterclockwise on the top.

A beautiful paper from Ideker’s lab provides evidence supporting the critical point hypothesis: N. Shibata, P.-S. Chen, E. G. Dixon, P. D. Wolf, N. D. Danieley, W. M. Smith, and R. E. Ideker (1988) “Influence of Shock Strength and Timing on Induction of Ventricular Arrhythmias in Dogs,” American Journal of Physiology, Volume 255, Pages H891–H901.

The second paragraph of my communication begins with a question.
Is the S1 gradient of refractoriness essential for the induction of reentry? In this communication, my goal is to show by counterexample that the answer is no. In my numerical simulation, the transmembrane potential is uniform in space before the S2 stimulus. Nevertheless, the stimulus induces reentry.
The critical point hypothesis implies the answer is yes; without a refractory gradient there is no line of critical refractoriness, no critical point, no spiral wave, no reentry. Yet I claimed that the gradient of refractoriness is not essential. To explain why, we must consider what happens following the second stimulus.
Cathode break excitation, and the resulting quatrefoil reentry.
The tissue is depolarized (D, yellow) under the cathode but is hyperpolarized (H, purple) in adjacent regions along the fiber direction on each side of the cathode, often called virtual anodes. Hyperpolarization lowers the membrane potential toward rest, shortening the refractory period (deexcitation) and carving out an excitable path. When S2 ends, the depolarization under the cathode diffuses into the newly excitable tissue (dashed arrows), launching a wave front that propagates initially in the fiber direction (solid arrows): break excitation. Only after the surrounding tissue recovers excitability does the wave front begin to rotate back, as if there were four critical points: quatrefoil reentry.

Russ Hobbie and I discuss break excitation in a homework problem in Chapter 7 of Intermediate Physics for Medicine and Biology.
Problem 48. During stimulation of cardiac tissue through a small anode, the tissue under the electrode and in the direction perpendicular to the myocardial fibers is hyperpolarized, and adjacent tissue on each side of the anode parallel to the fiber direction is depolarized. Imagine that just before this stimulus pulse is turned on the tissue is refractory. The hyperpolarization during the stimulus causes the tissue to become excitable. Following the end of the stimulus pulse, the depolarization along the fiber direction interacts electrotonically with the excitable tissue, initiating an action potential (break excitation). (This type of break excitation is very different than the break excitation analyzed on page 181.)
(a) Sketch pictures of the transmembrane potential distribution during the stimulus. Be sure to indicate the fiber direction, the location of the anode, the regions that are depolarized and hyperpolarized by the stimulus, and the direction of propagation of the resulting action potential.
(b) Repeat the analysis for break excitation caused by a cathode instead of an anode. For a hint, see Wikswo and Roth (2009).
Now we come to the main point of the communication; the reason I wrote it. Look at the first snapshot in the illustration above, the one labeled S1 that occurs just before the S2 stimulus. The tissue is all red. It is uniformly refractory. The S1 action potential has no gradient of refractoriness, yet reentry occurs. This is the counterexample that proves the point: a gradient of refractoriness is not essential.

The communication contains one figure, showing the results of a calculation based on the bidomain model. The time in milliseconds after S1 is in the upper right corner of each panel. S1 was applied uniformly to the entire tissue, so at 70 ms the refractoriness is uniform. The 80 ms frame is during S2. Subsequent frames show break excitation and the development of reentry.

An illustration based on Fig. 1 in “An S1 Gradient of Refractoriness is Not Essential for Reentry Induction by an S2 Stimulus” (IEEE TBME, 47:820–821, 2000). It is the same as the figure in the communication, except the color and quality are improved.
The communication concludes:
My results support the growing realization that virtual electrodes, hyperpolarization, deexcitation, and break stimulation may be important during reentry induction [8], [9], [14], [15], [21]–[24]. An S1 gradient of refractoriness may underlie reentry induction in many cases [1]–[6], but this communication provides a counterexample demonstrating that an S1 gradient of refractoriness is not necessary in every case.
This is a nice calculation, but is it consistent with experiment? Look at Y. Cheng, V. Nikolski, and I. R. Efimov (2000) “Reversal of Repolarization Gradient Does Not Reverse the Chirality of Shock-Induced Reentry in the Rabbit Heart,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 998–1007. These researchers couldn’t produce uniform refractoriness, so they did the next best thing: repeated the experiment using S1 wave fronts propagating in different directions. They always obtained the same result, independent of the location and timing of the critical line of refractoriness.

Does this calculation mean the critical point hypothesis is wrong? No. See my paper with Natalia Trayanova and her student Annette Lindblom (“The Role of Virtual Electrodes in Arrhythmogenesis: Pinwheel Experiment Revisited,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 274-285, 2000) to examine how this view of reentry can be reconciled with the critical point hypothesis.

One of the best things about this calculation is that you don’t need a fancy computer to demonstrate that the S1 gradient of refractoriness is not essential; a simple cellular automaton will do. The figure below sums it up (look here if you don’t understand).

A cellular automaton demonstrating that an S1 gradient of refractoriness is not essential for reentry induction by an S2 stimulus.
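
If you want to experiment yourself, here is a minimal sketch of such a cellular automaton (a Greenberg–Hastings-style excitable medium). The grid size, refractory period, and S2 geometry are illustrative assumptions, and the boundaries are periodic for simplicity, so take it as a toy model rather than a reproduction of the figure above.

import numpy as np

# Excitable-medium cellular automaton. States: E = excited for one step;
# E-1 down to 1 = refractory countdown; 0 = rest (excitable).
N, E = 100, 10
grid = np.full((N, N), 5)                # S1: every cell uniformly refractory

# S2: "virtual anodes" deexcite two lobes back to rest, while the central
# (cathodal) region is excited.
grid[40:60, 20:40] = 0                   # left lobe, now excitable
grid[40:60, 60:80] = 0                   # right lobe, now excitable
grid[45:55, 40:60] = E                   # depolarized tissue under the cathode

def step(g):
    new = g.copy()
    new[g > 0] -= 1                      # excited and refractory cells count down
    excited = (g == E)
    neighbor = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
    new[(g == 0) & neighbor] = E         # resting cells fire next to excited ones
    return new

for t in range(201):
    if t % 25 == 0:
        print(f"t = {t:3d}, excited cells = {int((grid == E).sum())}")
    grid = step(grid)
# Activity that persists long after the uniform S1 refractoriness (5 steps)
# has expired indicates reentry, even though there was never an S1 gradient.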

Friday, July 3, 2020

Dreyer’s English

Dreyer’s English,
by Benjamin Dreyer.
In this blog I’ve reviewed several books about writing (On Writing Well, Plain Words, Do I Make Myself Clear?). I do this because many readers of Intermediate Physics for Medicine and Biology will become writers of scientific articles, grant proposals, or textbooks. Today, I review the funniest of these books: Dreyer’s English: An Utterly Correct Guide to Clarity and Style. If you believe a book about writing must be dull, read Dreyer’s English; you’ll change your mind.

At the start of his book, Benjamin Dreyer writes
Here’s your first challenge: Go a week without writing
• Very
• Rather
• Really
• Quite
• In fact
And you can toss in—or, that is, toss out—“just” (not in the sense of “righteous” but in the sense of “merely”) and “so” (in the “extremely” sense, though as conjunctions go it’s pretty disposable too).

Oh yes: “pretty.” As in “pretty tedious.” Or “pretty pedantic.” Go ahead and kill that particular darling.

And “of course.” That’s right out. And “surely.” And “that said.”

And “actually”? Feel free to go the rest of your life without another “actually.”

If you can last a week without writing any of what I’ve come to think of as the Wan Intensifiers and Throat Clearers—I wouldn’t ask you to go a week without saying them; that would render most people, especially British people, mute—you will at the end of that week be a considerably better writer than you were at the beginning.
Let’s go through Intermediate Physics for Medicine and Biology and see how often Russ Hobbie and I use these empty words.

Very

I tried to count how many times Russ and I use “very” in IPMB. I thought using the pdf file and search bar would make this simple. However, when I reached page 63 (a tenth of the way through the book) with 30 “very”s I quit counting, exhausted. Apparently “very” appears about 300 times.
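
(Incidentally, a few lines of Python make this sort of census painless; word-boundary anchors avoid the substring problem I describe under “So” below. The file name here is hypothetical; use any plain-text extraction of the book.)

import re

# Count whole-word occurrences of Dreyer's wan intensifiers in a text file.
words = ["very", "rather", "really", "quite", "just", "so", "pretty", "actually"]
text = open("ipmb.txt").read().lower()   # hypothetical text dump of the book
for w in words:
    count = len(re.findall(rf"\b{w}\b", text))
    print(f"{w:10s} {count:5d}")
# Two-word phrases like "in fact" and "of course" work the same way:
# len(re.findall(r"\bin fact\b", text)).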

Sometimes our use of “very” is unnecessary. For instance, “Biophysics is a very broad subject” would sound better as “Biophysics is a broad subject,” and “the use of a cane can be very effective” would be more succinct as “the use of a cane can be effective.” In some cases, we want to stress that something is extremely small, such as “the nuclei of atoms (Chap. 17) are very small, and their sizes are measured in femtometers (1 fm = 10−15 m).” If I were writing the book again, I would consider replacing “very small” by “tiny.” In other cases, a “very” seems justified to me, as in “the resting concentration of calcium ions, [Ca++], is about 1 mmol l−1 in the extracellular space but is very low (10−4 mmol l−1) inside muscle cells,” because inside the cell the calcium concentration is surprisingly low (maybe we should have replaced “very” by “surprisingly”). Finally, sometimes we use “very” in the sense of finding the limit of a function as a variable goes to zero or infinity, as in “for very long pulses there is a minimum current required to stimulate that is called rheobase.” To my ear, this is a legitimate “very” (if infinity isn’t very big, then nothing is). Nevertheless, I concede that we could delete most “very”s and the book would be improved.

Rather

I counted 33 “rather”s in IPMB. Usually Russ and I use “rather” in the sense of “instead” (“this rather than that”), as in “the discussion associated with Fig. 1.5 suggests that torque is taken about an axis, rather than a point.” I’m assuming Dreyer won’t object to this usage (but you know what happens when you assume...). Only occasionally do we use “rather” in its rather annoying sense: “the definition of a microstate of a system has so far been rather vague,” and “this gives a rather crude image, but we will see how to refine it.”

Really

Russ and I do really well, with only seven “really”s. Dreyer or no Dreyer, I’m not getting rid of the first one: “Finally, thanks to our long-suffering families. We never understood what these common words really mean, nor the depth of our indebtedness, until we wrote the book.”

Quite

I quit counting “quite” part way through IPMB. The first half contains 33, so we probably have sixty to seventy in the whole book. Usually we use “quite” in the sense of “very”: “in the next few sections we will develop some quite remarkable results from statistical mechanics,” or “there is, of course, something quite unreal about a sheet of charge extending to infinity.” These could be deleted with little loss. I would keep this one: “while no perfectly selective channel is known, most channels are quite selective,” because, in fact, I’m really quite amazed how so very selective these channels are. I would also keep “the lifetime in the trapped state can be quite long—up to hundreds of years,” because hundreds of years for a trapped state! Finally, I’m certain our students would object if we deleted the “quite” in “This chapter is quite mathematical.”

In Fact

I found only 24 “in fact”s, which isn’t too bad. One’s in a quote, so it’s not our fault. All the rest could go. The worst one is “This fact is not obvious, and in fact is true only if…”. Way too much “fact.”

Just

Russ and I use “just” a lot. I found 39 “just”s in the first half of the book, so we probably have close to eighty in all. Often we use “just” in a way that is neither “righteous” nor “merely,” but closer to “barely.” For instance, “the field just outside the cell is roughly the same as the field far away.” I don’t know what Dreyer would say, but this usage is just alright with me.

So

Searching the pdf for “so” was difficult; I found every “also,” “some,” “absorb,” “solute,” “solution,” “sodium,” “source,” and a dozen other words. I’m okay (and so is Dreyer) with “so” being used as a conjunction to mean “therefore,” as in “only a small number of pores are required to keep up with the rate of diffusion toward or away from the cell, so there is plenty of room on the cell surface for many different kinds of pores and receptor sites.” I also don’t mind the “so much…that” construction, such as “the distance 0.1 nm (100 pm) is used so much at atomic length scales that it has earned a nickname: the angstrom.” I doubt Russ and I ever use “so” in the sense of “dude, you are so cool,” but I got tired of searching so I’m not sure.

Pretty

Only one “pretty”: “It is interesting to compare the spectral efficiency function with the transmission of light through 2 cm of water (Fig. 14.36). The eye’s response is pretty well centered in this absorption window.” We did a pretty good job with this one.

Of Course

I didn’t expect to find many “of course”s in our book, but there are fourteen of them. For example, “both assumptions are wrong, of course, and later we will improve upon them.” I hope, of course, that readers are not offended by this. We could do without most or all of them.

Surely

None. Fussy Mr. Dreyer surely can’t complain.

That Said

None.

Actually

I thought Russ and I would do okay with “actually,” but no; we have 38 of them. Dreyer says that “actually…serves no purpose I can think of except to irritate.” I’m not so sure. We sometimes use it in the sense of “you expect this, but actually get that.” For example, “the total number of different ways to arrange the particles is N! But if the particles are identical, these states cannot be distinguished, and there is actually only one microstate,” and “we will assume that there is no buildup of concentration in the dialysis fluid… (Actually, proteins cause some osmotic pressure difference, which we will ignore.)” Dreyer may not see its purpose, but I actually think this usage is justified. I admit, however, that it’s a close call, and most “actually”s could go.


Books I keep on my desk
(except for Dreyer’s English, which is a
library copy; I need to buy my own).
I was disappointed to find so many appearances of “very,” “rather,” “really,” “quite,” “in fact,” “just,” “so,” “pretty,” “of course,” “surely,” “that said,” and “actually” in Intermediate Physics for Medicine and Biology. We must do better.

Dreyer concludes
For your own part, if you can abstain from these twelve terms for a week, and if you read not a single additional word of this book—if you don’t so much as peek at the next page—I’ll be content.
 The next page says
Well, no.

But it sounded good.