Thursday, April 30, 2020

Radiation Oncology: A Physicist's Eye View

I miss Oakland University’s library. It’s been locked up now for about six weeks because of the coronavirus pandemic, so I can’t check out books. As the physics department’s liaison to the library, I’ve used our meager annual allocation to purchase many of the books cited in Intermediate Physics for Medicine and Biology. I enjoy browsing through the stacks more than I realized.

I now better appreciate online access to textbooks. IPMB can be downloaded through the OU library catalog at no charge for anyone logging on as an OU student, faculty, or staff member. I have access to other books online, and the number seems to be increasing every year.

Radiation Oncology: A Physicist's Eye View, by Michael Goitein, superimposed on Intermediate Physics for Medicine and Biology.

For example, I can download a textbook cited in Chapter 16 (Medical Uses of X-Rays) of IPMB.
Goitein M (2008) Radiation Oncology: A Physicist’s Eye View. Springer, New York.
It’s an excellent book, and you may hear more about it from me in the coming weeks. The preface begins:
This book describes how radiation is used in the treatment of cancer. It is written from a physicist’s perspective, describing the physical basis for radiation therapy, and does not address the medical rationale or clinical aspects of such treatments. Although the physics of radiation therapy is a technical subject, I have used, to the extent possible, non-technical language. My intention is to give my readers an overview of the broad issues and to whet their appetite for more detailed information, such as is available in textbooks.
“Whetting their appetite” is also a goal of IPMB, and of this blog. In fact, education in general is a process of whetting the appetite of readers so they will go learn more on their own.

I particularly like Michael Goitein’s chapter on uncertainty. The lessons it provides apply far beyond radiation oncology; indeed, they are never more relevant than during a pandemic. Enjoy!

UNCERTAINTY MUST BE MADE EXPLICIT

ISO [International Organization for Standardization] (1995) states that “the result of a measurement [or calculation] … is complete only when accompanied by a statement of uncertainty.” Put more strongly, a measured or computed value which is not accompanied by an uncertainty estimate is meaningless. One simply does not know what to make of it. For reasons which I do not understand, and vehemently disapprove of, the statement of uncertainty in the clinical setting is very often absent. And, when one is given, it is usually unaccompanied by the qualifying information as to the confidence associated with the stated uncertainty interval—which largely invalidates the statement of uncertainty.

The importance of first estimating and then providing an estimate of uncertainty has led me to promulgate the following law:
Goitein's First Law of Uncertainty.
There is simply no excuse for violating either part of Law number 1. The uncertainty estimate may be generic, based on past experience with similar problems; it may be a rough “back-of-the-envelope” calculation; or it may be the result of a detailed analysis of the particular measurement. Sometimes it will be sufficient to provide an umbrella statement such as “all doses have an associated confidence interval of ±2% (SD) unless otherwise noted.” In any event, the uncertainty estimate should never be implicit; it should be stated.

In graphical displays such as that of a dose distribution in a two-dimensional plane, the display of uncertainty can be quite challenging. This is for two reasons. First, it imposes an additional dimension of information which must somehow be graphically presented. And second because, in the case, for example, of the value of the dose at a point, the uncertainty may be expressed as either a numerical uncertainty in the dose value, or as a positional uncertainty in terms of the distance of closest approach. One approach to the display of dose uncertainty is shown in Figure 6.4 of Chapter 6.

HOW TO DEAL WITH UNCERTAINTY

To act in the face of uncertainty is to accept risk. Of course, deciding not to act is also an action, and equally involves risk. One’s decision as to what action to take, or not to take, should be based on the probability of a given consequence of the action and the importance of that consequence. In medical practice, it is particularly important that the importance assigned to a particular consequence is that of the patient, and not his or her physician. I know a clinician who makes major changes in his therapeutic strategy because of what I consider to be a trivial cosmetic problem. Of course, some patients might not find it trivial at all. So, since he assumes that all patients share his concern, I judge that he does not reflect the individual patient’s opinion very well. Parenthetically, it is impressive how illogically most of us perform our risk analyses, accepting substantial risks such as driving to the airport while refusing other, much smaller ones, such as flying to Paris (Wilson and Crouch, 2001). (I hasten to add that I speak here of the risk of flying, not that of being in Paris.)

People are often puzzled as to how to proceed once they have analyzed and appreciated the full range of factors which make a given value uncertain. How should one act in the face of the uncertainty? Luckily there is a simple answer to this conundrum, which is tantamount to a tautology. Even though it may be uncertain, the value that you should use for some quantity as a basis for action is your best estimate of that quantity. It’s as simple as that. You should plunge ahead, using the measured or estimated value as though it were the “truth”. There is no more correct approach; one has to act in accordance with the probabilities. To reinforce this point, here is my second law:
Goitein's Second Law of Uncertainty.
It may seem irresponsible to promote gambling when there are life-or-death matters for a patient at stake; the word has bad connotations. But in life, since almost everything is uncertain, we in fact gamble all the time. We assess probabilities, take into account the risks, and then act. We have no choice. We could not walk through a doorway if it were otherwise. And that is what we must do in the clinic, too. We cannot be immobilized by uncertainty. We must accept its inevitability and make the best judgment we can, given the state [of] our knowledge.

Wednesday, April 29, 2020

The Toroid Illustration (Fig. 8.26)

In Chapter 8 (Biomagnetism) of Intermediate Physics for Medicine and Biology, Russ Hobbie and I show an illustration of a nerve axon threaded through a magnetic toroid to measure its magnetic field (Fig. 8.26).
Fig. 8.26. A nerve cell preparation is threaded through the magnetic
toroid to measure the magnetic field. The changing magnetic flux in
the toroid induces an electromotive force in the winding. Any external
current that flows through the hole in the toroid diminishes the magnetic field.
While this figure is clear and correct, I wondered if we could do better. I started with a figure of a toroidal coil from a paper I published with my PhD advisor John Wikswo and his postdoc Frans Gielen.
Gielen FLH, Roth BJ, Wikswo JP Jr (1986) Capabilities of a Toroid-Amplifier System for Magnetic Measurements of Current in Biological Tissue. IEEE Trans. Biomed. Eng. 33:910-921.
Starting with Figure 1 from that paper (you can find a copy of that figure in a previous post), I modified it to resemble Fig. 8.26, but with a three-dimensional appearance. I also added color. The result is shown below.

An axon (purple) is threaded through a toroid to measure the magnetic field.
The toroid has a ferrite core (green) that is wound with insulated copper wire
(blue). It is then sealed in a coating of epoxy (pink). The entire preparation
is submerged in a saline bath. The changing magnetic flux in the ferrite induces
an electromotive force in the winding. Any current in the bath that flows
through the hole in the toroid diminishes the magnetic field.
Do you like it?

To learn more about how I wound the wire and applied the epoxy coating, see my earlier post about The Magnetic Field of a Single Axon. The part about "any current in the bath that flows through the hole in the toroid diminishes the magnetic field" is described in more detail in my post about the Bubble Experiment.

Tuesday, April 28, 2020

Chernobyl Then and Now: A Global Perspective

Last year I was supposed to give a talk at Oakland University for a symposium about “Chernobyl Then and Now: A Global Perspective.” It was part of an exhibition at the OU Art Gallery titled “McMillan’s Chernobyl: An Intimation of the Way the World Would End.” My role at the symposium was to explain the factors that led to the explosion of the Chernobyl nuclear power plant. I was chosen by the organizer, OU Professor of Art History Claude Baillargeon, because I had taught a class about The Making of the Atomic Bomb in Oakland University’s Honors College.

Readers of Intermediate Physics for Medicine and Biology should become familiar with the Chernobyl disaster because it illustrates how exposure to radiation can affect people over different time scales, from short term acute radiation sickness to long-term radiation-induced cancer.

It turned out I could not attend the symposium. My friend Gene Surdutovich stepped in at the last minute to replace me, and because he is from Ukraine—where the disaster occurred—he provided more insight than I could have. However, I thought the readers of this blog might want to read a transcript of the talk I planned to present. It was supposed to be my “TED Talk,” aimed at a broad audience with limited scientific background. No PowerPoint, no blackboard; just a few balls and a pencil as props.
The nuclear reactor in Chernobyl had an inherently unstable design that led to the worst nuclear accident in history. To understand why the design was so unstable, we need to review some physics.

The nucleus of an atom contains protons and neutrons. The number of protons determines what element you have. For instance, a nucleus with 92 protons is uranium. The number of neutrons determines the isotope. If a nucleus has 92 protons and 146 neutrons it is uranium-238 (because 92 + 146 = 238). Uranium-238 is the most common isotope of uranium (about 99% of natural uranium is uranium-238). If the nucleus has three fewer neutrons, that is only 143 neutrons instead of 146, it’s uranium-235, a rare isotope of uranium (about 1% of natural uranium is uranium-235).

No stable isotopes of uranium exist, but both uranium-235 and uranium-238 have very long half-lives (a half-life is how long it takes for half the nuclei to decay). Their half-lives are several billion years, which is about the same as the age of the earth. So many of the atoms of uranium that originally formed with the earth have not decayed away yet, and still exist in our rocks. We can use them as nuclear fuel.
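
A back-of-the-envelope calculation shows why so much primordial uranium survives. A short Python sketch uses the textbook decay law N/N0 = 2^(−t/T); the half-lives (U-238: 4.47 billion years, U-235: 0.704 billion years) and the earth’s age (4.54 billion years) are standard values, not taken from the post.

```python
# Fraction of primordial uranium surviving after the age of the earth,
# using the decay law N/N0 = 2^(-t/T).

def surviving_fraction(t_gyr, half_life_gyr):
    """Fraction of nuclei remaining after time t (both in Gyr)."""
    return 0.5 ** (t_gyr / half_life_gyr)

age_of_earth = 4.54  # Gyr

for isotope, half_life in [("U-238", 4.47), ("U-235", 0.704)]:
    f = surviving_fraction(age_of_earth, half_life)
    print(f"{isotope}: {100 * f:.1f}% remains")
```

About half the original uranium-238 is still here, but only about 1% of the uranium-235, which also hints at why the 235 isotope is now so rare.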

Although uranium-235 is the rarer of the two long-lived isotopes, it is the one that is the fuel for a nuclear reactor. The uranium-235 nucleus is “fissile” meaning that it is so close to being unstable that a single neutron can trigger it to break in two pieces, releasing energy and two additional neutrons. This is called nuclear fission.

A nuclear chain reaction can start with a lot of uranium-235 and a single neutron. The neutron causes a uranium-235 nucleus to fission, breaking into two pieces plus releasing two additional neutrons and energy. These two neutrons hit two other uranium-235 nuclei, causing each of them to fission, releasing a total of four neutrons plus more energy. These four neutrons hit four other uranium-235 nuclei, releasing eight neutrons….and so on. The atomic bomb dropped on Hiroshima at the end of World War Two was based on just such an uncontrolled uranium-235 chain reaction. Fortunately, there are ways to control the chain reaction, so it can be used for more peaceful purposes, such as a nuclear reactor.

One surprising feature of uranium-235 is that SLOW neutrons are more likely to split the nucleus than FAST neutrons. How this effect was discovered is an interesting story. Enrico Fermi, an Italian physicist, was studying nuclear reactions in the 1930s by bombarding different materials with neutrons. He observed more nuclear reactions if his apparatus sat on a wooden table top than if it sat on a marble table top! What? It turns out wood was better at slowing the neutrons than marble. Think how confusing this must have been for Fermi. He was so confused that he tried submerging the apparatus in a pond behind the physics building and the reactions increased even more!

A uranium-235 chain reaction triggered by neutrons works best with slow neutrons. Therefore, nuclear reactors need a “moderator”: a substance that slows the neutrons down. The moderator is the key to understanding what happened at Chernobyl.

The best moderators are materials whose nuclear mass is about the same as the mass of a neutron. If the nucleus is a lot heavier than the neutron, the neutron will not slow down after the collision. Imagine this tennis ball is the light neutron, and this big basketball is the heavy nucleus. When the neutron hits the nucleus, it just bounces off. It changes direction but doesn’t slow down. Now, imagine this neutron collides with a very light particle, represented by this ping pong ball. When the relatively heavy neutron hits the light particle, it will just push it out of the way like a train hitting a mosquito. The neutron itself won’t slow down much. To be effective at slowing the neutron down, the nucleus needs to be about the same mass as the neutron. What has a similar mass to a neutron? A proton. What nucleus contains a single proton? Hydrogen. Watch what happens when a neutron and a hydrogen nucleus collide. This ball is the neutron, and this ball is the proton: the hydrogen nucleus. Right after the collision, the neutron stops! It is like when a moving billiard ball slams into a stationary billiard ball; the one that was moving stops, and the one that was stationary starts moving. Interacting with hydrogen is a great way to slow down neutrons. Therefore, hydrogen is a great moderator. Where do you find a lot of hydrogen? Water (H2O). It was the hydrogen in the water of the wooden table top that was so effective at slowing Fermi’s neutrons. The water in the pond behind the physics building was even better; it had even more hydrogen.
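
The billiard-ball argument can be quantified. In a head-on elastic collision, a neutron transfers the fraction 4A/(1+A)² of its kinetic energy to a stationary nucleus of mass number A (a standard classical two-body result, not from the talk). A short sketch:

```python
# Maximum (head-on) fractional energy loss of a neutron elastically
# scattering off a stationary nucleus of mass number A, from the
# classical two-body formula 4A/(1 + A)^2.

def max_energy_transfer(A):
    """Fraction of the neutron's kinetic energy given to the nucleus."""
    return 4 * A / (1 + A) ** 2

for name, A in [("hydrogen-1", 1), ("carbon-12", 12), ("uranium-238", 238)]:
    print(f"{name}: {max_energy_transfer(A):.3f}")
```

Hydrogen can stop a neutron dead in a single head-on collision, carbon takes away about 28% per collision, and a heavy nucleus like uranium-238 barely slows the neutron at all.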

Other elements that have relatively light nuclei are also good moderators, such as carbon (carbon’s nucleus has 6 protons and 6 neutrons). It’s somewhat heavier than you want in order to slow neutrons optimally, but it’s not bad, and it’s abundant, cheap, and dense. During the Manhattan Project, Fermi (who had fled fascist Italy and settled in the United States) built the first nuclear reactor in a squash court under the football stadium at the University of Chicago. His reactor was a “pile” of uranium balls, with each ball surrounded by blocks of graphite (almost pure carbon, like the lead in this pencil). The uranium was the fuel and the graphite was the moderator.

Before we talk more about moderators, you might be wondering why Fermi’s reactor didn’t explode, destroying Chicago. One reason was that his uranium was a mix of uranium-235 and uranium-238, and was in fact 99% uranium-238. The uranium-238 doesn’t contribute to the chain reaction; it’s not fissile. To make matters worse, uranium-238 can absorb a neutron and dampen the chain reaction. When uranium-238 captures a neutron to become uranium-239, it takes a neutron “out of action” so to speak. During the Manhattan Project the United States spent enormous amounts of time and money separating uranium-235 from uranium-238, so it could use almost pure uranium-235 in the atomic bomb. But Fermi didn’t have any such enriched uranium. Also, Fermi controlled his reactor using a super-duper neutron absorber, cadmium. Cadmium sucks up the neutrons, stopping the chain reaction. Fermi could push in or pull out cadmium control rods to keep the speed of the reaction “just right.” As an emergency backup he had one big cadmium control rod suspended over the reactor by a rope. One of Fermi’s assistants stood by with an axe. If things started to go out of control, his job was to cut the rope, dropping the cadmium rod and stopping the reaction. Fortunately, Fermi took great pains to operate the reactor carefully, and no such problems occurred. Had things gone wrong, the reactor probably wouldn’t have exploded like a bomb. It would have just gotten very hot and melted, causing a “meltdown” with all sorts of radiation release, like at Chernobyl. It’s a scary thought because it was in the middle of Chicago, but we were at war against the Nazis, so people took some risks.

Now back to the moderator. Let’s consider three different moderators. First, “heavy water.” This is water containing a rare, heavy isotope of hydrogen, hydrogen-2 (its nucleus consists of one proton and one neutron). While it is not quite as good as hydrogen-1 at slowing down neutrons, it’s still very good, and it has one advantage. Hydrogen-1 (a single proton) can sometimes absorb a neutron to become hydrogen-2. It’s as if occasionally these two balls stick together when they hit. This capture of a neutron slows the chain reaction. Hydrogen-2, however, rarely absorbs a neutron to become hydrogen-3, so it’s a great moderator: it slows the neutrons without absorbing them. During World War Two, the Germans tried to construct a nuclear reactor using heavy water as the moderator. The problem was, heavy water is difficult and expensive to make. There was a plant in Norway that produced heavy water, and it was controlled by the Germans. The British sent in a commando raid that sabotaged the plant, causing all that precious heavy water to flow down the drain. Heavy water is so expensive it isn’t used nowadays in reactors, and we won’t discuss it anymore.

The second moderator we’ll consider is regular water made using hydrogen-1 (I’ll call it just “water” as opposed to “heavy water”). Nowadays most nuclear reactors in the United States use water as the moderator. They also use water as the coolant. You need a coolant to keep the reactor from getting too hot and melting. Also, the coolant is how you get the heat out of the reactor so you can use it to run a steam engine and generate power. So in the United States, water in a nuclear reactor has two purposes: it’s the moderator and the coolant. Suppose that the reactor, for some reason, gets too hot and the water starts boiling off. That will cause the moderator to boil away. No more moderator, no more slowing down the neutrons. No more slowing down the neutrons, no more chain reaction. This is a type of a negative feedback loop that makes the reactor inherently safe. It’s like the thermostat in your house: if the house gets too hot, the thermostat turns off the furnace, and the house cools down. Recall that hydrogen-1 can also absorb neutrons, and in theory that could cause the reactor to speed up when the water boils away because there is less neutron absorption. So neutron absorption and moderation are opposite effects. But a reduction of neutron absorption is less important than the disappearance of the moderator, so on the whole when water boils the reaction slows down. We say that the reactor has a “negative void coefficient.” The “void” means the water is boiling, forming bubbles. The “negative” means this negative feedback loop occurs, keeping the reaction from increasing out of control.

Now for the third moderator: carbon. The Russians built something called an RBMK reactor. This is a Russian acronym, so I won’t try to explain what the different letters mean. Suffice to say, an RBMK reactor is a nuclear reactor that uses carbon as the moderator. Chernobyl was an RBMK reactor. Like Fermi’s original reactor, the carbon was in the form of graphite. In addition, an RBMK reactor uses water as the coolant. Graphite is the moderator and water is the coolant. Now, suppose this type of reactor begins to heat up and the water starts to boil away. The hydrogen in the water is not the primary moderator, the carbon in the graphite is. So, the reaction doesn’t slow down when the water boils away; the carbon moderator is still there, slowing the neutrons. But remember, the hydrogen in water sometimes absorbs a neutron, taking it out of action. This neutron capture decreases as the water boils away, so the reaction increases. Increased heat causes water to boil, causing the reaction to speed up, causing increased heat, causing more water to boil, causing the reaction to speed up even more, causing yet more increased heat … This is a positive feedback loop; a vicious cycle. The reactor has a “positive void coefficient.” It’s as if the thermostat in your house was wired wrong, so when the house got hot the furnace turned ON, heating the house more. Normally the reactor is designed with all sorts of controls to prevent this positive feedback loop from taking off. For instance, control rods can be pushed in and out as needed. But, if for some reason these controls are not in place, the reactor will heat up dramatically and quickly, just as it did at Chernobyl.
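
The two feedback loops can be caricatured in a few lines of code. This toy model is my own invention for illustration (every number in it is arbitrary, and it is nothing like a real reactor model), but it shows how the sign of the void coefficient decides between a self-limiting reaction and a runaway one.

```python
# A toy model contrasting negative and positive void coefficients.
# Power heats the coolant; once the "temperature" passes a boiling
# threshold, voids form and change the reactivity with sign alpha.
# All constants are invented for illustration.

def run(alpha, steps=50):
    power, temperature = 1.0, 0.0
    for _ in range(steps):
        temperature += 0.1 * power - 0.05 * temperature  # heating vs. cooling
        void = max(0.0, temperature - 1.0)               # boiling above threshold
        reactivity = alpha * void
        power *= 1.0 + 0.1 * reactivity                  # chain reaction responds
    return power

print("negative void coefficient:", run(alpha=-1.0))  # settles down
print("positive void coefficient:", run(alpha=+1.0))  # runs away
```

With a negative coefficient the boiling throttles the reaction (the thermostat wired correctly); with a positive coefficient the same boiling feeds the reaction, and the power climbs without limit.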

Why do we have nuclear reactors? Nuclear reactors produce heat to power a steam engine, which in turn generates electricity. The steam needs to be at high pressure, so it can turn the turbine. Therefore, the reactor is in a pressure container. It’s like a pressure cooker. If the water boils too much, the pressure builds up until the container can’t handle it anymore and bursts, releasing steam. It’s a little like your whistling teapot, except instead of whistling when the water boils, the reactor explodes. And unlike your teapot, the reactor releases radioactive elements along with the steam. You get a cloud of radioactivity.

Another problem with an RBMK reactor is that graphite burns. It’s pure carbon. It’s like coal. Once the pressure container bursts, oxygen can get in, igniting the graphite and starting a fire. The graphite then spews radioactive smoke up into the atmosphere. Many of the people killed in the Chernobyl accident were firemen, trying to put out the fire.

Another issue, a little less important but worth mentioning, was the control rods. Chernobyl had control rods made out of boron, which like cadmium is an excellent neutron absorber. It vacuums up the neutrons and stops the chain reaction. The problem was, the control rods were tipped with graphite. As you push in a control rod, initially it would be like adding moderator, quickening the reaction. Only when the rod was completely pushed in would the boron absorb neutrons, slowing the reaction. So, the control rods would eventually suppress the chain reaction, but initially they made things worse. If, like at Chernobyl, a problem developed quickly, the control rods couldn’t keep up.

I won’t go into all the comedy of errors that was the immediate cause of the accident at Chernobyl. The reactor was undergoing a test, and several of the controls were turned off. Some safeguards were still in place, but mistakes, poor communication, and ignorance prevented them from working. Whatever the immediate cause of the accident, the crucial point is that the reactor design itself was unstable. It’s like trying to balance this pencil on its tip. You can do it if you are careful and have some controls, but it’s inherently unstable. If you are not always vigilant, the pencil will fall over. The unstable design of the Chernobyl reactor made it a disaster waiting to happen.
If you would like to hear me give this talk (slightly modified), you can watch the YouTube video below. This winter I was teaching the second semester of introductory physics, and when the coronavirus pandemic arrived I had to switch to an online format. I recorded a lecture about Chernobyl when we were discussing nuclear energy.

My Chernobyl talk, given to my Introductory Physics class,
online from home because of Covid-19.

Monday, April 27, 2020

Donnan Equilibrium

Russ Hobbie and I analyze Donnan equilibrium in Chapter 9 of Intermediate Physics for Medicine and Biology.
Section 9.1 discusses Donnan equilibrium, in which the presence of an impermeant ion on one side of a membrane, along with other ions that can pass through, causes a potential difference to build up across the membrane. This potential difference exists even though the bulk solution on each side of the membrane is electrically neutral.
Today I present two new homework problems based on one of Donnan’s original papers.
Donnan, F. G. (1924) “The Theory of Membrane Equilibria.” Chemical Reviews, Volume 1, Pages 73-90.
Here’s the first problem.
Section 9.1

Problem 2 ½. Suppose you have two equal volumes of solution separated by a semipermeable membrane that can pass small ions like sodium and potassium but not large anions like A. Initially, on the left is 1 mole of Na+ and 1 mole of A, and on the right is 10 moles of K+ and 10 moles of A. What is the equilibrium amount of Na+, K+, and A on each side of the membrane?
Stop and solve the problem using the methods described in IPMB. Then come back and compare your solution with mine (and Donnan’s).

In equilibrium, x moles of sodium will cross the membrane from left to right. To preserve electroneutrality, x moles of potassium will cross from right to left. So on the left you have 1 – x moles of Na+, x moles of K+, and 1 mole of A. On the right you have x moles of Na+, 10 – x moles of K+, and 10 moles of A.

Both sodium and potassium are distributed by the same Boltzmann factor, implying that

           [Na+]left/[Na+]right = [K+]left/[K+]right = exp(−eV/kT)            (Eq. 9.4)

where e is the elementary charge, V is the voltage across the membrane, k is Boltzmann’s constant, and T is the absolute temperature. Therefore

           (1 – x)/x = x/(10 – x)

or x = 10/11 = 0.91. The equilibrium amounts (in moles) are

                          left       right
           Na+        0.09      0.91
           K+          0.91      9.09
           A          1.00    10.00

The voltage across the membrane is

           V = kT/e ln([Na+]right/[Na+]left) = (26.7 mV) ln(10.1) = 62 mV .
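
As a quick check, the algebra and the voltage can be verified numerically. Here is a sketch in Python; the 26.7 mV value of kT/e is the one used above. (With the exact concentrations the ratio is 10, giving 61.5 mV; the rounded values 0.91/0.09 = 10.1 give the 62 mV quoted.)

```python
# Donnan equilibrium, first problem: solve (1 - x)/x = x/(10 - x)
# and compute V = (kT/e) ln([Na+]right/[Na+]left), kT/e = 26.7 mV.

import math

# (1 - x)/x = x/(10 - x)  =>  (1 - x)(10 - x) = x^2  =>  10 = 11x
x = 10 / 11

na_left, na_right = 1 - x, x
V = 26.7 * math.log(na_right / na_left)  # mV; the ratio is exactly 10

print(f"x = {x:.3f}")
print(f"V = {V:.1f} mV")
```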

Donnan writes
In other words, 9.1 per cent of the potassium ions originally present [on the right] diffuse to [the left], while 90.9 per cent of the sodium ions originally present [on the left] diffuse to [the right]. Thus the fall of a relatively small percentage of the potassium ions down a concentration gradient is sufficient in this case to pull a very high percentage of the sodium ions up a concentration gradient. The equilibrium state represents the simplest possible case of two electrically interlocked and balanced diffusion-gradients.
Like this problem? Here’s another. Repeat the last problem, but instead of initially having 10 moles of K+ on the right, assume you have 10 moles of Ca++. Calcium is divalent; how will that change the problem?
Problem 3 ½. Suppose you have two solutions of equal volume separated by a semipermeable membrane that can pass small ions like sodium and calcium but not large anions like A and B. Initially, on the left is 1 mole of Na+ and 1 mole of A, and on the right is 10 moles of Ca++ and 10 moles of B. What is the equilibrium amount of Na+, Ca++, A, and B on each side of the membrane?
Again, stop, solve the problem, and then come back to compare solutions.

Suppose 2x moles of Na+ cross the membrane from left to right. To preserve electroneutrality, x moles of Ca++ move from right to left. Both cations are distributed by a Boltzmann factor (Eq. 9.4)

           [Na+]left/[Na+]right = exp(−eV/kT)

           [Ca++]left/[Ca++]right  = exp(−2eV/kT) .

However,

          exp(−2eV/kT) = [ exp(−eV/kT) ]²

so

      { [Na+]left/[Na+]right }² = [Ca++]left/[Ca++]right

or
        [ (1 – 2x)/(2x) ]² = x/(10 – x)

This is a cubic equation that I can’t solve analytically. Some trial-and-error numerical work suggests x = 0.414. The equilibrium amounts are therefore

                          left       right
           Na+        0.172    0.828
           Ca++      0.414    9.586
           A          1           0
           B        0          10 

The voltage across the membrane is

           V = kT/e ln([Na+]right/[Na+]left) = (26.7 mV) ln(4.814) = 42 mV .
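
The trial-and-error root can be checked with a simple bisection sketch in Python (the 26.7 mV factor for kT/e is carried over from the first problem):

```python
# Donnan equilibrium, calcium problem: solve
# [(1 - 2x)/(2x)]^2 = x/(10 - x) by bisection on 0 < x < 1/2,
# then compute V = (kT/e) ln([Na+]right/[Na+]left).

import math

def f(x):
    return ((1 - 2 * x) / (2 * x)) ** 2 - x / (10 - x)

# f is positive near x = 0 and negative near x = 1/2, so a root lies between.
lo, hi = 1e-6, 0.5 - 1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)

V = 26.7 * math.log((2 * x) / (1 - 2 * x))  # mV
print(f"x = {x:.4f}, V = {V:.1f} mV")
```

The bisection confirms x ≈ 0.414 and V ≈ 42 mV, matching the numbers above.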

I think this is correct; Donnan didn’t give the answer in this case, so I’m flying solo.

Frederick Donnan. From an article in the Journal of Chemical Education, Volume 4(7), page 819.
Who was Donnan? Frederick Donnan (1870 – 1956) was an Irish physical chemist. He obtained his PhD at the University of Leipzig under Wilhelm Ostwald, and then worked for Jacobus van’t Hoff. Most of his career was spent at University College London. He was elected a fellow of the Royal Society and won the Davy Medal in 1928 “for his contributions to physical chemistry and particularly for his theory of membrane equilibrium.”

Friday, April 24, 2020

The Effects of Spiral Anisotropy on the Electric Potential and the Magnetic Field at the Apex of the Heart

Readers of Intermediate Physics for Medicine and Biology may enjoy this story about some of my research as a graduate student, working for John Wikswo at Vanderbilt University. My goal was to determine if the biomagnetic field contains new information that cannot be obtained from the electrical potential.

In 1988, Wikswo, fellow grad student Wei-Qiang Guo, and I published an article in Mathematical Biosciences (Volume 88, Pages 191-221) about the magnetic field at the apex of the heart.
The Effects of Spiral Anisotropy on the Electric Potential and the Magnetic Field at the Apex of the Heart.
B. J. Roth, W.-Q. Guo, and J. P. Wikswo, Jr. 
Living State Physics Group, Department of Physics and Astronomy, Vanderbilt University, Nashville, Tennessee 37235
This paper describes a volume-conductor model of the apex of the heart that accounts for the spiraling tissue geometry. Analytic expressions are derived for the potential and magnetic field produced by a cardiac action potential propagating outward from the apex. The model predicts the existence of new information in the magnetic field that is not present in the electrical potential.
The analysis was motivated by the unique fiber geometry in the heart, as shown in the figure below, from an article by Franklin Mall. It shows how the cardiac fibers spiral outward from a central spot: the apex (or to use Mall’s word, the vortex).
The apex of the heart.
From Mall, F. P. (1911) “On the Muscular Architecture of the Ventricles of the Human Heart.” American Journal of Anatomy, Volume 11, Pages 211-266.
Our model was an idealization of this complicated geometry. We modeled the fibers as making Archimedean spirals throughout a slab of tissue representing the heart wall, perfused by saline on the top and bottom.
The geometry of a slab of cardiac tissue. The thickness of the tissue is l, the conductivity of the saline bath is σe, and the conductivity tensors of the intracellular and interstitial volumes are σ̃i and σ̃o. The variables ρ, θ, and z are the cylindrical coordinates, and the red curves represent the fiber direction. Based on Fig. 2 of Roth et al. (1988).
Cardiac tissue is anisotropic; the electrical conductivity is higher parallel to the fibers than perpendicular to them. This is taken into account by using conductivity tensors. Because the fibers spiral and make a constant angle with the radial direction, the tensors have off-diagonal terms when expressed in cylindrical coordinates.
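
The origin of those off-diagonal terms can be made concrete by rotating a tensor that is diagonal in the fiber frame into cylindrical coordinates. The sketch below is only an illustration of that step; the conductivity values are placeholders, not the ones used in the paper.

```python
# Conductivity tensor in the (rho, theta) plane for fibers making a
# constant angle alpha with the radial direction.  Rotating a tensor
# that is diagonal in the fiber frame (sigma_L along the fibers,
# sigma_T across them) produces off-diagonal terms whenever
# alpha != 0 and sigma_L != sigma_T.

import numpy as np

def conductivity_tensor(alpha, sigma_L, sigma_T):
    """2x2 conductivity tensor in cylindrical (rho, theta) coordinates."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s], [s, c]])   # rotation from fiber to cylindrical axes
    D = np.diag([sigma_L, sigma_T])   # diagonal in the fiber frame
    return R @ D @ R.T

# Placeholder values (S/m), chosen only to show the structure.
sigma = conductivity_tensor(alpha=np.pi / 6, sigma_L=0.2, sigma_T=0.05)
print(sigma)
```

The off-diagonal entry works out to (sigma_L − sigma_T) sin(alpha) cos(alpha), so it vanishes only for fibers aligned with the coordinate axes or for isotropic tissue.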

Consider a cardiac wavefront propagating outward, as if stimulated at the apex. Two behaviors occur. First, ignore the spiral geometry. A wavefront produces intracellular current propagating radially outward and extracellular current forming closed loops in the bath (blue). This current produces a magnetic field above and below the slab (green).
The current and magnetic field created by an action potential propagating outward from the apex of the heart if no off-diagonal terms are present in the conductivity tensors.
The current (blue) and magnetic field (green) created by an action potential propagating outward from the apex of the heart if no off-diagonal terms are present in the conductivity tensors. Based on Fig. 5a of Roth et al. (1988).
Second, ignore the bath but include the spiral fiber geometry. Although the wavefront propagates radially outward, the anisotropy and fiber geometry create an intracellular current that has a component in the θ direction (blue). This current produces its own magnetic field (green).
The azimuthal component of the current and the electrically silent components of the magnetic field produced by off-diagonal terms in the conductivity tensor.
The azimuthal component of the current (blue) and the electrically silent components of the magnetic field (green) produced by off-diagonal terms in the conductivity tensor, with σe = 0. Based on Fig. 5b of Roth et al. (1988).
Of course, both of these mechanisms operate simultaneously, so the total magnetic field distribution looks something like that shown below.
The total magnetic field at the apex of the heart.
The total magnetic field at the apex of the heart. This figure is only qualitatively correct; the field lines may not be quantitatively accurate. Based on Fig. 5e of Roth et al. (1988).
The original versions of these beautiful figures were prepared by a draftsman in Wikswo’s laboratory. I can’t remember who, but it might have been undergraduate David Barach, who prepared many of our illustrations by hand at the drafting desk. I added color for this blog post.

The main conclusion of this study is that the magnetic field contains new information about the tissue that cannot be obtained by measuring the electrical potential. The ρ and z components of the magnetic field are electrically silent; the spiraling fiber geometry has no influence on the electrical potential.

Is this mathematical model real, or just the musings of a crazy physics grad student? Two decades after we published our model, Krista McBride—another of Wikswo’s grad students, making her my academic sister—performed an experiment to test our prediction, and obtained results consistent with our calculations.

Title, authors, and abstract for McBride et al. (2010).

I’m always amazed when one of my predictions turns out to be correct.

Thursday, April 23, 2020

Consequences of the Inverse Viscosity-Temperature Relationship

In Homework Problem 50 of Chapter 1 in Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
Section 1.19

Problem 50. The viscosity of water (and therefore of blood) is a rapidly decreasing function of temperature. Water at 5° C is twice as viscous as water at 35° C. Speculate on the implications of this extreme temperature dependence for the circulatory system of cold-blooded animals. (For a further discussion, see Vogel 1994, pp. 27–31.)
Life in Moving Fluids, by Steven Vogel, superimposed on Intermediate Physics for Medicine and Biology.
Life in Moving Fluids,
by Steven Vogel.
Let’s see what Steven Vogel discusses. The citation is to Life in Moving Fluids: The Physical Biology of Flow.
CONSEQUENCES OF THE INVERSE VISCOSITY-TEMPERATURE RELATIONSHIP

At 5° C water is about twice as viscous (dynamically or kinematically) as at 35° C; organisms live at both temperatures and, indeed, at ones still higher and lower. Some experience an extreme range within their lifetimes—seasonally, diurnally, or even in different parts of the body simultaneously. Does the consequent variation in viscosity ever have biological implications?...

Consider the body temperature of animals. At elevated temperatures less power ought to be required to keep blood circulating if the viscosity of blood follows the normal behavior of liquids. And, in our case, it does behave in the ordinary way—human blood viscosity (ignoring blood’s minor non-Newtonianism) is 50% higher at 20° than at 37° C… Is this a fringe benefit of having a high body temperature? Probably the saving in power is not especially significant—circulation costs only about 6% of basal metabolic rate. More interesting is the possibility of compensatory adjustments in the bloods and circulatory systems of animals that tolerate a wide range of internal temperatures. The red blood cells of cold-blooded vertebrates, and therefore presumably their capillary diameters, are typically larger than either the nucleated cells of birds or the nonnucleated ones of mammals… The shear rate of blood is greatest in the capillaries; must these be larger in order to permit circulation at adequate rates without excessive cost in a cold body?...

Is the severe temperature dependence of viscosity perhaps a serendipitous advantage on occasion? A marine iguana of the Galapagos basks on warm rocks, heating rapidly, and then jumps into the cold Humboldt current to graze on algae, cooling only slowly. Circulatory adjustments as the animal takes the plunge have been postulated…, but no one seems to have looked at whether part of the circulatory reduction in cold water is just a passive consequence of an increase in viscosity. A variety of large, rapid, pelagic fish have circulatory arrangements that permit locomotory muscles to get quite hot when they’re in use…; blood flow ought to increase automatically at just the appropriate time.

A less speculative case is that of Antarctic mammals and birds… [They] must commonly contend with cold appendages, since full insulation of feet and flippers would be quite incompatible with their normal functions. The circulation of such an appendage often includes a [countercurrent] heat exchanger at the base of the limb so that, in effect, a cold-blooded appendage and a warm-blooded body can be run on the same circulatory system without huge losses of heat… Changes in blood viscosity will reduce flow to appendages when they get cold quite without active adjustments within the circulatory systems.
Vogel goes on for another couple pages. I love the way he uses comparative physiology to illustrate physics. He also covers a lot of ground, ranging from the Galapagos islands to Antarctica. Such discussions are typical of Life in Moving Fluids.
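To make the consequence concrete, here is a sketch using the Hagen-Poiseuille relation and handbook values for the viscosity of water (roughly 1.52 mPa s at 5° C and 0.72 mPa s at 35° C; these numbers are approximate, and blood is being treated as if it scaled like water):

```python
import math

def poiseuille_flow(delta_p, radius, length, eta):
    """Volume flow rate through a cylindrical vessel for laminar flow:
    Q = pi * delta_p * r^4 / (8 * eta * L)."""
    return math.pi * delta_p * radius**4 / (8 * eta * length)

# Approximate handbook viscosities of water (Pa s).
eta_5C, eta_35C = 1.52e-3, 0.72e-3

# Same vessel and same driving pressure, two temperatures.
q_cold = poiseuille_flow(100.0, 1e-3, 0.1, eta_5C)
q_warm = poiseuille_flow(100.0, 1e-3, 0.1, eta_35C)
print(q_warm / q_cold)  # flow roughly doubles at the warmer temperature
```

At fixed pressure the flow scales as 1/η, so a cold-blooded animal's circulation slows by about a factor of two between 35° C and 5° C, with no active adjustment at all.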

Wednesday, April 22, 2020

The Rayleigh-Einstein-Jeans law

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss blackbody radiation. Max Planck’s blackbody radiation formula is given in Eq. 14.33,

                                Wλ(λ, T) = (2πhc²/λ⁵) / (e^{hc/λkBT} − 1) ,            (14.33)

where λ is the wavelength and T is the absolute temperature. This equation, derived in December 1900, is the first formula that contained Planck’s constant, h.

Often you can recover a classical (non-quantum) result by taking the limit as Planck’s constant goes to zero. Here’s a new homework problem to find the classical limit of the blackbody radiation formula.
Section 14.8

Problem 26 ½. Take the limit of Planck’s blackbody radiation formula, Eq. 14.33, as Planck’s constant goes to zero. Your result should be the classical Rayleigh-Jeans formula. Discuss how it behaves as λ goes to zero. Small wavelengths correspond to the ultraviolet and x-ray part of the electromagnetic spectrum. Why do you think this behavior is known as the “ultraviolet catastrophe”?
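For readers who want to check their work (try the problem yourself first!), here is a sketch of the algebra, writing Planck’s formula as Wλ = (2πhc²/λ⁵)/(e^{hc/λkBT} − 1):

```latex
% As h -> 0, the exponent x = hc/(\lambda k_B T) -> 0, so e^x - 1 \approx x:
W_\lambda(\lambda,T)
  = \frac{2\pi h c^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda k_B T}-1}
  \;\xrightarrow{\;h\to 0\;}\;
  \frac{2\pi h c^2}{\lambda^5}\,\frac{\lambda k_B T}{hc}
  = \frac{2\pi c\, k_B T}{\lambda^4}.
```

Planck’s constant cancels, leaving the Rayleigh-Jeans formula, which diverges as λ⁻⁴ when λ goes to zero.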
Subtle is the Lord, by Abraham Pais, superimposed on Intermediate Physics for Medicine and Biology.
Subtle is the Lord,
by Abraham Pais.
I always thought that the Rayleigh-Jeans formula was a Victorian result that Planck knew about when he derived Eq. 14.33. However, when thumbing through Subtle is the Lord: The Science and the Life of Albert Einstein, by Abraham Pais, I learned that the Rayleigh-Jeans formula is not much older than Planck’s formula. Lord Rayleigh derived a preliminary version of it in June 1900, just months before Planck derived Eq. 14.33. He published a more complete version in 1905, except he was off by a factor of eight. James Jeans caught Rayleigh’s mistake, corrected it, and thereby got his name attached to the Rayleigh-Jeans formula. I am amazed that Planck’s blackbody formula predates the definitive version of the Rayleigh-Jeans formula.

Einstein rederived Rayleigh’s formula from basic thermodynamics principles in, you guessed it, his annus mirabilis, 1905. Pais concludes “it follows from this chronology (not that it matters much) that the Rayleigh-Jeans law ought properly to be called the Rayleigh-Einstein-Jeans law.”

Tuesday, April 21, 2020

The Double Helix

The Double Helix,
by James Watson.
So you’re stuck at home because of the coronavirus pandemic, and you want to know which one book you should read to get an idea of what science is like? I recommend The Double Helix, by James Watson. It’s a lively and controversial story about how Watson and Francis Crick determined the structure of DNA, launching a revolution in molecular biology.

The very first few paragraphs of The Double Helix describe the influence of physics on biology. Readers of Intermediate Physics for Medicine and Biology will enjoy seeing how the two disciplines interact.

To whet your appetite, below are the opening paragraphs of The Double Helix. Enjoy!
I have never seen Francis Crick in a modest mood. Perhaps in other company he is that way, but I have never had reason so to judge him. It has nothing to do with his present fame. Already he is much talked about, usually with reverence, and someday he may be considered in the category of Rutherford or Bohr. But this was not true when, in the fall of 1951, I came to the Cavendish Laboratory of Cambridge University to join a small group of physicists and chemists working on the three-dimensional structures of proteins. At that time he was thirty-five, yet almost totally unknown. Although some of his closest colleagues realized the value of his quick, penetrating mind and frequently sought his advice, he was often not appreciated, and most people thought he talked too much.

Leading the unit to which Francis belonged was Max Perutz, an Austrian-born chemist who came to England in 1936. He had been collecting X-ray diffraction data from hemoglobin crystals for over ten years and was just beginning to get somewhere. Helping him was Sir Lawrence Bragg, the director of the Cavendish. For almost forty years Bragg, a Nobel Prize winner and one of the founders of crystallography, had been watching X-ray diffraction methods solve structures of ever-increasing difficulty. The more complex the molecule, the happier Bragg became when a new method allowed its elucidation. Thus in the immediate postwar years he was especially keen about the possibility of solving the structures of proteins, the most complicated of all molecules. Often, when administrative duties permitted, he visited Perutz’ office to discuss recently accumulated X-ray data. Then he would return home to see if he could interpret them.

Somewhere between Bragg the theorist and Perutz the experimentalist was Francis, who occasionally did experiments but more often was immersed in the theories for solving protein structures. Often he came up with something novel, would become enormously excited, and immediately would tell it to anyone who would listen. A day or so later he would often realize that his theory did not work and return to experiments, until boredom generated a new attack on theory.

Monday, April 20, 2020

Biological Physics/Physics of Living Systems: A Decadal Survey

I want you to provide feedback to the Biological Physics/Physics of Living Systems decadal survey.
Hey readers of Intermediate Physics for Medicine and Biology! I’ve got a job for you. The National Academies is performing a decadal survey of biological physics, and they want your input.
The National Academies has appointed a committee to carry out the first decadal survey on biological physics. The survey aims to help federal agencies, policymakers, and academic leadership understand the importance of biophysics research and make informed decisions about funding, workforce, and research directions. This study is sponsored by the National Science Foundation.
Anyone who reads a blog like mine probably has plenty to say about biological physics. Here’s their request:
We invite you to share your thoughts on the future of biophysical science with the study committee and read the input already given to the committee. Input will be accepted throughout the study but will only receive maximum consideration if submitted by April 30, 2020.
Below is some more detail about what they’re looking for.
Description The committee will be charged with producing a comprehensive report on the status and future directions of physics of the living world. The committee’s report shall:

1. Review the field of Biological Physics/Physics of Living Systems (BPPLS) to date, emphasize recent developments and accomplishments, and identify new opportunities and compelling unanswered scientific questions as well as any major scientific gaps. The focus will be on how the approaches and tools of physics can be used to advance understanding of crucial questions about living systems.

2. Use selected, non-prioritized examples from BPPLS as case studies of the impact this field has had on biology and biomedicine as well as on subfields of physical and engineering science (e.g., soft condensed-matter physics, materials science, computer science). What opportunities and challenges arise from the inherently interdisciplinary nature of this interface?

3. Identify the impacts that BPPLS research is currently making and is anticipated to make in the near future to meet broader national needs and scientific initiatives.

4. Identify future educational, workforce, and societal needs for BPPLS. How should students at the undergraduate and graduate levels be educated to best prepare them for careers in this field and to enable both life and physical science students to take advantage of the advances produced by BPPLS. The range of employment opportunities in this area, including academic and industry positions, will be surveyed generally.

5. Make recommendations on how the U.S. research enterprise might realize the full potential of BPPLS, specifically focusing on how funding agencies might overcome traditional boundaries to nurture this area. In carrying out its charge, the committee should consider issues such as the state of the BPPLS community and institutional and programmatic barriers.
I’ve already submitted my comments. Now it’s your turn. The deadline is April 30.

Friday, April 17, 2020

Murray Eden

In 1992, when I was working at the National Institutes of Health, I wrote a review article about magnetic stimulation with my boss’s boss, Murray Eden. We submitted it to IEEE Potentials, a magazine aimed at engineering students. I liked our review, but somehow we never heard back from the journal. I pestered them a few times, and finally gave up and focused on other projects. I hate to waste anything, however, so I give the manuscript to you, dear readers (click here). It’s well written (thanks to our editor Barry Bowman, who improved many of my papers from that era) and describes the technique clearly. You can use it to augment the discussion in Section 8.7 (Magnetic Stimulation) in Intermediate Physics for Medicine and Biology. Unfortunately the article is out of date by almost thirty years.

I reproduce the title page and abstract below.





Eden was our fearless leader in the Biomedical Engineering and Instrumentation Program. He was an interesting character. You can learn more about him in an oral history available at the Engineering and Technology History Wiki. In our program, Eden was known for his contribution to barcodes. He was on the committee to design the ubiquitous barcode that you find on almost everything you buy nowadays. Just when the design was almost complete, Eden piped up and said they should include written numbers at the bottom of the barcode, just in case the barcode reader was down. There they have been, ever since (thank goodness!). I didn’t work too closely with Eden; I generally interacted with him through my boss, Seth Goldstein (inventor of the everting catheter). But Eden suggested we write the article, and I was a young nobody at NIH, so of course I said yes.

In Eden’s oral history interview, you can read about the unfortunate end of his tenure leading BEIP.
The world changed and I got a new director in the division, a woman who had been Director of Boston City Hospital’s Clinical Research Center. She and I battled a good deal and I just didn’t like it. By this time I was well over seventy and I said, “Okay, the hell with it. I’m going to retire.” I retired in the spring of ’94. It’s a very sad thing; I don’t like to talk about it very much. My program was essentially destroyed. A few years thereafter NIH administration took my program out of her control. They are currently trying to build the program up again, but most of the good people left.
I was one of the people who left. That woman who became the division director (I still can’t bring myself to utter her name) made it clear that all of us untenured people would not have our positions renewed, which is why I returned to academia after seven wonderful years at NIH. I shouldn’t complain. I’ve had an excellent time here at Oakland University and have no regrets, but 1994–1995 was a frustrating time for me.

After I left NIH, I stopped working on magnetic stimulation. I was incredibly lucky to be at NIH at a time when medical doctors were just starting to use the technique and needed a physicist to help. Even now, my most highly cited paper is from my time at NIH working on magnetic stimulation.

Announcement of Murray Eden's retirement in the NIH Record, March 15, 1994.

Thursday, April 16, 2020

NMR Imaging of Action Currents

Vanderbilt Notebook 11, Page 69, dated April 3, 1985
In graduate school, I kept detailed notes about my research. My PhD advisor, John Wikswo, insisted on it, and he provided me with sturdy, high-quality notebooks that are still in good shape today. I encourage my students to keep a notebook, but most prefer to record “virtual” notes on their computer, which is too newfangled for my taste.

My Vanderbilt Notebook 11 covers January 28 to April 25, 1985 (I was 24 years old). On page 69, in an entry dated April 3, I taped in a list of abstracts from the Sixth Annual Conference of the IEEE Engineering in Medicine and Biology Society, held September 15–17, 1984 in Los Angeles. A preview of the abstracts was published in the IEEE Transactions on Biomedical Engineering (Volume 31, Page 569, August 1984). I marked one as particularly important:
NMR Imaging of Action Currents 

J. H. Nagel

The magnetic field that is generated by action currents is used as a gradient field in NMR imaging. Thus, the bioelectric sources turn out to be accessible inside the human body while using only externally fitted induction coils. Two- or three-dimensional pictures of the body’s state of excitation can be displayed.
That’s all I had: a three-sentence abstract by an author with no contact information. I didn’t even know his first name. Along the margin I wrote (in blue ink):
I can’t find J H Nagel in Science Citation Index, except for 3 references to this abstract and 2 others at the same meeting (p. 575, 577 of same Journal [issue]). His address is not given in IEEE 1984 Author index. Goal: find out who he is and write him for a reprint.
How quaint; I wanted to send him a little postcard requesting reprints of any articles he had published on this topic (no pdfs back then, nor email attachments). I added in black ink:
3–25–88 checked biological abstracts 1984–March 1, 1988. None
Finally, in red ink was the mysterious note
See ROTH21 p. 1
In Notebook 21 (April 11, 1988 to December 1, 1989) I found a schedule of talks at the Sixth Annual Conference. I wrote “No Nagel in Session 14!” Apparently he didn’t attend the meeting.

Why tell you this story? Over the years I’ve wondered about using magnetic resonance imaging to detect action currents. I’ve published about it:
Wijesinghe, R. and B. J. Roth, 2009, Detection of peripheral nerve and skeletal muscle action currents using magnetic resonance imaging. Ann. Biomed. Eng., 37:2402-2406.

Jay, W. I., R. S. Wijesinghe, B. D. Dolasinski and B. J. Roth, 2012, Is it possible to detect dendrite currents using presently available magnetic resonance imaging techniques? Med. & Biol. Eng. & Comput., 50:651-657.

Xu, D. and B. J. Roth, 2017, The magnetic field produced by the heart and its influence on MRI. Mathematical Problems in Engineering, 2017:3035479.
I’ve written about it in this blog (click here and here). Russ Hobbie and I have speculated about it in Intermediate Physics for Medicine and Biology:
Much recent research has focused on using MRI to image neural activity directly, rather than through changes in blood flow (Bandettini et al. 2005). Two methods have been proposed to do this. In one, the biomagnetic field produced by neural activity (Chap. 8) acts as the contrast agent, perturbing the magnetic resonance signal. Images with and without the biomagnetic field present provide information about the distribution of neural action currents. In an alternative method, the Lorentz force (Eq. 8.2) acting on the action currents in the presence of a magnetic field causes the nerve to move slightly. If a magnetic field gradient is also present, the nerve may move into a region having a different Larmor frequency. Again, images taken with and without the action currents present provide information about neural activity. Unfortunately, both the biomagnetic field and the displacement caused by the Lorentz force are tiny, and neither of these methods has yet proved useful for neural imaging. However, if these methods could be developed, they would provide information about brain activity similar to that from the magnetoencephalogram, but without requiring the solution of an ill-posed inverse problem that makes the MEG so difficult to interpret.
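Just how tiny? Here is a back-of-the-envelope sketch (my illustrative numbers, not from any particular study) of the extra precession phase that a nanotesla-scale biomagnetic field would give proton spins during an MRI acquisition:

```python
# Extra precession phase from a small field perturbation:
# delta_phi = gamma * B * t
GAMMA_PROTON = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def phase_shift(b_field, duration):
    """Phase (radians) accumulated by proton spins in an extra field
    b_field (tesla) acting for duration (seconds)."""
    return GAMMA_PROTON * b_field * duration

# Illustrative numbers: a ~1 nT biomagnetic field lasting ~10 ms.
dphi = phase_shift(1e-9, 10e-3)
print(dphi)  # a few thousandths of a radian -- a fraction of a degree
```

A phase shift of a few milliradians is near the noise floor of typical MRI phase images, which is why neither method has yet proved useful for neural imaging.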
Vanderbilt Research Notebook 11, superimposed on Intermediate Physics for Medicine and Biology.
Notebook 11.
Apparently all this activity began with my reading of Nagel’s abstract in 1985. Yet, I was never able to identify or contact him. Recent research indicates that the magnetic fields in the brain are tiny, and they produce effects that are barely measurable with modern technology. Could Nagel really have detected action currents with nuclear magnetic resonance three decades ago? I doubt it. But there is one thing I would like to know: who is J. H. Nagel? If you can answer this question, please tell me. I’ve been waiting 35 years!

Wednesday, April 15, 2020

Life in Moving Fluids (continued)

Life in Moving Fluids, by Steven Vogel, superimposed on Intermediate Physics for Medicine and Biology.
Life in Moving Fluids,
by Steven Vogel.
Yesterday, I quoted excerpts from Steven Vogel’s book Life in Moving Fluids about the Reynolds number. Today, I’ll provide additional quotes from Vogel’s Chapter 15, Flow at Very Low Reynolds Number.
[Low Reynolds number] is the world, as Howard Berg puts it, of a person swimming in asphalt on a summer afternoon—a world ruled by viscosity. It’s the world of a glacier of particles, the world of flowing glass, of laboriously mixing cold molasses (treacle) and corn (maize) syrup. Of more immediate relevance, it’s the everyday world of every microscopic organism that lives in a fluid medium, of fog droplets, of the particulate matter called “marine snow”… “Creeping flow” is the common term in the physical literature; for living systems small size rather than (or as well as) low speed is the more common entry ticket. And it’s a counterintuitive—which is to say unfamiliar—world.
Vogel then lists properties of low Reynolds number flow.
At very low Reynolds number, flows are typically reversible: a curious temporal symmetry sets in, and the flow may move matter around but in doing so doesn’t leave much disorder in its wake. Concomitantly, mixing is exceedingly difficult…

Inertia is negligible compared to drag: when propulsion ceases, motion ceases…

Separation behind bluff bodies is unknown…

Boundary layers are thick because velocity gradients are gentle, and the formal definition of a boundary layer has little or no utility…

Nor can one create appreciable circulation around an airfoil… Turbulence, of course, is unimaginable…

While this queer and counterintuitive range is of some technological interest, its biological importance is enormous… since the vast majority of organisms are tiny, they live in this world of low Reynolds number. Flow at very low Reynolds number may seem bizarre to us, but the range of flow phenomena with which we commonly contend would undoubtedly seem even stranger to someone whose whole experience was at Reynolds number well below unity.
Ha! Try explaining turbulence to Covid-19.

Vogel then discusses Edward Purcell’s classic paper “Life at Low Reynolds Number.” He notes
But while these slow, small-scale flows may seem peculiar, they’re orderly (Purcell calls them “majestic”) and far more amenable to theoretical treatment than the flows we’ve previously considered.
You can find an example of the theoretical analysis of low-Reynolds number flow in Homework Problem 46 in Chapter 1 of Intermediate Physics for Medicine and Biology, which discusses creeping flow around a sphere.
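For a taste of that theoretical tractability, here is a sketch (with illustrative numbers of my own choosing) that combines Stokes’ drag law for a sphere with the Reynolds number, confirming that a bacterium-sized particle sits deep in the creeping-flow regime:

```python
import math

def stokes_drag(eta, radius, speed):
    """Drag force on a sphere in creeping flow: F = 6 pi eta r v."""
    return 6 * math.pi * eta * radius * speed

def reynolds(length, speed, density, eta):
    """Reynolds number N_R = L V rho / eta."""
    return length * speed * density / eta

# A 1-micron-radius, bacterium-sized sphere moving at 30 um/s in water.
eta_water, rho_water = 1e-3, 1000.0  # Pa s, kg/m^3
r, v = 1e-6, 30e-6

print(reynolds(2 * r, v, rho_water, eta_water))  # ~6e-5, far below 1
print(stokes_drag(eta_water, r, v))              # ~6e-13 N
```

With a Reynolds number around 10⁻⁵, the Stokes formula applies with excellent accuracy; compare Homework Problem 46 in Chapter 1.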

As you can probably tell, Vogel is a master writer. If you are suffering from boredom during this coronavirus pandemic, order a copy of Life in Moving Fluids from Amazon. I own the Second Edition, Revised and Expanded. It's the perfect read for anyone interested in biological fluid dynamics.

Tuesday, April 14, 2020

Life in Moving Fluids

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the central concept of fluid dynamics: the Reynolds number.
The importance of turbulence (nonlaminar flow) is determined by a dimensionless number characteristic of the system called the Reynolds number NR. It is defined by

                                NR = L V ρ / η ,            (1.62)

where L is a length characteristic of the problem, V a velocity characteristic of the problem, ρ the density, and η the viscosity of the fluid.
Life in Moving Fluids, by Steven Vogel, superimposed on Intermediate Physics for Medicine and Biology.
Life in Moving Fluids,
by Steven Vogel.
To provide a more in-depth analysis of Reynolds number, I will quote some excerpts from Life in Moving Fluids by Steven Vogel. I chose this book in part because of its insights into fluid dynamics, and in part because it is written so clearly. I use Vogel’s writing as a model for how to explain complicated concepts using vivid and simple language, metaphors, and analogies. He begins by analyzing the drag force on an object immersed in a moving fluid, and then introduces the “peculiarly powerful” Reynolds number, that “centerpiece of biological... fluid dynamics.”
The utility of the Reynolds number extends far beyond mere problems of drag; it’s the nearest thing we have to a completely general guide to what’s likely to happen when solid and fluid move with respect to each other. For a biologist, dealing with systems that span an enormous size range, the Reynolds number is the central scaling parameter that makes order of a diverse set of physical phenomena. It plays a role comparable to surface-to-volume ratio in physiology….
  I love the analogy to surface-to-volume ratio. Vogel continues
One of the marvelous gifts of nature is that this index proves to be so simple—a combination of four variables [L, V, ρ, and η], each with an exponent of unity. It has, however, a few features worth some comment. First, the Reynolds number is dimensionless… so its value is independent of the system of units in which the variables are expressed. Second, in it reappears the kinematic viscosity… What matters isn’t the dynamic viscosity, μ [Russ and I use the symbol η], and the density, ρ, so much as their ratio… Finally, a bit about L, commonly called the “characteristic length.” For a circular pipe, the diameter is used; choosing the diameter rather than the radius is entirely a matter of convention… The value of the Reynolds number is rarely worth worrying about to better than one or at most two significant figures. Still, that’s not trivial when biologically interesting flows span at least fourteen orders of magnitude[!]…

Of greatest importance in the Reynolds number is the product of size and speed, telling us that the two work in concert, not counteractively. For living systems “small” almost always means slow, and “large” almost always implies fast. That’s why the range of Reynolds numbers so far exceeds the eight or so orders of magnitude over which the lengths of organisms vary…
Russ and I explain how the Reynolds number arises from the ratio of two forces, but I don't think we are as clear as Vogel.
What distinguishes regimes of flow is the relative importance of inertial and viscous forces. The former keeps things going; the latter makes them stop. High inertial forces favor turbulence… High viscous forces should prevent sustained turbulence and favor laminar flow by damping incipient eddies…

Another point should be made emphatically. If, for example, the Reynolds number is low, the situation is highly viscous. The flow will be dominated by viscous forces, vortices will be either nonexistent or nonsustained, and velocity gradients will be very gentle… If, in nature, small means slow and large means fast, then small creatures will live in a world dominated by viscous phenomena and large ones by inertial phenomena—this, even though the bacterium swims in the same water as the whale.
 The bacterium-whale comparison is just the sort of insight that Vogel excels at.
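A quick sketch of that comparison, using Eq. 1.62 with order-of-magnitude sizes and speeds (my illustrative numbers, not Vogel’s):

```python
def reynolds_number(length, speed, density, viscosity):
    """N_R = L V rho / eta (Eq. 1.62)."""
    return length * speed * density / viscosity

rho, eta = 1000.0, 1e-3  # water: kg/m^3 and Pa s

# Rough scales: a 2-micron bacterium at 30 um/s; a 10-m whale at 5 m/s.
n_bacterium = reynolds_number(2e-6, 30e-6, rho, eta)
n_whale = reynolds_number(10.0, 5.0, rho, eta)

print(n_bacterium)  # far below 1: viscosity dominates
print(n_whale)      # tens of millions: inertia dominates
```

Twelve orders of magnitude separate the two, even though both swim in the same water.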

Tomorrow, I’ll provide a few more excerpts from Life in Moving Fluids, in which Vogel studies low Reynolds number flow in more detail.