Friday, August 5, 2011

Fisher-Kolmogorov equation

Mathematical Biology,
by James Murray.
In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss many of the important partial differential equations of physics, such as Laplace’s equation, the diffusion equation, and the wave equation. One lesser-known PDE that we don’t discuss is the Fisher-Kolmogorov equation. However, our book supplies most of what you need to understand this equation.

In Section 2.10, we examine the logistic equation, an ordinary differential equation governing population growth,

du/dt = b u (1 − u) .

For u much less than one, the population grows exponentially with rate b. As u approaches one, the population levels off near a steady state value of u = 1. Our Eq. 2.28 gives an analytical solution to this nonlinear equation.

In Section 4.8, we derive the diffusion equation, which for one dimension is

du/dt = D d²u/dx².

This linear partial differential equation is one of the most famous in physics. It describes diffusion of particles, and also the flow of heat by conduction. D is the diffusion constant.

To get the Fisher-Kolmogorov equation, just put the logistic equation and the diffusion equation together:

du/dt = D d²u/dx² + b u (1 − u).

The Fisher-Kolmogorov equation is an example of a “reaction-diffusion equation.” Russ and I discuss a similar reaction-diffusion equation in Homework Problem 24 of Chapter 4, when modeling intracellular calcium waves. The only difference is that we use a slightly more complicated reaction term rather than the logistic equation.
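For readers who would like to watch a traveling wave emerge, here is a minimal numerical sketch of the Fisher-Kolmogorov equation (my own illustration, not from the book), using a forward-Euler, finite-difference scheme; the parameter values, grid, and initial condition are arbitrary choices.

import numpy as np

# Illustrative parameters (arbitrary choices)
D, b = 1.0, 1.0                      # diffusion constant and growth rate
x = np.linspace(0.0, 100.0, 500)     # one-dimensional domain
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                 # time step small enough for stability

u = np.where(x < 5.0, 1.0, 0.0)      # population initially confined to the left edge

for step in range(5000):
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2   # second spatial derivative
    u = u + dt * (D * d2u + b * u * (1 - u))           # diffusion plus logistic growth
    u[0], u[-1] = u[1], u[-2]        # crude zero-gradient (no-flux) boundaries

print("front is near x =", x[np.argmax(u < 0.5)])

With D = 1 and b = 1 the front settles down to a speed near 2√(Db) = 2, the classic minimum wave speed that Murray derives for this equation.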

In his book Mathematical Biology, James Murray discusses the Fisher-Kolmogorov equation in detail. He states
The classic simplest case of a nonlinear reaction diffusion equation … is [The Fisher-Kolmogorov equation]… It was suggested by Fisher (1937) as a deterministic version of a stochastic model for the spatial spread of a favoured gene in a population. It is also the natural extension of the logistic growth population model discussed in Chapter 11 when the population disperses via linear diffusion. This equation and its travelling wave solutions have been widely studied, as has been the more general form with an appropriate class of functions f(u) replacing ku(1 − u). The seminal and now classical paper is that by Kolmogoroff et al. (1937)…. We discuss this model equation in the following section in some detail, not because in itself it has such wide applicability but because it is the prototype equation which admits travelling wavefront solutions. It is also a convenient equation from which to develop many of the standard techniques for analyzing single-species models with diffusive dispersal.
The Fisher-Kolmogorov equation was derived independently by Ronald Fisher (1890–1962), an English statistician and geneticist, and Andrey Kolmogorov (1903–1987), a Russian mathematician. The key original papers are
Fisher, R. A. (1937) “The Wave of Advance of Advantageous Genes.” Annals of Eugenics, Volume 7, Pages 353–369.

Kolmogoroff, A., I. Petrovsky, and N. Piscounoff (1937) “Étude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique.” Moscow University Mathematics Bulletin, Volume 1, Pages 1–25.

Friday, July 29, 2011

The terahertz

In the opening section of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I provide a table of common prefixes used in the metric system:
giga    G    10⁹
mega    M    10⁶
kilo    k    10³
milli   m    10⁻³
micro   μ    10⁻⁶
nano    n    10⁻⁹
pico    p    10⁻¹²
femto   f    10⁻¹⁵
atto    a    10⁻¹⁸
We went all the way down to “atto” on the small side, but stopped at “giga” on the large side. I now wish we had skipped “atto” and instead included “tera,” or “T,” corresponding to 10¹². Why? Because the prefix tera can be very useful in optics when discussing the frequency of light.

To better appreciate why, take a look at the letter to the editor “Let’s Talk TeraHertz!” in the April 2011 issue of my favorite journal, the American Journal of Physics. There Roger Lewis—from the melodious University of Wollongong—argues that the terahertz is superior to the nanometer when discussing light. Lewis writes
…the terahertz shares the desirable properties of the nanometer as a unit in teaching optics… Like the nanometer, the terahertz conveniently represents visible light to three digits in numbers that fall in the midhundreds… The terahertz has other desirable properties that the nanometer lacks. First, the frequency is a more fundamental property of light than the wavelength because the frequency does not change as light traverses different media, whereas the wavelength may. Second, the energy of a photon is directly proportional to its frequency… The visible spectrum is often taken to span 400–700 nm, corresponding to 749–428 THz, falling in the octave 400–800 THz. …
I suspect that the reason I have always preferred wavelength over frequency when discussing light is that the nanometer provides such a nice, easy-to-remember unit to work with. Had I realized from the start that terahertz offered an equally useful unit for discussing frequency, I might naturally think in terms of frequency rather than wavelength. Incidentally, Planck’s constant is 0.00414 (or about 1/240) in the units of eV/THz.
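As a quick sanity check on these numbers, here is a little conversion script of my own (the wavelengths are just the usual endpoints and middle of the visible range):

c = 2.998e8        # speed of light, m/s
h = 4.136e-15      # Planck's constant, eV*s (about 0.00414 eV/THz)

for wavelength_nm in (400, 550, 700):
    f_THz = c / (wavelength_nm * 1e-9) / 1e12   # f = c/lambda, expressed in THz
    E_eV = h * f_THz * 1e12                     # photon energy E = hf
    print(f"{wavelength_nm} nm  ->  {f_THz:.0f} THz,  {E_eV:.2f} eV")

The 400 and 700 nm lines reproduce Lewis’s 749 and 428 THz.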

After reading Lewis’s letter, I checked Intermediate Physics for Medicine and Biology to see how Russ and I characterized the properties of visible light. On page 360 I found our Table 14.2, which lists the different colors of the electromagnetic spectrum in terms of wavelength (nm), energy (eV), and frequency. We didn’t explicitly mention the unit THz, but we did list the frequency in units of 10¹² Hz, so terahertz was there in every way but name. As a rule I don’t like to write in my books, but nevertheless I suggest that owners of Intermediate Physics for Medicine and Biology take a pencil and replace “(10¹² Hz)” in Table 14.2 with “(THz)”.

Russ and I discuss the terahertz explicitly in Chapter 14 about Atoms and Light.
14.6.4 Far Infrared or Terahertz Radiation

For many years, there were no good sources or sensitive detectors for radiation between microwaves and the near infrared (0.1–100 THz; 1 THz = 10¹² Hz). Developments in optoelectronics have solved both problems, and many investigators are exploring possible medical uses of THz radiation (“T rays”). Classical electromagnetic wave theory is needed to describe the interactions, and polarization (the orientation of the E vector of the propagating wave) is often important. The high attenuation of water in this frequency range means that studies are restricted to the skin or surface of organs such as the esophagus that can be examined endoscopically. Reviews are provided by Smye et al. (2001), Fitzgerald et al. (2002), and Zhang (2002).
The citations are to
Fitzgerald, A. J., E. Berry, N. N. Zinonev, G. C. Walker, M. A. Smith and J. M. Chamberlain (2002) “An Introduction to Medical Imaging with Coherent Terahertz Frequency Radiation,” Physics in Medicine and Biology, Volume 47, Pages R67–R84.

Smye, S. W., J. M. Chamberlain, A. J. Fitzgerald and E. Berry (2001) “The Interaction Between Terahertz Radiation and Biological Tissue,” Physics in Medicine and Biology, Volume 46, Pages R101–R112.

Zhang, X-C. (2002) “Terahertz Wave Imaging: Horizons and Hurdles,” Physics in Medicine and Biology, Volume 47, Pages 3667–3677.
So the terahertz is useful not only when talking about visible light but also when working in the far infrared, where the frequency is about 1 THz. Such “T rays” (I hate that term) are now being used for airport security imaging and as a tool to study cell biology and cancer.

Friday, July 22, 2011

Euler: The Master of Us All

Euler: The Master of Us All,
by William Dunham.
Swiss mathematician Leonhard Euler (1707–1783) was a fascinating man. I discussed him once before in this blog, during an entry about the book e: The Story of a Number. Euler’s name never appears in the 4th edition of Intermediate Physics for Medicine and Biology, but his influence is there.
William Dunham describes Euler’s life and work in his book Euler: The Master of Us All. In the Preface, Dunham writes
This book is about one of the undisputed geniuses of mathematics, Leonhard Euler. His insight was breathtaking, his vision profound, his influence as significant as that of anyone in history. Euler contributed to long-established branches of mathematics like number theory, analysis, algebra, and geometry. He also ventured into the largely unexplored territory of analytic number theory, graph theory, and differential geometry. In addition, he was his century’s foremost applied mathematician, as his work in mechanics, optics, and acoustics amply demonstrates. There was hardly an aspect of the subject that escaped Euler’s penetrating gaze. As the twentieth-century mathematician Andre Weil put it, “All his life…he seems to have carried in his head the whole of the mathematics of his day, both pure and applied.”
In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss one of Euler’s best known contributions, his relationship between the exponential function, trigonometric functions, and complex numbers.
The numbers that we have been using are called real numbers. The number i = √−1 is called an imaginary number. A combination of a real and imaginary number is called a complex number. The remarkable property of imaginary numbers that makes them useful in this context is that e^(iθ) = cos θ + i sin θ.
Dunham wrote about this identity:
“From these equations,” Euler noted with evident satisfaction, “we understand how complex exponentials can be expressed by real sines and cosines.” His enthusiasm has been echoed by mathematicians ever since. Few would argue that Euler’s identity is among the most beautiful formulas of all.
Euler didn’t invent complex numbers, but he did contribute significantly to their development, including a derivation of this gem (“which seems extraordinary to me,” wrote Euler)

i^i = e^(−π/2).

Dunham’s book gives examples of Euler’s contributions to number theory, logarithms, infinite series, analytic number theory, complex variables, algebra, geometry, and combinatorics. For instance, Dunham describes a discovery Euler made while in his 20s.
One of his earliest triumphs was a solution of the so-called “Basel Problem” that perplexed mathematicians for the better part of the previous century. The issue was to determine the exact value of the infinite series

1 + 1/4 + 1/9 + 1/16 + 1/25 + … + 1/k² + … .

… The answer was not only a mathematical tour de force but a genuine surprise, for the series sums to π2/6. This highly non-intuitive result made the solution all the more spectacular and its solver all the more famous.
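None of these results require taking anyone’s word for it; a few lines of Python (my own quick check, purely for illustration) confirm Euler’s identity, the value of i^i, and the Basel sum:

import cmath, math

theta = 0.7   # any angle will do
print(cmath.exp(1j * theta), complex(math.cos(theta), math.sin(theta)))   # e^(i theta) = cos + i sin

print((1j ** 1j).real, math.exp(-math.pi / 2))   # i^i = e^(-pi/2), a real number

basel = sum(1.0 / k**2 for k in range(1, 100001))   # partial sum of the Basel series
print(basel, math.pi**2 / 6)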
As he grew older, Euler slowly became blind. His accomplishments despite his handicap remind me of Beethoven composing his majestic 9th symphony after going deaf. Dunham writes about Euler
Although unable to see, he not only maintained but even increased his scientific output. In the year 1775, for instance, he wrote an average of one mathematical paper per week. Such productivity came in spite of the fact that he now had to have others read him the contents of scientific papers, and he in turn had to dictate his work to diligent scribes. During his descent into blindness, he wrote an influential textbook on algebra, a 775-page treatise on the motion of the moon, and a massive, three-volume development of integral calculus, the Institutiones calculi integralis. Never was his remarkable memory more useful than when he could see mathematics only in his mind’s eye.

That this blind and aging man forged ahead with such gusto is a remarkable lesson, a tale for the ages. Euler’s courage, determination, and utter unwillingness to be beaten serves, in the truest sense of the word, as an inspiration for mathematician and non-mathematician alike. The long history of mathematics provides no finer example of the triumph of the human spirit.
Dunham concludes
Euler left behind a legacy of epic proportions. So prolific was he that the journal of the St. Petersburg Academy was still publishing the backlog of his papers a full 48 years after his death. There is hardly a branch of mathematics—or for that matter of physics—in which he did not play a significant role.
 Listen to William Dunham talk about Leonhard Euler.
https://www.youtube.com/embed/fEWj93XjON0

Friday, July 15, 2011

The leibniz

In order to motivate the study of thermal physics, Chapter 3 of the 4th edition of Intermediate Physics for Medicine and Biology begins with an examination of how many equations are required to simulate the motion of all the molecules in one cubic millimeter of blood. Russ Hobbie and I write
It is possible to identify all the external forces acting on a simple system and use Newton’s second law (F = ma) to calculate how the system moves … In systems of many particles, such calculations become impossible. Consider, for example, how many particles there are in a cubic millimeter of blood. Table 3.1 shows some of the constituents of such a sample [including 3.3 × 10¹⁹ water molecules]. To calculate the translational motion in three dimensions, it would be necessary to write three equations for each particle using Newton’s second law. Suppose that at time t the force on a molecule is F. Between t and t + Δt, the velocity of the particle changes according to the three equations

vᵢ(t+Δt) = vᵢ(t) + Fᵢ Δt/m, (i = x, y, z).

The three equations for the change of position of the particle are of the form x(t + Δt) = x(t) + vₓ(t)Δt … Solving these equations requires at least six multiplications and six additions for each particle. For 10¹⁹ particles, this means about 10²⁰ arithmetic operations per time interval … It is impossible to trace the behavior of this many molecules on an individual basis.

Nor is it necessary. We do not care which water molecule is where. The properties of a system that are of interest are averages over many molecules: pressure, concentration, average speed, and so forth. These average macroscopic properties are studied in statistical or thermal physics or statistical mechanics.
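Written out as code, the update described in the quotation is just an explicit Euler step applied to every particle; here is a sketch of my own (the array shapes and names are mine, purely for illustration):

import numpy as np

def euler_step(x, v, F, m, dt):
    """One time step for N particles; x, v, F are (N, 3) arrays of position, velocity, force."""
    x_new = x + v * dt        # x(t + dt) = x(t) + v(t) dt, one equation per component
    v_new = v + F * dt / m    # v(t + dt) = v(t) + F dt/m, one equation per component
    return x_new, v_new

# Roughly six multiplications and six additions per particle per step, as the text counts.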
It is difficult to gain an intuitive feel for just how many differential equations are needed in such a calculation, just as it is difficult to imagine just how many molecules make up a macroscopic bit of matter. Chemists have solved the problem of dealing with large numbers of molecules by introducing the unit of a mole, corresponding to Avogadro’s number (6 × 10²³) of molecules. Other quantities involving Avogadro’s number are similarly defined. For instance, the Faraday corresponds to the magnitude of the charge of one mole of electrons (I admit, the Faraday is more of a constant than a unit); see page 60 and Eq. 3.32 of Intermediate Physics for Medicine and Biology. In Problem 2 of Chapter 14, Russ and I discuss the einstein, a unit corresponding to a mole of photons. When doing large-scale numerical simulations on a computer, it would be useful to have a similar unit to handle very large numbers of differential equations, such as are required to model a drop of blood.

Fortunately, such a unit exists, called the leibniz. Sui Huang and John Wikswo coined the term in their paper “Dimensions of Systems Biology,” published in the Reviews of Physiology, Biochemistry and Pharmacology (Volume 157, Pages 81–104, 2006). They write
The electrical activity of the heart during ten seconds of fibrillation could easily require solving 10¹⁸ coupled differential equations (Cherry et al. 2000). (N.B., Avogadro’s number of differential equations may be defined as one Leibnitz, so 10 s of fibrillation corresponds to a micro-Leibnitz problem.) Multiprocessor supercomputers running for a month can execute a micromole of floating point operations, but in the cardiac case such computers may run several orders of magnitude slower than real time, such that modeling 10 s of fibrillation might require 1 exaFLOP/s × year.
The leibniz appeared again in Wikswo et al.’s paper “Engineering Challenges of BioNEMS: The Integration of Microfluidics, Micro- and Nanodevices, Models and External Control for Systems Biology” in the IEE Proceedings Nanobiotechnology (Volume 153, Pages 81–101, 2006).
What distinguishes the models of systems biology from those of many other disciplines is their multiscale richness in both space and time: these models may eventually have millions of dynamic variables with complex non-linear interactions. It is conceivable that the ultimate models for systems biology might require a mole of differential equations (called a Leibnitz) and computations that require a yottaFLOPs (floating point operations per second) computer.
If we take the leibniz (Lz) as our unit of simulation complexity, the calculation Russ and I consider at the start of Chapter 3 requires solving approximately 6 × 10¹⁹ differential equations, or about 0.1 mLz. Note that we describe two first-order differential equations for each molecule, but others might prefer to speak of a single second-order differential equation. This would make a difference of a factor of two in the number of equations. I propose that when using the leibniz we consider only first-order ODEs. Moreover, when using a differential equation governing a vector, we count one equation per component.
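To keep the bookkeeping straight, here is a trivial helper of my own that converts an equation count into leibniz, applied to the two examples above:

N_A = 6.022e23   # one leibniz (Lz) is Avogadro's number of first-order ODEs

def in_leibniz(n_equations):
    return n_equations / N_A

print(in_leibniz(6e19) * 1e3, "mLz")     # the blood-drop example: about 0.1 mLz
print(in_leibniz(1e18) * 1e6, "microLz") # Wikswo's fibrillation example: a micro-leibniz problem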

For those not familiar with Gottfried Leibniz (1646–1716), he was a German mathematician and a co-inventor of the calculus, along with Isaac Newton. Leibniz and Newton got into one of the biggest priority disputes in the history of science about this landmark development. Newton has his unit, so it’s only fair that Leibniz has one too. Leibniz also made contributions to information theory and computational science, so the leibniz is a particularly appropriate way to honor this great mathematician.

John Wikswo, my PhD advisor when I was in graduate school at Vanderbilt University, notes that there are two alternative spellings of Leibniz’s name: Leibnitz and Leibniz. I favor “Leibniz,” the spelling on Wikipedia, and so does Wikswo now, but he points out that there’s plenty of support for “Leibnitz,” the spelling used in his earlier publications. I had high hopes of enjoying a bit of fun at my friend’s expense by adding an annoying “[sic]” after each appearance of “Leibnitz” in the above quotes, but then Wikswo pointed out that Richard Feynman used “Leibnitz” in The Feynman Lectures on Physics. What can I say; you can’t argue with Feynman.

Friday, July 8, 2011

Gasiorowicz

Quantum Physics,
by Stephen Gasiorowicz.
One of the standard topics in any modern physics class is blackbody radiation. Indeed, it was the study of blackbody radiation that led to the development of quantum mechanics. In Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
The spectrum of power per unit area emitted by a completely black surface in the wavelength interval between λ and λ + dλ is … a universal function called the blackbody radiation function. …The description of [this] function …by Planck is one of the foundations of quantum mechanics… We can find the total amount of power emitted per unit surface area by integrating¹⁰ Eq. 14.32 [Planck’s blackbody radiation function]…[The result] is the Stefan-Boltzmann law.
As I was reading over this section recently, I was struck by the footnote number ten (present in earlier editions of our book, so I know it was originally written by Russ).
¹⁰This is not a simple integration. See Gasiorowicz (1974, p. 6).
This is embarrassing to admit, but although I am a coauthor on the 4th edition, there are still topics in our book that I am learning about. I always feel a little guilty about this, so recently I decided it is high time to take a look at the book by Stephen Gasiorowicz and see just how difficult this integral really is. The result was fascinating. The integral is not terribly complicated, but it involves a clever trick I would have never thought of. Because math is rather difficult to write in the html of this blog (at least for me), I will explain how to evaluate this integral through a homework problem. When revising our book for the 4th edition, I enjoyed finding “missing steps” in derivations and then creating homework problems to lead the reader through them. For instance, in Problem 24 of Chapter 14, Russ and I asked the reader to “integrate Eq. 14.32 over all wavelengths to obtain the Stephan-Boltzmann law, Eq. 14.33.” Then, we added “You will need the integral [integrated from zero to infinity]

∫ x³/(eˣ − 1) dx = π⁴/15.

Below is a new homework problem related to footnote ten, in which the reader must evaluate the integral given at the end of Problem 24. I base this homework problem on the derivation I found in Gasiorowicz. In our book, we cite the 1974 edition.
Gasiorowicz, S. (1974) Quantum Physics. New York, Wiley.
This is the edition in Kresge library at Oakland University, and is the one I used to create the homework problem. However, I found using amazon.com’s “look inside” feature that this derivation is also in the more recent 3rd edition (2003). In addition, I found the derivation repeated in another of Gasiorowicz’s books, The Structure of Matter.
Problem 24 ½ Evaluate the integral given in Problem 24.
(a) Factor out e⁻ˣ, and then use the geometric series 1 + z + z² + z³ + … = 1/(1−z) to replace the denominator by an infinite sum.
(b) Make the substitution y = (n+1) x.
(c) Evaluate the resulting integral over y, either by looking it up or (better) by repeated integration by parts.
(d) Make the substitution m = n + 1.
(e) Use the fact that the sum of 1/m⁴ from 1 to infinity is equal to π⁴/90 to evaluate the integral.
Really, who would have thought to replace 1/(1−z) by an infinite series? Usually, I am desperately trying to do just the opposite: get rid of an infinite series, such as a geometric series, by replacing it with a simple function like 1/(1−z). The last thing I would have wanted to do is to introduce a dreaded infinite sum into the calculation. But it works. I must admit, this is a bit of a cheat. Even if in part (c) you don’t look up the integral, but instead laboriously integrate by parts several times, you still have to pull a rabbit out of the hat in step (e) when you sum up 1/m⁴. Purists will verify this infinite sum by calculating the Fourier series over the range 0 to 2π of the function

f(x) = π⁴/90 − π² x²/12 + π x³/12 − x⁴/48

and then evaluating it at x = 0. (Of course, you know how to calculate a Fourier series, since you have read Chapter 11 of Intermediate Physics for Medicine and Biology). When computing Fourier coefficients, you will need to do a bunch of integrals containing powers of x times cos(nx), but you can do those by—you guessed it—repeated integration by parts. Thus, even if lost on a deserted island without your math handbook or a table of integrals, you should still be able to complete the new homework problem using Gasiorowicz’s method. I’m assuming you know how to do some elementary calculus—integrate by parts and simple integrals of powers, exponentials, and trigonometric functions—without looking it up. (Full disclosure: I found the function f(x) given above by browsing through a table of Fourier series in my math handbook. On that lonely island, you would have to guess f(x), so let’s hope you at least remembered to bring along plenty of scrap paper.)
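If you would rather check the answer than rederive it, a few lines of numerics (my own sketch, assuming scipy is available) confirm both the integral from Problem 24 and the sum used in step (e):

import math
from scipy.integrate import quad

# The integrand x^3/(e^x - 1), rewritten as x^3 e^(-x)/(1 - e^(-x)) to avoid overflow at large x
f = lambda x: x**3 * math.exp(-x) / (1.0 - math.exp(-x)) if x > 0 else 0.0
integral, _ = quad(f, 0, math.inf)
print(integral, math.pi**4 / 15)      # both about 6.4939

# The sum from step (e): 1 + 1/2^4 + 1/3^4 + ... = pi^4/90
partial_sum = sum(1.0 / m**4 for m in range(1, 100001))
print(partial_sum, math.pi**4 / 90)   # both about 1.0823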

I checked out the website for Gasiorowicz’s textbook. There is a lot of interesting material there. The book covers many of the familiar topics of modern physics: blackbody radiation, the photoelectric effect, the Bohr model for hydrogen, the uncertainty principle, the Schrödinger equation and more, all the way up to the structure of atoms and molecules. I learned this material from Eisberg and Resnick’s Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (1985), cited several times in Intermediate Physics for Medicine and Biology, when I used their book in my undergraduate modern physics class at the University of Kansas. For an undergraduate quantum mechanics class, I like Griffiths’s Introduction to Quantum Mechanics, in part because I have taught from that book. But Gasiorowicz’s book appears to be in the same class as these two. I noticed that Gasiorowicz is from the University of Minnesota, so perhaps Russ knows him.

P.S. Did any of you dear readers notice that Russ and I spelled the name “Stefan” of the “Stefan-Boltzmann law” differently in the text of Chapter 14 and in Problem 24? I asked Google, and it found sites using both spellings, but the all-knowing Wikipedia favors “Stefan”. I’m not 100% certain which is correct (it may have to do with the translation from Slovene to English), but we should at least have been consistent within our book.

Friday, July 1, 2011

Physiology is the link between the basic sciences and medicine

Textbook of Medical Physiology,
by Guyton and Hall.
When Russ Hobbie and I need to cite a general physiology reference in the 4th edition of Intermediate Physics for Medicine and Biology, we often choose the Textbook of Medical Physiology. The book was originally written by Arthur Guyton, but in the most recent editions the lead author is John Hall. We cite the book in several places:
1) In Chapter 1, we reproduce one of Guyton’s figures of the human circulatory system (we reference the 8th edition of the Textbook of Medical Physiology, 1991).
2) We cite Guyton in Chapter 5 when discussing osmotic pressure and when analyzing countercurrent transport.
3) When discussing nerve synapses and neurotransmitters, we cite the 10th edition (2000), on which Hall is a coauthor.
4) In our chapter on feedback loops (Chapter 10), we reproduce a figure from the 9th edition (1995, Guyton sole author) showing how alveolar ventilation responds to exercise.
5) In Chapter 11, on the method of least squares, we base Homework Problem 15 on data from Guyton’s textbook regarding the secretion of cortisol by the adrenal gland.

When I was a graduate student at Vanderbilt University, I decided to sit in on the physiology and biochemistry classes that the medical students took. The physiology class was based on Guyton’s book (likely the 6th or 7th edition). I took the class seriously, but since I had little formal coursework in biology (only one introductory class as an undergraduate at the University of Kansas, plus a high school course), and because I didn’t get as much out of the lectures as I should have, my main accomplishment was reading the Textbook of Medical Physiology, cover to cover. Unfortunately, my copy of the book has been lost (probably loaned out to someone who forgot to return it). It’s a pity, because I have fond memories of that book, and all the physiology I learned while reading it.

The 12th edition of the Textbook of Medical Physiology (2010) was published after the 4th edition of Intermediate Physics for Medicine and Biology went to press. Hall and Guyton’s preface states
The first edition of the Textbook of Medical Physiology was written by Arthur C. Guyton almost 55 years ago. Unlike most major medical textbooks, which often have 20 or more authors, the first eight editions of the Textbook of Medical Physiology were written entirely by Dr. Guyton, with each new edition arriving on schedule for nearly 40 years. The Textbook of Medical Physiology, first published in 1956, quickly became the best-selling medical physiology textbook in the world. Dr. Guyton had a gift for communicating complex ideas in a clear and interesting manner that made studying physiology fun. He wrote the book to help students learn physiology, not to impress his professional colleagues.

I worked closely with Dr. Guyton for almost 30 years and had the privilege of writing parts of the 9th and 10th editions. After Dr. Guyton's tragic death in an automobile accident in 2003, I assumed responsibility for completing the 11th edition.

For the 12th edition of the Textbook of Medical Physiology, I have the same goal as for previous editions—to explain, in language easily understood by students, how the different cells, tissues, and organs of the human body work together to maintain life.

This task has been challenging and fun because our rapidly increasing knowledge of physiology continues to unravel new mysteries of body functions. Advances in molecular and cellular physiology have made it possible to explain many physiology principles in the terminology of molecular and physical sciences rather than in merely a series of separate and unexplained biological phenomena.

The Textbook of Medical Physiology, however, is not a reference book that attempts to provide a compendium of the most recent advances in physiology. This is a book that continues the tradition of being written for students. It focuses on the basic principles of physiology needed to begin a career in the health care professions, such as medicine, dentistry and nursing, as well as graduate studies in the biological and health sciences. It should also be useful to physicians and health care professionals who wish to review the basic principles needed for understanding the pathophysiology of human disease.

I have attempted to maintain the same unified organization of the text that has been useful to students in the past and to ensure that the book is comprehensive enough that students will continue to use it during their professional careers.

My hope is that this textbook conveys the majesty of the human body and its many functions and that it stimulates students to study physiology throughout their careers. Physiology is the link between the basic sciences and medicine. The great beauty of physiology is that it integrates the individual functions of all the body's different cells, tissues, and organs into a functional whole, the human body. Indeed, the human body is much more than the sum of its parts, and life relies upon this total function, not just on the function of individual body parts in isolation from the others…
If you are a physicist studying from Intermediate Physics for Medicine and Biology with little background in biology and medicine, you will need to find a good general source of information about physiology. The Guyton and Hall Textbook of Medical Physiology is a good choice. Another book Russ and I cite a lot is Textbook of Physiology by Patton, Fuchs, Hille, Scher and Steiner. However, I cannot find an edition more recent than 1989, so it would not be a good choice for getting up-to-date information.

Arthur Guyton (1919–2003) was a famous physiologist, known for his research on the circulatory system. An obituary published in The Physiologist says
Arthur Guyton’s research contributions, which include more than 600 papers and 40 books, are legendary and place him among the greatest figures in the history of cardiovascular physiology. His research covered virtually all areas of cardiovascular regulation and led to many seminal concepts that are now an integral part of our understanding of cardiovascular disorders such as hypertension, heart failure, and edema. It is difficult to discuss cardiovascular regulation without including his concepts of cardiac output and venous return, negative interstitial fluid pressure and regulation of tissue fluid volume and edema, regulation of tissue blood flow and whole body blood flow autoregulation, renal-pressure natriuresis, and long-term blood pressure regulation.

Perhaps his most important scientific contribution, however, was a unique quantitative approach to cardiovascular regulation through the application of principles of engineering and systems analysis. He had an extremely analytical mind and an uncanny ability to integrate bits and pieces of information, not only from his own research but also from others, into a quantitative conceptual framework. He built analog computers and pioneered the application of large-scale systems analyses to modeling the cardiovascular system before digital computers were available. With the advent of digital computers, his cardiovascular models expanded dramatically in the 1960’s and 70’s to include the kidneys and body fluids, hormones, autonomic nervous system, as well as cardiac and circulatory functions. He provided the first comprehensive systems analysis of blood pressure regulation and used this same quantitative approach in all areas of his research, leading to new insights that are now part of the everyday vocabulary of cardiovascular researchers.

Many of his concepts were revolutionary and were initially met with skepticism, and even ridicule, when they were first presented. When he first presented his mathematical model of cardiovascular function at the Council for High Blood Pressure Research meeting in 1968, the responses of some of the hypertension experts, recorded at the end of the article, reflected a tone of disbelief and even sarcasm. Guyton’s systems analysis had predicted a dominant role for the renal pressure natriuresis mechanism in long-term blood pressure regulation, a concept that seemed heretical to most investigators at that time. One of the leading figures in hypertension research commented “I realize that it is an impertinence to question a computer and systems analysis, but the answers they have given to Guyton seem authoritarian and revolutionary.” Guyton’s concepts were authoritarian and revolutionary, but after 35 years of experimental studies by investigators around the world, they have also proved to be very powerful in explaining diverse physiological and clinical observations. His far-reaching concepts will continue to be the foundation for generations of cardiovascular physiologists.
If you’re interested in the interface between physics and physiology, you’ll find the Guyton and Hall Textbook of Medical Physiology to be a valuable resource.

Friday, June 24, 2011

William Beaumont

I spent last weekend at Mackinac Island in northern Lake Huron. It’s an interesting little place that you reach by ferry and that does not allow any vehicles (except for a few fire engines and ambulances). The ferry ride is dominated by a view of the Mackinac Bridge (the “Mighty Mac”) connecting the upper and lower peninsulas of Michigan. It is a gorgeous piece of engineering (read about its construction in Henry Petroski’s book Engineers of Dreams: Great Bridge Builders and the Spanning of America). On the island, people walk, bike, and ride in horse-drawn carriages. An old 18th-century fort dominates the coastline on the south side of the island, and the nearby town has many shops and restaurants (a cynic might call the town a tourist trap). We visited the fort, observed the firing of a Civil War-era cannon, had a carriage tour, stopped at “Arch Rock,” and saw the famous Grand Hotel. Last week happened to be their annual Lilac Festival, which included a literal “dog and pony show” (the theme this year was board games, and the little terriers carrying big Scrabble pieces on their backs won first prize).

Buying fudge is a Mackinac Island tradition. We stopped at one of the iconic fudge shops, Murdick’s Fudge, and bought a few slabs. One of the Murdick clan, Ryan Murdick, attended Oakland University, where I teach, and obtained a master’s degree in physics. He and I published several papers together, including one about the bidomain model of the electrical properties of cardiac tissue (see Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology), one about magnetocardiography (the magnetic field produced by the heart, Chapter 8), and one about how eddy currents induced in electroencephalogram electrodes can influence measurements of the magnetoencephalogram (Chapter 8; the effect on the MEG is very small). The papers are:
Murdick, R. and B. J. Roth (2003) “Magneto-encephalogram Artifacts Caused by Electro-encephalogram Electrodes,” Medical and Biological Engineering and Computing, Volume 41, Pages 203–205.

Murdick, R. A. and B. J. Roth (2004) “A Comparative Model of Two Mechanisms From Which a Magnetic Field Arises in the Heart,” Journal of Applied Physics, Volume 95, Pages 5116–5122.

Roth, B. J., S. G. Patel, and R. A. Murdick (2006) “The Effect of the Cut Surface During Electrical Stimulation of a Cardiac Wedge Preparation,” IEEE Transactions on Biomedical Engineering, Volume 53, Pages 1187–1190.
I wasn’t expecting to find material for this blog on Mackinac Island, but I did. In 1822, Alexis St. Martin was accidentally shot in the abdomen in a small trading post near Fort Mackinac. Dr. William Beaumont was summoned to treat St. Martin, and was able to save his life. However, the wound healed in an odd way, leaving an opening providing access to the inside of his stomach.

Readers of the 4th edition of Intermediate Physics for Medicine and Biology will appreciate what happened next. The resourceful Beaumont took advantage of the situation to conduct experiments on digestion. He tied different foods to a string, threaded them into St. Martin’s stomach, left them to digest for a while, and then pulled them out to see what had happened. These ground-breaking experiments were instrumental in establishing how digestion works. I toured a small museum dedicated to Beaumont, which describes these experiments in graphic (perhaps too graphic) detail.

I am interested in Beaumont not just because of his experiments studying digestion. Beaumont Hospital, in Royal Oak, Michigan, is the clinical partner for a new medical school recently established at Oakland University. The first class of students at the Oakland University William Beaumont School of Medicine arrives this August. This will be a landmark event in OU’s history, and we are all excited about it.

I can’t help but be intrigued by the juxtaposition of these two stories: William Beaumont’s experiments on Alexis St. Martin, and the establishment of a new medical school bearing Beaumont’s name. St. Martin lived into his 80s. I expect our new medical school will have a similarly long and productive life.

Friday, June 17, 2011

Opus 200

In August 2007 I began posting entries to this blog, in order to highlight topics discussed in the 4th edition of Intermediate Physics for Medicine and Biology. Since then, I’ve posted an entry every Friday morning, without fail. This is my 200th (excluding two rare non-Friday posts).

Why do I keep this blog? First, I hope it sells books. Second, I want a way to keep the book up-to-date. Third, some topics Russ and I only mention in passing, and this blog lets me explore these issues in more detail. Fourth, in the blog I often feature past scientists who contributed to the intersection between physics and biology. Fifth and finally, I enjoy it. I like writing, and I find the topics fascinating.

I get some help. Russ Hobbie often sends me ideas and suggestions. My daughter Kathy posted some key entries when I was in Paris and had very limited computer access. I particularly like comments (thanks Debbie). Feel free to voice your opinion. (However, I’m glad the bozo who posted links to porno sites in the comments has stopped.) I hope the readers find this blog useful.

Remember, the book website contains many useful items, including an errata (listing all known errors in the book), a reprint of our 2009 Resource Letter that appeared in the American Journal of Physics, a link to an interview with Russ Hobbie that appeared in the December 2006 Newsletter of the Division of Biological Physics, which is part of the American Physical Society, and (my personal favorite) a link to Russ Hobbie’s MacDose video on YouTube.

Finally, if you use Facebook, you can join the group “Intermediate Physics for Medicine and Biology” and receive these postings about the book there.

Friday, June 10, 2011

National Academies Press

Getting correct and detailed information about the applications of physics to biology and medicine is important. The 4th edition of Intermediate Physics for Medicine and Biology is an excellent source of such information. Yet I know that you, dear reader, are probably saying: “Yes, but I want a FREE source of information.” Well, for cheapskates like me, there’s some good news this week from the National Academies Press (forwarded to me by Russ Hobbie). First, what is the National Academies Press? Their website explains:
The National Academies Press (NAP) was created by the National Academies to publish the reports issued by the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council, all operating under a charter granted by the Congress of the United States. The NAP publishes more than 200 books a year on a wide range of topics in science, engineering, and health, capturing the most authoritative views on important issues in science and health policy. The institutions represented by the NAP are unique in that they attract the nation’s leading experts in every field to serve on their award-winning panels and committees. The nation turns to the work of NAP for definitive information on everything from space science to animal nutrition.
Now, what’s the good news? An email from the NAP states
As of June 2, 2011, all PDF versions of books published by the National Academies Press (NAP) will be downloadable free of charge to anyone. This includes our current catalog of more than 4,000 books plus future reports published by NAP.

Free access to our online content supports the mission of NAP—publisher for the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council—to improve government decision making and public policy, increase public education and understanding, and promote the acquisition and dissemination of knowledge in matters involving science, engineering, technology, and health. In 1994, we began offering free content online. Before today’s announcement, all PDFs were free to download in developing countries, and 65 percent of them were available for free to any user.

Like no other organization, the National Academies can enlist the nation’s foremost scientists, engineers, health professionals, and other experts to address the scientific and technical aspects of society’s most pressing problems through the authoritative and independent reports published by NAP. We invite you to sign up for MyNAP —a new way for us to deliver free downloads of this content to loyal subscribers like you, to offer you customized communications, and to reward you with exclusive offers and discounts on our printed books.
Intermediate Physics for Medicine and Biology cites several NAP reports. For instance, in Section 9.10 about the possible effects of weak external electric and magnetic fields, Russ and I cite and quote from the NAP report Possible Health Effects of Exposure to Residential Electric and Magnetic Fields. I tested the website (free just seemed too good to be true), and was able to download a pdf version of the document with no charge (although I did have to give them my email address when I logged in). I got 379 pages of expert analysis about the biological effects of powerline fields. Russ and I quote the bottom line of this report in our book:
There is no convincing evidence that exposure to 60-Hz electric and magnetic fields causes cancer in animals... There is no evidence of any adverse effects on reproduction or development in animals, particularly mammals, from exposure to power-frequency 50- or 60-Hz electric or magnetic fields.
In Chapter 16 on the medical use of X rays, we cite three of the Biological Effects of Ionizing Radiation (BEIR) reports: V, VI, and VII. These reports provide important background about the linear nonthreshold model of radiation exposure. Then in Chapter 17 on nuclear physics and nuclear medicine we cite BEIR reports IV and VI when discussing radiation exposure caused by radon gas. The full citations listed in our book are:
"BEIR IV (1988) Committee on the Biological Effects of Ionizing Radiations. Health Risks of Radon and Other Internally Deposited Alpha-Emitters. Washington, D.C., National Academy Press.

BEIR Report V (1990) Committee on the Biological Effects of Ionizing Radiation. Health Effects of Exposure to Low Levels of Ionizing Radiation. Washington, DC, National Academy Press.

BEIR VI (1999) Committee on Health Risks of Exposure to Radon. Health Effects of Exposure to Radon. Washington, D.C., National Academy Press.

BEIR Report VII (2005) Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation. Health Risks from Exposure to Low Levels of Ionizing Radiation. Washington, DC, National Academy Press.
Besides the reports cited in our book, there are many others you might like to read. In a previous blog entry, I discussed the report BIO2010: Transforming Undergraduate Education for Future Research Biologists, published by NAP. You can download a copy free. It discusses how we should teach physics to future life scientists. In another blog entry I discussed the book In the Beat of a Heart, which explores biological scaling. It is also published by the NAP.

Yet another report, published just last year, that will be of interest to readers of Intermediate Physics for Medicine and Biology is the NAP report Research at the Intersection of the Physical and Life Sciences. The report summary explains the goals of the report.
Today, while it still is convenient to classify most research in the natural sciences as either biological or physical, more and more scientists are quite deliberately and consciously addressing problems lying at the intersection of these traditional areas. This report focuses on their efforts. As directed by the charges in the statement of task (see Appendix A), the goals of the committee in preparing this report are several fold. The first goal is to provide a conceptual framework for assessing work in this area—that is, a sense of coherence for those not engaged in this research about the big objectives of the field and why it is worthy of attention from fellow scientists and programmatic focus by funding agencies. The second goal is to assess current work using that framework and to point out some of the more promising opportunities for future efforts, such as research that could significantly benefit society. The third and final goal of the report is to set out strategies for realizing those benefits—ways to enable and enhance collaboration so that the United States can take full advantage of the opportunities at this intersection.
An older report that covers much of the material that is in the last half of Intermediate Physics for Medicine and Biology is Mathematics and Physics of Emerging Biomedical Imaging (1996). Finally, yet another useful report is Advancing Nuclear Medicine Through Innovation (2007).

All this and more is now available at no cost. Who says there’s no such thing as a free lunch?

Friday, June 3, 2011

Jean Perrin and Avogadro’s Number

Regular readers of this blog may recall that last summer I visited Paris for my 25th wedding anniversary, which was followed by a string of blog entries about famous French scientists. During this trip, my wife and I toured the Pantheon, where we saw the burial site of French scientist Jean Baptiste Perrin (1870–1942). Russ Hobbie and I mention Perrin in a footnote on page 85 of the 4th edition of Intermediate Physics for Medicine and Biology.
The Boltzmann factor provided Jean Perrin with the first means to determine Avogadro’s number [NA]. The density of particles in the atmosphere is proportional to exp(−mgy/kBT), where mgy is the gravitational potential energy of the particles. Using particles for which m was known, Perrin was able to determine [Boltzmann’s constant] kB for the first time. Since the gas constant R was already known, Avogadro’s number was determined from the relationship R = NA kB.
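To make the footnote concrete, here is a toy version of the measurement (entirely my own illustration, with invented numbers rather than Perrin’s data): generate particle counts that fall off exponentially with height, fit the slope of ln n versus y, and recover kB and then NA.

import numpy as np

T = 293.0          # temperature, K
R = 8.314          # gas constant, J/(mol K)
g = 9.81           # m/s^2
m_eff = 2.0e-17    # buoyancy-corrected particle mass, kg (assumed known from Stokes' law)

# Pretend these are counted particle densities at four heights (synthetic "data")
y = np.array([0.0, 30e-6, 60e-6, 90e-6])              # heights, m
n = 1000.0 * np.exp(-m_eff * g * y / (1.38e-23 * T))

slope = np.polyfit(y, np.log(n), 1)[0]   # slope of ln n versus y is -m_eff*g/(kB*T)
kB = -m_eff * g / (slope * T)
print("kB =", kB, "J/K,  NA =", R / kB, "per mole")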
This brief footnote does not do justice to Perrin’s extensive accomplishments. He played a key role in establishing that matter is not a continuum, but rather is made out of atoms. He performed experiments not only on the exponential distribution of particles (described above, and also known as sedimentation equilibrium), but also on Brownian motion. Russ and I describe this phenomenon in Chapter 4:
This movement of microscopic-sized particles, resulting from bombardment by much smaller invisible atoms, was first observed by English botanist Robert Brown in 1827 and is called Brownian motion.
Molecular Reality:
A Perspective on the
Scientific Work of Jean Perrin,
by Mary Jo Nye.

One can learn more about Perrin in the book Molecular Reality: A Perspective on the Scientific Work of Jean Perrin, by Mary Jo Nye. I would not rank this book with the best histories of science I have read (my top three would be The Making of the Atomic Bomb, The Eighth Day of Creation, and The Maxwellians), or among the best scientific biographies (such as Subtle is the Lord: The Science and Life of Albert Einstein). However, it did provide some valuable insight into Perrin’s achievements. Nye states in her introduction that
What has struck me in a perusal of the literature on these topics [discoveries in physics during the early 20th century] is the tendency to assume what so many of the physical scientists of this pivotal period did not for one minute assume—the discontinuity of the matter which underlies visible reality. In looking back upon the discoveries and theories of particles, one perhaps fails to realize that the focus was not simply upon the nature of the molecules, ions and atoms, but upon the very fact of their existence…

In analyzing the role of Jean Perrin in the eventual acceptance of this assumption among the outspoken majority of the scientific community, I have concentrated upon the period of experimental, theoretical, philosophical and popular science which climaxed with the Solvay conference of 1911 and with the publication of Perrin’s book Les Atomes [read an online English translation here] in 1913…

In conclusion, I have discussed the reception of Perrin’s scientific experimentation and propagandisation on the subject of molecular reality, especially his work on Brownian movement, which climaxed in 1913 with the completion of a number of national and international conferences and the publication of Les Atomes. Though Perrin himself did not view his task as completed at that time, the question was no longer central to the basic working assumptions of scientists, and polemics on this question were no longer an impediment or impetus to the progress of general scientific conceptualization. That Perrin’s role was historically essential to this denouement cannot, in my opinion, be doubted.
Nye’s first chapter on 19th-century background contains a little too much philosophy of science for my taste. But her historical review does indicate that, despite what our footnote says, Perrin did not provide the first estimate for Avogadro’s number, but rather provided a definitive early measurement of that value. Her second chapter about Young Perrin: Initial Investigations was better, and the book really captured my attention in the third chapter on The Essential Debate.
The exponential law which Perrin announced in 1908, describing the vertical distribution of a colloid at equilibrium, was the fruit of laborious experiments on Brownian movement after several years of apprenticeship in the study of colloids. Included in his first 1908 paper on Brownian movement was a successful application of the concepts of osmotic pressure and mean kinetic energy to the visible Brownian particles, as well as a convincing calculation of Avogadro’s number. These endeavours were but the prelude to a five-year drama devoted to the erection of an unassailable edifice to house the dictum of molecular reality, a structure buttressed at its most vulnerable point of criticism by the observed laws of visible Brownian movement.
I was particularly fascinated by how Perrin knew the mass of the particles he studied.
In order to find m, Perrin utilized Stokes’ law [see Section 4.5 of Intermediate Physics for Medicine and Biology], applying it to a column of the emulsion in a vertical capillary tube, and observing the fact that when the emulsion is very far from equilibrium, the Brownian granules in the upper layers of the column fall as if they were droplets of a cloud. Using Stokes’ formula relating the velocity of a spherical droplet, its radius, and the viscosity of the medium, Perrin found the radius of the granules [on the order of a micron].
Then from the known density, he could determine the mass. Perrin had to go to great lengths to obtain particles with a uniform distribution of radii, starting with 1200 grams of particles and, after repeated centrifugation, ending with less than a gram of uniform particles.
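In modern notation the logic runs as follows (a sketch of my own, with invented illustrative values rather than Perrin’s measurements): the settling speed of the “cloud” gives the radius through Stokes’ law, and the known density then gives the mass.

import math

eta = 1.0e-3     # viscosity of water, Pa*s
rho_p = 1200.0   # density of the granules, kg/m^3 (assumed)
rho_f = 1000.0   # density of water, kg/m^3
g = 9.81         # m/s^2
v = 4.4e-7       # observed settling speed of the cloud, m/s (assumed)

# Stokes' law for a settling sphere: v = 2 r^2 (rho_p - rho_f) g / (9 eta); solve for r
r = math.sqrt(9 * eta * v / (2 * (rho_p - rho_f) * g))
m = (4.0 / 3.0) * math.pi * r**3 * rho_p
print(f"radius = {r*1e6:.2f} microns, mass = {m:.2e} kg")

With these made-up values the radius comes out near one micron, consistent with the granule sizes Nye describes.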

In 1926, Jean Perrin won the Nobel Prize in Physics “for his work on the discontinuous structure of matter, and especially for his discovery of sedimentation equilibrium.”