Friday, March 27, 2015

Projections and Back Projections

Tomography is one of the most important contributions of mathematics to medicine. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe two methods to solve the problem of tomographic reconstruction.
The reconstruction problem can be stated as follows. A function f(x,y) exists in two dimensions. Measurements are made that give projections: the integrals of f(x,y) along various lines as a function of displacement perpendicular to each line…F(θ,x'), where x' is the distance along the axis at angle θ with the x axis. The problem is to reconstruct f(x,y) from the set of functions F(θ,x'). Several different techniques can be used… We will consider two of these techniques: reconstruction by Fourier transform […] and filtered back projection…. The projection at angle θ is integrated along the line y':
The definition of a projection, used in tomography.
[where x = x' cosθ − y' sinθ and y = x' sinθ + y' cosθ]… The definition of the back projection is
The definition of a back projection, used in tomography.
where x' is determined for each projection using Eq. 12.27 [x' = x cosθ + y sinθ].
In IPMB, Homework Problem 32 asks you to take the function (the “object”)
A mathematical function in Homework Problem 32 in Intermediate Physics for Medicine and Biology that serves as the object in an analytical example of tomography.
and calculate the projection using Eq. 12.29, and then calculate the back projection using Eq. 12.30. The object and the back projection are different. The moral of the story is that you cannot solve the tomography problem by back-projection alone. Before you back-project, you must filter.*

I like having two homework problems that illustrate the same point, one that I can do in class and another that I can assign to the students. IPMB contains only one example of projecting and then back-projecting, but recently I have found another. So, dear reader, here is a new homework problem; do this one in class, and then assign Problem 32 as homework.
Problem 32½ Consider the object f(x,y) = √(a² − x² − y²)/a for √(x² + y²) less than a, and 0 otherwise.
(a) Plot f(x,0) vs. x.
(b) Calculate the projection F(θ,x'). Plot F(0,x') vs. x'.
(c) Use the projection from part (b) to calculate the back projection fb(x,y). Plot fb(x,0) vs. x.
(d) Compare the object and the back projection. Explain qualitatively how they differ.
The nice thing about this function
A mathematical function in a new Homework Problem for Intermediate Physics for Medicine and Biology that serves as the object in an analytical example of tomography.
(as well as the function in Problem 32) is that f(x,y) does not depend on direction, so F(θ,x') is independent of θ; you can make your life easier and solve it for θ = 0. Similarly, you can calculate the back projection along any line through the origin, such as y = 0. I won’t solve Problem 32½ here in detail, but let me outline the solution.

Below is a plot of f(x,0) versus x
A plot of the object function, in an example of tomography.
To take the projection in part (b) use θ = 0, so x' = x and y' = y. If |x| is greater than a, you integrate over a function that is always zero, so F(0,x') = 0. If |x| is less than a, you must do more work. The limits of the integral over y become −√(a² − x²) to +√(a² − x²). The integral is not too difficult (it’s in my mathematical handbook), and you get F(0,x) = π(a² − x²)/2a. Because the projection is independent of θ, you can generalize to
A mathematical equation for the projection, in an example of tomography.
The plot, an inverted parabola, is
A plot of the projection, in an example of tomography.
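The projection integral in part (b) is easy to check numerically. Here is a short sketch in Python (the function names and the grid are mine, not from IPMB):

```python
import numpy as np

a = 1.0  # disk radius

def f(x, y):
    # The object of Problem 32 1/2: a hemispherical profile of radius a.
    r2 = x**2 + y**2
    return np.where(r2 < a**2, np.sqrt(np.maximum(a**2 - r2, 0.0)) / a, 0.0)

def projection_numeric(x):
    # F(0, x): integrate f along y at fixed x (theta = 0, so x' = x, y' = y).
    y = np.linspace(-a, a, 200001)
    vals = f(x, y)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(y))

def projection_exact(x):
    # The analytical answer derived above: pi (a^2 - x^2)/(2a) for |x| < a.
    return np.pi * (a**2 - x**2) / (2 * a) if abs(x) < a else 0.0

for x in [0.0, 0.3, 0.5, 0.9, 1.5]:
    print(x, projection_numeric(x), projection_exact(x))
```

The two columns agree closely, confirming the inverted-parabola projection.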
In part (c) you need to find the back-projection. I suggest calculating it for the line y = 0. Once you have it, you can find it for any point by replacing x by √(x² + y²). The back-projection for |x| less than a is easy. The integral in Eq. 12.30 gives another inverted parabola. For |x| greater than a, the calculation is more complicated because some angles give zero and some don’t. A little geometry will convince you that the integral should range from cos⁻¹(a/x) to π − cos⁻¹(a/x). Because the integrand is symmetric about θ = π/2, you can make your life easier by multiplying by two and integrating from cos⁻¹(a/x) to π/2. The only way I know to show how you get cos⁻¹(a/x) is to draw a picture of a line through the point x greater than a, y = 0 that is tangent to the circle x² + y² = a², and then do some trigonometry. When you then evaluate the integral you get
A mathematical equation for the back projection, in an example of tomography.
A plot of this complicated looking function is
A plot of the back projection, in an example of tomography.
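The back projection of part (c) can also be computed numerically, straight from Eq. 12.30 and the projection of part (b). This sketch (my own code and names) also shows the slow decay outside the circle:

```python
import numpy as np

a = 1.0

def F(xp):
    # Projection from part (b): pi (a^2 - x'^2)/(2a) for |x'| < a, zero outside.
    return np.where(np.abs(xp) < a, np.pi * (a**2 - xp**2) / (2 * a), 0.0)

def back_projection(x):
    # Eq. 12.30 along y = 0: average F(theta, x cos(theta)) over theta in [0, pi].
    theta = np.linspace(0.0, np.pi, 100001)
    vals = F(x * np.cos(theta))
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(theta)) / np.pi

# Inside the circle the theta integral gives another inverted parabola,
# (pi/2a)(a^2 - x^2/2) by my evaluation; outside, it decays slowly to zero.
for x in [0.0, 0.5, 0.9, 2.0, 10.0]:
    print(x, back_projection(x))
```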
To answer part (d), compare your plots in (a) and (c). The object in (a) is confined entirely inside the circle x² + y² = a², but the back-projection in (c) spreads over all space. One could say the back-projection produced a smeared-out version of the object. Why are they not the same? We didn’t filter before we back projected.

If you really want to have fun, show that the limit of the back-projection goes to zero as x goes to infinity. This is surprisingly difficult—that last term doesn’t look like it’ll go to zero—and you need to keep more than one term in your Taylor series to make it work.

The back projection shown above looks similar to the back projection shown in Fig. 12.23 of IPMB. My original goal was to calculate the back projection in Fig. 12.23 exactly, but I got stuck trying to evaluate the integral

I sometimes ask myself: why do I assign these analytical examples of projections and back-projections? Anyone solving the tomography problem in the clinic will certainly use a computer. Is anything gained by doing analytical calculations? Yes. The goal in doing these problems is to gain insight. A computer program is a black box: you put projections in and out comes an image, but you don’t know what happens in between. Analytical calculations force you to work through each step. Please don’t skip the plots in parts (a) and (c), and the comparison of the plots in part (d), otherwise you defeat the purpose; all you have done is an exercise in calculus and you learn nothing about tomography.

I wish I had come up with this example six months ago, so Russ and I could have included it in the 5th edition of IPMB. But it’s too late, as the page proofs have already been corrected and the book will be printed soon. This new homework problem will have to wait for the 6th edition!


*Incidentally, Prob. 12.32 has a typo in the second line: “|x|” should really be “√(x² + y²)”.

Friday, March 20, 2015

Consider an Impervious Plane Containing A Circular Disk

When I teach, one of my goals is to help students analyze a mathematical equation to uncover the underlying physics. Equations are not merely expressions you plug numbers into to get other numbers; equations tell a story. Students often begin the semester knowing the required math and physics, but have difficulty connecting the two. One way students learn this connection is by taking limits of mathematical equations, so they can show that the equation behaves as it should when some key variable is large or small. If the equation is too complicated, taking the limit may be so difficult that the student gets lost in the algebra (see the February 20 blog entry about “the ugliest equation”). If the equation is too simple, taking the limit may be trivial. The best examples are of intermediate difficulty: students must do some work to take the limit, but their result provides physical insight.

One of my favorite examples is Problem 29 of Chapter 4 in the 4th edition of Intermediate Physics for Medicine and Biology. I should be more specific: parts (b)-(d) of that problem are just the sort of practice the students need (part (a) of the problem is for masochists).
Problem 29 Consider an impervious plane at z = 0 containing a circular disk of radius a having a concentration C0. The concentration at large z goes to zero. Carslaw and Jaeger (1959) show that the steady-state solution to the diffusion equation is 

A mathematical expression for the concentration obeying the steady-state diffusion equation.

(a) (optional) Verify that C(r, z) satisfies ∇²C = 0. The calculation is quite involved, and you may wish to use a computer algebra program such as Mathematica or Maple.
(b) Show that for z = 0, C = C0 if r is less than a. 
(c) Show that for z = 0, dC/dz = 0 if r is greater than a. 
(d) Integrate Jz over the disk (z = 0, r between 0 and a) and show that i = 4DaC0.
I’m amazed that the convoluted expression for the concentration, C(r, z), containing an inverse sine function with a complicated argument does indeed obey Laplace’s equation. I once proved that this is true, but have no intention of doing so again—it’s a mess. Before analyzing parts (b)-(d) of the problem, let’s examine the claim that “the concentration at large z goes to zero.” Exactly how does it go to zero? Far from the disc (z is much greater than a) the concentration should fall off as the inverse distance (just as the electrical potential of a point charge falls off as 1/r). When z is large, the argument of the inverse sine function is small, and it can be approximated using the first term of its Taylor series: sin⁻¹(x) ≈ x. In that case, the concentration becomes 2aC0/(πz), falling off with the inverse distance as expected.
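The equation for C(r, z) appears above only as an image. The Carslaw and Jaeger solution has the form C = (2C0/π) sin⁻¹[2a/(√((r−a)² + z²) + √((r+a)² + z²))], which reproduces every property discussed in this section, so I write it out explicitly in the sketch below to check the large-z behavior numerically:

```python
import numpy as np

a, C0 = 1.0, 1.0

def C(r, z):
    # Steady-state solution in the form given by Carslaw and Jaeger
    # (written out here because the post shows the equation as an image).
    s = np.sqrt((r - a)**2 + z**2) + np.sqrt((r + a)**2 + z**2)
    return (2 * C0 / np.pi) * np.arcsin(2 * a / s)

# Far from the disk the concentration should approach 2 a C0 / (pi z).
for z in [10.0, 100.0, 1000.0]:
    print(z, C(0.0, z), 2 * a * C0 / (np.pi * z))
```

The two columns converge as z grows, and at z = 0 the same function returns C0 anywhere on the disk, as part (b) requires.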

The behavior at small z is more puzzling: the solution has the astonishing property that its behavior changes at r = a. For r less than a the concentration is constant, but for r greater than a the derivative of the concentration is zero. How can such a relatively simple analytical expression display such discontinuous behavior?

To answer that question, let’s analyze the equation for the concentration. The key step in our analysis is when we factor out (r − a)² from the first square root in the denominator of the argument. This factor is always positive because the quantity is squared. But when removed from inside the square root, it becomes |r − a|, with the absolute value needed to ensure that the expression remains positive. There are two cases: r less than a, when we replace |r − a| by (a − r); and r greater than a, when we replace |r − a| by (r − a). The rest is an exercise in Taylor series resulting from the limit as z goes to zero. Use the expansion √(1 + x²) ≈ 1 + x²/2 for the two square roots, simplify the denominator as much as possible, and then use the expansion 1/(1 − x) ≈ 1 + x to find that for r less than a


and for r greater than a


For all the gory details see the solution manual, available free-of-charge to instructors using IPMB in their class; email Russ Hobbie or me (roth@oakland.edu) for a copy.

Now we can do parts (b)-(d) of the homework problem. For part (b), when r is less than a set z = 0 and use sin⁻¹(1) = π/2 to find that C(r,0) = C0, independent of r. For part (c), when r is greater than a take the derivative of the inverse sine function with respect to z, and then set z = 0. Because the argument of the inverse sine is even in z, at z = 0 the derivative dC/dz vanishes.

Part (d) is tricky. Using the same argument as above, we might expect that because the argument of the inverse sine is even in z, the derivative dC/dz must be zero for r less than a as well. But no! The derivative of sin⁻¹(u(z)) is (1/√(1−u²)) du/dz. When we take the derivative of the expression for C(r,z) for r less than a and then set z = 0, we get not only zero in the numerator (because du/dz = 0; the function is even) but also zero in the denominator (because √(1−u²) = 0 in this case). The result is dC/dz = −2C0/(π√(a² − r²)).

To find the total number of particles per unit time i coming out of the disk, we need to integrate the particle current density Jz over the plane z = 0. The current density is Jz = −D dC/dz, where D is the diffusion constant. Because dC/dz is zero for r greater than a, we need to integrate the current density over only the circular area r less than a. You can look this integral up and find that i = 4DaC0.
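Both the derivative and the flux integral can be checked numerically, assuming the Carslaw and Jaeger solution written out in the code below (the post shows the equation only as an image; the substitution and names are mine):

```python
import numpy as np

a, C0, D = 1.0, 1.0, 1.0

def C(r, z):
    # Carslaw and Jaeger's steady-state solution, written out explicitly.
    s = np.sqrt((r - a)**2 + z**2) + np.sqrt((r + a)**2 + z**2)
    return (2 * C0 / np.pi) * np.arcsin(2 * a / s)

def dCdz0(r, h=1e-6):
    # Finite-difference estimate of dC/dz at z = 0 (valid for r < a).
    return (C(r, h) - C(r, 0.0)) / h

# Check the derivative against -2 C0 / (pi sqrt(a^2 - r^2)).
for r in [0.0, 0.3, 0.6, 0.9]:
    print(r, dCdz0(r), -2 * C0 / (np.pi * np.sqrt(a**2 - r**2)))

# Part (d): i = integral of Jz = -D dC/dz over the disk. The substitution
# r = a sin(phi) removes the integrable 1/sqrt(a^2 - r^2) singularity at
# the rim: Jz * 2 pi r dr becomes 4 D C0 a sin(phi) dphi.
phi = np.linspace(0.0, np.pi / 2, 20001)
integrand = 4 * D * C0 * a * np.sin(phi)
i = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi))
print(i, 4 * D * a * C0)
```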

There are other aspects of the solution that are not emphasized in this problem, but are equally interesting. For instance, Jz goes to infinity at r = a. In other words, the number of particles leaving the disk is largest at the edge of the disk. This result is analogous to the concentration of electrical current near the edge of a disk electrode, discussed in Problem 22 of Chapter 6 in IPMB; Jz changes abruptly from an extremely large value for r just less than a to zero for r just greater than a. Another interesting result is that the total number of particles leaving the disk per unit time is proportional to the disk radius, not the disk area as you might naively expect. This result also has an analogy in electrostatics: the capacitance of a disk electrode relative to a distant ground is proportional to the radius of the disk, not the radius squared, which surprised me when I first encountered it (see Section 6.19 in IPMB).

In retrospect, maybe this problem is slightly too complicated to be the perfect example of taking the limits of a mathematical expression to gain insight into the physical behavior. But for those whose mathematical skills are up to the job, it provides an excellent case study.

Mathematical equations tell a physical story. You must practice extracting that story from the math.

Friday, March 13, 2015

If You Want Healthy Cows Feed Them Magnets

I saw an article on the internet claiming “If you want healthy cows feed them magnets” and I thought “oh no, not more biomagnetism nonsense.” First magnets in shoes to relieve foot pain, then magnetic bracelets for arthritis, and finally “biomagnetic therapy” for all sorts of disorders; I thought it couldn’t get worse, but feeding magnets to heifers? Really? Sounds like bull to me.

A drawing of a cow with a magnet in its stomach.
A drawing of a cow
with a magnet in its stomach.
Well, I can’t vouch for the accuracy of this story or the effectiveness of the treatment, but at least the mechanism underlying the feeding of magnets to cows is plausible. Cattle swallow a lot of junk while eating, including some that is magnetic (for example, wires and nails...yikes!). The article says
That's where magnets come in. A magnet about the size and shape of a finger is placed inside a bolus gun, essentially a long tube that ensures the magnet goes down the cow's throat. Then it settles in the reticulum, collecting any stray pieces of metal. The magnets, which cost a few bucks a pop, can also be placed preventatively. To check if a cow already has a magnet, farmers use a compass.
Apparently the “bolus gun” is inserted through the mouth; I wasn’t so sure. Wikipedia has a page about cow magnets, titled “hardware disease.” Companies make money selling cow magnets (these are big magnets, about four inches long). But even though calves eat magnets, kids should not (note the plural: the problems arise when magnets interact).

A funny picture about a spherical cow.
Consider a spherical cow.
The 4th edition of Intermediate Physics for Medicine and Biology has an entire chapter about biomagnetism, but no mention of magnets in bovine stomachs. What is wrong with Russ and me? The only place we mention cattle at all is in Homework Problem 30 in Chapter 4, where we analyze the temperature distribution throughout a spherical cow. A small-scale analogy to magnets in steers’ stomachs is the rows of magnetosomes in magnetotactic bacteria (see Fig. 8.25 in IPMB), but I doubt the bacteria use them to collect nails before they can puncture their membrane. Yet, could we misunderstand the biological purpose of magnetosomes?

Finally, I have some good news and bad news about the 5th edition of IPMB. The good news: we submitted the page proofs and the book should be published in the next few months. The bad news: no more mention of livestock in the revised edition.
A funny photograph of a cow.
If you want healthy cows feed them magnets.

Friday, March 6, 2015

A Mathematical Model of Agonist-Induced Propagation of Calcium Waves in Astrocytes

When I was working at the National Institutes of Health in the mid-1990s, I spent most of my time studying transcranial magnetic stimulation and theoretical cardiac electrophysiology. But also I collaborated with James Russell to study calcium waves in astrocytes (a type of glial cell in the brain), and we published a paper in the journal Cell Calcium describing “A Mathematical Model of Agonist-Induced Propagation of Calcium Waves in Astrocytes” (Volume 17, Pages 53–64, 1995). The introduction is reproduced below (with citations removed):
Recent experiments have clearly shown that astroglia in brain participate in long distance signaling together with neurons. Such signalling in astrocytes is supported by intracellular calcium oscillations induced by neurotransmitters that are propagated as waves through the cytoplasm of individual cells and through astrocyte networks. These calcium oscillations generally are triggered by activation of metabotropic receptors which are coupled to inositol 1,4,5-trisphosphate (IP3) generation and intracellular calcium release through IP3-gated calcium channels on the endoplasmic reticulum (ER) membrane. Yagodin et al. have shown that, in astrocytes, wave propagation is saltatory, with discrete loci of nonlinear wave amplification separated by regions through which passive diffusion of calcium occurs. These wave amplification loci appear to be intracellular specializations that remain invariant and support a qualitatively characteristic response pattern in any given cell. The loci may each have different intrinsic oscillatory frequencies, resulting in complex spatio-temporal dynamics, with wave collisions and annihilations.

Several mathematical models have been presented that describe the temporal characteristics of agonist-induced calcium oscillations in different types of cells. A few of these models address the spatial characteristics of wave propagation, but none have addressed the complex wave dynamics observed in different types of cells including astrocytes. The purpose of this paper is to extend a previously developed model of calcium oscillations so that it includes spatial diffusion of calcium in a cell with discrete active loci of wave amplification. This model is then used to analyze experimental data and to gain insight into the mechanism of wave collisions and annihilations.
As you might expect, my contribution to this paper was developing the mathematical model, while Russell and his team provided the experimental data as well as the biological knowledge and insight. The model was based on a paper by Li and Rinzel (“Equations for InsP3 Receptor-Mediated [Ca2+]i Oscillations Derived from a Detailed Kinetic Model: A Hodgkin-Huxley Like Formalism,” Journal of Theoretical Biology, Volume 166, Pages 461–473, 1994). At that time, John Rinzel was at NIH, heading the Mathematical Research Branch of the National Institute of Diabetes and Digestive and Kidney Diseases. John, now at the Center for Neural Science at New York University, has contributed much to theoretical biology, but I remember him best for his work on bursting of pancreatic beta cells. He is this year’s winner of the Society of Mathematical Biology’s Arthur T. Winfree Prize “for his elegant work on the analysis of dynamical behavior in models of neural activity and the contributions that work has made in the neurobiological community to the understanding of a host of phenomena (including simple excitability as well as bursting) in single neurons, small populations of neurons, and other excitable cells.”

Russell remains at NIH with the Microscopy and Imaging Core of the Eunice Kennedy Shriver National Institute of Child Health and Human Development. He leads a multi-user research facility providing training and instrumentation for high resolution microscopy and image processing.

As so often happens, an echo of my work on calcium wave modeling with Russell appears in the 4th edition of Intermediate Physics for Medicine and Biology. Homework Problem 24 in Chapter 4 contains a simplified model of calcium waves. This system is a classic “reaction-diffusion” system: calcium diffuses down the cell, triggering calcium-induced calcium release, which produces more diffusion, triggering more calcium release, resulting in positive feedback and a calcium wave. The process is analogous to action potential propagation along a nerve.
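A deliberately crude caricature of this positive feedback (not the model from the Cell Calcium paper; every parameter here is arbitrary) produces a propagating wave in just a few lines:

```python
import numpy as np

# Toy reaction-diffusion model: calcium diffuses along a 1D "cell," and any
# point above a threshold releases more calcium (calcium-induced calcium
# release), up to a saturating concentration of 1.
N, dx, dt = 200, 1.0, 0.2          # grid points, spacing, time step
Dc, alpha, c_th = 1.0, 1.0, 0.1    # diffusion constant, release rate, threshold

c = np.zeros(N)
c[:5] = 1.0                        # stimulate the left end

for step in range(2000):
    lap = np.zeros(N)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    release = alpha * (c > c_th) * (1.0 - c)
    c = c + dt * (Dc * lap + release)

# By t = 400 the wave front has swept most of the way down the cell.
print(c[50], c[150])
```

Diffusion alone would smear the stimulus out; the threshold release term regenerates the front as it travels, just as sodium channels regenerate a propagating action potential.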

Friday, February 27, 2015

Alan Turing, Biological Physicist

Recently, my wife and I went to the theater to see The Imitation Game, about Alan Turing and the breaking of the enigma code during World War II. It is a fascinating movie. I’m a big fan of Benedict Cumberbatch, who played Turing (I particularly enjoy his portrayal of Sherlock Holmes in the TV series Sherlock), and I always enjoy performances by Keira Knightly.


Turing was primarily a mathematician, but he did publish one paper that straddled the disciplines of mathematical biology and biological physics: A. M. Turing (1952) “The Chemical Basis of Morphogenesis,” Philosophical Transactions of the Royal Society of London, Series B, Volume 237, Pages 37–72. The abstract is reproduced below.
It is suggested that a system of chemical substances, called morphogens, reacting together and diffusing through a tissue, is adequate to account for the main phenomena of morphogenesis. Such a system, although it may originally be quite homogeneous, may later develop a pattern or structure due to an instability of the homogeneous equilibrium, which is triggered off by random disturbances. Such reaction-diffusion systems are considered in some detail in the case of an isolated ring of cells, a mathematically convenient, though biologically unusual system. The investigation is chiefly concerned with the onset of instability. It is found that there are six essentially different forms which this may take. In the most interesting form stationary waves appear on the ring. It is suggested that this might account, for instance, for the tentacle patterns on Hydra and for whorled leaves. A system of reactions and diffusion on a sphere is also considered. Such a system appears to account for gastrulation. Another reaction system in two dimensions gives rise to patterns reminiscent of dappling. It is also suggested that stationary waves in two dimensions could account for the phenomena of phyllotaxis. The purpose of this paper is to discuss a possible mechanism by which the genes of a zygote may determine the anatomical structure of the resulting organism. The theory does not make any new hypotheses; it merely suggests that certain well-known physical laws are sufficient to account for many of the facts. The full understanding of the paper requires a good knowledge of mathematics, some biology, and some elementary chemistry. Since readers cannot be expected to be experts in all of these subjects, a number of elementary facts are explained, which can be found in text-books, but whose omission would make the paper difficult reading.
Mathematical Biology, by James Murray, superimposed on Intermediate Physics for Medicine and Biology.
Mathematical Biology,
by James Murray.
You can learn more about Turing’s theory in James Murray’s book Mathematical Biology (I am basing my comments on the edition in the Oakland University library: the Second, Corrected Edition, 1993). Murray writes
Turing’s (1952) idea is a simple but profound one. He said that if, in the absence of diffusion….[two chemicals] A and B tend to a linearly stable uniform steady state then, under certain conditions, which we shall derive, spatially inhomogeneous patterns can evolve by diffusion driven instability… Diffusion is usually considered a stabilising process which is why this was such a novel concept. To see intuitively how diffusion can be destabilising consider the following, albeit unrealistic, but informative analogy.

Consider a field of dry grass in which there is a large number of grasshoppers…
I don’t know about you, but I gotta love someone who explains mathematics using dry grass and grasshoppers.

Diffusion is a key concept underlying Turing’s work. Russ Hobbie and I discussion diffusion in Chapter 4 of the 4th edition of Intermediate Physics for Medicine and Biology, and it is one of the central ideas in all of biological physics. Diffusion-driven instabilities play a role when analyzing the Belousov-Zhabotinsky oscillating chemical reaction, and are relevant to explaining how leopards get their spots (a spotted leopard graces the cover of Murray’s book; whenever I search for his book in the stacks of the OU library, I just look for the leopard).

Murray continues
A reaction diffusion system exhibits diffusion-driven instability or Turing instability if the homogeneous steady state is stable to small perturbations in the absence of diffusion but unstable to small spatial perturbations when diffusion is present. The usual concept of instability in biology is in the context of ecology, where a uniform steady state becomes unstable to small perturbations and the populations typically exhibit some temporal oscillatory behaviour. The instability we are concerned with here is of a quite different kind. The mechanism driving the spatially inhomogeneous instability is diffusion: the mechanism determines the spatial pattern that evolves. How the pattern or mode is selected is an important aspect of the analysis.
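Diffusion-driven instability is easy to demonstrate with a linear stability calculation. The sketch below uses the Schnakenberg model, a standard textbook Turing system (my choice of example, not necessarily the one Murray analyzes at this point), with parameters I picked to satisfy the Turing conditions:

```python
import numpy as np

# Schnakenberg reaction-diffusion model:
#   u_t = u_xx + A - u + u^2 v,   v_t = d v_xx + B - u^2 v
A, B, d = 0.2, 1.3, 30.0
u0 = A + B                 # homogeneous steady state
v0 = B / u0**2

# Jacobian of the reaction terms at the steady state.
fu, fv = -1.0 + 2.0 * u0 * v0, u0**2
gu, gv = -2.0 * u0 * v0, -u0**2

def growth_rate(k):
    # Largest real part of the eigenvalues of the linearization for a
    # spatial perturbation proportional to exp(i k x).
    J = np.array([[fu - k**2, fv], [gu, gv - d * k**2]])
    return np.linalg.eigvals(J).real.max()

k = np.linspace(0.0, 2.0, 401)
rates = np.array([growth_rate(kk) for kk in k])
print("growth rate at k = 0:", growth_rate(0.0))   # negative: stable without diffusion
print("max growth rate:", rates.max())             # positive: Turing unstable
```

With no diffusion (k = 0) every perturbation decays, yet a band of nonzero wavenumbers grows, and the fastest-growing mode sets the spacing of the pattern.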
Not only did Turing make a monumental contribution to deciphering the enigma code, but also he helped to develop the field of mathematical biology. In my book, that makes him a biological physicist.

Friday, February 20, 2015

The Ugliest Equation

The 4th edition of Intermediate Physics for Medicine and Biology contains thousands of equations. One of them, Equation 15.22, gives the cross section for energy transferred to electrons during Compton scattering, as a function of photon energy (x = hν/mc²). Of all the equations in the book, it is the ugliest.

Equation 15.22 in Intermediate Physics for Medicine and Biology, giving the cross section for energy transferred to electrons during Compton scattering, as a function of photon energy. It is the ugliest equation in the book.
Eq. 15.22 in IPMB; the ugliest equation.

Russ Hobbie and I write
Equation 15.22 is a rather nasty equation to evaluate, particularly at low energies, because many of the terms nearly cancel.
Examining the behavior of the expression in brackets at small x should be easy: just take its Taylor’s series (to review Taylor’s series, see Appendix D of IPMB). In order to get the correct answer, however, you need to keep not two terms in the expansion, or three, but four! The Taylor’s series you need are

Taylor series needed to analyze Equation 15.22 in Intermediate Physics for Medicine and Biology.
Three Taylor's series needed to analyze Eq. 15.22.

Oddly, the fourth term, 4x²/(3(1+2x)³), doesn’t even need one term in its expansion; it is small and doesn’t contribute to the limiting behavior. Plug these all in, and you find that the terms in x⁻², x⁻¹, and x⁰ all vanish. The first nonzero term is linear: 4x/3.

Out of curiosity, I evaluated each of the five terms in the expression using x = 0.01. I got

20001.9608 - 0.9900 + 9900.0192 - 0.0001 - 29900.9771  =  0.0128

The terms really do “nearly cancel.”
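Equation 15.22 appears above only as an image, so the bracket in the sketch below is my reconstruction; its five terms reproduce, at x = 0.01, the five numbers listed above. Evaluating it in 50-digit arithmetic with Python’s `decimal` module sidesteps the cancellation and exposes the 4x/3 limit:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # 50-digit arithmetic dodges the near-cancellation

def bracket(x):
    # My reconstruction of the bracket in Eq. 15.22; each term matches one
    # of the five numbers quoted above for x = 0.01.
    t1 = 2 * (1 + x)**2 / (x**2 * (1 + 2 * x))
    t2 = (1 + 3 * x) / (1 + 2 * x)**2
    t3 = (1 + x) * (2 * x**2 - 2 * x - 1) / (x**2 * (1 + 2 * x)**2)
    t4 = 4 * x**2 / (3 * (1 + 2 * x)**3)
    t5 = ((1 + x) / x**3 + 1 / (2 * x**3) - 1 / (2 * x)) * (1 + 2 * x).ln()
    return t1 - t2 - t3 - t4 - t5

for x in [Decimal("0.1"), Decimal("0.01"), Decimal("0.001")]:
    print(x, bracket(x), 4 * x / 3)   # bracket approaches 4x/3 as x shrinks
```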

Friday, February 13, 2015

Willem Einthoven, Biological Physicist

In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I mention Einthoven’s triangle. This triangle is formed by three electrodes used to measure the electrocardiogram: one on the right arm, one on the left arm, and one on the left leg. Who is this Einthoven of Einthoven’s triangle? He is an excellent example of a scientist well versed in both physics and physiology.

Asimov’s Biographical Encyclopedia of Science and Technology, superimposed on the cover of Intermediate Physics for Medicine and Biology.
Asimov's Biographical
Encyclopedia.
Asimov’s Biographical Encyclopedia of Science and Technology describes Einthoven in this way:
Einthoven, Willem (eyent’-hoh-ven)
Dutch physiologist
Born: Semarang, Java (now part of Indonesia), May 22, 1860
Died: Leiden, September 29, 1927

Einthoven’s father was a practicing physician serving in the East Indies, which was then a Dutch colony. The father died in 1866, and in 1870 the family returned to the Netherlands and settled in Utrecht. In 1878 Einthoven entered the University of Utrecht and began the study of medicine, although always with considerable interest in physics. He obtained his medical degree in 1885 and was at once appointed to a professorship of physiology at the University of Leiden, serving there the remainder of his life.

The physicist in him provoked his interest in the tiny electric potentials produced in the human body…. In 1903 Einthoven developed the first string galvanometer. This consisted of a delicate conducting thread stretched across a magnetic field. A current flowing through the thread would cause it to deviate at right angles to the direction of the magnetic field lines of force, the extent of the deviation being proportional to the strength of the current. The delicacy of the device was sufficient to make it possible to record the varying electrical potentials of the heart.

Einthoven continually improved his device and worked out the significance of the rises and falls in potential. By 1906 he was correlating the recordings of these peaks and troughs (the result being what he called the electrocardiogram) with various types of heart disorders….For the development of electrocardiography Einthoven was awarded the 1924 Nobel Prize in medicine and physiology.
Willem Einthoven (1860-1927): Father of Electrocardiography.
Willem Einthoven (1860-1927):
Father of Electrocardiography.
I became intrigued by Einthoven’s skill at both mathematics and medicine, so I decided to explore more deeply how he straddled these two fields. The book Willem Einthoven (1860-1927): Father of Electrocardiography, by H. A. Snellen, provided these insights:
[Einthoven’s work] demanded more knowledge of mathematics than Einthoven’s high school and medical training had provided. He supplemented this mainly through self-study; learning differential and integral calculus from Lorentz’ book on the subject in the early 1890’s.

30 years later he presented a copy of this book to Frank Wilson with the words: “May I send you the excellent book of Lorentz’ Differential- und Integralrechnung? I have learned my mathematics from it after my nomination as a professor in this University and I hope you will have as much pleasure and profit by it as I have had myself.”

In physical matters he was aided by his correspondence and talks with his friend (and later brother-in-law) Julius, who became extra-ordinary professor of physics at Amsterdam and subsequently full professor at Utrecht, where they had studied together.

Einthoven profited also from written and personal contact with the somewhat older and already famous Lorentz, professor of theoretical physics at Leiden….

Einthoven the physiologist with a marked general concern about patients and general medicine was at heart a physicist though not by training and office…

Most of the important topics in the correspondence [between Einthoven and English physiologist A. V. Hill] are reflected in Hill’s obituary of Einthoven in Nature. I quote a few lines, which bear testimony to Hill’s keen observation and his sincere admiration of Einthoven: “Einthoven’s investigations cover a wide range, but they are all notable for the same characteristic—the mastery of physical technique which they show. Einthoven, in spite of his medical training and his office, was essentially a physicist, and the extraordinary value of his contributions to physiology, and therewith indirectly to medicine, emphasizes the way in which an aptitude—in Einthoven’s case a genius—for physical methods can aid in the solution of physiological problems.”
Many scientists have made the leap from physics to biology (see my blog entry of a few weeks ago for examples). Einthoven did the opposite, going from biology to physics. I’ve always suspected this is the more difficult path, and it certainly seems to be the less common one. Yet he appears to have made the journey successfully. Snellen’s book provides few anecdotes about how Einthoven picked up his mathematics and physics, but I imagine he must have spent many a night slogging through Lorentz’s book, painstakingly teaching himself the subject.

I suspect IPMB can aid physicists moving into biology and medicine. I wonder how useful it is for someone like Einthoven, travelling in the other direction?

Friday, February 6, 2015

The Sinc Function

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss many mathematical functions, from common ones like the sine function and the exponential function to less familiar ones like Bessel functions and the error function. A simple but important example is the sinc function.

Sinc(x) is defined as sin(x)/x. It is zero wherever sin(x) is zero (where x is a multiple of π), except at x = 0, where sinc is one. The shape of the sinc function is a central peak surrounded by oscillations with decaying amplitude.
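If you want to compute sinc yourself, note that the removable singularity at x = 0 needs explicit handling. Here is a minimal Python sketch (my own code, not from IPMB):

```python
import math

def sinc(x):
    """Unnormalized sinc: sin(x)/x, with the removable singularity sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(x) / x

print(sinc(0.0))             # 1.0, the central peak
print(abs(sinc(math.pi)))    # essentially zero: the first zero crossing
```

(Beware that NumPy’s np.sinc is the normalized version, sin(πx)/(πx), so its zeros fall at the integers rather than at multiples of π.)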

A plot of sinc(x) = sin(x)/x as a function of x.
The sinc function.

The most important property of the sinc function is that it is the Fourier transform of a square pulse. In Chapter 18 about magnetic resonance imaging, a slice of a sample is selected by turning on a magnetic field gradient, so the Larmor frequencies of the hydrogen atoms depend on location. To select a uniform slice, you need to excite hydrogen atoms with a uniform range of Larmor frequencies. The radio-frequency pulse you must apply is specified by its Fourier transform. It is an oscillation at the central Larmor frequency, with an amplitude modulated by a sinc function.
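It’s easy to check this Fourier pair numerically. In the sketch below (my notation, not the book’s), a unit-height pulse on [−a, a] has transform ∫e^(−iωt)dt = 2a sinc(ωa); because the pulse is even, only the cosine part of the integral survives:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def square_pulse_ft(omega, a=1.0, n=20000):
    """Midpoint-rule Fourier transform of a unit square pulse on [-a, a]."""
    h = 2 * a / n
    total = 0.0
    for k in range(n):
        t = -a + (k + 0.5) * h
        total += math.cos(omega * t) * h   # even pulse: the sine part integrates to zero
    return total

# The numerical transform matches 2a*sinc(omega*a) at several frequencies
for omega in [0.0, 1.0, 2.5, 10.0]:
    assert abs(square_pulse_ft(omega) - 2 * sinc(omega)) < 1e-6
```

The same reciprocity underlies slice selection: the narrower the band of Larmor frequencies you want to excite, the longer the sinc-shaped pulse must last.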

When you integrate sinc(x), you get a new special function that Russ and I never discuss: the sine integral function, Si(x):

A plot of the sine integral function, Si(x), versus x.
The sine integral function, Si(x).
This function looks like a step function, but with oscillations. As x goes to infinity the sine integral approaches π/2. It is odd, so as x goes to minus infinity it approaches –π/2.

The sinc function and the sine integral function resemble the Dirac delta function and the Heaviside step function. In fact, as a approaches zero, sinc(x/a)/a grows taller and narrower, its side lobes crowding ever closer to the origin; to within a factor of π, it becomes the delta function. Similarly, the sine integral function becomes—to within that same factor of π and a constant offset of π/2—the step function.

Special functions often have interesting and beautiful properties. As I noted earlier, if you integrate sinc(x) from zero to infinity you get π/2. However, if you integrate the square of sinc(x) from zero to infinity you get the same result: π/2. These two functions are different: sinc(x) oscillates between negative and positive values, so its integral oscillates from above π/2 to below π/2, as shown above; sinc2(x) is always positive, so its integral grows monotonically to its asymptotic value. But as you extend the integral to infinity, the area under these two curves is exactly the same! I’m not sure there is any physical significance to this property, but it is certainly a fun fact to know.
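Both results are easy to check numerically, though the cutoff has to be large because the oscillating tail of sinc(x) dies off only as 1/x. A quick sketch:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def integrate(f, a, b, n):
    """Composite midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

I1 = integrate(sinc, 0.0, 1000.0, 200000)
I2 = integrate(lambda x: sinc(x) ** 2, 0.0, 1000.0, 200000)
print(I1, I2, math.pi / 2)   # both integrals land within about 1e-3 of pi/2
```

Pushing the cutoff higher shrinks the residual error: the sinc integral oscillates about π/2 with amplitude ~1/x, while the sinc² integral creeps up on π/2 from below.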

Friday, January 30, 2015

Electron Paramagnetic Resonance Imaging

Magnetic resonance comes in two types: nuclear magnetic resonance and electron paramagnetic resonance. In Chapter 18 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
Two kinds of spin measurements have biological importance. One is associated with electron magnetic moments and the other with the magnetic moments of nuclei. Most neutral atoms in their ground state have no magnetic moment due to the electrons. Exceptions are the transition elements that exhibit paramagnetism. Free radicals, which are often of biological interest, have an unpaired electron and therefore have a magnetic moment. In most cases this magnetic moment is due almost entirely to the spin of the unpaired electron.

Magnetic resonance imaging is based on the magnetic moments of atomic nuclei in the patient. The total angular momentum and magnetic moment of an atomic nucleus are due to the spins of the protons and neutrons, as well as any orbital angular momentum they have inside the nucleus. Table 18.1 lists the spin and gyromagnetic ratio of the electron and some nuclei of biological interest.
The key insight from Table 18.1 is that the Larmor frequency for an electron in a magnetic field is much higher than for a proton—by a factor of about 660. Therefore, MRI works at radio frequencies, whereas EPR imaging operates at microwave frequencies. Can electron paramagnetic resonance be used to make images, as nuclear magnetic resonance can? I should know the answer to this question, because I hold two patents about a “Pulsed Low Frequency EPR Spectrometer and Imager” (U.S. Patents 5,387,867 and 5,502,386)!
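The frequency gap between the two techniques follows directly from the gyromagnetic ratios in Table 18.1. A quick sketch using the standard CODATA values (quoted here from memory, so treat them as approximate):

```python
import math

# Larmor frequency: f = gamma * B / (2 pi)
gamma_e = 1.76085963e11   # electron gyromagnetic ratio, rad s^-1 T^-1
gamma_p = 2.67522187e8    # proton gyromagnetic ratio, rad s^-1 T^-1

ratio = gamma_e / gamma_p
print(ratio)              # roughly 660

# In a 1 T field: protons precess at ~43 MHz (radio), electrons at ~28 GHz (microwave)
f_p = gamma_p / (2 * math.pi)
f_e = gamma_e / (2 * math.pi)
```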

I’m not particularly humble, so when I tell you that I didn’t contribute much to developing the EPR imaging technique described in these patents, you should believe me. The lead scientist on the project, carried out at the National Institutes of Health in the mid 1990s, was John Bourg. John was focused intensely on developing an EPR imager. Just as with magnetic resonance imaging, his proposed device needed strong magnetic field gradients to map spatial position to precession frequency. My job was to design and build the coils to produce these gradients. The gradients would need to be strong, so the coils would get hot and would have to be water cooled. I worked on this with my former boss Seth Goldstein, who was a mechanical engineer and therefore knew what he was doing in this design project. Suffice it to say, the coils were never built, and from my point of view all that came out of the project was those two patents (which have never yielded a dime of royalties, at least that I know of). This project was probably the closest I have ever come to doing true mechanical engineering, even though I was a member of the Mechanical Engineering Section when I worked in the Biomedical Engineering and Instrumentation Program at NIH.

One of our collaborators, Sankaran Subramanian, continued to work on this project for years after I left NIH. In a paper in Magnetic Resonance Insights, Subramanian describes his work in “Dancing With The Electrons: Time-Domain and CW In Vivo EPR Imaging” (Volume 2, Pages 43–74, 2011). Below is an excerpt from the introduction of his article, with references removed. It provides an overview of the advantages and disadvantages of EPR imaging compared to MRI.
Magnetic resonance spectroscopy, in general, deals with the precessional frequency of magnetic nuclei, such as 1H, 13C, 19F, 31P, etc. and that of unpaired electrons in free radicals and systems with one or more unpaired electrons when placed in a uniform magnetic field. The phenomena of nuclear induction and electron resonance were discovered more or less at the same time, and have become two of the most widely practiced spectroscopic techniques. The finite dimensional spin space of magnetic nuclei makes it possible to quantum mechanically precisely predict how the nuclear spin systems will behave in a magnetic field in presence of radiofrequency fields. On the other hand, the complex and rather diffuse wave functions of the unpaired electron which get further influenced by the magnetic vector potential make it a real challenge to predict the precise behavior of electron resonance systems. The subtle variations in the precessional frequencies brought about by changes in the electronic environment of the magnetic nuclei in NMR and that of the unpaired electrons in EPR make the two techniques widely practiced and very useful in the structural elucidation of complex biomolecules. It was discovered subsequently that the presence of linear field gradients enabled precise spatial registration of nuclear spins which led to the development of imaging of the distribution of magnetic nuclei establishing an important non-invasive medical imaging modality of water-rich soft tissues in living systems with its naturally abundant presence of protons. Nuclear Magnetic Resonance Imaging, popularly known as MRI, is now a well-known and indispensable tool in diagnostic radiology. …

The entirely analogous field of electron paramagnetic (spin) resonance (EPR or ESR) that deals with unpaired electron systems developed as a structural tool much more rapidly with the intricate spectra of free radicals and metal complexes providing an abundance of precise structural information on molecules, that would otherwise be impossible to unravel. The spectroscopic practice of EPR traditionally started in the microwave region of the electromagnetic spectrum and was essentially a physicist’s tool to study magnetic properties and the structure of paramagnetic solid state materials, crystal defects (color centers), etc. Later, chemists started using EPR to unravel the structure of organic free radicals and paramagnetic transition metal and lanthanide complexes. Early EPR instrumentation closely followed the development of radar systems during the Second World War and was operating in the X-band region of the electromagnetic spectrum (~9 GHz). Pulsed EPR methods developed somewhat later due to the requirement of ultra fast switches and electronic data acquisition systems that can cope with three orders of magnitude faster dynamics of the electrons, compared to that of protons. The absence of relatively long-lived free radicals of detectable range of concentration in living systems made in vivo EPR imaging not practical. It became essential that one has to introduce relatively stable biocompatible free radicals as probes into the living system in order to image their distribution. Further the commonly practiced X-band EPR frequency is not useful for interrogating reasonable size of aqueous systems due lack of penetration. Frequencies below L-band (1–2 GHz) are needed for sufficient penetration and one has to employ either water soluble spin probes that can be introduced into the living system (via intramuscular or intravenous infusion) or solid particulate free radicals that can be implanted in vivo. 
Early imaging attempts were entirely in the CW mode at L-band frequencies (1–2 GHz) on small objects. For addressing objects such a laboratory mouse, rat etc., it became necessary to lower the frequency down to radiofrequency (200–500 MHz). With CW EPR imaging, the imaging approach is one of generating projections in presence of static field gradients and reconstructing the image via filtered back-projection as in X-ray CT or positron emission tomography (PET). Most spin probes used for small animal in vivo imaging get metabolically and/or renally cleared within a short time and hence there is need to speed up the imaging process. Further, the very fast dynamics, with relaxation times on the order of microseconds of common stable spin probes such as nitroxides, until recently, precluded the use of pulsed methods that are in vogue in MRI.
As a postscript, Seth Goldstein retired from NIH and now creates kinetic sculpture. Watch some of these creative devices here.

Friday, January 23, 2015

Cobalt-60

In Chapter 16 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I mention the radioactive isotope cobalt-60. Three times—in the captions of Figs. 16.13, 16.15, and 16.46—we show data obtained using 60Co radiation. So, what is a cobalt-60 radiation source, and why is it important?

In Radiation Oncology: A Physicist’s-Eye View, Michael Goitein discusses this once-prevalent but now little-used tool for generating therapeutic photons.
Radioactive isotopes are one source of radiation, and the 60Co therapy machine takes advantage of this. A highly active source of 60Co is placed in a heavy lead shield which has an aperture through which the photons produced in the decay of 60Co can escape to provide the therapeutic beam. The whole is then usually mounted on a rotating gantry so that the beam can be directed at the patient from any angle. 60Co therapy machines are little used these days, except in areas of the world where the supply of electricity and/or repair service are problematic. I mention these machines because they are unusual in that their photon beam is near mono-energetic. It consists primarily of γ-rays of 1.17 and 1.33 MeV energy – which are close enough together that one can think of the radiation as consisting of 1.25 MeV primary photons. However, photons interacting with the shielding around the 60Co source produce lower energy secondary photons which lower the effective energy of the beam somewhat.
The Gamma Knife is a device that uses hundreds of collimated cobalt sources to deliver radiation to a tumor from many directions. It was once state-of-the-art, but it has now been largely superseded by other techniques. Most modern therapeutic photon beams are produced using a linear accelerator, with energies ranging from a few MeV up to about ten MeV. However, cobalt sources are still used in many developing countries (see a recent point/counterpoint article debating whether this is a good or a bad situation).

Cobalt-60’s 5.3-year half-life makes it notorious as a candidate for a dirty bomb, in which radioactive fallout poses a greater risk than the explosion. Isotopes with much shorter half-lives decay away quickly and therefore produce intense but short-lived doses of radiation. Isotopes with much longer half-lives decay so slowly that they give off little radiation. 60Co’s intermediate half-life means that it lasts long enough and produces enough radiation that it could contaminate a region for years, creating a Dr. Strangelove-like doomsday device.
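The trade-off is easy to quantify: for a fixed number of atoms the activity scales as ln 2 divided by the half-life, while the fraction of the source remaining after a time t is 2^(−t/t½). A quick sketch for 60Co:

```python
t_half = 5.27   # years, half-life of Co-60

def fraction_remaining(t_years, t_half=t_half):
    """Fraction of the original activity left after t_years."""
    return 2.0 ** (-t_years / t_half)

print(fraction_remaining(10.0))   # about 0.27: a decade on, a quarter of the activity remains

# Contrast a short-lived isotope (I-131, half-life about 8 days = 0.022 years):
frac_I131 = fraction_remaining(1.0, 0.022)   # essentially gone after a single year
```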

Fortunately, dirty bombs remain hypothetical. However, cobalt sources have a real potential for causing radiation exposure if not handled properly. Here is an excerpt from an International Atomic Energy Agency (IAEA) report about one radiological accident.
A serious radiological accident occurred in Samut Prakarn, Thailand, in late January and early February 2000 when a disused 60Co teletherapy head was partially dismantled, taken from an unsecured storage location and sold as scrap metal. Individuals who took the housing apart and later transported the device to a junkyard were exposed to radiation from the source. At the junkyard the device was further disassembled and the unrecognized source fell out, exposing workers there. The accident came to the attention of the relevant national authority when physicians who examined several individuals suspected the possibility of radiation exposure from an unsecured source and reported this suspicion. Altogether, ten people received high doses from the source. Three of those people, all workers at the junkyard, died within two months of the accident as a consequence of their exposure.

Friday, January 16, 2015

The Immortal Life of Henrietta Lacks

The Immortal Life of Henrietta Lacks, by Rebecca Skloot, superimposed on Intermediate Physics for Medicine and Biology.
The Immortal Life
of Henrietta Lacks,
by Rebecca Skloot.
For Christmas I received a portable CD player to replace one that was broken, so I am now back in business listening to audio books while walking my dog Suki. This week I finished The Immortal Life of Henrietta Lacks by Rebecca Skloot. The book explains how a biopsy from a fatal tumor led to the most famous cell line used in medical research: HeLa.

HeLa cells are grown in cell culture. Russ Hobbie and I describe cell culture experiments in the 4th edition of Intermediate Physics for Medicine and Biology, when discussing the biological effects of radiation.
16.10.1 Cell Culture Experiments

Cell-culture studies are the simplest conceptually. A known number of cells are harvested from a stock culture and placed on nutrient medium in plastic dishes. The dishes are then irradiated with a variety of doses including zero as a control. After a fixed incubation period the cells that survived have grown into visible colonies that are stained and counted. Measurements for many absorbed doses give survival curves such as those in Fig. 16.32. These curves are difficult to measure for very small surviving fractions, because of the small number of colonies that remain.
Russ and I don’t mention HeLa cells in IPMB, but they played a key role in establishing how cells respond to radiation. For instance, Terasima and Tolmach measured “Variations in Several Responses of HeLa Cells to X-Irradiation during the Division Cycle” (Biophysical Journal, Volume 3, Pages 11–33, 1963), and found that “survival (colony-forming ability) is maximal when cells are irradiated in the early post-mitotic (G1) and the pre-mitotic (G2) phases of the cycle, and minimal in the mitotic (M) and late G1 or early DNA synthetic (S) phases.” Russ and I discuss these observations in Section 16.10.2, about chromosome damage:
“Even though radiation damage can occur at any time in the cell cycle (albeit with different sensitivity), one looks for chromosome damage during the next M phase, when the DNA is in the form of visible chromosomes.”
Skloot’s book not only explains HeLa cells and their role in medicine but also describes the life and death of Henrietta Lacks (1920-1951). Her cervical cancer was treated at Johns Hopkins University by a primitive type of brachytherapy (see Section 17.15 of IPMB) in which tubes of radium were placed near the tumor for several days. The treatment failed and Lacks soon died from her aggressive cancer, but not before researcher George Gey obtained a biopsy and used it to create the first immortal human cell line.

The Immortal Life of Henrietta Lacks is about more than just HeLa cells and Henrietta. It also describes the story of how the Lacks family—and in particular Henrietta’s daughter Deborah—learned about and coped with the existence of HeLa cells. In addition, it is a first-person account of how Skloot came to know and gain the trust of the Lacks family. Finally, it is a case study in medical ethics, exploring the use of human tissues in research, the growing role of informed consent in human studies, and the privacy of medical records. The public’s perception of medical research and the view of those doing the research can be quite different. In 2013, the National Institutes of Health and the Lacks family reached an understanding about sharing genomic data from HeLa cells. With part of the income from her book, Skloot established the Henrietta Lacks Foundation to support the Lacks family.

It looks like Suki and I are again enjoying audio books on our walks (for example, see here and here). At least I am; I’m not sure what Suki thinks about it. I hope all the books are this good.

 Listen to Rebecca Skloot discuss The Immortal Life of Henrietta Lacks.

Friday, January 9, 2015

The Electric Potential of a Rectangular Sheet of Charge

My idea of a great physics problem is one that is complicated enough so that it is not trivial, yet simple enough that it can be solved analytically. An example can be found in Sec. 6.3 of the 4th edition of Intermediate Physics for Medicine and Biology.
If one considers a rectangular sheet of charge lying in the xy plane of width 2c and length 2b, as shown in Fig. 6.10, it is possible to calculate exactly the E field along the z axis…. The result is
Equation 6.10 in Intermediate Physics for Medicine and Biology, which contains an expression for the electric field produced by a rectangular sheet of charge.

This is plotted in Fig. 6.11 for c = 1 m, b = 100 m. Close to the sheet (z much less than 1) the field is constant, as it is for an infinite sheet of charge. Far away compared to 1 m but close compared to 100 m, the field is proportional to 1/r as with a line charge. Far away compared to 100 m, the field is proportional to 1/r2, as from a point charge.
What I like most about this example is that you can take limits of the expression to illustrate the different cases. Russ Hobbie and I leave this as a task for the reader in Problem 8. It is not difficult. All you need is the value of the inverse tangent for a large argument (π/2) and its Taylor series, tan⁻¹(x) = x − x³/3 + …. Often expressions like these show simple behavior in two limits, when some variable is either very large or very small. But this example illustrates intuitive behavior in three limits. How lovely. I wish I could take credit for this example, but it was present in earlier editions of IPMB, on which Russ was the sole author. Nicely done, Russ.
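The three regimes are also easy to confirm numerically. The sketch below assumes Eq. 6.10 has the standard form E = (σ/πε0) tan⁻¹[bc/(z√(b² + c² + z²))] (the equation itself appears above only as an image, so check this form against your copy of the book), with b = 100 m and c = 1 m as in Fig. 6.11:

```python
import math

def E(z, b=100.0, c=1.0):
    """Assumed form of Eq. 6.10, in units where sigma/eps0 = 1."""
    return (1.0 / math.pi) * math.atan(b * c / (z * math.sqrt(b*b + c*c + z*z)))

# Close to the sheet (z << c): constant, like an infinite sheet, E -> sigma/(2 eps0)
assert abs(E(1e-4) - 0.5) < 1e-3
# Intermediate (c << z << b): like a line charge lambda = 2c*sigma, E ~ lambda/(2 pi eps0 z)
assert abs(E(10.0) - 2.0 / (2 * math.pi * 10.0)) / E(10.0) < 0.02
# Far away (z >> b): like a point charge Q = 4bc*sigma, E ~ Q/(4 pi eps0 z^2)
assert abs(E(1e4) - 400.0 / (4 * math.pi * 1e8)) / E(1e4) < 0.01
```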

Usually the electric potential, a scalar, is easier to calculate than is the electric field, a vector. This led me to wonder what electric potential is produced by this same rectangle of charge. I imagine the expression for the potential everywhere is extremely complicated, but I would be satisfied with an expression for the potential along the z axis, like in Eq. 6.10 for the electric field. We should be able to find the potential in one of two ways. We could either integrate the electric field along z, or solve for the potential directly by integrating 1/r over the entire sheet. I tried both ways, with no luck. I ground to a halt trying to integrate inverse tangent with a complicated argument. When solving directly, I was able to integrate over y successfully but then got stuck trying to integrate an inverse hyperbolic sine function with an argument that is a complicated function of x. So, I’m left with Eq. 6.10, an elegant expression for the electric field involving an inverse tangent, but no analytical expression for the electric potential.
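Even without a closed form, the on-axis potential is easy to evaluate numerically, and differentiating it should recover the field. A brute-force sketch (my own code; I again assume the arctan form of Eq. 6.10, and work in units where σ/4πε0 = 1):

```python
import math

def V(z, b=100.0, c=1.0, n=400):
    """On-axis potential of a 2b x 2c sheet: midpoint-rule integral of 1/r over the rectangle."""
    hx, hy = 2 * b / n, 2 * c / n
    total = 0.0
    for i in range(n):
        x = -b + (i + 0.5) * hx
        for j in range(n):
            y = -c + (j + 0.5) * hy
            total += hx * hy / math.sqrt(x*x + y*y + z*z)
    return total

# E_z = -dV/dz; compare against the arctan expression (4*atan(...) in these units)
z, dz = 5.0, 1e-3
E_numeric = -(V(z + dz) - V(z - dz)) / (2 * dz)
E_analytic = 4 * math.atan(100.0 / (z * math.sqrt(100.0**2 + 1.0 + z*z)))
print(E_numeric, E_analytic)   # they agree to a fraction of a percent
```

This is no substitute for the analytic expression I am after, but it does confirm that the potential integral and Eq. 6.10 are consistent.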

I was concerned that I might be missing something obvious, so I checked my favorite references: Griffiths’ Introduction to Electrodynamics and Jackson’s infamous Classical Electrodynamics. Neither author solves the problem, even for a square sheet.

As a last resort, I turn to you, dear readers. Does anyone out there—I always assume there is someone out there reading this—know of an analytic expression for the electric potential along the z axis caused by a rectangular sheet of charge, centered at the origin and oriented in the xy plane? If you do, please share it with me. (Warning: I suspect such an expression does not exist.) If you send me one, the first thing I plan to do is to differentiate it with respect to z, and see if I get Eq. 6.10.

This will be fun.

Friday, January 2, 2015

Triplet Production

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe how x rays interact with tissue by pair production.
A photon with energy above 1.02 MeV can produce a particle–antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mec2) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mec2 = 1.02 MeV.

One can show, using 0 = pc for the photon, that momentum [p] is not conserved by the positron and electron if [the conservation of energy] is satisfied. However, pair production always takes place in the Coulomb field of another particle (usually a nucleus) that recoils to conserve momentum. The nucleus has a large mass, so its kinetic energy p2/2m is small…
Then we discuss a related process: triplet production.
Pair production with excitation or ionization of the recoil atom can take place at energies that are only slightly higher than the threshold [2mec2]; however, the cross section does not become appreciable until the incident photon energy exceeds 4mec2 = 2.04 MeV, the threshold for pair production in which a free electron (rather than a nucleus) recoils to conserve momentum. Because ionization and free-electron pair production are (γ, eee+) processes, this is usually called triplet production.” 
Spacetime Physics, by Taylor and Wheeler, superimposed on Intermediate Physics for Medicine and Biology.
Spacetime Physics,
by Taylor and Wheeler.
Where does the factor of four in “4mec2” come from? To answer that question, we must know more about special relativity than is presented in IPMB. We know already that the energy of a photon is hν and the momentum is hν/c, where ν is the frequency, h is Planck’s constant, and c is the speed of light. What we need in addition is that the energy of an electron with rest mass me is γmec2, and its momentum is βγmec, where β is the ratio of the electron’s speed to the speed of light, β = v/c, and γ = 1/sqrt(1−β2). The factors of β and, especially, γ may look odd, but they are common in special relativity. To learn how they arise, read the marvelous book Spacetime Physics by Edwin Taylor and John Archibald Wheeler. Assume a photon with energy hν interacts with an electron at rest (β = 0, γ = 1). Furthermore (and this is not obvious), assume that after the collision the original electron and the new electron-positron pair all move in the direction of the original photon, and travel at the same speed. The conservation of energy requires
hν + mec2 = 3γmec2,

and conservation of momentum implies

hν/c = 3βγmec .

The rest is algebra. Eliminate hν and you find that 3γβ + 1 = 3γ. Then use γ = 1/sqrt(1−β2) to find that β = 4/5 and γ = 5/3. (I love how the Pythagorean triple 3, 4, 5 arises in triplet production.) Then conservation of energy or conservation of momentum implies hν = 4mec2. Now you know the origin of that mysterious factor of four.
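If you’d like to double-check the algebra, a few lines of code can solve 3γβ + 1 = 3γ numerically and confirm the threshold (my sketch, not from the book; energies are in units of mec2):

```python
import math

def gamma(beta):
    return 1.0 / math.sqrt(1.0 - beta * beta)

# f(beta) = 3*gamma*beta + 1 - 3*gamma rises from -2 at beta = 0 toward +1 as beta -> 1,
# so bisection brackets the single root.
lo, hi = 0.0, 0.999999
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if 3 * gamma(mid) * mid + 1 - 3 * gamma(mid) < 0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

h_nu = 3 * beta * gamma(beta)    # photon energy from momentum conservation
print(beta, gamma(beta), h_nu)   # beta = 4/5, gamma = 5/3, h*nu = 4: the 3-4-5 triple
```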

The paper “Pair and Triplet Production Revisited for the Radiologist” by Ralph Raymond (American Journal of Roentgenology, Volume 114, Pages 639–644, 1972) provides additional details. To learn about special relativity, I recommend either Spacetime Physics (their Sec. 8.5 analyzes triplet production) or Space and Time in Special Relativity by one of the best writers of physics, N. David Mermin. I hear Mermin’s recent book It’s About Time is also good, but I haven’t read it yet.

Friday, December 26, 2014

Excerpt from the Fifth Edition

Next month, Russ Hobbie and I will receive the page proofs for the 5th edition of Intermediate Physics for Medicine and Biology. I welcome their arrival because I enjoy working on the book with Russ, but I also dread their coming because they will take over my life for weeks. The page proofs are our last chance to rid the book of errors; we will do our best.

I thought that you, dear readers, might like a preview of the 5th edition. We did not add any new chapters, but we did include several new sections such as this one on color vision.
14.15 Color Vision

The eye can detect color because there are three types of cones in the retina, each of which responds to a different wavelength of light (trichromate vision): red, green, and blue, the primary colors. However, the response curve for each type of cone is broad, and there is overlap between them (particularly the green and red cones). The eye responds to yellow light by activating both the red and green cones. Exactly the same response occurs if the eye sees a mixture of red and green light. Thus, we can say that red plus green equals yellow. Similarly, the color cyan corresponds to activation of both the green and blue cones, caused either by a monochromatic beam of cyan light or a mixture of green and blue light. The eye perceives the color magenta when the red and blue cones are activated but the green is not. Interestingly, no single wavelength of light can do this, so there is no such thing as a monochromatic beam of magenta light; it can only be produced by mixing red and blue. Mixing all three colors, red and green and blue, gives white light. Color printers are based on the colors yellow, cyan and magenta, because when we view the printed page, we are looking at the reflection after some light has been absorbed by the ink. For instance, if white light is incident on a page containing ink that absorbs blue light, the reflected light will contain red and green and therefore appear yellow. Human vision is trichromate, but other animals (such as the dog) have only two types of cones (dichromate vision), and still others have more than three types.

Some people suffer from colorblindness. The most common case is when the cones responding to green light are defective, so that red, yellow and green light all activate only the red receptor. Such persons are said to be red-green color blind: they cannot distinguish red, yellow and green, but they can distinguish red from blue.

As with pitch perception, the sensation of color involves both physics and physiology. For instance, one can stare at a blue screen until the cones responding to blue become fatigued, and then immediately stare at a white screen and see a yellow afterimage. Many other optical illusions with color are possible.
You may recognize parts of this excerpt as coming from a previous entry to this blog. In fact, we used the blog as a source of material for the new edition.

A Christmas Carol, by Charles Dickens, superimposed on Intermediate Physics for Medicine and Biology.
A Christmas Carol,
by Charles Dickens.
I will leave you with another excerpt, this one from the conclusion of A Christmas Carol. Every Christmas I read Dickens’s classic story about how three spirits transformed the miser Ebenezer Scrooge. It is my favorite book; I like it better than even IPMB!

I wish you all the happiest of holidays.
Scrooge was better than his word. He did it all, and infinitely more; and to Tiny Tim, who did not die, he was a second father. He became as good a friend, as good a master, and as good a man, as the good old city knew, or any other good old city, town, or borough, in the good old world. Some people laughed to see the alteration in him, but he let them laugh, and little heeded them; for he was wise enough to know that nothing ever happened on this globe, for good, at which some people did not have their fill of laughter in the outset; and knowing that such as these would be blind anyway, he thought it quite as well that they should wrinkle up their eyes in grins, as have the malady in less attractive forms. His own heart laughed: and that was quite enough for him.