Friday, May 29, 2015

Taylor's Series

In Appendix D of the 5th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I review Taylor’s Series. Our Figures D.3 and D.4 show better and better approximations to the exponential function, ex, found by using more and more terms of its Taylor’s series. As we add terms, the approximation improves for small |x| and diverges more slowly for large |x|. Taking additional terms from the Taylor’s series approximates the exponential by higher and higher order polynomials. This is all interesting and useful, but the exponential looks similar to a polynomial, at least for positive x, so it is not too surprising that polynomials do a decent job approximating the exponential.

A more challenging function to fit with a Taylor’s Series would look nothing like a polynomial, which always grows to plus or minus infinity at large |x|. I wonder how the Taylor’s Series does approximating a bounded function; perhaps a function that oscillates back and forth a lot? The natural choice is the sine function.

The Taylor’s Series of sin(x) is

sin(x) = x − x³/6 + x⁵/120 − x⁷/5040 + x⁹/362880 − …

The figure below shows the sine function and its various polynomial approximations.

The sine function and its various polynomial approximations, from http://www.peterstone.name/Maplepgs/images/Maclaurin_sine.gif
The red curve is the sine function itself. The simplest approximation is simply sin(x) = x, which gives the yellow straight line. It looks good for |x| less than one, but quickly diverges from sine at large |x|. The green curve is sin(x) = x − x³/6. It rises to a maximum and then falls, much like sin(x), but heads off to plus or minus infinity relatively quickly. The cyan curve is sin(x) = x − x³/6 + x⁵/120. It captures the first peak of the oscillation well, but then fails. The royal blue curve is sin(x) = x − x³/6 + x⁵/120 − x⁷/5040. It is an excellent approximation of sin(x) out to x = π. The violet curve is sin(x) = x − x³/6 + x⁵/120 − x⁷/5040 + x⁹/362880. It begins to capture the second oscillation, but then diverges. You can see the Taylor’s Series is working hard to represent the sine function, but it is not easy.

Appendix D in IPMB gives a table of values of the exponential and its different Taylor’s Series approximations. Below I create a similar table for the sine. Because the sine and all its series approximations are odd functions, I only consider positive values of x.

A table of values for sin(x) and its various polynomial approximations.
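If you would like to generate such a table yourself, here is a minimal Python sketch (my own, not from IPMB); the x values are ones I picked for illustration, so adjust them to match whatever table you have in mind.

```python
import math

def sin_taylor(x, n_terms):
    """Partial sum of the Taylor's series of sin(x) about x = 0,
    keeping n_terms nonzero terms: x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

# Compare sin(x) with its one- through five-term approximations.
for x in [0.5, 1.0, 2.0, 3.0, 4.0]:
    row = [sin_taylor(x, n) for n in range(1, 6)]
    print(f"x = {x:3.1f}   sin(x) = {math.sin(x):8.4f}   " +
          "   ".join(f"{value:8.4f}" for value in row))
```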

One final thought. Russ and I title Appendix D as “Taylor’s Series” with an apostrophe s. Should we write “Taylor Series” instead, without the apostrophe s? Wikipedia just calls it the “Taylor Series.” I’ve seen it both ways, and I don’t know which is correct. Any opinions?

Friday, May 22, 2015

Progress Toward a Deployable SQUID-Based Ultra-Low Field MRI System for Anatomical Imaging

When surfing the web, I like to visit medicalphysicsweb.org. This site, maintained by the Institute of Physics, always publishes interesting and up-to-date information about physics applied to medicine. Readers of the 5th Edition of Intermediate Physics for Medicine and Biology should visit it regularly.

A recent article discusses a paper by Michelle Espy and her colleagues about “Progress Toward a Deployable SQUID-Based Ultra-Low Field MRI System for Anatomical Imaging” (IEEE Transactions on Applied Superconductivity, Volume 25, Article Number 1601705, June 2015). The abstract is given below.
Magnetic resonance imaging (MRI) is the best method for non-invasive imaging of soft tissue anatomy, saving countless lives each year. But conventional MRI relies on very high fixed strength magnetic fields, ≥ 1.5 T, with parts-per-million homogeneity, requiring large and expensive magnets. This is because in conventional Faraday-coil based systems the signal scales approximately with the square of the magnetic field. Recent demonstrations have shown that MRI can be performed at much lower magnetic fields (∼100 μT, the ULF regime). Through the use of pulsed prepolarization at magnetic fields from ∼10–100 mT and SQUID detection during readout (proton Larmor frequencies on the order of a few kHz), some of the signal loss can be mitigated. Our group and others have shown promising applications of ULF MRI of human anatomy including the brain, enhanced contrast between tissues, and imaging in the presence of (and even through) metal. Although much of the required core technology has been demonstrated, ULF MRI systems still suffer from long imaging times, relatively poor quality images, and remain confined to the R and D laboratory due to the strict requirements for a low noise environment isolated from almost all ambient electromagnetic fields. Our goal in the work presented here is to move ULF MRI from a proof-of-concept in our laboratory to a functional prototype that will exploit the inherent advantages of the approach, and enable increased accessibility. Here we present results from a seven-channel SQUID-based system that achieves pre-polarization field of 100 mT over a 200 cm3 volume, is powered with all magnetic field generation from standard MRI amplifier technology, and uses off the shelf data acquisition. As our ultimate aim is unshielded operation, we also demonstrated a seven-channel system that performs ULF MRI outside of heavy magnetically-shielded enclosure. In this paper we present preliminary images and compare them to a model, and characterize the present and expected performance of this system.
Let’s compare a standard 1.5-Tesla clinical MRI system with Espy et al.’s ultra-low field device. To compare quantities using the same units, I will always express the magnetic field strength in mT; a typical clinical MRI machine has a field of 1500 mT. In Section 18.3 of IPMB, Russ Hobbie and I show that the magnetization depends linearly on the magnetic field strength (Equation 18.9). The static magnetic field of Espy et al.’s machine is 0.2 mT, but for 4000 ms before spin excitation a polarization magnetic field of 100 mT is turned on. Thus, there is a difference of a factor of 15 in the magnetization, with the ultra-low device having less and the clinical machine more. Once the polarization field is turned off, the magnetic field in the ultra-low device reduces to 0.2 mT. This is only four times the earth’s magnetic field, about 0.05 mT. Espy et al. use Helmholtz coils to cancel the earth’s field.

A magnetic field of 1500 mT is usually produced by a coil that must be kept at low temperatures to maintain the wire as a superconductor. A 100 mT field does not require superconductivity, but the needed current generates enough heat that the 510-turn copper coil needs to be cooled by liquid nitrogen, and even still the current in the coil must be turned off half the time (50% duty cycle) to avoid overheating.

Once the polarization field turns off, the spins precess with the Larmor frequency for a 0.2 mT magnetic field. The gyromagnetic ratio of protons is 42.6 kHz/mT, implying a Larmor frequency of 8.5 kHz, compared with 64,000 kHz for a clinical machine. So, the Larmor frequencies differ by a factor of 7500.

The magnetic resonance signal recorded in a clinical system is large compared to an ultra-low device because the magnetization is larger by a factor of 15 and the Larmor frequency is larger by a factor of 7500, implying a signal over a hundred thousand times larger. Espy et al. get around this problem by measuring the signal with a Superconducting Quantum Interference Device (SQUID) magnetometer, like those used in magnetoencephalography (see Section 8.9 in IPMB).
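For readers who like to check the arithmetic, here is a short Python sketch of this back-of-the-envelope comparison. The field values and gyromagnetic ratio are the numbers quoted above; the script itself is mine, not something from Espy et al.

```python
# Back-of-the-envelope comparison of a clinical scanner and the ULF system,
# using the numbers quoted above.
gamma = 42.6                  # proton gyromagnetic ratio, kHz/mT
B_clinical = 1500.0           # mT, clinical polarizing and readout field
B_polarize_ulf = 100.0        # mT, pulsed prepolarization field
B_readout_ulf = 0.2           # mT, ULF measurement field

magnetization_ratio = B_clinical / B_polarize_ulf     # about 15
f_clinical = gamma * B_clinical                       # about 64,000 kHz
f_ulf = gamma * B_readout_ulf                         # about 8.5 kHz
larmor_ratio = f_clinical / f_ulf                     # about 7500

# In a Faraday coil the signal scales roughly as magnetization times frequency.
signal_ratio = magnetization_ratio * larmor_ratio     # over 100,000
print(magnetization_ratio, f_clinical, f_ulf, larmor_ratio, signal_ratio)
```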

Preliminary experiments were performed in a heavy and expensive magnetically shielded room (again, like those used when measuring the MEG). However, second-order gradiometer pickup coils reduce the noise sufficiently that the shielded room is unnecessary.

To perform imaging, Espy et al. use magnetic field gradients of about 0.00025 mT/mm, compared with 0.01 mT/mm gradients typical for clinical MRI. For two objects 1 mm apart, a clinical imaging system would therefore produce a fractional frequency shift of 0.01/1500 = 0.0000067 or 6.7 ppm, whereas a low-field device has a fractional shift of 0.00025/0.2 = 0.00125 or 1250 ppm. Therefore, the clinical magnetic field needs to be extremely homogeneous (on the order of parts per million) to avoid artifacts, whereas a low-field device can function with heterogeneities hundreds of times larger.
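A similar sketch for the gradient comparison, again using only the numbers quoted above:

```python
# Fractional frequency shift between two objects 1 mm apart, which sets how
# homogeneous the static field must be to avoid artifacts.
def fractional_shift_ppm(gradient_mT_per_mm, B_mT, separation_mm=1.0):
    return 1e6 * gradient_mT_per_mm * separation_mm / B_mT

print(fractional_shift_ppm(0.01, 1500.0))     # clinical: about 6.7 ppm
print(fractional_shift_ppm(0.00025, 0.2))     # ultra-low field: about 1250 ppm
```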

The relaxation time constants for gray matter in the brain are reported by Espy et al. as about T1 = 600 ms and T2 = 80 ms. In clinical devices, the value of T1 is about twice that, and T2 is about the same. Based on Figure 18.12 and Equation 18.35 in IPMB, I’m not surprised that T2 is largely independent of the magnetic field strength. However, in strong fields T1 increases as the square of the magnetic field (or the square of the Larmor frequency), so I was initially expecting a much smaller value of T1 for the low-field device. But once the Larmor frequency drops below the reciprocal of the typical correlation time of the spins, T1 becomes independent of the magnetic field strength (Equation 18.34 in IPMB). I assume that is what is happening here, which explains why T1 drops by only a factor of two when the magnetic field is reduced by a factor of 7500.

I find the differences between radio-frequency excitation pulses to be interesting. In clinical imaging, if the excitation pulse has a duration of about 1 ms and the Larmor frequency is 64,000 kHz, there are 64,000 oscillations of the radio-frequency magnetic field in a single π/2 pulse. Espy et al. used a 4 ms duration excitation pulse and a Larmor frequency of 8.5 kHz, implying just 34 oscillations per pulse. I have always worried that illustrations such as Figure 18.23 in IPMB mislead because they show the Larmor frequency as being not too different from the excitation pulse duration. For low-field MRI, however, this picture is realistic.

Does low-field MRI have advantages? You don’t need the heavy, expensive superconducting coil to generate a large static field, but you do need SQUID magnetometers to record the small signal, so you don’t avoid the need for cryogenics. The medicalphysicsweb article weighs the pros and cons. For instance, the power requirements for a low-field device are relatively small, and it is more portable, but the imaging times are long. The safety hazards caused by metal are much less in a low-field system, but the impact of stray magnetic fields is greater. I’m skeptical about the ultimate usefulness of ultra low-field MRI, but it’ll be fun to watch if Espy and her team can prove me wrong.

Friday, May 15, 2015

What My Dogs Forced Me To Learn About Thermal Energy Transfer

I’m a dog lover, so I have to enjoy an American Journal of Physics paper that begins “For many years, I have competed and judged in American Kennel Club obedience trials.” The title of the paper is also delightful: “What my Dogs Forced Me to Learn About Thermal Energy Transfer” (Craig Bohren, American Journal of Physics, Volume 83, Pages 443−446, 2015). Bohren’s hypothesis is that an animal perceives hotness and coldness not directly from an object’s temperature, as one might naively expect, but from the flux density of thermal energy. I could follow his analysis of this idea, but I prefer to use the 5th edition of Intermediate Physics for Medicine and Biology, because Russ Hobbie and I have already worked out almost all the results we need.

Chapter 4 of IPMB analyzes diffusion. We consider the concentration, C, of particles as they diffuse in one dimension. Initially (t = 0), there exists a concentration difference C0 between the left and right sides of a boundary at x = 0. We solve the diffusion equation in this case, and find the concentration in terms of an error function

  C(x,t) = (C0/2) [ 1 − erf(x/√(4Dt)) ] ,                  Eq. 4.75

where D is the diffusion constant. A plot of C(x,t) is shown in Fig. 4.22 (we assume C = 0 on the far right, but you could add a constant to the solution without changing the physics, so all that really matters is the concentration difference).

Fig. 4.22 The spread of an initially sharp boundary due to diffusion.
In Homework Problem 19, Russ and I show that the analysis of particle diffusion is equivalent to an analysis of heat conduction, with the thermal diffusivity D given by the thermal conductivity, κ, divided by the specific heat capacity, c, and the density, ρ

D = κ/(cρ) .

So, by analogy, if you start (t = 0) with a uniform temperature on the left and on the right sides of a boundary at x = 0, with an initial temperature difference ΔT between sides, the temperature distribution T(x,t) is (to within an additive constant temperature) the same as the concentration distribution calculated earlier.

T(x,t) = (ΔT/2) [ 1 − erf(x/√(4Dt)) ] .

The temperature of the interface is always ΔT/2. If a dog responded simply to the temperature at x = 0 (where its thermoreceptors are presumably located), it would react in a way strictly proportional to the temperature difference ΔT. But Bohren’s hypothesis is that thermoreceptors respond to the energy flux density, κ dT/dx.

Now let us look again at Fig. 4.22. The slope of the curve at x = 0 is the key quantity. So, we must differentiate our expression for T(x,t). We get

κ dT/dx = − (κ ΔT/2) d/dx [ erf(x/√(4Dt)) ] .

By the chain rule, this becomes (with u = x/√(4Dt))

κ dT/dx = − κ ΔT / (2 √(4Dt)) d(erf(u))/du .

The derivative of the error function is given in IPMB on page 179

d/du (erf(u)) = (2/√π) e^(−u²) .

At the interface (u = 0), this becomes 2/√π. Therefore

κ dT/dx = − κ ΔT /√(4πDt) .

Bohren comes to the same result, but by a slightly different argument.

The energy flux density depends on time, with an infinite response initially (we assume an abrupt difference of temperature on the two sides of the boundary x = 0) that falls to zero as the time becomes large. The flux density depends on the material parameters through the quantity κ/√D, which is equivalent to √(cρκ) and is often called the thermal inertia.
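If you want numbers, here is a rough Python sketch. The material properties are approximate room-temperature textbook values that I supply for illustration (they are not from Bohren’s paper), and the flux formula treats both sides of the boundary as the same material, as in the derivation above.

```python
import math

def thermal_inertia(kappa, rho, c):
    """Thermal inertia sqrt(c*rho*kappa), in J m^-2 K^-1 s^-1/2."""
    return math.sqrt(c * rho * kappa)

def interface_flux(kappa, rho, c, dT, t):
    """Magnitude of the heat flux kappa*dT/sqrt(4*pi*D*t) at the interface,
    a time t after contact, treating both sides as the same material."""
    D = kappa / (c * rho)               # thermal diffusivity
    return kappa * dT / math.sqrt(4 * math.pi * D * t)

# Approximate room-temperature properties (assumed, for illustration only):
# kappa [W/(m K)], rho [kg/m^3], c [J/(kg K)]
materials = {"aluminum": (237.0, 2700.0, 900.0),
             "stainless steel": (16.0, 8000.0, 500.0)}

for name, (kappa, rho, c) in materials.items():
    print(name, thermal_inertia(kappa, rho, c),
          interface_flux(kappa, rho, c, dT=20.0, t=1.0))
```

With these values the thermal inertia of aluminum comes out roughly three times that of stainless steel, which is why the aluminum cylinder should feel colder (or hotter) to the tongue even though both start at the same temperature.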

Bohren goes on to analyze the case when the two sides have different properties (for example, the left might be a piece of aluminum, and the right a dog’s tongue), and shows that you get a similar result except the effective thermal inertia is a combination of the thermal inertia on the left and right. He does not solve the entire bioheat equation (Sec. 14.11 in IPMB), including the effect of blood flow. I would guess that blood flow would have little effect initially, but it would play a greater and greater role as time goes by.

Perhaps I will try Bohren’s experiment: I’ll give Auggie (my daughter Kathy’s lovable foxhound) a cylinder of aluminum and a cylinder of stainless steel, and see if he can distinguish between the two. My prediction is that, rather than either metal, he prefers rawhide.

Friday, May 8, 2015

The blog is dead. Long live the blog!

Intermediate Physics for Medicine and Biology.
All good things must come to an end. After nearly eight years of posting an entry to this blog every Friday morning, I must say goodbye. This is the last entry to my blog dedicated to the 4th edition of Intermediate Physics for Medicine and Biology. I hope you have found it useful.

Next week will be the first entry to the blog for the FIFTH edition of Intermediate Physics for Medicine and Biology! You can now purchase the 5th edition at the Springer website. Amazon does not have a page for the book yet, but it should be coming soon.

What’s new in the 5th edition? The preface states
The Fifth Edition does not add any new chapters, but almost every page has been improved and updated. Again, we fought the temptation to expand the book and deleted material when possible. Some of the deleted material is available at the book’s website: [https://sites.google.com/view/hobbieroth]. The Fifth Edition has 12% more end-of-chapter problems than the Fourth Edition; most highlight biological applications of the physical principles. Many of the problems extend the material in the text. A solutions manual is available to those teaching the course. Instructors can use it as a reference or provide selected solutions to their students. The solutions manual makes it much easier for an instructor to guide an independent-study student. Information about the solutions manual is available at the book’s website.
The 5th edition is only 13 pages longer than the 4th. By deleting or condensing obsolete or less important topics, we made room for some new sections, including
  • 1.2 Models (about toy models and their different roles in biology and physics)
  • 1.15 Diving (SCUBA diving and the bends)
  • 2.6 The Chemostat
  • 8.1.2 The Cyclotron
  • 8.8.4 Magnetic nanoparticles
  • 9.10.5 Microwaves, Mobile Phones, and Wi-Fi
  • 10.7 Proportional, Derivative, and Integral Control
  • 13.1.3 Shear waves
  • 13.7.4 Elastography (using ultrasound imaging)
  • 13.7.5 Safety (of ultrasound imaging)
  • 14.2 Electron Waves and Particles: The Electron Microscope
  • 14.9.2 Photodynamic Therapy
  • 14.15 Color Vision
  • 18.14 Hyperpolarized MRI of the Lung
There are no errata yet, but despite our best efforts to find and remove all mistakes, I suspect we will start finding errors in the 5th edition soon, and we’ll tell you about them when we do. As always, if YOU find errors in the book, please let us know.

Enjoy!

Friday, May 1, 2015

Churchill’s moral

The Second World War, by Winston Churchill.
Winston Churchill is one of my heroes. After completing my PhD dissertation, I rewarded myself by reading Churchill’s history The Second World War; all six volumes. I am amazed how someone could make so much history leading England through World War Two, and then write that history so well. I love his language, and the way he uses his memos and letters to illustrate his thoughts at the time the events happened, rather than relying on his memories of those events years later. His life story is fascinating, and is told eloquently by the late William Manchester in his biography of Churchill, The Last Lion.

One unique feature of Churchill’s history is that he gave it a moral:

Moral of the Work

In War: Resolution
In Defeat: Defiance
In Victory: Magnanimity
In Peace: Goodwill

All books should have a moral that sums up their key message in just a handful of carefully chosen words and highlights the crucial lessons readers should learn. A moral is not an abstract, meant to summarize the book. Rather than explaining what the book is about, a moral tells why you should bother reading the book at all.

What would be the moral of the 4th edition of Intermediate Physics for Medicine and Biology? I cannot capture the essence of IPMB as well as Churchill did his history, but I must try. The moral of IPMB should express how physics explains and constrains biology and medicine, how you cannot truly understand biology until you can describe it in the language of mathematics, how so much of what is learned in introductory physics has direct applications to modern medicine, how toy models provide a way to reduce complex biological processes to their fundamental mechanisms, how the goal of expressing phenomena in mathematics is not merely to calculate numbers but to tell a physical story, and how solving homework problems is the most important way (more important than reading the book!) to learn crucial modeling skills. So, with apologies to Sir Winston, I present the moral of Intermediate Physics for Medicine and Biology:

Moral of the Work

 In Physics: Physiology
In Math: Medicine
In Models: Comprehension
In Equations: Insight

Friday, April 24, 2015

Figure 13.5

In Chapter 13 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss imaging using ultrasound. One of the key concepts in this chapter is the reflection of an ultrasonic wave at the boundary between tissues having different acoustic impedances. The acoustic impedance is equal to the square root of ρ/κ, where ρ is the density and κ is the compressibility. An ultrasonic image is formed by recording echoes of an ultrasonic wave and using the timing of those echoes to determine the distance from tissue boundaries.

Figure 13.5 from Intermediate Physics for Medicine and Biology.
Figure 13.5 shows schematically how part of a pressure wave approaching a surface (the incident wave) passes through the surface (the transmitted wave) and part travels back toward the wave’s source (the reflected wave, or echo). When you look closely, however, there is something odd about this figure: the transmitted wave has a larger amplitude than the incident wave. How can this be? Doesn’t this violate some conservation law? If we consider particles rather than waves, we would expect that if 100 particles were incident on a surface, perhaps 75 of them would be transmitted and 25 reflected. Figure 13.5 seems to imply that 100 incident particles result in 133 transmitted and 33 reflected! How?

The figure is not wrong; the problem is with our intuition. Pressure is not a conserved quantity. There is no reason to expect the sum of the pressures of the transmitted and reflected waves to equal the pressure of the incident wave. The amplitudes are consistent with equations 13.26 and 13.27 in IPMB relating the three waves. Yet there is a conserved quantity, one we all know: energy.

The intensity of a wave is the energy per unit area per unit time. The intensity I, pressure p, and acoustic impedance Z are related by equation 13.29: I = p²/(2Z). The transmitted wave in Fig. 13.5 has a pressure amplitude that is 1.33 times the amplitude of the incident wave, but it is moving through a tissue that has twice the acoustic impedance (the caption says that for this figure, Z2 = 2 Z1). For simplicity, take the acoustic impedance on the left (the incident side of the boundary, region 1) to be Z1 = 0.5 and the amplitude of the incident wave to be 1 (let’s not worry about units for now, because our goal is to compare the relative intensities of the three waves). In this case, the intensity of the incident wave is equal to one. If the transmitted pressure is 1.33 and the acoustic impedance on the right (region 2) is Z2 = 1 (twice Z1), then the transmitted intensity is (1.33)²/2 = 0.89. The reflected wave has amplitude 0.33, and is propagating through the tissue on the left, so its intensity is (0.33)²/1 = 0.11. The sum of the intensities of the transmitted and reflected waves, 0.89 + 0.11, is equal to the intensity of the incident wave. Energy is conserved! The figure is correct after all.
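Here is a quick Python check of that bookkeeping (my own sketch, not from IPMB), using the usual normal-incidence pressure reflection and transmission coefficients, which reproduce the amplitudes in the figure.

```python
# Check that intensity is conserved at the boundary in Fig. 13.5.
def intensities(Z1, Z2, p_incident=1.0):
    p_reflected = p_incident * (Z2 - Z1) / (Z2 + Z1)
    p_transmitted = p_incident * 2 * Z2 / (Z2 + Z1)
    I_incident = p_incident**2 / (2 * Z1)
    I_reflected = p_reflected**2 / (2 * Z1)      # travels in medium 1
    I_transmitted = p_transmitted**2 / (2 * Z2)  # travels in medium 2
    return I_incident, I_reflected, I_transmitted

I_i, I_r, I_t = intensities(Z1=0.5, Z2=1.0)
print(I_i, I_r, I_t, I_r + I_t)   # 1.0, 0.11..., 0.88..., 1.0
```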

Here is another way to think about intensity: it’s one half the product of the pressure and the tissue speed. When I say “tissue speed” I don’t mean the propagation speed of the wave, but the speed of the tissue itself as it oscillates back and forth. The acoustic impedance relates the pressure and tissue speed. The large pressure of the transmitted wave is associated with a small tissue speed. The transmitted wave in Fig. 13.5 looks “big” only because we plot the pressure. Had we plotted tissue speed instead, we would not be wondering why the transmitted wave has such a large amplitude. We would, however, be scratching our head about a funny phase shift in the reflected wave, which the version of the figure showing the pressure hides.

So, Fig. 13.5 is correct. Does that mean it looks exactly the same in the 5th edition of IPMB (due out this summer)? No, we did change the figure; not to correct an error, but to emphasize another point. Figure 13.5, as presently drawn, shows the wavelength of the transmitted wave to be the same as the wavelength of the incident wave. The wavelength does not depend on the acoustic impedance, but it does depend on the wave speed (the propagation speed of the wave itself, usually given the symbol c, which is not the same as the speed of the oscillating tissue). The wave speed is equal to the square root of the reciprocal of the product of the density and compressibility. One can cook up examples where two tissues have the same wave speed but different acoustic impedances. For instance (again, not worrying about units and only comparing relative sizes), if the tissue on the left had twice the compressibility and half the density of the tissue on the right, then the left would have half the acoustic impedance and the same wave speed as the right, just as shown in Fig. 13.5. But tissues usually differ in compressibility by a greater factor than they differ in density. If we assume the two regions have the same density but the right has one-fourth the compressibility, then Z2 = 2 Z1 as before but also c2 = 2 c1, so the wavelength is longer on the right. In the 5th edition, the figure now shows the wavelength longer on the right.
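To see the two scenarios side by side, here is a small sketch in relative units (my own illustration; the specific ρ and κ values are made up simply to satisfy the stated ratios).

```python
import math

def impedance_and_speed(rho, kappa):
    """Acoustic impedance Z = sqrt(rho/kappa) and wave speed c = 1/sqrt(rho*kappa)."""
    return math.sqrt(rho / kappa), 1.0 / math.sqrt(rho * kappa)

# Old figure: left has twice the compressibility and half the density of the right.
print(impedance_and_speed(rho=0.5, kappa=2.0))   # left:  Z = 0.5, c = 1.0
print(impedance_and_speed(rho=1.0, kappa=1.0))   # right: Z = 1.0, c = 1.0 (same wavelength)

# New figure: same density, right has one-fourth the compressibility.
print(impedance_and_speed(rho=1.0, kappa=1.0))   # left:  Z = 1.0, c = 1.0
print(impedance_and_speed(rho=1.0, kappa=0.25))  # right: Z = 2.0, c = 2.0 (longer wavelength)
```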

What is the moral to this story? Readers (and authors) need to think carefully about illustrations such as Fig. 13.5. They tell a physical story that is often richer and more complicated than we may initially realize.

Friday, April 17, 2015

Physical Models of Living Systems

Physical Models of Living Systems, by Philip Nelson.
Philip Nelson has a new textbook that came out earlier this year: Physical Models of Living Systems. It’s an excellent book, well written and beautifully illustrated. The target audience is similar to that for the 4th edition of Intermediate Physics for Medicine and Biology: upper-level undergraduates who have studied physics and math at the introductory level. Like IPMB, it stresses the construction of physical and mathematical models of living systems.

At the start of the book, Nelson provides a section labeled “To the Student.” I hope students read this, as it provides much wisdom and advice. In fact, most of this advice applies as well to IPMB. I found his discussion of “skills” to be so valuable that I reproduce it here.
Science is not just a pile of facts for you to memorize. Certainly you need to know many facts, and this book will supply some as background to the case studies. But you also need skills. Skills cannot be gained just by reading through this (or any) book. Instead you’ll need to work through at least some of the exercises, both those at the ends of chapters and others sprinkled throughout the text.

Specifically, this book emphasizes

Model construction skills: It's important to find an appropriate level of description and then write formulas that make sense at that level. (Is randomness likely to be an essential feature of this system? Does the proposed model check out at the level of dimensional analysis?) When reading others’ work, too, it’s important to be able to grasp what assumptions their model embodies, what approximations are being made, and so on.

Interconnection skills: Physical models can bridge topics that are not normally discussed together, by uncovering a hidden similarity. Many big advances in science came about when someone found an analogy of this sort.

Critical skills: Sometimes a beloved physical model turns out to be . . . wrong. Aristotle taught that the main function of the brain was to cool the blood. To evaluate more modern hypotheses, you generally need to understand how raw data can give us information, and then understanding.

Computer skills: Especially when studying biological systems, it’s usually necessary to run many trials, each of which will give slightly different results. The experimental data very quickly outstrip our abilities to handle them by using the analytical tools taught in math classes. Not very long ago, a book like this one would have to content itself with telling you things that faraway people had done; you couldn’t do the actual analysis yourself, because it was too difficult to make computers do anything. Today you can do industrial-strength analysis on any personal computer.

Communication skills: The biggest discovery is of little use until it makes it all the way into another person’s brain. For this to happen reliably, you need to sharpen some communication skills. So when writing up your answers to the problems in this book, imagine that you are preparing a report for peer review by a skeptical reader. Can you take another few minutes to make it easier to figure out what you did and why? Can you label graph axes better, add comments to your code for readability, or justify a step? Can you anticipate objections?

You'll need skills like these for reading primary research literature, for interpreting your own data when you do experiments, and even for evaluating the many statistical and pseudostatistical claims you read in the newspapers.

One more skill deserves separate mention. Some of the book’s problems may sound suspiciously vague, for example, “Comment on . . .” They are intentionally written to make you ask, “What is interesting and worthy of comment here?” There are multiple “right” answers, because there may be more than one interesting thing to say. In your own scientific research, nobody will tell you the questions. So it’s good to get the habit of asking yourself such things.
Nelson begins the book by discussing virus dynamics, and specifically analyzes the work of Alan Perelson, who constructs mathematical models of how the human immunodeficiency virus interacts with our immune system. Here at Oakland University, Libin Rong (a former postdoc of Perelson’s) does similar research, and their work is an excellent case study in using mathematics to model a biological process.

A large fraction of the book examines the role of randomness in biology, leading to a detailed analysis of probability. Nelson provides an elegant discussion of the experiments of Max Delbruck and Salvador Luria.
S. Luria and M. Delbruck set out to explore inheritance in bacteria in 1943. Besides addressing a basic biological problem, this work developed a key mode of scientific thought. The authors laid out two competing hypotheses, and sought to generate testable quantitative predictions from them. But unusually for the time, the predictions were probabilistic in character. No conclusion can be drawn from any single bacterium—sometimes it gains resistance; usually it doesn’t. But the pattern of large numbers of bacteria has bearing on mechanism. We will see how randomness, often dismissed as an unwelcome inadequacy of an experiment, turned out to be the most interesting feature of the data.
Perhaps one way to appreciate the differences between Physical Models of Living Systems and IPMB is to compare how each handles diffusion. Russ Hobbie and I consider diffusion in Chapter 4 of IPMB; we describe diffusion as arising from a concentration gradient (Fick’s first law), and use the continuity equation to derive the diffusion equation (Fick’s second law). These relationships are macroscopic descriptions of diffusion. Then, at the end of the chapter—almost as an afterthought—we show that diffusion can be thought of as a random walk. Nelson, on the other hand, starts by analyzing the random nature of diffusion using probabilistic ideas, and then—almost as an afterthought—derives the diffusion equation (or at least a discrete approximation of it). I think this example reflects the different approaches of the two books: IPMB generally takes a macroscopic approach but sometimes reaches down with an example at the microscopic level, whereas Physical Models of Living Systems typically starts with a microscopic description and then sometimes works its way up to the macroscopic level.
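To illustrate the microscopic-to-macroscopic route in a few lines of code, here is a sketch (mine, not from either book) showing that the spread of many one-dimensional random walkers matches the variance 2Dt predicted by the diffusion equation.

```python
import random

def random_walk_positions(n_walkers=10000, n_steps=100, step=1.0):
    """Final positions of unbiased one-dimensional random walkers."""
    return [sum(random.choice((-step, step)) for _ in range(n_steps))
            for _ in range(n_walkers)]

positions = random_walk_positions()
mean = sum(positions) / len(positions)
variance = sum((x - mean)**2 for x in positions) / len(positions)

# The diffusion equation predicts a Gaussian spread with variance 2Dt,
# which for this walk equals n_steps * step**2 = 100.
print(mean, variance)
```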

Both books also have an extensive analysis of feedback. The canonical example in IPMB comes from physiology at the organism level: breathing rate controls and is controlled by blood carbon dioxide concentration. In Physical Models of Living Systems, a central example is how bacteria use feedback to regulate the synthesis of the amino acid tryptophan. Both case studies are excellent examples of negative feedback, but at different spatial scales. One example is not better than the other; they’re merely different illustrations of the same idea.

One strength of Physical Models of Living Systems is its emphasis on using computer simulations to describe a system’s behavior. IPMB has a few computer programs (for example, a program is provided to simulate the Hodgkin-Huxley model of a nerve axon), but Physical Models of Living Systems has a much heavier reliance on numerical simulation. Again, one approach isn’t better than the other, just different. One can learn a lot about biology using toy models and analytical analysis, but many more-complicated (often nonlinear) processes need the numerical approach. Anyone who uses, or plans to use, MATLAB for simulations may benefit from the Student’s Guide to Physical Models of Living Systems (available free to all at www.macmillanhighered.com/physicalmodels1e).

In conclusion, Nelson states that Physical Models of Living Systems is about how “physical science and life science illuminate each other,” and I can’t think of a better description of the goal of Intermediate Physics for Medicine and Biology. Students are lucky they have both to choose from. Finally, what is the very best thing about Physical Models of Living Systems? At the end of the “To the Student” section, Nelson lists several other books that complement his, and cites…you guessed it…Intermediate Physics for Medicine and Biology.

Friday, April 10, 2015

The Steradian

Angles are measured in radians, but solid angles are measured in steradians. Russ Hobbie and I discuss solid angles in Appendix A of the 4th edition of Intermediate Physics for Medicine and Biology.
A plane angle measures the diverging of two lines in two dimensions. Solid angles measure the diverging of a cone of lines in three dimensions. Figure A.3 shows a series of rays diverging from a point and forming a cone. The solid angle Ω is measured by constructing a sphere of radius r centered at the vertex and taking the ratio of the surface area S on the sphere enclosed by the cone to r2:

Ω = S/r².

…The unit of solid angle is the steradian (sr). A complete sphere subtends a solid angle of 4π steradians, since the surface area of a sphere is 4πr².
It is useful to have an intuitive idea of how big a steradian is. Viewed from the center of the earth, Asia subtends about one steradian, and Switzerland subtends about one millisteradian. From its center a sphere subtends 4π steradians, so one steradian is 1/4π = 0.08, or 8% of the sphere area. Suppose we use spherical coordinates to determine the area, centered at the north pole (θ = 0), that subtends one steradian. It is the area subtended by the “cap” of the sphere having an angle θ = cos⁻¹(1 − 1/(2π)) = cos⁻¹(0.84) = 32.8 degrees, or 0.57 radians.

One square degree is (π/180)² = 0.000305 sr = 305 μsr. In other words, there are 3283 square degrees per steradian. Put in yet another way, a steradian is one square radian. The moon has a radius of 1737 km, and the distance between the earth and the moon is 384,400 km. The solid angle subtended by the moon in the night sky is therefore π × 1737²/384,400² = 0.000064 sr, or 64 μsr. Interestingly, the sun, with a radius of 696,000 km and an earth-sun distance of 149,600,000 km, subtends almost the same solid angle, which makes solar eclipses so interesting. Viewed from earth, Mars at its closest approach subtends about 12 nanosteradians.
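Here is a short Python sketch of these estimates. The Mars radius (about 3390 km) and closest-approach distance (about 56 million km) are values I supplied, since they are not given above.

```python
import math

def solid_angle_of_disk(radius_km, distance_km):
    """Solid angle of a distant disk, pi*r^2/d^2 (small-angle approximation)."""
    return math.pi * radius_km**2 / distance_km**2

print(solid_angle_of_disk(1737, 384400))        # moon: about 64 microsteradians
print(solid_angle_of_disk(696000, 149600000))   # sun: nearly the same
print(solid_angle_of_disk(3390, 5.6e7))         # Mars at closest approach: ~12 nsr

# Half-angle of the polar cap that subtends exactly one steradian.
theta = math.acos(1 - 1 / (2 * math.pi))
print(math.degrees(theta))                      # about 32.8 degrees
```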

At the Battle of Bunker Hill, the order went out to “don’t fire until you see the whites of their eyes.” This may be a figure of speech, but let’s take it literally. You can see the whites of a person’s eyes at a distance of about 10 meters (I would definitely be shooting before the enemy got that close). The area of the “whites of the eye” is difficult to estimate accurately, but let’s approximate it as one square centimeter. What solid angle is subtended by the whites of the eye at a distance of ten meters? It would be about (0.01 m)²/(10 m)², or one microsteradian. This is not bad for an estimate of our visual acuity. We may sometimes do a little better than this, but probably not during battle.

Friday, April 3, 2015

On Writing Well

Oakland University, where I work, has an ADVANCE grant from the National Science Foundation, with the goal of increasing the representation and advancement of women in academic science and engineering careers. I am part of the leadership team for the Women in Science and Engineering at Oakland University (WISE@OU) Program, and one of my roles is to help mentor young faculty. Last Tuesday I led a WISE@OU workshop on “Best Practices in Scientific Writing.” The event was videotaped, and you can watch it here. I’m the bald guy who is standing and wearing the red shirt.
On Writing Well, by William Zinsser.

A list of writing resources was provided to all workshop participants (see below). It begins “For the most benefit in the least time with no cost, work through the Duke online Scientific Writing Resource, then read Part 1 (about 50 pages) of Zinsser’s book On Writing Well (available free online), and finally go through the online material for the Stanford Writing in the Sciences class.” If you don’t have enough time for even these three steps, then just read Zinsser, which is a delight.
Russ Hobbie and I try to write well in the 4th edition of Intermediate Physics for Medicine and Biology. You can decide if we succeed. Many readers of this blog are from outside the United States (I can tell from the “likes” on the book’s Facebook page). As I noted in the workshop, it is not fair that scientists from other countries must write science in a language other than their native tongue. Yet, most science is published in English, and scientists need to be able to write it well. So, my advice is to do whatever it takes to become a decent writer.

When I was in graduate school, my dissertation advisor John Wikswo gave me a copy of The Complete Plain Words, a wonderful book about writing originally published by Sir Ernest Gowers. Read it for free online. The version Wikswo loaned me was a later edition coauthored by Bruce Fraser. (You always should be concerned when a perfectly good book picks up a coauthor in later editions). This spring, Gowers’ great-granddaughter Rebecca Gowers is publishing a new edition of Plain Words. I can’t wait. Another oldie but goodie is Strunk and White’s The Elements of Style. The original, by William Strunk, is available online. (The second author of “Strunk and White” is E. B. White who wrote Charlotte’s Web; I vividly remember Mrs. Sheets reading Charlotte’s Web aloud to my third grade class at Northside Elementary School.) If you have time for only three words about writing, let them be Strunk’s admonition “omit needless words.”

I’ve come up with my own Three Laws of Writing Science, patterned after Isaac Asimov’s Three Laws of Robotics (regular readers of this blog know that Asimov influenced me greatly when I was in high school).
  • First Law: What you write must be scientifically correct. 
  • Second Law: Write clearly, except when clarity would put you in conflict with the First Law.
  • Third Law: Write concisely, except when conciseness would put you in conflict with the First or Second Laws.
Writing is easier when you enjoy doing it, and I always have. I once became secretary of the Parent-Teacher Association at my daughters’ elementary school because that job allowed me to write the minutes of the PTA meetings. If you don’t enjoy writing, take heart. You don’t need to be a great writer to succeed in science. Slipping into NSF-speak, if you can improve from “poor” or “fair” to “good” you will get almost the full benefit. Go from “good” to “very good” or “excellent” only if you like to write.

Best Practices in Scientific Writing

Below is a list of resources about scientific writing. For the most benefit in the least time with no cost, work through the Duke online Scientific Writing Resource, then read Part 1 (about 50 pages) of Zinsser’s book On Writing Well (available free online), and finally go through the online material for the Stanford Writing in the Sciences class.

Books about writing:

• Gowers R, Gowers E. 2014. Plain Words
• Gray-Grant D. 2008. 8 ½ Steps to Writing Better, Faster
• Pinker, S. 2014. The Sense of Style
• Silvia PJ. 2007. How to Write a Lot 
• Strunk W, White EB. 1979. The Elements of Style
• Zinsser W. 1976. On Writing Well (free online: archive.org/details/OnWritingWell)

American Scientist article “The Science of Scientific Writing”

Video of Steven Pinker discussing good writing

A free online course from Stanford about Writing in the Sciences

Kamat, Buriak, Schatz, Weiss. 2014. “Mastering the art of scientific publication: Twenty papers with 20/20 vision on publishing,” J. Phys. Chem. Lett., 5:3519–3521.

Kotz, Cals, Tugwell, Knottnerus. 2013. “Introducing a new series on effective writing and publishing of scientific papers,” J. Clinical Epidemiology, 66:359–360.

How to Get Published. A discussion with Mike Sevilla and myself, moderated by George Corser, about writing and publishing scientific papers, hosted by the OU graduate student group GradConnection.

A free online webinar debating the use of the active or passive voice

Duke University’s online Scientific Writing Resource, open to all.

Nonnative English speakers (and the rest of us too) should see the website Scientific English as a Foreign Language.

Friday, March 27, 2015

Projections and Back Projections

Tomography is one of the most important contributions of mathematics to medicine. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe two methods to solve the problem of tomographic reconstruction.
The reconstruction problem can be stated as follows. A function f(x,y) exists in two dimensions. Measurements are made that give projections: the integrals of f(x,y) along various lines as a function of displacement perpendicular to each line…F(θ,x'), where x' is the distance along the axis at angle θ with the x axis. The problem is to reconstruct f(x,y) from the set of functions F(θ,x'). Several different techniques can be used… We will consider two of these techniques: reconstruction by Fourier transform […] and filtered back projection…. The projection at angle θ is integrated along the line y':
The definition of a projection, used in tomography.
[where x = x' cosθ − y' sinθ and y = x' sinθ + y' cosθ]… The definition of the back projection is
The definition of a back projection, used in tomography.
where x' is determined for each projection using Eq. 12.27 [x' = x cosθ + y sinθ].
In IPMB, Homework Problem 32 asks you to take the function (the “object”)
A mathematical function in Homework Problem 32 in Intermediate Physics for Medicine and Biology that serves as the object in an analytical example of tomography,
and calculate the projection using Eq. 12.29, and then calculate the back projection using Eq. 12.30. The object and the back projection are different. The moral of the story is that you cannot solve the tomography problem by back-projection alone. Before you back-project, you must filter.*

I like having two homework problems that illustrate the same point, one that I can do in class and another that I can assign to the students. IPMB contains only one example of projecting and then back-projecting, but recently I have found another. So, dear reader, here is a new homework problem; do this one in class, and then assign Problem 32 as homework.
Problem 32 ½ Consider the object f(x,y) = √(a² − x² − y²)/a for √(x² + y²) less than a, and 0 otherwise.
(a) Plot f(x,0) vs. x 
(b) Calculate the projection F(θ,x'). Plot F(0,x') vs. x'.
(c) Use the projection from part (b) to calculate the back projection fb(x,y). Plot fb(x,0) vs. x.
(d) Compare the object and the back projection. Explain qualitatively how they differ.
The nice thing about this function
f(x,y) = √(a² − x² − y²)/a
(as well as the function in Problem 32) is that f(x,y) does not depend on direction, so F(θ,x') is independent of θ; you can make your life easier and solve it for θ = 0. Similarly, you can calculate the back projection along any line through the origin, such as y = 0. I won’t solve Problem 32½ here in detail, but let me outline the solution.

Below is a plot of f(x,0) versus x
A plot of the object function, in an example of tomography.
To take the projection in part (b) use θ = 0, so x' = x and y' = y. If |x| is greater than a, you integrate over a function that is always zero, so F(0,x') = 0. If |x| is less than a, you must do more work. The limits of the integral over y become −√(a² − x²) to +√(a² − x²). The integral is not too difficult (it’s in my mathematical handbook), and you get F(0,x) = π(a² − x²)/(2a). Because the projection is independent of θ, you can generalize to
A mathematical equation for the projection, in an example of tomography.
The plot, an inverted parabola, is
A plot of the projection, in an example of tomography.
In part (c) you need to find the back-projection. I suggest calculating it for the line y = 0. Once you have it, you can find it for any point by replacing x by √(x² + y²). The back-projection for |x| less than a is easy. The integral in Eq. 12.30 gives another inverted parabola. For |x| greater than a, the calculation is more complicated because some angles give zero and some don’t. A little geometry will convince you that the integral should range from cos⁻¹(a/x) to π − cos⁻¹(a/x). Because the function is even around π/2, you can make your life easier by multiplying by two and integrating from cos⁻¹(a/x) to π/2. The only way I know to show how you get cos⁻¹(a/x) is to draw a picture of a line through the point x greater than a, y = 0 that is tangent to the circle x² + y² = a², and then do some trigonometry. When you then evaluate the integral you get
A mathematical equation for the back projection, in an example of tomography.
A plot of this complicated looking function is
A plot of the back projection, in an example of tomography.
To answer part (d), compare your plots in (a) and (c). The object in (a) is confined entirely inside the circle x² + y² = a², but the back-projection in (c) spreads over all space. One could say the back-projection produced a smeared-out version of the object. Why are they not the same? We didn’t filter before we back projected.

If you really want to have fun, show that the limit of the back-projection goes to zero as x goes to infinity. This is surprisingly difficult—that last term doesn’t look like it’ll go to zero—and you need to keep more than one term in your Taylor series to make it work.
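Here is a rough numerical check of this outline (my own sketch, not from IPMB). I assume the back projection in Eq. 12.30 carries a 1/π normalization; if your convention omits it, the back projection below is simply scaled by π.

```python
import math

a = 1.0

def f(x, y):
    """The object of Problem 32 1/2."""
    r2 = x*x + y*y
    return math.sqrt(a*a - r2) / a if r2 < a*a else 0.0

def projection(xp, n=500):
    """F(theta, x') by midpoint integration along y'; by symmetry the result
    is independent of theta, so integrate at theta = 0."""
    if abs(xp) >= a:
        return 0.0
    half = math.sqrt(a*a - xp*xp)
    dy = 2 * half / n
    return sum(f(xp, -half + (i + 0.5) * dy) for i in range(n)) * dy

def back_projection(x, y, n=500):
    """f_b(x,y) = (1/pi) times the integral of F(theta, x') over theta from 0 to pi."""
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        total += projection(x * math.cos(theta) + y * math.sin(theta)) * dtheta
    return total / math.pi

# Compare the numerical projection with pi*(a^2 - x^2)/(2a), and note how the
# back projection smears the object beyond the circle of radius a.
for x in [0.0, 0.5, 1.0, 1.5]:
    print(x, projection(x), math.pi * max(a*a - x*x, 0.0) / (2*a),
          back_projection(x, 0.0))
```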

The back projection shown above looks similar to the back projection shown in Fig. 12.23 of IPMB. My original goal was to calculate the back projection in Fig. 12.23 exactly, but I got stuck trying to evaluate the integral.

I sometimes ask myself: why do I assign these analytical examples of projections and back-projections? Anyone solving the tomography problem in the clinic will certainly use a computer. Is anything gained by doing analytical calculations? Yes. The goal in doing these problems is to gain insight. A computer program is a black box: you put projections in and out comes an image, but you don’t know what happens in between. Analytical calculations force you to work through each step. Please don’t skip the plots in parts (a) and (c), and the comparison of the plots in part (d), otherwise you defeat the purpose; all you have done is an exercise in calculus and you learn nothing about tomography.

I wish I had come up with this example six months ago, so Russ and I could have included it in the 5th edition of IPMB. But it’s too late, as the page proofs have already been corrected and the book will be printed soon. This new homework problem will have to wait for the 6th edition!


*Incidentally, Prob. 12.32 has a typo in the second line: “|x|” should really be “√(x² + y²)”.