Friday, October 11, 2019

A Blog as Ancillary Material for a Physics Textbook

Today I’m attending the Fall 2019 Meeting of the Ohio-Region Section of the American Physical Society and the Michigan Section of the American Association of Physics Teachers, held at Kettering University in Flint, Michigan. Flint is just 45 minutes north of Oakland University, so this is a local meeting for me.

At the meeting I’ll present a poster titled “A Blog as Ancillary Material for a Physics Textbook.” As you can probably guess, the blog I’m referring to is the one you’re reading now. My poster is shown below.

My poster for the Fall 2019 Meeting of the Ohio-Region Section of the American Physical Society
and the Michigan Section of the American Association of Physics Teachers
The poster begins with my meeting abstract.
Nowadays, textbooks come with many ancillary materials: solution manuals, student guides, etc. A unique ancillary feature is a blog. A blog allows you to keep your book up-to-date, to expand on ideas covered only briefly in your book, to point to other useful learning materials such as websites, articles and other books, and to interact directly with students using your book.
Then I address the question “Why write a blog associated with a textbook?” My reasons are to
  • Keep your book up-to-date. 
  • Present background material. 
  • Offer links to related websites, videos, and other books. 
  • Try out new material for future editions. 
  • Provide a direct line of communication between you and your readers. 
  • Reach out to students from other states and countries who are interested in your topic but don’t have your book (yet). 
  • Have fun. 
  • Increase book sales!
Next I discuss the blog for IPMB.
I am coauthor, with Russell Hobbie, of the textbook Intermediate Physics for Medicine and Biology (5th edition, Springer, 2015). The blog began in 2007. I post once a week, every Friday morning, with over 600 posts so far. I also share the weekly posts on the book’s Facebook page. I use the Blogger software, which is free and easy to learn.
After that, I describe my different types of posts.
  • Useful for Instructors: Posts that will be especially helpful to faculty teaching from your book, such as sample syllabi, information about prerequisites, and links. 
  • Book Reviews: Reviews of books that are related to mine. 
  • Obituaries: Stories of famous scientists who have died recently. 
  • New Homework Problems: I often post new homework problems that instructors can use in class or on exams. 
  • My Own Research: Stories from my own research, to serve as examples of how to apply the material in the textbook. 
  • Lots of Math: Some of my posts are very mathematical, and I warn the reader. 
  • Personal Favorites: About 10% of my posts I list as personal favorites. These are particularly interesting, especially well written, or sometimes autobiographical.
Finally, I provide a sample post. I chose one of my favorites about craps, published on August 10, 2018.

A big thank you to my graduate student Dilmini Wijesinghe, who helped me design the poster. She’ll be at the meeting too, presenting another poster about biomechanics and mechanotransduction. But that’s another story.

Friday, October 4, 2019

Spiral MRI

In Chapter 18 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss a type of magnetic resonance imaging called echo-planar imaging.
In EPI the echoes are not created using π pulses. Instead, they are created by dephasing the spins at different positions along the x axis using a Gx gradient, and then reversing that gradient to rephase the spins, as shown in Fig. 18.32. Whenever the integral of Gx(t) is zero, the spins are all in phase and the signal appears. A large negative Gy pulse sets the initial value of ky to be negative; small positive Gy pulses (“blips”) then increase the value of ky for each successive kx readout. Echo-planar imaging requires strong gradients—at least five times those for normal studies—so that the data can be acquired quickly. Moreover, the rise and fall-times of these pulses are short, which induces large voltages in the coils. Eddy currents are also induced in the patient, and it is necessary to keep these below the threshold for neural activation. These problems can be reduced by using sinusoidally-varying gradient currents. The engineering problems are discussed in Schmitt et al. (1998); in Vlaardingerbroek and den Boer (2004); and in Bernstein et al. (2004).
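As a rough numerical sketch of this raster scan through k-space, the Python snippet below accumulates kx and ky from a blipped gradient waveform: each readout lobe sweeps kx across a line, each Gy blip steps ky up by one line, and reversing the sign of Gx reverses the sweep direction. The gradient amplitude, readout time, and ky step are made-up illustration values, not parameters of any actual scanner.

```python
import numpy as np

# Hypothetical blipped echo-planar parameters (illustration only).
gamma_bar = 42.58e6   # gyromagnetic ratio / 2*pi for protons, Hz/T
Gx = 25e-3            # readout gradient amplitude, T/m
T_read = 0.5e-3       # duration of each readout lobe, s
n_lines = 4           # number of ky lines to acquire

kx = 0.0
ky = -n_lines / 2 * 100.0   # a large negative Gy pulse sets the initial ky (1/m)
trajectory = [(kx, ky)]
sign = +1
for line in range(n_lines):
    # Readout lobe: kx advances by gamma_bar times the gradient area.
    kx += sign * gamma_bar * Gx * T_read
    trajectory.append((kx, ky))
    # Gy "blip": ky steps up by one line (blip area chosen to give 100 1/m).
    ky += 100.0
    trajectory.append((kx, ky))
    sign = -sign   # reverse Gx so the next line is read in the opposite direction

for kx_i, ky_i in trajectory:
    print(f"kx = {kx_i:8.1f} 1/m   ky = {ky_i:7.1f} 1/m")
```

After every pair of readout lobes the integral of Gx(t) returns to zero, which is when the spins rephase and the echo appears.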
Echo-Planar Imaging: Theory, Technique and Application, edited by Schmitt, Stehling, and Turner, superimposed on Intermediate Physics for Medicine and Biology.
To learn more about “sinusoidally-varying gradient currents,” I consulted the first of the three references, Echo-Planar Imaging: Theory, Technique and Application, edited by Franz Schmitt, Michael Stehling, and Robert Turner (Springer, 1998). In his chapter on the “Theory of Echo-Planar Imaging,” Mark Cohen discusses a spiral echo-planar pulse sequence in which the gradient fields have the unusual form Gx = Go t sin(ωt) and Gy = Go t cos(ωt).

Below I show the pulse sequence, which you can compare with the echo-planar imaging sequence in Fig. 18.32 of IPMB if you have the book by your side (don’t you always keep IPMB by your side?). The top two curves are the conventional slice selection sequence: a gradient Gz (red) in the z direction is applied during a radiofrequency π/2 pulse Bx (black), which rotates the spins into the x-y plane. The unconventional readout gradient Gx (blue) varies as an increasing sine wave. It produces a gradient echo at times corresponding approximately to the extrema of the Gx curve (excluding the first small positive peak). The phase encoding gradient Gy (green), an increasing cosine wave, is approximately zero at the echo times, but will shift the phase and therefore impact the amplitude of the echo.
A pulse sequence for spiral echo-planar imaging, based on Fig. 14 of “Theory of Echo-Planar Imaging,” by Mark Cohen in Echo-Planar Imaging: Theory, Technique and Application, edited by Schmitt, Stehling, and Turner.

If you look at the output in terms of spatial frequencies (kx, ky), you find that the echoes correspond to points along an Archimedean spiral.

The spiral echo-planar imaging technique as viewed in frequency space, based on Fig. 13 of “Theory of Echo-Planar Imaging,” by Mark Cohen, in Echo-Planar Imaging: Theory, Technique and Application, edited by Schmitt, Stehling, and Turner.
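You can check the Archimedean-spiral claim by numerically integrating the gradients, since k(t) is proportional to the running integral of G(t). For the growing sine and cosine gradients, the radius of k(t) turns out to grow linearly with the winding angle, which is exactly an Archimedean spiral. The values of G0 and ω below are arbitrary illustration choices, not numbers from Cohen’s chapter.

```python
import numpy as np

# Integrate Gx = G0*t*sin(wt), Gy = G0*t*cos(wt) to get the k-space path.
gamma_bar = 42.58e6        # Hz/T, proton gyromagnetic ratio / 2*pi
G0 = 1.0e-3                # T/(m*s), hypothetical gradient ramp rate
w = 2 * np.pi * 1000.0     # rad/s, hypothetical angular frequency

t = np.linspace(0.0, 20e-3, 20001)
dt = t[1] - t[0]
Gx = G0 * t * np.sin(w * t)
Gy = G0 * t * np.cos(w * t)

def cumtrapz(y, dt):
    """Cumulative trapezoid-rule integral, starting from zero."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2.0) * dt))

kx = gamma_bar * cumtrapz(Gx, dt)
ky = gamma_bar * cumtrapz(Gy, dt)

# For an Archimedean spiral the radius grows linearly; here r ~ gamma_bar*G0*t/w.
r = np.hypot(kx, ky)
print(f"r at t = 10 ms: {r[10000]:.4f} 1/m (linear prediction {gamma_bar*G0*t[10000]/w:.4f})")
print(f"r at t = 20 ms: {r[-1]:.4f} 1/m (linear prediction {gamma_bar*G0*t[-1]/w:.4f})")
```

Doubling the time doubles the radius while the angle advances uniformly, which is the defining property of the Archimedean spiral in the figure.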

Spiral echo-planar imaging has some drawbacks. Data in k-space is not collected over a uniform array, so you need to interpolate onto a square grid before performing a numerical two-dimensional inverse Fourier transform to produce the image. Moreover, you get blurring from chemical shift and susceptibility artifacts. The good news is that you eliminate the rapid turning on and off of gradient pulses, which reduces eddy currents that can cause their own image distortions and possibly neural stimulation. So, spiral imaging has advantages, but the pulse sequence sure looks weird.
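As an illustration of that regridding step, here is a crude bin-averaging version in Python (real reconstructions use more sophisticated convolution gridding; the spiral geometry and the smooth test “signal” below are made up for illustration). Scattered spiral samples are averaged into the nearest cell of a Cartesian grid, the kind of uniform array a two-dimensional inverse FFT needs.

```python
import numpy as np

# Sample a smooth test signal along an Archimedean spiral in k-space.
theta = np.linspace(0.0, 60 * np.pi, 8000)
kx = 0.03 * theta * np.cos(theta)           # spiral sample locations
ky = 0.03 * theta * np.sin(theta)
signal = np.exp(-(kx**2 + ky**2) / 2.0)     # smooth stand-in for measured data

# Average all samples falling in each cell of a 32x32 Cartesian grid.
nx = 32
edges = np.linspace(-1.5, 1.5, nx + 1)
centers = (edges[:-1] + edges[1:]) / 2.0
ix = np.digitize(kx, edges) - 1
iy = np.digitize(ky, edges) - 1
inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < nx)

acc = np.zeros((nx, nx))
cnt = np.zeros((nx, nx))
np.add.at(acc, (iy[inside], ix[inside]), signal[inside])
np.add.at(cnt, (iy[inside], ix[inside]), 1.0)
gridded = np.where(cnt > 0, acc / np.maximum(cnt, 1.0), np.nan)

# Compare the filled cells against the true signal at the cell centers.
CX, CY = np.meshgrid(centers, centers)
true = np.exp(-(CX**2 + CY**2) / 2.0)
err = np.nanmax(np.abs(gridded - true))
filled = int((cnt > 0).sum())
print(f"{filled} of {nx*nx} cells filled; worst filled-cell error {err:.3f}")
```

Cells that fall between spiral turns stay empty, which is why practical reconstructions interpolate (and density-compensate) rather than just bin.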

Echo-planar imaging in general, and spiral imaging in particular, are very fast. In his chapter on “Spiral Echo-Planar Imaging,” Craig Meyer discusses his philosophy about using EPI.
Spiral scanning is a promising alternative to traditional EPI. The properties of spiral scanning stem from the circularly symmetric nature of the technique. Among the attractive properties of spiral scanning are its efficiency and its good behavior in the presence of flowing material; the most unattractive property is uncorrected inhomogeneity leads to image blurring. Spiral image reconstruction can be performed rapidly using gridding, and there are a number of techniques for compensating for inhomogeneity. There are good techniques for generating efficient spiral gradient waveforms. Among the growing number of applications of spiral scanning are cardiac imaging, angiography, abdominal tumor imaging, functional imaging, and fluoroscopy.

Spiral scanning is a promising technique, but at the present it is still not in routine clinical use. There are many theoretical reasons why spiral scanning may be advantageous for a number of clinical problems, and initial volunteer and clinical studies have yielded very promising results for a number of applications. Still, until spiral scanning is established in routine clinical use, some caution is warranted about proclaiming it to be the answer for any particular question.

Friday, September 27, 2019

The Cauchy Distribution

In an appendix of Intermediate Physics for Medicine and Biology, Russ Hobbie and I analyze the Gaussian probability distribution
$$ p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\bar{x})^2/2\sigma^2}. $$
It has the classic bell shape, centered at the mean x̄ with a width determined by the standard deviation σ.

Other distributions have a similar shape. One example is the Cauchy distribution
$$ p(x) = \frac{1}{\pi}\, \frac{\gamma}{(x-\bar{x})^2 + \gamma^2}, $$
where the distribution is centered at x̄ and has a half-width at half-maximum γ. I initially thought the Cauchy distribution would be as well behaved as any other probability distribution, but it’s not. It has no mean and no standard deviation!

Rather than thinking abstractly about this issue, I prefer to calculate and watch how things fall apart. So, I wrote a simple computer program to generate N random samples using either the Gaussian or the Cauchy distribution. Below is a histogram for each case (N = 1000; Gaussian, x = 0, σ = 1; Cauchy, x = 0, γ = 1).
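A minimal Python version of that kind of program might look like the following (my sketch, not the program used to make the figures here; any particular run will give different specific numbers from the ones quoted in the post).

```python
import numpy as np

# Draw N samples from a Gaussian (mean 0, sigma 1) and a Cauchy
# (center 0, gamma 1) and compare their behavior.
rng = np.random.default_rng(7)
N = 1000

gauss = rng.normal(loc=0.0, scale=1.0, size=N)
cauchy = rng.standard_cauchy(size=N)

print(f"Gaussian: mean = {gauss.mean():7.3f}, largest |sample| = {np.abs(gauss).max():8.1f}")
print(f"Cauchy:   mean = {cauchy.mean():7.3f}, largest |sample| = {np.abs(cauchy).max():8.1f}")
print(f"Cauchy samples with |x| > 100: {(np.abs(cauchy) > 100).sum()}")
```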

Histograms for 1000 random samples obtained using the Cauchy (left) and Gaussian (right) probability distribution.

Those samples out on the wings of the Cauchy distribution are what screw things up. The probability falls off so slowly that there is a significant chance of having a random sample that is huge. The histograms shown above are plotted from −20 to 20, but one of the thousand Cauchy samples was about −2400. I’d need to plot the histogram over a range more than one hundred times wider to capture that bin in the histogram. Seven of the samples had a magnitude over one hundred. By contrast, the largest sample from the Gaussian was about 4.6.

What do these few giant samples do to the mean? The average of the thousand samples shown above obtained from the Cauchy distribution is −1.28, which is bigger than the half-width at half-max. The average of the thousand samples obtained from the Gaussian distribution is −0.021, which is much smaller than the standard deviation.

Even more interesting is how the mean varies with N. I tried a bunch of cases, summarized in the figure below.
A plot of the mean versus sample size, for data drawn from the Gaussian and Cauchy probability distributions.

There’s a lot of scatter, but the means for the Gaussian data appear to get smaller (closer to the expected value of zero) as N gets larger. The red line is not a fit, but merely drawn by eye. I included it to show how the means fall off with N. It has a slope of −½, implying that the means decay roughly as 1/√N. In contrast, the means for the Cauchy data are large (on the order of one) and don’t fall off with N. No matter how many samples you collect, your mean doesn’t approach the expected value of zero. Some oddball sample comes along and skews the average.

If you calculate the standard deviations for these cases, the problem is even worse. For data generated using the Cauchy distribution, the standard deviation grows with N. For N over a million, the standard deviation is usually over a thousand (remember, the half-width at half-max is one), and for my N = 5,000,000 case the standard deviation was over 600,000. Oddballs dominate the standard deviation.
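The same experiment can be extended to watch the sample standard deviation grow with N, and to check the rate of oddballs against the Cauchy tail probability (again a sketch; the Cauchy column varies wildly from seed to seed).

```python
import numpy as np

# Sample standard deviation versus N for the two distributions.
rng = np.random.default_rng(1)
results = []
for N in (10**3, 10**4, 10**5, 10**6):
    g_std = rng.normal(size=N).std()
    c_std = rng.standard_cauchy(size=N).std()
    results.append((N, g_std, c_std))
    print(f"N = {N:8d}:  Gaussian std = {g_std:6.3f},  Cauchy 'std' = {c_std:12.1f}")

# The slow 1/x^2 tail predicts the oddballs: for a unit-width Cauchy,
# P(|x| > 100) = 1 - (2/pi) arctan(100), about 0.0064.
p_tail = 1.0 - (2.0 / np.pi) * np.arctan(100.0)
print(f"P(|x| > 100) = {p_tail:.5f}, or about {1000 * p_tail:.1f} per 1000 samples")
```

That tail probability, roughly six per thousand, is consistent with finding seven samples of magnitude over one hundred in a run of a thousand.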

I’m sorry if my seat-of-the-pants experimental approach to analyzing the Cauchy distribution seems simplistic, but for me it provides insight. The Cauchy distribution is weird, and I’m glad Russ and I didn’t include an appendix about it in Intermediate Physics for Medicine and Biology.

Friday, September 20, 2019

Happy Birthday Professor Fung

Yuan-Cheng Fung celebrated his 100th birthday last Sunday.

Biomechanics: Mechanical Properties of Living Tissues, by Y. C. Fung, superimposed on Intermediate Physics for Medicine and Biology.
When Russ Hobbie and I needed to cite a general biomechanics textbook in Intermediate Physics for Medicine and Biology, we chose Biomechanics: Mechanical Properties of Living Tissues by “Bert” Fung.
Whenever a force acts on an object, it undergoes a change of shape or deformation. Often these deformations can be ignored… In other cases, such as the contraction of a muscle, the expansion of the lungs, or the propagation of a sound wave, the deformation is central to the problem and must be considered. This book will not develop the properties of deformable bodies extensively; nevertheless, deformable body mechanics is important in many areas of biology (Fung 1993).
According to Google Scholar, Biomechanics has over 10,000 citations, implying it’s a very influential book. In his introduction, Fung writes
Biomechanics seeks to understand the mechanics of living systems. It is an ancient subject and covers a very wide territory. In this book we concentrate on physiology and medical applications, which constitute the majority of recent work in the field. The motivation for research in this area comes from the realization that physiology can no more be understood without biomechanics than an airplane can without aerodynamics. For an airplane, mechanics enables us to design its structure and predict its performance. For an organ, biomechanics helps us to understand its normal function, predict changes due to alterations, and propose methods of artificial intervention. Thus diagnosis, surgery, and prosthesis are closely associated with biomechanics.
A First Course in Continuum Mechanics, by Y. C. Fung, superimposed on Intermediate Physics for Medicine and Biology.
Another of Fung’s books that I like is A First Course in Continuum Mechanics. He states his goal in its first sentence. It’s a similar goal to that of IPMB.
Our objective is to learn how to formulate problems in mechanics, and how to reduce vague questions and ideas to precise mathematical statements, as well as to cultivate a habit of questioning, analyzing, designing, and inventing in engineering and science.
A special issue of the Journal of Biomechanical Engineering is dedicated to Fung’s birthday celebration. The editors write
Dr. Fung has been a singular pioneer in the field of Biomechanics, establishing multiple biomechanical theories and paradigms in various organ systems, including the heart, blood vessels, blood cells, and lung... He has mentored and trained many researchers in the biomechanics and bioengineering fields. His books … have become the classic biomechanics textbooks for students and researchers around the world. Dr. Fung is a member of all three U.S. National Academies—National Academy of Sciences, National Academy of Engineering, and National Academy of Medicine. He is also a member of the Chinese Academy of Sciences and a member of Academia Sinica. He has received many awards including the Timoshenko medal, the Russ Prize, and the National Medal of Science.
Fung earned his bachelor’s and master's degrees in aeronautics from the Central University of China in 1941 and 1943. College must have been difficult in China during the Second World War. I bet he has stories to tell. After the war he won a scholarship to come to the United States and study at Caltech, where he earned his PhD in 1948.

Fung joined the faculty at Caltech and remained there for nearly twenty years. In the 1950s, he became interested in biomechanics when his mother was suffering from glaucoma. In 1966, Fung moved to the University of California at San Diego, where he established their bioengineering program. He is known as the “Father of Modern Biomechanics.”

Happy birthday Professor Fung.

Yuan-Cheng Fung: 2000 National Medal of Science

2007 Russ Prize video

Friday, September 13, 2019

Intermediate Physics for Medicine and Biology has a New Website

A New Website

This summer I received an email from University Technology Services saying that faculty websites, like the one I maintain about Intermediate Physics for Medicine and Biology, would no longer be supported at Oakland University. In other words, IPMB needed a new online home. So today I announce our new website. If you try to access the old website listed in IPMB, it’ll redirect you to the new site, but I don’t know how long that will last.

What can you find at our new website? Lots of stuff. If you’re looking for my website, it’s changed too.

Class Videos

This semester I’m teaching PHY 3250, Biological Physics. I am recording each class, and I’ll upload the videos to YouTube. Anyone can watch the lectures for free, as if it were an online class. I still use the blackboard, and sometimes it’s difficult to read in the video. I hope you can follow most of the lectures.
PHY 3250 class on September 6, 2019, covering biomechanics.


Useful for Instructors

If you scroll down to the box on the right of the blog, you will find a list of labels. Click the one called “Useful for Instructors” and you can find several posts that are….er….useful for instructors. If you’re teaching from IPMB, you might find these posts particularly helpful.

Google Scholar

Below is a screenshot of IPMB’s Google Scholar citation statistics. We’ve averaged 26 citations a year over the last ten years, or one every two weeks. We thank all of you who’ve referenced IPMB. We’re delighted you found it important enough to cite.

A screenshot of the Google Scholar citation data for Intermediate Physics for Medicine and Biology, taken September 1, 2019.

Friday, September 6, 2019

The Linear No-Threshold Model of Radiation Risk

Certain topics discussed in Intermediate Physics for Medicine and Biology always fascinate me. One is the linear no-threshold model. In Section 16.12, Russ Hobbie and I write
In dealing with radiation to the population at large, or to populations of radiation workers, the policy of the various regulatory agencies has been to adopt the linear no-threshold (LNT) model to extrapolate from what is known about the excess risk of cancer at moderately high doses and high dose rates, to low doses, including those below natural background.
Possible responses to radiation are summarized in Figure 16.51 of IPMB. Scientists continue to debate the LNT model because reliable data (shown by the two data points with their error bars in the upper right) do not extend down to low doses.

Figure 16.51 from Intermediate Physics for Medicine and Biology, showing possible responses to various doses. The two lowest-dose measurements are shown with their error bars.
The linear no-threshold assumption is debated in a point/counterpoint article in the August issue of Medical Physics (“The Eventual Rejection of the Linear No-Threshold Theory Will Lead to a Drastic Reduction in the Demand for Diagnostic Medical Physics Services,” Volume 46, Pages 3325-3328). I have discussed before how useful point/counterpoint articles are for teaching medical physics. They provide a glimpse into the controversies that medical physicists grapple with every day. The title of each point/counterpoint article is phrased as a proposition. In this case, Aaron Jones argues for the proposition and Michael O’Connor argues against it. The moderator Habib Zaidi frames the issue in his overview
Controversies about the linear no‐threshold (LNT) hypothesis have been around since the early development of basic concepts in radiation protection and publication of guidelines by professional societies. Historically, this model was conceived over 70 yr ago and is still widely adopted by most of the scientific community and national and international advisory bodies (e.g., International Commission on Radiological Protection, National Council on Radiation Protection and Measurements) for assessing risk from exposure to low‐dose ionizing radiation. The LNT model is currently employed to provide cancer risk estimates subsequent to low level exposures to ionizing radiation despite being criticized as causing unwarranted public fear of all low-dose radiation exposures and costly implementation of unwarranted safety measures. Indeed, linearly extrapolated risk estimates remain hypothetical and have never been rigorously quantified by evidence-based studies. As such, is the LNT model legitimate and its use by regulatory and advisory bodies justified? What would be the impact on our profession if this hypothesis were to be rejected by the scientific community? Would this result in drastic reduction in the demand for diagnostic medical physics services? These questions are addressed in this month’s Point/Counterpoint debate.
Both protagonists give little support to the linear no-threshold hypothesis; they write as if its rejection is inevitable. What is the threshold dose below which risk is negligible? This question is not resolved definitively, but 100 mSv is the number both authors mention.

The linear no-threshold model has little impact for individuals, but is critical for estimating public health risks—such as using backscatter x-ray detectors in airports—when millions of people are exposed to minuscule doses. I’m no expert on this topic so I can’t comment with much authority, but I’ve always been skeptical of the linear no-threshold model.
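A toy calculation shows why the distinction matters at the population scale. Every number below (risk coefficient, threshold, individual dose, population size) is a hypothetical placeholder chosen only to illustrate how the two models diverge; none is a regulatory value.

```python
# Compare the linear no-threshold (LNT) model with a threshold model for a
# large population in which each person receives a tiny dose.
alpha = 0.05        # assumed excess cancer risk per sievert (illustrative)
threshold = 0.1     # assumed threshold dose in Sv (the 100 mSv figure debated above)
population = 1.0e7  # number of people exposed
dose = 1.0e-4       # 0.1 mSv per person, a minuscule individual dose

lnt_cases = alpha * dose * population                        # risk linear down to zero dose
thr_cases = alpha * max(dose - threshold, 0.0) * population  # no risk below the threshold

print(f"LNT model predicts       {lnt_cases:.0f} excess cancers")
print(f"Threshold model predicts {thr_cases:.0f} excess cancers")
```

For an individual, a linear risk of a tiny dose is negligible either way; summed over ten million people, the LNT model predicts dozens of cases while the threshold model predicts none.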

Much of this point/counterpoint article deals with the impact of the linear no-threshold model on the medical physics job market. I agree with O’Connor that “[The title of the point/counterpoint article] is an interesting proposition as it implies that medical physicists care only about their field and not about whether or not a scientific concept (the LNT) is valid or not,” except “interesting” is not the word I would have chosen. I am skeptical that resolution of the LNT controversy will have significant consequences for medical physics employment. After we discuss a point/counterpoint article in my PHY 3260 (Medical Physics) class, I insist that students vote either “for” or “against” the proposition. In this case, I agree with O’Connor and vote against it.

I will leave you with O’Connor’s concluding speculation about how rejecting the linear no-threshold model will affect both the population at large and the future medical physics job market.
In our new enlightened world 30 yr from now, LNT theory has long been discarded, the public are now educated as to the benefits of low doses of ionizing radiation and there is no longer a race to push radiation doses lower and lower in x‐ray imaging. On the contrary, with acceptance of radiation hormesis, a new industry has arisen that offers the public an annual booster dose of radiation every year, particularly if they live in low levels of natural background radiation. How will this booster dose be administered? For those with the means, it might mean an annual trip to the Rocky Mountains. For others it could mean a trip to the nearest clinic for a treatment session with ionizing radiation. Who will oversee the equipment designed to deliver this radiation, to insure that the correct dose is delivered? The medical physicist!

Friday, August 30, 2019

The Book of Why

The Book of Why: The New Science of Cause and Effect, by Judea Pearl and Dana MacKenzie, superimposed on Intermediate Physics for Medicine and Biology.
At Russ Hobbie’s suggestion, I read The Book of Why, by Judea Pearl. This book presents a new way of analyzing data, using causal inferences in addition to more traditional, hypothesis-free statistical methods. In his introduction, Pearl writes
If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do. I hope that the new science of causal inference will enable us to better understand how we do it, because there is no better way to understand ourselves than by emulating ourselves. In the age of computers, this new understanding also brings with it the prospect of amplifying our innate abilities so that we can make better sense of data, be it big or small.
I had a hard time with this book, mainly because I’m not a fan of statistics. Rather than asking “why” questions, I usually ask “what if” questions. In other words, I build mathematical models and then analyze them and make predictions. Intermediate Physics for Medicine and Biology has a similar approach. For instance, what if drift and diffusion both act in a pore; which will dominate under what circumstances (Section 4.12 in IPMB)? What if an ultrasonic wave impinges on an interface between tissues having different acoustic impedances; what fraction of the energy in the wave is reflected (Section 13.3)? What if you divide up a round of radiation therapy into several small fractions; will this preferentially spare healthy tissue (Section 16.9)? Pearl asks a different type of question: the data shows that smokers are more likely to get lung cancer; why? Does smoking cause lung cancer, or is there some confounding effect responsible for the correlation (for instance, some people have a gene that makes them both more susceptible to lung cancer and more likely to smoke)?

Although I can’t say I’ve mastered Pearl’s statistical methods for causal inference, I do like the way he adopts a causal model to test data. Apparently for a long time statisticians analyzed data using no hypotheses, just statistical tests. If they found a correlation, they could not infer causation; does smoking cause lung cancer or does lung cancer cause smoking? Pearl draws many causal diagrams to make his causation assumptions explicit. He then uses these illustrations to derive his statistical model. These drawings remind me of Feynman diagrams that we physicists use to calculate the behavior of elementary particles.

Simpson’s Paradox

Just when my interest in The Book of Why was waning, Pearl shocked me back to attention with Simpson’s paradox.
Imagine a doctor—Dr. Simpson, we’ll call him—reading in his office about a promising new drug (Drug D) that seems to reduce the risk of a heart attack. Excitedly, he looks up the researcher’s data online. His excitement cools a little when he looks at the data on male patients and notices that their risk of a heart attack is actually higher if they take Drug D. “Oh well,” he says, “Drug D must be very effective for women.”

But then he turns to the next table, and his disappointment turns to bafflement. “What is this?” Dr. Simpson exclaims. “It says here that women who took Drug D were also at higher risk of a heart attack. I must be losing my marbles! This drug seems to be bad for women, bad for men, but good for people.”
To illustrate this effect, consider the example analyzed by Pearl. In a clinical trial some patients received a drug (treatment) and some didn’t (control). Patients who subsequently had heart attacks are indicated by red boxes, and patients who did not by blue boxes. In the figure below, the data is analyzed by gender: males and females.

An example of Simpson's paradox, showing men and women being divided into treatment and control groups. Based on The Book of Why, by Judea Pearl and Dana MacKenzie.

One out of twenty (5%) of the females in the control group had heart attacks, while three out of forty (7.5%) in the treatment group did. For women, the drug caused heart attacks! For males, twelve out of forty men in the control group (30%) suffered heart attacks, and eight out of twenty (40%) in the treatment group did. The drug caused heart attacks for the men too!

Now combine the data for men and women.

An example of Simpson's paradox, showing men and women pooled together into treatment and control groups. Based on The Book of Why, by Judea Pearl and Dana MacKenzie.

In the control group, 13 out of 60 patients had a heart attack (22%). In the treatment group, 11 of 60 patients had one (18%). The drug prevented heart attacks! This seems impossible, but if you don’t believe me, count the boxes; it’s not a trick. What do we make of this? As Pearl says “A drug can’t simultaneously cause me and you to have a heart attack and at the same time prevent us both from having heart attacks.”
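Counting the boxes in code makes the paradox concrete: each subgroup’s heart-attack rate rises with treatment, yet the pooled rate falls.

```python
# Heart-attack counts from Pearl's example: (attacks, group size).
groups = {
    "women": {"control": (1, 20), "treatment": (3, 40)},
    "men":   {"control": (12, 40), "treatment": (8, 20)},
}

def rate(attacks, n):
    """Fraction of a group that suffered a heart attack."""
    return attacks / n

for name, arms in groups.items():
    print(f"{name}: control {rate(*arms['control']):.1%}, "
          f"treatment {rate(*arms['treatment']):.1%}")

# Pool the two subgroups into combined control and treatment arms.
pooled = {}
for arm in ("control", "treatment"):
    attacks = sum(groups[g][arm][0] for g in groups)
    n = sum(groups[g][arm][1] for g in groups)
    pooled[arm] = (attacks, n)
print(f"pooled: control {rate(*pooled['control']):.1%}, "
      f"treatment {rate(*pooled['treatment']):.1%}")
```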

To resolve the paradox, Pearl notes that this was not a randomized clinical trial. Patients could decide to take the drug or not, and women chose the drug more often than men. The preference for taking the drug is what Pearl calls a “confounder.” The chance of having a heart attack is much greater for men than for women, but more women elected to join the treatment group than men. Therefore, the treatment group was overweighted with low-risk women, and the control group was overweighted with high-risk men, so when the data was pooled the treatment group looked like it had fewer heart attacks than the control group. In other words, the difference between treatment and control got mixed up with the difference between men and women. Thus, the apparent effectiveness of the drug in the pooled data is a statistical fluke. A randomized trial would have shown similar data for men and women, but a different result when the data was pooled: the drug causes heart attacks.


The Book of Why contains only a little mathematics; Pearl tries to make the discussion accessible to a wide audience. He does, however, use lots of math in his research. His opinion of math is similar to mine and to IPMB’s.
Many people find formulas daunting, seeing them as a way of concealing rather than revealing information. But to a mathematician, or to a person who is adequately trained in the mathematical way of thinking, exactly the reverse is true. A formula reveals everything: it leaves nothing to doubt or ambiguity. When reading a scientific article, I often catch myself jumping from formula to formula, skipping the words altogether. To me, a formula is a baked idea. Words are ideas in the oven.
One goal of IPMB is to help students gain the skills in mathematical modeling so that formulas reveal rather than conceal information. I often tell my students that formulas aren’t things you stick numbers into to get other numbers. Formulas tell a story. This idea is vitally important. I suspect Pearl would agree.


The causal diagrams in The Book of Why aid Pearl in deriving the correct statistical equations needed to analyze data. Toy models in IPMB aid students in deriving the correct differential equations needed to predict behavior. I see modeling as central to both activities: you start with an underlying hypothesis about what causes what, you translate that into mathematics, and then you learn something about your system. As Pearl notes, statistics does not always have this approach.
In certain circles there is an almost religious faith that we can find the answers to these questions in the data itself, if only we are sufficiently clever at data mining. However, readers of this book will know that this hype is likely to be misguided. The questions I have just asked are all causal, and causal questions can never be answered from data alone. They require us to formulate a model of the process that generates the data, or at least some aspects of that process. Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.
I enjoyed The Book of Why, even if I didn’t entirely understand it. It was skillfully written, thanks in part to coauthor Dana MacKenzie. It’s the sort of book that, once finished, I should go back and read again because it has something important to teach me. If I liked statistics more I might do that. But I won’t.

Friday, August 23, 2019

Happy Birthday, Godfrey Hounsfield!

Godfrey Hounsfield (1919-2004).
Wednesday, August 28, is the hundredth anniversary of the birth of Godfrey Hounsfield, the inventor of the computed tomography scanner.

In Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
The history of the development of computed tomography is quite interesting (Kalender 2011). The Nobel Prize in Physiology or Medicine was shared in 1979 by a physicist, Allan Cormack, and an engineer, Godfrey Hounsfield…The Nobel Prize acceptance speeches (Cormack 1980; Hounsfield 1980) are interesting to read.
To celebrate the centenary of Hounsfield’s birth, I’ve collected excerpts from his interesting Nobel Prize acceptance speech.
When we consider the capabilities of conventional X-ray methods, three main limitations become obvious. Firstly, it is impossible to display within the framework of a two-dimensional X-ray picture all the information contained in the three-dimensional scene under view. Objects situated in depth, i. e. in the third dimension, superimpose, causing confusion to the viewer.

Secondly, conventional X-rays cannot distinguish between soft tissues. In general, a radiogram differentiates only between bone and air, as in the lungs. Variations in soft tissues such as the liver and pancreas are not discernible at all and certain other organs may be rendered visible only through the use of radio-opaque dyes.

Thirdly, when conventional X-ray methods are used, it is not possible to measure in a quantitative way the separate densities of the individual substances through which the X-ray has passed. The radiogram records the mean absorption by all the various tissues which the X-ray has penetrated. This is of little use for quantitative measurement.

Computed tomography, on the other hand, measures the attenuation of X-ray beams passing through sections of the body from hundreds of different angles, and then, from the evidence of these measurements, a computer is able to reconstruct pictures of the body’s interior...
The technique’s most important feature is its [enormous] sensitivity. It allows soft tissue such as the liver and kidneys to be clearly differentiated, which radiographs cannot do…
It can also very accurately measure the values of X-ray absorption of tissues, thus enabling the nature of tissue to be studied.
These capabilities are of great benefit in the diagnosis of disease, but CT additionally plays a role in the field of therapy by accurately locating, for example, a tumour so indicating the areas of the body to be irradiated and by monitoring the progress of the treatment afterwards...
Famous scientists and engineers often have fascinating childhoods. Learn about Hounsfield’s youth by reading these excerpts from his Nobel biographical statement.
I was born and brought up near a village in Nottinghamshire and in my childhood enjoyed the freedom of the rather isolated country life. After the first world war, my father had bought a small farm, which became a marvellous playground for his five children… At a very early age I became intrigued by all the mechanical and electrical gadgets which even then could be found on a farm; the threshing machines, the binders, the generators. But the period between my eleventh and eighteenth years remains the most vivid in my memory because this was the time of my first attempts at experimentation, which might never have been made had I lived in a city… I constructed electrical recording machines; I made hazardous investigations of the principles of flight, launching myself from the tops of haystacks with a home-made glider; I almost blew myself up during exciting experiments using water-filled tar barrels and acetylene to see how high they could be waterjet propelled…

Aeroplanes interested me and at the outbreak of the second world war I joined the RAF as a volunteer reservist. I took the opportunity of studying the books which the RAF made available for Radio Mechanics and looked forward to an interesting course in Radio. After sitting a trade test I was immediately taken on as a Radar Mechanic Instructor and moved to the then RAF-occupied Royal College of Science in South Kensington and later to Cranwell Radar School. At Cranwell, in my spare time, I sat and passed the City and Guilds examination in Radio Communications. While there I also occupied myself in building large-screen oscilloscope and demonstration equipment as aids to instruction...

It was very fortunate for me that, during this time, my work was appreciated by Air Vice-Marshal Cassidy. He was responsible for my obtaining a grant after the war which enabled me to attend Faraday House Electrical Engineering College in London, where I received a diploma.
I joined the staff of EMI in Middlesex in 1951, where I worked for a while on radar and guided weapons and later ran a small design laboratory. During this time I became particularly interested in computers, which were then in their infancy… Starting in about 1958 I led a design team building the first all-transistor computer to be constructed in Britain, the EMIDEC 1100…

I was given the opportunity to go away quietly and think of other areas of research which I thought might be fruitful. One of the suggestions I put forward was connected with automatic pattern recognition and it was while exploring various aspects of pattern recognition and their potential, in 1967, that the idea occurred to me which was eventually to become the EMI-Scanner and the technique of computed tomography...
Happy birthday, Godfrey Hounsfield. Your life and work made a difference.

Watch “The Scanner Story,” a documentary made by EMI
about their early computed tomography brain scanners.
The video, filmed in 1978, shows its age but is engaging.

Part Two of “The Scanner Story.”

Friday, August 16, 2019

This View of Life

What’s the biggest idea in science that’s not mentioned in Intermediate Physics for Medicine and Biology? Most of the grand principles of physics appear: quantum mechanics, special relativity, the second law of thermodynamics. The foundations of chemistry are included, such as atomic theory and radioactive decay. Many basic concepts from mathematics are discussed, like calculus and chaos theory. Fundamentals of biology are also present, like the structure of DNA.

In my opinion, the biggest scientific idea never mentioned in Intermediate Physics for Medicine and Biology, not even once, is evolution. As Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.” So why is evolution absent from IPMB?

A simple, if not altogether satisfactory, answer is that no single book can cover everything. As Russ Hobbie and I write in the preface to IPMB, “This book has become long enough.”

At a deeper level, however, physicists focus on principles that are common to all organisms, principles that unify our view of life. Evolutionary biologists, on the other hand, delight in explaining how diverse organisms come about through the quirks and accidents of history. Russ and I come from physics, and we emphasize unity over diversity.

Ever Since Darwin, by Stephen Jay Gould.
Suppose you want to learn more about evolution; how would you do it? I suggest reading books by Stephen Jay Gould (1941-2002), and in particular his collections of essays. I read these years ago and loved them, both for the insights into evolution and for the beauty of the writing. In the prologue of Gould’s first collection—Ever Since Darwin—he says
These essays, written from 1974-1977, originally appeared in my monthly column for Natural History Magazine, entitled “This View of Life.” They range broadly from planetary and geological to social and political history, but they are united (in my mind at least) by the common thread of evolutionary theory—Darwin’s version. I am a tradesman, not a polymath; what I know of planets and politics lies at their intersection with biological evolution.
Is evolution truly missing from Intermediate Physics for Medicine and Biology? Although it’s not discussed explicitly, ideas about how physics constrains evolution are implicit. For instance, one homework problem in Chapter 4 instructs the student to “estimate how large a cell …can be before it is limited by oxygen transport.” Doesn’t this problem really analyze how diffusion impacts natural selection? Another problem in Chapter 3 asks “could a fish be warm blooded and still breathe water [through gills]?” Isn’t this really asking why mammals such as dolphins and whales, which have evolved to live in the water, must nevertheless come to the surface to breathe air? Indeed, many ideas analyzed in IPMB are relevant to evolution.
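The oxygen-transport problem has a classic back-of-the-envelope answer. For a spherical cell consuming oxygen uniformly at rate Q, steady-state diffusion gives a concentration at the center of C0 − QR²/(6D), so the largest viable radius is √(6DC0/Q). Here is a sketch with assumed parameter values (D is a typical value for oxygen in water; C0 and Q are rough illustrative guesses):

```python
import math

# Steady-state diffusion into a sphere with uniform O2 consumption Q:
# the concentration at the center is C0 - Q*R^2/(6*D), so the cell runs
# out of oxygen at its center once R exceeds sqrt(6*D*C0/Q).
D  = 2.0e-9   # O2 diffusion constant in water, m^2/s (typical value)
C0 = 0.2      # O2 concentration outside the cell, mol/m^3 (assumed)
Q  = 0.1      # metabolic O2 consumption rate, mol/(m^3 s) (assumed)

R_max = math.sqrt(6 * D * C0 / Q)
print(f"maximum radius ≈ {R_max*1e6:.0f} µm")
```

With these numbers the limit is on the order of 100 µm, which is why cells much larger than that need circulation, stirring, or a very low metabolic rate.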

In Ever Since Darwin, Gould dedicates an essay (Chapter 21, “Size and Shape”) to scaling. Russ and I discuss scaling in Chapter 1 of IPMB. Gould explains that
Animals are physical objects. They are shaped to their advantage by natural selection. Consequently, they must assume forms best adapted to their size. The relative strength of many fundamental forces (gravity, for example) varies with size in a regular way, and animals respond by systematically altering their shapes.
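Gould’s point about forces varying with size can be put in numbers (a trivial sketch, but it captures the argument): muscle and bone strength scale with cross-sectional area, while weight scales with volume.

```python
# Gould's scaling argument in numbers: strength scales with cross-sectional
# area (L^2) while weight scales with volume (L^3), so the strength-to-weight
# ratio falls as 1/L. Scale a hypothetical animal up tenfold:
L_scale = 10.0

strength_scale = L_scale ** 2      # muscle & bone strength ~ area
weight_scale   = L_scale ** 3      # weight ~ volume

ratio_scale = strength_scale / weight_scale
print(f"strength x{strength_scale:.0f}, weight x{weight_scale:.0f}, "
      f"strength/weight x{ratio_scale:.2f}")
```

A tenfold-larger animal is a hundred times stronger but a thousand times heavier, so it is relatively ten times weaker, which is why elephants and ants have such different shapes.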
The Panda's Thumb, by Stephen Jay Gould.
Gould returns to the topic of scaling in an essay on “Our Allotted Lifetimes,” Chapter 29 in his collection titled The Panda’s Thumb. This chapter contains mathematical expressions (rare in Gould’s essays but common in IPMB) analyzing how breathing rate, heart rate and lifetime scale with size. In his next essay (Chapter 30, “Natural Attraction: Bacteria, the Birds and the Bees”), Gould addresses another topic covered in IPMB: magnetotactic bacteria. He writes
In the standard examples of nature’s beauty—the cheetah running, the gazelle escaping, the eagle soaring, the tuna coursing, and even the snake slithering or the inchworm inching—what we perceive as graceful form also represents an excellent solution to a problem in physics. When we wish to illustrate the concept of adaptation in evolutionary biology, we often try to show that organisms “know” physics—that they have evolved remarkably efficient machines for eating and moving.
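The allometric scalings Gould analyzes in “Our Allotted Lifetimes” are easy to play with numerically. In the sketch below (my own rough, human-anchored reference values, not Gould’s numbers), heart rate falls as mass to the −1/4 power while lifespan grows as mass to the +1/4 power, so the number of heartbeats in a lifetime is roughly the same for a mouse and an elephant.

```python
# Allometric scaling from "Our Allotted Lifetimes": heart rate scales
# roughly as M^(-1/4) and lifespan as M^(+1/4), so heartbeats per
# lifetime is roughly independent of body mass M. Reference values
# below are rough, illustrative numbers anchored to a human.
ref_mass, ref_rate, ref_life = 70.0, 70.0, 70.0   # kg, beats/min, years

def heart_rate(mass_kg):          # beats per minute
    return ref_rate * (mass_kg / ref_mass) ** -0.25

def lifespan(mass_kg):            # years
    return ref_life * (mass_kg / ref_mass) ** 0.25

results = {}
for name, m in [("mouse", 0.03), ("human", 70.0), ("elephant", 4000.0)]:
    beats = heart_rate(m) * 60 * 24 * 365 * lifespan(m)   # beats per lifetime
    results[name] = beats
    print(f"{name:8s} {heart_rate(m):7.1f} beats/min, "
          f"{lifespan(m):5.1f} yr, {beats:.2e} beats per lifetime")
```

The product of the two power laws is exactly constant in this caricature; real data scatter around it, but the cancellation is the point of Gould’s essay.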
Gould knew one of my heroes, Isaac Asimov. In his essay on magnetotactic bacteria, Gould describes how he and Asimov discussed topics similar to those in Edward Purcell’s article “Life at Low Reynolds Number” cited in IPMB.
The world of a bacterium is so unlike our own that we must abandon all our certainties about the way things are and start from scratch. Next time you see Fantastic Voyage... ponder how the miniaturized adventurers would really fare as microscopic objects within a human body... As Isaac Asimov pointed out to me, their ship could not run on its propeller, since blood is too viscous at such a scale. It should have, he said, a flagellum—like a bacterium.
I’m fond of essays, which often provide more insight than journal articles and textbooks. Gould’s 300 essays appeared in every issue of Natural History between 1974 and 2001; he never missed a month. Asimov also had a monthly essay in The Magazine of Fantasy and Science Fiction, and his streak lasted over thirty years, from 1959 to 1992. My twelve-year streak in this blog seems puny compared to these ironmen. Had Gould and Asimov been born a half century later, I wonder whether they would have been bloggers.

Gould ends his prologue to The Panda’s Thumb by quoting The Origin of Species, written by his hero Charles Darwin. There in the final paragraph of this landmark book we find a juxtaposition of physics and biology.
Charles Darwin chose to close his great book with a striking comparison that expresses this richness. He contrasted the simpler system of planetary motion, and its result of endless, static cycling, with the complexity of life and its wondrous and unpredictable change through the ages:
There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

Listen to Stephen Jay Gould talk about evolution.

 National Public Radio remembers Stephen Jay Gould (May 22, 2002).

Friday, August 9, 2019

Arthur Sherman wins the Winfree Prize

Arthur Sherman, winner of the Arthur T. Winfree Prize from the Society for Mathematical Biology.
My friend Arthur Sherman—whom I knew when I worked at the National Institutes of Health in the 1990s—has won the Arthur T. Winfree Prize from the Society for Mathematical Biology. The SMB website states
Arthur Sherman, National Institute of Diabetes and Digestive and Kidney Diseases, will receive the Arthur T. Winfree Prize for his work on biophysical mechanisms underlying insulin secretion from pancreatic beta-cells. Since insulin plays a key role in maintaining blood glucose, this is of basic physiological interest and is also important for understanding the causes and treatment of type 2 diabetes, which arises from a combination of defects in insulin secretion and insulin action. The Arthur T. Winfree Prize was established in memory of Arthur T. Winfree’s contributions to mathematical biology. This prize is to honor a theoretician whose research has inspired significant new biology. The Winfree Prize consists of a cash prize of $500 and a certificate given to the recipient. The winner is expected to give a talk at the Annual Meeting of the Society for Mathematical Biology (Montreal 2019).
Russ Hobbie and I discuss the glucose-insulin negative feedback loop in Chapter 10 of Intermediate Physics for Medicine and Biology. I’ve written previously in this blog about Winfree.
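The negative feedback loop is easy to caricature with two coupled rate equations (a generic sketch in the spirit of IPMB’s Chapter 10, not the specific model there; all rate constants are invented, illustrative values): glucose stimulates insulin release, and insulin accelerates glucose uptake.

```python
# A minimal glucose-insulin negative feedback loop. Units are arbitrary;
# every rate constant below is an assumed, illustrative value.
def simulate(g_in, t_end=2000.0, dt=0.01):
    k1, k2, k3, k4 = 0.01, 0.001, 0.005, 0.05   # assumed rate constants
    G, I = 100.0, 10.0                           # initial glucose, insulin
    t = 0.0
    while t < t_end:
        dG = g_in - k1 * G - k2 * G * I          # insulin lowers glucose
        dI = k3 * G - k4 * I                     # glucose raises insulin
        G += dG * dt
        I += dI * dt
        t += dt
    return G, I

G_base, I_base = simulate(g_in=2.0)    # baseline glucose input
G_meal, I_meal = simulate(g_in=4.0)    # doubled glucose input

print(f"baseline:      G = {G_base:.1f}, I = {I_base:.1f}")
print(f"doubled input: G = {G_meal:.1f}, I = {I_meal:.1f}")
```

Without feedback (insulin held fixed), doubling the glucose input would double the steady-state glucose; with the loop closed, insulin rises and blunts the increase, which is the essence of negative feedback.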

Read how Sherman explains his research in lay language on a NIDDK website.
Insulin is a hormone that allows the body to use carbohydrates for quick energy. This spares fat for long-term energy storage and protein for building muscle and regulating cellular processes. Without sufficient insulin many tissues, such as muscle, cannot use glucose, the product of digestion of carbohydrates, as a fuel. This leads to diabetes, a rise in blood sugar that damages organs. It also leads to heart disease, kidney failure, blindness, and finally, premature death. We use mathematics to study how the beta cells of the pancreas know how much glucose is available and how much insulin to secrete, as well as how failure of various components of insulin secretion contributes to the development of diabetes.
When I was at NIH, Sherman worked with John Rinzel studying bursting. Here’s a page from my research notebook, showing my notes from a talk that Artie (as we called him then) gave thirty years ago. A sketch of a bursting pancreatic beta cell is in the bottom right corner.

From my NIH Research Notebook 1, March 30, 1989, taken during a talk by Arthur Sherman.
I recommend the video of a talk by Sherman that you can view online. His abstract says
I will trace the history of models for bursting, concentrating on square-wave bursters descended from the Chay-Keizer model for pancreatic beta cells. The model was originally developed on a biophysical and intuitive basis but was put into a mathematical context by John Rinzel's fast-slow analysis. Rinzel also began the process of classifying bursting oscillations based on the bifurcations undergone by the fast subsystem, which led to important mathematical generalization by others. Further mathematical work, notably by Terman, Mosekilde and others, focused rather on bifurcations of the full bursting system, which showed a fundamental role for chaos in mediating transitions between bursting and spiking and between bursts with different numbers of spikes. The development of mathematical theory was in turn both a blessing and a curse for those interested in modeling the biological phenomena—having a template of what to expect made it easy to construct a plethora of models that were superficially different but mathematically redundant. This may also have steered modelers away from alternative ways of achieving bursting, but instructive examples exist in which unbiased adherence to the data led to discovery of new bursting patterns. Some of these had been anticipated by the general theory but not previously instantiated by Hodgkin-Huxley-based examples. A final level of generalization has been the addition of multiple slow variables. While often mathematically reducible to models with a one-variable slow subsystem, such models also exhibit novel resetting properties and enhanced dynamic range. Analysis of the dynamics of such models remains a current challenge for mathematicians.
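The square-wave bursting Sherman describes can be demonstrated with a minimal fast-slow model. The sketch below uses the Hindmarsh-Rose equations (a standard three-variable burster, not the Chay-Keizer beta-cell model itself, but with the same fast-slow structure Rinzel analyzed): two fast variables generate spikes while a slow variable z switches the fast subsystem between spiking and silence.

```python
# Hindmarsh-Rose model: a minimal square-wave burster.
# Fast variables: x (membrane potential), y (recovery).
# Slow variable:  z (adaptation current), with rate r << 1.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.006, 4.0, -1.6, 2.0   # standard bursting parameters

x, y, z = -1.6, -10.0, 2.0
dt, t_end = 0.01, 2000.0

xs = []
t = 0.0
while t < t_end:
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)          # slow drift of z
    x += dx * dt; y += dy * dt; z += dz * dt
    xs.append(x)
    t += dt

# Count spikes (upward crossings of x = 1): bursts of several spikes
# alternate with silent, hyperpolarized phases.
spikes = sum(1 for i in range(1, len(xs)) if xs[i-1] < 1.0 <= xs[i])
print(f"spikes in {t_end:.0f} time units: {spikes}, min x = {min(xs):.2f}")
```

Plotting xs against time shows the square-wave pattern: clusters of spikes riding on a plateau, separated by quiet intervals, exactly the phenomenon in my notebook sketch.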
Congratulations to Arthur Sherman, for this well-deserved honor.

Arthur Sherman giving a talk at the Colorado School of Mines, October 2017.

Friday, August 2, 2019

Can Magnetic Resonance Imaging Detect Electrical Activity in Your Brain?

Can magnetic resonance imaging detect electrical activity in your brain? If so, it would be a breakthrough in neural recording, providing better spatial resolution than electroencephalography or magnetoencephalography. Functional magnetic resonance imaging (fMRI) is already used to detect brain activity, but it records changes in blood flow (BOLD, or blood-oxygen-level-dependent, imaging), which is an indirect measure of electrical signaling. MRI ought to be able to detect brain function directly; bioelectric currents produce their own biomagnetic fields that should affect a magnetic resonance image. Russ Hobbie and I discuss this possibility in Section 18.12 of Intermediate Physics for Medicine and Biology.

The magnetic field produced in the brain is tiny: a nanotesla or less. In an article I wrote with my friend Ranjith Wijesinghe of Ball State University and his students (Medical and Biological Engineering and Computing, Volume 50, Pages 651-657, 2012), we concluded
MRI measurements of neural currents in dendrites [of neurons] may be barely detectable using current technology in extreme cases such as seizures, but the chance of detecting normal brain function is very small. Nevertheless, MRI researchers continue to develop clever new imaging methods, using either sophisticated pulse sequences or data processing. Hopefully, this paper will outline the challenges that must be overcome in order to image dendritic activity using MRI.
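To appreciate the challenge, estimate the phase a neural field imparts to precessing spins: a field B that adds to Bz for a time t shifts the precession phase by Δφ = γBt. A sketch with a 1 nT field acting for 100 ms (both assumed, optimistic values):

```python
import math

# Phase shift from a neural magnetic field during MRI: dphi = gamma * B * t.
gamma = 2.675e8      # proton gyromagnetic ratio, rad/(s T)
B     = 1e-9         # neural field, tesla (~1 nT, an optimistic value)
t     = 0.1          # duration the field acts, seconds (assumed)

dphi = gamma * B * t
print(f"phase shift = {dphi*1000:.0f} mrad ({math.degrees(dphi):.1f} degrees)")
```

A phase shift of a few tens of milliradians is at the edge of detectability, and realistic neural fields are often weaker and briefer, which is why the direct-detection problem is so hard.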
Truong et al. (2019), “Toward Direct MRI of Neuro-Electro-Magnetic Oscillations in the Human Brain,” Magn. Reson. Med. 81:3462-3475.
Since we published those words seven years ago, has anyone developed a clever pulse sequence or a fancy data processing method that allows imaging of biomagnetic fields in the brain? Yes! Or, at least, maybe. Researchers in Allen Song’s laboratory published a paper titled “Toward Direct MRI of Neuro-Electro-Magnetic Oscillations in the Human Brain” in the June 2019 issue of Magnetic Resonance in Medicine. I reproduce the abstract below.
Purpose: Neuroimaging techniques are widely used to investigate the function of the human brain, but none are currently able to accurately localize neuronal activity with both high spatial and temporal specificity. Here, a new in vivo MRI acquisition and analysis technique based on the spin-lock mechanism is developed to noninvasively image local magnetic field oscillations resulting from neuroelectric activity in specifiable frequency bands.

Methods: Simulations, phantom experiments, and in vivo experiments using an eyes-open/eyes-closed task in 8 healthy volunteers were performed to demonstrate its sensitivity and specificity for detecting oscillatory neuroelectric activity in the alpha‐band (8‐12 Hz). A comprehensive postprocessing procedure was designed to enhance the neuroelectric signal, while minimizing any residual hemodynamic and physiological confounds.

Results: The phantom results show that this technique can detect 0.06-nT magnetic field oscillations, while the in vivo results demonstrate that it can image task-based modulations of neuroelectric oscillatory activity in the alpha-band. Multiple control experiments and a comparison with conventional BOLD functional MRI suggest that the activation was likely not due to any residual hemodynamic or physiological confounds.

Conclusion: These initial results provide evidence suggesting that this new technique has the potential to noninvasively and directly image neuroelectric activity in the human brain in vivo. With further development, this approach offers the promise of being able to do so with a combination of spatial and temporal specificity that is beyond what can be achieved with existing neuroimaging methods, which can advance our ability to study the functions and dysfunctions of the human brain.
I’ve been skeptical of work by Song and his team in the past; see for instance my article with Peter Basser (Magn. Reson. Med. 61:59-64, 2009) critiquing their “Lorentz Effect Imaging” idea. However, I’m optimistic about this recent work. I’m not expert enough in MRI to judge all the technical details—and there are lots of technical details—but the work appears sound.

The key to their method is “spin-lock,” which I discussed before in this blog. To understand spin-lock, let’s compare it to a typical MRI π/2 pulse (see Section 18.5 in IPMB). Initially, the spins lie at equilibrium along a static magnetic field Bz (blue in the left panel of the figure below). To be useful for imaging, you must rotate the spins into the x-y plane so they precess about the z axis at the Larmor frequency (typically a radio frequency of many megahertz, with the exact frequency depending on Bz). If you apply an oscillating magnetic field Bx (red) perpendicular to Bz, with a frequency equal to the Larmor frequency and for just the right duration, you will rotate all the spins into the x-y plane (green). The behavior is simpler if instead of viewing it from the static laboratory frame of reference (x, y, z) we view it from a frame of reference rotating at the Larmor frequency (x', y', z'). At the end of the π/2 pulse the spins point in the y' direction and appear static (they will eventually relax back to equilibrium, but we ignore that slow process in this discussion). If you’re having trouble visualizing the rotating frame, see Fig. 18.7 in IPMB; the z and z' axes are the same and it’s the x-y plane that’s rotating.
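The numbers behind this description are worth computing. A sketch with typical values (3 T is a common clinical field strength; the 10 µT RF amplitude is my assumption):

```python
import math

# Larmor frequency and pi/2 pulse duration for typical field strengths.
gamma = 2.675e8                     # proton gyromagnetic ratio, rad/(s T)

Bz = 3.0                            # static field, tesla (typical scanner)
f_larmor = gamma * Bz / (2 * math.pi)
print(f"Larmor frequency at {Bz} T: {f_larmor/1e6:.1f} MHz")

B1 = 10e-6                          # RF field amplitude, tesla (assumed)
t_90 = (math.pi / 2) / (gamma * B1) # time to rotate the spins 90 degrees
print(f"pi/2 pulse duration for B1 = 10 uT: {t_90*1e3:.2f} ms")
```

The spins precess about Bz at over a hundred megahertz, yet the much weaker RF field needs only a fraction of a millisecond to tip them into the x-y plane, because in the rotating frame B1 is the only field the spins feel.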

After the π/2 pulse, you would normally continue your pulse sequence by measuring the free induction decay or creating an echo. In a spin-lock experiment, however, after the π/2 pulse ends you apply a circularly polarized magnetic field By' (blue in the right panel) at the Larmor frequency. In the rotating frame, By' appears static along the y' direction. Now you have a situation in the rotating frame that’s similar to the situation you had originally in the laboratory frame: a static magnetic field and spins aligned with it seemingly in equilibrium. What magnetic field during the spin-lock plays the role of the radiofrequency field during the π/2 pulse? You need a magnetic field in the z' direction that oscillates at the spin-lock frequency; the frequency that spins precess about By'. An oscillating neural magnetic field Bneural (red) would do the job. It must oscillate at the spin-lock frequency, which depends on the strength of the circularly polarized magnetic field By'. Song and his team adjusted the magnitude of By' so the spin-lock frequency matched the frequency of alpha waves in the brain (about 10 Hz). This causes the spins to rotate from y' to z' (green). Once you accumulate spins in the z' direction, turn off the spin lock and you are back to where you started (spins in the z direction in a static field Bz), except that the number of these spins depends on the strength of the neural magnetic field and the duration of the spin-lock. A neural magnetic field not at resonance⁠—that is, at any other frequency besides the spin-lock frequency⁠—will not rotate spins to the z' axis. You now have an exquisitely sensitive method of detecting an oscillating biomagnetic field, analogous to using a lock-in amplifier to isolate a particular frequency in a signal.
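The resonance condition described above can be checked numerically by integrating the Bloch equation dM/dt = γM×B in the rotating frame (a sketch with illustrative field values, not Song’s actual pulse sequence): magnetization leaves the locking axis only when the “neural” field oscillates at the spin-lock frequency.

```python
import math

# Spin-lock resonance sketch: integrate dM/dt = gamma * M x B in the
# rotating frame, with a locking field B_sl along y' and a small
# oscillating "neural" field along z'. Magnetization is tipped away from
# the lock axis only when the neural field oscillates at the spin-lock
# frequency f_SL = gamma*B_sl/(2*pi). All values are illustrative.
gamma = 2.675e8                       # proton gyromagnetic ratio, rad/(s T)
f_sl  = 10.0                          # spin-lock frequency, Hz (alpha band)
B_sl  = 2 * math.pi * f_sl / gamma    # locking field, ~0.23 microtesla
B_n   = 1e-9                          # neural field amplitude, 1 nT (assumed)

def tipped(f_neural, T=1.0, dt=1e-4):
    """Fraction of M tipped out of the lock (y') axis after time T."""
    M = [0.0, 1.0, 0.0]               # spins start locked along y'
    for i in range(int(T / dt)):
        B = [0.0, B_sl, B_n * math.cos(2 * math.pi * f_neural * i * dt)]
        Bmag = math.sqrt(B[1]**2 + B[2]**2)
        ang = gamma * Bmag * dt       # rotate M about B by this angle
        k = [0.0, B[1] / Bmag, B[2] / Bmag]        # rotation axis (unit)
        kxM = [k[1]*M[2] - k[2]*M[1],
               k[2]*M[0] - k[0]*M[2],
               k[0]*M[1] - k[1]*M[0]]
        kdM = k[1]*M[1] + k[2]*M[2]
        c, s = math.cos(ang), math.sin(ang)        # Rodrigues rotation
        M = [M[j]*c + kxM[j]*s + k[j]*kdM*(1 - c) for j in range(3)]
    return math.hypot(M[0], M[2])

on  = tipped(10.0)                    # neural field at the spin-lock frequency
off = tipped(15.0)                    # neural field off resonance
print(f"tipped fraction after 1 s: on resonance {on:.3f}, "
      f"off resonance {off:.3f}")
```

After one second of spin-lock, the on-resonance neural field tips roughly 13% of the magnetization out of the lock axis (the rotating-wave result sin(γB_n T/2)), while a field only 5 Hz away does essentially nothing, which is the lock-in-amplifier-like selectivity described above.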

Comparison of a π/2 pulse and spin-lock.
There’s a lot more to Song’s method than I’ve described, including complicated techniques to eliminate any contaminating BOLD signal. But it seems to work.

Will this method revolutionize neural imaging? Time will tell. I worry about how well it can detect neural fields that are not oscillating at a single frequency. Nevertheless, Song’s experiment—together with the work from Okada’s lab that I discussed three years ago—may mean we’re on the verge of something big: using MRI to directly measure neural magnetic fields.

In the conclusion of their article, Song and his coworkers strike an appropriate balance between acknowledging the limitations of their method and speculating about its potential. Let’s hope their final sentence comes true.
Our initial results provide evidence suggesting that MRI can be used to noninvasively and directly image neuroelectric oscillations in the human brain in vivo. This new technique should not be viewed as being aimed at replacing existing neuroimaging techniques, which can address a wide range of questions, but it is expected to be able to image functionally important neural activity in ways that no other technique can currently achieve. Specifically, it is designed to directly image neuroelectric activity, and in particular oscillatory neuroelectric activity, which BOLD fMRI cannot directly sample, because it is intrinsically limited by the temporal smear and temporal delay of the hemodynamic response. Furthermore, it has the potential to do so with a high and unambiguous spatial specificity, which EEG/MEG cannot achieve, because of the limitations of the inverse problem. We expect that our technique can be extended and optimized to directly image a broad range of intrinsic and driven neuronal oscillations, thereby advancing our ability to study neuronal processes, both functional and dysfunctional, in the human brain.