Friday, March 8, 2013

Helium Shortage!

A recent article in the New York Times discusses the looming shortage of helium.
A global helium shortage has turned the second-most abundant element in the universe (after hydrogen) into a sought-after scarcity, disrupting its use in everything from party balloons and holiday parade floats to M.R.I. machines and scientific research….

Experts say the shortage has many causes. Because helium is a byproduct of natural gas extraction, a drop in natural gas prices has reduced the financial incentives for many overseas companies to produce helium. In addition, suppliers’ ability to meet the growing demand for helium has been strained by production problems around the world. Helium plants that are being built or are already operational in Qatar, Algeria, Wyoming and elsewhere have experienced a series of construction delays or maintenance troubles.
One medical use of helium is discussed in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 8, Russ Hobbie and I write about the role of helium in magnetoencephalography—the biomagnetic measurement of electrical activity in the brain—using Superconducting Quantum Interference Device (SQUID) magnetometers.
The SQUID must be operated at temperatures where it is superconducting. It used to be necessary to keep a SQUID in a liquid-helium bath, which is expensive to operate because of the high evaporation rate of liquid helium. With the advent of high-temperature superconductors, SQUIDS have the potential to operate at liquid-nitrogen temperatures, where the cooling problems are much less severe [for additional information, see here].
A more widespread use of helium in medicine is during magnetic resonance imaging. Chapter 18 of our book discusses MRI, but it does not describe how the strong, static magnetic field required by MRI is created. In a clinical MRI system, a magnetic field (typically 2 to 4 T) must exist over a large volume. Producing such a magnetic field with permanent magnets, if it were possible at all, would require giant, massive, expensive structures. A more practical approach is to use coils carrying a large current. One way to minimize the resulting Joule heating losses in the coils is to make them out of superconducting wire, which must be cooled cryogenically. An article on the Time Magazine online newsfeed states
Liquid helium has an extremely low boiling point—minus 452.1 degrees Fahrenheit, close to absolute zero—which makes it a perfect substance for cooling the superconducting magnets found in MRI machines. Hospitals are generally the first in line for helium, so the shortage isn’t affecting them yet. But prices for hospital-grade helium may continue to go up, leading to higher health-care costs or, in the worst-case scenario, the need for a backup plan for cooling MRI machines.
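To get a feel for the numbers, here is a rough back-of-the-envelope estimate (my own illustration, with an assumed winding density and a made-up resistance for a hypothetical copper coil, not the specifications of any real scanner) of the current a magnet must carry to produce a field of a few tesla, and of the power an ordinary resistive coil would waste as Joule heat.

```python
import math

mu0 = 4 * math.pi * 1e-7     # permeability of free space (T m / A)

B = 3.0          # target field in tesla, in the middle of the 2-4 T range above
n = 2000.0       # assumed winding density in turns per meter (illustrative only)

# Field inside a long solenoid: B = mu0 * n * I, so I = B / (mu0 * n)
I = B / (mu0 * n)
print(f"Current for {B:.0f} T with {n:.0f} turns/m: about {I:.0f} A")

# If the same current flowed in ordinary (resistive) copper windings with an
# assumed total resistance of 1 ohm, the Joule heating would be enormous:
R_copper = 1.0   # ohms, a made-up ballpark for miles of copper wire
print(f"A resistive coil would dissipate about {R_copper * I**2 / 1e6:.1f} MW")
```

Even with these crude assumptions, the resistive version dissipates on the order of a megawatt, which is why clinical magnets use superconducting windings and why the liquid helium that keeps them cold matters.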
More detail about the use of helium during MRI can be found in an online book titled The Basics of MRI by Joseph Hornak. Below I quote some of the text, but you will need to go to the book website to see the pictures and animations.
The imaging magnet is the most expensive component of the magnetic resonance imaging system. Most magnets are of the superconducting type. This is a picture of a first generation 1.5 Tesla superconducting magnet from a magnetic resonance imager. A superconducting magnet is an electromagnet made of superconducting wire. Superconducting wire has a resistance approximately equal to zero when it is cooled to a temperature close to absolute zero (−273.15° C or 0 K) by immersing it in liquid helium. Once current is caused to flow in the coil it will continue to flow as long as the coil is kept at liquid helium temperatures. (Some losses do occur over time due to infinitely small resistance of the coil. These losses can be on the order of a ppm of the main magnetic field per year.)

The length of superconducting wire in the magnet is typically several miles. The coil of wire is kept at a temperature of 4.2 K by immersing it in liquid helium. The coil and liquid helium is kept in a large dewar. The typical volume of liquid Helium in an MRI magnet is 1700 liters. In early magnet designs, this dewar was typically surrounded by a liquid nitrogen (77.4 K) dewar which acts as a thermal buffer between the room temperature (293 K) and the liquid helium. See the animation window for a cross sectional view of a first generation superconducting imaging magnet.

In later magnet designs, the liquid nitrogen region was replaced by a dewar cooled by a cryocooler or refrigerator. There is a refrigerator outside the magnet with cooling lines going to a coldhead in the liquid helium. This design eliminates the need to add liquid nitrogen to the magnet, and increases the liquid helium hold time to 3 to 4 years. The animation window contains a cross sectional view of this type of magnet. Researchers are working on a magnet that requires no liquid helium.
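Hornak's parenthetical remark that the field decays by about a part per million per year lets you estimate just how close to zero the coil resistance must be. Here is a quick sketch; the 100 H inductance is an assumed ballpark of mine, not a value from Hornak.

```python
import math

# In persistent mode the magnet is a closed R-L circuit, so the current (and
# therefore the field) decays as B(t) = B0 * exp(-t / tau), with tau = L / R.
decay_per_year = 1e-6            # Hornak's figure: about 1 ppm of the field per year
seconds_per_year = 3.156e7

# For a decay this small, (1 year) / tau is approximately the fractional loss per year.
tau = seconds_per_year / decay_per_year
print(f"Decay time constant: {tau:.1e} s, roughly {tau / seconds_per_year:.0e} years")

L = 100.0                        # assumed coil inductance in henries (a ballpark guess)
R = L / tau                      # residual resistance implied by that decay rate
print(f"Implied residual resistance for L = {L:.0f} H: {R:.1e} ohm")
```

A few picohms for miles of wire is, for all practical purposes, the "resistance approximately equal to zero" that Hornak describes.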
With the discovery of high-temperature superconductivity (HTS), MRI magnets that operate at higher temperatures, avoiding the need for liquid helium, have become possible. The ideal solution to the helium shortage would be superconducting coils cooled with liquid nitrogen. Nitrogen makes up about 80% of our atmosphere, so it is essentially free and virtually limitless. However, a 2010 article by scientists at the MIT Francis Bitter Magnet Laboratory (FBML) suggests that a more practical solution might be the use of solid nitrogen to reach temperatures of 20 K, for which superconducting materials such as magnesium diboride (MgB2) exist that have the properties required for magnet coils.
A tremendous progress achieved in the past decade and is continuing today has transformed selected HTS materials into “magnet-grade” conductors, i.e., meet rigorous magnet specifications and are readily available from commercial wire manufacturers [1]. We are now at the threshold of a new era in which HTS will play a key role in a number of applications— here MgB2 (Tc=39 K) is classified as an HTS. The HTS offers opportunities and challenges to a number of applications for superconductivity. In this paper we briefly describe three NMR/MRI magnets programs currently being developed at FBML that would be impossible without HTS: 1) a 1.3 GHz NMR magnet; 2) a compact NMR magnet assembled from YBCO [yttrium barium copper oxide] annuli; and 3) a persistent-mode, fully-protected MgB2 0.5-T/800-mm whole-body MRI magnet.
Even if new MRI magnets using solid nitrogen or some other abundant substance as the coolant were developed, there are thousands of existing MRI devices that still would require liquid helium and would be very expensive to replace. Congress is currently considering legislation to address the helium shortage (see article here). We urgently need to preserve our helium supply to ensure its availability for important medical devices.

P.S. I saw this article just a few days ago. High temperature superconductors for MRI may be just around the corner!

Friday, March 1, 2013

Magnetoacoustic Tomography with Magnetic Induction

Magnetoacoustic tomography with magnetic induction is a new method to image the distribution of electrical conductivity in tissue. Bin He, the director of the Institute for Engineering in Medicine at the University of Minnesota, developed this technique with his student Yuan Xu in a 2005 publication (Physics in Medicine and Biology, Volume 50, Pages 5175–5187). They describe MAT-MI in their introduction.
We have developed a new approach called magnetoacoustic tomography with magnetic induction (MAT-MI) by combining ultrasound and magnetism. In this method, the object is in a static magnetic field and a time-varying (μs) magnetic field... The time-varying magnetic field induces an eddy current in the object. Consequently, the object will emit ultrasonic waves through the Lorentz force produced by the combination of the eddy current and the static magnetic field. The ultrasonic waves are then collected by the detectors located around the object for image reconstruction. MAT-MI combines the good contrast of EIT [electrical impedance tomography] with the good spatial resolution of sonography.
One nice feature of MAT-MI is that it fits so well into the 4th edition of Intermediate Physics for Medicine and Biology, in which Russ Hobbie and I analyze both eddy currents caused by Faraday induction (Chapter 8) and ultrasound imaging (Chapter 13). Another characteristic of MAT-MI is that the physics is simple enough that it can be summarized in a homework problem. So, dear reader, here is a new problem that will help you understand MAT-MI.
Section 8.6

Problem 25 ½  Assume a sheet of tissue having conductivity σ is placed perpendicular to a uniform, strong, static magnetic field B0, and a weaker spatially uniform but temporally oscillating magnetic field B1(t).
(a) Derive an expression for the electric field E induced by the oscillating magnetic field. It will depend on the distance r from the center of the sheet and the rate of change of the magnetic field.
(b) Determine an expression for the current density J by multiplying the electric field by the conductivity.
(c) The force per unit volume, F, is given by the Lorentz force, J×B0 (ignore the weak B1). Find an expression for F.
(d) The source of the ultrasonic pressure waves can be expressed as the divergence of the Lorentz Force. Derive an expression for ∇ · F.
(e) Draw a picture showing the directions of J, B0, and F.
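For readers who want to check their work, here is a sketch of one route through parts (a) through (d), assuming that both B0 and B1(t) point along the z axis (perpendicular to the sheet) and working in cylindrical coordinates. This is just my outline of the calculation, not text from the book or its solution manual.

```latex
% (a) Faraday's law on a circle of radius r (B1 is spatially uniform):
E_\phi \,(2\pi r) = -\pi r^2 \frac{dB_1}{dt}
\quad\Longrightarrow\quad
\mathbf{E} = -\frac{r}{2}\,\frac{dB_1}{dt}\,\hat{\boldsymbol{\phi}}

% (b) Ohm's law:
\mathbf{J} = \sigma\mathbf{E} = -\frac{\sigma r}{2}\,\frac{dB_1}{dt}\,\hat{\boldsymbol{\phi}}

% (c) Lorentz force per unit volume, with \mathbf{B}_0 = B_0\hat{\mathbf{z}} and
%     \hat{\boldsymbol{\phi}}\times\hat{\mathbf{z}} = \hat{\mathbf{r}}:
\mathbf{F} = \mathbf{J}\times\mathbf{B}_0 = -\frac{\sigma r B_0}{2}\,\frac{dB_1}{dt}\,\hat{\mathbf{r}}

% (d) Divergence of a purely radial field in cylindrical coordinates:
\nabla\cdot\mathbf{F} = \frac{1}{r}\frac{\partial (r F_r)}{\partial r}
                      = -\sigma B_0\,\frac{dB_1}{dt}
```

For the picture in part (e): J circulates azimuthally in the sheet, B0 points out of the sheet, and F points radially, inward or outward depending on the sign of dB1/dt.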
While this example is simple enough to serve as a homework problem, it does not illustrate imaging of conductivity; the conductivity is uniform so there is no variation to image. As Xu and He explain, if the conductivity varies with position, this will also contribute to ∇ · F, and therefore influence the radiated ultrasonic wave. Thus, information about the conductivity distribution σ(x,y) is contained in the pressure. Subsequent papers by He and his colleagues explore methods for extracting σ(x,y) from the ultrasonic signal. Potential applications include using MAT-MI to image breast cancer tumors.

I’ve worked on MAT-MI a little bit. University of Michigan student Kayt Brinker and I published a paper describing MAT-MI in anisotropic tissue like skeletal muscle, where the conductivity is much higher parallel to the muscle fibers than perpendicular to them [Brinker, K. and B. J. Roth (2008) “The effect of electrical anisotropy during magneto-acoustic tomography with magnetic induction,” IEEE Transactions on Biomedical Engineering, Volume 55, Pages 1637–1639]. For some reason the figures published by the journal were not of high quality, so here I reproduce a better version of Figure 6, which shows the pressure wave produced during MAT-MI.

Fig. 6 from Brinker and Roth (2008). Pressure at 20, 40, 60, and 80 μs in isotropic and anisotropic tissue. Each panel represents a 400 mm by 400 mm area.
In isotropic tissue, the wave propagates outward, the same in all directions. In electrically anisotropic tissue, the pressure is greater in the direction perpendicular to the fiber axis (vertical) than parallel to it (horizontal). The main difference between our calculation and that in the new homework problem given above is that Kayt and I restricted the oscillating magnetic field B1 to a small region (40 mm radius) at the center of the tissue sheet.

Friday, February 22, 2013

The Response of a Spherical Heart to a Uniform Electric Field

In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the bidomain model of cardiac tissue.
Myocardial cells are typically about 10 μm in diameter and 100 μm long. They have the added complication that they are connected to one another by gap junctions, as shown schematically in Fig. 7.27. This allows currents to flow directly from one cell to another without flowing in the extracellular medium. The bidomain (two-domain) model is often used to model this situation [Henriquez (1993)]. It considers a region, small compared to the size of the heart, that contains many cells and their surrounding extracellular fluid.
The citation is to the 20-year-old-but-still-useful review article by Craig Henriquez of Duke University.
Henriquez, C. S. (1993) “Simulating the electrical behavior of cardiac tissue using the bidomain model,” Crit. Rev. Biomed. Eng., Volume 21, Pages 1–77.
According to Google Scholar, this landmark paper has been cited over 450 times (including a citation on page 202 of IPMB).

During the early 1990s I collaborated with another researcher from Duke, Natalia Trayanova. Our goal was to apply the bidomain model to the study of defibrillation of the heart. In the same year that Craig’s review appeared, Trayanova, her student Lisa Malden, and I published an article in the IEEE Transactions on Biomedical Engineering titled “The Response of the Spherical Heart to a Uniform Electric Field: A Bidomain Analysis of Cardiac Stimulation” (Volume 40, Pages 899–908). I’m fond of this paper for several reasons:
  • Like most physicists, I like simple models that highlight and clarify basic mechanisms. Our spherical heart model had that simplicity.
  • The article was the first to show that fiber curvature provides a mechanism for polarization of cardiac tissue in response to an electrical shock. Since our paper, researchers have appreciated the importance of the fiber geometry in the heart when modeling electrical stimulation.
  • The model emphasizes the role of unequal anisotropy ratios in the bidomain model. In cardiac tissue, both the intracellular and extracellular spaces are anisotropic (the electrical conductivity is different parallel to the myocardial fibers than perpendicular to them), but the intracellular space is more anisotropic than the extracellular space. Fiber curvature will only result in polarization deep in the heart wall if the tissue has unequal anisotropy ratios.
  • The calculation has important clinical implications. Fibrillation of the heart is a leading cause of death in the United States, and the only way to treat a fibrillating heart is to apply a strong electric shock: defibrillation. I’ve performed a lot of numerical simulations in my career, but none have the potential impact for medicine as my work on defibrillation.
  • The IEEE TBME publishes brief bios of the authors. Back in those days I published in this journal often, and my goal was to have my entire CV included, bit by bit, in these small bios. The one in this paper read “Bradley J Roth was raised in Morrison, Illinois. He received the BS degree in physics from the University of Kansas in 1982, and the PhD in physics from Vanderbilt University in 1987. His PhD dissertation research was performed in the Living State Physics Laboratory under the direction of Dr. J. Wikswo. He is now a Senior Staff Fellow with the Biomedical Engineering and Instrumentation Program, National Institutes of Health, Bethesda, MD. One of his research interests is the mathematical modeling of the electrical behavior of the heart. He is also interested in the production and interactions of magnetic fields with biological tissue, e.g. biomagnetism, magnetic stimulation, and magnetoacoustic imaging.”
  • The acknowledgments state “the authors thank B. Bowman for his assistance in editing the manuscript.” Barry was a great help to me in improving my writing skills during my years at NIH, and I’m glad that we mentioned him.
  • The paper cites several of my favorite books, including When Time Breaks Down by Art Winfree, Classical Electrodynamics by John David Jackson, and Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, by Abramowitz and Stegun.
  • The paper has been fairly influential. It’s been cited 97 times, which is small potatoes compared to Henriquez’s review, but not too shabby nevertheless; an average of almost five citations a year for 20 years.
  • It was a pleasure to collaborate with Natalia Trayanova, who I was to work with again seven years later on another study of cardiac electrical behavior (Lindblom, Roth, and Trayanova, Journal of Cardiovascular Electrophysiology, Volume 11, Pages 274–285, 2000).
  • The paper led to subsequent simulations of defibrillation that are much more realistic and sophisticated than our simple spherical model of twenty years ago. Trayanova has led the way in this research, first at Duke, then at Tulane, and now at Johns Hopkins. You can listen to her discuss her research here. If you have a subscription to the Journal of Visualized Experiments you can hear more here. For a recent review, see Trayanova et al. (2012). Also, see this article recently put out by Johns Hopkins University. 
Listen to Natalia Trayanova discuss developing computer simulations to improve arrhythmia treatments.
Cardiac Bioelectric Therapy: Mechanisms and Practical Implications.
To learn more about how physics and engineering can help us understand defibrillation, consult the book Cardiac Bioelectric Therapy: Mechanisms and Practical Implications, which has chapters by Trayanova and many of the other leading researchers in the field (including yours truly).

Friday, February 15, 2013

The Joy of X

The Joy of X, by Steven Strogatz.
Steven Strogatz’s latest book is The Joy of X: A Guided Tour of Math, From One to Infinity. I have discussed books by Strogatz in previous entries of this blog, here and here. The preface defines the purpose of The Joy of X.
The Joy of X is an introduction to math’s most compelling and far-reaching ideas. The chapters—some from the original Times series [a series of articles about math that Strogatz wrote for the New York Times]—are bite-size and largely independent, so feel free to snack wherever you like. If you want to wade deeper into anything, the notes at the end of the book provide additional details, and suggestions for further reading.
My favorite chapter in The Joy of X was “Twist and Shout” about Mobius strips. Strogatz’s discussion was fine, but what I really enjoyed was the lovely video he called my attention to: “Wind and Mr. Ug”. Go watch it right now; it’s less than 8 minutes long. It is the most endearing mathematical story since Flatland.

Wind and Mr. Ug.

Of course, I’m always on the lookout for medical and biological physics, and I found it in Strogatz’s chapter called “Analyze This!,” in which he describes the Gibbs phenomenon. I have written about the Gibbs phenomenon in this blog before, but not so eloquently. Russ Hobbie and I introduce the Gibbs phenomenon in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology. When talking about the fit of a Fourier series to a square wave, we write
As the number of terms in the fit is increased, the value of Q [a measure of the goodness of the fit] decreases. However, spikes of constant height (about 18% of the amplitude of the square wave or 9% of the discontinuity in y) remain…These spikes appear whenever there is a discontinuity in y and are called the Gibbs phenomenon.
It turns out that the Gibbs phenomenon is related to the alternating harmonic series. Strogatz writes
Consider this series, known in the trade as the alternating harmonic series:
1 – 1/2 + 1/3 – 1/4 + 1/5 – 1/6 + … .
[…] The partial sums in this case are
S1 = 1
S2 = 1 – 1/2 = 0.500
S3 = 1 – 1/2 + 1/3 = 0.833 …
S4 = 1 – 1/2 + 1/3 – 1/4 = 0.583…

And if you go far enough, you’ll find that they home in on a number close to 0.69. The series can be proven to converge. Its limiting value is the natural logarithm of 2, denoted ln2 and approximately equal to 0.693147. […]

Let’s look at a particularly simple rearrangement whose sum is easy to calculate. Suppose we add two of the negative terms in the alternating harmonic series for every one of its positive terms, as follows:

[1 – 1/2 – 1/4] + [1/3 – 1/6 – 1/8] + [1/5 – 1/10 – 1/12] + …

Next, simplify each of the bracketed expressions by subtracting the second term from the first while leaving the third term untouched. Then the series reduces to

[1/2 – 1/4] + [1/6 – 1/8] + [1/10 – 1/12] + …

After factoring out ½ from all the fractions above and collecting terms, this becomes

½ [ 1 – 1/2 + 1/3 – 1/4 + 1/5 – 1/6 + …].

Look who’s back: the beast inside the brackets is the alternating harmonic series itself. By rearranging it, we’ve somehow made it half as big as it was originally—even though it contains all the same terms!
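Strogatz's claims are easy to check numerically. The little script below (my own illustration, not from the book) sums a few thousand terms of the alternating harmonic series and of the rearranged one-positive-two-negative series:

```python
import math

N = 3000  # number of terms (or bracketed groups) to sum

# Alternating harmonic series: 1 - 1/2 + 1/3 - 1/4 + ...
s = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

# Rearrangement: one positive term followed by two negative terms,
# [1 - 1/2 - 1/4] + [1/3 - 1/6 - 1/8] + [1/5 - 1/10 - 1/12] + ...
r = sum(1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k) for k in range(1, N + 1))

print(f"Alternating harmonic series (N = {N}): {s:.6f}")
print(f"ln 2                                 : {math.log(2):.6f}")
print(f"Rearranged series                    : {r:.6f}")
print(f"ln 2 / 2                             : {math.log(2) / 2:.6f}")
```

The first sum lands near ln 2 ≈ 0.693 and the rearranged sum near ln 2 / 2 ≈ 0.347, just as the quoted argument predicts.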
Strogatz then relates this to a Fourier series

f(x) = sin x – 1/2 sin 2x + 1/3 sin 3x – 1/4 sin 4x + …

This series approaches a sawtooth curve. But when he examines its behavior with different numbers of terms in the sum, he finds the Gibbs phenomenon.
Something goes wrong near the edges of the teeth. The sine waves overshoot the mark there and produce a strange finger that isn’t in the sawtooth wave itself… The blame can be laid at the doorstep of the alternating harmonic series. Its pathologies discussed earlier now contaminate the associated Fourier series. They’re responsible for that annoying finger that just won’t go away.
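You can watch that finger refuse to go away numerically. The sketch below (an illustration of my own, not code from Intermediate Physics for Medicine and Biology) sums Strogatz's sawtooth series with more and more terms and measures the overshoot just to the left of the jump at x = π; it should settle near the 9% of the discontinuity that Russ and I quote in Chapter 11.

```python
import numpy as np

def sawtooth_partial_sum(x, n_terms):
    """Partial sum of sin x - (1/2) sin 2x + (1/3) sin 3x - ...,
    which converges to x/2 for -pi < x < pi."""
    k = np.arange(1, n_terms + 1)
    coeffs = (-1.0) ** (k + 1) / k
    return np.sum(coeffs[:, None] * np.sin(np.outer(k, x)), axis=0)

# Examine the partial sum just to the left of the discontinuity at x = pi.
x = np.linspace(np.pi - 0.5, np.pi, 5000)
jump = np.pi                      # the limit drops from +pi/2 to -pi/2 at x = pi
for n_terms in (10, 100, 1000):
    peak = sawtooth_partial_sum(x, n_terms).max()
    overshoot = (peak - np.pi / 2) / jump
    print(f"{n_terms:5d} terms: peak = {peak:.4f}, "
          f"overshoot ~ {100 * overshoot:.1f}% of the jump")
```

Adding more terms squeezes the spike closer to the jump, but its height never shrinks.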
In the notes about the Gibbs phenomenon at the end of the book, Strogatz points us to a fascinating paper on the history of this topic
Hewitt, E. and Hewitt, R. E. (1979) “The Gibbs-Wilbraham Phenomenon: An Episode in Fourier Analysis,” Archive for the History of Exact Sciences, Volume 21, Pages 129–160.
He concludes his chapter
This effect, commonly called the Gibbs phenomenon, is more than a mathematical curiosity. Known since the mid-1800s, it now turns up in our digital photographs and on MRI scans. The unwanted oscillations caused by the Gibbs phenomenon can produce blurring, shimmering, and other artifacts at sharp edges in the image. In a medical context, these can be mistaken for damaged tissue, or they can obscure lesions that are actually present.

Friday, February 8, 2013

Photodynamic Therapy

I am currently teaching Medical Physics (PHY 326) at Oakland University, and for our textbook I am using (surprise!) the 4th edition of Intermediate Physics for Medicine and Biology. In class, we recently finished Chapter 14 on Atoms and Light, which “describes some of the biologically important properties of infrared, visible, and ultraviolet light.”

Once a week, class ends with a brief discussion of a recent Point/Counterpoint article from the journal Medical Physics (see here and here for my previous discussion of Point/Counterpoint articles). I find these articles to be useful for introducing students to cutting-edge questions in modern medical physics. The title of each article contains a proposition that two leading medical physicists debate, one for it and one against it. This week, we discussed an article about photodynamic therapy (PDT) by Timothy C. Zhu (University of Pennsylvania, for the proposition) and E. Ishmael Parsai (University of Toledo, against the proposition):
Zhu, T. C., and E. I. Parsai (2011) “PDT is Better than Alternative Therapies Such as Brachytherapy, Electron Beams, or Low-Energy X Rays for the Treatment of Skin Cancers,” Medical Physics, Volume 38, Pages 1133–1135.
When reading through the article, I thought I would check how extensively we discuss PDT in IPMB. I found that we say nothing about it! A search for the term “photodynamic” or “PDT” comes back empty. So, this week (with an eye toward the 5th edition) I am preparing a very short new section in Chapter 14 about PDT.
14.8 ½ Photodynamic Therapy

Photodynamic therapy (PDT) uses a drug called a photosensitizer that is activated by light [Zhu and Finlay (2008), Wilson and Patterson (2008)]. PDT can treat accessible solid tumors such as basal cell carcinoma, a type of skin cancer [see Sec. 14.9.4]. An example of PDT is the surface application of 5-aminolevulinic acid, which is absorbed by the tumor cells and is transformed metabolically into the photosensitizer protoporphyrin IX. When this molecule interacts with light in the 600-800 nm range (red and near infrared), often delivered with a diode laser, it converts molecular oxygen into a highly reactive singlet state that causes necrosis, apoptosis (programmed cell death), or damage to the vasculature that can make the tumor ischemic. Some internal tumors can be treated using light carried by optical fibers introduced through an endoscope.
The two citations are to the articles 
Wilson, B. C. and M. S. Patterson (2008) “The Physics, Biophysics and Technology of Photodynamic Therapy,” Physics in Medicine and Biology, Volume 53, Pages R61–R109.

Zhu, T. C. and J. C. Finlay (2008) “The Role of Photodynamic Therapy (PDT) Physics,” Medical Physics, Volume 35, Pages 3127–3136.
The first PhD dissertation from the Oakland University Medical Physics graduate program dealt with photodynamic therapy: In Vivo Experimental Investigation on the Interaction Between Photodynamic Therapy and Hyperthermia, by James Mattiello (1987).

You can learn more about photodynamic therapy here and here. Please don’t confuse PDT with the alternative medicine (bogus) treatment “Sono Photo Dynamic Therapy.”

Friday, February 1, 2013

The Page 99 Test

English editor Ford Madox Ford advised people who are debating if they should read a particular book to “open the book to page ninety-nine and read, and the quality of the whole will be revealed to you.” This approach is now called the Page 99 Test. Although arbitrary, it provides a way to decide quickly if a book will interest you. Let’s try the Page 99 Test with the 4th edition of Intermediate Physics for Medicine and Biology. Section 4.12 comparing drift and diffusion ends on Page 99, and Section 4.13 about the solution to the diffusion equation begins. The page contains five displayed equations (four of them numbered, Eqs. 4.70 to 4.73) and three figures (Figs. 4.17 to 4.19). An example of the text of page 99 is the opening paragraph at the start of Sec. 4.13.
If C(x, 0) is known for t = 0, it is possible to use the result of Sec. 4.8 to determine C(x,t) at any later time. The key to doing this is that if C(x,t) dx is the number of particles in the region between x and x+dx at time t, it may be interpreted as the probability of finding a particle in the interval (x, dx) multiplied by the total number of particles. (Recall the discussion on p. 91 about the interpretation of C(x,t).) The spreading Gaussian then represents the spread of probability that a particle is between x and x + dx.
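For readers who do not have the book open to that page, the result being invoked is (paraphrasing, not quoting page 99) the familiar convolution of the initial concentration with a spreading Gaussian:

```latex
C(x,t) = \int_{-\infty}^{\infty} C(\xi,0)\,
         \frac{1}{\sqrt{4\pi D t}}\,
         e^{-(x-\xi)^2/4Dt}\; d\xi
```

which, presumably, is also where the Greek symbol ξ noted in the Symbol List makes its first appearance.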
Page 99 appears in the Table of Contents:
4.13 A General Solution for the Particle Concentration as a Function of Time . . . . . 99
and the title of this section appears as the running title at the top of the page. Page 99 appears three times in the index, under 1) Diffusion equation, general solution, 2) Fick’s law (frankly, I'm not sure why page 99 is listed for Fick’s law, as I don't see it mentioned explicitly anywhere on that page), and 3) Gaussian distribution. According to the Symbol List at the end of Chapter 4, the first use of the Greek symbol xi for position was on page 99. Somewhat unusually, no references are cited on page 99 (there are citations on the page before and the page after). No corrections to page 99 appear in the errata, and no words are emphasized using italics.

Does Intermediate Physics for Medicine and Biology pass the page 99 test? I think so. The topic—diffusion—is a physical phenomenon that is crucial for understanding biology. The mix of equations and figures is similar to the remainder of the book. Calculus is used without apology. If you like page 99, I think you will enjoy the rest of the book. And if you like page 99, you are going to love page 100, which contains more equations and figures, plus error functions, Green’s functions, random walks, and citations to classic texts such as Benedek and Villars (2000), Carslaw and Jaeger (1959), and Crank (1975). And if you liked page 100, on page 101 you find......

Friday, January 25, 2013

Aliasing

In Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss aliasing.
If a component [in the Fourier spectrum] is present whose frequency is more than half the sampling frequency, it will appear in the analysis at a lower frequency. This is the familiar stroboscopic effect in which the wheels of the stagecoach appear to rotate backward because the samples (movie frames) are not made rapidly enough. In signal analysis, this is called aliasing. It can be seen in Fig. 11.15, which shows a sine wave sampled at regularly spaced intervals that are longer than half a period.
First of all, what is all this business about a stagecoach? Fifty years ago, when westerns were all the rage in movies and on TV, aliasing often occurred if the frame rate (typically 24 frames per second for old movies) was less than twice the rotation rate of the wheel (if all the spokes of the wheel are equivalent, then you can take the “period of rotation” as the time it takes for one spoke to rotate to the position of the adjacent one, which may be much shorter than the time for the wheel to make one complete rotation). You can see an example of this in the John Wayne movie Winds of the Wasteland (1936), especially in the climactic scene of the stagecoach race. In this video of the movie, you can see aliasing of the stagecoach wheel briefly at time 55:40. For those of you who are more discriminating in your movie tastes, you can see another example of aliasing 14 minutes and 15 seconds into Stagecoach, a John Wayne classic from 1939 directed by John Ford. In my opinion, the greatest western is the John Ford masterpiece The Man Who Shot Liberty Valance. What more could you ask for than both John Wayne and Jimmy Stewart in the same production? You can see aliasing briefly when Stewart drives his buckboard out of Shinbone to practice his pistol shooting (without much success). Another time when you see a wheel rotate backwards in this movie does not involve aliasing; it is (Spoiler Alert!) after Wayne (not Stewart) kills Valance (Lee Marvin), when Pompey (Woody Strode) takes the drunken Wayne to his ranch house where he backs up the buckboard (that was a joke….).

But I digress. Aliasing can happen in space as well as time, and can therefore affect images. If spatial frequencies in the structure of an object correspond to wavelengths smaller than twice the pixel size, low spatial frequency artifacts, such as moiré patterns, can appear in the image, shown nicely in this figure. One can minimize aliasing by first filtering (anti-aliasing) before sampling. Some rather extreme cases of aliasing can be seen in Figs. 11.41 and 12.11 of Intermediate Physics for Medicine and Biology.
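To see the arithmetic behind the backward-turning wheel, here is a short sketch (mine, with made-up numbers, not an example from the book) that samples a 23 Hz “spoke frequency” at the 24 frame-per-second movie rate and checks where the energy shows up in the sampled spectrum.

```python
import numpy as np

fs = 24.0      # sampling rate: the movie frame rate, in frames per second
f0 = 23.0      # true "spoke-passing" frequency in Hz, above the 12 Hz Nyquist limit

n = 4096                        # number of frames to analyze
t = np.arange(n) / fs           # times at which the frames sample the motion
frames = np.sin(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(frames))
freqs = np.fft.rfftfreq(n, d=1 / fs)
apparent = freqs[np.argmax(spectrum)]

# Folding (aliasing) prediction: the apparent frequency is |f0 - k*fs| for the
# nearest integer k, which always lands below fs/2.
predicted = abs(f0 - round(f0 / fs) * fs)

print(f"True frequency      : {f0:.1f} Hz")
print(f"FFT peak appears at : {apparent:.2f} Hz")
print(f"Folding prediction  : {predicted:.2f} Hz")
```

The 23 Hz motion masquerades as 1 Hz, and because the fold reverses the sense of the phase progression the wheel appears to turn slowly backward; the same folding in space produces the moiré patterns mentioned above.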

 Stagecoach, with John Wayne.

Friday, January 18, 2013

The Magic Angle

I recently found another error in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 18 about magnetic resonance imaging, Homework Problem 18 reads
Problem 18  Suppose the two dipoles of the water molecule shown below point in the z direction while the line between them makes an angle θ with the x axis. Determine the angle θ for which the magnetic field of one dipole is perpendicular to the dipole moment of the other. For this angle the interaction energy is zero. This θ is called the “magic angle” and is used when studying anisotropic tissue such as cartilage [Xia (1998)].
Technically there is nothing wrong with this problem. However, if I were doing it over I would have the angle θ measured from the z axis, not the x axis. One reason is that this is the way θ is defined most often in the literature. Another is that in the solution manual we solve the problem as if θ were relative to the z axis, so the book and the solution manual are not consistent on the definition of θ. I should add, this problem was not present in the 3rd edition of Intermediate Physics for Medicine and Biology. It is a new problem I wrote for the 4th edition, so I can’t blame Russ Hobbie for this one (rats).

The citation in the homework problem is to the paper
Xia, Y. (1998) “Relaxation Anisotropy in Cartilage by NMR Microscopy (μMRI) at 14-μm Resolution,” Magnetic Resonance in Medicine, Volume 39, Pages 941–949.
The author, Yang Xia, is a good friend of mine, and a colleague here in the Department of Physics at Oakland University. He is well-known around OU because over the last decade he had the most grant money from the National Institutes of Health of anyone on campus. He uses a variety of techniques, including micro-magnetic resonance imaging (μMRI), to study cartilage and osteoarthritis. The abstract of his highly-cited paper reads
To study the structural anisotropy and the magic-angle effect in articular cartilage, T1 and T2 images were constructed at a series of orientations of cartilage specimens in the magnetic field by using NMR microscopy (μMRI). An isotropic T1, and a strong anisotropic T2 were observed across the cartilage tissue thickness. Three distinct regions in the microscopic MR images corresponded approximately to the superficial, transitional, and radial histological zones in the cartilage. The percentage decrease of T2 follows the pattern of the curve of (3 cos²θ - 1)² at the radial zone, where the collagen fibrils are perpendicular to the articular surface. In contrast, little orientational dependence of T2 was observed at the transitional zone, where the collagen fibrils are more randomly oriented. The result suggests that the interactions between water molecules and proteoglycans have a directional nature, which is somehow influenced by collagen fibril orientation. Hence, T2 anisotropy could serve as a sensitive and noninvasive marker for molecular-level orientations in articular cartilage.
Perhaps a better reference for our homework problem is another paper of Xia’s.
Xia, Y. (2000) “Magic Angle Effect in MRI of Articular Cartilage: A Review,” Investigative Radiology, Volume 35, Pages 602–621.
There in Fig. 3 of Xia’s review is a picture almost identical to the figure that immediately follows Homework Problem 18 in our book, except the angle θ is measured from the direction of the static magnetic field rather than perpendicular to it. Xia writes
T2 corresponds to the decay in phase coherence (dephasing) between the individual nuclear spins in a sample (protons in our case). Because each proton has a magnetic moment, it generates a small local dipolar magnetic field that impinges on its neighbor’s space (Fig. 3).43 This local field fluctuates constantly because the molecule is tumbling randomly. The T2 process can occur under the influence of this fluctuating magnetic field. At the end of signal excitation during an MRI experiment, the net magnetization (which produces the MRI signal) is coherent and points along a certain direction in space in the rotating frame of reference. This coherent magnetization vector soon becomes dephased because the local magnetic fields associated with the magnetic properties of neighboring nuclei cause the precessing nuclei to acquire a range of slightly different precessional frequencies. The time scale of this signal dephasing is reported as T2 and is characteristic of the molecular environment in the sample.43,44

For simple liquids or samples containing simple liquidlike molecules, the molecules tumble rapidly. The dipolar spin Hamiltonian (HD) that describes the dipolar interaction is averaged to zero, and its contribution to the spin relaxation vanishes. Relaxation characteristics exhibit a simple exponential decay that is well described by the Bloch equations.45 For samples containing molecules that are less mobile, HD is no longer averaged to zero and makes a significant contribution to the relaxation, resulting in a shorter T2. When HD is not zero, it is dominated by a geometric factor, (3 cos²θ - 1), where θ is the angle between the position vector joining the two spins and the external magnetic field (see Fig. 3). A useful feature of this geometric factor is that it approaches zero as θ approaches 54.74° (Fig. 4). Therefore, even when HD is not zero, the contribution of HD to spin relaxation can be minimized if θ is set to 54.74°. This angle is called the magic angle in NMR.46
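Setting that geometric factor to zero gives the magic angle directly:

```latex
3\cos^2\theta - 1 = 0
\quad\Longrightarrow\quad
\cos\theta = \frac{1}{\sqrt{3}}
\quad\Longrightarrow\quad
\theta = \arccos\!\left(\frac{1}{\sqrt{3}}\right) \approx 54.74^{\circ}
```

with θ measured, as in Xia's figure and in the corrected homework problem, from the direction of the static magnetic field.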
So, in the errata you will now find this:
Page 539: In Chapter 18, Homework Problem 18, “while the line between them makes an angle θ with the x axis” should be “while the line between them makes an angle θ with the z axis”. Also, in the accompanying figure following the homework problem, the angle θ should be measured from the z (vertical) axis, not the x (horizontal) axis. Corrected 1-18-13.
Is this the last error that we’ll find in our book? I doubt it; there are sure to be more we haven’t found yet. If you find any, please let us know.

Friday, January 11, 2013

5th Edition of Intermediate Physics for Medicine and Biology

Russ Hobbie and I are starting to talk about a 5th edition of Intermediate Physics for Medicine and Biology, and we need your help. We would like suggestions and advice about what changes/additions/deletions to make in the new edition.

We have prepared a survey to send to faculty members who we know have used IPMB as the textbook for a class they taught. However, our list may be incomplete, and input from any teacher, student, or reader would be useful. So, below is a copy of the survey. Please send responses to any or all of the questions to Russ (hobbie@umn.edu).

Thanks!
  1. What chapters did you cover when teaching from the 4th edition of IPMB?
  2. Were the homework problems appropriate?
  3. In the 4th edition we added a chapter on Sound and Ultrasound to IPMB. If we were to add one new chapter to the 5th edition, what should the topic be?
  4. Would color significantly improve the book for your purposes? How much extra money would you be willing to pay if the 5th edition contained many color pictures?
  5. What is the best feature of IPMB? What is the worst?
  6. Is the end-of-chapter list of symbols useful?
  7. Do your students use the Appendices? Suppose to save space one Appendix had to be deleted: which one should go?
  8. Did you have access to the solution manual? Was it useful? We prepared the solution manual using different software than the book itself. Did you notice a difference in quality between the book and the solution manual?
  9. Would you like students to have access to the solution manual?
  10. Did you use any information on the book website, such as the errata or text from previous editions?
  11. Are you aware of the book blog? Did you find it useful when teaching from IPMB? Do you find it interesting?
  12. How important is having a paperback version of the book?
  13. What textbooks did you consider other than IPMB? If IPMB did not exist, what book would you use for your class?

Friday, January 4, 2013

Non-Dynamical Stochastic Resonance: Theory and Experiments with White and Arbitrarily Coloured Noise

Section 11.18 of the 4th edition of Intermediate Physics for Medicine and Biology contains a discussion of stochastic resonance. This section is new to the 4th edition; it features a discussion of a paper by Zoltan Gingl, Laszlo Kish (formerly “Kiss”), and Frank Moss.
Gingl, Z., L. B. Kiss, and F. Moss (1995) “Non-Dynamical Stochastic Resonance: Theory and Experiments with White and Arbitrarily Coloured Noise,” Europhysics Letters, Volume 29, Pages 191–196.
The paper is interesting (despite the annoying British spelling), and I reproduce part of the introduction below.
In the last decade’s physics literature, stochastic-resonance (SR) effect has been one of the most interesting phenomena taking place in noisy non-linear dynamical systems (see, e.g., [1-14]). The input of stochastic resonators [12] (non-linear systems showing SR) is fed by a Gaussian noise and a sinusoidal signal with frequency f0, that is, a random excitation and a periodic one are acting on the system. There is an optimal strength of the input noise, such that the system’s output power spectral density, at the signal frequency f0, has a maximal value. This effect is called SR. It can be viewed as: the transfer of the input sinusoidal signal through the system shows a “resonance” vs. the strength of the input noise. It is a very interesting, and somewhat paradoxial effect, because it indicates that in these systems the existence of a certain amount of “indeterministic” excitation is necessary to obtain the optimal “deterministic” response. There are certain indications [2,13,14] that the principle of SR may be applied by nature in biological systems in order to optimise the transfer of neural signals.
Until last year, it was a common belief that SR phenomena occur only in (bistable, sometimes monostable [10] or multistable) dynamical systems [1-14]. Very recently, Wiesenfeld et al. [15] have proposed that certain systems with threshold-like properties should also show SR effects.
We present here an extremely simple system, invented by Moss, which displays SR. It consists only of a threshold and a subthreshold coherent signal plus noise as shown in fig. 1a). It is not a dynamical system, instead there is a single rule: whenever the signal plus the noise crosses the threshold unidirectionally, a narrow pulse of standard shape is written to a time series, as shown in fig. 1b). The power spectrum of this series of pulses is shown in fig. 1c). It shows all the familiar features of SR systems previously studied [1, 2, 7, 16], in particular, the narrow, delta-like signal features riding on a broad-band noise background from which the signal-to-noise ratio (SNR) can be extracted. This system can be easily realized electronically as a level-crossing detector (LCD). There is a simple and very physically motivated theory of this phenomenon (due to Kiss), see below. Other, more detailed studies of various aspects of threshold-crossing dynamics have been made by Fox et al. [17], Jung [18] and Bulsara et al. [19].
We have experimentally realised and developed this simple SR system and carried out extensive analog and computer simulations on it. The theory of Kiss has been verified for the case of white and several sorts of coloured noises. Until now, the description of this new SR system, its physical realisation and the original theory have not appeared in the open literature, so in this letter we shall describe the new system and its developments made by us, present the outline and the main results of the theory and finally show some interesting experimental results…
Figure 1 in their paper is our Figure 11.50. It is an excellent figure, although I don’t know why they didn’t adjust the time axes so that the pulses in b) are aligned precisely with the signal crossings in a). The axes are almost correct, but are off just enough to be confusing, like when the video and audio signals are off by a fraction of a second in a movie or TV show.
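The level-crossing detector is simple enough to simulate in a few lines. The sketch below (my own toy version with arbitrary parameter values, not the circuit or code from the paper) adds Gaussian noise to a subthreshold sinusoid, records a pulse at each upward threshold crossing, and estimates the output signal-to-noise ratio at the signal frequency for several noise strengths; the SNR should rise and then fall as the noise grows, which is the stochastic-resonance signature.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0          # sampling rate (Hz); all of these values are arbitrary choices
T = 200.0            # record length (s)
f0 = 5.0             # signal frequency (Hz)
amp = 0.3            # subthreshold signal amplitude
threshold = 1.0      # detector threshold (the signal alone never reaches it)

t = np.arange(0, T, 1 / fs)
n = len(t)
signal = amp * np.sin(2 * np.pi * f0 * t)

def output_snr(noise_rms):
    """Build the pulse train of upward threshold crossings and return its
    power at f0 divided by the average power in nearby background bins."""
    x = signal + noise_rms * rng.standard_normal(n)
    upward = (x[1:] >= threshold) & (x[:-1] < threshold)
    pulses = np.concatenate(([0.0], upward.astype(float)))

    power = np.abs(np.fft.rfft(pulses)) ** 2
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    sig_bin = int(np.argmin(np.abs(freqs - f0)))

    background = np.concatenate((power[sig_bin - 50:sig_bin - 2],
                                 power[sig_bin + 3:sig_bin + 51]))
    return power[sig_bin] / background.mean()

for noise_rms in (0.3, 0.5, 0.8, 1.2, 2.0, 3.0):
    print(f"noise rms = {noise_rms:3.1f}  ->  output SNR ~ {output_snr(noise_rms):7.1f}")
```

With no noise at all the subthreshold signal produces no pulses and no output; with too much noise the crossings become essentially random; somewhere in between the output is most strongly locked to the signal, which is the "resonance versus noise strength" described in the quoted passage.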

The Gingl et al. paper is short and highly cited (over 200 citations to date, according to the Web of Science). However, it is not cited nearly as often as another paper published by Kurt Wiesenfeld and Moss that same year:
Wiesenfeld, K. and F. Moss (1995) “Stochastic Resonance and the Benefits of Noise: From Ice Ages to Crayfish and SQUIDs,” Nature, Volume 373, Pages 33–36.
This paper, with over 1000 citations, reviews many applications of stochastic resonance.
Noise in dynamical systems is usually considered a nuisance. But in certain nonlinear systems, including electronic circuits and biological sensory apparatus, the presence of noise can in fact enhance the detection of weak signals. This phenomenon, called stochastic resonance, may find useful application in physical, technological and biomedical contexts.
Wiesenfeld and Moss discuss how the crayfish may use stochastic resonance to detect weak signals with their mechanoreceptor hair cells.

Frank Moss (1934-2011) was the founding director of the Center for Neurodynamics at the University of Missouri at St Louis. Click here to read his obituary (he died two years ago today) in Physics Today, and click here to read a tribute to him in a focus issue of the journal Chaos.