Friday, June 30, 2017

The Fast Fourier Transform

In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the fast Fourier transform.
The calculation of the Fourier coefficients using our equations involves N evaluations of the sine or cosine, N multiplications, and N additions for each coefficient. There are N coefficients, so that there must be N² evaluations of the sines and cosines, which uses a lot of computer time. Cooley and Tukey (1965) showed that it is possible to group the data in such a way that the number of multiplications is about (N/2) log₂N instead of N² and the sines and cosines need to be evaluated only once, a technique known as the fast Fourier transform (FFT).
Additional analysis of the FFT is found in the homework problems at the end of the chapter.
Problem 17. This problem provides some insight into the fast Fourier transform. Start with the expression for an N-point Fourier transform in complex notation, Y_k in Eq. 11.29a. Show that Y_k can be written as the sum of two N/2-point Fourier transforms: Y_k = ½[Y_k^e + W^k Y_k^o], where W = exp(−i2π/N), the superscript e stands for even values of j, and o stands for odd values.
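You can check this identity numerically. Below is a minimal sketch (mine, not from the book); it assumes Eq. 11.29a carries a 1/N normalization, which is what produces the factor of ½ in front.

```python
import numpy as np

def dft(y):
    """N-point discrete Fourier transform with a 1/N normalization
    (my assumption about Eq. 11.29a, implied by the factor of 1/2)."""
    N = len(y)
    n = np.arange(N)
    return np.array([np.sum(y * np.exp(-1j*2*np.pi*n*k/N)) / N
                     for k in range(N)])

y = np.random.default_rng(0).normal(size=8)   # any even N will do
N = len(y)
W = np.exp(-1j*2*np.pi/N)
k = np.arange(N//2)

Ye = dft(y[0::2])          # N/2-point transform of the even-indexed samples
Yo = dft(y[1::2])          # N/2-point transform of the odd-indexed samples
print(np.allclose(dft(y)[:N//2], 0.5*(Ye + W**k * Yo)))   # True
```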
Numerical Recipes: The Art of Scientific Computing, by Press et al., superimposed on Intermediate Physics for Medicine and Biology.
The FFT is a famous algorithm in the field of numerical methods. Below is how Press et al. describe it in one of my favorite books, Numerical Recipes.
The discrete Fourier transform can, in fact, be computed in O(N log₂N) operations with an algorithm called the fast Fourier transform, or FFT. The difference between N log₂N and N² is immense. With N = 10⁶, for example, it is the difference between, roughly, 30 seconds of CPU time and 2 weeks of CPU time on a microsecond cycle time computer. The existence of an FFT algorithm became generally known only in the mid-1960s, from the work of J. W. Cooley and J. W. Tukey. Retrospectively, we now know…that efficient methods for computing the DFT [discrete Fourier transform] had been independently discovered, and in some cases implemented, by as many as a dozen individuals, starting with Gauss in 1805!

One “rediscovery” of the FFT, that of Danielson and Lanczos in 1942, provides one of the clearest derivations of the algorithm. Danielson and Lanczos showed that a discrete Fourier transform of length N can be rewritten as the sum of two discrete Fourier transforms, each of length N/2. One of the two is formed from the even-numbered points of the original N, the other from the odd-numbered points…

The wonderful thing about the Danielson-Lanczos Lemma is that it can be used recursively. Having reduced the problem of computing F_k to that of computing F_k^e and F_k^o, we can do the same reduction of F_k^e to the problem of the transform of its N/4 even-numbered input data and N/4 odd-numbered data…

Although there are ways of treating other cases, by far the easiest case is the one in which the original N is an integer power of 2…With this restriction on N, it is evident that we can continue applying the Danielson-Lanczos Lemma until we have subdivided the data all the way down to transforms of length 1…The points as given are the one-point transforms. We combine adjacent pairs to get two-point transforms, then combine adjacent pairs of pairs to get 4-point transforms, and so on, until the first and second halves of the whole data set are combined into the final transform. Each combination takes on order N operations, and there are evidently log₂N combinations, so the whole algorithm is of order N log₂N.
This process, called decimation-in-time, is summarized in this lovely butterfly diagram.

A butterfly diagram of a decimation-in-time fast Fourier transform.
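To make the recursion concrete, here is a short sketch of a radix-2 decimation-in-time FFT. It uses the common unnormalized convention Y_k = Σ y_j exp(−i2πjk/N), so it differs from Eq. 11.29a by an overall factor of N.

```python
import cmath

def fft(y):
    """Radix-2 decimation-in-time FFT; len(y) must be a power of 2."""
    N = len(y)
    if N == 1:
        return list(y)              # a one-point transform is the point itself
    even = fft(y[0::2])             # N/2-point transform of even-indexed samples
    odd  = fft(y[1::2])             # N/2-point transform of odd-indexed samples
    W = cmath.exp(-2j * cmath.pi / N)
    return ([even[k] + W**k * odd[k] for k in range(N//2)] +
            [even[k] - W**k * odd[k] for k in range(N//2)])
```

Each level of the recursion does order-N work, and there are log₂N levels, which is where the N log₂N operation count comes from.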

Friday, June 23, 2017

Urea

Figure 4.12 of Intermediate Physics for Medicine and Biology shows a log-log plot of the diffusion constant of various molecules as a function of molecular weight. In the top panel of the figure, containing the small molecules, only four are listed: water (H₂O), oxygen (O₂), glucose (C₆H₁₂O₆), and urea (CO(NH₂)₂). Water, oxygen, and glucose are obvious choices; they are central to life. But what did urea do to make the cut? And just what is urea, anyway?

Life and Energy, by Isaac Asimov, superimposed on Intermediate Physics for Medicine and Biology.
I will let Isaac Asimov explain urea’s importance. In his book Life and Energy he writes
Now let us turn to the proteins, which, after digestion, enter the body in the form of amino acids. Before these can be utilized for the production of useful energy they must be stripped of their nitrogen.

In 1773 the French chemist G. F. Rouelle (Lavoisier’s teacher) discovered a nitrogenous compound in urine and named it ‘urea’ after its source. Once the composition of proteins began to be studied at the beginning of the nineteenth century, urea was at once recognized as the obvious route by which the body excreted the nitrogen of protein.

Its formula was shown to be
The structure of urea.
or, more briefly, NH₂CONH₂, once structural formulas became the order of the day. As it happens, urea was involved in two startling advances in biochemistry. It was the first organic compound to be synthesized from an inorganic starting material (see Chapter 13) and the enzyme catalyzing its breakdown was the first to be crystallized (see Chapter 15).
Russ Hobbie and I mention urea again when we discuss headaches in renal dialysis.
Dialysis is used to remove urea from the plasma of patients whose kidneys do not function. Urea is in the interstitial brain fluid and the cerebrospinal fluid in the same concentration as in the plasma; however, the permeability of the capillary–brain membrane is low, so equilibration takes several hours (Patton et al. 1989, Chap. 64). Water, oxygen, and nutrients cross from the capillary to the brain at a much faster rate than urea. As the plasma urea concentration drops, there is a temporary osmotic pressure difference resulting from the urea within the brain. The driving pressure of water is higher in the plasma, and water flows to the brain interstitial fluid. Cerebral edema results, which can cause severe headaches.
A Short History of Biology, by Isaac Asimov, superimposed on Intermediate Physics for Medicine and Biology.
The role of urea in refuting “vitalism” is a fascinating story. Again I will let Asimov tell it, this time quoting from his book A Short History of Biology.
The Swedish chemist, Jöns Jakob Berzelius (1779–1848), suggested, in 1807, that substances obtained from living (or once-living) organisms be called “organic substances,” while all others be referred to as “inorganic substances.” He felt that while it was possible to convert organic substances to inorganic ones easily enough, the reverse was impossible except through the agency of life. To prepare organic substances from inorganic, some vital force present only in living tissue had to be involved.

This view, however, did not endure for long. In 1828, a German chemist, Friedrich Wöhler (1800–82), was investigating cyanides and related compounds; compounds which were then accepted as inorganic. He was heating ammonium cyanate and found, to his amazement, that he obtained crystals that, on testing, proved to be urea. Urea was the chief solid constituent of mammalian urine and was definitely an organic compound.
I guess urea earned its way into Figure 4.12. It is one of the key small molecules critical to life.

Friday, June 16, 2017

17 Reasons to Like Intermediate Physics for Medicine and Biology (Number 11 Will Knock Your Socks Off!)

Sometimes I read articles about blogging, and they often encourage me to make lists. So, here is a list of 17 reasons to like Intermediate Physics for Medicine and Biology. Enjoy!
  1. The book contains lots of homework problems. You learn best by doing, and there are many problems to do. 
  2. Each chapter contains a detailed list of symbols to help you keep all the math straight. 
  3. We wrote appendices about several mathematical topics, in case you need a review. 
  4. The references at the end of each chapter provide additional information. 
  5. My ideal bookshelf contains IPMB plus many related classics. 
  6. Instructors can request a solution manual with answers to all the homework problems. Email Russ Hobbie or me to learn more.
  7. Russ and I worked hard to make sure the index is accurate and complete. 
  8. See a list of my favorite illustrations from the book, including this one: 
    A drawing showing the time course of the dipole of the heart, from Intermediate Physics for Medicine and Biology.
  9. A whole chapter is dedicated to the exponential function. What more could you ask? 
  10. Equations. Lots and lots of equations.
  11. A focus on mathematical modeling, especially in the homework problems. When I teach a class based on IPMB, I treat it as a workshop on modeling in medicine and biology. 
  12. See the video about a computer program called MacDose that Russ Hobbie made to explain the interaction of radiation with tissue. 
  13. We tried to eliminate any mistakes from IPMB, but because that is impossible, we list all known errors in the Errata. 
  14. How many of your textbooks have been turned into a word cloud? 
    A word cloud, from the textbook Intermediate Physics for Medicine and Biology.
  15. IPMB helps students prepare for the MCAT. 
  16. Computer programs illustrate complex topics, such as the Hodgkin-Huxley model of a nerve axon. 
  17. Most importantly, IPMB has its own blog! How often do you have an award-winning blog associated with a textbook? The blog is free, and it’s worth every penny! 

Friday, June 9, 2017

Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

A fascinating paper, titled “Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields,” was published in the June 1 issue of Cell (Volume 169, Pages 1029–1041) by Nir Grossman and his colleagues. Although I don’t agree with everything the authors say (I never do), on the whole this study is an important contribution. You may have seen Pam Belluck's article about it in the New York Times. Below is Grossman et al.’s abstract.
We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.
The gist of the method is to apply two electric fields to the brain, one with frequency f1 and the other with frequency f2, where f2 = f1 + Δf with Δf small. The result is a carrier with a frequency equal to the average of f1 and f2, modulated by a beat frequency equal to Δf. For instance, the study uses two currents having frequencies f1 = 2000 Hz and f2 = 2010 Hz, resulting in a carrier frequency of 2005 Hz and a beat frequency of 10 Hz. When they use this current to stimulate a mouse brain, the mouse neurons respond at a frequency of 10 Hz.
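Here is a quick numerical sketch of that arithmetic (the two frequencies are from the paper; the amplitudes and time grid are illustrative choices of mine):

```python
import numpy as np

f1, f2 = 2000.0, 2010.0            # Hz, as in Grossman et al.
t = np.arange(0.0, 0.5, 1e-5)      # half a second in 10-microsecond steps

# Sum of two unit-amplitude fields along the same direction
E = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# Product-to-sum identity: E = 2*cos(pi*(f2-f1)*t) * sin(pi*(f1+f2)*t),
# a 2005 Hz carrier whose envelope beats at f2 - f1 = 10 Hz.
envelope = 2*np.abs(np.cos(np.pi*(f2 - f1)*t))
```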

The paper uses some fancy language, like the neuron “demodulating” the stimulus and responding to the “temporal interference.” I think there is a simpler explanation. The authors show that, in general, a nerve does not respond to a stimulus at a frequency of 2000 Hz, except that when this stimulus is first turned on there is a transient excitation. I would describe their beat-frequency stimulus as being like a 2000 Hz current turned on and off repeatedly. Each time the stimulus turns on (every 100 ms) you get a transient response. This gives you a neural response at 10 Hz, as observed in the experiment. In other words, a sinusoidally modulated carrier doesn’t act so differently from a carrier turned on and off at the same rate (modulated by a square wave), as shown in the picture below. The transient response is the key to understanding its action.

A comparison of a beat frequency and a frequency modulated by a square wave.
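For comparison, here is the square-wave version sketched in the figure: the same 2005 Hz carrier simply gated on and off at 10 Hz (again, illustrative only).

```python
import numpy as np

fc, fb = 2005.0, 10.0              # carrier and gating frequencies, Hz
t = np.arange(0.0, 0.5, 1e-5)

gate = (np.cos(2*np.pi*fb*t) > 0)           # on for half of each 100 ms cycle
E_gated = 2*np.sin(2*np.pi*fc*t) * gate     # carrier switched on and off at 10 Hz
```

Each turn-on is the abrupt transient that, in this reading, is what excites the neuron.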

Stimulating neurons at the beat frequency is an amazing result. Why didn’t I think of that? Just as astonishing is the ability to selectively stimulate neurons deep in the brain. We used to worry about this a lot when I worked on magnetic stimulation at the National Institutes of Health, and we concluded that it was impossible. The argument was that the electric field obeys Laplace’s equation (the wave equation under conditions when propagation effects are negligible, so you can ignore the time derivatives), and a solution to Laplace’s equation cannot have a local maximum. But the argument doesn’t seem to hold when you stimulate using two different frequencies. The reason is that a weak single-frequency field doesn’t excite neurons (the field strength is below threshold) and a strong single-frequency field doesn’t excite neurons (the stimulus is so large and rapid that the neuron is always refractory). You need two fields of about the same strength but slightly different frequencies to get the on/off behavior that causes the transient excitation. I see no reason why you can’t get such excitation to occur selectively at depth, as the authors suggest. Wow! Again, why didn’t I think of that?

I find it interesting to analyze how the electric field behaves. Suppose you have two electric fields, one at frequency f1 that oscillates back-and-forth along a direction down and to the left, and another at frequency f2 that oscillates back-and-forth along a direction down and to the right (see the figure below). When the two electric fields are in phase, their horizontal components cancel and their vertical components add, so the result is a vertically oscillating electric field (vertical polarization). When the two electric fields are 180 degrees out of phase, their vertical components cancel and their horizontal components add, so the result is a horizontally oscillating electric field (horizontal polarization). At times when the two electric fields are 90 degrees out of phase, the electric field is rotating (circular polarization). Therefore, the electric field’s amplitude doesn’t change much, but its polarization modulates with the beat frequency. If you are stimulating an axon for which only the electric field component along its length matters for excitation, you project the modulated polarization onto the axon direction and recover the beat-frequency electric field discussed in the paper. It’s almost like optics. (OK, maybe “temporal interference” isn’t such a bad phrase after all.)
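Here is a sketch of that geometry (the ±45-degree directions and unit amplitudes are my illustrative choices, not the paper’s):

```python
import numpy as np

f1, f2 = 2000.0, 2010.0
u1 = np.array([-1.0, -1.0]) / np.sqrt(2)      # down and to the left
u2 = np.array([ 1.0, -1.0]) / np.sqrt(2)      # down and to the right
t = np.arange(0.0, 0.2, 1e-5)

# Each source oscillates along its own fixed direction; the total field
# cycles from vertical to circular to horizontal polarization at 10 Hz.
E = (np.sin(2*np.pi*f1*t)[:, None] * u1 +
     np.sin(2*np.pi*f2*t)[:, None] * u2)

# Project onto an axon along a chosen direction; the projected component
# is a carrier modulated at the beat frequency, as in the paper.
theta = np.deg2rad(90.0)                      # a vertical axon, say
E_axon = E @ np.array([np.cos(theta), np.sin(theta)])
```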

An explanation of how a circularly polarized electric field is produced during neural stimulation.

A good paper raises as many questions as it answers. For instance, how exactly does a nerve respond to a beat-frequency electric field? I would like to see a computer simulation of this case based on a neural excitation model, such as the Hodgkin-Huxley model. (You can learn more about the Hodgkin-Huxley model in Chapter 6 of Intermediate Physics for Medicine and Biology; you knew I was going to get a plug for the book in here somewhere.) Also, unlike long straight axons in the peripheral nervous system, neurons in the brain bend and branch, so different neurons may respond to electric fields in different (or all) directions. How does such a neuron respond to a circularly polarized electric field?

When I first read the paper’s final sentence—“We anticipate that [the method of beat-frequency stimulation] might rapidly be deployable into human clinical trials, as well as studies of the human brain”—I was skeptical. Now that I’ve thought about it more, I’m willing to…ahem…not dismiss this claim out of hand. It might work. Maybe. There is still the issue of getting a current applied to the scalp into the brain through the high-resistance skull, which is why transcranial magnetic stimulation is more common than transcranial electric stimulation for clinical applications. I don’t know if this new method will ultimately work, but Grossman et al. will have fun giving it a try.

Friday, June 2, 2017

Internal Conversion

In Chapter 17 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss nuclear decay.
Whenever a nucleus loses energy by γ-decay, there is a competing process called internal conversion. The energy to be lost in the transition, Eγ, is transferred directly to a bound electron, which is then ejected with a kinetic energy
T = Eγ − B,
where B is the binding energy of the electron.
What is the energy of these ejected internal conversion electrons? Does the most important γ-emitter for medical physics, 99mTc, decay by internal conversion? To answer these questions, we need to know the binding energy B. Table 15.1 of IPMB provides the energy levels for tungsten; the table below gives similar data for technetium.

level    energy (keV)
K        -21.044
L_I       -3.043
L_II      -2.793
L_III     -2.677
M_I       -0.544
M_II      -0.448
M_III     -0.418
M_IV      -0.258
M_V       -0.254

The binding energy B is just the negative of the energy listed above. During internal conversion, most often a K-shell electron is ejected. The most common γ-ray emitted during the decay of 99mTc has an energy of 140.5 keV. Thus, K-shell internal conversion electrons are emitted with energy 140.5 – 21.0 = 119.5 keV. If you look at the tabulated data in Fig. 17.4 in IPMB, giving the decay data for 99mTc, you will find the internal conversion of a K-shell electron (“ce-K”) for γ-ray 2 (the 140.5 keV gamma ray) has this energy (“1.195E-01 MeV”). The energy of internal conversion electrons from other shells is greater, because the electrons are not held as tightly.
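As a quick check, the same arithmetic for every shell in the table above takes only a few lines:

```python
# Conversion-electron kinetic energies T = E_gamma - B for 99mTc (all in keV)
E_gamma = 140.5
B = {"K": 21.044, "L_I": 3.043, "L_II": 2.793, "L_III": 2.677,
     "M_I": 0.544, "M_II": 0.448, "M_III": 0.418, "M_IV": 0.258, "M_V": 0.254}

for shell, binding in B.items():
    print(f"ce-{shell}: {E_gamma - binding:.1f} keV")    # ce-K: 119.5 keV
```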

Auger electrons also come spewing out of technetium following internal conversion. These electrons arise, for instance, when the just-created hole in the K-shell is filled by another electron. This process can be accompanied by emission of a characteristic x-ray, or by ejection of an Auger electron. Suppose internal conversion ejects a K-shell electron, and then the hole is filled by an electron from the L-shell, with ejection of a second L-shell electron (the Auger electron). We would refer to this as a “KLL” process, and the Auger electron energy would equal the K-shell binding energy minus the binding energies of the two L-shell electrons, or 21 − 3 − 3 = 15 keV. This value is approximate, because the L_I, L_II, and L_III binding energies are all slightly different.
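Here is that estimate in two lines (using the L_I binding energy for both L electrons, which is one reason the number is only approximate):

```python
# Approximate KLL Auger-electron energy for technetium (keV)
B_K, B_L = 21.044, 3.043         # K shell; L_I standing in for both L electrons
print(f"{B_K - 2*B_L:.1f} keV")  # about 15 keV
```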

In general, Auger electron energies are much less than internal conversion electron energies, because nuclear energy levels are more widely spaced than electron energy levels. For 99mTc, the internal conversion electron has an energy of 119.5 keV compared to a typical Auger electron energy of 15 keV (Auger electron energies for other processes are often smaller).

Another important issue is what fraction of decays are internal conversion versus gamma emission. This can be quantified using the internal conversion coefficient, defined as the number of internal conversions over the number of gamma emissions. Figure 17.4 in IPMB has the data we need to calculate the internal conversion coefficient. The mean number of gamma rays (only considering γ-ray 2) per disintegration is 0.891, whereas the mean number of internal conversion electrons per disintegration is 0.0892+0.0099+0.0006+0.0003+0.0020+0.0004 = 0.1024 (adding the contributions for all the important energy levels). Thus, the internal conversion coefficient is 0.1024/0.891 = 0.115.
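The same bookkeeping in a few lines (the numbers are transcribed from Fig. 17.4):

```python
# Internal conversion coefficient for the 140.5 keV transition of 99mTc
n_gamma = 0.891                       # gamma rays per disintegration
n_ce = (0.0892 + 0.0099 + 0.0006 +
        0.0003 + 0.0020 + 0.0004)     # conversion electrons per disintegration
print(f"alpha = {n_ce/n_gamma:.3f}")  # alpha = 0.115
```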

The ideal isotope for medical imaging would have no internal conversion, which adds nothing to the image but contributes to the dose. Technetium, which has so many desirable properties, also has a small internal conversion coefficient. It really is the ideal radioisotope for medical imaging.