Friday, July 28, 2017

Suki is Going Deaf

Suki Roth, in front of a copy of Intermediate Physics for Medicine and Biology.
Suki is going deaf. She has not lost all her hearing yet, but when I call her in a normal voice she does not respond. She used to jump up when she heard me get the leash for a walk, but now I have to show it to her. Before, she was scared of thunderstorms, but lately she snoozes through all but the loudest rumbles. In the past she got excited when the garage door opened, but nowadays she ignores it. Suki will be 15 years old this October, so such problems are expected. Still, I’m sad to see her sink into silence.

I think my hearing is getting worse too, but slowly. My dad uses a hearing aid, and I take after him. I find myself asking “what did you say?” more often than other people do. I decided to test myself using the website http://www.animations.physics.unsw.edu.au/jw/hearing.html. Below I plot my hearing (red dots) as a function of frequency, and compare it with the normal hearing response of a young adult (solid curve) shown in Figure 13.7 of Intermediate Physics for Medicine and Biology. I normalized the two curves so they are equal at 1 kHz.
My hearing response curve: the threshold sound intensity versus frequency.
My hearing appears normal except for an odd deficit around 3000–4000 Hz. Also, I may be missing some high frequencies, but the loss is not dramatic.

I didn’t follow the website’s instructions exactly. I plotted the lowest intensity tone that I could just hear. I don’t trust this test, performed on myself using a website; it is very subjective, and the loudness changes in large 3 dB steps. (In case you do not have IPMB handy, Eq. 13.34 indicates that a ten-fold change in intensity corresponds to a step of 10 in decibels.) I would be interested in hearing (get it?) if you have a similar result using this website.
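In case you want to check that parenthetical, the arithmetic follows from the definition of sound intensity level (the content of Eq. 13.34):

β = 10 log₁₀(I/I₀) dB,

so a tenfold change in intensity (I/I₀ = 10) is a step of 10 dB, and a 3 dB step corresponds to I/I₀ = 10^(3/10) ≈ 2, roughly a doubling of the intensity.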

Age-related hearing loss is called presbycusis. Wikipedia says it is the second most common illness in the elderly, after arthritis. Normally we lose the high frequencies as we age, which has implications for how teenagers choose ringtones.


In the video above, I could hear the 8 kHz ringtone but not the 12 kHz or higher ones. Can you? I am not sure whether it is me, my computer, or the video.

I may be losing some hearing, but probably not much. (Perhaps I just don’t pay attention when my wife talks to me.) Suki, however, is in worse shape. She is the world’s best pet, and I intend to give her extra treats to make up for her lack of hearing.

Friday, July 21, 2017

Do I Make Myself Clear?

Do I Make Myself Clear? Why Writing Well Matters, by Harold Evans.
I enjoy reading books about writing. Recently I read Do I Make Myself Clear? Why Writing Well Matters by Harold Evans. One of Evans’s pet peeves is “unnoticed redundancies, such as complete monopoly and awkward predicament, that do not add to the sense of the message.” He provides over 250 examples, with instructions to “strike out the words in italics.”

Of course, I became curious about how Intermediate Physics for Medicine and Biology fared with these redundancies. So I hunted for them using the search box in my PDF version of IPMB. Most were absent, but a few appeared. I’m not sure they are always bad; you can decide for yourself. I enjoy doing this sort of thing, but is it fair to subject my dear readers to this analysis? I believe that writing well is critical for scientists; if pointing out some sloppy writing in IPMB can help others tighten their prose, the effort is worthwhile.
all of

Russ Hobbie and I occasionally include the unnecessary “of,” such as on page 59, “all of the external parameters,” which would sound tighter with no loss of meaning as “all the external parameters.”

a distance of

Evans puts the whole phrase in italics, which must mean he thinks it is unnecessary. Our text would probably be better if we deleted “a distance of” from the homework problem on page 497: “Use the appropriate values for striated muscle to estimate the dose to the gonads if they are at a distance of 50 cm from the x-ray tube.”

a number of examples

While Russ and I don’t use “a number of” with the word “examples,” we often write “a number of.” Sometimes we mean “several,” which I think is OK (although it sounds slightly pompous). My guess is that Evans is concerned primarily with cases when “a number of” could be deleted with no loss of meaning. I found a few examples in IPMB, such as on page 489, “irradiating the patient through a number of absorbers of different thickness spreads out the region of maximum dose” (and should it be “thicknesses”?), and especially page 514, “A number of more complicated situations are solved by Loevinger et al.”

a period of

I suspect that Evans is irritated by authors who write “a period of time,” which Russ and I never do. Sometimes we use “a period of” in the mathematical sense of the repeat time of a periodic function, such as on page 342: “If you are told that there is a signal in these data with a period of 4 s, you can group them together and average them.” No change is needed there. On the other hand, this text from page 39 is a borderline case: “Figure 2.10 shows the survival of patients with congestive heart failure for a period of 9 years.” To me our prose sounds fine; I’m not sure what Evans would say.

appear to be

I admit, we occasionally have the unneeded “to be” after “appear,” such as on page 178 “does the charge distribution appear to be continuous or discrete?” and page 297 “do the results appear to be chaotic?” I write mainly by ear, and my ear isn’t bothered by “appear to be.” I am left wondering: “to be,” or not “to be,” that is the question.

as yet

On page 134 we write “There is evidence that some as yet unidentified toxin of medium molecular weight accumulates in the blood.” Yes, I concede the sentence would sound better if we deleted the “as.”

close proximity

I agree with Evans that the “close” is bothersome. Russ and I never include a “close” with our “proximity,” except once, on page 483, where we had no choice because it appears inside a quote: “The bystander effect in radiobiology refers to the ‘induction of biological effects in cells that are not directly traversed by a charged particle, but are in close proximity to cells that are.’”

completely untrue

I think Evans’s point is that a statement can be either true or untrue, with no intermediate case, so completely is redundant. I’m not sure science is so black and white. Sometimes you can have an approximation that is very accurate, but technically untrue (Newtonian mechanics is almost true for speeds much less than the speed of light, but not completely true). Perhaps a better example is the cliché “completely pregnant.”

We have a lot of “completely”s in IPMB, most of which I am comfortable with. One questionable case appears on page 125: “if a solute is present to which the membrane is completely impermeable...” At first the completely sounds unneeded—a membrane is either permeable or it is not—but we had just introduced the hydraulic permeability, a parameter that can be very small without being zero. Saying “completely impermeable” is probably fine when we mean the limit as the hydraulic permeability goes to zero. I side with Evans that completely is unnecessary on page 88, “this differential form of the continuity equation is completely equivalent to the integral form,” and on page 279, “Jules Henri Poincaré realized around 1900 that systems described exactly by the completely deterministic equations of Newton’s laws could exhibit wild behavior.”

depreciated in value

Although we don’t use “depreciated,” this wordy sentence from page 33 would be improved by deleting “in value”: “if the interest rate is 5% and if the interest is credited to the account once a year, the account increases in value by 5% of its present value each year.”

divide up

Russ and I sin only once, on page 144: “Divide up any closed surface into elements of surface area...”

end up

You tell me if this sentence from page 510 sounds better without the “up”; my ear can’t decide: “When a radiopharmaceutical is given to a patient for either diagnosis or therapy, the nuclei end up in different organs in varying amounts.”

have got

Sometimes Evans is like the Lorax: correct but annoying. I suppose this sentence from page 607 should not have the “gotten,” but the change seems so picky: “This is the same answer we would have gotten if h had been regarded as a constant.”

it is interesting to note that

We never use this exact phrase, but on page 248 “it is interesting to compare this to Eq. 9.38” would sound better as the command “compare this to Eq. 9.38.” I probably would not change page 11: “it is interesting to read what an orthopedic surgeon had to say about the use of a cane.”

past history

I hadn’t really thought about this redundancy until Evans pointed it out. He is right that “past history” is redundant, and I would change several such cases in IPMB, including page 57, “it is independent of the past history of the system and is specified by a few macroscopic parameters.”

Do I Make Myself Clear? is a fine book, although in my opinion it is not as good as Zinsser’s On Writing Well. Scientists are judged by their journal papers and grant proposals, both written documents. You need to write well, or your reputation will suffer. Eliminating minor redundancies is one way to make your writing clearer and more concise. Train your ear to listen for them.

Friday, July 14, 2017

Nerve, Muscle, and Synapse

Nerve, Muscle, and Synapse, by Bernard Katz.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I include a footnote at the start of Chapter 6:
A good discussion of the properties of nerves and the Hodgkin–Huxley experiments is found in Katz (1966).
Why do we cite a book that is over 50 years old? One reason is nostalgia. In 1982 I graduated from the University of Kansas with a physics major and entered graduate school at Vanderbilt. I began working with John Wikswo, who was measuring the magnetic field produced by a nerve axon, so I had to learn quickly how nerves work. One of the first books I read was Nerve, Muscle, and Synapse. What a lucky choice.

The author, Bernard Katz, led an interesting and productive life. Because of his Jewish background, in 1935 he fled Germany for England. There he worked with physiologist Archibald Hill (Katz dedicates Nerve, Muscle, and Synapse “to my friend and teacher, A. V. Hill”). He collaborated with Alan Hodgkin, and was a coauthor on one of the five famous papers from 1952 that established the Hodgkin and Huxley model (see Chapter 6 of IPMB for more on this model). He also published a paper with Hodgkin about electric current flowing through a membrane, leading to the Goldman-Hodgkin-Katz equation discussed in Sec. 9.6 of IPMB (Goldman derived this equation independently of Hodgkin and Katz).

Katz won his Nobel Prize for discovering the discrete nature of acetylcholine release at the nerve-muscle synapse, which explains the book’s title. I was glancing through his Chapter 9 on the Quantal Nature of Chemical Transmission when I saw an example analyzed using Poisson statistics and I thought to myself “Hey, that looks familiar.” His example uses the same data that Russ and I present in our Appendix J about the Poisson Distribution. We had a common source: IPMB and Katz both cite work by Boyd and Martin.
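If you want to play with the quantal analysis yourself, here is a minimal sketch of the Poisson calculation Katz describes; the mean quantal content m below is a made-up number for illustration, not Boyd and Martin’s data.

```python
from math import exp, factorial

def poisson(n, m):
    """Probability of releasing exactly n quanta when the mean number is m."""
    return m**n * exp(-m) / factorial(n)

m = 2.3  # hypothetical mean quantal content, for illustration only
for n in range(6):
    print(f"P(n = {n}) = {poisson(n, m):.3f}")
```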

One reason I like Nerve, Muscle, and Synapse is that it contains a lot of physics. In his foreword, George Wald writes
Professor Katz has produced here the elementary text we asked of him, but also much more. He goes far beyond the first essentials to develop the subject in depth. He has the gift of a graphic style and the apt phrase. What impresses me particularly is that each idea is pursued to the numerical level. Each theoretical development comes out in this form, in clearly stated problems worked through with the relevant numbers. But the treatment as a whole extends beyond this also, asking and answering the basic questions that few workers in electrophysiology probably have taken the trouble to pursue so far. All this is done with an easy mastery of the underlying physics and physical chemistry.
That’s high praise. Russ and I take a similar approach in IPMB, pursuing topics to the numerical level (sometimes in the text, and sometimes in the homework). Nerve, Muscle, and Synapse shares another trait with IPMB: it uses calculus without apology.

If you are looking for the most up-to-date textbook on nerve electrophysiology, you should search for a more recent publication (perhaps the latest edition of From Neuron to Brain). But, if you’re a physicist trying to learn something about how nerves work, Katz’s book remains a useful introduction. That’s why Russ and I still cite it.

Friday, July 7, 2017

Bioelectricity: A Quantitative Approach

The best way to learn about bioelectricity is to read Chapters 6–9 in Intermediate Physics for Medicine and Biology. But suppose, for some odd and incomprehensible reason, you seek an alternative to IPMB. Another option is to enroll in Roger Barr’s MOOC (massive open online course) “Bioelectricity: A Quantitative Approach” through Coursera.

I enrolled and am going through the course (if you don’t want a certificate, which I don’t need, the course is free). The website says the course begins July 17, but all the videos and course materials are accessible now. I’m curious to know what is going to happen in ten days.

Below is the summary from an article about this course, published after Barr first taught the MOOC in 2012.
After only three months for planning and development, Duke University and Dr. Roger Barr successfully delivered a challenging open online course via Coursera to thousands of students around the world. Lessons learned from this experience have contributed to the strategic goals of Duke’s Online Initiatives.
  • Over 600 hours of effort were required to build and deliver the course, including more than 420 hours of effort by the instructor. 
  • The course launched on schedule and was successfully completed by hundreds of students. Many hundreds more continued to participate in other ways. The number of students actively participating plateaued at around 1000 per week. 
  • Over 12,000 students enrolled, representing more than 100 countries. Approximately 8,000 of these students logged in during the first week. 
  • At the time of enrollment, one-third of enrolled students held less than a four-year degree, one-third held a Bachelor’s or equivalent, and one-third held an advanced degree.
  • 25% of students who took both Week 1 quizzes successfully completed the course, including 313 students from at least 37 countries. Course completers typically held a Bachelor’s degree or higher; however, at least 10 pre-college students were among those who successfully completed this challenging upper level undergraduate course. 
  • Students who did not complete all requirements cited a lack of time, insufficient math background or having intended to only view the lectures from the outset. Regardless of completion status, many students were primarily seeking enjoyment or educational enrichment.
  • Most students reported a positive learning experience and rated the course highly, including ones who did not complete all requirements.
  • The Coursera platform met the needs of the course in spite of being continuously under development while the course was live. Technical issues reported by the students and instructor were generally minor, of short duration and/or quickly resolved. 
  • Patience, flexibility and resilience on the part of instructor, Coursera students, CIT staff, and Duke University Office of Information Technology media services staff were key elements in the success of this course.
Bioelectricity: A Quantitative Approach, by Robert Plonsey and Roger Barr, superimposed on Intermediate Physics for Medicine and Biology.
Bioelectricity: A Quantitative Approach,
by Robert Plonsey and Roger Barr.
Barr has published extensively in bioelectricity, particularly about the electrical properties of the heart. My favorite articles are two he wrote with Robert Plonsey in 1984: “Current Flow Patterns in Two-Dimensional Anisotropic Bisyncytia with Normal and Extreme Conductivities,” Biophysical Journal 45: 557–571, and “Propagation of Excitation in Idealized Anisotropic Two-Dimensional Tissue,” Biophysical Journal 45: 1191–1202. I used Plonsey and Barr’s textbook Bioelectricity: A Quantitative Approach (which the Coursera MOOC is based on) in a graduate bioelectricity class for several semesters, until I decided to base the class entirely on published articles in the scientific literature (something like a journal club).

So far I like the MOOC, although I have only just started. It is the SECOND best way to learn about bioelectricity.


Friday, June 30, 2017

The Fast Fourier Transform

In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the fast Fourier transform.
The calculation of the Fourier coefficients using our equations involves N evaluations of the sine or cosine, N multiplications, and N additions for each coefficient. There are N coefficients, so that there must be N² evaluations of the sines and cosines, which uses a lot of computer time. Cooley and Tukey (1965) showed that it is possible to group the data in such a way that the number of multiplications is about (N/2)log₂N instead of N² and the sines and cosines need to be evaluated only once, a technique known as the fast Fourier transform (FFT).
Additional analysis of the FFT is found in the homework problems at the end of the chapter.
Problem 17. This problem provides some insight into the fast Fourier transform. Start with the expression for an N-point Fourier transform in complex notation, Yₖ in Eq. 11.29a. Show that Yₖ can be written as the sum of two N/2-point Fourier transforms: Yₖ = ½[Yₖᵉ + Wᵏ Yₖᵒ], where W = exp(−i2π/N), superscript e stands for even values of j, and o stands for odd values.
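Here is a sketch of the key step, assuming the 1/N normalization of Eq. 11.29a. Write the even samples as j = 2m and the odd samples as j = 2m + 1; since exp(−i2π(2m+1)k/N) = Wᵏ exp(−i2πmk/(N/2)), the sum splits into two N/2-point transforms:

Yₖ = (1/N) Σⱼ yⱼ exp(−i2πjk/N)
   = ½ [(2/N) Σₘ y₂ₘ exp(−i2πmk/(N/2))] + ½ Wᵏ [(2/N) Σₘ y₂ₘ₊₁ exp(−i2πmk/(N/2))]
   = ½ [Yₖᵉ + Wᵏ Yₖᵒ].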
Numerical Recipes: The Art of Scientific Computing, by Press et al.
The FFT is a famous algorithm in the field of numerical methods. Below is how Press et al. describe it in one of my favorite books, Numerical Recipes.
The discrete Fourier transform can, in fact, be computed in O(N log₂N) operations with an algorithm called the fast Fourier transform, or FFT. The difference between N log₂N and N² is immense. With N = 10⁶, for example, it is the difference between, roughly, 30 seconds of CPU time and 2 weeks of CPU time on a microsecond cycle time computer. The existence of an FFT algorithm became generally known only in the mid-1960s, from the work of J. W. Cooley and J. W. Tukey. Retrospectively, we now know…that efficient methods for computing the DFT [discrete Fourier transform] had been independently discovered, and in some cases implemented, by as many as a dozen individuals, starting with Gauss in 1805!

One “rediscovery” of the FFT, that of Danielson and Lanczos in 1942, provides one of the clearest derivations of the algorithm. Danielson and Lanczos showed that a discrete Fourier transform of length N can be rewritten as the sum of two discrete Fourier transforms, each of length N/2. One of the two is formed from the even-numbered points of the original N, the other from the odd-numbered points…

The wonderful thing about the Danielson-Lanczos Lemma is that it can be used recursively. Having reduced the problem of computing Fₖ to that of computing Fₖᵉ and Fₖᵒ, we can do the same reduction of Fₖᵉ to the problem of the transform of its N/4 even-numbered input data and N/4 odd-numbered data…

Although there are ways of treating other cases, by far the easiest case is the one in which the original N is an integer power of 2…With this restriction on N, it is evident that we can continue applying the Danielson-Lanczos Lemma until we have subdivided the data all the way down to transforms of length 1…The points as given are the one-point transforms. We combine adjacent pairs to get two-point transforms, then combine adjacent pairs of pairs to get 4-point transforms, and so on, until the first and second halves of the whole data set are combined into the final transform. Each combination takes on order N operations, and there are evidently log₂N combinations, so the whole algorithm is of order N log₂N.
This process, called decimation-in-time, is summarized in this lovely butterfly diagram.

A butterfly diagram of a decimation-in-time fast Fourier transform.
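To make the recursion concrete, here is a minimal decimation-in-time FFT in Python. It is a sketch of the Danielson-Lanczos idea, not production code; it assumes N is a power of 2 and returns unnormalized sums (divide by N if you use IPMB’s 1/N convention).

```python
import cmath

def fft(y):
    """Recursive radix-2 decimation-in-time fast Fourier transform."""
    N = len(y)
    if N == 1:
        return list(y)                   # a one-point transform is the point itself
    even = fft(y[0::2])                  # N/2-point transform of even-numbered samples
    odd = fft(y[1::2])                   # N/2-point transform of odd-numbered samples
    out = [0j] * N
    for k in range(N // 2):
        w = cmath.exp(-2j * cmath.pi * k / N)   # twiddle factor W^k
        out[k] = even[k] + w * odd[k]           # butterfly, first half
        out[k + N // 2] = even[k] - w * odd[k]  # W^(k+N/2) = -W^k gives the second half
    return out
```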

Friday, June 23, 2017

Urea

Figure 4.12 of Intermediate Physics for Medicine and Biology shows a log-log plot of the diffusion constant of various molecules as a function of molecular weight. In the top panel of the figure, containing the small molecules, only four are listed: water (H₂O), oxygen (O₂), glucose (C₆H₁₂O₆), and urea (CO(NH₂)₂). Water, oxygen, and glucose are obvious choices; they are central to life. But what did urea do to make the cut? And just what is urea, anyway?

Life and Energy, by Isaac Asimov.
I will let Isaac Asimov explain urea’s importance. In his book Life and Energy he writes
Now let us turn to the proteins, which, after digestion, enter the body in the form of amino acids. Before these can be utilized for the production of useful energy they must be stripped of their nitrogen.

In 1773 the French chemist G. F. Rouelle (Lavoisier’s teacher) discovered a nitrogenous compound in urine and named it ‘urea’ after its source. Once the composition of proteins began to be studied at the beginning of the nineteenth century, urea was at once recognized as the obvious route by which the body excreted the nitrogen of protein.

Its formula was shown to be
The structure of urea.
or, more briefly, NH₂CONH₂, once structural formulas became the order of the day. As it happens, urea was involved in two startling advances in biochemistry. It was the first organic compound to be synthesized from an inorganic starting material (see Chapter 13) and the enzyme catalyzing its breakdown was the first to be crystallized (see Chapter 15).
Russ Hobbie and I mention urea again when we discuss headaches in renal dialysis.
Dialysis is used to remove urea from the plasma of patients whose kidneys do not function. Urea is in the interstitial brain fluid and the cerebrospinal fluid in the same concentration as in the plasma; however, the permeability of the capillary–brain membrane is low, so equilibration takes several hours (Patton et al. 1989, Chap. 64). Water, oxygen, and nutrients cross from the capillary to the brain at a much faster rate than urea. As the plasma urea concentration drops, there is a temporary osmotic pressure difference resulting from the urea within the brain. The driving pressure of water is higher in the plasma, and water flows to the brain interstitial fluid. Cerebral edema results, which can cause severe headaches.
A Short History of Biology, by Isaac Asimov.
The role of urea in refuting “vitalism” is a fascinating story. Again I will let Asimov tell it, this time quoting from his book A Short History of Biology.
The Swedish chemist, Jöns Jakob Berzelius (1779–1848), suggested, in 1807, that substances obtained from living (or once-living) organisms be called “organic substances,” while all others be referred to as “inorganic substances.” He felt that while it was possible to convert organic substances to inorganic ones easily enough, the reverse was impossible except through the agency of life. To prepare organic substances from inorganic, some vital force present only in living tissue had to be involved.

This view, however, did not endure for long. In 1828, a German chemist, Friedrich Wöhler (1800–82), was investigating cyanides and related compounds; compounds which were then accepted as inorganic. He was heating ammonium cyanate and found, to his amazement, that he obtained crystals that, on testing, proved to be urea. Urea was the chief solid constituent of mammalian urine and was definitely an organic compound.
I guess urea earned its way into Figure 4.12. It is one of the key small molecules critical to life.

Friday, June 16, 2017

17 Reasons to Like Intermediate Physics for Medicine and Biology (Number 11 Will Knock Your Socks Off!)

Sometimes I read articles about blogging, and they often encourage me to make lists. So, here is a list of 17 reasons to like Intermediate Physics for Medicine and Biology. Enjoy!
  1. The book contains lots of homework problems. You learn best by doing, and there are many problems to do. 
  2. Each chapter contains a detailed list of symbols to help you keep all the math straight. 
  3. We wrote appendices about several mathematical topics, in case you need a review. 
  4. The references at the end of each chapter provide additional information. 
  5. My ideal bookshelf contains IPMB plus many related classics. 
    The logo for the Facebook page of the textbook Intermediate Physics for Medicine and Biology.
  6. Instructors can request a solution manual with answers to all the homework problems. Email Russ Hobbie or me to learn more.
  7. Russ and I worked hard to make sure the index is accurate and complete. 
  8. See a list of my favorite illustrations from the book, including this one: 
    A drawing showing the time course of the dipole of the heart, from Intermediate Physics for Medicine and Biology.
  9. A whole chapter is dedicated to the exponential function. What more could you ask? 
  10. Equations. Lots and lots of equations.
  11. A focus on mathematical modeling, especially in the homework problems. When I teach a class based on IPMB, I treat it as a workshop on modeling in medicine and biology. 
  12. See the video about a computer program called MacDose that Russ Hobbie made to explain the interaction of radiation with tissue. 
  13. We tried to eliminate any mistakes from IPMB, but because that is impossible we list all known errors in the Errata.
  14. How many of your textbooks have been turned into a word cloud? 
    A word cloud, from the textbook Intermediate Physics for Medicine and Biology.
  15. IPMB helps students prepare for the MCAT.
  16. Computer programs illustrate complex topics, such as the Hodgkin-Huxley model of a nerve axon. 
  17. Most importantly, IPMB has its own blog! How often do you have an award-winning blog associated with a textbook? The blog is free, and it’s worth every penny!

Friday, June 9, 2017

Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

A fascinating paper, titled “Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields,” was published in the June 1 issue of Cell (Volume 169, Pages 1029–1041) by Nir Grossman and his colleagues. Although I don’t agree with everything the authors say (I never do), on the whole this study is an important contribution. You may have seen Pam Belluck's article about it in the New York Times. Below is Grossman et al.’s abstract.
We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.
The gist of the method is to apply two electric fields to the brain, one with frequency f1 and the other with frequency f2, where f2 = f1 + Δf with Δf small. The result is a carrier with a frequency equal to the average of f1 and f2, modulated by a beat frequency equal to Δf. For instance, the study uses two currents having frequencies f1 = 2000 Hz and f2 = 2010 Hz, resulting in a carrier frequency of 2005 Hz and a beat frequency of 10 Hz. When they use this current to stimulate a mouse brain, the mouse neurons respond at a frequency of 10 Hz.
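The carrier and beat follow from the sum-to-product trigonometric identity:

sin(2πf₁t) + sin(2πf₂t) = 2 cos(πΔf t) sin(2πf̄t),

where f̄ = (f₁ + f₂)/2 and Δf = f₂ − f₁. The result is a carrier at f̄ = 2005 Hz whose envelope, |2 cos(πΔf t)|, waxes and wanes ten times per second.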

The paper uses some fancy language, like the neuron “demodulating” the stimulus and responding to the “temporal interference.” I think there is a simpler explanation. The authors show that in general a nerve does not respond to a stimulus at a frequency of 2000 Hz, except that when this stimulus is first turned on there is a transient excitation. I would describe their beat-frequency stimulus as being like turning a 2000 Hz current on and off. Each time the stimulus turns on (every 100 ms) you get a transient response. This gives you a neural response at 10 Hz, as observed in the experiment. In other words, a sinusoidally modulated carrier doesn’t act so differently from a carrier turned on and off at the same rate (modulated by a square wave), as shown in the picture below. The transient response is the key to understanding its action.

A comparison of a beat frequency and a frequency modulated by a square wave.
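If you want to reproduce a comparison like the one above, a few lines of Python will do; the sampling rate and plotting details are my own choices, not the paper’s.

```python
import numpy as np
import matplotlib.pyplot as plt

f1, f2 = 2000.0, 2010.0            # the two stimulus frequencies (Hz) used in the paper
t = np.arange(0.0, 0.3, 1e-5)      # 300 ms of signal sampled at 100 kHz

# Temporal-interference stimulus: the sum of the two sinusoids beats at f2 - f1 = 10 Hz
beat = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# For comparison, a 2000 Hz carrier gated on and off by a 10 Hz square wave
gate = (np.sin(2*np.pi*(f2 - f1)*t) > 0).astype(float)
gated = 2.0*np.sin(2*np.pi*f1*t)*gate

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, beat)
ax1.set_ylabel("beat")
ax2.plot(t, gated)
ax2.set_ylabel("gated")
ax2.set_xlabel("time (s)")
plt.show()
```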

Stimulating neurons at the beat frequency is an amazing result. Why didn’t I think of that? Just as astonishing is the ability to selectively stimulate neurons deep in the brain. We used to worry about this a lot when I worked on magnetic stimulation at the National Institutes of Health, and we concluded that it was impossible. The argument was that the electric field obeys Laplace’s equation (the wave equation when propagation effects are negligible, so you can ignore the time derivatives), and a solution to Laplace’s equation cannot have a local maximum. But the argument doesn’t seem to hold when you stimulate using two different frequencies. The reason is that a weak single-frequency field doesn’t excite neurons (the field strength is below threshold) and a strong single-frequency field doesn’t excite neurons (the stimulus is so large and rapid that the neuron is always refractory). You need two fields of about the same strength but slightly different frequencies to get the on/off behavior that causes the transient excitation. I see no reason why you can’t get such excitation to occur selectively at depth, as the authors suggest. Wow! Again, why didn’t I think of that?

I find it interesting to analyze how the electric field behaves. Suppose you have two electric fields, one at frequency f1 that oscillates back-and-forth along a direction down and to the left, and another at frequency f2 that oscillates back-and-forth along a direction down and to the right (see the figure below). When the two electric fields are in phase, their horizontal components cancel and their vertical components add, so the result is a vertically oscillating electric field (vertical polarization). When the two electric fields are 180 degrees out of phase, their vertical components cancel and their horizontal components add, so the result is a horizontally oscillating electric field (horizontal polarization). At times when the two electric fields are 90 degrees out of phase, the electric field is rotating (circular polarization). Therefore, the electric field’s amplitude doesn’t change much, but its polarization modulates with the beat frequency. If you are stimulating an axon for which only the electric field component along its length matters for excitation, you can project the modulated polarization onto the axon direction and recover the beat-frequency electric field discussed in the paper. It’s almost like optics. (OK, maybe “temporal interference” isn’t such a bad phrase after all.)

An explanation of how a circularly polarized electric field is produced during neural stimulation.

A good paper raises as many questions as it answers. For instance, how exactly does a nerve respond to a beat-frequency electric field? I would like to see a computer simulation of this case based on a neural excitation model, such as the Hodgkin-Huxley model. (You can learn more about the Hodgkin-Huxley model in Chapter 6 of Intermediate Physics for Medicine and Biology; you knew I was going to get a plug for the book in here somewhere.) Also, unlike long straight axons in the peripheral nervous system, neurons in the brain bend and branch, so different neurons may respond to electric fields in different (or all) directions. How does such a neuron respond to a circularly polarized electric field?

When I first read the paper’s final sentence—“We anticipate that [the method of beat-frequency stimulation] might rapidly be deployable into human clinical trials, as well as studies of the human brain”—I was skeptical. Now that I’ve thought about it more, I’m willing to…ahem…not dismiss this claim out of hand. It might work. Maybe. There is still the issue of getting a current applied to the scalp into the brain through the high-resistance skull, which is why transcranial magnetic stimulation is more common than transcranial electric stimulation for clinical applications. I don’t know if this new method will ultimately work, but Grossman et al. will have fun giving it a try.

Friday, June 2, 2017

Internal Conversion

In Chapter 17 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss nuclear decay.
Whenever a nucleus loses energy by γ-decay, there is a competing process called internal conversion. The energy to be lost in the transition, Eγ, is transferred directly to a bound electron, which is then ejected with a kinetic energy
T = Eγ − B,
where B is the binding energy of the electron.
What is the energy of these ejected internal conversion electrons? Does the most important γ-emitter for medical physics, ⁹⁹ᵐTc, decay by internal conversion? To answer these questions, we need to know the binding energy B. Table 15.1 of IPMB provides the energy levels for tungsten; below are similar data for technetium.

level    energy (keV)
K          -21.044
LI          -3.043
LII         -2.793
LIII        -2.677
MI          -0.544
MII         -0.448
MIII        -0.418
MIV         -0.258
MV          -0.254

The binding energy B is just the negative of the energy listed above. During internal conversion, most often a K-shell electron is ejected. The most common γ-ray emitted during the decay of ⁹⁹ᵐTc has an energy of 140.5 keV. Thus, K-shell internal conversion electrons are emitted with energy 140.5 – 21.0 = 119.5 keV. If you look at the tabulated data in Fig. 17.4 in IPMB, giving the decay data for ⁹⁹ᵐTc, you will find the internal conversion of a K-shell electron (“ce-K”) for γ-ray 2 (the 140.5 keV gamma ray) has this energy (“1.195E-01 MeV”). The energy of internal conversion electrons from other shells is greater, because the electrons are not held as tightly.

Auger electrons also come spewing out of technetium following internal conversion. These electrons arise, for instance, when the just-created hole in the K-shell is filled by another electron. This process can be accompanied by emission of a characteristic x-ray, or by ejection of an Auger electron. Suppose internal conversion ejects a K-shell electron, and then the hole is filled by an electron from the L-shell, with ejection of another L-shell Auger electron. We would refer to this as a “KLL” process, and the Auger electron energy would be equal to the difference of the energies of the L and K shells, minus the binding energy of the L-shell electron, or 21 – 2(3) = 15 keV. This value is approximate, because the LI, LII, and LIII binding energies are all slightly different.
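The arithmetic is simple enough to script. Here is a sketch using the level energies tabulated above (the code is mine, not from IPMB):

```python
# Technetium atomic energy levels (keV), from the table above
E = {"K": -21.044, "LI": -3.043, "LII": -2.793, "LIII": -2.677}

E_gamma = 140.5                    # energy of the main 99mTc gamma ray (keV)

# Internal conversion electron: T = E_gamma - B, where the binding energy B = -E
T_ceK = E_gamma + E["K"]           # 140.5 - 21.044 = 119.5 keV
print(f"ce-K electron energy: {T_ceK:.1f} keV")

# KLL Auger electron: fill the K-shell hole from the L shell, eject a second L electron
T_KLL = -E["K"] + E["LI"] + E["LII"]   # roughly 21 - 2(3) = 15 keV
print(f"KLL Auger electron energy: {T_KLL:.1f} keV")
```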

In general, Auger electron energies are much less than internal conversion electron energies, because nuclear energy levels are more widely spaced than electron energy levels. For ⁹⁹ᵐTc, the internal conversion electron has an energy of 119.5 keV compared to a typical Auger electron energy of 15 keV (Auger electron energies for other processes are often smaller).

Another important issue is what fraction of decays are internal conversion versus gamma emission. This can be quantified using the internal conversion coefficient, defined as the number of internal conversions over the number of gamma emissions. Figure 17.4 in IPMB has the data we need to calculate the internal conversion coefficient. The mean number of gamma rays (only considering γ-ray 2) per disintegration is 0.891, whereas the mean number of internal conversion electrons per disintegration is 0.0892+0.0099+0.0006+0.0003+0.0020+0.0004 = 0.1024 (adding the contributions for all the important energy levels). Thus, the internal conversion coefficient is 0.1024/0.891 = 0.115.
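Here is that calculation as a few lines of Python, using the numbers read from Fig. 17.4:

```python
# Mean numbers per disintegration, read from Fig. 17.4 of IPMB
gammas = 0.891                                                    # gamma ray 2
electrons = 0.0892 + 0.0099 + 0.0006 + 0.0003 + 0.0020 + 0.0004  # all conversion shells

alpha = electrons / gammas     # internal conversion coefficient
print(f"internal conversion coefficient = {alpha:.3f}")   # about 0.115
```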

The ideal isotope for medical imaging would have no internal conversion, which adds nothing to the image but contributes to the dose. Technetium, which has so many desirable properties, also has a small internal conversion coefficient. It really is the ideal radioisotope for medical imaging.

Friday, May 26, 2017

Confocal Microscopy

Russ Hobbie and I don’t talk much about microscopy in Intermediate Physics for Medicine and Biology. In the homework problems to Chapter 14 (Atoms and Light) we describe the compound microscope, but that is about it. Physics, however, plays a big role in microscopy. In this post, I attempt to explain the physics behind the confocal microscope. I leave out much, but I hope this explanation conveys the essence of the technique.

Start with a simple converging lens. The lens is often indicated by a vertical line with triangles on the top and bottom, but this is shorthand for the dashed convex lens shown below. Assume this is the objective lens of your microscope. A lens has two focal points. Light originating at the left focal point exits the lens horizontally (yellow), like in a searchlight. Light coming from a distant object (purple) converges at the right focal point, like in a telescope.
A ray diagram showing how an objective lens works.
When an object is not so distant, the light converges at a point beyond the focal point; the closer the object, the farther back it converges. You can calculate the point where the light converges using the thin lens equation (Eq. 14.64 in IPMB). Below I show three rays originating at different positions in a sample of biological tissue. The colors (green, blue, and red) do not indicate different wavelengths of light; I use different colors so the rays are easier to follow. Light originating deep in the sample (green) converges just beyond the right focal point, but light originating near the front of the sample (red) converges far beyond the focal point. This is why in a camera you can adjust the focus by changing the distance from the lens to the detector.
A ray diagram showing how an objective lens focuses objects at different distances at different locations.
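The thin-lens equation is easy to script. Here is a minimal sketch; the focal length and object distances are invented for illustration.

```python
def image_distance(f, d_object):
    """Thin-lens equation, 1/d_object + 1/d_image = 1/f, solved for d_image."""
    return 1.0 / (1.0/f - 1.0/d_object)

f = 10.0  # hypothetical focal length (mm)
for d_object in (1000.0, 40.0, 20.0):   # far, middling, and near objects (mm)
    print(f"object at {d_object:6.1f} mm -> image at {image_distance(f, d_object):5.1f} mm")
```

The closer the object, the farther beyond the focal point the image forms, just as the rays in the figure show.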
Suppose you wanted to detect light from only the center of the sample. You could put an opaque screen containing a small pinhole beyond the focal point of the lens, just where the blue rays converge. All the light originating from the center of the sample would pass through the pinhole. The light from deep in the sample (green) would be out of focus, so most of this light would be blocked by the screen. Likewise, light from the front of the sample (red) is even more out of focus, and only a tiny bit passes through the pinhole. So, voilà, the light detected beyond the pinhole is almost entirely from the center of the sample.
A ray diagram that shows how a pinhole can be used to make a confocal microscope that images only one plane in an object.
Do you want to view a different depth? Just move the screen/pinhole to the right or left, and you can image shallower or deeper positions.
Another ray diagram that shows how a pinhole can be used to make a confocal microscope that images only one plane in an object.
In this way, you build up an image of the sample as a function of depth.

How do you get information in the plane at one depth? In confocal microscopy, you usually scan a laser at different positions in the x,y plane (taking z as the distance from the lens). Pixel by pixel, you build up an image, then adjust the position of the screen and build up another image, and then another and another.

Often, confocal microscopy is used with samples that emit fluoresced light. You shine the narrow laser beam of short-wavelength light onto the sample from the left. The sample then emits long-wavelength light as molecules in the sample fluoresce. You can filter out the short-wavelength light, and just image the long-wavelength light. Biologists play all kinds of tricks with fluorescence, such as attaching a fluorescent molecule to the particular structure in the sample they want to image.

There are advantages and disadvantages of a confocal microscope. One advantage is that your detector, positioned to the right of the screen/pinhole, need not be an array like in a camera. A single detector is sufficient; you build up the spatial information by scanning the laser beam (x,y) and the pinhole (z) to obtain full three-dimensional information that you can then manipulate with a computer to create informative and beautiful pictures. One disadvantage is that you have to scan both the laser and the pinhole in synchrony, which is not easy. All this scanning takes time, although video-rate scans are possible using modern technology. Also, most of your fluoresced light gets blocked by the screen, so you need an intense light source that may bleach your fluorescent tag.

The confocal microscope was invented by Marvin Minsky, who died last year after a productive career in science. Minsky was an undergraduate physics major who went on to study mathematics, computer science, artificial intelligence, robotics, and consciousness. Isaac Asimov claimed in his autobiography In Joy Still Felt that only two people he knew were more intelligent than he was: Carl Sagan and Minsky. Marvin Minsky and his confocal microscope illustrate the critical role of physics in medicine and biology.