Friday, July 14, 2017

Nerve, Muscle, and Synapse

Nerve, Muscle, and Synapse, by Bernard Katz.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I include a footnote at the start of Chapter 6:
A good discussion of the properties of nerves and the Hodgkin–Huxley experiments is found in Katz (1966).
Why do we cite a book that is over 50 years old? One reason is nostalgia. In 1982 I graduated from the University of Kansas with a physics major and entered graduate school at Vanderbilt. I began working with John Wikswo, who was measuring the magnetic field produced by a nerve axon, so I had to learn quickly how nerves work. One of the first books I read was Nerve, Muscle, and Synapse. What a lucky choice.

The author, Bernard Katz, led an interesting and productive life. Because of his Jewish background, in 1935 he fled Germany for England. There he worked with physiologist Archibald Hill (Katz dedicates Nerve, Muscle, and Synapse “to my friend and teacher, A. V. Hill”). He collaborated with Alan Hodgkin, and was a coauthor on one of the five famous papers from 1952 that established the Hodgkin and Huxley model (see Chapter 6 of IPMB for more on this model). He also published a paper with Hodgkin about electric current flowing through a membrane, leading to the Goldman-Hodgkin-Katz equation discussed in Sec. 9.6 of IPMB (Goldman derived this equation independently of Hodgkin and Katz).

Katz won his Nobel Prize for discovering the discrete nature of acetylcholine release at the nerve-muscle synapse, which explains the book’s title. I was glancing through his Chapter 9 on the Quantal Nature of Chemical Transmission when I saw an example analyzed using Poisson statistics and I thought to myself “Hey, that looks familiar.” His example uses the same data that Russ and I present in our Appendix J about the Poisson Distribution. We had a common source: IPMB and Katz both cite work by Boyd and Martin.

One reason I like Nerve, Muscle, and Synapse is that it contains a lot of physics. In his foreword, George Wald writes
Professor Katz has produced here the elementary text we asked of him, but also much more. He goes far beyond the first essentials to develop the subject in depth. He has the gift of a graphic style and the apt phrase. What impresses me particularly is that each idea is pursued to the numerical level. Each theoretical development comes out in this form, in clearly stated problems worked through with the relevant numbers. But the treatment as a whole extends beyond this also, asking and answering the basic questions that few workers in electrophysiology probably have taken the trouble to pursue so far. All this is done with an easy mastery of the underlying physics and physical chemistry.
That’s high praise. Russ and I take a similar approach in IPMB, pursuing topics to the numerical level (sometimes in the text, and sometimes in the homework). Nerve, Muscle, and Synapse shares another trait with IPMB: it uses calculus without apology.

If you are looking for the most up-to-date textbook on nerve electrophysiology, you should search for a more recent publication (perhaps the latest edition of From Neuron to Brain). But, if you’re a physicist trying to learn something about how nerves work, Katz’s book remains a useful introduction. That’s why Russ and I still cite it.

Friday, July 7, 2017

Bioelectricity: A Quantitative Approach

The best way to learn about bioelectricity is to read Chapters 6–9 in Intermediate Physics for Medicine and Biology. But suppose, for some odd and incomprehensible reason, you seek an alternative to IPMB. Another option is to enroll in Roger Barr’s MOOC (massive open online course) “Bioelectricity: A Quantitative Approach” through Coursera.

I enrolled and am going through the course (if you don’t want a certificate, which I don’t need, the course is free). The website says the course begins July 17, but all the videos and course materials are accessible now. I’m curious to know what is going to happen in ten days.

Below is the summary from an article about this course, published after Barr first taught the MOOC in 2012.
After only three months for planning and development, Duke University and Dr. Roger Barr successfully delivered a challenging open online course via Coursera to thousands of students around the world. Lessons learned from this experience have contributed to the strategic goals of Duke’s Online Initiatives.
  • Over 600 hours of effort were required to build and deliver the course, including more than 420 hours of effort by the instructor. 
  • The course launched on schedule and was successfully completed by hundreds of students. Many hundreds more continued to participate in other ways. The number of students actively participating plateaued at around 1000 per week. 
  • Over 12,000 students enrolled, representing more than 100 countries. Approximately 8,000 of these students logged in during the first week. 
  • At the time of enrollment, one-third of enrolled students held less than a four year degree, one-third held a Bachelors or equivalent, and one-third held an advanced degree. 
  • 25% of students who took both Week 1 quizzes successfully completed the course, including 313 students from at least 37 countries. Course completers typically held a Bachelor’s degree or higher; however, at least 10 pre-college students were among those who successfully completed this challenging upper level undergraduate course. 
  • Students who did not complete all requirements cited a lack of time, insufficient math background or having intended to only view the lectures from the outset. Regardless of completion status, many students were primarily seeking enjoyment or educational enrichment.
  • Most students reported a positive learning experience and rated the course highly, including ones who did not complete all requirements 
  • The Coursera platform met the needs of the course in spite of being continuously under development while the course was live. Technical issues reported by the students and instructor were generally minor, of short duration and/or quickly resolved. 
  • Patience, flexibility and resilience on the part of instructor, Coursera students, CIT staff, and Duke University Office of Information Technology media services staff were key elements in the success of this course.
Bioelectricity: A Quantitative Approach, by Robert Plonsey and Roger Barr.
Barr has published extensively in bioelectricity, particularly about the electrical properties of the heart. My favorite articles are two he wrote with Robert Plonsey in 1984: “Current Flow Patterns in Two-Dimensional Anisotropic Bisyncytia with Normal and Extreme Conductivities,” Biophysical Journal 45: 557–571 and “Propagation of Excitation in Idealized Anisotropic Two-Dimensional Tissue,” Biophysical Journal 45: 1191–1202. I used Plonsey and Barr’s textbook Bioelectricity: A Quantitative Approach (which the Coursera MOOC is based on) in a graduate bioelectricity class for several semesters, until I decided to base the class entirely on published articles in the scientific literature (something like a journal club).

So far I like the MOOC, although I have only just started. It is the SECOND best way to learn about bioelectricity.


Friday, June 30, 2017

The Fast Fourier Transform

In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the fast Fourier transform.
The calculation of the Fourier coefficients using our equations involves N evaluations of the sine or cosine, N multiplications, and N additions for each coefficient. There are N coefficients, so that there must be N² evaluations of the sines and cosines, which uses a lot of computer time. Cooley and Tukey (1965) showed that it is possible to group the data in such a way that the number of multiplications is about (N/2)log₂N instead of N² and the sines and cosines need to be evaluated only once, a technique known as the fast Fourier transform (FFT).
Additional analysis of the FFT is found in the homework problems at the end of the chapter.
Problem 17. This problem provides some insight into the fast Fourier transform. Start with the expression for an N-point Fourier transform in complex notation, Yₖ in Eq. 11.29a. Show that Yₖ can be written as the sum of two N/2-point Fourier transforms: Yₖ = ½[Yₖᵉ + Wᵏ Yₖᵒ], where W = exp(-i2π/N), superscript e stands for even values of j, and o stands for odd values.
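
To see where the factor of ½ comes from, here is a minimal Python sketch (mine, not from IPMB or Numerical Recipes). It assumes that Eq. 11.29a defines Yₖ with a 1/N normalization, and it checks the even/odd splitting numerically.

    import numpy as np

    def dft(y):
        # N-point discrete Fourier transform with a 1/N factor
        # (the normalization I am assuming for Eq. 11.29a).
        N = len(y)
        j = np.arange(N)
        k = j.reshape(-1, 1)
        return np.sum(y * np.exp(-2j * np.pi * j * k / N), axis=1) / N

    rng = np.random.default_rng(0)
    N = 8                            # any even N works for the even/odd split
    y = rng.standard_normal(N)

    Y = dft(y)                       # full N-point transform
    Ye = dft(y[0::2])                # N/2-point transform of the even-indexed samples
    Yo = dft(y[1::2])                # N/2-point transform of the odd-indexed samples

    k = np.arange(N)
    W = np.exp(-2j * np.pi / N)
    # Ye and Yo are periodic with period N/2, so tile them to cover k = 0, ..., N-1.
    Y_split = 0.5 * (np.tile(Ye, 2) + W**k * np.tile(Yo, 2))

    print(np.allclose(Y, Y_split))   # True: Y_k = (1/2)[Y_k^e + W^k Y_k^o]
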
Numerical Recipes: The Art of Scientific Computing, by Press et al.
The FFT is a famous algorithm in the field of numerical methods. Below is how Press et al. describe it in one of my favorite books, Numerical Recipes.
The discrete Fourier transform can, in fact, be computed in O(N log₂N) operations with an algorithm called the fast Fourier transform, or FFT. The difference between N log₂N and N² is immense. With N = 10⁶, for example, it is the difference between, roughly, 30 seconds of CPU time and 2 weeks of CPU time on a microsecond cycle time computer. The existence of an FFT algorithm became generally known only in the mid-1960s, from the work of J. W. Cooley and J. W. Tukey. Retrospectively, we now know…that efficient methods for computing the DFT [discrete Fourier transform] had been independently discovered, and in some cases implemented, by as many as a dozen individuals, starting with Gauss in 1805!

One “rediscovery” of the FFT, that of Danielson and Lanczos in 1942, provides one of the clearest derivations of the algorithm. Danielson and Lanczos showed that a discrete Fourier transform of length N can be rewritten as the sum of two discrete Fourier transforms, each of length N/2. One of the two is formed from the even-numbered points of the original N, the other from the odd-numbered points…

The wonderful thing about the Danielson-Lanczos Lemma is that it can be used recursively. Having reduced the problem of computing Fₖ to that of computing Fₖᵉ and Fₖᵒ, we can do the same reduction of Fₖᵉ to the problem of the transform of its N/4 even-numbered input data and N/4 odd-numbered data…

Although there are ways of treating other cases, by far the easiest case is the one in which the original N is an integer power of 2…With this restriction on N, it is evident that we can continue applying the Danielson-Lanczos Lemma until we have subdivided the data all the way down to transforms of length 1…The points as given are the one-point transforms. We combine adjacent pairs to get two-point transforms, then combine adjacent pairs of pairs to get 4-point transforms, and so on, until the first and second halves of the whole data set are combined into the final transform. Each combination takes on order N operations, and there are evidently log₂N combinations, so the whole algorithm is of order N log₂N.
This process, called decimation-in-time, is summarized in this lovely butterfly diagram.

A butterfly diagram of a decimation-in-time fast Fourier transform.
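
Here is a bare-bones recursive implementation of the decimation-in-time idea in Python, a sketch for illustration rather than production code. It follows NumPy's unnormalized convention (no 1/N factor) so the result can be checked against np.fft.fft, and it requires N to be a power of 2.

    import numpy as np

    def fft_recursive(y):
        # Radix-2 decimation-in-time FFT: apply the Danielson-Lanczos lemma
        # recursively until only one-point transforms remain.
        N = len(y)
        if N == 1:
            return np.asarray(y, dtype=complex)   # a one-point transform is the point itself
        even = fft_recursive(y[0::2])             # transform of the even-numbered points
        odd = fft_recursive(y[1::2])              # transform of the odd-numbered points
        twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
        # Combine: F_k = E_k + W^k O_k and F_(k+N/2) = E_k - W^k O_k.
        return np.concatenate([even + twiddle * odd,
                               even - twiddle * odd])

    y = np.random.default_rng(1).standard_normal(1024)
    print(np.allclose(fft_recursive(y), np.fft.fft(y)))   # True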

Friday, June 23, 2017

Urea

Figure 4.12 of Intermediate Physics for Medicine and Biology shows a log-log plot of the diffusion constant of various molecules as a function of molecular weight. In the top panel of the figure, containing the small molecules, only four are listed: water (H₂O), oxygen (O₂), glucose (C₆H₁₂O₆), and urea (CO(NH₂)₂). Water, oxygen, and glucose are obvious choices; they are central to life. But what did urea do to make the cut? And just what is urea, anyway?

Life and Energy, by Isaac Asimov.
I will let Isaac Asimov explain urea’s importance. In his book Life and Energy he writes
Now let us turn to the proteins, which, after digestion, enter the body in the form of amino acids. Before these can be utilized for the production of useful energy they must be stripped of their nitrogen.

In 1773 the French chemist G. F. Rouelle (Lavoisier’s teacher) discovered a nitrogenous compound in urine and named it ‘urea’ after its source. Once the composition of proteins began to be studied at the beginning of the nineteenth century, urea was at once recognized as the obvious route by which the body excreted the nitrogen of protein.

Its formula was shown to be
The structural formula of urea.
or, more briefly, NH₂CONH₂, once structural formulas became the order of the day. As it happens, urea was involved in two startling advances in biochemistry. It was the first organic compound to be synthesized from an inorganic starting material (see Chapter 13) and the enzyme catalyzing its breakdown was the first to be crystallized (see Chapter 15).
Russ Hobbie and I mention urea again when we discuss headaches in renal dialysis.
Dialysis is used to remove urea from the plasma of patients whose kidneys do not function. Urea is in the interstitial brain fluid and the cerebrospinal fluid in the same concentration as in the plasma; however, the permeability of the capillary–brain membrane is low, so equilibration takes several hours (Patton et al. 1989, Chap. 64). Water, oxygen, and nutrients cross from the capillary to the brain at a much faster rate than urea. As the plasma urea concentration drops, there is a temporary osmotic pressure difference resulting from the urea within the brain. The driving pressure of water is higher in the plasma, and water flows to the brain interstitial fluid. Cerebral edema results, which can cause severe headaches.
A Short History of Biology, by Isaac Asimov.
The role of urea in refuting “vitalism” is a fascinating story. Again I will let Asimov tell it, this time quoting from his book A Short History of Biology.
The Swedish chemist, Jöns Jakob Berzelius (1779–1848), suggested, in 1807, that substances obtained from living (or once-living) organisms be called “organic substances,” while all others be referred to as “inorganic substances.” He felt that while it was possible to convert organic substances to inorganic ones easily enough, the reverse was impossible except through the agency of life. To prepare organic substances from inorganic, some vital force present only in living tissue had to be involved.

This view, however, did not endure for long. In 1828, a German chemist, Friedrich Wöhler (1800–82), was investigating cyanides and related compounds; compounds which were then accepted as inorganic. He was heating ammonium cyanate and found, to his amazement, that he obtained crystals that, on testing, proved to be urea. Urea was the chief solid constituent of mammalian urine and was definitely an organic compound.
I guess urea earned its way into Figure 4.12. It is one of the key small molecules critical to life.

Friday, June 16, 2017

17 Reasons to Like Intermediate Physics for Medicine and Biology (Number 11 Will Knock Your Socks Off!)

Sometimes I read articles about blogging, and they often encourage me to make lists. So, here is a list of 17 reasons to like Intermediate Physics for Medicine and Biology. Enjoy!
  1. The book contains lots of homework problems. You learn best by doing, and there are many problems to do. 
  2. Each chapter contains a detailed list of symbols to help you keep all the math straight. 
  3. We wrote appendices about several mathematical topics, in case you need a review. 
  4. The references at the end of each chapter provide additional information. 
  5. My ideal bookshelf contains IPMB plus many related classics. 
    The logo for the Facebook page of the textbook Intermediate Physics for Medicine and Biology.
  6. Instructors can request a solution manual with answers to all the homework problems. Email Russ Hobbie or me to learn more.
  7. Russ and I worked hard to make sure the index is accurate and complete. 
  8. See a list of my favorite illustrations from the book, including this one: 
    A drawing showing the time course of the dipole of the heart, from Intermediate Physics for Medicine and Biology.
  9. A whole chapter is dedicated to the exponential function. What more could you ask? 
  10. Equations. Lots and lots of equations.
  11. A focus on mathematical modeling, especially in the homework problems. When I teach a class based on IPMB, I treat it as a workshop on modeling in medicine and biology. 
  12. See the video about a computer program called MacDose that Russ Hobbie made to explain the interaction of radiation with tissue. 
  13. We tried to eliminate any mistakes from IPMB, but because that is impossible we list all known errors in the Errata.
  14. How many of your textbooks have been turned into a word cloud? 
    A word cloud, from the textbook Intermediate Physics for Medicine and Biology.
  15. IPMB helps students prepare for the MCAT.
  16. Computer programs illustrate complex topics, such as the Hodgkin-Huxley model of a nerve axon. 
  17. Most importantly, IPMB has its own blog! How often do you have an award-winning blog associated with a textbook? The blog is free, and it’s worth every penny!

Friday, June 9, 2017

Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

A fascinating paper, titled “Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields,” was published in the June 1 issue of Cell (Volume 169, Pages 1029–1041) by Nir Grossman and his colleagues. Although I don’t agree with everything the authors say (I never do), on the whole this study is an important contribution. You may have seen Pam Belluck's article about it in the New York Times. Below is Grossman et al.’s abstract.
We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.
The gist of the method is to apply two electric fields to the brain, one with frequency f1 and the other with frequency f2, where f2 = f1 + Δf with Δf small. The result is a carrier with a frequency equal to the average of f1 and f2, modulated by a beat frequency equal to Δf. For instance, the study uses two currents having frequencies f1 = 2000 Hz and f2 = 2010 Hz, resulting in a carrier frequency of 2005 Hz and a beat frequency of 10 Hz. When they use this current to stimulate a mouse brain, the mouse neurons respond at a frequency of 10 Hz.
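
A few lines of Python (my own illustration, not code from the paper) show how the two applied currents combine into a 2005 Hz carrier with a 10 Hz beat envelope:

    import numpy as np

    f1, f2 = 2000.0, 2010.0                 # the two stimulus frequencies (Hz)
    t = np.arange(0.0, 0.3, 1e-5)           # 0.3 s sampled at 100 kHz

    field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

    # cos(a) + cos(b) = 2 cos[(a+b)/2] cos[(a-b)/2]: a carrier at the average
    # frequency, inside an envelope whose amplitude repeats at f2 - f1.
    carrier = 0.5 * (f1 + f2)               # 2005 Hz
    beat = f2 - f1                          # 10 Hz
    envelope = 2 * np.abs(np.cos(np.pi * beat * t))

    print(carrier, beat)                                # 2005.0 10.0
    print(np.all(np.abs(field) <= envelope + 1e-9))     # True: the envelope bounds the field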

The paper uses some fancy language, like the neuron “demodulating” the stimulus and responding to the “temporal interference”. I think there is a simpler explanation. The authors show that in general a nerve does not respond to a stimulus at a frequency of 2000 Hz, except that when this stimulus is first turned on there is a transient excitation. I would describe their beat-frequency stimulus as being like a 2000 Hz current that is repeatedly turned on and off. Each time the stimulus turns on (every 100 ms) you get a transient response. This gives you a neural response at 10 Hz, as observed in the experiment. In other words, a sinusoidally modulated carrier doesn’t act so differently from a carrier turned on and off at the same rate (modulated by a square wave), as shown in the picture below. The transient response is the key to understanding its action.

A comparison of a beat frequency and a frequency modulated by a square wave.

Stimulating neurons at the beat frequency is an amazing result. Why didn’t I think of that? Just as astonishing is the ability to selectively stimulate neurons deep in the brain. We used to worry about this a lot when I worked on magnetic stimulation at the National Institutes of Health, and we concluded that it was impossible. The argument was that the electric field obeys Laplace’s equation (the wave equation under conditions when propagation effects are negligible so you can ignore the time derivatives), and a solution to Laplace’s equation cannot have a local maximum. But the argument doesn’t seem to hold when you stimulate using two different frequencies. The reason is that a weak single-frequency field doesn’t excite neurons (the field strength is below threshold) and a strong single-frequency field doesn’t excite neurons (the stimulus is so large and rapid that the neuron is always refractory). You need two fields of about the same strength but slightly different frequencies to get the on/off behavior that causes the transient excitation. I see no reason why you can’t get such excitation to occur selectively at depth, as the authors suggest. Wow! Again, why didn’t I think of that?

I find it interesting to analyze how the electric field behaves. Suppose you have two electric fields, one at frequency f1 that oscillates back-and-forth along a direction down and to the left, and another at frequency f2 that oscillates back-and-forth along a direction down and to the right (see the figure below). When the two electric fields are in phase, their horizontal components cancel and their vertical components add, so the result is a vertically oscillating electric field (vertical polarization). When the two electric fields are 180 degrees out of phase, their vertical components cancel and their horizontal components add, so the result is a horizontally oscillating electric field (horizontal polarization). At times when the two electric fields are 90 degrees out of phase, the electric field is rotating (circular polarization). Therefore, the electric field's amplitude doesn't change much but its polarization modulates with the beat frequency. If you are stimulating an axon for which only the electric field component along its length matters for excitation, you project the modulated polarization onto the axon direction and recover the beat-frequency electric field discussed in the paper. It’s almost like optics. (OK, maybe “temporal interference” isn’t such a bad phrase after all.)

An explanation of how a circularly polarized electric field is produced during neural stimulation.
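
Here is a small numerical check of this argument (my own sketch; the 45-degree geometry and the vertically oriented axon are choices I made for illustration, not details from the paper). The magnitude of the total field barely changes over a beat cycle, but its component along the axon is modulated at 10 Hz:

    import numpy as np

    f1, f2 = 2000.0, 2010.0
    dt = 1e-5
    t = np.arange(0.0, 0.2, dt)

    # Unit vectors for the two oscillation directions: down-left and down-right,
    # 45 degrees on either side of vertical.
    u1 = np.array([-1.0, -1.0]) / np.sqrt(2)
    u2 = np.array([+1.0, -1.0]) / np.sqrt(2)

    # Total electric field versus time, shape (number of samples, 2).
    E = (np.outer(np.cos(2 * np.pi * f1 * t), u1) +
         np.outer(np.cos(2 * np.pi * f2 * t), u2))

    # Component of the field along a vertically oriented axon.
    projection = E @ np.array([0.0, 1.0])

    # Compare envelopes, window by window, over one carrier period (~0.5 ms).
    period = int(round(1.0 / (0.5 * (f1 + f2)) / dt))
    n = (len(t) // period) * period
    field_env = np.linalg.norm(E, axis=1)[:n].reshape(-1, period).max(axis=1)
    proj_env = np.abs(projection)[:n].reshape(-1, period).max(axis=1)

    print(field_env.min(), field_env.max())  # roughly 1.0 to 1.4: the total field amplitude barely changes
    print(proj_env.min(), proj_env.max())    # roughly 0 to 1.4: the axon sees the 10 Hz beat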

A good paper raises as many questions as it answers. For instance, how exactly does a nerve respond to a beat-frequency electric field? I would like to see a computer simulation of this case based on a neural excitation model, such as the Hodgkin-Huxley model. (You can learn more about the Hodgkin-Huxley model in Chapter 6 of Intermediate Physics for Medicine and Biology; you knew I was going to get a plug for the book in here somewhere.) Also, unlike long straight axons in the peripheral nervous system, neurons in the brain bend and branch, so different neurons may respond to electric fields in different (or all) directions. How does such a neuron respond to a circularly polarized electric field?

When I first read the paper’s final sentence—“We anticipate that [the method of beat-frequency stimulation] might rapidly be deployable into human clinical trials, as well as studies of the human brain”—I was skeptical. Now that I’ve thought about it more, I’m willing to…ahem…not dismiss this claim out of hand. It might work. Maybe. There is still the issue of getting a current applied to the scalp into the brain through the high-resistance skull, which is why transcranial magnetic stimulation is more common than transcranial electric stimulation for clinical applications. I don’t know if this new method will ultimately work, but Grossman et al. will have fun giving it a try.

Friday, June 2, 2017

Internal Conversion

In Chapter 17 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss nuclear decay.
Whenever a nucleus loses energy by γ-decay, there is a competing process called internal conversion. The energy to be lost in the transition, Eγ, is transferred directly to a bound electron, which is then ejected with a kinetic energy
T = Eγ – B,
where B is the binding energy of the electron.
What is the energy of these ejected internal conversion electrons? Does the most important γ-emitter for medical physics, 99mTc, decay by internal conversion? To answer these questions, we need to know the binding energy B. Table 15.1 of IPMB provides the energy levels for tungsten; below is the corresponding data for technetium.

    level     energy (keV)
    K         -21.044
    LI         -3.043
    LII        -2.793
    LIII       -2.677
    MI         -0.544
    MII        -0.448
    MIII       -0.418
    MIV        -0.258
    MV         -0.254

The binding energy B is just the negative of the energy listed above. During internal conversion, most often a K-shell electron is ejected. The most common γ-ray emitted during the decay of 99mTc has an energy of 140.5 keV. Thus, K-shell internal conversion electrons are emitted with energy 140.5 – 21.0 = 119.5 keV. If you look at the tabulated data in Fig. 17.4 in IPMB, giving the decay data for 99mTc, you will find the internal conversion of a K-shell electron (“ce-K”) for γ-ray 2 (the 140.5 keV gamma ray) has this energy (“1.195E-01 MeV”). The energy of internal conversion electrons from other shells is greater, because the electrons are not held as tightly.

Auger electrons also come spewing out of technetium following internal conversion. These electrons arise, for instance, when the just-created hole in the K-shell is filled by another electron. This process can be accompanied by emission of a characteristic x-ray, or by ejection of an Auger electron. Suppose internal conversion ejects a K-shell electron, and then the hole is filled by an electron from the L-shell, with ejection of another L-shell Auger electron. We would refer to this as a “KLL” process, and the Auger electron energy would be equal to the difference of the energies of the L and K shells, minus the binding energy of the L-shell electron, or 21 – 2(3) = 15 keV. This value is approximate, because the LI, LII, and LIII binding energies are all slightly different.
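
These little calculations are easy to script. Below is a Python sketch that uses the binding energies from the table above; the KLL estimate lumps all the L subshells together at roughly 3 keV, as in the text.

    # Binding energies (keV) for technetium, from the table above (B = -energy level).
    binding = {"K": 21.044, "LI": 3.043, "LII": 2.793, "LIII": 2.677,
               "MI": 0.544, "MII": 0.448, "MIII": 0.418, "MIV": 0.258, "MV": 0.254}

    E_gamma = 140.5    # keV, the principal gamma ray of 99mTc

    # Internal conversion electron energies, T = E_gamma - B, shell by shell.
    for shell, B in binding.items():
        print(f"ce-{shell}: {E_gamma - B:6.1f} keV")     # ce-K gives 119.5 keV

    # Rough KLL Auger electron energy, lumping the L subshells together at ~3 keV.
    print(f"KLL Auger electron: about {binding['K'] - 2 * 3.0:.0f} keV")   # about 15 keV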

In general, Auger electron energies are much less than internal conversion electron energies, because nuclear energy levels are more widely spaced than electron energy levels. For 99mTc, the internal conversion electron has an energy of 119.5 keV compared to a typical Auger electron energy of 15 keV (Auger electron energies for other processes are often smaller).

Another important issue is what fraction of decays are internal conversion versus gamma emission. This can be quantified using the internal conversion coefficient, defined as the number of internal conversions over the number of gamma emissions. Figure 17.4 in IPMB has the data we need to calculate the internal conversion coefficient. The mean number of gamma rays (only considering γ-ray 2) per disintegration is 0.891, whereas the mean number of internal conversion electrons per disintegration is 0.0892+0.0099+0.0006+0.0003+0.0020+0.0004 = 0.1024 (adding the contributions for all the important energy levels). Thus, the internal conversion coefficient is 0.1024/0.891 = 0.115.
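
In Python, the bookkeeping looks like this (the mean numbers per disintegration are the values quoted above from Fig. 17.4):

    # Mean numbers per disintegration for gamma ray 2 of 99mTc.
    n_gamma = 0.891                                           # gamma rays
    n_ce = [0.0892, 0.0099, 0.0006, 0.0003, 0.0020, 0.0004]   # conversion electrons, shell by shell

    alpha = sum(n_ce) / n_gamma     # internal conversion coefficient
    print(f"conversion electrons per disintegration: {sum(n_ce):.4f}")   # 0.1024
    print(f"internal conversion coefficient: {alpha:.3f}")               # 0.115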

The ideal isotope for medical imaging would have no internal conversion, which adds nothing to the image but contributes to the dose. Technetium, which has so many desirable properties, also has a small internal conversion coefficient. It really is the ideal radioisotope for medical imaging.

Friday, May 26, 2017

Confocal Microscopy

Russ Hobbie and I don’t talk much about microscopy in Intermediate Physics for Medicine and Biology. In the homework problems to Chapter 14 (Atoms and Light) we describe the compound microscope, but that is about it. Physics, however, plays a big role in microscopy. In this post, I attempt to explain the physics behind the confocal microscope. I leave out much, but I hope this explanation conveys the essence of the technique.

Start with a simple converging lens. The lens is often indicated by a vertical line with triangles on the top and bottom, but this is shorthand for the dashed convex lens shown below. Assume this is the objective lens of your microscope. A lens has two focal points. Light originating at the left focal point exits the lens horizontally (yellow), like in a searchlight. Light coming from a distant object (purple) converges at the right focal point, like in a telescope.
A ray diagram showing how an objective lens works.
When an object is not so distant, the light converges at a point beyond the focal point; the closer the object, the farther back it converges. You can calculate the point where the light converges using the thin lens equation (Eq. 14.64 in IPMB). Below I show three rays originating at different positions in a sample of biological tissue. The colors (green, blue, and red) do not indicate different wavelengths of light; I use different colors so the rays are easier to follow. Light originating deep in the sample (green) converges just beyond the right focal point, but light originating near the front of the sample (red) converges far beyond the focal point. This is why in a camera you can adjust the focus by changing the distance from the lens to the detector.
A ray diagram showing how an objective lens focuses objects at different distances at different locations.
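
Here is a quick numerical illustration using the thin-lens equation in the form 1/p + 1/q = 1/f, with object distance p and image distance q measured from the lens (the form I am assuming for Eq. 14.64). The focal length and object distances are invented numbers, chosen only to show the trend.

    f = 10.0    # focal length of the objective, in mm (made-up value)

    def image_distance(p, f):
        # Solve the thin-lens equation 1/p + 1/q = 1/f for the image distance q.
        return 1.0 / (1.0 / f - 1.0 / p)

    for label, p in [("deep in the sample", 100.0),
                     ("middle of the sample", 20.0),
                     ("front of the sample", 12.0)]:
        q = image_distance(p, f)
        print(f"{label:20s}: object at p = {p:5.1f} mm -> image at q = {q:5.1f} mm")

    # Output shows the trend in the figure: the deep point images just beyond the
    # focal point (q = 11.1 mm), while the closer points image farther back
    # (q = 20.0 mm and q = 60.0 mm).
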
Suppose you wanted to detect light from only the center of the sample. You could put an opaque screen containing a small pinhole beyond the focal point of the lens, just where the blue rays converge. All the light originating from the center of the sample would pass through the pinhole. The light from deep in the sample (green) would be out of focus, so most of this light would be blocked by the screen. Likewise, light from the front of the sample (red) is even more out of focus, and only a tiny bit passes through the pinhole. So, voilà, the light detected beyond the pinhole is almost entirely from the center of the sample.
A ray diagram that shows how a pinhole can be used to make a confocal microscope that images only one plane in an object.
Do you want to view a different depth? Just move the screen/pinhole to the right or left, and you can image shallower or deeper positions.
Another ray diagram that shows how a pinhole can be used to make a confocal microscope that images only one plane in an object.
In this way, you build up an image of the sample as a function of depth.

How do you get information in the plane at one depth? In confocal microscopy, you usually scan a laser at different positions in the x,y plane (taking z as the distance from the lens). Pixel by pixel, you build up an image, then adjust the position of the screen and build up another image, and then another and another.

Often, confocal microscopy is used with samples that emit fluoresced light. You shine the narrow laser beam of short-wavelength light onto the sample from the left. The sample then emits long-wavelength light as molecules in the sample fluoresce. You can filter out the short-wavelength light, and just image the long-wavelength light. Biologists play all kinds of tricks with fluorescence, such as attaching a fluorescent molecule to the particular structure in the sample they want to image.

There are advantages and disadvantages of a confocal microscope. One advantage is that your detector, positioned to the right of the screen/pinhole, need not be an array like in a camera. A single detector is sufficient; you build up the spatial information by scanning the laser beam (x,y) and the pinhole (z) to obtain full three-dimensional information that you can then manipulate with a computer to create informative and beautiful pictures. One disadvantage is that you have to scan both the laser and the pinhole in synchrony, which is not easy. All this scanning takes time, although video-rate scans are possible using modern technology. Also, most of your fluoresced light gets blocked by the screen, so you need an intense light source that may bleach your fluorescent tag.

The confocal microscope was invented by Marvin Minsky, who died last year after a productive career in science. Minsky was an undergraduate physics major who went on to study mathematics, computer science, artificial intelligence, robotics, and consciousness. Isaac Asimov claimed in his autobiography In Joy Still Felt that only two people he knew were more intelligent than he was: Carl Sagan and Minsky. Marvin Minsky and his confocal microscope illustrate the critical role of physics in medicine and biology.

Friday, May 19, 2017

Applying Magneto-Rheology to Reduce Blood Viscosity and Suppress Turbulence to Prevent Heart Attacks

Recently, Russ Hobbie pointed out to me an abstract presented at the 2017 American Physical Society March Meeting about “Applying Magneto-Rheology to Reduce Blood Viscosity and Suppress Turbulence to Prevent Heart Attacks,” by Rongjia Tao.
Heart attacks are the leading causes of death in USA. Research indicates one common thread, high blood viscosity, linking all cardiovascular diseases. Turbulence in blood circulation makes different regions of the vasculature vulnerable to development of atherosclerotic plaque. Turbulence is also responsible for systolic ejection murmurs and places heavier workload on heart, a possible trigger of heart attacks. Presently, neither medicine nor method is available to suppress turbulence. The only method to reduce the blood viscosity is to take medicine, such as aspirin. However, using medicine to reduce the blood viscosity does not help suppressing turbulence. In fact, the turbulence gets worse as the Reynolds number goes up with the viscosity reduction by the medicine. Here we report our new discovery: application of a strong magnetic field to blood along its flow direction, red blood cells are polarized in the magnetic field and aggregated into short chains along the flow direction. The blood viscosity becomes anisotropic: Along the flow direction the viscosity is significantly reduced, but in the directions perpendicular to the flow the viscosity is considerably increased. In this way, the blood flow becomes laminar, turbulence is suppressed, the blood circulation is greatly improved, and the risk for heart attacks is reduced. While these effects are not permanent, they last for about 24 hours after one magnetic therapy treatment.
The report is related to an earlier paper by Tao and Ke Huang “Reducing Blood Viscosity with Magnetic Fields” (Phys Rev E, Volume 84, Article Number 011905, 2011). The APS published a news article about this work.

I have some concerns. Let’s use basic physics, like that discussed in Intermediate Physics for Medicine and Biology, to make order-of-magnitude estimates of the forces acting on a red blood cell.

First, we’ll estimate the dipole-dipole magnetic force. A red blood cell has a funny shape, but for our back-of-the-envelope calculations let’s consider it to be a cube 5 microns on a side. The magnetization M, the magnetic field intensity H, and the magnetic susceptibility χₘ are related by M = χₘH (Eq. 8.31, all equation numbers from the 5th edition of IPMB), and H is related to the applied magnetic field B by B = μ₀H (Eq. 8.30), where μ₀ is the permeability of free space. The total magnetic dipole of a red blood cell, m, is then a³M (Eq. 8.27), or m = a³χₘB/μ₀. If we use χₘ = 10⁻⁵, B = 1 T, and μ₀ = 4π × 10⁻⁷ T m/A, the dipole strength is about 10⁻¹⁵ A m². The magnetic field produced by this magnetic dipole in an adjacent red blood cell is about μ₀m/(4πa³) = 10⁻⁶ T (Eq. 18.32). The force on a magnetic dipole in this nonuniform magnetic field is approximately mB/a = 2 × 10⁻¹⁶ N (Eq. 8.26).
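
Here is the same estimate as a short Python script, so you can vary the numbers yourself (this is my sketch of the back-of-the-envelope calculation above, not code from the paper):

    import numpy as np

    a = 5e-6                   # side of the "cubic" red blood cell, m
    chi = 1e-5                 # magnetic susceptibility
    B = 1.0                    # applied magnetic field, T
    mu0 = 4 * np.pi * 1e-7     # permeability of free space, T m/A

    m = a**3 * chi * B / mu0                # dipole moment of one cell, about 1e-15 A m^2
    B_dip = mu0 * m / (4 * np.pi * a**3)    # its field at an adjacent cell, about 1e-6 T
    F_mag = m * B_dip / a                   # force ~ m dB/dx with dB/dx ~ B_dip/a, about 2e-16 N

    print(f"m = {m:.1e} A m^2, B_dip = {B_dip:.1e} T, F_mag = {F_mag:.1e} N")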

What other forces act on this red blood cell? Consider a cell in an arteriole that has a radius of 30 μm, is 10 mm long, and has a pressure drop from one end of the arteriole to the other of 45 torr = 6000 Pa (see Table 1.4 of IPMB). The pressure gradient dP/dx is therefore 6 × 10⁵ Pa/m. The pressure difference between one side of a red blood cell and the other should be the product of the pressure gradient and the cell length. The force is this pressure difference times the surface area, or a³dP/dx = 8 × 10⁻¹¹ N. This force is about 400,000 times larger than the magnetic force calculated above.

Another force arises from friction between the fluid and the cell. It is equal to the product of the surface area (a²), the viscosity η, and the velocity gradient (Eq. 1.33). Take the blood viscosity to be 3 × 10⁻³ Pa s. If we assume Poiseuille flow, the average speed of the blood in the arteriole is 0.02 m/s (Eq. 1.37). The average velocity gradient should be the average speed divided by the radius, or about 700 s⁻¹. The viscous force is then 5 × 10⁻¹¹ N. This is almost the same as the pressure force. (Had we done the calculation more accurately, we would have found that the two forces have the same magnitude and cancel each other out, because the blood is not accelerating.)

Another small force acting on the red blood cell is gravity. The gravitational force is the density times the volume times the acceleration of gravity (Eq. 1.31). If we assume a density of 1000 kg/m³, this force is equal to about 10⁻¹² N. Even if this overestimates the force of gravity by a factor of a thousand because of buoyancy, it is still nearly an order of magnitude larger than the magnetic force.
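
To put the competing forces side by side, here is the same set of estimates as a short script (again my own sketch, using the numbers above; the magnetic force is carried over from the earlier estimate):

    a = 5e-6                            # red blood cell size, m

    # Pressure force: arteriole of radius 30 um, length 10 mm, 45 torr (6000 Pa) drop.
    grad_P = 6000.0 / 0.01              # pressure gradient, 6e5 Pa/m
    F_pressure = a**3 * grad_P          # about 8e-11 N

    # Viscous force: surface area times viscosity times velocity gradient.
    eta = 3e-3                          # blood viscosity, Pa s
    v_over_r = 0.02 / 30e-6             # average speed / radius, about 700 1/s
    F_viscous = a**2 * eta * v_over_r   # about 5e-11 N

    # Gravity: density times volume times g.
    F_gravity = 1000.0 * a**3 * 9.8     # about 1e-12 N

    F_magnetic = 2e-16                  # dipole-dipole force from the previous estimate, N

    for name, F in [("pressure", F_pressure), ("viscous", F_viscous),
                    ("gravity", F_gravity), ("magnetic", F_magnetic)]:
        print(f"{name:9s} {F:8.1e} N  ({F / F_magnetic:8.1e} times the magnetic force)")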

These back-of-the-envelope calculations suggest that the dipole-dipole force is very small compared to other forces acting on the red blood cell. It is not obvious how it could trigger cell aggregation.

Let me add a few other points.
  • The abstract talks about suppressing turbulence. However, as Russ and I point out in IPMB, turbulence is only important in the largest vessels of the circulatory system, such as the aorta. In the vast majority of vessels there is no turbulence to suppress. 
  • In their 2011 paper, Tao and Huang claim the change in viscosity is caused by aggregation of blood cells, and their Fig. 3c shows one such clump of about a dozen cells. However, capillaries are so small that blood cells go through them one at a time. Aggregates of cells might not be able to pass through a capillary. 
  • If a magnetic field makes dramatic changes in blood viscosity, then you should experience noticeable changes in blood flow and blood pressure during magnetic resonance imaging, which can expose you to magnetic fields of several tesla. I have not seen any reports of such hemodynamic changes during an MRI. 
  • I would expect that an aggregate of blood cells blocking a vessel could cause a stroke. I have never heard of an increased risk of stroke when a person is exposed to a magnetic field. 
  • Tao and Huang claim that for the dipole interaction energy to be stronger than thermal energy, kT, the applied magnetic field should be on the order of 1 T. I have reproduced their calculation and they are correct, but I am not sure kT is the relevant energy for comparison (see the sketch after this list). A 1 T magnetic field would result in a dipole-dipole interaction energy for the entire red blood cell of about kT. At the temperature of the human body kT is about 1/40 of an electron volt, which is less than the energy of one covalent bond. There are about 10¹⁴ atoms making up a red blood cell. Is one extra bond among those hundred million million atoms going to cause aggregation?
  • The change in viscosity apparently depends on direction. I can see how you could adjust the geometry so the magnetic field is parallel to the blood flow for one large artery or vein, but the arterioles, venules, and especially capillaries are going to be oriented every which way. Blood flow is slower in these small vessels, so red blood cells spend a large fraction of their time in them. I expect that in some vessels the viscosity would go up, and in others it would go down.
  • Tao claims that the change in viscosity lasts 24 hours after the magnetic field is turned off. If the dipole-dipole interaction causes this effect, why does it last so long after the magnetic field is gone? Perhaps the magnetic interaction pushes the cells together and then other chemical reactions cause them to stick to each other. But if that were the case, then why are the cells not sticking together whenever they bump into each other as they tumble through the circulatory system?
  • Finally (and this is a little out of my expertise, so I am on shakier ground here), doctors recommend aspirin because of its effect on blood clotting, not because it reduces viscosity.
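
For the energy comparison flagged in the bullet list above (Tao and Huang's kT argument), here is a sketch using the same crude cube-of-side-a model as the force estimates (my own numbers, not Tao and Huang's calculation). U and kT come out within an order of magnitude of each other, and both are far smaller than a covalent bond of a few electron volts.

    import numpy as np

    a = 5e-6                                # cell size, m
    chi = 1e-5                              # susceptibility
    B = 1.0                                 # applied field, T
    mu0 = 4 * np.pi * 1e-7                  # T m/A

    m = a**3 * chi * B / mu0                # whole-cell dipole moment, A m^2
    U = mu0 * m**2 / (4 * np.pi * a**3)     # dipole-dipole interaction energy, J

    kT = 1.38e-23 * 310                     # thermal energy at body temperature, J
    eV = 1.602e-19                          # joules per electron volt

    print(f"U  = {U:.1e} J = {U / eV:.4f} eV")
    print(f"kT = {kT:.1e} J = {kT / eV:.4f} eV")    # about 1/40 eV
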
What lessons can we learn from this analysis? First, I am not convinced that this effect of magnetism on blood viscosity is real. I could be wrong, and I may be missing some key piece of the puzzle. I'm a simple man, and the process may be inherently complex. Nevertheless, it just doesn’t make sense to me. Second, you should always make back-of-the-envelope estimations of the effects you study. Russ and I encourage such estimates in Intermediate Physics for Medicine and Biology. Get into the habit of using order-of-magnitude calculations to check if your results are reasonable.

Friday, May 12, 2017

Free-Radical Chain Reactions that Spread Damage and Destruction

One way radiation damages tissue is by producing free radicals, also known as reactive oxygen species. In Chapter 16 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss these molecules:
High-LET [linear energy transfer] radiation produces so many ion pairs along its path that it exerts a direct action on the cellular DNA. Low-LET radiation can also ionize, but it usually acts indirectly. It ionizes water (primarily) according to the chemical reaction

H₂O → H₂O⁺ + e⁻.

The H₂O⁺ ion decays with a lifetime of about 10⁻¹⁰ s to the hydroxyl free radical:

H₂O⁺ + H₂O → H₃O⁺ + OH.

This then produces hydrogen peroxide and other free radicals that cause the damage by disrupting chemical bonds in the DNA.
Free radicals are produced not only by water, but also by oxygen, O₂. Tissues without much oxygen (such as the ischemic core of a tumor) are resistant to radiation damage.

Oxygen: The Molecule that Made the World, by Nick Lane.
Nick Lane’s book Oxygen: The Molecule that Made the World explains how radiation interacts with tissue through free radicals. In his Chapter 6, “Treachery in the Air: Oxygen Poisoning and X-Irradiation: A Common Mechanism,” he writes:
A free radical is loosely defined as any molecule capable of independent existence that has an unpaired electron. This tends to be an unstable electronic configuration. An unstable molecule in search of stability is quick to react with other molecules. Many free radicals are, accordingly, very reactive…

The three intermediates formed by irradiating water, the hydroxyl radicals, hydrogen peroxide and superoxide radicals, react in very different ways. However, because all three are linked and can be formed from each other, they might be considered equally dangerous…

Hydroxyl radicals (OH) are the first to be formed. These are extremely reactive fragments, the molecular equivalents of random muggers. They can react with all biological molecules at speeds approaching their rate of diffusion. This means that they react with the first molecules in their path and it is virtually impossible to stop them from doing so. They cause damage even before leaving the barrel of the gun…

If radiation strips a second electron from water, the next fleeting intermediate is hydrogen peroxide (H₂O₂)…Hydrogen peroxide is unusual in that it lies chemically exactly half way between oxygen and water. This gives the molecule something of a split personality. Like a would-be reformed mugger, whose instinct is pitted against his judgement, it can go either way in its reactions….[A] dangerous and significant reaction, however, takes place in the presence of iron, which can pass electrons one at a time to hydrogen peroxide to generate free radicals. If dissolved iron is present, hydrogen peroxide is a real hazard…

The third of our intermediates … [is] the superoxide radical (O₂⁻). Like hydrogen peroxide, the superoxide radical is not terribly reactive. However, it too has an affinity for iron…
In summary, then, the three intermediates between water and oxygen operate as an insidious catalytic system that damages biological molecules in the presence of iron. Superoxide radicals release iron from storage depots and convert it into the soluble form. Hydrogen peroxide reacts with soluble iron to generate hydroxyl radicals. Hydroxyl radicals attack all proteins, lipids and DNA indiscriminately, initiating destructive free-radical chain reactions that spread damage and destruction.
I fear that physics and biology alone are not enough to understand how radiation interacts with tissue; we need some chemistry too.