Friday, June 9, 2017

Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

A fascinating paper, titled “Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields,” was published in the June 1 issue of Cell (Volume 169, Pages 1029–1041) by Nir Grossman and his colleagues. Although I don’t agree with everything the authors say (I never do), on the whole this study is an important contribution. You may have seen Pam Belluck's article about it in the New York Times. Below is Grossman et al.’s abstract.
We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.
The gist of the method is to apply two electric fields to the brain, one with frequency f1 and the other with frequency f2, where f2 = f1 + Δf with Δf small. The result is a carrier with a frequency equal to the average of f1 and f2, modulated by a beat frequency equal to Δf. For instance, the study uses two currents having frequencies f1 = 2000 Hz and f2 = 2010 Hz, resulting in a carrier frequency of 2005 Hz and a beat frequency of 10 Hz. When they use this current to stimulate a mouse brain, the mouse neurons respond at a frequency of 10 Hz.
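
If you want to see the beats for yourself, here is a short Python script (my own sketch, not from the paper) that builds the two-frequency stimulus and verifies the product form of carrier and envelope.

```python
import numpy as np

# Sum of two unit-amplitude sinusoids at f1 = 2000 Hz and f2 = 2010 Hz.
f1, f2 = 2000.0, 2010.0          # Hz
t = np.arange(0.0, 0.3, 1.0e-5)  # 300 ms sampled at 100 kHz
signal = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# Product form: a carrier at the mean frequency times an envelope at half
# the difference frequency; the envelope magnitude repeats at df = 10 Hz.
carrier = np.sin(2*np.pi*0.5*(f1 + f2)*t)
envelope = 2*np.cos(2*np.pi*0.5*(f2 - f1)*t)
assert np.allclose(signal, carrier*envelope, atol=1e-9)

# Peaks of |envelope| occur every 1/df = 100 ms.
print("beat period (ms):", 1000.0/(f2 - f1))  # 100.0
```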

The paper uses some fancy language, like the neuron “demodulating” the stimulus and responding to the “temporal interference”. I think there is a simpler explanation. The authors show that, in general, a nerve does not respond to a 2000 Hz stimulus, except for a transient excitation when the stimulus is first turned on. I would describe their beat-frequency stimulus as being like a 2000 Hz current that repeatedly turns on and off. Each time the stimulus turns on (every 100 ms) you get a transient response. This gives you a neural response at 10 Hz, as observed in the experiment. In other words, a sinusoidally modulated carrier doesn’t act so differently from a carrier turned on and off at the same rate (modulated by a square wave), as shown in the picture below. The transient response is the key to understanding its action.

A comparison of a beat frequency and a frequency modulated by a square wave.

Stimulating neurons at the beat frequency is an amazing result. Why didn’t I think of that? Just as astonishing is the ability to selectively stimulate neurons deep in the brain. We used to worry about this a lot when I worked on magnetic stimulation at the National Institutes of Health, and we concluded that it was impossible. The argument was that the electric field obeys Laplace’s equation (the wave equation in the limit when propagation effects are negligible, so you can ignore the time derivatives), and a solution to Laplace’s equation cannot have a local maximum. But the argument doesn’t seem to hold when you stimulate using two different frequencies. The reason is that a weak single-frequency field doesn’t excite neurons (the field strength is below threshold) and a strong single-frequency field doesn’t excite neurons (the stimulus is so large and rapid that the neuron is always refractory). You need two fields of about the same strength but slightly different frequencies to get the on/off behavior that causes the transient excitation. I see no reason why you can’t get such excitation to occur selectively at depth, as the authors suggest. Wow! Again, why didn’t I think of that?

I find it interesting to analyze how the electric field behaves. Suppose you have two electric fields, one at frequency f1 that oscillates back-and-forth along a direction down and to the left, and another at frequency f2 that oscillates back-and-forth along a direction down and to the right (see the figure below). When the two electric fields are in phase, their horizontal components cancel and their vertical components add, so the result is a vertically oscillating electric field (vertical polarization). When the two electric fields are 180 degrees out of phase, their vertical components cancel and their horizontal components add, so the result is a horizontally oscillating electric field (horizontal polarization). At times when the two electric fields are 90 degrees out of phase, the electric field rotates (circular polarization). Therefore, the electric field’s amplitude doesn’t change much, but its polarization modulates at the beat frequency. If you stimulate an axon for which only the electric field component along its length matters for excitation, you can project the modulated polarization onto the axon direction and recover the beat-frequency electric field discussed in the paper. It’s almost like optics. (OK, maybe “temporal interference” isn’t such a bad phrase after all.)
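
Here is a little Python sketch of this geometry (my own construction; the directions and the vertical axon are just assumptions for illustration). It verifies that the cycle-averaged field strength stays nearly constant while the component along the axon beats at 10 Hz.

```python
import numpy as np

# Two orthogonal field components: one oscillating along "down-left" (u1)
# at f1, the other along "down-right" (u2) at f2.
f1, f2 = 2000.0, 2010.0                      # Hz
u1 = np.array([-1.0, -1.0]) / np.sqrt(2.0)   # down and to the left
u2 = np.array([ 1.0, -1.0]) / np.sqrt(2.0)   # down and to the right
t = np.arange(0.0, 0.2, 1.0e-5)              # 200 ms at 100 kHz

E = np.outer(np.sin(2*np.pi*f1*t), u1) + np.outer(np.sin(2*np.pi*f2*t), u2)

# Project onto a vertical axon: this component beats at f2 - f1 = 10 Hz,
# with an envelope that swings between about +/- sqrt(2).
axon = np.array([0.0, 1.0])
proj = E @ axon

# Cycle-averaged field energy is nearly constant, even though the projection
# (and hence the polarization direction) is modulated at the beat frequency.
win = 100                                    # 1 ms window = 2 carrier cycles
energy = np.convolve(np.sum(E**2, axis=1), np.ones(win)/win, mode='valid')
print("energy ripple:", energy.max() - energy.min())         # tiny (~0.005)
print("projection envelope range:", proj.min(), proj.max())  # ~ +/- 1.41
```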

An explanation of how a circularly polarized electric field is produced during neural stimulation.

A good paper raises as many questions as it answers. For instance, how exactly does a nerve respond to a beat-frequency electric field? I would like to see a computer simulation of this case based on a neural excitation model, such as the Hodgkin-Huxley model. (You can learn more about the Hodgkin-Huxley model in Chapter 6 of Intermediate Physics for Medicine and Biology; you knew I was going to get a plug for the book in here somewhere.) Also, unlike long straight axons in the peripheral nervous system, neurons in the brain bend and branch, so different neurons may respond to electric fields in different (or all) directions. How does such a neuron respond to a circularly polarized electric field?
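
Here is the skeleton of the simulation I have in mind: the standard Hodgkin-Huxley squid-axon model (the same one described in Chapter 6 of IPMB) driven by a two-frequency current. The stimulus amplitude A is an arbitrary guess, and a real study would need a more appropriate neuron model, but the structure would look something like this.

```python
import numpy as np

# Standard Hodgkin-Huxley squid axon (modern sign convention), driven by a
# two-frequency "temporal interference" current. The HH parameters are the
# textbook values; the stimulus amplitude A is a guess, not from the paper.
C = 1.0                                   # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3            # maximum conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4          # reversal potentials, mV

def rates(V):
    am = 0.1*(V + 40.0)/(1.0 - np.exp(-(V + 40.0)/10.0))
    bm = 4.0*np.exp(-(V + 65.0)/18.0)
    ah = 0.07*np.exp(-(V + 65.0)/20.0)
    bh = 1.0/(1.0 + np.exp(-(V + 35.0)/10.0))
    an = 0.01*(V + 55.0)/(1.0 - np.exp(-(V + 55.0)/10.0))
    bn = 0.125*np.exp(-(V + 65.0)/80.0)
    return am, bm, ah, bh, an, bn

f1, f2 = 2.0, 2.01                        # kHz, since t is in ms
A = 20.0                                  # stimulus amplitude, uA/cm^2 (a guess)
dt, tmax = 0.005, 300.0                   # time step and duration, ms
V, m, h, n = -65.0, 0.053, 0.596, 0.318   # resting state

spikes = []
for i in range(int(tmax/dt)):
    t = i*dt
    I = A*(np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t))
    am, bm, ah, bh, an, bn = rates(V)
    INa = gNa*m**3*h*(V - ENa)
    IK = gK*n**4*(V - EK)
    IL = gL*(V - EL)
    Vnew = V + dt*(I - INa - IK - IL)/C   # forward Euler step
    m += dt*(am*(1.0 - m) - bm*m)
    h += dt*(ah*(1.0 - h) - bh*h)
    n += dt*(an*(1.0 - n) - bn*n)
    if V < 0.0 <= Vnew:                   # upward zero crossing = spike
        spikes.append(t)
    V = Vnew

print("spike times (ms):", np.round(spikes, 1))
# If the neuron follows the 10 Hz envelope, spikes should arrive ~100 ms apart.
```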

When I first read the paper’s final sentence—“We anticipate that [the method of beat-frequency stimulation] might rapidly be deployable into human clinical trials, as well as studies of the human brain”—I was skeptical. Now that I’ve thought about it more, I’m willing to…ahem…not dismiss this claim out of hand. It might work. Maybe. There is still the issue of getting a current applied to the scalp into the brain through the high-resistance skull, which is why transcranial magnetic stimulation is more common than transcranial electric stimulation for clinical applications. I don’t know if this new method will ultimately work, but Grossman et al. will have fun giving it a try.

Friday, June 2, 2017

Internal Conversion

In Chapter 17 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss nuclear decay.
Whenever a nucleus loses energy by γ-decay, there is a competing process called internal conversion. The energy to be lost in the transition, Eγ, is transferred directly to a bound electron, which is then ejected with a kinetic energy
T = Eγ − B,
where B is the binding energy of the electron.
What is the energy of these ejected internal conversion electrons? Does the most important γ-emitter for medical physics, 99mTc, decay by internal conversion? To answer these questions, we need to know the binding energy B. Table 15.1 of IPMB provides the energy levels for tungsten; below are similar data for technetium.

level    energy (keV)
K        -21.044
LI        -3.043
LII       -2.793
LIII      -2.677
MI        -0.544
MII       -0.448
MIII      -0.418
MIV       -0.258
MV        -0.254

The binding energy B is just the negative of the energy listed above. During internal conversion, most often a K-shell electron is ejected. The most common γ-ray emitted during the decay of 99mTc has an energy of 140.5 keV. Thus, K-shell internal conversion electrons are emitted with energy 140.5 – 21.0 = 119.5 keV. If you look at the tabulated data in Fig. 17.4 in IPMB, giving the decay data for 99mTc, you will find the internal conversion of a K-shell electron (“ce-K”) for γ-ray 2 (the 140.5 keV gamma ray) has this energy (“1.195E-01 MeV”). The energy of internal conversion electrons from other shells is greater, because the electrons are not held as tightly.

Auger electrons also come spewing out of technetium following internal conversion. These electrons arise, for instance, when the just-created hole in the K-shell is filled by another electron. This process can be accompanied by emission of a characteristic x-ray, or by ejection of an Auger electron. Suppose internal conversion ejects a K-shell electron, and then the hole is filled by an electron from the L-shell, with ejection of another L-shell Auger electron. We would refer to this as a “KLL” process, and the Auger electron energy would be equal to the difference of the energies of the L and K shells, minus the binding energy of the L-shell electron, or 21 – 2(3) = 15 keV. This value is approximate, because the LI, LII, and LIII binding energies are all slightly different.
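
Using the table above, a few lines of Python reproduce these electron energies (the shell labels and the KLL approximation are just as described in the text).

```python
# Binding energies of the technetium shells (keV), from the table above.
B = {'K': 21.044, 'LI': 3.043, 'LII': 2.793, 'LIII': 2.677,
     'MI': 0.544, 'MII': 0.448, 'MIII': 0.418, 'MIV': 0.258, 'MV': 0.254}

E_gamma = 140.5   # keV, the main gamma ray of Tc-99m

# Internal conversion electron energies, T = E_gamma - B.
for shell in ('K', 'LI', 'MI'):
    print(f"ce-{shell}: {E_gamma - B[shell]:.1f} keV")
# ce-K: 119.5 keV, ce-LI: 137.5 keV, ce-MI: 140.0 keV

# KLL Auger electron: hole in the K shell filled from the L shell, with a
# second L-shell electron ejected; approximated here using the LI level.
print(f"KLL Auger: {B['K'] - 2*B['LI']:.1f} keV")   # ~15.0 keV
```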

In general, Auger electron energies are much less than internal conversion electron energies, because nuclear energy levels are more widely spaced than electron energy levels. For 99mTc, the internal conversion electron has an energy of 119.5 keV compared to a typical Auger electron energy of 15 keV (Auger electron energies for other processes are often smaller).

Another important issue is what fraction of decays are internal conversion versus gamma emission. This can be quantified using the internal conversion coefficient, defined as the number of internal conversions over the number of gamma emissions. Figure 17.4 in IPMB has the data we need to calculate the internal conversion coefficient. The mean number of gamma rays (only considering γ-ray 2) per disintegration is 0.891, whereas the mean number of internal conversion electrons per disintegration is 0.0892+0.0099+0.0006+0.0003+0.0020+0.0004 = 0.1024 (adding the contributions for all the important energy levels). Thus, the internal conversion coefficient is 0.1024/0.891 = 0.115.
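
The same arithmetic in code form, using the mean numbers per disintegration from Fig. 17.4:

```python
# Mean numbers per disintegration for gamma-ray 2 of Tc-99m (Fig. 17.4 of IPMB).
gammas = 0.891
electrons = [0.0892, 0.0099, 0.0006, 0.0003, 0.0020, 0.0004]  # ce branches

alpha = sum(electrons)/gammas   # internal conversion coefficient
print(f"conversion electrons per disintegration: {sum(electrons):.4f}")  # 0.1024
print(f"internal conversion coefficient: {alpha:.3f}")                   # 0.115
```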

The ideal isotope for medical imaging would have no internal conversion, which adds nothing to the image but contributes to the dose. Technetium, which has so many desirable properties, also has a small internal conversion coefficient. It really is the ideal radioisotope for medical imaging.

Friday, May 26, 2017

Confocal Microscopy

Russ Hobbie and I don’t talk much about microscopy in Intermediate Physics for Medicine and Biology. In the homework problems to Chapter 14 (Atoms and Light) we describe the compound microscope, but that is about it. Physics, however, plays a big role in microscopy. In this post, I attempt to explain the physics behind the confocal microscope. I leave out much, but I hope this explanation conveys the essence of the technique.

Start with a simple converging lens. The lens is often indicated by a vertical line with triangles on the top and bottom, but this is shorthand for the dashed convex lens shown below. Assume this is the objective lens of your microscope. A lens has two focal points. Light originating at the left focal point exits the lens horizontally (yellow), like in a searchlight. Light coming from a distant object (purple) converges at the right focal point, like in a telescope.
A ray diagram showing how an objective lens works.
When an object is not so distant, the light converges at a point beyond the focal point; the closer the object, the farther back it converges. You can calculate the point where the light converges using the thin lens equation (Eq. 14.64 in IPMB). Below I show three rays originating at different positions in a sample of biological tissue. The colors (green, blue, and red) do not indicate different wavelengths of light; I use different colors so the rays are easier to follow. Light originating deep in the sample (green) converges just beyond the right focal point, but light originating near the front of the sample (red) converges far beyond the focal point. This is why in a camera you can adjust the focus by changing the distance from the lens to the detector.
A ray diagram showing how an objective lens focuses objects at different distances at different locations.
Suppose you wanted to detect light from only the center of the sample. You could put an opaque screen containing a small pinhole beyond the focal point of the lens, just where the blue rays converge. All the light originating from the center of the sample would pass through the pinhole. The light from deep in the sample (green) would be out of focus, so most of this light would be blocked by the screen. Likewise, light from the front of the sample (red) is even more out of focus, and only a tiny bit passes through the pinhole. So, voilà, the light detected beyond the pinhole is almost entirely from the center of the sample.
A ray diagram that shows how a pinhole can be used to make a confocal microscope that images only one plane in an object.
Do you want to view a different depth? Just move the screen/pinhole to the right or left, and you can image shallower or deeper positions.
Another ray diagram that shows how a pinhole can be used to make a confocal microscope that images only one plane in an object.
In this way, you build up an image of the sample as a function of depth.
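
You can see this depth dependence with the thin lens equation itself. In the sketch below, the focal length and object distances are made-up numbers, chosen only to mimic the green, blue, and red rays in the figures.

```python
# Thin-lens calculation: where does light from different depths converge?
# All distances are measured from the lens, in mm (values are assumptions).
def image_distance(s, f):
    """Thin lens equation, 1/s + 1/s' = 1/f, solved for the image distance s'."""
    return 1.0/(1.0/f - 1.0/s)

f = 10.0   # focal length of the objective (assumed)
for s, label in [(30.0, 'deep in sample (green)'),
                 (25.0, 'center of sample (blue)'),
                 (20.0, 'front of sample (red)')]:
    print(f"{label}: object at {s} mm -> image at {image_distance(s, f):.1f} mm")

# deep:   30 mm -> 15.0 mm  (just beyond the 10 mm focal point)
# center: 25 mm -> 16.7 mm  (put the pinhole here)
# front:  20 mm -> 20.0 mm  (far beyond the focal point)
```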

How do you get information in the plane at one depth? In confocal microscopy, you usually scan a laser at different positions in the x,y plane (taking z as the distance from the lens). Pixel by pixel, you build up an image, then adjust the position of the screen and build up another image, and then another and another.

Often, confocal microscopy is used with samples that emit fluoresced light. You shine the narrow laser beam of short-wavelength light onto the sample from the left. The sample then emits long-wavelength light as molecules in the sample fluoresce. You can filter out the short-wavelength light, and just image the long-wavelength light. Biologists play all kinds of tricks with fluorescence, such as attaching a fluorescent molecule to the particular structure in the sample they want to image.

There are advantages and disadvantages of a confocal microscope. One advantage is that your detector, positioned to the right of the screen/pinhole, need not be an array like in a camera. A single detector is sufficient; you build up the spatial information by scanning the laser beam (x,y) and the pinhole (z) to obtain full three-dimensional information that you can then manipulate with a computer to create informative and beautiful pictures. One disadvantage is that you have to scan both the laser and the pinhole in synchrony, which is not easy. All this scanning takes time, although video-rate scans are possible using modern technology. Also, most of your fluoresced light gets blocked by the screen, so you need an intense light source that may bleach your fluorescent tag.

The confocal microscope was invented by Marvin Minsky, who died last year after a productive career in science. Minsky was an undergraduate physics major who went on to study mathematics, computer science, artificial intelligence, robotics, and consciousness. Isaac Asimov claimed in his autobiography In Joy Still Felt that only two people he knew were more intelligent than he was: Carl Sagan and Minsky. Marvin Minsky and his confocal microscope illustrate the critical role of physics in medicine and biology.

Friday, May 19, 2017

Applying Magneto-Rheology to Reduce Blood Viscosity and Suppress Turbulence to Prevent Heart Attacks

Recently, Russ Hobbie pointed out to me an abstract presented at the 2017 American Physical Society March Meeting about “Applying Magneto-Rheology to Reduce Blood Viscosity and Suppress Turbulence to Prevent Heart Attacks,” by Rongjia Tao.
Heart attacks are the leading causes of death in USA. Research indicates one common thread, high blood viscosity, linking all cardiovascular diseases. Turbulence in blood circulation makes different regions of the vasculature vulnerable to development of atherosclerotic plaque. Turbulence is also responsible for systolic ejection murmurs and places heavier workload on heart, a possible trigger of heart attacks. Presently, neither medicine nor method is available to suppress turbulence. The only method to reduce the blood viscosity is to take medicine, such as aspirin. However, using medicine to reduce the blood viscosity does not help suppressing turbulence. In fact, the turbulence gets worse as the Reynolds number goes up with the viscosity reduction by the medicine. Here we report our new discovery: application of a strong magnetic field to blood along its flow direction, red blood cells are polarized in the magnetic field and aggregated into short chains along the flow direction. The blood viscosity becomes anisotropic: Along the flow direction the viscosity is significantly reduced, but in the directions perpendicular to the flow the viscosity is considerably increased. In this way, the blood flow becomes laminar, turbulence is suppressed, the blood circulation is greatly improved, and the risk for heart attacks is reduced. While these effects are not permanent, they last for about 24 hours after one magnetic therapy treatment.
The report is related to an earlier paper by Tao and Ke Huang “Reducing Blood Viscosity with Magnetic Fields” (Phys Rev E, Volume 84, Article Number 011905, 2011). The APS published a news article about this work.

I have some concerns. Let’s use basic physics, like that discussed in Intermediate Physics for Medicine and Biology, to make order-of-magnitude estimates of the forces acting on a red blood cell.

First, we’ll estimate the dipole-dipole magnetic force. A red blood cell has a funny shape, but for our back-of-the-envelope calculations let’s consider it to be a cube 5 microns on a side. The magnetization M, the magnetic field intensity H, and the magnetic susceptibility χm are related by M = χmH (Eq. 8.31, all equation numbers from the 5th edition of IPMB), and H is related to the applied magnetic field B by B = μoH (Eq. 8.30), where μo is the permeability of free space. The total magnetic dipole moment of a red blood cell, m, is then a³M (Eq. 8.27), or m = a³χmB/μo. If we use χm = 10⁻⁵, B = 1 T, and μo = 4π × 10⁻⁷ T m/A, the dipole strength is about 10⁻¹⁵ A m². The magnetic field produced by this magnetic dipole in an adjacent red blood cell is about μom/(4πa³) = 10⁻⁶ T (Eq. 18.32). The force on a magnetic dipole in this nonuniform magnetic field is approximately mB/a = 2 × 10⁻¹⁶ N (Eq. 8.26).

What other forces act on this red blood cell? Consider a cell in an arteriole that has a radius of 30 μm, is 10 mm long, and has a pressure drop from one end of the arteriole to the other of 45 torr = 6000 Pa (see Table 1.4 of IPMB). The pressure gradient dP/dx is therefore 6 × 10⁵ Pa/m. The pressure difference between one side of a red blood cell and the other should be the product of the pressure gradient and the cell length. The force is this pressure difference times the surface area, or a³ dP/dx = 8 × 10⁻¹¹ N. This force is about 400,000 times larger than the magnetic force calculated above.

Another force arises from friction between the fluid and the cell. It is equal to the product of the surface area (a²), the viscosity η, and the velocity gradient (Eq. 1.33). Take the blood viscosity to be 3 × 10⁻³ Pa s. If we assume Poiseuille flow, the average speed of the blood in the arteriole is 0.02 m/s (Eq. 1.37). The average velocity gradient should be the average speed divided by the radius, or about 700 s⁻¹. The viscous force is then 5 × 10⁻¹¹ N, almost the same as the pressure force. (Had we done the calculation more accurately, we would have found that these two forces have the same magnitude and cancel, because the blood is not accelerating.)

Another small force acting on the red blood cell is gravity. The gravitational force is the density times the volume times the acceleration of gravity (Eq. 1.31). If we assume a density of 1000 kg/m³, this force is equal to about 10⁻¹² N. Even if this overestimates the force of gravity by a factor of a thousand because of buoyancy, it is still nearly an order of magnitude larger than the magnetic force.
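
Here are the same back-of-the-envelope numbers collected into a short Python script, so you can vary the assumptions yourself:

```python
import numpy as np

# Order-of-magnitude forces on a red blood cell, modeled as a cube of side
# a = 5 microns, using the same numbers as in the text above.
a   = 5e-6          # cell size, m
chi = 1e-5          # magnetic susceptibility
B   = 1.0           # applied magnetic field, T
mu0 = 4*np.pi*1e-7  # permeability of free space, T m/A

m     = a**3 * chi * B / mu0       # magnetic dipole moment, A m^2
Bdip  = mu0 * m / (4*np.pi*a**3)   # dipole field at a neighboring cell, T
F_mag = m * Bdip / a               # dipole-dipole force, N

dPdx   = 6000.0 / 0.01             # pressure gradient in the arteriole, Pa/m
F_pres = a**3 * dPdx               # pressure force, N

eta    = 3e-3                      # blood viscosity, Pa s
shear  = 0.02 / 30e-6              # velocity gradient ~ v/R, 1/s
F_visc = a**2 * eta * shear        # viscous force, N

F_grav = 1000.0 * a**3 * 9.8       # gravitational force, N

for name, F in [('magnetic', F_mag), ('pressure', F_pres),
                ('viscous', F_visc), ('gravity', F_grav)]:
    print(f"{name:9s} {F:.1e} N")
# magnetic ~2e-16, pressure ~8e-11, viscous ~5e-11, gravity ~1e-12
```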

These back-of-the-envelope calculations suggest that the dipole-dipole force is very small compared to other forces acting on the red blood cell. It is not obvious how it could trigger cell aggregation.

Let me add a few other points.
  • The abstract talks about suppressing turbulence. However, as Russ and I point out in IPMB, turbulence is only important in the largest vessels of the circulatory system, such as the aorta. In the vast majority of vessels there is no turbulence to suppress. 
  • In their 2011 paper, Tao and Huang claim the change in viscosity is caused by aggregation of blood cells, and their Fig. 3c shows one such clump of about a dozen cells. However, capillaries are so small that blood cells go through them one at a time. Aggregates of cells might not be able to pass through a capillary. 
  • If a magnetic field makes dramatic changes in blood viscosity, then you should experience noticeable changes in blood flow and blood pressure during magnetic resonance imaging, which can expose you to magnetic fields of several tesla. I have not seen any reports of such hemodynamic changes during an MRI. 
  • I would expect that an aggregate of blood cells blocking a vessel could cause a stroke. I have never heard of an increased risk of stroke when a person is exposed to a magnetic field. 
  • Tao and Huang claim that for the dipole interaction energy to be stronger than thermal energy, kT, the applied magnetic field should be on the order of 1 T. I have reproduced their calculation and they are correct, but I am not sure kT is the relevant energy for comparison. A 1 T magnetic field would result in a dipole-dipole interaction energy for the entire red blood cell of about kT. At the temperature of the human body kT is about 1/40 of an electron volt, which is less than the energy of one covalent bond. There are about 10¹⁴ atoms making up a red blood cell. Is one extra bond among those hundred million million atoms going to cause aggregation? 
  • The change in viscosity apparently depends on direction. I can see how you could adjust the geometry so the magnetic field is parallel to the blood flow for one large artery or vein, but the arterioles, venules, and especially capillaries are going to be oriented every which way. Blood flow is slower in these small vessels, so red blood cells spend a large fraction of their time in them. I expect that in some vessels the viscosity would go up, and in others it would go down. 
  • Tao claims that the increase in viscosity lasts 24 hours after the magnetic field is turned off. If the dipole-dipole interaction causes this effect, why does it last so long after the magnetic field is gone? Perhaps the magnetic interaction pushes the cells together and then other chemical reactions cause them to stick to each other. But if that were the case, then why are the cells not sticking together whenever they bump into each other as they tumble through the circulatory system? 
  • Finally--and this is a little out of my expertise, so I am on shakier ground here--doctors recommend aspirin because of its effect on blood clotting, not because it reduces viscosity.
What lessons can we learn from this analysis? First, I am not convinced that this effect of magnetism on blood viscosity is real. I could be wrong, and I may be missing some key piece of the puzzle. I'm a simple man, and the process may be inherently complex. Nevertheless, it just doesn’t make sense to me. Second, you should always make back-of-the-envelope estimations of the effects you study. Russ and I encourage such estimates in Intermediate Physics for Medicine and Biology. Get into the habit of using order-of-magnitude calculations to check if your results are reasonable.

Friday, May 12, 2017

Free-Radical Chain Reactions that Spread Damage and Destruction

One way radiation damages tissue is by producing free radicals, also known as reactive oxygen species. In Chapter 16 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss these molecules:
High-LET [linear energy transfer] radiation produces so many ion pairs along its path that it exerts a direct action on the cellular DNA. Low-LET radiation can also ionize, but it usually acts indirectly. It ionizes water (primarily) according to the chemical reaction

H₂O → H₂O⁺ + e⁻ .

The H₂O⁺ ion decays with a lifetime of about 10⁻¹⁰ s to the hydroxyl free radical:

H₂O⁺ + H₂O → H₃O⁺ + OH• .

This then produces hydrogen peroxide and other free radicals that cause the damage by disrupting chemical bonds in the DNA.
Free radicals are produced not only by water, but also by oxygen, O₂. Tissues without much oxygen (such as the ischemic core of a tumor) are resistant to radiation damage.

Oxygen: The Molecule that Made the World, by Nick Lane.
Nick Lane’s book Oxygen: The Molecule that Made the World explains how radiation interacts with tissue through free radicals. In his Chapter 6, “Treachery in the Air: Oxygen Poisoning and X-Irradiation: A Common Mechanism,” he writes:
A free radical is loosely defined as any molecule capable of independent existence that has an unpaired electron. This tends to be an unstable electronic configuration. An unstable molecule in search of stability is quick to react with other molecules. Many free radicals are, accordingly, very reactive…

The three intermediates formed by irradiating water, the hydroxyl radicals, hydrogen peroxide and superoxide radicals, react in very different ways. However, because all three are linked and can be formed from each other, they might be considered equally dangerous…

Hydroxyl radicals (OH) are the first to be formed. These are extremely reactive fragments, the molecular equivalents of random muggers. They can react with all biological molecules at speeds approaching their rate of diffusion. This means that they react with the first molecules in their path and it is virtually impossible to stop them from doing so. They cause damage even before leaving the barrel of the gun…

If radiation strips a second electron from water, the next fleeting intermediate is hydrogen peroxide (H₂O₂)…Hydrogen peroxide is unusual in that it lies chemically exactly half way between oxygen and water. This gives the molecule something of a split personality. Like a would-be reformed mugger, whose instinct is pitted against his judgement, it can go either way in its reactions….[A] dangerous and significant reaction, however, takes place in the presence of iron, which can pass electrons one at a time to hydrogen peroxide to generate free radicals. If dissolved iron is present, hydrogen peroxide is a real hazard…

The third of our intermediates … [is] the superoxide radical (O₂⁻). Like hydrogen peroxide, the superoxide radical is not terribly reactive. However, it too has an affinity for iron…
In summary, then, the three intermediates between water and oxygen operate as an insidious catalytic system that damages biological molecules in the presence of iron. Superoxide radicals release iron from storage depots and convert it into the soluble form. Hydrogen peroxide reacts with soluble iron to generate hydroxyl radicals. Hydroxyl radicals attack all proteins, lipids and DNA indiscriminately, initiating destructive free-radical chain reactions that spread damage and destruction.
I fear that physics and biology alone are not enough to understand how radiation interacts with tissue; we need some chemistry too.

Friday, May 5, 2017

Magnetic Force Microscopy for Nanoparticle Characterization

In Chapter 8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss biomagnetism. The 5th edition of IPMB contains a brief new section on magnetic nanoparticles:
8.8.4 Magnetic Nanoparticles

Small single-domain nanoparticles (10–70 nm in diameter) are used to treat cancer (Jordan et al. 1999; Pankhurst et al. 2009). The particles are injected into the body intravenously. Then an oscillating magnetic field is applied. It causes the particles to rotate, heating the surrounding tissue. Cancer cells are particularly sensitive to damage by hyperthermia. Often the surface of the nanoparticles can be coated with antibodies that cause the nanoparticle to be selectively taken up by the tumor, providing more localized heating of the cancer.
Suppose you wanted to image the distribution of nanoparticles in tissue. How would you do it? In IPMB we describe several ways to map magnetic field distributions, including using a superconducting quantum interference device (SQUID) magnetometer. Is there a way to get even better spatial resolution than using a SQUID? Gustavo Cordova, Brenda Yasie Lee, and Zoya Leonenko recently published a review in the NanoWorld Journal describing “Magnetic Force Microscopy for Nanoparticle Characterization” (Volume 2, Pages 10–14, 2016). Their abstract states
Since the invention of the atomic force microscope (AFM) in 1986, there has been a drive to apply this scanning probe technique or a form of this technique to various disciplines in nanoscale science. Magnetic force microscopy (MFM) is a member of a growing family of scanning probe methods and has been widely used for the study of magnetic materials. In MFM a magnetic probe is used to raster-scan the surface of the sample, of which its magnetic field interacts with the magnetic tip to offer insight into its magnetic properties. This review will focus on the use of MFM in relation to nanoparticle characterization, including superparamagnetic iron oxide nanoparticles, covering MFM imaging in air and in liquid environments.
Figure 1 from their paper shows how the MFM “two-pass technique” works: a first pass produces a topographical image using an atomic force microscope, and then a second pass creates a magnetic image using a magnetic force microscope. Both the AFM and the MFM use a small, nanometer-sized scanning tip attached to a cantilever to detect small-scale tip deflections.


Their Figure 2 shows the results of MFM imaging of superparamagnetic iron oxide nanoparticles (SPIONs). The technique has sufficient sensitivity that they can measure how the size distribution of SPIONs depends on the particle coating.


Cordova et al. conclude
Considering rapid development of novel applications of magnetic nanoparticles in medicine and biomedical nanotechnology, as therapeutic agents, contrast agents in MRI imaging and drug delivery [3] MFM characterization of nanoparticles becomes more valuable and desirable. Overall, MFM has proven itself to be an effective yet underused tool that offers great potential for the localization and characterization of magnetic nanoparticles.
The magnetic force microscope does not have the exquisite picotesla sensitivity and millisecond time resolution of a SQUID, and it requires access to the tissue surface so you can scan over it. But for static, strong-field systems such as magnetic nanoparticle distributions, the magnetic force microscope provides exceptional spatial resolution, and represents one more tool in the physicist’s arsenal for imaging magnetic fields in biology.

Friday, April 28, 2017

The Thermodynamics of the Proton Spin

In Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce a lot of statistical mechanics and thermodynamics. For instance, in Chapter 3 we describe the Boltzmann factor and the heat capacity, and in Chapter 18 we analyze magnetic resonance imaging by considering the magnetization of the two-state spin system using statistical mechanics. Perhaps we can do a little more.

First, let’s calculate the average energy of a spin-1/2 proton in a magnetic field B. The proton has two states: up and down. Up has the lower energy Eup = −μB, and down has the higher energy Edown = +μB, where μ is the proton’s magnetic moment. Using the Boltzmann factor, the probability of having spin up is Pup = C e^(μB/kT), and of spin down is Pdown = C e^(−μB/kT), where T is the absolute temperature and k is Boltzmann’s constant. The spin must be in one of these two states, so Pup + Pdown = 1, or C = 1/(e^(μB/kT) + e^(−μB/kT)). The average energy, ⟨E⟩, is PupEup + PdownEdown, or

⟨E⟩ = −μB tanh(μB/kT).

The total energy E of the spin system is just the average energy times the number of spins, E = N⟨E⟩.

Equations are not just things you plug numbers into to get other numbers. Equations tell a physical story. So, whenever I teach the two-state spin system I stress the story, which becomes clearer if we examine the limiting cases of this equation. At high temperatures (μB much less than kT), the argument of the hyperbolic tangent is small, we can use a Taylor expansion for the exponentials, and the average energy is −μ²B²/kT. This is the limit of interest for magnetic resonance imaging, when the average energy increases as the square of the magnetic field. At low temperatures (μB much greater than kT), the argument of the hyperbolic tangent is large, tanh goes to one, and the average energy is −μB. All the spins are in the spin up ground state.

Next, let’s calculate the heat capacity, C = dE/dT. The derivative of the hyperbolic tangent is the hyperbolic secant squared, so

C = Nk (μB/kT)² sech²(μB/kT).
The leading factor is the number of molecules times the Boltzmann constant, which is equal to the number of moles times the gas constant. At high temperatures, C goes to zero because of the factor of 1/T². Physically, this result arises because in this case the spins are approximately half spin up and half spin down, so the average energy is about zero, and making the system even hotter won’t change the situation. You typically see this type of behavior in systems that have an upper energy level (as opposed to, say, a system like the harmonic oscillator that has energy levels at increasing energies without bound). At low temperatures, C also goes to zero because the hyperbolic secant goes to zero at large argument. This result arises because the spin down state freezes out: if the system is cold enough no spins can reach the spin down state, so the average energy is simply the energy of the spin up ground state.
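
These limits are easy to check numerically. Writing the energy per spin as −μB tanh(x) and the heat capacity per spin as k x² sech²(x), with x = μB/kT:

```python
import numpy as np

# Average energy and heat capacity per spin (in units of muB and k)
# as a function of x = muB/kT, to check the limiting behavior.
def avg_energy(x):       # <E>/(mu B) = -tanh(x)
    return -np.tanh(x)

def heat_capacity(x):    # C/(N k) = x^2 sech^2(x)
    return x**2 / np.cosh(x)**2

for x, regime in [(0.01, 'high T'), (1.0, 'kT ~ muB'), (100.0, 'low T')]:
    print(f"{regime:9s} x={x:6.2f}  <E>/muB={avg_energy(x):+.4f}  "
          f"C/Nk={heat_capacity(x):.2e}")
# high T: <E> ~ -x (that is, -mu^2 B^2/kT), and C ~ x^2 -> 0 as 1/T^2
# low T:  <E> -> -1 (all spins up), and C -> 0 (the third law)
```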

The heat capacity going to zero as the temperature goes to zero is one way of stating the third law of thermodynamics. Russ and I discuss the first and second laws of thermodynamics in IPMB, but not the third. This is mainly because life occurs at warm temperatures, so the behavior as T approaches absolute zero does not have much biological significance. But although little biology happens around absolute zero, much physics does. To learn more about the world at low temperatures, I recommend the book The Quest for Absolute Zero, by Kurt Mendelssohn. Fascinating reading.

Friday, April 21, 2017

Erythropoietin and Feedback Loops

In Chapter 10 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss feedback loops. We included two new problems about feedback loops in the 5th edition of IPMB, but as Russ says “you can never have too many examples.” So, here’s another.

The number of red blood cells is controlled by a feedback loop involving the hormone erythropoietin. The higher the erythropoietin concentration, the more red blood cells are produced and therefore the higher the hematocrit. However, the kidney adjusts the production of erythropoietin in response to hypoxia (caused in part by too few red blood cells): the lower the hematocrit, the more erythropoietin is produced. This new homework problem illustrates the feedback loop. It reinforces concepts from Chapter 10 on feedback and from Chapter 2 on the exponential function, and requires the student to analyze data (albeit made-up data) rather than merely manipulate equations. A numerical sketch of one way to solve it appears after the problem. Warning: the physiological details of this feedback loop are more complicated than discussed in this idealized example.
Section 10.3

Problem 17 ½. Consider a negative feedback loop relating the concentration of red blood cells (the hematocrit, or HCT) to the concentration of the hormone erythropoietin (EPO). In an initial experiment, we infuse blood or plasma intravenously as needed to maintain a constant hematocrit, and measure the EPO concentration. The resulting data are

HCT    EPO
(%)    (mU/ml)
20     200
30      60.1
40      18.1
50       5.45
60       1.64

In a healthy person, the kidney adjusts the concentration of EPO in response to the oxygen concentration (controlled primarily by the hematocrit). In a second experiment, we suppress the kidney’s ability to produce EPO, control the concentration of EPO by infusing the drug intravenously, and measure the resulting hematocrit. We find

EPO        HCT
(mU/ml)    (%)
 1         35.0
 2         36.0
 5         39.1
10         45.0
20         59.5

(a) Plot these results on semilog paper and determine an exponential equation describing each set of data.
(b) Draw a block diagram of the feedback loop, including accurate plots of the two relationships.
(c) Determine the set point (you may have to do this numerically).
(d) Calculate the open loop gain.
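
For readers who want to check their answers, here is a numerical sketch of one way to attack parts (a), (c), and (d). This is my approach, not the only one.

```python
import numpy as np
from scipy.optimize import brentq

# Fit each data set to an exponential (part a), then find the set point
# (part c) and the open-loop gain (part d).
hct1 = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
epo1 = np.array([200.0, 60.1, 18.1, 5.45, 1.64])   # kidney: EPO(HCT)
epo2 = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
hct2 = np.array([35.0, 36.0, 39.1, 45.0, 59.5])    # marrow: HCT(EPO)

# Linear fits on semilog axes: EPO = A exp(b HCT), HCT = C exp(d EPO).
b, lnA = np.polyfit(hct1, np.log(epo1), 1)
d, lnC = np.polyfit(epo2, np.log(hct2), 1)
A, C = np.exp(lnA), np.exp(lnC)
print(f"EPO = {A:.0f} exp({b:.3f} HCT)")   # ~2200 exp(-0.120 HCT)
print(f"HCT = {C:.1f} exp({d:.4f} EPO)")   # ~34.0 exp(0.0279 EPO)

# Set point: the hematocrit at which both relations hold simultaneously.
g = lambda hct: C*np.exp(d*A*np.exp(b*hct)) - hct
hct_sp = brentq(g, 20.0, 60.0)
epo_sp = A*np.exp(b*hct_sp)
print(f"set point: HCT = {hct_sp:.1f}%, EPO = {epo_sp:.1f} mU/ml")  # ~45%, ~10

# Open-loop gain: minus the product of the two slopes at the set point.
gain = -(b*epo_sp)*(d*hct_sp)
print(f"open-loop gain: {gain:.2f}")       # ~1.5
```
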
Biochemist Eugene Goldwasser first reported purification of erythropoietin when working at the University of Chicago in 1977. In his essay “Erythropoietin: A Somewhat Personal History” he writes about his ultimately successful attempt to isolate erythropoietin from urine samples.
Unfortunately the amounts of active urine concentrates available to us from the NIH source or our own collection were still too small to make significant progress, and it seemed as if purification and characterization of human epo might never be accomplished—that it might remain merely an intriguing biological curiosity. The prospect brightened considerably when Drs. M. Kawakita and T. Miyake instituted a very large-scale collection of urine from patients with aplastic anemia in Kumamoto City, Japan. After some lengthy correspondence, Dr. Miyake arrived in Chicago on Christmas Day of 1975, carrying a package representing 2550 liters of urine [!] which he had concentrated using our first-step procedure. He and Charles Kung and I then proceeded systematically to work out a reliable purification method…we eventually obtained about 8 mg of pure human urinary epo.
You can learn more about Goldwasser and his career in his many obituaries, for instance here and here. A more wide-ranging history of erythropoietin can be found here.

Friday, April 14, 2017

Unequal Anisotropy Ratios

In Chapter 7 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the bidomain model, a mathematical description of the anisotropic electrical properties of cardiac tissue.
Anisotropy plays an important role in the bidomain model. To see why, consider a solution to Laplace’s equation in a monodomain—a two-dimensional sheet of homogeneous, anisotropic tissue with straight fibers. If the x direction is chosen to be along the fiber direction (the direction of greatest conductivity), then Laplace’s equation becomes

σx ∂²V/∂x² + σy ∂²V/∂y² = 0.

Now define a new set of coordinates

x′ = x

and

y′ = y √(σx/σy).

You can show that in these new coordinates Laplace’s equation becomes

∂²V/∂x′² + ∂²V/∂y′² = 0.

We have removed the effect of anisotropy by rescaling distance in the direction perpendicular to the fibers. If you try a similar trick with the bidomain model … you can find a new coordinate system that removes the effect of anisotropy in either the intracellular space or the extracellular space, but in general you cannot find a coordinate system that removes the anisotropy in both spaces simultaneously (Roth 1992).
My 1992 paper (Journal of Mathematical Biology, Volume 30, Pages 633–646) contains one of my favorite figures, which illustrates the importance of bidomain anisotropy visually. It is equivalent to an old concept from mechanics called the simultaneous diagonalization of two quadratic forms.

An illustration explaining unequal anisotropy ratios using the simultaneous diagonalization of two quadratic forms.

Russ and I continue,
Only in the special case of equal anisotropy ratios (σix/σiy = σox/σoy) will the equations simplify dramatically. But the anisotropy ratios in the heart are not equal. In the intracellular space the ratio of conductivities parallel and perpendicular to the fibers is about 10:1, while in the extracellular space this ratio is about 4:1 (Roth 1997). Anisotropy plays an essential role in the electrical behavior of the heart, especially during electrical stimulation.
My 1997 paper (IEEE Transactions on Biomedical Engineering, Volume 44, Pages 326–328), published 20 years ago this month, contains an estimate of the bidomain conductivities. After surveying much of the available experimental data, I found
  • Intracellular Longitudinal Conductivity     σiL     0.2 S/m 
  • Intracellular Transverse Conductivity        σiT     0.02 S/m
  • Extracellular Longitudinal Conductivity    σeL    0.2 S/m 
  • Extracellular Transverse Conductivity       σeT    0.08 S/m
This means that the ratio of the conductivity along the fibers to the conductivity across the fibers is 10:1 in the intracellular space, while only 5:2 in the extracellular space.
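
A few lines of Python make the point concrete: rescale the cross-fiber direction so the intracellular space looks isotropic, and watch the extracellular space remain anisotropic (my own illustration, using the conductivities above).

```python
import numpy as np

# Rescale the y (cross-fiber) axis to make the intracellular conductivity
# tensor isotropic, and see what happens to the extracellular tensor.
# Conductivities from the 1997 paper (S/m):
siL, siT = 0.2, 0.02   # intracellular, along and across the fibers
seL, seT = 0.2, 0.08   # extracellular, along and across the fibers

# Under y' = y*sqrt(siL/siT), an effective cross-fiber conductivity s_T
# becomes s_T*(siL/siT).
scale = siL/siT
intra = np.diag([siL, siT*scale])   # [0.2, 0.2] -> isotropic
extra = np.diag([seL, seT*scale])   # [0.2, 0.8] -> still anisotropic!
print("intracellular:", np.diag(intra))
print("extracellular:", np.diag(extra))
# Only if siL/siT = seL/seT (equal anisotropy ratios) would both spaces
# become isotropic in the same coordinate system.
```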

Hold on. In IPMB, Russ and I said σeL/σeT = 4:1, but now I am saying σeL/σeT = 5:2. Drat! IPMB is wrong. Another entry for the errata.

The 1997 paper is highly cited: Google Scholar lists 178 citations. This perplexes me, because I have many papers that are more significant and innovative, but have far fewer citations. I guess usefulness can sometimes be as important as significance and innovation.

Friday, April 7, 2017

Radiopaedia

A screenshot of Radiopaedia.org.
Readers of Intermediate Physics for Medicine and Biology learn topics in medical physics from a physics point-of-view. Often, however, the discussion in IPMB doesn’t emphasize clinical applications. Where can you get more clinical information? Radiopaedia! Radiopaedia.org is a free online website with a large collection of radiology cases and reference articles.

To see what this site is like, I typed some terms into its search box. When I searched for MRI, I found articles about topics that Russ Hobbie and I present in Chapter 18 of IPMB, such as MRI pulse sequences and MRI artifacts, but also a wealth of clinical topics such as protocols for MRI brain screens, stroke, demyelination, and rectal cancer. The site also contains many case studies of specific patients. And it doesn’t cost a thing.

Radiopaedia has much information about nuclear medicine (Chapter 17 in IPMB). I typed “99mTc” into the search box and found articles describing a variety of radiopharmaceuticals based on the technetium-99m radioisotope. Also, the site has much information about positron emission tomography (PET) and single photon emission computed tomography (SPECT).

Radiopaedia covers the interaction of x-rays with tissue (Chapter 15 in IPMB) in a variety of articles about different mechanisms such as the photoelectric effect, Compton scattering, and pair production. Many features of x-ray technology are also discussed (Chapter 16 in IPMB), like x-ray tubes, filters, collimators, grids, and intensifying screens. But the site also describes x-ray images of specific body parts, such as the abdomen, pelvis, ankle, and shoulder. And all this information is available gratis.

The web site discusses computed tomography qualitatively, but not quantitatively, and lacks much of the mathematics presented in Chapter 12 of IPMB. It contains many medical images, but almost no other figures. For example, the discussion of four generations of CT scanners would benefit from a figure, like Fig. 16.25 in IPMB.

Ultrasound is covered in Chapter 13 of IPMB, and also in Radiopaedia. Topics include transducers, pulse-echo imaging, elastography, and Doppler imaging. Best of all, this valuable information is on the house.

One of the best parts of Radiopaedia is the quiz mode for patient cases. You get to be the doctor, analyzing different medical problems. These cases are too difficult for me to diagnose, but perhaps you can. I find Radiopaedia to be a helpful, no-cost supplement to our book: IPMB supplies the math and physics, while Radiopaedia analyzes the clinical applications.

Did I mention that Radiopaedia is free?

Enjoy.