Friday, April 15, 2016

The Eigenvalue Problem

An image of fiber tracts in the brain, obtained using Diffusion Tensor Imaging. From: Wikipedia.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I consider many mathematical topics. We analyze partial differential equations, Fourier transforms, vector calculus, probability, and special functions such as Bessel functions and the error function. One mathematical technique we never analyze is the central problem of linear algebra: the eigenvalue problem.

Calculating the eigenvalues and eigenvectors of a matrix has medical and biological applications. For example, in Chapter 18 of IPMB, Russ and I discuss diffusion tensor imaging. In this technique, magnetic resonance imaging is used to measure, in each voxel, the diffusion tensor, or matrix.
The diffusion tensor:

      ( Dxx  Dxy  Dxz )
  D = ( Dyx  Dyy  Dyz )
      ( Dzx  Dzy  Dzz )
This matrix is symmetric, so Dxy = Dyx, etc. It contains information about how easily spins (primarily protons in water) diffuse throughout the tissue, and about the anisotropy of the diffusion: how the rate of diffusion changes with direction. White matter in the brain is made up of bundles of nerve axons, and spins can diffuse down the long axis of an axon much more easily than in the direction perpendicular to it.

Suppose you measure the diffusion matrix to be
      ( 2  1  1 )
  D = ( 1  2  1 )
      ( 1  1  2 )
How do you get the fiber direction from this matrix? That is the eigenvalue and eigenvector problem. Stated mathematically, the fibers are in the direction of the eigenvector corresponding to the largest eigenvalue. In other words, you can determine a coordinate system in which the diffusion matrix becomes diagonal, and the direction corresponding to the largest of the diagonal elements of the matrix is the fiber direction.

The eigenvalue problem starts with the assumption that there are some vectors r = (x, y, z) that obey the equation Dr = Dr, where D in bold is the matrix (a tensor) and D in italics is one of the eigenvalues (a scalar). We can multiply the right side by the identity matrix (1’s along the diagonal, 0’s off the diagonal) and then move this term to the left side, and get the system of equations
(2 − D)x + y + z = 0
x + (2 − D)y + z = 0
x + y + (2 − D)z = 0
One obvious solution is (x, y, z) = (0, 0, 0), the trivial solution. There is a beautiful theorem from linear algebra, which I will not prove, stating that there is a nontrivial solution for (x, y, z) if and only if the determinant of the matrix is zero
| 2 − D    1       1   |
|   1    2 − D     1   |  =  0.
|   1      1     2 − D |
I am going to assume you know how to evaluate a determinant. From this determinant, you can obtain the equation

D³ − 6D² + 9D − 4 = 0.

This is a cubic equation for D, which is in general difficult to solve. However, you can show that this equation is equivalent to
(D − 4)(D − 1)² = 0.
Therefore, the eigenvalues of this diffusion matrix are 4, 1, and 1 (1 is a repeated eigenvalue). The largest eigenvalue is D = 4.

To find the eigenvector associated with the eigenvalue D = 4, we solve
−2x + y + z = 0
x − 2y + z = 0
x + y − 2z = 0
The solution is (1, 1, 1), which points in the direction of the fibers. If you do this calculation at every voxel, you generate a fiber map of the brain, leading to beautiful pictures such as the one at the top of this post.
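In practice you would let a computer do this work. Here is a minimal sketch in Python, using numpy's eigh routine for symmetric matrices, that reproduces the worked example above; it also verifies the trace property discussed below.

```python
# Verify the worked example: eigenvalues, fiber direction, and trace.
import numpy as np

D = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])        # the example diffusion tensor

# eigh is designed for symmetric matrices; it returns the eigenvalues
# in ascending order, with the eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eigh(D)

print(eigenvalues)                      # -> [1. 1. 4.]
fiber = eigenvectors[:, -1]             # eigenvector of the largest eigenvalue
print(fiber / fiber[0])                 # -> [1. 1. 1.], the fiber direction
print(np.trace(D), eigenvalues.sum())   # both equal 6: the trace is invariant
```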

Sometimes anisotropy can be a nuisance. Suppose you just want to determine the amount of diffusion in a tissue independent of direction. You can show (see Problem 49 of Chapter 18 in IPMB) that the trace of the diffusion matrix is independent of the coordinate system. The trace is the sum of the diagonal elements of the matrix. In our example, it is 2+2+2 = 6. In the coordinate system aligned with the fiber axis, the trace is just the sum of the eigenvalues, 4+1+1 = 6 (you have to count the repeated eigenvalue twice). The trace is the same.

Now you try. Here is a new homework problem for Section 13 in Chapter 18 of IPMB.
Problem 49 1/2. Suppose the diffusion tensor in one voxel is
A diffusion tensor to be used in a new homework problem for Intermediate Physics for Medicine and Biology.
a) Determine the fiber direction.
b) Show explicitly in this case that the trace is the same in the original matrix as in the matrix rotated so it is diagonal.
One word of warning. The examples in this blog post all happen to have simple integer eigenvalues. In general, that is not true and you need to use numerical methods to solve for the eigenvalues.

Have fun!

Friday, April 8, 2016

Darcy’s Law

Intermediate Physics for Medicine and Biology.
Table 4.3 of Intermediate Physics for Medicine and Biology contains five transport equations. Each has the form “flux density equals a coefficient times the negative of a gradient of some quantity.” The table includes the flux of particles with the coefficient being the diffusion constant, the flux of heat with the coefficient being the thermal conductivity, the flux of momentum with the coefficient being the viscosity, and the flux of charge with the coefficient being the electrical conductivity. Are there other examples of transport equations important in biology and medicine? Yes. For instance, consider Darcy’s law.

Darcy’s law governs the flow of fluid through a porous medium. It is used to model the movement of groundwater through sedimentary rock, but it also describes the flow of water through the extracellular space of tissue. Using a notation consistent with Table 4.3, we can write Darcy’s law as

jv = - K dp/dx

where jv is the flux density of fluid volume, p is the pressure, and K is the hydraulic conductivity. The units for jv are m3 m−2 s−1, or m s−1; therefore jv corresponds to the speed of flow. Pressure has units of pascals, so dp/dx is expressed in Pa m−1. Therefore, the units of hydraulic conductivity are m2 Pa−1 s−1. Hydraulic conductivity is analogous to electrical conductivity or thermal conductivity; it specifies how well a material permits the transport of a quantity (flow of water) caused by some driving force (pressure gradient).

Russ Hobbie and I don’t discuss Darcy’s law in IPMB, but we come close. In Chapter 5 we analyze the flow of water across a membrane, and define the relationship

jv = Lp Δp ,    (5.9)

where jv again is the speed of flow, Δp is the pressure difference across the membrane, and Lp is the hydraulic permeability. If the membrane has a thickness Δx, then we can multiply and divide by Δx and obtain jv = (Lp Δx) (Δp/Δx). The equation looks just like Darcy’s law (except for a minus sign), where the hydraulic conductivity is the hydraulic permeability times the membrane thickness:

K = Lp Δx.
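As a quick check of this correspondence, here is a minimal sketch in Python; the values of Lp, Δx, and Δp are assumptions invented for illustration, not numbers from IPMB.

```python
# Compare the membrane form (Eq. 5.9) with the Darcy form of the flow.
Lp = 1e-11     # hydraulic permeability, m Pa^-1 s^-1 (assumed value)
dx = 100e-9    # membrane thickness, m (assumed value)
dp = 1e3       # pressure difference across the membrane, Pa (assumed value)

K = Lp * dx                # hydraulic conductivity, m^2 Pa^-1 s^-1
jv_membrane = Lp * dp      # Eq. 5.9
jv_darcy = K * (dp / dx)   # Darcy's law, with gradient dp/dx (sign aside)

print(K, jv_membrane, jv_darcy)   # the two flow speeds agree by construction
```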

I first encountered Darcy’s law when reading my friend Peter Basser’s paper about “Interstitial Volume, Pressure, and Flow During Infusion into the Brain” (Microvascular Research, 44:143–165, 1992). He derived a model of swelling in the brain that occurs during infusion of a drug. When Basser combined Darcy’s law with the equations of elasticity, he derived a diffusion equation for volume change of the tissue caused by accumulation of interstitial fluid (swelling), in which the diffusion constant is approximately the hydraulic conductivity times the bulk modulus.

Darcy’s law plays a key role in governing fluid flow in many tissues. A nice summary can be found in “Interstitial Flow and Its Effects in Soft Tissues” by Melody Swartz and Mark Fleury (Annual Review of Biomedical Engineering, 9:229–256, 2007). Below is the abstract to their review.
Interstitial flow plays important roles in the morphogenesis, function, and pathogenesis of tissues. To investigate these roles and exploit them for tissue engineering or to overcome barriers to drug delivery, a comprehensive consideration of the interstitial space and how it controls and affects such processes is critical. Here we attempt to review the many physical and mathematical correlations that describe fluid and mass transport in the tissue interstitium; the factors that control and affect them; and the importance of interstitial transport on cell biology, tissue morphogenesis, and tissue engineering. Finally, we end with some discussion of interstitial transport issues in drug delivery, cell mechanobiology, and cell homing toward draining lymphatics.

Friday, April 1, 2016

Strat-O-Matic Baseball

My Die-Hard Cub Fan Club membership card.
Monday is opening day!

When I was young I was an avid baseball fan. I still enjoy the game, but now I haven’t time to follow it closely. My childhood team was the Chicago Cubs. I can still remember the lineup: shortstop Don Kessinger led off, second baseman Glenn Beckert hit next, left fielder Billy Williams batted third, and third baseman Ron Santo was cleanup. Ferguson Jenkins was the pitching ace, colorful Joe Pepitone—a former Yankee—arrived by trade to play first, Mr. Cub Ernie Banks was in the twilight of his career, and hot-tempered Leo Durocher was the manager. The Miracle Mets broke my heart in 1969, when the Cubs led their division into September only to collapse in the season's final weeks. The Cubs have not won the World Series since 1908, but I still love ’em. Maybe this year?

I wasn’t a good Little League player; I struck out a lot, and I was assigned to play right field, where I could do the least damage with my glove. Yet, I had fun. One summer when I was in junior high, because of the timing of the age cutoffs and my birthday, I was nearly the oldest player in my age group. That was my best summer, when I approached mediocrity. I enjoyed the sport so much that I volunteered to manage the high school team. For those not familiar with baseball, being the manager in high school is very different from managing a professional team. In high school, the manager washes the uniforms, keeps track of the equipment, collects player statistics, and—my favorite job—draws the foul lines on the field before each game.

Strat-O-Matic Baseball.
When I was growing up in Morrison, Illinois, my friend Ted Paul owned the game Strat-O-Matic Baseball. It was played with dice and player cards, allowing you to recreate baseball games from your armchair. Unfortunately, Strat-O-Matic Baseball was expensive. We were not poor, but the price was beyond what my parents typically spent on birthday or Christmas presents. Necessity is the mother of invention, so I reverse engineered the game, making my own cards and rules that mimicked Strat-O-Matic’s in some ways but in other ways were my own creation.

Homemade Strat-O-Matic baseball cards from the Oakland A’s, the dominant team of that era (circa 1973).
In order to make my version of Strat-O-Matic Baseball, I had to learn the basics of probability. I didn’t need advanced concepts, and you can find all the necessary probability theory in Chapter 3 of Intermediate Physics for Medicine and Biology. Two ideas are key. First, the probability that one or the other of two mutually exclusive events happens is found by adding their individual probabilities. For instance, the probability of rolling either a one, two, or three on a single die is equal to the probability of rolling a one plus the probability of rolling a two plus the probability of rolling a three. Second, the probability that two independent events both happen is found by multiplying their individual probabilities. For example, the probability of throwing a one on the first die and a three on the second is equal to the probability of throwing a one times the probability of throwing a three. This concept underlies the joint probability distribution described in Appendix M of IPMB. These two rules, plus some counting, are all the math required to recreate Strat-O-Matic Baseball. I also needed a source of baseball statistics, supplied by Street and Smith’s Baseball Yearbook, published each year around Valentine's Day and well within the family gift budget. In retrospect, making my own version of Strat-O-Matic Baseball was not difficult, but for a twelve-year-old kid I think I did a pretty good job.

Let me explain briefly how Strat-O-Matic Baseball works. The game was based on batters’ cards and pitchers’ cards. First you roll one die, and if you get a 1, 2, or 3 you use the batter’s card; a 4, 5, or 6 means you use the pitcher's card. Then you roll two dice which determine the outcome of the at-bat: out, walk, single, double, triple, or home run. The trick is to match the player’s statistics to the probability of a particular throw of the dice. The pitchers’ cards were hardest to create, because Street and Smith didn’t tabulate batting averages given up by pitchers, so I had to invent an algorithm based on wins, earned run average, and strikeouts. I remember spending many hours playing my homemade Strat-O-Matic baseball. In some ways it was pathetic: a child playing alone in his room with just his dice and cards. But in other ways it was romantic: thrilling late night ballgames with all the drama and excitement of sports, but performed just for me.
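For readers who want to see these mechanics in action, here is a minimal sketch in Python. The two cards are invented for illustration (my childhood cards, and the real Strat-O-Matic tables, were different); the point is where the two probability rules from above enter.

```python
import random

# Map each two-dice total (2 through 12) to an outcome. A real card is
# designed so that the number of ways to roll each total (out of 36)
# matches the player's statistics; these assignments are made up.
batter_card = {2: "home run", 3: "double", 4: "single", 5: "out",
               6: "out", 7: "out", 8: "out", 9: "single",
               10: "walk", 11: "triple", 12: "home run"}
pitcher_card = {2: "single", 3: "out", 4: "out", 5: "out",
                6: "out", 7: "out", 8: "walk", 9: "out",
                10: "out", 11: "single", 12: "single"}

def at_bat():
    # One die chooses the card. Mutually exclusive events add:
    # P(1, 2, or 3) = 1/6 + 1/6 + 1/6 = 1/2.
    card = batter_card if random.randint(1, 6) <= 3 else pitcher_card
    # Two dice choose the outcome. Independent events multiply:
    # each ordered pair of faces has probability (1/6)(1/6) = 1/36.
    return card[random.randint(1, 6) + random.randint(1, 6)]

outcomes = [at_bat() for _ in range(100000)]
for result in sorted(set(outcomes)):
    print(f"{result:8s} {outcomes.count(result) / len(outcomes):.3f}")
```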

Even now, when I teach probability I focus on those key concepts I used when creating my version of Strat-O-Matic Baseball. Sometimes you learn more when you play than when you work.

Friday, March 25, 2016

Basic Physics of Nuclear Medicine

I’m cheap and I’m proud of it; I love free stuff. Intermediate Physics for Medicine and Biology isn’t free. Russ Hobbie and I appreciate our readers’ willingness to spend their money to purchase our book. Thank you! But what if you want more? What if—heaven forbid—you find our book is not totally clear, complete, or comprehensive? In IPMB we cite many references at the end of each chapter, so you have many sources of additional information. But often these sources cost money or may be difficult to obtain. Is there anywhere you can go online for free to augment IPMB?

A screenshot of the wikibook Basic Physics of Nuclear Medicine.
One option is the wikibook Basic Physics of Nuclear Medicine. This book covers much of the same material as in the last half of IPMB. It analyzes nuclear medicine in depth (our Chapter 17), but it also covers the interaction of radiation with tissue (our Chapter 15), Fourier methods and tomography (our Chapters 11 and 12), detectors and x-ray imaging systems (our Chapter 16), ultrasound (our Chapter 13), and even a little magnetic resonance imaging (our Chapter 18).

Some of my favorite parts of the wikibook are not covered in IPMB.
What are the advantages of IPMB? For one thing, IPMB has a large collection of homework problems, more extensive than in Basic Physics of Nuclear Medicine. Also, I think our book has a better focus on using mathematical modeling to illustrate medical and biological physics concepts. Moreover, the entire first half of IPMB—about biomechanics, biothermodynamics, diffusion, bioelectricity, biomagnetism, and feedback—is absent from Basic Physics of Nuclear Medicine. Finally, and most importantly, Basic Physics of Nuclear Medicine doesn’t have a blog with weekly updates.

If you are looking for a free, easily accessible online textbook to use as a supplement (please, not a replacement!) for Intermediate Physics for Medicine and Biology, consider Basic Physics of Nuclear Medicine. It’s worth every penny.

Friday, March 18, 2016

Phineas Gage: Neuroscience’s Most Famous Patient

When Russ Hobbie and I discuss transcranial magnetic stimulation in Intermediate Physics for Medicine and Biology, we write that “because TMS is noninvasive and nearly painless, it can be used to study learning and plasticity (changes in brain organization over time).” When I worked with Mark Hallett and Leo Cohen at the National Institutes of Health, they were using TMS to study plasticity in patients who had undergone amputations or spinal cord injuries.

A photograph of Phineas Gage.
How much can the brain reorganize and rehabilitate after an injury? We gain insight into this question by examining the amazing case of Phineas Gage. Recently, science writer Sam Kean published the article “Phineas Gage, Neuroscience’s Most Famous Patient” in the online magazine Slate. Let me quote Kean’s opening lines.
On Sept. 13, 1848, at around 4:30 p.m., the time of day when the mind might start wandering, a railroad foreman named Phineas Gage filled a drill hole with gunpowder and turned his head to check on his men. It was the last normal moment of his life….

The Rutland and Burlington Railroad had hired Gage’s crew that fall to clear away some tough black rock near Cavendish, Vermont, and it considered Gage the best foreman around. Among other tasks, a foreman sprinkled gunpowder into blasting holes, and then tamped the powder down, gently, with an iron rod. This completed, an assistant poured in sand or clay, which got tamped down hard to confine the bang to a tiny space. Gage had specially commissioned his tamping iron from a blacksmith. Sleek like a javelin, it weighed 13¼ pounds and stretched 3 feet 7 inches long. (Gage stood 5-foot-6.) At its widest, the rod had a diameter of 1¼ inches, although the last foot—the part Gage held near his head when tamping—tapered to a point.

Gage’s crew members were loading some busted rock onto a cart, and they apparently distracted him. Accounts differ about what happened after Gage turned his head. One says Gage tried to tamp the gunpowder down with his head still turned, and scraped his iron against the side of the hole, creating a spark. Another says Gage’s assistant (perhaps also distracted) failed to pour the sand in, and when Gage turned back, he smashed the rod down hard, thinking he was packing inert material. Regardless, a spark shot out somewhere in the dark cavity, igniting the gunpowder, and the tamping iron rocketed upward.

The iron entered Gage’s head point-first, striking below the left cheekbone. It destroyed an upper molar, passed behind his left eye, and tore into the underbelly of his brain’s left frontal lobe. It then plowed through the top of his skull, exiting near the midline, just behind where his hairline started. After parabola-ing upward—one report claimed it whistled as it flew—the rod landed 25 yards away and stuck upright in the dirt, mumblety-peg-style. Witnesses described it as streaked with red and greasy to the touch, from fatty brain tissue.
Gage survived after his rod destroyed much of his frontal lobe. He eventually recovered much neural function, but his personality changed; Gage “was no longer Gage”. At least, so goes the traditional story as told in many neuroscience textbooks. Kean argues that these personality changes were not as dramatic as claimed, and were temporary. Years after the accident, Gage enjoyed fairly good health and lived a nearly normal life. His brain recovered. Kean writes
Modern neuroscientific knowledge makes the idea of Gage’s recovery all the more plausible. Neuroscientists once believed that brain lesions caused permanent deficits: Once lost, a faculty never returned. More and more, though, they recognize that the adult brain can relearn lost skills. This ability to change, called brain plasticity, remains somewhat mysterious, and it happens achingly slowly. But the bottom line is that the brain can recover lost functions in certain circumstances.
If transcranial magnetic stimulation had been developed in the first half of the nineteenth century (and why not? Faraday discovered electromagnetic induction 17 years before Gage’s accident), perhaps neuroscientists would have had the tool they needed to monitor and map Gage’s brain during his recovery. Magnetic stimulation—a classic application of physics to medicine—has taught us much about how the brain can change and heal. This knowledge might have implications for how we treat all sorts of brain injuries, from concussion to stroke to dementia to rods shot through our head. As Kean concludes, “If even Phineas Gage bounced back—that’s a powerful message of hope.”

Friday, March 11, 2016

Mass Attenuation Coefficient and Areal Density

I don’t like the mass attenuation coefficient; it bugs me and it has weird units. Yet researchers studying the attenuation of x-rays in materials usually quote the mass attenuation coefficient rather than the linear attenuation coefficient in their publications.

As x-rays pass through a material, their intensity falls off exponentially as exp(−μL), where L is the distance (m) and μ is the linear attenuation coefficient (m−1). But often researchers multiply and divide by the density ρ, so the exponential becomes exp(−(μ/ρ)(ρL)), where μ/ρ is the mass attenuation coefficient (m2/kg) and ρL is the areal density (kg/m2).

In Chapter 15 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I explain some of the advantages of using μ/ρ.
The mass attenuation coefficient has the advantage of being independent of the density of the target material, which is particularly useful if the target is a gas. It has an additional advantage if Compton scattering is the dominant interaction. If σtot = ZσC, then μatten/ρ = σCNAZ/A [Z is the atomic number, A the mass number, NA is Avogadro’s number, and σC is the Compton cross section]. Since Z/A is nearly 1/2 for all elements except hydrogen, this quantity changes very little throughout the periodic table. This constancy is not true for the photoelectric effect or pair production. Figure 15.10 plots the mass attenuation coefficient vs energy for three substances spanning the periodic table. It is nearly independent of Z around 1 MeV where Compton scattering is dominant. The K and L absorption edges can be seen for lead; for the lighter elements they are below 10 keV. Figure 15.11 shows the contributions to μatten/ρ for air from the photoelectric effect, incoherent scattering, and pair production. Tables of mass attenuation coefficients are provided by the National Institute of Standards and Technology (NIST) at http://www.nist.gov/pml/data/xcom/index.cfm.
Let me offer an example where it makes sense to consider the mass attenuation coefficient.

Imagine you have a large box of area S. You measure a mass M of the fluid pentane and pour it into the box. Then you place a source of x-rays under the box, directed upwards. You measure the intensity of the radiation incident on the underside of the box to be Io, and then move your detector to above the box and measure the intensity of radiation that passes through the pentane to be I. Finally, use your ruler to measure the thickness of the pentane layer, L.

You now have enough data to determine both the linear attenuation coefficient and the mass attenuation coefficient of pentane. For the linear attenuation coefficient, use the relationship I = Io exp(−μL) and solve for μ = ln(Io/I)/L. You can also calculate the density ρ = M/(SL). If you want the mass attenuation coefficient, you can now easily determine it: S ln(Io/I)/M. You can also calculate the areal density: M/S.
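Here is that bookkeeping as a minimal sketch in Python; all the measured values are invented for illustration.

```python
import math

# Hypothetical measurements from the pentane-in-a-box experiment.
S = 0.10               # area of the box, m^2 (assumed)
M = 0.50               # mass of pentane, kg (assumed)
L = 0.008              # thickness of the fluid layer, m (assumed)
I0, I = 1000.0, 700.0  # intensities below and above the box (assumed)

mu = math.log(I0 / I) / L    # linear attenuation coefficient, 1/m
rho = M / (S * L)            # density, kg/m^3
mu_over_rho = mu / rho       # mass attenuation coefficient, m^2/kg
areal_density = M / S        # areal density, kg/m^2

# Note that mu_over_rho equals S*ln(I0/I)/M and areal_density equals M/S:
# neither depends on the thickness L.
print(mu, rho, mu_over_rho, areal_density)
```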

Next you perform the same experiment on neopentane. You use the same box with area S and measure out the same mass M of fluid. You find that Io/I is unchanged, but L is about 6% larger. You conclude the linear attenuation coefficient and the density both decrease by 6%, but the mass attenuation coefficient and the areal density are unchanged.

Why is Io/I the same for both fluids? Pentane and neopentane are isomers. They have exactly the same chemical formula, C5H12, but they have different structures. Pentane is an unbranched hydrocarbon and neopentane has a central carbon bonded to four other carbon atoms. Because the mass M of both substances is the same, the number of atoms is the same in each case. The x-ray attenuation depends only on the number and type of atoms, not on how those atoms are arranged (the density). This is one advantage of the mass attenuation coefficient: it depends only on the atoms and not their arrangement.

You can calculate the mass attenuation coefficient and the areal density without knowing L. If for some reason L were difficult to measure, you could still determine the mass attenuation coefficient even if you could not calculate the linear attenuation coefficient.

In a gas, the number of molecules is fixed but the density depends on the pressure and temperature. The mass attenuation coefficient does not change with the pressure and temperature. Again, it just depends on the atoms and not their distribution.

Water has a density of 1 g/cm3. If you express the mass attenuation coefficient in cm2/g and the linear attenuation coefficient in cm−1, then the mass attenuation coefficient and the linear attenuation coefficient have the same numerical value. Most tissue has a density close to that of water, so this trick works well for tissue too.

Given these advantages, have I started liking the mass attenuation coefficient? No, I still think it’s weird. But I can tolerate it a little better now.

Friday, March 4, 2016

Welcome Home Scott Kelly

Scott Kelly, when he returned to earth after a year on the space station.
This week astronaut Scott Kelly returned to Earth after nearly a year on the International Space Station. One goal of his mission was to determine how astronauts would function during long trips in space. I suspect we will learn a lot from Kelly about life in a weightless environment. But one of the biggest risks during a mission to Mars would be radiation exposure, and we may not learn much about that from trips to the space station.

In space, the major source of radiation is cosmic rays, consisting mostly of high energy (GeV) protons. Most of these particles are absorbed by our atmosphere and never reach Earth, or are deflected by Earth’s magnetic field. The space station orbits above the atmosphere but within range of the geomagnetic field, so Kelly was partially shielded from cosmic rays. He probably experienced a dose of about 150 mSv. This is much larger than the annual background dose on the surface of the earth. According to Chapter 16 of Intermediate Physics for Medicine and Biology, we all are exposed to about 3 mSv per year.

Scott and Mark Kelly.
Is 150 mSv in one year dangerous? This dose is below the threshold for acute radiation sickness. It would, however, increase your chances of developing cancer. A rule of thumb is that the excess relative risk of cancer is about 5% per Sv. This does not mean Kelly has a 0.75% chance of getting cancer (5%/Sv times 0.15 Sv). Instead, it means that Scott Kelly has a 0.75% higher chance of getting cancer than his brother Mark Kelly, who remained on Earth. This is a significant increase in risk, but may be acceptable if your goal in life is to be an astronaut. The Kelly twins are both 52 years old, and the excess relative risk goes down with age, so the extra risk of Scott Kelly contracting cancer is probably less than 0.5%.

NASA’s goal is to send astronauts to Mars. Such a mission would require venturing beyond the range of Earth’s geomagnetic field, increasing the exposure to cosmic rays. Data obtained by the Mars rover Curiosity indicate that a one-year interplanetary trip would result in an exposure of 660 mSv. This would be four times Kelly's exposure in the space station. 660 mSv would be unlikely to cause serious acute radiation sickness, but would increase the cancer risk. NASA would have to either shield the astronauts from cosmic rays (not easy given their high energy) or accept the increased risk. I’m guessing they will accept the risk.
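Here is the risk arithmetic from this post in a few lines of Python, using the 5% per sievert rule of thumb quoted above (before any adjustment for age at exposure).

```python
risk_per_sievert = 0.05   # excess relative risk of cancer, ~5% per Sv

dose_iss = 0.150    # Sv, one year on the space station
dose_mars = 0.660   # Sv, one-year interplanetary trip (Curiosity data)

print(f"Space station: {100 * risk_per_sievert * dose_iss:.2f}% excess risk")
print(f"Mars trip:     {100 * risk_per_sievert * dose_mars:.1f}% excess risk")
# -> 0.75% and 3.3%
```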

Friday, February 26, 2016

Top 10 Isotopes

Everyone loves “top ten” lists. So, I have prepared a list of the top ten isotopes mentioned in Intermediate Physics for Medicine and Biology. These isotopes range from light to heavy, from abundant to rare, and from mundane to exotic. I have no statistics to back up my choices; they are just my own view about which isotopes play a key role in biology and medicine. Feel free to sound off in the comments about your favorite isotope that I missed. Let’s count them down to number one.
  10. 1H (hydrogen-1). This simplest of all isotopes has a nucleus that consists of only a single proton. Almost all magnetic resonance imaging is based on imaging 1H (see Chapter 18 of IPMB about MRI). Its importance arises from its large abundance and its nuclear dipole moment.
  9. 222Rn (radon-222). While radon doesn’t have a large role in nuclear medicine, it is responsible for a large fraction of our annual background radiation dose (see Chapter 16 about the medical uses of x-rays). 222Rn is created in a decay chain starting with the long-lived isotope 238U. Because radon is a noble gas, it can diffuse out of uranium-containing rocks and enter the air, where we breathe it in, exposing our lungs to its alpha particle decay.
  8. 131I (iodine-131). 131I is used in the treatment of thyroid cancer. Iodine is selectively taken up by the thyroid, where it undergoes beta decay, providing a significant dose to the surrounding tissue. A tenth of its radiation arises from gamma decay, so we can use the isotope for both imaging and therapy (see Chapter 17 about nuclear medicine).
  7. 192Ir (iridium-192). This gamma emitter is often used in stents placed in blocked arteries. It is also an important source for brachytherapy (Chapter 17), in which a radioactive isotope is implanted in a tumor.
  6. 129Xe (xenon-129). This isotope is used in magnetic resonance images of the lung. Although the isotope is not abundant, its polarization can be increased dramatically using a technique called hyperpolarization (Chapter 18).
  5. 10B (boron-10). This isotope of boron plays the central role in boron neutron capture therapy (Chapter 16), in which boron-containing drugs accumulate in a tumor. When irradiated by neutrons, the boron decays into an alpha particle (4He) and 7Li, which both have high energy and are highly ionizing.
  4. 60Co (cobalt-60). For many years cobalt-60 was used as a source of radiation during cancer therapy (Chapter 16). The gamma knife uses 60Co sources to produce its 1.25 MeV radiation. The isotope is used less nowadays, replaced by linear accelerators.
  3. 125I (iodine-125). Iodine is the only element with two isotopes in this list. Unlike 131I, which emits penetrating beta and gamma rays, 125I deposits much of its energy in short-range Auger electrons (see Chapter 15 on the interaction of x-rays with matter). They deliver a large, concentrated dose when 125I is used for radioimmunotherapy.
  2. 18F (fluorine-18). A classic positron emitter, 18F is widely used in positron emission tomography (Chapter 17). Often it is attached to a sugar molecule as 18F-fluorodeoxyglucose, which is taken up and then trapped inside cells, providing a PET marker for high metabolic activity.
  1. 99mTc (technetium-99m). The king of all nuclear medicine isotopes, 99mTc is used in diverse imaging applications (Chapter 17). It emits a 141-keV gamma ray that is ideal for most detectors. The isotope is often bound to other molecules to produce specific radiopharmaceuticals, such as 99mTc-sestamibi or 99mTc-tetrofosmin. If you are only familiar with one isotope used in nuclear medicine, let it be 99mTc.

Friday, February 19, 2016

The Sievert Integral

In Section 17.11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss brachytherapy.
Brachytherapy (brachy means short) involves implanting directly in a tumor sources for which the radiation falls off rapidly with distance because of attenuation, short range, or 1/r2. Originally the radioactive sources (seeds) were implanted surgically, resulting in high doses to the operating room personnel. In the afterloading technique, developed in the 1960s, hollow catheters are implanted surgically and the sources inserted after the surgery. Remote afterloading, developed in the 1980s, places the sources by remote control, so that only the patient receives a radiation dose.
A photograph of brachytherapy sources, used for radiation treatment of cancer.
Often brachytherapy is performed by implanting a source of radiation formed as a line. Below is a new homework problem for calculating the dose of radiation assuming a small line source. You will do the calculation with and without a shield surrounding the source.

Section 17.11

Problem 56 ½. Brachytherapy is often performed using a radioactive source shaped as a line of length L having a total cumulated activity Ã and a mean energy emitted per unit cumulated activity Δ. Assume Eq. 17.50 describes the specific absorbed fraction Φ in the surrounding tissue having an energy absorption coefficient μen and density ρ.

(a) Calculate the dose D a distance h away from the center of the line source (assume h is much less than both L and 1/μen). Let x indicate the position along the source, and set x = 0 at the center, so r2 = x2 + h2. The total dose is an integral over the length of the source, which has a cumulated activity per unit length Ã/L. Evaluate this integral using the substitution x = h tanθ. In the limits of integration, ignore end effects by letting L extend to infinity. You may need the trigonometric relationships d(tanθ)/dθ = sec2θ and 1 + tan2θ = sec2θ.

(b) Repeat the calculation in part (a), except add a coaxial cylindrical shield of thickness b surrounding the line source, made of a material having an absorption coefficient μatten. The dose from a small section of the source is now attenuated by an additional factor of exp(−μattenbsecθ). Justify the factor of secθ in the exponential. Show that the dose can now be written as the result from part (a) times 2/π times a definite integral, called the Sievert integral. Derive an expression for the Sievert integral.

(c) Make a drawing that indicates the physical meaning of h, b, x, r, L, and θ. Explain why the dose is inversely proportional to L.
The Sievert integral is analyzed and tabulated in the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables by Abramowitz and Stegun. It can be generalized to include end effects. The integral is named after Rolf Sievert, the Swedish medical physicist who is honored by the SI unit for equivalent dose: the sievert.
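The Sievert integral is also easy to evaluate numerically. Here is a minimal sketch in Python using scipy; the shield thickness and attenuation coefficient are assumed values chosen only for illustration.

```python
import numpy as np
from scipy.integrate import quad

def sievert(x, theta_max=np.pi / 2):
    """Sievert integral: the integral of exp(-x sec(theta)) dtheta
    from 0 to theta_max."""
    integrand = lambda theta: np.exp(-x / np.cos(theta))
    value, _ = quad(integrand, 0.0, theta_max)
    return value

mu_atten = 50.0   # attenuation coefficient of the shield, 1/m (assumed)
b = 0.01          # shield thickness, m (assumed)

# Factor from part (b) that multiplies the unshielded dose of part (a).
factor = (2.0 / np.pi) * sievert(mu_atten * b)
print(factor)     # approaches 1 as mu_atten*b -> 0, a useful sanity check
```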

Friday, February 12, 2016

Perspectives on Working at the Physics-Biology Interface

The cover of the journal Physical Biology, containing the special issue Perspectives on Working at the Physics-Biology Interface.
A few weeks ago, I wrote that “I’ve always been fascinated by physicists who move into biology, and I collect stories about scientists who have made this transition successfully.” Imagine my delight when I discovered a special issue of the journal Physical Biology about “Perspectives on Working at the Physics-Biology Interface.” Howard Berg and Krastan Blagoev collected many stories about physicists working in biology. In their introductory editorial, they write
Physics is analytical, heavily dependent on mathematical equations; biology is more descriptive, heavily dependent on historical facts. There is a cultural gap. In physics, a theorist who can interpret others' experimental results is revered. In biology, such a person is suspect: ideas are thought cheap, facts dear.

But cultures can change. As problems in physics have become more difficult and more expensive to solve, or have been solved and thus are less interesting, physicists have begun to explore more complex areas of endeavor, including biology. Biologists, on the other hand, have begun to appreciate the benefits of thinking more quantitatively about their data. We thought it would be of interest to hear from physicists who have negotiated this cultural gap. What did they find challenging about biology, and how did they manage to begin work in such a different field? What advice might they have for younger practitioners of the art? One of us (HCB) moved long ago from work on hydrogen masers to studies of the motile behavior of bacteria. His trajectory is given in an interview published in Current Biology [1].

Some of our contributors have been involved with biophysics since their PhD, several were trained in condensed-matter theory, and others in nuclear or high-energy particle physics. Their interests range from the structure of proteins, RNA, or natural products, to cognitive or social abilities of bacteria, to emergent properties of complex or active media, or to the behavior of immune systems or neural networks. They all have interesting points of view, some subdued, others outspoken. We hope you enjoy the mix. Our hope is that with this issue we are able to capture the situation at the beginning of the 21st Century and to follow with another issue of this kind in ten years time.
Below I list all the papers in this special issue, along with their abstracts. I hope readers of Intermediate Physics for Medicine and Biology will find them as inspiring as I did.

The emergence of a new kind of biology by Harold J Morowitz

“It is happily no longer axiomatic that a biophysicist is a physiologist who can fix his own amplifier. Fortunately, physicists are still drifting into biology and bringing new ideas. Please dear colleagues, do take the time to learn biochemistry.” Harold Morowitz provides a personal perspective on working at the interface between the physical and biological sciences.

Two cultures? Experiences at the physics-biology interface by John J Hopfield

“I didn’t really think of this as moving into biology, but rather as exploring another venue in which to do physics.” John Hopfield provides a personal perspective on working on the border between physical and biological sciences.

A Perspective: Robert B Laughlin by Robert B Laughlin

Despite their cultural differences, physics and biology are destined to interact with each other more in the future. The reason is that modern physics is fundamentally about codification of emergent law, and life is the greatest of all emergent phenomena.

Ask not what physics can do for biology—ask what biology can do for physics by Hans Frauenfelder

Stan Ulam, the famous mathematician, said once to Hans Frauenfelder: “Ask not what Physics can do for biology, ask what biology can do for physics.” The interaction between biologists and physicists is a two-way street. Biology reveals the secrets of complex systems, physics provides the physical tools and the theoretical concepts to understand the complexity. The perspective gives a personal view of the path to some of the physical concepts that are relevant for biology and physics (Frauenfelder et al 1999 Rev. Mod. Phys. 71 S419–S442). Schrödinger's book (Schrödinger 1944 What is Life? (Cambridge: Cambridge University Press)), loved by physicists and hated by eminent biologists (Dronamraju 1999 Genetics 153 1071–6), still shows how a great physicist looked at biology well before the first protein structure was known.

Universal relations in the self-assembly of proteins and RNA by D Thirumalai

Concepts rooted in physics are becoming increasingly important in biology as we transition to an era in which quantitative descriptions of all processes from molecular to cellular level are needed. In this perspective I discuss two unexpected findings of universal behavior, uncommon in biology, in the self-assembly of proteins and RNA. These findings, which are surprising, reveal that physics ideas applied to biological problems, ranging from folding to gene expression to cellular movement and communication between cells, might lead to discovery of universal principles operating in adoptable living systems.

Physics transforming the life sciences by José N Onuchic

Biological physics is clearly becoming one of the leading sciences of the 21st century. This field involves the cross-fertilization of ideas and methods from biology and biochemistry on the one hand and the physics of complex and far from equilibrium systems on the other. Here I want to discuss how biological physics is a new area of physics and not simply applications of known physics to biological problems. I will focus in particular on the new advances in theoretical physics that are already flourishing today. They will become central pieces in the creation of this new frontier of science.

Research at the interface of physics and biology: bridging the two fields by Kamal Shukla

I firmly believe that interaction between physics and biology is not only natural, but inevitable. Kamal Shukla provides a personal perspective on working at the interface between the physical and biological sciences.

Let’s not forget plants by Athene Donald

“Many physicists see the interface with biology as an exciting place to be.” Athene Donald provides a personal perspective on working at the interface between the physical and biological sciences.

My encounters with bacteria—learning about communication, cooperation and choice by Eshel Ben-Jacob

My journey into the physics of living systems began with the most fundamental organisms on Earth, bacteria, that three decades ago were perceived as solitary, primitive creatures of limited capabilities. A decade later this notion had faded away and bacteria came to be recognized as the smart beasts they are, engaging in intricate social life through a sophisticated chemical language. Acting jointly, these tiny organisms can sense the environment, process information, solve problems and make decisions so as to thrive in harsh environments. The bacterial power of cooperation manifests in their ability to develop large colonies of astonishing complexity. The number of bacteria in a colony can amount to many billions, yet they exchange 'chemical tweets' that reach each and every one of them so they all know what they're all doing, each cell being both actor and spectator in the bacterial Game of Life. I share my encounters with bacteria, what I learned about the secrets of their social life and wisdom of the crowd, and why and how, starting as a theoretical physicist, I found myself studying social intelligence of bacteria. The story ends with a bacteria guide to cyber-war on cancer.

Working together at the interface of physics and biology by Bonnie L Bassler and Ned S Wingreen

Good communication, whether it is between quorum-sensing bacteria or the different scientists studying those critters, is the key to a successful interdisciplinary collaboration, Bonnie Bassler and Ned Wingreen provide a personal perspective on working at the interface between the physical and biological sciences.

Learning physics of living systems from Dictyostelium by Herbert Levine

Unlike a new generation of scientists that are being trained directly to work on the physics of living systems, most of us more senior members of the community had to find our way from other research areas. We all have our own stories as to how we made this transition. Here, I describe how a chance encounter with the eukaryotic microorganism Dictyostelium discoideum led to a decades-long research project and taught me valuable lessons about how physics and biology can be mutually supportive disciplines.

Letting the cat out of the bag: a personal journey in Biophysics by Carlos J Bustamante

When the author arrived in Berkeley, in the mid 1970s, to study Biophysics he soon felt as if he was engaging himself in a somewhat marginal activity. Biology was then entering another of its cyclical periods of annotation that was to culminate with the human genome project. Two decades later, however, at the end of this process, it had become clear that two main tasks were acquiring a central importance in biological research: a renewed push for a quantitative, precise description of biological systems at the molecular level, and efforts towards an integrated understanding of the operation, control, and coordination of cellular processes. Today, these have become two of the most fertile research areas in Biophysics.

A theoretical physicist’s journey into biology: from quarks and strings to cells and whales by Geoffrey B West

Biology will almost certainly be the predominant science of the twenty-first century but, for it to become successfully so, it will need to embrace some of the quantitative, analytic, predictive culture that has made physics so successful. This includes the search for underlying principles, systemic thinking at all scales, the development of coarse-grained models, and closer ongoing collaboration between theorists and experimentalists. This article presents a personal, slightly provocative, perspective of a theoretical physicist working in close collaboration with biologists at the interface between the physical and biological sciences.

Understanding immunology: fun at an intersection of the physical, life, and clinical sciences by Arup K Chakraborty

Understanding how the immune system works is a grand challenge in science with myriad direct implications for improving human health. The immune system protects us from infectious pathogens and cancer, and maintains a harmonious steady state with essential microbiota in our gut. Vaccination, the medical procedure that has saved more lives than any other, involves manipulating the immune system. Unfortunately, the immune system can also go awry to cause autoimmune diseases. Immune responses are the product of stochastic collective dynamic processes involving many interacting components. These processes span multiple scales of length and time. Thus, statistical mechanics has much to contribute to immunology, and the oeuvre of biological physics will be further enriched if the number of physical scientists interested in immunology continues to increase. I describe how I got interested in immunology and provide a glimpse of my experiences working on immunology using approaches from statistical mechanics and collaborating closely with immunologists.

Rejoice in the hubris: useful things biologists could do for physicists by Robert H Austin

Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.
One of my favorite quotes is from Morowitz’s paper: “Like many physicists, Gamov was impatient with biochemical nomenclature and for adenine, thymine, guanine, and cytosine he substituted hearts, spades, clubs, and diamonds.” Many of the papers reinforce the need for tight collaborations with biologists, and the need to learn some biology. I agree with that view, but it was nevertheless a guilty delight to read Robert Austin’s article, in which the old physics hubris takes center stage. Read it, but don’t tell anyone that you did.

Friday, February 5, 2016

The Rest of the Story

Alan was born 102 years ago today in Banbury, England. He was descended from a long line of Quakers. Quakers are often pacifists, so Alan’s dad George didn’t fight in World War I. Instead, he took part in a relief effort in the Middle East. But war is dangerous even if you are not in the line of fire, and George died of dysentery in Baghdad when Alan was only four.

Alan’s mom was left to raise him and his two brothers alone. She encouraged Alan’s interest in science, and so did his eccentric Aunt Katie, who took him bird watching. When he was 15, Alan was hired by an ornithologist to survey rookeries and heronries. He spent hours searching for rare birds in salt marshes. All this kindled his passion for learning.

Based on his strong academic record, Alan won a scholarship to study botany, zoology, and chemistry at Trinity College, part of the University of Cambridge. One of Cambridge’s distinguished zoologists gave Alan some good advice: study as much physics and mathematics as you can! So he did. He also did what all undergraduates should do: research. He was good at it; so good that he was awarded a Rockefeller Fellowship to go to New York for a year. He kept at his research, and traveled around to other parts of the United States, such as Massachusetts and Saint Louis, to learn more.

When he got back to Cambridge, Alan’s knowledge of physics allowed him to build his own equipment, enabling him to move his research in exciting directions. He and his collaborators began to get dramatic results. Just when he was on the verge of making decisive discoveries, Hitler marched into Poland and the world was at war again.

Page 2

Alan suspended his own research and dedicated his talents to defeating the Germans. The Battle of Britain was won, in part, by the development of radar. Alan worked on a special type of radar that was installed in airplanes and used by RAF fighter pilots to locate and intercept Luftwaffe bombers. Alan and a small group of scientists toiled frantically, working seven days a week. They risked their lives on test flights in planes fitted with the new radar. For six years, during what should have been a young scientist’s most productive period, Alan set aside his own interests to help the Allies win the war.

Once World War II ended, Alan returned to Cambridge. After all this time, had science passed him by? No! He took up his research where he had left off, and started making groundbreaking discoveries in electrophysiology. With his coworkers, Alan figured out how nerves send signals down their axons, first passing sodium ions through the cell membrane and then passing potassium ions.

In 1963, Alan Hodgkin received the Nobel Prize in Physiology or Medicine for discovering the ionic mechanism of nerve excitation.

And now you know THE REST OF THE STORY. Good day!

---------------------------------------------------------------------------------

This blog post was written in the style of Paul Harvey’s wonderful “The Rest of the Story” radio program. The content is based on Hodgkin’s autobiography Chance and Design: Reminiscences of Science in Peace and War. You can read about Hodgkin's work on electrophysiology—including Hodgkin and Huxley’s famous mathematical model of the nerve action potential—in Chapter 6 of Intermediate Physics for Medicine and Biology.

Happy birthday, Alan Hodgkin!

Friday, January 29, 2016

The Number and Distribution of Capillaries in Muscles with Calculations of the Oxygen Pressure Head Necessary for Supplying the Tissue

Oxygen diffuses from capillaries into tissue where it is used for metabolism. Russ Hobbie and I discuss diffusion in Chapter 4 of Intermediate Physics for Medicine and Biology. Below is a new homework problem on this topic.
Section 4.11

Problem 37 ½. Consider a cylindrical capillary of radius a, containing blood having an oxygen concentration Co (molecules/m3). The capillary is surrounded by a cylinder of tissue of radius b that has an oxygen concentration C(r), consumes oxygen at a rate per unit volume Q (molecules/(m3 s)), and has a diffusion constant D (m2/s). At r = a, C = Co and at r = b, dC/dr = 0. Within the tissue, C(r) obeys the steady-state diffusion equation
D (1/r) d/dr (r dC/dr) = Q.
(a) Calculate C(r). Hint: guess a solution of the form C(r) = A + B r2 + E ln(r), and determine values for the constants A, B, and E.
(b) Plot C(r) versus r assuming b = 10a and Qb2/(CoD) = 1.

(c) Determine the minimum value of Co as a function of a, b, D, and Q, assuming the oxygen concentration is nowhere negative.
(d) Describe what assumptions underlie this model.
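For readers who want to check their answer to parts (a) and (b), here is a minimal sketch in Python. The solution written in the code can be verified by substituting it into the differential equation and both boundary conditions; the plotting choices are mine, not part of the problem statement.

```python
import numpy as np
import matplotlib.pyplot as plt

# Dimensionless form: s = r/a, beta = b/a, concentration in units of Co,
# with Q*b**2/(Co*D) = 1 as suggested in part (b).
beta = 10.0
Q_term = 1.0   # the dimensionless group Q*b**2/(Co*D)

s = np.linspace(1.0, beta, 400)
# C(r) = Co + (Q/4D)(r^2 - a^2) - (Q b^2/2D) ln(r/a), which in the
# dimensionless variables becomes:
C = 1.0 + Q_term * ((s**2 - 1.0) / (4.0 * beta**2) - 0.5 * np.log(s))

plt.plot(s, C)
plt.xlabel("r/a")
plt.ylabel("C/Co")
plt.title("Oxygen concentration in the Krogh cylinder model")
plt.show()
```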
This problem plays an important role in the history of physiology. August Krogh used the model to infer that when Q increased during exercise, b must decrease (by additional vessels opening that were closed when the muscle was at rest) in order to supply the tissue with sufficient oxygen. For “his discovery of the capillary motor regulating mechanism,” he was awarded the Nobel Prize. Krogh’s model also represents an early contribution of mathematical modeling to medicine and biology. He presented his model in the paper:
Krogh, A. (1919) The number and distribution of capillaries in muscles with calculations of the oxygen pressure head necessary for supplying the tissue. J. Physiol., 52: 409–415.
He acknowledges mathematician K. Erlang for deriving the mathematical formula for C(r).

Asimov’s Biographical Encyclopedia of Science and Technology, by Isaac Asimov.
Isaac Asimov includes an entry for Krogh in Asimov’s Biographical Encyclopedia of Science and Technology (Second Revised Edition).
KROGH, Schack August Steenberg (krawg)
Danish physiologist
Born: Grena, Jutland, November 15, 1874
Died: Copenhagen, September 13, 1949

Krogh, the son of a brewer, was educated at the University of Copenhagen, where he intended to study medicine but shifted his interest to physiology. He obtained his master’s degree in 1899.

He was particularly involved in respiration, following the path of oxygen, nitrogen, and carbon dioxide in and out of the body. In 1908 he gained a professorial position at the University of Copenhagen and there his studies of respiration led him to suggest that the capillaries (the tiniest blood vessels) of the muscles were open during muscular work and partially closed during rest. He went on to demonstrate this and to show the importance of such capillary control to the economy of the body.

For this work, he was awarded the Nobel Prize in Physiology and Medicine in 1920. He went on thereafter to show that this capillary control was brought about by the action of both muscles and hormones.

After Denmark was occupied by Nazi Germany in 1940, Krogh was forced to go underground and then to escape to Sweden. He remained there till the end of the war, then returned to liberated Denmark.

Friday, January 22, 2016

A Brief History of Human Functional Brain Mapping

In Chapter 18 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe functional magnetic resonance imaging.
The term functional magnetic resonance imaging (fMRI) usually refers to a technique developed in the 1990s that allows one to study structure and function simultaneously. The basis for fMRI is inhomogeneities in the magnetic field caused by the differences in the magnetic properties of oxygenated and deoxygenated hemoglobin. No external contrast agent is required. Oxygenated hemoglobin is less paramagnetic than deoxyhemoglobin. If we make images before and after a change in the blood flow to a small region of tissue (perhaps caused by a change in its metabolic activity), the difference between the two images is due mainly to changes in the blood oxygenation. One usually sees an increase in blood flow to a region of the brain when that region is active. This BOLD contrast in the two images provides information about the metabolic state of the tissue, and therefore about the tissue function.
Brain Mapping: The Systems.
The amazing story of how, 25 years ago, two competing teams developed fMRI simultaneously is told by Marcus Raichle in his chapter “A Brief History of Human Functional Brain Mapping,” published in the book Brain Mapping: The Systems. Below I provide excerpts.
A race was on to produce the first human functional images with MRI even though the participants were unaware of the activities of each other! Who were the participants? [Seiji] Ogawa and his colleagues working with Kamil Ugurbil and friends at the University of Minnesota and a group at the Massachusetts General Hospital led by Ken Kwong...

Ugurbil turned to a new postdoctoral fellow in the laboratory, Ravi Menon, to help in the effort to obtain the first functional MRI BOLD images in humans...He was joined by Ogawa and [David] Tank from Bell Labs along with members of the Ugurbil lab and a pair of Grass Goggles for visual stimulation!….It was early summer of 1991 that believable results were finally obtained. This was obviously too late to submit an abstract to the upcoming Society of Magnetic Resonance Conference to be held in San Francisco in August. Members of the laboratory, nevertheless, left for the meeting with slides in their pockets hopeful that they would have a chance to show some of their new results.

Meanwhile, a very parallel but completely independent set of events was unfolding in Boston. Ken Kwong, a member of the group at the Massachusetts General Hospital, was anxious to develop a method for measuring blood flow with MRI….Kwong saw a poster by Bob Turner, another MR physicist working at the NIH, which was of related interest. Turner had been studying hypoxia/ischemia in cats produced by brief periods of ventilatory arrest….They choose a visual activation paradigm. A pair of well-known Grass Goggles resided in the lab to support the function activation work using contrast agents….

Buoyed by the results obtained with the goggles and BOLD imaging, the MGH group rushed to submit a “Works in Progress” abstract to the Society of Magnetic Resonance Conference… Much to the MGH group’s dismay and to this day unexplained, this particular abstract failed to reach those putting the program together…. Recognizing by this time the significance of their results, they persuaded Tom Brady to include their results in his plenary lecture. The group from Minneapolis had no such opportunity! Not only did the scientific world get its first glimpse of fMRI, but the two groups working on the concept also realized for the first time who the competition was!

By the early fall of 1991 both the Minneapolis and the Boston groups had publishable results. With great anticipation, papers were submitted to Nature (Minneapolis) and Science (Boston) and summarily rejected. The basic judgment of both journals was that they contained nothing new! It is fitting that the work of the two groups appeared together in the Proceedings of the National Academy of Science (Kwong et al., 1992; Ogawa et al., 1992). A new and very important chapter on functional brain imaging had begun.
Functional MRI has since been used for many studies of how the brain works. I consider it one of the best examples of physics applied to medicine in the last 25 years. Raichle not only tells the story of fMRI’s development well (but perhaps with too many exclamation points for my taste), but also reviews the long history of mapping brain function, dating back into the 19th century. The chapter is well worth a read. Enjoy!