Friday, March 11, 2016

Mass Attenuation Coefficient and Areal Density

I don’t like the mass attenuation coefficient; it bugs me and it has weird units. Yet researchers studying the attenuation of x-rays in materials usually quote the mass attenuation coefficient rather than the linear attenuation coefficient in their publications.

As x-rays pass through a material, their intensity falls off exponentially as exp(−μL), where L is the distance traveled (m) and μ is the linear attenuation coefficient (m−1). But researchers often multiply and divide the exponent by the density ρ, so it becomes exp(−(μ/ρ)(ρL)), where μ/ρ is the mass attenuation coefficient (m2/kg) and ρL is the areal density (kg/m2).
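A quick numerical check shows that the two ways of writing the exponent are identical. The values of μ, ρ, and L below are made up for illustration (they are not from any table):

```python
import math

# Illustrative (made-up) values, not from any reference table:
mu = 20.0        # linear attenuation coefficient, m^-1
rho = 626.0      # density, kg/m^3 (roughly that of liquid pentane)
L = 0.05         # path length, m

mu_over_rho = mu / rho    # mass attenuation coefficient, m^2/kg
areal_density = rho * L   # areal density, kg/m^2

# The transmitted fraction is the same either way we group the factors.
f1 = math.exp(-mu * L)
f2 = math.exp(-mu_over_rho * areal_density)
print(f1, f2)
```

Grouping the factors as (μ/ρ)(ρL) changes nothing mathematically; it only changes which quantities you must measure.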

In Chapter 15 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I explain some of the advantages of using μ/ρ.
The mass attenuation coefficient has the advantage of being independent of the density of the target material, which is particularly useful if the target is a gas. It has an additional advantage if Compton scattering is the dominant interaction. If σtot = ZσC, then μatten/ρ = ZσCNA/A [Z is the atomic number, A the mass number, NA is Avogadro’s number, and σC is the Compton cross section]. Since Z/A is nearly 1/2 for all elements except hydrogen, this quantity changes very little throughout the periodic table. This constancy is not true for the photoelectric effect or pair production. Figure 15.10 plots the mass attenuation coefficient vs energy for three substances spanning the periodic table. It is nearly independent of Z around 1 MeV where Compton scattering is dominant. The K and L absorption edges can be seen for lead; for the lighter elements they are below 10 keV. Figure 15.11 shows the contributions to μatten/ρ for air from the photoelectric effect, incoherent scattering, and pair production. Tables of mass attenuation coefficients are provided by the National Institute of Standards and Technology (NIST) at http://www.nist.gov/pml/data/xcom/index.cfm.
Let me offer an example where it makes sense to consider the mass attenuation coefficient.

Imagine you have a large box of area S. You measure a mass M of the fluid pentane and pour it into the box. Then you place a source of x-rays under the box, directed upwards. You measure the intensity of the radiation incident on the underside of the box to be Io, and then move your detector to above the box and measure the intensity of radiation that passes through the pentane to be I. Finally, you use a ruler to measure the thickness of the pentane layer, L.

You now have enough data to determine both the linear attenuation coefficient and the mass attenuation coefficient of pentane. For the linear attenuation coefficient, use the relationship I = Io exp(−μL) and solve for μ = ln(Io/I)/L. You can also calculate the density ρ = M/(SL). If you want the mass attenuation coefficient, you can now easily determine it: μ/ρ = S ln(Io/I)/M. You can also calculate the areal density: ρL = M/S.
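The arithmetic above is easy to script. Every measured value below (S, M, L, and Io/I) is hypothetical, chosen only to illustrate the formulas; notice that the last two quantities never use L:

```python
import math

# Hypothetical measurements (illustration only):
S = 0.25          # box area, m^2
M = 10.0          # mass of pentane poured in, kg
L = 0.0639        # measured layer thickness, m
I0_over_I = 1.5   # ratio of incident to transmitted intensity

mu = math.log(I0_over_I) / L               # linear attenuation coefficient, m^-1
rho = M / (S * L)                          # density, kg/m^3
mass_atten = S * math.log(I0_over_I) / M   # mu/rho, m^2/kg (no L required)
areal_density = M / S                      # rho*L, kg/m^2 (no L required)

# Consistency check: mu/rho computed directly matches mu divided by rho.
assert abs(mass_atten - mu / rho) < 1e-12
print(mu, rho, mass_atten, areal_density)
```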

Next you perform the same experiment on neopentane. You use the same box with area S and measure out the same mass M of fluid. You find that Io/I is unchanged, but L is about 6% larger. You conclude the linear attenuation coefficient and the density both decrease by 6%, but the mass attenuation coefficient and the areal density are unchanged.

Why is Io/I the same for both fluids? Pentane and neopentane are isomers. They have exactly the same chemical formula, C5H12, but different structures: pentane is an unbranched hydrocarbon, while neopentane has a central carbon bonded to four other carbon atoms. Because the mass M of both substances is the same, the number of atoms is the same in each case. The x-ray attenuation depends only on the number and type of atoms, not on how those atoms are arranged (the density). This is one advantage of the mass attenuation coefficient: it depends only on the atoms and not their arrangement.

You can calculate the mass attenuation coefficient and the areal density without knowing L. If for some reason L were difficult to measure, you could still determine the mass attenuation coefficient even if you could not calculate the linear attenuation coefficient.

In a gas, the number of molecules is fixed but the density depends on the pressure and temperature. The mass attenuation coefficient does not change with the pressure and temperature. Again, it just depends on the atoms and not their distribution.

Water has a density of 1 g/cm3. If you express the mass attenuation coefficient in cm2/g and the linear attenuation coefficient in cm−1, then the mass attenuation coefficient and the linear attenuation coefficient have the same numerical value. Most tissue has a density close to that of water, so this trick works well for tissue too.

Given these advantages, have I started liking the mass attenuation coefficient? No, I still think it’s weird. But I can tolerate it a little better now.

Friday, March 4, 2016

Welcome Home Scott Kelly

A photograph of Scott Kelly, when he returned to earth after a year on the space station.
Scott Kelly, when he returned to earth
after a year on the space station.
This week astronaut Scott Kelly returned to Earth after nearly a year on the International Space Station. One goal of his mission was to determine how astronauts would function during long trips in space. I suspect we will learn a lot from Kelly about life in a weightless environment. But one of the biggest risks during a mission to Mars would be radiation exposure, and we may not learn much about that from trips to the space station.

In space, the major source of radiation is cosmic rays, consisting mostly of high-energy (GeV) protons. Most of these particles are deflected by Earth’s magnetic field or absorbed by our atmosphere, and never reach the surface. The space station orbits above the atmosphere but within range of the geomagnetic field, so Kelly was partially shielded from cosmic rays. He probably experienced a dose of about 150 mSv. This is much larger than the annual background dose on the surface of the Earth. According to Chapter 16 of Intermediate Physics for Medicine and Biology, we all are exposed to about 3 mSv per year.

A photograph of Scott and Mark Kelly.
Scott and Mark Kelly.
Is 150 mSv in one year dangerous? This dose is below the threshold for acute radiation sickness. It would, however, increase your chances of developing cancer. A rule of thumb is that the excess relative risk of cancer is about 5% per Sv. This does not mean Kelly has a 0.75% chance of getting cancer (5%/Sv times 0.15 Sv). Instead, it means that Scott Kelly has a 0.75% higher chance of getting cancer than his brother Mark Kelly, who remained on Earth. This is a significant increase in risk, but may be acceptable if your goal in life is to be an astronaut. The Kelly twins are both 52 years old, and the excess relative risk goes down with age, so the extra risk of Scott Kelly contracting cancer is probably less than 0.5%.
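The arithmetic in this paragraph is just a linear scaling, sketched below (the function name is my own; the 5% per sievert figure is the rule of thumb quoted above, before any age adjustment):

```python
# Linear rule of thumb: excess cancer risk of roughly 5% per sievert.
def excess_risk(dose_sv, risk_per_sv=0.05):
    """Excess lifetime cancer risk for a given dose, by the linear rule of thumb."""
    return risk_per_sv * dose_sv

# Kelly's approximate space-station dose over the year:
print(f"{excess_risk(0.150):.2%}")  # 0.75%

# A one-year interplanetary trip at about 660 mSv:
print(f"{excess_risk(0.660):.2%}")  # 3.30%
```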

NASA’s goal is to send astronauts to Mars. Such a mission would require venturing beyond the range of Earth’s geomagnetic field, increasing the exposure to cosmic rays. Data obtained by the Mars rover Curiosity indicate that a one-year interplanetary trip would result in an exposure of 660 mSv, more than four times Kelly’s exposure on the space station. A dose of 660 mSv would be unlikely to cause serious acute radiation sickness, but would increase the cancer risk. NASA would have to either shield the astronauts from cosmic rays (not easy given their high energy) or accept the increased risk. I’m guessing they will accept the risk.

Friday, February 26, 2016

Top 10 Isotopes

Everyone loves “top ten” lists. So, I have prepared a list of the top ten isotopes mentioned in Intermediate Physics for Medicine and Biology. These isotopes range from light to heavy, from abundant to rare, and from mundane to exotic. I have no statistics to back up my choices; they are just my own view about which isotopes play a key role in biology and medicine. Feel free to sound off in the comments about your favorite isotope that I missed. Let’s count them down to number one.
  10. 1H (hydrogen-1). This simplest of all isotopes has a nucleus that consists of only a single proton. Almost all magnetic resonance imaging is based on imaging 1H (see Chapter 18 of IPMB about MRI). Its importance arises from its large abundance and its nuclear dipole moment.
  9. 222Rn (radon-222). While radon doesn’t have a large role in nuclear medicine, it is responsible for a large fraction of our annual background radiation dose (see Chapter 16 about the medical uses of x-rays). 222Rn is created in a decay chain starting with the long-lived isotope 238U. Because radon is a noble gas, it can diffuse out of uranium-containing rocks and enter the air, where we breathe it in, exposing our lungs to its alpha particle decay.
  8. 131I (iodine-131). 131I is used in the treatment of thyroid cancer. Iodine is selectively taken up by the thyroid, where it undergoes beta decay, providing a significant dose to the surrounding tissue. A tenth of its radiation arises from gamma decay, so we can use the isotope for both imaging and therapy (see Chapter 17 about nuclear medicine).
  7. 192Ir (iridium-192). This gamma emitter is often used in stents placed in blocked arteries. It is also an important source for brachytherapy (Chapter 17), when a radioactive isotope is implanted in a tumor.
  6. 129Xe (xenon-129). This isotope is used in magnetic resonance images of the lung. Although the isotope is not abundant, its polarization can be increased dramatically using a technique called hyperpolarization (Chapter 18).
  5. 10B (boron-10). This isotope of boron plays the central role in boron neutron capture therapy (Chapter 16), in which boron-containing drugs accumulate in a tumor. When irradiated by neutrons, the boron decays into an alpha particle (4He) and 7Li, which both have high energy and are highly ionizing.
  4. 60Co (cobalt-60). For many years cobalt-60 was used as a source of radiation during cancer therapy (Chapter 16). The gamma knife uses 60Co sources to produce its 1.25 MeV radiation. The isotope is used less nowadays, replaced by linear accelerators.
  3. 125I (iodine-125). Iodine is the only element with two isotopes in this list. Unlike 131I, which emits penetrating beta and gamma rays, 125I deposits much of its energy in short-range Auger electrons (see Chapter 15 on the interaction of x-rays with matter). They deliver a large, concentrated dose when 125I is used for radioimmunotherapy.
  2. 18F (fluorine-18). A classic positron emitter, 18F is widely used in positron emission tomography (Chapter 17). Often it is attached to a sugar molecule as 18F-fluorodeoxyglucose, which is taken up and then trapped inside cells, providing a PET marker for high metabolic activity.
  1. 99mTc (technetium-99m). The king of all nuclear medicine isotopes, 99mTc is used in diverse imaging applications (Chapter 17). It emits a 141-keV gamma ray that is ideal for most detectors. The isotope is often bound to other molecules to produce specific radiopharmaceuticals, such as 99mTc-sestamibi or 99mTc-tetrofosmin. If you are only familiar with one isotope used in nuclear medicine, let it be 99mTc.

Friday, February 19, 2016

The Sievert Integral

In Section 17.11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss brachytherapy.
Brachytherapy (brachy means short) involves implanting directly in a tumor sources for which the radiation falls off rapidly with distance because of attenuation, short range, or 1/r2. Originally the radioactive sources (seeds) were implanted surgically, resulting in high doses to the operating room personnel. In the afterloading technique, developed in the 1960s, hollow catheters are implanted surgically and the sources inserted after the surgery. Remote afterloading, developed in the 1980s, places the sources by remote control, so that only the patient receives a radiation dose.
A photograph of brachytherapy sources, used for radiation treatment of cancer.
Brachytherapy sources.
Often brachytherapy is performed by implanting a source of radiation formed as a line. Below is a new homework problem for calculating the dose of radiation assuming a small line source. You will do the calculation with and without a shield surrounding the source.

Section 17.11

Problem 56 ½. Brachytherapy is often performed using a radioactive source shaped as a line of length L having a total cumulated activity à and a mean energy emitted per unit cumulated activity Δ. Assume Eq. 17.50 describes the specific absorbed fraction Φ in the surrounding tissue, which has an energy absorption coefficient μen and density ρ.

(a) Calculate the dose D a distance h away from the center of the line source (assume h is much less than both L and 1/μen). Let x indicate the position along the source, and set x = 0 at the center, so r2 = x2 + h2. The total dose is an integral over the length of the source, which has a cumulated activity per unit length Ã/L. Evaluate this integral using the substitution x = h tanθ. In the limits of integration, ignore end effects by letting L extend to infinity. You may need the trigonometric relationships d(tanθ)/dθ = sec2θ and 1 + tan2θ = sec2θ.

(b) Repeat the calculation in part (a), except add a coaxial cylindrical shield of thickness b surrounding the line source, made of a material having an attenuation coefficient μatten. The dose from a small section of the source is now attenuated by an additional factor of exp(−μattenb secθ). Justify the factor of secθ in the exponential. Show that the dose can now be written as the result from part (a) times 2/π times a definite integral, called the Sievert integral. Derive an expression for the Sievert integral.

(c) Make a drawing that indicates the physical meaning of h, b, x, r, L, and θ. Explain why the dose is inversely proportional to L.
The Sievert integral is analyzed and tabulated in the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables by Abramowitz and Stegun. It can be generalized to include end effects. The integral is named after Rolf Sievert, the Swedish medical physicist who is honored by the SI unit for equivalent dose: the sievert.
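For readers who want to see the Sievert integral in action, here is a sketch that evaluates S(θ, x) = ∫ exp(−x secφ) dφ from 0 to θ numerically with Simpson’s rule (the function name and step count are arbitrary choices of mine):

```python
import math

def sievert_integral(theta, x, n=1000):
    """Approximate S(theta, x) = integral from 0 to theta of exp(-x*sec(phi)) dphi
    using composite Simpson's rule (n must be even)."""
    h = theta / n
    total = math.exp(-x / math.cos(0.0)) + math.exp(-x / math.cos(theta))
    for i in range(1, n):
        weight = 4 if i % 2 == 1 else 2
        total += weight * math.exp(-x / math.cos(i * h))
    return total * h / 3.0

# Sanity check: with no shielding (x = 0) the integrand is 1, so S(theta, 0) = theta.
print(sievert_integral(math.pi / 4, 0.0))   # ≈ pi/4 ≈ 0.7854
# Any shielding (x > 0) reduces the integral, and hence the dose:
print(sievert_integral(math.pi / 4, 0.5) < math.pi / 4)  # True
```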

Friday, February 12, 2016

Perspectives on Working at the Physics-Biology Interface

The cover of the journal Physical Biology, containing the special issue Perspectives on Working at the Physics-Biology Interface, by Howard Berg and Krastan Blagoev.
Physical Biology.
A few weeks ago, I wrote that “I’ve always been fascinated by physicists who move into biology, and I collect stories about scientists who have made this transition successfully.” Imagine my delight when I discovered a special issue of the journal Physical Biology about “Perspectives on Working at the Physics-Biology Interface.” Howard Berg and Krastan Blagoev collected many stories about physicists working in biology. In their introductory editorial, they write
Physics is analytical, heavily dependent on mathematical equations; biology is more descriptive, heavily dependent on historical facts. There is a cultural gap. In physics, a theorist who can interpret others' experimental results is revered. In biology, such a person is suspect: ideas are thought cheap, facts dear.

But cultures can change. As problems in physics have become more difficult and more expensive to solve, or have been solved and thus are less interesting, physicists have begun to explore more complex areas of endeavor, including biology. Biologists, on the other hand, have begun to appreciate the benefits of thinking more quantitatively about their data. We thought it would be of interest to hear from physicists who have negotiated this cultural gap. What did they find challenging about biology, and how did they manage to begin work in such a different field? What advice might they have for younger practitioners of the art? One of us (HCB) moved long ago from work on hydrogen masers to studies of the motile behavior of bacteria. His trajectory is given in an interview published in Current Biology [1].

Some of our contributors have been involved with biophysics since their PhD, several were trained in condensed-matter theory, and others in nuclear or high-energy particle physics. Their interests range from the structure of proteins, RNA, or natural products, to cognitive or social abilities of bacteria, to emergent properties of complex or active media, or to the behavior of immune systems or neural networks. They all have interesting points of view, some subdued, others outspoken. We hope you enjoy the mix. Our hope is that with this issue we are able to capture the situation at the beginning of the 21st Century and to follow with another issue of this kind in ten years time.
Below I list all the papers in this special issue, along with their abstracts. I hope readers of Intermediate Physics for Medicine and Biology will find them as inspiring as I did.

The emergence of a new kind of biology by Harold J Morowitz

“It is happily no longer axiomatic that a biophysicist is a physiologist who can fix his own amplifier. Fortunately, physicists are still drifting into biology and bringing new ideas. Please dear colleagues, do take the time to learn biochemistry.” Harold Morowitz provides a personal perspective on working at the interface between the physical and biological sciences.

Two cultures? Experiences at the physics-biology interface by John J Hopfield

“I didn’t really think of this as moving into biology, but rather as exploring another venue in which to do physics.” John Hopfield provides a personal perspective on working on the border between physical and biological sciences.

A Perspective: Robert B Laughlin by Robert B Laughlin

Despite their cultural differences, physics and biology are destined to interact with each other more in the future. The reason is that modern physics is fundamentally about codification of emergent law, and life is the greatest of all emergent phenomena.

Ask not what physics can do for biology—ask what biology can do for physics by Hans Frauenfelder

Stan Ulam, the famous mathematician, said once to Hans Frauenfelder: “Ask not what Physics can do for biology, ask what biology can do for physics.” The interaction between biologists and physicists is a two-way street. Biology reveals the secrets of complex systems, physics provides the physical tools and the theoretical concepts to understand the complexity. The perspective gives a personal view of the path to some of the physical concepts that are relevant for biology and physics (Frauenfelder et al 1999 Rev. Mod. Phys. 71 S419–S442). Schrödinger's book (Schrödinger 1944 What is Life? (Cambridge: Cambridge University Press)), loved by physicists and hated by eminent biologists (Dronamraju 1999 Genetics 153 1071–6), still shows how a great physicist looked at biology well before the first protein structure was known.

Universal relations in the self-assembly of proteins and RNA by D Thirumalai

Concepts rooted in physics are becoming increasingly important in biology as we transition to an era in which quantitative descriptions of all processes from molecular to cellular level are needed. In this perspective I discuss two unexpected findings of universal behavior, uncommon in biology, in the self-assembly of proteins and RNA. These findings, which are surprising, reveal that physics ideas applied to biological problems, ranging from folding to gene expression to cellular movement and communication between cells, might lead to discovery of universal principles operating in adoptable living systems.

Physics transforming the life sciences by José N Onuchic

Biological physics is clearly becoming one of the leading sciences of the 21st century. This field involves the cross-fertilization of ideas and methods from biology and biochemistry on the one hand and the physics of complex and far from equilibrium systems on the other. Here I want to discuss how biological physics is a new area of physics and not simply applications of known physics to biological problems. I will focus in particular on the new advances in theoretical physics that are already flourishing today. They will become central pieces in the creation of this new frontier of science.

Research at the interface of physics and biology: bridging the two fields by Kamal Shukla

I firmly believe that interaction between physics and biology is not only natural, but inevitable. Kamal Shukla provides a personal perspective on working at the interface between the physical and biological sciences.

Let’s not forget plants by Athene Donald

“Many physicists see the interface with biology as an exciting place to be.” Athene Donald provides a personal perspective on working at the interface between the physical and biological sciences.

My encounters with bacteria—learning about communication, cooperation and choice by Eshel Ben-Jacob

My journey into the physics of living systems began with the most fundamental organisms on Earth, bacteria, that three decades ago were perceived as solitary, primitive creatures of limited capabilities. A decade later this notion had faded away and bacteria came to be recognized as the smart beasts they are, engaging in intricate social life through a sophisticated chemical language. Acting jointly, these tiny organisms can sense the environment, process information, solve problems and make decisions so as to thrive in harsh environments. The bacterial power of cooperation manifests in their ability to develop large colonies of astonishing complexity. The number of bacteria in a colony can amount to many billions, yet they exchange 'chemical tweets' that reach each and every one of them so they all know what they're all doing, each cell being both actor and spectator in the bacterial Game of Life. I share my encounters with bacteria, what I learned about the secrets of their social life and wisdom of the crowd, and why and how, starting as a theoretical physicist, I found myself studying social intelligence of bacteria. The story ends with a bacteria guide to cyber-war on cancer.

Working together at the interface of physics and biology by Bonnie L Bassler and Ned S Wingreen

Good communication, whether it is between quorum-sensing bacteria or the different scientists studying those critters, is the key to a successful interdisciplinary collaboration. Bonnie Bassler and Ned Wingreen provide a personal perspective on working at the interface between the physical and biological sciences.

Learning physics of living systems from Dictyostelium by Herbert Levine

Unlike a new generation of scientists that are being trained directly to work on the physics of living systems, most of us more senior members of the community had to find our way from other research areas. We all have our own stories as to how we made this transition. Here, I describe how a chance encounter with the eukaryotic microorganism Dictyostelium discoideum led to a decades-long research project and taught me valuable lessons about how physics and biology can be mutually supportive disciplines.

Letting the cat out of the bag: a personal journey in Biophysics by Carlos J Bustamante

When the author arrived in Berkeley, in the mid 1970s, to study Biophysics he soon felt as if he was engaging himself in a somewhat marginal activity. Biology was then entering another of its cyclical periods of annotation that was to culminate with the human genome project. Two decades later, however, at the end of this process, it had become clear that two main tasks were acquiring a central importance in biological research: a renewed push for a quantitative, precise description of biological systems at the molecular level, and efforts towards an integrated understanding of the operation, control, and coordination of cellular processes. Today, these have become two of the most fertile research areas in Biophysics.

A theoretical physicist’s journey into biology: from quarks and strings to cells and whales by Geoffrey B West

Biology will almost certainly be the predominant science of the twenty-first century but, for it to become successfully so, it will need to embrace some of the quantitative, analytic, predictive culture that has made physics so successful. This includes the search for underlying principles, systemic thinking at all scales, the development of coarse-grained models, and closer ongoing collaboration between theorists and experimentalists. This article presents a personal, slightly provocative, perspective of a theoretical physicist working in close collaboration with biologists at the interface between the physical and biological sciences.

Understanding immunology: fun at an intersection of the physical, life, and clinical sciences by Arup K Chakraborty

Understanding how the immune system works is a grand challenge in science with myriad direct implications for improving human health. The immune system protects us from infectious pathogens and cancer, and maintains a harmonious steady state with essential microbiota in our gut. Vaccination, the medical procedure that has saved more lives than any other, involves manipulating the immune system. Unfortunately, the immune system can also go awry to cause autoimmune diseases. Immune responses are the product of stochastic collective dynamic processes involving many interacting components. These processes span multiple scales of length and time. Thus, statistical mechanics has much to contribute to immunology, and the oeuvre of biological physics will be further enriched if the number of physical scientists interested in immunology continues to increase. I describe how I got interested in immunology and provide a glimpse of my experiences working on immunology using approaches from statistical mechanics and collaborating closely with immunologists.

Rejoice in the hubris: useful things biologists could do for physicists by Robert H Austin

Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.
One of my favorite quotes is from Morowitz’s paper: “Like many physicists, Gamov was impatient with biochemical nomenclature and for adenine, thymine, guanine, and cytosine he substituted hearts, spades, clubs, and diamonds.” Many of the papers reinforce the need for tight collaborations with biologists, and the need to learn some biology. I agree with that view, but it was nevertheless a guilty delight to read Robert Austin’s article, in which the old physics hubris takes center stage. Read it, but don’t tell anyone that you did.

Friday, February 5, 2016

The Rest of the Story

Alan was born 102 years ago today in Banbury, England. He was descended from a long line of Quakers. Quakers are often pacifists, so Alan’s dad George didn’t fight in World War I. Instead, he took part in a relief effort in the Middle East. But war is dangerous even if you are not in the line of fire, and George died of dysentery in Baghdad when Alan was only four.

Alan’s mom was left to raise him and his two brothers alone. She encouraged Alan’s interest in science, and so did his eccentric Aunt Katie, who took him bird watching. When he was 15, Alan was hired by an ornithologist to survey rookeries and heronries. He spent hours searching for rare birds in salt marshes. All this kindled his passion for learning.

Based on his strong academic record, Alan won a scholarship to study botany, zoology, and chemistry at Trinity College, part of the University of Cambridge. One of Cambridge’s distinguished zoologists gave Alan some good advice: study as much physics and mathematics as you can! So he did. He also did what all undergraduates should do: research. He was good at it; so good that he was awarded a Rockefeller Fellowship to go to New York for a year. He kept at his research, and traveled around to other parts of the United States, such as Massachusetts and Saint Louis, to learn more.

When he got back to Cambridge, Alan’s knowledge of physics allowed him to build his own equipment, enabling him to move his research in exciting directions. He and his collaborators began to get dramatic results. Just when he was on the verge of making decisive discoveries, Hitler marched into Poland and the world was at war again.

Page 2

Alan suspended his own research and dedicated his talents to defeating the Germans. The Battle of Britain was won, in part, by the development of radar. Alan worked on a special type of radar that was installed in airplanes and used by RAF fighter pilots to locate and intercept Luftwaffe bombers. Alan and a small group of scientists toiled frantically, working seven days a week. They risked their lives on test flights in planes fitted with the new radar. For six years, during what should have been a young scientist’s most productive period, Alan set aside his own interests to help the Allies win the war.

Once World War II ended, Alan returned to Cambridge. After all this time, had science passed him by? No! He took up his research where he had left off, and started making groundbreaking discoveries in electrophysiology. With his coworkers, Alan figured out how nerves send signals down their axons, first passing sodium ions through the cell membrane and then passing potassium ions.

In 1963, Alan Hodgkin received the Nobel Prize in Physiology or Medicine for discovering the ionic mechanism of nerve excitation.

And now you know THE REST OF THE STORY. Good day!

---------------------------------------------------------------------------------

This blog post was written in the style of Paul Harvey’s wonderful “The Rest of the Story” radio program. The content is based on Hodgkin’s autobiography Chance and Design: Reminiscences of Science in Peace and War. You can read about Hodgkin's work on electrophysiology—including Hodgkin and Huxley’s famous mathematical model of the nerve action potential—in Chapter 6 of Intermediate Physics for Medicine and Biology.

Happy birthday, Alan Hodgkin!

Friday, January 29, 2016

The Number and Distribution of Capillaries in Muscles with Calculations of the Oxygen Pressure Head Necessary for Supplying the Tissue

Oxygen diffuses from capillaries into tissue where it is used for metabolism. Russ Hobbie and I discuss diffusion in Chapter 4 of Intermediate Physics for Medicine and Biology. Below is a new homework problem on this topic.
Section 4.11

Problem 37 ½. Consider a cylindrical capillary of radius a, containing blood having an oxygen concentration Co (molecules/m3). The capillary is surrounded by a cylinder of tissue of radius b that has an oxygen concentration C(r), consumes oxygen at a rate per unit volume Q (molecules/m3s), and has a diffusion constant D (m2/s). At r = a, C = Co and at r = b, dC/dr = 0. Within the tissue, C(r) obeys the steady-state diffusion equation
D (1/r) d/dr (r dC/dr) = Q.
(a) Calculate C(r). Hint: guess a solution of the form C(r) = A + B r2 + E ln(r), and determine values for the constants A, B, and E.
(b) Plot C(r) versus r assuming b = 10a and Qb2/(CoD) = 1.

(c) Determine the minimum value of Co as a function of a, b, D, and Q, assuming the oxygen concentration is nowhere negative.
(d) Describe what assumptions underlie this model.
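One way to double-check part (a) is numerically. The closed form below, C(r) = Co + (Q/4D)(r2 − a2) − (Qb2/2D) ln(r/a), is what the hinted trial solution yields; this sketch verifies it against the diffusion equation and both boundary conditions by finite differences. All parameter values are arbitrary illustration values, not physiological data:

```python
import math

# Arbitrary illustration values (not physiological data):
a, b = 1.0, 10.0          # capillary and tissue-cylinder radii
D, Q, C0 = 1.0, 0.002, 1.0

def C(r):
    """Candidate solution from part (a)."""
    return C0 + Q / (4 * D) * (r * r - a * a) - Q * b * b / (2 * D) * math.log(r / a)

def ode_residual(r, h=1e-4):
    """Residual of D*(C'' + C'/r) - Q, estimated with central differences."""
    d1 = (C(r + h) - C(r - h)) / (2 * h)
    d2 = (C(r + h) - 2 * C(r) + C(r - h)) / (h * h)
    return D * (d2 + d1 / r) - Q

assert abs(C(a) - C0) < 1e-12                 # C = Co at r = a
dC_at_b = (C(b + 1e-5) - C(b - 1e-5)) / 2e-5
assert abs(dC_at_b) < 1e-6                    # dC/dr = 0 at r = b
for r in (2.0, 5.0, 9.0):
    assert abs(ode_residual(r)) < 1e-4        # diffusion equation satisfied
print("solution checks out")
```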
This problem plays an important role in the history of physiology. August Krogh used the model to infer that when Q increased during exercise, b must decrease (by additional vessels opening that were closed when the muscle was at rest) in order to supply the tissue with sufficient oxygen. For “his discovery of the capillary motor regulating mechanism,” he was awarded the Nobel Prize. Krogh’s model also represents an early contribution of mathematical modeling to medicine and biology. He presented his model in the paper:
Krogh, A. (1919) The number and distribution of capillaries in muscles with calculations of the oxygen pressure head necessary for supplying the tissue. J. Physiol., 52: 409–415.
He acknowledges mathematician K. Erlang for deriving the mathematical formula for C(r).

Asimov's Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
Isaac Asimov includes an entry for Krogh in Asimov’s Biographical Encyclopedia of Science and Technology (Second Revised Edition).
KROGH, Schack August Steenberg (krawg)
Danish physiologist
Born: Grena, Jutland, November 15, 1874
Died: Copenhagen, September 13, 1949

Krogh, the son of a brewer, was educated at the University of Copenhagen, where he intended to study medicine but shifted his interest to physiology. He obtained his master’s degree in 1899.

He was particularly involved in respiration, following the path of oxygen, nitrogen, and carbon dioxide in and out of the body. In 1908 he gained a professorial position at the University of Copenhagen and there his studies of respiration led him to suggest that the capillaries (the tiniest blood vessels) of the muscles were open during muscular work and partially closed during rest. He went on to demonstrate this and to show the importance of such capillary control to the economy of the body.

For this work, he was awarded the Nobel Prize in Physiology and Medicine in 1920. He went on thereafter to show that this capillary control was brought about by the action of both muscles and hormones.

After Denmark was occupied by Nazi Germany in 1940, Krogh was forced to go underground and then to escape to Sweden. He remained there till the end of the war, then returned to liberated Denmark.

Friday, January 22, 2016

A Brief History of Human Functional Brain Mapping

In Chapter 18 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe functional magnetic resonance imaging.
The term functional magnetic resonance imaging (fMRI) usually refers to a technique developed in the 1990s that allows one to study structure and function simultaneously. The basis for fMRI is inhomogeneities in the magnetic field caused by the differences in the magnetic properties of oxygenated and deoxygenated hemoglobin. No external contrast agent is required. Oxygenated hemoglobin is less paramagnetic than deoxyhemoglobin. If we make images before and after a change in the blood flow to a small region of tissue (perhaps caused by a change in its metabolic activity), the difference between the two images is due mainly to changes in the blood oxygenation. One usually sees an increase in blood flow to a region of the brain when that region is active. This BOLD contrast in the two images provides information about the metabolic state of the tissue, and therefore about the tissue function.
Brain Mapping: The Systems.
The amazing story of how, 25 years ago, two competing teams developed fMRI simultaneously is told by Marcus Raichle in his chapter “A Brief History of Human Functional Brain Mapping,” published in the book Brain Mapping: The Systems. Below I provide excerpts.
A race was on to produce the first human functional images with MRI even though the participants were unaware of the activities of each other! Who were the participants? [Seiji] Ogawa and his colleagues working with Kamil Ugurbil and friends at the University of Minnesota and a group at the Massachusetts General Hospital led by Ken Kwong...

Ugurbil turned to a new postdoctoral fellow in the laboratory, Ravi Menon, to help in the effort to obtain the first functional MRI BOLD images in humans...He was joined by Ogawa and [David] Tank from Bell Labs along with members of the Ugurbil lab and a pair of Grass Goggles for visual stimulation!….It was early summer of 1991 that believable results were finally obtained. This was obviously too late to submit an abstract to the upcoming Society of Magnetic Resonance Conference to be held in San Francisco in August. Members of the laboratory, nevertheless, left for the meeting with slides in their pockets hopeful that they would have a chance to show some of their new results.

Meanwhile, a very parallel but completely independent set of events was unfolding in Boston. Ken Kwong, a member of the group at the Massachusetts General Hospital, was anxious to develop a method for measuring blood flow with MRI….Kwong saw a poster by Bob Turner, another MR physicist working at the NIH, which was of related interest. Turner had been studying hypoxia/ischemia in cats produced by brief periods of ventilatory arrest….They choose a visual activation paradigm. A pair of well-known Grass Goggles resided in the lab to support the function activation work using contrast agents….

Buoyed by the results obtained with the goggles and BOLD imaging, the MGH group rushed to submit a “Works in Progress” abstract to the Society of Magnetic Resonance Conference… Much to the MGH group’s dismay and to this day unexplained, this particular abstract failed to reach those putting the program together…. Recognizing by this time the significance of their results, they persuaded Tom Brady to include their results in his plenary lecture. The group from Minneapolis had no such opportunity! Not only did the scientific world get its first glimpse of fMRI, but the two groups working on the concept also realized for the first time who the competition was!

By the early fall of 1991 both the Minneapolis and the Boston groups had publishable results. With great anticipation, papers were submitted to Nature (Minneapolis) and Science (Boston) and summarily rejected. The basic judgment of both journals was that they contained nothing new! It is fitting that the work of the two groups appeared together in the Proceedings of the National Academy of Science (Kwong et al., 1992; Ogawa et al., 1992). A new and very important chapter on functional brain imaging had begun.
Functional MRI has since been used for many studies of how the brain works. I consider it one of the best examples of physics applied to medicine in the last 25 years. Raichle not only tells the story of fMRI’s development well (though perhaps with too many exclamation points for my taste), but also reviews the long history of mapping brain function, dating back into the 19th century. The chapter is well worth a read. Enjoy!

Friday, January 15, 2016

You Can Hear About a Nickel’s Worth of Difference

In Chapter 13 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe a logarithmic scale for sound intensity: the decibel. In general, an increase in intensity is perceived as a greater loudness (although this relationship is surprisingly complex). Is there an analogous relationship between frequency and pitch? Yes! Loudness is determined using a logarithmic scale with a base of ten, whereas pitch is measured using a logarithmic scale of base two, because a doubling of the frequency corresponds to raising the pitch by one octave. Those familiar with music are accustomed to associating different pitches with different notes in a musical scale. Most instruments are tuned using the equal tempered scale, dividing an octave into twelve equal logarithmic steps. One step, called a semitone, corresponds to the difference in pitch between, say, F and F-sharp. The fractional change of one semitone is 2^(1/12) = 1.0595, or an increase in frequency of roughly 6%. This frequency shift is so important that it is expressed by a special unit: one semitone is equal to 100 cents. The cent is to pitch as the decibel is to loudness. Doubling the intensity corresponds to an increase of 3 dB, whereas doubling the frequency corresponds to an increase of 1200 cents.
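These logarithmic relationships are easy to compute for yourself. Here is a short Python sketch (the function names are my own):

```python
import math

def cents(f1, f2):
    """Pitch interval from frequency f1 to f2, in cents (1200 per octave)."""
    return 1200 * math.log2(f2 / f1)

def decibels(i1, i2):
    """Intensity change from i1 to i2, in decibels."""
    return 10 * math.log10(i2 / i1)

semitone = 2 ** (1 / 12)                   # equal-tempered semitone ratio, 1.0595
print(round(cents(440, 440 * semitone)))   # 100 (one semitone)
print(round(cents(440, 880)))              # 1200 (one octave)
print(round(decibels(1, 2), 1))            # 3.0 (doubling the intensity)
```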

How finely can the human ear resolve pitch? In other words, if you play two pure tones one right after the other, by how much must their frequency differ for you to notice that they are indeed different? A typical listener can detect a change of about 5 cents, leading to the rule of thumb that you can hear about a nickel’s worth of difference. Try testing your own pitch perception online here. When I tried, I could always detect a difference of 50 cents (tones of 440 and 453 Hz, presented one after another; I assume this is a control used by the testing software, because the difference is obvious). I could never detect a difference of 4 or 6 cents (440 compared to 441 or 441.5 Hz). I had inconsistent results with 8 and 11 cents (440 compared to 442 or 443 Hz); sometimes I perceived slightly different tones, and sometimes I didn’t. Ten cents is a reasonable approximation to my just noticeable difference, which confirms what I have always suspected: I have poor but not pathological pitch perception. When playing the tuba in my high school band, the director often had us “tune up” relative to a standard note, typically played by the first clarinet. I always had trouble with this task; he had to tell me if I was sharp or flat. Poor pitch perception has its advantages: you don’t need to hire a piano tuner! Pity my poor wife—my only audience, other than my dog Suki—but my wrong notes are probably more bothersome than the out-of-tune piano, so spending for a piano tuner would not help.

Humans can hear pitches from roughly 20 to 20,000 Hz. Because 2^10 is about 1000, humans can hear frequencies that range over roughly ten octaves, or 12,000 cents, and therefore you (but not I) can distinguish about 2400 different tones. A piano keyboard plays notes 28 to 4186 Hz, which is a little more than seven octaves, or roughly 8700 cents (87 semitones between the 88 keys). Sometimes changes in frequency are measured in millioctaves: 1 mO = 1.2 cents. Although the cent is not a metric unit, I still worry that the SI police, who insist that all things centi- are going out of fashion compared to all things milli-, will demand we start using the millioctave. I hope not.
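The back-of-the-envelope numbers in this paragraph can be reproduced directly. In the sketch below I use 27.5 Hz, the exact frequency of the lowest piano key (A0), where the text rounds to 28 Hz:

```python
import math

def octaves(f_low, f_high):
    """Number of octaves spanned by a frequency range."""
    return math.log2(f_high / f_low)

hearing_range = octaves(20, 20000)        # about 9.97 octaves
piano_range = octaves(27.5, 4186)         # A0 to C8, about 7.25 octaves

print(round(1200 * hearing_range))        # 11959, i.e., roughly 12,000 cents
print(round(1200 * piano_range))          # 8700 cents
print(round(1200 * hearing_range / 5))    # 2392 tones, at 5 cents per tone
```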

If two tones are played at the same time, rather than one after another, you can perceive small differences in pitch using beats. Consider the trigonometric identity
sin(2πAt) + sin(2πBt) = 2 sin(2π[(A + B)/2]t) cos(2π[(A − B)/2]t).
If frequencies A and B are similar, then their sum is a tone at the carrier frequency, equal to the average of the two original frequencies, modulated at the difference of the two frequencies. For instance, if you have a tone corresponding to concert A (440.00 Hz) and another tone out of tune with concert A by 3 cents (440.78 Hz), when played together the sound consists of a tone having frequency 440.39 Hz modulated by a sinusoid that gets louder and softer with a frequency of 0.78 Hz (or a period of 1.3 seconds). Your ear can’t tell that the pitch of the carrier tone is different from concert A, but it can detect the variation in loudness caused by the beats. You can hear beats generated online here.
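The beat arithmetic can be verified numerically. A minimal sketch, using the frequencies from the example above:

```python
import math

f_a, f_b = 440.00, 440.78      # concert A, and a tone about 3 cents sharp
carrier = (f_a + f_b) / 2      # perceived pitch: the average frequency
beat = f_b - f_a               # loudness rises and falls at the difference frequency

print(round(carrier, 2))       # 440.39 Hz
print(round(beat, 2))          # 0.78 Hz
print(round(1 / beat, 1))      # 1.3 s beat period

# Spot-check the trigonometric identity at an arbitrary time
t = 0.123
lhs = math.sin(2 * math.pi * f_a * t) + math.sin(2 * math.pi * f_b * t)
rhs = 2 * math.sin(2 * math.pi * carrier * t) * math.cos(math.pi * (f_a - f_b) * t)
print(abs(lhs - rhs) < 1e-9)   # True
```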

If several tones have widely separated frequencies, your ear (or more properly, your brain) detects distinct notes played together: a chord. If two tones have nearly the same frequency (as in the example above), you hear a single tone with modulated amplitude: beats. For intermediate frequency differences (say, between a note an octave above concert A, 880 Hz, and a severely out-of-tune A at 897 Hz, a difference of 17 Hz or 33 cents) you hear neither a chord nor beats. Instead, your brain perceives dissonance. A London police whistle generates frequencies of 1904 and 2136 Hz, a difference of 232 Hz or about 200 cents. It sounds annoying. Try it yourself.

The frequency ratio between a C and G (a perfect fifth) should be 3:2, which is 702 cents. However, in equal tempered tuning the shift between C and G is 700 cents. This difference of 2 cents is indistinguishable to all but the best ears. The major third is more of a problem. It should have a ratio of 5:4, or 386 cents. In the equal tempered scale, a third is 400 cents, off by 14 cents, which a good ear can hear. If all these tonal relations differ from what they would be in just intonation, then why do we use an equal tempered scale? It allows us to change keys without retuning the instrument. But we pay a price in that a major chord does not have precisely the desired 4:5:6 frequency ratio. Pythagoras must be turning over in his grave.
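The tuning discrepancies in this paragraph can be checked with a few lines of Python:

```python
import math

def cents_of_ratio(ratio):
    """Size of a frequency ratio in cents."""
    return 1200 * math.log2(ratio)

fifth_just = cents_of_ratio(3 / 2)        # just perfect fifth, ~702 cents
third_just = cents_of_ratio(5 / 4)        # just major third, ~386 cents

print(round(fifth_just - 700, 1))         # 2.0 cents: barely audible error
print(round(400 - third_just, 1))         # 13.7 cents: a good ear hears this

# Equal-tempered major chord vs. the ideal 4:5:6 ratios (1 : 1.25 : 1.5)
third_et = 2 ** (4 / 12)                  # four semitones up
fifth_et = 2 ** (7 / 12)                  # seven semitones up
print(round(third_et, 4), round(fifth_et, 4))  # 1.2599 1.4983
```

The last line makes the trade-off concrete: the equal-tempered fifth (1.4983) is almost exactly 3/2, but the equal-tempered third (1.2599) misses 5/4 by about 1%.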

Both pitch perception and color vision arise from the physics of frequency detection. But the tones we hear and colors we see are determined by more than just physics. There is a lot of information processing by the brain, which leads to many fascinating and unexpected results including surprising pathologies. To completely appreciate how we perceive frequency, you also need to understand the brain.

Friday, January 8, 2016

Biomagnetism Therapy: Pseudoscientific Twaddle

Voodoo Science,
by Robert Park.
Ever since Robert Park—author of Voodoo Science and the weekly column What’s New—suffered a stroke in 2013, I have been searching for another debunker of pseudoscience. Finally, I’ve found her! Harriet Hall is a retired physician and former Air Force flight surgeon. Every week she battles nonsense in the blog sciencebasedmedicine.org. In her November 20 entry, she addressed “biomagnetic therapy.”
Biomagnetism Therapy: Pseudoscientific Twaddle

In a television interview, a practitioner of biomagnetic therapy claimed she had cured her own breast lump and the metastatic cancer of another person. I wonder how many viewers believed her. On the “official website” of biomagnetism therapy, http://biomagnetism.net/, they claim it is “the answer to ALL your health problems… an all-natural, non-invasive therapy proven to prevent, diagnose and treat countless diseases, chronic illnesses and degenerative health problems.”

Sound too good to be true? Of course it does! You are already skeptical. If you read further, you will become even more skeptical….”
She concludes
…It pains me to see misinformation such as this fed to gullible patients. Using biomagnetic therapy isn’t likely to harm patients physically, but it’s likely to harm their comprehension of science. It’s likely to waste their money, and it could delay getting treatments that do work. Perhaps the worst thing is that people who practice this therapy are deceiving themselves. They don’t understand science, and they mistake testimonials for evidence of efficacy. They don’t understand the need for controlled studies. They don’t understand placebo effects, suggestion, expectation, regression to the mean, the natural course of illness, and all the other things that can lead people to believe a bogus treatment works. It is particularly tragic that anyone trained as an MD could have such poor critical thinking skills and be misled by such egregious pseudoscience.
Russ Hobbie and I have an entire chapter about biomagnetism in Intermediate Physics for Medicine and Biology. We discuss the measurement of the very small magnetic fields produced by the brain (magnetoencephalography) and the use of rapidly changing magnetic fields to stimulate neurons (transcranial magnetic stimulation). We also devote a chapter to magnetic resonance imaging. These are important topics, but they often get mixed up with phony claims about “biomagnetic therapy.”

If you doubt this is a real problem, go to Google and search for “biomagnetism” (the title of Chapter 8 in IPMB). The first site you get starts “One of the most peculiar therapy systems that FAIM [Foundation for Alternative and Integrative Medicine] is investigating is one that uses ordinary magnets to heal. Although magnets have been used in therapies for a long time, this particular method uses pairs of magnets to neutralize disease-causing pathogens in the body...” The second site begins “Yes! It’s the answer to ALL your health problems…” The third describes “The Revolutionary Therapy based on the Biomagnetic Pairs discovered by Dr. Isaac Goiz Durán, MD in 1988...” The fourth is the “Official website for Biomagnetism classes in the USA with Dr. Isaac Goiz Durán...” Finally, the fifth site in the list is Wikipedia’s entry on biomagnetism (the measurement of weak magnetic fields produced by the body). The first four are twaddle; the fifth is reputable.

Women Aren’t Supposed to Fly,
by Harriet Hall.
If you want to learn more about Harriet Hall, read her autobiography Women Aren’t Supposed to Fly, in which she describes her experiences in medical school and the Air Force. From the Preface:
There’s an old curse “may you live in interesting times.” I lived in an era when society was starting to allow women to enter male-dominated fields, but didn’t yet entirely approve. Someone said, “Whatever women do they must do twice as well as men to be thought half as good. Luckily this is not difficult.” Actually, it was difficult. It was frequently frustrating, sometimes painful, often ridiculously funny, and always interesting. Come with me on a ramble through my education and career and let me tell you what it was like.
What Women Aren’t Supposed to Fly does not explain is how Hall ended up a lampooner of baloney and poppycock. She needs to write a second book, telling that story. I’m sure it would be equally fascinating and amusing.

I’d have preferred another physicist pick up Bob Park’s banner, but I’ll take what I can get. Harriet Hall, keep up the good work and let’s end this “biomagnetic therapy” rubbish.