Friday, September 27, 2024

Taylor Diffusion

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Poiseuille flow: the flow of a viscous fluid in a pipe. Consider laminar flow of a fluid, having viscosity η, through a long pipe with radius R and length Δx. The flow is driven by a pressure difference Δp across its ends. 

The velocity of the fluid in the pipe is

v = (Δp/4ηΔx)(R² − r²) ,

where r is the distance from the center of the pipe. Figure 1.26 in IPMB includes a plot of the velocity profile, which is a parabola: large at the center of the pipe (r = 0) and zero at the wall (r = R) because of the no-slip boundary condition.

 
In most mechanics problems, not only is the velocity important but also the displacement. Yet, somehow until recently I never stopped to consider what the displacement of the fluid looks like during Poiseuille flow. Let’s say that at time t = 0 you somehow mark a thin layer of the fluid uniformly across the pipe’s cross section (the light blue line on the left in the figure below). Perhaps you do this by injecting dye or using magnetic resonance imaging to tag the spins. How does the fluid move?

At time t = Δt the displacement also forms a parabola, with the fluid at the center moving a ways down the pipe to the right and the fluid at the wall not moving at all. As time marches on, the fluid keeps flowing down the pipe, with the parabola getting stretched longer and longer. Eventually, the marked fluid will extend the entire length of the pipe.

Poiseuille flow is laminar, meaning the fluid moves smoothly along streamlines. Laminar flow is typical of fluid motion when viscosity dominates so the Reynolds number is small. Now let’s consider how the marked or tagged fluid gets mixed with the normal fluid. In laminar flow, there is no turbulent mixing, because there are no eddies to stir the fluid. In fact, there is no component of the fluid velocity in the radial direction at all. There is no mixing, except by diffusion.

Diffusion is discussed in Chapter 4 of IPMB. It is the random movement of particles from a region of higher concentration to a region of lower concentration. Let’s consider what would happen to the marked fluid if the flow were turned off (for instance, if we set Δp = 0) and only diffusion occurred. The originally narrow light blue band would no longer drift downstream, but it would spread with time, rapidly at first and then more slowly later. In reality the concentration of marked fluid would change continuously in a Gaussian-like way, with a higher concentration at the center and a gradually lower concentration in the periphery, but drawing that picture would be difficult, so I’ll settle for showing a uniform band getting wider in time.

Now, what happens if drift and diffusion happen together? You get something like this: 

The parabola stretched out along the pipe is still there, but it gets wider and wider with time because of diffusion.

What happens as even more time goes by? Eventually the marked fluid will have enough time to diffuse radially across the entire cross section of the pipe. If we look a ways downstream, the situation will be something like that shown below.

The parabola disappears as the marked fluid becomes locally smeared out. Now, here’s the interesting thing: The spreading of the marked fluid is greater than you would expect from pure diffusion. It’s as if Poiseuille flow increased the diffusion. This effect is called Taylor diffusion: an effective diffusion on a large scale arising from Poiseuille flow on a small scale. The flow stretches that parabola axially and then diffusion spreads the marked fluid radially. This phenomenon is named after British physicist Geoffrey Ingram Taylor (1886–1975). Although the derivation is a bit too difficult for a blog post, you can show (see the Wikipedia article about Taylor dispersion) that the long-time, large-scale behavior is a combination of drift plus diffusion with an effective diffusion constant, Deff, given by

Deff = D + R²v²/48D ,
where v is the mean flow speed (equal to one half the flow speed at the center of the tube) and D is the molecular diffusion constant. As the flow goes to zero (v = 0) the effective diffusion constant goes to Deff = D and Taylor diffusion disappears; it’s just plain old diffusion. If the flow speed is large, then Deff is larger than D by a factor of R²v²/48D². The quantity Rv/D is the Péclet number (see Homework Problem 43 in Chapter 4 of IPMB), which is a dimensionless ratio of transport by convection to transport by diffusion. Taylor diffusion is particularly important when the Péclet number is large, meaning the drift caused by Poiseuille flow is greater than the spreading caused by diffusion. This enhanced diffusion can be important in some applications. For instance, if you are trying to mix two liquids using microfluidics, you would ordinarily have to wait a long time for diffusion to do its thing. Taylor diffusion can speed that mixing along.
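If you’d like to watch Taylor diffusion emerge on its own, the drift-plus-random-walk picture above is easy to simulate. Here is a minimal Monte Carlo sketch (my own illustration, not from IPMB; all parameter values are arbitrary, dimensionless choices) that lets marked particles drift with the parabolic Poiseuille profile while diffusing, reflects them at the pipe wall, and compares the spread of their axial positions with the effective diffusion constant given above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary illustrative parameters (dimensionless units)
    R, D, v = 1.0, 1.0, 20.0      # pipe radius, diffusion constant, mean speed
    dt, T, n = 1e-3, 20.0, 5000   # time step, total time (>> R²/D), particles

    # Mark a thin band of fluid, uniform over the pipe's cross section
    theta = rng.uniform(0.0, 2.0*np.pi, n)
    r = R*np.sqrt(rng.uniform(0.0, 1.0, n))
    y, z = r*np.cos(theta), r*np.sin(theta)
    x = np.zeros(n)               # axial positions

    s = np.sqrt(2.0*D*dt)         # rms random-walk step per coordinate
    for _ in range(int(T/dt)):
        # parabolic drift, 2v(1 - r²/R²), plus diffusion in every direction
        x += 2.0*v*(1.0 - (y*y + z*z)/R**2)*dt + s*rng.standard_normal(n)
        y += s*rng.standard_normal(n)
        z += s*rng.standard_normal(n)
        rad = np.sqrt(y*y + z*z)  # reflect any walkers that left the pipe
        out = rad > R
        y[out] *= (2.0*R - rad[out])/rad[out]
        z[out] *= (2.0*R - rad[out])/rad[out]

    print("simulated Deff =", np.var(x)/(2.0*T))
    print("theory, D + R²v²/48D =", D + (R*v)**2/(48.0*D))

With these numbers the Péclet number Rv/D is 20, and both the simulated and theoretical values of Deff come out roughly nine times the molecular D.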

You can call this phenomenon “Taylor diffusion” if you want. Some people use the term “Taylor dispersion.” I call it “diffusion (Taylor’s version).”

 Taylor Swift singing Shake It Off (Taylor’s Version)


Friday, September 20, 2024

Transitioning to Environmentally Sustainable, Climate-Smart Radiation Oncology Care

“Transitioning to Environmentally
Sustainable, Climate-Smart
Radiation Oncology Care,”
by Lichter et al.,
IJROBP, 113:915–924, 2022.
Loyal readers of this blog may have noticed an increasing number of posts related to climate change, and the intersection of global warming with health care and medical physics. This is not an accident. I’m growing increasingly worried about the impact of climate change on our society. One way I act to oppose climate change is to write about it (here, here, here). So, I was delighted to read Katie Lichter and her team’s editorial about “Transitioning to Environmentally Sustainable, Climate-Smart Radiation Oncology Care” (International Journal of Radiation Oncology Biology Physics, Volume 113, Pages 915–924, 2022). Their introduction begins (references removed)

Climate change is among the most pressing global threats. Action now and in the coming decades is critical. Rising temperatures exacerbate the frequency and intensity of extreme weather events, including wildfires, hurricanes, floods, and droughts. Such events threaten not only our ecosystems, but also our health. Climate change’s negative effects on human health are slowly becoming better understood and are projected to increase if emissions mitigation remains inadequate. Emerging research notes a disproportionate effect of climate change on vulnerable populations (e.g., older populations, children, low-income populations, ethnic minorities, and patients with chronic conditions, including cancer) who are the least equipped to deal with these outsized effects.
Then Lichter and her coauthors get specific about radiation oncology.
More than half of cancer patients will require radiation therapy (RT) during the course of their illness. As most RT courses are delivered using fractionated external beam radiation (EBRT), patients undergoing EBRT are vulnerable to treatment disruptions from climate events. Notably, disruption of RT treatments due to severe weather events has been shown to affect patient treatment and survival. As radiation oncologists, it is imperative to recognize and further investigate the effects of climate change on health and cancer outcomes and understand the specific vulnerabilities of patients receiving RT to the effects of climate change. We must also advance our understanding of the contribution of radiation oncology as a specialty to green house gas (GHG) emissions, and what measures may be taken in our daily practices to join the international efforts in reducing our negative environmental impact.
Next the authors present their four R’s for addressing the climate impact of oncology care: reduce, reuse, recycle, rethink. This is sort of an inside joke among radiation biologists, because radiation biology famously has its own four R’s: repair, reassortment, reoxygenation, and repopulation. Lichter et al.’s four R’s explain how to lower radiation oncology’s effect on the climate.

  1. Reduce means to lower the energy needs for imaging and therapeutic devices, and to minimize medical waste.
  2. Reuse means to favor reusable equipment and supplies (such as surgical gowns) whenever possible.
  3. Recycle means to recycle any single-use supplies that cannot be reused. Much now finds its way to urban landfills rather than to recycling centers.
  4. Rethink means to reconsider all medical radiation oncology processes and procedures in light of climate change. Can some things be done by telemedicine? Can we reduce the number of fractions of radiation a patient receives so fewer visits to the hospital are required? Can some professional conferences be held virtually rather than in person? Sometimes the answer may be yes and sometimes no, but all these issues need to be reexamined.

Lichter’s editorial concludes (my italics)

The health care system contributes significantly to today’s climate health crisis. All efforts addressing the crisis are important due to their direct emissions reduction potential, and the example they set for the health care system and the patients who need the care. Although the effects of increasing global temperatures on human health are well studied, the effects of health care, and specifically oncology and radiation treatments, on contributing to climate change are not. The radiation oncology community has a unique opportunity to use our technological expertise and awareness to assess and minimize the environmental impact of our care and set the standard for sustainable health care practices for other specialties to emulate. 

Thank you Katie Lichter and your whole team for all the important work that you are doing to fight climate change! Your four R’s—reduce, reuse, recycle, and rethink—apply beyond radiation oncology, and even beyond health care, to all of our society’s activities. Perhaps writers of textbooks such as Intermediate Physics for Medicine and Biology need to reduce, reuse, recycle, and especially rethink how our books impact, and are impacted by, global warming.

 
Listen to Katie Lichter talk about her climate journey.

Friday, September 13, 2024

The Million Person Study: Whence It Came and Why

A screenshot of the article
“How Sound is the Model Used to
Establish Safe Radiation Levels?”
on the website physicsworld.com.
Last fall, physicsworld.com published an editorial by Robert Crease asking “How Sound is the Model Used to Establish Safe Radiation Levels?” This question is addressed in Chapter 16 of Intermediate Physics for Medicine and Biology, and I have discussed it before in this blog. Crease begins
Ionizing radiation can damage living organisms, that’s clear. But there are big questions over the validity of the linear no-threshold model (LNT), which essentially states that the risk of cancer from radiation and carcinogens always increases linearly with dose. The LNT model implies, in other words, that any amount of radiation is always dangerous and that zero risk is present only at zero dose.
Crease notes that the alternative models are the threshold model, in which there is a minimum dose below which there is no risk, and the hormesis model, which says that small doses are beneficial because they trigger repair mechanisms. He explains that by adopting such a conservative position as the linear no-threshold model we may cause unforeseen negative consequences.

What sort of negative consequences? One of the most urgent and dire health hazards faced by humanity is climate change. Addressing the danger of a warming climate, with all its implications, must be our top priority. Climate change is caused primarily by the emission of greenhouse gases such as carbon dioxide that result from the burning of fossil fuels to generate electricity, warm our homes, power our vehicles, or make steel and concrete. One alternative to burning fossil fuels is to use nuclear energy. But nuclear energy is feared by many, in part because of the linear no-threshold model, which implies that any exposure to ionizing radiation is dangerous. If, in fact, the linear no-threshold model is not valid at the low doses associated with nuclear power plants and nuclear waste disposal, then the public might be more accepting of nuclear power, which may help us in the battle against climate change. Crease concludes
One of the many reasons for the need to study the validity of LNT is that convictions of its accuracy continue to be used as an argument against nuclear power plants, in connection with their operation as well as their spent fuel rods. Nuclear power may be undesirable for reasons other than this. But the critical need to find a workable alternative to fossil fuels for energy production requires an honest ability to assess the validity of this model.
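To see what’s at stake near zero dose, here is a tiny sketch of the three dose-response models Crease describes (my own illustration; the functional forms, slopes, and threshold dose are hypothetical, chosen only to show the qualitative shapes, not fit to any data).

    def lnt(dose, slope=1.0):
        # Linear no-threshold: risk proportional to dose, all the way to zero
        return slope*dose

    def threshold(dose, d0=0.5, slope=1.0):
        # No risk below the threshold dose d0, linear above it
        return 0.0 if dose < d0 else slope*(dose - d0)

    def hormesis(dose, d0=0.5, slope=1.0):
        # Negative risk (a net benefit) at small doses, harmful above d0
        return slope*dose*(dose - d0)/(dose + d0)

    for d in (0.0, 0.25, 0.5, 1.0, 2.0):
        print(f"dose {d:4.2f}:  LNT {lnt(d):5.2f}  "
              f"threshold {threshold(d):5.2f}  hormesis {hormesis(d):5.2f}")

All three models rise at high doses; the argument is entirely about the behavior near zero.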
In my opinion, determining if the linear no-threshold model is valid at low doses is one of the greatest challenges of medical physics today. It’s a critical example of how physics interacts with medicine and biology. We need to figure this out. But how?

Screenshot of The Million
Person Study website.
One way is to conduct an epidemiological study of low-dose radiation exposure. But such a study would have to be huge, because it’s looking for a tiny effect influencing an enormous population. What you need is something like The Million Person Study. Yes, medical physics has its own “big science” large-scale collaboration. The Million Person Study’s website states
There is a major gap in epidemiological understanding, however, of the health effects experienced by populations exposed to radiation at lower doses, gradually over time.

The foundation of the Million Person Study is to fill that gap, using epidemiological methods of assessing rate and quality of mortality on a study group of one million persons exposed to this type of radiation.
The website notes that there are many reasons to assess the risk of low doses of radiation, including determining 1) the side effects of medical imaging procedures such as computed tomography, 2) the danger of nuclear accidents or terrorism (dirty bombs), 3) the safety of occupations that expose workers to a slight radiation dose, 4) the hazards of environmental exposure such as from radon in homes, and 5) the uncertainty of space and high altitude travel such as when sending astronauts to Mars. The Million Person Study not only focuses on the level of exposure, but also on the duration: was it a brief exposure as if from a nuclear accident, or a low dose delivered over a long time?

The cover of a special issue of the
International Journal of Radiation Biology
about The Million Person Study.
Want to learn more about The Million Person Study? See the paper by John Boice, Sarah Cohen, Michael Mumma, and Elisabeth Ellis titled “The Million Person Study: Whence it Came and Why,” published in the International Journal of Radiation Biology in 2022 (Volume 98, Pages 537–550). Its abstract is printed below.
Purpose: The study of low dose and low-dose rate exposure is of immeasurable value in understanding the possible range of health effects from prolonged exposures to radiation. The Million Person Study (MPS) of low-dose health effects was designed to evaluate radiation risks among healthy American workers and veterans who are more representative of today’s populations than are the Japanese atomic bomb survivors exposed briefly to high-dose radiation in 1945. A million persons were needed for statistical reasons to evaluate low-dose and dose-rate effects, rare cancers, intakes of radioactive elements, and differences in risks between women and men.

Methods and Materials: The MPS consists of five categories of workers and veterans exposed to radiation from 1939 to the present. The U.S. Department of Energy (DOE) Health and Mortality study began over 40 years ago and is the source of ∼360,000 workers. Over 25 years ago, the National Cancer Institute (NCI) collaborated with the U.S. Nuclear Regulatory Commission (NRC) to effectively create a cohort of nuclear power plant workers (∼150,000) and industrial radiographers (∼130,000). For over 30 years, the Department of Defense (DoD) collected data on aboveground nuclear weapons test participants (∼115,000). At the request of NCI in 1978, Landauer, Inc., (Glenwood, IL) saved their dosimetry databases which became the source of a cohort of ∼250,000 medical and other workers.

Results: Overall, 29 individual cohorts comprise the MPS of which 21 have been or are under active study (∼810,000 persons). The remaining eight cohorts (∼190,000 persons) will be studied as resources become available. The MPS is a national effort with critical support from the NRC, DOE, National Aeronautics and Space Administration (NASA), DoD, NCI, the Centers for Disease Control and Prevention (CDC), the Environmental Protection Agency (EPA), Landauer, Inc., and national laboratories.

Conclusions: The MPS is designed to address the major unanswered question in radiation risk understanding: What is the level of health effects when exposure is gradual over time and not delivered briefly. The MPS will provide scientific understandings of prolonged exposure which will improve guidelines to protect workers and the public; improve compensation schemes for workers, veterans and the public; provide guidance for policy and decision makers; and provide evidence for or against the continued use of the linear nonthreshold dose-response model in radiation protection.

Lead on, Million Person Study, and thank you for your effort. We need those results!

Friday, September 6, 2024

Black Carbon and Radon

Drawdown
In a previous post, I reviewed the book Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming. Sometimes I visit the book’s associated website, drawdown.org, because it has so much to teach me about climate change. Recently, I read one of their publications about Reducing Black Carbon. The executive summary begins:
Black carbon—also referred to as soot—is a particulate matter that results from the incomplete combustion of fossil fuels and biomass. As a major air and climate pollutant, black carbon (BC) emissions have widespread adverse effects on human health and climate change. Globally, exposure to unhealthy levels of particulate matter, including BC, is estimated to cause between three and six million excess deaths every year. These health impacts—and the related economic losses—are felt disproportionately by those living in low- and middle-income countries. Furthermore, BC is a potent greenhouse gas with a short-term global warming potential well beyond carbon dioxide and methane. Worse still, it is often deposited on sea ice and glaciers, reducing reflectivity and accelerating melting, particularly in the Arctic and Himalayas.

Therefore, reducing BC emissions results in a triple win, mitigating climate change, improving the lives of more than two billion people currently exposed to unclean air, and saving trillions of dollars in economic losses.
As I learned more, I found that black carbon is only one type of fine particle in the air. I began to wonder “where have I heard about the risk of particulate matter before?” Then it hit me: Section 17.12 of Intermediate Physics for Medicine and Biology, which is about radon. Russ Hobbie and I wrote
Uranium, and therefore radium and radon, are present in most rocks and soil. Radon, a noble gas, percolates through grainy rocks and soil and enters the air and water in different concentrations. Although radon is a noble gas, its decay products have different chemical properties and attach to dust or aerosol droplets which can collect in the lungs. High levels of radon products in the lungs have been shown by both epidemiological studies of uranium miners and by animal studies to cause lung cancer.

Aha! Perhaps black carbon is an effective carrier of radon decay products into the lungs. This is just a hypothesis, but I did find a reference that supported the idea (Wang et al., “Particle Radioactivity from Radon Decay Products and Reduced Pulmonary Function Among Chronic Obstructive Pulmonary Disease Patients,” Environmental Research, Volume 216, Article Number 114492, 2023). Below I present part of their introduction (references removed)

Consistent with the existing literature on ambient particulate matter (PM) exposure, our previous studies found that indoor PM was associated with increased systemic inflammation and oxidative stress and reduced pulmonary function among [chronic obstructive pulmonary disease] patients in Eastern Massachusetts. It has recently been recognized that an attribute of PM with potential to promote pulmonary damage after inhalation is radionuclides attached to PM, referred to as particle radioactivity (PR). Though ionizing radiation has many sources (e.g., cosmic radiation and medical procedures), the majority of natural background radiation (and, thus, of PR) is from radon (²²²Rn), which decays into α-, β-, and γ-emitting decay products. Although radon gas itself is rapidly exhaled, freshly generated radon decay products (also referred to as radon progeny) can rapidly attach to particles in the ambient and indoor air and be inhaled into the airways. After deposition, particles continue to emit radiation in the lungs with a residence time that can range from several days to months. Compared to β- and γ-emissions from radionuclides, α-emitting particles are considered the most toxic due to their high energy and large mass. Since α-radiation cannot penetrate the intact epidermis, inhalation is the predominant route of exposure, and evidence that α-radiation may cause pulmonary damage is suggested by its effects on inducing inflammation and reactive oxygen species in human lung fibroblasts as well as up-regulating gene pathways in human pulmonary epithelial cells associated with inflammatory and respiratory diseases.

I didn’t find any mention of radon in Drawdown’s publication Reducing Black Carbon or in the World Health Organization’s publication Health Effects of Black Carbon. I don’t know if radon is an important part of the mechanism by which black carbon causes health hazards. Yet, I wonder. I know that radon is a more serious hazard among smokers compared to nonsmokers, and smoking should have similarities to breathing soot. This black carbon/radon hypothesis raises some interesting questions. Is black carbon more effective than other types of particulate matter in transporting radon decay products? Does global warming increase lung cancer? Is black carbon more dangerous in areas with high radon concentrations? Is black carbon more hazardous for people living in poorly ventilated buildings rather than in well-ventilated buildings or outdoors?

Soot is clearly bad news. As drawdown.org says, it’s a triple threat: climate, health, and well-being. They offer several ideas for reducing black carbon:

  1. Urgently implement clean cooking solutions
  2. Target transportation to reduce current—and prevent future—emissions
  3. Reduce BC from the shipping industry
  4. Regulate air quality
  5. Include BC in nationally determined contributions and the United Nations Framework Convention on Climate Change
  6. Improve BC measurements and estimates

The item about regulating air quality makes me wonder whether a positive feedback loop could underlie the impact of black carbon on the climate: Soot in the air increases global warming; increased global warming increases the number of forest fires; and an increased number of forest fires increases the amount of soot in the air. Again, this is just a hypothesis, and I don’t know if it’s true. But I do know that in my 25 years living in Michigan, the only serious problem with air pollution and soot I’ve experienced was caused by last summer’s Canadian forest fires, and such fires appear, at least to me, to be related to global warming.

 

Black carbon may be one of the places where climate change and IPMB intersect. It’s an important topic and deserves closer study.

Friday, August 30, 2024

Joe Redish (1943–2024)

Edward “Joe” Redish, a University of Maryland physics professor, died August 24 of cancer. Joe has been mentioned many times in this blog (here, here, here, and here). He was deeply interested in how students—and in particular biology students—learn physics, an interest with obvious relevance to Intermediate Physics for Medicine and Biology.

Redish, E. F.,
“Using Math in Physics: 7. Telling the Story,”
Phys. Teach., 62: 5–11, 2024.
I knew Joe, and valued his friendship. Rather than writing about him myself, I’ll share some of his thoughts in his own words. He had a wonderful series of papers in The Physics Teacher about using math in physics. The last of the series (published this year) was about using math to tell a story (Redish, E. F., “Using Math in Physics: 7. Telling the Story,” Phys. Teach., Volume 62, Pages 5–11, 2024). He wrote

Even if students can make the blend—interpret physics correctly in mathematical symbology and graphs—they still need to be able to apply that knowledge in productive and coherent ways. As instructors, we can show our solutions to complex problems in class. We can give complex problems to students as homework. But our students are likely to still have trouble because they are missing a key element of making sense of how we think about physics: How to tell the story of what’s happening.

We use math in physics differently than it’s used in math classes. In math classes, students manipulate equations with abstract symbols that usually have no physical meaning. In physics, we blend conceptual physics knowledge with mathematical symbology. This changes the way that we use math and what we can do with it.

We use these blended mental structures to create stories about what’s happening (mechanism) and stabilize them with fundamental physical laws (synthesis).
In an oral history interview with the American Institute of Physics, Joe talked about using simple toy models when teaching physics to biology students.
One of the problems that students run into, that teachers of physics run into teaching biology students, is we use all these trivial toy models, right? Frictionless vacuum. Ignore air resistance. Treat it as a point mass. And the biology students come in and they look at this and they say, “These are not relevant. This is not the real world.” And they know in biology, that if you simplify a system, it dies. You can’t do that. In physics we do this all the time. Simple models are kind of a core epistemological resource for us. You find the simplest example you possibly can and you beat it to death. It illustrates the principle. Then you see how the mathematics goes with the physics. The whole issue of finding simple models is where a lot of the creative art is in physics.
Redish and Cooke,
“Learning Each Other’s Ropes:
Negotiating Interdisciplinary Authenticity,”
CBE—Life Sciences Education,
12:175–186, 2013.
My favorite of Joe’s papers is “Learning Each Other’s Ropes: Negotiating Interdisciplinary Authenticity” which he coauthored with biologist Todd Cooke (CBE—Life Sciences Education, Volume 12, Pages 175–186, 2013).
From our extended conversations, both with each other and with other biologists, chemists, and physicists, we conclude that, “science is not just science.” Scientists in each discipline employ a tool kit of different types of scientific reasoning. A particular discipline is not characterized by the exclusive use of a set of particular reasoning types, but each discipline is characterized by the tendency to emphasize some types more than others and to value different kinds of knowledge differently. The physicist’s enthusiasm for characterizing an object as a disembodied point mass can make a biologist uncomfortable, because biologists find in biology that function is directly related to structure. Yet similar sorts of simplified structures can be very powerful in some biological analyses. The enthusiasm that some biologists feel toward our students learning physics is based not so much on the potential for students to learn physics knowledge, but rather on the potential for them to learn the types of reasoning more often experienced in physics classes. They do not want their students to think like physicists. They want them to think like biologists who have access to many of the tools and skills physicists introduce in introductory physics classes… We conclude that the process is significantly more complex than many reformers working largely within their discipline often assume. But the process of learning each other’s ropes—at least to the extent that we can understand each other’s goals and ask each other challenging questions—can be both enlightening and enjoyable. And much to our surprise, we each feel that we have developed a deeper understanding of our own discipline as a result of our discussions.

You can listen to Joe talk about physics education research on the Physics Alive podcast.

We’ll miss ya, Joe.

Friday, August 23, 2024

The Song of the Dodo

The Song of the Dodo,
by David Quammen.
One of my favorite science writers is David Quammen. I’ve discussed several of his books in this blog before, such as Breathless, Spillover, and The Tangled Tree. A copy of one of his earlier books—The Song of the Dodo: Island Biogeography in an Age of Extinctions—has sat on my bookshelf for a while, but only recently have I had a chance to read it. I shouldn’t have waited so long. It’s my favorite.

Quammen is not surprised that the central idea of biology, natural selection, was proposed by two scientists who studied islands: Charles Darwin and the Galapagos, and Alfred Russel Wallace and the Malay Archipelago. The book begins by telling Wallace’s story. Quammen calls him “the man who knew islands.” Wallace was the founder of the science of biogeography: the study of how species are distributed throughout the world. For example, Wallace’s line lies between two islands in Indonesia that are only 20 miles apart: Bali (with plants and animals similar to those native to Asia) and Lombok (with flora and fauna more like that found in Australia). Because islands are so isolated, they are excellent laboratories for studying speciation (the creation of new species through evolution) and extinction (the disappearance of existing species).

Quammen is the best writer about evolution since Stephen Jay Gould. I would say that Gould was better at penning essays and Quammen is better at authoring books. Much of The Song of the Dodo deals with the history of science. I would rank it up there with my favorite history of science books: The Making of the Atomic Bomb by Richard Rhodes, The Eighth Day of Creation by Horace Freeland Judson, and The Maxwellians by Bruce Hunt.

Yet, The Song of the Dodo is more than just a history. It’s also an amazing travelogue. Quammen doesn’t merely write about islands. He visits them, crawling through rugged jungles to see firsthand animals such as the Komodo Dragon (a giant man-eating lizard), the Madagascan Indri (a type of lemur), and the Thylacine (a marsupial also known as the Tasmanian tiger). A few parts of The Song of the Dodo are one comic sidekick away from sounding like a travel book Tony Horwitz might have written. Quammen talks with renowned scientists and takes part in their research. He reminds me of George Plimpton, sampling different fields of science instead of trying out various sports.

Although I consider myself a big Quammen fan, he does have one habit that bugs me. He hates math and assumes his readers hate it too. In fact, if Quammen’s wife Betsy wanted to get rid of her husband, she would only need to open Intermediate Physics for Medicine and Biology to a random page and flash its many mathematical equations in front of his face. It would put him into shock, and he probably wouldn’t last the hour. In his book, Quammen only presents one equation and apologizes profusely for it. It’s a power law relationship

S = c Aⁿ .

This is the same equation that Russ Hobbie and I analyze in Chapter 2 of IPMB, when discussing log-log plots and scaling. How do you determine the dimensionless exponent n for a particular case? As is my wont, I’ll show you in a new homework problem.
Section 2.11

Problem 40½. In island biogeography, the number of species on an island, S, is related to the area of the island, A, by the species-area relationship: S = c Aⁿ, where c and n are constants. Philip Darlington counted the number of reptile and amphibian species from several islands in the Antilles. He found that when the island area increased by a factor of ten, the number of species doubled. Determine the value of n.
Let me explain to mathaphobes like Quammen how to solve the problem. Assume that on one island there are S₀ species and the area is A₀. On another island, there are 2S₀ species and an area of 10A₀. Put these values into the power law to find S₀ = cA₀ⁿ and 2S₀ = c(10A₀)ⁿ. Now divide the second equation by the first (c, S₀, and A₀ all cancel) to find 2 = 10ⁿ. Take the logarithm of both sides, so log(2) = log(10ⁿ), or using a property of logarithms, log(2) = n log(10). So n = log(2)/log(10) = 0.3. Note that n is positive, as it should be, since increasing the area increases the number of species.
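If you’d rather let the computer take the logarithms, here is a two-line check (a trivial sketch in Python, just to verify the arithmetic):

    import math

    n = math.log(2)/math.log(10)   # from 2 = 10ⁿ
    print(round(n, 3))             # prints 0.301, the 0.3 quoted above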

When I finished the main text of The Song of the Dodo, I thumbed through the glossary and found an entry for logarithm. “Aww,” I thought, “Quammen was only joking; he likes math after all.” Then I read his definition: “logarithm. A mathematical thing. Never mind.”

About halfway through, the book makes a remarkable leap from island biogeography—interesting for its history and relevance to exotic tropical isles—to mainland ecology, relevant to critical conservation efforts. Natural habitats on the continents are being broken up into patches, a process called fragmentation. The expansion of towns and farms creates small natural reserves surrounded by inhospitable homes and fields. The few remaining native regions tend to be small and isolated, making them similar to islands. A small natural reserve cannot support the species diversity that a large continent can (S = c Aⁿ). Extinctions inevitably follow.

The Song of the Dodo also provides insight into how science is done. For instance, the species-area relationship was derived by Robert MacArthur and Edward Wilson. While it’s a valuable contribution to island biogeography, scientists disagree on its applicability to fragmented continents, and in particular they argue about its relevance to applied conservation. Is a single large reserve better than several small ones? In the 1970s a scientific battle raged, with Jared Diamond supporting a narrow interpretation of the species-area relationship and Dan Simberloff advocating for a more nuanced and less dogmatic view. As in any science, the key is to get data to test your hypothesis. Thomas Lovejoy performed an experiment in the Amazon to test the species-area relationship. Parts of the rainforest were being cleared for agriculture or other uses, but the Brazilian government insisted on preserving some of the native habitat. Lovejoy obtained permission to create many different protected rainforest reserves, each a different size. His team monitored the reserves before and after they became isolated from adjacent lands, and tracked the number of species supported in each of these “islands” over time. While the results are complicated, there is a correlation between species diversity and reserve size. Area matters.

One theme that runs through the story is extinction. If you read the book, you better have your hanky ready when you reach the part where Quammen imagines the death of the last Dodo bird. Conservation efforts are featured throughout the text, such as the quest to save the Mauritius kestrel.  
 
The Song of the Dodo concludes with a mix of optimism and pessimism. Near the end of the book, when writing about his trip to Aru (an island in eastern Indonesia) to observe a rare Bird of Paradise, Quammen writes
The sad, dire things that have happened elsewhere, in so many parts of the world—biological imperialism, massive habitat destruction, fragmentation, inbreeding depression, loss of adaptability, decline of wild populations to unviable population levels, ecosystem decay, trophic cascades, extinction, extinction, extinction—haven’t yet happened here. Probably they soon will. Meanwhile, though, there’s still time. If time is hope, there’s still hope.

An interview with David Quammen, by www.authorsroad.com

https://www.youtube.com/watch?v=Quq7PNH1zWM

Friday, August 16, 2024

Happy 100th Birthday Robert Adair

Are Electromagnetic Fields
Making Me Ill?

This Wednesday will be the 100th anniversary of Robert Adair’s birth. I wrote a blog post about Adair recently but he is an important enough figure in biological physics, and in Intermediate Physics for Medicine and Biology, that today I will write about him again. This time I will focus on a difference of opinion between Adair and Joseph Kirschvink about the possible effects of weak electric and magnetic fields in biology. In Are Electromagnetic Fields Making Me Ill? I wrote
One of the first physicists to enter the fray [over the potential hazards of powerline magnetic fields] was Yale physics professor Robert Adair, a member of the National Academy of Sciences who was known for his research on elementary particles called kaons and for his interest in the physics of baseball. In 1991, Adair published an article in the leading physics journal Physical Review investigating the possible mechanisms by which 60-Hz electric and magnetic fields could affect organisms…. Adair concluded that “there are very good reasons to believe that weak [extremely low frequency] fields can have no significant biological effect at the cell level—and no strong reason to believe otherwise” [10].
“Constraints on Biological Effects
of Weak Extremely-Low-Frequency
Electromagnetic Fields”

Reference 10 is
R. Adair, “Constraints on Biological Effects of Weak Extremely-Low-Frequency Electromagnetic Fields,” Physical Review A, Volume 43, Pages 1039–1048, 1991
Kirschvink responded (“Comment on ‘Constraints on biological effects of weak extremely-low-frequency electromagnetic fields.’” Physical Review A, Volume 46, Pages 2178–2184, 1992)
In a recent paper, Adair [Phys. Rev. A 43, 1039 (1991)] concludes that weak extremely-low-frequency (ELF) electromagnetic fields cannot affect biology on the cell level. However, Adair’s assertion that few cells of higher organisms contain magnetite (Fe₃O₄) and his blanket denial of reproducible ELF effects on animals are both wrong. Large numbers of single-domain magnetite particles are present in a variety of animal tissues, including up to a hundred million per gram in human brain tissues, organized in clusters of tens to hundreds of thousands per gram. This is far more than a “few cells.” Similarly, a series of reproducible behavioral experiments on honeybees, Apis mellifera, have shown that they are capable of responding to weak ELF magnetic fields that are well within the bounds of Adair’s criteria. A biologically plausible model of the interaction of single-domain magnetosomes with a mechanically activated transmembrane ion channel shows that ELF fields on the order of 0.1 to 1 mT are capable of perturbing the open-closed state by an energy of kT. As up to several hundred thousand such structures could fit within a eukaryotic cell, and the noise should go as the square root of the number of independent channels, much smaller ELF sensitivities at the cellular level are possible. Hence, the credibility of weak ELF magnetic effects on living systems must stand or fall mainly on the merits and reproducibility of the biological or epidemiological experiments that suggest them, rather than on dogma about physical implausibility.
In his comment, Kirschvink proposed a model of a magnetosome interacting with the earth’s magnetic field that Russ Hobbie and I discuss in Section 9.10 of Intermediate Physics for Medicine and Biology.

What do you think about Kirschvink’s claim that magnetite is found in the human brain? In Are Electromagnetic Fields Making Me Ill? I wrote
Caltech geophysicist Joseph Kirschvink has found magnetite in the brain, which could be the basis of magnetoreception in humans [12]. Experiments to test this hypothesis are difficult; contamination of tissue samples is always a problem, and the mere presence of magnetite does not by itself imply that a magnetic sensor exists.

[12] J. L. Kirschvink, A. Kobayashi-Kirschvink, B. J. Woodford, “Magnetite Biomineralization in the Human Brain,” Proceedings of the National Academy of Sciences, Volume 89, Pages 7683–7687, 1992.
The last sentence of Kirschvink’s abstract particularly interests me: “Hence, the credibility of weak ELF magnetic effects on living systems must stand or fall mainly on the merits and reproducibility of the biological or epidemiological experiments that suggest them, rather than on dogma about physical implausibility.” In one sense it is a truism. Yes, of course, experiments are the final deciding factor in scientific truth. Yet, I’m uncomfortable about characterizing Adair’s analysis as “dogma about physical implausibility.” Adair’s work was based on very basic physics. I suppose you could call Maxwell’s equations and the three laws of thermodynamics “dogma,” but it is a pretty credible dogma.

More recently, Sheraz Khan and David Cohen published a fascinating study about “Using the Magnetoencephalogram to Noninvasively Measure Magnetite in the Living Human Brain” (Human Brain Mapping, Volume 40, Pages 1654–1665, 2019). They observed magnetite primarily in older men, and suggested that magnetite may play a role in neurodegenerative diseases, such as Alzheimer’s.

Adair published a reply (R. K. Adair, “Reply to ‘Comment on “Constraints on Biological Effects of Weak Extremely-Low-Frequency Electromagnetic Fields,”’” Physical Review A, Volume 46, Pages 2185–2187, 1992). His abstract says:
Kirschvink [preceding Comment, Phys. Rev. A 46, 2178 (1992)] objects to my conclusions [Phys. Rev. A 43, 1039 (1991)] that weak extremely-low-frequency (ELF) electromagnetic fields cannot affect biology on the cell level. He argues that I did not properly consider the interaction of such fields with magnetite (Fe₃O₄) grains in cells and that such interactions can induce biological effects. However, his model, designed as a proof of principle that the interaction of weak 60-Hz ELF fields with magnetite domains in a cell can affect cell biology, requires, by his account, a magnetic field of 0.14 mT (1400 mG) to operate, while my paper purported to demonstrate only that fields smaller than 0.05 mT (500 mG) must be ineffective. I then discuss ELF interactions with magnetite generally and show that the failure of Kirschvink’s model to respond to weak fields must be general and that no plausible interaction with biological magnetite of 60-Hz magnetic fields with a strength less than 0.05 mT can affect biology on the cell level.
I tend to side with Adair’s position in his reply; I, too, am skeptical of weak-field magnetic effects in biology. However, the controversy makes me wonder if magnetic resonance imaging interacting with magnetite in the brain might possibly trigger some sort of effect, especially in the newer high-magnetic-field scanners. The magnetic field in a 4-tesla MRI machine is nearly 10⁵ times stronger than the 0.05 mT field of the earth that Adair and Kirschvink are arguing about. I still remain skeptical about MRI effects (see Chapter 2 in Are Electromagnetic Fields Making Me Ill?), but at least this seems to be a more plausible mechanism than interactions with the earth’s magnetic field.
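Here is a back-of-the-envelope sketch of the numbers at stake (my own rough estimate, not a calculation from either paper; the grain diameter is an assumed, typical single-domain magnetosome size):

    import math

    Ms = 4.8e5              # saturation magnetization of magnetite (A/m)
    d = 50e-9               # assumed single-domain grain diameter (m)
    m = Ms*math.pi*d**3/6   # magnetic moment = Ms times grain volume (A m²)
    kT = 1.38e-23*310       # thermal energy at body temperature (J)

    # Adair's 0.05 mT limit, Kirschvink's ~1 mT range, and a 4 T MRI scanner
    for B in (0.05e-3, 1e-3, 4.0):
        print(f"B = {B:g} T:  U/kT = {m*B/kT:.3g}")

With these assumptions the magnetic energy of a single grain stays below kT at 0.05 mT, consistent with Adair’s limit; it exceeds kT near 1 mT, as Kirschvink’s model requires; and in a 4 T scanner it is tens of thousands of times kT.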

Several important figures in physics applied to medicine and biology were born in 1924: Allan Cormack, Bernard Cohen, Robert Plonsey, and Robert Adair. This week we wish Adair a happy 100th birthday. His work on the effect of weak electric and magnetic fields in biology remains relevant today. I wish he were here to see the latest results.

Friday, August 9, 2024

A Comparison of Two Models for Calculating the Electrical Potential in Skeletal Muscle

Roth and Gielen,
Annals of Biomedical Engineering,
Volume 15, Pages 591–602, 1987
Today I want to tell you how Frans Gielen and I wrote the paper “A Comparison of Two Models for Calculating the Electrical Potential in Skeletal Muscle” (Annals of Biomedical Engineering, Volume 15, Pages 591–602, 1987). It’s not one of my more influential works, but it provides insight into the kind of mathematical modeling I do.

The story begins in 1984 when Frans arrived as a post doc in John Wikswo’s Living State Physics Laboratory at Vanderbilt University in Nashville, Tennessee. I had already been working in Wikswo’s lab since 1982 as a graduate student. Frans was from the Netherlands and I called him “that crazy Dutchman.” My girlfriend (now wife) Shirley and I would often go over to Frans and his wife Tiny’s apartment to play bridge. I remember well when they had their first child, Irene. We all became close friends, and would go camping in the Great Smoky Mountains together.

Frans had received his PhD in biophysics from Twente University. In his dissertation he had developed a mathematical model of the electrical conductivity of skeletal muscle. His model was macroscopic, meaning it represented the electrical behavior of the tissue averaged over many cells. It was also anisotropic, so that the conductivity was different if measured parallel or perpendicular to the muscle fiber direction. His PhD dissertation also reported many experiments he performed to test his model. He used the four-electrode method, where two electrodes pass current into the tissue and two others measure the resulting voltage. When the electrodes are placed along the muscle fiber direction, he found that the resulting conductivity depended on the electrode separation. If the current-passing electrodes were very close together then the current was restricted to the extracellular space, resulting in a low conductivity. If, however, the electrodes were farther apart then the current would distribute between the extracellular and intracellular spaces, resulting in a high conductivity.

When Frans arrived at Vanderbilt, he collaborated with Wikswo and me to revise his model. It seemed odd to have the conductivity (a property of the tissue) depend on the electrode separation (a property of the experiment). So we expressed the conductivity using Fourier analysis (a sum of sines and cosines of different frequencies), and let the conductivity depend on the spatial frequency k. Frans’s model already had the conductivity depend on the temporal frequency, ω, because of the muscle fiber’s membrane capacitance. So our revised model had the conductivity σ be a function of both k and ω: σ = σ(k,ω). Our new model had the same behavior as Frans’s original one: for high spatial frequencies the current remained in the extracellular space, but for low spatial frequencies it redistributed between the extracellular and intracellular spaces. The three of us published this result in an article titled “Spatial and Temporal Frequency-Dependent Conductivities in Volume-Conduction Calculations for Skeletal Muscle” (Mathematical Biosciences, Volume 88, Pages 159–189, 1988; the research was done in January 1986, although the paper wasn’t published until April of 1988).
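Here is a toy numerical illustration of that idea (my own sketch, not Frans’s actual model; the Lorentzian 1/(1 + (kλ)²) form and all parameter values are assumptions, chosen just to show the limiting behavior): at high spatial frequency the apparent conductivity is the extracellular value alone, while at low spatial frequency the intracellular space contributes too.

    # Toy apparent longitudinal conductivity versus spatial frequency k
    sigma_e, sigma_i = 0.4, 0.2   # extracellular, intracellular (S/m), made up
    lam = 1e-3                    # length constant (m), made up

    def sigma(k):
        # interpolates from sigma_e + sigma_i (low k, widely spaced
        # electrodes) down to sigma_e alone (high k, closely spaced)
        return sigma_e + sigma_i/(1.0 + (k*lam)**2)

    for k in (1e1, 1e3, 1e5):     # spatial frequency in rad/m
        print(f"k = {k:8.0f} rad/m:  sigma = {sigma(k):.3f} S/m")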

Meanwhile, I was doing experiments using tissue from the heart. My goal was to calculate the magnetic field produced by a strand of cardiac muscle. Current could flow inside the cardiac cells, in the perfusing bath surrounding the strand, or in the extracellular space between the cells. I was stumped about how to incorporate the extracellular space until I read Les Tung’s PhD dissertation, in which he introduced the “bidomain model.” Using this model and Fourier analysis, I was able to derive equations for the magnetic field and test them in a series of experiments. Wikswo and I published these results in the article “A Bidomain Model for the Extracellular Potential and Magnetic Field of Cardiac Tissue” (IEEE Transactions on Biomedical Engineering, Volume 33, Pages 467–469, 1986).

By the summer of 1986 I had two mathematical models for the electrical conductivity of muscle. One was a “monodomain” model (representing an averaging over both the intracellular and extracellular spaces) and one was a “bidomain” model (in which the intracellular and extracellular spaces were each individually averaged over many cells). It was strange to have two models, and I wondered how they were related. One was for skeletal muscle, in which each muscle cell is long and thin but not coupled to its neighbors. The other was for cardiac muscle, which is a syncytium where all the cells are coupled through intercellular junctions. I can remember going into Frans’s office and grumbling that I didn’t know how these two mathematical representations were connected. As I was writing the equations for each model on his chalkboard, it suddenly dawned on me that the main difference between the two models was that for cardiac tissue current could flow perpendicular to the fiber direction by passing through the intercellular junctions, whereas for skeletal muscle there was no intracellular path transverse to the uncoupled fibers. What if I took the bidomain model for cardiac tissue and set the transverse, intracellular conductivity equal to zero? Wouldn’t that, in some way, be equivalent to the skeletal muscle model?

I immediately went back to my own office and began to work out the details. This calculation starts on page 85 of my Vanderbilt research notebook #15, dated June 13, 1986. There were several false starts, work scratched out, and a whole page crossed out with a red pen. But by page 92 I had shown that the frequency-dependent conductivity model for skeletal muscle was equivalent to the bidomain model for cardiac muscle if I set the bidomain transverse intracellular conductivity to zero, except for one strange factor that included the membrane impedance, which represented current traveling transverse to the skeletal muscle fibers by shunting across the cell membrane. But this extra factor was important only at high temporal frequencies (when capacitance shorted out the membrane) and otherwise was negligible. I proudly marked the end of my analysis with “QED” (quod erat demonstrandum; Latin for “that which was to be demonstrated,” which often appears at the end of a mathematical proof).

Two pages (85 and 92) from my Research Notebook #15 (June, 1986).

Frans and I published this result in the Annals of Biomedical Engineering, and it is the paper I cite at the top of this blog post. Wikswo was not listed as an author; I think he was traveling that summer, and when he returned to the lab we already had the manuscript prepared, so he let us publish it just under our names. The abstract is given below:

We compare two models for calculating the extracellular electrical potential in skeletal muscle bundles: one a bidomain model, and the other a model using spatial and temporal frequency-dependent conductivities. Under some conditions the two models are nearly identical. However, under other conditions the model using frequency-dependent conductivities provides a more accurate description of the tissue. The bidomain model, having been developed to describe syncytial tissues like cardiac muscle, fails to provide a general description of skeletal muscle bundles due to the non-syncytial nature of skeletal muscle.

Frans left Vanderbilt in December, 1986 and took a job with the Netherlands section of the company Medtronic, famous for making pacemakers and defibrillators. He was instrumental in developing their deep brain stimulation treatment for Parkinson’s disease. I graduated from Vanderbilt in August 1987, stayed for one more year working as a post doc, and then took a job at the National Institutes of Health in Bethesda, Maryland.

Those were fun times working with Frans Gielen. He was a joy to collaborate with. I’ll always remember that June day when—after brainstorming with Frans—I proved how those two models were related.

Short bios of Frans and me published in an article with Wikswo in the IEEE Trans. Biomed. Eng.,
cited on page 237 of Intermediate Physics for Medicine and Biology.
 

Friday, August 2, 2024

If I Understood You, Would I Have This Look on My Face?

I’m a big Alan Alda fan. As a teenager, I would watch him each week as Hawkeye Pierce on M*A*S*H. Besides being an actor, Alda also had a second career as a science communicator, hosting the PBS series Scientific American Frontiers.

If I Understood You, Would
I Have This Look on My Face?

by Alan Alda.
After writing this science blog for seventeen years, I’ve decided I should try to figure out what I’m doing. So I read Alda’s book If I Understood You, Would I Have This Look on My Face? My Adventures in the Art and Science of Relating and Communicating. In his introduction, Alda writes
You run a company and you think you are relating to your customers and employees, and that they understand what you’re saying, but they don’t, and both customers and employees are leaving you. You’re a scientist who can’t get funded because the people with the money just can’t figure out what you’re telling them. You’re a doctor who reacts to a needy patient with annoyance; or you love someone who finds you annoying, because they just don’t get what you’re trying to say.

But it doesn’t have to be that way.

For the last twenty years, I’ve been trying to understand why communicating seems so hard—especially when we’re trying to communicate something weighty and complicated. I started with how scientists explain their work to the public: I helped found the Center for Communicating Science at Stony Brook University in New York, and we’ve spread what we learned to universities and medical schools across the country and overseas.

But as we helped scientists be clear to the rest of us, I realized we were teaching something so fundamental to communication that it affects not just how scientists communicate, but the way all of us relate to one another.

We were developing empathy and the ability to be aware of what was happening in the mind of another person.
The first half of the book describes a variety of improvisation techniques that teach how to increase your empathy and your ability to connect to others; almost how to read someone’s mind. Alda believes that empathy is the key to communicating: “relating is everything.”

While I find these ideas interesting, improvisation isn’t something I have any experience with and, frankly, have little interest in trying. After all, most of these methods require interpreting facial expressions and body language. What could any of this have to do with the solitary process of writing a blog post?

Then I reached Chapter 15: “Reading the Mind of the Reader.” It starts
I know it sounds odd, but we’ve found that it’s possible to have an inkling of what’s going on in the mind of our audience even when they’re not actually in the room with us—like when we write.
I wish this chapter had been longer. Alda stresses the importance of writing from the reader’s perspective
In his elegant book The Sense of Style, Steven Pinker says that to write as if the reader were looking over your shoulder is probably not possible. It’s just too difficult to take on the perspective of another person.

I wonder...
He then describes Steven Strogatz’s success in writing about mathematics, and how he “engages the reader as a friend.” Readers of my blog might be familiar with Strogatz, whose work I have discussed before (here, here, here, and here). Alda concludes this chapter about writing with
My guess is that even in writing, respecting the other person’s experience gives us our best shot at being clear and vivid, and our best shot, if not at being loved, at least at being understood.
Another technique to improve a scientist’s writing is to tell stories. The secret is to first introduce the main character and their goal. For a scientist, this may be to test a hypothesis. Then, crucially, comes some obstacle that puts everything in suspense. Finally, some turning point arises and the story resolves. Alda claims that a story is engaging because you get “caught up in someone's struggle to achieve something.”
If we’re looking for a way to bring emotion to someone, a story is the perfect vehicle. We can’t resist stories. We crave them.
I’m going to try to incorporate more empathy and story-telling into these blog posts. An even greater challenge will be to use these techniques in a textbook like Intermediate Physics for Medicine and Biology as we prepare the 6th edition. My years of experience teaching undergraduates based on IPMB should help. I’ll do my best.

I’ll let Alda have the last word.
So, it’s really not that complicated: If you read my face, you’ll see if I understand you. Improv games, and even exercises on your own, can bring you in touch with the inner life of another person—even when you sit by yourself and write.

 

Alan Alda on If I Understood You, Would I Have This Look on My Face?

https://www.youtube.com/watch?v=y8xPr6fJRMs 


Alan Alda on why communication is so important to science.

https://www.youtube.com/watch?v=abr6CqbNdM4