Friday, December 20, 2024

The Luria-Delbrück Experiment

Introduction

Today’s question is: do mutations happen randomly, or are they caused by some selective pressure? In other words, are mutations a Darwinian event, where they happen by chance and then natural selection selects those that are favorable to pass on to the offspring, or are mutations Lamarckian, where they happen because they help a species survive (like a giraffe constantly stretching its neck to reach the leaves at the top of the tree, thereby making its neck longer, and then passing that acquired trait to its offspring)? To determine which of these two hypotheses is correct, we need an experimental test.

Let’s examine one famous experiment. To make things simple, consider a specific case. Assume we start with just one individual, who is not a mutant. Furthermore, let each parent have two offspring, and analyze only three generations. For the first two generations there is no selective pressure; it appears only in the third generation. To make the analysis really simple, assume the probability of a mutation, p, is very small.

The most common case is shown in the figure below. Blue circles represent the individuals in each generation, starting in the first generation with just one. Locations where lines branch represent births. (Wait, you say, each child should have two parents, not one! Okay, we are making a simple model. Assume an individual reproduces asexually by splitting into two. We should talk about “splittings” and not “births.”) The green dashed line represents when the selective pressure begins. So our picture shows one great-grandparent, two grandparents, four parents, and eight children. A mutation is indicated by changing a blue circle to red. 

Because p << 1, by far the most common result is shown below, with no mutations. 

A drawing showing a single organism splitting into two, four, and then eight offspring.

 

Lamarckian Evolution

In the case when mutations are caused by some selective pressure (Lamarckian), you can get a more interesting situation like shown below. No one above the dashed line undergoes a mutation because there was no selective pressure then. A child below the dashed line in the bottom row might have a mutation. There are eight children, so the probability of one of the eight having a mutation is 8p. The probability of two offspring having mutations will go as p2, but since we are assuming p is small the odds of having multiple mutant offspring will be negligible. We’ll ignore those cases.  

A drawing showing a single organism splitting into two, four, and then eight offspring. One of the final offspring is a mutant.

Let’s calculate some statistics for this case. Let n be the number of mutant offspring in the last generation (below the dashed line). To find the average value, or mean, of n over several experiments, which we’ll call <n>, you sum up all the possible cases, each multiplied by its probability. In general, we could have n = 0, 1, 2, …, 8, with probabilities p0, p1, …, p8, so <n> is 

<n> = p0 (0) + p1 (1) + p2 (2) + … + p8 (8).

But in this case p2, p3, …, p8 are all negligibly small, so we have only the first two terms in the sum to worry about.

For each individual, the odds of not mutating is (1 – p). In the last generation below the dashed line there are 8 offspring, so the probability of none of them having a mutation, p0, is (1 – 8p). The probability for one mutation (p1) is 8p because there are 8 offspring, each with probability p of mutating. So

<n> = (1 – 8p) (0) + 8p (1) = 8p .

We will also be interested in the variation of results between different trials. For this, we need <n2>:

<n2> = (1 – 8p) (0)2 + 8p (1)2 = 8p .

The variance is the mean of the square of the variation from the mean. In Appendix G of Intermediate Physics for Medicine and Biology, Russ Hobbie and I call the variance σ2 and we prove that σ2 = <n2> – <n>2. In our case

σ2 = <n2> – <n>2 = 8p – (8p)2 .

But remember, p << 1 so the last term is negligible and the variance is 8p. Therefore, the mean and variance are the same. You may have seen a probability distribution with this property before. Appendix J of IPMB states that the Poisson distribution has the same mean and variance. Basically, the Lamarckian case is a Poisson process.
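If you want to check this result numerically, here is a little Monte Carlo sketch in Python (my own illustration, with an assumed value of p, not part of the original analysis): each of the 8 final-generation offspring mutates independently with probability p, and the sample mean and variance of n are compared to the predicted value 8p.

```python
import random

def lamarckian_trial(p, rng):
    """One trial: mutations occur only in the final generation of 8
    offspring, each independently with probability p."""
    return sum(rng.random() < p for _ in range(8))

rng = random.Random(1)   # fixed seed for reproducibility
p = 0.01                 # assumed small mutation probability
trials = [lamarckian_trial(p, rng) for _ in range(200_000)]

mean = sum(trials) / len(trials)
var = sum((n - mean) ** 2 for n in trials) / len(trials)
print(f"mean     = {mean:.4f}  (theory: 8p = {8 * p})")
print(f"variance = {var:.4f}  (theory: about 8p)")
```

Both numbers come out near 0.08, as expected when the mean equals the variance. (Strictly, each trial is binomial with variance 8p(1 − p), which reduces to 8p when p is small.)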

 

Darwinian Evolution

Now consider the case when mutations occur randomly (Darwinian). You still can get all the results shown earlier in the Lamarckian case, but you get others too because mutations can happen all the time, not just when the selective pressure is operating. Suppose one of the parents (just above the dashed line) mutates. Their mutation gets passed to both offspring. The odds of mutating back (changing from red to blue) are very small (p << 1), so we assume both offspring of a mutant inherit the mutation, as shown below. 

A drawing showing a single organism splitting into two, four, and then eight offspring. One of the offspring is a mutant that passes the mutation to its offspring.

You could also have one of the two grandparents give rise to four mutant offspring below the dashed line, as shown below.

A drawing showing a single organism splitting into two, four, and then eight offspring. One of the offspring is a mutant that passes the mutation to all its offspring.

Let’s do our statistics again. As before, the vast majority of the cases have no mutations. There are now 14 individuals (two grandparents, four parents, and eight children), any one of which could have the mutation. All the cases are shown below.

All the possible results of a mutation in three generations of reproduction.

The probability of having no mutations ever (the bottom right case) is (1 – 14p). The probability of one of the offspring having a mutation is 8p (the eight cases in the top row). The probability of any one of the parents having a mutation is p and there are 4 parents, so the probability of a mutation among the parents is 4p, and each would give rise to two mutants below the dashed line (the four cases on the left in the bottom row). Finally, one of the two grandparents could mutate (the fifth and sixth cases in the bottom row), each with probability p. If a grandparent mutates it results in 4 mutants below the dashed line. So, the mean number of mutants in the final generation is

<n> = (1 – 14p) (0) + 8p (1) + 4p (2) + 2p (4) = 24p .

The mean number of mutants in the final generation is three times higher in the Darwinian case than in the Lamarckian case. What about the variance?

<n2>  = (1 – 14p) (0)2 + 8p (1)2 + 4p (2)2 + 2p (4)2 = 56p .

The variance is

σ2 = <n2> – <n>2 = 56p – (24p)2 ≈ 56p

(remember, terms in p2 are negligible). Now the variance (56p) is over twice the mean (24p). It is not a Poisson process. It’s something else. There is much more variation in the number of mutants because of mutations happening early in the family tree that pass the mutation to all of the subsequent offspring. 
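The Darwinian case can be checked numerically the same way. In this Python sketch (again my own illustration, with an assumed value of p), a mutation can occur at any of the 14 splittings, and a mutant passes the mutation to all its descendants; the sample statistics should approach <n> = 24p and σ2 = 56p, with a variance-to-mean ratio well above one.

```python
import random

def darwinian_trial(p, rng):
    """Simulate three generations of binary splitting from one
    non-mutant founder.  Each new individual mutates with probability
    p, and a mutant parent passes the mutation to both offspring."""
    generation = [False]            # the founder is not a mutant
    for _ in range(3):
        generation = [parent or (rng.random() < p)
                      for parent in generation
                      for _ in range(2)]
    return sum(generation)          # mutants among the final 8

rng = random.Random(1)   # fixed seed for reproducibility
p = 0.01                 # assumed small mutation probability
trials = [darwinian_trial(p, rng) for _ in range(200_000)]

mean = sum(trials) / len(trials)
var = sum((n - mean) ** 2 for n in trials) / len(trials)
print(f"mean          = {mean:.4f}  (theory: 24p = {24 * p})")
print(f"variance      = {var:.4f}  (theory: about 56p = {56 * p})")
print(f"variance/mean = {var / mean:.2f}  (leading-order theory: 56/24)")
```

The ratio lands a bit below 56/24 because terms of order p2, which the pencil-and-paper analysis dropped, are not exactly zero in the simulation.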

 

Conclusion

In an experiment, p may not be easy to determine. You need to know how many individuals you start with (in our example, one) and how many generations you examine (in our example, three), as well as how many mutants you end up with. But you can easily compare the variance to the mean; just take their ratio (variance/mean). If they are the same, you suspect a Lamarckian Poisson process. If the variance is significantly more than the mean, you suspect Darwinian selection. In our example, variance/mean = 56p/24p ≈ 2.3.

There are some limitations. The probability is not always very small, so you might need to extend this analysis to cases where you have more than one mutation occurring. Also, in many experiments you will want to let the number of generations be much larger than three. There is also the possibility of a mutant mutating back to its original state. Finally, during sexual reproduction you have the in-laws to worry about, and you could have more than two offspring. So, to be quantitative you have some more work to do. But even in the more general case, the qualitative conclusion remains the same: Darwinian evolution results in a larger variance in the number of mutants than does Lamarckian evolution.

I suspect you now are saying “this is an interesting result; has anyone done this experiment?” The answer is yes! Salvador Luria and Max Delbrück did the experiment using E. coli bacteria (so the asexual splitting of generations is appropriate). The selective pressure applied at the end was resistance to a bacteriophage (a virus that infects bacteria). Their result: there was a lot more variation than you would expect from a Poisson process. Evolution is Darwinian, not Lamarckian. Mutations happen all the time, regardless of whether there is some evolutionary pressure present.

 


The Luria-Delbrück experiment, described by Doug Koshland of UC Berkeley

https://www.youtube.com/watch?v=slfLeKqE3Bg

Friday, December 13, 2024

Electromagnetic Hypersensitivity

What is electromagnetic hypersensitivity? It’s an alleged condition in which a person is especially sensitive to weak radiofrequency electromagnetic fields, such as those emitted by a cell phone or other wireless technology. All sorts of symptoms are claimed to be associated with electromagnetic hypersensitivity, such as headaches, fatigue, anxiety, and sleep disturbances. An example of a person who says he has electromagnetic hypersensitivity is Arthur Firstenberg, author of The Invisible Rainbow, a book about his trials and tribulations. Many people purportedly suffering from electromagnetic hypersensitivity flock to Green Bank, West Virginia, because a radiotelescope there requires that the surrounding area be a “radio quiet zone.”

Is electromagnetic hypersensitivity real? Answering this question should be easy. Take people who claim such hypersensitivity, sit them down in a lab, turn a radiofrequency device on (or just pretend to), and ask them if they can sense it. Ask them about their symptoms. Of course, you must do this carefully, avoiding any subtle cues that might signal if the radiation is present. (For a cautionary tale about why such care is important, read this post.) You should do the study double blind (neither the patient nor the doctor who asks the questions should be told if the radiation is or is not on) and compare the patients to control subjects.

The first page of the article "The effects of radiofrequency electromagnetic fields exposure on human self-reported symptoms" superimposed on the cover of Intermediate Physics for Medicine and Biology.
The effects of
radiofrequency
electromagnetic fields
exposure on human
self-reported symptoms.
Many such experiments have been done, and recently a systematic review of the results was published.
Xavier Bosch-Caplanch, Ekpereonne Esu, Chioma Moses Oringanje, Stefan Dongus, Hamed Jalilian, John Eyers, Christian Auer, Martin Meremikwu, and Martin Röösli (2024) The effects of radiofrequency electromagnetic fields exposure on human self-reported symptoms: A systematic review of human experimental studies. Environment International, Volume 187, Article number 108612.
This review is part of an ongoing project by the World Health Organization to assess potential health effects from exposure to radiofrequency electromagnetic fields. The authors come from a variety of countries, but several work at the respected Swiss Tropical and Public Health Institute. I’m particularly familiar with the fine research of Martin Röösli, a renowned leader in this field.

The authors surveyed all publications on this topic and established stringent eligibility criteria so only the highest quality papers were included in their review. A total of 41 studies met the criteria. What did they find? Here’s the key conclusion from the authors’ abstract.
The available evidence suggested that study volunteers could not perceive the EMF [electromagnetic field] exposure status better than what is expected by chance and that IEI-EMF [Idiopathic environmental intolerance attributed to electromagnetic fields, their fancy name for electromagnetic hypersensitivity] individuals could not determine EMF conditions better than the general population.
The patients couldn’t determine if the fields were on or off better than chance. In other words, they were right about the field being on or off about as often as if they had decided the question by flipping a coin. The authors added
Available evidence suggests that [an] acute RF-EMF [radiofrequency electromagnetic field] below regulatory limits does not cause symptoms and corresponding claims in... everyday life are related to perceived and not to real EMF exposure status.

Let me repeat, the claims are related “to perceived and not to real EMF exposure.” This means that electromagnetic hypersensitivity is not caused by an electromagnetic field being present, but is caused by thinking that an electromagnetic field is present.

Yes, there are some limitations to this study, which are discussed and analyzed by the authors. The experimental conditions might differ from real-life exposures in the duration, frequency, and location of the field source. Most of the subjects were young, healthy volunteers, so the authors could not make conclusions about the elderly or chronically ill. The authors could not rule out the possibility that a few super-sensitive people are mixed in with a vast majority who can’t sense the fields (although they do offer some evidence suggesting that this is not the case).

Are Electromagnetic Fields Making Me Ill? superimposed on Intermediate Physics for Medicine and Biology.
Are Electromagnetic Fields
Making Me Ill?

Their results do not prove that a condition like electromagnetic hypersensitivity is impossible. Impossibility proofs are always difficult in science, and especially in medicine and biology. But the evidence suggests that the patients’ symptoms are related “to perceived and not to real EMF exposure.” While I don’t doubt that these patients are suffering, I’m skeptical that their distress is caused by electromagnetic fields. 

To learn more about potential health effects of electromagnetic fields, I refer you to Intermediate Physics for Medicine and Biology (especially Chapter 9) or Are Electromagnetic Fields Making Me Ill?

Martin Röösli - Electromagnetic Hypersensitivity and Vulnerable Populations

https://www.youtube.com/watch?v=UPXY0WQJ37Q


Is Electromagnetic Hypersensitivity Real?

https://www.youtube.com/watch?v=IrkL1Hm5myE

Friday, December 6, 2024

J. Patrick Reilly (1937–2024)

J. Patrick Reilly died on October 28 in Silver Spring, Maryland, at the age of 87. He was a leader in the field of bioelectricity, and especially the study of electrical stimulation.

Russ Hobbie and I didn’t mention Reilly in Intermediate Physics for Medicine and Biology, but I did in my review paper “The Development of Transcranial Magnetic Stimulation.”
J. Patrick Reilly of the Johns Hopkins Applied Physics Laboratory calculated electric fields in the body produced by a changing magnetic field, although primarily in the context of neural stimulation caused by magnetic resonance imaging (MRI) [54, 55].

[54] Reilly, J. P. (1989). Peripheral nerve stimulation by induced electric currents: Exposure to time-varying magnetic fields. Med. Biol. Eng. Comput., 27, 101–110.

[55] Reilly, J. P. (1991). Magnetic field excitation of peripheral nerves and the heart: A comparison of thresholds. Med. Biol. Eng. Comput., 29, 571–579.

The papers included this biography of the author. 

A brief biography of J. Patrick Reilly.
 

Applied Bioelectricity, by J. Patrick Reilly, superimposed on Intermediate Physics by Medicine and Biology.
Applied Bioelectricity,
by J. Patrick Reilly.
Reilly was also known for his 1998 book Applied Bioelectricity: From Electrical Stimulation to Electropathology, which covered many of the same topics as Chapters 6–8 in IPMB: the Hodgkin-Huxley model of a nerve action potential, the electrical properties of cardiac tissue, the strength-duration curve, the electrocardiogram, and magnetic stimulation. However, you can tell that Russ and I are physicists while Reilly was an engineer. Applied Bioelectricity focuses less on deriving equations from fundamental principles and providing insights using toy models, and more on predicting stimulus thresholds, analyzing stimulus wave forms, examining electrode types, and assessing electrical injuries. That’s probably why he included the word “Applied” in his title. Compared to IPMB, Applied Bioelectricity has no homework problems, fewer equations, a similar number of figures, more references, and way more tables.

Reilly’s preface begins

The use of electrical devices is pervasive in modern society. The same electrical forces that run our air conditioners, lighting, communications, computers, and myriad other devices are also capable of interacting with biological systems, including the human body. The biological effects of electrical forces can be beneficial, as with medical diagnostic devices or biomedical implants, or can be detrimental, as with chance exposures that we typically call electric shock. Whether our interest is in intended or accidental exposure, it is important to understand the range of potential biological reactions to electrical stimulation.
In 2018, Reilly was the winner of the d’Arsonval Award, presented by the Bioelectromagnetic Society for outstanding achievement in research in bioelectromagnetics. The award puts him in good company. Other d’Arsonval Award winners include Herman Schwan, Thomas Tenforde, Eleanor Adair, Shoogo Ueno, and Kenneth Foster.

I don’t recall meeting Reilly, which is a bit surprising given the overlap in our research areas. I certainly have been aware of his work for a long time. He was a skilled musician as well as an engineer. I would like to get a hold of his book Snake Music: A Detroit Memoir. It sounds like he had a difficult childhood, and there were many obstacles he had to overcome to make himself into a leading expert in bioelectricity. Thank goodness he persevered. J. Patrick Reilly, we’ll miss ya.

Friday, November 29, 2024

Willi Kalender (1949–2024)

Medical physicist Willi Kalender died on October 20 at the age of 75. Kalender was an inventor of spiral computed tomography. Russ Hobbie and I describe spiral CT in Chapter 16 of Intermediate Physics for Medicine and Biology.

Figure 16.25 shows the evolution of the detector and source configurations [of CT]. The third generation configuration is the most popular. All of the electrical connections are made through slip rings. This allows continuous rotation of the gantry and scanning in a spiral as the patient moves through the machine. Interpolation in the direction of the axis of rotation (the z axis) is used to perform the reconstruction for a particular value of z. This is called spiral CT or helical CT. Kalender (2011) discusses the physical performance of CT machines, particularly the various forms of spiral machines.


Computed Tomography,
by Willi Kalender.
The citation is to Kalender’s well-known textbook Computed Tomography: Fundamentals, System Technology, Image Quality and Applications. According to Google Scholar, it has been cited over 1800 times. Russ and I reference it often.

Kalender obtained his PhD in 1979 from the University of Wisconsin’s famous medical physics program. He then went to the University of Tübingen in Germany. There, according to Wikipedia, “he took and successfully completed all courses in the pre-clinical medicine curriculum.” This is interesting, because just a few years earlier Russ Hobbie did the same thing in Minnesota.

Between 1971 and 1973 I audited all the courses medical students take in their first 2 years at the University of Minnesota. I was amazed at the amount of physics I found in these courses and how little of it is discussed in the general physics course.
Kalender was much loved in the radiology community. The European Society of Radiology wrote
With deep sadness, the ESR announces the passing of Prof. Willi Kalender on October 20, 2024 at the age of 75. A pioneering figure in diagnostic imaging and medical physics, Prof. Kalender significantly influenced the field through his groundbreaking research and leadership.
You can find a memorial page with many more tributes to Kalender here: https://www.kudoboard.com/boards/xqZwpoWO

Prof. Willi Kalender — Dedicated Breast CT — Interview at RSNA 2013

https://www.youtube.com/watch?v=9Ay-Ry6a8C0 

Friday, November 22, 2024

From Brownian Motion to Virtual Biopsy: A Historical Perspective from 40 years of Diffusion MRI

From Brownian Motion to Virtual Biopsy: A Historical Perspective from 40 years of Diffusion MRI, by Denis Le Bihan, superimposed on the cover of Intermediate Physics for Medicine and BIology.
From Brownian Motion to Virtual Biopsy:
A Historical Perspective from 40 years
of Diffusion MRI, by Denis Le Bihan
Denis Le Bihan recently published an open access review article in the Japanese Journal of Radiology titled “From Brownian Motion to Virtual Biopsy: A Historical Perspective from 40 years of Diffusion MRI” (https://doi.org/10.1007/s11604-024-01642-z). The article explores in depth several of the concepts that Russ Hobbie and I describe in Section 18.13 (Diffusion and Diffusion Tensor MRI) of Intermediate Physics for Medicine and Biology. The introduction begins (references removed)
Diffusion MRI was born in the mid-1980s. Since then, it has enjoyed incredible success over the past 40 years, both for research and in the clinical field. Clinical applications began in the brain, notably in the management of acute stroke patients. Diffusion MRI then became the standard for the study of cerebral white-matter diseases, through the diffusion tensor imaging (DTI) framework, revealing abnormalities in the integrity of white-matter fibers in neurologic disorders and, more recently, mental disorders. Over time, clinical applications of diffusion MRI have been extended, notably in oncology, to diagnose and monitor cancerous lesions in almost all organs of the body. Diffusion MRI has become a reference-imaging modality for prostate and breast cancer. Diffusion MRI began in my hands in 1984 (I was then a radiology resident and a PhD student in nuclear and particle physics) with my intuition that measuring the molecular diffusion of water would perhaps allow to characterize solid tumors due to the restriction of molecular motion and vascular lesions where in circulating blood “diffusion” would be somewhat enhanced. This idea was to become the cornerstone of diffusion MRI. This article retraces the early days and milestones of diffusion MRI which spawned over 40 years.
I knew Le Bihan when I worked at the intramural program of the National Institutes of Health in the late 1980s and early 1990s. To me, he was mainly Peter Basser’s French friend. Peter was my colleague who worked in the same section as I did (his office was the second office down the hall from mine), and was my best friend at NIH. Le Bihan describes the start of his collaboration with Basser this way:
During the “NIH Research Festival” of October 1990 I met Peter Basser who had a poster on ionic fluxes in tissues while I had a talk on our recent diffusion MRI results. Peter appropriately commented that the correct way to deal with anisotropic diffusion was to estimate the full diffusion tensor, not just the ADC [apparent diffusion constant], as the approach of the time provided. Basically, ADCs are not sufficient in the presence of diffusion anisotropy, except in particular cases where the main diffusion directions coincide with those of the diffusion MRI measurements. To solve this issue Peter and I came with a new paradigm, the Diffusion Tensor Imaging (DTI) framework. By applying simultaneous diffusion-sensitizing gradient pulses along the X, Y and Z axes the diffusion MRI signal would become a linear combination of the diffusion tensor components. From the diffusion MRI signals acquired along a set of non-colinear directions, encoding multiple combinations of diffusion tensor components weighted by the corresponding b values, it would be possible to retrieve the individual diffusion tensor components at each location.

In Le Bihan’s Figure 3, he includes a photo of Basser, Jim Mattiello, and himself doing an early diffusion tensor imaging experiment. Le Bihan was the diffusion MRI expert and Mattiello (who worked in the same section as Basser and I did at NIH, and who I’ve written about before) was skilled at writing MRI pulse sequences. When they started collaborating, Basser knew little about magnetic resonance imaging, but he understood linear algebra and its relationship to anisotropy, and realized that by making the “b vector” a matrix he could obtain important information (such as its eigenvalues and eigenvectors) that would determine the fiber direction. 
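That linear-algebra insight can be sketched in a few lines of code. Here is a minimal, hypothetical example (using NumPy; the tensor values, b value, and gradient directions are made up for illustration, not taken from the article): simulate noise-free log-attenuations along six non-colinear directions, solve the linear system for the six unique tensor components, and recover the fiber direction from the eigenvector of the largest eigenvalue.

```python
import numpy as np

# A hypothetical anisotropic diffusion tensor (mm^2/s), with fast
# diffusion along x, as if water were moving along a fiber.
D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])

b = 1000.0  # s/mm^2, a typical diffusion weighting

# Six non-colinear unit gradient directions (the minimum needed to
# determine the six unique tensor components).
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
g /= np.linalg.norm(g, axis=1, keepdims=True)

# Noise-free log-attenuations: ln(S/S0) = -b g^T D g for each direction.
y = np.array([-b * gi @ D_true @ gi for gi in g])

# Design matrix for (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz); the off-diagonal
# components each appear twice in the quadratic form.
A = np.array([[gi[0]**2, gi[1]**2, gi[2]**2,
               2*gi[0]*gi[1], 2*gi[0]*gi[2], 2*gi[1]*gi[2]] for gi in g])
dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(-b * A, y, rcond=None)[0]
D_fit = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])

# The eigenvector of the largest eigenvalue gives the fiber direction.
evals, evecs = np.linalg.eigh(D_fit)
print("recovered eigenvalues (mm^2/s):", evals)
print("fiber direction:", evecs[:, np.argmax(evals)])
```

With noise-free data the fit recovers the tensor exactly; in a real experiment more than six directions are acquired and the same system is solved by least squares.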

A photo of Denis Le Bihan (left), Peter Basser (center) and Jim Mattiello (seated), circa 1991.
Denis Le Bihan (left), Peter Basser (center)
and Jim Mattiello (seated), circa 1991.

Diffusion MRI works because spins that are excited by a radiofrequency pulse will then diffuse away from the tissue voxel being imaged, degrading the signal. The degradation is exponential and given by e–bD, where D is the diffusion constant and b is the “b-factor” that depends on the magnetic field gradient used to extract the diffusion information and the timing of the gradient pulse. I had always thought that this notation went way back in the MRI literature, but according to Le Bihan’s article he named the “b-factor” after himself (“B”ihan)!
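As a quick numerical feel for the e–bD attenuation, here is a two-line Python calculation (the values of D and b are typical, illustrative numbers I chose, not figures from the article):

```python
import math

# Illustrative values: a free-water diffusion constant near body
# temperature and a range of common clinical b values.
D = 3.0e-3                      # mm^2/s
for b in (0.0, 500.0, 1000.0):  # s/mm^2
    frac = math.exp(-b * D)     # remaining signal fraction, e^(-bD)
    print(f"b = {b:6.0f} s/mm^2 -> signal fraction {frac:.3f}")
```

At b = 1000 s/mm2 only about 5% of the free-water signal survives, which is why regions of restricted diffusion (smaller D) stand out as bright on diffusion-weighted images.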

Le Bihan describes how the clinical importance of diffusion MRI was demonstrated in 1990 when it was found that stroke victims showed a big change in the diffusion signal while having little change in the traditional magnetic resonance image. In fact, Le Bihan claims that the other big advance in MRI of that era—the development of functional MRI based on the blood oxygenation level dependent (BOLD) imaging—has not yet led to any clinical applications, while diffusion imaging has several.

Le Bihan’s article concludes

Diffusion MRI, as its additions, DTI and IVIM [IntraVoxel Incoherent Motion] MRI, has become a pillar of modern medical imaging with broad applications in both clinical and research settings, providing insights into tissue integrity and structural abnormalities. It allows to detect early changes in tissues that may not be visible with other imaging modalities. Diffusion imaging first revolutionized the management of acute cerebral ischemia by allowing diagnosis at an acute stage when therapies can still work, saving the outcomes of many patients. Diffusion imaging is today extensively used not only in neurology but also in oncology throughout the body for detecting and classifying various kinds of cancers, as well as monitoring treatment response at an early stage. The second major impact of diffusion imaging concerns the wiring of the brain, allowing to obtain non-invasively images in 3 dimensions of the brain connections. DTI has opened up new avenues of clinical diagnosis and research to investigate brain diseases, revealing for the first time how defects in white-matter track integrity could be linked to mental illnesses.
If you want to learn more about diffusion MRI, I recommend Le Bihan’s article. It provides an excellent introduction to the subject, with a fascinating historical perspective.

Friday, November 15, 2024

Trusted Information on Public Health

Where can you find trusted information about public health? Ordinarily, I would say from the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), or the Food and Drug Administration (FDA). I hope these critical institutions remain reliable authorities, but with the recent election results I think it’s wise to seek other independent sources. Today I focus on two that I find useful.

A screenshot of the website yourlocalepidemiologist.substack.com
A screenshot of
yourlocalepidemiologist.substack.com
Dr. Katelyn Jetelina is the founder of “Your Local Epidemiologist,” a public health newsletter that reaches nearly 300,000 people in over 130 countries. Jetelina has a master’s in public health and a PhD in epidemiology and biostatistics. She says that her “main goal is to translate the ever-evolving public health science so that people will be well-equipped to make evidence-based decisions.” This year she was named one of TIME magazine’s most influential people in health (that’s how I found out about her). You can find her website at https://yourlocalepidemiologist.substack.com.

A screenshot of the website www.immunologic.org
A screenshot of
www.immunologic.org
Dr. Andrea Love is an immunologist and microbiologist with over a decade of experience in translational medicine and clinical research. She is “a subject-matter expert in infectious disease immunology, cancer immunology, and autoimmunity and is adept at translating complex scientific data and topics for the public and healthcare providers.” Love is the founder of Immunologic (https://www.immunologic.org), a “science and health education organization and newsletter geared toward addressing misinformation and misconceptions about scientific topics that are relevant to the general public.”

These two science communicators gain their credibility because they read, understand, and can explain the scientific literature. Their views usually reflect the scientific and medical consensus. Another way to learn about that consensus is from various scientific and medical professional organizations.

Jetelina and Love both try to reach readers and listeners who may have legitimate questions and concerns about public health controversies. I admire this, and since the election I keep telling myself to be like Katelyn and Andrea; don’t be consumed by frustration and fury, and don’t attack those who disagree with you. But then I compose something like this blog post and I find myself writing with anger and hate. I guess I need both their newsletters to keep me from boiling over, and to serve as examples of how to discuss complex topics rationally.

I follow both Jetelina and Love on Twitter (I refuse to call it “X”). But during the presidential campaign I found Twitter to be a cesspool. I’ve been staying off social media since election day (except, of course, to publish my weekly blog post on Facebook). I’m thinking about deleting my Twitter account, but I’ll probably return to Facebook eventually. I haven’t yet gathered the courage to watch the evening news. I just can’t stomach it. I’m self-medicating by reading P. G. Wodehouse stories, and I’m trying to address my anger management issues. It isn’t easy.

I worked at the National Institutes of Health for seven years. It’s a wonderful institution, which I have tremendous respect for. It pains me to even hint that they might not be the most trustworthy source of health information available. But as I look to the future, I just don’t know. Let’s hope for the best and prepare for the worst by subscribing to Jetelina’s and Love’s newsletters. And in these difficult times I can offer you one bit of good news: both newsletters are free!

For those who want a little more physics mixed in with your public health (and who doesn’t?), I recommend my blog (hobbieroth.blogspot.com) associated with my textbook Intermediate Physics for Medicine and Biology, and my book Are Electromagnetic Fields Making Me Ill? (the answer to the title question is no!). I will do my best to give you the truth, but with the storm clouds I see on the horizon I can’t promise I’ll always give it to you cheerfully. I do promise to delete the profanity before I publish any posts.

A conversation with Dr. Katelyn Jetelina about her journey in the field of epidemiology.

https://www.youtube.com/watch?v=N0UbomAFYTQ

Questioning the wellness industry with Dr. Andrea Love.

https://www.youtube.com/watch?v=zWIJiv71Azs

Friday, November 8, 2024

International Day of Medical Physics Poster

Yesterday was the International Day of Medical Physics. This event is organized by the International Organization for Medical Physics, and is held each year on November 7, the birthday of Marie Curie. This year’s theme is “Inspiring the Next Generation of Medical Physicists.”

The IOMP held a poster design contest to celebrate the event. The winning poster was created by Lavanya Murugan from Rajiv Gandhi Government General Hospital and Madras Medical College in Chennai, India. IDMP coordinator Ibrahim Duhaini (who works right here in Michigan at Wayne State University) wrote that “Her artwork beautifully captures the theme and spirit of this year’s IDMP and will continuously serve as an inspiration to others… Let us all commit to being beacons of inspiration for the next generation.” I couldn’t have said it better (but maybe Randy Travis could).

The award-winning poster, a masterpiece, is shown below. In case you can’t read it, the quote in the center is by Curie: “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” Never has this quote been more relevant than now, as we face the dire health threats generated by climate change. I can identify many of the famous physicists and medical physicists in the poster. Can you? By the way, that little sticky note on the upper left of the frame contains a conversion factor indicating that one roentgen deposits 0.877 rads in dry air.

The winning poster of the design contest associated with the International Day of Medical Physics 2024.

Lavanya sent me her thoughts about the design of the poster.

Inspiration: Once, I gave up my dream of becoming an artist to pursue a career in Medical Physics. This piece of art is a reflection of my study wall and myself, inspired by the world around me.
Technique: It’s a digital Art piece.
This artwork portrays a young girl immersed in her studies, surrounded by images of great scientists who have contributed to the field of radiation. The wall features news clips about Roentgen’s groundbreaking discovery and a picture of Marie Curie’s notebook, symbolizing power of radiating knowledge. Everyone experiences uncertainty about their knowledge, future and career at some point. Believing in ourselves is the first step to achieving our goals. The individuals whose photos adorn the wall were once in our shoes, grappling with doubts and questioning their abilities. Yet, they persevered, never giving up and ultimately inspiring us in the field of radiation. Today, we proudly serve healthcare and humanity as Medical Physicists, standing on their shoulders.
I have included one of my favourite quotes from Marie Curie, a female scientist who has been inspiring women in research: “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.”
Everyone fears radiation and its impact on mankind, but people like us choose to be radiation professionals regardless of the risks involved. This quote inspires us to understand the risks for the betterment of this field.
The message I wanted to convey through this art is to inspire the next generation of Medical Physicists to contribute their best to our field, following in the footsteps of the great minds of our past.

Lavanya is a medical physicist with over eight years of clinical experience in radiotherapy, nuclear medicine, and radiology. She excels in treatment planning, quality assurance, and treatment delivery. She’s also an artist, creating artwork under the pseudonym “Nivi.” You can find many of her pieces at her Instagram account. Below I show a few that are related to medical physics. 

Lavanya calls this a “boredom doodle.” You can see a tiny version of it to the right of the Curie quote in her award winning poster.

 
This radiotherapy picture features many of the topics discussed in Intermediate Physics for Medicine and Biology.

Lavanya at work as a medical physicist.






Randy Travis singing “Point of Light.”

https://www.youtube.com/watch?v=w3a8i2F1Mf0

 

 
International Day of Medical Physics 2024 message by Raymond Wu

https://www.youtube.com/watch?v=rKxZZEFv0Bo

 

Friday, November 1, 2024

Why Are Oxygen and Nitrogen Not Greenhouse Gases But Carbon Dioxide and Water Vapor Are?

In last week’s blog post about A Toy Model for Climate Change, I wrote
“The main constituents of the atmosphere—oxygen and nitrogen—are transparent to both visible and thermal radiation, so they don’t contribute to eA [the fraction of the earth’s infrared radiation that the atmosphere absorbs]. Thermal energy is primarily absorbed by greenhouse gases. Examples of such gases are water vapor, carbon dioxide, and methane.”

I never discussed why oxygen and nitrogen are not greenhouse gases, although water vapor and carbon dioxide are. Today, I’ll address this question.

Below is a list of gases in our atmosphere and their abundance.

    Nitrogen (N2): 78%
    Oxygen (O2): 21%
    Argon (Ar): 1%
    Carbon dioxide (CO2): 0.03%
    Water vapor (H2O): 0–4%
    Neon (Ne): 18 ppm
    Helium (He): 5 ppm
    Methane (CH4): 2 ppm
    Krypton (Kr): 1 ppm
    Sulfur dioxide (SO2): 1 ppm
    Hydrogen (H2): 0.5 ppm
    Nitrous oxide (N2O): 0.5 ppm

In order to absorb infrared radiation, a molecule must have a dipole moment that can oscillate with the same frequency as the infrared electromagnetic wave. Let’s look at these molecules case by case.

Nitrogen

Nitrogen (N2) is diatomic; it consists of two nitrogen atoms bound together. Because the two atoms are the same, they share the electron charge equally. If there is no charge separation, then there is no dipole moment to oscillate at the frequency of the infrared radiation. Therefore, diatomic nitrogen—by far the most abundant molecule in our atmosphere, with nearly four out of every five molecules being N2—does not absorb infrared radiation. It’s not a greenhouse gas.

Oxygen

About one out of every five molecules in the atmosphere is oxygen (O2), which is also diatomic with two identical atoms. Like nitrogen, oxygen can’t absorb infrared radiation. 

Argon

Almost one out of every hundred molecules in the atmosphere is argon (Ar). Argon is a nonreactive noble gas, so it consists of individual atoms. A single atom cannot have a dipole moment, so argon can’t absorb infrared radiation. Neither can the other noble gases: neon, helium, and krypton.

Carbon dioxide

The next most abundant gas is carbon dioxide (CO2), which makes up less than one tenth of one percent of the atmosphere. The above table lists the abundance of carbon dioxide as 0.03%, which corresponds to 300 parts per million (ppm). I must have gotten the 300 ppm value from an old source. Its concentration is now over 400 ppm and is increasing every year. The main cause of global warming is the rapidly increasing carbon dioxide concentration.

The carbon dioxide molecule has a linear structure: a central carbon atom with one oxygen atom on each side, so the molecule forms a straight line. Perhaps instead of writing it as CO2 we should write OCO. The electrons of this molecule are more attracted to the oxygen atoms than to the carbon atom, so the carbon carries a partial positive charge and each oxygen carries a partial negative charge. But because of its linear structure, at equilibrium there is no net dipole moment. You can think of the molecule as consisting of two dipoles of equal strength oriented in opposite directions, so they cancel out.

Carbon dioxide has three types of “vibrational modes” (see the video at the end of this post). One is a symmetric stretch, where the two oxygen atoms move together outward or inward from the central carbon atom. This makes the OCO molecule first get longer and then shorter, but it still consists of two equal but opposite dipoles that add to zero. Thus, this mode does not produce a dipole, so it cannot absorb infrared radiation. 

Carbon dioxide can also undergo an asymmetric vibration, in which one oxygen atom moves inward while the other moves outward. In this case the molecule maintains the same overall length, but the positions of the oxygen atoms oscillate back and forth, with first one and then the other being closer to the carbon atom. Now the two dipoles don’t cancel, so there’s a net dipole moment. (Think of the dipole moment as the charge times the distance; even if the partial charge on each atom does not change, the different distances of the two oxygen atoms from the central carbon atom alter the net dipole moment.) So, this mode of vibration will absorb infrared radiation. Carbon dioxide is a greenhouse gas.

Just for completeness, CO2 also has bending modes, in which the atoms move perpendicular to the molecular axis so the molecule flexes (see the video). Again, these modes produce an oscillating dipole that can vibrate in synchrony with infrared radiation, and they are therefore greenhouse active. Carbon dioxide is the primary contributor to climate change.

The earth is lucky that carbon dioxide has such a low concentration in its atmosphere. I wonder what would happen if most of our atmosphere consisted of CO2 instead of oxygen and nitrogen. Oh, wait… we don’t have to wonder. The atmosphere of Venus is 96% CO2, and Venus has an average surface temperature of 464°C (well above the boiling point of water). Wow! 

Water vapor

Water vapor (H2O) is a special case. Its abundance in the atmosphere is not constant. It can vary from nearly zero to about 4%, depending on the humidity. A molecule of water is also different from carbon dioxide because it is not a linear molecule. Figure 6.18 in Intermediate Physics for Medicine and Biology shows the structure of a water molecule, with its oxygen atom having a partial negative charge and its hydrogen atoms being partially positive. Even when at rest, a molecule of water has a dipole moment. The water molecule has several vibrational modes, all of which cause this dipole moment to change, and it’s therefore an absorber of infrared radiation.

Fig. 6.18 from Intermediate Physics for Medicine and Biology, showing the structure of a water molecule.

In the last post, I mentioned that feedback loops affect the climate. Water vapor provides an example. As the atmosphere heats up, it can hold more water vapor (see Homework Problems 65 and 66 in Chapter 3 of IPMB). More water vapor means more infrared absorption. More infrared absorption means more heating of the atmosphere, which means the atmosphere can hold more water vapor, which means more infrared absorption and heating, and so on. A positive feedback loop is sometimes called a vicious cycle.

Some of the water in the atmosphere is in the form of clouds. Clouds play a complex role in climate change. They can block the sunlight and therefore contribute to cooling. But it’s complicated.

Methane

Methane (CH4) is a very active infrared absorber. The methane molecule consists of a central carbon atom with a partial negative charge, surrounded by a tetrahedron of four hydrogen atoms, each with a partial positive charge. Like carbon dioxide, when in equilibrium methane has no net dipole moment. However, methane has many complicated rotational and vibrational modes, in part because it consists of so many atoms. Many of those modes result in a changing dipole moment, similar to what we saw for carbon dioxide. So, methane can absorb infrared radiation and is an important greenhouse gas. Molecule for molecule, methane is a much stronger greenhouse gas than carbon dioxide. The only reason it doesn’t contribute more to global warming is that its concentration is so low.

Sulfur dioxide

A molecule of sulfur dioxide (SO2) is a lot like a molecule of water, with a bent shape. In this case, the central sulfur atom carries a partial positive charge and the two oxygen atoms are partially negative. Water is a stable molecule but sulfur dioxide is chemically reactive. If it is present in a high concentration it’s hazardous to your health. In that case, its contribution as a greenhouse gas will be the least of your problems. It’s often emitted when burning fossil fuels (especially coal), and is considered an air pollutant. 

Sulfur dioxide can interact with water vapor to form tiny droplets called aerosols. These aerosols can remain in the air for years and reflect incoming sunlight (somewhat like clouds do). In this way, sulfur dioxide can have a cooling effect in addition to its greenhouse gas warming effect. On the whole, the aerosol cooling dominates, so sulfur dioxide cools the earth. It’s often released during volcanic eruptions, which can lead to cooler summers and colder winters for a few years.

Hydrogen

There is a tiny bit of hydrogen gas (H2) in the atmosphere, but like oxygen and nitrogen it’s diatomic so it doesn’t absorb infrared radiation. 

Nitrous oxide

Finally, nitrous oxide (laughing gas, N2O) is similar in structure to sulfur dioxide and water. Like sulfur dioxide, it’s a form of air pollution and can be a greenhouse gas too (although its concentration is so small that it doesn’t make much contribution to global warming). Our atmosphere consists mostly of nitrogen and oxygen. We are fortunate that the most common forms these elements take in the atmosphere are diatomic N2 and O2. Imagine what would happen if chemistry were slightly different, so that a large fraction of our atmosphere was N2O instead of N2 and O2. Yikes!

Gases in the earth's atmosphere.

https://www.youtube.com/watch?v=BPdfKxS3rUc

 


 Carbon dioxide vibration modes.

https://www.youtube.com/watch?v=AauIOanNaWk

 

The normal modes of methane.

https://www.youtube.com/watch?v=v3QPe6-37bk

 

Friday, October 25, 2024

A Toy Model of Climate Change

Introduction

A screenshot of the online book
Math for the People.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I make use of toy models. Such mathematical descriptions are not intended to be accurate or realistic. Rather, they’re simple models that capture the main idea without getting bogged down in the details. Today, I present an example of a toy model. It’s not related to medicine or biology, but instead describes climate change. I didn’t originally derive this model. Much of the analysis below comes from other sources, such as the online book Math for the People by Mark Branson and Whitney George.

Earth Without an Atmosphere

First, consider the earth with no atmosphere. We will balance the energy coming into the earth from the sun with the energy from the earth that is radiated out into space. Our goal will be to calculate the earth’s temperature, T.

The power density (energy per unit time per unit area, in watts per square meter) emitted by the sun is called the solar constant, S. It depends on how far you are from the sun, but at the earth’s orbit S = 1360 W/m². To get the total power impinging on our planet, we must multiply S by the area subtended by the earth, which is πR², where R is the earth’s radius (R = 6.4 × 10⁶ meters). This gives SπR² = 1.8 × 10¹⁷ W, or nearly 200,000 TW (T, or tera-, means one trillion). That’s a lot of power. The total average power consumption by humanity is only about 20 TW, so there’s plenty of energy from the sun.

We often prefer to talk about the energy loss or gain per unit area of the earth’s surface. The surface area of the earth is 4πR² (the factor of four comes from the total surface area of the spherical earth, in contrast to the area subtended by the earth when viewed from the sun). The power per unit area of the earth’s surface is therefore SπR²/4πR², or S/4.
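As a quick numerical check (a sketch, not part of the original post), the two numbers in this paragraph and the last can be computed directly from the values quoted in the text:

```python
import math

# Total solar power intercepted by the earth (the disk it presents to
# the sun), and that same power averaged over the earth's full surface
# area. S and R are the values quoted in the text.
S = 1360.0   # solar constant at the earth's orbit, W/m^2
R = 6.4e6    # earth's radius, m

P_total = S * math.pi * R ** 2               # power on the intercepted disk
per_area = P_total / (4 * math.pi * R ** 2)  # = S/4, per unit surface area

print(f"total power: {P_total:.2e} W")         # about 1.8e17 W
print(f"per unit area: {per_area:.0f} W/m^2")  # S/4 = 340 W/m^2
```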

Not all of this energy is absorbed by the earth; some is reflected back into space. The albedo, a, is a dimensionless number that indicates the fraction of the sun’s energy that is reflected. The power absorbed per unit area is then (1 – a)S/4. About 30% of the sun’s energy is reflected (a = 0.3), so the power of sunlight absorbed by the earth per unit of surface area is 238 W/m².

What happens to that energy? The sun heats the earth to a temperature T. Any hot object radiates energy. Such thermal radiation is analyzed in Section 14.8 of Intermediate Physics for Medicine and Biology. The radiated power per unit area is equal to eσT⁴. The symbol σ is the Stefan-Boltzmann constant, σ = 5.7 × 10⁻⁸ W/(m² K⁴). As stated earlier, T is the earth’s temperature. When raising the temperature to the fourth power, T must be expressed as the absolute temperature measured in kelvin (K). Sometimes it’s convenient at the end of a calculation to convert kelvin to the more familiar degrees Celsius (°C), where 0°C = 273 K. But remember, all calculations of T⁴ must use kelvin. Finally, e is the emissivity of the earth, which is a measure of how well the earth absorbs and emits radiation. The emissivity is another dimensionless number ranging between zero and one. The earth is an excellent emitter and absorber, so e = 1. From now on, I’ll not even bother including e in our equations, in which case the power density emitted is just σT⁴.

Let’s assume the earth is in steady state, meaning the temperature is not increasing or decreasing. Then the power in must equal the power out, so 

(1 – a)S/4 = σT⁴

Solving for the temperature gives

T = ∜[(1 – a)S/(4σ)] .

Because we know a, S, and σ, we can calculate the temperature. It is T = 254 K = –19°C. That’s really cold (remember, in the Celsius scale water freezes at 0°C). Without an atmosphere, the earth would be a frozen wasteland.
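This calculation is easy to verify numerically. Here is a minimal sketch in Python, using the constants from the text:

```python
# Temperature of an airless earth: balance absorbed sunlight,
# (1 - a)S/4, against emitted thermal radiation, sigma * T^4.
S = 1360.0      # solar constant, W/m^2
a = 0.3         # albedo
sigma = 5.7e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

absorbed = (1 - a) * S / 4       # 238 W/m^2
T = (absorbed / sigma) ** 0.25   # fourth root, temperature in kelvin

print(f"T = {T:.0f} K = {T - 273:.0f} C")  # T = 254 K = -19 C
```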

Earth With an Atmosphere

Often we can learn much from a toy model by adding in complications, one by one. Now, we’ll include an atmosphere around earth. We must keep track of the power into and out of both the earth and the atmosphere. The earth has temperature TE and the atmosphere has temperature TA.

First, let’s analyze the atmosphere. Sunlight passes right through the air without being absorbed because it’s mainly visible light and our atmosphere is transparent in the visible part of the spectrum. The main source of thermal (or infrared) radiation (for which the atmosphere is NOT transparent) is from the earth. We already know how much that is, σTE⁴. The atmosphere only absorbs a fraction of the earth’s radiation, eA, so the power per unit area absorbed by the atmosphere is eAσTE⁴.

Just like the earth, the atmosphere will heat up to a temperature TA and emit its own thermal radiation. The emitted power per unit area is eAσTA⁴. However, the atmosphere has upper and lower surfaces, and we’ll assume they both emit equally well. So the total power emitted by the atmosphere per unit area is 2eAσTA⁴.

If we balance the power in and out of the atmosphere, we get 

eAσTE⁴ = 2eAσTA⁴

Interestingly, the fraction of radiation absorbed by the atmosphere, eA, cancels out of our equation (a good emitter is also a good absorber). The Stefan-Boltzmann constant σ also cancels, and we just get TE⁴ = 2TA⁴. If we take the fourth root of each side of the equation, we find that TA = 0.84 TE. The atmosphere is somewhat cooler than the earth.

Next, let’s reanalyze the power into and out of the earth when surrounded by an atmosphere. The sunlight power per unit area impinging on earth is still (1 – a)S/4. The radiation emitted by the earth is still σTE⁴. However, the thermal radiation produced by the atmosphere that is aimed inward toward the earth is all absorbed by the earth (since the emissivity of the earth is one, eE = 1), so this provides another contribution of eAσTA⁴. Balancing power in and out gives

(1 – a)S/4 + eAσTA⁴ = σTE⁴ .

Notice that if eA were zero, this would be the same relationship as we found when there was no atmosphere: (1 – a)S/4 = σTE⁴. The atmosphere provides additional heating, warming the earth.

We found earlier that TE⁴ = 2TA⁴. If we rewrite this as TA⁴ = TE⁴/2 and plug that into our energy balance equation, we get

(1 – a)S/4 + eAσTE⁴/2 = σTE⁴ .

With a bit of algebra, we find

(1 – a)S/4 = σTE⁴ (1 – eA/2) .

Solving for the earth’s temperature gives

TE = ∜[(1 – a)S/(4σ)] ∜[1/(1 – eA/2)] .

If eA were zero, this would be exactly the relationship we had for no atmosphere. The fraction of energy absorbed by the atmosphere is not zero, however, but is approximately eA = 0.8. The atmosphere provides a dimensionless correction factor of ∜[1/(1 – eA/2)]. The temperature we found previously, 254 K, is corrected by this factor, 1.136. We get TE = 288.5 K = 15.5 °C. This is approximately the average temperature of the earth. Our atmosphere raised the earth’s temperature from –19°C to +15.5°C, a change of 34.5°C.
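The two-layer result can be checked with a short script. This is a sketch using the constants from the text; carried at full precision, the correction factor gives a temperature a few tenths of a kelvin above the rounded 288.5 K quoted above.

```python
# Two-layer toy model: earth plus a partially absorbing atmosphere.
# The atmosphere balance gives TA = TE / 2^(1/4); the earth balance
# then gives TE = (bare-earth temperature) * (1/(1 - eA/2))^(1/4).
S = 1360.0
a = 0.3
sigma = 5.7e-8
eA = 0.8  # fraction of earth's infrared radiation absorbed by the atmosphere

T_bare = ((1 - a) * S / (4 * sigma)) ** 0.25   # about 254 K
correction = (1 / (1 - eA / 2)) ** 0.25        # about 1.136
T_E = T_bare * correction                      # about 289 K
T_A = T_E / 2 ** 0.25                          # about 243 K

print(f"correction = {correction:.3f}, T_E = {T_E:.1f} K, T_A = {T_A:.1f} K")
```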

Climate Change

To understand climate change, we need to look more deeply into the meaning of the factor eA, the fraction of energy absorbed by the atmosphere. The main constituents of the atmosphere—oxygen and nitrogen—are transparent to both visible and thermal radiation, so they don’t contribute to eA. Thermal energy is primarily absorbed by greenhouse gases. Examples of such gases are water vapor, carbon dioxide, and methane. Methane is an excellent absorber of thermal radiation, but its concentration in the atmosphere is low. Water vapor is a good absorber, but water vapor is in equilibrium with liquid water, so it isn’t changing much. Carbon dioxide is a good absorber, has a relatively high concentration, and is being produced by burning fossil fuels, so a lot of our discussion about climate change focuses on carbon dioxide.

The key to understanding climate change is that greenhouse gases like carbon dioxide affect the fraction of energy absorbed, eA. Suppose an increase in the carbon dioxide concentration in the atmosphere increased eA slightly, from 0.80 to 0.81. The correction factor ∜[1/(1 – eA/2)] would increase from 1.136 to 1.139, changing the temperature from 288.5 K to 289.3 K, implying an increase in temperature of 0.8 K. Because changes in temperature are the same whether expressed in kelvin or Celsius, this is a 0.8°C rise. A small change in eA causes a significant change in the earth’s temperature. The more carbon dioxide in the atmosphere, the greater the temperature rise: global warming.
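Here is a sketch that computes this sensitivity without rounding the correction factors to three decimals (that rounding is why the figure above comes out near 0.8 K; computed directly the rise is closer to 0.6 K, but the conclusion, that a tiny change in eA produces a significant warming, is unchanged):

```python
# Sensitivity of the earth's temperature to the absorbed fraction eA
# in the two-layer toy model.
S, a, sigma = 1360.0, 0.3, 5.7e-8

def earth_temperature(eA):
    """Earth's temperature (K) for a given atmospheric absorbed fraction."""
    bare = ((1 - a) * S / (4 * sigma)) ** 0.25
    return bare * (1 / (1 - eA / 2)) ** 0.25

for eA in (0.78, 0.80, 0.82):
    print(f"eA = {eA:.2f}: T = {earth_temperature(eA):.1f} K")

dT = earth_temperature(0.81) - earth_temperature(0.80)
print(f"warming for eA = 0.80 -> 0.81: {dT:.2f} K")
```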

Feedback

We have assumed the earth’s albedo, a, is a constant, but that is not strictly true. The albedo depends on how much snow and ice cover the earth. More snow and ice means more reflection, a larger albedo, a smaller amount of sunlight absorbed by the earth, and a lower temperature. But a lower temperature means more snow and ice. We have a vicious cycle: more snow and ice leads to a lower temperature, which leads to more snow and ice, which leads to an even lower temperature, and so on. Intermediate Physics for Medicine and Biology dedicates an entire chapter to feedback, but it focuses mainly on negative feedback that tends to maintain a system in equilibrium. A vicious cycle is an example of positive feedback, which can lead to explosive change. An example from biology is the upstroke of a nerve action potential: an increase in the electrical voltage inside a nerve cell leads to an opening of sodium channels in the cell membrane, which lets positively charged sodium ions enter the cell, which causes the voltage inside the cell to increase even more. The earth’s climate has many such feedback loops. They are one of the reasons why climate modeling is so complicated.
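The ice-albedo feedback can be illustrated with a toy iteration. The linear albedo(T) relation below is entirely hypothetical (it is not from this post); it simply encodes “colder earth, more ice, higher albedo.” With a steep enough slope the feedback runs away until the albedo saturates, so a cold start and a warm start settle into very different climates:

```python
# Toy illustration of the ice-albedo positive feedback, built on the
# no-atmosphere energy balance. The albedo(T) relation is hypothetical.
S, sigma = 1360.0, 5.7e-8

def albedo(T):
    # Illustrative assumption: albedo 0.3 at 254 K, rising 0.02 per
    # kelvin of cooling, clamped between ice-free (0.1) and snowball (0.7).
    return min(0.7, max(0.1, 0.3 + 0.02 * (254.0 - T)))

def settle(T):
    """Repeatedly update T from the energy balance until it stops changing."""
    for _ in range(200):
        T = ((1 - albedo(T)) * S / (4 * sigma)) ** 0.25
    return T

cold = settle(250.0)   # runs away to a high-albedo "snowball" state
warm = settle(258.0)   # runs away to a low-albedo, ice-free state
print(f"cold start -> {cold:.0f} K, warm start -> {warm:.0f} K")
```

The two steady states differ by tens of kelvin even though the starting temperatures differ by only 8 K; that amplification is the signature of positive feedback.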

Conclusion

Today I presented a simple description of the earth’s temperature and the impact of climate change. Many things were left out of this toy model. I ignored differences in temperature over the earth’s surface and within the atmosphere. I neglected ocean currents and the jet stream that move heat around the globe. I did not account for seasonal variations, for other greenhouse gases such as methane and water vapor, for how the amount of water vapor changes with temperature, for how clouds affect the albedo, or for a myriad of other factors. Climate modeling is a complex subject. But toy models like the one I presented today provide insight into the underlying physical mechanisms. For that reason, they are crucial for understanding complex phenomena such as climate change.