Homework Problem 7 in Chapter 11 of Intermediate Physics for Medicine and Biology examines fitting data to a straight line. In that problem, the four data points to be fit are (100, 4004), (101, 4017), (102, 4039), and (103, 4063). The goal is to fit the data to the line y = ax + b. For these data, you must calculate a and b to high precision or else you get large errors. The solution manual (available to instructors upon request) says that
This problem illustrates how students can run into numerical problems if they are not
careful. With modern calculators that carry many significant figures, this may seem like a moot
point. But the idea is still important and can creep subtly into computer computations and cause
unexpected, difficult-to-debug errors.
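The manual's warning is easy to demonstrate. Here is a short Python sketch (my own, not from the solution manual) that fits the four points with the textbook normal equations while rounding every intermediate result to six significant figures (a crude model of a low-precision machine), then compares the result with full precision and with the standard fix of subtracting the means first:

```python
from math import floor, log10

def sig(x, n=6):
    """Round x to n significant figures (crude model of a low-precision machine)."""
    return 0.0 if x == 0 else round(x, -int(floor(log10(abs(x)))) + (n - 1))

xs = [100.0, 101.0, 102.0, 103.0]
ys = [4004.0, 4017.0, 4039.0, 4063.0]
n = len(xs)

# Textbook normal-equation slope, with every intermediate rounded to 6 figures
Sx, Sy = sig(sum(xs)), sig(sum(ys))
Sxx = sig(sum(x * x for x in xs))
Sxy = sig(sum(x * y for x, y in zip(xs, ys)))
a_rounded = sig(sig(n * Sxy) - sig(Sx * Sy)) / sig(sig(n * Sxx) - sig(Sx * Sx))

# The same formula in full double precision
a_exact = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) \
          / (n * sum(x * x for x in xs) - sum(xs) ** 2)

# The standard fix: subtract the means first, so the formula never
# subtracts large, nearly equal numbers
xbar, ybar = sum(xs) / n, sum(ys) / n
a_centered = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
             / sum((x - xbar) ** 2 for x in xs)

print(a_exact, a_centered, a_rounded)
```

With only six significant figures the slope comes out as 19.0 instead of the correct 19.9, because the numerator and denominator each subtract large, nearly equal sums; centering the data removes the cancellation entirely.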
Numerical Recipes
Are there other examples of numerical errors creeping into calculations? Yes. You can find one discussed in Numerical Recipes that involves the quadratic formula.
We all know the quadratic formula from high school. If you have a quadratic equation of the form

ax² + bx + c = 0,

the solution is

x = (−b ± √(b² − 4ac))/(2a).
For example,

x² − 3x + 2 = 0

has two solutions

x = (3 ± √(9 − 8))/2 = (3 ± 1)/2,

so x = 1 or 2.
Now, suppose the coefficient b is larger in magnitude,

x² − 300x + 2 = 0.

The solution is

x = (300 ± √(90,000 − 8))/2 = (300 ± 299.98667)/2,

so x = 300 or 0.00667.
This calculation is susceptible to numerical error. For instance, suppose all numerical calculations are performed to only four significant figures. Then when you reach the step

x = (300 ± √(90,000 − 8))/2,
you must subtract 8 from 90,000. You get 89992, which to four significant figures becomes 89990, which has a square root of (again to four significant figures) 300.0. The solutions are therefore x = 300 or 0. The large solution (300) is correct, but the small solution (0 instead of 0.00667) is completely wrong. The main reason is that when using the minus sign for ± you must subtract two numbers that are almost the same (in this case, 300 – 299.98667) to get a much smaller number.
You might say “so what! Who uses only four significant figures in their calculations?” Okay, try solving

x² − 3000x + 2 = 0,
where I increased b from 300 to 3000. You’ll find that using even six significant figures gives one nonsense solution (try it). As you make b larger and larger, the calculation becomes more and more difficult. The situation can cause unexpected, difficult-to-debug errors.
What’s the moral of this story? Is it simply that you must use high precision when doing calculations? No. We can do better. Notice that the solution is fine when using the plus sign in the quadratic formula. We need make no changes. It’s the minus sign that gives the problem,

x = (−b − √(b² − 4ac))/(2a).

Let’s try a trick; multiply the expression by a very special form of one:

x = [(−b − √(b² − 4ac))/(2a)] × [(−b + √(b² − 4ac))/(−b + √(b² − 4ac))].
Simplifying, we get

x = 2c/(−b + √(b² − 4ac)).
Voilà! The denominator has the plus sign in front of the square root, so it is not susceptible to numerical error. The numerator is simplicity itself. Try solving x² − 300x + 2 = 0 using math to four significant figures,

x = 2(2)/(300 + 300.0) = 4/600.0 = 0.006667.
No error, even with just four sig figs. The problem is fixed!
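As a sketch of the same trick in code (my own Python, not from Numerical Recipes), double precision with a very large b plays the role of the four-significant-figure calculator. The stable version uses the post's conjugate formula for negative b and the analogous form for positive b:

```python
import math

def quadratic_naive(a, b, c):
    """Textbook quadratic formula, both signs of the square root."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def quadratic_stable(a, b, c):
    """Big root from the safe sign of the formula; small root from the
    conjugate trick, x = 2c / (-b + sqrt(b^2 - 4ac)) when b < 0."""
    d = math.sqrt(b * b - 4 * a * c)
    if b < 0:
        return (-b + d) / (2 * a), (2 * c) / (-b + d)
    return (-b - d) / (2 * a), (2 * c) / (-b - d)
```

For a = 1, b = −10⁸, c = 1, the naive small root is off by roughly 25%, while the conjugate form is accurate to machine precision.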
I should note that the problem is fixed only for negative values of b. If b is positive, you can use an analogous approach to get a slightly different form of the solution (I’ll leave that as an exercise for the reader).
So, the moral of the story is: if you find that your numerical calculation is susceptible to numerical error, fix it! Look for a trick that eliminates the problem. Often you can find one.
Most of my blog posts are about physics applied to medicine and biology, but today I want to talk about music. This topic may not seem relevant to Intermediate Physics for Medicine and Biology, but I would argue that it is. Music is, after all, as much about the human perception of sound as about sound itself. So, let’s talk about how we sort the different frequencies into notes.
Consider a piano keyboard. Each key corresponds to a different pitch, or note. I want to discuss the relationships between the different notes. We have to start somewhere, so let’s take the lowest note on the keyboard and call it C. It will have some frequency, which we’ll call our base frequency. On the piano, this frequency is about 33 Hz, but for our purposes that doesn’t matter. We will consider all frequencies as being multiples of this base frequency, and take our C as having a frequency of 1.
When you double the frequency, our ear perceives that change as going up an octave. So, one octave above the first C is another C, with frequency 2.
Of course, that means there’s another C with frequency 4, and another with frequency 8, and so on. We get a whole series of C’s.
Now, if you pluck a string held down at both ends, it can produce many frequencies. In general, it produces frequencies that are multiples of a fundamental frequency f, so you get frequency f plus “overtone” frequencies 2f, 3f, 4f, 5f, etc. As we noted earlier, we don’t care about the frequency itself but only how different frequencies are related. If the fundamental frequency is a C with frequency 1, the first overtone is one octave up (with a frequency 2), another C. The second overtone has a frequency 3. That corresponds to a different note on our keyboard, which we’ll call G.
You could raise or lower G by octaves and still have the same note (like we did with C), so you have a whole series of G’s, including 3/2 which is between C’s corresponding to frequencies 1 and 2. When two notes have frequencies such that the upper frequency is 3/2 times the lower frequency (a 3:2 ratio), musicians call that a “fifth,” so G is a fifth above C.
Let’s keep going. The next overtone is 4, which is two octaves above the fundamental, so it’s one of the C’s. But the following overtone, 5, gives us a new note, E.
As always, you can go up or down by octaves, so we get a whole series of E’s.
C and E are related by a ratio of 5:4 (that is, E has a frequency 5/4 times the C below it), which musicians call a “third.” The notes C, E, and G make up the “C major chord.”
The next overtone would be 6, but we already know 6 is a G. The overtone 7 doesn’t work. Apparently a frequency ratio of 7 is not one that we find pleasant (at least, not to those of us who have been trained on western music), so we’ll skip it. Overtone 8 is another C, but we get a new note with overtone 9 (and all its octaves up and down, which I’ll stop repeating again and again). We’ll call this note D, because it seems to fit nicely between C and E.
The D right next to our base note C has a frequency of 9/8.
Next is overtone 10 (an E), then 11 (like 7, it doesn’t work), 12 (a G), 13 (nope), 14 (no because it’s an octave up from 7), and finally 15, a new note we’ll call B.
We could go on, but we don’t perceive many of the higher overtones as harmonious, so let’s change tack. There’s nothing special about our base note, the C on the far left of our keyboard. Suppose we wanted to use a different base note. What note would we use if we wanted it to be a fifth below C? If we started with a frequency of 2/3, then a fifth above that frequency would be 2/3 times 3/2 or 1, giving us C. We’ll call that new base frequency F. It’s off our keyboard to the left, but its octaves appear, including 4/3, 8/3, etc.
What if we want to build a major chord based on F? We already have C as a fifth above F. What note is a third above F? In other words, start at 2/3 and multiply by 5/4 (a third) to get 10/12, which simplifies to 5/6. That’s off the keyboard too, but its octaves 5/3, 10/3, 20/3, etc. appear. Let’s call it A. So a major chord in the key of F is F, A, and C.
Does this work for other base frequencies? Try G (3/2). Go up a fifth from G and you get 9/4, which is a D. Go up a third from G and you get 15/8, which is a B. So G, B, and D make up a major chord in the key of G. It works again!
So now it looks like we’re done. We’ve given names and frequencies to all the notes: C (1), D (9/8), E (5/4), F (4/3), G (3/2), A (5/3), and B (15/8). This collection of frequencies is called “just intonation,” with “just” used in the sense of fair and honest. If you play a song in the key of C, you use only those notes and frequencies and it sounds just right.
What about those strange black notes between some, but not all, of the white notes? How do we determine their frequencies? For example, start at D (9/8) for your base note and build a major chord. First you go up a third and get 9/8 times 5/4, or 45/32. That note, corresponding to the black key just to the right of F, is F-sharp (or F♯). To express the frequency as a decimal, 45/32 = 1.406, which is midway between F (4/3 = 1.333) and G (3/2 = 1.500). We could continue working out all the frequencies for the various sharps and flats, but we won’t. It gets tedious, and there is an even more interesting and surprising feature to study.
To complete our D major chord, we need to determine what note is a fifth above D. You get D (9/8) times a fifth (3/2), or 27/16 = 1.688. That is almost the same as A (5/3 = 1.667), but not quite. It’s too close to A to correspond to A-sharp. It’s simply an out-of-tune A. In other words, using the frequencies we have worked out above, if you start with C as your base (that is, you play in the key of C) your A corresponds to a frequency ratio of 5/3 = 1.667. If you play, however, using D as your base (you play in the key of D), your A (what should be a fifth above D) has a frequency ratio of 27/16 = 1.688. Different keys have different intonations. Yikes! This is not a problem with only the key of D. It happens again and again for other keys. The intonation is all messed up. You either play in the key of C, or you play out of tune.
To avoid this problem, nowadays instruments are tuned so that there are 12 steps between octaves (the steps include both the white and black keys), where each step corresponds to a frequency ratio of 2^(1/12) = 1.0595. A fifth (seven steps) is then 2^(7/12) = 1.498, which is not exactly 3/2 = 1.500 but is pretty close and—importantly—is the same for all keys. A third is 2^(4/12) = 1.260, which is not 5/4 = 1.250 but is not too bad. A keyboard with frequencies adjusted in this way is called “well-tempered.” It means that all the keys sound the same, although each is slightly out of tune compared to just intonation. You don’t have to have your piano tuned every time you change keys.
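You can check the comparison numerically. This little Python table (mine, not from the post) sets the just ratios worked out above against their equal-tempered counterparts:

```python
# Just-intonation ratios derived above, versus equal temperament, where
# each of the 12 half steps multiplies the frequency by 2**(1/12)
just = {"C": 1.0, "D": 9/8, "E": 5/4, "F": 4/3, "G": 3/2, "A": 5/3, "B": 15/8}
half_steps = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

for note, ratio in just.items():
    tempered = 2 ** (half_steps[note] / 12)
    print(f"{note}: just {ratio:.4f}, tempered {tempered:.4f}, "
          f"off by {100 * (tempered / ratio - 1):+.2f}%")

# The out-of-tune fifth: a true fifth above D versus the keyboard's just A
fifth_above_D = (9/8) * (3/2)   # 27/16 = 1.688
just_A = 5/3                    # 1.667
```

Every tempered note lands within about one percent of its just ratio, which is why the compromise is tolerable in every key at once.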
Johann Sebastian Bach wrote a lovely set of piano pieces called The Well-Tempered Clavichord that showed off the power of well-tempered tuning. My copy of the most famous of these pieces is shown in the photo at the top of this post. Listen to it and other glorious music by Bach below.
One of the key ideas in my book is the clinical trial. Critical thinking lies at the heart of such trials. In the chapter about the health effects of magnets, I discuss the importance of clinical trials being double blind, randomized, and placebo controlled. Why are these features crucial? They keep you from fooling yourself. In particular, a study being double blind (meaning that “not only the patient, but also the physician, does not know who is in the placebo or treatment group”) is vital to prevent a doctor from inadvertently signalling to the patient which group they are in. One of Trecek-King’s favorite sayings is the quote by Richard Feynman that “you must not fool yourself—and you are the easiest person to fool.” That sums up why double blinding is so important.
Placebos are discussed several times in my book. My favorite example of a placebo comes from a clinical trial to evaluate a new drug. “If a medication is being tested, the placebo is a sugar pill with the same size, shape, color, and taste as that of the drug.” One reason I dwell on placebos is that sometimes they are difficult to design. When testing if permanent magnets can reduce pain, “this means that some patients received treatment with real magnets, and others were treated with objects that resembled magnets but produced a much weaker magnetic field or no magnetic field at all.” It is hard to make a “fake magnet” or a “mock transcranial direct current stimulator.” Yet, designing the placebo is exactly a situation where critical thinking skills are essential.
Critical thinking overlaps with the scientific method, with its emphasis on examining the evidence. In Are Electromagnetic Fields Making Me Ill?, my goal was to present the evidence and then let the reader decide what to believe. But that’s hard. For instance, the experimental laboratory studies about the biological effects of cell phone radiation are a mixed bag. Some studies see effects, and some don’t. You can argue either way depending on what studies you emphasize. I tried to rely on critical reviews to sort all this out (after all, where better to find critical thinking than in a critical review). But even the critical reviews are not unanimous. I probably should’ve examined each article individually and weighed its pros and cons, but that would have taken years (the literature on this topic is vast).
Trecek-King often discusses the importance of finding reliable sources of information. I agree, but this too is not always easy. For instance, what could be more authoritative than a report produced by the National Academy of Sciences? In Are Electromagnetic Fields Making Me Ill? I laud the Stevens report published in the 1990s about the health hazards (or should I say lack of hazards) from powerline magnetic fields. Yet, I’m skeptical about the National Academies report published in 2020 regarding microwave weapons being responsible for the Havana Syndrome. What do I conclude? Sometimes deferring to authority is useful, but not always. You can’t delegate critical thinking.
I have found that one useful tool for teaching and illustrating critical thinking is the Point/Counterpoint articles published in the journal Medical Physics. In Are Electromagnetic Fields Making Me Ill? I cite three such articles, on magnets reducing pain, on cell phone radiation causing cancer, and on the safety of airport backscatter radiation scanners. Each of these articles is in the form of a debate, and any lack of critical thinking will be exposed and debunked in the rebuttals. I wrote
When I taught medical physics to college students, we spent 20 minutes each Friday afternoon discussing a point/counterpoint article. One feature of these articles that makes them such an outstanding teaching tool is that there exists no right answer, only weak or strong arguments. Science does not proceed by proclaiming universal truths, but by accumulating evidence that allows us to be more or less confident in our hypotheses. Conclusions beginning with “the evidence suggests…” are the best science has to offer.
One skill I emphasized in my teaching using IPMB, but which I don’t see mentioned by Trecek-King, is estimation. For instance, when discussing the potential health benefits or hazards of static magnetic fields, I calculated the energy of an electron in a magnetic field and compared it to its thermal energy. Such a simple order-of-magnitude estimate shows that thermal energy is vastly greater than magnetic energy, implying that static magnetic fields should have no effect on chemical reactions. Similarly, in my chapter about powerline magnetic fields, I estimated the electric field induced in the body by a 60 Hz magnetic field and compared it to endogenous electric fields due mainly to the heart’s electrical activity. Finally, in my discussion about cell phone radiation I compared the energy of a single radio-frequency photon to the energy of a chemical bond to prove that cell phones cannot cause cancer by directly disrupting DNA. This ability to estimate is crucial, and I believe it should be included under the umbrella of critical thinking skills.
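For instance, the magnetic-versus-thermal estimate takes only a few lines of Python (standard constants; the 1 tesla field strength is my own round number for a strong static magnet, not a value from the book):

```python
# Order-of-magnitude estimate: electron magnetic energy vs thermal energy
mu_B = 9.274e-24   # Bohr magneton (electron magnetic moment), J/T
k_B = 1.381e-23    # Boltzmann constant, J/K
T = 310.0          # body temperature, K
B = 1.0            # strong static magnetic field (my assumption), T

magnetic_energy = mu_B * B   # energy of flipping an electron moment
thermal_energy = k_B * T     # random thermal jostling

print(thermal_energy / magnetic_energy)  # thermal wins by a few hundred
```

Even in a field as strong as an MRI magnet's, thermal energy dominates by a factor of several hundred, which is the whole argument in one number.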
In the video I watched, Trecek-King discussed the idea of consensus, and the different use of this term among scientists and nonscientists. When I analyzed transcranial direct current stimulation, I bemoaned the difficulty in finding a consensus among different research groups.
Finding the truth does not come from a eureka moment, but instead from a slow slog ultimately leading to a consensus among scientists.
I probably get closest to what scientists mean by consensus at the close of my chapter on the relationship (actually, the lack of relationship) between 5G cell phone radiation and COVID-19:
Scientific consensus arises when a diverse group of scientists openly scrutinizes claims and critically evaluates evidence.
Consensus is only valuable if it arises from individuals independently examining a body of evidence, debating an issue with others, and coming to their own conclusion. Peer review, so important in science, is one way scientists thrash out a consensus. I wrote
The reason for peer review is to force scientists to convince other scientists that their ideas and data are sound.
Perhaps the biggest issue in critical thinking is bias. One difficulty is that bias comes in many forms. One example is publication bias: “the tendency for only positive results to be published.” Another is recall bias, which can infect a case-control epidemiological study. But the really thorny type of bias arises from prior beliefs that scientists may be reluctant to abandon. In Are Electromagnetic Fields Making Me Ill? I tell the story of how Robert Tucker and Otto Schmitt performed an experiment to determine if people could detect 60 Hz magnetic fields. They spent five years examining their experiment for possible systematic errors, and eventually concluded that 60 Hz fields are not detectable. I wrote “One reason the bioelectric literature is filled with inconsistent results may be that not all experimenters are as diligent as Robert Tucker and Otto Schmitt.”
After listening to Trecek-King’s video, I began to wonder if the Tucker and Schmitt experiment might alternatively be viewed as a cautionary tale about bias. Was their long effort a heroic example of detecting and eliminating systematic error, or was it a bias marathon where they slaved away until they finally came to the conclusion they wanted? I side with the heroic interpretation, but it does make me wonder about the connection between bias and experimental design. The hallmark of a good experimental scientist is the ability to identify and remove systematic errors from an experiment. Yet one must be careful to root out all systematic errors, not just those that affect the results in one direction. The conclusion: science is difficult, and you must be constantly on guard against fooling yourself.
I reexamined Are Electromagnetic Fields Making Me Ill? to search for signs of my own biases, and came away a little worried. For instance, when talking about 5G cell phone radiation risks, I wrote
After listening to Trecek-King’s video, I am nervous that this was an inadvertent confession of bias. Do my past experiences predispose me to reject claims about electromagnetic fields being dangerous? Or am I merely stating a hard-earned opinion based on experience? Or are those the same thing? Is it bias to believe that Lucy will pull that football away from Charlie Brown at the last second?
All this discussion about critical thinking and bias is related to the claims of pseudoscience and alternative medicine. At the end of Are Electromagnetic Fields Making Me Ill? I ponder the difficulty of debunking false claims.
The study of biological effects of weak electric and magnetic fields attracts pseudoscientists and cranks. Sometimes I have a difficult time separating the charlatans from the mavericks. The mavericks—those holding nonconformist views based on evidence (sometimes a cherry-picked selection of the evidence)—can be useful to science, even if they are wrong. The charlatans—those snake-oil salesmen out to make a quick buck—either fool themselves or fool others into believing silly ideas or conspiracy theories. We should treat the mavericks with respect and let peer review correct their errors. We should treat the charlatans with disdain. I wish for the wisdom to tell them apart.
I’ll give Trecek-King the last word. Another of her mantras, which to me sums up why we care about critical thinking, is:
I am not saying that all of our problems can be solved with critical thinking. I’m saying that it is our best chance.
Critical Thinking in Education, featuring Melanie Trecek-King, Bertha Vazquez, and Daniel Reed
Despite widespread and striking examples of physiological oscillations, their functional role is often unclear. Even glycolysis, the paradigm example of oscillatory biochemistry, has seen questions about its oscillatory function. Here, we take a systems approach to argue that oscillations play critical physiological roles, such as enabling systems to avoid desensitization, to avoid chronically high and therefore toxic levels of chemicals, and to become more resistant to noise. Oscillation also enables complex physiological systems to reconcile incompatible conditions such as oxidation and reduction, by cycling between them, and to synchronize the oscillations of many small units into one large effect. In pancreatic β-cells, glycolytic oscillations synchronize with calcium and mitochondrial oscillations to drive pulsatile insulin release, critical for liver regulation of glucose. In addition, oscillation can keep biological time, essential for embryonic development in promoting cell diversity and pattern formation. The functional importance of oscillatory processes requires a re-thinking of the traditional doctrine of homeostasis, holding that physiological quantities are maintained at constant equilibrium values, a view that has largely failed in the clinic. A more dynamic approach will initiate a paradigm shift in our view of health and disease. A deeper look into the mechanisms that create, sustain and abolish oscillatory processes requires the language of nonlinear dynamics, well beyond the linearization techniques of equilibrium control theory. Nonlinear dynamics enables us to identify oscillatory (‘pacemaking’) mechanisms at the cellular, tissue and system levels.
In their introduction, Xiong and Garfinkel get straight to the point. Homeostasis examines an equilibrium stabilized by negative feedback loops. Such systems are studied by linearizing the system around the equilibrium point. Oscillatory systems, on the other hand, correspond to limit cycle attractors in a nonlinear system. The regulatory mechanism must both create the oscillation and stabilize it.
In Russ’s and my defense, we do talk about oscillations in our chapter on feedback. One source of oscillations is when a feedback loop has two time constants (Section 10.6), but these aren’t what Xiong and Garfinkel are talking about, because those oscillations are transient and only affect the approach to equilibrium. A true oscillation is more like the case of negative feedback plus a time delay (Sec. 10.10 of IPMB). Russ and I mention that such a model can lead to sustained oscillations, but in light of Xiong and Garfinkel’s review I wish now we had stressed that observation more. We analyzed the specific case but missed the big picture: the paradigm shift.
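To see how negative feedback plus a time delay produces sustained oscillations, you can integrate the simplest delay equation, dx/dt = −x(t − τ), numerically; the equilibrium x = 0 is stable for small delays but oscillates with growing amplitude once τ exceeds π/2. This Python sketch (mine, not from IPMB or the review) uses an Euler step with a history buffer:

```python
def delayed_feedback(tau, t_end=40.0, dt=0.01):
    """Euler integration of dx/dt = -x(t - tau), with x = 1 for t <= 0."""
    n_delay = int(round(tau / dt))
    xs = [1.0] * (n_delay + 1)          # history buffer covering [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        x_delayed = xs[-1 - n_delay]    # the feedback sees the delayed value
        xs.append(xs[-1] - dt * x_delayed)
    return xs

def late_amplitude(tau):
    """Largest |x| over the second half of the run."""
    xs = delayed_feedback(tau)
    return max(abs(x) for x in xs[len(xs) // 2:])
```

With τ = 1 (below π/2) the oscillation dies away; with τ = 2 it grows until nonlinearities, absent in this toy linear model, would limit it.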
Another mechanism Xiong and Garfinkel highlight is what they call “negative resistance.” They use the FitzHugh-Nagumo model (analyzed in Problem 35 of Chapter 10 in IPMB) as an example of “short-range destabilizing positive feedback, making the equilibrium point inherently unstable.” Although they don’t mention it, I think another example is the Hodgkin and Huxley model of the squid axon with an added constant leakage current destabilizing the resting potential (Section 6.18 of IPMB). Xiong and Garfinkel do discuss oscillations in the heart’s sinoatrial node, which operate by a mechanism similar to the Hodgkin-Huxley model with that extra current.
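A quick numerical experiment shows the FitzHugh-Nagumo limit cycle in action. The sketch below is my own; the applied current I = 0.5 is simply a value inside the oscillatory range, and the other parameters are the conventional ones. It integrates the equations with an Euler step and counts spikes:

```python
def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_end=300.0):
    """Euler integration of the FitzHugh-Nagumo equations:
       dv/dt = v - v^3/3 - w + I,  dw/dt = eps*(v + a - b*w)."""
    v, w = -1.0, 1.0
    vs = []
    for _ in range(int(t_end / dt)):
        dv = v - v ** 3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return vs

vs = fitzhugh_nagumo()
tail = vs[len(vs) // 2:]   # discard the initial transient
spikes = sum(1 for i in range(1, len(tail)) if tail[i - 1] < 0.0 <= tail[i])
print(f"{spikes} spikes in the second half of the run")
```

The equilibrium is unstable for this current, so the voltage keeps firing indefinitely: a limit cycle, not a decaying approach to rest.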
A third mechanism generating oscillations is illustrated by premature ventricular contractions in the heart caused by “the repolarization gradient that manifests as the T wave.” As a cardiac modeling guy, I am embarrassed that I’m not more aware of this fascinating idea. To learn about it, I recommend you (and I) start with Xiong and Garfinkel’s review (we have no excuse; it’s open access) and then examine some of their references.
Even more interesting is Xiong and Garfinkel’s contention that “oscillations are not an unwanted product of negative feedback regulation. Rather, they represent an essential design feature of nearly all physiological systems.” They’re a feature, not a bug. Presumably evolution has selected for oscillating systems. I wonder how. (Xiong already has a background in molecular biology, bioinformatics, and mathematical modeling; perhaps while she’s at it she should add evolutionary biology.) Let me stress that Xiong and Garfinkel are not merely speculating; they provide many examples and cite many publications supporting this hypothesis.
A key paragraph in their paper introduces a new term: “homeodynamics.”
“If oscillatory processes are central to physiology, then we will need to take a fresh look at the doctrine of homeostasis. We suggest that the concept of homeostasis needs to be parsed into two separate components. The first component is the idea that physiological processes are regulated and must respond to environmental changes. This is obviously critical and true. The second component is that this physiological regulation takes the form of control to a static equilibrium point, a view which we believe is largely mistaken. ‘Homeodynamics’ is a term that has been used in the past to try to combine regulation with the idea that physiological processes are oscillatory... It may be time to revive this terminology.”
This topical review reminds me of Leon Glass and Michael Mackey’s wonderful book From Clocks to Chaos (cited in IPMB), which introduced the idea of a “dynamical disease.” Like Glass and Mackey, Xiong and Garfinkel present a convincing argument in support of a new paradigm in biology, rooted in the mathematics of nonlinear dynamics.
In their cleverly titled section on “Bad Vibrations,” Xiong and Garfinkel note that “not all oscillations serve a positive physiological function. Oscillations in physiology can also be pathological and dysfunctional.” I suspect their section title is a sly reference to the Beach Boys hit “Good Vibrations.” After all, Xiong and Garfinkel are both from California and their review can be summed up as the mathematical modeling of good vibrations.
From Clocks to Chaos, by Glass and Mackey.
You know what: I think the upcoming 6th edition of Intermediate Physics for Medicine and Biology needs more emphasis on nonlinear dynamics. I’m lucky Xiong and Garfinkel’s article came out just as Gene Surdutovich and I were revising and updating the 5th edition of IPMB.
God Only Knows that as I sit here In My Room working on the revision I’ll have Fun, Fun, Fun. Don’t Worry Baby, now that it’s spring and we have The Warmth of the Sun, Gene and I should make good progress. Unfortunately, because we’re working here in Michigan, we can’t occasionally take a break to Catch A Wave. Wouldn’t It Be Nice if we could go on a Surfin’ Safari? (Sorry, I got a little carried away.)
So the answer to the question in the title—are physiological oscillations physiological?—is a resounding “Yes!” I’ll conclude with Xiong and Garfinkel’s final sentence, which I wholeheartedly agree with (except for the annoying British spelling of “center”):
It is time to bring this conception of physiological oscillations to the centre of biological discourse.
Every year the Kresge Library at Oakland University hosts an event called “Authors at Oakland” where they honor publications by Oakland University faculty. This year was “a celebration of the book.” Intermediate Physics for Medicine and Biology was featured at a previous Authors at Oakland event, and this year I submitted Are Electromagnetic Fields Making Me Ill? Two authors were selected to give a short talk about their book, and I was one of them. So on Wednesday, March 20 I spoke to an audience of OU librarians, members of the faculty senate library committee, and other interested professors and students.
The talk was not recorded but below is a transcript, as best as I can remember it.
Thank you for selecting my book Are Electromagnetic Fields Making Me Ill? to be featured here at Authors at Oakland. My friend David Garfinkle once told me that any time a book has a title in the form of a question, the answer’s always “no.” That’s true for my book, and sums it up in a nutshell.
How did I come to write this book? In November of 2019, just before Covid arrived, I was asked to participate in a town hall meeting in Rochester, Michigan about the then-new 5G cell phones. I was to be the health effects expert. I thought I was going to give a short talk to a quiet and respectful audience. Little did I know what was in store. [At this point I showed about the first one and a half minutes of the video below.]
I discussed the hazards of 5G cell phone radiation at a town hall meeting in Rochester, Michigan in 2019. The audience was not convinced by my claim that the risks are small. https://www.youtube.com/watch?v=smQ0Nnz7lLk
This experience got me wondering why people believe things that aren’t supported by the evidence, and what I could do about it. In response to the second question, I wrote this book.
The book covers several topics, but today I’ll focus on the issue that started it all: cell phones and cancer.
Not everyone agrees with me that 5G cell phone radiation is harmless. Devra Davis has written a book titled Disconnect, in which she claims to tell “the TRUTH about cell phone RADIATION, what the INDUSTRY has done to HIDE it, and how to PROTECT your FAMILY.” I disagree with her conclusions, but the issue shouldn’t be viewed as my word against hers. Let’s look at the evidence.
That’s how science works.
Quantum mechanics tells us that electromagnetic radiation is not continuous but comes in lumps called photons.
The energy of a photon is proportional to its frequency. Very high frequency photons, like those of x-rays, have enough energy that they can disrupt DNA, causing mutations leading to cancer. However, cell phones operate at a much lower frequency, on the order of a gigahertz (one billion oscillations per second), in the realm of microwaves. These photons have an energy of about 0.000004 eV (an eV, or “electron volt,” is a unit of energy appropriate when discussing single atoms or molecules). What should we compare that energy to? All molecules bounce around randomly, a phenomenon called thermal motion. The thermal energy at our body temperature is about 0.02 eV. A cell phone photon would be swamped by the thermal noise. Chemical bonds have strengths of several electron volts. A cell phone photon is far too weak to break bonds, so cell phones can’t directly disrupt DNA and cause cancer like x-ray photons can. If they have any effect it must be an indirect one, such as affecting our immune system or suppressing our body’s ability to repair DNA damage.
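The arithmetic behind these comparisons is worth laying out explicitly. A few lines of Python (standard constants; the 1 GHz frequency and 3 eV bond strength are round numbers of my choosing) reproduce the estimates:

```python
# Back-of-the-envelope photon energies
h = 6.626e-34     # Planck's constant, J*s
eV = 1.602e-19    # joules per electron volt
k_B = 1.381e-23   # Boltzmann constant, J/K

photon = h * 1e9 / eV        # one ~1 GHz cell phone photon, in eV
thermal = k_B * 310.0 / eV   # thermal energy at body temperature, in eV
bond = 3.0                   # typical chemical bond, in eV (round number)

print(f"photon {photon:.1e} eV, thermal {thermal:.3f} eV, bond {bond} eV")
```

The microwave photon comes out thousands of times smaller than thermal energy and nearly a million times smaller than a chemical bond, which is the entire case in three numbers.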
A microwave oven. (Consumer Reports, CC BY-SA 4.0, https://creativecommons.org/ licenses/by-sa/4.0, via Wikimedia Commons)
Even though one photon can’t damage our tissue, you might be wondering what would happen if we deposited many, many photons into our body? Physicists have a word for that: “heat.” We know microwaves can heat tissue. You prove that every time you warm up your leftovers in your microwave oven. However, physicists understand how microwaves heat tissue very well, and can predict how hot tissue will get when exposed to microwaves. Cell phones don’t emit enough microwave radiation to significantly heat tissue. The Federal Communications Commission limits the amount of radiation a cell phone can emit to levels that don’t cause significant heating. Your cell phone doesn’t cook your brain. If microwave radiation represents a hazard to our tissue, the effect must be both indirect and nonthermal.
Let’s now look at four types of evidence about the risk of cell phone radiation: 1) theoretical analysis, 2) cancer rates, 3) epidemiological evidence, and 4) laboratory experiments.
Asher Sheppard and his colleagues have analyzed every theoretical mechanism they could think of to determine whether microwaves have a significant effect on our tissue. After an exhaustive search, they concluded that
In the frequency range from several megahertz to a few hundred gigahertz, the focus of this paper, the principal mechanism for biological effects, and the only well-established mechanism, is the heating of tissues.
I can imagine that you’re thinking “well, maybe those armchair theorists just weren’t smart enough to dream up the correct mechanism.” Perhaps, but the point I want to make is that the concern about cell phone radiation isn’t being driven by a theoretical prediction. Theory does not predict there should be an effect.
Cell phone use and brain cancer trends between 1976 and 2006. (Data from Inskip et al.)
Now look at this plot of brain cancer trends. Back in the 1980s, when I was a graduate student, no one had cell phones. The use of cell phones has exploded since then. The data shown only goes out to about 2010, but if you extend the data to today essentially everyone has a cell phone. Yet the cancer rate has been flat over those decades, and the brain cancer rate in particular has been nearly flat. If cell phones are causing brain cancer, the signal isn't strong enough to show up in the cancer rate data.
Epidemiological studies examine large groups of people, some exposed to a hazard and some not, to compare their health. One of the first epidemiological studies, the INTERPHONE study, did suggest a weak association of heavy cell phone use with cancer. INTERPHONE was a case-control study: the researchers interviewed many people with brain cancer to determine their prior cell phone use, and compared them to a control group without cancer. Case-control studies are useful for getting data on rare hazards quickly, but they're susceptible to biases, such as "recall bias," where a person with cancer who used their cell phone a lot will remember that clearly, and perhaps regretfully, while a member of the control group might not remember whether they ever used a cell phone at all. A cohort study is a better type of epidemiological analysis. A large number of people, some cell phone users and some not, are followed for many years to see who gets cancer. Two large cohort studies (the Million Women Study in Europe and another study that involved essentially the entire population of Denmark) didn't indicate a signal for an increased rate of cancer caused by cell phone use. A meta-analysis of many epidemiological studies by Martin Röösli and his coworkers concluded that
Epidemiological studies do not suggest increased brain or salivary gland tumor risk with [mobile phone] use, although some uncertainty remains regarding long latency periods (>15 years), rare brain tumor subtypes, and [mobile phone] usage during childhood.
Another large cohort study, called COSMOS, is now being carried out in Europe. When I was preparing this PowerPoint presentation, I thought I'd have to tell you that we'd need to wait a few years until the results are published. Then, just this week, a preliminary report found no evidence that cell phone use is associated with higher rates of brain cancer. Some people might claim that there's a long latency period between the exposure to cell phone radiation and the occurrence of cancer, and that a large uptick in the cancer rate will happen soon. Maybe, but as each year goes by that scenario becomes less and less likely.
The final type of evidence is laboratory experiments, such as studies using rats, mice, or cells in a dish. The evidence here is mixed; many experiments see effects and many do not. In fact, you could make a compelling case for or against cell phone health effects, depending on which articles you read. Unfortunately, the quality of these studies is also mixed.
Scientists sometimes conduct a systematic review, weighing the pros and cons of the many experiments. For example, Anne Perrin and her collaborators reviewed the effects of radiofrequency electromagnetic fields on the blood-brain barrier, and found that
recent studies provide no convincing proof of deleterious effects of [radiofrequency radiation] on the integrity of the [blood brain barrier].
But other systematic reviews have come to different conclusions, and I fear it’s difficult to draw definite conclusions from the experimental investigations.
Federal agencies, such as the Food and Drug Administration, the Centers for Disease Control and Prevention, and the National Cancer Institute (part of the National Institutes of Health), often conduct their own reviews of the evidence. My favorite is the National Cancer Institute, the agency that got the round of boos during that 5G town hall meeting I participated in. These aren't bureaucrats conducting the review; they are our nation's top cancer scientists. They concluded that
The human body does absorb energy from devices that emit radiofrequency radiation. The only consistently recognized biological effect of radiofrequency radiation absorption in humans that the general public might encounter is heating to the area of the body where a cell phone is held (e.g., the ear and head). However, that heating is not sufficient to measurably increase core body temperature. There are no other clearly established dangerous health effects on the human body from radiofrequency radiation.
So, the evidence from theoretical analysis, cancer trends, epidemiology, and experiments makes a strong case that there are no health risks from cell phone radiation. Impossibility proofs are difficult in biology and medicine, but to me the evidence is compelling that the electromagnetic waves emitted by cell phones are safe.
A final question is whether we should believe the scientists. Should we trust the National Cancer Institute to provide an unbiased review, or are they trying to hide hazardous effects? I believe a conspiracy secretly carried out by hundreds if not thousands of scientists and medical doctors is absurd. In my book I wrote
Dangers arising from cell phone radiation strike me as unlikely, but not inconceivable. However, the claims that there exists a vast plot, with scientists colluding to conceal the facts, are ridiculous.
Finally, in the acknowledgments section of my book I thank the Kresge Library for “assisting me with obtaining books and articles related to this research.” In particular, the interlibrary loan office here at Kresge Library has been essential to my research. I worked them pretty hard. You can’t write a book like this without a good interlibrary loan department.
Thank you. Does anyone have questions?
I must admit the biggest applause arose from my comment about the interlibrary loan office, but then the crowd was largely librarians. Overall Authors at Oakland was a wonderful event, and I deeply appreciate being invited to speak at it.
Recently I was reading an article by Ramsay Lewis and Yuhong Dong in The Epoch Times titled Invisible Electromagnetic Fields: Do They Harm Your Health? My friend and colleague David Garfinkle once told me that whenever you see a book or article whose title is in the form of a question, the answer is always “no.” I assumed that would be the case for this article, and I began reading.
The article describes how citizens of Virginia Beach opposed an offshore renewable energy project, justifying their opposition in part because of possible health hazards from electric and magnetic fields produced by transmission cables.
The article started off well and discussed many of the issues described in Chapter 9 of Intermediate Physics for Medicine and Biology and in my book Are Electromagnetic Fields Making Me Ill? (which, by the way, does follow Garfinkle’s rule of the title question having “no” for an answer). Then, suddenly, Lewis and Dong took a bizarre turn. They wrote
In repeated experiments, Nobel Prize laureate Professor Luc Montagnier amazingly demonstrated that a low intensity electromagnetic field (EMF) of 7 HZ (similar to Schumann resonances), could produce DNA in a tube of pure water, simply by being adjacent to another tube containing DNA. In other words, he created something—DNA—out of nothing, simply by being close to DNA and adding low frequency EMFs.
Wait... What?! This sounded serious enough that I decided to look into it. After all, the idea was championed by one of the discoverers of HIV, the virus responsible for AIDS.
Luc Montagnier in 2008. (Prolineserver, GFDL 1.2, via Wikimedia Commons)
A novel property of DNA is described: the capacity of some bacterial DNA sequences to induce electromagnetic waves at high aqueous dilutions. It appears to be a resonance phenomenon triggered by the ambient electromagnetic background of very low frequency waves. The genomic DNA of most pathogenic bacteria contains sequences which are able to generate such signals. This opens the way to the development of highly sensitive detection system for chronic bacterial infections in human and animal diseases.
The key phrase in the abstract is "at high aqueous dilutions." The authors repeatedly made tenfold dilutions of the DNA solution. After 18 dilutions, the concentration of DNA should be 10⁻¹⁸ times what it was originally. The purported electromagnetic wave effect persisted, even though there was no DNA left in the sample. The water "remembered" the DNA.
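The arithmetic is easy to check. As an illustration (the starting amount here is my assumption, since the paper's actual initial concentration isn't quoted above), suppose the sample begins with a generous one nanomole of DNA:

```python
# Expected number of DNA molecules remaining after repeated tenfold dilutions.
# Assumption (for illustration only): the sample starts with 1 nanomole of DNA.
AVOGADRO = 6.022e23  # molecules per mole

n_start_mol = 1e-9                          # assumed starting amount: 1 nmol
molecules_start = n_start_mol * AVOGADRO    # about 6e14 molecules

dilutions = 18
molecules_left = molecules_start * 10.0 ** (-dilutions)

print(f"starting molecules: {molecules_start:.1e}")
print(f"expected molecules after {dilutions} tenfold dilutions: {molecules_left:.1e}")
# The expected number is far below one, so a typical
# diluted sample contains no DNA molecules at all.
```

Even starting with hundreds of trillions of molecules, after 18 tenfold dilutions the expected number left is a tiny fraction of one molecule.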
It’s homeopathy all over again.
A 2010 interview in Science politely hinted that this idea is absurd.
Virologist and Nobel laureate Luc Montagnier announced earlier this month that, at age 78, he will take on the leadership of a new research institute at Jiaotong University in Shanghai. What has shocked many scientists, however, isn’t Montagnier’s departure from France but what he plans to study in China: electromagnetic waves that Montagnier says emanate from the highly diluted DNA of various pathogens…
But Montagnier’s new direction evokes one of the most notorious affairs in French science: the “water memory” study by immunologist Jacques Benveniste. Benveniste, who died in 2004, claimed in a 1988 Nature paper that IgE antibodies have an effect on a certain cell type even after being diluted by a factor of 10¹²⁰. His claim was interpreted by many as evidence for homeopathy, which uses extreme dilutions that most scientists say can’t possibly have a biological effect. After a weeklong investigation at Benveniste’s lab, Nature called the paper a “delusion.”
Here’s part of the interview with Montagnier.
Q: You have called Benveniste a modern Galileo. Why?
L.M.:
Benveniste was rejected by everybody, because he was too far ahead. He lost everything, his lab, his money. … I think he was mostly right, but the problem was that his results weren’t 100% reproducible.
Q:
Do you think there’s something to homeopathy as well?
L.M.:
I can’t say that homeopathy is right in everything. What I can say now is that the high dilutions are right. High dilutions of something are not nothing. They are water structures which mimic the original molecules. We find that with DNA, we cannot work at the extremely high dilutions used in homeopathy; we cannot go further than a 10⁻¹⁸ dilution, or we lose the signal. But even at 10⁻¹⁸, you can calculate that there is not a single molecule of DNA left. And yet we detect a signal.
I am an emeritus professor of physics at Oakland University, and coauthor of the textbook Intermediate Physics for Medicine and Biology. The purpose of this blog is specifically to support and promote my textbook, and in general to illustrate applications of physics to medicine and biology.