Friday, January 9, 2026

Planar Microcoil Arrays for In Vitro Cellular-Level Micromagnetic Activation of Neurons

Russ Hobbie and I discuss magnetic stimulation in Chapter 8 of Intermediate Physics for Medicine and Biology. The technique uses a time-varying magnetic field to induce an electric field in tissue that excites a nerve. I worked on transcranial magnetic stimulation while at the National Institutes of Health in the 1990s. We passed thousands of amps of current through a multi-turn coil, several centimeters in size, held above the head. The pulse of current typically rose from zero to its peak in about a tenth of a millisecond. The electric field it induced was large enough to excite neurons in the brain.

Nowadays, researchers such as Seung Woo Lee and his colleagues claim to be able to perform magnetic stimulation using single-turn microcoils, which typically have a size of hundreds of microns and are meant to be implanted in tissue. Lee et al. said they could stimulate neurons using currents of about a twentieth of an amp, and calculated the electric field produced by such a coil for a current as small as one thousandth of an amp. Previously in this blog, I have criticized these studies, saying that the electric field induced in the tissue using such a coil is way too small to excite a nerve. In fact, my graduate student, Mohammed Alzahrani, performed a careful calculation and found that the induced electric field was about 100,000 times smaller than what Lee et al. predicted. 

“Planar Microcoil Arrays for In Vitro Cellular-Level Micromagnetic Activation of Neurons.”
Today I want to discuss yet another article about magnetic stimulation with a microcoil. This one, by Renata Saha and her collaborators, is titled “Planar Microcoil Arrays for In Vitro Cellular-Level Micromagnetic Activation of Neurons” (Journal of Vacuum Science and Technology B, Volume 42, Article Number 033001, 2024). The most concerning feature of this article is not the magnitude of the induced electric field, which seems about right to me, but its spatial distribution and, especially, its effectiveness at stimulating a neuron. 

Saha et al. passed two amps of current through a coil at a frequency of 2000 Hz. Their coil was square, about 200 microns on each side, and it had five turns. They calculated the electric field 20 microns above the plane of the coil. I will not be reproducing their calculation in detail. Instead, I’ll estimate the induced electric field by examining the equation

E = (μ0/4π) N (dI/dt) ∮ dl/R

where E is the induced electric field, μ0/4π is a constant equal to 10⁻⁷ (V/m)/(A/s), N is the number of turns, dI/dt is the rate of change of the coil current, dl is the length of an element of the coil, and R is the distance from that element to where the electric field is calculated. In the past, I have said that the integral is dimensionless and would likely be on the order of one, so the electric field is approximately

E ≈ (μ0/4π) N (dI/dt).

For Saha et al.’s coil, taking dI/dt = 2πfI for a sinusoidal current, this gives

E ≈ (10⁻⁷ (V/m)/(A/s)) × 5 × (2π × 2000 Hz × 2 A) ≈ 0.013 V/m.
Today, I want to take a closer look at that integral. Let’s calculate the electric field induced a distance z away from a wire that extends from –L/2 to +L/2 along the x axis. The integral, which you can look up in any good integral table, is expressed in terms of the natural logarithm

∫ dx/√(x² + z²) = ln[(√(L²/4 + z²) + L/2)/(√(L²/4 + z²) − L/2)],

where the integral runs from x = –L/2 to x = +L/2.
The integral depends only on the ratio z/L. Getting close to the nerve (small z) and making the coil tiny (small L) do not matter individually; their relative size is what counts. I would like to take L to infinity to find the electric field from a long wire, but in that case the denominator in the argument of the logarithm would go to zero and the logarithm itself would be infinite. So let L, the length of the side of the coil, be 200 microns and let z, the distance from the coil to the measurement point, be 20 microns. Then z/L = 0.1, implying the integral is equal to 4.6. Therefore, the electric field produced by this coil should be about (0.013 V/m)(4.6) = 0.06 V/m. That might be an overestimate, because the width of the five turns is actually about 50 microns, which would spread out the distribution of the electric field and lower its peak strength, and also the other side of the coil might contribute a little. But since I’m estimating, I’ll take the electric field to be 0.06 V/m.  
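If you want to check my arithmetic, here is a short Python script (my own back-of-the-envelope check, not code from Saha et al.) that reproduces the estimate:

```python
# Rough check of the microcoil estimate above.
import math

mu0_over_4pi = 1e-7   # (V/m)/(A/s)
N = 5                 # number of turns
I0 = 2.0              # peak current, A
f = 2000.0            # frequency, Hz
L = 200e-6            # side length of the square coil, m
z = 20e-6             # distance from the coil plane, m

dIdt = 2 * math.pi * f * I0          # peak dI/dt for a sinusoidal current, A/s
prefactor = mu0_over_4pi * N * dIdt  # the ~0.013 V/m estimate

# Dimensionless integral for a straight segment of length L at distance z:
# ln[(sqrt(L^2/4 + z^2) + L/2) / (sqrt(L^2/4 + z^2) - L/2)]
root = math.sqrt(L**2 / 4 + z**2)
integral = math.log((root + L / 2) / (root - L / 2))  # ~4.6 when z/L = 0.1

E = prefactor * integral             # ~0.06 V/m
print(f"prefactor = {prefactor:.3f} V/m, integral = {integral:.2f}, E = {E:.3f} V/m")
```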

That’s close to the peak electric field calculated by Saha et al. (see their Figure 2a). So unlike Lee et al., Saha et al. seem to calculate the magnitude of the electric field correctly. I do have some reservations about the spatial distribution of the electric field they predict. I would expect it to be large right under the coil and then fall off rapidly away from it. Instead, they find the electric field is large not only under the coil but also a long way away from it, out near the edge of the 3 mm by 3 mm region where they perform the calculation. Also, their electric field is oddly asymmetric. The electric field in the x direction looks like what I would expect (except for those strange regions near the outer boundary), but their electric field in the y direction is much larger on one side of the coil than on the other, and is spread out over a region much larger than the coil size (see their Figure 2d; I assume there’s a typo in the labels on their color bar, so it actually ranges from +0.02 to –0.05 V/m). Perhaps some of this is due to the sealed boundary at the edge of the tissue region, some due to the feed wires that deliver current to the coil, and perhaps some due to an interaction between those two factors. Who knows? But at least they get the magnitude near the coil right, and for me that’s what’s important. Lee et al. had the spatial distribution correct, but their magnitude was way off.

The other concern I have with Saha’s calculation is the response of the neuron. They placed a neuron, which was about 2 mm long, near the coil and used a program called NEURON to calculate its resulting transmembrane voltage. Let’s do this ourselves in an approximate way. The transmembrane voltage should be on the order of the electric field times the length of the neuron. This would be the case if no current entered the cell, so the intracellular space is at a constant voltage, the extracellular voltage varies linearly along the neuron length, and the transmembrane voltage is the intracellular voltage minus the extracellular voltage. That would be 0.06 V/m = 0.06 mV/mm times 2 mm, or 0.12 mV. No neuron is going to fire if the transmembrane potential changes by about a tenth of a millivolt. Yet Saha et al. show an initial response (their stimulus artifact, which is followed immediately by an action potential) that increases by about 70 mV (see their Figure 2b). This I don’t understand. If anything, I’m overestimating the transmembrane potential in my calculation. The neuron is a little less than 2 mm in length, it is off to one side where the electric field has fallen to about half its peak value, and you could make a good argument that I should use half the length of the neuron rather than its entire length in my estimate (one end is depolarized and the other end hyperpolarized). So, a 0.06 V/m electric field most likely produces less than 0.12 mV of transmembrane voltage, but let’s use 0.12 mV as an upper limit. That’s gotta be subthreshold.
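The same back-of-the-envelope arithmetic in Python (again my own estimate, not a NEURON simulation):

```python
# Upper-limit estimate of the transmembrane voltage, assuming no current
# enters the cell so the full extracellular voltage drop appears across
# the membrane at the ends of the neuron.
E = 0.06                 # induced electric field near the coil, V/m
length = 2e-3            # neuron length, m
dV = E * length          # upper-limit transmembrane voltage change, V
threshold_E = 10.0       # typical electric field threshold in brain, V/m

print(f"dV = {dV * 1e3:.2f} mV")  # about a tenth of a millivolt
print(f"the field is {threshold_E / E:.0f} times below a typical threshold")
```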

Typical electric field thresholds in the brain are on the order of, and probably somewhat greater than, 10 V/m (see Mohammed’s first paper cited below). Why do Saha et al. find that such a tiny 0.06 V/m electric field fires an action potential in the nerve? I don’t know. NEURON is a black box to me and it is hard to say why its prediction is so odd.

What do I conclude? Saha et al. seem to calculate the induced electric field correctly (at least its peak magnitude, if not its spatial distribution). But their electric field is too small to excite a nerve. Not 100,000 times too small (like for Lee et al.), but perhaps a hundred times too small. I’m not convinced that what they are looking at is magnetic stimulation. What else could it be? My graduate student Mohammed calculated that capacitive coupling might be an alternative mechanism for excitation. Other mechanisms are possible, such as tissue heating or mechanical motion caused by the magnetic force produced by one side of the coil on the other side.

For those who want to look more deeply into this issue, Mohammed’s three papers on this topic are: 

Alzahrani, M. and B. J. Roth, 2023, The electric field induced by a microcoil during magnetic stimulation. IEEE Trans. Biomed. Eng., 70:3260–3262. 

Alzahrani, M. and B. J. Roth, 2024, The calculation of maximum electric field intensity in brain tissue stimulated by a current pulse through a microcoil via capacitive coupling. Applied Sciences, 14:2994. 

Alzahrani, M. and B. J. Roth, 2024, The difference between traditional magnetic stimulation and microcoil stimulation: Threshold and the electric field gradient. Applied Sciences, 14:8349.

The last two are open access, so anyone can read them online. The first one is not open access, but send an email to roth@oakland.edu and I'll be happy to send you the pdf.

Friday, January 2, 2026

The 6th Edition of IPMB Has Been Submitted!

On December 26, Gene Surdutovich and I submitted our draft of the 6th Edition of Intermediate Physics for Medicine and Biology to Springer, our publisher. To give you all a sneak peek at what’s in store, here’s an excerpt from the preface.
Sadly, Russ Hobbie passed away in 2021 at the age of 87. The Sixth Edition of IPMB is the first edition for which he did not himself supervise the revision. However, his influence lives on, and much of the text contains his original words.

When preparing the Sixth Edition, Hobbie’s coauthor Brad Roth added another coauthor: Eugene Surdutovich. Surdutovich is a physicist who for decades has studied how ions interact with tissue. He has also taught Biological Physics and Medical Physics from the Fifth Edition of IPMB. Finally, his expertise in the LaTeX typesetting system and in producing figures using Mathematica has contributed much to this edition.

The Sixth Edition of IPMB has several new features. Most of the figures were redrawn, and many are now in color. Three chapters have been added. First, a brief introduction to surface tension is provided in the new Chapter 2. The previous chapter on feedback and control contained much material about mathematical modeling using nonlinear dynamics. Now this chapter has been split into two, one on feedback and the other on nonlinear dynamics. Finally, a chapter was added that focuses on damage to cells that are exposed to ionizing radiation. There, Surdutovich shares his insight about radiation damage caused by ion beams. This is crucial for understanding proton and heavy ion beam therapies. Despite this new material, the Sixth Edition is significantly shorter than the Fifth. We ruthlessly removed topics that we did not cover when we taught from the book. 
The Sixth Edition has more end-of-chapter homework problems than previous editions. We now have over 1000 problems that highlight biological applications of physics. Many of the problems extend material in the text. Problems marked with an asterisk are particularly challenging. A solutions manual is available to those teaching the course. Instructors can use it as a reference or provide selected solutions to their students. The solutions manual makes it easier for an instructor to guide an independent-study student. Information about the solutions manual is available at the book's website: https://sites.google.com/view/hobbieroth/home. The book also has an associated blog that is updated every Friday (hobbieroth.blogspot.com).
The good folks at Springer now will do their thing, fixing all our formatting issues and putting the book into their house style. Then, the page proofs come back to us for checking. At that point, finally, the finished product will be available. I’m hoping it’s ready for use during the fall semester.

Thanks to all who made suggestions, provided feedback, and otherwise helped us complete this revision. For those of you who like the earlier editions: fear not. There’s still a lot of Russ Hobbie in the 6th Edition. It’s been just over four years since we lost him. I’ve been thinking about him much throughout this revision.

Friday, December 26, 2025

Twinkle, Twinkle, T2*

In Chapter 18 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the relaxation time constants T1 and T2 used during magnetic resonance imaging. We also introduce the time constant T2* (pronounced tee-two-star), which includes spin dephasing caused by heterogeneities in the external magnetic field. Spin echo methods can be used to reverse relaxation by T2*, recovering T2.
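For readers who want a number to go with the words, here is a minimal sketch in Python of the common relationship 1/T2* = 1/T2 + γΔB, where ΔB characterizes the field inhomogeneity (the values of T2 and ΔB below are illustrative, not taken from IPMB):

```python
import math

# Common approximation: 1/T2* = 1/T2 + gamma * dB, where dB is the spread
# in the external magnetic field across a voxel. Values are illustrative.
gamma = 2 * math.pi * 42.58e6   # proton gyromagnetic ratio, rad/(s*T)
T2 = 0.1                        # intrinsic transverse relaxation time, s
dB = 1e-7                       # field inhomogeneity, T (0.1 microtesla, assumed)

T2_star = 1 / (1 / T2 + gamma * dB)
print(f"T2* = {T2_star * 1e3:.1f} ms")  # shorter than T2; a spin echo recovers T2
```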

For those of you who don't like such technical explanations, below is a video of the song “Twinkle, Twinkle, T2*.” 

See ya next year!


 “Twinkle, Twinkle, T2*”

https://www.youtube.com/watch?v=uu7Ph25EhLQ

Friday, December 19, 2025

Douglas Lea, Biological Physicist

“Radiation and the Single Cell: The Physicist’s Contribution to Radiobiology,” by Eric Hall.
I like to highlight examples of scientists who are trained in physics but make fundamental contributions to biology and medicine. One example is Douglas Edward Lea.  He began his career working on nuclear physics in the Cavendish Laboratory at Cambridge, but switched to biology. I will draw much of this post from a 1976 article by Eric Hall published in the journal Physics in Medicine and Biology (Volume 21, Pages 347–359), titled “Radiation and the Single Cell: The Physicist’s Contribution to Radiobiology.” This article is based on a talk Hall gave as part of the Douglas Lea Memorial Lecture series. Hall begins
Douglas Lea was born in Liverpool on 8 February 1910. From Liverpool Collegiate School, he went with scholarships to Trinity College, Cambridge, in 1928. He gained firsts in Part I of the Mathematical Tripos in 1929, and in Part II (physics) of the Natural Sciences Tripos in 1931... He started research in physics at the Cavendish laboratory at a time when Lord Rutherford’s genius pervaded the laboratory, though Lea’s discovery in 1937 of the capture of a neutron by a proton to form deuterium, with the emission of gamma rays, was associated more with Sir James Chadwick… Lea was elected to a fellowship at Trinity College in 1934 and received his Ph.D. in 1935.
I’m always curious about why scientists decide to make the transition from physics to biology. In this case, Lea was worried that nuclear physics had become overcrowded, and there were more opportunities in the less-explored biological sciences.
What a galaxy of talent there was at the Cavendish at that time and what halcyon days for physics; but Lea could already see the writing on the wall. As Eileen Lea, his wife, put it in a letter to me recently, this turning to biology was the result of a deliberate search for an important unexplored field.
Lea recorded some of the first survival curves. Russ Hobbie and I discuss survival curves in Chapter 16 of Intermediate Physics for Medicine and Biology. These semilog plots of surviving fraction versus dose are the most common data obtained in the field of radiobiology. Hall continues
Lea at once recognized that until survival curves could be generated with good precision, it would not be possible to make any inferences regarding the mode of action of the radiation. He wrote in the first paragraph of his first paper in the field of biology (Lea, Haines and Coulson 1936): ‘The mechanism of disinfection, however, remains obscure. Theories have been proposed, but little attempt seems to have been made to analyse the implications of the various hypotheses and point by point to confirm or disprove them. Moreover, some writers have ignored the fact that the physical processes accompanying the passage of various radiations through matter are fairly completely understood.’
Lea pioneered the “single-target single-hit” model to describe survival curves. This is essentially a mathematical application of the binomial distribution to deduce that the survival probability falls exponentially with the dose. The sixth edition of Intermediate Physics for Medicine and Biology will say more about Lea’s model, in part because my new coauthor, Gene Surdutovich, is an expert on the physics of radiobiology. We will be citing Lea’s influential book Actions of Radiations on Living Cells, published in 1947, the year Lea died at the tender age of 37.
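Here is a minimal sketch of Lea’s model in Python (my own illustration, using the Poisson limit of the binomial argument):

```python
import math

def survival_fraction(dose, D0):
    """Single-target single-hit model: hits on the target are Poisson
    distributed with mean dose/D0, and a cell survives only if it takes
    zero hits, so the surviving fraction is exp(-dose/D0)."""
    return math.exp(-dose / D0)

# On a semilog plot of surviving fraction versus dose this is a straight
# line; at dose = D0 the surviving fraction has fallen to 1/e, about 0.37.
print(survival_fraction(1.0, 1.0))
```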

Can we draw any conclusions about Lea’s transition from physics to biology, and his legacy as a scientist? Hall writes
We must view Douglas Lea, his experimental work as well as his attitude to life, against the background of his times. He chose to stay in Cambridge, but meanwhile, in the bigger population centres, events were moving rapidly in the application of physics to radiobiology...

What has been the contribution of the physicist to radiobiology at every stage? To be quantitative; to work with simple systems and to deduce basic principles that have a general application. This is the legacy that we have inherited from men like Douglas Lea. It is clearly difficult to follow in the footsteps of one who walked with such majestic strides, but it is evidently our duty to try.

Friday, December 12, 2025

COSMOS, Cell Phones, and Cancer

Are Electromagnetic Fields
Making Me Ill?
The relationship between cell phones and brain cancer is a controversial topic. Russ Hobbie and I discuss the physics behind the risk of electromagnetic radiation from cell phones in Chapter 9 of Intermediate Physics for Medicine and Biology. I also address this topic in my book Are Electromagnetic Fields Making Me Ill? published in 2022.
A large cohort study, called COSMOS, is now following approximately 100,000 volunteers in the United Kingdom and 200,000 elsewhere in Europe. COSMOS began in 2010, and participants will be followed for 20 years. Each participant will complete an online questionnaire probing their health, lifestyle, and cell phone use… COSMOS will avoid recall bias by obtaining cell phone records from mobile phone companies to supplement the questionnaire.

Last year the first results of the COSMOS study were reported in an article titled “Mobile Phone Use and Brain Tumour Risk – COSMOS, A Prospective Cohort Study” published in the journal Environment International (Volume 185, Article Number 108552, 2024). Below is a series of questions and answers about that article. 

 

Q: Let’s jump to the bottom line. What did the article conclude?

A: The last sentence of the abstract says “Our findings suggest that the cumulative amount of mobile phone use is not associated with the risk of developing glioma, meningioma, or acoustic neuroma.” In other words, cell phone radiation didn’t cause brain cancer. 

 

Q: Is COSMOS an acronym?

A: Yes. It stands for “COhort Study of MObile phone uSe and health”. Okay, the "S" for "uSe" is a bit of a stretch, but it has a nice ring to it. (Get it? phone... ring.)

 

Q: The “CO” in COSMOS stands for “Cohort”. What’s that? 

A: Two main types of epidemiological research are case-control studies and cohort studies. In a case-control study, researchers look retrospectively at patients diagnosed with some disease, to try and determine the cause. In a cohort study, initially healthy people are followed to see who gets sick with a disease. Cohort studies take longer to perform, require more subjects, and are more expensive, but are less subject to bias. 

 

Q: Bias? What sort of bias?

A: The main concern is recall bias. In a case-control study, patients who have a disease may be focused on what caused their health issue and search their memory more extensively for causal links, whereas a control group without the disease may not be as careful and complete in their assessment. This can potentially exaggerate the relation between a risk factor and the disease. Recall bias is avoided in a cohort study, because the initial questionnaire and patient history is collected when all the participants are healthy.

 

Q: What makes this study better than previous ones? 

A: The authors state that COSMOS is “the largest prospective cohort study of mobile phone use specifically designed to overcome the well-described shortcomings of both case-control and previous cohort studies, through a more comprehensive prospective collection of exposure information, including both self-report and mobile network operator data, while also addressing longer-term exposure and more recent technologies than previous studies.”


Q: Who did this research?

A: The first author of the article was Maria Feychting, who is with the Institute of Environmental Medicine at the Karolinska Institutet in Stockholm, Sweden. The two senior authors, listed last and contributing equally to the article, are Giorgio Tettamanti of the Karolinska Institutet and Paul Elliott of Imperial College London.



Q: How many subjects were studied? 

A: 264,574  


Q: What are glioma, meningioma, and acoustic neuroma? 

A: These are types of brain tumors. A glioma originates in the glial cells of the brain. Gliomas comprise 80% of all malignant brain tumors. Meningiomas form in the meninges, the membranes surrounding the brain, and acoustic neuromas affect the nerve that connects the inner ear to the brain. During the study, 149 gliomas, 89 meningiomas, and 29 acoustic neuromas were diagnosed.

 

Q: Everyone uses a cell phone nowadays. What was the control group?

A: The detailed participant histories and data from phone companies allowed the researchers to estimate the total cumulative hours of cell phone use for each participant. They could then compare users with less use to those with more use. Half the users had less than 464 cumulative hours of call time, a quarter had between 464 and 1062 hours, and another quarter had more than 1062 hours.



Q: Cell phones are getting more and more common, with new generations like 5G. Isn’t the exposure much worse now than in the past?

A: Interestingly, the answer is no. New technologies provide less exposure. The authors write “Generally, RF-EMF [radio-frequency electromagnetic field] exposure levels to the head during calls have decreased considerably with each new generation of mobile phone technology, most notably between the 2nd (e.g., GSM introduced in the early 1990s) and 3rd generation (e.g., UMTS introduced in the early 2000s); the contribution to the whole-brain RF-EMF exposure from a mobile phone held to the ear while calling on a GSM phone is orders of magnitude higher than that from a 3G phone.” This trend has continued through 4G and 5G technologies, especially with adaptive power control technology.

 

Q: This COSMOS study sounds expensive. Who paid for it?

A: The list of funding organizations at the end of the article runs for almost a page. Each country (Sweden, Denmark, Finland, the United Kingdom, the Netherlands, and France) has different funding sources. Most appear to be government agencies, although there is some industry funding in some countries. The authors state that “they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.”



Q: The study has been going on for a little over seven years. Is this long enough to detect slow-growing brain tumors? 

A: This question was raised in a letter to the editor by Michael Kundi. In their response, Feychting et al. point out that although the time between patient registration and the article was seven years, this does not mean the study can only detect cancers initiated in the last seven years. Participants provided information about their exposure history, so researchers could examine the relationship between cell phone use and cancer over a much longer time than seven years. 30% of participants had used a mobile phone for 15 years or longer. The study is ongoing, and is supposed to last 20 years. In about a decade, we should have more definitive data. 

 

Q: Were there any other letters to the editor about this research?

A: Joel Moskowitz and his colleagues wrote a letter in which they called the COSMOS study “methodologically flawed.” One of their main complaints is that the participant questionnaires and limited cell phone usage data would not be sufficient to assess radio-frequency radiation exposure, which depends on the details of the technology used. The authors responded by noting how their exposure estimates are far better than in past studies. They claim “the prospective collection of exposure information in COSMOS is a key strength, together with the use of objective operator data from a sub-sample of participants to improve the exposure estimation based on recall alone. These data were used to calibrate the self-reported mobile phone use, leading to more accurate estimation of the relation between mobile phone use and health outcomes.”

Moskowitz and his coworkers also objected to the lack of an unexposed control group. The authors responded “Today, close to 100 % of the populations in the included countries are mobile phone users. The tiny proportion of non-users is likely to differ from the mobile phone users in many other aspects, and confounding and random variation would be major problems in analyses with non-users as reference group. Comparing low vs. high or long-term vs. short-term exposures is common in epidemiological studies when exposures are prevalent, and internal comparisons within the cohort ensures comparability in the quality of outcome, exposure and confounding information.”

Finally, Moskowitz et al. complained about industry funding, claiming that it would lead to a funding bias. The authors responded “COSMOS was funded through grant applications to publicly funded research councils or organisations, undergoing the same rigorous and competitive evaluation process as other research grant applications. In some countries, industry complemented the funding either through national research programs led by public authorities without any influence from industry, or by using trusted public authorities as a firewall, with agreements that guaranteed the independence of the researchers. It is reasonable that industry contribute to the costs of research into potential health effects of their products, as long as it can be guaranteed that they have no influence on the conduct of the research, and this independence was fully the case in COSMOS.” 

 

Q: So, what’s your conclusion?

A: When you combine the initial COSMOS results with the Danish cohort study and the Million Women cohort study that I discuss in Are Electromagnetic Fields Making Me Ill?, I conclude that there’s little evidence connecting cell phone use to cancer. It’s just not a problem. 

 

Q: I love the Q&A, but now I would like to read the paper itself. Unfortunately, I don't have a subscription to Environment International. What can I do?

A: You’re in luck. The article, and all the letters to the editor and the author responses, are published open access. Anyone can read them online, using the links I provide above. Enjoy!

Friday, December 5, 2025

Bernard Katz, Biological Physicist

Nerve, Muscle, and Synapse,
by Bernard Katz.
I’ve talked about Bernard Katz before in this blog, when discussing his book Nerve, Muscle, and Synapse (cited in Chapter 6 of Intermediate Physics for Medicine and Biology). In the foreword to the book, George Wald wrote
Professor Katz... goes far beyond the first essentials to develop the subject in depth… What impresses me particularly is that each idea is pursued to the numerical level. Each theoretical development comes out in this form, in clearly stated problems worked through with the relevant numbers.
One theme of this blog is to explore the intersection of biology and medicine with physics. I often highlight physicists, like myself, who have made the transition from physics to biology. Katz is an example of a scientist who made the less common transition from physiology and medicine to physics.

To explore this topic in more detail, I examined his memoirs published in The History of Neuroscience in Autobiography. Katz was born in 1911 in Leipzig, Germany. That made him a 21-year-old Jew when Hitler took power, which explains why he spent most of his career in England.

In elementary school, Katz obtained a classical education, with an emphasis on Latin and Greek. He wrote
During my last three school years, we had to choose between a continuation of the classical linguistic course, and a mathematically and scientifically oriented curriculum. I chose the former … It was not the lack of natural science training that I later came to regret. This deficiency was made up quite satisfactorily by excellent elementary science teaching in the preclinical university course. But the weakness of my grounding in mathematics was something for which I have never been able to compensate.
He went on to study medicine, getting his M.D. in 1934. In medical school, he studied physics with Peter Debye, mentioned several times in IPMB.
During my first year I had to make up for my total lack of knowledge in the natural sciences. The medical students joined the scientists in their elementary courses in botany, chemistry, physics, and zoology, in addition to the preclinical subjects of anatomy, physiology, and biochemistry. I found it an advantage not having taken science in high school. All the material I was presented with during my first year at the university was fresh and new, some of it taught by persons of the highest caliber, and there was a good deal that I found absolutely fascinating. I had the benefit of an outstanding physics teacher, the famous Peter Debye (who a few years later received a Nobel Prize in chemistry). He gave his lectures, accompanied by experimental demonstrations, every morning from 8 until 9. Debye was both a great scientist and a great showman who took visible pride in his lectures. He was a marvelous expositor of facts, ideas, and theories. Debye clearly enjoyed teaching as much as research, and he showed his delight in all the successful tricks that he demonstrated in class with a constant smile on his face.

I guess you don’t need a book like IPMB if you’ve got Peter Debye as your physics teacher. 

Katz was also influenced by one of the best biological physicists ever, Hermann Helmholtz. This reminds me of the influence Isaac Asimov had on me at about the same stage in my education.

I was influenced strongly by the superb collection of Helmholtz's public lectures. In these, Helmholtz--one of the greatest experimental scientists of all time--explained difficult subjects with exemplary clarity.
In medical school he became fascinated with electrophysiology, which at that time was one of biology’s more mathematical subjects.
I was attracted to neurophysiology at an early stage, from about 1930 onward. In those days, the establishment of the laws of electric excitation of nerve, and their precise mathematical formulation were regarded as a great thing… I felt it was fascinating that one could make accurate and repeatable measurements of electric excitability on living tissues and express the results by a simple mathematical equation…

Having myself been involved in the experimental tests, I can say that I found the work attractive and indeed fascinating for two quite different reasons. In the first place the work enabled one to make reproducible measurements of quite extraordinary accuracy with simple equipment. Secondly, although the verification of the theoretical equations was not by itself very fruitful, a number of discrepancies from the predictions of the simple theory gradually emerged which did have important consequences. Such discrepancies led to the recognition of the nonlinear characteristic of the nerve membrane, and of the occurrence of a regenerative voltage change even in the subthreshold range of membrane potentials (the local response), which in turn provided a clue to the mechanism whereby an impulse is initiated.
With Hitler’s rise, Katz emigrated to England and found a position in A. V. Hill’s laboratory.
I came to London to join A.V. Hill's laboratory to serve my apprenticeship with him. That time, 1935 to 1939, was the most inspiring period of my life. Hill's personality had a profound influence on me.

Hill is known for his contributions to muscle physiology, and his work had a strong mathematical component. As a student, Hill had attended Cambridge, where he studied mathematics and was Third Wrangler on the Mathematical Tripos exam. 

Katz also worked at Cambridge with Alan Hodgkin and Andrew Huxley, and it is Huxley who is known as one of the greatest mathematical biologists.

Finally, during World War II Katz worked on radar, a physics- and math-heavy subject.

In 1941 I obtained my British naturalization papers in Sydney and shortly afterwards managed to enlist with the Royal Australian Air Force (RAAF), first as a rookie, then graduating as a radar officer. Otto Schmitt had taught me some fairly advanced tricks that one could play with thermionic valves, and that helped me a great deal during my period as a radar trainee. But my four years in the RAAF taught me a great many more useful things, about electronics as well as about human beings…. During the last year of the war I was posted back to Sydney as a liaison officer at the Radiophysics Laboratory. This was quite an interesting place, housed within the University of Sydney and harboring a number of young physicists who later became Fellows of the Royal Society.

Having become a naturalised British citizen in 1941, he was accepted to join the Royal Australian Air Force in 1942 and served as a flight lieutenant in charge of running a mobile radar unit in the south-west Pacific until 1943. This posting was followed by a job back in Sydney for two years, developing radar at Sydney University’s Radio-Physics Laboratory.

To summarize, I am not sure exactly how physics and mathematics became so important in Katz’s research, but given the scientists he trained under and worked with, it’s hardly a surprise. In any case, I still find Katz’s book Nerve, Muscle, and Synapse useful now, sixty years after its first publication. And I’m quite comfortable classifying Bernard Katz as a biological physicist. 

Bernard Katz, The Fenn Lecture, 1993 

https://www.youtube.com/watch?v=hipXyxddo9s

Friday, November 28, 2025

The Ascent of Man

The Ascent of Man, superimposed on Intermediate Physics for Medicine and Biology.
The Ascent of Man,
by Jacob Bronowski.
One book that had a big impact on me when I was young is Jacob Bronowski’s The Ascent of Man. The book is based on a television series by the same name, broadcast by the British Broadcasting Corporation. You could call The Ascent of Man a history of science, but it is a rich and philosophical history, covering all the sciences. The book was published in 1973, so I probably first read it in high school.

Bronowski is an example of a type of physicist who we encounter often in Intermediate Physics for Medicine and Biology: one who made the transition to biology. He writes in the book’s foreword:

There has been a deep change in the temper of science in the last twenty years: the focus of attention has shifted from the physical to the life sciences. As a result, science is drawn more and more to the study of individuality. But the interested spectator is hardly aware yet how far-reaching the effect is in changing the image of man that science moulds. As a mathematician trained in physics, I too would have been unaware, had not a series of lucky chances taken me into the life sciences in middle age. I owe a debt for the good fortune that carried me into two seminal fields of science in one lifetime; and though I do not know to whom the debt is due, I conceived The Ascent of Man in gratitude to repay it.
The book is well written and beautifully illustrated. I highly recommend it. 

To view The Ascent of Man TV series, Part 1 click on the link below.

https://www.youtube.com/watch?v=CH7SJf8BnBI

Friday, November 21, 2025

Here Comes The Sun

Here Comes The Sun, by Bill McKibben, superimposed on Intermediate Physics for Medicine and Biology.
Here Comes The Sun,
by Bill McKibben
I recently finished Bill McKibben’s excellent book Here Comes The Sun. (McKibben and I are about the same age, so we both like the Beatles reference. The page just before the Table of Contents has a single line of text: “And I say, it’s all right.”) The subtitle is “A Last Chance for the Climate and a Fresh Chance for Civilization.” It’s one of the most optimistic climate change books I have read. After summarizing his past angst-ridden pronouncements on global warming, McKibben writes in the introduction to Here Comes The Sun, “And yet, right now, really for the first time, I can see a path forward. A path lit by the sun.”

The heart of his argument is that now, finally, wind power and especially solar power have gotten so cheap that the change to green energy will be not only virtuous but also economically advantageous. In his book, McKibben addresses four questions that are often asked by green energy skeptics. I’ll look at them one by one.

Can We Afford it?

McKibben writes

Sometime in those 10 years [between 2014 and 2024] we passed some invisible line where producing energy pointing a sheet of glass at the sun became the cheapest way to produce power, and catching the breeze the second cheapest... As the energy investor Rob Carlson put it recently, continuing to burn fossil fuel is a “self-imposed financial penalty” that will “ultimately degrade America's long-term global competitiveness.”

The gist of his argument is that with fossil fuels, you have to pay for the fuel each and every time you use it to get energy. Year after year you keep paying for coal or oil or gas. With solar and wind energy, you pay once to set up the technology and then the fuel (the sun and wind) is free. FREE! FOREVER! (Or at least for the lifetime of the solar panel or wind turbine.) I’m a cheapskate and I love free stuff. And you save the planet as a bonus. As McKibben points out, one problem is that energy becomes so cheap that energy companies can’t make money supplying it. What a wonderful problem to have.

But Can the Poor World Afford It? 

It turns out that the developing world is leapfrogging straight to solar power, skipping the centralized fossil fuel phase. Why?

The switch is being driven by the desire for reliable and affordable power. 

McKibben compares it to how cell phones allowed poor countries to skip the expensive land line infrastructure and go straight to mobile communication. Countries in Africa and the Middle East are right now putting up solar panels, with the process starting at the grass roots rather than from the top down. Who do they buy their solar panels from? China. 

But Is There Enough Stuff?

McKibben thinks the concerns about having enough raw materials, such as lithium, to build the solar panels, wind turbines, and batteries are legitimate, but probably not insurmountable. 

Yes, you have to mine lithium to build a battery. But once you've mined it, that lithium sits patiently in the battery doing its job for a decade or two (after which, as we will see, it can be recycled). If you mine coal, on the other hand, you immediately set it on fire—that's the point of coal. And then it’s gone. And then you have to go mine some more.

He says we should compare the risks and costs of mining and recycling green energy materials to the much greater risks of mining fossil fuels and dealing with their leftovers, such as coal ash.

Do We Have Enough Land?

The land needed for solar and wind is surprisingly small, especially compared to that taken up by fossil fuels. McKibben quotes an estimate that oil and gas wells, coal mines, pipelines, power plants, and the like take up about 1.3% of America’s land. Green energy will require far less. McKibben compares a solar array to a corn field.

Converting some of these [corn] fields to solar panels makes enormous ecological sense. That's because one way to look at a field of corn (or any other crop) is that it’s already an array of solar panels.  A plant is a way to convert sunshine into energy through photosynthesis... Somewhere between 1 and 3 percent of the sunlight falling on a leaf actually becomes energy. The photovoltaic panel works considerably better [20, and possibly some day up to 40, percent]...

You could supply all the energy the US currently uses by covering 30 million acres with solar panels. How much land do we currently devote to growing corn ethanol [not the corn we eat, but the corn we use to help fuel our cars]? About 30 million acres. 

The biggest threat is not a lack of land, but the not-in-my-backyard attitude so common in the USA. 

Because this is a blog about my textbook Intermediate Physics for Medicine and Biology, let’s do one of those estimation problems that Russ Hobbie and I encourage. The solar constant is 1390 W/m². That’s how much light energy from the sun per square meter reaches the earth (or, at least, the top of our atmosphere). The cross-sectional area of our planet that intercepts this light is πR², where R is the earth’s radius (6.4 × 10⁶ m). That gives 1.8 × 10¹⁷ W, or 180,000 TW (the “T” is for tera, or 10¹²). Humanity’s worldwide average power consumption is about 18 TW. So, we only need 0.01% of the solar energy available. Granted, some of that sunlight is reflected or absorbed by the atmosphere, some is incident on the ocean, and no solar panel is 100% efficient. Still, the land area needed for solar and wind farms, while not small, is reasonable. 
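For readers who want to follow the arithmetic, here is the estimate above as a few lines of Python (just a back-of-the-envelope sketch using the same round numbers from the text):

```python
import math

# Solar constant: sunlight power per unit area at the top of Earth's atmosphere
S = 1390.0           # W/m^2
R = 6.4e6            # Earth's radius, m

# Earth intercepts sunlight over its cross-sectional area, pi * R^2
intercepted_power = S * math.pi * R**2   # W, about 1.8e17 (180,000 TW)

# Humanity's worldwide average power consumption, roughly 18 TW
human_power = 18e12  # W

# Fraction of the intercepted sunlight humanity would need
fraction = human_power / intercepted_power   # about 1e-4, i.e. 0.01%

print(f"Intercepted solar power: {intercepted_power:.2e} W")
print(f"Fraction needed by humanity: {fraction:.1e}")
```

Running this confirms the numbers in the paragraph: about 1.8 × 10¹⁷ W intercepted, of which humanity needs only about one part in ten thousand.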

The Final Word

When I can, I like to give authors the final word in my blog posts. So, here is how McKibben ends Here Comes The Sun

I end this book saddened, too, of course—saddened by all that happened in the last 40 years, and by all that we haven’t done. But I also end it exhilarated. Convinced that we’ve been given one last chance. Not to stop global warming (too late for that) but perhaps to stop it short of the place where it makes civilization impossible. And a chance to restart that civilization on saner ground, once we’ve extinguished the fires that now both power and threaten it.

I’ve changed my mind. I’m gonna give George Harrison the final word.

Sun, sun, sun, here it comes.  

 

“Here Comes The Sun,” by the Beatles

https://www.youtube.com/watch?v=xUNqsfFUwhY 

   

Bill McKibben on Here Comes The Sun

https://www.youtube.com/watch?v=AADsgqz4nU4 

Friday, November 14, 2025

Mark Hallett (1943–2025)

Mark Hallett,
from the NIH Record.
Readers of this blog may remember neurologist Mark Hallett, who I featured three years ago in a post about his retirement from the National Institutes of Health. Today, I must share some sad news: Hallett died of brain cancer on November 2, 2025.

Mark Hallett was a pioneer in using transcranial magnetic stimulation to study the brain. In Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe magnetic stimulation.
8.7 Magnetic Stimulation

Since a changing magnetic field generates an induced electric field, it is possible to stimulate nerve or muscle cells without using electrodes. The advantage is that for a given induced current deep within the brain, the currents in the scalp that are induced by the magnetic field are far less than the currents that would be required for electrical stimulation. Therefore transcranial magnetic stimulation (TMS) is relatively painless. It is also safe (Rossi et al. 2009). 
Magnetic stimulation can be used to diagnose central nervous system diseases that slow the conduction velocity in motor nerves without changing the conduction velocity in sensory nerves (Hallett and Cohen 1989). It could be used to monitor motor nerves during spinal cord surgery, and to map motor brain function. Because TMS is noninvasive and nearly painless, it can be used to study learning and plasticity (changes in brain organization over time; Wassermann et al. 2008). Recently, researchers have suggested that repetitive TMS might be useful for treating disorders such as depression (O’Reardon et al. 2007) and Alzheimer’s disease (Freitas et al. 2011).
Here is what I wrote about Hallett in a review of my experience with magnetic stimulation.
One of my first tasks at NIH was to meet with two medical doctors in the National Institute of Neurological Disorders and Stroke—Mark Hallett and Leo Cohen—who had recently begun using magnetic stimulation. Hallett obtained his medical degree from Harvard and was chief of the Human Motor Control Section, housed in NIH’s famous clinical center. He is a leading figure in neurophysiology, specifically in magnetic stimulation research, and is often asked to publish tutorials about magnetic stimulation in leading journals. Hallett once told me that he began college as a physics major but switched to a pre-med program after a year or two. Cohen earned his MD from the University of Buenos Aires in Argentina. In the late 1980s, he worked in Hallett’s section, but eventually became the head of his own Human Cortical Physiology Section at NIH. Together Hallett and Cohen were doing groundbreaking research in magnetic stimulation but lacked the technical expertise in physics required to do things like calculate the electric fields produced by different coils…

Hallett and Cohen obtained a magnetic stimulator at NIH in the late 1980s. They described magnetic stimulation and its potential uses in the Journal of the American Medical Association [Magnetism: A new method for stimulation of nerve and brain. JAMA, 262, 538–541, 1989.], where they highlighted how assessment of central conduction times using magnetic stimulation could be useful for diagnosing diseases, such as multiple sclerosis, and also how the method could be suitable for monitoring the integrity of the spinal cord during surgery. They emphasized that although methods existed to measure the conduction time in the brain for sensory fibers, stimulation of the brain was needed to measure conduction times in central motor fibers.

Not entirely realizing the explosion of research I was lucky enough to be wading into, I started collaborating with Hallett and Cohen to calculate the electric fields produced during magnetic stimulation... Our first work together was a technical paper comparing the electric and magnetic fields produced by a variety of coils with different shapes… Hallett and Cohen were most interested in the electric field induced during transcranial magnetic stimulation, so my next task was to use a three-sphere model to calculate the electric field in the brain...

I was anxious to test the prediction of where excitation occurs along a peripheral nerve during magnetic stimulation [that Peter Basser and I had made]. The ideal experiment would be to dissect a nerve, place it in a dish filled with saline, and then stimulate it. However, Hallett and Cohen were focused mainly on clinical applications, so we tested the prediction in humans. The experiment was performed by Marcela Panizza, an Italian medical doctor, and her husband Jan Nilsson, a biomedical engineer originally from Denmark but working with Panizza in Italy. Panizza and Nilsson would often visit NIH to collaborate with Hallett and Cohen. In the experiment, the median nerve was stimulated at the forearm and the motor response was recorded using electrodes attached to the thumb... [They showed] that magnetic stimulation did not occur where the electric field was largest, but instead where its spatial derivative was largest.

The research at NIH was assisted by an outstanding group of young scientists who worked with Hallett and Cohen. For example, the Brazilian neurologist Joaquim Brasil-Neto examined how the orientation of the electric field influenced the stimulation threshold... Peter Fuhr analyzed how the latency of motor-evoked potentials depended on the position of the stimulating coil relative to the head... Eric Wassermann—a medical doctor who trained with Hallett and was editor of the Oxford Handbook of Transcranial Stimulation—wrote a review of safety issues... One of the most serious safety hazards was discovered by Alvaro Pascual-Leone, a Spanish MD/PhD who trained at NIH in the 1990s. Pascual-Leone and his colleagues wanted to record the electroencephalogram (EEG) during and immediately following rapid rate transcranial magnetic stimulation, so they stimulated with silver EEG recording electrodes placed over the scalp. One patient suffered a burn under an electrode.

Hallett was one of my most important collaborators throughout my career. In fact, if you look at Google Scholar to examine my most influential articles (those with over 100 citations each), Hallett was my most common coauthor (13), followed closely by Leo Cohen (11), then my PhD advisor John Wikswo (8), and finally my good friend from NIH Peter Basser (6), who also collaborated with Hallett. One could argue that no other scientist except Wikswo had such an impact on my career.

Hallett was a giant in his field of neurology. He will be missed by many, including me.

Oral History 2013: Stanley Fahn Interviews Mark Hallett

Friday, November 7, 2025

The Pardee and Riley Experiment and the Discovery of mRNA

Today I want to discuss an experiment that led to the discovery of messenger RNA (mRNA). Why did I choose to focus on one specific experiment? First, because of its importance in the history of molecular biology. Second, the experiment highlights the use of radioisotopes like those Russ Hobbie and I describe in Chapter 17 of Intermediate Physics for Medicine and Biology. Third, the recent development of mRNA vaccines for Covid and other diseases makes this a good time to review how our knowledge of mRNA was established.

A crucial experiment was performed by Arthur Pardee and Monica Riley at the University of California, Berkeley, and published in 1960. Let me provide some context and set the stage. The structure of DNA had been discovered by Watson and Crick in 1953. By 1960, scientists knew that individual genes in DNA coded for individual proteins. The question was how the genetic information got from DNA to the protein. RNA was suspected to be involved, in part because ribosomes—the stable cellular macromolecules where protein was produced—are made from RNA. Were the ribosomes the messenger, or was there something else? Many key experiments in biology, like the one by Pardee and Riley, are performed using a simple model system: E. coli bacteria. Another important tool of early modern biology was radioisotopes, a product of modern physics from the first half of the twentieth century that was essential for biology during the second half of the century. 

Since I’m neither a molecular biologist nor a historian of science, I’ll let Horace Freeland Judson—author of one of my favorite history of science books, The Eighth Day of Creation: The Makers of the Revolution in Biology—tell you about Pardee and Riley’s work.
The experiment Pardee and Riley had done in Berkeley was new, technically amusing, and persuasive. It amounted to removal of the gene from the cell after it had begun to function. They had grown… bacteria… carrying [a specific gene to produce the protein enzyme beta-galactosidase]… in a broth where the available phosphorus [an important element in DNA] was the radioactive isotope 32P. The bacteria, with their DNA heavily labeled, were then centrifuged out... [and] resuspended in a nonradioactive broth… [Next] they added glycerol [a type of antifreeze]. Then they took one sample to test for enzyme activity [to check if beta-galactosidase was produced]. They put other samples into small glass ampules, sealed the ampules by fusing the glass at the neck, and lowered them into a vacuum-insulated flask of liquid nitrogen. The bacteria were frozen almost instantly at 196 degrees below zero centigrade. Protected from bursting by the glycerol, the bacteria were not killed, but their vital processes were arrested while the radiophosphorus in the DNA… continued to decay… From day to day, Riley raised ampules of the frozen bacterial suspension from the liquid nitrogen and thawed them… For comparison, they ran the whole [experiment] in parallel without the radioactivity [this was their control].

Before telling you the result, let me digress a bit about phosphorus-32. It’s an unstable isotope that undergoes beta decay to stable sulfur-32. This means the 32P ejects an electron (and an antineutrino) and transforms to 32S. In many cases (such as in sodium-24, examined in Fig. 17.9 of IPMB), beta decay occurs to an excited state that then emits gamma rays. But 32P is “pure,” meaning there are no gamma rays, or even competing beta decay paths. The book MIRD: Radionuclide Data and Decay Schemes by Eckerman and Endo, often cited in IPMB, shows this simple process with this figure and table. 


Note that the half-life of 32P is about two weeks (14.3 days), and the average energy of the ejected electron is 695 keV.

What happens when 32P decays? First, the electron can damage the cells. An electron of this energy has a range of about a millimeter, so the damage would not be localized to an individual bacterium (with a size on the order of 0.001 mm). However, when the 32P isotope decays, it will recoil, which could eject it from the DNA molecule, causing a strand break. Even if the recoil is not strong enough to remove the atom from DNA, there would now be a sulfur atom where a phosphorus atom should be, and these two atoms, being in different columns of the periodic table, will have different chemical properties, which surely would disrupt the DNA’s structure and function. As Judson says

An atom of 32P decays by emitting a beta particle, which is a high-speed electron, whereupon it is transformed into an atom of sulphur. The transformation, and the recoil of the atom as the electron leaves, breaks the bonds of the backbone of the DNA at that point… Half of those decayed in fourteen days. The [beta-galactosidase] genes were being killed.
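As a quick check of Judson’s numbers, here is a short Python sketch of the exponential decay law for 32P (assuming the standard 14.3-day half-life; the ten-day figure comes from Pardee and Riley’s frozen-storage interval described above):

```python
# Radioactive decay of phosphorus-32 during frozen storage.
# Half-life of 32P, in days ("half of those decayed in fourteen days").
HALF_LIFE = 14.3

def fraction_decayed(t_days, t_half=HALF_LIFE):
    """Fraction of 32P nuclei that have decayed after t_days,
    from N(t) = N0 * 2^(-t / t_half)."""
    return 1.0 - 2.0 ** (-t_days / t_half)

# After the ten days of storage in liquid nitrogen, roughly 38%
# of the labeled phosphorus atoms in the bacterial DNA had decayed.
print(f"Fraction decayed after 10 days: {fraction_decayed(10):.2f}")
```

So after ten days about 38% of the radioactive phosphorus atoms had decayed, enough to cripple a large fraction of the labeled genes and account for the sharp drop in enzyme synthesis.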
So, what was the result? Judson summarizes,
The nonradioactive bacteria sampled before freezing were synthesizing enzyme copiously. So were the radioactive ones before freezing… Thawed after ten days, samples of nonradioactive bacteria synthesized beta-galactosidase just as vigorously as those never frozen. But the bacteria whose [beta-galactosidase] genes had suffered ten days of radioactive decay made the enzyme at less than half the rate they had before. Inactivation of the gene… abolished protein synthesis without delay. Stable intermediates between the gene and its protein—in other words, ribosomes whose RNA carried information to specify the sequence of amino acids—were ruled out. Continual action of the gene was necessary, either directly or by way of an intermediate that was unstable and so had to be steadily renewed.
When Francis Crick and Sydney Brenner learned of Pardee and Riley’s results, they combined their knowledge of this experiment with a previous one by Elliot Volkin and Lazarus Astrachan using bacteriophages [viruses that infect bacteria] to hypothesize that a new type of RNA, called messenger RNA, was the unstable intermediary connecting DNA and protein. And the rest is history.

The Pardee and Riley experiment (which made up Monica Riley’s PhD dissertation… wow, what a dissertation topic!) is beautiful and important. It is also relevant today. Why do mRNA vaccines (like the Pfizer and Moderna Covid vaccines) have to be kept so cold when being transported and stored before use? As Pardee and Riley showed, the mRNA is unstable. It will decay quickly if not kept ultra-cold. Can mRNA change the DNA in your cells? No, the mRNA is simply a messenger that transfers the stored genetic information in DNA to the proteins formed on ribosomes. Moreover, one difference between E. coli bacteria and human cells is that in humans the DNA is located inside the cell nucleus (bacteria don’t have nuclei) and the ribosomes are in the cytoplasm outside the nucleus. DNA can’t leave the nucleus, and mRNA can only go out of, not into, the nucleus. So an mRNA vaccine will cause human cells to make virus proteins (for the Covid vaccine, the spike protein) that will be detected by your immune system, but the mRNA will only be present a short time before it decays and will not affect your DNA. Finally, the vaccine contains mRNA for only the spike protein, not for the entire virus. So, no actual intact viruses are produced by the vaccine. The spike protein simply activates your immune system, without exposing you to an infection.

Isn’t science great?