Ten years ago, I wrote a blog post about Edward Tufte's book The Visual Display of Quantitative Information. Recently, we discussed this book in a graduate class I am teaching. One goal of the course is for students to learn how to write scientific papers, including drawing figures.
I like Tufte's advice (below) about friendly data graphics. I agree with everything he says, except for his distaste for sans serif fonts.
Advice about making friendly data graphics,
from The Visual Display of Quantitative Information,
by Edward Tufte.
I wanted to give the students examples of how to improve illustrations but I didn't want to pick on a colleague, so I found a couple figures from my own papers that could be friendlier. The first one is from an article about magnetic stimulation that Peter Basser and I wrote. On the left is the original, and on the right is my revision.
The purpose of this figure was to show where magnetic stimulation occurs. That message was in the original figure, but you had to read the caption carefully to find it. In my revision, I clearly marked the locations where depolarization (that is, excitation) and hyperpolarization occur. The big square surrounding the original figure is what Tufte would call "chartjunk" and I deleted it. Instead, I tried to focus on the data. I also labeled the nerve and the coil, so you don't have to read the figure caption to determine what's what. Including units on one of the numerical values practically eliminated the need for a caption at all. I confess, the original figure was cropped from a mediocre scan of the article. Therefore, let's not focus on which figure is crisper, but rather on the overall design. Also, the 30-year-old original data is long lost so I had to retrace the contours in PowerPoint using the polygon tool. If you look at the revised figure using high magnification, you may be able to see this. Nevertheless, in my opinion, the revised figure is better.
Another example is from a paper about the electrical stimulation of cardiac tissue.
The message of this figure is that adjacent regions of depolarization and hyperpolarization form when tissue is stimulated by a cathode. By shading the hyperpolarized region, I emphasized this message. I indicated the fiber direction, which is crucial to the main conclusion (tissue hyperpolarizes along the fiber direction). I eliminated the outer circle and subdued the coordinate axes to highlight the data, and inserted a black dot at the location of the cathode. You can decide if it's an improvement. A different version using color and containing all four quadrants is shown below.
A color version of Fig. 3f from
Sepulveda, Roth and Wikswo (1989).
Another confession: each original illustration was one frame from a multipanel figure. They might have been drawn differently were they stand-alone figures (but I doubt it).
Ion channels are macromolecular pores in cell membranes. When they evolved and what role they may have played in the earliest forms of life we still do not know, but today ion channels are most obvious as the fundamental excitable elements in the membranes of excitable cells. Ion channels bear the same relation to electrical signaling in nerve, muscle, and synapse as enzymes bear to metabolism. Although their diversity is less broad than that of enzymes, there are still many types of channels working in concert, opening and closing to shape the signals and responses of the nervous system. Sensitive but potent amplifiers, they detect the sounds of chamber music and guide the artist's paintbrush, yet also generate the violent discharges of the electric eel or the electric ray. They tell the Paramecium to swim backward after a gentle collision, and they propagate the leaf-closing response of the Mimosa plant.
More than in most areas of biology, we see in the study of ion channels how much can be learned by applying simple laws of physics. Much of what we know about ion channels is deduced from electrical measurements. Therefore it is essential to remember some rules of electricity…
Let's look at some topics covered by both IPMB and ICEM.
Toxins
Russ and I briefly mention toxins, saying “an example is tetrodotoxin (TTX), which binds to sodium channels and blocks them, making it a deadly poison.” Hille goes into more detail, explaining how toxins help separate currents and identify channels.
Pharmacological experiments with [channel blocking toxins] provided the first evidence needed to define channels as discrete entities. The magic bullet was tetrodotoxin (dubbed TTX by K.S. Cole), a paralytic poison of some puffer fish... In Japan this potent toxin attracted medical attention because puffer fish is prized there as a delicacy—with occasional fatal effects. Tetrodotoxin blocks action potential conduction in nerve and muscle. Toshio Narahashi brought a sample of TTX to John Moore’s laboratory in the United States. Their first voltage-clamp study with lobster giant axons revealed that TTX blocks INa [the sodium current] selectively, leaving IK and IL [the potassium and leak currents] untouched... Only nanomolar concentrations were needed.
Patch Clamping
Russ and I continue “the next big advance was patch-clamp recording…[which] revealed that the [ion channel] pores open and close randomly.” Hille expands on this idea.
Patch clamp ... forced a revision of the kinetic description of channel gating. At the single-channel level, the gating transitions are stochastic: they can be predicted only in terms of probabilities. Each trial with the same depolarizing step shows a new pattern of openings! Nevertheless, as Hodgkin and Huxley showed, gating does follow rules... Brief openings of Na channels are induced by repeated depolarizing steps... The openings appear after a short delay and cluster early in the sweep. When many records like this are averaged together, the ensemble average has a smoother transient time course of opening and closing, resembling the classical activation-inactivation sequence for macroscopic INa.
Hille's Figures 3.16 and 3.17—showing many individual patch clamp recordings averaged to reproduce the macroscopic Hodgkin and Huxley sodium and potassium currents—are my favorite illustrations in ICEM.
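The idea behind those figures, that jittery stochastic single-channel records average into a smooth macroscopic current, is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not Hille's scheme: each "channel" waits a random latency, opens once for a random duration, then inactivates, and the rate constants are made-up round numbers.

```python
import random

random.seed(0)  # reproducible sweeps

def single_sweep(n_steps=200, dt=0.05, tau_open=0.5, tau_close=1.0):
    """One depolarizing step applied to one channel.

    The channel waits a random latency before opening, stays open for a
    random duration, then inactivates for the rest of the sweep.
    (A toy open-once-then-inactivate scheme with assumed time constants.)
    """
    latency = random.expovariate(1.0 / tau_open)    # time to first opening
    duration = random.expovariate(1.0 / tau_close)  # open lifetime
    return [1 if latency <= t * dt < latency + duration else 0
            for t in range(n_steps)]

def ensemble_average(n_sweeps=2000, **kwargs):
    """Average many stochastic sweeps: the jumpy single-channel records
    sum to a smooth transient, like the macroscopic sodium current."""
    n_steps = kwargs.get("n_steps", 200)
    total = [0.0] * n_steps
    for _ in range(n_sweeps):
        for i, s in enumerate(single_sweep(**kwargs)):
            total[i] += s
    return [x / n_sweeps for x in total]

avg = ensemble_average()
# The average rises from near zero, peaks, then decays back toward zero,
# mimicking activation followed by inactivation.
```

Any individual sweep looks nothing like the average; only the ensemble reveals the Hodgkin-Huxley-style time course.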
Calcium Channels
Russ and I have one paragraph about calcium channels. Hille has a whole chapter.
The biophysical properties of Ca channels might have been determined by classical voltage-clamp methods if the channels occurred in high density on a reliably clampable membrane. However, these channels are never found in high density, and many of the interesting ones occupy membranes that are difficult to clamp, such as dendrites, nerve terminals, and the complex infoldings of muscle cells. Even when Ca channels are on the surface membranes, as in the cell bodies of neurons, their small currents tend to be masked by those of many other channels, especially K [potassium] channels. The ambiguities caused by these problems delayed biophysical understanding of Ca channels.
Inward Rectification
Russ and I relegate inward rectification to a homework problem. Hille discusses why inward rectifiers are important.
Axons seem to be built for metabolic economy at rest. At the negative resting potential, all their channels tend to shut, minimizing the flow of antagonistic inward and outward currents and minimizing the metabolic costs of idling. Depolarization, on the other hand, tends to open channels and dissipate ion gradients, but the inactivation of Na channels and the delayed activation of K channels in axons keeps even this expenditure at a minimum.
Consider, however, the electrical activity of a tissue that cannot rest: the heart... Its cells spend almost half their time in the depolarized state... Furthermore, each depolarization lasts 100-600 ms. Metabolic economy in this busy but slow electrical activity is achieved in two ways. First, most ion channels are present at very low densities in heart cells, so even when activated, they pass [only small] currents…
The second economy, in non-pacemaker cells of the heart, is a type of K channel, the inward rectifier, that stops conducting during depolarization. The total membrane conductance is actually lower during the plateau phase of such action potentials than during the period between action potentials... Again antagonistic current flows are minimized. Heart muscle has a variety of K channels, many of which have the property of inward rectification or of rapid inactivation.
Having coauthored a textbook, I appreciate how much work must have gone into writing ICEM.
The figures are all clear, drawn in a uniform style, with a focus on the data. To have a consistent look, you can’t just cut and paste figures from an article into a book. They had to be lovingly reproduced and reformatted.
The list of references contains about 1800 articles summarizing the literature up to 2001, the date of the most recent edition.
The language is clear and readable. Young scientists looking for an example of effective scientific writing should read ICEM.
Hille appreciates the history of his subject. Concepts are clearer when placed in historical context.
The book is authoritative because the author is a giant in his field. He received the Lasker award (America's Nobel) for his work on ion channels.
Intermediate Physics for Medicine and Biology is superior to Ion Channels of Excitable Membranes in one way: homework problems. ICEM has none and IPMB has hundreds.
When growing up in Morrison, Illinois, I was a member of the Cub Scouts and then the Boy Scouts. I enjoyed the camping, hiking, and canoeing. Each summer I spent a week at scout camp, and loved it. In the winter, we would have a Klondike Derby, which involved pushing a large sled over the snow and then camping in the cold. I was a member of Morrison’s Troop 96 and Mr. Glenn Van Eaton was our Scoutmaster; behind his back we called him “General Glenn.” One of my fondest memories was being inducted into the Order of the Arrow. At a campfire ceremony, several of us were “tapped-out” for initiation, which involved spending a night in the woods alone.
I found my old Scout Handbook—molding in a box in our basement—and looked up the requirements for the atomic energy merit badge. They are impressive. Completing this merit badge provides a good preparation for Chapter 17 of IPMB.
Make a drawing showing how nuclear fission happens. Label all details. Draw a second picture showing how a chain reaction could be started. Also show how it could be stopped. Show what is meant by “critical mass.”
Draw and color the radiation hazard symbol. Explain where it should be used and not used. Tell why and how people must use radiation or radioactive materials carefully.
Do any THREE of the following:
Build an electroscope. Show how it works. Put a radiation source inside it. Explain any difference seen.
Make a simple Geiger counter. Tell the parts. Tell which types of radiation the counter can spot. Tell how many counts per minute of what radiation you have found in your home.
Build a model of a reactor. Show the fuel, the control rods, the shielding, the moderator, and any cooling material. Explain how a reactor could be used to change nuclear into electrical energy or make things radioactive.
Use a Geiger counter and a radiation source. Show how the counts per minute change as the source gets closer. Put three different kinds of material between the source and the detector. Explain any differences in the counts per minute. Tell which is the best to shield people from radiation and why.
Use fast-speed film and a radiation source. Show the principles of autoradiography and radiography. Explain what happened to the films. Tell how someone could use this in medicine, research, or industry.
Using a Geiger counter (that you have built or borrowed), find a radiation source that has been hidden under a covering. Find it in at least three other places under the cover. Explain how someone could use this in medicine, research, agriculture, or industry.
Visit a place where X-rays are used. Draw a floor plan of the room in which it is used. Show where the unit is. Show where the unit, the person who runs it, and the patient would be when it is used. Describe the radiation dangers from X-rays.
Make a cloud chamber. Show how it can be used to see the tracks caused by radiation. Explain what is happening.
Visit a place where radioisotopes are being used. Explain by a drawing how and why they are used.
Get samples of irradiated seeds. Plant them. Plant a group of nonirradiated seeds of the same kind. Grow both groups. List any differences. Discuss what irradiation does to seeds.
Build a Geiger counter? Mom would have vetoed that!
Working on the atomic energy merit badge may have been my initial exposure to physics; the first step in a long journey. Now it is called the nuclear science merit badge. Some of the requirements are the same, but there is more emphasis on radiation hazards (for example, radon) and nuclear medicine. Probably it is even better at preparing you for Intermediate Physics for Medicine and Biology.
My dad made it to Eagle Scout when he was young, but I didn’t uphold the family tradition. I quit scouts with the rank of Life. Most boys enter high school and lose interest in scouting, but a few hang on and make it to Eagle. I was planning on being one of the few, but when we moved out of town after my sophomore year I didn't restart with a new troop. Besides, I attended high school in the post-Vietnam/Watergate era, when scouting went out of fashion. Over the years, I came to disagree with the Boy Scouts’ positions on homosexuality and religion, so I don’t regret dropping out. But when I was a kid in Morrison, those issues never came up. We just had fun.
My Order of the Arrow sash.
My 18 merit badges (left to right, then top to bottom):
Stamp Collecting, First Aid, Music,
Swimming, Cooking, Canoeing,
Rowing, Camping, Reading,
Citizenship in the Nation, Emergency Preparedness, Citizenship in the Community,
Citizenship in the World, Atomic Energy, Scholarship,
Fish and Wildlife Management, Pioneering, and Environmental Science.
Those with a silver rim are required for Eagle.
Protection from the sun certainly reduces erythema [sunburn] and probably reduces skin cancer. Protection is most important in childhood years, both because children receive three times the annual sun exposure of adults and because the skin of children is more susceptible to cancer-causing changes. The simple sun protection factor (SPF) alone is not an adequate measure of effectiveness, because it is based on erythema, which is caused mainly by UVB [ultraviolet B light, with wavelengths from 280 to 315 nm]. Some sunscreens do not adequately protect against UVA radiation [315-400 nm]. Buka (2004) reviews both sunscreens and insect repellents for children. He finds several products that adequately block both UVA and UVB. Look for a sunscreen labeled “broad spectrum” or with at least three stars in a UVA rating system. An adequate amount must be used: for children he recommends 1 fluid ounce (30 ml) per application of a product with SPF of 15 or more. The desired application of sunscreen is 2 mg cm⁻². Typical applications are about half this amount. It has been suggested that one make two applications (Teramura et al. 2012) or use a sunscreen with a very high SPF (Hao et al. 2012).
This significant action is aimed at bringing nonprescription, over-the-counter (OTC) sunscreens that are marketed without FDA-approved applications up to date with the latest science to better ensure consumers have access to safe and effective preventative sun care options. Among its provisions, the proposal addresses sunscreen active ingredient safety, dosage forms, and sun protection factor (SPF) and broad-spectrum requirements. It also proposes updates to how products are labeled to make it easier for consumers to identify key product information.
“Broad spectrum sunscreens with SPF values of at least 15 are critical to the arsenal of tools for preventing skin cancer and protecting the skin from damage caused by the sun’s rays, yet some of the essential requirements for these preventive tools haven’t been updated in decades. Since the initial evaluation of these products, we know much more about the effects of the sun and about sunscreen’s absorption through the skin. Sunscreen usage has changed, with more people using these products more frequently and in larger amounts. At the same time, sunscreen formulations have evolved as companies innovated. Today’s action is an important step in the FDA’s ongoing efforts to take into account modern science to ensure the safety and effectiveness of sunscreens,” said FDA Commissioner Scott Gottlieb, M.D. “The proposal we’ve put forward would improve quality, safety and efficacy of the sunscreens Americans use every day. We will continue to work with industry, consumers and public health stakeholders to ensure that we’re striking the right balance. To further advance these goals, we’re also working toward comprehensive OTC reform, which will help foster OTC product innovation as well as facilitate changes necessary for the FDA to keep pace with evolving science and new safety data.”
The agency is issuing this proposed rule to put into effect final monograph regulations for OTC sunscreen drug products as required by the Sunscreen Innovation Act. OTC monographs establish conditions under which the FDA permits certain OTC drugs to be marketed without approved new drug applications because they are generally recognized as safe and effective (GRASE) and not misbranded. Over the last twenty years, new scientific evidence has helped to shape the FDA’s perspective on the conditions, including active ingredients and dosage forms, under which sunscreens could be considered GRASE.
The feds then get specific.
Of the 16 currently marketed active ingredients, two ingredients – zinc oxide and titanium dioxide – are GRASE for use in sunscreens; two ingredients – PABA and trolamine salicylate – are not GRASE for use in sunscreens due to safety issues. There are 12 ingredients for which there are insufficient safety data to make a positive GRASE determination at this time.
Sunscreening agents contain titanium dioxide (TiO2), kaolin, talc, zinc oxide (ZnO), calcium carbonate, and magnesium oxide. Newer chemical compounds, such as bemotrizinol, avobenzone, bisoctizole [sic], benzophenone-3 (BZ-3, oxybenzone), and octocrylene, are broad-spectrum agents and are effective against a broad range of solar spectrum both in experimental models and outdoor settings. Ecamsule (terephthalylidene dicamphor sulphonic acid), dometrizole trisiloxane [sic], bemotrizinol, and bisoctrizole are considered organic UVA sunscreening agents... Commercial preparations available in the market include a combination of these agents to cover a wide range of UV rays.
Composition and mechanism of action of sunscreening agents vary from exerting their action through blocking, reflecting, and scattering sunlight. Chemical sunscreens absorb high-energy UV rays, and physical blockers reflect or scatter light. Multiple organic compounds are usually incorporated into chemical sunscreening agents to achieve protection against a range of the UV spectrum. Inorganic particulates may scatter the microparticles in the upper layers of skin, thereby increasing the optical pathway of photons, leading to absorption of more photons and enhancing the sun protection factor (SPF), resulting in high efficiency of the compound.
Researchers are postulating that the generation of sunlight-induced free radicals causes changes in skin; use of sunscreens reduces these free radicals on the skin, suggesting the antioxidant property. Broad-spectrum agents have been found to prevent UVA radiation-induced gene expression in vitro in reconstructed skin and in human skin in vivo.
I'm fascinated by scientists who straddle physics and biology. In particular, I'm curious how a scientist trained in one field switches to another. The quintessential physicist-turned-biologist is Francis Crick. Yes, I mean Crick of “Watson and Crick," the team who discovered the structure of DNA.
Asimov's Biographical Encyclopedia of Science and Technology
At the time, under the leadership of [Max] Perutz, a veritable galaxy of physics-minded scientists was turning to biochemistry at Cambridge and their refined probings established the science of molecular biology, a fusion of biology, chemistry, and physics…
Crick was one of the physicists who turned to biochemistry or, rather, to molecular biology, and with him was a young American, James Dewey Watson…
The Eighth Day of Creation,
by Horace Freeland Judson.
Crick is to an unusual extent self-educated in biology. He went to a minor English public school, Mill Hill, in northern London; his interest in science was already so single-minded that his family thought him odd. At University College, London, he read physics and had nearly finished his doctorate when the war broke out and a German bomb destroyed his laboratory and gear. When Crick left the Admiralty and physics in 1947, he set out to master the literature of biology, reading with an appetite that has slackened, if at all, only in the last few years [Judson published The Eighth Day of Creation in 1979]. His peers concede without question his astonishing reach. Perutz, whose knowledge is encyclopedic in scope and order: “Francis of course reads more widely than the rest of us.” Jacques Monod, the science’s other great theorist: “No one man discovered or created molecular biology. But one man dominates intellectually the whole field, because he knows the most and understands the most. Francis Crick.”
This idea that you have to read the literature resonates with me. I spent the first months of graduate school at Vanderbilt reading research articles about how nerves worked. I took no classes in neurobiology. I didn’t have a biological mentor (my PhD advisor, John Wikswo, was a physicist). I just read. That's the way many physicists learn biology.
Judson humorously tells of Crick's introduction to Perutz's laboratory:
"The first thing Francis did was to read everything we had done," Perutz said. "Then he started criticizing."
Why did Crick change to biology? Judson explains
“An important reason Crick changed to biology, he said to me, was that he is an atheist, and was impatient to throw light into the remaining shadowy sanctuaries of vitalistic illusions. ‘I had read [Erwin] Schrodinger’s little book [What is Life?], too [According to Judson, "everyone read Schrodinger."]. Essentially, if you read that book critically, the main import is very peculiar; for one thing, it’s a book written by a physicist who doesn’t know any chemistry! But the impact—there’s no doubt that Schrodinger wrote it in a compelling style, not like the junk that most people write, and it was imaginative. It suggested that biological problems could be thought about, in physical terms—and thus gave the impression that exciting things in this field were not far off. My own motives I never had any doubt about; I was very clear in my mind. Because when I decided to leave the Admiralty, when I was about thirty, then on the grounds that I knew so little anyway I might just as well go into anything I liked, I looked around for fields which would illuminate this particular point of view, against vitalism. And the two fields I chose were what we would now call molecular biology, though the term wasn’t common then, certainly I didn’t know it—but I would have said the borderline between the living and nonliving. That was the phrase I had in my mind, on the one hand. And on the other, the higher nervous system and this problem of consciousness...”
What Mad Pursuit: A Personal View of Scientific Discovery,
by Francis Crick.
Crick reminisces about his shift from physics to biology in his autobiography What Mad Pursuit.
"By the time most scientists have reached age thirty they are trapped by their own expertise. They have invested so much effort in one particular field that it is often extremely difficult, at that time in their careers, to make a radical change. I, on the other hand, knew nothing, except for a basic training in somewhat old-fashioned physics and mathematics and an ability to turn my hand to new things. I was sure in my mind that I wanted to do fundamental research rather than going into applied research...
Since I essentially knew nothing, I had an almost completely free choice...Working in the Admiralty, I had several friends among the naval officers. They were interested in science but knew even less about it than I did. One day I noticed that I was telling them, with some enthusiasm, about recent advances in antibiotics—penicillin and such. Only that evening did it occur to me that I myself really knew almost nothing about these topics...It came to me that I was not really telling them about science. I was gossiping about it.
This insight was a revelation to me. I had discovered the gossip test—what you are really interested in is what you gossip about. Without hesitation, I applied it to my recent conversations. Quickly I narrowed down my interests to two main areas: the borderline between the living and nonliving, and the workings of the brain..."
I'm not sure Crick's reflections can be generalized. To me, they imply that the choice of a research topic can be haphazard and personal.
Four good books.
Perhaps more useful are Crick's thoughts about theory in biology, which appear in the conclusion of What Mad Pursuit.
"Physicists are all too apt to look for the wrong sorts of generalizations, to concoct theoretical models that are too neat, too powerful, and too clean. Not surprisingly, these seldom fit well with the data. To produce a really good biological theory one must try to see through the clutter produced by evolution to the basic mechanisms lying beneath them, realizing that they are likely to be overlaid by other, secondary mechanisms. What seems to physicists to be a hopelessly complicated process may have been what nature found simplest, because nature could only build on what was already there...
The job of theorists, especially in biology, is to suggest new experiments. A good theory makes not only predictions, but surprising predictions that then turn out to be true. (If its predictions appear obvious to experimentalists, why would they need a theorist?) ... If this book helps anyone to produce good biological theories, it will have performed one of its main functions."
Crick is a case study in how and why a physicist switches to studying biology. Readers of Intermediate Physics for Medicine and Biology who are contemplating such a switch may benefit from his story.
Want to learn more? Listen to Crick himself discuss how he became interested in science.
Listen to Francis Crick talking about how he became interested in science.
“The Electric Field Induced
During Magnetic Stimulation.”
Magnetic stimulation has been studied widely since its use in 1982 for stimulation of peripheral nerves (Polson et al. 1982), and in 1985 for stimulation of the cortex (Barker et al. 1985). The technique consists of inducing current in the body by Faraday’s law of induction: a time-dependent magnetic field produces an electric field. The transient magnetic field is created by discharging a capacitor through a coil held near the target neuron. Magnetic stimulation has potential clinical applications for the diagnosis of central nervous system disorders such as multiple sclerosis, and for monitoring the corticospinal tract during spinal cord surgery (for review, see Hallett and Cohen 1989). When activating the cortex transcranially, magnetic stimulation is less painful than electrical stimulation.
Appendix 1.
Although there have been many clinical studies of magnetic stimulation, until recently there have been few attempts to measure or calculate the electric field distribution induced in tissue. However, knowledge of the electric field is important for determining where stimulation occurs, how localized the stimulated region is, and what the relative efficacy of different coil designs is. In this paper, the electric field induced in tissue during magnetic stimulation is calculated, and results are presented for stimulation of both the peripheral and central nervous systems.
In Appendix 1 of this article, I derived an expression for the electric field E at position r, starting from
where N is the number of turns in the coil, μ0 is the permeability of free space (4π × 10⁻⁷ H/m), I is the coil current, r' is the position along the coil, and the integral of dl' is over the coil path. For all but the simplest of coil shapes this integral can't be evaluated analytically, so I used a trick: approximate the coil as a polygon. A twelve-sided polygon looks a lot like a circular coil. You can make the approximation even better by using more sides.
A circular coil (black) approximated by
a 12-sided polygon (red).
With this method I needed to calculate the electric field only from line segments. The calculation for one line segment is summarized in Figure 6 of the paper.
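To see how well the polygon trick works, here is a small numerical sketch (my own illustration, not code from the paper). It evaluates the line integral of dl'/|r − r'| around polygonal coils by brute force, chopping each segment into short pieces, and shows that a 12-sided polygon already agrees closely with a nearly circular 96-sided one. The field point is an arbitrary choice.

```python
import math

def polygon_coil(n_sides, radius=1.0):
    """Vertices of a regular n-sided polygon approximating a circular coil."""
    return [(radius * math.cos(2 * math.pi * k / n_sides),
             radius * math.sin(2 * math.pi * k / n_sides), 0.0)
            for k in range(n_sides)]

def coil_integral(vertices, r, n_sub=200):
    """Numerically evaluate the line integral of dl'/|r - r'| around the
    coil (the quantity evaluated analytically per segment in the paper),
    using the midpoint rule on each segment."""
    total = [0.0, 0.0, 0.0]
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        for j in range(n_sub):
            t = (j + 0.5) / n_sub
            p = [a[k] + t * (b[k] - a[k]) for k in range(3)]  # point on segment
            dl = [(b[k] - a[k]) / n_sub for k in range(3)]    # piece of dl'
            dist = math.dist(r, p)
            for k in range(3):
                total[k] += dl[k] / dist
    return total

# Field point off the coil axis (chosen arbitrarily; on-axis the
# integral vanishes by symmetry for a circular coil).
r = (0.5, 0.0, 0.5)
I12 = coil_integral(polygon_coil(12), r)
I96 = coil_integral(polygon_coil(96), r)
# The 12-sided result is already within a few percent of the
# nearly circular 96-sided one.
```

The payoff of the analytic per-segment formula in the paper is that it replaces the inner numerical loop here with a closed-form expression, so the sum over segments is exact for the polygon.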
Figure 6 from “The Electric Field
Induced During Magnetic Stimulation.”
I will present the calculation as a new homework problem for IPMB. (Warning: t has two meanings in this problem: it denotes time and is also a dimensionless parameter specifying location along the line segment.)
Section 8.7
Problem 32 ½. Calculate the integral
for a line segment extending from x2 to x1. Define δ = x1 - x2 and R = r - ½(x1 + x2).
(a) Interpret δ and R physically.
(b) Define t as a dimensionless parameter ranging from -½ to ½. Show that r' equals r – R – tδ.
(e) Express the integral in terms of δ, R, and φ (the angle between R and δ).
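For readers who want to check their work, here is one way the reduction goes, a sketch using only the definitions stated in the problem (with the integral understood as the line integral of dl'/|r − r'| along the segment; consult Equation 15 of the published article for the authoritative form):

```latex
% With r' = r - R - t\,\boldsymbol{\delta}, we have |r - r'|^2
% = R^2 + 2tR\delta\cos\varphi + t^2\delta^2, and dl' = \delta\,dt, so
\int_{x_2}^{x_1} \frac{dl'}{|\mathbf{r}-\mathbf{r}'|}
  = \delta \int_{-1/2}^{1/2}
      \frac{dt}{\sqrt{\delta^2 t^2 + 2\delta R\cos\varphi\, t + R^2}} .
% Completing the square with u = \delta t + R\cos\varphi gives
% \int du/\sqrt{u^2 + R^2\sin^2\varphi} = \sinh^{-1}(u/R\sin\varphi), hence
  = \sinh^{-1}\!\left(\frac{\delta/2 + R\cos\varphi}{R\sin\varphi}\right)
  + \sinh^{-1}\!\left(\frac{\delta/2 - R\cos\varphi}{R\sin\varphi}\right),
% using the fact that \sinh^{-1} is an odd function.
```

Here δ and R denote the magnitudes of the vectors defined in the problem.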
The resulting expression for the electric field is Equation 15 in the article.
Equation (15) in “The Electric Field Induced During Magnetic Stimulation.”
The photograph below shows the preliminary result in my research notebook from when I worked at the National Institutes of Health. I didn't save the reams of scrap paper needed to derive this result.
The November 10, 1988 entry
in my research notebook.
To determine the ends of the line segments, I took an x-ray of a coil and digitized points on it. Below are coordinates for a figure-of-eight coil, often used during magnetic stimulation. The method was low-tech and imprecise, but it worked.
The November 17, 1988 entry
in my research notebook.
The calculation above gives the electric field in an unbounded, homogeneous tissue. The article also analyzes the effect of tissue boundaries on the electric field.
The integral is dimensionless. “For distances from the coil that are similar to the coil size, this integral is approximately equal to one, so a rule of thumb for determining the order of magnitude of E is 0.1 N dI/dt, where dI/dt has units of A/μsec and E is in V/m.”
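The rule of thumb is easy to apply. In the sketch below, the 15-turn coil and the dI/dt of 100 A/μs are illustrative values I've assumed, not numbers from the paper.

```python
def e_field_rule_of_thumb(n_turns, dIdt_A_per_us):
    """Order-of-magnitude electric field (V/m) near a magnetic
    stimulation coil, using the paper's rule of thumb E ~ 0.1 N dI/dt,
    with dI/dt in A/microsecond. Valid at distances comparable to the
    coil size, where the dimensionless integral is roughly one."""
    return 0.1 * n_turns * dIdt_A_per_us

# Illustrative (assumed) values: a 15-turn coil discharging at 100 A/us.
E = e_field_rule_of_thumb(15, 100.0)  # roughly 150 V/m
```

An electric field on the order of 100 V/m is in the right range to stimulate a nerve, which is why the rule of thumb is handy for quick feasibility checks.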
The inverse hyperbolic sine can be expressed in terms of logarithms: sinh⁻¹z = ln[z + √(z² + 1)]. If you're uncomfortable with hyperbolic functions, perhaps logarithms are more to your taste.
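The logarithmic form is easy to verify numerically against the standard library's inverse hyperbolic sine:

```python
import math

def asinh_via_log(z):
    """Inverse hyperbolic sine written with a logarithm:
    sinh^-1(z) = ln[z + sqrt(z^2 + 1)], valid for all real z."""
    return math.log(z + math.sqrt(z * z + 1.0))

# Agrees with math.asinh across positive, negative, and zero arguments.
checks = [abs(asinh_via_log(z) - math.asinh(z)) for z in (-3.0, -0.5, 0.0, 0.7, 10.0)]
```

Note that the argument of the logarithm, z + √(z² + 1), is positive for every real z, so the formula never misbehaves.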
To be successful in science you must be in the right place at the right time. I was lucky to arrive at NIH as a young physicist in 1988—soon after magnetic stimulation was invented—and to have two neurologists using the new technique on their patients and looking for a collaborator to calculate electric fields.
A week after deriving the expression for the electric field, I found a similar expression for the magnetic field. It was never published. Let me know if you need it.
If you look up my article, please forgive the incorrect units for μ0 given in the Appendix. They should be Henry/meter, not Farad/meter. In my defense, I had it correct in the body of the article.
Correspondence about the article was to be sent to “Bradley J. Roth, Building 13, Room 3W13, National Institutes of Health, Bethesda, MD 20892.” This was my office when I worked at the NIH intramural program between 1988 and 1995. I loved working at NIH as part of the Biomedical Engineering and Instrumentation Program, which consisted of physicists, mathematicians and engineers who collaborated with the medical doctors and biologists. Cohen and Hallett had their laboratory in the NIH Clinical Center (Building 10), and were part of the National Institute of Neurological Disorders and Stroke. Hallett once told me he began his undergraduate education as a physics major, but switched to medicine after one of his professors tried to explain how magnetic fields are related to electric fields in special relativity.
A map of the National Institutes of Health campus
in Bethesda, Maryland.
The human brain is the most complex biological structure in the known universe. Its roughly 86 billion nerve cells power all of our thoughts, perceptions, memories, emotions, and actions. It’s what inspires us to build cities and compels us to gaze at the stars.
Powered by the global neuroscience community and overseen by an editorial board of leading neuroscientists from around the world, BrainFacts.org shares the stories of scientific discovery and the knowledge they reveal. Unraveling the mysteries of the brain has the potential to impact every aspect of human experience and civilization.
Join us as we explore the universe between our ears. Because when you know your brain, you know yourself.
The brain is the most complex computational device we know in the universe…and unless we do the math, unless we use mathematical theories, there’s absolutely no way we’re ever going to make sense of it.
Chapter 6 is about how nerve axons fire action potentials, which is critical for understanding how neurons in the brain communicate with other neurons and with the rest of the body.
Chapter 9 evaluates how weak electric and magnetic fields—say, from power lines or cell phones—affect the brain, and critically assesses if these fields cause brain cancer.
Chapters 12 and 16 describe tomography, which produces exquisite images of brain anatomy.
Chapter 17 deals with positron emission tomography, which provides functional information about which regions in the brain are active.
IPMB may not reach the cutting edge of brain science as BrainFacts.org does, but it does discuss many of the technological devices and mathematical tools needed to explore the frontier.
First page of Pennes (1948) J Appl Physiol 1:93-122.
I admire scientists who straddle the divide between physics and physiology, and who are comfortable with both mathematics and medicine. In particular, I am interested in how such interdisciplinary scientists are trained. Many, like myself, are educated in physics and subsequently shift focus to biology. But more remarkable are those (such as Helmholtz and Einthoven) who begin in biology and later contribute to physics.
Dr. Harry H. Pennes.—Dr. Harry H. Pennes [born 1918], who had been active in clinical work and research in psychiatry and neurology, died in November, 1963, at his home in New York City at the age of 45. Dr. Pennes had worked with Dr. Paul H. Hoch and Dr. James Cattell at the Psychiatric Institute of New York Columbia-Presbyterian Medical Center on new techniques of research and medical experimentation.
Dr. Pennes was born in Philadelphia and studied medicine at the University of Pennsylvania where he received a degree in 1942. In 1944 he came to New York to do research at the Neurological Institute. Soon afterward he took a two-year residency at the New York State Psychiatric Institute, and he later joined the staff as Senior Research Psychiatrist. He was also the Research Associate in Psychiatry at Columbia University. At Morris Plains, N. J., Dr. Pennes participated in intensive studies in the Columbia-Greystone Brain Research Project. He did research into chemical warfare from 1953 to 1955 at the Army Chemical Center in Maryland. Later, in Philadelphia, he was Director of Clinical Research for the Eastern Pennsylvania Psychiatric Institute for several years. He subsequently returned to New York a few years ago and resumed private practice.
It can be argued that one of the most influential articles ever published in the Journal of Applied Physiology is the “Analysis of tissue and arterial blood temperatures in the resting human forearm” by Harry H. Pennes, which appeared in Volume 1, No. 2, published in August, 1948. Pennes measured the radial temperature distribution in the forearm by pulling fine thermocouples through the arms of nine recumbent subjects. He also conducted an extensive survey of forearm skin temperature and measured rectal and brachial arterial temperatures. The purpose of Pennes’ study was “to evaluate the applicability of heat flow theory to the forearm in basic terms of the local rate of tissue heat production and volume flow of blood.” An important feature of Pennes’ approach is that his microscopic thermal energy balance for perfused tissue is linear, which means that the equation is amenable to analysis by various methods commonly used to solve the heat-conduction equation. Consequently, it has been adopted by many authors who have developed mathematical models of heat transfer in the human. For example, I used the Pennes equation to analyze digital cooling in 1958 and developed a whole body human thermal model in 1961. The equation proposed by Pennes is now generally known either as the bioheat equation or as the Pennes equation.
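The equation the passage refers to is, in modern textbook notation (the symbols below follow common usage today, not necessarily Pennes’s original paper):

```latex
\rho c \frac{\partial T}{\partial t}
  = k \nabla^2 T
  + \rho_b c_b\, \omega \left( T_a - T \right)
  + q_{\mathrm{met}}
```

Here T is the tissue temperature, k the tissue thermal conductivity, ρ_b and c_b the density and specific heat of blood, ω the blood perfusion rate, T_a the arterial temperature, and q_met the metabolic heat production. The perfusion term ρ_b c_b ω(T_a − T) is Pennes’s contribution, and because it is linear in T the whole equation stays tractable by standard heat-conduction methods.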
So, how did a psychiatrist make a fundamental contribution to physics? I don’t know. Indeed, I have many questions about this fascinating man.
Did he work together with a mathematician? No. Pennes was the sole author on the paper. There was no acknowledgment thanking a physicist friend or an engineer buddy. The evidence suggests the work was done by Pennes alone.
Did he merely apply an existing model? No. He was the first to include a term in the heat equation to account for convection by flowing blood. He cited a previous study by Gagge et al., but their model was far simpler than his. He didn’t just adopt an existing equation, but rather developed a new and powerful mathematical model.
Was the mathematics elementary? No. He solved the heat equation in cylindrical coordinates. The solution of this partial differential equation included Bessel functions with imaginary arguments (aka modified Bessel functions). He didn’t cite a reference about these functions, but introduced them as if they were obvious.
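Modified Bessel functions are less exotic than they sound. The zeroth-order one, I₀(x), which appears in radial solutions of the heat equation in cylindrical coordinates, can be computed from a simple power series. This sketch is my own illustration, not code from Pennes’s paper:

```python
# Modified Bessel function of the first kind, order zero:
#   I0(x) = sum over k >= 0 of (x^2/4)^k / (k!)^2
# Each term is obtained from the previous one by multiplying
# by (x^2/4) / (k+1)^2, so no factorials are computed explicitly.

def bessel_i0(x, n_terms=30):
    """Series evaluation of I0(x); accurate for moderate x."""
    total = 0.0
    term = 1.0  # the k = 0 term
    for k in range(n_terms):
        total += term
        term *= (x * x / 4.0) / ((k + 1) ** 2)
    return total

# bessel_i0(1.0) is approximately 1.2661, matching tabulated values of I0(1).
```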
Was his paper entirely theoretical? No. The paper was primarily experimental and the math appeared late in the article when interpreting the results.
Were the experiments easy? No, and they were a little gross. They required threading thermocouples through the arm with no anesthesia. Pennes claimed the “phlegmatic subjects occasionally reported no unusual pain.” I wonder what the nonphlegmatic subjects reported.
Was Pennes’s undergraduate degree in physics? I don’t know.
Did Pennes’s interest in math arise late in his career? No. His famous 1948 paper was submitted a few weeks before his 30th birthday.
Did Pennes work at an institution out of the mainstream that might promote unusual or quirky career paths? No. Pennes worked at Columbia University’s College of Physicians and Surgeons, one of the oldest and most respected medical schools in the country.
Did Pennes pick up new skills while in the military? Probably not. He was 23 years old when the Japanese attacked Pearl Harbor, but I can’t find any evidence he served in the military during World War II. He earned his medical degree in 1942 and arrived at Columbia in 1944.
Do other papers published by Pennes suggest an expertise in math? I doubt it. I haven’t read them all, but most study how drugs affect the brain. In fact, his derivation of the bioheat equation seems so out of place that I’ve entertained the notion that there were two researchers named Harry H. Pennes at Columbia University.
Did Pennes’s subsequent career take advantage of his math skills? Again, I am not sure, but my guess is no. The Columbia-Greystone Brain Project is famous for demonstrating that lobotomies are not an effective treatment for brain disorders. Research on chemical warfare would have required expertise in toxicology, not mathematics.
How did Pennes die? According to Wikipedia, he committed suicide. What a tragic loss of a still-young scientist!
I fear my analysis of Harry Pennes provides little insight into how biologists or medical doctors can contribute to physics, mathematics, or engineering. If you know more about Pennes’s life and career, please contact me (roth@oakland.edu).
Even though Harry Pennes’s legacy is the bioheat equation, my guess is that he would’ve been shocked that we now think of him as a biological physicist.
I am an emeritus professor of physics at Oakland University, and coauthor of the textbook Intermediate Physics for Medicine and Biology. The purpose of this blog is specifically to support and promote my textbook, and in general to illustrate applications of physics to medicine and biology.