Friday, September 21, 2018

Quick Calculus

In Intermediate Physics for Medicine and Biology, Russ Hobbie and I assume the reader knows calculus. Some readers, however, have weak or rusty math skills. Is there an easy way to learn what is needed?

Quick Calculus is a self-teaching guide written by Daniel Kleppner and Norman Ramsey
Quick Calculus.
Yes! Quick Calculus is a self-teaching guide written by Daniel Kleppner and Norman Ramsey. Their preface states:
Before you plunge into Quick Calculus, perhaps we ought to tell you what it is supposed to do. Quick Calculus should teach you the elementary techniques of differential and integral calculus with a minimum of wasted effort on your part; it is designed for you to study by yourself. Since the best way for anyone to learn calculus is to work problems, we have included many problems in this book. You will always see the solution to your problem as soon as you have finished it, and what you do next will depend on your answer. A correct answer generally sends you to new material, while an incorrect answer sends you to further explanations and perhaps another problem.
The book covers nearly all the calculus needed in IPMB.
  • Chapter One reviews functions and graphs, emphasizing trigonometry, exponentials, and logarithms.
  • Chapter Two discusses differentiation—including the product rule and the chain rule—and maximum/minimum problems.
  • Chapter Three analyzes integration, both definite and indefinite, and covers techniques such as change of variable, integration by parts, and multiple integrals.
  • Chapter Four summarizes all the results in a few pages.
Math Book useful for Intermediate Physics for Medicine and Biology
Math Books Useful for IPMB.
The only calculus in IPMB that Quick Calculus doesn’t teach is vector calculus; for that you should consult Div, Grad, Curl and All That. Used Math covers more ground than Quick Calculus, but it is a handbook rather than a self-teaching guide.

Quick Calculus has several virtues. It is clearly written, it emphasizes understanding math visually with lots of plots, and it focuses on utilitarian techniques without distracting rigor. If you want to understand math at a fundamental level, you should take a real calculus class. If you want to brush up on what's needed to get through IPMB, use Quick Calculus.

One disadvantage is that Quick Calculus is old. The second edition—the most recent one I am aware of—was published in 1985. It might be difficult to purchase, although Amazon seems to have copies for sale. The authors make quaint comments about "readers who have an electronic calculator," as opposed to slide rules, I suppose. I also found several typos, which might frustrate readers using the book for self-study.

A sample from Quick Calculus
A sample from Quick Calculus.
The format is unusual. The text is divided into approximately half-page “frames,” and the reader is guided from one frame to the next. Someone should put this book online, because it would lend itself to an interactive online format. Rather than explain how the book is organized, I've taken Section 1.17 of IPMB and rewritten it in the style of Quick Calculus (see below). In my opinion, if all of Intermediate Physics for Medicine and Biology were organized like this it would be tedious. What do you think?

Friday, September 14, 2018

Gulliver was a Bad Biologist

Gulliver's Travels
Gulliver's Travels by Jonathan Swift.
Most of my reading is nonfiction, but recently I read Jonathan Swift’s Gulliver’s Travels. The story describes Englishman Lemuel Gulliver's journeys to exotic lands, including Lilliput, inhabited by tiny people, and Brobdingnag, where giants live. Swift was a delightful and funny writer, but Florence Moog claims Gulliver was a Bad Biologist (Scientific American, Volume 179, November 1948, Pages 52-55). The problem is scaling, which Russ Hobbie and I discuss in Chapter 2 of Intermediate Physics for Medicine and Biology. The properties of animals change as they get bigger or smaller; you can’t just scale people up or down and expect them to function correctly. As Moog writes, “for a student of comparative biology Gulliver’s book may serve as an unpremeditated textbook on biological absurdities.”

Gulliver was a Bad Biologist
Gulliver was a Bad Biologist, by Florence Moog.
Moog’s first example was the 60-foot-tall Brobdingnagians. She notes that because their mass increases as the cube of their height, supporting their body would “necessitate a truly ponderous skeleton” (a point I’ve discussed before in this blog when contemplating elephants). The giants would need thick, stubby legs and fat bones.

Gulliver's Travels Title Page
Title Page of Gulliver's Travels.
Moog then considers the six-inch-tall Lilliputians. “If the Brobdingnagians were too big to exist, the mouse-sized Lilliputians were too small to be human.” She explains that smaller animals have a higher specific metabolic rate (that is, rate per unit mass) than larger animals. “Gulliver … failed to realize that the creatures of his invention would have spent the larger part of their time stuffing themselves with food.”

Why was I reading Gulliver’s Travels? Blame Neil deGrasse Tyson. The Public Broadcasting Service is sponsoring the Great American Read this summer, where we vote for our favorite of one hundred famous books. In their Launch Special, various celebrities select their personal favorite, and Tyson—one of the few scientists featured on the special—chose Gulliver. Apparently he hasn't studied Chapter 2 of IPMB. Regular readers of this blog know that I am a fan of Isaac Asimov, and I have been voting for his Foundation Series twice a day (once using the Firefox browser, and once using Safari) all summer.

Neil deGrasse Tyson likes Gulliver's Travels
Neil deGrasse Tyson discussing Gulliver's Travels.
Maybe Tyson has a point. Moog concludes that “after all, we must not be too hard on Gulliver for failing to understand the biological conditions that made him a man—and an implausible liar. His talents … were in the psychological realm.” His satirical story provides great insight into human behavior.

Friday, September 7, 2018

Microwave Weapons are Prime Suspect in Ills of U.S. Embassy Workers

Last Saturday, The New York Times published an article by Pulitzer Prize-winning science writer William Broad with the headline Microwave Weapons are Prime Suspect in Ills of U.S. Embassy Workers.
“Doctors and scientists say microwave strikes may have caused sonic delusions and very real brain damage among embassy staff and family members.”
The article has made quite a splash; I even heard about it on the news.

This topic is relevant to Intermediate Physics for Medicine and Biology, so I'll address it in this post. I hesitate, however, because the science is uncertain and the topic of electromagnetic effects on health is fraught with conspiracy theories and voodoo science. Yet, the issue has more than academic importance; U.S.-Cuban relations suffered because of these unexplained health effects. So, reluctantly, I wade in.

I begin with a report from last March in the prestigious Journal of the American Medical Association (JAMA) by Swanson et al. about Neurological Manifestations Among US Government Personnel Reporting Directional Audible and Sensory Phenomena in Havana, Cuba (Volume 319, Pages 1125-1133).
  • Question: Are there neurological manifestations associated with reports of audible and sensory phenomena among US government personnel in Havana, Cuba? 
  • Findings: In this case series of 21 individuals exposed to directional audible and sensory phenomena, a constellation of acute and persistent signs and symptoms were identified, in the absence of an associated history of blunt head trauma. Following exposure, patients experienced cognitive, vestibular, and oculomotor dysfunction, along with auditory symptoms, sleep abnormalities, and headache. 
  • Meaning: The unique circumstances of these patients and the consistency of the clinical manifestations raised concern for a novel mechanism of a possible acquired brain injury from a directional exposure of undetermined etiology.
The article's claim of cognitive dysfunction has been hotly debated. A post in the blog Neuroskeptic was….er….skeptical. It concludes
“Overall … the JAMA paper is pretty weak. Clearly, something has happened to make these 21 people experience so many unpleasant symptoms, but at present I don’t think we can rule out the possibility that the cause is psychological in nature.”
Last week's New York Times article was triggered by the recently proposed hypothesis that microwaves are responsible for these health issues. Russ Hobbie and I discuss the biological effects of electric and magnetic fields in Section 9.10 of IPMB. We focus on the potential of microwaves to induce tumors, and conclude that nonthermal mechanisms are implausible. In other words, radiofrequency fields can heat tissue—just like in your microwave oven—but they don’t cause cancer. The hypothesis touted in the Times article, however, is a thermal mechanism: a thermoelastic pressure wave sensed as sound by part of the inner ear called the cochlea.

Hearing induced by microwaves has been studied for years, and is known as the “Frey effect” after Allen Frey, who first reported it. A 2007 article in the journal Health Physics by James Lin and Zhangwei Wang (Volume 92, Pages 621-628) describes this phenomenon.
Hearing of Microwave Pulses by Humans and Animals: Effects, Mechanism, and Thresholds

The hearing of microwave pulses is a unique exception to the airborne or bone-conducted sound energy normally encountered in human auditory perception. The hearing apparatus commonly responds to airborne or bone-conducted acoustic or sound pressure waves in the audible frequency range. But the hearing of microwave pulses involves electromagnetic waves whose frequency ranges from hundreds of MHz to tens of GHz. Since electromagnetic waves (e.g., light) are seen but not heard, the report of auditory perception of microwave pulses was at once astonishing and intriguing. Moreover, it stood in sharp contrast to the responses associated with continuous-wave microwave radiation. Experimental and theoretical studies have shown that the microwave auditory phenomenon does not arise from an interaction of microwave pulses directly with the auditory nerves or neurons along the auditory neurophysiological pathways of the central nervous system. Instead, the microwave pulse, upon absorption by soft tissues in the head, launches a thermoelastic wave of acoustic pressure that travels by bone conduction to the inner ear. There, it activates the cochlear receptors via the same process involved for normal hearing. Aside from tissue heating, microwave auditory effect is the most widely accepted biological effect of microwave radiation with a known mechanism of interaction: the thermoelastic theory. The phenomenon, mechanism, power requirement, pressure amplitude, and auditory thresholds of microwave hearing are discussed in this paper. A specific emphasis is placed on human exposures to wireless communication fields and magnetic resonance imaging (MRI) coils.
Their introduction gives some useful numbers.
The microwave auditory phenomenon or microwave hearing effect pertains to the hearing of short-pulse, modulated microwave energy at high peak power by humans and laboratory animals (Frey 1961, 1962; Guy et al. 1975a, b; Lin 1978, 1980, 2004). The effect can arise, for example, at an incident energy density threshold of 400 mJ m⁻² for a single, 10-µs-wide pulse of 2,450 MHz microwave energy, incident on the head of a human subject (Guy et al. 1975a, b; Lin 1978). It has been shown to occur at a specific absorption rate (SAR) threshold of 1.6 kW kg⁻¹ for a single 10-µs-wide pulse of 2,450 MHz microwave energy. A single microwave pulse can be perceived as an acoustic click or knocking sound, and a train of microwave pulses to the head can be sensed as an audible tune, with a pitch corresponding to the pulse repetition rate (Lin 1978).
The temperature increase caused by such a microwave pulse is rapid (microseconds) and tiny (microdegrees Celsius), and the associated pressure is small (tenths of a Pascal, or equivalently millionths of an atmosphere). People can hear these sounds because the cochlea is so sensitive.
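These orders of magnitude are easy to check. Here is a minimal sketch using the SAR threshold quoted above (1.6 kW/kg for a 10-µs pulse); the specific heat of soft tissue is an assumed, typical value of about 3500 J/(kg K):

```python
# Order-of-magnitude estimate of the temperature rise from a single
# microwave pulse at the auditory threshold, assuming the heating is
# adiabatic (too fast for heat to flow away).
sar = 1.6e3        # specific absorption rate, W/kg (threshold from Lin and Wang)
pulse = 10e-6      # pulse duration, s
c_tissue = 3500.0  # specific heat of soft tissue, J/(kg K)  [assumed value]

delta_T = sar * pulse / c_tissue  # temperature rise, K
print(f"Temperature rise per pulse: {delta_T:.2e} K")
```

The result is a few microdegrees Celsius, consistent with the numbers quoted above.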

One reason that microwaves might be a more plausible mechanism than sound waves for the apparent embassy attacks is acoustic impedance, discussed in Chapter 13 of IPMB. Air and water have very different impedances. When a sound wave impinges on a person, most of the acoustic energy is lost by reflection, and little (perhaps one part in a thousand) enters the fluid-filled body. Animals have evolved elaborate structures in the middle ear to mitigate this acoustic mismatch. However, a pressure wave caused by microwave heating originates inside the ear. No energy is lost by sound reflecting from the air-tissue interface.
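The "one part in a thousand" figure follows from the intensity transmission coefficient for normal incidence, T = 4 Z₁Z₂/(Z₁ + Z₂)², using standard textbook impedance values for air and water:

```python
# Fraction of sound energy transmitted from air into water (normal
# incidence), T = 4*Z1*Z2 / (Z1 + Z2)^2. Impedances are standard
# textbook values; water stands in for the fluid-filled body.
Z_air = 413.0      # acoustic impedance of air, Pa s/m
Z_water = 1.48e6   # acoustic impedance of water, Pa s/m

T = 4 * Z_air * Z_water / (Z_air + Z_water) ** 2
print(f"Transmitted fraction: {T:.2e}")  # about one part in a thousand
```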

I am no expert on thermoelastic effects, but it seems plausible that they could be responsible for the perception of sound by embassy workers in Cuba. By modifying the shape and frequency of the microwave pulses, you might even induce sounds more distinct than vague clicks. However, I don’t know how you get from little noises to brain damage and cognitive dysfunction. My brain isn't damaged by listening to clicky sounds. Either there is more to this than I understand, or—as Neuroskeptic speculates—the rest of the cause is “psychological in nature.”

Right now, our country could use a hard-nosed scientist or engineer expert in the bioeffects of microwave radiation to look into this problem. Where have you gone John Moulder and Ken Foster? We need you!

Friday, August 31, 2018

Interface between Physics and Biology: Training a New Generation of Creative Bilingual Scientists

The journal Trends in Cell Biology publishes a type of article called “Scientific Life”. The journal website states:
Interface between Physics and Biology: Training a New Generation of Creative Bilingual Scientists by Daniel Riveline and Karsten Kruse
Scientific Life articles are short pieces that aim to discuss important issues pertaining to the scientific community or the advancement of science. The content of these articles could range from focused topics such as an unusual career path, to more broad topics such as education and training policies, ethics, publishing, funding, etc. These articles should be aimed at a broad audience and written in a journalistic style and are also intended to be provocative and to stimulate debate.
In June 2017, Daniel Riveline and Karsten Kruse published a Scientific Life article that's particularly relevant to Intermediate Physics for Medicine and Biology.
Interface between Physics and Biology: Training a New Generation of Creative Bilingual Scientists

Daniel Riveline and Karsten Kruse

Whereas physics seeks for universal laws underlying natural phenomena, biology accounts for complexity and specificity of molecular details. Contemporary biological physics requires people capable of working at this interface. New programs prepare scientists who transform respective disciplinary views into innovative approaches for solving outstanding problems in the life sciences.
Riveline and Kruse highlight two physicists who contributed to biology: Hermann von Helmholtz and Max Delbrück. Then they ask: how do we train scientists like these?
This necessity for a thorough understanding of physics concepts and a broad knowledge of genuine biology to make contributions in the spirit of Helmholtz and Delbrück calls for a new way of training the coming generation in this interdisciplinary field.
They conclude
We need translators who are able to rephrase a specific biological phenomenon in the language of physics and vice versa.
I like this idea of "translators," and I believe that Intermediate Physics for Medicine and Biology helps train them. In Section 1.2 of IPMB, Russ Hobbie and I express our view of how to translate between physics and biology.
Biologists and physicists tend to make models differently (Blagoev et al. 2013). Biologists are used to dealing with complexity and diversity in biological systems. Physicists seek to explain as many phenomena with as few overarching principles as possible. Modeling a process is second nature to physicists. They willingly ignore some features of the biological system while seeking these principles. It takes experience and practice to decide what can be simplified and what can not.
IPMB's way of preparing students to work at the interface between physics and biology is to analyze examples that capture some important biological idea using simple mathematical tools: toy models. We stress that
In many cases, simple models are developed in the homework problems at the end of each chapter. Working these problems will provide practice in the art of modeling.
Do your homework! Those problems are the most important part of the book.

Riveline and Kruse conclude that training scientists at the intersection of physics and biology is crucial.
Scientific, educational, and administrative challenges abound in this endeavor to form upcoming generations of scientists at the interface between physics and biology, but we anticipate that the gain in quality for this interdisciplinary field will benefit science in general and throughout the world. The need for such scientists appears to be essential to answer the new challenges in biology.
I concur.

Friday, August 24, 2018

Baring the Sole: The Rise and Fall of the Shoe-Fitting Fluoroscope

A homework problem in Chapter 16 of Intermediate Physics for Medicine and Biology asks the student to estimate the dose experienced by customers exposed to x-rays when buying shoes.
Problem 8. During the 1930s and 1940s it was popular to have an x-ray fluoroscope unit in shoe stores to show children and their parents that shoes were properly fit. These marvellous units were operated by people who had no concept of radiation safety and aimed a beam of x-rays upward through the feet and right at the reproductive organs of the children! A typical unit had an x-ray tube operating at 50 kVp with a current of 5 mA.
(a) What is the radiation yield for 50-keV electrons on tungsten? How much photon energy is produced with a 5-mA beam in a 30-s exposure?
(b) Assume that the x-rays are radiated uniformly in all directions (this is not a good assumption) and that the x-rays are all at an energy of 30 keV. (This is a very poor assumption.) Use the appropriate values for striated muscle to estimate the dose to the gonads if they are at a distance of 50 cm from the x-ray tube. Your answer will be an overestimate. Actual doses to the feet were typically 0.014–0.16 Gy. Doses to the gonads would be less because of 1/r2. Two of the early articles pointing out the danger are Hempelmann (1949) and Williams (1949).
To learn more about using x-rays to fit shoes, see the wonderfully titled article “Baring the Sole: The Rise and Fall of the Shoe-Fitting Fluoroscope” (Isis, 91:260-282, 2000) by Jacalyn Duffin and Charles Hayter. The abstract states
One of the most conspicuous nonmedical uses of the x-ray was the shoe-fitting fluoroscope. It allowed visualization of the bones and soft tissues of the foot inside a shoe, purportedly increasing the accuracy of shoe fitting and thereby enhancing sales. From the mid 1920s to the 1950s, shoe-fitting fluoroscopes were a prominent feature of shoe stores in North America and Europe. Despite the widespread distribution and popularity of these machines, few have studied their history. In this essay we trace the origin, technology, applications, and significance of the shoe-fitting fluoroscope in Britain, Canada, and the United States. Our sources include medical and industrial literature, oral and written testimony of shoe retailers, newspapers, magazines, and government reports on the uses and dangers of these machines. The public response to shoe-fitting fluoroscopes changed from initial enthusiasm and trust to suspicion and fear, in conjunction with shifting cultural attitudes to radiation technologies.
Why use x-rays to size loafers? Duffin and Hayter claim “the shoe-fitting fluoroscope was nothing more nor less than an elaborate form of advertising designed to sell shoes.” They say that the device was “aimed especially at mothers…the fluoroscope became yet another instrument of experts’ advice about ‘scientific motherhood.’” These fluoroscopes were rather fancy: “Like an altar to commerce, it became a featured part of the décor in high-class stores, situated on a specially lit and often elevated ‘fitting platform’…Whether in a traditional mahogany finish or art deco shapes and colors, the design responded to the demands of interior decorating.” But these x-ray sources were dangerous. “Store personnel and the adult and child customers were at risk of stunted growth, dermatitis, cataracts, malignancy, and sterility.” The papers by Louis Hempelmann and Charles Williams were the turning point.
“In 1949, two landmark articles on the hazards of shoe-fitting fluoroscopes appeared in the 1 September issue of the New England Journal of Medicine. The first, by Charles R. Williams of the Harvard School of Public Health, contained actual measurements of the high and inconsistent radiation outputs of twelve operating machines. The second, by L. H. Hemplemann [sic], also of Harvard, described the dangers of the uncontrolled use of shoe-fitting fluoroscopes, including interference with foot development in children and radiation damage to the skin and bone marrow.”
A shoe fluoroscope displayed at the US National Museum of Health and Medicine
A shoe fluoroscope displayed at the US National Museum of Health and Medicine. This machine was manufactured by Adrian Shoe Fitter, Inc. circa 1938 and used in a Washington, D.C. shoe store. From Wikipedia.
We don't have much evidence indicating how this exposure affected people's health. It was not lethal like that suffered by the Radium Girls, who painted luminous dials using radium-based paint, but everyone buys shoes, so millions of people were exposed. The risk of widespread low-dose radiation is difficult to assess, especially years later.

By the early 1960s the fad was over. I was born in 1960. Yikes! I just missed getting zapped. Did you?

Friday, August 17, 2018

Scientific Babel

Intermediate Physics for Medicine and Biology, and this blog, are written in English. As far as I am aware, no one has ever translated the book into another language. A few of you may be reading the blog using a program like Google translate, but I doubt it. English is now the universal language of science, so anyone interested in science blogs can probably read what I write.

How did English become so dominant? That story is told in Scientific Babel: How Science was Done Before and After Global English, by Michael Gordin. Much of the book is summarized by the illustration below, adapted from Gordin’s Figure 1.
Percentage of the global scientific literature for several languages versus time.
Percentage of the global scientific literature for several languages versus time. Adapted from Fig. 0.1 in Scientific Babel, by Michael Gordin.
In his introduction, Gordin writes:
“English is dominant in science today, and we can even say roughly how much. Sociolinguists have been collecting data for the past several decades on the proportions of the world scientific literature that are published in various tongues, which reveal a consistent pattern. Fig. 0.1 exhibits striking features, and most of the chapters of this book—after an introductory chapter about Latin—move across the same years that are plotted here. In each chapter, I focus on a language or set of languages in order to highlight the lived experience of scientists, and those features are sometimes obscured as well as revealed by these curves. Starting from the most recent end of this figure and walking back, we can begin to uncover elements of the largely invisible story. The most obvious and startling aspect of this graph is the dramatic rise of English beginning from a low point at 1910. The situation is actually even more dramatic than it appears from this graph, for these are percentages of scientific publication—slices of a pie, if you will—and that pie is not static. On the contrary, scientific publication exploded across this period, which means that even in the period from 1940 to 1970 when English seems mostly flat, it is actually a constant percentage of an exponentially growing baseline. By the 1990s, we witness a significant ramp-up on top of an increasingly massive foundation: waves on top of deluges on top of tsunamis of scientific English. This is, in my view, the broadest single transformation in the history of modern science, and we have no history of it. That is where the book will end, with a cluster of chapters focusing on the phenomenon of global scientific English, the way speakers of other once dominant languages (principally French and German) adjusted to the change, preceded by how Anglophones in the Cold War confronted another prominent feature of the midpoint of the graph (1935-1965): the dramatic growth of scientific Russian.

But on second glance, one of the most interesting aspects of this figure is how much of it is not about English, how the story of scientific language correlates with, but does not slavishly follow, the trajectory of globalization. Knowledge and power are bedfellows; they are not twins. Simply swinging our gaze leftward across the graph sets aside the juggernaut of English and allows other, overshadowed aspects of these curves (such as the rise of Russian) to come to the fore. Before Russian, in the period 1910 to 1945, the central feature of the graph is no longer English but the prominent rise and decline of German as a scientific language. German, according to this figure, was the only language ever to overtake English since 1880, and during that era a scientist would have had excellent grounds to conclude that German was well poised to dominate scientific communication. The story of the twentieth century, which from the point of view of the history of globalization is ever-rising English, from the perspective of scientific languages might be better reformulated as the decline of German. That decline started, one can see, before the advent of the Nazi regime in 1933, and one of the main arguments of this book is that the aftermath of World War I was central in cementing both the collapse of scientific German and the ballistic ascent of English. We can move further left still, and in the period from 1880 to 1910 we see an almost equal partition of publications, hovering around 30% apiece for English, French, and German, a set I will call the 'triumvirate.' (The existence of the triumvirate is simply observed as a fact in this book; I do not propose to trace the history of its emergence.) French underwent a monotonic decline throughout the twentieth century; one gets the impression (although the data is lacking) that this decline began before our curve does, but to participants in the scientific community at the beginning of our modern story, it appeared stable. 
My narrative for this earlier period comes in two forms: the emergence of Russian, with a minor peak in the late nineteenth century, as the first new language to threaten to seriously destabilize the triumvirate; and the countervailing alternative (never broadly popular but still quite revealing in microcosm) to replace the multilingual scientific communication system with one conducted in a constructed language such as Esperanto. Long before all of this data, all of these transformations, there was Latin, and that is where the book properly begins.

For all the visual power of the graph, most of this book pushes against its most straightforward reading: the seemingly inexorable rise of English. Behind the graph lie a million stories, and it is history's task to uncover them….”
I am not skilled with languages. In fact, one of my favorite jokes is to brag that “between my wife and I, we know five languages!” My wife Shirley speaks Mandarin Chinese, Fukienese (another dialect of Chinese), Tagalog (the language of the Philippines), Spanish, and English. The punchline, of course, is that I know only English.

We scientists who grew up speaking English are lucky; we don’t need to learn a foreign language to read modern scientific papers. This isn't fair, but that's the way it is. I tell my international graduate students that they must learn to write English well, or their careers will suffer. Scientists are judged by their journal articles and grant proposals, and both are documents written in English. I review many papers for journals, and I complain obnoxiously in my critique if the manuscript's English is not clear. Pity the poor soul who has me as their referee.

Although I don’t speak any foreign languages, that doesn't mean I have never studied any. In high school I took three years of Latin. I translated sections of Caesar’s Gallic Wars and Cicero’s speeches against Catiline, but slowly and always with my Latin-English dictionary at my side. I never could simply read Latin; I would laboriously translate Latin into English, and then read the English to figure out what the text was talking about. Although my Latin was never fluent, I did learn much about Roman culture. In my junior year of high school, I had the top score in a statewide Junior Classical League exam about Roman history. I love Isaac Asimov's science fiction and popularizations, but the first books by Asimov that I read were histories: his two volume set The Roman Republic and The Roman Empire.

I ought to know German, because my dad's side of the family all immigrated from Germany. But that was two generations back, and there is little of the old country in our family gatherings. I tried to master French before Shirley and I visited Paris. I learned just enough to buy breakfast: "Bonjour Madam," "Trois Croissant," "Merci." All went well as long as no one asked me a question.

In college I expected to have a language requirement, and my plan was to take Russian. I started college in 1978, and the figure above explains how at that time Russian was the logical second language for an English-speaking physics student. Ultimately, the University of Kansas accepted FORTRAN as my foreign language, ending any chance of my becoming bilingual.

For those of you interested in how English became the language of science, I recommend Scientific Babel. For those of you interested in how the title of Intermediate Physics for Medicine and Biology is written in various languages, see below (found using Google Translate). Enjoy!

Friday, August 10, 2018


Intermediate Physics for Medicine and Biology
Intermediate Physics for Medicine and Biology.
This week I spent three days in Las Vegas.

I know you’ll be shocked...shocked! to hear there is gambling going on in Vegas. If you want to improve your odds of winning, you need to understand probability. Russ Hobbie and I discuss probability in Intermediate Physics for Medicine and Biology. The most engaging way to introduce the subject is through analyzing games of chance. I like to choose a game that is complicated enough to be interesting, but simple enough to explain in one class. A particularly useful game for teaching probability is craps.

The rules: Throw two dice. If you roll a seven or eleven you win. If you roll a two, three, or twelve you lose. If you roll anything else you keep rolling until you either “make your point” (get the same number that you originally rolled) and win, or “crap out” (roll a seven) and lose.

Two laws are critical for any probability calculation.
  1. For independent events, the probability of both event A and event B occurring is the product of the individual probabilities: P(A and B) = P(A) P(B).
  2. For mutually exclusive events, the probability of either event A or event B occurring is the sum of the individual probabilities: P(A or B) = P(A) + P(B).
Snake eyes
Snake Eyes.
For instance, if you roll a single die, the probability of getting a one is 1/6. If you roll two dice (independent events), the probability of getting a one on the first die and a one on the second (snake eyes) is (1/6) (1/6) = 1/36. If you roll just one die, the probability of getting either a one or a two (mutually exclusive events) is 1/6 + 1/6 = 1/3. Sometimes these laws operate together. For instance, what are the odds of rolling a seven with two dice? There are six ways to do it: roll a one on the first die and a six on the second die (1,6), or (2,5), or (3,4), or (4,3), or (5,2), or (6,1). Each way has a probability of 1/36 (the two dice are independent) and the six ways are mutually exclusive, so the probability of a seven is 1/36 + 1/36 + 1/36 + 1/36 + 1/36 + 1/36 = 6/36 = 1/6.
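Both laws are easy to verify by brute-force enumeration. Here is a short Python sketch of my own (not code from IPMB) that counts outcomes for two dice:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob_of_sum(s):
    """Probability that the two dice total s."""
    hits = sum(1 for a, b in outcomes if a + b == s)
    return Fraction(hits, len(outcomes))

print(prob_of_sum(2))   # snake eyes: 1/36
print(prob_of_sum(7))   # six ways: 6/36 = 1/6
```

Because the function returns exact fractions rather than floats, the results match the hand calculation digit for digit.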

Now let's analyze craps. The probability of winning immediately is 6/36 for a seven plus 2/36 for an eleven (a five and a six, or a six and a five), for a total of 8/36 = 2/9 = 22%. The probability of losing immediately is 1/36 for a two, plus 2/36 for a three, plus 1/36 for a twelve (boxcars), for a total of 4/36 = 1/9 = 11%. The probability of continuing to roll is… we could work it out, but the probabilities must sum to 1, so a shortcut is to calculate 1 – 2/9 – 1/9 = 6/9 = 2/3 = 67%.

The case when you continue rolling gets interesting. For each additional roll, you have three possibilities:
  1. Make your point and win with probability a,
  2. Crap out and lose with probability b, or 
  3. Roll again with probability c.
What is the probability that, if you keep rolling, you make your point before crapping out? You could make your point on the first additional roll with probability a; you could roll once more and make your point on the second additional roll with probability ca; you could have three additional rolls and make your point on the third one with probability cca, etc. The total probability of making your point is a + ca + cca + … = a (1 + c + c² + …). But the quantity in parentheses is a geometric series, and can be evaluated in closed form: 1 + c + c² + … = 1/(1 – c). The probability of making your point is therefore a/(1 – c). We know that one of the three outcomes must occur, so a + b + c = 1, and the odds of making your point can be expressed equivalently as a/(a + b). If your original roll was a four, then a = 3/36. The chance of getting a seven is b = 6/36. So a/(a + b) = 3/9 = 1/3, or 33%. If your original roll was a five, then a = 4/36 and the odds of making your point are 4/10 = 40%. If your original roll was a six, the likelihood of making your point is 5/11 = 45%. You can work out the probabilities for 8, 9, and 10, but you’ll find they are the same as for 6, 5, and 4.
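The closed-form result a/(a + b) is simple enough to tabulate in a few lines. This sketch (mine, not from IPMB) encodes the number of ways to roll each point:

```python
from fractions import Fraction

# Ways to roll each possible point with two dice.
WAYS = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}

def point_odds(point):
    """Probability of making the point before a seven: a/(a + b)."""
    a = Fraction(WAYS[point], 36)   # make the point
    b = Fraction(6, 36)             # roll a seven and crap out
    return a / (a + b)

for p in sorted(WAYS):
    print(p, point_odds(p))   # 4 and 10 -> 1/3, 5 and 9 -> 2/5, 6 and 8 -> 5/11
```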

Now we have all we need to determine the probability of winning at craps. We have a 2/9 chance of rolling a seven or eleven immediately, plus a 3/36 chance of rolling a four originally followed by a 1/3 chance of then making your point, plus… I will just show it as an equation.

P(winning) = 2/9 + 2 [ (3/36) (1/3) + (4/36) (4/10) + (5/36) (5/11) ] = 49.3 % .

The probability of losing would be difficult to work out from first principles, but we can take the easy route and calculate P(losing) = 1 – P(winning) = 50.7 %.
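Both numbers can be double-checked exactly and by simulation. The Monte Carlo below is my own sketch of the rules described above, not code from IPMB:

```python
import random
from fractions import Fraction

WAYS = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}  # ways to roll each point

# Exact: win immediately (7 or 11), or establish a point and then make it.
p_win = Fraction(8, 36) + sum(Fraction(w, 36) * Fraction(w, w + 6)
                              for w in WAYS.values())
print(float(p_win))   # 0.4929...

def play_craps():
    """Play one game of craps; return True for a win."""
    roll = lambda: random.randint(1, 6) + random.randint(1, 6)
    first = roll()
    if first in (7, 11):
        return True
    if first in (2, 3, 12):
        return False
    while True:
        r = roll()
        if r == first:
            return True    # made the point
        if r == 7:
            return False   # crapped out

n = 100_000
print(sum(play_craps() for _ in range(n)) / n)   # close to 0.493
```

The exact answer works out to 244/495, which is where the 49.3% comes from.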

The chance of winning is almost even, but not quite. The odds are stacked slightly against you. If you play long enough, you will almost certainly come out behind. That is how casinos in Las Vegas make their money. The odds are close enough to 50-50 that players have a decent chance of coming out ahead after a few games, which makes them willing to play. But when averaged over thousands of players every day, the casino always wins.

Lady Luck, by Warren Weaver
Lady Luck, by Warren Weaver.
I hope this analysis helps you better understand probability. Once you master the basic rules, you can calculate other quantities more relevant to biological physics, such as temperature, entropy, and the Boltzmann factor (for more, see Chapter 3 of IPMB). When I teach statistical thermodynamics or quantum mechanics, I analyze craps on the first day of class. I arrive early and kneel in a corner of the room, throwing dice against the wall. As students come in, I invite them over for a game. It's a little creepy, but by the time class begins the students know the rules and are ready to start calculating. If you want to learn more about probability (including a nice description of craps), I recommend Lady Luck by Warren Weaver.

I stayed away from the craps table in Vegas. The game is fast-paced and there are complicated side bets you can make along the way that we did not consider. Instead, I opted for blackjack, where I turned $20 into $60 and then quit. I did not play the slot machines, which are random number generators with flashing lights, bells, and whistles attached. I am told they have worse odds than blackjack or craps.

The trip to Las Vegas was an adventure. My daughter Stephanie turned 30 on the trip (happy birthday!) and acted as our tour guide. We stuffed ourselves at one of the buffets, wandered about Caesar’s Palace, and saw the dancing fountains in front of the Bellagio. The show Tenors of Rock at Harrah's was fantastic. We did some other stuff too, but let’s not go into that (What Happens in Vegas stays in Vegas).

A giant flamingo at the Flamingo
A giant flamingo at the Flamingo.
The High Roller Observation Wheel
The High Roller Observation Wheel.
Two Pina Coladas, one for each hand
Two Pina Coladas, one for each hand.

Friday, August 3, 2018

The Fourier Series of the Cotangent Function

In Section 11.5 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the Fourier Series. I'm always looking for illustrative examples of Fourier series to assign as homework (see Problems 10.20 and 10.21), to explain in class, or to include on an exam. Not every function will work; it must be well behaved (in technical jargon, it must obey the Dirichlet conditions). Sometimes, however, I like to examine a case that does not satisfy these conditions, just to see what happens.

Consider the Fourier series for the cotangent function, cot(x) = cos(x)/sin(x).

The cotangent function, from Schaum's Outlines: Mathematical Handbook of Formulas and Tables.
The cotangent function, from Schaum's Outlines: Mathematical Handbook of Formulas and Tables.

The function is periodic with period π, but it diverges to infinity at zero. Its Fourier series is defined as

The Fourier series written as a sum of sines (a's) and cosines (b's), of different frequencies.

The DC term of the Fourier series, which is the average of the function.
The n'th coefficient a_n, an integral of the function times the cosine, for different frequencies.
The n'th coefficient b_n, an integral of the function times the sine, for different frequencies.

The cotangent is odd, implying that only sines contribute to the sum and a₀ = aₙ = 0. Because the product of two odd functions is even, we can change the lower limit of the integral for bₙ to zero and multiply the integral by two

The n'th coefficient b_n, an integral of cotangent times the sine, for different frequencies, integrated from zero to pi/2.

To evaluate this integral, I looked in the best integral table in the world (Gradshteyn and Ryzhik) and found

From Gradshteyn and Ryzhik: The integral of cot(x) times sin(2nx) is pi/2.

implying that bₙ = 2, independent of n. The Fourier series of the cotangent is therefore

cotangent written as a sum of 2 times sin(2x) plus 2 times sin(4x) plus 2 times sin(6x) and so on.
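The claim that every coefficient equals 2 can be checked numerically. The midpoint rule below is my own quick sketch, not code from IPMB; the integrand is finite at x = 0 because sin(2nx)/tan(x) → 2n there:

```python
import math

def b_n(n, steps=20_000):
    """Estimate b_n = (4/pi) * integral from 0 to pi/2 of cot(x) sin(2nx) dx."""
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h              # midpoints avoid the endpoint x = 0
        total += math.sin(2 * n * x) / math.tan(x)
    return (4 / math.pi) * total * h

for n in (1, 2, 5):
    print(n, round(b_n(n), 4))   # each coefficient comes out to 2.0
```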

When I teach Fourier series, I require that students plot the function using just a few terms in the sum, so they can gain intuition about how the function is built from several frequencies. The first plot shows only the first term (red). It's not a good approximation to the cotangent (black), but what can you expect from a single frequency?
The cotangent function approximated by a single frequency.
The cotangent function approximated by a single frequency.
The second plot shows the first term (green, solid), the second term (green, dashed), and their sum (red). It's better, but still has a long way to go.

The cotangent function approximated by two frequencies.
The cotangent function approximated by two frequencies.
If you add lots of frequencies, the fit resembles the third plot (red, first ten terms). The oscillations don’t seem to converge to the function and their amplitude remains large.

The cotangent function approximated by ten frequencies.
The cotangent function approximated by ten frequencies.
The YouTube video below shows that the oscillation amplitude never dies down. It is like the Gibbs phenomenon on steroids; instead of narrow spikes near a discontinuity you get large oscillations everywhere.

The bottom line: the Fourier method fails for the cotangent; its Fourier series does not converge. High frequencies contribute as much as low ones, and there are more of them (infinitely more). Nevertheless, we do gain insight by analyzing this case. The method fails in a benign enough way to be instructive.

I hope this analysis of a function that does not have a Fourier series helps you understand better functions that do. Enjoy!

Friday, July 27, 2018

Extrema of the Sinc Function

In Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
“The function sin(x)/x has its maximum value of 1 at x = 0. It is also called the sinc(x) function.”
Sinc(x) oscillates like sin(x), but its amplitude decays as 1/x. Wherever sin(x) is zero, sinc(x) is also zero, except at the special point x = 0, where 0/0 has the limiting value 1.
A plot of the sinc function
A plot of the sinc function.
Trigonometric Delights, by Eli Maor
Trigonometric Delights, by Eli Maor
In IPMB, Russ and I don't evaluate the values of x corresponding to local maximum and minimum values of sinc(x). Eli Maor examines the peak values of f(x) = sinc(x) in his book Trigonometric Delights. He writes
“We now wish to locate the extreme points of f(x)—the points where it assumes its maximum or minimum values. And here a surprise is awaiting us. We know that the extreme points of g(x) = sin x occur at all odd multiples of π/2, that is, at x = (2n+1)π/2. So we might expect the same to be true for the extreme points of f(x) = (sin x)/x. This, however, is not the case. To find the extreme point, we differentiate f(x) using the quotient rule and equate the result to zero:

          f’(x) = (x cos x – sin x)/x² = 0.         (1)

Now if a ratio is equal to zero, then the numerator itself must equal zero, so we have x cos x – sin x = 0, from which we get

          tan x = x.                                         (2)

Equation (2) cannot be solved by a closed formula in the same manner as, say, a quadratic equation can; it is a transcendental equation whose roots can be found graphically as the points of intersection of the graphs of y = x and y = tan x.”
Plots of y=x and y=tan(x), showing where they intersect
A plot of y=tanx versus x and y=x versus x.

The extreme values are at x = 0, 4.49 = 1.43π, 7.73 = 2.46π, etc. As x becomes large, the roots approach (2n+1)π/2.
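You can pin these roots down with simple bisection applied to x cos x − sin x, which is equivalent to tan x = x but has no singularities. This is a little sketch of mine, not code from IPMB or Maor:

```python
import math

def f(x):
    # Setting the derivative of sin(x)/x to zero gives x cos x - sin x = 0.
    return x * math.cos(x) - math.sin(x)

def bisect(lo, hi, tol=1e-10):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r1 = bisect(math.pi, 1.5 * math.pi)        # first nonzero extremum
r2 = bisect(2 * math.pi, 2.5 * math.pi)    # second one
print(round(r1, 2), round(r2, 2))          # 4.49 7.73
```

The brackets work because x cos x − sin x changes sign once on each interval; later roots creep toward the odd multiples of π/2.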

Books by Eli Maor, including e, The Story of a Number
Eli Maor is a rare breed: a writer of mathematics. Russ and I cite his wonderful book e: The Story of a Number in Chapter 2 of IPMB. I also enjoyed The Pythagorean Theorem: A 4,000-Year History. Maor has written many books about math and science. His most recent came out in May: Music by the Numbers: From Pythagoras to Schoenberg. I put it on my summer reading list.

Friday, July 20, 2018

A Dozen Units from Intermediate Physics for Medicine and Biology

Medical and biological physics have their share of colorful and sometimes obsolete units. For the most part, Intermediate Physics for Medicine and Biology sticks with standard metric, or SI, units; mass, distance, and time are in kilograms, meters, and seconds (mks). Some combinations of units are given special names, usually in honor of a famous physicist, such as the newton (N) for kg m s⁻². I have always found the units for electricity and magnetism difficult to remember. The coulomb (C) for charge is easy enough, but units such as the tesla (T) for magnetic field strength in kg s⁻¹ C⁻¹ are tricky. IPMB uses some common non-SI units, such as the liter (l) for 10⁻³ m³, the angstrom (Å) for 10⁻¹⁰ m, and the electron volt (eV) for 1.6 × 10⁻¹⁹ J.

Let’s count down a dozen unfamiliar units discussed in Intermediate Physics for Medicine and Biology. We'll start with the least important, and end with the one you really need to know.
12. The roentgen (R). Chapter 16 of IPMB states that the roentgen “is an old unit of [radiation] exposure equivalent to the production of 2.58 × 10⁻⁴ C kg⁻¹ in dry air.” The unit's name written out as "roentgen" begins with a lower-case “r” even though Wilhelm Roentgen’s last name starts with an upper-case “R.” It's always that way with units.

11. The diopter (diopter). The diopter is a nickname for m⁻¹, just as the hertz is a nickname for s⁻¹. It is used mainly when discussing the power, or vergence, of a lens, and appears in Chapter 14 of IPMB. The diopter does not have a symbol; you just write out "diopter" ("dioptre" if you are English, but that is so wrong).

10. The einstein (E). Homework Problem 2 of Chapter 14 defines the einstein as “1 mol of photons.” Units like the mole (mol) and the einstein are really dimensionless numbers: a mole is 6 × 10²³ molecules and an einstein is 6 × 10²³ photons. John Wikswo and I have proposed the leibniz (Lz) to be 6 × 10²³ differential equations. Some define the einstein as the energy of a mole of photons, so be careful when using this unit. I'll let you guess who the unit was named for.

9. The poise (P). Chapter 1 of IPMB analyzes the coefficient of viscosity, which is often expressed in units of poise or centipoise. The poise is a leftover from the old centimeter-gram-second system of units, and is equal to a gram per centimeter per second. The viscosity of water at 20 °C is about 1 cP. The poise is named after Jean Leonard Marie Poiseuille (sort of), just as the unit of capacitance (the farad) is kind of named after Michael Faraday. The mks unit of viscosity is the poiseuille (Pl), where 1 Pl = 10 P. The poiseuille is not used much, probably because no one can pronounce it.

8. The torr (Torr). Pressure is measured in many units. The torr is nearly the same as a millimeter of mercury (mmHg), and is named after the Italian physicist Evangelista Torricelli. The SI unit for pressure is the pascal (Pa), a nickname for a newton per square meter. One Torr is about 133 Pa. The bar (bar) is 100,000 Pa, and is approximately equal to one atmosphere (atm). How confusing! All five units—torr, bar, atm, mmHg, and pascal—are used often, so you need to know them all.
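To keep these five pressure units straight, it helps to convert everything through pascals. Below is a minimal converter sketch of my own (the factors are standard values, with 1 torr defined as 1/760 of a standard atmosphere):

```python
# Pascals per unit of pressure.
TO_PASCAL = {
    "Pa":   1.0,
    "torr": 101325.0 / 760.0,   # about 133.32 Pa
    "mmHg": 133.322,            # nearly identical to the torr
    "bar":  1.0e5,
    "atm":  101325.0,
}

def convert(value, frm, to):
    """Convert a pressure between any two supported units."""
    return value * TO_PASCAL[frm] / TO_PASCAL[to]

print(round(convert(1, "torr", "Pa"), 2))   # 133.32
print(round(convert(1, "bar", "atm"), 3))   # 0.987
```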

7. The barn (b). The barn measures area and is 10⁻²⁸ m². It is equivalent to 100 fm² (the femtometer is also known as a fermi). Nuclear cross sections are measured in barns. By nuclear physics standards a barn is a pretty big cross section. The term barn comes from the idiom about “hitting the broad side of a barn.”

6. The debye (D). Homework Problem 3 in Chapter 6 of IPMB introduces the debye. It is defined as 10⁻¹⁸ statcoulomb cm, where a statcoulomb is the unit of charge in the old cgs system. It is equivalent to 3.34 × 10⁻³⁰ C m. The debye is named after Dutch physicist Peter Debye, and measures dipole moment. The dipole moment of a water molecule is 1.85 D.

5. The candela (cd). Radiometry measures radiant energy using SI units. Photometry measures the sensation of human vision with its own oddball collection of units, such as lumens, candelas, lux, and nits. A candela depends on the color of the light; for green 1 cd is equal to a radiant intensity of about 0.0015 watts per steradian. A burning candle has a luminous intensity of about 1 cd.

4. The svedberg (Sv). The centrifuge is a common instrument in biological physics. A particle has a sedimentation coefficient equal to its sedimentation velocity per unit of centrifugal acceleration. Speed (m s⁻¹) divided by acceleration (m s⁻²) has units of seconds, so the sedimentation coefficient has dimensions of time. The svedberg is equal to 10⁻¹³ s. IPMB gives the symbol as “Sv”, but sometimes it is just “S” (easily confused with the siemens, a unit of conductance, and the sievert, a unit of effective dose). The unit is named after the Swedish chemist Theodor Svedberg, who invented the ultracentrifuge.

3. The curie (Ci). The curie is an older unit of radioactivity that is now out of fashion. It is named in honor of Pierre and Marie Curie, and it measures the activity, equal to the disintegration rate. The SI unit for activity is the becquerel (Bq), or disintegrations per second. The becquerel is named after Henri Becquerel, the French physicist who discovered radioactivity. One curie is 3.7 × 10¹⁰ Bq. The cumulated activity is the total number of disintegrations, and is a dimensionless number often expressed in Bq s (why bother?). An older unit for cumulated activity is the odd-sounding microcurie hour (µCi h).

2. The Hounsfield unit (HU). The Hounsfield unit is used to measure the x-ray attenuation coefficient µ during computed tomography. It is a dimensionless quantity defined by Eq. 16.25 in IPMB: H = 1000 (µ – µwater)/µwater (for some reason Russ Hobbie and I use H rather than HU). The unit is strange because everyone says the attenuation coefficient is so many Hounsfield units, including the word “units” (you never say a force is so many "newton units"). The attenuation coefficient of water is 0 HU. Air has a very small attenuation coefficient, so on the Hounsfield scale it is -1000 HU. Many soft tissues have an attenuation coefficient on the order of +40 HU, and bone can be more than +1000 HU. The unit is named after English electrical engineer Godfrey Hounsfield, who won the 1979 Nobel Prize in Physiology or Medicine for developing the first clinical computed tomography machine.
and the winner is…
1. The sievert (Sv). The most important unusual unit in IPMB is the sievert. Both the sievert and the gray (Gy) are equal to a joule per kilogram. The gray is a physical unit measuring the energy deposited in tissue per unit mass, or the dose. The sievert is the gray multiplied by a dimensionless coefficient called the relative biological effectiveness and measures the effective dose. For x-rays, the sievert and gray are the same, but for alpha particles one gray can be many sieverts. An older unit for the gray is the rad (1 Gy = 100 rad) and an older unit for the sievert is the rem (1 Sv = 100 rem). The gray is named after English physicist Louis Gray, and the sievert after Swedish medical physicist Rolf Sievert.
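As a coda, the two most medically important definitions above (the Hounsfield unit and the sievert) fit in a few lines of code. The water attenuation coefficient and the alpha-particle RBE below are illustrative values I chose, not numbers from IPMB:

```python
def hounsfield(mu, mu_water):
    """H = 1000 (mu - mu_water) / mu_water  (Eq. 16.25 in IPMB)."""
    return 1000 * (mu - mu_water) / mu_water

def effective_dose_sv(dose_gy, rbe):
    """Effective dose in sieverts: gray times relative biological effectiveness."""
    return dose_gy * rbe

mu_water = 0.20                          # assumed value in cm^-1, for illustration
print(hounsfield(mu_water, mu_water))    # water: 0 HU
print(round(hounsfield(0.0, mu_water)))  # air (mu near zero): -1000 HU
print(effective_dose_sv(2.5, 1))         # x rays (RBE = 1): 2.5 Gy -> 2.5 Sv
print(effective_dose_sv(2.5, 20))        # alphas (RBE ~ 20 assumed): 50 Sv
```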