Friday, July 12, 2013

The Bohr Model

One hundred years ago this month, Niels Bohr published his model of the atom (“On the Constitution of Atoms and Molecules,” Philosophical Magazine, Volume 26, Pages 1–25, 1913). In the May 2013 issue of Physics Today, Helge Kragh writes
Published in a series of three papers in the summer and fall of 1913, Niels Bohr’s seminal atomic theory revolutionized physicists’ conception of matter; to this day it is presented in high school and undergraduate-level textbooks.
The Making of the Atomic Bomb, by Richard Rhodes, superimposed on Intermediate Physics for Medicine and Biology.
The Making of the Atomic Bomb,
by Richard Rhodes.
I find Bohr’s model fascinating for several reasons: 1) it was the first application of quantum ideas to atom structure, 2) it predicts the size of the atom, 3) it implies discrete atomic energy levels, 4) it explains the hydrogen spectrum in terms of transitions between energy levels, and 5) it provides an expression for the Rydberg constant in terms of fundamental parameters. In his book The Making of the Atomic Bomb, Richard Rhodes discusses the background leading to Bohr’s discovery.
Johann Balmer, a nineteenth-century Swiss mathematical physicist, identified in 1885 … a formula for calculating the wavelengths of the spectral lines of hydrogen… A Swedish spectroscopist, Johannes Rydberg, went Balmer one better and published in 1890 a general formula valid for a great many different line spectra. The Balmer formula then became a special case of the more general Rydberg equation, which was built around a number called the Rydberg constant [R]. That number, subsequently derived by experiment and one of the most accurately known of all universal constants, takes the precise modern value of 109,677 cm−1.

Bohr would have known these formulae and numbers from undergraduate physics, especially since Christensen [Bohr’s doctorate advisor] was an admirer of Rydberg and had thoroughly studied his work. But spectroscopy was far from Bohr’s field and he presumably had forgotten them. He sought out his old friend and classmate, Hans Hansen, a physicist and student of spectroscopy just returned from Göttingen. Hansen reviewed the regularity of the line spectra with him. Bohr looked up the numbers. “As soon as I saw Balmer’s formula,” he said afterward, “the whole thing was immediately clear to me.”

What was immediately clear was the relationship between his orbiting electrons and the lines of spectral light… The lines of the Balmer series turn out to be exactly the energies of the photons that the hydrogen electron emits when it jumps down from orbit to orbit to its ground state. Then, sensationally, with the simple formula R = 2π²me⁴/h³ (where m is the mass of the electron, e the electron charge and h Planck’s constant—all fundamental numbers, not arbitrary numbers Bohr made up) Bohr produced Rydberg’s constant, calculating it within 7 percent of its experimentally measured value!...
In chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Bohr model, but interestingly we do not attribute the model to Bohr. However, at other locations in the book, we casually refer to Bohr’s model by name: see Problem 33 of Chapter 15 where we mention “Bohr orbits,” and Sections 15.9 and 16.1.1 where we refer to the “Bohr formula.” I guess we assumed that everyone knows what the Bohr model is (a pretty safe assumption for readers of IPMB). In Problem 4 of Chapter 14 (one of the new homework problems in the 4th edition), the reader is asked to derive the expression for the Rydberg constant in terms of fundamental parameters (you don’t get exactly the same answer as in the quote above; presumably Rhodes didn’t use SI units).
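If you want to check the arithmetic yourself, here is a quick calculation (my own sketch in Python, not from the book) of the Rydberg constant as a wavenumber in SI units, R = me⁴/(8ε₀²h³c):

```python
# Rydberg constant from fundamental parameters, in SI units:
# R = m e^4 / (8 eps0^2 h^3 c), expressed as a wavenumber (1/m)
m = 9.1093837e-31      # electron mass (kg)
e = 1.6021766e-19      # elementary charge (C)
eps0 = 8.8541878e-12   # permittivity of free space (F/m)
h = 6.6260702e-34      # Planck's constant (J s)
c = 2.99792458e8       # speed of light (m/s)

R = m * e**4 / (8 * eps0**2 * h**3 * c)
print(R / 100)   # in cm^-1; about 109,737
```

Using the electron mass gives R∞ ≈ 109,737 cm⁻¹. The measured hydrogen value of 109,677 cm⁻¹ quoted by Rhodes is slightly smaller because the electron really orbits the atom’s center of mass, so the electron mass should be replaced by the reduced mass of the electron-proton system.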

Bohr would become one of the principal figures in the development of modern quantum mechanics. He also made fundamental contributions to nuclear physics, and contributed to the Manhattan Project. He was awarded the Nobel Prize in Physics in 1922 “for his services in the investigation of the structure of atoms and of the radiation emanating from them.” He is Denmark’s most famous scientist, and for years he led the Institute of Theoretical Physics at the University of Copenhagen. A famous play, Copenhagen, is about his meeting with former collaborator Werner Heisenberg in then-Nazi-occupied Denmark in 1941. Here is a clip.

Bohr and Heisenberg discussing the uncertainty principle, in Copenhagen.

Physicists around the world are celebrating this 100-year anniversary; for instance here, here, here and here.

I end with Bohr’s own words: an excerpt from the introduction of his first 1913 paper (references removed).
In order to explain the results of experiments on scattering of α rays by matter Prof. Rutherford has given a theory of the structure of atoms. According to this theory, the atoms consist of a positively charged nucleus surrounded by a system of electrons kept together by attractive forces from the nucleus; the total negative charge of the electrons is equal to the positive charge of the nucleus. Further, the nucleus is assumed to be the seat of the essential part of the mass of the atom, and to have linear dimensions exceedingly small compared with the linear dimensions of the whole atom. The number of electrons in an atom is deduced to be approximately equal to half the atomic weight. Great interest is to be attributed to this atom-model; for, as Rutherford has shown, the assumption of the existence of nuclei, as those in question, seems to be necessary in order to account for the results of the experiments on large angle scattering of the α rays.

In an attempt to explain some of the properties of matter on the basis of this atom-model we meet however, with difficulties of a serious nature arising from the apparent instability of the system of electrons: difficulties purposely avoided in atom-models previously considered, for instance, in the one proposed by Sir J. J. Thomson. According to the theory of the latter the atom consists of a sphere of uniform positive electrification, inside which the electrons move in circular orbits. The principal difference between the atom-models proposed by Thomson and Rutherford consists in the circumstance that the forces acting on the electrons in the atom-model of Thomson allow of certain configurations and motions of the electrons for which the system is in a stable equilibrium; such configurations, however, apparently do not exist for the second atom-model. The nature of the difference in question will perhaps be most clearly seen by noticing that among the quantities characterizing the first atom a quantity appears—the radius of the positive sphere—of dimensions of a length and of the same order of magnitude as the linear extension of the atom, while such a length does not appear among the quantities characterizing the second atom, viz. the charges and masses of the electrons and the positive nucleus; nor can it be determined solely by help of the latter quantities.

The way of considering a problem of this kind has, however, undergone essential alterations in recent years owing to the development of the theory of the energy radiation, and the direct affirmation of the new assumptions introduced in this theory, found by experiments on very different phenomena such as specific heats, photoelectric effect, Röntgen [etc]. The result of the discussion of these questions seems to be a general acknowledgment of the inadequacy of the classical electrodynamics in describing the behaviour of systems of atomic size. Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i. e. Planck’s constant, or as it often is called the elementary quantum of action. By the introduction of this quantity the question of the stable configuration of the electrons in the atoms is essentially changed as this constant is of such dimensions and magnitude that it, together with the mass and charge of the particles, can determine a length of the order of magnitude required. This paper is an attempt to show that the application of the above ideas to Rutherford’s atom-model affords a basis for a theory of the constitution of atoms. It will further be shown that from this theory we are led to a theory of the constitution of molecules.

In the present first part of the paper the mechanism of the binding of electrons by a positive nucleus is discussed in relation to Planck’s theory. It will be shown that it is possible from the point of view taken to account in a simple way for the law of the line spectrum of hydrogen. Further, reasons are given for a principal hypothesis on which the considerations contained in the following parts are based.

I wish here to express my thanks to Prof. Rutherford for his kind and encouraging interest in this work.

Friday, July 5, 2013

The Spark of Life

The Spark of Life: Electricity in the Human Body, by Frances Ashcroft, superimposed on Intermediate Physics for Medicine and Biology.
The Spark of Life:
Electricity in the Human Body,
by Frances Ashcroft.
This week I finished the book The Spark of Life: Electricity in the Human Body, by Frances Ashcroft. In the introduction, Ashcroft explains the goal of her book.
In essence, this book is a detective story about a special kind of protein—the ion channel—that takes us from Ancient Greece to the forefront of scientific research today. It is very much a tale for today, as although the effects of static electricity and lightning on the body have been known for centuries, it is only in the last few decades that ion channels have been discovered, their functions unravelled and their beautiful, delicate, intricate structures seen by scientists for the first time. It is also a personal panegyric for my favourite proteins, which captured me as a young scientist and never let me go; they have been a consuming passion throughout my life. In Walt Whitman’s wonderful words, my aim is to “sing the body electric.”
The book examines much of the history behind topics that Russ Hobbie and I discuss in the 4th edition of Intermediate Physics for Medicine and Biology, such as the work of Hodgkin and Huxley on the squid nerve axon, the electrocardiogram, and modern medical devices such as the pacemaker and cochlear implants. The book is definitely aimed at a general audience; having worked in the field of bioelectricity, I sometimes wish for more depth in the discussion. For instance, anyone wanting to know the history of pacemakers and defibrillators would probably prefer something like Machines in our Hearts, by Kirk Jeffrey. Nevertheless, it was useful to find the entire field of bioelectricity described in one relatively short and easily readable book. With its focus on ion channels, I consider this book a popularization of Bertil Hille’s text Ion Channels of Excitable Membranes. Her book was also useful to me as a review of various drugs and neurotransmitters, which I don’t know nearly as much about as I should.

Here is a sample of Ashcroft’s writing, in which she tells about Rod MacKinnon’s determination of the structure of a potassium channel. Russ and I discuss MacKinnon’s work in Chapter 9 (Electricity and Magnetism at the Cellular Level) of IPMB.
A slight figure with an elfin face, MacKinnon is one of the most talented scientists I know. He was determined to solve the problem of how channels worked and he appreciated much earlier than others that the only way to do so was to look at the channel structure directly, atom by atom. This was not a project for the faint-hearted, for nobody had ever done it before, no one really knew how to do it and most people did not even believe it could be done in the near future. The technical challenges were almost insurmountable and at that time he was not even a crystallographer. But MacKinnon is not only a brilliant scientist; he is also fearless, highly focused and extraordinarily hard-working (he is famed for working around the clock, snatching just a few hours’ sleep between experiments). Undeterred by the difficulties, he simultaneously switched both his scientific field and his job, resigning his post at Harvard and moving to Rockefeller University because he felt the environment there was better. Some people in the field wondered if he was losing his mind. In retrospect, it was a wise decision. A mere two years later, MacKinnon received a standing ovation—an unprecedented event at a scientific meeting—when he revealed the first structure of a potassium channel. And ion channels went to Stockholm all over again.
Ashcroft is particularly good at telling the human interest stories behind the discoveries described in the book. There were several interesting tales about neurotoxins; not only the well-known tetrodotoxin, but also others such as saxitoxin, aconite, batrachotoxin, and grayanotoxin. The myotonic goats, who because of an ion channel disease fall over stiff whenever startled, are amazing. Von Humboldt’s description of natives using horses to capture electric eels is incredible. The debate in Parliament about August Waller’s demonstration of the electrocardiogram, using his dog Jimmie as the subject, was funny. If, like me, you enjoy such stories, read The Spark of Life.

I will end with Ashcroft’s description of how Hodgkin and Huxley developed their mathematical model of the action potential in the squid giant axon, a topic covered in detail in Chapter 6 of IPMB.
Having measured the amplitude and time course of the sodium and potassium currents, Hodgkin and Huxley needed to show that they were sufficient to generate the nerve impulse. They decided to do so by theoretically calculating the expected time course of the action potential, surmising that if it were possible to mathematically simulate the nerve impulse it was a fair bet that only the currents they had recorded were involved. Huxley had to solve the complex mathematical equations involved using a hand-cranked calculator because the Cambridge University computer was “off the air” for six months. Strange as it now seems, the university had only one computer at that time (indeed it was the first electronic one Cambridge had). It took Huxley about three weeks to compute an action potential: times have moved on—it takes my current computer just a few seconds to run the same simulation. What is perhaps equally remarkable is that we often still use the equations Hodgkin and Huxley formulated to describe the nerve impulse.

Three years after finishing their experiments, in 1952, Hodgkin and Huxley published their studies in a landmark series of five papers that transformed forever our ideas about how nerves work. The long time between completing their experiments and publication seems extraordinary to present-day scientists, who would be terrified of being scooped by their rivals. Not so in the 1950s—Huxley told me, “It never even entered my head.” In 1963, Hodgkin and Huxley were awarded the Nobel Prize. Deservedly so, for they got such beautiful results and analysed them so precisely that they revolutionized the field and provided the foundations for modern neuroscience.
For more about The Spark of Life, see here, here, here, and here. Listen to and watch Ashcroft being interviewed by Denis Noble here, and giving the Croonian Lecture here.

Frances Ashcroft being interviewed by Denis Noble.

Frances Ashcroft giving the Croonian Lecture.

Friday, June 28, 2013

Lotka-Volterra equations

Russ Hobbie and I don’t study population dynamics much in the 4th edition of Intermediate Physics for Medicine and Biology. To me, it’s more of a mathematical biology topic than an application of physics to biology. However, we do discuss one well-known model for population dynamics, the Lotka-Volterra equations, in a Homework Problem in Chapter 2.
Problem 34 Consider a classic predator-prey problem. Let the number of rabbits be R and the number of foxes be F. The rabbits eat grass, which is plentiful. The foxes eat only rabbits. The number of rabbits and foxes can be modeled by the Lotka-Volterra equations
dR/dt = a R – b R F
dF/dt = - c F + d R F .
(a) Describe the physical meaning of each term on the right-hand side of each equation. What does each of the constants a, b, c, and d denote?
(b) Solve for the steady-state values of R and F.

These differential equations are difficult to solve because they are nonlinear (see Chapter 10). Typically, R and F oscillate about the steady-state solutions found in part (b). For more information, see Murray (2001).
There are two steady-state solutions. One is the trivial R = F = 0. The most interesting aspect of this solution is that it is not stable. If R and F are both small, the nonlinear terms in the Lotka-Volterra equations are negligible, and the number of foxes falls exponentially (there is no prey to eat) but the number of rabbits rises exponentially (there is no predator to eat them).

The other steady-state solution is (spoiler alert!) R = c/d and F = a/b. We claim in the problem that these equations are difficult to solve, and that is true in general, at least when searching for analytical solutions. However, if we focus on small deviations from this steady state, we can solve the equations. Let

R = c/d + r 
F = a/b + f ,

where r and f are small (much less than the steady state solutions). Plug these into the original differential equations, and ignore any terms containing r times f (these “doubly small” terms are negligible). The new equations for r and f are

dr/dt = - b (c/d) f 
df/dt =   d (a/b) r .

Now let’s use my favorite technique for solving differential equations: guess and check. I will guess

r = A sin(ωt)
f = B cos(ωt) .

If we plug these expressions into the differential equations, we get a solution only if ω² = ac. In that case, B = -(d/b) √(a/c) A. You can’t get A in this way; it depends on the initial conditions.
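The guess-and-check result is easy to verify numerically. Here is a short Python sketch (mine, not from IPMB; the values of a, b, c, and d are hypothetical, chosen just for illustration) that integrates the linearized equations with a fourth-order Runge-Kutta loop and compares the result with the analytical solution using ω = √(ac):

```python
import math

# Hypothetical illustrative parameters (not from IPMB)
a, b, c, d = 2.0, 0.1, 1.0, 0.02
omega = math.sqrt(a * c)                 # predicted frequency, omega^2 = a*c
A = 1.0                                  # amplitude, set by initial conditions
B = -(d / b) * math.sqrt(a / c) * A      # predicted fox amplitude

# Linearized equations: dr/dt = -b(c/d) f,  df/dt = d(a/b) r
def deriv(r, f):
    return -b * (c / d) * f, d * (a / b) * r

# Fourth-order Runge-Kutta integration starting from r = 0, f = B
r, f, t, dt = 0.0, B, 0.0, 1e-4
while t < 1.0:
    k1r, k1f = deriv(r, f)
    k2r, k2f = deriv(r + 0.5 * dt * k1r, f + 0.5 * dt * k1f)
    k3r, k3f = deriv(r + 0.5 * dt * k2r, f + 0.5 * dt * k2f)
    k4r, k4f = deriv(r + dt * k3r, f + dt * k3f)
    r += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
    f += dt * (k1f + 2 * k2f + 2 * k3f + k4f) / 6
    t += dt

# Compare with the analytical guess r = A sin(omega t), f = B cos(omega t)
print(r - A * math.sin(omega * t), f - B * math.cos(omega * t))
```

Both differences come out vanishingly small, confirming the analytical solution.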

A plot of the solution shows two oscillating populations, with the foxes lagging 90 degrees behind the rabbits. In words, suppose you start with foxes at their equilibrium value, but a surplus of rabbits above their equilibrium. In this case, there are lots of rabbits for the foxes to eat, so the foxes gorge themselves and their population grows. However, as the number of foxes rises, the number of rabbits starts to fall (they are being ravaged by all those foxes). After a while, the number of rabbits declines back to its equilibrium value, but by then the number of foxes has surged above its steady-state value. Foxes continue to devour rabbits, reducing the rabbit population below equilibrium. Now there are too many foxes competing for too few rabbits, so the fox population starts to shrink as some inevitably go hungry. During this difficult time, both populations are plummeting as a large but decreasing number of ravenous foxes hunt the rare and frightened rabbits. When the foxes finally fall back to their equilibrium value there is a shortage of rabbits, so the foxes continue to starve and their number keeps falling. With fewer foxes, the rabbits breed like…um…rabbits and begin to make a comeback. Once they climb to their equilibrium value, there are still relatively few foxes, so the rabbits prosper all the more. With the rabbit population surging, there is plenty of food for the foxes, and the fox population begins to increase. During these happy days, both populations thrive. Eventually, the foxes return to their equilibrium value, but by this time the rabbits are plentiful. But this is just where we started, so the process repeats, over and over again. I needed a lot of words to explain those foxes and rabbits. I think you can begin to see the virtue of a succinct mathematical analysis, rather than a verbose nonmathematical description.

For larger oscillations, the nonlinear nature of the model becomes important. The populations still oscillate, but not sinusoidally. For some parameters, one population may rise slowly and then suddenly drop precipitously, only to gradually rise again. You can see some of those results here and here.

The Lotka-Volterra model is rather elementary. For instance, there is no damping; the oscillations never decay away but instead continue forever. Moreover, the oscillations do not approach some fixed amplitude (limit-cycle behavior). Instead, the amplitude depends entirely on the initial conditions. Many more realistic models have a threshold, above which oscillations occur but below which the system returns to its steady state.
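There is a tidy way to see why the oscillations never decay: the Lotka-Volterra equations conserve the quantity V = dR − c ln R + bF − a ln F (differentiate V and substitute the equations to check that dV/dt = 0). Each initial condition sits on its own closed orbit, which is why the amplitude never settles down to a limit cycle. A short Python sketch (my own, with hypothetical parameters chosen for illustration) integrating the full nonlinear equations confirms that V stays constant:

```python
import math

# Hypothetical parameters and starting populations, for illustration only
a, b, c, d = 2.0, 0.1, 1.0, 0.02
R, F = 80.0, 15.0            # off the steady state R = c/d = 50, F = a/b = 20

def deriv(R, F):
    return a * R - b * R * F, -c * F + d * R * F

# Conserved quantity of the Lotka-Volterra equations
def V(R, F):
    return d * R - c * math.log(R) + b * F - a * math.log(F)

V0, dt = V(R, F), 1e-3
for _ in range(20000):       # integrate for 20 time units with Runge-Kutta
    k1 = deriv(R, F)
    k2 = deriv(R + 0.5 * dt * k1[0], F + 0.5 * dt * k1[1])
    k3 = deriv(R + 0.5 * dt * k2[0], F + 0.5 * dt * k2[1])
    k4 = deriv(R + dt * k3[0], F + dt * k3[1])
    R += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    F += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6

print(abs(V(R, F) - V0))     # essentially zero: the orbit is closed
```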

Mathematical Biology, by James Murray, superimposed on Intermediate Physics for Medicine and Biology.
Mathematical Biology,
by James Murray.
Population dynamics is a large field, of which we only scratch the surface. One place to learn more is James Murray’s book that we cite at the end of the homework problem:

Murray, J. D. (2001) Mathematical Biology. New York, Springer-Verlag.
The most recent (3rd) edition of Murray’s book is actually in two volumes:
Murray, J. D. (2002) Mathematical Biology: I. An Introduction. New York, Springer-Verlag. 

Murray, J. D. (2002) Mathematical Biology: II. Spatial Models and Biomedical Applications. New York, Springer-Verlag.
Hear Murray talk about his research here.


Listen to James Murray talk about Mathematical Biology.
https://www.youtube.com/watch?v=6Yj5Nyb_VyU

Alfred Lotka (1880–1949) was an American scientist. In 1925 he published a book, Elements of Physical Biology, that is in some ways a precursor to Intermediate Physics for Medicine and Biology, or perhaps an early version of Murray’s Mathematical Biology. You can download a copy of the book here.

Friday, June 21, 2013

Life’s Ratchet

Life's Ratchet: How Molecular Machines Extract Order from Chaos, by Peter Hoffmann, superimposed on Intermediate Physics for Medicine and Biology.
Life’s Ratchet: How Molecular Machines
Extract Order from Chaos,
by Peter Hoffmann.
This week I finished reading Life’s Ratchet: How Molecular Machines Extract Order from Chaos, by Peter Hoffmann. This book is mostly about molecular biophysics, which Russ Hobbie and I purposely avoid in the 4th edition of Intermediate Physics for Medicine and Biology. But the workings of tiny molecular motors are closely related to thermal motion (Hoffmann calls it the “molecular storm”) and the second law of thermodynamics, topics that Russ and I do address. One fascinating topic I want to focus on is a discussion of Feynman’s ratchet.

Let us begin with Richard Feynman’s discussion in Chapter 16 of Volume 1 of The Feynman Lectures on Physics. I recall reading The Feynman Lectures the summer between graduating from the University of Kansas and starting graduate school at Vanderbilt University. All physics students should find time to read these great lectures. Feynman writes
Let us try to invent a device which will violate the Second Law of Thermodynamics, that is, a gadget which will generate work from a heat reservoir with everything at the same temperature. Let us say we have a box of gas at a certain temperature and inside there is an axle with vanes in it… Because of the bombardments of gas molecules on the vane, the vane oscillates and jiggles. All we have to do is to hook onto the other end of the axle a wheel which can turn only one way—the ratchet and pawl. Then when the shaft tries to jiggle one way, it will not turn, and when it jiggles the other, it will turn… If we just look at it, we see, prima facie, that it seems quite possible. So we must look more closely. Indeed, if we look at the ratchet and pawl, we see a number of complications.

First, our idealized ratchet is as simple as possible, but even so, there is a pawl, and there must be a spring in the pawl. The pawl must return after coming off a tooth, so the spring is necessary…
Feynman goes on to explore this device in detail. He concludes that, as we would expect, the device does not violate the second law. He explains
It is necessary to work against the spring in order to lift the pawl to the top of a tooth. Let us call this energy ε… The chance that the system can accumulate enough energy ε to get the pawl over the top of the tooth is e^(−ε/kT) [T is the absolute temperature, and k is Boltzmann’s constant]. But the probability that the pawl will accidentally be up is also e^(−ε/kT). So the number of times that the pawl is up so the wheel can turn backwards freely is equal to the number of times that we have enough energy to turn it forward when the pawl is down. We thus get a “balance,” and the wheel will not go around.
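Feynman’s balance argument is easy to test with a toy Monte Carlo model (my own sketch, not from the Lectures). At each time step the thermal bombardment lifts the pawl over a tooth and the wheel advances, with probability exp(−ε/kT); with the same probability the pawl happens to be fluctuating up, and the wheel slips backward a tooth. The two rates cancel, and the wheel makes no net headway:

```python
import random

random.seed(1)
p = 0.1             # the Boltzmann factor exp(-eps/kT)
steps = 200000
x = 0               # position of the ratchet wheel, in teeth

for _ in range(steps):
    u = random.random()
    if u < p:           # thermal kick lifts the pawl over a tooth: forward
        x += 1
    elif u < 2 * p:     # pawl happens to be up and the wheel slips: backward
        x -= 1

print(x / steps)    # net drift per step; fluctuates around zero
```

The wheel wanders back and forth like a random walk, but its average drift is zero, just as Feynman concluded.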
Hoffmann explains that a lot of molecular machines important in biology operate analogously to Feynman’s ratchet and pawl. He writes
What kind of molecular device could channel random molecular motion into oriented activity? Such a device would need to allow certain directions of motion, while rejecting others. A ratchet, that is, a wheel with asymmetric teeth blocked by a spring-loaded pawl, could do the job... Maybe nature has made molecular-size ratchets that allow favorable pushes from the molecular storm in one direction, while rejecting unfavorable pushes from the opposite direction…

For the ratchet-and-pawl machine to extract energy from the molecular storm, it has to be easy to push the pawl over one of the teeth of the ratchet. The pawl spring must be very weak to allow the ratchet to move at all. Otherwise, a few water molecules hitting the ratchet would not be strong enough to force the pawl over one of the teeth. Just like the ratchet wheel, the pawl is continuously bombarded by water molecules. Its weak spring allows the pawl to bounce up and down randomly, opening from time to time, allowing the ratchet to slip backward… Worse, because the spring is most relaxed when the pawl is at the lowest point between two teeth [the compressed spring pushes the pawl down against the ratchet], the pawl spends most of its time touching the steep edge of one of the teeth. When an unfavorable hit pushes the ratchet backward just as the pawl has opened, it does not need to go far to end up on the incline of the next tooth—rotating the ratchet backward!... The ratchet will move, bobbing back and forth, but it will not make any net headway.
How then do molecular machines work? They require an input of energy, which eventually gets dissipated into heat. Hoffmann concludes
We could, in fact, make Feynman’s ratchet work, if from time to time, we injected energy to loosen and then retighten the pawl’s spring. On loosening the spring, the wheel would rotate freely, with a slightly higher probability of rotating one way rather than the other. Tightening the pawl’s spring would push the wheel further in the direction we want. On average, the wheel would move forward and do work. In fact, it can be shown that any molecular machine that operates on an asymmetric energy landscape and incorporates an irreversible, energy-degrading step can extract useful work from the molecular storm.
This may all seem abstract, but Hoffmann brings it down to specifics. The molecular machine could be myosin moving along actin (as in muscles) or kinesin moving along a microtubule (as in separating chromosomes during mitosis). The energy source for the irreversible step is ATP. This step allows the motor to extract energy from the “molecular storm” of thermal energy that is constantly bombarding it.

Friday, June 14, 2013

I WENT TO PARIS AND I MISSED THE VERY BEST THING!

Three summers ago, my wife and I visited Paris for our 25th wedding anniversary. We carefully planned our trip so we could see all the most famous sites—the Eiffel Tower, the Arc de Triomphe, the Notre Dame Cathedral, the Palace of Versailles, the Pantheon, the Louvre, and the Musee d’Orsay—but somehow WE MISSED THE MOST IMPORTANT THING! Apparently there is a giant painting in the Musee d’Art Moderne by Raoul Dufy, depicting many scientists who have contributed to the study of electricity. What more could a physicist like me ask for? I first learned about this painting in a book I am now reading, The Spark of Life: Electricity in the Human Body, by Frances Ashcroft. I’ll have more on that book in a future post. Here is what she writes about the painting:
An unusual tribute to the scientists and philosophers who contributed to the discovery of electricity hangs in Musee d’Art Moderne in Paris. A giant canvas known as “La Fee Electricite,” which measures 10 metres high and 60 metres long, it was commissioned by a Paris electricity company to decorate its Hall of Light at the 1937 world exhibition in Paris. It is the work of French Fauvist painter Raoul Dufy, better known for his wonderful colourful depictions of boats, and it took him and two assistants four months to complete. The Electricity Fairy sails through the sky at the far left of the painting above some of the world’s most famous landmarks, the Eiffel Tower, Big Ben and St Peter’s Basilica in Rome among them. Behind her follow some 110 people connected with the development of electricity, from Ancient Greece to modern times. As time and the canvas progress, the landscape changes from scenes of rural idyll to steam trains, furnaces, the trappings of the industrial revolution and finally the giant pylons that support the power lines carrying electricity to the planet.
Short of going to see the painting in Paris, the next best thing is to view it in sections at the Electricity Online website of the University of Leeds. I won’t list all the scientists depicted in it, but let me note those Russ Hobbie and I mention in the 4th edition of Intermediate Physics for Medicine and Biology (roughly in chronological order): Newton, Bernoulli, Laplace, Poisson, Gauss, Ohm, Oersted, Clausius, Clapeyron, Fourier, Savart, Fresnel, Biot, Ampere, Faraday, Gibbs, Helmholtz, Maxwell, Poincare, Moseley, Lorentz, and Pierre Curie. A few were only present in IPMB because they have a unit named after them: Pascal, Watt, Joule, Kelvin, Roentgen, Becquerel, Hertz, and Marie Curie. Galvani is shown with a frog, Faraday with a coil and galvanometer, Pierre Curie (mentioned in IPMB through the Curie temperature) is standing next to his wife Marie Curie (only mentioned in IPMB in association with her unit, and the only female scientist in the painting), and Edison is next to his light bulbs.

I’m still not sure how I never knew about this magnificent painting. I guess we need to take another trip to Paris. Honey, start packing!

Friday, June 7, 2013

Resource Letter BSSMF-1: Biological Sensing of Static Magnetic Fields

In the October 2012 issue of the American Journal of Physics, physicist Leonard Finegold published “Resource Letter BSSMF-1: Biological Sensing of Static Magnetic Fields” (Volume 80, Pages 851–861). Finegold recommends that a good starting point for mastering the topic of magnetoreception is Kenneth Lohmann’s News and Views article in Nature.
35. “Magnetic-field perception: News and Views Q and A,” K. J. Lohmann, Nature, 464, 1140–1142 (2010). (E) 
I looked it up, and it does indeed provide a well-written summary of the field in a reader-friendly question-and-answer format.

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss magnetotactic bacteria. We write that
Bacteria in the northern hemisphere have been shown to seek the north pole. Because of the tilt of the earth’s field, they burrow deeper into the environment in which they live. Similar bacteria in the southern hemisphere burrow down by seeking the south pole.
Finegold also reviews this topic. The excerpt reproduced below serves both as an update to IPMB and as a sample of the style of an American Journal of Physics resource letter.
Certain bacteria move in response to the earth’s magnetic field (Ref. 35), swimming along the field lines, and have been excellently reviewed (Ref. 36). The “sensing” element is magnetite (an iron oxide) or greigite (an iron sulfide) (Ref. 37). The bacteria would swim toward the boundary between oxygenated and oxygen-poor regions. Until recently, there was the comforting idea that there are two groups of bacteria with opposite sensors, depending on which of the earth’s hemispheres they reside. Alas, both groups have now been found in the same place; it appears that their polarity is correlated with the local redox potential (Ref. 38 and 39). In addition, some bacteria use only the axial property of the field (i.e., they swim both with or against the field direction), whereas others use the vector property (i.e., they swim either with or against the field direction). Details of the behavior have been elucidated by applying magnetic fields to bacteria in a spectrophotometer cuvette, with genetic analysis (Ref. 39).

35. “South-seeking magnetotactic bacteria in the Southern Hemisphere,” R. P. Blakemore, R. B. Frankel, and Ad. J. Kalmijn, Nature 286, 384–385 (1980). (A)

36. “Bacteria that synthesize nano-sized compasses to navigate using Earth’s geomagnetic field,” L. Chen, D. A. Bazylinski, and B. H. Lower, Nature Education Knowledge 1(10), 14 (2010). (I)

37. “The identification and biogeochemical interpretation of fossil magnetotactic bacteria,” R. E. Kopp and J. L. Kirschvink, Earth-Sci. Rev. 86, 42–61 (2008). (A)

38. “South-seeking magnetotactic bacteria in the northern hemisphere,” S. L. Simmons, D. A. Bazylinski, and K. J. Edwards, Science 311, 371–374 (2006). (A)

39. “Characterization of bacterial magnetotactic behaviors by using a magnetospectrophotometry assay,” C. T. Lefevre, T. Song, J. P. Yonnet, and L. F. Wu, Appl. Environ. Microbiol. 75, 3835–3841 (2009). (A)
Magnetoreception is a field that often stirs debate. Russ and I outline one such debate in IPMB:
Kirschvink (1992) proposed a model whereby a magnetosome in a field of 10−4–10−3 T could rotate to open a membrane channel. As an example of the debate that continues in this area, Adair (1991, 1992, 1993, 1994) argued that a magnetic interaction cannot overcome thermal noise in a 60-Hz field of 5 × 10−6 T. However, Polk (1994) argues that more biologically realistic parameters, including a large number of magnetosomes in a cell, could allow an interaction at 2 × 10−6 T.
The key citations in the debate are
Adair, R. (1991) “Constraints on biological effects of weak extremely-low-frequency electromagnetic fields,” Phys. Rev. A, Volume 43, Pages 1039–1048.
Kirschvink, J. L. (1992) “Comment on ‘Constraints on biological effects of weak extremely-low-frequency electromagnetic fields’,” Phys. Rev. A, Volume 46, Pages 2178–2184.
Adair, R. (1992) “Reply to ‘Comment on “Constraints on biological effects of weak extremely-low-frequency electromagnetic fields”’,” Phys. Rev. A, Volume 46, Pages 2185–2187.
For those of you who like this sort of thing, here is another example from Finegold’s resource letter. The debate is about, of all things, whether cows align themselves with magnetic fields!
A surprising finding is that cattle and deer seem to align themselves in an approximate north-south (geomagnetic) direction. The evidence is from world-wide satellite photographs from Google Earth, supported by ground observations of more than 10,000 animals, and is hard to rebut. The satellite photographs do not have enough resolution to show the direction (north or south) in which the animals face.
72. “Magnetic alignment in grazing and resting cattle and deer,” S. Begall, J. Cerveny, J. Neef, O. Vojtech, and H. Burda, Proc. Natl. Acad. Sci. U.S.A. 105, 13453–13455 (2008). (I)
As Usherwood asks, why on Earth should cattle and deer prefer this alignment? Possible interpretations are that the satellite photographs are made close to noon, so there may be physiological reasons (heating, cooling) for animals to align or to view predators better.
73. “Cattle and deer align north (-north-east),” J. Usherwood, J. Exp. Biol. 212, iv (2009). (E)
Partly to rule out sun compass effects, Burda et al. investigated ruminant alignment under high-voltage (and hence high-current, low-frequency) power lines and found that the geomagnetic north-south alignment was disturbed; the disturbance was correlated with the alternating fields. Such disturbance might instead be because the animals felt protected by (or preferring) the overhead lines or pylons or because of the audible (to humans at least) corona discharge. A good control for this would be to look at ruminants under power lines being repaired, carrying no current; this is difficult to do. The authors ingeniously compared the nonalignment under N-S and E-W trending power lines and found that the nonalignment followed the resultant total magnetic field. Their conclusions have been challenged (Ref. 75), and they have a lively rebuttal (Ref. 76), to which the challengers have replied (Ref. 77). Hence, the initially persuasive evidence, that cattle and deer detect magnetic fields, may need re-examination.

74. “Extremely low-frequency electromagnetic fields disrupt magnetic alignment of ruminants,” H. Burda, S. Begall, J. Cerven, J. Neef, and P. Nemec, Proc. Natl. Acad. Sci. U.S.A. 106, 5708–5713 (2009). (I)
75. “No alignment of cattle along geomagnetic field lines found,” J. Hert, L. Jelinek, L. Pekarek, and A. Pavlicek, J. Comp. Physiol., A 197, 677–682 (2011). (I)
76. “Further support for the alignment of cattle along magnetic field lines: Reply to Hert et al.,” S. Begall, H. Burda, J. Cerveny, O. Gerter, J. Neef-Weisse, and P. Nemec, J. Comp. Physiol. [A] 197, 1127–1133 (2011). (I)
77. “Authors’ Response,” J. Hert, L. Jelinek, L. Pekarek, and A. Pavlicek, J. Comp. Physiol. [A] 197(12), 1135– 1136 (2011). (I) 
Finegold also discusses magnet therapy, a topic I am extremely skeptical about and have discussed before in this blog. He cites his own editorial with Flamm:
“Magnet therapy,” L. Finegold and B. L. Flamm, Br. Med. J. 332, 4 (2006). (E)
which concludes
Extraordinary claims demand extraordinary evidence. If there is any healing effect of magnets, it is apparently small since published research, both theoretical and experimental, is weighted heavily against any therapeutic benefit. Patients should be advised that magnet therapy has no proved benefits. If they insist on using a magnetic device they could be advised to buy the cheapest—this will at least alleviate the pain in their wallet.

Friday, May 31, 2013

Rounding Off the Cow

In the October 2012 issue of the American Journal of Physics, Dawn Meredith and Jessica Bolker published an article, “Rounding Off the Cow: Challenges and Successes in an Interdisciplinary Physics Course for Life Science Students” (Volume 80, Pages 913–922). The article is interesting, and much of the motivation for their work is nearly identical to that of Russ Hobbie and me in writing the 4th edition of Intermediate Physics for Medicine and Biology. They focus on an introductory physics class, whereas Russ and I wrote an intermediate-level textbook. Nevertheless, many of the ideas and challenges are the same. Here, I want to focus on their Table 1, in which they list topics that are emphasized and deemphasized compared to standard introductory classes.

Table I. Changes in topic emphasis compared to a standard course

                         Semester 1                     Semester 2
Included/stressed        Kinematics                     Heat transfer
                         Dynamics                       Kinetic theory of gases
                         Static torque                  Entropy
                         Energy                         Diffusion, convection, conduction
                         Stress/strain and fracture     Simple harmonic motion
                         Fluids (far more)              Waves (sound, optics)
Omitted/de-emphasized    Projectile motion              Heat engines
                         Relative motion                Magnetism (less)
                         Rotational motion              Induction (qualitatively)
                         Statics                        Atomic physics (instrumentation)
                         Collisions                     Relativity
                         Newton’s law of gravitation
                         Kepler’s laws

How does this list compare with the content of IPMB? We don’t stress kinematics and dynamics much; in fact, most of our mechanics discussion centers on static equilibrium. Interestingly, Meredith and Bolker emphasize static torque, which is absolutely central to our analysis of biomechanics in Chapter 1. Rotational equilibrium and torque explain why bones, muscles, and tendons often experience forces far larger than the weight of the body; they also underlie our rather extensive discussion of the role of a cane. We discuss mechanical energy in Chapter 1, but energy doesn’t become an essential topic until our Chapter 3 on thermodynamics. We agree completely with Meredith and Bolker’s listing of “stress/strain and fracture” and “fluids (far more),” and I second the “far more.” Our Chapter 1 contains a lot of fluid dynamics, including the biologically important concept of buoyancy, the idea of high and low Reynolds number, and applications of fluid dynamics to the circulatory system.

The time allotted to an introductory physics class is limited, so something must get deemphasized to free up time for topics like fluids. Meredith and Bolker mention projectile motion (we agree; it is nowhere in IPMB), relative motion (not crucial if relativity is not covered), and rotational motion (we don’t emphasize this either, except when analyzing the centrifuge). I don’t really understand the omission of statics, because, as I said earlier, static mechanical equilibrium is crucial for biomechanics. They deemphasize collisions, and so do we, although we do discuss the collision of an electron with a photon when analyzing Compton scattering in Chapter 15. Newton’s law of gravity and Kepler’s laws of planetary motion are absent from both our book and their class.

In the second semester, Meredith and Bolker stress heat transfer (convection and conduction), the kinetic theory of gases, and entropy. Russ and I discuss all these topics in our Chapter 3. Diffusion is a topic they emphasize, and rightly so. It is typically absent from an introductory physics class but is crucial for biology; we discuss it in detail in Chapter 4 of IPMB. Meredith and Bolker list simple harmonic motion among the topics they stress. We talk about harmonic motion in Chapter 10, but mainly as a springboard for the study of nonlinear dynamics; much of the analysis of linear harmonic motion is found in IPMB in an appendix. Finally, they stress waves (sound and optics). We do too, mainly in our Chapter 13 about sound and ultrasound, a new chapter in the 4th edition.

Topics they omit or deemphasize in the second semester include heat engines. We barely mention heat engines at the end of Chapter 3, and the well-known Carnot heat engine is never analyzed in our book. Meredith and Bolker deemphasize magnetism and magnetic induction. As a researcher in biomagnetism, I would hate to see these topics go. Russ and I analyze biomagnetism in Chapter 8. However, I can see how one might be tempted to deemphasize these topics; biomagnetic fields are very weak and do not play a large role in either biology or medicine. I personally would keep them in, and they remain an important part of IPMB. They do not stress “Atomic Physics (Instrumentation),” and I am not sure exactly what they mean, especially with their parenthetical comment about instruments. We talk a lot about atomic physics in Chapter 14 on Atoms and Light. Finally, Meredith and Bolker omit relativity, and so do Russ and I, except as needed to understand photons. We never discuss the more traditional phenomena of relativity, such as the Lorentz contraction, time dilation, or simultaneity.

Some topics should get about the same amount of attention as in a traditional class, but with slight changes in emphasis. For instance, I would cover geometrical optics, including lenses (when discussing the eye and eyeglasses) but I would skip mirrors. I would cover nuclear physics, but I would skip fission and fusion, and focus on radioactive decay, including positron decay (positron emission tomography).

I think that Meredith and Bolker provide some useful guidance on how to construct an introductory physics class for students interested in the life sciences. Russ and I aim at an intermediate class for students who have taken a traditional introductory class and want to explore applications to biology and medicine in more detail. Our book is clearly at a higher mathematical level: we use calculus, while most introductory physics classes for life science majors are algebra-based. But for the most part, we agree with Meredith and Bolker about what physics topics are central for biology majors and pre-med students.

Friday, May 24, 2013

Eleanor Adair (1926-2013)

Eleanor Adair, who studied the health risks of microwave radiation, died on April 20 in Hamden, Connecticut, at the age of 86. A 2001 interview with Adair, published in the New York Times, began
Eleanor R. Adair wants to tell the world what she sees as the truth about microwave radiation.

New widely reported studies have failed to find that cellular phones, which use microwaves to transmit signals, cause cancer. And most academic scientists say the microwave radiation that people are exposed to with devices like cell phones is harmless. But still, Dr. Adair knows that many people deeply fear these invisible rays.

She knows that many people hear the word “radiation” and assume that all radiation is dangerous, equating microwaves to the very different X-rays.

Microwaves, she points out, are at the other end of the electromagnetic spectrum from high energy radiation like X-rays and gamma rays. And unlike gamma rays and X-rays, which can break chemical bonds and injure cells, even causing cancer, microwaves, she says, can only heat cells. Of course, if cells get hot enough, they can die, but the heat level has to be closer to that in an oven than the extremely low level from cell phones.
The interview ends with this exchange:
Q. If I were to say to people, “Hey there’s this really cool idea: Why heat your whole house when you could use microwaves to heat yourself?” they would say: “You’ve got to be kidding. Don’t you know that microwaves are dangerous? They can even cause cancer.” What do you say to people who respond like that?

A. I try to educate them in exactly what these fields are. That they are part of the full electromagnetic spectrum that goes all the way from the radio frequency and microwave bands, through infrared, ultraviolet, the gamma rays and all that.

And the difference between the ionizing X-ray, gamma ray region and the microwave frequency is in the quantum energy. The lower you get in frequency the lower you get in quantum energy and the less it can do to the cells in your body.

If you have a really high quantum energy such as your X-rays and ionizing-radiation region of the spectrum, this energy is high enough that it can bump electrons out of the orbit in your cells and it can create serious changes in the cells of your body such that they can turn into cancers and various other things that are not good for you.

But down where we are working, in the microwave band, you are millions of times lower in frequency and there the quantum energy is so low that they can’t do any damage to the cells whatsoever. And most people don’t realize this.

Somehow, something is missing in their basic science education, which is something I keep trying to push. Learn the spectrum. Learn that you’re in far worse shape if you lie out on the beach in the middle of summer and you soak up that ultraviolet radiation than you are if you use your cell phone.

Q. Some people say that with the ever-increasing exposure of the population to microwaves—cell phones have really taken off in the past few years—we need to redouble our research efforts to look for dangerous effects of microwaves on cells and human tissues. Do you agree?

A. No. All the emphasis that we need more research on power line fields, cell phones, police radar—this involves billions of dollars that could be much better spent on other health problems. Because there is really nothing there.
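Adair’s quantum-energy argument is easy to check numerically. Below is a hedged back-of-the-envelope sketch in Python; the frequencies and the ~4 eV bond energy are illustrative round numbers I chose, not values from the interview.

```python
# Photon energy E = h*f for representative frequencies, compared with a
# typical covalent-bond energy (~4 eV, an order-of-magnitude figure).
h = 6.626e-34   # Planck's constant (J s)
eV = 1.602e-19  # joules per electron volt

frequencies = {
    "cell phone (1.9 GHz)": 1.9e9,
    "microwave oven (2.45 GHz)": 2.45e9,
    "ultraviolet (1e15 Hz)": 1.0e15,
    "x ray (1e19 Hz)": 1.0e19,
}

bond_energy_eV = 4.0  # illustrative covalent-bond energy

for name, f in frequencies.items():
    E_eV = h * f / eV
    print(f"{name}: {E_eV:.2e} eV ({E_eV / bond_energy_eV:.1e} x bond energy)")
```

A microwave photon comes out millions of times weaker than a chemical bond, while an x-ray photon is thousands of times stronger, which is the distinction between nonionizing and ionizing radiation that Adair emphasizes.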
We don’t cite Adair’s research in the 4th edition of Intermediate Physics for Medicine and Biology, but we do cover the interaction of electromagnetic fields with tissue in Chapter 9. Much of our discussion is about power-line (60 Hz) fields, but many of the same considerations apply to microwaves. In that discussion, we do cite Robert Adair, Eleanor’s husband and an emeritus professor of physics at Yale, who shares his wife’s interest in the health effects of microwave radiation.

Adair won the d’Arsonval Award, presented by the Bioelectromagnetics Society in recognition of her accomplishments in the field of bioelectromagnetics. In an editorial announcing the award, Ben Greenebaum writes (Bioelectromagnetics, Volume 29, Page 585, 2008)
It gives me great pleasure to introduce Dr. Eleanor R. Adair, the recipient of the Bioelectromagnetics Society’s 2007 D’Arsonval Award, as she presents her Award Lecture (Fig. 1). Dr. Adair is being honored by the Society for her body of work investigating physiological thermoregulatory responses to radio frequency and microwave fields. Her bioelectromagnetic career began with extensive experimental studies of electromagnetic radiation-induced thermophysiological responses in monkeys and concluded with experiments that accomplished the critical extrapolation of the earlier findings to humans. I believe that this body of work constitutes a majority of the literature on the latter topic.

She spent most of her career as a research scientist at the John B. Pierce Foundation Laboratory at Yale University, but finished it as a scientist at the US Air Force’s Brooks City Base in San Antonio, Texas. As she notes in her D’Arsonval address [Adair, 2008], she took her undergraduate degree at Mount Holyoke College in 1948 and her doctorate in psychology at the University of Wisconsin-Madison in 1955. Interspersed among her academic accomplishments in Madison were others—marriage to Robert Adair and children. We should not forget that combining a research career and family at that time was much rarer and required overcoming greater difficulties than those still encountered today. Those of us who have interacted with Dr. Adair over the years know that she has determination in plenty.

Dr. Adair was a charter member of the Society and was its Secretary-Treasurer (1983–1986) during a difficult time, when the Society decided to replace its first Executive Director with Bill Wisecup. She has also been active outside the Society, both with groups concerned with research into bioelectromagnetic effects and with groups concerned with the implications of these results.

However, it is for her overall scientific contributions to bioelectromagnetics that she is being presented the D’Arsonval Award. The criteria for the Award state that “. . . the D’Arsonval Medal is to recognize outstanding achievement in research in the field of Bioelectromagnetics.” And that is the topic that she will address today in her presentation entitled, “Reminiscences of a Journeyman Scientist.”
For those who want to read Adair’s own words, you can find her presentation at:
Adair, E. R. (2008) “Reminiscences of a journeyman scientist: Studies of thermoregulation in non-human primates and humans,” Bioelectromagnetics, Volume 29, Pages 586–597.

Friday, May 17, 2013

The Lorenz equations and chaos

Fifty years ago, Edward Lorenz (1917–2008) published an analysis of Rayleigh–Bénard convection that began the study of a field of mathematics called chaos theory. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce chaos by analyzing the logistic map, which
is an example of chaotic behavior or deterministic chaos. Deterministic chaos has four important characteristics:
1. The system is deterministic, governed by a set of equations that define the evolution of the system.
2. The behavior is bounded. It does not go off to infinity.
3. The behavior of the variables is aperiodic in the chaotic regime. The values never repeat.
4. The behavior depends very sensitively on the initial conditions.
The sensitivity to initial conditions is sometimes called the “butterfly effect,” a term coined by Lorenz. His model is a simplified description of the atmosphere, and has implications for weather prediction.
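These characteristics are easy to demonstrate with the logistic map itself. Here is a minimal Python sketch; the parameter r = 3.9 and the initial conditions are illustrative choices, not values taken from IPMB.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in the chaotic regime (r = 3.9).
# Two nearby initial conditions stay bounded in [0, 1] but diverge until
# their difference is as large as the attractor itself.

def logistic_orbit(x0, r=3.9, n=50):
    """Return the first n+1 iterates of the logistic map starting at x0."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)  # perturb the initial condition slightly

# Bounded (characteristic 2) yet sensitive to initial conditions
# (characteristic 4): the tiny perturbation grows roughly exponentially.
print(max(a), max(b))
print([abs(x - y) for x, y in zip(a, b)][-5:])
```

Since r/4 = 0.975 < 1, every iterate remains in the unit interval, while the one-part-in-a-billion perturbation is amplified to an order-one difference within a few dozen iterations.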

The mathematical model that Lorenz analyzed consists of three first-order coupled nonlinear ordinary differential equations. Because of their historical importance, I have written a new homework problem that introduces Lorenz’s equations. These particular equations don’t have any biological applications, but the general idea of chaos and nonlinear dynamics certainly does (see, for example, Glass and Mackey’s book From Clocks to Chaos).
Section 10.7

Problem 33 1/2. Edward Lorenz (1963) published a simple, three-variable (x, y, z) model of Rayleigh–Bénard convection:
dx/dt = σ (y – x)
dy/dt = x (ρ – z) – y
dz/dt = xy – β z
where σ = 10, ρ = 28, and β = 8/3.
(a) Which terms are nonlinear?
(b) Find the three equilibrium points for this system of equations.
(c) Write a simple program to solve these equations on the computer (see Sec. 6.14 for some guidance on how to solve differential equations numerically). Calculate and plot x(t) as a function of t for different initial conditions. Consider two initial conditions that are very similar, and compute how the solutions diverge as time goes by.
(d) Plot z(t) versus x(t), with t acting as a parameter of the curve.

Lorenz, E. N. (1963) “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, Volume 20, Pages 130–141.
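The problem leaves the choice of language and method open; here is one minimal Python sketch using a fourth-order Runge–Kutta step (the initial condition (1, 1, 1), the step size, and the size of the perturbation are illustrative choices).

```python
# Numerically solve the Lorenz equations and compare two runs that start
# from nearly identical initial conditions.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(state):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(state, dt):
    """Advance the state by one fourth-order Runge-Kutta step."""
    def shift(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, dt / 2))
    k3 = lorenz(shift(state, k2, dt / 2))
    k4 = lorenz(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def solve(x0, y0, z0, dt=0.01, steps=3000):
    """Integrate from (x0, y0, z0) and return the full trajectory."""
    trajectory = [(x0, y0, z0)]
    for _ in range(steps):
        trajectory.append(rk4_step(trajectory[-1], dt))
    return trajectory

traj_a = solve(1.0, 1.0, 1.0)
traj_b = solve(1.0 + 1e-6, 1.0, 1.0)  # tiny perturbation in x

# Both solutions stay bounded on the attractor, yet by t = 30 their x(t)
# curves bear no resemblance to one another (part (c) of the problem).
print(abs(traj_a[-1][0] - traj_b[-1][0]))
```

Plotting z against x for either trajectory (part (d)) traces out the familiar butterfly-shaped Lorenz attractor.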
If you want to examine chaos in more detail, see Steven Strogatz’s excellent book Nonlinear Dynamics and Chaos. He has an entire chapter (his Chapter 9) dedicated to the Lorenz equations.

The story of how Lorenz stumbled upon the sensitivity to initial conditions is a fascinating tale. Here is one version, from a National Academy of Sciences Biographical Memoir about Lorenz written by Kerry Emanuel.
At one point, in 1961, Ed had wanted to examine one of the solutions [to a preliminary version of his model that contained 12 equations] in greater detail, so he stopped the computer and typed in the 12 numbers from a row that the computer had printed earlier in the integration. He started the machine again and stepped out for a cup of coffee. When he returned about an hour later, he found that the new solution did not agree with the original one. At first he suspected trouble with the machine, a common occurrence, but on closer examination of the output, he noticed that the new solution was the same as the original for the first few time steps, but then gradually diverged until ultimately the two solutions differed by as much as any two randomly chosen states of the system. He saw that the divergence originated in the fact that he had printed the output to three decimal places, whereas the internal numbers were accurate to six decimal places. His typed-in new initial conditions were inaccurate to less than one part in a thousand.

“At this point, I became rather excited,” Ed relates. He realized at once that if the atmosphere behaved the same way, long-range weather prediction would be impossible owing to extreme sensitivity to initial conditions. During the following months, he persuaded himself that this sensitivity to initial conditions and the nonperiodic nature of the solutions were somehow related, and was eventually able to prove this under fairly general conditions. Thus was born the modern theory of chaos.
To learn more about the life of Edward Lorenz, see his obituary here and here. I have not read Chaos: Making a New Science by James Gleick, but I understand that he tells Lorenz’s story there.

Friday, May 10, 2013

Graduation

Today, my wife Shirley and I will attend the graduation of our daughter Kathy from Vanderbilt University. She is getting her undergraduate degree, with a double major in biology and history.

Kathy spent part of her time in college working with John Wikswo in the Department of Physics. As regular readers of this blog may know, Wikswo was my PhD advisor when I was a graduate student at Vanderbilt in the 1980s. Russ Hobbie and I often cite Wikswo’s work in the 4th edition of Intermediate Physics for Medicine and Biology, for his contributions to both cardiac electrophysiology and biomagnetism. You can see a picture of Kathy and John in an article about undergraduate research in Arts and Science, the magazine of Vanderbilt University’s College of Arts and Science. Interestingly, there are now publications out there with “Roth and Wikswo” among the authors that I had nothing to do with; for example, see poster A53 at the 6th q-bio conference (Santa Fe, New Mexico, 2012). You can watch and listen to Wikswo give his TEDxNashville talk here. Kathy also worked with Todd Graham of the Department of Cell and Developmental Biology at Vanderbilt. This fall, she plans to attend graduate school studying biology at Michigan State University.