Friday, August 9, 2013

Martha Chase (1927-2003)

Ten years ago yesterday, the American biologist Martha Chase passed away. Chase is famous for her participation in a fundamental genetics experiment. In collaboration with Alfred Hershey, she performed this experiment in 1952 at Cold Spring Harbor Laboratory (see last week's blog entry).  Their results supported the hypothesis that DNA is the biological molecule that carries genetic information. They showed that the DNA, not the protein, of the bacteriophage T2 (a virus that infects bacteria) entered E. coli upon infection.

The Eighth Day of Creation: The Makers of the Revolution in Biology, by Horace Freeland Judson, superimposed on Intermediate Physics for Medicine and Biology.
The Eighth Day of Creation:
The Makers of the Revolution in Biology,
by Horace Freeland Judson.
To describe this experiment, I quote from Horace Freeland Judson’s wonderful book The Eighth Day of Creation: The Makers of the Revolution in Biology.
Hershey and Chase decided to see if they could strip off the empty phage ghosts from the bacteria and find out what they were and where their contents had gone. DNA contains no sulphur; phage protein has no phosphorus. Accordingly, they began by growing phage in a bacterial culture with a radioactive isotope as the only phosphorus in the soup [P32], which was taken up in all the phosphate groups as the DNA of the phage progeny was assembled, or, in the parallel experiment, by growing phage whose coat protein was labelled with hot sulphur [S35]. They used the phage to infect fresh bacteria in broths that were not radioactive, and a few minutes after infection tried to separate the bacteria from the emptied phage coats. “We tried various grinding arrangements, with results that weren’t very encouraging,” Hershey wrote later. Then they made a technological breakthrough, in the best Delbruck fashion of homely improvisation. “When Margaret McDonald loaned us her blender the experiment promptly succeeded.”
This ordinary kitchen blender provided just the right shear forces to strip the empty bacteriophage coats off the bacteria. When tested, those bacteria infected by phages containing radioactive phosphorus were themselves radioactive, but those infected by phages containing radioactive sulphur were not. Thus, the DNA and not the protein is the genetic material responsible for infection. This was truly an elegant experiment. The key was the use of radioactive tracers. Russ Hobbie and I discuss nuclear physics and nuclear medicine in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology. We focus on medical applications of radioactive isotopes, but we should remember that these tracers have also played a crucial role in experiments in basic biology.
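The usefulness of such a tracer depends on its half-life: the fraction of a radioactive isotope remaining after time t is N/N₀ = e^(−ln 2 · t/T½), as discussed in Chapter 17. As a quick illustration (my own example, not from the book), here is a short Python sketch using the roughly 14.3-day half-life of phosphorus-32:

```python
import math

# Fraction of a radioactive tracer remaining after time t:
# N/N0 = exp(-ln(2) * t / T_half).
def fraction_remaining(t_days, half_life_days):
    return math.exp(-math.log(2) * t_days / half_life_days)

# Phosphorus-32 has a half-life of about 14.3 days.
print(fraction_remaining(14.3, 14.3))  # one half-life -> 0.5
print(fraction_remaining(28.6, 14.3))  # two half-lives -> 0.25
```

After a month, then, only about a quarter of the ³²P activity remains, which is one reason tracer experiments like Hershey and Chase's must be done promptly after labeling.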

Hershey and Chase’s experiment, often called the Waring Blender experiment, is a classic studied in introductory biology classes. It was the high point of Chase’s career. She obtained her bachelor’s degree from the College of Wooster and was then hired by Hershey to work in his Cold Spring Harbor laboratory. She stayed at Cold Spring Harbor only three years, but in that time she and Hershey performed their famous experiment. In 1964 she obtained her PhD from the University of Southern California. Unfortunately, things did not go so well for Chase after that. Writer Milly Dawson tells the story.
In the late 1950s in California, she had met and married a fellow scientist, Richard Epstein, but they soon divorced… Chase suffered several other personal setbacks, including a job loss, in the late 1960s, a period that saw the end of her scientific career. Later, she experienced decades of dementia, with long-term but no short-term memory. [Waclaw] Szybalski [a colleague at Cold Spring Harbor Laboratory in the 1950s] remembered his friend as “a remarkable but tragic person.”
A good description of the Hershey-Chase experiment can be found here. You can learn more about the life of Martha Chase in obituaries here and here. Szybalski’s reminiscences are recorded in a Cold Spring Harbor oral history available here. Dawson’s tribute can be found here. And most importantly, the 1952 Hershey-Chase paper can be found here.

Friday, August 2, 2013

Cold Spring Harbor Laboratory

A photograph of me standing next to the entrance of Cold Spring Harbor Laboratory.
Me standing next to the entrance of
Cold Spring Harbor Laboratory.
Last week my wife, my mother-in-law, and I made a brief trip to Long Island, New York, where we made a quick stop at the Cold Spring Harbor Laboratory. What a lovely setting for a research center. We drove around the grounds, looking at the various labs. It sits right on a bay off the Long Island Sound, and looks more like a resort than a scientific laboratory. James Watson, of DNA fame, was the long-time director of Cold Spring Harbor Lab.

In the last few years, the lab has begun a thrust into “Quantitative Biology.” This area of research has much overlap with the 4th edition of Intermediate Physics for Medicine and Biology. I view this development as evidence that science is going in “our direction,” toward a larger role for physics and math in medicine and biology. The Cold Spring Harbor website describes the new Simons Center for Quantitative Biology.
Cold Spring Harbor Laboratory (CSHL) has recently opened the Simons Center for Quantitative Biology (SCQB). The areas of expertise in the SCQB include applied mathematics, computer science, theoretical physics, and engineering. Members of the SCQB will interact closely with other CSHL researchers and will apply their approaches to research areas including genomic analysis, population genetics, neurobiology, evolutionary biology, and signal and image processing.
We passed by CSHL during a trip that included stops at Sagamore Hill National Historic Site in Oyster Bay (President Theodore Roosevelt’s home), Planting Fields Arboretum, and the Montauk Point Lighthouse.

Friday, July 26, 2013

Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields

Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, by Jaakko Malmivuo and Robert Plonsey, superimposed on Intermediate Physics for Medicine and Biology.
Bioelectromagnetism,
by Malmivuo and Plonsey.
A good textbook about bioelectricity and biomagnetism is Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields by Jaakko Malmivuo and Robert Plonsey (Oxford University Press, 1995). One of the best features of the book is that it is available for free online at www.bem.fi/book/index.htm. The book covers many of the topics Russ Hobbie and I discuss in Chapters 6–9 of the 4th edition of Intermediate Physics for Medicine and Biology: the cable equation, the Hodgkin and Huxley model, patch-clamp recordings, the electrocardiogram, biomagnetism, the bidomain model, and magnetic stimulation. The book’s introduction outlines its eight parts:
Part I discusses the anatomical and physiological basis of bioelectromagnetism. From the anatomical perspective, for example, Part I considers bioelectric phenomena first on a cellular level (i.e., involving nerve and muscle cells) and then on an organ level (involving the nervous system (brain) and the heart).

Part II introduces the concepts of the volume source and volume conductor and the concept of modeling. It also introduces the concept of impressed current source and discusses general theoretical concepts of source-field models and the bidomain volume conductor. These discussions consider only electric concepts.

Part III explores theoretical methods and thus anatomical features are excluded from discussion. For practical (and historical) reasons, this discussion is first presented from an electric perspective in Chapter 11. Chapter 12 then relates most of these theoretical methods to magnetism and especially considers the difference between concepts in electricity and magnetism.

The rest of the book (i.e., Parts IV–IX) explores clinical applications. For this reason, bioelectromagnetism is first classified on an anatomical basis into bioelectric and bio(electro)magnetic constituents to point out the parallelism between them. Part IV describes electric and magnetic measurements of bioelectric sources of the nervous system, and Part V those of the heart.

In Part VI, Chapters 21 and 22 discuss electric and magnetic stimulation of neural and Part VII, Chapters 23 and 24, that of cardiac tissue. These subfields are also referred to as electrobiology and magnetobiology. Part VIII focuses on Subdivision III of bioelectromagnetism—that is, the measurement of the intrinsic electric properties of biological tissue. Chapters 25 and 26 examine the measurement and imaging of tissue impedance, and Chapter 27 the measurement of the electrodermal response.

In Part IX, Chapter 28 introduces the reader to a bioelectric signal that is not generated by excitable tissue: the electro-oculogram (EOG). The electroretinogram (ERG) also is discussed in this connection for anatomical reasons, although the signal is due to an excitable tissue, namely the retina.
Jaakko Malmivuo is a Professor in the School of Electrical Engineering at Aalto University in Helsinki, Finland. He is also the director of the Ragnar Granit Institute.

Robert Plonsey is the Pfizer-Pratt University Professor Emeritus of Biomedical Engineering at Duke University. This year, he received the IEEE Biomedical Engineering Award “for developing quantitative methods to characterize the electromagnetic fields in excitable tissue, leading to a better understanding of the electrophysiology of nerve, muscle, and brain.” Plonsey is cited on 16 pages of Intermediate Physics for Medicine and Biology, the most of any scientist or author.

Friday, July 19, 2013

Reinventing Physics For Life-Science Majors

The July issue of Physics Today contained an article by Dawn Meredith and Joe Redish titled “Reinventing Physics for Life-Science Majors.” Much in the article is relevant to the 4th edition of Intermediate Physics for Medicine and Biology. The main difference between the goals of their article and IPMB is that they discuss the introductory physics course, whereas Russ Hobbie and I wrote an intermediate-level text. Nevertheless, many of the aims remain the same. Meredith and Redish begin
Physics departments have long been providing service courses for premedical students and biology majors. But in the past few decades, the life sciences have grown explosively as new techniques, new instruments, and a growing understanding of biological mechanisms have enabled biologists to better understand the physiochemical processes of life at all scales, from the molecular to the ecological. Quantitative measurements and modeling are emerging as key biological tools. As a result, biologists are demanding more effective and relevant undergraduate service classes in math, chemistry, and physics to help prepare students for the new, more quantitative life sciences.
Their section on what skills students should learn reads like a list of goals for IPMB:
  • Drawing inferences from equations…. 
  • Building simple quantitative models…. 
  • Connecting equations to physical meaning…. 
  • Integrating multiple representations…. 
  • Understanding the implications of scaling and functional dependence…. 
  • Estimating…. 
Meredith and Redish realize the importance of developing appropriate homework problems for life-science students, which is something Russ and I have spent an enormous amount of time on when revising IPMB. “We have spent a good deal of time in conversation with our biology colleagues and have created problems of relevance to them that are also doable by students in an introductory physics course.” They then offer a delightful problem about calculating how big a worm can grow (see their Box 4). They also include a photo of a “spherical cow”; you need to see it to understand. And they propose the Gauss gun (see a video here) as a model for exothermic reactions. They conclude
Teaching physics to biology students requires far more than watering down a course for engineers and adding in a few superficial biological applications. What is needed is for physicists to work closely with biologists to learn not only what physics topics and habits of mind are useful to biologists but also how the biologist’s work is fundamentally different from ours and how to bridge that gap. The problem is one of pedagogy, not just biology or physics, and solving it is essential to designing an IPLS [Introductory Physics for the Life Sciences] course that satisfies instructors and students in both disciplines.

Friday, July 12, 2013

The Bohr Model

One hundred years ago this month, Niels Bohr published his model of the atom (“On the Constitution of Atoms and Molecules,” Philosophical Magazine, Volume 26, Pages 1–25, 1913). In the May 2013 issue of Physics Today, Helge Kragh writes
Published in a series of three papers in the summer and fall of 1913, Niels Bohr’s seminal atomic theory revolutionized physicists’ conception of matter; to this day it is presented in high school and undergraduate-level textbooks.
The Making of the Atomic Bomb, by Richard Rhodes, superimposed on Intermediate Physics for Medicine and Biology.
The Making of the Atomic Bomb,
by Richard Rhodes.
I find Bohr’s model fascinating for several reasons: 1) it was the first application of quantum ideas to atom structure, 2) it predicts the size of the atom, 3) it implies discrete atomic energy levels, 4) it explains the hydrogen spectrum in terms of transitions between energy levels, and 5) it provides an expression for the Rydberg constant in terms of fundamental parameters. In his book The Making of the Atomic Bomb, Richard Rhodes discusses the background leading to Bohr’s discovery.
Johann Balmer, a nineteenth-century Swiss mathematical physicist, identified in 1885 … a formula for calculating the wavelengths of the spectral lines of hydrogen… A Swedish spectroscopist, Johannes Rydberg, went Balmer one better and published in 1890 a general formula valid for a great many different line spectra. The Balmer formula then became a special case of the more general Rydberg equation, which was built around a number called the Rydberg constant [R]. That number, subsequently derived by experiment and one of the most accurately known of all universal constants, takes the precise modern value of 109,677 cm⁻¹.

Bohr would have known these formulae and numbers from undergraduate physics, especially since Christensen [Bohr’s doctorate advisor] was an admirer of Rydberg and had thoroughly studied his work. But spectroscopy was far from Bohr’s field and he presumably had forgotten them. He sought out his old friend and classmate, Hans Hansen, a physicist and student of spectroscopy just returned from Gottingen. Hansen reviewed the regularity of the line spectra with him. Bohr looked up the numbers. “As soon as I saw Balmer’s formula,” he said afterward, “the whole thing was immediately clear to me.”

What was immediately clear was the relationship between his orbiting electrons and the lines of spectral light… The lines of the Balmer series turn out to be exactly the energies of the photons that the hydrogen electron emits when it jumps down from orbit to orbit to its ground state. Then, sensationally, with the simple formula R = 2π²me⁴/h³ (where m is the mass of the electron, e the electron charge and h Planck’s constant—all fundamental numbers, not arbitrary numbers Bohr made up) Bohr produced Rydberg’s constant, calculating it within 7 percent of its experimentally measured value!...
In Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Bohr model, but interestingly we do not attribute the model to Bohr. However, at other locations in the book, we casually refer to Bohr’s model by name: see Problem 33 of Chapter 15 where we mention “Bohr orbits,” and Sections 15.9 and 16.1.1 where we refer to the “Bohr formula.” I guess we assumed that everyone knows what the Bohr model is (a pretty safe assumption for readers of IPMB). In Problem 4 of Chapter 14 (one of the new homework problems in the 4th edition), the reader is asked to derive the expression for the Rydberg constant in terms of fundamental parameters (you don’t get exactly the same formula as in the quote above; presumably Rhodes didn’t use SI units).
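As a quick numerical check of that homework problem (my own calculation, not from IPMB), the SI-unit expression R = mₑe⁴/(8ε₀²h³c) can be evaluated directly. It gives about 109,737 cm⁻¹; the slightly smaller measured value that Rhodes quotes, 109,677 cm⁻¹, is the Rydberg constant for hydrogen, reduced from the infinite-mass value by the finite mass of the proton.

```python
# Rydberg constant from fundamental constants (SI units).
m_e  = 9.1093837e-31   # electron mass (kg)
e    = 1.6021766e-19   # elementary charge (C)
h    = 6.6260702e-34   # Planck constant (J s)
c    = 2.9979246e8     # speed of light (m/s)
eps0 = 8.8541878e-12   # vacuum permittivity (F/m)

# R_infinity = m e^4 / (8 eps0^2 h^3 c), in 1/m
R_inf = m_e * e**4 / (8 * eps0**2 * h**3 * c)
print(R_inf / 100)  # in cm^-1; about 109737
```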

Bohr would become one of the principal figures in the development of modern quantum mechanics. He also made fundamental contributions to nuclear physics, and contributed to the Manhattan Project. He was awarded the Nobel Prize in Physics in 1922 “for his services in the investigation of the structure of atoms and of the radiation emanating from them.” He is Denmark’s most famous scientist, and for years he led the Institute of Theoretical Physics at the University of Copenhagen. A famous play, Copenhagen, is about his meeting with former collaborator Werner Heisenberg in then-Nazi-controlled Denmark in 1941. Here is a clip.

Bohr and Heisenberg discussing the uncertainty principle, in Copenhagen.

Physicists around the world are celebrating this 100-year anniversary; for instance here, here, here and here.

I end with Bohr’s own words: an excerpt from the introduction of his first 1913 paper (references removed).
In order to explain the results of experiments on scattering of α rays by matter Prof. Rutherford has given a theory of the structure of atoms. According to this theory, the atoms consist of a positively charged nucleus surrounded by a system of electrons kept together by attractive forces from the nucleus; the total negative charge of the electrons is equal to the positive charge of the nucleus. Further, the nucleus is assumed to be the seat of the essential part of the mass of the atom, and to have linear dimensions exceedingly small compared with the linear dimensions of the whole atom. The number of electrons in an atom is deduced to be approximately equal to half the atomic weight. Great interest is to be attributed to this atom-model; for, as Rutherford has shown, the assumption of the existence of nuclei, as those in question, seems to be necessary in order to account for the results of the experiments on large angle scattering of the α rays.

In an attempt to explain some of the properties of matter on the basis of this atom-model we meet however, with difficulties of a serious nature arising from the apparent instability of the system of electrons: difficulties purposely avoided in atom-models previously considered, for instance, in the one proposed by Sir J. J. Thomson. According to the theory of the latter the atom consists of a sphere of uniform positive electrification, inside which the electrons move in circular orbits. The principal difference between the atom-models proposed by Thomson and Rutherford consists in the circumstance that the forces acting on the electrons in the atom-model of Thomson allow of certain configurations and motions of the electrons for which the system is in a stable equilibrium; such configurations, however, apparently do not exist for the second atom-model. The nature of the difference in question will perhaps be most clearly seen by noticing that among the quantities characterizing the first atom a quantity appears—the radius of the positive sphere—of dimensions of a length and of the same order of magnitude as the linear extension of the atom, while such a length does not appear among the quantities characterizing the second atom, viz. the charges and masses of the electrons and the positive nucleus; nor can it be determined solely by help of the latter quantities.

The way of considering a problem of this kind has, however, undergone essential alterations in recent years owing to the development of the theory of the energy radiation, and the direct affirmation of the new assumptions introduced in this theory, found by experiments on very different phenomena such as specific heats, photoelectric effect, Röntgen [etc]. The result of the discussion of these questions seems to be a general acknowledgment of the inadequacy of the classical electrodynamics in describing the behaviour of systems of atomic size. Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i. e. Planck’s constant, or as it often is called the elementary quantum of action. By the introduction of this quantity the question of the stable configuration of the electrons in the atoms is essentially changed as this constant is of such dimensions and magnitude that it, together with the mass and charge of the particles, can determine a length of the order of magnitude required. This paper is an attempt to show that the application of the above ideas to Rutherford’s atom-model affords a basis for a theory of the constitution of atoms. It will further be shown that from this theory we are led to a theory of the constitution of molecules.

In the present first part of the paper the mechanism of the binding of electrons by a positive nucleus is discussed in relation to Planck’s theory. It will be shown that it is possible from the point of view taken to account in a simple way for the law of the line spectrum of hydrogen. Further, reasons are given for a principal hypothesis on which the considerations contained in the following parts are based.

I wish here to express my thanks to Prof. Rutherford for his kind and encouraging interest in this work.

Friday, July 5, 2013

The Spark of Life

The Spark of Life: Electricity in the Human Body, by Frances Ashcroft, superimposed on Intermediate Physics for Medicine and Biology.
The Spark of Life:
Electricity in the Human Body,
by Frances Ashcroft.
This week I finished the book The Spark of Life: Electricity in the Human Body, by Frances Ashcroft. In the introduction, Ashcroft explains the goal of her book.
In essence, this book is a detective story about a special kind of protein—the ion channel—that takes us from Ancient Greece to the forefront of scientific research today. It is very much a tale for today, as although the effects of static electricity and lightning on the body have been known for centuries, it is only in the last few decades that ion channels have been discovered, their functions unravelled and their beautiful, delicate, intricate structures seen by scientists for the first time. It is also a personal panegyric for my favourite proteins, which captured me as a young scientist and never let me go; they have been a consuming passion throughout my life. In Walt Whitman’s wonderful words, my aim is to “sing the body electric.”
The book examines much of the history behind topics that Russ Hobbie and I discuss in the 4th edition of Intermediate Physics for Medicine and Biology, such as the work of Hodgkin and Huxley on the squid nerve axon, the electrocardiogram, and modern medical devices such as the pacemaker and cochlear implants. The book is definitely aimed at a general audience; having worked in the field of bioelectricity, I sometimes wished for more depth in the discussion. For instance, anyone wanting to know the history of pacemakers and defibrillators would probably prefer something like Machines in Our Hearts, by Kirk Jeffrey. Nevertheless, it was useful to find the entire field of bioelectricity described in one relatively short and easily readable book. With its focus on ion channels, I consider this book a popularization of Bertil Hille’s text Ion Channels of Excitable Membranes. Her book was also useful to me as a review of various drugs and neurotransmitters, which I don’t know nearly as much about as I should.

Here is a sample of Ashcroft’s writing, in which she tells about Rod MacKinnon’s determination of the structure of a potassium channel. Russ and I discuss MacKinnon’s work in Chapter 9 (Electricity and Magnetism at the Cellular Level) of IPMB.
A slight figure with an elfin face, MacKinnon is one of the most talented scientists I know. He was determined to solve the problem of how channels worked and he appreciated much earlier than others that the only way to do so was to look at the channel structure directly, atom by atom. This was not a project for the faint-hearted, for nobody had ever done it before, no one really knew how to do it and most people did not even believe it could be done in the near future. The technical challenges were almost insurmountable and at that time he was not even a crystallographer. But MacKinnon is not only a brilliant scientist; he is also fearless, highly focused and extraordinarily hard-working (he is famed for working around the clock, snatching just a few hours’ sleep between experiments). Undeterred by the difficulties, he simultaneously switched both his scientific field and his job, resigning his post at Harvard and moving to Rockefeller University because he felt the environment there was better. Some people in the field wondered if he was losing his mind. In retrospect, it was a wise decision. A mere two years later, MacKinnon received a standing ovation—an unprecedented event at a scientific meeting—when he revealed the first structure of a potassium channel. And ion channels went to Stockholm all over again.
Ashcroft is particularly good at telling the human interest stories behind the discoveries described in the book. There were several interesting tales about neurotoxins; not only the well-known tetrodotoxin, but also others such as saxitoxin, aconite, batrachotoxin, and grayanotoxin. The myotonic goats, who because of an ion channel disease fall over stiff whenever startled, are amazing. Von Humboldt’s description of natives using horses to capture electric eels is incredible. The debate in Parliament about August Waller’s demonstration of the electrocardiogram, using his dog Jimmie as the subject, was funny. If, like me, you enjoy such stories, read The Spark of Life.

I will end with Ashcroft’s description of how Hodgkin and Huxley developed their mathematical model of the action potential in the squid giant axon, a topic covered in detail in Chapter 6 of IPMB.
Having measured the amplitude and time course of the sodium and potassium currents, Hodgkin and Huxley needed to show that they were sufficient to generate the nerve impulse. They decided to do so by theoretically calculating the expected time course of the action potential, surmising that if it were possible to mathematically simulate the nerve impulse it was a fair bet that only the currents they had recorded were involved. Huxley had to solve the complex mathematical equations involved using a hand-cranked calculator because the Cambridge University computer was “off the air” for six months. Strange as it now seems, the university had only one computer at that time (indeed it was the first electronic one Cambridge had). It took Huxley about three weeks to compute an action potential: times have moved on—it takes my current computer just a few seconds to run the same simulation. What is perhaps equally remarkable is that we often still use the equations Hodgkin and Huxley formulated to describe the nerve impulse.

Three years after finishing their experiments, in 1952, Hodgkin and Huxley published their studies in a landmark series of five papers that transformed forever our ideas about how nerves work. The long time between completing their experiments and publication seems extraordinary to present-day scientists, who would be terrified of being scooped by their rivals. Not so in the 1950s—Huxley told me, “It never even entered my head.” In 1963, Hodgkin and Huxley were awarded the Nobel Prize. Deservedly so, for they got such beautiful results and analysed them so precisely that they revolutionized the field and provided the foundations for modern neuroscience.
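For readers who want to repeat Huxley’s three-week calculation in a few seconds, here is a minimal sketch of the Hodgkin-Huxley model in Python, using the standard 1952 squid-axon parameters (with V measured in mV as the depolarization from rest, the convention Russ and I follow in Chapter 6 of IPMB). The stimulus amplitude, time step, and simple Euler integration are my own choices, not Hodgkin and Huxley’s.

```python
import math

# Hodgkin-Huxley squid axon model (standard 1952 parameters).
# V is the depolarization from rest in mV; t is in ms.
C_m = 1.0                             # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximum conductances, mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6   # reversal potentials, mV

# Rate constants (1/ms) for the gating variables n, m, and h.
def a_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * math.exp(-V / 80)
def a_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * math.exp(-V / 18)
def a_h(V): return 0.07 * math.exp(-V / 20)
def b_h(V): return 1.0 / (math.exp((30 - V) / 10) + 1)

# Start at rest, with the gates at their steady-state values.
V = 0.0
n = a_n(V) / (a_n(V) + b_n(V))
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))

dt, t_end = 0.01, 20.0                # ms
V_peak, t = V, 0.0
while t < t_end:
    I_stim = 20.0 if t < 1.0 else 0.0  # uA/cm^2, brief suprathreshold pulse
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_stim - I_ion) / C_m   # forward Euler step
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    V_peak = max(V_peak, V)
    t += dt

print(V_peak)  # peak of the action potential, roughly 100 mV above rest
```

Storing V at each step and plotting it against t reproduces the familiar action potential waveform.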
For more about The Spark of Life, see here, here, here, and here. Listen to and watch Ashcroft being interviewed by Denis Noble here, and giving the Croonian Lecture here.

Frances Ashcroft being interviewed by Denis Noble.

Frances Ashcroft giving the Croonian Lecture.

Friday, June 28, 2013

Lotka-Volterra equations

Russ Hobbie and I don’t study population dynamics much in the 4th edition of Intermediate Physics for Medicine and Biology. To me, it’s more of a mathematical biology topic rather than an application of physics to biology. However, we do discuss one well-known model for population dynamics, the Lotka-Volterra equations, in a Homework Problem in Chapter 2.
Problem 34 Consider a classic predator-prey problem. Let the number of rabbits be R and the number of foxes be F. The rabbits eat grass, which is plentiful. The foxes eat only rabbits. The number of rabbits and foxes can be modeled by the Lotka-Volterra equations
dR/dt = a R – b R F
dF/dt = - c F + d R F .
(a) Describe the physical meaning of each term on the right-hand side of each equation. What does each of the constants a, b, c, and d denote?
(b) Solve for the steady-state values of R and F.

These differential equations are difficult to solve because they are nonlinear (see Chapter 10). Typically, R and F oscillate about the steady-state solutions found in part (b). For more information, see Murray (2001).
There are two steady-state solutions. One is the trivial R = F = 0. The most interesting aspect of this solution is that it is not stable. If R and F are both small, the nonlinear terms in the Lotka-Volterra equations are negligible, and the number of foxes falls exponentially (there is no prey to eat) while the number of rabbits rises exponentially (there is no predator to eat them).

The other steady-state solution is (spoiler alert!) R = c/d and F = a/b. We claim in the problem that these equations are difficult to solve, and that is true in general, at least when searching for analytical solutions. However, if we focus on small deviations from this steady-state, we can solve the equations. Let  

R = c/d + r 
F = a/b + f ,

where r and f are small (much less than the steady state solutions). Plug these into the original differential equations, and ignore any terms containing r times f (these “doubly small” terms are negligible). The new equations for r and f are

dr/dt = - b (c/d) f 
df/dt =   d (a/b) r .

Now let’s use my favorite technique for solving differential equations: guess and check. I will guess

r = A sin(ωt)
f = B cos(ωt) .

If we plug these expressions into the differential equations, we get a solution only if ω² = ac. In that case, B = −(d/b) √(a/c) A. You can’t get A in this way; it depends on the initial conditions.

A plot of the solution shows two oscillating populations, with the foxes lagging 90 degrees behind the rabbits. In words, suppose you start with foxes at their equilibrium value, but a surplus of rabbits above their equilibrium. In this case, there are lots of rabbits for the foxes to eat, so the foxes gorge themselves and their population grows. However, as the number of foxes rises, the number of rabbits starts to fall (they are being ravaged by all those foxes). After a while, the number of rabbits declines back to its equilibrium value, but by then the number of foxes has surged above its steady-state value. Foxes continue to devour rabbits, reducing the rabbit population below equilibrium. Now there are too many foxes competing for too few rabbits, so the fox population starts to shrink as some inevitably go hungry. During this difficult time, both populations are plummeting as a large but decreasing number of ravenous foxes hunt the rare and frightened rabbits. When the foxes finally fall back to their equilibrium value there is a shortage of rabbits, so the foxes continue to starve and their number keeps falling. With fewer foxes, the rabbits breed like…um…rabbits and begin to make a comeback. Once they climb to their equilibrium value, there are still relatively few foxes, so the rabbits prosper all the more. With the rabbit population surging, there is plenty of food for the foxes, and the fox population begins to increase. During these happy days, both populations thrive. Eventually, the foxes return to their equilibrium value, but by this time the rabbits are plentiful. But this is just where we started, so the process repeats, over and over again. I needed a lot of words to explain those foxes and rabbits. I think you can begin to see the virtue of a succinct mathematical analysis, rather than a verbose nonmathematical description.

For larger oscillations, the nonlinear nature of the model becomes important. The populations still oscillate, but not sinusoidally. For some parameters, one population may rise slowly and then suddenly drop precipitously, only to gradually rise again. You can see some of those results here and here.

The Lotka-Volterra model is rather elementary. For instance, there is no damping; the oscillations never decay away but instead continue forever. Moreover, the oscillations do not approach some fixed amplitude (limit cycle behavior). Instead, the amplitude depends entirely on the initial conditions. Many more realistic models have a threshold, above which oscillations occur but below which the system returns to its steady state.
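The dynamics described above are easy to explore numerically. Here is a minimal pure-Python sketch (not from IPMB; the parameter values a = 1, b = 0.1, c = 1, d = 0.05 are hypothetical, chosen only for illustration, and give equilibria R* = c/d = 20 rabbits and F* = a/b = 10 foxes). The initial condition puts the foxes at equilibrium with a surplus of rabbits, as in the verbal story above.

```python
# Lotka-Volterra model: dR/dt = aR - bRF, dF/dt = -cF + dRF,
# integrated with a fixed-step fourth-order Runge-Kutta method.
# All parameter values are illustrative, not taken from any text.

def lotka_volterra(a=1.0, b=0.1, c=1.0, d=0.05, R0=25.0, F0=10.0,
                   dt=0.001, steps=20000):
    """Return lists of times, rabbit counts, and fox counts."""
    def deriv(R, F):
        return a * R - b * R * F, -c * F + d * R * F

    R, F = R0, F0
    ts, Rs, Fs = [0.0], [R], [F]
    for n in range(steps):
        k1R, k1F = deriv(R, F)
        k2R, k2F = deriv(R + 0.5 * dt * k1R, F + 0.5 * dt * k1F)
        k3R, k3F = deriv(R + 0.5 * dt * k2R, F + 0.5 * dt * k2F)
        k4R, k4F = deriv(R + dt * k3R, F + dt * k3F)
        R += dt * (k1R + 2 * k2R + 2 * k3R + k4R) / 6
        F += dt * (k1F + 2 * k2F + 2 * k3F + k4F) / 6
        ts.append((n + 1) * dt)
        Rs.append(R)
        Fs.append(F)
    return ts, Rs, Fs

ts, Rs, Fs = lotka_volterra()
# Both populations oscillate about the equilibrium (R* = 20, F* = 10)
# without decaying; the amplitude is set by the initial conditions.
print(min(Rs), max(Rs))
print(min(Fs), max(Fs))
```

With ω = √(ac) = 1, the small-oscillation period is 2π ≈ 6.3 time units, so this run covers about three cycles; plotting Rs and Fs against ts shows the quarter-period lag described above.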

Mathematical Biology, by James Murray, superimposed on Intermediate Physics for Medicine and Biology.
Mathematical Biology,
by James Murray.
Population dynamics is a large field, of which we only scratch the surface. One place to learn more is James Murray’s book that we cite at the end of the homework problem:

Murray, J. D. (2001) Mathematical Biology. New York, Springer-Verlag.
The most recent (3rd) edition of Murray’s book is actually in two volumes:
Murray, J. D. (2002) Mathematical Biology: I. An Introduction. New York, Springer-Verlag. 

Murray, J. D. (2002) Mathematical Biology: II. Spatial Models and Biomedical Applications. New York, Springer-Verlag.
Listen to James Murray talk about his research:
https://www.youtube.com/watch?v=6Yj5Nyb_VyU

Alfred Lotka (1880–1949) was an American scientist. In 1925 he published a book, Elements of Physical Biology, that is in some ways a precursor to Intermediate Physics for Medicine and Biology, or perhaps an early version of Murray’s Mathematical Biology. You can download a copy of the book here.

Friday, June 21, 2013

Life’s Ratchet

Life's Ratchet: How Molecular Machines Extract Order from Chaos, by Peter Hoffmann, superimposed on Intermediate Physics for Medicine and Biology.
Life’s Ratchet: How Molecular Machines
Extract Order from Chaos,
by Peter Hoffmann.
This week I finished reading Life’s Ratchet: How Molecular Machines Extract Order from Chaos, by Peter Hoffmann. This book is mostly about molecular biophysics, which Russ Hobbie and I purposely avoid in the 4th edition of Intermediate Physics for Medicine and Biology. But the workings of tiny molecular motors are closely related to thermal motion (Hoffmann calls it the “molecular storm”) and the second law of thermodynamics, topics that Russ and I do address. One fascinating topic I want to focus on is a discussion of Feynman’s ratchet.

Let us begin with Richard Feynman’s discussion in Chapter 46 of Volume 1 of The Feynman Lectures on Physics. I recall reading The Feynman Lectures the summer between graduating from the University of Kansas and starting graduate school at Vanderbilt University. All physics students should find time to read these great lectures. Feynman writes
Let us try to invent a device which will violate the Second Law of Thermodynamics, that is, a gadget which will generate work from a heat reservoir with everything at the same temperature. Let us say we have a box of gas at a certain temperature and inside there is an axle with vanes in it… Because of the bombardments of gas molecules on the vane, the vane oscillates and jiggles. All we have to do is to hook onto the other end of the axle a wheel which can turn only one way—the ratchet and pawl. Then when the shaft tries to jiggle one way, it will not turn, and when it jiggles the other, it will turn… If we just look at it, we see, prima facie, that it seems quite possible. So we must look more closely. Indeed, if we look at the ratchet and pawl, we see a number of complications.

First, our idealized ratchet is as simple as possible, but even so, there is a pawl, and there must be a spring in the pawl. The pawl must return after coming off a tooth, so the spring is necessary…
Feynman goes on to explore this device in detail. He concludes that, as we would expect, the device does not violate the second law. He explains
It is necessary to work against the spring in order to lift the pawl to the top of a tooth. Let us call this energy ε… The chance that the system can accumulate enough energy ε to get the pawl over the top of the tooth is e^(−ε/kT) [T is the absolute temperature, and k is Boltzmann’s constant]. But the probability that the pawl will accidentally be up is also e^(−ε/kT). So the number of times that the pawl is up and the wheel can turn backwards freely is equal to the number of times that we have enough energy to turn it forward when the pawl is down. We thus get a “balance,” and the wheel will not go around.
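Feynman’s balance argument can be put in numbers. The sketch below is a hypothetical illustration (the value ε = 5 kT is arbitrary): at a single temperature the Boltzmann factors for a forward click and a backward slip are identical, while putting the vanes in a hotter reservoir than the pawl tips the balance and lets the wheel turn, at the price of a heat flow.

```python
import math

# Feynman's balance argument in numbers. Energy is measured in units
# of kT at the ratchet's temperature; eps = 5.0 is arbitrary.
eps = 5.0  # energy needed to lift the pawl over one tooth, in kT

p_forward = math.exp(-eps)   # vanes accumulate energy eps: forward click
p_backward = math.exp(-eps)  # pawl thermally lifted: backward slip
print(p_forward == p_backward)  # True: the wheel makes no net headway

# If the vanes sit in a hotter reservoir, T_vanes = 1.5 * T_ratchet
# (a hypothetical ratio), the forward factor grows and the wheel turns:
ratio = 1.5
p_forward_hot = math.exp(-eps / ratio)
print(p_forward_hot > p_backward)  # True: net forward rotation
```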
Hoffmann explains that a lot of molecular machines important in biology operate analogously to Feynman’s ratchet and pawl. He writes
What kind of molecular device could channel random molecular motion into oriented activity? Such a device would need to allow certain directions of motion, while rejecting others. A ratchet, that is, a wheel with asymmetric teeth blocked by a spring-loaded pawl, could do the job... Maybe nature has made molecular-size ratchets that allow favorable pushes from the molecular storm in one direction, while rejecting unfavorable pushes from the opposite direction…

For the ratchet-and-pawl machine to extract energy from the molecular storm, it has to be easy to push the pawl over one of the teeth of the ratchet. The pawl spring must be very weak to allow the ratchet to move at all. Otherwise, a few water molecules hitting the ratchet would not be strong enough to force the pawl over one of the teeth. Just like the ratchet wheel, the pawl is continuously bombarded by water molecules. Its weak spring allows the pawl to bounce up and down randomly, opening from time to time, allowing the ratchet to slip backward… Worse, because the spring is most relaxed when the pawl is at the lowest point between two teeth [the compressed spring pushes the pawl down against the ratchet], the pawl spends most of its time touching the steep edge of one of the teeth. When an unfavorable hit pushes the ratchet backward just as the pawl has opened, it does not need to go far to end up on the incline of the next tooth—rotating the ratchet backward!... The ratchet will move, bobbing back and forth, but it will not make any net headway.
How then do molecular machines work? They require an input of energy, which eventually gets dissipated into heat. Hoffmann concludes
We could, in fact, make Feynman’s ratchet work, if from time to time, we injected energy to loosen and then retighten the pawl’s spring. On loosening the spring, the wheel would rotate freely, with a slightly higher probability of rotating one way rather than the other. Tightening the pawl’s spring would push the wheel further in the direction we want. On average, the wheel would move forward and do work. In fact, it can be shown that any molecular machine that operates on an asymmetric energy landscape and incorporates an irreversible, energy-degrading step can extract useful work from the molecular storm.
This may all seem abstract, but Hoffmann brings it down to specifics. The molecular machine could be myosin moving along actin (as in muscles) or kinesin moving along a microtubule (as in separating chromosomes during mitosis). The energy source for the irreversible step is ATP. This step allows the motor to extract energy from the “molecular storm” of thermal energy that is constantly bombarding it.
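Hoffmann’s loosen-and-retighten scheme is what physicists call a “flashing ratchet,” and it can be caricatured in a few lines of code. The toy Monte Carlo model below is my own illustration, not anything from the book: a particle diffuses freely while an asymmetric sawtooth potential is switched off, then relaxes to the nearest potential minimum when the potential is switched on. Because each peak sits close to the minimum on its forward side, unbiased diffusion is rectified into forward drift.

```python
import math
import random

# Toy "flashing ratchet" Monte Carlo. All parameters are illustrative.
random.seed(1)
L = 1.0          # spatial period of the sawtooth potential
a = 0.2 * L      # the peak sits a short distance a ahead of each minimum
sigma = 0.3 * L  # diffusion spread per cycle while the potential is off

def snap_to_minimum(x):
    """With the potential on, slide downhill to the nearest minimum.
    Minima sit at integer multiples of L; the basin of minimum n*L
    runs from the previous peak (n-1)*L + a to the next peak n*L + a."""
    return (math.floor((x - a) / L) + 1) * L

x = 0.0
for cycle in range(2000):
    x += random.gauss(0.0, sigma)  # potential off: free, unbiased diffusion
    x = snap_to_minimum(x)         # potential on: relax to a minimum

print(x > 0)  # True: net forward drift, paid for by the switching energy
```

Starting from a minimum, a diffusing particle only needs to wander a distance a forward to be captured by the next basin, but a distance L − a backward to be captured by the previous one, so forward steps dominate; set a = 0.5 L (a symmetric potential) and the drift disappears.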

Friday, June 14, 2013

I WENT TO PARIS AND I MISSED THE VERY BEST THING!

Three summers ago, my wife and I visited Paris for our 25th wedding anniversary. We carefully planned our trip so we could see all the most famous sites—the Eiffel Tower, the Arc de Triomphe, the Notre Dame Cathedral, the Palace of Versailles, the Pantheon, the Louvre, and the Musee d’Orsay—but somehow WE MISSED THE MOST IMPORTANT THING! Apparently there is a giant painting in the Musee d’Art Moderne by Raoul Dufy, depicting many scientists who have contributed to the study of electricity. What more could a physicist like me ask for? I first learned about this painting in a book I am now reading, The Spark of Life: Electricity and the Human Body, by Frances Ashcroft. I’ll have more on that book in a future post. Here is what she writes about the painting:
An unusual tribute to the scientists and philosophers who contributed to the discovery of electricity hangs in Musee d’Art Moderne in Paris. A giant canvas known as “La Fee Electricite,” which measures 10 metres high and 60 metres long, it was commissioned by a Paris electricity company to decorate its Hall of Light at the 1937 world exhibition in Paris. It is the work of French Fauvist painter Raoul Dufy, better known for his wonderful colourful depictions of boats, and it took him and two assistants four months to complete. The Electricity Fairy sails through the sky at the far left of the painting above some of the world’s most famous landmarks, the Eiffel Tower, Big Ben and St Peter’s Basilica in Rome among them. Behind her follow some 110 people connected with the development of electricity, from Ancient Greece to modern times. As time and the canvas progress, the landscape changes from scenes of rural idyll to steam trains, furnaces, the trappings of the industrial revolution and finally the giant pylons that support the power lines carrying electricity to the planet.
Short of going to see the painting in Paris, the next best thing is to view it in sections at the Electricity Online website of the University of Leeds. I won’t list all the scientists depicted in it, but let me note those Russ Hobbie and I mention in the 4th edition of Intermediate Physics for Medicine and Biology (roughly in chronological order): Newton, Bernoulli, Laplace, Poisson, Gauss, Ohm, Oersted, Clausius, Clapeyron, Fourier, Savart, Fresnel, Biot, Ampere, Faraday, Gibbs, Helmholtz, Maxwell, Poincare, Moseley, Lorentz, and Pierre Curie. A few were only present in IPMB because they have a unit named after them: Pascal, Watt, Joule, Kelvin, Roentgen, Becquerel, Hertz, and Marie Curie. Galvani is shown with a frog, Faraday with a coil and galvanometer, Pierre Curie (mentioned in IPMB through the Curie temperature) is standing next to his wife Marie Curie (only mentioned in IPMB in association with her unit, and the only female scientist in the painting), and Edison is next to his light bulbs.

I’m still not sure how I never knew about this magnificent painting. I guess we need to take another trip to Paris. Honey, start packing!

Friday, June 7, 2013

Resource Letter BSSMF-1: Biological Sensing of Static Magnetic Fields

In the October 2012 issue of the American Journal of Physics, physicist Leonard Finegold published “Resource Letter BSSMF-1: Biological Sensing of Static Magnetic Fields” (Volume 80, Pages 851–861). Finegold recommends that a good starting point for mastering the topic of magnetoreception is Kenneth Lohmann’s News and Views article in Nature.
35. “Magnetic-field perception: News and Views Q and A,” K. J. Lohmann, Nature, 464, 1140–1142 (2010). (E) 
I looked it up, and it does indeed provide a well-written summary of the field in a reader-friendly question-and-answer format.

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss magnetotactic bacteria. We write that
Bacteria in the northern hemisphere have been shown to seek the north pole. Because of the tilt of the earth’s field, they burrow deeper into the environment in which they live. Similar bacteria in the southern hemisphere burrow down by seeking the south pole.
Finegold also reviews this topic. The excerpt reproduced below serves both as an update to IPMB and as a sample of the style of an American Journal of Physics resource letter.
Certain bacteria move in response to the earth’s magnetic field (Ref. 35), swimming along the field lines, and have been excellently reviewed (Ref. 36). The “sensing” element is magnetite (an iron oxide) or greigite (an iron sulfide) (Ref. 37). The bacteria would swim toward the boundary between oxygenated and oxygen-poor regions. Until recently, there was the comforting idea that there are two groups of bacteria with opposite sensors, depending on which of the earth’s hemispheres they reside. Alas, both groups have now been found in the same place; it appears that their polarity is correlated with the local redox potential (Ref. 38 and 39). In addition, some bacteria use only the axial property of the field (i.e., they swim both with or against the field direction), whereas others use the vector property (i.e., they swim either with or against the field direction). Details of the behavior have been elucidated by applying magnetic fields to bacteria in a spectrophotometer cuvette, with genetic analysis (Ref. 39).

35. “South-seeking magnetotactic bacteria in the Southern Hemisphere,” R. P. Blakemore, R. B. Frankel, and Ad. J. Kalmijn, Nature 286, 384–385 (1980). (A)

36. “Bacteria that synthesize nano-sized compasses to navigate using Earth’s geomagnetic field,” L. Chen, D. A. Bazylinski, and B. H. Lower, Nature Education Knowledge 1(10), 14 (2010). (I)

37. “The identification and biogeochemical interpretation of fossil magnetotactic bacteria,” R. E. Kopp and J. L. Kirschvink, Earth-Sci. Rev. 86, 42–61 (2008). (A)

38. “South-seeking magnetotactic bacteria in the northern hemisphere,” S. L. Simmons, D. A. Bazylinski, and K. J. Edwards, Science 311, 371–374 (2006). (A)

39. “Characterization of bacterial magnetotactic behaviors by using a magnetospectrophotometry assay,” C. T. Lefevre, T. Song, J. P. Yonnet, and L. F. Wu, Appl. Environ. Microbiol. 75, 3835–3841 (2009). (A)
Magnetoreception is a field that often stirs debate. Russ and I outline one such debate in IPMB:
Kirschvink (1992) proposed a model whereby a magnetosome in a field of 10−4–10−3 T could rotate to open a membrane channel. As an example of the debate that continues in this area, Adair (1991, 1992, 1993, 1994) argued that a magnetic interaction cannot overcome thermal noise in a 60-Hz field of 5 × 10−6 T. However, Polk (1994) argues that more biologically realistic parameters, including a large number of magnetosomes in a cell, could allow an interaction at 2 × 10−6 T.
The key citations in the debate are
Adair, R. (1991) “Constraints on biological effects of weak extremely-low-frequency electromagnetic fields,” Phys. Rev. A, Volume 43, Pages 1039–1048.
Kirschvink, J. L. (1992) “Comment on ‘Constraints on biological effects of weak extremely-low-frequency electromagnetic fields’,” Phys. Rev. A, Volume 46, Pages 2178–2184.
Adair, R. (1992) “Reply to ‘Comment on “Constraints on biological effects of weak extremely-low-frequency electromagnetic fields”’,” Phys. Rev. A, Volume 46, Pages 2185–2187.
For those of you who like this sort of thing, here is another example from Finegold’s resource letter. The debate is about, of all things, whether cows align themselves in magnetic fields!
A surprising finding is that cattle and deer seem to align themselves in an approximate north-south (geomagnetic) direction. The evidence is from world-wide satellite photographs from Google Earth, supported by ground observations of more than 10,000 animals, and is hard to rebut. The satellite photographs do not have enough resolution to show the direction (north or south) in which the animals face.
72. “Magnetic alignment in grazing and resting cattle and deer,” S. Begall, J. Cerveny, J. Neef, O. Vojtech, and H. Burda, Proc. Natl. Acad. Sci. U.S.A. 105, 13453–13455 (2008). (I)
As Usherwood asks, why on Earth should cattle and deer prefer this alignment? Possible interpretations are that the satellite photographs are made close to noon, so there may be physiological reasons (heating, cooling) for animals to align or to view predators better.
73. “Cattle and deer align north (-north-east),” J. Usherwood, J. Exp. Biol. 212, iv (2009). (E)
Partly to rule out sun compass effects, Burda et al. investigated ruminant alignment under high-voltage (and hence high-current, low-frequency) power lines and found that the geomagnetic north-south alignment was disturbed; the disturbance was correlated with the alternating fields. Such disturbance might instead be because the animals felt protected by (or preferring) the overhead lines or pylons or because of the audible (to humans at least) corona discharge. A good control for this would be to look at ruminants under power lines being repaired, carrying no current; this is difficult to do. The authors ingeniously compared the nonalignment under N-S and E-W trending power lines and found that the nonalignment followed the resultant total magnetic field. Their conclusions have been challenged (Ref. 75), and they have a lively rebuttal (Ref. 76), to which the challengers have replied (Ref. 77). Hence, the initially persuasive evidence, that cattle and deer detect magnetic fields, may need re-examination.

74. “Extremely low-frequency electromagnetic fields disrupt magnetic alignment of ruminants,” H. Burda, S. Begall, J. Cerven, J. Neef, and P. Nemec, Proc. Natl. Acad. Sci. U.S.A. 106, 5708–5713 (2009). (I)
75. “No alignment of cattle along geomagnetic field lines found,” J. Hert, L. Jelinek, L. Pekarek, and A. Pavlicek, J. Comp. Physiol., A 197, 677–682 (2011). (I)
76. “Further support for the alignment of cattle along magnetic field lines: Reply to Hert et al.,” S. Begall, H. Burda, J. Cerveny, O. Gerter, J. Neef-Weisse, and P. Nemec, J. Comp. Physiol. [A] 197, 1127–1133 (2011). (I)
77. “Authors’ Response,” J. Hert, L. Jelinek, L. Pekarek, and A. Pavlicek, J. Comp. Physiol. [A] 197(12), 1135– 1136 (2011). (I) 
Finegold also discusses magnet therapy, a topic I am extremely skeptical about, and that I have discussed before in this blog. He cites his own editorial with Flamm:
“Magnet therapy,” L. Finegold and B. L. Flamm, Br. Med. J. 332, 4 (2006) (E)
which concludes
Extraordinary claims demand extraordinary evidence. If there is any healing effect of magnets, it is apparently small since published research, both theoretical and experimental, is weighted heavily against any therapeutic benefit. Patients should be advised that magnet therapy has no proved benefits. If they insist on using a magnetic device they could be advised to buy the cheapest—this will at least alleviate the pain in their wallet.