Friday, December 31, 2010

Brownian Motion

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Brownian motion. We first address this topic in Chapter 3 when deriving the equipartition of energy: the average thermal kinetic energy of an object at temperature T is 3kBT/2, where kB is Boltzmann’s constant.
“This result is true for particles of any mass: atoms, molecules, pollen grains, and so forth. Heavier particles will have a smaller velocity but the same average kinetic energy. Even heavy particles are continually moving with this average kinetic energy. The random motion of pollen particles in water was first seen by a botanist, Robert Brown, in 1827. This Brownian motion is an important topic in the next chapter.”
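To get a feel for how the same average kinetic energy translates into very different speeds, here is a quick numerical sketch. The particle masses (a water molecule, and a micron-sized starch granule of assumed density 1500 kg/m3) are illustrative values of my own, not numbers from the textbook.

```python
import math

kB = 1.380649e-23   # Boltzmann's constant, J/K
T = 293.0           # room temperature, K

def thermal_speed(mass_kg):
    """rms speed from equipartition: (1/2) m <v^2> = (3/2) kB T."""
    return math.sqrt(3.0 * kB * T / mass_kg)

# A water molecule (18 u) versus a starch granule of assumed radius
# 1 micron and density 1500 kg/m^3 (illustrative values only).
m_water = 18.0 * 1.66054e-27
m_granule = (4.0 / 3.0) * math.pi * (1.0e-6) ** 3 * 1500.0

print(f"water molecule: {thermal_speed(m_water):.0f} m/s")
print(f"starch granule: {thermal_speed(m_granule):.1e} m/s")
```

The molecule moves at hundreds of meters per second, the granule at roughly a millimeter per second, yet both carry the same average kinetic energy of 3kBT/2.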
We next address this topic in Chapter 4, as we motivate the reader for a discussion of diffusion.
“This movement of microscopic-sized particles, resulting from bombardment by much smaller invisible atoms, was first observed by the English botanist Robert Brown in 1827 and is called Brownian motion. Solute particles are also subject to this random motion. If the concentration of particles is not uniform, there will be more particles wandering from a region of high concentration to one of low concentration than vice versa. This motion is called diffusion.”
As so often happens when you look deeply into a subject, the story is more complicated than can be described in an introductory (or even an intermediate) textbook. In the December 2010 issue of the American Journal of Physics, Philip Pearle and his colleagues published the fascinating article What Brown Saw and You Can Too (Volume 78, Pages 1278-1289).
“A discussion of Robert Brown’s original observations of particles ejected by pollen of the plant Clarkia pulchella undergoing what is now called Brownian motion is given. We consider the nature of those particles and how he misinterpreted the Airy disk of the smallest particles to be universal organic building blocks. Relevant qualitative and quantitative investigations with a modern microscope and with a 'homemade' single lens microscope similar to Brown’s are presented.”
One interesting conclusion of their study is that Brown did not actually see pollen grains move.
"We emphasize that Brown did not observe the pollen move. Instead, he observed the motion of much smaller objects that reside within the pollen.15 Nonetheless, statements that Brown saw the pollen move are common.16"
Fortunately, reference 16 does not cite Intermediate Physics for Medicine and Biology. But it raises the question: Did Russ and I get it wrong? Our discussion in Chapter 4 seems safe. The text in Chapter 3 depends on whether you interpret “pollen particles” as the entire “pollen grain” or as “particles arising from pollen”. Russ may have been aware of this distinction when he wrote the original text, but I confess I was not. I always thought Brown saw the entire pollen grain move.

Pearle et al. show electron microscope pictures of pollen grains, which are 50-100 microns in diameter. I summarize their analysis of what Brown actually saw move as a new homework problem for Chapter 4.

Problem 5 1/2. This problem looks at the original observations of Robert Brown that established Brownian motion.
(a) Combine Eqs. 4.23 and 4.71 to determine an expression for the average distance a particle of radius a will diffuse through a fluid of viscosity η in time t.
(b) Assume you observe a pollen grain with a radius of 50 microns in water at room temperature, and that your visual perception is particularly sensitive to motions occurring over a time of about one second. What is the average distance you observe the grain to move?
(c) Now assume your eye cannot see movements that occur over angles of less than 1 minute of arc, or 3 × 10-4 radians (in Chapter 14, we estimate 3 minutes of arc, but use 1 arc min to be conservative). Most eyes cannot focus on objects closer than 25 cm. Determine the smallest displacement you can observe with the naked eye.
(d) Robert Brown had a microscope that could magnify objects by a factor of about 370. What is the smallest displacement he could observe with his microscope? Is this larger or smaller than the displacement of a pollen grain in one second?
In fact, Brown did not observe the motion of entire pollen grains. He observed fat and starch particles about 2 microns in diameter that are released by pollen. For more on Brown’s original observations, see Pearle et al. (2010).
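For readers who want to check the punch line numerically before working the problem by hand, here is a rough sketch. It uses the Stokes-Einstein diffusion coefficient and the one-dimensional rms displacement sqrt(2Dt); the parameter values (room-temperature water, the viscosity, the radii) are my assumptions consistent with the problem statement, not a worked solution from the book.

```python
import math

# Assumed parameter values: water at room temperature, 1 s of watching.
kB = 1.380649e-23    # Boltzmann's constant, J/K
T = 293.0            # K
eta = 1.0e-3         # viscosity of water, Pa s

def rms_displacement(radius_m, t_s):
    """One-dimensional rms displacement sqrt(2 D t), with the
    Stokes-Einstein coefficient D = kB T / (6 pi eta a)."""
    D = kB * T / (6.0 * math.pi * eta * radius_m)
    return math.sqrt(2.0 * D * t_s)

# (b) a whole pollen grain, radius 50 microns, watched for 1 second
d_grain = rms_displacement(50e-6, 1.0)

# (c)-(d) smallest resolvable displacement: 1 arc min at 25 cm,
# improved 370-fold by Brown's single-lens microscope
d_min = 0.25 * 3e-4 / 370.0

# the ~2-micron fat and starch particles Brown actually watched
d_particle = rms_displacement(1e-6, 1.0)

print(f"pollen grain wanders   ~{d_grain * 1e6:.2f} microns in 1 s")
print(f"smallest visible step  ~{d_min * 1e6:.2f} microns")
print(f"small particle wanders ~{d_particle * 1e6:.2f} microns in 1 s")
```

The whole grain wanders less than the roughly 0.2 microns Brown could resolve, while the micron-radius particles move several times farther than that, consistent with Pearle et al.'s conclusion about what Brown saw.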
The authors also analyze the microscope that Brown used, and estimate the diffraction effects he had to contend with. Using an analysis similar to that presented in Section 13.7 (Medical Uses of Ultrasound) of Intermediate Physics for Medicine and Biology, they show that Brown probably could not resolve some of the smaller particles, but instead observed their diffraction pattern. As in Eq. 13.40 in our textbook, the diffraction pattern involves a Bessel function and implies that the apparent size of an object is larger than its real size. The effect is minor for large objects but dominates for small objects.
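Eq. 13.40 itself is not reproduced here, but the Airy pattern it leads to, with intensity proportional to [2 J1(x)/x]^2, can be explored with a short script. The integral representation of J1 and the bisection search below are my own sketch; the familiar result is that the first dark ring falls at x of about 3.83, which for a circular aperture of diameter d gives an apparent angular radius of 1.22 lambda/d.

```python
import math

def j1(x):
    """Bessel function J1 from its integral representation,
    J1(x) = (1/pi) * integral_0^pi cos(theta - x sin theta) dtheta,
    evaluated by the midpoint rule."""
    n = 2000
    h = math.pi / n
    s = sum(math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h))
            for k in range(n))
    return s * h / math.pi

def airy_intensity(x):
    """Normalized Airy diffraction pattern, [2 J1(x)/x]^2."""
    return 1.0 if x == 0 else (2.0 * j1(x) / x) ** 2

# Find the first dark ring (first zero of J1 past x = 0) by bisection;
# J1 is positive at x = 3.0 and negative at x = 4.5.
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if j1(lo) * j1(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_zero = 0.5 * (lo + hi)

# For a circular aperture of diameter d, x = (pi d / lambda) sin(theta),
# so this zero corresponds to the familiar sin(theta) = 1.22 lambda/d.
print(f"first zero of J1: x = {x_zero:.4f}")
```

A particle much smaller than this Airy disk appears to have the size of the disk itself, which is how Brown came to overestimate the size of his smallest particles.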

I find the history and analysis of Brown’s original studies fascinating. Pearle et al.’s paper reminds me that 1) the American Journal of Physics is still my favorite journal, and 2) physics has much to offer biology and medicine.

Friday, December 24, 2010

The littlest things can drive you nuts

I hope that our readers (and Russ Hobbie and I do value and appreciate all our dear readers) find the list of references at the end of each chapter in the 4th edition of Intermediate Physics for Medicine and Biology useful. We tried to include books and articles that you would enjoy, and that would help you understand the material in our textbook better. But, you may wonder, what do I see when I look at those lists of references? The first thing I see—the thing that jumps out of the page and screams at me—is that in each list, the first reference is not indented like the rest!!! As I recall, it is some issue in LaTeX that is difficult to fix. I think it is related to the policy of not indenting the first paragraph of a section (a practice that I don’t care for).

I suppose what really should worry me are the errors that creep into the book. But at least we can correct those in the erratum, available at the book website. For some reason, I can live with those errors (que sera, sera) but the indentation issue is killing me. You can find a lot of other useful information at the book website, including an interview with Russ Hobbie published in the December 2006 issue of the American Physical Society Division of Biological Physics newsletter, a movie of Russ Hobbie explaining how radiation interacts with tissue based on his Mac Dose computer program, an American Journal of Physics resource letter that Russ and I published last year, and other supplementary material.

Let me use this post to update you on a few issues mentioned previously in this blog. In an October post, I talked about tanning and skin cancer. A recent article in the online newspaper MinnPost.com suggests that the problem is not getting any better, especially in the Midwest, and that "people are still not recognizing that indoor tanning use is linked to skin cancer". An article in medicalphysicsweb.com reports that "supply shortages of molybdenum-99 could become commonplace over the next decade unless longer-term actions are taken." I discussed this issue several times before: here, here, and here. Felix Baumgartner's attempt to jump out of a balloon at the edge of space and break the sound barrier in free fall has been put on hold, apparently because of a lawsuit over who owns the rights to this idea. Finally, you can watch online a series of lectures about the physics of hearing and cochlear implants delivered at the University of Michigan.

I wish you all a peaceful and happy Christmas Eve. If you are lucky, you will wake up tomorrow morning to find that Santa has left the 4th edition of Intermediate Physics for Medicine and Biology in your stocking. For those unfortunate few who received something else from Santa, I suggest amazon.com.

Merry Christmas!

Friday, December 17, 2010

Subtracting Large Numbers

One of the most notorious difficulties in numerical computations is the loss of precision when subtracting two similar, large numbers to obtain a smaller one. Russ Hobbie and I illustrate this hazard in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology. We begin this chapter with a discussion of the method of least squares, and we derive the formulas (Eqs. 11.5a and 11.5b) for fitting data to a straight line, y=ax+b. We then add “In doing computations where the range of data is small compared to the mean, better numerical accuracy can be obtained from…” and then present alternative formulas (Eqs. 11.5c, 11.5d, and 11.5e). Homework Problem 7 in Chapter 11 (one of the many new problems in the 4th edition) illustrates the advantage of the second set of equations.
"Problem 7 Consider the data

x y
100 4004
101 4017
102 4039
103 4063

(a) Fit these data with a straight line y=ax+b using Eqs. 11.5a and 11.5b to find a.
(b) Use Eq. 11.5c to determine a. Your result should be the same as in part (a).
(c) Repeat parts (a) and (b) while rounding all the intermediate numbers to 4 significant figures. Do Eqs. 11.5a and 11.5b give the same result as Eq. 11.5c? If not, which is more accurate?"
(Spoiler alert: Don’t continue reading if you want to solve the problem yourself first, as you should.) If you solve this problem, you will find that Eqs. 11.5a and 11.5b do not work well at all for this problem. Their flaw is that they require you to subtract two large, nearly equal numbers to obtain a much smaller one.
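A minimal sketch of the comparison (my own notation, not a reproduction of Eqs. 11.5a-11.5e): the "naive" slope formula built from raw sums, versus the numerically safer form built from deviations about the means. Limited precision is mimicked, as one reading of part (c), by rounding each accumulated sum to four significant figures.

```python
import math

x = [100, 101, 102, 103]
y = [4004, 4017, 4039, 4063]
n = len(x)

def round_sig(v, sig=4):
    """Round v to sig significant figures."""
    if v == 0:
        return 0.0
    return round(v, sig - 1 - int(math.floor(math.log10(abs(v)))))

def slope_naive(r=lambda v: v):
    """Slope from raw sums: a = (n Sxy - Sx Sy) / (n Sxx - Sx^2).
    The hook r() rounds each accumulated sum before combining."""
    Sx, Sy = r(sum(x)), r(sum(y))
    Sxy = r(sum(xi * yi for xi, yi in zip(x, y)))
    Sxx = r(sum(xi * xi for xi in x))
    return (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)

def slope_centered(r=lambda v: v):
    """Slope from deviations about the means; no large cancellation."""
    xbar, ybar = r(sum(x) / n), r(sum(y) / n)
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

print(slope_naive())              # exact arithmetic: 19.9
print(slope_centered())           # exact arithmetic: 19.9
print(slope_naive(round_sig))     # catastrophic cancellation
print(slope_centered(round_sig))  # still 19.9
```

With exact arithmetic both routes give a = 19.9, but after rounding, the naive numerator and denominator each come from subtracting six-digit numbers that agree in their leading figures, and the slope goes wildly wrong, while the centered form is essentially unaffected.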

A good discussion of this issue can be found in Forman Acton’s book Numerical Methods that Work.
“The following problem often appears as a puzzle in Sunday Supplements. The difficulties are numerical rather than formulative and hence it is an especially appropriate challenge to the aspiring numerical analyst. We strongly urge that the reader solve it in his own way before turning to the ‘official’ solution.

A railroad rail 1 mile long is firmly fixed at both ends. During the night some prankster cuts the rail and welds in an additional foot, causing the rail to bow up in the arc of a circle. The classical question concerns the maximum height this rail now achieves over its former position. To put it more precisely: We are faced…with the chord of a circle AB that is exactly 1 mile long and the corresponding arc AB that is 1 mile plus 1 foot and our question concerns the distance d between the chord and the arc at their midpoints. [See Acton’s book for the accompanying figure]

The relationships available are the simple ones from trigonometry involving the subtended half angle, θ, and the Pythagorean relationship. The student at this point should attempt to solve the problem before turning to the solution given in Chapter 2. He should attempt to find the distance d to an accuracy of three significant figures. In his effort he will probably be faced with subtracting two large and nearly equal numbers, which will cause a horrendous loss of significant figures. He can live with this process by sheer brute force, but it will involve using eight-significant-figure trigonometric tables to preserve three figures in his answer. The point of the problem here is to find another method of calculating d, one that does not require such extreme measures. The three-figure answer can, indeed, be obtained rather easily using nothing more than pencil, paper, and a slide rule. The student should seek such a method.”
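Here is a numerical sketch of the rail problem, solving sin(theta)/theta = chord/arc directly and then via a small-angle series that sidesteps the cancellation. The series shortcut is my own; Acton's "official" solution in Chapter 2 may differ.

```python
import math

half_chord = 5280.0 / 2      # feet: half of the 1-mile chord
half_arc = 5281.0 / 2        # feet: half of the mile-plus-a-foot arc

# Geometry: half_chord = R sin(theta), half_arc = R theta, so theta
# solves sin(theta)/theta = half_chord/half_arc, and the bulge is
# d = R (1 - cos(theta)).

# Brute-force route: solve for theta by bisection.
target = half_chord / half_arc
lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if math.sin(mid) / mid > target:   # value too big, theta too small
        lo = mid
    else:
        hi = mid
theta = 0.5 * (lo + hi)
R = half_arc / theta
d_direct = R * (1.0 - math.cos(theta))

# Series route (no cancellation): sin(t)/t ~ 1 - t^2/6 gives
# theta ~ sqrt(6 (1 - chord/arc)), and 1 - cos(t) ~ t^2/2 gives
# d ~ half_arc * theta / 2.
theta_approx = math.sqrt(6.0 * (1.0 - target))
d_series = half_arc * theta_approx / 2.0

print(f"direct: d = {d_direct:.2f} ft")
print(f"series: d = {d_series:.2f} ft")
```

Both routes agree to three figures (the rail bows up by about 44.5 feet, far more than most people guess), but the series route needs only a slide rule's worth of precision, which is Acton's point.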
If you find numerical methods interesting (as I do), you will love Acton’s delightfully written book. Originally published in 1970, it is all the more charming for its now-quaint references to slide rules and trigonometric tables. Yet, the concepts are not out-of-date. Even with powerful computers, errors can arise from subtracting nearly equal numbers. I have run into the issue myself when using the finite difference method and relaxation to solve Laplace’s equation with a fine grid and only single precision arithmetic.

Unfortunately, Acton’s book is not cited in the 4th edition of Intermediate Physics for Medicine and Biology (we’ll have to fix that in later editions), although I have mentioned it before in this blog. Acton is an emeritus professor in the Department of Computer Science at Princeton University (a department with an illustrious history). Also interesting is his more recent book Real Computing Made Real: Preventing Errors in Scientific and Engineering Calculations.

Friday, December 10, 2010

Robert Millikan

One fundamental constant that appears repeatedly in the 4th edition of Intermediate Physics for Medicine and Biology is the charge of the electron (the elementary charge, e), equal to 1.6 × 10-19 C. The first appearance of e that I can find is in Section 3.8 on the Nernst Equation. It appears in another context in Section 8.9, The Detection of Weak Magnetic Fields, when discussing Superconducting Quantum Interference Device (SQUID) magnetometers and the quantum of flux, equal to Planck’s constant divided by two times e. It shows up repeatedly in Chapter 9 on Electricity and Magnetism at the Cellular Level, and then again in Chapter 14 when discussing the energy levels of the hydrogen atom. It appears in Chapter 15 in the Klein-Nishina formula and in the expression for the classical radius of the electron.

How was the charge of the electron first measured? Isaac Asimov tells the story in Understanding Physics: The Electron, Proton, and Neutron:
“The experiments that determined the size of the electric charge on the electron were conducted by the American physicist Robert Andrews Millikan (1868-1953) in 1911.

Millikan made use of two horizontal plates, separated by about 1.6 centimeters, in a closed vessel containing air at low pressure. The upper plate had a number of fine holes in it and was connected to a battery that could place a positive charge upon it. Millikan sprayed fine drops of nonvolatile oil into the closed vessel above the plates. Occasionally, one droplet would pass through one of the holes in the upper plate and would appear in the space between the plates. There it could be viewed through a magnifying lens because it was made to gleam like a star through its reflection of a powerful beam of light entering from one side.

Left to itself, the droplet of oil would fall slowly, under the influence of gravity. The rate of this fall in response to gravity, against the resistance of air (which is considerable for so small and light an object as an oil droplet), depends on the mass of the droplet. Making use of an equation first developed by the British physicist George Gabriel Stokes (1819-1903), Millikan could determine the mass of the oil droplets.

Millikan then exposed the container to the action of X rays. This produced ions in the atmosphere within (see page 110). Occasionally, one of these ions attached itself to the droplet. If it were a positive ion, the droplet, with a positive charge suddenly added, would be repelled by the positively-charged plate above, and would rush downward at a rate greater than could be accounted for by the action of gravity alone. If the ion were negative, the droplet would be attracted to the positively-charged plate and might even begin to rise in defiance of gravity.

The change in velocity of the droplet would depend on the intensity of the electric field (which Millikan knew) and the charge on the droplet, which he could now calculate.

Millikan found that the charge on the droplet varied according to the nature of the ion that was adsorbed and on the number of ions that were adsorbed. All the charges were, however, multiples of some minimum unit, and this minimum unit could reasonably be taken as the smallest possible charge on an ion and therefore, equal to the charge on the electron. Millikan's final determination of this minimum charge was quite close to the value now accepted, which is 4.80298 × 10-10 electrostatic units ("esu"), or 0.000000000480298 esu.”
We don’t use electrostatic units in Intermediate Physics for Medicine and Biology (although they appear briefly in homework problem 3 in Chapter 6), but this is equivalent to 1.6 × 10-19 Coulombs.
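The heart of Millikan's analysis, that many measured charges share a single common unit, can be mimicked in a few lines. The droplet charges below are fabricated, noisy multiples of e, invented purely for illustration; only the method (scan candidate quanta and keep the one that makes every charge nearly an integer multiple) is the point.

```python
# Hypothetical droplet charges, in units of 1e-19 C. These numbers
# are invented (noisy integer multiples of the elementary charge)
# to illustrate the method; they are not Millikan's data.
charges = [3.22, 4.79, 6.41, 8.03, 11.18, 16.04, 4.81, 9.62]

def mean_error(q):
    """Average distance of each charge from an integer multiple of q."""
    return sum(abs(c / q - round(c / q)) for c in charges) / len(charges)

# Scan candidate quanta in a plausible window and keep the best fit.
candidates = [1.0 + 0.001 * k for k in range(1501)]   # 1.0 to 2.5
best = min(candidates, key=mean_error)
print(f"estimated elementary charge: {best:.2f} x 1e-19 C")
```

The scan lands near 1.6, recovering the quantum buried in the fake data; Millikan's real achievement, of course, was the painstaking control of systematic errors that made his measured charges trustworthy in the first place.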

I remember doing Millikan’s oil drop experiment as an undergraduate physics major at the University of Kansas. It required several hours in a dark room staring at small oil drops through a microscope. When in graduate school, I read one of Millikan’s papers in the book Selected Papers of Great American Physicists: The Bicentennial Commemorative Volume of The American Physical Society. I was particularly impressed by Millikan’s careful analysis of sources of systematic error in his experiment. In fact, I used that paper as a model for one of my few experimental papers: “The magnetic field of a single axon: A comparison of theory and experiment,” (Roth and Wikswo, Biophys. J., 48:93-109, 1985). Some have claimed that Millikan committed scientific fraud by an improper selection of data to use in his analysis, but that claim has been debunked (see Data Selection and Responsible Conduct: Was Millikan a Fraud? By Richard Jennings, Science and Engineering Ethics, Volume 10, Pages 639-653, 2004).

I have a personal reason for being interested in the work of Robert Millikan. According to his Nobel Prize biography, he was born in Morrison, Illinois, a small town 120 miles west of Chicago, about 15 miles from the Mississippi River. This is the town I grew up in, from an age of just a few months until I was 12 years old. At the time, I didn’t realize who Robert Millikan was, or that Morrison was the home of a Nobel Prize-winning physicist. But over the years I have become a big fan of “Millikan from Morrison”. According to the Morrison chamber of commerce, there is now a downtown park named after Millikan. I must go visit.

Friday, December 3, 2010

Physical Biology of the Cell

I spent some time this week looking over the recently published textbook Physical Biology of the Cell, by Rob Phillips, Jane Kondev, and Julie Theriot. In some ways this book is a competitor of the 4th edition of Intermediate Physics for Medicine and Biology (it is always good to know your competition). Bernard Chasan reviewed Physical Biology of the Cell in the November 2010 issue of the American Journal of Physics.
“The authors of this book are, in a very real sense, missionaries. They want to convince a wide audience to share their enthusiasm for and commitment to a more quantitative and scientifically rigorous approach to cell biology than is normally encountered in the teaching literature.

To achieve this goal, they set out a program of quantitative model building based on physical principles…. What the authors describe (awkwardly but evocatively) as the mathematizing of the semiqualitative models of cell biology (referred to as “cartoons” in some circles) has now become central to cell biology—as evidenced by a half a dozen recent texts and the relatively new and thriving discipline of systems biology. The work being reviewed is the latest and most comprehensive attempt to foster and advocate for this approach…

At the center of their approach is the art of model making—well presented with the aid of some excellent figures, which show the choices needed to model proteins, as one example. The main point is that modeling requires a simplifying choice, which emphasizes one view of the protein and essentially ignores others. If it suits your purposes to model the protein as a collection of hydrophobic and hydrophilic amino acid residues—a good model for protein folding—then you cannot at the same time consider the protein as a two state system.”
After skimming through Physical Biology of the Cell (I wish I had time to read it thoroughly), I have several observations.

1. The second half of Intermediate Physics for Medicine and Biology (IPMB) is about clinical medical physics: imaging and therapy. None of this appears in Physical Biology of the Cell (PBC). Also, in IPMB Russ Hobbie and I steer clear of molecular biology, saying in the preface that “molecular biophysics has been almost completely ignored: excellent texts already exist, and this is not our area of expertise”. PBC is all molecular and cellular. The main overlap between the two books is several chapters in PBC that cover topics similar to those in the first half of IPMB. So, I guess IPMB and PBC are not really in direct competition. However, if I were Phil Nelson, author of Biological Physics: Energy, Information, Life, I might be concerned about market share.

2. PBC is illustrated by Nigel Orme. Let me be frank; Orme’s drawings are much better than what we have in IPMB. One thing I like about PBC is that you can skip the text altogether and just look at the pictures, and still learn the gist of the subject. Figure 1.4 showing the genetic code reminds me of the sort of graphics that Edward Tufte promotes in The Visual Display of Quantitative Information. The authors of PBC state in the acknowledgments “this book would never have achieved its present incarnation without the close and expert collaboration of our gifted illustrator, Nigel Orme, who is responsible for the clarity and visual appeal of the more than 550 figures found in these pages, as well as the overall design of the book.” As generous as this tribute is, it may be an understatement. Then, just when I thought the artwork couldn’t get any better, I found that PBC contains several beautiful figures contributed by David Goodsell, author of The Machinery of Life.

3. In the 4th edition of IPMB, Russ and I added an initial section exploring the relative size of biological objects. In PBC, a similar discussion fills the entire Chapter 2. There is lots of numerical estimating in this chapter, reminding me of the Bionumbers website. Chapter 3 looks at different temporal scales, which is more difficult to show visually than spatial scales (Russ and I didn’t try), although Orme’s drawings do a pretty good job. Chapter 4 of PBC looks at the many model systems used in biology, with an eye toward history (Mendel’s pea plants, hemoglobin and the structure of a protein, the bacteriophage in genetics, etc.). Great reading.

4. Some subjects—such as diffusion, fluid dynamics, thermodynamics, and bioelectricity—are covered in both PBC and IPMB. Which book explains these topics better? Obviously I am biased, but I suggest that Russ and I develop the physics in a more detailed and systematic way, starting from the fundamentals, whereas Phillips, Kondev and Theriot present the physics rather quickly, and then apply it to many interesting biological applications. I would say that PBC does for molecular and cellular biology what Air and Water by Mark Denny does for physiology: use physics and math to explain biological concepts quantitatively. Russ and I, on the other hand, teach physics using biological examples. The difference is more about approach, tone, and point of view than about substance. The reader can look at both books and draw their own conclusions.

5. PBC has a few nice homework problems, but I prefer IPMB’s more extensive collection. The student learns more by doing than by reading.

6. The final chapter in PBC, “Whither Physical Biology,” is an excellent summary of “the role of quantitative analysis in the study of living matter.” Anyone working at the interface between physics and biology must read these ten pages.

Phillips, Kondev, and Theriot ought to have the last word, so I will finish this blog entry by quoting PBC’s eloquent closing paragraph.
“The act of writing this book has convinced each of us that the study of living matter is one of the most exciting frontiers in human thought. Just as the makings of the large scale universe are being revealed by ever more impressive telescopes, living matter is now being viewed in ways that were once as unimaginable as was going to the Moon. Despite the muscle-enhancing weight of this book, we feel that we have only scratched the surface of the rich and varied applications of physical reasoning to biological problems. Our overall goal has been to communicate a style of thinking about problems where we have done our best to illustrate the power of the style using examples chosen from biological systems that are well defined and usually well studied from a biological perspective. As science moves forward into the twenty-first century, it is our greatest hope that synthetic approaches for understanding the natural world from biological, physical, chemical, and mathematical perspectives simultaneously will enrich all of these fields and illuminate the world around us. We can only hope the reader has at least a fraction of the pleasure in answering that charge as we have had in attempting to describe the physical biology of the cell.”

Friday, November 26, 2010

Acetylcholine and Loewi’s Dream

In 1936, Otto Loewi was awarded the Nobel Prize in Physiology or Medicine for the discovery of the role of acetylcholine and other chemicals in nerve and muscle transmission. Russ Hobbie and I don’t mention Loewi in the 4th edition of Intermediate Physics for Medicine and Biology, but we do discuss acetylcholine. In Chapter 6, we write
“At the end of a nerve cell the signal passes to another nerve cell or to a muscle cell across a synapse or junction. A few synapses in mammals are electrical; most are chemical…In electrical synapses, channels connect the interior of one cell with the next. In the chemical case a neurotransmitter chemical is secreted by the first cell. It crosses the synaptic cleft (about 50 nm) and enters the next cell.

At the neuromuscular junction the transmitter is acetylcholine (ACh). ACh increases the permeability of nearby muscle to sodium, which then enters and depolarizes the muscle membrane. The process is quantized. Packets of acetylcholine of definite size are liberated.”
In Homework Problem 20 in Chapter 4, we ask the student to calculate the time required for acetylcholine to diffuse across the synaptic cleft. The release of acetylcholine at the nerve-muscle junction in discrete quanta provides a nice example of Poisson Statistics described in Appendix J. In Chapter 7, when discussing the heart, we mention how acetylcholine, released by parasympathetic nerves, decreases the heart rate.
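Both of those classroom examples can be sketched numerically. The cleft-crossing time follows from the one-dimensional estimate t of about x^2/(2D); the diffusion coefficient of acetylcholine and the mean quantal content used below are values I am assuming for illustration, not numbers from the book.

```python
import math

# Assumed values: the 50 nm cleft width comes from the text above;
# the diffusion coefficient of acetylcholine in water is my estimate.
x = 50e-9          # synaptic cleft width, m
D = 4e-10          # diffusion coefficient of ACh, m^2/s (assumed)

t = x ** 2 / (2.0 * D)   # one-dimensional diffusion time, t ~ x^2/(2D)
print(f"time to diffuse across the cleft: ~{t * 1e6:.1f} microseconds")

# Quantal release follows Poisson statistics: if a nerve impulse
# releases m packets on average, then P(k packets) = e^-m m^k / k!.
def poisson(k, m):
    return math.exp(-m) * m ** k / math.factorial(k)

m = 2.3   # assumed mean quantal content, for illustration
print(f"P(no release at all) = {poisson(0, m):.3f}")
```

The crossing time comes out at a few microseconds, short compared to the millisecond time scale of the action potential, which is why chemical synapses can keep up.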

I can’t tell you about Otto Loewi and acetylcholine without mentioning the fascinating tale of Loewi’s dream. Since Isaac Asimov is a much better storyteller than I am, I will simply quote from his essay “The Eureka Phenomenon” published in The Left Hand of the Electron.
“The German physiologist Otto Loewi was working on the mechanism of nerve action, in particular, on the chemicals produced by nerve endings. He woke at 3 A.M. one night in 1921 with a perfectly clear notion of the type of experiment he would have to run to settle a key point that was puzzling him. He wrote it down and went back to sleep. When he woke in the morning, he found he couldn't remember what his inspiration had been. He remembered he had written it down, but he couldn't read his writing.

The next night, he woke again at 3 A.M. with the clear thought once more in mind. This time, he didn't fool around. He got up, dressed himself, went straight to the laboratory and began work. By 5 A.M. he had proved his point and the consequences of his findings became important enough in later years so that in 1936 he received a share in the Nobel prize in medicine and physiology.”

Friday, November 19, 2010

Viral Outbreak: The Science of Emerging Disease

Textbooks such as the 4th edition of Intermediate Physics for Medicine and Biology are essential for studying and learning a new topic, but other ways of learning can be equally effective (or, sometimes, even better). Today I want to mention two examples.

Each December, the Howard Hughes Medical Institute presents its Holiday Lectures on Science. These excellent seminars will be webcast live on December 2 and 3, starting at 10 am. This year, the lectures are about Viral Outbreak: The Science of Emerging Disease. Joseph DeRisi (University of California, San Francisco) and Eva Harris (University of California, Berkeley) will explain how to detect and fight infectious agents. The lectures will answer questions such as “Why is dengue fever becoming a worldwide health threat?”, “What other epidemics are on the horizon?” and “How can we detect and counter emerging infectious diseases?”. If you miss the live webcast, you can download an on-demand webcast starting December 6. I have watched these holiday lectures in the past, and they are very good. They are aimed at a serious high school student, or an undergraduate science major. They are also great for a physicist looking for a general introduction to a biological or medical topic.

Of course, the best way to learn science is to do science. For undergraduate students (the main readers of Intermediate Physics for Medicine and Biology), the first exposure to doing science may come during a summer research project. Now is the time to start looking for summer research opportunities. One that I recommend is the Summer Internship Program in Biomedical Science at the National Institutes of Health. I worked at NIH for seven years, and it is a wonderful place to do scientific research. My advice is to apply for this internship today. You won’t regret it.

Friday, November 12, 2010

Bionumbers

One feature of the 4th edition of Intermediate Physics for Medicine and Biology that distinguishes it from many other medical or biological textbooks is its focus on analyzing biomedical topics quantitatively. This point of view is also promoted at the BIONUMB3R5 (bionumbers) website, established by researchers in the systems biology department at Harvard. There is also a BIONUMB3R5 wiki where many researchers are coming together to provide new insights into key numbers in biology.

I particularly like the “bionumber of the month” feature. The March 2010 entry (“what are the time scales for diffusion in cells”) could easily be made into a homework problem for Chapter 4 of Intermediate Physics for Medicine and Biology. The January 2010 entry (“what is faster, transcription or translation?”) is fascinating:
“Transcription, the synthesis of mRNA from DNA, and translation, the synthesis of protein from mRNA, are the main pillars of the central dogma of molecular biology. How do the speeds of these two processes compare? …

Transcription of RNA by RNA polymerase in E. coli cells proceeds at a maximal speed of about 40-80 bp/sec … Translation by the ribosome in E. coli proceeds at a maximal speed of about 20 aa/sec … Interestingly, since every 3 base pairs code for one amino acid, the rates of the two processes are quite similar…”
The “collection of fundamental numbers in molecular biology” found at the bionumbers website has the same tone as the first section of Chapter 1 in Intermediate Physics for Medicine and Biology, in which Russ Hobbie and I look at the relative size of biological objects. The collection contains this gem: “concentration of 1 nM in a cell the volume of e. coli is ~ 1 molecule/cell”.
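That gem is easy to check. The sketch below assumes an E. coli volume of about one cubic micron (a femtoliter), a standard round number rather than anything from the website.

```python
# Check the bionumbers gem: 1 nM in an E. coli-sized cell is about
# one molecule. The cell volume (~1 cubic micron = 1e-15 L) is an
# assumed round number.
N_A = 6.022e23        # Avogadro's number, 1/mol
conc = 1e-9           # concentration, mol/L
volume = 1e-15        # cell volume, L

molecules = conc * volume * N_A
print(f"~{molecules:.1f} molecules per cell")
```

The count comes out to roughly 0.6 molecules per cell, close enough to one to make nanomolar a genuinely useful mental unit for single-cell chemistry.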

The bionumbers website arose from an article by Rob Phillips and Ron Milo, A Feeling for the Numbers in Biology, published in the Proceedings of the National Academy of Sciences (Volume 106, Pages 21465-21471, 2009). The abstract of their paper is given below:
“Although the quantitative description of biological systems has been going on for centuries, recent advances in the measurement of phenomena ranging from metabolism to gene expression to signal transduction have resulted in a new emphasis on biological numeracy. This article describes the confluence of two different approaches to biological numbers. First, an impressive array of quantitative measurements make it possible to develop intuition about biological numbers ranging from how many gigatons of atmospheric carbon are fixed every year in the process of photosynthesis to the number of membrane transporters needed to provide sugars to rapidly dividing Escherichia coli cells. As a result of the vast array of such quantitative data, the BioNumbers web site has recently been developed as a repository for biology by the numbers. Second, a complementary and powerful tradition of numerical estimates familiar from the physical sciences and canonized in the so-called “Fermi problems” calls for efforts to estimate key biological quantities on the basis of a few foundational facts and simple ideas from physics and chemistry. In this article, we describe these two approaches and illustrate their synergism in several particularly appealing case studies. These case studies reveal the impact that an emphasis on numbers can have on important biological questions.”
Russ and I introduce similar order-of-magnitude estimates (Fermi problems) in Chapter 1 of our book (for example, see homework problems 1-4, which are new in the 4th edition). One of my favorite Fermi problems, which I first encountered in the book Air and Water by Mark Denny, is to calculate the concentration of oxygen molecules in blood and in air, and compare them. Not too surprisingly, they are nearly the same (about 8 mM). I suspect the bionumbers folks would enjoy Air and Water. (I hope they would enjoy Intermediate Physics for Medicine and Biology, too.)
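Denny's estimate can be sketched in a few lines. The numbers below are assumed round values (air at roughly 20 °C, about 150 g of hemoglobin per liter of blood, four O2 binding sites per hemoglobin molecule of molecular weight ~64,500); they are not taken from either book:

```python
# Oxygen in air: ideal gas, n/V = P/(RT); note mol/m^3 equals mmol/L
R = 8.314                    # gas constant, J/(mol K)
P = 101325.0                 # atmospheric pressure, Pa
T = 293.0                    # ~20 C, in kelvin
o2_air = 0.21 * P / (R * T)  # O2 is ~21% of air -> ~8.7 mM

# Oxygen in blood: carried almost entirely by hemoglobin
hb_conc = 150.0              # g of hemoglobin per liter of blood
hb_mw = 64500.0              # g/mol
o2_blood = 4 * hb_conc / hb_mw * 1000   # 4 sites per molecule -> ~9.3 mM

print(o2_air, o2_blood)      # both come out at roughly 8-9 mM
```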

For those of you who find all of this interesting but prefer video over text, see the bionumbers video on YouTube.

Friday, November 5, 2010

Seeing the Natural World with a Physicist’s Lens

One theme of this blog—and indeed, one theme of the 4th edition of Intermediate Physics for Medicine and Biology—is the role of physics in the biological sciences. So imagine my delight when Russ Hobbie sent me a similarly themed article from the November 1 issue of the New York Times (a publication that, alas, has more readers than does my blog). Natalie Angier, who studied for two years at that little college down the road in Ann Arbor, wrote an article titled Seeing the Natural World With a Physicist’s Lens. Its thesis is that many biological systems have evolved to perfection, in the sense that physical laws don’t let them get any better. Angier writes

“Yet for all these apparent flaws, the basic building blocks of human eyesight turn out to be practically perfect. Scientists have learned that the fundamental units of vision, the photoreceptor cells that carpet the retinal tissue of the eye and respond to light, are not just good or great or phabulous at their job. They are not merely exceptionally impressive by the standards of biology, with whatever slop and wiggle room the animate category implies. Photoreceptors operate at the outermost boundary allowed by the laws of physics, which means they are as good as they can be, period. Each one is designed to detect and respond to single photons of light—the smallest possible packages in which light comes wrapped….


Photoreceptors exemplify the principle of optimization, an idea, gaining ever wider traction among researchers, that certain key features of the natural world have been honed by evolution to the highest possible peaks of performance, the legal limits of what Newton, Maxwell, Pauli, Planck et Albert will allow. Scientists have identified and mathematically anatomized an array of cases where optimization has left its fastidious mark…In each instance, biophysicists have calculated, the system couldn’t get faster, more sensitive or more efficient without first relocating to an alternate universe with alternate physical constants.”

Angier has written a lot of articles for the NYT and has published several books that will be of interest to readers of Intermediate Physics for Medicine and Biology. Enjoy!

Friday, October 29, 2010

Iatrogenic Problems in End-Stage Renal Failure

In Section 5.7 of the 4th edition of Intermediate Physics for Medicine and Biology, where Russ Hobbie and I discuss the artificial kidney, we say
“The artificial kidney provides an example of the use of the transport equations to solve an engineering problem….The reader should also be aware that this ‘high-technology’ solution to the problem of chronic renal disease is not an entirely satisfactory one. It is expensive and uncomfortable and leads to degenerative changes in the skeleton and severe atherosclerosis.

The alternative treatment, a transplant, has its own problems, related primarily to the immunosuppressive therapy. Anyone who is going to be involved in biomedical engineering or in the treatment of patients with chronic disease should read the account by Calland (1972), a physician with chronic renal failure who had both chronic dialysis and several transplants.”
The paper by Chad Calland, in the New England Journal of Medicine (Iatrogenic Problems in End-Stage Renal Failure, Volume 287, pages 334-336, 1972), was published on the same day that Calland took his own life. Wikipedia defines “iatrogenic” as “inadvertent adverse effects or complications caused by or resulting from medical treatment or advice.” It is a problem we must constantly be aware of as we seek to improve medical care through technology. Calland wrote
“The physician is more often a voyeur than a partaker in human suffering. I am a physician who has undergone chronic renal failure, dialysis and multiple transplants. As a physician-partaker, I am distressed by the controversial dialogue that separates the nephrologist from the transplant surgeon, so that, in the end, it is the patient who is given short shrift. I have observed that both nephrologist and transplant surgeon work alone in their own separate fields, and that the patient becomes lost in a morass of professional role playing and physician self-justification. As legitimate as their altruistic but differing opinions may be, the nephrologist and the transplant surgeon must work together for the patient, so that therapy is tailored to suit the individual patient, his circumstances, his needs and the quality of his life.”

Friday, October 22, 2010

Glimpses of Creatures in Their Physical Worlds

I am a loyal member of Sigma Xi, the Scientific Research Society, and am a regular reader of its marvelous magazine American Scientist. One of the best parts of this bimonthly periodical is its book reviews. In the November-December 2010 issue of American Scientist, Mark Denny (author of Air and Water) reviews the new book by Steven Vogel: Glimpses of Creatures in Their Physical Worlds (Princeton University Press, 2009). Both Denny and Vogel appear in the 4th edition of Intermediate Physics for Medicine and Biology. Denny writes
“Vogel’s contributions to biomechanics have had two admirable objectives. In Life in Moving Fluids (1981), Life’s Devices (1988), Vital Circuits (1992), Prime Mover (2001) and Comparative Biomechanics (2003), his goal is to explain the mechanics of biology to a general audience. If you want to know how fish swim, fleas jump and bats fly, or why hardening of your arteries is a bad thing, then dip into these sources; you will come away both informed and amused….

All too often, biologists observe only what they are prepared to see. Vogel’s second objective is therefore to expand their perspectives by conjuring up and carefully analyzing systems that might be…. For example, dogs don’t sweat as humans do. Instead, they pant, evaporating water from their respiratory tracts and expelling the resulting warm, moist air with each breath. But panting requires the repeated contraction of chest muscles, which adds to the heat the animal desires to lose. Could there be a better way?....

To find out, read Glimpses of Creatures in Their Physical Worlds. Here, as in Cats’ Paws and Catapults (1998), Vogel takes a decidedly nontraditional look at biology, unleashing his talent for unbridled speculation. The 12 chapters of Glimpses, which began as a series of essays in the Journal of Biosciences, have been revised and updated. They cover topics that range from the ballistics of seeds (plants use both catapults and cannons to launch their propagules) to the breathing apparatus of diving spiders (tiny hairs on the body take advantage of surface tension to maintain an airspace into which oxygen can flow), with stops along the way to explore the efficiency of man-made and natural pumps, the twist-to-bend ratios of daffodils in the breeze, and the physics of cow tipping….

If what you desire in a readable science book is food for thought, Glimpses of Creatures in Their Physical Worlds provides a feast. Biologists, engineers and physicists—indeed, anyone with curiosity about the natural world—will revel in this smorgasbord of biomechanical ideas.”
I will put reading Glimpses on my to-do list, maybe during the semester break.

If you get a copy of American Scientist so you can read Denny’s entire review, don’t miss another review in the same issue about a new edition (with notes and commentary) of the classic Flatland by Edwin Abbott. Flatland is a favorite of mine, and I agree with Colin Adams who says in his review: “In the pantheon of popular books about mathematics, one would be hard-pressed to name another that has lasted so long in popularity or had such a dramatic impact.”

Friday, October 15, 2010

Michael Faraday, Biological Physicist?

Last week in this blog I discussed the greatest physicist of all time, Isaac Newton. However, if we narrow consideration to only experimental physicists, I would argue that the greatest is Michael Faraday (with apologies to Ernest Rutherford, who is a close second). In section 8.6 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss Faraday’s greatest discovery: electromagnetic induction.
“In 1831 Faraday discovered that a changing magnetic field causes an electric current to flow in a circuit. It does not matter whether the magnetic field is from a permanent magnet moving with respect to the circuit or from the changing current in another circuit. The results of many experiments can be summarized in the Faraday induction law.”
I have always admired the 19th century Victorian physicists, such as Faraday, Maxwell and Kelvin. Michael Faraday, in particular, is a hero of mine (it is good to have heroes; they help you stay inspired when the mundane chores of life distract you). I had the pleasure of quoting from Faraday’s Experimental Researches in Electricity in an editorial I wrote in 2005 for the journal Heart Rhythm: Michael Faraday and Painless Defibrillation. I tried to get a picture of Faraday included as part of the editorial, but alas the journal editor removed it. The article described a heart defibrillator having a design that included a type of Faraday cage.
“Michael Faraday, arguably the greatest experimental physicist who ever lived, first demonstrated the shielding effect of a hollow conductor in 1836 by building a 12 ft x 12 ft x 12 ft cubic chamber out of metal. We would now call it a 'Faraday cage.'

‘I went into the cube and lived in it, and using lighted candles, electrometers, and all other tests of electrical states, I could not find the least influence upon them, or indication of anything particular given by them, though all the time the outside of the cube was powerfully charged, and large sparks and brushes were darting off from every part of its outer surface.’ [Faraday M. Experimental Researches in Electricity. Paragraph 1174. Reprinted in: Hutchins RM, editor. Great Books of the Western World, vol 45. Encyclopedia Britannica, Chicago, 1952.]

Faraday cages are used to shield sensitive electronic equipment. The metal skin of an airplane, acting as a Faraday cage, protects passengers from injury by lightning. Researchers perform electrophysiology experiments inside a Faraday cage to prevent external noise from contaminating the data. A rather spectacular example of shielding can be seen in the Boston Museum of Science, where a van de Graaff generator of over one million volts produces a dramatic display of lightning, while the operator stands nearby—safe inside a Faraday cage.
Why this little physics lesson? In this issue of Heart Rhythm, Jayam et al. [Jayam V, Zviman M, Jayanti V, Roguin A, Halperin H, Berger RD. Internal defibrillation with minimal skeletal muscle activation: a new paradigm toward painless defibrillation. Heart Rhythm 2005;2:1108–1113.] describe a new electrode system for internal defibrillation that eliminates the skeletal muscle activation and pain associated with a shock. The central feature of their design is a Faraday cage: a conducting sock fitted over the epicardial surface of the heart…”
In Section 8.7, Russ and I describe what may be the most important biomedical application of Faraday’s work: magnetic stimulation.
"Since a changing magnetic field generates an induced electric field, it is possible to stimulate nerve or muscle cells without using electrodes. The advantage is that for a given induced current deep within the brain, the currents in the scalp that are induced by the magnetic field are far less than the currents that would be required for electrical stimulation. Therefore transcranial magnetic stimulation (TMS) is relatively painless. Magnetic stimulation can be used to diagnose central nervous system diseases that slow the conduction velocity in motor nerves without changing the conduction velocity in sensory nerves [Hallett and Cohen (1989)]. It could be used to monitor motor nerves during spinal cord surgery, and to map motor brain function. Because TMS is noninvasive and nearly painless, it can be used to study learning and plasticity (changes in brain organization over time). Recently, researchers have suggested that repetitive TMS might be useful for treating depression and other mood disorders."
I worked on magnetic stimulation for many years while at the National Institutes of Health in the 1990s. It was a pleasure to explore an application of Faraday induction; it is my kind of biological physics.

Faraday’s name can be found in a few other places in our book. It first appears in Chapter 3, when the Faraday constant is defined: F = 96,485 Coulombs per mole. It also appears in an abbreviated form in the unit of capacitance: a farad (F).

I suppose by now the reader realizes that I like Mike. But is he a biological physicist? Doubters might want to look at another physics blog: http://skullsinthestars.com/2010/05/15/shocking-michael-faraday-does-biology-1839. Faraday apparently did studies on the electrodynamics of electric fish. So, yes, I claim him as a biological physicist, and the question mark in the title of this entry is unnecessary.

Friday, October 8, 2010

Isaac Newton, Biological Physicist?

Arguably the greatest physicist of all time (and probably the greatest scientist of all time) is Isaac Newton (1643-1727). Newton is so famous that the English put him on their one pound note (although I gather nowadays they use a coin instead of paper currency for one pound). Given Newton’s influence, it is fair to ask what his role is in the 4th edition of Intermediate Physics for Medicine and Biology. One way Newton (along with Leibniz) contributes to nearly every page of our book is through the invention of calculus (or, as I prefer, “the calculus”). Russ Hobbie states in the preface of our book that “calculus is used without apology.”

When I search the book for Newton’s name, I find quite a few references to Newton’s laws of motion, and in particular the second law, F=ma. Newton presented his three laws in his masterpiece, the Principia (1687). (Few people, including me, have actually read the Principia, but a good place to learn about it is the book Newton’s Principia for the Common Reader by Subrahmanyan Chandrasekhar.) Of course, the unit of force is the newton, so his name pops up often in that context. The only place where we talk about Newton the man is very briefly in the context of light.
"A controversy over the nature of light existed for centuries. In the seventeenth century, Sir Isaac Newton explained many properties of light with a particle model. In the early nineteenth century, Thomas Young performed some interference experiments that could be explained only by assuming that light is a wave. By the end of the nineteenth century, nearly all known properties of light, including many of its interactions with matter, could be explained by assuming that light consists of an electromagnetic wave."
Newton’s name also arises when talking about Newtonian fluids (Chapter 1): a fluid in which the shear stress is proportional to the velocity gradient. Not all fluids are Newtonian, with blood being one example. Newton appears again when discussing Newton’s law of cooling (Chapter 3, Problem 45).

Some of Newton’s greatest discoveries are not addressed in our book. For instance, Newton’s universal law of gravity is never mentioned. Except for a few intrepid astronauts, animals live at the surface of the earth where gravity is simply a constant downward force and Newton’s inverse square law is not relevant. I suppose tides influence animals and plants that live near the ocean shore, and the behavior of tides is a classic application of Newtonian gravity, but we never discuss tides in our book. (By the way, harkening back to my vacation in France last summer, the tides at Mont Saint Michel are fascinating to watch. I really must plan a trip to the Bay of Fundy next.) Newton, in his book Opticks, made important contributions to our understanding of color, but Russ and I introduce that subject without referring to him. We don’t discuss telescopes in our book, and thus miss a chance to honor Newton for his invention of the reflecting telescope.

A wonderful biography of Newton is Never at Rest, by Richard Westfall. I must admit, Newton is a strange man. His argument with Leibniz about the invention of calculus is perhaps the classic example of an ugly priority dispute. He does not seem to be particularly kind or generous, despite his undeniable genius.

Was Newton a biological physicist? Well, that may be a stretch, but Colin Pennycuick has written a book titled Newton Rules Biology, so we cannot deny his influence. I would say that Newton’s contributions are so widespread and fundamental that they play an important role in all subfields of physics.

Friday, October 1, 2010

Ultraviolet Light Causes Skin Cancer

The New England Journal of Medicine is arguably the premier medical journal in the world. Russ Hobbie is a regular reader, and he sometimes calls my attention to articles that are closely related to topics in the 4th edition of Intermediate Physics for Medicine and Biology. The September 2, 2010 issue of the NEJM contains the article “Indoor tanning—Science, Behavior, and Policy” (Volume 363, Pages 901-903), by David Fisher and William James. The article begins
"The concern arises from increases in the incidence of melanoma and its related mortality. In the United States, the incidence of melanoma is increasing more rapidly than that of any other cancer. From 1992 through 2004, there was a particularly alarming trend in new melanoma diagnoses among girls and women between the ages of 15 and 39. Data from the National Cancer Institute’s Surveillance, Epidemiology, and End Results Registry show an estimated annual increase of 2.7% in this group. Researchers suspect that the increase results at least partially from the expanded use of tanning beds.”
Russ and I discuss ultraviolet light in Section 14.9 of Intermediate Physics for Medicine and Biology. In particular, subsection 14.9.4 is titled “Ultraviolet light causes skin cancer.”
“Chronic exposure to ultraviolet radiation causes premature aging of the skin. The skin becomes leathery and wrinkled and loses elasticity. The characteristics of photoaged skin are quite different from skin with normal aging [Kligman (1989)]. UVA radiation was once thought to be harmless. We now understand that UVA radiation contributes substantially to premature skin aging because it penetrates into the dermis. There has been at least one report of skin cancer associated with purely UVA radiation from a cosmetic tanning bed [Lever and Lawrence (1995)]. This can be understood in the context of studies showing that both UVA and UVB suppress the body’s immune system, and that this immunosuppression plays a major role in cancer caused by ultraviolet light [Kripke (2003); Moyal and Fourtanier (2002)]. There are three types of skin cancer. Basal-cell carcinoma (BCC) is most common, followed by squamous cell carcinoma (SCC). These are together called nonmelanoma or nonmelanocytic skin cancer (NMSC). Basal-cell carcinomas can be quite invasive (Fig. 16.44) but rarely metastasize or spread to distant organs. Squamous-cell carcinomas are more prone to metastasis. Melanomas are much more aggressive and frequently metastasize.”
The Skin Cancer Foundation advocates vigorously for the reduction of indoor tanning, and the American Association for Cancer Research has also spoken out against tanning beds. The problem seems to be growing.

Fisher and James conclude their article
“An estimated six of every seven melanomas are now being cured, thanks to early detection, but the U.S. Preventive Services Task Force does not recommend skin-cancer screening, since the evidence for its benefit has not been validated in large, prospective, randomized trials. Meanwhile, a number of promising new drugs for metastatic melanoma are progressing slowly through clinical trials to satisfy the FDA’s stringent safety and efficacy criteria—requirements that, remarkably, have not been applied to indoor tanning devices. Relatively few human cancers are tightly linked to a known environmental carcinogen. Given the mechanistic and epidemiologic data, we believe that regulation of this industry may offer one of the most profound cancer-prevention opportunities of our time.”

Friday, September 24, 2010

Adrien-Marie Legendre

On page 181 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce Legendre polynomials. The Legendre polynomial P2(cos(θ)) arises naturally when calculating the extracellular potential in a volume conductor at a position far from an active nerve axon. We include the footnote “You can learn more about Legendre polynomials in texts on differential equations or, for example, in Harris and Stocker (1998).” On page 184, we list the first four Legendre polynomials (and have another footnote referring to Harris and Stocker). Any physics student should memorize at least the first three of these polynomials:

P0(x) = 1
P1(x) = x
P2(x) = (3x^2 - 1)/2 .

Legendre polynomials have many interesting properties. They are a solution of Legendre’s differential equation

(1 - x^2) d^2Pn/dx^2 - 2x dPn/dx + n(n+1) Pn = 0 .

You can calculate any Legendre polynomial using the Rodrigues formula

Pn(x) = 1/(2^n n!) d^n((x^2 - 1)^n)/dx^n .

They form an orthogonal set of functions for x over the range from -1 to 1, a property rather too technical to explain in this blog entry, but a very important one.
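Both the closed forms listed above and the orthogonality property are easy to check numerically. A short sketch using Bonnet's recurrence relation, (n+1) Pn+1 = (2n+1) x Pn - n Pn-1 (the diagonal normalization 2/(2n+1) is the standard result, stated here without derivation):

```python
import numpy as np

def legendre(n, x):
    """P_n(x) from Bonnet's recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x = np.linspace(-1.0, 1.0, 100001)

# Agrees with the closed form P2(x) = (3x^2 - 1)/2
assert np.allclose(legendre(2, x), (3 * x**2 - 1) / 2)

# Orthogonality: the integral of P_m P_n over [-1, 1] vanishes for m != n
# and equals 2/(2n+1) for m == n (checked with the trapezoidal rule)
for m in range(4):
    for n in range(4):
        y = legendre(m, x) * legendre(n, x)
        integral = np.sum(y[:-1] + y[1:]) * (x[1] - x[0]) / 2
        expected = 2.0 / (2*n + 1) if m == n else 0.0
        assert abs(integral - expected) < 1e-6
```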

The astute reader might note that Legendre’s differential equation is second order, so there should be two solutions. That is right, but the other solution—called a Legendre function of the second kind, Qn—is rarely used, and tends to be poorly behaved at x = 1 and x = -1. For instance

Q0(x) = ½ ln((1+x)/(1-x)) .

A definitive source for information about Legendre polynomials is the Handbook of Mathematical Functions, by Milton Abramowitz and Irene Stegun.

When do Legendre’s polynomials appear in physics? You often find them when working in spherical coordinates, especially when (to use an analogy with the earth) a function depends on latitude but not longitude (axisymmetry). For instance, the general axisymmetric solution to Laplace’s equation in spherical coordinates is a series of powers of the radius r multiplied by Legendre polynomials with x = cos(θ), where θ is measured from the z-axis (or, to use the earth analogy again, from the north pole). Take an introductory class in Electricity and Magnetism (from, say, the book by Griffiths), and you will use Legendre polynomials all the time.
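For the record, the general axisymmetric solution described above can be written out (this is the standard textbook result, as found in Griffiths):

```latex
\Phi(r,\theta) = \sum_{l=0}^{\infty} \left( A_l\, r^{l} + \frac{B_l}{r^{l+1}} \right) P_l(\cos\theta)
```

where the coefficients A_l and B_l are fixed by the boundary conditions.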

Why do I bring up Legendre polynomials today? Regular readers of this blog may recall my recent obsession with all things French. Adrien-Marie Legendre (1752-1833) was a French mathematician. Details of his life are given in A Short Account of the History of Mathematics, by Rouse Ball.
“Adrien Marie Legendre was born at Toulouse on September 18, 1752, and died at Paris on January 10, 1833. The leading events of his life are very simple and may be summed up briefly. He was educated at the Mazarin College in Paris, appointed professor at the military school in Paris in 1777, was a member of the Anglo-French commission of 1787 to connect Greenwich and Paris geodetically; served on several of the public commissions from 1792 to 1810; was made a professor at the Normal school in 1795; and subsequently held a few minor government appointments. The influence of Laplace was steadily exerted against his obtaining office or public recognition, and Legendre, who was a timid student, accepted the obscurity to which the hostility of his colleague condemned him.

Legendre's analysis is of a high order of excellence, and is second only to that produced by Lagrange and Laplace, though it is not so original. His chief works are his Géométrie, his Théorie des nombres, his Exercices de calcul intégral, and his Fonctions elliptiques. These include the results of his various papers on these subjects. Besides these he wrote a treatise which gave the rule for the method of least squares, and two groups of memoirs, one on the theory of attractions, and the other on geodetical operations."

Friday, September 17, 2010

Augustin-Jean Fresnel

Apparently, dear reader, I am still obsessed by my trip to Paris last summer, because this will be the third week in a row that this blog has been about a famous French scientist. I hope you enjoy it.

Diffraction is a fundamental topic in physical optics that receives scant attention in the 4th edition of Intermediate Physics for Medicine and Biology. In fact, the index contains no entry for diffraction. (By the way, Russ Hobbie and I worked hard to make the index as complete and useful as possible.) However, a search for the term "diffraction" yields many appearances. Often it shows up as part of the term “X-ray diffraction,” but I have already addressed that technique in this blog a few weeks ago. A footnote on page 327, in Chapter 12 about images, mentions interference and diffraction in the context of coherence, and diffraction appears several times when discussing point-spread functions in that chapter. In Chapter 13 on ultrasound, diffraction is mentioned again as representing a limit to our ability to obtain an image. In Chapter 14, diffraction is discussed as a factor limiting our visual acuity.

The study of diffraction has a fascinating history, going back to the fundamental work of the French physicist Augustin-Jean Fresnel (1788-1827). Fresnel makes only one brief appearance in Intermediate Physics for Medicine and Biology, when discussing diffraction effects and the “Fresnel Zone” produced by an ultrasound transducer. To try and make up for Fresnel’s absence from our book, I will provide here some of the highlights of his short life (he died at age 39). Incidentally, I’m not the only blogger interested in Fresnel.

I first came to appreciate Fresnel’s contributions when reading the books of physicist Mark Silverman. In particular, I enjoyed Silverman’s Waves and Grains: Reflections on Light and Learning. He writes
“Fresnel, as the reader will discover (if it is not already obvious), is a central figure and something of a hero in this book. Pathetically all too human in his desperate desire to distinguish himself in the world of science, his ambitions are the ambitions of all of us who do research, write papers, and seek recognition. As a young man trained in engineering, he first turned his attention to industrial chemistry but learned to his chagrin that what he thought was original work was anticipated by others. Disappointed, he later immersed himself in the wave theory of light, guided and encouraged by François Arago—one of very few wave enthusiasts in the Paris Academy—who helped publicize his work both in France and abroad. […]

In 1817 the Paris Academy launched a competition for the essay best accounting for the diffraction of light. With the exception of Arago, the committee responsible for the event consisted exclusively of partisans, like Laplace and Biot, of the particle hypothesis [of light…]. Fresnel, as one might imagine, was not initially enthusiastic about entering—his whole direction of research having apparently already been ruled out by the wording. Nevertheless, urged on again by Arago, he composed a lengthy paper summarizing his philosophical approach, his methods, and his results. It is an amusing irony of history that Siméon-Denis Poisson—another graduate of the Polytechnique noted for his broad theoretical contributions to physics and mathematics, and a staunch advocate of the corpuscular theory—noted a glaring inconsistency in Fresnel’s theory. Applying this theory to an opaque circular screen, Poisson deduced the (to him) ludicrous result that the center of the shadow doit être aussi éclairé que si l’écran n’existait pas (must be as brightly illuminated as if the screen did not exist). Arago performed the experiment in advance of the committee’s decision, and the bright center—which history records as Poisson’s spot—showed up as predicted.

Fresnel, his relentless efforts finally recognized, received the prize—but Biot, Poisson, and others remained unshaken in their particle convictions.”
If you get a copy of Silverman’s book, don’t miss the last chapters on Science and Learning.

Living here in Michigan, surrounded by the Great Lakes, I have become fond of lighthouses, and particularly of the spectacular Fresnel lenses that you can find in many of them. Click here to see pictures of some, and here to see information about Fresnel lenses found in Michigan. It is another of Fresnel’s many contributions to science.

Friday, September 10, 2010

Joseph Fourier

The August 2010 issue of Physics Today, published by the American Institute of Physics, contains an article by T. N. Narasimhan about Thermal Conductivity Through the 19th Century. A large part of the article deals with Joseph Fourier (1768-1830), the French physicist and mathematician. Russ Hobbie and I discuss Fourier’s mathematical technique of representing a periodic function as a sum of sines and cosines of different frequencies in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology. Interestingly, this far-reaching mathematical idea grew out of Fourier’s study of heat conduction and thermal conductivity. Russ and I introduce thermal conductivity in homework problem 15 of Chapter 4 about diffusion. This is not as odd as it sounds because, as shown in the problem, heat conduction and diffusion are both governed by the same partial differential equation, typically called the diffusion equation (Eq. 4.24). The concept of heat conduction is crucial when developing the bioheat equation (Chapter 14), which has important medical applications in tissue heating and ablation.
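The link between the two halves of Fourier's legacy is direct: substitute a single sine mode into the diffusion equation and it simply decays exponentially, which is why expanding an initial temperature profile in sines (a Fourier series) solves the heat conduction problem. A small numerical sketch, with assumed illustrative parameters (not an example from the book):

```python
import numpy as np

# Explicit finite-difference solution of the 1D diffusion (heat) equation,
# dT/dt = D d^2T/dx^2, with T = 0 at both ends. Fourier's insight: each
# sine mode evolves independently, decaying as exp(-D k^2 t).
D, L = 1.0, 1.0
nx = 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D          # obeys the stability limit dt <= dx^2 / (2 D)
T = np.sin(np.pi * x / L)     # start with the lowest Fourier mode

t = 0.0
for _ in range(500):
    T[1:-1] += D * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt

# Compare with the exact single-mode solution
exact = np.exp(-D * (np.pi / L)**2 * t) * np.sin(np.pi * x / L)
print(np.max(np.abs(T - exact)))   # small discretization error
```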

Narasimhan’s article provides some interesting insights into Fourier and his times.
“In 1802, upon his return to France from Napoleon’s Egyptian campaign, Fourier was appointed prefect of the department of Isère. Despite heavy administrative responsibilities, Fourier found time to study heat diffusion. He was inspired by deep curiosity about Earth and such phenomena as the attenuation of seasonal temperature variations in Earth’s subsurface, oceanic and atmospheric circulation driven by solar heat, and the background temperature of deep space. […]

Thermal conductivity, appropriate for characterizing the internal conduction, was defined by Fourier as the quantity of heat per unit time passing through a unit cross-section divided by the temperature difference of two constant-temperature surfaces separated by unit distance […] Fourier presented his ideas in an unpublished 1807 paper submitted to the Institut de France.

Fourier was not satisfied with the 1807 work. It took him an additional three years to go beyond the discrete finite-difference description of flow between constant-temperature surfaces and to express heat flow across an infinitesimally thin surface segment in terms of the temperature gradient.

When Fourier presented his mathematical theory, the nature of heat was unknown […] Fourier considered mathematical laws governing the effects of heat to be independent of all hypotheses about the nature of heat. […] No method was available to measure flowing heat. Consequently, in order to demonstrate that his mathematical theory was physically credible, Fourier had to devise suitable experiments and methods to measure thermal conductivity.

It is not widely recognized that in his unpublished 1807 manuscript and in the prize essay he submitted to the Institut de France in 1811, Fourier provided results from transient and steady-state experiments and outlined methods to invert exponential data to estimate thermal conductivity. For some reason, he decided to restrict his 1822 masterpiece, The Analytical Theory of Heat, to mathematics and omit experimental results.”
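Fourier's definition of thermal conductivity, quoted above, is equivalent to the modern statement that the steady heat current through a uniform slab is H = kA(T_hot − T_cold)/L. A short sketch with illustrative numbers (the conductivity is roughly that of soft tissue; none of the values come from the article):

```python
# Fourier's definition in modern form: the heat current H (in watts) through
# a slab of cross-section A (m^2) and thickness L (m), whose faces are held
# at temperatures T_hot and T_cold (K), is H = k * A * (T_hot - T_cold) / L,
# where k is the thermal conductivity (W m^-1 K^-1).
# The numbers below are illustrative, not taken from the article.

def heat_current(k, A, L, T_hot, T_cold):
    """Steady-state conductive heat flow through a uniform slab, in watts."""
    return k * A * (T_hot - T_cold) / L

# k ~ 0.5 W/(m K) is of the order of soft tissue; a 1 cm slab, 1 cm^2 area,
# with a 10 K temperature difference across it:
H = heat_current(k=0.5, A=1.0e-4, L=0.01, T_hot=310.0, T_cold=300.0)
print(H)   # -> 0.05 (watts)
```

Note that k enters here exactly as Fourier defined it: heat per unit time, per unit cross-section, per unit temperature gradient.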
For more insight into Fourier’s life and times, see Keston’s article Joseph Fourier: Politician and Scientist. It begins
“The life of Baron Jean Baptiste Joseph Fourier (1768 - 1830) the mathematical physicist has to be seen in the context of the French Revolution and its reverberations. One might say his career followed the peaks and troughs of the political wave. He was in turns: a teacher; a secret policeman; a political prisoner; governor of Egypt; prefect of Isère and Rhône; friend of Napoleon; and secretary of the Académie des Sciences. His major work, The Analytic Theory of Heat, (Théorie analytique de la chaleur) changed the way scientists think about functions and successfully stated the equations governing heat transfer in solids. His life spanned the eruption and aftermath of the Revolution; Napoleon's rise to power, defeat and brief return (the so-called Hundred Days); and the Restoration of the Bourbon Kings.”

Friday, September 3, 2010

Jean Leonard Marie Poiseuille

Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology contains an analysis of the flow of a viscous fluid through a pipe. Russ Hobbie and I show that the fluid flow is proportional to the fourth power of the pipe radius. We then state that
“This relationship was determined experimentally in painstaking detail by a French physician, Jean Leonard Marie Poiseuille, in 1835. He wanted to understand the flow of blood through capillaries. His work and knowledge of blood circulation at that time have been described by Herrick (1942).”
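The fourth-power dependence Poiseuille found experimentally is captured by what is now written as Q = πΔP R⁴/(8ηL). A brief sketch with round illustrative numbers (the viscosity is near that of water or blood plasma; none of the values come from the text) shows how dramatic that dependence is:

```python
import math

# Poiseuille's law for steady laminar flow of a viscous fluid through a
# cylindrical pipe: Q = pi * dP * R**4 / (8 * eta * L), where dP is the
# pressure difference (Pa), R the pipe radius (m), eta the viscosity (Pa s),
# and L the pipe length (m).  The values below are illustrative only.

def poiseuille_flow(dP, R, eta, L):
    """Volume flow rate (m^3/s) driven by pressure difference dP."""
    return math.pi * dP * R**4 / (8 * eta * L)

Q1 = poiseuille_flow(dP=100.0, R=1.0e-3, eta=1.0e-3, L=0.1)
Q2 = poiseuille_flow(dP=100.0, R=0.5e-3, eta=1.0e-3, L=0.1)
print(Q2 / Q1)   # halving the radius reduces the flow sixteen-fold
```

The fourth power is why small changes in vessel radius have such a large effect on blood flow, which is presumably what drew the physician Poiseuille to the problem.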
The paper by Herrick appeared in my favorite journal, the American Journal of Physics (J. F. Herrick, Poiseuille’s observations on blood flow lead to a new law in hydrodynamics, Volume 10, pages 33–39, 1942.). The key paragraph in the paper is quoted below.
“The important role which the physical sciences have played in the progress of the biological sciences has eclipsed, more or less, the contributions which biologists have made to the physical sciences. Some of these contributions have become such an integral part of the physical sciences that their origin seems to have been forgotten. An outstanding example of such a contribution is that by Jean Leonard Marie Poiseuille (1799-1869). About 100 years ago Poiseuille brought a fundamental law to that division of physics known as hydrodynamics—which is a branch of rheology, according to more recent terminology. This law resulted indirectly from his observations on the capillary circulation of certain animals. Most physicists, chemists and mathematicians associate the name of Poiseuille with the phenomenon of viscosity because the cgs absolute unit for the viscosity coefficient has been named the poise in his honor. Few know the story leading up to the discovery of the law which bears his name. This law had more fundamental significance than Poiseuille himself realized. It established an excellent experimental method for the measurement of viscosity coefficients of liquids. The underlying principle of this method is in use today. Since Poiseuille’s law was based entirely on experiment, it was purely empirical. However, the law can be obtained theoretically. Those who are familiar with only the theoretical development are generally surprised to learn that the law was originally determined experimentally—and still more surprised to know that Poiseuille got his idea from studying the character of the flow of blood in the capillaries of certain animals.”
More about Poiseuille and his law can be found in a paper by Pfitzner (Poiseuille and his law, Anaesthesia, Volume 31, Pages 273-275, 1976)
“Jean Leonard Marie Poiseuille (1791-1869) was born and died in Paris. Remarkably little seems to be known about his life. He studied medicine for a considerable time and submitted a thesis for his Doctorate in 1828 (aged 30-31 years). Where he carried out his early experimental studies, and how they were financed, is obscure.

His published work includes […] “experimental studies on movement of liquids in tubes of very small diameter” (his most famous paper, completed in 1842 and published in 1846). For his work “On the causes of the movement of the blood in the capillaries” he was awarded the Paris Academie des Sciences prize for experimental physiology. In later life he became a foundation member of the Academie de Medecine of Paris.”
My biggest question about Poiseuille is the pronunciation of his name. I gather that it is pronounced pwah-zweez. The poiseuille has been proposed as a unit for the pascal second (or, equivalently, the newton second per square meter), but it is not in common use.