Friday, August 8, 2014

On Size and Life

I have recently been reading the fascinating book On Size and Life, by Thomas McMahon and John Tyler Bonner (Scientific American Library, 1983). In their preface, McMahon and Bonner write
This book is about the observable effects of size on animals and plants, seen and evaluated using the tools of science. It will come as no surprise that among those tools are microscopes and cameras. Ever since Antoni Van Leeuwenhoek first observed microorganisms (he called them “animalcules”) in a drop of water from Lake Berkel, the reality of miniature life has expanded our concepts of what all life could possibly be. Some other tools we shall use—equally important ones—are mathematical abstractions, including a type of relation we shall call an allometric formula. It turns out that allometric formulas reveal certain beautiful regularities in nature, describing a pattern in the comparisons of animals as different in size as the shrew and the whale, and this can be as delightful in its own way as the view through a microscope.
Their first chapter is similar to Sec. 1.1 on Distances and Sizes in the 4th edition of Intermediate Physics for Medicine and Biology, except it contains much more detail and is beautifully illustrated. They focus on larger animals; if you want to see a version of our Figs. 1.1 and 1.2 but with a scale bar of about 10 meters, take a look at McMahon and Bonner’s drawing of “the biggest living things” on Page 2 (taken from the 1932 book The Science of Life by the all-star team of H. G. Wells, J. S. Huxley, and G. P. Wells).

Their Chapter 2 (Proportions and Size) discusses allometric formulas and their representation in log-log plots, similar to, but more extensive than, Russ Hobbie’s and my Section 2.10 (Log-Log Plots, Power Laws, and Scaling). McMahon and Bonner present in-depth analyses of biomechanical explanations for many allometric relationships. For instance, below is their description of “elastic similarity” from their Chapter 4 (The Biology of Dimensions).
Let us now consider a new scaling rule as an alternative to isometry (geometric similarity [all length scales increase together, leading to a change in size but no change in shape]), which was the main rule employed for discussing the theory of models in Chapter 3. This new scaling theory, which we shall call elastic similarity, uses two length scales instead of one. Longitudinal lengths, proportional to the longitudinal length scale ℓ, will be measured along the axes of the long bones and generally along the direction in which muscle tensions act. The transverse length scale, d, will be defined at right angles to ℓ, so that bone and muscle diameters will be proportional to d…When making the transformations of shape from a small animal to a large one, all longitudinal lengths (or simply “lengths”) will be multiplied by the same factor that multiplies the basic length, ℓ, and all diameters will be multiplied by the factor that multiplies the basic diameter, d. Furthermore, there will be a rule connecting ℓ and d: d ∝ ℓ^(3/2).
They then show that elastic similarity can be used to derive Kleiber’s law (metabolic rate is proportional to mass to the ¾ power), and they justify elastic similarity using a biomechanical analysis of the buckling of a leg. I must admit I am a bit skeptical that the ultimate source of Kleiber’s law is biomechanics. In IPMB, Russ and I review more recent work suggesting that Kleiber’s law arises from general considerations of models that supply nutrients through branching networks, an explanation that sounds more plausible to me. Nevertheless, McMahon and Bonner’s ideas are interesting, and they do suggest that biomechanics can sometimes play a significant role in scaling.
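For readers who want to see where the ¾ comes from, here is the scaling argument in a nutshell (my paraphrase, not McMahon and Bonner’s exact derivation). With elastic similarity, d ∝ ℓ^(3/2), so body mass scales as M ∝ d²ℓ ∝ ℓ^4. McMahon argues that metabolic rate is limited by the power the muscles can deliver, which scales with muscle cross-sectional area, so B ∝ d² ∝ ℓ^3. Eliminating ℓ gives B ∝ M^(3/4): Kleiber’s law.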

Their Chapter 5 (On Being Large) presents a succession of intriguing allometric relationships related to the motion of large animals (running, flying, swimming, etc.). Let me give you one example: large animals have a harder time running uphill than smaller animals. McMahon and Bonner present a plot of oxygen consumption per unit mass versus running speed, and find that for a 30 g mouse there is almost no difference between running uphill and downhill, but for a 17.5 kg chimpanzee running uphill requires about twice as much oxygen as running downhill. In Chapter 6 (On Being Small) they examine what life is like for little organisms, and analyze some of the same issues Edward Purcell discusses in “Life at Low Reynolds Number.”
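The usual explanation (my gloss, not the book’s words): the extra metabolic power needed to climb at a given vertical speed is just the rate of doing work against gravity, which per unit mass is the same for a mouse as for a chimpanzee. But the cost of level running per unit mass is far higher for a small animal, so the climbing surcharge is nearly lost in the noise for the mouse while it roughly doubles the bill for the chimp.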

Overall, I enjoyed the book very much. I have a slight preference for Knut Schmidt-Nielsen’s book Scaling: Why Is Animal Size So Important?, although I must admit that On Size and Life is the better illustrated of the two.

Author Thomas McMahon was a major figure in biomechanics. He was a Harvard professor particularly known for his study of animal motion, and he even wrote a paper about “Groucho Running”: running with bent knees like Groucho Marx. Russ and I cite his paper “Size and Shape in Biology” (Science, Volume 179, Pages 1201–1204, 1973) in IPMB. I understand that his book Muscles, Reflexes, and Locomotion is also excellent, although more technical; I have not read it. Below is the abstract from the article “Thomas McMahon: A Dedication in Memoriam” by Robert Howe and Richard Kronauer (Annual Review of Biomedical Engineering, Volume 3, Pages xv–xxxix, 2001).
Thomas A. McMahon (1943–1999) was a pioneer in the field of biomechanics. He made primary contributions to our understanding of terrestrial locomotion, allometry and scaling, cardiac assist devices, orthopedic biomechanics, and a number of other areas. His work was frequently characterized by the use of simple mathematical models to explain seemingly complex phenomena. He also validated these models through creative experimentation. McMahon was a successful inventor and also published three well-received novels. He was raised in Lexington, Massachusetts, attended Cornell University as an undergraduate, and earned a PhD at MIT. From 1970 until his death, he was a member of the faculty of Harvard University, where he taught biomedical engineering. He is fondly remembered as a warm and gentle colleague and an exemplary mentor to his students.
His New York Times obituary can be found here.

Friday, August 1, 2014

Interview with Russ Hobbie in The Biological Physicist

In 2006, just as Springer was about to publish the 4th edition of Intermediate Physics for Medicine and Biology, an interview with Russ Hobbie appeared in The Biological Physicist, a newsletter of the Division of Biological Physics of the American Physical Society. Below are some excerpts from the interview. You can read the entire thing in the December 2006 newsletter.
THE BIOLOGICAL PHYSICIST: Are there any stories you have about particular physics examples you have used in the book or in the classroom that have really awakened the interest of medical students to the importance of physics?

Russ Hobbie: I cannot speak to what has triggered a response in different students. But there is one amusing story. I was working with a pediatric cardiologist, Jim Moeller, to understand the electrocardiogram. I finally wrote up a 5-page paper explaining it with an electrostatic model. When I showed what I thought was simplicity itself to Jim, he could not understand a word of it. But he finally agreed to show it to some second-year medical students. Their response: “Thank goodness it is rational.” I think this shows the gap between our premed course and what the student needs in medical school and also the fact that the physics we love so dearly may be helpful to a medical student during the basic science years but is not so helpful later on. It also became clear to me that what we teach about x-rays and radioactivity is the only exposure to those topics that physicians will receive, unless they go into radiology!

THE BIOLOGICAL PHYSICIST: How has the book changed over its four editions? Has the way you have presented material evolved over the years?

Russ Hobbie: It is amusing to compare my explanation of the electrocardiogram in the four editions. In the first, I was thinking in terms of an electrostatic model. By the second edition, I had realized that a current dipole model was much better and had been in the literature for a long time. This has been improved even more in the 3rd and 4th editions. I am a slow learner! But as an excuse, I was confused for a long time because the physiologists called the current dipole moment “the electric force vector.”

As I have added material (such as non-linear systems and chaos) it has been necessary to remove material. For example, the first edition had 11 pages and 3 color plates on polarized light and birefringence. This was gone to save money and to make room for biomagnetism in the second edition. I wish it was still there. I did not get around to discussing acoustics, hearing, and ultrasound until the fourth edition.

THE BIOLOGICAL PHYSICIST: How would you assess the impact of the book on the field of interdisciplinary research, and on interdisciplinary education? Do you have any information on the history of how quickly it was adopted by other departments, and how it is used in other interdisciplinary programs?

Russ Hobbie: I have always hoped that a physicist without the biological background could teach from the book, and the solutions manual was written in the hope that students could use it for an independent study course. (At the request of instructors, the solutions manual is now an Adobe Acrobat file which is password-protected. Instructors can ask me or Brad for the password and give it to students if they wish.)

Many physicists are more interested in molecular biophysics than physiology- and radiology-oriented physics and find that other books better meet their needs. However, there seems to be a growing interest in the book among biomedical engineers. One teaching technique that was very successful in the early years of the course had to be abandoned while I was serving as Associate Dean, because it took too much of my time. I required the students to find an article in the research literature that interested them and then to write a paper filling in all the missing steps. They could come to me for help as often as they needed. Then, three days after they submitted the paper, I would give them an oral exam on anything that I suspected they did not fully understand. They said this was a valuable experience; my office was packed with students the week before the papers were due; and I learned a lot myself.

THE BIOLOGICAL PHYSICIST: Have you found that there is a “cultural divide” between physicists and MDs? Some people in the Division of Biological Physics describe having difficulty communicating with medical researchers. Do you ever find that?

Russ Hobbie: Absolutely. One friend, Robert Tucker, got a PhD in biophysics with Otto Schmitt and then went to medical school. Bob said that medical school destroyed his ability to reason. This was probably an extreme statement, but it does capture the “drink from a fire hose” character of medical school. On the other hand, if I am having a myocardial infarct, I would prefer that the clinician taking care of me not start with Coulomb’s law!

Friday, July 25, 2014

The Eighteenth Elephant

I know that there are very few people out there interested in reading a blog about physics applied to medicine and biology. But those few (those wonderful few) might want to know of ANOTHER blog about physics applied to medicine and biology. It is called The Eighteenth Elephant. The blog is written by Professor Raghuveer Parthasarathy at the University of Oregon. He is a biological physicist, with an interest in teaching “The Physics of Life” to non-science majors. He also leads a research lab that studies many biological physics topics, such as imaging and the mechanical properties of membranes. If you like my blog about the 4th edition of Intermediate Physics for Medicine and Biology, you will also like The Eighteenth Elephant. Even if you don’t enjoy my blog, you still might like Parthasarathy’s blog (he doesn’t constantly bombard you with links to the amazon.com page where you can purchase his book).

One of my favorite entries from The Eighteenth Elephant was from last April. I’ve talked about animal scaling of bones in this blog before. A bone must support an animal’s weight (proportional to the animal’s volume), its strength increases with its cross-sectional area, and its length generally increases with the linear size of an animal. Therefore, large animals need bones that are thicker relative to their length, in order to support their weight. I demonstrate this visually by showing my class pictures of bones from different animals. Parthasarathy doesn’t mess around with pictures; he brings a dog femur and an elephant femur to class! (See the picture here; it’s enormous.) How much better than showing pictures! Now, I just need to find my own elephant femur….
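To put a number on that argument: if weight grows as the cube of an animal’s linear size L while a bone’s strength grows only as the square of its diameter d, then keeping the stress in the bone roughly constant requires d² ∝ L³, or d ∝ L^(3/2), so the diameter-to-length ratio grows like the square root of L. That is why the elephant femur looks so much stouter than the dog’s.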

Be sure to read the delightful story about 18 elephants that gives the blog its name.

Friday, July 18, 2014

Hexagons and Cellular Excitable Media

Two of my favorite homework problems in the 4th edition of Intermediate Physics for Medicine and Biology are Problems 39 and 40 in Chapter 10. Russ Hobbie and I ask the student to analyze a cellular excitable medium (often called a cellular automaton), which provides much insight into propagation of excitation in cardiac tissue. I’ve discussed these problems before in this blog. I’m always amazed how well you can understand cardiac arrhythmias using such a simple model that you could teach it to third graders.

When Time Breaks Down: The Three-Dimensional Dynamics of Electrochemical Waves and Cardiac Arrhythmias, by Art Winfree.
I learned about cellular excitable media from Art Winfree’s book When Time Breaks Down. To the best of my knowledge, the idea was first introduced by James Greenberg and Stuart Hastings in their paper “Spatial Patterns for Discrete Models of Diffusion in Excitable Media” (SIAM Journal on Applied Mathematics, Volume 34, pages 515–523, 1978), although they performed their simulations on a rectangular grid rather than on a hexagonal grid as in the homework problems from IPMB. Winfree, with his son Erik Winfree and Herbert Seifert, extended the model to three dimensions, and found exotic “organizing centers” such as a “linked pair of twisted scroll rings” (“Organizing Centers in a Cellular Excitable Medium,” Physica D: Nonlinear Phenomena, Volume 17, Pages 109–115, 1985).
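If you would rather play with such a model on a computer than with pencil and hexagon paper, here is a minimal sketch (my own, written for this post; it is not the hexagonal automaton of IPMB’s homework problems, and it is not taken from Greenberg and Hastings’ paper) of a Greenberg–Hastings-style automaton on a square grid with three states: resting, excited, and refractory.

# A minimal Greenberg-Hastings-style cellular excitable medium on a square grid
# with nearest-neighbor (von Neumann) coupling. States: 0 = resting,
# 1 = excited, 2 = refractory.
import numpy as np

def step(grid):
    """Advance the automaton by one time step."""
    new = np.zeros_like(grid)
    excited = (grid == 1)
    # Count excited nearest neighbors (no wraparound at the edges).
    neighbors = np.zeros_like(grid)
    neighbors[1:, :]  += excited[:-1, :]
    neighbors[:-1, :] += excited[1:, :]
    neighbors[:, 1:]  += excited[:, :-1]
    neighbors[:, :-1] += excited[:, 1:]
    new[(grid == 0) & (neighbors > 0)] = 1   # resting cells with an excited neighbor fire
    new[grid == 1] = 2                       # excited cells become refractory
    new[grid == 2] = 0                       # refractory cells recover
    return new

# Launch a wave from a single excited cell in the middle of a 21-by-21 sheet.
grid = np.zeros((21, 21), dtype=int)
grid[10, 10] = 1
for t in range(5):
    grid = step(grid)
print(grid)   # an expanding ring of excitation trailed by a band of refractory cells

A single excited cell launches an expanding ring of excitation trailed by a refractory band; seed a broken wave front next to a refractory region and you get the spiral (reentrant) waves that make the model relevant to cardiac arrhythmias.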

Predrawn hexagon grids to use with homework problems about cellular automata.
I imagine that students may have a difficult time with our homework problems, not because the problems themselves are difficult, but because they don’t have easy access to predrawn hexagon grids. It would be like trying to play chess without a chessboard. When I assign these problems, I provide my students with pages of hexagon grids, so they can focus on the physics. I thought my blog readers might also find this useful, so now you can find a page of predrawn hexagons on the book website. Or, if you prefer, you can find hexagon graph paper for free online here.

In the previous blog entry I mention a paper I published in the Online Journal of Cardiology in which I extended the cellular excitable medium to account for the virtual electrodes created when stimulating cardiac tissue. This change allowed the model to predict quatrefoil reentry. I concluded the paper by writing
This extremely simple cellular excitable medium—which is nothing more than a toy model, stripped down to contain only the essential features—can, with one simple modification for strong stimuli, predict many interesting and important phenomena. Much of what we have learned about virtual electrodes and deexcitation is predicted correctly by the model (Efimov et al., 2000; Trayanova, 2001). I am astounded that this simple model can reproduce the complex results obtained by Lindblom et al. (2000). The model provides valuable insight into the essential mechanisms of electrical stimulation without hiding the important features behind distracting details.
“Virtual Electrodes Made Simple: A Cellular Excitable Medium Modified for Strong Electrical Stimuli.”
Unfortunately, the Online Journal of Cardiology no longer exists, so the link in my previous blog entry does not work. You can download a copy of this paper at my website. It contains everything except the animations that accompanied the figures in the original journal article. If you want to see the animations, you can look at the article archived here.

Friday, July 11, 2014

Naked to the Bone

Naked to the Bone: Medical Imaging in the Twentieth Century, by Bettyann Kevles.
I recently finished reading Bettyann Kevles’ excellent book Naked to the Bone: Medical Imaging in the Twentieth Century. This fine history covers medical imaging in much the same way that Kirk Jeffrey’s Machines in Our Hearts analyzes the development of pacemakers and defibrillators. Both books are outstanding examples of insightful writing about the history of modern technology. In Naked to the Bone, Kevles examines many topics that Russ Hobbie and I describe in the 4th edition of Intermediate Physics for Medicine and Biology. In fact, Naked to the Bone is a valuable resource for readers interested in the history of medical physics, and serves as a great supplement to the last eight chapters of IPMB. In her introduction, Kevles writes
Naked to the Bone tells the history of medical imaging from Roentgen’s discovery [of x-rays] in 1895 to the present, as imaging affected our entire culture. While this book traces the technological developments and their consequences in medicine, it also explores the impact that this new way of seeing has had upon society at large. Citizens of the twentieth century often sensed that their world differed in kind from what came before, and that science and technology are responsible for that difference…
The book falls naturally into two parts, corresponding roughly in time with the two halves of the century. The first part traces the history of the single technology of X-ray imaging; the second, the array of new competing technologies that arose after World War II when television and computers began to contribute to medical imaging.

In the first part, the emphasis is on the refinement of the technology of the X-ray and the immediate consequences of its discovery. As the machines improved, physicians gradually pushed back the veil in front of the internal organs, revealing first the living skeleton, then the stomach, intestines, gall bladder, lungs, heart, and brain….

Part II deals with the second stage of the imaging revolution. Thomas Hughes suggests in American Genesis that the convergence of two new technologies can cause a revolution. This is precisely what happened when X-rays met computers and produced CT, MRI, PET, and ultrasound. Each of these scanners reconstructs cross-sectional slices of the interior of the body, or creates three-dimensional volume images.
Kevles reviews the development of X-ray imaging in detail. Its use became ubiquitous in modern society. In Problem 8 of Chapter 16 in IPMB, Russ and I analyze the fluoroscopy units used in shoe stores in the early twentieth century.
During the 1930s and 1940s it was popular to have an x-ray fluoroscope unit in shoe stores to show children and their parents that shoes were properly fit. These marvellous units were operated by people who had no concept of radiation safety and aimed the beam of x rays upward through the feet and right at the reproductive organs of the children!
Kevles describes the same thing (one can hardly avoid sarcasm when describing these devices).
All over the world, people who grew up between World War I and the 1960s recall the joy of standing inside the [“Foot-O-Scope” fluoroscope x-ray] machine, pressing the appropriate button (usually labeled “Man,” “Woman,” and “Child” although the X-ray dosage was identical) and staring at their wriggling toe bones. The Foot-O-Scope signaled the acceptance of X-ray machines in everyday life. Present in local shoe stores everywhere, they suggested that X-rays were safe and cheap enough so that just about anyone who shopped for shoes could see beyond the skin barrier.
Kevles’ book explores in detail the many contributors to the development of computed tomography. Russ and I just hint at this complex history in Chapter 16 of IPMB (Medical Use of X Rays).
The Nobel Prize acceptance speeches [Cormack (1980); Hounsfield (1980)] are interesting to read. A neurologist, William Oldendorf, had been working independently on the problem but did not share in the Nobel Prize [See DiChiro and Brooks (1979), and Broad (1980)].
Before reading Naked to the Bone, I didn’t realize that EMI, the British music recording company associated with the Beatles, was also the company that Hounsfield worked for when he developed computed tomography, and that sold the first CT scanner in 1972, starting the tomography revolution.

The invention of magnetic resonance imaging (Chapter 18 in IPMB) was similarly complicated and controversial. Kevles tells the story of the contributions of Raymond Damadian, Paul Lauterbur, Peter Mansfield, and others (Lauterbur and Mansfield shared the Nobel Prize, but Damadian was left out). She then describes the development of positron emission tomography (discussed by Russ and me in our Chapter 17) and ultrasound imaging (our Chapter 13).

One disadvantage of Naked to the Bone is that it was written nearly 20 years ago, and a lot has happened in medical imaging in the last two decades. I would love to read an updated twentieth-anniversary edition. Particularly interesting to me were some of the predictions made in the epilogue.
Looking ahead, it is easier to imagine an exhibition in the Imaging Museum of the future than to foresee machines in future imaging centers. Radiographs on glass, the original X-ray technology, have already disappeared. Film is likely to follow soon, replaced by versions that have been digitized for easy storage and electronic telecommunication. The use of CT will probably diminish but not disappear; its speed guarantees it a place in emergency medicine. MRI has a clear path before it, for while machines are costly, they last a long time and the upgrades they need are largely a matter of software programs. Ultrasound, the cheapest method of all, offers excellent images of organs that defy other imaging approaches and will probably occupy more space in the future. PET is a different story: full of promise for over a generation, and excellent for specialized procedures, it continues to run into regulatory obstacles that arise like dragon’s teeth even as others are overcome.
She then offers this insight about what might have happened if CT had been invented just after MRI, instead of just before it.
Looking back at the competition that accompanied the linkage of computers with imaging technologies, the timing suggests that, but for its few years’ head start, CT could never have competed with MRI. But it is hard to see how, without CT as a precedent, the chemical laboratory’s small-scale nuclear magnetic resonance technology would ever, on its own, have become body-imaging MRI.
Kevles is particularly interested in how these imaging technologies impacted modern art. For me, having little background in art, these chapters were not my favorites. But perhaps Kevles liked them best, because she concludes her epilogue on this topic. As usual, I will give her the last word:
Artists will most likely continue to extrapolate and elaborate on the remarkable technologies already available. For as a civilization, perhaps even as a species, we like to look, like to look through, and like to look at and through ourselves. In black and white and in color, in two-dimensional slices or in three-dimensional volumes, in frozen instants or moving sequences, the X-ray and its daughter technologies seem to satisfy an innate curiosity to see ourselves naked to the bone.

Friday, July 4, 2014

Our Job is to Find Stupid and Get Rid of It

This week I have been on vacation, including a trip to Kansas City to see relatives and a visit to the Grand Canyon. So, I don’t have much time for updating this blog. My work on the 5th edition of Intermediate Physics for Medicine and Biology has slowed to a crawl, and I need to get back to it next week.

This week I will simply suggest you watch and listen to the inspiring Boston University 2014 commencement address by my friend Kevin Kit Parker.


My favorite quote from the address is the title of this blog entry. Parker is with the Harvard School of Engineering and Applied Sciences. Academically speaking, he and I are brothers; we share a common PhD advisor, John Wikswo of Vanderbilt University. Parker obtained his PhD about ten years after I received mine, and I met him when I was on the faculty at Vanderbilt for a few years in the late 1990s.

Parker is known both for his science, and for being a scientist/soldier. You can learn more about his experiences in an interview that aired on the TV show 60 Minutes.

Friday, June 27, 2014

Microscopes

The microscope is one of the most widely used instruments in science. Microscopy is a huge subject, and I am definitely not an expert. Russ Hobbie and I talk about the microscope only briefly in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 14 (Atoms and Light) we give a series of homework problems about lenses. Problem 43 considers the case of an object placed just outside the focal point of a converging lens. The resulting image is real, inverted, and magnified (a slide projector, for those of you old enough to remember such things). In Problem 44, the object is just inside the focal point of the lens. The image is virtual, upright, and magnified (a magnifying glass). Then in Problem 45 we put these two lenses together, first a slide projector casting an intermediate image, then a magnifying glass to view that image: a compound microscope. Our discussion is useful, but very simple.
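For concreteness, here is a little thin-lens calculation of that two-lens microscope (the focal lengths and distances below are my own illustrative numbers, not values taken from the homework problems):

# A minimal sketch of two-lens (compound microscope) imaging using the thin-lens
# equation 1/s + 1/s' = 1/f. All numbers are illustrative assumptions.

def image_distance(f, s):
    """Return the image distance s' from the thin-lens equation."""
    return 1.0 / (1.0 / f - 1.0 / s)

# Objective: object just outside its focal point -> real, inverted, magnified image.
f_obj = 5.0    # focal length (mm), assumed
s_obj = 5.5    # object distance (mm), just outside f_obj
sp_obj = image_distance(f_obj, s_obj)
m_obj = -sp_obj / s_obj        # lateral magnification (negative means inverted)

# Eyepiece: intermediate image just inside its focal point -> virtual, upright, magnified.
f_eye = 25.0   # focal length (mm), assumed
s_eye = 24.0   # the intermediate image sits just inside f_eye
sp_eye = image_distance(f_eye, s_eye)   # comes out negative: a virtual image
m_eye = -sp_eye / s_eye

print("objective: image at %.0f mm, magnification %.1f" % (sp_obj, m_obj))
print("eyepiece:  image at %.0f mm, magnification %.1f" % (sp_eye, m_eye))
print("overall magnification: %.0f" % (m_obj * m_eye))

With these numbers the objective magnifies by −10 and the eyepiece by 25, for an overall magnification of −250; the signs just keep track of the image inversion.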

Nowadays, microscopes are extremely complicated, and can do all sorts of wonderful things. Our simple example is nearly obsolete, because almost no one looks through the second lens (the eyepiece) to view the image anymore. Rather, the image produced by the first lens (the objective) is recorded digitally, and one looks at it on a computer screen. I could spend the rest of this blog entry describing the complexities of microscopes, but I want to go in another direction. Can a student build a simple yet modern microscope?

They can, and it makes a marvelous upper-level physics laboratory project. The proof is given by Jennifer Ross of the University of Massachusetts Amherst. In a preprint at her website, Ross describes a microscope project for undergraduates. The abstract reads:
Optics is an important subfield of physics required for instrument design and used in a variety of other disciplines, including materials science, physics, and life sciences such as developmental biology and cell biology. It is important to educate students from a variety of disciplines and backgrounds in the basics of optics in order to train the next generation of interdisciplinary researchers and instrumentalists who will push the boundaries of discovery. In this paper, we present an experimental system developed to teach students in the basics of geometric optics, including ray and wave optics. The students learn these concepts through designing, building, and testing a home-built light microscope made from component parts. We describe the experimental equipment and basic measurements students can perform to learn principles, technique, accuracy, and resolution of measurement. Students find the magnification and test the resolution of the microscope system they build. The system is open and versatile to allow advanced building projects, such as epi-fluorescence, total internal reflection fluorescence, and optical trapping. We have used this equipment in an optics course, an advanced laboratory course, and graduate-level training modules.
This fascinating paper then goes on to describe many aspects of microscope design.
The light source was a white light emitting diode (LED)… We chose inexpensive but small and powerful CMOS cameras to capture images with a USB link to a student’s laptop….The condenser designs of students are the most variable and interesting part of the microscope design. Students in prior years have used one, two, or three lenses to create evenly illuminated light on the sample plane…After creating the condenser, students next have to use an objective to create an image onto the CMOS camera chip.
The equipment is not terribly expensive compared to buying a microscope, but it’s not cheap: each microscope costs about $3000 to build, which means for a team of three students the cost is $1000 per person. But the learning is tremendous, and Ross suggests that you can scavenge used parts to reduce the cost.

But perhaps even this student-built $3000 microscope is too complicated and expensive for you. Can we go simpler and cheaper? Yes! Consider “foldscope.” The website of foldscope’s inventors says (my italics)
We are a research team at PrakashLab at Stanford University, focused on democratizing science by developing scientific tools that can scale up to match problems in global health and science education. Here we describe Foldscope, a new approach for mass manufacturing of optical microscopes that are printed-and-folded from a single flat sheet of paper, akin to Origami….Although it costs less than a dollar in parts, it can provide over 2,000X magnification with sub-micron resolution (800 nm), weighs less than two nickels (8.8 g), is small enough to fit in a pocket (70 × 20 × 2 mm3), requires no external power, and can survive being dropped from a 3-story building or stepped on by a person. Its minimalistic, scalable design is inherently application-specific instead of general-purpose gearing towards applications in global health, field based citizen science and K12-science education.
Details are described in a preprint available at http://arxiv.org/abs/1403.1211. Also, listen to Manu Prakash give a TED talk about foldscope. The goal is to provide “a microscope for every child.” I think Prakash and his team mean EVERY child (as in every single child in the whole wide world).

Friday, June 20, 2014

The Airy Disk

I hate to find errors in the 4th edition of Intermediate Physics for Medicine and Biology. When we do find any, Russ Hobbie and I let our readers know through the errata published on the book’s website. Last week, I found another error, and it’s a particularly annoying one. First, let me tell you the error, and then I’ll fill in the backstory.

In the errata, you will now find this entry:
Page 338: In Chapter 12, Problem 10. The final equation, a Bessel function integral, should be
∫ u′ J0(u′) du′ = u J1(u), where the integral runs from 0 to u.
Error found 6-10-14.
In the 4th edition, we left out the leading factor of “u” on the right-hand side. Why does this bother me so much? In part, because Problem 10 is about a famous and important calculation. Chapter 12 is about imaging, and Problem 10 asks the reader to calculate the two-dimensional Fourier transform of the “top hat” function, equal to 1 for r less than a (a circular disk) and zero otherwise. This Fourier transform is, to within a constant factor, equal to J1(u)/u, where J1 is a Bessel function and u = ka, with k being the magnitude of the spatial frequency. This function is known as the “Airy pattern” or “Airy disk.” The picture below shows what the Airy disk looks like when plotted versus spatial frequencies kx and ky:

The Airy disk.

A picture of the square of this function is shown in Fig. 12.1 of IPMB. If you make the radius a smaller, so the “top hat” is narrower, then in frequency space the Airy disk spreads out. Conversely, if you make a larger, so the “top hat” is wider, then in frequency space the Airy disk is more localized. The Bessel function oscillates, passing through zero many times. Qualitatively, J1(u)/u looks similar to the more familiar sinc function, sin(ka)/ka. (The sinc function appears in the Fourier transform of a rectangular “top hat” function).
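If you want to check the result numerically, here is a short sketch (my own, not part of the homework problem) that evaluates the two-dimensional Fourier transform of the top hat as a Hankel transform and compares it with the Airy form 2πa J1(ka)/k, which is proportional to J1(u)/u:

# Numerical check that the 2D Fourier transform of a circular "top hat" of
# radius a reduces to 2*pi*a*J1(k*a)/k, i.e. the Airy pattern (proportional to
# J1(u)/u with u = k*a). A sketch, not the IPMB solution.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

a = 1.0   # disk radius (arbitrary units)

def top_hat_transform(k):
    """F(k) = 2*pi * integral from 0 to a of J0(k r) r dr, by quadrature."""
    value, _ = quad(lambda r: j0(k * r) * r, 0.0, a)
    return 2.0 * np.pi * value

for k in [0.5, 2.0, 5.0, 10.0]:
    numerical = top_hat_transform(k)
    analytic = 2.0 * np.pi * a * j1(k * a) / k
    print("k = %5.1f   numerical = %9.5f   analytic = %9.5f" % (k, numerical, analytic))

The agreement between the quadrature and the analytic expression is just the corrected Bessel integral from the errata at work.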

The Airy disk plays a particularly important role in diffraction, a topic only marginally discussed in IPMB. Interestingly, diffraction isn’t important enough in our book even to make the index. We do mention it briefly in Chapter 13
One property of waves is that diffraction limits our ability to produce an image. Only objects larger than or approximately equal to the wavelength can be imaged effectively. This property is what limits light microscopes (using electromagnetic waves to form an image) to resolutions equal to about the wavelength of visible light, 500 nm.
We don’t talk at all about Fourier optics in IPMB. When light passes through an aperture, the image formed by Fraunhofer diffraction is the Fourier transform of the aperture function. So, for instance, when light passes through the objective lens of a microscope (or some other aperture in the optical path), the aperture function is the top hat function: all the light passes through at radii less than the radius of the lens, and no light passes through at larger radii. So the image formed by the lens of a point object (to the extent that the assumptions underlying Fraunhofer diffraction apply) is the Airy disk. Instead of a point image, you get a little blur.

Suppose you are trying to image two point objects. After diffraction, the image is two Airy disks. Can you resolve them as two separate objects? It depends on the extent of the overlap of the little blurs. Typically one uses the Rayleigh criterion to answer this question. If the two Airy disks are separated by at least the distance from the center of one Airy disk to its first zero, then the two objects are considered resolved. This is, admittedly, an arbitrary definition, but is entirely reasonable and provides a quantitative meaning to the vague term “resolved.” Thus, the imaging resolution of a microscope is determined by the zeros of the J1 Bessel function, which I find pretty neat. (I love Bessel functions).
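One detail worth recording (it is not part of our homework problem): the first zero of J1(u) falls at u ≈ 3.83, so for a circular aperture of diameter D the Rayleigh criterion takes the familiar form sin θ ≈ 1.22 λ/D, the factor of 1.22 being 3.83/π.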

So, you see, when I realized our homework problem had a typo and it meant the student would calculate the Airy disk incorrectly, my heart sank. To any students who got fooled by this problem, I apologize. Mea culpa. It makes me all the more determined to keep errors out of the upcoming 5th edition, which Russ and I are working on feverishly.

On the lighter side, when I run into scientists I am not familiar with, I often look them up in Asimov’s Biographical Encyclopedia of Science and Technology. When I looked up George Biddell Airy (1801–1892), Astronomer Royal of the Greenwich Observatory, I was shocked. Asimov writes “he was a conceited, envious, small-minded man and ran the observatory like a petty tyrant.” Oh Myyy!

Friday, June 13, 2014

Physics Research & Education: The Complex Intersection of Biology and Physics

This morning, I am heading home after a productive week at a Gordon Research Conference about “Physics Research and Education: The Complex Intersection of Biology and Physics.” I wish I could tell you more about it, but Gordon Conferences have this policy…
To encourage open communication, each member of a Conference agrees that any information presented at a Gordon Research Conference, whether in a formal talk, poster session, or discussion, is a private communication from the individual making the contribution and is presented with the restriction that such information is not for public use….
So, there is little I can say, other than to point you to the meeting schedule published on the GRC website. I suspect that future blog entries will be influenced by what I learned this week, but I will only write about items that have also been published elsewhere.

 I can say a bit about Gordon Conferences in general. The GRC website states
The Gordon Research Conferences were initiated by Dr. Neil E. Gordon, of the Johns Hopkins University, who recognized in the late 1920s the difficulty in establishing good, direct communication between scientists, whether working in the same subject area or in interdisciplinary research. The Gordon Research Conferences promote discussions and the free exchange of ideas at the research frontiers of the biological, chemical and physical sciences. Scientists with common professional interests come together for a full week of intense discussion and examination of the most advanced aspects of their field. These Conferences provide a valuable means of disseminating information and ideas in a way that cannot be achieved through the usual channels of communication—publications and presentations at large scientific meetings.
Before this, the only Gordon Conference I ever attended was one at which I was the trailing spouse. My wife studied the interaction of lasers with tissue in graduate school, and she attended a Gordon Conference on that topic in the 1980s; I tagged along. I don’t remember that conference being as intense as this one, but maybe that’s because I’m getting older.

The conference was at Mount Holyoke College, a small liberal arts college in South Hadley, Massachusetts, about 90 minutes west of Boston. It is a lovely venue, and we were treated well. I hadn’t lived in a dormitory since college, but I managed to get used to it.

For those of you interested in education at the intersection of physics and biology—a topic of interest for readers of the 4th edition of Intermediate Physics for Medicine and Biology—I suggest you take a look at the recent special issue of the American Journal of Physics about “Research and Education at the Crossroads of Biology and Physics,” discussed in this blog before. In addition, see the website set up based on the “Conference on Introductory Physics for the Life Sciences,” held March 14–16, 2014 in Arlington, Virginia. I’ve also discussed the movement to improve introductory physics classes for students in the life sciences previously in this blog here, here, here, and here.

Now, I need to run so I can catch my plane….

Friday, June 6, 2014

Plant Physics

Perhaps the 4th edition of Intermediate Physics for Medicine and Biology should have a different title. It really should be Intermediate Physics for Medicine and Zoology. Russ Hobbie and I talk a lot about the physics of animals, but not much about plants; there is little botany in our book. That is not completely true: Homework Problem 34 in Chapter 1 (Mechanics) analyzes the ascent of sap in trees, and we briefly mention photosynthesis in Chapter 3 (Systems of Many Particles). I suppose our discussion of Robert Brown’s observation of the random motion of pollen particles counts as botany, but just barely. Chapter 8 (Biomagnetism) is surprisingly rich in plant examples, with both magnetotaxis and biomagnetic signals from algae. But on the whole, our book talks about the physics of animals, and especially humans. I mean, really, who cares about plants?

Plant Physics, by Karl Niklas and Hanns-Christof Spatz.
Guess what? Some people care very much about plants! Karl Niklas and Hanns-Christof Spatz have written a book titled Plant Physics. What is it about? In many ways, it is IPMB redone with only plant examples. Their preface states
This book has two interweaving themes—one that emphasizes plant biology and another that emphasizes physics. For this reason, we have called it Plant Physics. The basic thesis of our book is simple: plants cannot be fully understood without examining how physical forces and processes influence their growth, development, reproduction, and evolution….This book explores…many…insights that emerge when plants are studied with the aid of physics, mathematics, engineering, and chemistry. Much of this exploration dwells on the discipline known as solid mechanics because this has been the focus of much botanical research. However, Plant Physics is not a book about plant solid mechanics. It treats a wider range of phenomena that traditionally fall under the purview of physics, including fluid mechanics, electrophysiology, and optics. It also outlines the physics of physiological processes such as photosynthesis, phloem loading, and stomatal opening and closing.
The chapter titles in Plant Physics overlap with topics in IPMB, such as Chapter 4 (The Mechanical Behavior of Materials), Chapter 6 (Fluid Mechanics), and Chapter 7 (Plant Electrophysiology). I found the mathematical level of the book to be somewhat lower than IPMB, and probably closer to Denny’s Air and Water. (Interestingly, they did not cite Air and Water in their Section 2.3, Living in Water Versus Air, but they do cite another of Denny’s books, Biology and the Mechanics of the Wave-Swept Environment.) The differences between air and water play a key role in plant life: “It is very possible that the colonization of land by plant life was propelled by the benefits of exchanging a blue and often turbid liquid for an essentially transparent mixture of gasses.” The book discusses diffusion, the Reynolds number, chemical potential, Poiseuille flow, and light absorption. Chapter 3 is devoted to Plant Water Relations, and contains an example that serves as a model for how physics can play a role in biology. The opening and closing of stomata (“guard cells”) in leaves involves diffusion, osmotic pressure, feedback, mechanics, and optics. Fluid flow through both the xylem (transporting water from the roots to the leaves) and the phloem (transporting photosynthetically produced molecules from the leaves to the rest of the plant) is discussed. Biomechanics plays a larger role in Plant Physics than in IPMB, and at the start of Chapter 4 the authors explain why.
The major premise of this book is that organisms cannot violate the fundamental laws of physics. A corollary to this premise is that organisms have evolved and adapted to mechanical forces in a manner consistent with the limits set by the mechanical properties of the materials out of which they are constructed…We see no better expression of these assertions than when we examine how the physical properties of different plant materials influence the mechanical behavior of plants.
Russ and I discuss Poisson’s ratio in a homework problem in Chapter 1. Niklas and Spatz give a nice example of how a large Poisson’s ratio can arise when a cylindrical cell has inextensible fibers in its cell wall that follow a spiral pattern. 
Values [of the Poisson’s ratio] can be very different [from isotropic materials] for composite biological materials such as most tissues, for which Poisson’s ratios greater than 1.0 can be found. A calculation presented in box 4.2 shows that in a sclerenchyma cell, in which practically inextensible cellulose microfibers provide the strengthening material in the cell wall, the Poisson’s ratio strongly depends on the microfibrillar angle; that is, the angle between fibers and the longitudinal axis of the cell.
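Here is a back-of-the-envelope version of that geometry (my own sketch, not the calculation in the book’s box 4.2): treat the cell as a thin-walled cylinder with inextensible fibers wound at an angle θ to the long axis. A fiber element has an axial component proportional to cos θ and a circumferential component proportional to sin θ, so keeping the fiber length fixed during a small stretch requires cos²θ εlong + sin²θ εcirc = 0, or εcirc = −cot²θ εlong. The effective Poisson’s ratio is then cot²θ, which equals 1 when θ = 45° and exceeds 1 whenever the microfibrils run closer to the cell’s long axis.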
Given my interest in bioelectric phenomena, I was especially curious about the chapter on Plant Electrophysiology (Chapter 7). The authors derive the Nernst-Planck equation, and the Goldman equation for the transmembrane potential. Interestingly, plants contain potassium and calcium ion channels, but no sodium channels. Many plants have cells that fire action potentials, but the role of the sodium channel for excitation is replaced by a calcium-dependent chloride channel. These are slowly propagating waves; Niklas and Spatz report conduction velocities of less than 0.1 m/s, compared to propagation in a large myelinated human axon, which can reach up to 100 m/s. Patch clamp recordings are more difficult in plant than in animal cells (plants have a cell wall in addition to a cell membrane). Particularly interesting to me were the gravisensitive currents in Lepidium sativum roots. The distribution of current is determined by the orientation of the root in a gravitational field.
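As a reminder of what the Goldman equation looks like in practice, here is a short sketch for a membrane permeable to potassium and chloride (the permeabilities and concentrations are invented for illustration; they are not plant-cell data from Niklas and Spatz):

# Goldman-Hodgkin-Katz voltage for a membrane permeable to K+ and Cl-.
# All permeabilities and concentrations below are made-up illustrative values.
import math

R = 8.314      # gas constant, J/(mol K)
T = 293.0      # temperature, K
F = 96485.0    # Faraday constant, C/mol

P_K, P_Cl = 1.0, 0.1          # relative permeabilities (assumed)
K_in, K_out = 100.0, 10.0     # potassium concentrations, mM (assumed)
Cl_in, Cl_out = 10.0, 50.0    # chloride concentrations, mM (assumed)

# Chloride enters with inside and outside swapped because of its negative charge.
V = (R * T / F) * math.log((P_K * K_out + P_Cl * Cl_in) /
                           (P_K * K_in + P_Cl * Cl_out))
print("Goldman potential: %.0f mV" % (1000.0 * V))   # about -57 mV with these numbers

Swap in measured values for a real plant cell and the same few lines of arithmetic give its resting potential.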

Botanists need physics just as much as zoologists do. Plants are just one more path leading from physics to biology.

For those wanting to learn more, my colleague at Oakland University, Steffan Puwal, plans to offer a course in Plant Physics in the winter 2015 semester.