Friday, July 25, 2014

The Eighteenth Elephant

I know that there are very few people out there interested in reading a blog about physics applied to medicine and biology. But those few (those wonderful few) might want to know of ANOTHER blog about physics applied to medicine and biology. It is called The Eighteenth Elephant. The blog is written by Professor Raghuveer Parthasarathy at the University of Oregon. He is a biological physicist with an interest in teaching “The Physics of Life” to non-science majors. He also leads a research lab that studies many biological physics topics, such as imaging and the mechanical properties of membranes. If you like my blog about the 4th edition of Intermediate Physics for Medicine and Biology, you will also like The Eighteenth Elephant. Even if you don’t enjoy my blog, you still might like Parthasarathy’s blog (he doesn’t constantly bombard you with links to the amazon.com page where you can purchase his book).
One of my favorite entries from The Eighteenth Elephant was from last April. I’ve talked about animal scaling of bones in this blog before. A bone must support an animal’s weight (proportional to the animal’s volume), its strength increases with its cross-sectional area, and its length generally increases with the linear size of the animal. Therefore, large animals need bones that are thicker relative to their length in order to support their weight. I demonstrate this visually by showing my class pictures of bones from different animals. Parthasarathy doesn’t mess around with pictures; he brings a dog femur and an elephant femur to class! (See the picture here; it’s enormous.) How much better than showing pictures! Now, I just need to find my own elephant femur….
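For readers who want the scaling argument in one line, here it is in symbols (a rough estimate that ignores differences in bone material and posture): if weight grows as the cube of an animal’s linear size L, and a bone of diameter d must keep the stress (weight per cross-sectional area) roughly constant, then

\[
W \propto L^3, \qquad \frac{W}{d^2} \approx \text{constant} \;\Rightarrow\; d \propto L^{3/2}, \qquad \frac{d}{L} \propto L^{1/2},
\]

so the relative thickness of a bone grows as the square root of an animal’s size, which is why the elephant femur looks so stout next to the dog’s.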
Be sure to read the delightful story about 18 elephants that gives the blog its name.
Friday, July 18, 2014
Hexagons and Cellular Excitable Media
Two of my favorite homework problems in the 4th edition of Intermediate Physics for Medicine and Biology are Problems 39 and 40 in Chapter 10. Russ Hobbie and I ask the student to analyze a cellular excitable medium (often called a cellular automaton), which provides much insight into the propagation of excitation in cardiac tissue. I’ve discussed these problems before in this blog. I’m always amazed at how well you can understand cardiac arrhythmias using a model so simple that you could teach it to third graders.
I learned about cellular excitable media from Art Winfree’s book When Time Breaks Down. To the best of my knowledge, the idea was first introduced by James Greenberg and Stuart Hastings in their paper “Spatial Patterns for Discrete Models of Diffusion in Excitable Media” (SIAM Journal on Applied Mathematics, Volume 34, pages 515–523, 1978), although they performed their simulations on a rectangular grid rather than on a hexagonal grid as in the homework problems from IPMB. Winfree, with his son Erik Winfree and Herbert Seifert, extended the model to three dimensions, and found exotic “organizing centers” such as a “linked pair of twisted scroll rings” (“Organizing Centers in a Cellular Excitable Medium,” Physica D: Nonlinear Phenomena, Volume 17, Pages 109–115, 1985).
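To give a flavor of how little machinery these models need, here is a minimal sketch (mine, not code from IPMB or the Greenberg-Hastings paper) of such a cellular automaton in Python. It uses a square grid with periodic boundaries, as in the original 1978 paper; the hexagonal version in our homework problems works the same way with six neighbors instead of four.

```python
import numpy as np

# States: 0 = quiescent, 1 = excited, 2 = refractory.
# Rules: a quiescent cell becomes excited if any neighbor is excited;
# an excited cell becomes refractory; a refractory cell recovers.
def step(grid):
    excited = (grid == 1)
    # Any excited von Neumann neighbor? (np.roll gives periodic boundaries)
    neighbor_excited = (np.roll(excited, 1, axis=0) | np.roll(excited, -1, axis=0) |
                        np.roll(excited, 1, axis=1) | np.roll(excited, -1, axis=1))
    new = np.zeros_like(grid)
    new[(grid == 0) & neighbor_excited] = 1  # quiescent -> excited
    new[grid == 1] = 2                       # excited -> refractory
    new[grid == 2] = 0                       # refractory -> quiescent
    return new

grid = np.zeros((21, 21), dtype=int)
grid[10, 10] = 1                             # stimulate one cell
for _ in range(5):
    grid = step(grid)
print(grid)                                  # an expanding ring of excitation
```

Start the wave next to a patch of refractory cells and you get a spiral instead of a ring, which is the automaton’s version of a reentrant arrhythmia.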
I imagine that students may have a difficult time with our homework problems, not because the problems themselves are difficult, but because they don’t have easy access to predrawn hexagon grids. It would be like trying to play chess without a chessboard. When I assign these problems, I provide my students with pages of hexagon grids, so they can focus on the physics. I thought my blog readers might also find this useful, so now you can find a page of predrawn hexagons on the book website. Or, if you prefer, you can find hexagon graph paper for free online here.
In the previous blog entry I mention a paper I published in the Online Journal of Cardiology in which I extended the cellular excitable medium to account for the virtual electrodes created when stimulating cardiac tissue. This change allowed the model to predict quatrefoil reentry. I concluded the paper by writing

This extremely simple cellular excitable medium—which is nothing more than a toy model, stripped down to contain only the essential features—can, with one simple modification for strong stimuli, predict many interesting and important phenomena. Much of what we have learned about virtual electrodes and deexcitation is predicted correctly by the model (Efimov et al., 2000; Trayanova, 2001). I am astounded that this simple model can reproduce the complex results obtained by Lindblom et al. (2000). The model provides valuable insight into the essential mechanisms of electrical stimulation without hiding the important features behind distracting details.

Unfortunately, the Online Journal of Cardiology no longer exists, so the link in my previous blog entry does not work. You can download a copy of this paper at my website. It contains everything except the animations that accompanied the figures in the original journal article. If you want to see the animations, you can look at the article archived here.
When Time Breaks Down: The Three-Dimensional Dynamics of Electrochemical Waves and Cardiac Arrhythmias, by Art Winfree.
Predrawn hexagon grids to use with homework problems about cellular automata.
“Virtual Electrodes Made Simple.”
Friday, July 11, 2014
Naked to the Bone
Naked to the Bone: Medical Imaging in the Twentieth Century, by Bettyann Kevles.
Naked to the Bone tells the history of medical imaging from Roentgen’s discovery [of x-rays] in 1895 to the present, as imaging affected our entire culture. While this book traces the technological developments and their consequences in medicine, it also explores the impact that this new way of seeing has had upon society at large. Citizens of the twentieth century often sensed that their world differed in kind from what came before, and that science and technology are responsible for that difference…
The book falls naturally into two parts, corresponding roughly in time with the two halves of the century. The first part traces the history of the single technology of X-ray imaging; the second, the array of new competing technologies that arose after World War II when television and computers began to contribute to medical imaging.
In the first part, the emphasis is on the refinement of the technology of the X-ray and the immediate consequences of its discovery. As the machines improved, physicians gradually pushed back the veil in front of the internal organs, revealing first the living skeleton, then the stomach, intestines, gall bladder, lungs, heart, and brain….
Part II deals with the second stage of the imaging revolution. Thomas Hughes suggests in American Genesis that the convergence of two new technologies can cause a revolution. This is precisely what happened when X-rays met computers and produced CT, MRI, PET, and ultrasound. Each of these scanners reconstructs cross-sectional slices of the interior of the body, or creates three-dimensional volume images.
Kevles reviews the development of X-ray imaging in detail. Its use became ubiquitous in modern society. In Problem 8 of Chapter 16 in IPMB, Russ and I analyze the fluoroscopy units used in shoe stores in the early twentieth century.

During the 1930s and 1940s it was popular to have an x-ray fluoroscope unit in shoe stores to show children and their parents that shoes were properly fit. These marvellous units were operated by people who had no concept of radiation safety and aimed the beam of x rays upward through the feet and right at the reproductive organs of the children!

Kevles describes the same thing (one can hardly avoid sarcasm when describing these devices).
All over the world, people who grew up between World War I and the 1960s recall the joy of standing inside the [“Foot-O-Scope” fluoroscope x-ray] machine, pressing the appropriate button (usually labeled “Man,” “Woman,” and “Child” although the X-ray dosage was identical) and staring at their wriggling toe bones. The Foot-O-Scope signaled the acceptance of X-ray machines in everyday life. Present in local shoe stores everywhere, they suggested that X-rays were safe and cheap enough so that just about anyone who shopped for shoes could see beyond the skin barrier.

Kevles’ book explores in detail the many contributors to the development of computed tomography. Russ and I just hint at this complex history in Chapter 16 of IPMB (Medical Use of X Rays).
The Nobel Prize acceptance speeches [Cormack (1980); Hounsfield (1980)] are interesting to read. A neurologist, William Oldendorf, had been working independently on the problem but did not share in the Nobel Prize [See DiChiro and Brooks (1979), and Broad (1980)].

Before reading Naked to the Bone, I didn’t realize that EMI, the British music recording company associated with the Beatles, was also the company that Hounsfield worked for when he developed computed tomography, and that sold the first CT scanner in 1972, starting the tomography revolution.
The invention of magnetic resonance imaging (Chapter 18 in IPMB) was similarly complicated and controversial. Kevles tells the story of the contributions of Raymond Damadian, Paul Lauterbur, Peter Mansfield, and others (Lauterbur and Mansfield shared the Nobel Prize, but Damadian was left out). She then describes the development of positron emission tomography (discussed by Russ and me in our Chapter 17) and ultrasound imaging (our Chapter 13).
One disadvantage of Naked to the Bone is that it was written nearly 20 years ago, and a lot has happened in medical imaging in the last two decades. I would love to read an updated twentieth-anniversary edition. Particularly interesting to me were some of the predictions made in the epilogue.
Looking ahead, it is easier to imagine an exhibition in the Imaging Museum of the future than to foresee machines in future imaging centers. Radiographs on glass, the original X-ray technology, have already disappeared. Film is likely to follow soon, replaced by versions that have been digitized for easy storage and electronic telecommunication. The use of CT will probably diminish but not disappear; its speed guarantees it a place in emergency medicine. MRI has a clear path before it, for while machines are costly, they last a long time and the upgrades they need are largely a matter of software programs. Ultrasound, the cheapest method of all, offers excellent images of organs that defy other imaging approaches and will probably occupy more space in the future. PET is a different story: full of promise for over a generation, and excellent for specialized procedures, it continues to run into regulatory obstacles that arise like dragon’s teeth even as others are overcome.

She then offers this insight about what might have happened if CT had been invented just after MRI, instead of just before it.
Looking back at the competition that accompanied the linkage of computers with imaging technologies, the timing suggests that, but for its few years’ head start, CT could never have competed with MRI. But it is hard to see how, without CT as a precedent, the chemical laboratory’s small-scale nuclear magnetic resonance technology would ever, on its own, have become body-imaging MRI.

Kevles is particularly interested in how these imaging technologies impacted modern art. For me, having little background in art, these chapters were not my favorites. But perhaps Kevles liked them best, because she concludes her epilogue on this topic. As usual, I will give her the last word:
Artists will most likely continue to extrapolate and elaborate on the remarkable technologies already available. For as a civilization, perhaps even as a species, we like to look, like to look through, and like to look at and through ourselves. In black and white and in color, in two-dimensional slices or in three-dimensional volumes, in frozen instants or moving sequences, the X-ray and its daughter technologies seem to satisfy an innate curiosity to see ourselves naked to the bone.
Friday, July 4, 2014
Our Job is to Find Stupid and Get Rid of It
This week I have been on vacation, including a trip to Kansas City to see relatives and a visit to the Grand Canyon. So, I don’t have much time for updating this blog. My work on the 5th edition of Intermediate Physics for Medicine and Biology has slowed to a crawl, and I need to get back to it next week.
This week I will simply suggest you watch and listen to the inspiring Boston University 2014 commencement address by my friend Kevin Kit Parker.
My favorite quote from the address is the title of this blog entry.
Parker is with the Harvard School of Engineering and Applied Sciences. Academically speaking, he and I are brothers; we share a common PhD advisor, John Wikswo of Vanderbilt University. Parker obtained his PhD about ten years after I received mine, and I met him when I was on the faculty at Vanderbilt for a few years in the late 1990s.
Parker is known both for his science, and for being a scientist/soldier. You can learn more about his experiences in an interview that aired on the TV show 60 Minutes.
Friday, June 27, 2014
Microscopes
The microscope is one of the most widely used instruments in science. Microscopy is a huge subject, and I am definitely not an expert. Russ Hobbie and I talk about the microscope only briefly in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 14 (Atoms and Light) we give a series of homework problems about lenses. Problem 43 considers the case of an object placed just outside the focal point of a converging lens. The resulting image is real, inverted, and magnified (a slide projector, for those of you old enough to remember such things). In Problem 44, the object is just inside the focal point of the lens. The image is virtual, upright, and magnified (a magnifying glass). Then in Problem 45 we put these two lenses together, first a slide projector casting an intermediate image, then a magnifying glass to view that image: a compound microscope. Our discussion is useful, but very simple.
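If you want to see the numbers work out, here is a quick sketch of that two-lens chain using the thin-lens equation, 1/s + 1/s′ = 1/f. The focal lengths and object distances are ones I picked for illustration; they are not from the book’s problems.

```python
# Thin-lens model of a compound microscope: an objective ("slide
# projector") forms a real intermediate image, and an eyepiece
# ("magnifying glass") turns it into a magnified virtual image.

def image_distance(s, f):
    """Solve the thin-lens equation 1/s + 1/s' = 1/f for s'."""
    return 1.0 / (1.0 / f - 1.0 / s)

f_obj, f_eye = 5.0, 25.0     # focal lengths in mm (illustrative values)

s1 = 5.5                     # object just outside the objective's focal point
s1p = image_distance(s1, f_obj)
m1 = -s1p / s1               # real, inverted, magnified
print(s1p, m1)               # 55.0 mm, -10x

s2 = 24.0                    # intermediate image just inside the eyepiece's focus
s2p = image_distance(s2, f_eye)
m2 = -s2p / s2               # negative s' means a virtual, upright image
print(s2p, m2)               # -600.0 mm, +25x

print(m1 * m2)               # overall magnification, about -250x
```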
Nowadays, microscopes are extremely complicated, and can do all sorts of wonderful things. Our simple example is nearly obsolete, because almost no one looks through the second lens (the eyepiece) to view the image anymore. Rather, the image produced by the first lens (the objective) is recorded digitally, and one looks at it on a computer screen. I could spend the rest of this blog entry describing the complexities of microscopes, but I want to go in another direction. Can a student build a simple yet modern microscope?
They can, and it makes a marvelous upper-level physics laboratory project. The proof is given by Jennifer Ross of the University of Massachusetts Amherst. In a preprint at her website, Ross describes a microscope project for undergraduates. The abstract reads:
Optics is an important subfield of physics required for instrument design and used in a variety of other disciplines, including materials science, physics, and life sciences such as developmental biology and cell biology. It is important to educate students from a variety of disciplines and backgrounds in the basics of optics in order to train the next generation of interdisciplinary researchers and instrumentalists who will push the boundaries of discovery. In this paper, we present an experimental system developed to teach students in the basics of geometric optics, including ray and wave optics. The students learn these concepts through designing, building, and testing a home-built light microscope made from component parts. We describe the experimental equipment and basic measurements students can perform to learn principles, technique, accuracy, and resolution of measurement. Students find the magnification and test the resolution of the microscope system they build. The system is open and versatile to allow advanced building projects, such as epi-fluorescence, total internal reflection fluorescence, and optical trapping. We have used this equipment in an optics course, an advanced laboratory course, and graduate-level training modules.

This fascinating paper then goes on to describe many aspects of microscope design.
The light source was a white light emitting diode (LED)… We chose inexpensive but small and powerful CMOS cameras to capture images with a USB link to a student’s laptop…. The condenser designs of students are the most variable and interesting part of the microscope design. Students in prior years have used one, two, or three lenses to create evenly illuminated light on the sample plane… After creating the condenser, students next have to use an objective to create an image onto the CMOS camera chip.

The equipment is not terribly expensive compared to buying a microscope, but it’s not cheap: each microscope costs about $3000 to build, which means for a team of three students the cost is $1000 per person. But the learning is tremendous, and Ross suggests that you can scavenge used parts to reduce the cost.
But perhaps even this student-built $3000 microscope is too complicated and expensive for you. Can we go simpler and cheaper? Yes! Consider “foldscope.” The website of foldscope’s inventors says (my italics)
We are a research team at PrakashLab at Stanford University, focused on democratizing science by developing scientific tools that can scale up to match problems in global health and science education. Here we describe Foldscope, a new approach for mass manufacturing of optical microscopes that are printed-and-folded from a single flat sheet of paper, akin to Origami…. Although it costs less than a dollar in parts, it can provide over 2,000X magnification with sub-micron resolution (800 nm), weighs less than two nickels (8.8 g), is small enough to fit in a pocket (70 × 20 × 2 mm³), requires no external power, and can survive being dropped from a 3-story building or stepped on by a person. Its minimalistic, scalable design is inherently application-specific instead of general-purpose gearing towards applications in global health, field based citizen science and K12-science education.

Details are described in a preprint available at http://arxiv.org/abs/1403.1211. Also, listen to Manu Prakash give a TED talk about foldscope. The goal is to provide “a microscope for every child.” I think Prakash and his team mean EVERY child (as in every single child in the whole wide world).
Friday, June 20, 2014
The Airy Disk
I hate to find errors in the 4th edition of Intermediate Physics for Medicine and Biology. When we do find any, Russ Hobbie and I let our readers know through an errata, published on the book’s website. Last week, I found another error, and it’s a particularly annoying one. First, let me tell you the error, and then I’ll fill in the backstory.
In the errata, you will now find this entry:
Page 338: In Chapter 12, Problem 10. The final equation, a Bessel function integral, should be

\[ \int_0^u u' \, J_0(u') \, du' = u \, J_1(u) . \]
Error found 6-10-14.

In the 4th edition, we left out the leading factor of “u” on the right-hand side. Why does this bother me so much? In part, because Problem 10 is about a famous and important calculation. Chapter 12 is about imaging, and Problem 10 asks the reader to calculate the two-dimensional Fourier transform of the “top hat” function, equal to 1 for r less than a (a circular disk) and zero otherwise. This Fourier transform is, to within a constant factor, equal to J1(u)/u, where J1 is a Bessel function and u = ka, with k being the magnitude of the spatial frequency. This function is known as the “Airy pattern” or “Airy disk.” The picture below shows what the Airy disk looks like when plotted versus spatial frequencies kx and ky:
The Airy disk.

A picture of the square of this function is shown in Fig. 12.1 of IPMB.

If you make the radius a smaller, so the “top hat” is narrower, then in frequency space the Airy disk spreads out. Conversely, if you make a larger, so the “top hat” is wider, then in frequency space the Airy disk is more localized. The Bessel function oscillates, passing through zero many times. Qualitatively, J1(u)/u looks similar to the more familiar sinc function, sin(ka)/ka. (The sinc function appears in the Fourier transform of a rectangular “top hat” function.)
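If you want to play with the function yourself, here is a short sketch (mine, not from the book) that evaluates J1(u)/u and finds the first zero of J1, the number that sets the Rayleigh resolution criterion discussed below.

```python
import numpy as np
from scipy.special import j1       # Bessel function of the first kind, order 1
from scipy.optimize import brentq

# The Airy pattern: intensity proportional to [2 J1(u)/u]^2, with u = ka.
u = np.linspace(1e-6, 20.0, 2000)  # start just above 0 to avoid 0/0
airy = (2.0 * j1(u) / u) ** 2      # normalized so the central peak is 1

# The first zero of J1 (about 3.8317) marks the dark ring around the
# central peak, and hence the Rayleigh criterion for resolution.
u_zero = brentq(j1, 3.0, 4.0)
print(u_zero)                      # 3.8317...
print(airy[:5].round(4))           # near u = 0 the intensity is about 1
```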
The Airy disk plays a particularly important role in diffraction, a topic only marginally discussed in IPMB. Interestingly, diffraction isn’t important enough in our book even to make the index. We do mention it briefly in Chapter 13
One property of waves is that diffraction limits our ability to produce an image. Only objects larger than or approximately equal to the wavelength can be imaged effectively. This property is what limits light microscopes (using electromagnetic waves to form an image) to resolutions equal to about the wavelength of visible light, 500 nm.

We don’t talk at all about Fourier optics in IPMB. When light passes through an aperture, the image formed by Fraunhofer diffraction is the Fourier transform of the aperture function. So, for instance, when light passes through the objective lens of a microscope (or some other aperture in the optical path), the aperture function is the top hat function: all the light passes through at radii less than the radius of the lens, and no light passes through at larger radii. So the image formed by the lens of a point object (to the extent that the assumptions underlying Fraunhofer diffraction apply) is the Airy disk. Instead of a point image, you get a little blur.
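You can check this numerically with a discrete Fourier transform: build a circular aperture on a grid, take its 2D FFT, and the intensity that comes out is an Airy disk with a bright center and faint rings. This is a toy construction of my own, not a calculation from IPMB.

```python
import numpy as np

N, a = 512, 20                       # grid size and aperture radius (pixels)
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= a**2).astype(float)   # the circular "top hat"

# Fraunhofer diffraction: the far-field amplitude is the Fourier
# transform of the aperture function.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(field) ** 2
intensity /= intensity.max()

# A slice through the center shows the central peak flanked by dark
# rings and weak secondary maxima, just as J1(u)/u predicts.
print(intensity[N // 2, N // 2 : N // 2 + 30].round(4))
```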
Suppose you are trying to image two point objects. After diffraction, the image is two Airy disks. Can you resolve them as two separate objects? It depends on the extent of the overlap of the little blurs. Typically one uses the Rayleigh criterion to answer this question. If the two Airy disks are separated by at least the distance from the center of one Airy disk to its first zero, then the two objects are considered resolved. This is, admittedly, an arbitrary definition, but is entirely reasonable and provides a quantitative meaning to the vague term “resolved.” Thus, the imaging resolution of a microscope is determined by the zeros of the J1 Bessel function, which I find pretty neat. (I love Bessel functions).
So, you see, when I realized our homework problem had a typo and it meant the student would calculate the Airy disk incorrectly, my heart sank. To any students who got fooled by this problem, I apologize. Mea culpa. It makes me all the more determined to keep errors out of the upcoming 5th edition, which Russ and I are working on feverishly.
On the lighter side, when I run into scientists I am not familiar with, I often look them up in Asimov’s Biographical Encyclopedia of Science and Technology. When I looked up George Biddell Airy (1801–1892), Astronomer Royal of the Greenwich Observatory, I was shocked. Asimov writes “he was a conceited, envious, small-minded man and ran the observatory like a petty tyrant.” Oh Myyy!
Friday, June 13, 2014
Physics Research & Education: The Complex Intersection of Biology and Physics
This morning, I am heading home after a productive week at a Gordon Research Conference about “Physics Research and Education: The Complex Intersection of Biology and Physics.” I wish I could tell you more about it, but Gordon Conferences have this policy…
To encourage open communication, each member of a Conference agrees that any information presented at a Gordon Research Conference, whether in a formal talk, poster session, or discussion, is a private communication from the individual making the contribution and is presented with the restriction that such information is not for public use….

So, there is little I can say, other than to point you to the meeting schedule published on the GRC website. I suspect that future blog entries will be influenced by what I learned this week, but I will only write about items that have also been published elsewhere.
I can say a bit about Gordon Conferences in general. The GRC website states
The Gordon Research Conferences were initiated by Dr. Neil E. Gordon, of the Johns Hopkins University, who recognized in the late 1920s the difficulty in establishing good, direct communication between scientists, whether working in the same subject area or in interdisciplinary research. The Gordon Research Conferences promote discussions and the free exchange of ideas at the research frontiers of the biological, chemical and physical sciences. Scientists with common professional interests come together for a full week of intense discussion and examination of the most advanced aspects of their field. These Conferences provide a valuable means of disseminating information and ideas in a way that cannot be achieved through the usual channels of communication—publications and presentations at large scientific meetings.

Before this, the only Gordon Conference I ever attended was one at which I was the trailing spouse. My wife studied the interaction of lasers with tissue in graduate school, and she attended a Gordon Conference on that topic in the 1980s; I tagged along. I don’t remember that conference being as intense as this one, but maybe that’s because I’m getting older.
The conference was at Mount Holyoke College, a small liberal arts college in South Hadley, Massachusetts, about 90 minutes west of Boston. It is a lovely venue, and we were treated well. I hadn’t lived in a dormitory since college, but I managed to get used to it.
For those of you interested in education at the intersection of physics and biology—a topic of interest for readers of the 4th edition of Intermediate Physics for Medicine and Biology—I suggest you take a look at the recent special issue of the American Journal of Physics about “Research and Education at the Crossroads of Biology and Physics,” discussed in this blog before. In addition, see the website set up based on the “Conference on Introductory Physics for the Life Sciences,” held March 14–16, 2014 in Arlington, Virginia. I’ve also discussed the movement to improve introductory physics classes for students in the life sciences previously in this blog here, here, here, and here.
Now, I need to run so I can catch my plane….
Friday, June 6, 2014
Plant Physics
Perhaps the 4th edition of Intermediate Physics for Medicine and Biology should have a different title. It really should be Intermediate Physics for Medicine and Zoology. Russ Hobbie and I talk a lot about the physics of animals, but not much about plants. There is little botany in our book. Well, that is not completely true. Homework Problem 34 in Chapter 1 (Mechanics) analyzes the ascent of sap in trees, and we briefly mention photosynthesis in Chapter 3 (Systems of Many Particles). I suppose our discussion of Robert Brown’s observation of the random motion of pollen particles counts as botany, but just barely. Chapter 8 (Biomagnetism) is surprisingly rich in plant examples, with both magnetotactic and biomagnetic signals from algae. But on the whole, our book talks about the physics of animals, and especially humans. I mean, really, who cares about plants?
Guess what? Some people care very much about plants! Karl Niklas and Hanns-Christof Spatz have written a book titled Plant Physics. What is it about? In many ways, it is IPMB redone with only plant examples. Their preface states
Plant Physics, by Karl Niklas and Hanns-Christof Spatz.
This book has two interweaving themes—one that emphasizes plant biology and another that emphasizes physics. For this reason, we have called it Plant Physics. The basic thesis of our book is simple: plants cannot be fully understood without examining how physical forces and processes influence their growth, development, reproduction, and evolution….This book explores…many…insights that emerge when plants are studied with the aid of physics, mathematics, engineering, and chemistry. Much of this exploration dwells on the discipline known as solid mechanics because this has been the focus of much botanical research. However, Plant Physics is not a book about plant solid mechanics. It treats a wider range of phenomena that traditionally fall under the purview of physics, including fluid mechanics, electrophysiology, and optics. It also outlines the physics of physiological processes such as photosynthesis, phloem loading, and stomatal opening and closing.

The chapter titles in Plant Physics overlap with topics in IPMB, such as Chapter 4 (The Mechanical Behavior of Materials), Chapter 6 (Fluid Mechanics), and Chapter 7 (Plant Electrophysiology). I found the mathematical level of the book to be somewhat lower than IPMB, and probably closer to Denny’s Air and Water. (Interestingly, they did not cite Air and Water in their Section 2.3, Living in Water Versus Air, but they do cite another of Denny’s books, Biology and the Mechanics of the Wave-Swept Environment.) The differences between air and water play a key role in plant life: “It is very possible that the colonization of land by plant life was propelled by the benefits of exchanging a blue and often turbid liquid for an essentially transparent mixture of gasses.” The book discusses diffusion, the Reynolds number, chemical potential, Poiseuille flow, and light absorption. Chapter 3 is devoted to Plant Water Relations, and contains an example that serves as a model for how physics can play a role in biology. The opening and closing of stomata (“guard cells”) in leaves involves diffusion, osmotic pressure, feedback, mechanics, and optics. Fluid flow through both the xylem (transporting water from the roots to the leaves) and phloem (transporting photosynthetically produced molecules from the leaves to the rest of the plant) is discussed. Biomechanics plays a larger role in Plant Physics than in IPMB, and at the start of Chapter 4 the authors explain why.
The major premise of this book is that organisms cannot violate the fundamental laws of physics. A corollary to this premise is that organisms have evolved and adapted to mechanical forces in a manner consistent with the limits set by the mechanical properties of the materials out of which they are constructed…We see no better expression of these assertions than when we examine how the physical properties of different plant materials influence the mechanical behavior of plants.

Russ and I discuss Poisson’s ratio in a homework problem in Chapter 1. Niklas and Spatz give a nice example of how a large Poisson’s ratio can arise when a cylindrical cell has inextensible fibers in its cell wall that follow a spiral pattern.
Values [of the Poisson’s ratio] can be very different [from isotropic materials] for composite biological materials such as most tissues, for which Poisson’s ratios greater than 1.0 can be found. A calculation presented in box 4.2 shows that in a sclerenchyma cell, in which practically inextensible cellulose microfibers provide the strengthening material in the cell wall, the Poisson’s ratio strongly depends on the microfibrillar angle; that is, the angle between fibers and the longitudinal axis of the cell.

Given my interest in bioelectric phenomena, I was especially curious about the chapter on Plant Electrophysiology (Chapter 7). The authors derive the Nernst-Planck equation, and the Goldman equation for the transmembrane potential. Interestingly, plants contain potassium and calcium ion channels, but no sodium channels. Many plants have cells that fire action potentials, but the role of the sodium channel for excitation is replaced by a calcium-dependent chloride channel. These are slowly propagating waves; Niklas and Spatz report conduction velocities of less than 0.1 m/s, compared to propagation in a large myelinated human axon, which can reach up to 100 m/s. Patch clamp recordings are more difficult in plant than in animal cells (plants have a cell wall in addition to a cell membrane). Particularly interesting to me were the gravisensitive currents in Lepidium sativum roots. The distribution of current is determined by the orientation of the root in a gravitational field.
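For readers who haven’t met the Goldman equation, here is a minimal sketch of it for a membrane permeable to potassium and chloride, the two ions emphasized above. The permeabilities and concentrations are numbers I invented for illustration, not measured plant values.

```python
import math

R, T, F = 8.314, 298.0, 96485.0   # gas constant, temperature (K), Faraday

def goldman(P_K, P_Cl, K_in, K_out, Cl_in, Cl_out):
    """Goldman equation for K+ and Cl-; note that for an anion the
    inside and outside concentrations swap roles in the ratio."""
    return (R * T / F) * math.log((P_K * K_out + P_Cl * Cl_in) /
                                  (P_K * K_in + P_Cl * Cl_out))

# When K+ permeability dominates, the membrane potential approaches
# the potassium Nernst potential (strongly negative inside).
V = goldman(P_K=1.0, P_Cl=0.05, K_in=100.0, K_out=1.0, Cl_in=10.0, Cl_out=1.0)
print(round(V * 1000), "mV")      # about -108 mV with these numbers
```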
Botanists need physics just as much as zoologists do. Plants are just one more path leading from physics to biology.
For those wanting to learn more, my colleague at Oakland University, Steffan Puwal, plans to offer a course in Plant Physics in the winter 2015 semester.
Friday, May 30, 2014
Pierre Auger and Lise Meitner
Last week in this blog, I discussed Auger electrons and their role in determining the radiation dose to biological tissue. This week, I would like to examine a bit of history behind the discovery of Auger electrons.
Auger electrons are named for Pierre Auger (1899–1993), a French physicist. Lars Persson discusses Auger’s life and work in a short biographical article (Acta Oncologica, Volume 35, Pages 785–787, 1996)
From the onset of his scientific work in 1922 Pierre Auger took an interest in the cloud chamber method discovered by Wilson and applied it to studying the photoelectric effect produced by x-rays on gas atoms. The Wilson method provided him with the most direct means of obtaining detailed information on the photoelectrons produced, since their trajectories could be followed when leaving the atom that had absorbed the quantum of radiation. He filled the chamber with hydrogen, which has a very low x-ray absorption coefficient, and a small proportion of highly absorbent and chemically neutral heavy gases, such as krypton and xenon. Auger observed some reabsorption in the gas, but most often found that the expected electron trajectory started from the positive ion itself. Numerous experiments enabled Auger to show that the phenomenon is frequent and amounts to nonradiative transitions among the electrons of atoms ionized in depth. This phenomenon was named the Auger effect, and the corresponding electrons Auger electrons. His discovery was published in the French scientific journal Comptes Rendus as a note titled “On secondary beta-rays produced in a gas by x-rays” (1925; 180: 65–8). He was awarded several scientific prizes and was also a nominee for the Nobel Prize in physics which, however, he never received. He was a member of the French Academy of Science. Pierre Auger was certainly one of the great men who created the 20th century in science.
Lise Meitner: A Life in Physics, by Ruth Lewin Sime.

What is most interesting to me about the discovery of Auger electrons is that Auger may have been scooped by one of my favorite physicists, Lise Meitner (1878–1968). I didn’t think I would have the opportunity to discuss Meitner in a blog about physics in medicine and biology, and her name never appears in the 4th edition of Intermediate Physics for Medicine and Biology. But the discovery of Auger electrons gives me an excuse to tell you about her. In the book Lise Meitner: A Life in Physics, Ruth Lewin Sime writes about Meitner’s research on UX1 (now known to be the isotope thorium-234)
According to Meitner, the primary process was simply the emission of a decay electron from the nucleus. In UX1 she believed there was no nuclear gamma radiation at all. Instead the decay electron directly ejected a K shell electron, an L electron dropped into the vacancy, and the resultant Kα radiation was mostly reabsorbed to eject L, M, or N electrons from their orbits, all in the same atom. The possibility of multiple transitions without the emission of radiation had been discussed theoretically; Meitner was the first to observe and describe such radiationless transitions. Two years later, Pierre Auger detected the short heavy tracks of the ejected secondary electrons in a cloud chamber, and the effect was named for him. It has been suggested that the “Auger effect” might well have been the “Meitner effect” or at least the “Meitner-Auger effect” had she described it with greater fanfare, but in 1923 it was only part of a thirteen-page article whose main thrust was the beta spectrum of UX1 and the mechanism of its decay.

On the other hand, for an argument in support of Auger’s priority, see Duparc, O. H. (2009) “Pierre Auger – Lise Meitner: Comparative Contributions to the Auger Effect,” International Journal of Materials Research, Volume 100, Pages 1162–1166.
The Making of the Atomic Bomb, by Richard Rhodes.

Meitner is best known for her work on nuclear fission, described so eloquently by Richard Rhodes in his masterpiece The Making of the Atomic Bomb. Meitner was an Austrian physicist of Jewish descent working in Germany with Otto Hahn. After the Anschluss, Hitler planned to expel Jewish scientists from their academic positions, but also forbade their emigration. With the help of her Dutch colleague Dirk Coster (who is mentioned in IPMB because of Coster-Kronig transitions), she slipped out of Berlin in July 1938. Rhodes writes
Meitner left with Coster by train on Saturday morning. Nine years later she remembered the grim passage as if she had traveled alone: “I took a train for Holland on the pretext that I wanted to spend a week’s vacation. At the Dutch border, I got the scare of my life when a Nazi military patrol of five men going through the coaches picked up my Austrian passport, which had expired long ago. I got so frightened, my heart almost stopped beating. I knew that the Nazis had just declared open season on Jews, that the hunt was on. For ten minutes I sat there and waited, ten minutes that seemed like so many hours. Then one of the Nazi officials returned and handed me back the passport without a word. Two minutes later I descended on Dutch territory, where I was met by some of my Holland colleagues.”

Even better reading is Rhodes’s description of Meitner’s fateful December 1938 walk in the woods with her nephew Otto Frisch, during which they sat down on a log, worked out the mechanism of nuclear fission, and correctly interpreted Hahn’s experimental data. Go buy his book and enjoy the story. Also, you can listen to Ruth Lewin Sime talk about Meitner’s life and work here.
Listen to Ruth Lewin Sime talk about Lise Meitner’s life.
Friday, May 23, 2014
The Amazing World of Auger Electrons
When analyzing how ionizing radiation interacts with biological tissue, one important issue is the role of Auger electrons. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce Auger electrons in Chapter 15 (Interaction of Photons and Charged Particles with Matter). An X-ray or charged particle ionizes an atom, leaving a hole in the electron shell.
The hole in the shell can be filled by two competing processes: a radiative transition, in which a photon is emitted as an electron falls into the hole from a higher level, or a nonradiative or radiationless transition, such as the emission of an Auger electron from a higher level as a second electron falls from a higher level to fill the hole.

We consider Auger electrons again in Chapter 17 (Nuclear Physics and Nuclear Medicine). In some cases, a cascade of relatively low energy electrons is produced by one ionizing event.
The Auger cascade means that several of these electrons are emitted per transition. If a radionuclide is in a compound that is bound to DNA, the effect of several electrons released in the same place is to cause as much damage per unit dose as high-LET [linear energy transfer] radiation….Many electrons (up to 25) can be emitted for one nuclear transformation, depending on the decay scheme [Howell (1992)]. The electron energies vary from a few eV to a few tens of keV. Corresponding electron ranges are from less than 1 nm to 15 μm. The diameter of the DNA double helix is about 2 nm…When it [the radionuclide emitting Auger electrons] is bound to the DNA, survival curves are much steeper, as with the α particles in Fig. 15.32 (RBE [relative biological effectiveness] ≈ 8)
“The Amazing World of Auger Electrons.”

In IPMB, Russ and I cite a paper by Amin Kassis with the wonderful title “The Amazing World of Auger Electrons” (International Journal of Radiation Biology, Volume 80, Pages 789–803, 2004). Kassis begins
In 1925, a 26-year-old French physicist named Pierre Victor Auger published a paper describing a new phenomenon that later became known as the Auger effect (Auger 1925). He reported that the irradiation of a cloud chamber with low-energy X-ray photons results in the production of multiple electron tracks and concluded that this event is a consequence of the ejection of inner-shell electrons from the irradiated atoms, the creation of primary electron vacancies within these atoms, a complex series of vacancy cascades composed of both radiative and nonradiative transitions, and the ejection of very low-energy electrons from these atoms. In later studies, it was recognized that such low-energy electrons are also ejected by many radionuclides that decay by electron capture (EC) and/or internal conversion (IC). Both of these processes introduce primary vacancies in the inner electronic shells of the daughter atoms which are rapidly filled up by a cascade of electron transitions that move the vacancy towards the outermost shell. Each inner-shell electron transition results in the emission of either a characteristic atomic X-ray photon or low-energy and short-range monoenergetic electrons (collectively known as Auger electrons, in honor of their discoverer).
Typically an atom undergoing EC and/or IC emits several electrons with energies ranging from a few eV to approximately 100 keV. Consequently, the range of Auger electrons in water is from a fraction of a nanometer to several hundreds of micrometers (table 1). The ejection of these electrons leaves the decaying atoms transiently with a high positive charge and leads to the deposition of highly localized energy around the decay site. The dissipation of the potential energy associated with the high positive charge and its neutralization may, in principle, also act concomitantly and be responsible for any observed biological effects. Finally, it is important to note that unlike energetic electrons, whose linear energy transfer (LET) is low (~0.2 keV/μm) along most of their rather long linear path (up to one cm in tissue), i.e. ionizations occur sparingly, the LET of Auger electrons rises dramatically to ~26 keV/μm (figure 1) especially at very low energies (35–550 eV) (Cole 1969) with the ionizations clustered within several cubic nanometers around the point of decay. From a radiobiological perspective, it is important to recall that the biological functions of mammalian cells depend on both the genomic sequences of double-stranded DNA and the proteins that form the nucleoprotein complex, i.e. chromatin, and to note that the organization of this polymer involves many structural level compactions (nucleosome, 30-nm chromatin fiber, chromonema fiber, etc.) [see Fig. 16.33 in IPMB] whose dimensions are all within the range of these high-LET (8–26 keV/μm), low-energy (less than 1.6 keV), short-range (less than 130 nm) electrons.

An example of an isotope that emits a cascade of Auger electrons is iodine-125. It has a half-life of 59 days, and decays to an excited state of tellurium-125. The atom deexcites by various mechanisms, including up to 21 Auger electrons with energies of 50 to 500 eV each. Kassis says
Among all the radionuclides that decay by EC and/or IC, the Auger electron emitter investigated most extensively is iodine-125. Because these processes lead to the emission of electrons with very low energies, early studies examined the radiotoxicity of iodine-125 in mammalian cells when the radioelement was incorporated into nuclear DNA consequent to in vitro incubations of mammalian cells with the thymidine analog 5-[125I]iodo-2’-deoxyuridine (125IdUrd). These studies demonstrated that the decay of DNA-incorporated 125I is highly toxic to mammalian cells.

I find it useful to compare 125I with 131I, another iodine radioisotope used in nuclear medicine. 131I undergoes beta decay, followed by emission of a gamma ray. Both the high energy electron from beta decay (up to 606 keV) and the gamma ray (364 keV) can travel millimeters in tissue, passing through many cells. In contrast, 125I releases its cascade of Auger electrons, resulting in extensive damage over a very small distance.
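A back-of-the-envelope estimate shows just how concentrated that damage is. The numbers below are my own rough inputs (about twenty Auger electrons of a few hundred eV each, all deposited within a few nanometers of the decay site), not values from Kassis’s paper.

```python
import math

n_electrons = 20        # roughly 20 Auger electrons per 125I decay
mean_energy_eV = 250.0  # each carrying a few hundred eV
E = n_electrons * mean_energy_eV * 1.602e-19   # energy deposited, in joules

r = 5e-9                # assume it all lands in a 5-nm-radius sphere of water
mass = (4.0 / 3.0) * math.pi * r**3 * 1000.0   # kg (water density 1000 kg/m^3)

print(E / mass, "Gy")   # about 1.5e6 Gy: a local dose of order a megagray
```

Averaged over a whole cell the dose is tiny; concentrated on a two-nanometer DNA double helix it is devastating, which is the whole point of the artillery analogy below.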
Civil War buffs might compare these two isotopes to the artillery ammunition of the 1860s. 131I is like a cannon firing shot (solid cannon balls), whereas 125I is like firing canister. If you are trying to take out an enemy battery 1000 yards away, you need shot. But if you are trying to repulse an enemy infantry charge that is only 10 yards away, you use canister or, better, double canister. 131I is shot, and 125I is double canister.