Friday, November 4, 2011

Countercurrent Heat Exchange

Problem 17 in Chapter 5 of the 4th edition of Intermediate Physics for Medicine and Biology considers a countercurrent heat exchanger. Countercurrent transport in general is discussed in Section 5.8 in terms of the movement of particles. However, Russ Hobbie and I conclude the section by applying the concept to heat exchange.
The principle [of countercurrent exchange] is also used to conserve heat in the extremities—such as a person’s arms and legs, whale flippers, or the leg of a duck. If a vein returning from an extremity runs closely parallel to the artery feeding the extremity, the blood in the artery will be cooled and the blood in the vein warmed. As a result, the temperature of the extremity will be lower and the heat loss to the surroundings will be reduced.
How Animals Work, by Knut Schmidt-Nielsen.
Problem 17 provides an example of this behavior, and cites Knut Schmidt-Nielsen’s book How Animals Work (1972, Cambridge University Press), which describes countercurrent exchange in more detail. (His comments below about the nose refer to an earlier section of the book, in which Schmidt-Nielsen discusses heat exchange in the nose of the kangaroo rat).
The heat exchange in the nose has a great similarity to the well-known countercurrent heat exchange which takes place, for example, in the extremities of many aquatic animals, such as in the flippers of whales and the legs of wading birds. The body of a whale that swims in water near the freezing point is well insulated with blubber, but the thin streamlined flukes and flippers are uninsulated and highly vascularized and would have an excessive heat loss if it were not for the exchange of heat between arterial and venous blood in these structures. As the cold venous blood returns to the body from the flipper, the vessels run in close proximity to the arteries, in fact, they completely surround the artery, and heat from the arterial blood flows into the returning venous blood, which is thus reheated before it returns to the body (figure 3). Similarly, in the limbs of many animals both arteries and veins split up into a large number of parallel, intermingled vessels each with a diameter of about 1 mm or so, forming a discrete vascular bundle known as a rete…Whether the blood vessels form such a rete system, or in some other way run in close proximity, as in the flipper of the whale, is a question of design and does not alter the principle of the heat recovery mechanism. The blood flows in opposite directions in the arteries and veins, and heat exchange takes place between the two parallel sets of tubes; the system is therefore known as a countercurrent heat exchanger.
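To see how effective this arrangement can be, here is a minimal numerical sketch of a countercurrent exchanger in a limb. This is my own toy model, not from the book or from Schmidt-Nielsen: the artery is divided into segments that exchange heat with the adjacent vein, and the tip loses heat to the cold surroundings. The segment count and the coefficients h and g are arbitrary illustrative values.

```python
import numpy as np

# Toy steady-state model of countercurrent heat exchange in a limb.
# The exchange coefficient h and tip-loss coefficient g are made-up
# illustrative values, not physiological data.
N = 50                      # segments along the limb
h = 0.2                     # artery-to-vein heat exchange per segment
g = 0.5                     # fraction of the remaining warmth lost at the tip
T_body, T_ambient = 37.0, 5.0

Ta = np.full(N, T_body)     # arterial temperature, flowing body -> tip
Tv = np.full(N, T_body)     # venous temperature, flowing tip -> body

for _ in range(1000):       # relax to a steady state
    for i in range(1, N):   # artery marches outward, losing heat to the vein
        Ta[i] = Ta[i-1] - h*(Ta[i-1] - Tv[i-1])
    Tv[-1] = Ta[-1] - g*(Ta[-1] - T_ambient)   # tip cools toward the surroundings
    for i in range(N-2, -1, -1):               # vein marches back, reclaiming heat
        Tv[i] = Tv[i+1] + h*(Ta[i+1] - Tv[i+1])

print(f"temperature at the tip:         {Ta[-1]:.1f} C")
print(f"venous blood returning to body: {Tv[0]:.1f} C")
# The tip runs cold while the returning venous blood is rewarmed most of the
# way back to body temperature, so relatively little heat leaves the body.
```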
The Camel's Nose: Memoirs of a Curious Scientist, by Knut Schmidt-Nielsen.
Schmidt-Nielsen also wrote Scaling: Why is Animal Size So Important?, which Russ and I cite often in Chapter 2 and which I included in my top ten list of biological physics books. I have also read Schmidt-Nielsen's autobiography The Camel’s Nose: Memoirs of a Curious Scientist. (See the review of this book in the New England Journal of Medicine.) His Preface begins
This is a personal story of a life spent in science. It tells about curiosity, about finding out and finding answers. The questions I have tried to answer have been very straightforward, perhaps even simple. Do marine birds drink sea water? How do camels in hot deserts manage for days without drinking when humans could not survive without water for more than a day? How can kangaroo rats live in the desert without any water to drink? How can snails find water and food in the most barren deserts? Can crab-eating frogs really survive in sea water?

These are important questions. The answers not only tell us how animals overcome seemingly insurmountable obstacles in hostile environments; they also give us insight into general principles of life and survival.
A statue of Knut Schmidt-Nielsen with a camel on the campus of Duke University.
Schmidt-Nielsen died in 2007, and Steven Vogel (whom I quoted in last week’s blog entry) wrote an article about him for the Biographical Memoirs of Fellows of the Royal Society (Volume 54, Pages 319–331, 2008). See also his obituary in the Journal of Experimental Biology. A statue of Schmidt-Nielsen with a camel (which he famously studied) graces the Duke University campus.

Friday, October 28, 2011

Murray’s Law

Homework Problem 33 in Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology is about Murray’s law, a relationship describing the radii of branching vessels.
A parent vessel of radius Rp branches into two daughter vessels of radii Rd1 and Rd2. Find a relationship between the radii such that the shear stress on the vessel wall is the same in each vessel. (Hint: Use conservation of the volume flow.) This relationship is called ‘Murray’s Law’. Organisms may use shear stress to determine the appropriate size of vessels for fluid transport [LaBarbera (1990)].
The reference is to
LaBarbera, M. (1990) “Principles of Design of Fluid Transport Systems in Zoology.” Science, Volume 249, Pages 992–1000.
Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, by Steven Vogel.
In his book Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, Steven Vogel provides a clear and engaging discussion of Murray’s law.
Our problem of figuring the cheapest arrangement of pipes turns out to involve nothing more nor less than calculating the relative dimensions of pipes so that the steepness of the speed gradient at all walls is the same. This calculation was done by Cecil D. Murray, of Bryn Mawr College, back in 1926, and is spoken of, when (uncommonly) it’s mentioned, as “Murray’s law.”
Murray’s law isn’t especially complicated, and anyone with a hand calculator can play around with it (but you can ignore the specifics without missing the present message). The rule is that the cube of the radius of the parental vessel equals the sum of the cubes of the radii of the daughter vessels. If a pipe with a radius of two units splits into a pair of pipes, each of the pair ought to have a radius of about 1.6 units. (To check, cube 1.6 and then double the result—you get about 2 cubed.) The daughters are smaller, but only a little (Figure 5.6). Still, if the parental one eventually divides into a hundred progeny, the progeny do come out substantially smaller, each about a fifth of the radius of the parent. (Their aggregate cross-section area is, of course, greater than the parental one—to be specific, four fold greater.)

The relationship predicts the relative sizes of both our arteries and our veins quite well. It only fails for the very smallest arterioles and capillaries….

It would be indefensibly anthropocentric to suppose that we’re the only creatures to follow Mr. Murray. My friend, Michael LaBarbera (who introduced me to the whole issue) has tested the law on several systems that are very unlike us structurally and functionally, and very distant from us evolutionarily…Murray’s law again proves applicable…

The mechanism … is becoming clear. Without getting into the details, it looks as if the cells lining the blood vessels can quite literally sense changes in the speed gradient next to them. An increase in the speed of flow through a vessel increases the speed gradient at its walls. An increase in gradient stimulates cell division, which would increase vessel diameter as appropriate to offset the faster flow. Neither change in blood pressure nor cutting the nerve supply makes any difference—this is apparently a direct effect of the gradient on synthesis of some chemical signal by the cells. Perhaps the neatest feature of the scheme is that a cell needn’t know anything about the size of the vessel of which it’s a part. As a consequence of Murray’s Law, it can be given the same specific instruction wherever it might be located, a command telling it to divide when the speed gradient exceeds a specific value.
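The arithmetic in Vogel’s passage is easy to check. Here is a minimal sketch (my own, not from Vogel or the textbook) of Murray’s law for a parent vessel that splits into n equal daughters, in which case each daughter radius is the parent radius divided by the cube root of n.

```python
# Murray's law: R_parent**3 = sum of R_daughter**3.
# For n equal daughters, R_d = R_p / n**(1/3).

def daughter_radius(R_p, n):
    """Radius of each of n equal daughter vessels under Murray's law."""
    return R_p / n**(1/3)

R_p = 2.0
print(daughter_radius(R_p, 2))          # ~1.59: Vogel's "about 1.6 units"
print(daughter_radius(R_p, 100) / R_p)  # ~0.22: each daughter about a fifth of the parent

# Aggregate cross-sectional area of n equal daughters relative to the parent
# grows as n**(1/3):
n = 100
print(n * daughter_radius(R_p, n)**2 / R_p**2)   # ~4.6-fold greater
```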
Vogel is a faculty member in the Biology Department at Duke University. He has published several fine books, including Vital Circuits quoted above and the delightful Life in Moving Fluids (Princeton University Press, 1994), both cited in Intermediate Physics for Medicine and Biology.

Friday, October 21, 2011

A Useful Website

While I have many goals when writing this blog (with the top being to sell textbooks!), sometimes I simply like to point out useful websites relevant to readers of the 4th edition of Intermediate Physics for Medicine and Biology. One example is the website of Rob MacLeod, a professor of bioengineering at the University of Utah. MacLeod’s research, like mine, centers on the numerical simulation of cardiac electrophysiology, so we find many of the same topics interesting.

I particularly enjoy his list of Background Links for Rob’s Courses. You will find many books listed, some of which Russ Hobbie and I cite in Intermediate Physics for Medicine and Biology, and some that we don’t cite but should. For example, MacLeod speaks highly of the book Mathematical Physiology by Keener and Sneyd, but somehow Russ and I never reference it. I didn’t know Malmivuo and Plonsey’s book Bioelectromagnetism (which we do cite) is now available online and free of charge. The Wellcome Trust Heart Atlas is beautiful, as is the Virtual Heart website. MacLeod’s list of books about “Cardiology and Medicine” looks fascinating, with a heavy emphasis on the relevant history and biography. If I start running out of topics for these blog posts, I could probably find a year of material by exploring the sources listed on this page.

If you visit MacLeod's website (and I hope you do), make sure to click on the link “Information on Writing.” I am an admirer of good writing, especially in nonfiction, and am frustrated when presented with a poorly written scientific book or paper. (I review a lot of papers for journals, and often find myself venting and fuming.) My advice to a young scientist is: Learn To Write. Throughout your scientific career you will be judged primarily on your papers and your grant proposals, which are both written documents. Maybe your science is so good that it can overcome poor writing and still impress the reader, but I doubt it. Learn to write.

Friday, October 14, 2011

Bethesda

A couple months ago I went to Bethesda, Maryland to review grant proposals for the National Institutes of Health. They swear us to secrecy, so I can’t divulge any details about the specific research. But I will share a few general observations.
  1. Winston Churchill said that “Democracy is the worst form of government except all the others that have been tried.” That sums up my opinion of the NIH review process. There are all sorts of problems with the way we select the best research to fund, but I can’t think of a better way than that used by NIH. Each time I participate, I come away with a great respect for the process. Of course, from the outside the review process can resemble a casino, but I don’t see how you can eliminate some randomness while at the same time keeping the process fair, with wide input, and a focus on the significance and impact of the research.
  2. If you are a young biomedical researcher, or hope to be one someday, then you should take advantage of any opportunity to review grant proposals. It is like going to grant writing school. No book, no website, no video, no workshop is more useful for learning how to prepare a proposal. It is a lot of work, but you will gain much, especially the first time or two you do it. However, if you simply are not able to participate in a review panel, then at least watch this video (see below), which is a fairly accurate description of what goes on.
  3. After reviewing grant proposals, I am optimistic about the future of the scientific enterprise in the United States, because of all the fascinating and important research being proposed. I am also pessimistic about my chances for winning additional funding, because the competition is so fierce. But, we must soldier on. To quote Churchill again, “Never give in, never give in, never, never, never, never.” So I’ll keep trying.
  4. Research is becoming more and more interdisciplinary, and many proposals now come from multidisciplinary teams. No individual researcher can know everything, but the team members must know enough to understand one another and to talk to one another intelligently. I believe this is one of the virtues of the 4th edition of Intermediate Physics for Medicine and Biology. It helps bridge the gap between physicists and engineers on the one side, and biologists and medical doctors on the other. The book won’t turn a physicist into a biologist, but it may help a physicist talk to and better appreciate a biologist. This is crucial for performing modern collaborative research, and for obtaining funding to pay for that research. After reviewing all those proposals, I came away proud of our textbook.
We finished our review session a couple hours earlier than anticipated, so I used the time to visit the new Martin Luther King Memorial in Washington, DC. It is just across the Tidal Basin from the Jefferson Memorial, and the statues of King and Jefferson stare at each other across the water. If you happen to be going to DC soon, prepare yourself for a shock. The beautiful reflecting pool between the Washington Monument and the Lincoln Memorial is now a dried-up, plowed-up mud flat. Apparently they are renovating it. But the other attractions are as beautiful as ever, including the Vietnam Veterans Memorial, the Korean War Veterans Memorial, the National World War II Memorial, and the Franklin Delano Roosevelt Memorial. I even saw one I had somehow missed in previous visits: the George Mason Memorial, near the Jefferson Memorial. All this sightseeing was a little bonus after reviewing all those grants (packed into two frantic hours between leaving the review session and reaching the airport).

NIH Peer Review Revealed.

Friday, October 7, 2011

The Mathematics of Diffusion

The Mathematics of Diffusion, by John Crank.
Diffusion is one of those topics that is rarely covered in an introductory physics class, but is essential for understanding biology. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss diffusion and its biomedical applications. One of the books we cite is The Mathematics of Diffusion by John Crank. Hard-core mathematical physicists who are interested in biology and medicine will find Crank’s book to be a good fit. Physiologists who want to avoid as much mathematical analysis as possible may prefer to learn their diffusion from Random Walks in Biology, by Howard Berg.
Crank died five years ago this week. Like Wilson Greatbatch, whom I discussed in my last blog entry, Crank was one of those scientists who came of age serving in the military during World War Two (Tom Brokaw would call them members of the “Greatest Generation”). Crank’s 2006 obituary in the British newspaper The Telegraph states:
John Crank was born on February 6 1916 at Hindley, Lancashire, the only son of a carpenter’s pattern-maker. He studied at Manchester University, where he gained his BSc and MSc. At Manchester he was a student of the physicist Lawrence Bragg, the youngest-ever winner of a Nobel prize, and of Douglas Hartree, a leading numerical analyst.

Crank was seconded to war work during the Second World War, in his case to work on ballistics. This was followed by employment as a mathematical physicist at Courtaulds Fundamental Research Laboratory from 1945 to 1957. He was then, from 1957 to 1981, professor of mathematics at Brunel University (initially Brunel College in Acton).

Crank published only a few research papers, but they were seminal. Even more influential were his books. His work at Courtaulds led him to write The Mathematics of Diffusion, a much-cited text that is still an inspiration for researchers who strive to understand how heat and mass can be transferred in crystalline and polymeric material. He subsequently produced Free and Moving Boundary Problems, which encompassed the analysis and numerical solution of a class of mathematical models that are fundamental to industrial processes such as crystal growth and food refrigeration.
Crank is best known for a numerical technique to solve equations like the diffusion equation, developed with Phyllis Nicolson and known as the Crank-Nicolson method. The algorithm has the advantage that it is numerically stable, which can be shown using von Neumann stability analysis. They published this method in a 1947 paper in the Proceedings of the Cambridge Philosophical Society:
Crank, J., and P. Nicolson (1947) “A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat Conduction Type,” Proc. Camb. Phil. Soc., Volume 43, Pages 50–67.
Rather than describe the Crank-Nicolson method, I will let the reader explore it in a new homework problem.
Section 4.8

Problem 24 ½ The numerical approximation for the diffusion equation, derived as part of Problem 24, has a key limitation: it is unstable if the time step is too large. This instability can be avoided using the Crank-Nicolson method. Replace the first time derivative in the diffusion equation with a finite difference, as was done in Problem 24. Next, replace the second space derivative with the finite difference approximation from Problem 24, but instead of evaluating the second derivative at time t, use the average of the second derivative evaluated at times t and t+Δt.
(a) Write down this numerical approximation to the diffusion equation, analogous to Eq. 4 in Problem 24.

(b) Explain why this expression is more difficult to compute than the expression given in the first two lines of Eq. 4. Hint: consider how you determine C(t+Δt) once you know C(t).

The difficulty you discover in part (b) is offset by the advantage that the Crank-Nicolson method is stable for any time step. For more information about the Crank-Nicolson method, stability, and other numerical issues, see Press et al. (1992).
The citation is to my favorite book on computational methods: Numerical Recipes (of course, the link is to the FORTRAN 77 version, which is the edition that sits on my shelf).
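For readers who want to experiment, here is a minimal sketch of the Crank-Nicolson method applied to the one-dimensional diffusion equation ∂C/∂t = D ∂²C/∂x², with C = 0 at both ends and an initial sine profile whose exact decay rate is known. The grid size, time step, and test problem are my own illustrative choices; the point is that the scheme stays stable even when the time step far exceeds the explicit-method limit, at the cost of solving a linear system at every step (the difficulty identified in part (b)).

```python
import numpy as np

# Crank-Nicolson for dC/dt = D d2C/dx2 on 0 < x < L with C = 0 at both ends.
# Illustrative parameters only.
N, D, L = 100, 1.0, 10.0
dx = L/(N + 1)
dt = 0.05                      # D*dt/dx**2 ~ 5, far above the explicit limit of 1/2
alpha = D*dt/(2*dx**2)

x = np.linspace(dx, L - dx, N)           # interior grid points
C = np.sin(np.pi*x/L)                    # initial profile with a known exact decay rate

# Scheme: (I - alpha*Lap) C(t+dt) = (I + alpha*Lap) C(t),
# where Lap is the usual second-difference matrix.
Lap = (np.diag(-2*np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1))
A = np.eye(N) - alpha*Lap                # implicit (future-time) side
B = np.eye(N) + alpha*Lap                # explicit (present-time) side

steps = 100                              # advance to t = steps*dt = 5
for _ in range(steps):
    C = np.linalg.solve(A, B @ C)        # one linear solve per time step

exact = np.exp(-D*(np.pi/L)**2*steps*dt)*np.sin(np.pi*x/L)
print("maximum error at t = 5:", np.abs(C - exact).max())
```

In practice the tridiagonal system is solved with a specialized solver rather than a dense one, but the dense solve keeps the sketch short.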

Friday, September 30, 2011

Wilson Greatbatch (1919-2011)

This week we lost a giant of engineering: Wilson Greatbatch, inventor of the implantable cardiac pacemaker.

The cardiac pacemaker represents one of the most important contributions of physics and engineering to medicine. In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the pacemaker.
Cardiac pacemakers are a useful treatment for certain heart diseases [Jeffrey (2001); Moses et al. (2000); Barold (1985)]. The most frequent are an abnormally slow pulse rate (bradycardia) associated with symptoms such as dizziness, fainting (syncope), or heart failure. These may arise from a problem with the SA node (sick sinus syndrome) or with the conduction system (heart block)….

A pacemaker can be used temporarily or permanently. The pacing electrode can be threaded through a vein from the shoulder to the right ventricle (transvenous pacing, Fig. 7.31) or placed directly in the myocardium during heart surgery.
Several years ago, I taught a class about pacemakers and defibrillators as part of Oakland University’s honors college. The class was designed to challenge our top undergraduates, but not necessarily those majoring in science. Among the readings for the class was a profile of Wilson Greatbatch in the March 1995 issue of IEEE Spectrum (Volume 32, Pages 56–61). The article tells the story of Greatbatch’s first implantable pacemaker:
Greatbatch was on one team that had been summoned by William C. Chardack, chief of surgery at Buffalo’s Veterans Administration Hospital, to deal with a blood oximeter. The engineers could not help with that problem, but the meeting for the inventor was momentous: finally, after many previous attempts, he had met a surgeon who was enthusiastic about prospects for an implantable pacemaker. The surgeon estimated such a device might save 10,000 lives a year.

Three weeks later, on May 7, 1958, the engineer brought what would become the world’s first implantable cardiac pacemaker to the animal lab at Chardack’s hospital. There Chardack and another surgeon, Andrew Gage, exposed the heart of a dog, to which Greatbatch touched the two pacemaker wires. The heart proceeded to beat in synchrony with the device, made with two Texas Instruments 910 transistors. Chardack looked at the oscilloscope, looked back at the animal, and said, “Well, I’ll be damned.”
Machines in Our Hearts, by Kirk Jeffrey.
Another source the honors college students studied from was Kirk Jeffrey’s excellent book Machines in Our Hearts: The Cardiac Pacemaker, the Implantable Defibrillator, and American Health Care. Jeffrey tells the long history of how pacemakers and defibrillators were developed. In a chapter titled “Multiple Invention of Implantable Pacemakers” he describes Greatbatch’s contributions as well as those of others, including Elmqvist and Senning in Sweden. Jeffrey writes
If theirs [Chardack and Greatbatch] was not the only pacemaker of the 1950s, it appears to be the only one that survives today in the collective memory of the community of physicians, engineers, and businesspeople whose careers are tied to the pacemaker… The Chardack-Greatbatch pacemaker stood out from other prototype implantables of the late 1950s not because it was first or clearly a better design, but because it succeeded in the U.S. market as did no other device.
Jeffrey also discusses at length Greatbatch’s contributions to developing the lithium battery.
Because of his prestige in the pacing community and his effectiveness as a champion of technology he believed in, Greatbatch was able almost single-handedly to turn the industry to lithium; in fact, by 1978, a survey of pacing practices indicated that only 5 percent of newly implanted pulse generators still used mercury-zinc batteries.
Greatbatch was inducted into the National Inventors Hall of Fame in 1986. His citation says
Wilson Greatbatch invented the cardiac pacemaker, an innovation selected in 1983 by the National Society of Professional Engineers as one of the two major engineering contributions to society during the previous 50 years. Greatbatch has established a series of companies to manufacture or license his inventions, including Greatbatch Enterprises, which produces most of the world's pacemaker batteries.

Invention Impact

His original pacemaker patent resulted in the first implantable cardiac pacemaker, which has led to heart patient survival rates comparable to that of a healthy population of similar age.

Inventor Bio

Born in Buffalo, New York, Greatbatch received his preliminary education at public schools in West Seneca, New York. In 1936 he entered military service and served in the Atlantic and Pacific theaters during World War II. He was honorably discharged with the rating of aviation chief radioman in 1945. He attended Cornell University and graduated with a B.E.E. in electrical engineering in 1950. Greatbatch received a master's from the State University of New York at Buffalo in 1957 and was awarded honorary doctor's degrees from Houghton College in 1970 and State University of New York at Buffalo in 1984. Although trained as an electrical engineer, Greatbatch has primarily studied interdisciplinary areas combining engineering with medical electronics, agricultural genetics, the electrochemistry of pacemaker batteries, and the electrochemical polarization of physiological electrodes.
Below are some links related to Wilson Greatbatch that you might find useful.
An article about Greatbatch published by the Lemelson Center for the Study of Invention and Innovation: http://invention.smithsonian.org/centerpieces/ilives/lecture09.html

A video about Greatbatch produced by the Vega Science Trust: http://www.vega.org.uk/video/programme/248

Biography of Wilson Greatbatch on the Heart Rhythm Society website: http://www.hrsonline.org/News/ep-history/notable-figures/wilsongreatbatch.cfm

New York Times obituary: http://www.nytimes.com/2011/09/28/business/wilson-greatbatch-pacemaker-inventor-dies-at-92.html

BBC obituary: http://www.bbc.co.uk/news/world-us-canada-15085056

A video honoring Wilson Greatbatch, the 1996 Lemelson-MIT Lifetime Achievement Award Winner: https://www.youtube.com/embed/WLZBl118Ads

Friday, September 23, 2011

Optical Mapping

In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I mention an optical technique that is used to measure the transmembrane potential in the heart.
Experimental measurements of the transmembrane potential often rely on the use of a voltage sensitive dye whose fluorescence changes with the transmembrane potential [Knisley et al. (1994); Neunlist and Tung (1995); Rosenbaum and Jalife (2001)].
This method, often called optical mapping, has revolutionized cardiac electrophysiology, because it allows you to use optical methods to make electrical measurements. If you want to learn more, take a look at the book Optical Mapping of Cardiac Excitation and Arrhythmias, edited by David Rosenbaum and José Jalife (2001). The chapters in this book were written by the stars of this field.
  1. Optical Mapping: Background and Historical Perspective. Guy Salama.
  2. Mechanisms and Principles of Voltage-Sensitive Fluorescence. Leslie M. Loew.
  3. Optical Properties of Cardiac Tissue. William T. Baxter.
  4. Optics and Detectors Used in Optical Mapping. Kenneth R. Laurita and Imad Libbus.
  5. Optimization of Temporal Filtering for Optical Transmembrane Potential Signals. Francis X. Witkowski, Patricia A. Penkoske, and L. Joshua Leon.
  6. Optical Mapping of Impulse Propagation within Cardiomyocytes. Herbert Windisch.
  7. Optical Mapping of Impulse Propagation between Cardiomyocytes. Stephan Rohr and Jan P. Kucera.
  8. Role of Cell-to-Cell Coupling, Structural Discontinuities, and Tissue Anisotropy in Propagation of the Electrical Impulse. André G. Kléber, Stephan Rohr, and Vladimir G. Fast.
  9. Optical Mapping of Impulse Propagation in the Atrioventricular Node: 1. Todor N. Mazgalev and Igor R. Efimov.
  10. Optical Mapping of Impulse Propagation in the Atrioventricular Node: 2. Guy Salama and Bum-Rak Choi.
  11. Optical Mapping of Microscopic Propagation: Clinical Insights and Applications. Albert L. Waldo.
  12. Mapping Arrhythmia Substrates Related to Repolarization: 1. Dispersion of Repolarization. Kenneth R. Laurita, Joseph M. Pastore, and David S. Rosenbaum.
  13. Mapping Arrhythmia Substrates Related to Repolarization: 2. Cardiac Wavelength. Steven Girouard and David S. Rosenbaum.
  14. Video Imaging of Cardiac Fibrillation. Richard A. Gray and José Jalife.
  15. Video Mapping of Spiral Waves in the Heart. William T. Baxter and Jorge M. Davidenko.
  16. Video Imaging of Wave Propagation in a Transgenic Mouse Model of Cardiomyopathy. Faramarz Samie, Gregory E. Morley, Dhananjay Vaidya, Karen L. Vikstrom, and José Jalife.
  17. Optical Mapping of Cardiac Arrhythmias: Clinical Insights and Applications. Douglas L. Packer.
  18. Response of Cardiac Myocytes to Electrical Fields. Leslie Tung.
  19. New Perspectives in Electrophysiology from The Cardiac Bidomain. Shien-Fong Lin and John P. Wikswo, Jr.
  20. Mechanisms of Defibrillation: 1. Influence of Fiber Structure on Tissue Response to Electrical Stimulation. Stephen B. Knisley.
  21. Mechanisms of Defibrillation: 2. Application of Laser Scanning Technology. Stephen M. Dillon.
  22. Mechanisms of Defibrillation: 3. Virtual Electrode-Induced Wave Fronts and Phase Singularities; Mechanisms of Success and Failure of Internal Defibrillation. Igor R. Efimov and Yuanna Cheng.
  23. Optical Mapping of Cardiac Defibrillation: Clinical Insights and Applications. Douglas P. Zipes.
For those who are tired of reading, two videos have recently been published in the Journal of Visualized Experiments that explain the technique step-by-step. One video is about studying the rabbit heart and the other is about the mouse heart. These excellent video clips were filmed in the laboratory of Igor Efimov, of Washington University in Saint Louis.

My former graduate student, Debbie Janks, is now a post doc in Efimov’s lab. Regular readers of this blog may recognize Janks’ name, as she provides many insightful comments following these blog entries. Janks studied optical mapping from a theoretical perspective when she was here at Oakland University. She published a nice paper that examined the question of averaging over depth during optical mapping. The optical method does not measure the transmembrane potential at the tissue surface. Rather, light penetrates some distance into the tissue, and the optical signal is a weighted average of the transmembrane potential over depth. Janks looked at the effect of this averaging during an electrical shock. Rather than explaining the whole story, I will present it as a new homework problem. That way, you can figure it out for yourself. Enjoy.
Section 7.10

Problem 47 1/2 The signal measured during optical mapping, V, is a weighted average of the transmembrane potential, Vm(z), as a function of depth, V = ∫₀^∞ Vm(z) w(z) dz, where w(z) is a normalized weighting function. Suppose the light decays with depth exponentially, with an optical length constant δ. Then w(z) = exp(−z/δ)/δ. Often a shock will cause Vm(z) to fall off exponentially with depth, Vm(z) = Vo exp(−z/λ), where Vo is the transmembrane potential at the tissue surface and λ is the electrical length constant (see Sec. 6.12).
(a) Perform the required integration to find an analytical expression for the optical signal, V, as a function of Vo, δ and λ.
(b) What is V in the case δ much less than λ? Explain this result physically.
(c) What is V in the case δ much greater than λ? Explain this result physically.
(d) For which limit do you obtain an accurate measurement of the transmembrane potential at the surface, V=Vo?
In cardiac tissue, δ is usually on the order of a millimeter, whereas λ is more like a quarter of a millimeter, so averaging over depth significantly distorts the measured signal. For a more detailed analysis of this problem, see Janks and Roth (2002).
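If you would rather see numbers than do the integral, here is a quick numerical check (my own sketch, not part of the problem) using the representative values quoted above, δ ≈ 1 mm and λ ≈ 0.25 mm, with Vo set to 1.

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate V = integral from 0 to infinity of Vm(z) w(z) dz,
# with w(z) = exp(-z/delta)/delta and Vm(z) = Vo*exp(-z/lam).
delta, lam, Vo = 1.0, 0.25, 1.0          # mm, mm, arbitrary units

V, _ = quad(lambda z: Vo*np.exp(-z/lam)*np.exp(-z/delta)/delta, 0, np.inf)
print("V / Vo =", V/Vo)   # well below 1: depth averaging distorts the surface value

# Swapping in a delta much smaller than lam (or much larger) shows the two
# limits asked about in parts (b) and (c).
```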

Friday, September 16, 2011

Does cell biology need physicists?

The American Physical Society has an online journal, Physics, with the goal of making recent research accessible to a wide audience. The journal website states:
Physics highlights exceptional papers from the Physical Review journals. To accomplish this, Physics features expert commentaries written by active researchers who are asked to explain the results to physicists in other subfields. These commissioned articles are edited for clarity and readability across fields and are accompanied by explanatory illustrations.
One recent paper that caught my eye was an essay written by Charles Wolgemuth, titled “Does Cell Biology Need Physicists?” Wolgemuth asks key questions in the introduction to his essay.
The past has shown that cell biologists are extremely capable of making great progress without much need for physicists (other than needing physicists and engineers to develop many of the technologies that they use). Do the new data and new technological capabilities require a physicist’s viewpoint to analyze the mechanisms of the cell? Is physics of use to cell biology?
Later in the essay, Wolgemuth asks his central question in a more specific way:
It is possible that the physics that cells must deal with is slave to the reactions; i.e., the protein levels and kinetics of the biochemical reactions determine the behavior of the system, and any physical processes that a cell must accomplish are purely consequences of the biochemistry. Or, could it be that cellular biology cannot be fully understood without physics?
Readers of the 4th edition of Intermediate Physics for Medicine and Biology are likely to scream “Yes!” to these questions. I too enthusiastically answer yes, but I agree with Wolgemuth that it is proper to ask such basic questions occasionally.

I should add that Russ Hobbie and I tend to look primarily at macroscopic phenomena in Intermediate Physics for Medicine and Biology, such as the biomechanics of walking with a cane, the interpretation of an electrocardiogram, or the algorithm required to form an image of the brain using a CAT scan. We occasionally look at events on the atomic scale, but for the most part we ignore molecular biophysics. Yet, the cellular scale is an interesting intermediate level that is becoming a fertile field for the applications of physics to biology. Indeed, I examined this issue when discussing the textbook Physical Biology of the Cell last year in this blog. The discussion that Russ and I give to fluid dynamics, diffusion, and bioelectricity in Intermediate Physics for Medicine and Biology is relevant to cellular topics.

To answer his question, Wolgemuth provides five examples in which physics provides key insights into cellular biology: 1) Molecular motors, 2) Cellular movement, 3) How cells swim, 4) Cell growth and division, and 5) How cells interact with the environment. One of my favorite parts of the essay is the consideration of potential pitfalls for physicists in biology.
Fifteen years ago, around the time that I began working in biophysics, there were very few collaborations between physicists and cell biologists, especially if the physicists were theorists. Theory was, and still is to a good degree, a word that should be avoided in the presence of biologists. Those of us who use math and computers to try to understand how cells work tend to call ourselves modelers instead of theorists. My suspicion is that many of the first physicists and mathematicians who tried to develop models for how biology works attempted to be too abstract or too general. As physicists we like to try to find universal laws, and though there are undoubtedly general principles at play in cell biology, it is likely that there are no real universal rules. Evolution need not find only one way to do something but more often probably finds many. Rather than search out generalities, we will serve biology better if we deal with specifics. As Aharon Katchalsky, who is largely credited with bringing nonequilibrium thermodynamics to biology, purportedly said, “It is easier to make a theory of everything than a theory of something.”

In recent years, physicists have done a much better job at addressing specific problems in biology. However, there still remains a divide between the two communities. Indeed, good physical biology that comes out of the physics community often goes unnoticed or is under appreciated. The burden falls on us to properly convey our work so as to be accessible to biologists. We need to make conscious efforts at communication and dissemination of our results. Two useful approaches toward this end are to publish in broader audience journals that reach both communities, and for papers that contain theoretical analyses to provide a qualitative description of the modeling in the main text, while leaving the more mathematical details for the appendices or supplemental material (for further discussion of this topic, see Ref. [55]). It is also of prime importance to maintain and to forge new connections between physicists and biologists.
Wolgemuth comes closest to answering his own questions near the end of the essay, where he predicts
To be truly successful, we must provide an understanding of biology that spans the gorge from biochemistry and genetics to cellular function, and do it in such a way that our models and experiments are not only informative about physics, but directly impact biology.

Cell biology is awaiting these descriptions. And it may be that physicists are the most able to draw these connections between the protein level description of cellular biology that currently dominates and a more intuitive, yet still quantitative, description of the behavior of cells and their responses to their environments.

Friday, September 9, 2011

Radon Transform

In Chapter 12 of the 4th Edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce the Radon transformation. It consists of finding the projections F(θ, x') at different angles θ from a function f(x,y). But why is it called the “Radon” transformation, and does it have anything to do with the radioactive gas radon discussed in Chapter 16?
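As a concrete illustration (my own sketch, not taken from the book), the projections of a simple test image can be computed numerically by rotating the image and summing along columns. The disk test object and the use of scipy.ndimage.rotate are illustrative choices; packages such as skimage.transform.radon do the same job more carefully.

```python
import numpy as np
from scipy.ndimage import rotate

# Compute projections F(theta, x') of a test image f(x, y) by rotating
# the image and summing along one axis (a crude Radon transform).
N = 128
xx, yy = np.meshgrid(np.arange(N) - N/2, np.arange(N) - N/2)
f = ((xx - 20)**2 + yy**2 < 25**2).astype(float)   # an off-center disk

angles = np.arange(0, 180, 1)                      # projection angles in degrees
sinogram = np.empty((len(angles), N))
for i, theta in enumerate(angles):
    f_rot = rotate(f, theta, reshape=False, order=1)
    sinogram[i] = f_rot.sum(axis=0)                # line integrals along columns

# Each row of `sinogram` is one projection F(theta, x'); the full stack is
# the data a CT scanner collects before reconstructing f(x, y).
print(sinogram.shape)
```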

Well, it has nothing to do with the element radon. Instead, and predictably, the term honors Johann Radon, the Austrian mathematician who investigated this transformation. In “A Tribute to Johann Radon” in the IEEE Transactions on Medical Imaging (Volume 5, Page 169, 1986, reproduced long after his death to honor his memory) Hans Hornich wrote
With the death in Vienna on 25 May 1956 of Dr. Johann Radon, Professor of the University of Vienna, not only the mathematical world and Austrian science but also the German Mathematical Union has suffered a severe loss, as have also many other scientific bodies of which the deceased was a prominent member, and who spent most of his teaching life in German universities.

Radon was born in the small town of Tetschen in Bohemia near the border of Saxony on December 16, 1887. He studied at Vienna University where, alongside Mertens and Wirtinger, Escherich above all was the great influence on Radon’s development: Escherich had, as one of the first in Austria, imparted to his students the world of ideas of Weierstrass and his rigorous foundations of analysis. Through Escherich, Radon was led next to variational calculus….

A few years later appeared his “Habilitationsschrift” “Theory and application of absolute additive weighting functions” (S. Ber. math. naturw., Kl. K. Akad. Wiss. Wien II Abt., vol. 122, pp. 1295–1438, 1913), which played a leading role in the development of analysis; the Radon integral and the Radon theorem laid the foundations of functional analysis. As an application Radon somewhat later treated the first and second boundary value problem of the logarithmic potential in a very general way.
The Radon transformation has important applications in medical imaging, and plays a crucial role in computed tomography, positron emission tomography, and single photon emission tomography. I found a nice layman’s description of the Radon transform in an essay at the website http://www.ams.org/samplings/feature-column/fcarc-tomography, written by Bill Casselman.
The original example of this sort of technology [involving a collaboration between medicine and mathematics], and the ancestor of many of these technologies, is what is now called computed tomography, for which Allan Cormack, a physicist whose research became more and more mathematical as time went on, laid down the theoretical foundations around 1960. He shared the 1979 Nobel prize in medicine for his work in this field.

In fact the basic idea of tomography had been discovered for purely theoretical reasons in 1917 by the Austrian mathematician Johann Radon, and it had been rediscovered several times since by others, but Cormack was not to know this until much later than his own independent discovery. The problem he solved is this: Suppose we know all the line integrals through a body of varying density. Can we reconstruct the body itself? The answer, perhaps surprisingly, is that we can, and furthermore we can do so constructively. In practical terms, we know that a single X-ray picture can give only limited information because certain things are obscured by other, heavier things. We might take more X-ray pictures in the hope that we can somehow see behind the obscuring objects, but it is not at all obvious that by taking a lot—really, a lot—of X-ray pictures we can in effect even see into objects, which is what Radon tells us, at least in principle. Making Radon’s theorem into a practical tool was not a trivial matter.
You can listen to a lecture on tomography and inverting the Radon transform here.

Friday, September 2, 2011

Fraunhofer Diffraction

Last week’s blog entry discussed Fresnel diffraction, which Russ Hobbie and I analyzed in the 4th edition of Intermediate Physics for Medicine and Biology when we examined the ultrasonic pressure distribution produced near a circular piezoelectric transducer. This week, I will analyze diffraction far from the wave source, known as Fraunhofer diffraction, named for the German scientist Joseph von Fraunhofer.

The mathematics of Fraunhofer diffraction is a bit too complicated to derive here, but the gist of it can be found by inspecting the first equation in Section 13.7 (Medical Uses of Ultrasound), found at the bottom of the left column on page 351. The pressure is found by integrating 1/r times a cosine function over the transducer face. When you are far from the transducer, r in the 1/r amplitude factor is approximately constant and can be taken out of the integral; the variation of r across the transducer face survives only in the phase of the cosine. What remains is an integral of a cosine whose phase varies linearly with position over the transducer area, which is essentially the two-dimensional Fourier transform defined in Chapter 12 (Images). The far field pressure distribution produced by a circular transducer given in Eq. 13.40 is the same Bessel function result as derived in Problem 10 of Chapter 12.

The intensity distribution in Eq. 13.40 is known as the Airy pattern, after the English scientist and mathematician George Biddell Airy. As shown in Fig. 13.15, the pattern consists of a central peak, surrounded by weaker secondary maxima. The Airy pattern occurs during imaging using a circular aperture, such as when viewing stars through a telescope. Two adjacent stars appear as two Airy patterns. Distinguishing the two stars is difficult unless the separation between the images is greater than the separation between the peak of the Airy pattern and its first zero. This is called the Rayleigh criterion, after Lord Rayleigh. Rayleigh (1842–1919, born John William Strutt)—one of those 19th century English Victorian physicists I like so much—did fundamental work in acoustics, and published the classic textbook Theory of Sound.
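For readers who want to see the Airy pattern for themselves, here is a short sketch (my own, not from the book) that computes the pattern I(x) = [2 J1(x)/x]² and locates its first zero, which sets the Rayleigh criterion. Here x is the usual dimensionless radial variable, proportional to k a sinθ for a circular aperture of radius a.

```python
import numpy as np
from scipy.special import j1, jn_zeros

# The Airy pattern for a circular aperture: I(x) = [2*J1(x)/x]**2,
# where x is proportional to k*a*sin(theta) for aperture radius a.
x = np.linspace(1e-6, 10, 10001)     # start just above 0 to avoid 0/0
I = (2*j1(x)/x)**2

# Locate the first dark ring numerically and compare with the exact
# first zero of J1, which is what sets the Rayleigh criterion.
mask = x < 5
print("first minimum of the pattern near x =", x[mask][np.argmin(I[mask])])
print("first zero of J1:", jn_zeros(1, 1)[0])     # ~3.832

# Rayleigh criterion: two point sources are just resolved when the peak
# of one Airy pattern falls on the first zero of the other.
```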