Friday, December 30, 2011

Wilhelm Roentgen

The medical use of X rays is one of the main topics discussed in the 4th edition of Intermediate Physics for Medicine and Biology. However, Russ Hobbie and I don’t say much about the discoverer of X rays, Wilhelm Roentgen (1845-1923). Let me be more precise: we never mention Roentgen at all, despite his winning the first ever Nobel Prize in Physics in 1901. We do refer to the unit bearing his name, but in an almost disparaging way:
Problem 8 The obsolete unit, the roentgen (R), is defined as 2.08 × 10⁹ ion pairs produced in 0.001 293 g of dry air. (This is 1 cm³ of dry air at standard temperature and pressure.) Show that if the average energy required to produce an ion pair in air is 33.7 eV (an old value), then 1 R corresponds to an absorbed dose of 8.69 × 10⁻³ Gy and that 1 R is equivalent to 2.58 × 10⁻⁴ C kg⁻¹.
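For readers who want to check these numbers, here is a short Python sketch of the conversion. The constants are standard values I have assumed (not text from our book), and the arithmetic is just the definition of absorbed dose (energy per mass) and exposure (charge per mass).

```python
# Verifying the conversions in Problem 8 (a sketch, using assumed standard constants).
e = 1.602e-19          # elementary charge, C (also converts eV to J)
n_pairs = 2.08e9       # ion pairs produced per roentgen
mass = 1.293e-6        # mass of 1 cm^3 of dry air at STP, kg (0.001 293 g)
W = 33.7               # old value of the energy per ion pair, eV

energy_J = n_pairs * W * e          # total energy deposited, J
dose = energy_J / mass              # absorbed dose, Gy (J/kg)
charge_per_kg = n_pairs * e / mass  # exposure, C/kg

print(f"dose     = {dose:.3e} Gy")      # ~8.69e-3 Gy
print(f"exposure = {charge_per_kg:.3e} C/kg")  # ~2.58e-4 C/kg
```

Both results match the values quoted in the problem.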
Roentgen’s story is told in Asimov’s Biographical Encyclopedia of Science and Technology. (My daughter gave me a copy of this book for Christmas this year; Thanks, Kathy!)
“…The great moment that lifted Roentgen out of mere competence and made him immortal came in the autumn of 1895 when he was head of the department of physics at the University of Wurzburg in Bavaria. He was working on cathode rays and repeating some of the experiments of Lenard and Crookes. He was particularly interested in the luminescence these rays set up in certain chemicals.

In order to observe the faint luminescence, he darkened the room and enclosed the cathode ray tube in thin black cardboard. On November 5, 1895, he set the enclosed cathode ray tube into action and a flash of light that did not come from the tube caught his eye. He looked up and quite a distance from the tube he noted that a sheet of paper coated with barium platinocyanide was glowing. It was one of the luminescent substances, but it was luminescing now even though the cathode rays, blocked off by the cardboard, could not possibly be reaching it.

He turned off the tube; the coated paper darkened. He turned it on again; it glowed. He walked into the next room with the coated paper, closed the door, and pulled down the blinds. The paper continued to glow while the tube was in operation…

For seven weeks he experimented furiously and then, finally, on December 28, 1895 [116 years ago this week], submitted his first paper, in which he not only announced the discovery but reported all the fundamental properties of X rays...

The first public lecture on the new phenomenon was given by Roentgen on January 23, 1896. When he had finished talking, he called for a volunteer, and Kolliker, almost eighty years old at the time, stepped up. An X-ray photograph was taken of his hand—which shows the bones in beautiful shape for an octogenarian. There was wild applause, and interest in X rays swept over Europe and America.”
You can learn more about X rays in Chapter 15 (Interaction of Photons and Charged Particles with Matter) and Chapter 16 (Medical Use of X Rays) in Intermediate Physics for Medicine and Biology.

Friday, December 23, 2011

Poisson's Ratio

One of the many new problems that Russ Hobbie and I added to the 4th Edition of Intermediate Physics for Medicine and Biology deals with Poisson's Ratio. From Chapter 1:
Problem 25 Figure 1.20, showing a rod subject to a force along its length, is a simplification. Actually, the cross-sectional area of the rod shrinks as the rod lengthens. Let the axial strain and stress be along the z axis. They are related by Eq. 1.25, s_z = E ε_z. The lateral strains ε_x and ε_y are related to s_z by s_z = −(E/ν) ε_x = −(E/ν) ε_y, where ν is called the Poisson’s ratio of the material.
(a) Use the result of Problem 13 to relate E and ν to the fractional change in volume ΔV/V.
(b) The change in volume caused by hydrostatic pressure is the sum of the volume changes caused by axial stresses in all three directions. Relate Poisson’s ratio to the compressibility.
(c) What value of ν corresponds to an incompressible material?
(d) For an isotropic material, -1 ≤ ν ≤ 0.5. How would a material with negative ν behave?
Elliott et al. (2002) measured Poisson’s ratio for articular (joint) cartilage under tension and found 1 ≤ ν ≤ 2. This large value is possible because cartilage is anisotropic: Its properties depend on direction.
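If you work through part (a), you should find that for small strains the fractional volume change is just the sum of the three strains. Here is a little Python sketch (my own, not from the book) showing how the sign of ν controls whether stretching a rod increases or conserves its volume:

```python
# Small-strain volume change of a rod under axial strain eps_z,
# with lateral strains eps_x = eps_y = -nu * eps_z (a sketch).
def volume_change(eps_z, nu):
    """Fractional volume change Delta V / V in the small-strain limit."""
    eps_x = eps_y = -nu * eps_z
    return eps_x + eps_y + eps_z   # equals (1 - 2*nu) * eps_z

print(volume_change(0.01, 0.3))   # ordinary material: volume increases
print(volume_change(0.01, 0.5))   # nu = 0.5: incompressible, Delta V = 0
print(volume_change(0.01, -0.3))  # negative nu: even larger volume increase
```

Note how ν = 0.5 gives zero volume change, answering part (c), and a negative ν (as in Lakes' foams) makes the rod fatten as it stretches.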
The citation is to a paper by Dawn Elliott, Daria Narmoneva and Lori Setton, Direct Measurement of the Poisson’s Ratio of Human Patella Cartilage in Tension, in the Journal of Biomechanical Engineering, Volume 124, Pages 223-228, 2002. (Apologies to Dr. Narmoneva, whose name was misspelled in our book. It is now corrected in the errata, available at the book website.)

As hinted at in our homework problem, a particularly fascinating type of material has negative Poisson's Ratio. Some foams expand laterally, rather than contract, when you stretch them; see Roderic Lakes, Foam Structures with a Negative Poisson’s Ratio, Science, Volume 235, Pages 1038-1040, 1987. A model for such a material is shown in this video. Lakes’ website contains much interesting information about Poisson’s ratio. For instance, cork has a Poisson ratio of nearly zero, making it ideal for stopping wine bottles.

Simeon Denis Poisson (1781-1840) was a French mathematician and physicist whose name appears several times in Intermediate Physics for Medicine and Biology. Besides Poisson’s ratio, in Chapter 9 Russ and I present the Poisson Equation in electrostatics, and its extension the Poisson-Boltzmann Equation governing the electric field in salt water. Appendix J reviews the Poisson Probability Distribution. Finally, Poisson appeared in this blog before, albeit as something of a scientific villain, in the story of Poisson’s spot. Poisson is one of the 72 names appearing on the Eiffel Tower.

Friday, December 16, 2011

Gadolinium

While school children know the most famous elements listed in the periodic table (for example hydrogen, oxygen, and carbon), even many scientists are unfamiliar with those rare earth elements down at the bottom of the table, listed under the generic label of lanthanides. But one of these, gadolinium (Gd, element 64), has become crucial for modern medicine because of its use as a contrast agent during magnetic resonance imaging. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss gadolinium in Chapter 18.
“Differences in relaxation time are easily detected in an image. Different tissues have different relaxation times. A contrast agent containing gadolinium (Gd3+), which is strongly paramagnetic, is often used in magnetic resonance imaging. It is combined with many of the same pharmaceuticals used with 99mTc, and it reduces the relaxation time of nearby nuclei.”
In 1999, Peter Caravan and his coworkers published a major review article about the uses of gadolinium in imaging, which has been cited over 1500 times (Gadolinium(III) Chelates as MRI Contrast Agents: Structure, Dynamics, and Applications, Chemical Reviews, Volume 99, Pages 2293-2352). The review is well written, and I reproduce the introduction below.
“Gadolinium, an obscure lanthanide element buried in the middle of the periodic table, has in the course of a decade become commonplace in medical diagnostics.
Like platinum in cancer therapeutics and technetium in cardiac scanning, the unique magnetic properties of the gadolinium(III) ion placed it right in the middle of a revolutionary development in medicine: magnetic resonance imaging (MRI). While
it is odd enough to place patients in large superconducting magnets and noisily pulse water protons in their tissues with radio waves, it is odder still to inject into their veins a gram of this potentially toxic metal ion which swiftly floats among the water
molecules, tickling them magnetically.

The successful penetration of gadolinium(III) chelates into radiologic practice and medicine as a whole can be measured in many ways. Since the approval of [Gd(DTPA)(H2O)]2- in 1988, it can be estimated that over 30 metric tons of gadolinium have been administered to millions of patients worldwide. Currently, approximately 30% of MRI exams include the use of contrast agents, and this is projected to increase as new agents and applications arise; Table 1 lists agents currently approved or in clinical trials. In the rushed world of modern medicine, radiologists, technicians, and nurses often refrain from calling the agents by their brand names, preferring instead the affectionate 'gado.' They trust this clear, odorless 'magnetic light', one of the safest class of drugs ever developed. Aside from the cost ($50-80/bottle), asking the nurse to 'Give him some gado' is as easy as starting a saline drip or obtaining a blood sample.

Gadolinium is also finding a place in medical research. When one of us reviewed the field in its infancy, in 1987, only 39 papers could be found for that year in a Medline search for 'gado-' and MRI. Ten years later over 600 references appear each year. And as MRI becomes relied upon by different specialties, 'gado' is becoming known by neurologists, cardiologists, urologists, ophthalmologists, and others in search of new ways to visualize functional changes in the body.

While other types of MRI contrast agents have been approved, namely an iron particle-based agent and a manganese(II) chelate, gadolinium(III) remains the dominant starting material. The reasons for this include the direction of MRI development and the nature of Gd chelates.”
In Section 18.12 about Functional MRI, Russ and I again mention gadolinium.
"Magnetic resonance imaging provides excellent structural information. Various contrast agents can provide information about physiologic function. For example, various contrast agents containing gadolinium are injected intravenously. They leak through a damaged blood-tissue barrier and accumulate in the damaged region. At small concentrations T1 is shortened."
Here at Oakland University, several of our Biomedical Sciences: Medical Physics PhD students study brain injury using this method. See, for instance, the dissertation Magnetic Resonance Imaging Investigations of Ischemic Stroke, Intracerebral Hemorrhage and Blood-Brain Barrier Pathology by Kishor Karki (2009).

Friday, December 9, 2011

The Cyclotron

The 4th edition of Intermediate Physics for Medicine and Biology has its own Facebook group, and any readers of this blog who use Facebook are welcome to join. One nice feature of Facebook is that it encourages comments, such as the recent one that asked “Why isn't there a chapter or a subchapter in the textbook 'Intermediate physics for medicine and biology' that refers to the fundamental concepts of the cyclotron and the betatron and how are they used in medicine?” This is a good question, because undoubtedly cyclotrons are important in nuclear medicine. I can’t do anything to change the 4th edition of our book, but this blog provides an opportunity to address such comments, and to try out possible text for a 5th edition.

Although the term does not appear in the index (oops…), the cyclotron is mentioned in Intermediate Physics for Medicine and Biology at the end of Section 17.9 (Radiopharmaceuticals and Tracers).
“Other common isotopes are 201Tl, 67Ga, and 123I. Thallium, produced in a cyclotron, is chemically similar to potassium and is used in heart studies, though it is being replaced by 99mTc-sestamibi and 99mTc-tetrofosmin. Gallium is used to image infections and tumors. Iodine is also produced in a cyclotron and is used for thyroid studies.”
Cyclotrons are again mentioned in Section 17.14 (Positron Emission Tomography)
“Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals. Patterson and Mosley (2005) found that 97% of the people in the United States live within 75 miles of a clinical PET facility.”
(Note: on page 513 of our book, we omitted the word “emission” from the phrase “positron emission tomography” in the title of the Patterson and Mosley paper; again, oops…)

Perhaps the best place in Intermediate Physics for Medicine and Biology to discuss cyclotrons would be after Section 8.1 (The Magnetic Force on a Moving Charge). Below is some sample text that serves as a brief introduction to cyclotrons.
8.1 ½ The Cyclotron

One important application of magnetic forces in medicine is the cyclotron. Many hospitals have a cyclotron for the production of radiopharmaceuticals, or for the generation of positron emitting nuclei for use in Positron Emission Tomography (PET) imaging (see Chapter 17).

Consider a particle of charge q and mass m, moving with speed v in a direction perpendicular to a magnetic field B. The magnetic force will bend the path of the particle into a circle. Newton’s second law states that the mass times the centripetal acceleration, v²/r, is equal to the magnetic force

m v²/r = q v B . (8.4a)

The speed is equal to the circumference of the circle, 2 π r, divided by the period of the orbit, T. Substituting this expression for v into Eq. 8.4a and simplifying, we find

T = 2 π m/(q B) . (8.4b)

In a cyclotron particles orbit at the cyclotron frequency, f = 1/T. Because the magnetic force is perpendicular to the motion, it does not increase the particles’ speed or energy. To do that, the particles are subjected periodically to an electric field that must change direction with the cyclotron frequency so that it is always accelerating, and not decelerating, the particles. This would be difficult if not for the fortuitous disappearance of both v and r from Eq. 8.4b, so that the cyclotron frequency only depends on the charge-to-mass ratio of the particles and the magnetic field, but not on their energy.

Typically, protons are accelerated in a magnetic field of about 1 T, resulting in a cyclotron frequency of approximately 15 MHz. Each orbit raises the potential of the proton by about 100 kV, and it must circulate enough times to raise its total energy to at least 10 MeV so that it can overcome the electrostatic repulsion of the target nucleus and cause nuclear reactions. For example, the high-energy protons may be incident on a target of 18O (a rare but stable isotope of oxygen), initiating a nuclear reaction that results in the production of 18F, an important positron emitter used in PET studies.
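You can verify the numbers in this sample text yourself. Here is a quick Python sketch (the proton charge and mass are standard values I have assumed; the 100 kV per orbit and 10 MeV figures come from the paragraph above):

```python
import math

# Cyclotron frequency of a proton in a 1 T field (Eq. 8.4b, f = 1/T),
# plus the number of orbits to reach 10 MeV at 100 kV per orbit (a sketch).
q = 1.602e-19    # proton charge, C
m = 1.673e-27    # proton mass, kg
B = 1.0          # magnetic field, T

f = q * B / (2 * math.pi * m)    # cyclotron frequency, Hz
print(f"f = {f/1e6:.1f} MHz")    # ~15 MHz, as quoted above

orbits = 10e6 / 100e3            # 10 MeV total / 0.1 MeV gained per orbit
print(f"orbits = {orbits:.0f}")  # ~100 revolutions
```

Note that f contains neither v nor r, which is the fortuitous property that makes the fixed-frequency accelerating field work.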
Since Intermediate Physics for Medicine and Biology is not a history book, I didn’t mention the interesting history of the cyclotron, which was invented by Ernest Lawrence in the early 1930s, for which he received the Nobel Prize in Physics in 1939. The American Institute of Physics Center for the History of Physics has a nice website about Lawrence’s invention. The same story is told, perhaps more elegantly, in Richard Rhode’s masterpiece The Making of the Atomic Bomb (see Chapter 6, “Machines”). Lawrence played a major role in the Manhattan Project, using modified cyclotrons as massive mass spectrometers to separate the fissile uranium isotope 235U from the more abundant 238U.

Finally, I think it is appropriate that Intermediate Physics for Medicine and Biology should have a section about the cyclotron, because my coauthor Russ Hobbie (who was the sole author of the first three editions of the textbook) obtained his PhD while working at the Harvard Cyclotron. Thus, an unbroken path leads from Ernest Lawrence and the cyclotron to the publication of our book and the writing of this blog.

Friday, December 2, 2011

Feedback Loops

Negative feedback is an important concept in physiology. Russ Hobbie and I discuss feedback loops in Chapter 10 of the 4th edition of Intermediate Physics for Medicine and Biology. In the text and homework problems, we discuss several examples of negative feedback, including the regulation of breathing rate by the concentration of carbon dioxide in the alveoli, the prevention of overheating of the body by sweating, and the control of blood glucose levels by insulin. You can never have enough of these examples. Therefore, here is another homework problem related to negative feedback: regulation of blood osmolarity by antidiuretic hormone. Warning: the model is greatly simplified. It should be correct qualitatively, but not accurate quantitatively.

Section 10.3

Problem 15 ½ The osmolarity of plasma (C, in mosmole) is regulated by the concentration of antidiuretic hormone (ADH, in pg/ml, also known as vasopressin). As antidiuretic hormone increases, the kidney reabsorbs more water and the plasma osmolarity decreases, C=700/ADH. When osmoreceptors in the hypothalamus detect an increase of plasma osmolarity, they stimulate the pituitary gland to produce more antidiuretic hormone, ADH = C-280 for C greater than 280, and zero otherwise.
(a) Draw a block diagram of the feedback loop, including accurate plots of the two relationships.
(b) Calculate the operating point and the open loop gain (you may need to use four to six significant figures to determine the operating point accurately).
(c) Suppose the behavior of the kidney changed so now C=750/ADH. First determine the new value of C if the regulation of ADH is not functioning (ADH is equal to that found in part b), and then determine the value of C taking regulation of ADH by the hypothalamus into account.
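For those who want to check part (b), here is a Python sketch (my own, not from the book). Combining C = 700/ADH with ADH = C − 280 gives the quadratic C² − 280 C − 700 = 0, whose positive root is the operating point; the slopes of the two relations there give the loop gain (this is my reading of the open-loop-gain definition in Sec. 10.3).

```python
import math

# Operating point of the osmolarity feedback loop (a sketch):
#   C = 700/(C - 280)  =>  C^2 - 280 C - 700 = 0
C = 140 + math.sqrt(140**2 + 700)   # positive root, mosmole
ADH = C - 280                       # pg/ml
print(f"C   = {C:.4f}")
print(f"ADH = {ADH:.4f}")

# Slopes of the two relations at the operating point
dC_dADH = -700 / ADH**2   # kidney relation, C = 700/ADH
dADH_dC = 1.0             # hypothalamus relation, ADH = C - 280
print(f"loop slope product = {dC_dADH * dADH_dC:.1f}")
```

Note that the operating point sits only a few mosmole above 280, so you do indeed need several significant figures, as the problem warns.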

You should find that this feedback loop is very effective at holding the blood osmolarity constant. For more about osmotic effects, see Chapter 5 of Intermediate Physics for Medicine and Biology.

Here is how Guyton and Hall describe the physiological details of this feedback loop in their Textbook of Medical Physiology (11th edition):
“When osmolarity (plasma sodium concentration) increases above normal because of water deficit, for example, this feedback system operates as follows:

1. An increase in extracellular fluid osmolarity (which in practical terms means an increase in plasma sodium concentration) causes the special nerve cells called osmoreceptor cells, located in the anterior hypothalamus near the supraoptic nuclei, to shrink.

2. Shrinkage of the osmoreceptor cells causes them to fire, sending nerve signals to additional nerve cells in the supraoptic nuclei, which then relay these signals down the stalk of the pituitary gland to the posterior pituitary.

3. These action potentials conducted to the posterior pituitary stimulate the release of ADH, which is stored in secretory granules (or vesicles) in the nerve endings.

4. ADH enters the blood stream and is transported to the kidneys, where it increases the water permeability of the late distal tubules, cortical collecting tubules, and the medullary collecting ducts.

5. The increased water permeability in the distal nephron segments causes increased water reabsorption and excretion of a small volume of concentrated urine.

Thus, water is conserved in the body while sodium and other solutes continue to be excreted in the urine. This causes dilution of the solutes in the extracellular fluid, thereby correcting the initial excessively concentrated extracellular fluid.”
Feedback loops are central to physiology. Guyton and Hall write in their first introductory chapter
“Thus, one can see how complex the feedback control systems of the body can be. A person’s life depends on all of them. Therefore, a major share of this text is devoted to discussing these life-giving mechanisms.”

Friday, November 25, 2011

The Second Law of Thermodynamics

Russ Hobbie and I discuss thermodynamics in Chapter 3 of the 4th edition of Intermediate Physics for Medicine and Biology. We take a statistical perspective (similar to that used so effectively by Frederick Reif in Statistical Physics, which is Volume 5 of the Berkeley Physics Course), and discuss many topics such as heat, temperature, entropy, the Boltzmann factor, Gibbs free energy, and the chemical potential. But only at the very end of the chapter do we mention the central concept of thermodynamics: The Second Law.
“In some cases, thermal energy can be converted into work. When gas in a cylinder is heated, it expands against a piston that does work. Energy can be supplied to an organism and it lives. To what extent can these processes, which apparently contradict the normal increase of entropy, be made to take place? The questions can be stated in a more basic form.

1. To what extent is it possible to convert internal energy distributed randomly over many molecules into energy that involves a change of a macroscopic parameter of the system? (How much work can be captured from the gas as it expands against the piston?)

2. To what extent is it possible to convert a random mixture of simple molecules into complex and highly organized macromolecules?

Both these questions can be reformulated: under what conditions can the entropy of a system be made to decrease?

The answer is that the entropy of a system can be made to decrease if, and only if, it is in contact with one or more auxiliary systems that experience at least a compensating increase in entropy. Then the total entropy remains the same or increases. This is one form of the second law of thermodynamics. For a fascinating discussion of the second law, see Atkins (1994).”
The book by Peter Atkins, The Second Law, is published by the Scientific American Library, and is aimed at a general audience. It is a wonderful book, and provides the best non-mathematical description of thermodynamics I know of. Atkins’ preface begins
“No other part of science has contributed as much to the liberation of the human spirit as the Second Law of thermodynamics. Yet, at the same time, few other parts of science are held to be so recondite. Mention of the Second Law raises visions of lumbering steam engines, intricate mathematics, and infinitely incomprehensible entropy. Not many would pass C. P. Snow’s test of general literacy, in which not knowing the Second Law is equivalent to not having read a work of Shakespeare.

“In this book I hope to go some way toward revealing the workings of the Law, and showing its span of application. I start with the steam engine, and the acute observations of the early scientists, and I end with a consideration of the processes of life. By looking under the classical formulation of the Law we see its mechanism. As soon as we do so, we realize how simple it is to comprehend, and how wide is its application. Indeed, the interpretation of the Second Law in terms of the behavior of molecules is not only straightforward (and in my opinion much easier to understand than the First Law, that of the conservation of energy), but also much more powerful. We shall see that the insight it provides lets us go well beyond the domain of classical thermodynamics, to understand all the processes that underlie the richness of the world.”
Atkins’s book is at the level of a Scientific American article, with many useful (and colorful) pictures and historical anecdotes. The writing is excellent. For instance, consider this excerpt:
“The Second Law recognizes that there is a fundamental dissymmetry in Nature…hot objects cool, but cool objects do not spontaneously become hot; a bouncing ball comes to rest, but a stationary ball does not spontaneously begin to bounce. Here is the feature of Nature that both Kelvin and Clausius disentangled from the conservation of energy: although the total quantity of energy must be conserved in any process…, the distribution of that energy changes in an irreversible manner….”
I particularly like Atkins’ analysis of the equivalence of two statements of the Second Law: No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work (Kelvin statement); and No process is possible in which the sole result is the transfer of energy from a cooler to a hotter body (Clausius statement). Atkins writes
“The Clausius statement, like the Kelvin statement, identifies a fundamental dissymmetry in Nature, but ostensibly a different dissymmetry. In the Kelvin statement the dissymmetry is that between work and heat; in the Clausius statement there is no overt mention of work. The Clausius statement implies a dissymmetry in the direction of natural change: energy may flow spontaneously down the slope of temperature, not up. The twin dissymmetries are the anvils on which we shall forge the description of all natural change.”
Peter Atkins has written several books, including another of my favorites: Peter Atkins’ Molecules. Here is a video of Atkins discussing his book Four Laws That Drive the Universe. Not surprisingly, the four laws are the laws of thermodynamics.

Friday, November 18, 2011

Plessey Semiconductor Electric Potential Integrated Circuit

The electrocardiogram, or ECG, is one of the most common and useful tools for diagnosing heart arrhythmias. Russ Hobbie and I discuss the ECG in Chapter 7 (The Exterior Potential and the Electrocardiogram) of the 4th edition of Intermediate Physics for Medicine and Biology. The November issue of the magazine IEEE Spectrum contains an article by Willie D. Jones about new instrumentation for measuring the ECG. Jones writes
“In October, Plessey Semiconductors of Roborough, England, began shipping samples of its Electric Potential Integrated Circuit (EPIC), which measures minute changes in electric fields. In videos demonstrating the technology, two sensors placed on a person’s chest delivered electrocardiogram (ECG) readings. No big deal, you say? The sensors were placed on top of the subject’s sweater, and in future iterations, the sensors could be integrated into clothes or hospital gurneys so that vital signs could be monitored continuously—without cords, awkward leads, hair-pulling sticky tape, or even the need to remove the patient’s clothes.”
Apparently the Plessey device is an ultra high input impedance voltmeter. The electrode is capacitively coupled to the body, so no electrical contact is necessary. You can learn more about it by watching this video. I don’t want to sound like an advertisement for Plessey Semiconductors, but I think this device is neat. (I have no relationship with Plessey, and I have no knowledge of the quality of their product, other than what I saw in the IEEE Spectrum article and the video that Plessey produced.)

According to the Plessey press release, “most places on earth have a vertical electric field of about 100 Volts per metre. The human body is mostly water and this interacts with the electric field. EPIC technology is so sensitive that it can detect these changes at a distance and even through a solid wall.”

I don’t have any inside information about this device, but let me guess how it can detect a person at a distance. The body would perturb a surrounding electric field because it is mostly saltwater, and therefore a conductor. In Section 9.10 of Intermediate Physics for Medicine and Biology, Russ and I explain how a conductor interacts with applied electric fields. For the case of a dc field, the conducting tissue completely shields the interior of the body from the field. To understand how a body could affect an electric field, try solving the following new homework problem

Section 9.10

Problem 34 ½ Consider how a spherical conductor, of radius a, perturbs an otherwise uniform electric field, E₀. The conductor is at a uniform potential, which we take as zero. As in Problem 34, assume that the electric potential V outside the conductor is V = A cosθ/r² − E₀ r cosθ.
(a) Use the boundary condition that the potential is continuous at r=a to determine the constant A.
(b) In the direction θ=0, determine the upward component of the electric field, - dV/dr.
(c) The perturbation of the electric field by the conductor is the difference between the fields with and without the conductor present. Calculate this difference. How does it depend on r?
(d) Suppose you measure the voltage in two locations separated by 10 cm, and that your detector can reliably detect voltage differences of 1 mV. How far from the center of a 1 m radius conductor can you be (assuming θ=0) and still detect the perturbation caused by the conductor?
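Here is a numerical sketch of part (d) in Python (my own estimate, not a solution from the book). It assumes the perturbation potential you should obtain in part (a), V_p = E₀ a³ cosθ/r², along with the 100 V/m atmospheric field, a 1 m sphere, a 10 cm electrode separation, and a 1 mV threshold:

```python
# How far away can a grounded 1 m sphere be detected? (A sketch with
# assumed values; V_p = E0 * a**3 / r**2 along the theta = 0 direction.)
E0 = 100.0        # ambient atmospheric field, V/m
a = 1.0           # sphere radius, m
dr = 0.10         # separation of the two voltage electrodes, m
threshold = 1e-3  # smallest detectable voltage difference, V

def dV(r):
    """Perturbation potential difference between r and r + dr (theta = 0)."""
    return E0 * a**3 * (1.0/r**2 - 1.0/(r + dr)**2)

# march outward from the sphere until the signal drops below threshold
r = a
while dV(r) >= threshold:
    r += 0.01
print(f"detectable out to roughly r = {r:.1f} m")
```

The answer comes out to tens of meters, which suggests why Plessey can claim detection "at a distance and even through a solid wall."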
You may be wondering why there is a 100 V/m electric field at the earth’s surface. The Feynman Lectures (Volume 2, Chapter 9) has a nice discussion about electricity in the atmosphere. The reason that this electric field exists is complicated, and has to do with 1) charging of the earth by lightning, and 2) charge separation in falling raindrops.

Friday, November 11, 2011

The Making of the Pacemaker: Celebrating a Lifesaving Invention

I’m still thinking about Wilson Greatbatch, one of the inventors of the implantable pacemaker, who died a few weeks ago (see my September 30 blog entry honoring him). Here is an interesting excerpt from his book The Making of the Pacemaker: Celebrating a Lifesaving Invention, about how he created the circuit in the first pacemaker.
“My marker oscillator used a 10k basebias resistor. I reached into my resistor box for one but misread the colors and got a brown-black-green (one megohm) instead of a brown-black-orange. The circuit started to ‘squeg’ with a 1.8 ms pulse, followed by a one second quiescent interval. During the interval, the transistor was cut off and drew practically no current. I stared at the thing in disbelief and then realized that this was exactly what was needed to drive a heart. I built a few more. For the next five years, most of the world’s pacemakers used a blocking oscillator with a UTC DOT-1 transformer, just because I grabbed the wrong resistor.”
Here is another story from the book about how he met William Chardack, his primary collaborator in developing the first pacemaker.
“In Buffalo we had the first local chapter in the world of the Institute of Radio Engineers, Professional Group in Medical Electronics (the IRE/PBME, now the Biomedical Engineering Society of the Institute of Electrical and Electronic Engineers [IEEE]). Every month twenty-five to seventy-five doctors and engineers met for a technical program. We strove to attract equal numbers of doctors and engineers. We had a standing offer to send an engineering team to assist any doctor who had an instrumentation problem. I went with one team to visit Dr. Chardack on a problem deadline with a blood oximeter. Imagine my surprise to find that his assistant was my old high school classmate, Dr. Andrew Gage. We couldn’t help Dr. Chardack much with his oximeter problem, but when I broached my pacemaker idea to him, he walked up and down the lab a couple times, looked at me strangely, and said, ‘If you can do that, you can save ten thousand lives a year.’ Three weeks later we had our first model implanted in a dog."
This excerpt is interesting:
“I had $2,000 in cash and enough set aside to feed my family for two years. I put it to the Lord in prayer and felt led to quit all my jobs and devote my time to the pacemaker. I gave the family money to my wife. I then took the $2,000 and went up into my wood-heated barn workshop. In two years I built fifty pacemakers, forty of which went into animals and ten into patients. We had no grant funding and asked for none. The program was successful. We got fifty pacemakers for $2,000. Today, you can’t buy one for that.”
This one may be my favorite. You gotta love Eleanor. They were married in 1945 and stayed together until her death in January of this year.
“Many of the early Medtronic programs were first worked out in Clarence, New York, and then taken to Minneapolis. I had two ovens set up in my bedroom. My wife did much of the testing. The shock test consisted of striking the transistor with a wooden pencil while measuring beta (current gain). We found that a metal pencil could wreck the transistor, but a wooden pencil could not. Many mornings I would awake to the cadence of my wife Eleanor tap, tap, tapping the transistors with her calibrated pencil. For some months every transistor that was used worldwide in Medtronic pacemakers got tapped in my bedroom.”
You can learn more about pacemakers and defibrillators in the 4th edition of Intermediate Physics for Medicine and Biology.

Friday, November 4, 2011

Countercurrent Heat Exchange

Problem 17 in Chapter 5 of the 4th edition of Intermediate Physics for Medicine and Biology considers a countercurrent heat exchanger. Countercurrent transport in general is discussed in Section 5.8 in terms of the movement of particles. However, Russ Hobbie and I conclude the section by applying the concept to heat exchange.
“The principle [of countercurrent exchange] is also used to conserve heat in the extremities—such as a person’s arms and legs, whale flippers, or the leg of a duck. If a vein returning from an extremity runs closely parallel to the artery feeding the extremity, the blood in the artery will be cooled and the blood in the vein warmed. As a result, the temperature of the extremity will be lower and the heat loss to the surroundings will be reduced."
Problem 17 provides an example of this behavior, and cites Knut Schmidt-Nielsen’s book How Animals Work (1972, Cambridge University Press), which describes countercurrent exchange in more detail. (His comments below about the nose refer to an earlier section of the book, in which Schmidt-Nielsen discusses heat exchange in the nose of the kangaroo rat).
“The heat exchange in the nose has a great similarity to the well-known countercurrent heat exchange which takes place, for example, in the extremities of many aquatic animals, such as in the flippers of whales and the legs of wading birds. The body of a whale that swims in water near the freezing point is well insulated with blubber, but the thin streamlined flukes and flippers are uninsulated and highly vascularized and would have an excessive heat loss if it were not for the exchange of heat between arterial and venous blood in these structures. As the cold venous blood returns to the body from the flipper, the vessels run in close proximity to the arteries, in fact, they completely surround the artery, and heat from the arterial blood flows into the returning venous blood, which is thus reheated before it returns to the body (figure 3). Similarly, in the limbs of many animals both arteries and veins split up into a large number of parallel, intermingled vessels each with a diameter of about 1 mm or so, forming a discrete vascular bundle known as a rete…Whether the blood vessels form such a rete system, or in some other way run in close proximity, as in the flipper of the whale, is a question of design and does not alter the principle of the heat recovery mechanism. The blood flows in opposite directions in the arteries and veins, and heat exchange takes place between the two parallel sets of tubes; the system is therefore known as a countercurrent heat exchanger.”
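The heat-saving effect Schmidt-Nielsen describes can be illustrated with a toy model: two antiparallel streams with equal flow rates, each changing temperature in proportion to the local arteriovenous difference. This is only a sketch; the exchange coefficient and temperatures below are illustrative numbers, not physiological data.

```python
def tip_heat_loss(k, L=1.0, T_body=37.0, T_tip_return=5.0, n=1000):
    """Countercurrent exchanger with equal flows. The artery runs from x=0
    to x=L at temperature Ta; the vein runs back from L to 0 at Tv. With
    equal heat-capacity rates both obey dT/dx = -k (Ta - Tv). We shoot on
    the unknown vein temperature at x=0 so that Tv(L) matches the cold
    blood returning from the tip. Heat lost at the tip is proportional to
    Ta(L) - Tv(L)."""
    dx = L / n

    def integrate(Tv0):
        Ta, Tv = T_body, Tv0
        for _ in range(n):
            d = k * (Ta - Tv) * dx
            Ta, Tv = Ta - d, Tv - d   # difference Ta - Tv stays constant
        return Ta, Tv

    lo, hi = T_tip_return, T_body     # bisection on the vein exit temperature
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        _, TvL = integrate(mid)
        if TvL > T_tip_return:
            hi = mid
        else:
            lo = mid
    Ta_L, Tv_L = integrate(0.5 * (lo + hi))
    return Ta_L - Tv_L

print(round(tip_heat_loss(k=0.0), 1))  # no exchange: 32.0 degrees lost at the tip
print(round(tip_heat_loss(k=4.0), 1))  # countercurrent: 6.4, a five-fold saving
```

With no exchange (k = 0) all the heat carried by arterial blood to the tip is lost; with strong countercurrent exchange most of it is transferred to the returning venous blood before it ever reaches the extremity, which is exactly the mechanism in the whale flipper.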
Schmidt-Nielsen also wrote Scaling: Why is Animal Size So Important?, which Russ and I cite often in Chapter 2 and which I included in my top ten list of biological physics books. I have also read Schmidt-Nielsen's autobiography The Camel’s Nose: Memoirs of a Curious Scientist. (See the review of this book in the New England Journal of Medicine.) His Preface begins
“This is a personal story of a life spent in science. It tells about curiosity, about finding out and finding answers. The questions I have tried to answer have been very straightforward, perhaps even simple. Do marine birds drink sea water? How do camels in hot deserts manage for days without drinking when humans could not survive without water for more than a day? How can kangaroo rats live in the desert without any water to drink? How can snails find water and food in the most barren deserts? Can crab-eating frogs really survive in sea water?

These are important questions. The answers not only tell us how animals overcome seemingly insurmountable obstacles in hostile environments; they also give us insight into general principles of life and survival.”
Schmidt-Nielsen died in 2007, and Steven Vogel (whom I quoted in last week’s blog entry) wrote an article about him for the Biographical Memoirs of Fellows of the Royal Society (volume 54, pages 319-331, 2008). See also his obituary in the Journal of Experimental Biology. A statue of Schmidt-Nielsen with a camel (which he famously studied) graces the Duke University campus.

Friday, October 28, 2011

Murray’s Law

Homework Problem 33 in Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology is about Murray’s law, a relationship describing the radii of branching vessels.
“A parent vessel of radius Rp branches into two daughter vessels of radii Rd1 and Rd2. Find a relationship between the radii such that the shear stress on the vessel wall is the same in each vessel. (Hint: Use conservation of the volume flow.) This relationship is called ‘Murray’s Law’. Organisms may use shear stress to determine the appropriate size of vessels for fluid transport [LaBarbera (1990)].”
The reference is to
LaBarbera, M. (1990). Principles of design of fluid transport systems in zoology. Science, 249:992-1000.
In his book Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, Steven Vogel provides a clear and engaging discussion of Murray’s law.
“Our problem of figuring the cheapest arrangement of pipes turns out to involve nothing more nor less than calculating the relative dimensions of pipes so that the steepness of the speed gradient at all walls is the same. This calculation was done by Cecil D. Murray, of Bryn Mawr College, back in 1926, and is spoken of, when (uncommonly) it’s mentioned, as ‘Murray’s law’.
Murray’s law isn’t especially complicated, and anyone with a hand calculator can play around with it (but you can ignore the specifics without missing the present message). The rule is that the cube of the radius of the parental vessel equals the sum of the cubes of the radii of the daughter vessels. If a pipe with a radius of two units splits into a pair of pipes, each of the pair ought to have a radius of about 1.6 units. (To check, cube 1.6 and then double the result—you get about 2 cubed.) The daughters are smaller, but only a little (Figure 5.6). Still, if the parental one eventually divides into a hundred progeny, the progeny do come out substantially smaller, each about a fifth of the radius of the parent. (Their aggregate cross-section area is, of course, greater than the parental one—to be specific, four fold greater.)

The relationship predicts the relative sizes of both our arteries and our veins quite well. It only fails for the very smallest arterioles and capillaries….

It would be indefensibly anthropocentric to suppose that we’re the only creatures to follow Mr. Murray. My friend, Michael LaBarbera (who introduced me to the whole issue) has tested the law on several systems that are very unlike us structurally and functionally, and very distant from us evolutionarily…Murray’s law again proves applicable…

The mechanism … is becoming clear. Without getting into the details, it looks as if the cells lining the blood vessels can quite literally sense changes in the speed gradient next to them. An increase in the speed of flow through a vessel increases the speed gradient at its walls. An increase in gradient stimulates cell division, which would increase vessel diameter as appropriate to offset the faster flow. Neither change in blood pressure nor cutting the nerve supply makes any difference—this is apparently a direct effect of the gradient on synthesis of some chemical signal by the cells. Perhaps the neatest feature of the scheme is that a cell needn’t know anything about the size of the vessel of which it’s a part. As a consequence of Murray’s Law, it can be given the same specific instruction wherever it might be located, a command telling it to divide when the speed gradient exceeds a specific value.”
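Vogel’s arithmetic is easy to check with a few lines of Python. (A quick sketch; the radii are the illustrative “units” from the passage, not measured vessels.)

```python
# Murray's law: r_parent^3 = sum of the r_daughter^3
r_parent = 2.0

# Split into two equal daughters
r2 = (r_parent**3 / 2) ** (1 / 3)
print(round(r2, 2))                # 1.59 -- Vogel's "about 1.6 units"

# Split into one hundred equal progeny
r100 = (r_parent**3 / 100) ** (1 / 3)
print(round(r100 / r_parent, 2))   # 0.22 -- "about a fifth of the radius"

# Aggregate cross-sectional area of the progeny relative to the parent
area_ratio = 100 * r100**2 / r_parent**2
print(round(area_ratio, 1))        # 4.6 -- the roughly four-fold increase
```

In general, splitting one vessel into n equal daughters under Murray’s law multiplies the total cross-sectional area by n^(1/3), which is why flow slows so dramatically by the time blood reaches the capillary bed.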
Vogel is a faculty member in the Biology Department at Duke University. He has published several fine books, including Vital Circuits quoted above and the delightful Life in Moving Fluids (Princeton University Press, 1994), both cited in Intermediate Physics for Medicine and Biology.

Friday, October 21, 2011

A Useful Website

While I have many goals when writing this blog (with the top being to sell textbooks!), sometimes I simply like to point out useful websites relevant to readers of the 4th edition of Intermediate Physics for Medicine and Biology. One example is the website of Rob MacLeod, a professor of bioengineering at the University of Utah. MacLeod’s research, like mine, centers on the numerical simulation of cardiac electrophysiology, so we find many of the same topics interesting.

I particularly enjoy his list of “Background Links for Rob’s Courses”. You will find many books listed, some of which Russ Hobbie and I cite in Intermediate Physics for Medicine and Biology, and some that we don’t cite but should. For example, MacLeod speaks highly of the book Mathematical Physiology by Keener and Sneyd, but somehow Russ and I never reference it. I didn’t know Malmivuo and Plonsey’s book Bioelectromagnetism (which we do cite) is now available online and free of charge. The Wellcome Trust Heart Atlas is beautiful, as is the Virtual Heart website. MacLeod’s list of books about “Cardiology and Medicine” looks fascinating, with a heavy emphasis on the relevant history and biography. If I start running out of topics for these blog posts, I could probably find a year of material by exploring the sources listed on this page.

If you visit MacLeod's website (and I hope you do), make sure to click on the link “Information on Writing”. I am an admirer of good writing, especially in nonfiction, and am frustrated when presented with a poorly written scientific book or paper. (I review a lot of papers for journals, and often find myself venting and fuming.) My advice to a young scientist is: Learn To Write. Throughout your scientific career you will be judged primarily on your papers and your grant proposals, which are both written documents. Maybe your science is so good that it can overcome poor writing and still impress the reader, but I doubt it. Learn to write.

Friday, October 14, 2011

Bethesda

A couple months ago I went to Bethesda, Maryland to review grant proposals for the National Institutes of Health. They swear us to secrecy, so I can’t divulge any details about the specific research. But I will share a few general observations.
  1. Winston Churchill said that “Democracy is the worst form of government except all the others that have been tried.” That sums up my opinion of the NIH review process. There are all sorts of problems with the way we select the best research to fund, but I cannot think of a better way than that used by NIH. Each time I participate, I come away with a great respect for the process. Of course, from the outside the review process can resemble a casino, but I don’t see how you can eliminate some randomness while at the same time keeping the process fair, with wide input, and a focus on the significance and impact of the research.
  2. If you are a young biomedical researcher, or hope to be one someday, then you should take advantage of any opportunity to review grant proposals. It is like going to grant writing school. No book, no website, no video, no workshop is more useful for learning how to prepare a proposal. It is a lot of work, but you will gain much, especially the first time or two you do it. However, if you simply are not able to participate in a review panel, then at least watch this video, which is a fairly accurate description of what goes on.
  3. After reviewing grant proposals, I am optimistic about the future of the scientific enterprise in the United States, because of all the fascinating and important research being proposed. I am also pessimistic about my chances for winning additional funding, because the competition is so fierce. But, we must soldier on. To quote Churchill again, “Never give in, never give in, never, never, never, never.” So I’ll keep trying.
  4. Research is becoming more and more interdisciplinary, and many proposals now come from multidisciplinary teams. Each individual researcher cannot know everything, but they must know enough to understand each other, and to talk to each other intelligently. I believe this is one of the virtues of the 4th edition of Intermediate Physics for Medicine and Biology. It helps bridge the gap between physicists and engineers on the one side, and biologists and medical doctors on the other. The book won’t turn a physicist into a biologist, but it may help a physicist talk to and better appreciate a biologist. This is crucial for performing modern collaborative research, and for obtaining funding to pay for that research. After reviewing all those proposals, I came away proud of our textbook.
We finished our review session a couple hours earlier than anticipated, so I used the time to visit the new Martin Luther King Memorial in Washington, DC. It is just across the tidal basin from the Jefferson Memorial, and the statues of King and Jefferson stare at each other across the water. If you happen to be going to DC soon, prepare yourself for a shock. The beautiful Reflecting Pool between the Washington Monument and the Lincoln Memorial is now a dried-up, plowed-up mud flat. Apparently they are renovating it. But the other attractions are as beautiful as ever, including the Vietnam Veterans Memorial, the Korean War Veterans Memorial, the National World War II Memorial, and the Franklin Delano Roosevelt Memorial. I even saw one I had somehow missed in previous visits: the George Mason Memorial, near the Jefferson Memorial. All this sightseeing was a little bonus after reviewing all those grants (packed into two frantic hours between leaving the review session and reaching the airport).

Friday, October 7, 2011

The Mathematics of Diffusion

Diffusion is one of those topics that is rarely covered in an introductory physics class, but is essential for understanding biology. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss diffusion and its biomedical applications. One of the books we cite is The Mathematics of Diffusion by John Crank. Hard-core mathematical physicists who are interested in biology and medicine will find Crank’s book to be a good fit. Physiologists who want to avoid as much mathematical analysis as possible may prefer to learn their diffusion from Random Walks in Biology, by Howard Berg.

Crank died five years ago this week. Like Wilson Greatbatch, whom I discussed in my last blog entry, Crank was one of those scientists who came of age serving in the military during World War Two (Tom Brokaw would call them members of the “Greatest Generation”). Crank’s 2006 obituary in the British newspaper The Telegraph states:
“John Crank was born on February 6 1916 at Hindley, Lancashire, the only son of a carpenter's pattern-maker. He studied at Manchester University, where he gained his BSc and MSc. At Manchester he was a student of the physicist Lawrence Bragg, the youngest-ever winner of a Nobel prize, and of Douglas Hartree, a leading numerical analyst.

Crank was seconded to war work during the Second World War, in his case to work on ballistics. This was followed by employment as a mathematical physicist at Courtaulds Fundamental Research Laboratory from 1945 to 1957. He was then, from 1957 to 1981, professor of mathematics at Brunel University (initially Brunel College in Acton).

Crank published only a few research papers, but they were seminal. Even more influential were his books. His work at Courtaulds led him to write The Mathematics of Diffusion, a much-cited text that is still an inspiration for researchers who strive to understand how heat and mass can be transferred in crystalline and polymeric material. He subsequently produced Free and Moving Boundary Problems, which encompassed the analysis and numerical solution of a class of mathematical models that are fundamental to industrial processes such as crystal growth and food refrigeration.”
Crank is best known for a numerical technique to solve equations like the diffusion equation, developed with Phyllis Nicolson and known as the Crank-Nicolson method. The algorithm has the advantage that it is numerically stable, which can be shown using von Neumann stability analysis. They published this method in a 1947 paper in the Proceedings of the Cambridge Philosophical Society:
Crank, J., and P. Nicolson (1947) A practical method for numerical evaluation of solutions of partial differential equations of the heat conduction type. Proc. Camb. Phil. Soc. 43:50–67.
Rather than describe the Crank-Nicolson method, I will let the reader explore it in a new homework problem.
Section 4.8

Problem 24 ½ The numerical approximation for the diffusion equation, derived as part of Problem 24, has a key limitation: it is unstable if the time step is too large. This problem can be avoided using the Crank-Nicolson method. Replace the first time derivative in the diffusion equation with a finite difference, as was done in Problem 24. Next, replace the second space derivative with the finite difference approximation from Problem 24, but instead of evaluating the second derivative at time t, use the average of the second derivative evaluated at times t and t+Δt.
(a) Write down this numerical approximation to the diffusion equation, analogous to Eq. 4 in Problem 24.

(b) Explain why this expression is more difficult to compute than the expression given in the first two lines of Eq. 4. Hint: consider how you determine C(t+Δt) once you know C(t).

The difficulty you discover in part (b) is offset by the advantage that the Crank-Nicolson method is stable for any time step. For more information about the Crank-Nicolson method, stability, and other numerical issues, see Press et al. (1992).
The citation is to my favorite book on computational methods: Numerical Recipes (of course, the link is to the FORTRAN 77 version, which is the edition that sits on my shelf).
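For readers who want to experiment before (or after) working the problem, here is a minimal sketch of the Crank-Nicolson method for the one-dimensional diffusion equation, in plain Python. The grid sizes, diffusion constant, and sine-wave initial condition are my own choices for illustration, not part of the homework problem. The tridiagonal solve is the extra work that part (b) hints at; the Thomas algorithm handles it in O(N) operations per time step.

```python
import math

def crank_nicolson_step(C, r):
    """One Crank-Nicolson step for dC/dt = D d2C/dx2 with C = 0 at both ends.
    C holds the N interior values; r = D*dt/dx**2 (stable for ANY r)."""
    n = len(C)
    # Right-hand side: (1 - r) C_i + (r/2)(C_{i-1} + C_{i+1}), boundaries = 0
    rhs = [(1 - r) * C[i]
           + 0.5 * r * ((C[i-1] if i > 0 else 0.0) +
                        (C[i+1] if i < n - 1 else 0.0))
           for i in range(n)]
    # Solve the tridiagonal system (Thomas algorithm):
    # -r/2 C_{i-1} + (1+r) C_i - r/2 C_{i+1} = rhs_i  at the new time
    a, b, c = -0.5 * r, 1.0 + r, -0.5 * r
    cp = [0.0] * n   # modified upper-diagonal coefficients
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c / b
    dp[0] = rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i-1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i-1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

# Decay of a sine mode: C(x,0) = sin(pi x), exact C(x,t) = exp(-pi^2 t) sin(pi x)
N, dx, dt, D = 49, 0.02, 0.001, 1.0
r = D * dt / dx**2           # r = 2.5: the explicit scheme (stable only for
C = [math.sin(math.pi * (i + 1) * dx) for i in range(N)]   # r <= 1/2) would blow up
for _ in range(100):         # advance to t = 0.1
    C = crank_nicolson_step(C, r)
print(C[24])                 # midpoint x = 0.5; exact value exp(-pi^2 * 0.1) ≈ 0.3727
```

Note that the chosen time step makes r = 2.5, five times larger than the explicit scheme of Problem 24 could tolerate, yet the solution decays smoothly toward the exact answer.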

Friday, September 30, 2011

Wilson Greatbatch (1919-2011)

This week we lost a giant of engineering: Wilson Greatbatch, inventor of the implantable cardiac pacemaker.

The cardiac pacemaker represents one of the most important contributions of physics and engineering to medicine. In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe the pacemaker:
“Cardiac pacemakers are a useful treatment for certain heart diseases [Jeffrey (2001); Moses et al. (2000); Barold (1985)]. The most frequent are an abnormally slow pulse rate (bradycardia) associated with symptoms such as dizziness, fainting (syncope), or heart failure. These may arise from a problem with the SA node (sick sinus syndrome) or with the conduction system (heart block)….

A pacemaker can be used temporarily or permanently. The pacing electrode can be threaded through a vein from the shoulder to the right ventricle (transvenous pacing, Fig. 7.31) or placed directly in the myocardium during heart surgery.”
Several years ago, I taught a class about Pacemakers and Defibrillators as part of Oakland University’s honors college. The class was designed to challenge our top undergraduates, but not necessarily those majoring in science. Among the readings for the class was a profile in the March 1995 issue of IEEE Spectrum about Wilson Greatbatch (Volume 32, Pages 56-61). The article tells the story of Greatbatch’s first implantable pacemaker:
“Greatbatch was on one team that had been summoned by William C. Chardack, chief of surgery at Buffalo's Veteran's Administration Hospital, to deal with a blood oximeter. The engineers could not help with that problem, but the meeting for the inventor was momentous: finally, after many previous attempts, he had met a surgeon who was enthusiastic about prospects for an implantable pacemaker. The surgeon estimated such a device might save 10000 lives a year.

Three weeks later, on May 7, 1958, the engineer brought what would become the world’s first implantable cardiac pacemaker to the animal lab at Chardack's hospital. There Chardack and another surgeon, Andrew Gage, exposed the heart of a dog, to which Greatbatch touched the two pacemaker wires. The heart proceeded to beat in synchrony with the device, made with two Texas Instruments 910 transistors. Chardack looked at the oscilloscope, looked back at the animal, and said, ‘Well, I'll be damned.’ "
Another source the honors college students studied from was Kirk Jeffrey’s excellent book “Machines in Our Hearts: The Cardiac Pacemaker, the Implantable Defibrillator, and American Health Care.” Jeffrey tells the long history of how pacemakers and defibrillators were developed. In a chapter titled “Multiple Invention of Implantable Pacemakers” he describes Greatbatch’s contributions as well as others, including Elmqvist and Senning in Sweden. Jeffrey writes
“If theirs [Chardack and Greatbatch] was not the only pacemaker of the 1950s, it appears to be the only one that survives today in the collective memory of the community of physicians, engineers, and businesspeople whose careers are tied to the pacemaker…The Chardack-Greatbatch pacemaker stood out from other prototype implantables of the late 1950s not because it was first or clearly a better design, but because it succeeded in the U.S. market as did no other device.”
Jeffrey also discusses at length Greatbatch’s contributions to developing the lithium battery.
“Because of his prestige in the pacing community and his effectiveness as a champion of technology he believed in, Greatbatch was able almost single-handedly to turn the industry to lithium; in fact by 1978, a survey of pacing practices indicated that only 5 percent of newly implanted pulse generators still used mercury-zinc batteries.”
Greatbatch was inducted into the National Inventor’s Hall of Fame in 1986. His citation says:
“Wilson Greatbatch invented the cardiac pacemaker, an innovation selected in 1983 by the National Society of Professional Engineers as one of the two major engineering contributions to society during the previous 50 years. Greatbatch has established a series of companies to manufacture or license his inventions, including Greatbatch Enterprises, which produces most of the world's pacemaker batteries.

Invention Impact

His original pacemaker patent resulted in the first implantable cardiac pacemaker, which has led to heart patient survival rates comparable to that of a healthy population of similar age.

Inventor Bio

Born in Buffalo, New York, Greatbatch received his preliminary education at public schools in West Seneca, New York. In 1936 he entered military service and served in the Atlantic and Pacific theaters during World War II. He was honorably discharged with the rating of aviation chief radioman in 1945. He attended Cornell University and graduated with a B.E.E. in electrical engineering in 1950. Greatbatch received a master's from the State University of New York at Buffalo in 1957 and was awarded honorary doctor's degrees from Houghton College in 1970 and State University of New York at Buffalo in 1984. Although trained as an electrical engineer, Greatbatch has primarily studied interdisciplinary areas combining engineering with medical electronics, agricultural genetics, the electrochemistry of pacemaker batteries, and the electrochemical polarization of physiological electrodes.”
Below are some links related to Wilson Greatbatch that you might find useful.

A video honoring Wilson Greatbatch, the 1996 Lemelson-MIT Lifetime Achievement Award Winner: http://www.youtube.com/watch?v=WLZBl118Ads&list=PLE9482E912FAB47BA&index=4

An article about Greatbatch published by the Lemelson Center for the Study of Invention and Innovation: http://invention.smithsonian.org/centerpieces/ilives/lecture09.html

A video about Greatbatch produced by the Vega Science Trust: http://www.vega.org.uk/video/programme/248

Biography of Wilson Greatbatch on the Heart Rhythm Society website:
http://www.hrsonline.org/News/ep-history/notable-figures/wilsongreatbatch.cfm

New York Times obituary: http://www.nytimes.com/2011/09/28/business/wilson-greatbatch-pacemaker-inventor-dies-at-92.html

BBC obituary: http://www.bbc.co.uk/news/world-us-canada-15085056

Friday, September 23, 2011

Optical Mapping

In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I mention an optical technique that is used to measure the transmembrane potential in the heart.
“Experimental measurements of the transmembrane potential often rely on the use of a voltage sensitive dye whose fluorescence changes with the transmembrane potential [Knisley et al. (1994); Neunlist and Tung (1995); Rosenbaum and Jalife (2001)].”
This method, often called optical mapping, has revolutionized cardiac electrophysiology, because it allows you to use optical methods to make electrical measurements. If you want to learn more, take a look at the book Optical Mapping of Cardiac Excitation and Arrhythmias, by David Rosenbaum and Jose Jalife (2001). The chapters in this book were written by the stars of this field.
  1. Optical Mapping: Background and Historical Perspective. Guy Salama.
  2. Mechanisms and Principles of Voltage-Sensitive Fluorescence. Leslie M. Loew.
  3. Optical Properties of Cardiac Tissue. William T. Baxter.
  4. Optics and Detectors Used in Optical Mapping. Kenneth R. Laurita and Imad Libbus.
  5. Optimization of Temporal Filtering for Optical Transmembrane Potential Signals. Francis X. Witkowski, Patricia A. Penkoske, and L. Joshua Leon.
  6. Optical Mapping of Impulse Propagation within Cardiomyocytes. Herbert Windisch.
  7. Optical Mapping of Impulse Propagation between Cardiomyocytes. Stephan Rohr and Jan P. Kucera.
  8. Role of Cell-to-Cell Coupling, Structural Discontinuities, and Tissue Anisotropy in Propagation of the Electrical Impulse. André G. Kléber, Stephan Rohr, and Vladimir G. Fast.
  9. Optical Mapping of Impulse Propagation in the Atrioventricular Node: 1. Todor N. Mazgalev and Igor R. Efimov.
  10. Optical Mapping of Impulse Propagation in the Atrioventricular Node: 2. Guy Salama and Bum-Rak Choi.
  11. Optical Mapping of Microscopic Propagation: Clinical Insights and Applications. Albert L. Waldo.
  12. Mapping Arrhythmia Substrates Related to Repolarization: 1. Dispersion of Repolarization. Kenneth R. Laurita, Joseph M. Pastore, and David S. Rosenbaum.
  13. Mapping Arrhythmia Substrates Related to Repolarization: 2. Cardiac Wavelength. Steven Girouard and David S. Rosenbaum.
  14. Video Imaging of Cardiac Fibrillation. Richard A. Gray and José Jalife.
  15. Video Mapping of Spiral Waves in the Heart. William T. Baxter and Jorge M. Davidenko.
  16. Video Imaging of Wave Propagation in a Transgenic Mouse Model of Cardiomyopathy. Faramarz Samie, Gregory E. Morley, Dhjananjay Vaidya, Karen L. Vikstrom, and José Jalife.
  17. Optical Mapping of Cardiac Arrhythmias: Clinical Insights and Applications. Douglas L. Packer.
  18. Response of Cardiac Myocytes to Electrical Fields. Leslie Tung.
  19. New Perspectives in Electrophysiology from The Cardiac Bidomain. Shien-Fong Lin and John P. Wikswo, Jr..
  20. Mechanisms of Defibrillation: 1. Influence of Fiber Structure on Tissue Response to Electrical Stimulation. Stephen B. Knisley.
  21. Mechanisms of Defibrillation: 2. Application of Laser Scanning Technology. Stephen M. Dillon.
  22. Mechanisms of Defibrillation: 3. Virtual Electrode-Induced Wave Fronts and Phase Singularities; Mechanisms of Success and Failure of Internal Defibrillation. Igor R. Efimov and Yuanna Cheng.
  23. Optical Mapping of Cardiac Defibrillation: Clinical Insights and Applications. Douglas P. Zipes.
For those who are tired of reading, two videos have recently been published in the Journal of Visualized Experiments that explain the technique step-by-step. One video is about studying the rabbit heart and the other is about the mouse heart. These excellent video clips were filmed in the laboratory of Igor Efimov, of Washington University in Saint Louis.

My former graduate student, Debbie Janks, is now a postdoc in Efimov’s lab. Regular readers of this blog may recognize Janks’ name, as she provides many insightful comments following these blog entries. Janks studied optical mapping from a theoretical perspective when she was here at Oakland University. She published a nice paper that examined the question of averaging over depth during optical mapping. The optical method does not measure the transmembrane potential at the tissue surface. Rather, light penetrates some distance into the tissue, and the optical signal is a weighted average of the transmembrane potential over depth. Janks looked at the effect of this averaging during an electrical shock. Rather than explaining the whole story, I will present it as a new homework problem. That way, you can figure it out for yourself. Enjoy.

Section 7.10

Problem 47 1/2 The signal measured during optical mapping, V, is a weighted average of the transmembrane potential, Vm(z), as a function of depth, V = ∫₀^∞ Vm(z) w(z) dz, where w(z) is a normalized weighting function. Suppose the light decays with depth exponentially, with an optical length constant δ. Then w(z) = exp(-z/δ)/δ. Often a shock will cause Vm(z) to fall off exponentially with depth, Vm(z) = Vo exp(-z/λ), where Vo is the transmembrane potential at the tissue surface and λ is the electrical length constant (see Sec. 6.12).
(a) Perform the required integration to find an analytical expression for the optical signal, V, as a function of Vo, δ and λ.
(b) What is V in the case δ much less than λ? Explain this result physically.
(c) What is V in the case δ much greater than λ? Explain this result physically.
(d) For which limit do you obtain an accurate measurement of the transmembrane potential at the surface, V=Vo?
In cardiac tissue, δ is usually on the order of a millimeter, whereas λ is more like a quarter of a millimeter, so averaging over depth significantly distorts the measured signal. For a more detailed analysis of this problem, see Janks and Roth (2002).
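Without giving away the analytic answer to part (a), you can check this distortion numerically. The sketch below evaluates the weighted average by the trapezoid rule using the values quoted above (δ = 1 mm, λ = 0.25 mm, and Vo = 1 for convenience); the integration cutoff and grid are my own choices.

```python
import math

def optical_signal(delta, lam, zmax=20.0, n=200000):
    """Trapezoid-rule estimate of V = integral of Vm(z) w(z) dz with Vo = 1,
    where w(z) = exp(-z/delta)/delta and Vm(z) = exp(-z/lam).
    Lengths in mm; zmax is chosen large enough that the tail is negligible."""
    dz = zmax / n
    f = [math.exp(-z / lam) * math.exp(-z / delta) / delta
         for z in (i * dz for i in range(n + 1))]
    return dz * (sum(f) - 0.5 * (f[0] + f[-1]))

# Values quoted in the post: delta ~ 1 mm, lambda ~ 0.25 mm
print(optical_signal(delta=1.0, lam=0.25))   # ≈ 0.2: the measured signal is
                                             # only a fifth of the surface Vo
```

Try shrinking δ toward zero and you will see V approach Vo, which is the accurate-measurement limit asked about in part (d).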

Friday, September 16, 2011

Does cell biology need physicists?

The American Physical Society has an online journal, Physics, with the goal of making recent research accessible to a wide audience. The journal website states:
“Physics highlights exceptional papers from the Physical Review journals. To accomplish this, Physics features expert commentaries written by active researchers who are asked to explain the results to physicists in other subfields. These commissioned articles are edited for clarity and readability across fields and are accompanied by explanatory illustrations.”
One recent paper that caught my eye was an essay written by Charles Wolgemuth, titled “Does Cell Biology Need Physicists?”. Wolgemuth asks key questions in the introduction to his essay:
“The past has shown that cell biologists are extremely capable of making great progress without much need for physicists (other than needing physicists and engineers to develop many of the technologies that they use). Do the new data and new technological capabilities require a physicist’s viewpoint to analyze the mechanisms of the cell? Is physics of use to cell biology?”
Later in the essay, Wolgemuth asks his central question in a more specific way:
“It is possible that the physics that cells must deal with is slave to the reactions; i.e., the protein levels and kinetics of the biochemical reactions determine the behavior of the system, and any physical processes that a cell must accomplish are purely consequences of the biochemistry. Or, could it be that cellular biology cannot be fully understood without physics?”
Readers of the 4th edition of Intermediate Physics for Medicine and Biology are likely to scream “Yes!” to these questions. I too enthusiastically answer yes, but I agree with Wolgemuth that it is proper to ask such basic questions occasionally.

I should add that Russ Hobbie and I focus primarily on macroscopic phenomena in Intermediate Physics for Medicine and Biology, such as the biomechanics of walking with a cane, the interpretation of an electrocardiogram, or the algorithm needed to form an image of the brain using a CAT scan. We occasionally look at events on the atomic scale, but for the most part we ignore molecular biophysics. Yet the cellular scale is an interesting intermediate level that is becoming fertile ground for applying physics to biology. Indeed, I examined this issue last year in this blog when discussing the textbook Physical Biology of the Cell. The discussions of fluid dynamics, diffusion, and bioelectricity that Russ and I provide in Intermediate Physics for Medicine and Biology are all relevant to cellular topics.

To answer his question, Wolgemuth provides five examples in which physics offers key insights into cellular biology: 1) molecular motors, 2) cellular movement, 3) how cells swim, 4) cell growth and division, and 5) how cells interact with their environment. One of my favorite parts of the essay is its consideration of potential pitfalls for physicists in biology.
“Fifteen years ago, around the time that I began working in biophysics, there were very few collaborations between physicists and cell biologists, especially if the physicists were theorists. Theory was, and still is to a good degree, a word that should be avoided in the presence of biologists. Those of us who use math and computers to try to understand how cells work tend to call ourselves modelers instead of theorists. My suspicion is that many of the first physicists and mathematicians who tried to develop models for how biology works attempted to be too abstract or too general. As physicists we like to try to find universal laws, and though there are undoubtedly general principles at play in cell biology, it is likely that there are no real universal rules. Evolution need not find only one way to do something but more often probably finds many. Rather than search out generalities, we will serve biology better if we deal with specifics. As Aharon Katchalsky, who is largely credited with bringing nonequilibrium thermodynamics to biology, purportedly said, ‘It is easier to make a theory of everything than a theory of something.’

In recent years, physicists have done a much better job at addressing specific problems in biology. However, there still remains a divide between the two communities. Indeed, good physical biology that comes out of the physics community often goes unnoticed or is under appreciated. The burden falls on us to properly convey our work so as to be accessible to biologists. We need to make conscious efforts at communication and dissemination of our results. Two useful approaches toward this end are to publish in broader audience journals that reach both communities, and for papers that contain theoretical analyses to provide a qualitative description of the modeling in the main text, while leaving the more mathematical details for the appendices or supplemental material (for further discussion of this topic, see Ref. [55]). It is also of prime importance to maintain and to forge new connections between physicists and biologists.”
Wolgemuth comes closest to answering his own questions near the end of the essay, where he predicts:
“To be truly successful, we must provide an understanding of biology that spans the gorge from biochemistry and genetics to cellular function, and do it in such a way that our models and experiments are not only informative about physics, but directly impact biology.

Cell biology is awaiting these descriptions. And it may be that physicists are the most able to draw these connections between the protein level description of cellular biology that currently dominates and a more intuitive, yet still quantitative, description of the behavior of cells and their responses to their environments.”