Friday, December 9, 2011

The Cyclotron

The 4th edition of Intermediate Physics for Medicine and Biology has its own Facebook group, and any readers of this blog who use Facebook are welcome to join. One nice feature of Facebook is that it encourages comments, such as the recent one that asked “Why isn’t there a chapter or a subchapter in the textbook ‘Intermediate physics for medicine and biology’ that refers to the fundamental concepts of the cyclotron and the betatron and how are they used in medicine?” This is a good question, because undoubtedly cyclotrons are important in nuclear medicine. I can’t do anything to change the 4th edition of our book, but this blog provides an opportunity to address such comments, and to try out possible text for a 5th edition.

Although the term does not appear in the index (oops…), the cyclotron is mentioned in Intermediate Physics for Medicine and Biology at the end of Section 17.9 (Radiopharmaceuticals and Tracers).
Other common isotopes are 201Tl, 67Ga, and 123I. Thallium, produced in a cyclotron, is chemically similar to potassium and is used in heart studies, though it is being replaced by 99mTc-sestamibi and 99mTc-tetrofosmin. Gallium is used to image infections and tumors. Iodine is also produced in a cyclotron and is used for thyroid studies.
Cyclotrons are again mentioned in Section 17.14 (Positron Emission Tomography)
Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals. Patterson and Mosley (2005) found that 97% of the people in the United States live within 75 miles of a clinical PET facility.
(Note: on page 513 of our book, we omitted the word “emission” from the phrase “positron emission tomography” in the title of the Patterson and Mosley paper; again, oops…)

Perhaps the best place in Intermediate Physics for Medicine and Biology to discuss cyclotrons would be after Section 8.1 (The Magnetic Force on a Moving Charge). Below is some sample text that serves as a brief introduction to cyclotrons.
8.1 ½ The Cyclotron

One important application of magnetic forces in medicine is the cyclotron. Many hospitals have a cyclotron for the production of radiopharmaceuticals, or for the generation of positron-emitting nuclei for use in Positron Emission Tomography (PET) imaging (see Chapter 17).

Consider a particle of charge q and mass m, moving with speed v in a direction perpendicular to a magnetic field B. The magnetic force will bend the path of the particle into a circle. Newton’s second law states that the mass times the centripetal acceleration, v²/r, is equal to the magnetic force

m v²/r = q v B . (8.4a)

The speed is equal to the circumference of the circle, 2 π r, divided by the period of the orbit, T. Substituting this expression for v into Eq. 8.4a and simplifying, we find

T = 2 π m/(q B) . (8.4b)

In a cyclotron, particles orbit at the cyclotron frequency, f = 1/T. Because the magnetic force is perpendicular to the motion, it does not increase the particles’ speed or energy. To do that, the particles are periodically subjected to an electric field, which must change direction at the cyclotron frequency so that it always accelerates, and never decelerates, the particles. This would be difficult if not for the fortuitous disappearance of both v and r from Eq. 8.4b: the cyclotron frequency depends only on the charge-to-mass ratio of the particles and on the magnetic field, not on their energy.

Typically, protons are accelerated in a magnetic field of about 1 T, resulting in a cyclotron frequency of approximately 15 MHz. Each orbit raises the proton’s energy by about 100 keV, and it must circulate enough times to reach a total energy of at least 10 MeV so that it can overcome the electrostatic repulsion of the target nucleus and cause nuclear reactions. For example, the high-energy protons may be incident on a target of 18O (a rare but stable isotope of oxygen), initiating a nuclear reaction that produces 18F, an important positron emitter used in PET studies.
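As a quick check on these numbers (my own script, not part of the proposed text), here is a short Python calculation of the cyclotron frequency from Eq. 8.4b and of the number of orbits needed to reach 10 MeV, assuming a 100 keV energy gain per orbit.

```python
# Rough numerical check of the cyclotron numbers quoted above.
import math

q = 1.602e-19      # proton charge (C)
m = 1.673e-27      # proton mass (kg)
B = 1.0            # magnetic field (T)

T = 2 * math.pi * m / (q * B)    # orbital period, Eq. 8.4b
f = 1 / T                        # cyclotron frequency

energy_per_orbit_eV = 100e3      # assumed energy gain per orbit (100 keV)
target_energy_eV = 10e6          # 10 MeV, enough to cause nuclear reactions
orbits = target_energy_eV / energy_per_orbit_eV

print(f"Cyclotron frequency: {f / 1e6:.1f} MHz")   # about 15 MHz
print(f"Orbits to reach 10 MeV: {orbits:.0f}")     # about 100
```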
Since Intermediate Physics for Medicine and Biology is not a history book, I didn’t mention the interesting history of the cyclotron, which was invented by Ernest Lawrence in the early 1930s, for which he received the Nobel Prize in Physics in 1939. The American Institute of Physics Center for the History of Physics has a nice website about Lawrence’s invention. The same story is told, perhaps more elegantly, in Richard Rhodes’s masterpiece The Making of the Atomic Bomb (see Chapter 6, Machines). Lawrence played a major role in the Manhattan Project, using modified cyclotrons as massive mass spectrometers to separate the fissile uranium isotope 235U from the more abundant 238U.

Finally, I think it’s appropriate that Intermediate Physics for Medicine and Biology should have a section about the cyclotron, because my coauthor Russ Hobbie (who was the sole author of the first three editions of the textbook) obtained his PhD while working at the Harvard cyclotron. Thus, an unbroken path leads from Ernest Lawrence and the cyclotron to the publication of our book and the writing of this blog.

Friday, December 2, 2011

Feedback Loops

Negative feedback is an important concept in physiology. Russ Hobbie and I discuss feedback loops in Chapter 10 of the 4th edition of Intermediate Physics for Medicine and Biology. In the text and homework problems, we discuss several examples of negative feedback, including the regulation of breathing rate by the concentration of carbon dioxide in the alveoli, the prevention of overheating of the body by sweating, and the control of blood glucose levels by insulin. You can never have enough of these examples. Therefore, here is another homework problem related to negative feedback: regulation of blood osmolarity by antidiuretic hormone. Warning: the model is greatly simplified. It should be correct qualitatively, but not accurate quantitatively.
Section 10.3

Problem 15 ½ The osmolarity of plasma (C, in mosmole) is regulated by the concentration of antidiuretic hormone (ADH, in pg/ml, also known as vasopressin). As antidiuretic hormone increases, the kidney reabsorbs more water and the plasma osmolarity decreases, C=700/ADH. When osmoreceptors in the hypothalamus detect an increase of plasma osmolarity, they stimulate the pituitary gland to produce more antidiuretic hormone, ADH = C-280 for C greater than 280, and zero otherwise.
(a) Draw a block diagram of the feedback loop, including accurate plots of the two relationships.
(b) Calculate the operating point and the open loop gain (you may need to use four to six significant figures to determine the operating point accurately).
(c) Suppose the behavior of the kidney changed so now C=750/ADH. First determine the new value of C if the regulation of ADH is not functioning (ADH is equal to that found in part b), and then determine the value of C taking regulation of ADH by the hypothalamus into account.
You should find that this feedback loop is very effective at holding the blood osmolarity constant. For more about osmotic effects, see Chapter 5 of Intermediate Physics for Medicine and Biology.
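If you want to verify your answer to part (b), here is a minimal Python sketch; it assumes the open-loop gain is the product of the two small-signal gains around the loop, evaluated at the operating point (sign conventions differ between textbooks, so treat the sign as an assumption).

```python
# Operating point and open-loop gain for the ADH feedback problem above.
# Kidney:        C = 700 / ADH
# Hypothalamus:  ADH = C - 280   (for C > 280)
import math

# Operating point: C = 700 / (C - 280)  =>  C**2 - 280*C - 700 = 0
C = (280 + math.sqrt(280**2 + 4 * 700)) / 2
ADH = C - 280

# Small-signal gains at the operating point
dC_dADH = -700 / ADH**2   # kidney, d(700/ADH)/dADH
dADH_dC = 1.0             # hypothalamus, d(C - 280)/dC

open_loop_gain = dC_dADH * dADH_dC   # product of gains around the loop

print(f"Operating point: C = {C:.4f}, ADH = {ADH:.4f}")
print(f"Open-loop gain: {open_loop_gain:.1f}")
```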

Textbook of Medical Physiology, by Guyton and Hall.

Here is how Guyton and Hall describe the physiological details of this feedback loop in their Textbook of Medical Physiology (11th edition):
When osmolarity (plasma sodium concentration) increases above normal because of water deficit, for example, this feedback system operates as follows:

1. An increase in extracellular fluid osmolarity (which in practical terms means an increase in plasma sodium concentration) causes the special nerve cells called osmoreceptor cells, located in the anterior hypothalamus near the supraoptic nuclei, to shrink.

2. Shrinkage of the osmoreceptor cells causes them to fire, sending nerve signals to additional nerve cells in the supraoptic nuclei, which then relay these signals down the stalk of the pituitary gland to the posterior pituitary.

3. These action potentials conducted to the posterior pituitary stimulate the release of ADH, which is stored in secretory granules (or vesicles) in the nerve endings.

4. ADH enters the blood stream and is transported to the kidneys, where it increases the water permeability of the late distal tubules, cortical collecting tubules, and the medullary collecting ducts.

5. The increased water permeability in the distal nephron segments causes increased water reabsorption and excretion of a small volume of concentrated urine.

Thus, water is conserved in the body while sodium and other solutes continue to be excreted in the urine. This causes dilution of the solutes in the extracellular fluid, thereby correcting the initial excessively concentrated extracellular fluid.
Feedback loops are central to physiology. Guyton and Hall write in their first introductory chapter
Thus, one can see how complex the feedback control systems of the body can be. A person’s life depends on all of them. Therefore, a major share of this text is devoted to discussing these life-giving mechanisms.

Friday, November 25, 2011

The Second Law of Thermodynamics

Russ Hobbie and I discuss thermodynamics in Chapter 3 of the 4th edition of Intermediate Physics for Medicine and Biology. We take a statistical perspective (similar to that used so effectively by Frederick Reif in Statistical Physics, which is Volume 5 of the Berkeley Physics Course), and discuss many topics such as heat, temperature, entropy, the Boltzmann factor, Gibbs free energy, and the chemical potential. But only at the very end of the chapter do we mention the central concept of thermodynamics: The second law.
In some cases, thermal energy can be converted into work. When gas in a cylinder is heated, it expands against a piston that does work. Energy can be supplied to an organism and it lives. To what extent can these processes, which apparently contradict the normal increase of entropy, be made to take place? The questions can be stated in a more basic form.

1. To what extent is it possible to convert internal energy distributed randomly over many molecules into energy that involves a change of a macroscopic parameter of the system? (How much work can be captured from the gas as it expands the piston?)

2. To what extent is it possible to convert a random mixture of simple molecules into complex and highly organized macromolecules?

Both these questions can be reformulated: under what conditions can the entropy of a system be made to decrease?

The answer is that the entropy of a system can be made to decrease if, and only if, it is in contact with one or more auxiliary systems that experience at least a compensating increase in entropy. Then the total entropy remains the same or increases. This is one form of the second law of thermodynamics. For a fascinating discussion of the second law, see Atkins (1994).
The Second Law: Energy, Chaos, and Form, by Peter Atkins.
The book by Peter Atkins, The Second Law, is published by the Scientific American Library, and is aimed at a general audience. It’s a wonderful book, and provides the best non-mathematical description of thermodynamics I know of. Atkins’ preface begins
No other part of science has contributed as much to the liberation of the human spirit as the Second Law of thermodynamics. Yet, at the same time, few other parts of science are held to be so recondite. Mention of the Second Law raises visions of lumbering steam engines, intricate mathematics, and infinitely incomprehensible entropy. Not many would pass C. P. Snow’s test of general literacy, in which not knowing the Second Law is equivalent to not having read a work of Shakespeare.

In this book I hope to go some way toward revealing the workings of the Law, and showing its span of application. I start with the steam engine, and the acute observations of the early scientists, and I end with a consideration of the processes of life. By looking under the classical formulation of the Law we see its mechanism. As soon as we do so, we realize how simple it is to comprehend, and how wide is its application. Indeed, the interpretation of the Second Law in terms of the behavior of molecules is not only straightforward (and in my opinion much easier to understand than the First Law, that of the conservation of energy), but also much more powerful. We shall see that the insight it provides lets us go well beyond the domain of classical thermodynamics, to understand all the processes that underlie the richness of the world.
Atkins’s book is at the level of a Scientific American article, with many useful (and colorful) pictures and historical anecdotes. The writing is excellent. For instance, consider this excerpt:
The Second Law recognizes that there is a fundamental dissymmetry in Nature…hot objects cool, but cool objects do not spontaneously become hot; a bouncing ball comes to rest, but a stationary ball does not spontaneously begin to bounce. Here is the feature of Nature that both Kelvin and Clausius disentangled from the conservation of energy: although the total quantity of energy must be conserved in any process…, the distribution of that energy changes in an irreversible manner…
I particularly like Atkins’ analysis of the equivalence of two statements of the second law: No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work (Kelvin statement); and no process is possible in which the sole result is the transfer of energy from a cooler to a hotter body (Clausius statement). Atkins writes
The Clausius statement, like the Kelvin statement, identifies a fundamental dissymmetry in Nature, but ostensibly a different dissymmetry. In the Kelvin statement the dissymmetry is that between work and heat; in the Clausius statement there is no overt mention of work. The Clausius statement implies a dissymmetry in the direction of natural change: energy may flow spontaneously down the slope of temperature, not up. The twin dissymmetries are the anvils on which we shall forge the description of all natural change.
Peter Atkins has written several books, including another of my favorites: Peter Atkins’ Molecules. Here is a video of Atkins discussing his book the Four Laws that Drive the Universe. Not surprisingly, the four laws are the laws of thermodynamics.

Peter Atkins discussing the Four Laws that Drive the Universe.

Friday, November 18, 2011

Plessey Semiconductor Electric Potential Integrated Circuit

The electrocardiogram, or ECG, is one of the most common and useful tools for diagnosing heart arrhythmias. Russ Hobbie and I discuss the ECG in Chapter 7 (The Exterior Potential and the Electrocardiogram) of the 4th edition of Intermediate Physics for Medicine and Biology. The November issue of the magazine IEEE Spectrum contains an article by Willie D. Jones about new instrumentation for measuring the ECG. Jones writes
In October, Plessey Semiconductors of Roborough, England, began shipping samples of its Electric Potential Integrated Circuit (EPIC), which measures minute changes in electric fields. In videos demonstrating the technology, two sensors placed on a person’s chest delivered electrocardiogram (ECG) readings. No big deal, you say? The sensors were placed on top of the subject’s sweater, and in future iterations, the sensors could be integrated into clothes or hospital gurneys so that vital signs could be monitored continuously—without cords, awkward leads, hair-pulling sticky tape, or even the need to remove the patient’s clothes.
Apparently the Plessey device is an ultra high input impedance voltmeter. The electrode is capacitively coupled to the body, so no electrical contact is necessary. You can learn more about it by watching this video. I don’t want to sound like an advertisement for Plessey Semiconductors, but I think this device is neat. (I have no relationship with Plessey, and I have no knowledge of the quality of their product, other than what I saw in the IEEE Spectrum article and the video that Plessey produced.)

According to the Plessey press release, “most places on earth have a vertical electric field of about 100 Volts per metre. The human body is mostly water and this interacts with the electric field. EPIC technology is so sensitive that it can detect these changes at a distance and even through a solid wall.”

I don’t have any inside information about this device, but let me guess how it can detect a person at a distance. The body would perturb a surrounding electric field because it is mostly saltwater, and therefore a conductor. In Section 9.10 of Intermediate Physics for Medicine and Biology, Russ and I explain how a conductor interacts with applied electric fields. For the case of a dc field, the conducting tissue completely shields the interior of the body from the field. To understand how a body could affect an electric field, try solving the following new homework problem.
Section 9.10

Problem 34 ½ Consider how a spherical conductor, of radius a, perturbs an otherwise uniform electric field, E₀. The conductor is at a uniform potential, which we take as zero. As in Problem 34, assume that the electric potential V outside the conductor is V = A cosθ/r² − E₀ r cosθ.
(a) Use the boundary condition that the potential is continuous at r=a to determine the constant A.
(b) In the direction θ=0, determine the upward component of the electric field, - dV/dr.
(c) The perturbation of the electric field by the conductor is the difference between the fields with and without the conductor present. Calculate this difference. How does it depend on r?
(d) Suppose you measure the voltage in two locations separated by 10 cm, and that your detector can reliably detect voltage differences of 1 mV. How far from the center of a 1 m radius conductor can you be (assuming θ=0) and still detect the perturbation caused by the conductor?
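Here is a rough numerical sketch of part (d) in Python, under one reading of the problem (the detector senses the difference in the perturbation potential between two probes 10 cm apart along θ = 0, with E₀ = 100 V/m and a = 1 m); treat it as a plausibility check rather than an official solution.

```python
# How far away can a 1 m conducting sphere in a 100 V/m field be "seen"
# by two probes 10 cm apart with 1 mV resolution?
E0 = 100.0         # ambient vertical field (V/m), from the press release
a = 1.0            # conductor radius (m)
d = 0.10           # separation of the two measurement points (m)
threshold = 1e-3   # smallest detectable voltage difference (V)

A = E0 * a**3      # from the boundary condition V(a) = 0 in part (a)

def perturbation_signal(r):
    """Difference in the perturbation potential A/r**2 between the two probes."""
    return A / r**2 - A / (r + d)**2

# The signal falls off with distance; bisect to find where it drops to 1 mV.
lo, hi = a, 1000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if perturbation_signal(mid) > threshold:
        lo = mid
    else:
        hi = mid

print(f"Detectable out to roughly r = {0.5 * (lo + hi):.1f} m")   # about 27 m
```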
You may be wondering why there is a 100 V/m electric field at the earth’s surface. The Feynman Lectures (Volume 2, Chapter 9) has a nice discussion about electricity in the atmosphere. The reason that this electric field exists is complicated, and has to do with 1) charging of the earth by lightning, and 2) charge separation in falling raindrops.

Friday, November 11, 2011

The Making of the Pacemaker: Celebrating a Lifesaving Invention

The Making of the Pacemaker: Celebrating a Lifesaving Invention, by Wilson Greatbatch.
I’m still thinking about Wilson Greatbatch, one of the inventors of the implantable pacemaker, who died a few weeks ago (see my September 30 blog entry honoring him). Here is an interesting excerpt from his book The Making of the Pacemaker: Celebrating a Lifesaving Invention, about how he created the circuit in the first pacemaker.
My marker oscillator used a 10k basebias resistor. I reached into my resistor box for one but misread the colors and got a brown-black-green (one megohm) instead of a brown-black-orange. The circuit started to ‘squeg’ with a 1.8 ms pulse, followed by a one second quiescent interval. During the interval, the transistor was cut off and drew practically no current. I stared at the thing in disbelief and then realized that this was exactly what was needed to drive a heart. I built a few more. For the next five years, most of the world’s pacemakers used a blocking oscillator with a UTC DOT-1 transformer, just because I grabbed the wrong resistor.
Here is another story from The Making of the Pacemaker about how Greatbatch met William Chardack, his primary collaborator in developing the first pacemaker.
In Buffalo we had the first local chapter in the world of the Institute of Radio Engineers, Professional Group in Medical Electronics (the IRE/PBME, now the Biomedical Engineering Society of the Institute of Electrical and Electronic Engineers [IEEE]). Every month twenty-five to seventy-five doctors and engineers met for a technical program. We strove to attract equal numbers of doctors and engineers. We had a standing offer to send an engineering team to assist any doctor who had an instrumentation problem. I went with one team to visit Dr. Chardack on a problem deadline with a blood oximeter. Imagine my surprise to find that his assistant was my old high school classmate, Dr. Andrew Gage. We couldn’t help Dr. Chardack much with his oximeter problem, but when I broached my pacemaker idea to him, he walked up and down the lab a couple times, looked at me strangely, and said, “If you can do that, you can save ten thousand lives a year.” Three weeks later we had our first model implanted in a dog.
This excerpt is interesting:
I had $2,000 in cash and enough set aside to feed my family for two years. I put it to the Lord in prayer and felt led to quit all my jobs and devote my time to the pacemaker. I gave the family money to my wife. I then took the $2,000 and went up into my wood-heated barn workshop. In two years I built fifty pacemakers, forty of which went into animals and ten into patients. We had no grant funding and asked for none. The program was successful. We got fifty pacemakers for $2,000. Today, you can’t buy one for that.
This one may be my favorite. You gotta love Eleanor. They were married in 1945 and stayed together until her death in January of this year.
Many of the early Medtronic programs were first worked out in Clarence, New York, and then taken to Minneapolis. I had two ovens set up in my bedroom. My wife did much of the testing. The shock test consisted of striking the transistor with a wooden pencil while measuring beta (current gain). We found that a metal pencil could wreck the transistor, but a wooden pencil could not. Many mornings I would awake to the cadence of my wife Eleanor tap, tap, tapping the transistors with her calibrated pencil. For some months every transistor that was used worldwide in Medtronic pacemakers got tapped in my bedroom.
You can learn more about pacemakers and defibrillators in the 4th edition of Intermediate Physics for Medicine and Biology.

Wilson Greatbatch.

Friday, November 4, 2011

Countercurrent Heat Exchange

Problem 17 in Chapter 5 of the 4th edition of Intermediate Physics for Medicine and Biology considers a countercurrent heat exchanger. Countercurrent transport in general is discussed in Section 5.8 in terms of the movement of particles. However, Russ Hobbie and I conclude the section by applying the concept to heat exchange.
The principle [of countercurrent exchange] is also used to conserve heat in the extremities—such as a person’s arms and legs, whale flippers, or the leg of a duck. If a vein returning from an extremity runs closely parallel to the artery feeding the extremity, the blood in the artery will be cooled and the blood in the vein warmed. As a result, the temperature of the extremity will be lower and the heat loss to the surroundings will be reduced.
How Animals Work, by Knut Schmidt-Nielsen.
Problem 17 provides an example of this behavior, and cites Knut Schmidt-Nielsen’s book How Animals Work (1972, Cambridge University Press), which describes countercurrent exchange in more detail. (His comments below about the nose refer to an earlier section of the book, in which Schmidt-Nielsen discusses heat exchange in the nose of the kangaroo rat).
The heat exchange in the nose has a great similarity to the well-known countercurrent heat exchange which takes place, for example, in the extremities of many aquatic animals, such as in the flippers of whales and the legs of wading birds. The body of a whale that swims in water near the freezing point is well insulated with blubber, but the thin streamlined flukes and flippers are uninsulated and highly vascularized and would have an excessive heat loss if it were not for the exchange of heat between arterial and venous blood in these structures. As the cold venous blood returns to the body from the flipper, the vessels run in close proximity to the arteries, in fact, they completely surround the artery, and heat from the arterial blood flows into the returning venous blood, which is thus reheated before it returns to the body (figure 3). Similarly, in the limbs of many animals both arteries and veins split up into a large number of parallel, intermingled vessels each with a diameter of about 1 mm or so, forming a discrete vascular bundle known as a rete…Whether the blood vessels form such a rete system, or in some other way run in close proximity, as in the flipper of the whale, is a question of design and does not alter the principle of the heat recovery mechanism. The blood flows in opposite directions in the arteries and veins, and heat exchange takes place between the two parallel sets of tubes; the system is therefore known as a countercurrent heat exchanger.
The Camel’s Nose: Memoirs of a Curious Scientist, by Knut Schmidt-Nielsen.
Schmidt-Nielsen also wrote Scaling: Why is Animal Size So Important?, which Russ and I cite often in Chapter 2 and which I included in my top ten list of biological physics books. I have also read Schmidt-Nielsen's autobiography The Camel’s Nose: Memoirs of a Curious Scientist. (See the review of this book in the New England Journal of Medicine.) His Preface begins
This is a personal story of a life spent in science. It tells about curiosity, about finding out and finding answers. The questions I have tried to answer have been very straightforward, perhaps even simple. Do marine birds drink sea water? How do camels in hot deserts manage for days without drinking when humans could not survive without water for more than a day? How can kangaroo rats live in the desert without any water to drink? How can snails find water and food in the most barren deserts? Can crab-eating frogs really survive in sea water?

These are important questions. The answers not only tell us how animals overcome seemingly insurmountable obstacles in hostile environments; they also give us insight into general principles of life and survival.
A statue of Knut Schmidt-Nielsen with a camel on the campus of Duke University.
Schmidt-Nielsen died in 2007, and Steven Vogel (whom I quoted in last week’s blog entry) wrote an article about him for the Biographical Memoirs of Fellows of the Royal Society (Volume 54, Pages 319–331, 2008). See also his obituary in the Journal of Experimental Biology. A statue of Schmidt-Nielsen with a camel (which he famously studied) graces the Duke University campus.

Friday, October 28, 2011

Murray’s Law

Homework Problem 33 in Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology is about Murray’s law, a relationship describing the radii of branching vessels.
A parent vessel of radius Rp branches into two daughter vessels of radii Rd1 and Rd2. Find a relationship between the radii such that the shear stress on the vessel wall is the same in each vessel. (Hint: Use conservation of the volume flow.) This relationship is called ‘Murray’s Law’. Organisms may use shear stress to determine the appropriate size of vessels for fluid transport [LaBarbera (1990)].
The reference is to
LaBarbera, M. (1990) “Principles of Design of Fluid Transport Systems in Zoology.” Science, Volume 249, Pages 992–1000.
Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, by Steven Vogel.
In his book Vital Circuits: On Pumps, Pipes, and the Workings of Circulatory Systems, Steven Vogel provides a clear and engaging discussion of Murray’s law.
Our problem of figuring the cheapest arrangement of pipes turns out to involve nothing more nor less than calculating the relative dimensions of pipes so that the steepness of the speed gradient at all walls is the same. This calculation was done by Cecil D. Murray, of Bryn Mawr College, back in 1926, and is spoken of, when (uncommonly) it’s mentioned, as “Murray’s law.”
Murray’s law isn’t especially complicated, and anyone with a hand calculator can play around with it (but you can ignore the specifics without missing the present message). The rule is that the cube of the radius of the parental vessel equals the sum of the cubes of the radii of the daughter vessels. If a pipe with a radius of two units splits into a pair of pipes, each of the pair ought to have a radius of about 1.6 units. (To check, cube 1.6 and then double the result—you get about 2 cubed.) The daughters are smaller, but only a little (Figure 5.6). Still, if the parental one eventually divides into a hundred progeny, the progeny do come out substantially smaller, each about a fifth of the radius of the parent. (Their aggregate cross-section area is, of course, greater than the parental one—to be specific, four fold greater.)

The relationship predicts the relative sizes of both our arteries and our veins quite well. It only fails for the very smallest arterioles and capillaries….

It would be indefensibly anthropocentric to suppose that we’re the only creatures to follow Mr. Murray. My friend, Michael LaBarbera (who introduced me to the whole issue) has tested the law on several systems that are very unlike us structurally and functionally, and very distant from us evolutionarily…Murray’s law again proves applicable…

The mechanism … is becoming clear. Without getting into the details, it looks as if the cells lining the blood vessels can quite literally sense changes in the speed gradient next to them. An increase in the speed of flow through a vessel increases the speed gradient at its walls. An increase in gradient stimulates cell division, which would increase vessel diameter as appropriate to offset the faster flow. Neither change in blood pressure nor cutting the nerve supply makes any difference—this is apparently a direct effect of the gradient on synthesis of some chemical signal by the cells. Perhaps the neatest feature of the scheme is that a cell needn’t know anything about the size of the vessel of which it’s a part. As a consequence of Murray’s Law, it can be given the same specific instruction wherever it might be located, a command telling it to divide when the speed gradient exceeds a specific value.
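Vogel’s numbers are easy to reproduce. Here is a short Python check (my own, not from the book), assuming the symmetric form of Murray’s law in which the cube of the parent radius equals the sum of the cubes of the daughter radii.

```python
# Quick check of Vogel's arithmetic for Murray's law: r_p**3 = sum of r_d**3.
parent_radius = 2.0

# One pipe splitting into two equal daughters
daughter = (parent_radius**3 / 2) ** (1 / 3)
print(f"Two equal daughters: radius = {daughter:.2f}")                 # about 1.6

# One pipe eventually feeding 100 equal progeny
n = 100
progeny = (parent_radius**3 / n) ** (1 / 3)
print(f"{n} progeny: radius ratio = {progeny / parent_radius:.2f}")    # about 1/5

# Aggregate cross-sectional area of the progeny, relative to the parent
area_ratio = n * progeny**2 / parent_radius**2
print(f"Aggregate area ratio = {area_ratio:.1f}")                      # about 4.6-fold
```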
Vogel is a faculty member in the Biology Department at Duke University. He has published several fine books, including Vital Circuits quoted above and the delightful Life in Moving Fluids (Princeton University Press, 1994), both cited in Intermediate Physics for Medicine and Biology.

Friday, October 21, 2011

A Useful Website

While I have many goals when writing this blog (with the top being to sell textbooks!), sometimes I simply like to point out useful websites relevant to readers of the 4th edition of Intermediate Physics for Medicine and Biology. One example is the website of Rob MacLeod, a professor of bioengineering at the University of Utah. MacLeod’s research, like mine, centers on the numerical simulation of cardiac electrophysiology, so we find many of the same topics interesting.

I particularly enjoy his list of Background Links for Rob’s Courses. You will find many books listed, some of which Russ Hobbie and I cite in Intermediate Physics for Medicine and Biology, and some that we don’t cite but should. For example, MacLeod speaks highly of the book Mathematical Physiology by Keener and Sneyd, but somehow Russ and I never reference it. I didn’t know Malmivuo and Plonsey’s book Bioelectromagnetism (which we do cite) is now available online and free of charge. The Wellcome Trust Heart Atlas is beautiful, as is the Virtual Heart website. MacLeod’s list of books about “Cardiology and Medicine” looks fascinating, with a heavy emphasis on the relevant history and biography. If I start running out of topics for these blog posts, I could probably find a year of material by exploring the sources listed on this page.

If you visit MacLeod's website (and I hope you do), make sure to click on the link “Information on Writing.” I am an admirer of good writing, especially in nonfiction, and am frustrated when presented with a poorly written scientific book or paper. (I review a lot of papers for journals, and often find myself venting and fuming.) My advice to a young scientist is: Learn To Write. Throughout your scientific career you will be judged primarily on your papers and your grant proposals, which are both written documents. Maybe your science is so good that it can overcome poor writing and still impress the reader, but I doubt it. Learn to write.

Friday, October 14, 2011

Bethesda

A couple months ago I went to Bethesda, Maryland to review grant proposals for the National Institutes of Health. They swear us to secrecy, so I can’t divulge any details about the specific research. But I will share a few general observations.
  1. Winston Churchill said that “Democracy is the worst form of government except all the others that have been tried.” That sums up my opinion of the NIH review process. There are all sorts of problems with the way we select the best research to fund, but I can’t think of a better way than that used by NIH. Each time I participate, I come away with a great respect for the process. Of course, from the outside the review process can resemble a casino, but I don’t see how you can eliminate some randomness while at the same time keeping the process fair, with wide input, and a focus on the significance and impact of the research.
  2. If you are a young biomedical researcher, or hope to be one someday, then you should take advantage of any opportunity to review grant proposals. It is like going to grant writing school. No book, no website, no video, no workshop is more useful for learning how to prepare a proposal. It is a lot of work, but you will gain much, especially the first time or two you do it. However, if you simply are not able to participate in a review panel, then at least watch this video (see below), which is a fairly accurate description of what goes on.
  3. After reviewing grant proposals, I am optimistic about the future of the scientific enterprise in the United States, because of all the fascinating and important research being proposed. I am also pessimistic about my chances for winning additional funding, because the competition is so fierce. But, we must soldier on. To quote Churchill again, “Never give in, never give in, never, never, never, never.” So I’ll keep trying.
  4. Research is becoming more and more interdisciplinary, and many proposals now come from multidisciplinary teams. Each individual researcher cannot know everything, but they must know enough to understand each other, and to talk to each other intelligently. I believe this is one of the virtues of the 4th edition of Intermediate Physics for Medicine and Biology. It helps bridge the gap between physicists and engineers on the one side, and biologists and medical doctors on the other. The book won’t turn a physicist into a biologist, but it may help a physicist talk to and better appreciate a biologist. This is crucial for performing modern collaborative research, and for obtaining funding to pay for that research. After reviewing all those proposals, I came away proud of our textbook.
We finished our review session a couple hours earlier than anticipated, so I used the time to visit the new Martin Luther King Memorial in Washington, DC. It is just across the Tidal Basin from the Jefferson Memorial, and the statues of King and Jefferson stare at each other across the water. If you happen to be going to DC soon, prepare yourself for a shock. The beautiful reflecting pool between the Washington Monument and the Lincoln Memorial is now a dried-up, plowed-up mud flat. Apparently they are renovating it. But the other attractions are as beautiful as ever, including the Vietnam Veterans Memorial, the Korean War Veterans Memorial, the National World War II Memorial, and the Franklin Delano Roosevelt Memorial. I even saw one I had somehow missed in previous visits: the George Mason Memorial, near the Jefferson Memorial. All this sightseeing was a little bonus after reviewing all those grants (packed into two frantic hours between leaving the review session and reaching the airport).

NIH Peer Review Revealed.

Friday, October 7, 2011

The Mathematics of Diffusion

The Mathematics of Diffusion, by John Crank.
Diffusion is one of those topics that is rarely covered in an introductory physics class, but is essential for understanding biology. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss diffusion and its biomedical applications. One of the books we cite is The Mathematics of Diffusion by John Crank. Hard-core mathematical physicists who are interested in biology and medicine will find Crank’s book to be a good fit. Physiologists who want to avoid as much mathematical analysis as possible may prefer to learn their diffusion from Random Walks in Biology, by Howard Berg.
Crank died five years ago this week. Like Wilson Greatbatch, whom I discussed in my last blog entry, Crank was one of those scientists who came of age serving in the military during World War Two (Tom Brokaw would call them members of the “Greatest Generation”). Crank’s 2006 obituary in the British newspaper The Telegraph states:
John Crank was born on February 6 1916 at Hindley, Lancashire, the only son of a carpenter’s pattern-maker. He studied at Manchester University, where he gained his BSc and MSc. At Manchester he was a student of the physicist Lawrence Bragg, the youngest-ever winner of a Nobel prize, and of Douglas Hartree, a leading numerical analyst.

Crank was seconded to war work during the Second World War, in his case to work on ballistics. This was followed by employment as a mathematical physicist at Courtaulds Fundamental Research Laboratory from 1945 to 1957. He was then, from 1957 to 1981, professor of mathematics at Brunel University (initially Brunel College in Acton).

Crank published only a few research papers, but they were seminal. Even more influential were his books. His work at Courtaulds led him to write The Mathematics of Diffusion, a much-cited text that is still an inspiration for researchers who strive to understand how heat and mass can be transferred in crystalline and polymeric material. He subsequently produced Free and Moving Boundary Problems, which encompassed the analysis and numerical solution of a class of mathematical models that are fundamental to industrial processes such as crystal growth and food refrigeration.
Crank is best known for a numerical technique to solve equations like the diffusion equation, developed with Phyllis Nicolson and known as the Crank-Nicolson method. The algorithm has the advantage that it is numerically stable, which can be shown using von Neumann stability analysis. They published this method in a 1947 paper in the Proceedings of the Cambridge Philosophical Society:
Crank, J., and P. Nicolson (1947) “A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat Conduction Type,” Proc. Camb. Phil. Soc., Volume 43, Pages 50–67.
Rather than describe the Crank-Nicolson method, I will let the reader explore it in a new homework problem.
Section 4.8

Problem 24 ½ The numerical approximation for the diffusion equation, derived as part of Problem 24, has a key limitation: it is unstable if the time step is too large. This problem can be avoided using the Crank-Nicolson method. Replace the first time derivative in the diffusion equation with a finite difference, as was done in Problem 24. Next, replace the second space derivative with the finite difference approximation from Problem 24, but instead of evaluating the second derivative at time t, use the average of the second derivative evaluated at times t and t+Δt.
(a) Write down this numerical approximation to the diffusion equation, analogous to Eq. 4 in Problem 24.

(b) Explain why this expression is more difficult to compute than the expression given in the first two lines of Eq. 4. Hint: consider how you determine C(t+Δt) once you know C(t).

The difficulty you discover in part (b) is offset by the advantage that the Crank-Nicolson method is stable for any time step. For more information about the Crank-Nicolson method, stability, and other numerical issues, see Press et al. (1992).
The citation is to my favorite book on computational methods: Numerical Recipes (of course, the link is to the FORTRAN 77 version, which is the edition that sits on my shelf).
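For readers who want to experiment, here is a minimal Crank-Nicolson sketch in Python (my own illustration, using generic variable names rather than the notation of Problem 24); it advances the one-dimensional diffusion equation with a time step well beyond the explicit stability limit.

```python
# Minimal Crank-Nicolson solver for the 1D diffusion equation
#   dC/dt = D * d2C/dx2,   with C = 0 held at both ends.
import numpy as np

D  = 1.0e-9    # diffusion constant (m^2/s), typical of a small molecule
dx = 1.0e-6    # grid spacing (m)
N  = 101       # number of grid points
dt = 1.0e-2    # time step (s); 20 times the explicit stability limit dx**2/(2*D)

alpha = D * dt / (2 * dx**2)

# Second-difference operator; rows 0 and N-1 stay zero, so C = 0 at both ends.
A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

left  = np.eye(N) - alpha * A    # acts on C(t + dt)
right = np.eye(N) + alpha * A    # acts on C(t)

C = np.zeros(N)
C[N // 2] = 1.0                  # initial concentration spike in the middle

for step in range(100):
    # One Crank-Nicolson step: solve the linear system for C(t + dt).
    # This is the "extra work" of part (b); a real code would use a
    # tridiagonal solver instead of a dense one.
    C = np.linalg.solve(left, right @ C)

print(f"Peak concentration after 100 steps: {C.max():.4f}")  # stays bounded; no blow-up
```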