Friday, September 27, 2013

Hermann von Helmholtz, Biological Physicist

Who was the greatest biological physicist ever? That’s a difficult question, but one candidate is the German scientist Hermann von Helmholtz (1821–1894). Helmholtz was both a physician and physicist who made important contributions to physiology. Russ Hobbie and I mention him briefly in the 4th edition of Intermediate Physics for Medicine and Biology. In Chapter 6 on Impulses in Nerve and Muscle Cells, we write
The action potential was first measured by Helmholtz around 1850.
Asimov's Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
That is true, but he made many other contributions to biological physics. To highlight some of these, I turn to Asimov’s Biographical Encyclopedia of Science and Technology. Asimov first describes Helmholtz’s work on vision (some of which I have described previously in this blog).
Like [Thomas] Young, Helmholtz made a close study of the function of the eye, and in 1851 he invented an ophthalmoscope, with which one could peer into the eye’s interior—an instrument without which the modern eye specialist would be all but helpless…In addition he revived Young’s theory of three-color vision and expanded it, so that it is now known as the Young-Helmholtz theory.
He also studied sound, the ear, and music (he was a fine musician).
Helmholtz studied that other sense organ, the ear, as well. He advanced the theory that the ear detected differences in pitch through the action of the cochlea, a spiral organ in the inner ear. It contained, he explained, a series of progressively smaller resonators, each of which responded to a sound wave of progressively higher frequency. The pitch we detected depended on which resonator responded.
And as Russ and I noted, he made pioneering measurements in nerve electrophysiology.
Helmholtz was the first to measure the speed of the nerve impulse. His teacher, Muller, was fond of presenting this as an example of something science could never accomplish because the impulse moved so quickly over so short a path. In 1852, however, Helmholtz stimulated a nerve connected to a frog muscle, stimulating it first near the muscle, then farther away. He managed to measure the added time required for the muscle to respond in the latter case.
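Helmholtz’s trick reduces to simple arithmetic: stimulate the nerve at two distances from the muscle and divide the extra distance by the extra latency. Here is a minimal sketch in Python; the numbers are illustrative, not Helmholtz’s data, chosen only to give a speed of the right order of magnitude for frog nerve (a few tens of meters per second).

```python
# Helmholtz's two-point method: conduction velocity from the extra
# latency when the stimulus site is moved farther from the muscle.
# All numbers below are made-up illustrations, not historical data.

d_near = 0.01    # distance from near stimulus site to muscle (m)
d_far = 0.04     # distance from far stimulus site to muscle (m)
t_near = 0.0030  # latency of muscle twitch, near site (s)
t_far = 0.0040   # latency of muscle twitch, far site (s)

# The delays at the muscle itself cancel in the subtraction,
# which is why the method works without knowing them.
velocity = (d_far - d_near) / (t_far - t_near)
print(f"conduction velocity = {velocity:.0f} m/s")  # 30 m/s
```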
He also helped formulate the principle of the conservation of energy, an idea he came upon when studying the behavior of muscle.
But he is best known for his contributions to physics and in particular for his treatment of the conservation of energy, something to which he was led by his studies of muscle action. He was the first to show that animal heat was produced chiefly by contracting muscle and that an acid—which we now know to be lactic acid—was formed in the working muscle.
Given my admiration for 19th century physicists, I’m a little surprised that I don’t know more about Helmholtz. This is probably because I am more familiar with the great British physicists—Faraday, Maxwell, Kelvin—than with the Germans of that era (this is odd, given that I am half German). I wouldn’t go so far as to claim Helmholtz was as great a physicist as my Victorian heroes, but I do suggest that he was a greater biological physicist. I think a good argument could be made that he is the greatest of all biological physicists.

Friday, September 20, 2013

Musicophilia

Musicophilia: Tales of
Music and the Brain,
by Oliver Sacks.
Those who know me well are aware that I spend considerable time walking my dog Suki. Usually during these walks I am listening to recorded books. Being too cheap to spend money on this habit, I borrow these recordings from the Rochester Hills Public Library. They have an impressive selection, but Suki and I have been at this for a while (she is almost 11 years old), and I have slowly worked my way through their stock of recordings in the genres I ordinarily listen to: science, history, and biography. I don’t view this as a problem, because it has forced me to sample books about topics I would not otherwise choose. The most recent example is Musicophilia: Tales of Music and the Brain, by Oliver Sacks. Perhaps you object that this is a science book, but I view it more as a medical book outside my normal experience. Regardless, I was pleasantly surprised to find considerable medical physics discussed.

I had listened previously to Sacks’s delightfully titled The Man Who Mistook His Wife for a Hat, so I knew what I was getting into. In Musicophilia, Sacks discusses a variety of abnormalities in the perception of music. For instance, he begins with musical hallucinations. This is more than just having a song stuck in your head. These were examples from his clinical practice of people who had, say, suffered a brain injury and afterward would hear music in their mind that they could not distinguish from real music. They sometimes could not turn it on or off, but were stuck with it more or less continuously. Another example is people who, after a stroke, lost the ability to hear music as music. An opera sounds like someone screaming, and a symphony like pots and pans crashing onto the floor. In one case he relates, this happened to a former professional musician. It’s amazing.

Sacks describes all sorts of brain studies being done to examine these patients. There is considerable discussion of data measured using electroencephalography, magnetoencephalography, positron emission tomography, functional magnetic resonance imaging, and transcranial magnetic stimulation—all of which Russ Hobbie and I analyze in the 4th edition of Intermediate Physics for Medicine and Biology. For me, hearing these stories makes me nostalgic for my years working at the National Institutes of Health, where I used to collaborate with neurologists such as Mark Hallett (whose research is mentioned by Sacks). Hallett and his team studied all sorts of odd diseases while I was helping them develop magnetic stimulation. In this case, we physicists and engineers were not discovering new biological ideas or medical abnormalities, but we were providing the tools for others to make these discoveries. And, oh, what tools!

Sacks notes there are some patients who have lost their ability to tell which of two tones is the higher pitch (but can still hum a song). These patients are in contrast with those rare individuals with perfect, or absolute, pitch; they can identify a note heard in isolation. My sister has something approaching perfect pitch. When I was in high school, I took piano lessons. Whenever I played a wrong note while practicing (which was quite often) she would call out from an adjacent room “F-sharp!” or “B-flat!” Do you know how annoying it is not only to have your mistakes pointed out for all to hear, but also to have the specific note identified precisely? Worst of all, she was always right. Some of these piano pieces she had played herself, but others she had not; she was just able to identify the pitch. I have always envied people with perfect pitch, but Sacks raises an interesting point. If people with perfect pitch hear a song played flawlessly but in the wrong key, they get agitated and upset (he compares this to seeing a painting with all the colors wrong). I, on the other hand, would remain blissfully unaware of the problem. When I was in graduate school in Nashville, I bought a used piano from a blind fellow who refurbished pianos for a living. This particular piano was so old that he could not tighten its strings completely, so it was tuned about three steps too low (he gave me a good deal on it). The improper tuning never bothered me in the least (my sister hated that piano). However, sometimes my weakness with tonal discrimination has caused me some embarrassment. I played tuba in my high school band, and before concerts the director would have us all “tune up.” The first clarinet would play a note, and we would each play the same note in turn to make sure we were in tune. I always hated this, because I could never tell if I was sharp or flat, and the director would usually end up yelling at me in frustration, “You’re flat. Flat! Push the tuning slide in!”

Sacks’s book got me to thinking about all sorts of unusual sensory perceptions. He describes people who could hear but could not perceive music, and I thought it must be like someone born without sight. But Sacks had a better analogy: imagine someone born colorblind (say, completely colorblind, instead of just lacking one of three color receptors). How do you describe color to such a person? It has no meaning. How do you describe music to someone born unable to make sense of it? Then I began thinking of other odd sensory inputs, like magnetoreception and the ability to perceive the polarization of light. Humans can’t perceive these signals, but other species can. If you will let me indulge in a bit of anthropomorphization, I suspect there are some bird families who sit in their nest at night saying to each other “those humans can’t perceive magnetic fields or polarization! How do they ever get home?”

Finally, for those of you who know Suki, let me provide a quick update. Earlier this year she damaged her anterior cruciate ligament, and our walks came to an abrupt halt. After much debate (she is a small dog, and is 10 years old) we decided to have her undergo surgery. The veterinary surgeon Dr. McAbee did a marvelous job, and we are now back to our walks as if nothing ever happened.

Friday, September 13, 2013

Plain Words

Plain Words,
by Sir Ernest Gowers.
When I arrived at graduate school, the main goal given to me by my advisor John Wikswo was to write scientific papers. Of course, I had to write a PhD dissertation, but that was in the distant future. The immediate job was to publish journal articles. John is a good writer, and he insists his students write well. So he recommended that I read the book Plain Words, by Sir Ernest Gowers. (I can’t recall if he made this suggestion before or after reading my first draft of a paper!) I dutifully read the book, which I have come to love. I believe I read the 1973 revision by Bruce Fraser, although I am not sure; I borrowed Wikswo’s copy.

Gowers is an advocate for writing simply and clearly. He states in the introduction
Here we come to the most important part of our subject. Correctness is not enough. The words used may all be words approved by the dictionary and used in their right senses; the grammar may be faultless and the idiom above reproach. Yet what is written may still fail to convey a ready and precise meaning to the reader. That it does so fail is the charge brought against much of what is written nowadays, including much of what is written by officials. In the first chapter I quoted a saying of Matthew Arnold that the secret of style was to have something to say and to say it as clearly as you can. The basic fault of present-day writing is a tendency to say what one has to say in as complicated a way as possible. Instead of being simple, terse and direct, it is stilted, long-winded and circumlocutory; instead of choosing the simple word it prefers the unusual.
I have become a strong advocate for using plain language in scientific writing. Over the last three decades I have reviewed hundreds of papers for scientific journals, and I can attest that many scientists should read Plain Words. I have tried to use plain, clear language in the 4th edition of Intermediate Physics for Medicine and Biology (although Russ Hobbie’s writing was quite good in earlier editions of IPMB, which I had nothing to do with, so the book didn’t need much editing by me). Below, Gowers describes three rules for writing, which apply as well to scientific writing as to the official government writing that he focused on.
What we are concerned with is not a quest for a literary style as an end in itself, but to study how best to convey our meaning without ambiguity and without giving unnecessary trouble to our readers. This being our aim, the essence of the advice of both these authorities [mentioned earlier] may be expressed in the following three rules, and the rest of what I have to say in the domain of the vocabulary will be little more than an elaboration of them.
- Use no more words than are necessary to express your meaning. For if you use more you are likely to obscure it and to tire your reader. In particular do not use superfluous adjectives and adverbs and do not use roundabout phrases where single words would serve.
- Use familiar words rather than the far-fetched, for the familiar are more likely to be readily understood.
- Use words with a precise meaning rather than those that are vague, for they will obviously serve better to make your meaning clear; and in particular prefer concrete words to abstract, for they are more likely to have a precise meaning.
For me, the chore of writing is made easier because I like to write. Really, why else would I write this blog each week if I didn’t enjoy the craft of writing? (Certainly increased book sales can’t justify the time and effort.) When my children were young, I once became secretary of their elementary school’s Parent-Teacher Association mainly because my primary duty would be writing the minutes of the PTA meetings. If you were to ask my graduate students, I think they would complain that I make too many changes to drafts of their papers, and that we go through too many iterations before submission to a journal. I can usually tell when we are close to a finished paper, because I find myself putting commas in on one draft and taking them out on the next. One trick Wikswo taught me is to read the text out loud, listening to the cadence and tone. I find this helpful, and I don’t care what people think when they walk by and hear me reading to myself in my office.

Most Americans have an advantage in the world of science. Modern science is primarily performed and published in the English language, which is our native tongue. I feel sorry for those who must submit articles written in an unfamiliar language—it really is unfair—but that has not stopped me from criticizing their English mercilessly in anonymous reviews. For any young scientist who may be reading this blog (and I do hope there are some of you out there), my advice is: learn to write. As a scientist, you will be judged on your written documents: your papers, your reports, and above all your grant proposals. You simply cannot afford to have these poorly written.

I believe role models are important in writing. One of mine is Isaac Asimov. While I enjoy his fiction, I use his science writing as an example of how to explain difficult concepts clearly. I was very lucky to have encountered his books when in high school. A second role model is not a science writer at all. I have read Winston Churchill’s books, especially his history of the Second World War, and I find his writing both clear and elegant. A third model is physicist David Mermin. His textbook Solid State Physics is quite well written, and you can read his essay on writing physics here. You will find it difficult to learn to write scientific papers if all you read are other scientific papers, because the majority are not well written. If you pattern your own writing after them, you will be aiming at the wrong target. Please, learn to write well.

You can read Plain Words online (and for free) here.

This week’s blog entry seems rather long and rambling. Let me conclude with a paraphrase of Mark Twain’s famous quip about letter writing: If I had more time, I would have written a shorter blog entry.

Friday, September 6, 2013

The Art of Electronics

The Art of Electronics,
by Horowitz and Hill.
A biological physicist needs many skills, and an important one for experimentalists is electronics. In graduate school, I began my career as an experimentalist, and my PhD advisor John Wikswo required all his students to design and build at least one piece of electronics. My job was to make a timer for our microelectrode puller. I wasn’t experienced with circuit design, so at Wikswo’s suggestion I turned to The Art of Electronics, by Paul Horowitz and Winfield Hill. This wonderful book taught me almost all I know about the subject (OK, that’s not saying much). I used the first edition, but in 1989 a second edition came out. Below is the preface from edition two.
Electronics, perhaps more than any other field of technology, has enjoyed an explosive development in the last four decades. Thus it was with some trepidation that we attempted, in 1980, to bring out a definitive volume teaching the art of the subject. By “art” we meant the kind of mastery that comes from an intimate familiarity with real circuits, actual devices, and the like, rather than the more abstract approach often favored in textbooks on electronics.
The Art of Electronics is particularly useful for understanding active circuits, such as those including transistors and operational amplifiers. I recall that in graduate school my education had a conspicuous hole in that I didn’t understand transistors, and The Art of Electronics helped me learn about them in an intuitive way (I still recall fondly Horowitz and Hill’s “transistor man”).

Russ Hobbie and I don’t discuss electronics explicitly in the fourth edition of Intermediate Physics for Medicine and Biology, but it is implicit in some chapters. For instance, thin film transistor arrays, used for detecting x-ray images, are discussed briefly in Chapter 16. In Chapter 6, Figure 6.32 shows the apparatus for making voltage-clamp measurements. The “controller” in that figure is basically an op-amp, and to understand how it works one needs to appreciate the op-amp’s “golden rules”: 1) the output does whatever is necessary to make the voltage difference between the inputs zero, and 2) the inputs draw no current. You can do a lot with an op-amp, including simple circuits such as a voltage follower (needed if you want to record a voltage with a large input impedance, something that is important in bioelectric recordings), simple amplifiers, integrators, and differentiators. Horowitz and Hill describe all these circuits and more, in a way that a beginner can understand. For me, The Art of Electronics is to electronic circuits what Numerical Recipes is to computational methods: a well-written book that lets you learn the essence of the subject and its practical applications, without getting bogged down in esoteric details.
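To see how far the golden rules take you, here is a sketch in Python of the gains they predict for two standard op-amp circuits. This is the usual textbook derivation, not anything specific to Horowitz and Hill’s treatment, and the resistor values are arbitrary examples.

```python
# Applying the op-amp "golden rules" (the inputs draw no current;
# the output drives the input voltage difference to zero) to two
# classic circuits. Resistor values are arbitrary illustrations.

def noninverting_gain(r1, r2):
    """Non-inverting amplifier: the feedback divider (r1 to ground,
    r2 to the output) returns v_out * r1/(r1 + r2) to the inverting
    input; the golden rules force that to equal v_in, so the gain
    is 1 + r2/r1."""
    return 1 + r2 / r1

def inverting_gain(r_in, r_f):
    """Inverting amplifier: the inverting input sits at virtual
    ground, so the input current v_in/r_in must all flow through
    the feedback resistor r_f, giving v_out = -(r_f/r_in) * v_in."""
    return -r_f / r_in

print(noninverting_gain(1e3, 9e3))  # 10.0
print(inverting_gain(1e3, 10e3))    # -10.0
```

The voltage follower mentioned above is just the non-inverting amplifier in the limit r2 = 0: unit gain, but an essentially infinite input impedance, which is what makes it useful for bioelectric recordings.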

My timer for our microelectrode puller worked, although it wasn’t pretty. As I recall, it was built using leftover parts, and looked something like a big toaster with gigantic, 1950s-style knobs. But it allowed me to pull glass microelectrodes with a reproducible resistance to use in intracellular measurements of voltage in nerve axons. My experimental work culminated in the first simultaneous measurement of the transmembrane potential and magnetic field of a nerve axon (see Barach, Roth, and Wikswo, IEEE Trans. Biomed. Eng., Volume 32, Pages 136–140, 1985; and Roth and Wikswo, Biophys. J., Volume 48, Pages 93–109, 1985). The Biophysical Journal paper is one of my favorites, and represents the high water mark of my experimental career. However, I also like the less-cited IEEE TBME paper for two reasons: it was my very first journal article (appearing in February of 1985, whereas the Biophysical Journal paper appeared in July), and it is my only paper in which I supplied the experimental data and someone else (in this case, Prof. John Barach) performed the theoretical analysis. However, it soon became apparent that my talents and interests were more in mathematical modeling and computer simulation. Nevertheless, I have always had enormous respect for experimental work, which in my view is more difficult than theoretical analysis. I have suffered from a case of “experimentalist envy” since those formative years in graduate school.

Rumor has it that a 3rd edition of The Art of Electronics will appear soon.

Friday, August 30, 2013

The Ascent of Sap in Trees

In Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I included a homework problem about moving water up trees.
Problem 34 Sap flows up a tree at a speed of about 1 mm s−1 through its vascular system (xylem), which consists of cylindrical pores of 20 μm radius. Assume the viscosity of sap is the same as the viscosity of water. What pressure difference between the bottom and top of a 100 m tall tree is needed to generate this flow? How does it compare to the hydrostatic pressure difference caused by gravity?
When you calculate the pressure needed to push water (that is, sap) up the tree through the xylem, you get (Spoiler Alert!) twenty atmospheres to overcome the viscous resistance of the pores, and ten atmospheres to overcome gravity. How does the tree generate all this pressure? That is a famous old problem known as the “ascent of sap.”
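For readers who want to check the arithmetic, the two pressures follow from Poiseuille’s law and the hydrostatic formula. A quick sketch in Python, with water’s viscosity and density standing in for sap, as the problem assumes:

```python
mu = 1.0e-3    # viscosity of water (Pa s)
rho = 1.0e3    # density of water (kg/m^3)
g = 9.8        # gravitational acceleration (m/s^2)
L = 100.0      # tree height (m)
v = 1.0e-3     # sap speed (m/s)
r = 20.0e-6    # xylem pore radius (m)
atm = 1.013e5  # one atmosphere (Pa)

# Poiseuille's law: pressure drop needed to drive an average
# speed v through a cylindrical pore of radius r and length L.
dp_viscous = 8 * mu * L * v / r**2

# Hydrostatic pressure difference over the same height.
dp_gravity = rho * g * L

print(dp_viscous / atm)  # about 20 atmospheres
print(dp_gravity / atm)  # about 10 atmospheres
```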

Now I admit that a 100-meter tree is, indeed, very tall, taller even than the Statue of Liberty. But it is not an unrealistic example. The majestic sequoias in California reach this height. The tallest known tree, named Hyperion, is a sequoia (coast redwood) in northern California’s Redwood National and State Parks that reaches a height of 115 m. The leaves at the top of that redwood need water to carry out photosynthesis. How do they get it?

First, let us consider some mechanisms that do not work. The tree cannot suck the water up, as if it were a gigantic drinking straw. Even if the tree could produce a perfect vacuum at its top, it could only create a pressure difference of one atmosphere, which corresponds to a rise of water of 10 m. Another idea is that the water rises by capillary action, like a giant wick. But the height that water can reach by climbing up a tube via surface tension is inversely proportional to the tube radius, and in xylem’s 20-micron-radius tubes water will rise only a tiny fraction of the tree’s height (in Sec. 12.2 of his book Air and Water, Mark Denny estimates that water would rise in xylem by capillary action to a height of only three-fourths of a meter). Osmotic pressure won’t work either, for any realistic concentration gradient. So what is the answer?
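Denny’s capillary-rise estimate is easy to reproduce. A sketch in Python, assuming pure water and a fully wetting tube wall (contact angle of zero):

```python
# Capillary rise h = 2*gamma/(rho*g*r) for water in a tube of
# radius r, assuming a contact angle of zero (fully wetting wall).
gamma = 0.073  # surface tension of water (N/m)
rho = 1.0e3    # density of water (kg/m^3)
g = 9.8        # gravitational acceleration (m/s^2)
r = 20.0e-6    # xylem pore radius (m)

h = 2 * gamma / (rho * g * r)
print(f"capillary rise = {h:.2f} m")  # about 0.74 m
```

Roughly three-fourths of a meter, just as Denny says, and nowhere near the top of a sequoia.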

There is still some controversy, but the generally accepted mechanism for the ascent of sap is called the cohesion-tension theory. In the leaves, capillary action through very tiny channels helps pull water upwards to replace that which evaporates from the leaf surface. In the larger pores of the xylem, the water is pulled by tension (negative pressure), somewhat like a steel cable pulling an elevator up its shaft. But can water support such a tension? It can, but there is a problem. If any air is present, the system will fail. Think of a piston filled half with water and half with air. If you pull on the piston, you will just expand the air as its pressure is reduced. Now consider the piston with only one-fourth air and three-fourths water; the air still expands when you pull. In fact, if there is even one bubble present in the water, pulling on the piston will cause it to expand. Only if the piston contains no air at all will the water be able to sustain a tension. In other words, water under negative pressure is susceptible to cavitation: the formation of bubbles. Fortunately, the structure of xylem is such that bubbles cannot grow indefinitely, but get trapped in one compartment.

For more details, see “The Cohesion-Tension Mechanism and the Acquisition of Water by Plant Roots,” by Ernst Steudle (Annual Review of Plant Physiology, Volume 52, Pages 847–875, 2001). Below I reproduce his summary of cohesion-tension theory. Note that 100 MPa is 1000 atm!
  • Water has high cohesive forces. It can be subjected to from some ten to several hundred MPa before columns break. When subjected to tensions, water is in a metastable state, i.e. pressure in xylem vessels is much smaller than the equilibrium water vapor pressure at the given temperature. 
  •  Walls of vessels represent the weak part of the system. They may contain air or seeds of water vapor. When a critical tension is reached in the lumen of xylem vessels, pits in vessel walls allow the passage of air through them, resulting in cavitation (embolism). 
  • Water in vessels of higher plants forms a continuous system from evaporating surfaces in the leaves to absorbing surfaces of the roots and into the soil (soil-plant-air-continuum; SPAC). With few exceptions, water flow within the SPAC is hydraulic in nature, and the system can be described as a network of resistors arranged in series and in parallel. 
  • Evaporation from leaves lowers their water potential and causes water to move from the xylem to evaporating cells across leaf tissue. This reduces the pressure in the xylem, often to values well below zero (vacuum). 
  • Gradients in pressure (water potential) are established along transpiring plants; this causes an inflow of water from the soil into the roots and to the transpiring surfaces in the leaves.
Here is an animation that nicely summarizes this process.
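Steudle’s resistor-network picture can be sketched numerically, because hydraulic resistances combine exactly as electrical ones do: series resistances add, and parallel conductances add. The values below are invented placeholders to show the bookkeeping, not measured plant data.

```python
# Hydraulic resistances in the soil-plant-air continuum combine
# like electrical ones. All values are invented placeholders.

def series(*resistances):
    """Total resistance of elements in series: resistances add."""
    return sum(resistances)

def parallel(*resistances):
    """Total resistance of elements in parallel: conductances add."""
    return 1 / sum(1 / r for r in resistances)

# Hypothetical soil-to-leaf path: soil and root resistances in
# series with a trunk of many parallel xylem vessels and the leaf.
r_soil, r_root, r_leaf = 2.0, 3.0, 5.0
r_vessel = 1000.0   # one xylem vessel
n_vessels = 200

r_trunk = parallel(*([r_vessel] * n_vessels))  # about 5.0
r_total = series(r_soil, r_root, r_trunk, r_leaf)
print(round(r_total, 6))  # 15.0
```

The point of the parallel trunk is biological as well as arithmetic: with many vessels in parallel, an embolism in one vessel raises the total resistance only slightly, which is why trapped bubbles are not fatal to the tree.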

I find the idea of water being hoisted up a tree by tens of atmospheres of tension to be fascinating, if a bit disconcerting. This phenomenon offers a fine example of the important role of physics in biology.

Friday, August 23, 2013

Stealth Nanoparticles Boost Radiotherapy

I hope, dear readers, that you all have been regularly browsing through http://medicalphysicsweb.org, the website from the Institute of Physics dedicated to medical physics news. I was particularly taken by the article published there this week titled “Stealth Nanoparticles Boost Radiotherapy.” Russ Hobbie and I don’t talk about nanoparticles in the 4th edition of Intermediate Physics for Medicine and Biology, but they are a hot topic in biomedical research these days. The article by freelance journalist Cynthia Keen begins
Imagine a microscopic bomb precisely positioned inside a cancer tumour cell that explodes when ignited by a dose of precision-targeted radiotherapy. The cancerous tumour is destroyed. The healthy tissue surrounding it survives.

This scenario may become reality within a decade if research by Massachusetts scientists on using nanoparticles to deliver cancer-fighting drugs proceeds smoothly. Wilfred F Ngwa, a medical physicist in the department of radiation oncology at Brigham and Women's Hospital and Dana Farber Cancer Institute in Boston, described the latest initiative at the AAPM annual meeting, held earlier this month in Indianapolis, IN. 
We discuss radiation therapy in Chapter 16 of IPMB. The trick of radiotherapy is to selectively kill cancer cells while sparing normal tissue. The nanoparticles are designed to target tumors
by applying tumour vasculature-targeted cisplatin, Oxaliplatin or carboplatin [three widely used, platinum-based chemotherapy drugs] nanoparticles during external-beam radiotherapy, a substantial photon-induced boost to tumour endothelial cells can be achieved. This would substantially increase damage to the tumour’s blood vessels, as well as cells that cause cancer to recur, while also delivering chemotherapy with fewer toxicities.
Nanoparticles typically have a size on the order of 10 to 100 nm: small enough to pass easily through the smallest blood vessels, but too big to pass through ion channels in the cell membrane. A nanoparticle is about the size of a large biomolecule or a small virus. Nanoparticles are used in both imaging and therapy. For an overview, see the review by Shashi Murthy (International Journal of Nanomedicine, Volume 2, Pages 129–141, 2007).

The medicalphysicsweb article concludes
“The promising result of using approved platinum-based nanoparticles combined with experimental results of the past two years convince us that our new RAID [radiotherapy application with in situ dose-painting] approach to cancer provides a number of possibilities for customizing and significantly improving radiotherapy,” Ngwa said at the press conference. This research is still in its early stages, with laboratory testing of the new approach in mice ongoing. If tests continue to prove successful, and a grant or private funding is available, it will lead to clinical trials in humans. The researchers are hopeful that they will be able to continue their work without any disruption and to move their novel treatment from laboratory to clinical use. 
Another news story about this research can be found here.

Friday, August 16, 2013

We Need Theoretical Physics Approaches to Study Living Systems

An editorial titled “We Need Theoretical Physics Approaches to Study Living Systems,” which was published recently in the journal Physical Biology (Volume 10, Article number 040201), has resonated with me. Krastan Blagoev, Kamal Shukla and Herbert Levine discuss the importance of using simple physical models to understand complicated biological problems. The debate about how much detail to include in mathematical models is a constant source of tension between physicists and biologists, and even between physicists and biomedical engineers. I agree with the editorial’s authors: simple models are vitally important. Biologists (and even more so, medical doctors) put great emphasis on the complexity of their systems. But the value of a simple model is that it highlights the fundamental behavior of a system, behavior that is often not obvious from experiments. If we build realistic models including all the complexity, they will be just as difficult to understand as the experiments themselves. Blagoev, Shukla and Levine say much the same (my italics).
In this editorial, we propose that theoretical physics can play an essential role in making sense of living matter. When faced with a highly complex system, a physicist builds simplified models. Quoting Philip W Anderson’s Nobel prize address, “the art of model-building is the exclusion of real but irrelevant parts of the problem and entails hazards for the builder and the reader. The builder may leave out something genuinely relevant and the reader, armed with too sophisticated an experimental probe, may take literally a schematized model. Very often such a simplified model throws more light on the real working of nature... ” In his formulation, the job of a theorist is to get at the crux of the system by ignoring details and yet to find a testable consequence of the resulting simple picture. This is rather different than the predilection of the applied mathematician who wants to include all the known details in the hope of a quantitative simulacrum of reality. These efforts may be practically useful, but do not usually lead to increased understanding.
In my own research, the best example of simple model building is the prediction of adjacent regions of depolarization and hyperpolarization during electrical stimulation of the heart. Nestor Sepulveda, John Wikswo, and I used the “bidomain model,” which accounts for essential properties of cardiac tissue such as the tissue anisotropy and the relative electrical conductivity of the intracellular and extracellular spaces (Biophysical Journal, Volume 55, Pages 987–999, 1989; I have discussed this study in this blog before). Yet, this model was an enormous simplification. We ignored the opening and closing of ion channels, the membrane capacitance, the curvature of the myocardial fibers, the cellular structure of the tissue, the details of the electrode-tissue interface, the three-dimensional volume of the tissue, and much more. Nevertheless, the model made a nonintuitive qualitative prediction that was subsequently confirmed by experiments. I think the reason this research has made an impact (over 200 citations to the paper so far) is that we were able to strip our model of all the unnecessary details except those key ones underlying the qualitative behavior. The gist of this idea can be found in a quote usually attributed to Einstein: Everything should be made as simple as possible, but no simpler. I must admit, sometimes it pays to be lucky when deciding which features of a model to keep and which to throw out. But it is not all luck; model building is a skill that needs to be learned.

The editorial continues (again, my italics)
A leading biologist once remarked to one of us that a calculation of in vivo cytoskeletal dynamics that did not take into account the fact that the particular cell in question had more than ten isoforms of actin could not possibly be correct. We need to counter that any calculation which takes into account all these isoforms is overwhelmingly likely to be vastly under-constrained and ultimately not useful. Adding more details can often bring us further from reality. Of course, the challenge for models is then falsification, i.e., finding robust predictions which can be directly tested experimentally.
How does one learn and practice model building? One place to start—regular readers of this blog will have already guessed my answer—is the 4th edition of Intermediate Physics for Medicine and Biology. This book, and especially the homework problems at the end of each chapter, provides plenty of examples of model building (for simple models applied to the study of the heart, see Chapter 10, Problems 37–40). I think that this aspect of the book sets it apart from many other texts, which cover the biology in more detail.


Krastan Blagoev is the director of the Physics of Living Systems program at the National Science Foundation. According to the NSF website
The program “Physics of Living Systems” (PoLS) in the Physics Division at the National Science Foundation targets theoretical and experimental research exploring the most fundamental physical processes that living systems utilize to perform their functions in dynamic and diverse environments. The focus should be on understanding basic physical principles that underlie biological function. Proposals that use physics only as a tool to study biological questions are of low priority.
Because I might someday apply for a grant from the PoLS program, let me note that Dr. Blagoev is a gentleman and a scholar, who has done much to advance the application of physics to biology. To learn more about Blagoev, see the April 2008 issue of The Biological Physicist, the newsletter for the Division of Biological Physics of the American Physical Society. Shukla is the director for the “Biomolecular Dynamics, Structure and Function” program at NSF, which I am unlikely ever to seek funding from, so I’ll just say he is probably a good guy too. Levine is the Director of the Center for Theoretical Biological Physics at Rice University.

Friday, August 9, 2013

Martha Chase (1927–2003)

Ten years ago yesterday, the American biologist Martha Chase passed away. Chase is famous for her participation in a fundamental genetics experiment. In collaboration with Alfred Hershey, she performed this experiment in 1952 at Cold Spring Harbor Laboratory (see last week's blog entry).  Their results supported the hypothesis that DNA is the biological molecule that carries genetic information. They showed that the DNA, not the protein, of the bacteriophage T2 (a virus that infects bacteria) entered E. coli upon infection.

The Eighth Day of Creation: The Makers of the Revolution in Biology, by Horace Freeland Judson, superimposed on Intermediate Physics for Medicine and Biology.
The Eighth Day of Creation:
The Makers of the Revolution in Biology,
by Horace Freeland Judson.
To describe this experiment, I quote from Horace Freeland Judson’s wonderful book The Eighth Day of Creation: The Makers of the Revolution in Biology.
Hershey and Chase decided to see if they could strip off the empty phage ghosts from the bacteria and find out what they were and where their contents had gone. DNA contains no sulphur; phage protein has no phosphorus. Accordingly, they began by growing phage in a bacterial culture with a radioactive isotope as the only phosphorus in the soup [P32], which was taken up in all the phosphate groups as the DNA of the phage progeny was assembled, or, in the parallel experiment, by growing phage whose coat protein was labelled with hot sulphur [S35]. They used the phage to infect fresh bacteria in broths that were not radioactive, and a few minutes after infection tried to separate the bacteria from the emptied phage coats. “We tried various grinding arrangements, with results that weren’t very encouraging,” Hershey wrote later. Then they made a technological breakthrough, in the best Delbruck fashion of homely improvisation. “When Margaret McDonald loaned us her blender the experiment promptly succeeded.”
This ordinary kitchen blender provided just the right shear forces to strip the empty bacteriophage coats off the bacteria. When tested, those bacteria infected by phages containing radioactive phosphorus were themselves radioactive, but those infected by phages containing radioactive sulphur were not. Thus, the DNA and not the protein is the genetic material responsible for infection. This was truly an elegant experiment. The key was the use of radioactive tracers. Russ Hobbie and I discuss nuclear physics and nuclear medicine in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology. We focus on medical applications of radioactive isotopes, but we should remember that these tracers have also played a crucial role in experiments in basic biology.
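The physics behind these tracers is the exponential decay law from Chapter 17. As a small illustration, here is a sketch in Python of how quickly the two isotopes Hershey and Chase used lose their activity; the half-lives quoted are standard textbook values, and the function is my own example, not taken from IPMB:

```python
def activity_fraction(t_days, half_life_days):
    """Fraction of a radioisotope's initial activity remaining after t_days,
    from the exponential decay law A(t) = A0 * 2^(-t / T_half)."""
    return 2.0 ** (-t_days / half_life_days)

# Standard half-lives: phosphorus-32 about 14.3 days, sulfur-35 about 87.4 days.
for label, half_life in [("P-32", 14.3), ("S-35", 87.4)]:
    fraction = activity_fraction(7.0, half_life)  # one week after labeling
    print(f"{label}: {100 * fraction:.1f}% of the initial activity remains after one week")
```

The short half-life of phosphorus-32 is one reason such experiments had to be done promptly after labeling the phage.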

Hershey and Chase’s experiment, often called the Waring Blender experiment, is a classic studied in introductory biology classes. It was the high point of Chase’s career. She obtained her bachelor’s degree from the College of Wooster and was then hired by Hershey to work in his Cold Spring Harbor laboratory. She stayed at Cold Spring Harbor only three years, but in that time she and Hershey performed their famous experiment. In 1964 she obtained her PhD from the University of Southern California. Unfortunately, things did not go so well for Chase after that. Writer Milly Dawson tells the story.
In the late 1950s in California, she had met and married a fellow scientist, Richard Epstein, but they soon divorced… Chase suffered several other personal setbacks, including a job loss, in the late 1960s, a period that saw the end of her scientific career. Later, she experienced decades of dementia, with long-term but no short-term memory. [Waclaw] Szybalski [a colleague at Cold Spring Harbor Laboratory in the 1950s] remembered his friend as “a remarkable but tragic person.”
A good description of the Hershey-Chase experiment can be found here. You can learn more about the life of Martha Chase in obituaries here and here. Szybalski’s reminiscences are recorded in a Cold Spring Harbor oral history available here. Dawson’s tribute can be found here. And most importantly, the 1952 Hershey-Chase paper can be found here.

Friday, August 2, 2013

Cold Spring Harbor Laboratory

A photograph of me standing next to the entrance of Cold Spring Harbor Laboratory.
Me standing next to the entrance of
Cold Spring Harbor Laboratory.
Last week my wife, my mother-in-law, and I made a brief trip to Long Island, New York, where we made a quick stop at the Cold Spring Harbor Laboratory. What a lovely setting for a research center. We drove around the grounds, looking at the various labs. It sits right on a bay off the Long Island Sound, and looks more like a resort than a scientific laboratory. James Watson, of DNA fame, was the long-time director of Cold Spring Harbor Lab.

In the last few years, the lab has begun a thrust into “Quantitative Biology.” This area of research has much overlap with the 4th edition of Intermediate Physics for Medicine and Biology. I view this development as evidence that science is going in “our direction,” toward a larger role for physics and math in medicine and biology. The Cold Spring Harbor website describes the new Simons Center for Quantitative Biology.
Cold Spring Harbor Laboratory (CSHL) has recently opened the Simons Center for Quantitative Biology (SCQB). The areas of expertise in the SCQB include applied mathematics, computer science, theoretical physics, and engineering. Members of the SCQB will interact closely with other CSHL researchers and will apply their approaches to research areas including genomic analysis, population genetics, neurobiology, evolutionary biology, and signal and image processing.
We passed by CSHL during a trip that included stops at Sagamore Hill National Historic Site in Oyster Bay (President Theodore Roosevelt’s home), Planting Fields Arboretum, and the Montauk Point Lighthouse.

Friday, July 26, 2013

Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields

Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, by Jaakko Malmivuo and Robert Plonsey, superimposed on Intermediate Physics for Medicine and Biology.
Bioelectromagnetism,
by Malmivuo and Plonsey.
A good textbook about bioelectricity and biomagnetism is Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields by Jaakko Malmivuo and Robert Plonsey (Oxford University Press, 1995). One of the best features of the book is that it is available for free online at www.bem.fi/book/index.htm. The book covers many of the topics Russ Hobbie and I discuss in Chapters 6–9 of the 4th edition of Intermediate Physics for Medicine and Biology: the cable equation, the Hodgkin and Huxley model, patch-clamp recordings, the electrocardiogram, biomagnetism, the bidomain model, and magnetic stimulation. The book’s introduction outlines its eight parts:
Part I discusses the anatomical and physiological basis of bioelectromagnetism. From the anatomical perspective, for example, Part I considers bioelectric phenomena first on a cellular level (i.e., involving nerve and muscle cells) and then on an organ level (involving the nervous system (brain) and the heart).

Part II introduces the concepts of the volume source and volume conductor and the concept of modeling. It also introduces the concept of impressed current source and discusses general theoretical concepts of source-field models and the bidomain volume conductor. These discussions consider only electric concepts.

Part III explores theoretical methods and thus anatomical features are excluded from discussion. For practical (and historical) reasons, this discussion is first presented from an electric perspective in Chapter 11. Chapter 12 then relates most of these theoretical methods to magnetism and especially considers the difference between concepts in electricity and magnetism.

The rest of the book (i.e., Parts IV–IX) explores clinical applications. For this reason, bioelectromagnetism is first classified on an anatomical basis into bioelectric and bio(electro)magnetic constituents to point out the parallelism between them. Part IV describes electric and magnetic measurements of bioelectric sources of the nervous system, and Part V those of the heart.

In Part VI, Chapters 21 and 22 discuss electric and magnetic stimulation of neural and Part VII, Chapters 23 and 24, that of cardiac tissue. These subfields are also referred to as electrobiology and magnetobiology. Part VIII focuses on Subdivision III of bioelectromagnetism—that is, the measurement of the intrinsic electric properties of biological tissue. Chapters 25 and 26 examine the measurement and imaging of tissue impedance, and Chapter 27 the measurement of the electrodermal response.

In Part IX, Chapter 28 introduces the reader to a bioelectric signal that is not generated by excitable tissue: the electro-oculogram (EOG). The electroretinogram (ERG) also is discussed in this connection for anatomical reasons, although the signal is due to an excitable tissue, namely the retina.
Jaakko Malmivuo is a Professor in the School of Electrical Engineering at Aalto University in Helsinki, Finland. He is also the director of the Ragnar Granit Institute.

Robert Plonsey is the Pfizer-Pratt University Professor Emeritus of Biomedical Engineering at Duke University. This year, he received the IEEE Biomedical Engineering Award “for developing quantitative methods to characterize the electromagnetic fields in excitable tissue, leading to a better understanding of the electrophysiology of nerve, muscle, and brain.” Plonsey is cited on 16 pages of Intermediate Physics for Medicine and Biology, the most of any scientist or author.