Friday, February 26, 2010

All the News That’s Fit to Print

Newspaper articles may not provide the most authoritative information about science and medicine, but they are probably the primary source of news about medical physics for the layman. Today, I will discuss some recent articles from one of the leading newspapers in the United States: the venerable New York Times.


Last week Russ Hobbie sent me a copy of an article in the February 16 issue of the NYT, titled “New Source of an Isotope in Medicine is Found.” It describes the continuing shortage of technetium-99m, a topic I have discussed before in this blog.
Just as the worldwide shortage of a radioactive isotope used in millions of medical procedures is about to get worse, officials say a new source for the substance has emerged: a nuclear reactor in Poland.

The isotope, technetium 99, is used to measure blood flow in the heart and to help diagnose bone and breast cancers. Almost two-thirds of the world’s supply comes from two reactors; one, in Ontario, has been shut for repairs for nine months and is not expected to reopen before April, and the other, in the Netherlands, will close for six months starting Friday.

Radiologists say that as a result of the shortage, their treatment of some patients has had to revert to inferior materials and techniques they stopped using 20 years ago.

But on Wednesday, Covidien, a company in St. Louis that purifies the material created in the reactor and packages it in a form usable by radiologists, will announce that it has signed a contract with the operators of the Maria reactor, near Warsaw, one of the world’s most powerful research reactors.
I doubt that relying on a Polish reactor is a satisfactory long-term solution to our 99mTc shortage, but it may provide crucial help with the short-term crisis. A more promising permanent solution is described in a January 26 article on medicalphysicsweb.
GE Hitachi Nuclear Energy (GEH) announced today it has been selected by the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to help develop a U.S. supply of a radioisotope used in more than 20 million diagnostic medical procedures in the United States each year.
More information can be found at the American Association of Physicists in Medicine website. Let’s hope that this new initiative will prove successful.


The second topic I want to discuss today was called to my attention by my former student Phil Prior (PhD in Biomedical Sciences: Medical Physics, Oakland University, 2008). On January 26, the NYT published Walt Bogdanich’s article “As Technology Surges, Radiation Safeguards Lag.”
In New Jersey, 36 cancer patients at a veterans hospital in East Orange were overradiated—and 20 more received substandard treatment—by a medical team that lacked experience in using a machine that generated high-powered beams of radiation… In Louisiana, Landreaux A. Donaldson received 38 straight overdoses of radiation, each nearly twice the prescribed amount, while undergoing treatment for prostate cancer… In Texas, George Garst now wears two external bags—one for urine and one for fecal matter—because of severe radiation injuries he suffered after a medical physicist who said he was overworked failed to detect a mistake.

These mistakes and the failure of hospitals to quickly identify them offer a rare look into the vulnerability of patient safeguards at a time when increasingly complex, computer-controlled devices are fundamentally changing medical radiation, delivering higher doses in less time with greater precision than ever before.

Serious radiation injuries are still infrequent, and the new equipment is undeniably successful in diagnosing and fighting disease. But the technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.
A related article by the same author, “Radiation Offers New Cures, and Ways to Do Harm,” was also published in the Gray Lady a few days earlier. These articles discuss recent medical mistakes in which patients have received much more radiation than originally intended. While somewhat sensational, the articles reinforce the importance of quality control in medical physics.

The NYT articles triggered a response from the American Association of Physicists in Medicine on January 28.
The American Association of Physicists in Medicine (AAPM) has issued a statement today in the wake of several recent articles in the New York Times yesterday and earlier in the week that discuss a number of rare but tragic events in the last decade involving people undergoing radiation therapy.

While it does not specifically comment on the details of these events, the statement acknowledges their gravity. It reads in part: “The AAPM and its members deeply regret that these events have occurred, and we continue to work hard to reduce the likelihood of similar events in the future.” The full statement appears here.

Today's statement also seeks to reassure the public on the safety of radiation therapy, which is safely and effectively used to treat hundreds of thousands of people with cancer and other diseases every year in the United States. Medical physicists in hospitals and clinics across the United States are board-certified professionals who play a key role in assuring quality during these treatments because they are directly responsible for overseeing the complex technical equipment used.

Taken together, the articles I’ve discussed today highlight some of the challenges that face the field of medical physics. For those who want additional background about the underlying physics and its applications to medicine, I recommend—you guessed it—the 4th edition of Intermediate Physics for Medicine and Biology.

Friday, February 19, 2010

The Electron Microscope

Intermediate Physics for Medicine and Biology does not discuss one of the most important instruments in modern biology: the electron microscope. If I were to add a very brief introduction to the electron microscope, I would put it right after Sec. 14.1, The Nature of Light: Waves Versus Photons. It would look something like this.
14.1 ½ De Broglie Wavelength and the Electron Microscope

Like light, matter can have both wave and particle properties. The French physicist Louis de Broglie derived a quantum mechanical relationship between a particle’s momentum p and wavelength λ

λ = h/p     (14.6 ½)

[Eisberg and Resnick (1985)]. For example, a 100 eV electron has a speed of 5.9 × 10⁶ m s⁻¹ (about 2% the speed of light), a momentum of 5.4 × 10⁻²⁴ kg m s⁻¹, and a wavelength of 0.12 nm.

The electron microscope takes advantage of the short wavelength of electrons to produce exquisite pictures of very small objects. Diffraction limits the spatial resolution of an image to about a wavelength. For a visible light microscope, this resolution is on the order of 500 nm (Table 14.2). For the electron microscope, however, the wavelength of the electron sets the limit. A typical electron energy used for imaging is about 100 keV, implying a wavelength much smaller than an atom (although practical considerations often limit the resolution to about 1 nm). Viruses (see Table 1.2), which appear only as blurry smears in a light microscope, can be resolved with considerable detail in an electron microscope. In 1986, Ernst Ruska shared the Nobel Prize in Physics “for his fundamental work in electron optics, and for the design of the first electron microscope.”
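If I were really adding this section, I might also append a short numerical aside. Here is a Python sketch (mine, not from the book) that reproduces the 100 eV numbers quoted above and estimates the relativistic wavelength at 100 keV:

import math

h = 6.626e-34   # Planck's constant (J s)
m = 9.109e-31   # electron mass (kg)
c = 2.998e8     # speed of light (m/s)
e = 1.602e-19   # elementary charge (C)

# 100 eV electron, non-relativistic: K = p^2/(2m)
K = 100 * e
p = math.sqrt(2 * m * K)
print(f"100 eV:  v = {p / m:.1e} m/s, p = {p:.1e} kg m/s, "
      f"lambda = {h / p * 1e9:.2f} nm")

# 100 keV electron, relativistic: (pc)^2 = K^2 + 2 K m c^2
K = 100e3 * e
pc = math.sqrt(K**2 + 2 * K * m * c**2)
print(f"100 keV: lambda = {h * c / pc * 1e9:.4f} nm")

The second result, about 0.004 nm, is indeed much smaller than an atom.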

Electron microscopes come in two types. In a transmission electron microscope (TEM), electrons pass through a thin sample. In a scanning electron microscope (SEM), a fine beam of electrons is raster scanned across the sample and secondary electrons emitted by the surface are imaged. In both cases, the image is formed in vacuum and the electron beam is focused using a magnetic lens.
To learn more, you can watch a YouTube video about the electron microscope. Nice collections of electron microscope images can be found at http://www.denniskunkel.com, http://www5.pbrc.hawaii.edu/microangela and http://www.mos.org/sln/SEM.

Structure and function of the electron microscope. 

Friday, February 12, 2010

Biomagnetism and Medicalphysicsweb

Medicalphysicsweb is an excellent website for news and articles related to medical physics. Several articles that have appeared recently are related to the field of biomagnetism, a topic Russ Hobbie and I cover in Chapter 8 of the 4th edition of Intermediate Physics for Medicine and Biology. I have followed the biomagnetism field ever since graduate school, when I made some of the first measurements of the magnetic field of an isolated nerve axon. Below I describe four recent articles from medicalphysicsweb.

A February 2 article titled “Magnetometer Eases Cardiac Diagnostics” discusses a novel type of magnetometer for measuring the magnetic field of the heart. In Section 8.9 of our book, Russ and I discuss Superconducting Quantum Interference Device (SQUID) magnetometers, which have long been used to measure the small (100 pT) magnetic fields produced by cardiac activity. Another way to measure weak magnetic fields is to determine the Zeeman splitting of energy levels of a rubidium gas. The energy difference between levels depends on the external magnetic field, and is measured by detecting the frequency of optical light that is in resonance with this energy difference. Ben Varcoe, of the University of Leeds, has applied this technology to the heart by separating the magnetometer from the pickup coil:
The magnetic field detector—a rubidium vapour gas cell—is housed within several layers of magnetic shielding that reduce the Earth’s field about a billion-fold. The sensor head, meanwhile, is external to this shielding and contained within a handheld probe.
I haven’t been able to find many details about this device (such as if the pickup coils are superconducting or not, and why the pickup coil doesn’t transport the noise from the unshielded measurement area to the shielded detector), but Varcoe believes the device is a breakthrough in the way researchers can measure biomagnetic fields.
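To get a feel for the numbers, here is a rough Python estimate of the Zeeman resonance frequency in a cardiac field. It is my own sketch, not from the article, and it assumes a rubidium-87 ground state with g-factor 1/2 (roughly 7 Hz of splitting per nanotesla); the article gives no device parameters.

# ASSUMPTION: 87Rb ground state, g_F = 1/2, so f = g_F * (mu_B/h) * B
mu_B_over_h = 1.3996e10   # Bohr magneton over Planck constant (Hz/T)
g_F = 0.5                 # assumed ground-state g-factor
gamma = g_F * mu_B_over_h # about 7e9 Hz/T

B_heart = 100e-12         # ~100 pT cardiac field (Sec. 8.9)
B_earth = 50e-6           # Earth's field, for comparison

print(f"heart: {gamma * B_heart:.1f} Hz")        # about 0.7 Hz
print(f"Earth: {gamma * B_earth / 1e3:.0f} kHz") # about 350 kHz

The sub-hertz shift produced by the heart, compared with hundreds of kilohertz from the Earth’s field, makes clear why a billion-fold reduction of the ambient field is needed.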

Another recent (February 10) article on medicalphysicsweb is about the effect of magnetic resonance imaging scans on pregnant women. As described in Chapter 18 of Intermediate Physics for Medicine and Biology, MRI uses a radio-frequency magnetic field to flip the proton spins so their decay can be measured, resulting in the magnetic resonance signal. This radio-frequency field induces eddy currents in the body that heat the tissue. Heating is a particular concern if the MRI is performed on a pregnant woman, as it could affect fetal development.
Medical physicists at Hammersmith Hospital, Imperial College London, UK, have now developed a more sophisticated model of thermal transport between mother and baby to assess how MRI can affect the foetal temperature (Phys. Med. Biol. 55 913)… This latest analysis takes account of heat transport through the blood vessels in the umbilical cord, an important mechanism that was ignored in previous models. It also includes the fact that the foetus is typically half a degree warmer than the mother – another key piece of information overlooked in earlier work.
Russ and I discuss tissue heating in Sec. 14.10: Heating Tissue with Light, where we derive the bioheat equation. The authors of the study, Jeff Hand and his colleagues, found that under normal conditions fetal heating wasn’t a concern, but that if the fetus were exposed to 7.5 minutes of continuous RF field (unlikely during MRI) the temperature increase could be significant.

In a January 27 article, researchers from the University of Minnesota (Russ’s institution) use magnetoencephalography to diagnose post-traumatic stress disorder.
Post-traumatic stress disorder (PTSD) is a difficult condition to diagnose definitively from clinical evidence alone. In the absence of a reliable biomarker, patients’ descriptions of flashbacks, worry and emotional numbness are all doctors have to work with. Researchers from the University of Minnesota Medical School (Minneapolis, MN) have now shown how magnetoencephalography (MEG) could identify genuine PTSD sufferers with high confidence and without the need for patients to relive painful past memories (J. Neural Eng. 7 016011).
The magnetoencephalogram is discussed in Sec. 8.5 of Intermediate Physics for Medicine and Biology. The data for the Minnesota study were obtained using a 248-channel SQUID magnetometer. The researchers analyzed data from 74 patients with post-traumatic stress disorder known to the VA hospital in Minneapolis, and 250 healthy controls. The authors claim that the accuracy of the test is over 90%.

Finally, a February 8 article describes a magnetic navigation system installed in Taiwan by the company Stereotaxis.
The Stereotaxis System is designed to enable physicians to complete more complex interventional procedures by providing image guided delivery of catheters and guidewires through the blood vessels and chambers of the heart to treatment sites. This is achieved using computer-controlled, externally applied magnetic fields that govern the motion of the working tip of the catheter or guidewire, resulting in improved navigation, shorter procedure time and reduced x-ray exposure.
The system works by having ferromagnetic material in a catheter tip, and an applied magnetic field that can be adjusted to “steer” the catheter through the blood vessels. We discuss magnetic forces in Sec. 8.1 of Intermediate Physics for Medicine and Biology, and ferromagnetic materials in Sec. 8.8.

Although I believe medicalphysicsweb is extremely useful for keeping up-to-date on developments in medical physics, I find that often their articles either have specialized physics concepts that the layman may not understand or, more often, don’t address the underlying physics at all. Yet, one can’t understand modern medicine without mastery of the basic physics concepts. My browsing through medicalphysicsweb convinced me once again about the importance of learning how physics can be applied to medicine and biology. Perhaps I am biased, but I think that studying from the 4th edition of Intermediate Physics for Medicine and Biology is a great way to master these important topics.

Friday, February 5, 2010

Beta Decay and the Neutrino

In Section 17.4 in the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss beta decay and the neutrino.
The emission of a beta-particle is accompanied by the emission of a neutrino… [which] has no charge and no rest mass… [and] hardly interacts with matter at all… A particle that seemed originally to be an invention to conserve energy and angular momentum now has a strong experimental basis.
Understanding Physics:
The Electron, Proton, and Neutron,
by Isaac Asimov.
Our wording implies there is a story behind this particle “that seemed originally to be an invention to conserve energy.” Indeed, that is the case. I will let Isaac Asimov tell this tale. (Asimov's books, which I read in high school, influenced me to become a scientist.) The excerpt below is from Chapter 14 of his book Understanding Physics: The Electron, Proton, and Neutron.
In Chapter 11, disappearance in mass during the course of nuclear reactions was described as balanced by an appearance of energy in accordance with Einstein’s equation, e = mc². This balance also held in the case of the total annihilation of a particle by its anti-particle, or the production of a particle/anti-particle pair from energy.
Nevertheless, although in almost all such cases the mass-energy equivalence was met exactly, there was one notable exception in connection with radioactive radiations.

Alpha radiation behaves in satisfactory fashion. When a parent nucleus breaks down spontaneously to yield a daughter nucleus and an alpha particle, the sum of the mass of the two products does not quite equal the mass of the original nucleus. The difference appears in the form of energy—specifically, as the kinetic energy of the speeding alpha particle. Since the same particles appear as products at every breakdown of a particular parent nucleus, the mass-difference should always be the same, and the kinetic energy of the alpha particles should also always be the same. In other words, the beam of alpha particles should be monoenergetic. This was, in essence, found to be the case…

It was to be expected that the same considerations would hold for a parent nucleus breaking down to a daughter nucleus and a beta particle. It would seem reasonable to suppose that the beta particles would form a monoenergetic beam too…

Instead, as early as 1900, Becquerel indicated that beta particles emerged with a wide spread of kinetic energies. By 1914, the work of James Chadwick demonstrated the “continuous beta particle spectrum” to be undeniable.

The kinetic energy calculated for a beta particle on the basis of mass loss turned out to be a maximum kinetic energy that very few obtained. (None surpassed it, however; physicists were not faced with the awesome possibility of energy appearing out of nowhere.)

Most beta particles fell short of the expected kinetic energy by almost any amount up to the maximum. Some possessed virtually no kinetic energy at all. All told, a considerable portion of the energy that should have been present, wasn’t present, and through the 1920’s this missing energy could not be detected in any form.

Disappearing energy is as insupportable, really, as appearing energy, and though a number of physicists, including, notably, Niels Bohr, were ready to abandon the law of conservation of energy at the subatomic level, other physicists sought desperately for an alternative.

In 1931, an alternative was suggested by Wolfgang Pauli. He proposed that whenever a beta particle was produced, a second particle was also produced, and that the energy that was lacking in the beta particle was present in the second particle.

The situation demanded certain properties of the hypothetical particle. In the emission of beta particles, electric charge was conserved; that is, the net charge of the particles produced after emission was the same as that of the original particle. Pauli’s postulated particle therefore had to be uncharged. This made additional sense since, had the particle possessed a charge, it would have produced ions as it sped along and would therefore have been detectable in a cloud chamber, for instance. As a matter of fact, it was not detectable.

In addition, the total energy of Pauli’s projected particle was very small—only equal to the missing kinetic energy of the electron. The total energy of the particle had to include its mass, and the possession of so little energy must signify an exceedingly small mass. It quickly became apparent that the new particle had to have a mass of less than 1 percent of the electron and, in all likelihood, was altogether massless.

Enrico Fermi, who interested himself in Pauli’s theory at once, thought of calling the new particle a “neutron,” but Chadwick, at just about that time, discovered the massive, uncharged particle that came to be known by that name. Fermi therefore employed an Italian diminutive suffix and named the projected particle the neutrino (“little neutral one”), and it is by that name that it is known.

Friday, January 29, 2010

William Albert Hugh Rushton

This semester, I am teaching a graduate class at Oakland University on Bioelectric Phenomena (PHY 530). Rather than using a textbook, I require the students to read original papers, thereby providing insights into the history of the subject and many opportunities to learn about the structure and content of original research articles.

We began with a paper by Alan Hodgkin and Bernard Katz (“The Effect of Sodium Ions on the Electrical Activity of the Giant Axon of the Squid,” Journal of Physiology, Volume 108, Pages 37–77, 1949) that tests the hypothesis that the nerve membrane becomes selectively permeable to sodium during an action potential. We then moved on to Alan Hodgkin and Andrew Huxley’s monumental 1952 paper in which they present the Hodgkin-Huxley model of the squid nerve axon (“A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve,” Journal of Physiology, Volume 117, Pages 500–544, 1952). In order to provide a more modern view of the ion channels that underlie Hodgkin and Huxley’s model, we next read an article by Roderick MacKinnon and his group (“The Structure of the Potassium Channel: Molecular Basis of K+ Conduction and Selectivity,” Science, Volume 280, Pages 69–77, 1998). Then we read a paper by Erwin Neher, Bert Sakmann and their colleagues that described patch clamp recordings of single ion channels (“Improved Patch-Clamp Techniques for High-Resolution Current Recordings from Cells and Cell-Free Membrane Patches,” Pflugers Archive, Volume 391, Pages 85–100, 1981).

This week I wanted to cover one-dimensional cable theory, so I chose one of my favorite papers, by Alan Hodgkin and William Rushton (“The Electrical Constants of a Crustacean Nerve Fibre,” Proceedings of the Royal Society of London, B, Volume 133, Pages 444–479, 1946). I recall reading this lovely article during my first summer as a graduate student at Vanderbilt University (where my daughter Kathy is now attending college). My mentor, John Wikswo, had notebook after notebook full of research papers about nerve electrophysiology, and I set out to read them all. Learning a subject by reading the original literature is an interesting experience. It is less efficient than learning from a textbook, but you pick up many insights that are lost when the research is presented in a condensed form. Hodgkin and Rushton’s paper contains the fascinating quote
Electrical measurements were made by applying rectangular pulses of current and recording the potential response photographically. About fifteen sets of film were obtained in May and June 1939, and a preliminary analysis was started during the following months. The work was then abandoned and the records and notes stored for six years [my italics]. A final analysis was made in 1945 and forms the basis of this paper.
During those six years, the authors were preoccupied with a little issue called World War II.

Sometimes I like to provide my students with biographical information about the authors of these papers, and I had already talked about my hero, the Nobel Prize-winning Alan Hodgkin, earlier in the semester. So, I did some research on Rushton, who I was less familiar with. It turns out, he is known primarily for his work on vision. William Albert Hugh Rushton (1901–1980) has only a short Wikipedia entry, which does not even discuss his work on nerves. (Footnote: Several months ago, after reading—or rather listening to while walking my dog Suki—The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia by Andrew Lih, I became intensely interested in Wikipedia and started updating articles related to my areas of expertise. This obsession lasted for only about a week or two. I rarely make edits anymore, but I may update Rushton’s entry.) Rushton was a professor of physiology at Trinity College in Cambridge University. He became a Fellow of the Royal Society in 1948, and received the Royal Medal from that society in 1970.

Horace Barlow wrote an obituary for Rushton in the Biographical Memoirs of Fellows of the Royal Society (Volume 32, Pages 423–459, 1986). It begins
William Rushton first achieved scientific recognition for his work on the excitability of peripheral nerve where he filled the gap in the Cambridge succession between Lord Adrian, whose last paper on peripheral nerve appeared in 1922, and Alan Hodgkin, whose first paper was published in 1937. It was on the strength of this work that he was elected as a fellow of the Royal Society in 1948, but then Rushton started his second scientific career, in vision, and for the next 30 years he was dominant in a field that was advancing exceptionally fast. In whatever he was engaged he cut a striking and influential figure, for he was always interested in a new idea and had the knack of finding the critical argument or experiment to test it. He was argumentative, and often an enormously successful showman, but he also exerted much influence from the style of his private discussions and arguments. He valued the human intellect and its skillful use above everything else, and he successfully transmitted this enthusiasm to a large number of students and disciples.
Another of my favorite papers by Rushton is “A Theory of the Effects of Fibre Size in Medullated Nerve” (Journal of Physiology, Volume 115, Pages 101–122, 1951). Here, he correctly predicts many of the properties of myelinated nerve axons, such as the ratio of the inner and outer diameters of the myelin, from first principles.

Both of the Rushton papers I have cited here are also referenced in the 4th edition of Intermediate Physics for Medicine and Biology. Problem 34 in Chapter 6 is based on the Hodgkin-Rushton paper. It examines their analytical solution to the one-dimensional cable equation, which involves error functions. Was it Hodgkin or Rushton who was responsible for this elegant piece of mathematics gracing the Journal of Physiology? I can’t say for sure, but in Hodgkin’s Nobel Prize autobiography he claims he learned about cable theory from Rushton (who was 13 years older than him).
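For the curious, here is a Python sketch of that error-function solution (my paraphrase of the standard result; see the problem for the exact form), with distance and time measured in units of the cable length constant λ and time constant τ:

from math import erfc, exp, sqrt

def cable_v(X, T):
    """Normalized cable voltage after a step current is turned on at X = 0.
    X = x/lambda, T = t/tau; the steady state is 0.5*exp(-X)."""
    if T <= 0:
        return 0.0
    a = X / (2 * sqrt(T))
    b = sqrt(T)
    return 0.25 * (exp(-X) * erfc(a - b) - exp(X) * erfc(a + b))

# At the injection site (X = 0) this reduces to erf(sqrt(T))/2, rising
# toward half the steady-state value; far from the electrode it lags.
for T in (0.1, 1.0, 10.0):
    print("  ".join(f"V({X},{T}) = {cable_v(X, T):.3f}" for X in (0, 1, 2)))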

William Rushton provides yet another example of how a scientist with a firm grasp of basic physics can make fundamental contributions to biology.

Friday, January 22, 2010

Summer Internships

Many readers of the 4th edition of Intermediate Physics for Medicine and Biology are undergraduate majors in science or engineering. This time of the year, these students are searching for summer internships. I have a few suggestions.

My first choice is the NIH Summer Internship Program in Biomedical Research. The intramural campus of the National Institutes of Health in Bethesda, Maryland is the best place in the world to do biomedical research. My years working there in the 1990s were wonderful. Apply now. The deadline is March 1.

The National Science Foundation supports Research Experiences for Undergraduates (REU) programs throughout the US. Click here for a list (it is long, but probably incomplete). Often NSF requires schools to not just select from their own undergraduates, but also to open some positions in their REU program to students from throughout the country. You might also try to Google “REU” and see what you come up with. Each program has different deadlines and eligibility requirements. For several years Oakland University, where I work, had an REU program run by the physics department. We have applied for funding again, but have not heard yet if we were successful. If lucky, we will run the program this summer, with a somewhat later deadline than most.

Last year, as part of the federal government’s stimulus package, the National Institutes of Health encouraged researchers supported by NIH grants to apply for a supplement to fund undergraduate students in the summer. Most of these supplements were for two years, and this will be the second summer. Therefore, I expect there will be extra opportunities for undergraduate students to do biomedical research in the coming months. Strike while the iron’s hot! The stimulus program is scheduled to end next year.

Finally, one of the best ways for undergraduate students to find opportunities to do research in the summer is to ask around among your professors. Get a copy of your department’s annual report and find out which professors have funding. Attend department seminars and colloquia to find out who is doing research that interests you. Or just show up at a faculty member’s door and ask (first read what you can about his or her research, and have your resume in hand). If you can manage it financially, consider working without pay for the first summer, just to get your foot in the door.

When I look back on my undergraduate education at the University of Kansas, one of the most valuable experiences was doing research in Professor Wes Unruh’s lab. I learned more from Unruh and his graduate students than I did in my classes. But such opportunities don’t just fall into your lap. You need to look for them. Ask around, knock on some doors, and keep your eyes open. And start now, because many of the formal internship programs have deadlines coming up soon.

If, dear reader, you are fortunate enough to get an internship this summer, but it’s far from home, then don’t forget to pack your copy of Intermediate Physics for Medicine and Biology when you go. After working all day in the lab, you can relax with it in the evening!

Good luck.

Friday, January 15, 2010

TeX

The TeXbook,
by Donald Knuth.
Russ Hobbie and I wrote the 4th edition of Intermediate Physics for Medicine and Biology using TeX, the typesetting program developed by Donald Knuth. Well, not really. We actually used LaTeX, a document markup language based on TeX. To be honest, “we” didn’t even use LaTeX: Russ did all the typesetting with LaTeX while I merely read pdf files and sent back comments and suggestions.

TeX is particularly well suited for writing equations, of which there are many in Intermediate Physics for Medicine and Biology. I used TeX in graduate school, while working in John Wikswo’s laboratory at Vanderbilt University. This was back in the days before LaTeX was invented, and writing equations in TeX was a bit like programming in machine language. I remember sitting at my desk with Knuth’s TeXbook (blue, spiral bound, and delightful), worrying about arcane details of typesetting some complicated expression. At that time, TeX was new and unique. When I first arrived at Vanderbilt in 1982, Wikswo’s version of TeX did not even have a WYSIWYG editor, and our lab did not have a laser printer, so I would make a few changes in the TeX document and then run down the hall to the computer center to inspect my printout. As you can imagine, after several iterations of this process the novelty of TeX wore off. But, oh, did our papers look good when we shipped them out to the journal (and, yes, we did mail paper copies; no electronic submission back then). Often, I thought our version looked better than what was published. By the way, Donald Knuth is a fascinating man. Check out his website at http://www-cs-faculty.stanford.edu/~knuth. He pays $2.56 to readers who find an error in his books (according to Knuth, 256 pennies is one hexadecimal dollar). Russ Hobbie used to pay a quarter for errors, and all I give is a few lousy extra credit points to my students.
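For readers who have never seen TeX source, here is a minimal LaTeX example (mine; Russ handled the book's actual source) that typesets a couple of equations in the style discussed above:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The de Broglie relation from the February 19 post:
\begin{equation}
  \lambda = \frac{h}{p}
\end{equation}
% A more elaborate example, the one-dimensional cable equation:
\begin{equation}
  \lambda^2 \frac{\partial^2 V}{\partial x^2} - V
    = \tau \frac{\partial V}{\partial t}
\end{equation}
\end{document}

Compile it with latex or pdflatex and the equations come out beautifully typeset. The source is plain text, which is much of the appeal.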

I must confess, nowadays I use the equation editor in Microsoft Word for writing equations. Word’s output doesn’t look as nice as TeX’s, but I find it easier to use. The solution manual for the 4th edition of Intermediate Physics for Medicine and Biology is written entirely using Word (email Russ or me for a copy), and so is the errata. But I did reacquaint myself with TeX when writing my Scholarpedia article about the bidomain model. Both Wikipedia and Scholarpedia use some sort of TeX hybrid for equations.

Listen to Donald Knuth describe his work.
https://www.youtube.com/embed/nyCW279KCM4

Friday, January 8, 2010

In The Beat of a Heart

In the Beat of a Heart:
Life, Energy, and the Unity of Nature,
by John Whitfield.
Over Christmas break, I read In the Beat of a Heart: Life, Energy, and the Unity of Nature, by John Whitfield. I had mixed feelings about the book. I didn’t have much interest in the parts dealing with biodiversity in tropical forests and skimmed through them rather quickly. But other parts I found fascinating. One of the main topics explored in the book is Kleiber’s law (metabolic rate scales as the 3/4th power of body mass), which Russ Hobbie and I discuss in Chapter 2 of the 4th edition of Intermediate Physics for Medicine and Biology. But the book has a broader goal: to compare and contrast the approaches of physicists and biologists to understanding life. The main idea can be summarized by the subtitle of the textbook I studied biology out of when an undergraduate at the University of Kansas: The Unity and Diversity of Life. Intermediate Physics for Medicine and Biology lies on the “unity” side of this great divide, but the interplay of these two views of life makes for a remarkable story.

The book begins with a Prologue about D’Arcy Thompson (Whitfield calls him “the last Victorian scientist”), author of the influential, if out-of-the-mainstream, book On Growth and Form.
This is the story—with some detours—of D’Arcy Thompson’s strand of biology and of a century-long attempt to build a unified theory, based on the laws of physics and mathematics, of how living things work. At the story’s heart is the study of something that Thompson called “a great theme”—the role of energy in life... The way that energy affects life depends on the size of living things. Size is the most important single notion in our attempt to understand energy’s role in nature. Here, again, we shall be following Thompson’s example. After its introduction, On Growth and Form ushers the reader into a physical view of living things with a chapter titled “On Magnitude,” which looks at the effects of body size on biology, a field called biological scaling.
In the Beat of a Heart examines Max Rubner’s idea that metabolism scales with surface area (2/3rd power), and Max Kleiber’s modification of this rule to a 3/4th power. It then describes the attempt of physicist Geoffrey West and ecologist Brian Enquist to explain this rule by modeling the fractal networks that provide the raw materials needed to maintain metabolism. While I was familiar with much of this story before reading Whitfield’s book, I nevertheless found the historical context and biographical background engrossing. Then came the lengthy section on forest ecology (Zzzzzzzzz). I soldiered on and was rewarded by a penetrating final chapter comparing the physicist’s and biologist’s points of view.
Finding a unity of nature would not make studying the details of nature obsolete. Indeed, finding unity depends on understanding the details. The variability of life means that in biology the ability to generalize is not enough. If you’ve measured one electron, you’ve measured them all, but, as I saw in Costa Rica, to understand a forest you must be able to see the trees, and that takes a botanist. Thinkers such as Humboldt, Darwin, and Wallace gained their understanding of how nature works from years of intimate experience of nature in the flesh and the leaf. And yet they were not just interested in what their senses told them: they also tried to abstract and unify. The combination of attributes—intrepid and reflective, naturalist and mathematician—strikes me as rather rare, and becoming more so. These days scientific lone wolves such as D’Arcy Thompson are almost extinct, and it would take a truly awesome polymath to acquire the necessary suite of skills in natural history, ecology, mathematics, and physics to devise a theory as complex as fractal networks.
The book ends with a thought-provoking question and answer that sums up the debate nicely:
Is nature beautifully simple or beautifully complex? Yes, it is.
More about In the Beat of a Heart can be found at the book’s website: http://www.inthebeatofaheart.com.
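A numerical footnote on the scaling debate: to see how different Rubner’s 2/3 rule and Kleiber’s 3/4 rule really are, here is a toy Python comparison (my own sketch; the masses are illustrative, and the prefactor drops out when rates are expressed relative to a 1 kg animal).

# Metabolic rate relative to a 1 kg animal under each scaling rule.
animals = {"mouse": 0.02, "human": 70.0, "elephant": 5000.0}  # mass in kg

for name, M in animals.items():
    print(f"{name:9s} M = {M:7.2f} kg   "
          f"2/3 rule: {M ** (2 / 3):8.2f}   3/4 rule: {M ** 0.75:8.2f}")

For an elephant the two exponents disagree by about a factor of two, which is one reason that telling them apart takes careful data spanning many decades of body mass.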

Friday, January 1, 2010

BIO2010

2010 is finally here. Happy New Year! Let’s celebrate by discussing the National Research Council report BIO2010.

In 2003 the NRC released the report BIO2010: Transforming Undergraduate Education for Future Research Biologists. If I had to sum up the report in one phrase, it would be “they are singing our song.” In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I incorporate many of the ideas championed in BIO2010. The preface of the report recommends
a comprehensive reevaluation of undergraduate science education for future biomedical researchers. In particular it calls for a renewed discussion on the ways that engineering and computer science, as well as chemistry, physics, and mathematics are presented to life science students. The conclusions of the report are based on input from chemists, physicists, and mathematicians, not just practicing research biologists. The committee recognizes that all undergraduate science education is interconnected. Changes cannot be made solely to benefit future biomedical researchers. The impact on undergraduates studying other types of biology, as well as other sciences, cannot be ignored as reforms are considered. The Bio2010 report therefore provides ideas and options suitable for various academic situations and diverse types of institutions. It is hoped that the reader will use these possibilities to initiate discussions on the goals and methods of teaching used within their own department, institution, or professional society.
The executive summary begins
The interplay of the recombinant DNA, instrumentation, and digital revolutions has profoundly transformed biological research. The confluence of these three innovations has led to important discoveries, such as the mapping of the human genome. How biologists design, perform, and analyze experiments is changing swiftly. Biological concepts and models are becoming more quantitative, and biological research has become critically dependent on concepts and methods drawn from other scientific disciplines. The connections between the biological sciences and the physical sciences, mathematics, and computer science are rapidly becoming deeper and more extensive. The ways that scientists communicate, interact, and collaborate are undergoing equally rapid and dramatic transformations, which are driven by the accessibility of vast computing power and facile information exchange over the Internet.
Readers of this blog will be particularly interested in Recommendation #1.3 of the report, dealing with the physics education required by biologists, reproduced below. In the list of concepts the report considers essential, I have indicated in brackets the sections of the 4th edition of Intermediate Physics for Medicine and Biology that address each topic. (I admit that the comparison of the report’s recommended physics topics to those topics covered in our book may be a bit unfair, because the report was referring to an introductory physics class, not an intermediate one.) Some of the connections between the report’s topics and sections in our book need additional elaboration, which I have included as footnotes.
Physics

RECOMMENDATION #1.3

The principles of physics are central to the understanding of biological processes, and are increasingly important in sophisticated measurements in biology. The committee recommends that life science majors master the key physics concepts listed below. Experience with these principles provides a simple context in which to learn the relationship between observations and mathematical description and modeling.

The typical calculus-based introductory physics course taught today was designed to serve the needs of physics, mathematics, and engineering students. It allocates a major block of time to electromagnetic theory and to many details of classical mechanics. In so doing, it does not provide the time needed for in-depth descriptions of the equally basic physics on which students can build an understanding of biology. By emphasizing exactly solvable problems, the course rarely illustrates the ways that physics can be applied to more recalcitrant problems. Illustrations involving modern biology are rarely given, and computer simulations are usually absent. Collective behaviors and systems far from equilibrium are not a traditional part of introductory physics. However, the whole notion of emergent behavior, pattern formation, and dynamical networks is so central to understanding biology, where it occurs in an extremely complex context, that it should be introduced first in physical systems, where all interactions and parameters can be clearly specified, and quantitative study is possible.

Concepts of Physics

Motion, Dynamics, and Force Laws
  • Measurement¹: physical quantities [throughout], units [1.1, symbol list at the end of each chapter], time/length/mass [1.1], precision [none]
  • Equations of motion²: position [Appendix B], velocity [Appendix B], acceleration [Appendix B], motion under gravity [2.7, Problem 1.28]
  • Newton’s laws [1.8]: force [1.2], mass [1.12], acceleration [Appendix B], springs [Appendix F] and related material: stiffness³ [1.9], damping⁴ [1.14, 2.7, 10.6], exponential decay [2.2], harmonic motion [10.6]
  • Gravitational [3.9] and spring [none] potential energy, kinetic energy [1.8], power [1.8], heat from dissipation [Problem 8.24], work [1.8]
  • Electrostatic forces [6.2], charge [6.2], conductors/insulators [6.5], Coulomb’s law [6.2]
  • Electric potential [6.4], current [6.8], units [6.2, 6.4, 6.6, 6.8], Ohm’s law [6.8]
  • Capacitors [6.6], R [6.9] and RC [6.11] circuits
  • Magnetic forces [8.1] and magnetic fields [8.2]
  • Magnetic induction and induced currents [8.6]
Conservation Laws and Gobal [sic] Constraints
  • Conservation of energy [3.3] and momentum⁵ [15.4]
  • Conservation of charge [6.9, 7.9]
  • First [3.3] and Second [3.19] Laws of thermodynamics
Thermal Processes at the Molecular Level
  • Thermal motions: Brownian motion [3.10], thermal force (collisions) [none], temperature [3.5], equilibrium [3.5]
  • Boltzmann’s law [3.7], kT [3.5], examples [3.8, 3.9, 3.10]
  • Ideal gas statistical concepts using Boltzmann’s law, pressure [1.11]
  • Diffusion limited dynamics⁶ [4.6], population dynamics [2.9, Problem 2.34]
Waves, Light, Optics, and Imaging
  • Oscillators and waves [13.1]
  • Geometrical optics: rays, lenses [14.12], mirrors⁷ [none]
  • Optical instruments: microscopes and microscopy [Problem 14.45]
  • Physical optics: interference [14.6.2] and diffraction [13.7]
  • X-ray scattering [15.4] and structure determination [none]
  • Particle in a box [none]; energy levels [3.2, 14.2]; spectroscopy from a quantum viewpoint [14.2, 14.3]
  • Other microscopies⁸: electron [none], scanning tunneling [none], atomic force [none]
Collective Behaviors and Systems far from Equilibrium
  • Liquids [1.11, 1.12, 1.14, 1.15], laminar flow [1.14], viscosity [1.14], turbulence [1.18]
  • Phase transitions⁹ [Problem 3.57], pattern formation¹⁰ [10.11.5], and symmetry breaking [none]
  • Dynamical networks¹¹: electrical, neural, chemical, genetic [none]
1. Russ Hobbie and I have not developed a laboratory to go along with our book, so we don’t discuss measurement, the important differences between precision and accuracy, the ideas of random versus systematic error, or error propagation.

2. Some elementary topics—such as position, velocity, and acceleration vectors—are not presented in the book, but are summarized in an Appendix (we assume they would be mastered in an introductory physics class). We analyze Newton’s second law specifically, but do not develop his three laws of motion in general.

3. We describe Young’s modulus, but we never introduce the term “stiffness.” We talk about potential energy, and especially electrical potential energy, but we don’t spend much time on mass-spring systems and never introduce the concept of elastic (or spring) potential energy.

4. The term “damping” is used only occasionally in our book, but we discuss several types of dissipative phenomena, such as viscosity, exponential decay plus a constant input, and a harmonic oscillator with friction.

5. We use conservation of momentum when we analyze Compton scattering of electrons in Chapter 15, but we never actually present conservation of momentum as a concept.

6. We don’t discuss “diffusion limited dynamics,” but we do analyze diffusion extensively in Chapter 4.

7. We analyze lenses, but not mirrors, and never analyze the reflection of light (although we spend considerable time discussing the reflection of ultrasonic waves in Chapter 13).

8. I have to admit our book is weak on microscopy: the light microscope is relegated to a homework problem, and we don’t talk at all about electron, scanning tunneling, or atomic force microscopies.

9. We discuss thermodynamic phase transitions in a homework problem, but I believe that the report refers more generally to phase transitions that occur in condensed matter physics (e.g., the Ising model), which we do not discuss.

10. We touch on pattern formation in Chapter 10, and in particular in Problems 10.39 and 10.40 that describe wave propagation in the heart using a cellular automaton. But we do not analyze pattern formation (such as Turing patterns) in detail. Symmetry breaking is not mentioned.

11. We don’t discuss neural networks, or other related topics such as emergent behavior. We can only cover so much in one book.


I may be biased, but I believe that the 4th edition of Intermediate Physics for Medicine and Biology does a pretty good job of implementing the BIO2010 report suggestions into a textbook on physics for biologists. With 2010 now here, it’s important to remind aspiring biology students about the importance of physics in their undergraduate education.

Friday, December 25, 2009

A Present From Santa

Santa arrived last night and left you, dear reader, a present in your stocking: two new homework problems for the 4th edition of Intermediate Physics for Medicine and Biology. The problems belong to Chapter 8 on Biomagnetism (one of my favorite chapters), and specifically to Section 8.6 on Electromagnetic Induction. They both explore the idea of skin depth, but from somewhat different perspectives. Please forgive Santa for being a bit long-winded; he got carried away.

Enjoy.
Section 8.6

Problem 25.1 The concept of “skin depth” plays a role in some biomagnetic applications.
(a) Write Ampere’s law (Eq. 8.22) for the case when the displacement current is negligible.
(b) Use Ohm’s law (Eq. 6.26) to write the result from (a) in terms of the electric field.
(c) Take the curl of both sides of the equation you found in (b) (assume the conductivity σ is homogeneous and isotropic).
(d) Use Faraday’s law (Eq. 8.20), ∇·B = 0 (Eq. 8.7), and the vector identity ∇×(∇×B) = ∇(∇·B) − ∇²B to simplify the result from (c).
(e) Your answer to (d) should be the familiar diffusion equation (Eq. 4.24). Express the diffusion constant D in terms of electric and magnetic parameters.
(f) In Chapter 4, we found that diffusion over a distance L takes a time T equal to L²/2D. During transcranial magnetic stimulation, L = 0.1 m, σ = 0.1 S/m and μ₀ = 4π × 10⁻⁷ T m/A. How long does the magnetic field take to diffuse into the head? Is this time much longer than or much shorter than the rise time of the magnetic field for the stimulator designed by Barker et al. (1985)?
(g) Solve T = L²/2D for L, using the expression for D found in (e). Calculate L for T = 0.1 ms. Is L much larger than or much smaller than the size of your head? L is closely related to the “skin depth” defined in electromagnetic theory.
(h) During magnetic resonance imaging (see Chapter 18), an 85 MHz radio-frequency magnetic field is applied to the body. Calculate L using half a period for T. How does L compare to the size of the head? The frequency of the RF field is proportional to the strength of the static magnetic field in an MRI device, and 85 MHz corresponds to 2 T. If the static field is 7 T (common in modern high-field MRI), calculate L. Is it safe to ignore skin depth during high-field MRI?
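For readers who would rather check their arithmetic than trust Santa, here is a quick numerical sketch in Python for parts (f) through (h). It is a spoiler, so look away if you want to work them out yourself; all the parameters come straight from the problem statement.

import math

mu0 = 4 * math.pi * 1e-7    # T m/A
sigma = 0.1                 # S/m
D = 1 / (mu0 * sigma)       # diffusion constant from part (e), m^2/s

# (f) time to diffuse L = 0.1 m into the head
print(f"(f) T = {0.1**2 / (2 * D):.1e} s")          # far below 0.1 ms

# (g) diffusion distance for T = 0.1 ms
print(f"(g) L = {math.sqrt(2 * D * 1e-4):.0f} m")   # far bigger than a head

# (h) L over half an RF period, at 85 MHz (2 T) and the 7 T frequency
for f in (85e6, 85e6 * 7 / 2):
    L = math.sqrt(2 * D / (2 * f))
    print(f"(h) {f / 1e6:.0f} MHz: L = {L:.2f} m")  # comparable to a head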


Problem 25.2 During magnetic stimulation, a changing magnetic field B induces eddy currents in the body that produce their own magnetic field B'. The goal of this problem is to compare B' and B. We can estimate B' using the following approximations. First, ignore the vector nature of all fields and do not distinguish between components. Second, ignore all negative signs. Third, replace all time derivatives with multiplication by 1/T, where T is a characteristic time. Fourth, replace all space derivatives (such as the curl) by multiplication with 1/L, where L is a characteristic length.
(a) Use Faraday’s law (Eq. 8.20) to estimate the induced electric field E from B.
(b) Use Ohm’s law (Eq. 6.26) to estimate the current density J from E.
(c) Use Ampere’s law (Eq. 8.22, but ignore displacement currents) to estimate B' from J.
(d) Combine parts (a), (b), and (c) to determine an expression for the ratio B'/B in terms of the conductivity σ, the permeability μ₀, L, and T.
(e) In magnetic stimulation, L = 0.1 m, T = 0.1 ms, σ = 0.1 S/m and μ₀ = 4π × 10⁻⁷ T m/A. Calculate B'/B. Is it safe to ignore B' compared to B during magnetic stimulation?
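And the analogous check for part (e) of Problem 25.2 (again, a spoiler; the parameters are those given in the problem):

import math

mu0 = 4 * math.pi * 1e-7   # T m/A
sigma = 0.1                # S/m
L = 0.1                    # m
T = 1e-4                   # s

# Chaining parts (a)-(c): E ~ B L / T, J = sigma E, B' ~ mu0 J L, so
# B'/B ~ mu0 * sigma * L**2 / T
print(f"B'/B ~ {mu0 * sigma * L**2 / T:.1e}")   # about 1e-5, negligible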