Friday, March 12, 2010

The Strangest Man

The Strangest Man:
The Hidden Life of Paul Dirac,
Mystic of the Atom,
by Graham Farmelo.
I recently read The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom, by Graham Farmelo, a fascinating biography of the Nobel Prize-winning physicist Paul Adrien Maurice Dirac. One thing I did not find in the book was biological or medical physics. Nevertheless, Russ Hobbie and I mention Dirac in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology, in connection with the Dirac delta function.
The δ function can be thought of as a rectangle of width a and height 1/a in the limit [as a goes to zero]… The δ function is not like the usual function in mathematics because of its infinite discontinuity at the origin. It is one of a class of “generalized functions” whose properties have been rigorously developed by mathematicians since they were first used by the physicist P. A. M. Dirac.
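In modern notation, the defining “sifting” property that makes the δ function so useful, along with the rectangle limit described in that passage, can be written compactly (standard identities, not quotations from the book):

```latex
% Sifting property: integrating f(x) against a shifted delta picks out f(a)
\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a)

% Rectangle limit: width a, height 1/a, unit area, as a goes to zero
\delta(x) = \lim_{a\to 0} \frac{1}{a}\,\Pi\!\left(\frac{x}{a}\right),
\qquad
\Pi(u) = \begin{cases} 1, & |u| < 1/2 \\ 0, & \text{otherwise} \end{cases}
```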
The Principles of Quantum Mechanics,
by Paul Dirac.
Dirac won his Nobel Prize for contributions to quantum mechanics. I bought a copy of his famous textbook The Principles of Quantum Mechanics when I was an undergraduate at the University of Kansas. Farmelo describes it as “never out of print, it remains the most insightful and stylish introduction to quantum mechanics and is still a powerful source of inspiration for the most able young theoretical physicists. Of all the textbooks they use, none presents the theory with such elegance and with such relentless logic.”

One of Dirac’s greatest contributions was the prediction of positive electrons, or positrons, a type of antimatter. His prediction arose from the relativistic wave equation for the electron, now called the Dirac equation. An interesting feature of the Dirac equation is that it implies negative energy states. The only time these negative states are observable is when an electron is missing from one of the states: a hole. Farmelo writes
The bizarre upshot of the theory is that the entire universe is pervaded by an infinite number of negative-energy electrons – what might be thought of as a “sea.” Dirac argued that this sea has a constant density everywhere, so that experimenters can observe only departures from this perfect uniformity… Only a disturbance in Dirac’s sea—a bursting bubble, for example—would be observable. He envisaged just this when he foresaw that there would be some vacant states in the sea of negative-energy electrons, causing tiny departures from the otherwise perfect uniformity. Dirac called these unoccupied states “holes”... Each hole has positive energy and positive charge—the properties of the proton, the only other subatomic particle known at that time [1929]. So Dirac made the simplest possible assumption by suggesting that a hole is a proton.
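For readers who want to see the mathematics behind this story: in modern covariant notation (not Dirac’s original 1928 matrix form), the free-electron Dirac equation reads

```latex
\left( i\hbar\,\gamma^\mu \partial_\mu - mc \right)\psi = 0 .
```

Because the relativistic energy-momentum relation E² = (pc)² + (mc²)² has both positive and negative roots for E, the equation admits negative-energy solutions, and those solutions are the “sea” Farmelo describes.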
We now know that these holes are not protons but are positrons, discovered experimentally in 1932 by Carl Anderson. Positrons are vital for understanding how x-rays interact with matter, as Russ and I describe in Section 15.6 of Intermediate Physics for Medicine and Biology
A photon with energy above 1.02 MeV can produce a particle-antiparticle pair: a negative electron and a positive electron or positron… Since the rest energy (mc²) of an electron or positron is 0.51 MeV, pair production is energetically impossible for photons below 2mc² = 1.02 MeV.

One can show, using E = pc for the photon, that momentum is not conserved by the positron and electron if Eq. 15.23 [conservation of energy] is satisfied. However, pair production always takes place in the Coulomb field of another particle (usually a nucleus) that recoils to conserve momentum.
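A sketch of that argument, using only the standard relativistic relations (my notation, not the book’s): the photon obeys E = pc exactly, while each massive particle has pc strictly less than its total energy. So if energy were conserved with nothing else present,

```latex
E_\gamma = E_+ + E_-, \qquad
p_\pm c = \sqrt{E_\pm^2 - (mc^2)^2} < E_\pm
\;\Longrightarrow\;
(p_+ + p_-)\,c < E_+ + E_- = E_\gamma = p_\gamma c ,
```

and the pair falls short of the photon’s momentum even in the most favorable case of both particles moving parallel to it; a recoiling nucleus must make up the difference.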
In Sec. 17.14, Russ and I describe the crucial role positrons play in medical imaging.
If a positron emitter is used as the radionuclide, the positron comes to rest and annihilates an electron, emitting two annihilation photons back to back. In positron emission tomography (PET) these are detected in coincidence. This simplifies the attenuation correction, because the total attenuation for both photons is the same for all points of emission along each gamma ray through the body (see Problem 54). Positron emitters are short-lived, and it is necessary to have a cyclotron for producing them in or near the hospital. This is proving to be less of a problem than initially imagined. Commercial cyclotron facilities deliver isotopes to a number of nearby hospitals. Patterson and Mosley (2005) found that 97% of the people in the United States live within 75 miles of a clinical PET facility.
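The attenuation argument in that passage is worth making explicit (a sketch in my own notation; the book develops it in Problem 54). If the annihilation occurs at depth x along a line through the body of total length L, each photon is attenuated over its own segment, and the coincidence probability is

```latex
P_{\text{coinc}}
= e^{-\int_0^{x} \mu\,ds}\; e^{-\int_x^{L} \mu\,ds}
= e^{-\int_0^{L} \mu\,ds} ,
```

which depends only on the full chord through the body and not on where along it the positron annihilated.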
Another famous prediction of Dirac’s was magnetic monopoles. Russ and I only mention monopoles in passing in Section 8.8.1: “Since there are no known magnetic charges (monopoles), we must consider the effect of magnetic fields on current loops or magnetic dipoles.” Dirac predicted that magnetic monopoles could exist. Farmelo tells the story.
In Cambridge, during the spring of 1931, Dirac happened upon a rich new seam of ideas that would crystallize into one of his most famous contributions to science… As usual, Dirac appears to have said nothing of this to anyone, even to his close friends. In the early months of 1931, a quiet time for his fellow theoreticians, he was working on the most promising new theory he had conceived for years. The theory broke new ground in magnetism. For centuries, it had been a commonplace of science that magnetic poles come only in pairs, labeled north and south: if one pole is spotted, then the opposite one will be close by. Dirac had found that quantum theory is compatible with the existence of single magnetic poles. During a talk at the Kapitza Club, he dubbed them magnons, but the name never caught on in this context; the particles became known as magnetic monopoles.
Physicists have searched for magnetic monopoles, and once they even thought they found one. In 1982, physicist Blas Cabrera observed a signal consistent with the experimental signature of a monopole (Physical Review Letters, Volume 48, Pages 1378–1381), but it now appears to have been an artifact, as the result has never been reproduced. I have my own remote (indeed, very remote) connection with this experiment (and thus to Dirac). Cabrera’s PhD advisor, William Fairbank, was John Wikswo’s PhD advisor, and Wikswo was in turn my PhD advisor. Thus, academically speaking, I am one of Cabrera’s scientific nephews.
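For context, the substance of Dirac’s 1931 insight is his quantization condition, which in Gaussian units reads (a standard result, included here for reference; it is not in our book):

```latex
e\,g = \frac{n\,\hbar c}{2}, \qquad n = 0, \pm 1, \pm 2, \ldots
```

so the existence of even a single monopole of strength g anywhere in the universe would force electric charge to come only in discrete multiples, which is why the idea retains its appeal despite the null searches.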

Dirac was known for saying little and behaving rather oddly (the title of the book is, after all, “The Strangest Man”), and Farmelo suggests a possible reason: Dirac may have been autistic.
[Dirac] always attributed his extreme taciturnity and stunted emotions to his father’s disciplinarian regime; but there is another, quite different explanation, namely that he was autistic. Two of Dirac’s younger colleagues confided in me that they had concluded this, each of them making their disclosure sotto voce, as if they were imparting a shameful secret. Both refused to be quoted… There is not nearly enough detail in her [Dirac’s mother’s] comments or in reports of Dirac’s behaviour in school to justify a diagnosis that he was then autistic. His behaviour as an adult, however, had all the characteristics that almost every autistic person has to some degree—reticence, passivity, aloofness, literal-mindedness, rigid patterns of activity, physical ineptitude, self-centredness and, above all, a narrow range of interests and a marked inability to empathise with other human beings.
Whatever the cause of Dirac’s unusual behavior, he was a great physicist. Farmelo sums up Dirac’s enduring legacy at the end of his book.
There is no doubt that Dirac was a great scientist, one of the few who deserves a place just below Einstein in the pantheon of modern physicists. Along with Heisenberg, Jordan, Pauli, Schrödinger and Born, Dirac was one of the group of theoreticians who discovered quantum mechanics. Yet his contribution was special. In his heyday, between 1925 and 1933, he brought a uniquely clear vision to the development of a new branch of science: the book of nature often seemed to be open in front of him.

Friday, March 5, 2010

Magnetic Measurements of Peripheral Nerve Function Using a Neuromagnetic Current Probe

Section 8.9 in the 4th Edition of Intermediate Physics for Medicine and Biology describes how a toroidal probe can be used to measure the magnetic field of a nerve.
If the signal [a weak biomagnetic field] is strong enough, it can be detected with conventional coils and signal-averaging techniques that are described in Chapter 11. Barach et al. (1985) used a small detector through which a single axon was threaded. The detector consisted of a toroidal magnetic core wound with many turns of fine wire (Fig. 8.26). Current passing through the hole in the toroid generated a magnetic field that was concentrated in the ferromagnetic material of the toroid. When the field changed, a measurable voltage was induced in the surrounding coil.
My friend Ranjith Wijesinghe, of the Department of Physics at Ball State University, recently published the definitive review of this research in the journal Experimental Biology and Medicine, titled “Magnetic Measurements of Peripheral Nerve Function Using a Neuromagnetic Current Probe” (Volume 235, Pages 159–169).
The progress made during the last three decades in mathematical modeling and technology development for the recording of magnetic fields associated with cellular current flow in biological tissues has provided a means of examining action currents more accurately than that of using traditional electrical recordings. It is well known to the biomedical research community that the room-temperature miniature toroidal pickup coil called the neuromagnetic current probe can be employed to measure biologically generated magnetic fields in nerve and muscle fibers. In contrast to the magnetic resonance imaging technique, which relies on the interaction between an externally applied magnetic field and the magnetic properties of individual atomic nuclei, this device, along with its room-temperature, low-noise amplifier, can detect currents in the nano-Ampere range. The recorded magnetic signals using neuromagnetic current probes are relatively insensitive to muscle movement since these probes are not directly connected to the tissue, and distortions of the recorded data due to changes in the electrochemical interface between the probes and the tissue are minimal. Contrary to the methods used in electric recordings, these probes can be employed to measure action currents of tissues while they are lying in their own natural settings or in saline baths, thereby reducing the risk associated with elevating and drying the tissue in the air during experiments. This review primarily describes the investigations performed on peripheral nerves using the neuromagnetic current probe. Since there are relatively few publications on these topics, a comprehensive review of the field is given. First, magnetic field measurements of isolated nerve axons and muscle fibers are described. One of the important applications of the neuromagnetic current probe to the intraoperative assessment of damaged and reconstructed nerve bundles is summarized. The magnetic signals of crushed nerve axons and the determination of the conduction velocity distribution of nerve bundles are also reviewed. Finally, the capabilities and limitations of the probe and the magnetic recordings are discussed.
Ranjith and I were both involved in this research when we were graduate students working in John Wikswo’s laboratory at Vanderbilt University. I remember the painstaking process of making those toroids; just winding the wire onto the ferrite core was a challenge. Wikswo built this marvelous contraption that held the core at one spot under a dissection microscope but at the same time allowed the core to be rotated around multiple axes (he’s very good at that sort of thing). When Russ Hobbie and I wrote about “many turns of fine wire” we were not exaggerating. The insulated copper wire was often as thin as 40-gauge (80 microns diameter), which is only slightly thicker than a human hair. With wire that thin, the slightest kink causes a break. We sometimes wound up to 100 turns on one toroid. It was best to wind the toroid when no one else was around (I preferred early morning): if someone startled you when you were winding, the result was usually a broken wire, requiring you to start over. We potted the toroid and its winding in epoxy, which itself was a job requiring several steps. We machined a mold out of Teflon, carefully positioned the toroid in the mold, soldered the ends of those tiny wires to a coaxial cable, and then injected epoxy into the mold under vacuum to avoid bubbles. If all went as planned, you ended up with a “neuromagnetic current probe” to use in your experiments. Often, all didn’t go as planned.

Ranjith’s review describes the work of several of my colleagues from graduate school days. Frans Gielen (who now works at the Medtronic Bakken Research Centre in Maastricht, the Netherlands) was a postdoc who used toroids to record action currents in skeletal muscle. Ranjith studied compound action currents in the frog sciatic nerve for his PhD dissertation. My research was mostly on crayfish axons. Jan van Egeraat was the first to measure action currents in a single muscle fiber, did experiments on squid giant axons, and studied how the magnetic signal changed near a region of the nerve that was damaged. Jan obtained his PhD from Vanderbilt a few years after I did, and then tragically died of cancer just as his career was taking off. I recall that when my wife Shirley and I moved from Tennessee to Maryland to start my job at the National Institutes of Health, Jan and his wife drove the rented truck with all our belongings while we drove our car with our 1-month-old baby. They got a free trip to Washington DC out of the deal, and we got a free truck driver. John Barach was a Vanderbilt professor who originally studied plasma physics, but changed to biological physics later in his career when collaborating with Wikswo. I have always admired Barach’s ability to describe complex phenomena in a very physical and intuitive way (see, for instance, Problem 13 in Chapter 8 of our textbook). Of course, we were all led by Wikswo, whose energy and drive are legendary, and whose grant-writing skills kept us all in business. For his work developing the Neuromagnetic Current Probe, Wikswo earned an IR-100 Award in 1984, presented by R&D Magazine to recognize the 100 most technologically significant new products and processes of the year.

Friday, February 26, 2010

All the News That’s Fit to Print

Newspaper articles may not provide the most authoritative information about science and medicine, but they are probably the primary source of news about medical physics for the layman. Today, I will discuss some recent articles from one of the leading newspapers in the United States: the venerable New York Times.


Last week Russ Hobbie sent me a copy of an article in the February 16 issue of the NYT, titled “New Source of an Isotope in Medicine is Found.” It describes the continuing shortage of technetium-99m, a topic I have discussed before in this blog.
Just as the worldwide shortage of a radioactive isotope used in millions of medical procedures is about to get worse, officials say a new source for the substance has emerged: a nuclear reactor in Poland.

The isotope, technetium 99, is used to measure blood flow in the heart and to help diagnose bone and breast cancers. Almost two-thirds of the world’s supply comes from two reactors; one, in Ontario, has been shut for repairs for nine months and is not expected to reopen before April, and the other, in the Netherlands, will close for six months starting Friday.

Radiologists say that as a result of the shortage, their treatment of some patients has had to revert to inferior materials and techniques they stopped using 20 years ago.

But on Wednesday, Covidien, a company in St. Louis that purifies the material created in the reactor and packages it in a form usable by radiologists, will announce that it has signed a contract with the operators of the Maria reactor, near Warsaw, one of the world’s most powerful research reactors.
I doubt that relying on a Polish reactor is a satisfactory long-term solution to our 99mTc shortage, but it may provide crucial help with the short-term crisis. A more promising permanent solution is described in a January 26 article on medicalphysicsweb.
GE Hitachi Nuclear Energy (GEH) announced today it has been selected by the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to help develop a U.S. supply of a radioisotope used in more than 20 million diagnostic medical procedures in the United States each year.
More information can be found at the American Association of Physicists in Medicine website. Let’s hope that this new initiative will prove successful.
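To see why the logistics are so unforgiving, it helps to run the numbers. Tc-99m is milked from its reactor-produced parent molybdenum-99; the half-lives below (roughly 66 hours for Mo-99 and 6 hours for Tc-99m) are standard values, and the little calculation is my own illustration, not taken from any of the articles.

```python
import math

def fraction_remaining(half_life_h, elapsed_h):
    """Fraction of a radioactive sample remaining after elapsed_h hours."""
    return math.exp(-math.log(2.0) * elapsed_h / half_life_h)

# The parent Mo-99 (~66 h half-life) survives a day in transit fairly well...
print(fraction_remaining(66.0, 24.0))   # ~0.78

# ...but Tc-99m itself (~6 h half-life) cannot be stockpiled or shipped far.
print(fraction_remaining(6.0, 24.0))    # ~0.06
```

A day of shipping costs about a quarter of the Mo-99 but nearly all of the Tc-99m, which is why hospitals buy generators of the parent isotope and why a handful of distant reactors can bottleneck the whole supply chain.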


The second topic I want to discuss today was called to my attention by my former student Phil Prior (PhD in Biomedical Sciences: Medical Physics, Oakland University, 2008). On January 26, the NYT published Walt Bogdanich’s article “As Technology Surges, Radiation Safeguards Lag.”
In New Jersey, 36 cancer patients at a veterans hospital in East Orange were overradiated—and 20 more received substandard treatment—by a medical team that lacked experience in using a machine that generated high-powered beams of radiation… In Louisiana, Landreaux A. Donaldson received 38 straight overdoses of radiation, each nearly twice the prescribed amount, while undergoing treatment for prostate cancer… In Texas, George Garst now wears two external bags—one for urine and one for fecal matter—because of severe radiation injuries he suffered after a medical physicist who said he was overworked failed to detect a mistake.

These mistakes and the failure of hospitals to quickly identify them offer a rare look into the vulnerability of patient safeguards at a time when increasingly complex, computer-controlled devices are fundamentally changing medical radiation, delivering higher doses in less time with greater precision than ever before.

Serious radiation injuries are still infrequent, and the new equipment is undeniably successful in diagnosing and fighting disease. But the technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.
A related article by the same author, “Radiation Offers New Cures, and Ways to Do Harm,” was also published in the Gray Lady a few days earlier. These articles discuss recent medical mistakes in which patients have received much more radiation than originally intended. While somewhat sensational, the articles reinforce the importance of quality control in medical physics.

The NYT articles triggered a response from the American Association of Physicists in Medicine on January 28.
The American Association of Physicists in Medicine (AAPM) has issued a statement today in the wake of several recent articles in the New York Times yesterday and earlier in the week that discuss a number of rare but tragic events in the last decade involving people undergoing radiation therapy.

While it does not specifically comment on the details of these events, the statement acknowledges their gravity. It reads in part: “The AAPM and its members deeply regret that these events have occurred, and we continue to work hard to reduce the likelihood of similar events in the future.” The full statement appears here.

Today's statement also seeks to reassure the public on the safety of radiation therapy, which is safely and effectively used to treat hundreds of thousands of people with cancer and other diseases every year in the United States. Medical physicists in hospitals and clinics across the United States are board-certified professionals who play a key role in assuring quality during these treatments because they are directly responsible for overseeing the complex technical equipment used.

Taken together, the articles I’ve discussed today highlight some of the challenges that face the field of medical physics. For those who want additional background about the underlying physics and its applications to medicine, I recommend—you guessed it—the 4th edition of Intermediate Physics for Medicine and Biology.

Friday, February 19, 2010

The Electron Microscope

Intermediate Physics for Medicine and Biology does not discuss one of the most important instruments in modern biology: the electron microscope. If I were to add a very brief introduction about the electron microscope to Intermediate Physics for Medicine and Biology, I would put it right after Sec. 14.1, The Nature of Light: Waves Versus Photons. It would look something like this.
14.1 ½ De Broglie Wavelength and the Electron Microscope

Like light, matter can have both wave and particle properties. The French physicist Louis de Broglie derived a quantum mechanical relationship between a particle’s momentum p and wavelength λ

λ = h/p     (14.6 ½)

[Eisberg and Resnick (1985)]. For example, a 100 eV electron has a speed of 5.9 × 10⁶ m s⁻¹ (about 2% the speed of light), a momentum of 5.4 × 10⁻²⁴ kg m s⁻¹, and a wavelength of 0.12 nm.

The electron microscope takes advantage of the short wavelength of electrons to produce exquisite pictures of very small objects. Diffraction limits the spatial resolution of an image to about a wavelength. For a visible light microscope, this resolution is on the order of 500 nm (Table 14.2). For the electron microscope, however, the wavelength of the electron limits the resolution. A typical electron energy used for imaging is about 100 keV, implying a wavelength much smaller than an atom (although in practice aberrations in the electron optics typically limit the resolution to about 1 nm). Table 1.2 shows that viruses appear as blurry smears in a light microscope, but can be resolved with considerable detail in an electron microscope. In 1986, Ernst Ruska shared the Nobel Prize in Physics “for his fundamental work in electron optics, and for the design of the first electron microscope.”
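Here is a quick numerical check of those numbers (a sketch of my own; the constants are rounded, and I compute the momentum relativistically so the same function covers both 100 eV and 100 keV):

```python
import math

h  = 6.626e-34   # Planck constant (J s)
m  = 9.109e-31   # electron rest mass (kg)
c  = 2.998e8     # speed of light (m/s)
eV = 1.602e-19   # joules per electron volt

def electron_wavelength(kinetic_energy_eV):
    """De Broglie wavelength (m) of an electron, using the relativistic
    relation (pc)^2 = E^2 + 2*E*mc^2, where E is the kinetic energy."""
    E = kinetic_energy_eV * eV
    pc = math.sqrt(E**2 + 2.0 * E * m * c**2)
    return h * c / pc

print(electron_wavelength(100.0))   # ~1.2e-10 m, the 0.12 nm quoted above
print(electron_wavelength(100e3))   # ~3.7e-12 m, much smaller than an atom
```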

Electron microscopes come in two types. In a transmission electron microscope (TEM), electrons pass through a thin sample. In a scanning electron microscope (SEM), a fine beam of electrons is raster scanned across the sample and secondary electrons emitted by the surface are imaged. In both cases, the image is formed in vacuum and the electron beam is focused using a magnetic lens.
To learn more, you can watch a YouTube video about the electron microscope. Nice collections of electron microscope images can be found at http://www.denniskunkel.com, http://www5.pbrc.hawaii.edu/microangela and http://www.mos.org/sln/SEM.

Structure and function of the electron microscope. 

Friday, February 12, 2010

Biomagnetism and Medicalphysicsweb

Medicalphysicsweb is an excellent website for news and articles related to medical physics. Several articles that have appeared recently are related to the field of biomagnetism, a topic Russ Hobbie and I cover in Chapter 8 of the 4th edition of Intermediate Physics for Medicine and Biology. I have followed the biomagnetism field ever since graduate school, when I made some of the first measurements of the magnetic field of an isolated nerve axon. Below I describe four recent articles from medicalphysicsweb.

A February 2 article titled “Magnetometer Eases Cardiac Diagnostics” discusses a novel type of magnetometer for measuring the magnetic field of the heart. In Section 8.9 of our book, Russ and I discuss Superconducting Quantum Interference Device (SQUID) magnetometers, which have long been used to measure the small (100 pT) magnetic fields produced by cardiac activity. Another way to measure weak magnetic fields is to determine the Zeeman splitting of energy levels of a rubidium gas. The energy difference between levels depends on the external magnetic field, and is measured by detecting the frequency of optical light that is in resonance with this energy difference. Ben Varcoe, of the University of Leeds, has applied this technology to the heart by separating the magnetometer from the pickup coil:
The magnetic field detector—a rubidium vapour gas cell—is housed within several layers of magnetic shielding that reduce the Earth’s field about a billion-fold. The sensor head, meanwhile, is external to this shielding and contained within a handheld probe.
I haven’t been able to find many details about this device (such as whether the pickup coils are superconducting, and why the pickup coil doesn’t transport the noise from the unshielded measurement area to the shielded detector), but Varcoe believes the device is a breakthrough in the way researchers can measure biomagnetic fields.
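For a rough sense of the numbers (my own back-of-the-envelope estimate, not from the article): the Zeeman splitting is linear in the field, and for the ⁸⁷Rb ground state (g_F = 1/2) the resonance frequency works out to about 7 Hz per nanotesla,

```latex
\Delta E = g_F\,\mu_B\,B, \qquad
\nu = \frac{\Delta E}{h} = \frac{g_F\,\mu_B}{h}\,B
\approx 7\ \mathrm{Hz\ nT^{-1}} \times B ,
```

so the heart’s 100 pT field shifts the resonance by less than 1 Hz, while the Earth’s roughly 50 μT field corresponds to hundreds of kilohertz. That disparity is why the billion-fold shielding matters.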

Another recent (February 10) article on medicalphysicsweb is about the effect of magnetic resonance imaging scans on pregnant women. As described in Chapter 18 of Intermediate Physics for Medicine and Biology, MRI uses a radio-frequency magnetic field to flip the proton spins so their decay can be measured, resulting in the magnetic resonance signal. This radio-frequency field induces eddy currents in the body that heat the tissue. Heating is a particular concern if the MRI is performed on a pregnant woman, as it could affect fetal development.
Medical physicists at Hammersmith Hospital, Imperial College London, UK, have now developed a more sophisticated model of thermal transport between mother and baby to assess how MRI can affect the foetal temperature (Phys. Med. Biol. 55 913)… This latest analysis takes account of heat transport through the blood vessels in the umbilical cord, an important mechanism that was ignored in previous models. It also includes the fact that the foetus is typically half a degree warmer than the mother – another key piece of information overlooked in earlier work.
Russ and I discuss these issues in Sec. 14.10: Heating Tissue with Light, where we derive the bioheat equation. The authors of the study, Jeff Hand and his colleagues, found that under normal conditions fetal heating wasn’t a concern, but that after 7.5 minutes of continuous RF exposure (unlikely during MRI) the temperature increase could be significant.
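For readers who want to see the equation being referred to, a bioheat equation of the standard Pennes form balances conduction, perfusion, and sources (my paraphrase; the notation in Sec. 14.10 and in the paper may differ):

```latex
\rho c \frac{\partial T}{\partial t}
= \nabla \cdot \left( \kappa\,\nabla T \right)
+ \rho_b c_b w \left( T_a - T \right)
+ Q_{\mathrm{met}} + Q_{\mathrm{rf}} ,
```

where the middle term on the right carries heat away at a rate proportional to the blood perfusion w. Heat exchange through the umbilical cord, the mechanism the Hammersmith model adds, enters through exactly this kind of perfusion term.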

A January 27 article describes how researchers from the University of Minnesota (Russ’s institution) used magnetoencephalography to diagnose post-traumatic stress disorder.
Post-traumatic stress disorder (PTSD) is a difficult condition to diagnose definitively from clinical evidence alone. In the absence of a reliable biomarker, patients’ descriptions of flashbacks, worry and emotional numbness are all doctors have to work with. Researchers from the University of Minnesota Medical School (Minneapolis, MN) have now shown how magnetoencephalography (MEG) could identify genuine PTSD sufferers with high confidence and without the need for patients to relive painful past memories (J. Neural Eng. 7 016011).
The magnetoencephalogram is discussed in Sec. 8.5 of Intermediate Physics for Medicine and Biology. The data for the Minnesota study was obtained using a 248-channel SQUID magnetometer. The researchers analyzed data from 74 patients with post-traumatic stress disorder known to the VA hospital in Minneapolis, and 250 healthy controls. The authors claim that the accuracy of the test is over 90%.

Finally, a February 8 article describes a magnetic navigation system installed in Taiwan by the company Stereotaxis.
The Stereotaxis System is designed to enable physicians to complete more complex interventional procedures by providing image guided delivery of catheters and guidewires through the blood vessels and chambers of the heart to treatment sites. This is achieved using computer-controlled, externally applied magnetic fields that govern the motion of the working tip of the catheter or guidewire, resulting in improved navigation, shorter procedure time and reduced x-ray exposure.
The system works by having ferromagnetic material in a catheter tip, and an applied magnetic field that can be adjusted to “steer” the catheter through the blood vessels. We discuss magnetic forces in Sec. 8.1 of Intermediate Physics for Medicine and Biology, and ferromagnetic materials in Sec. 8.8.
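The underlying physics is the torque and force on a magnetic dipole m in an applied field B (the same results Secs. 8.1 and 8.8 rely on; applying them to catheter tips is my gloss, not a quote from Stereotaxis):

```latex
\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}, \qquad
\mathbf{F} = \nabla \left( \mathbf{m} \cdot \mathbf{B} \right) ,
```

so a uniform field can only orient the ferromagnetic tip, while a field gradient is needed to pull it. Steering amounts to computer control of both the direction and the gradient of the external field.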

Although I believe medicalphysicsweb is extremely useful for keeping up-to-date on developments in medical physics, I find that often their articles either have specialized physics concepts that the layman may not understand or, more often, don’t address the underlying physics at all. Yet, one can’t understand modern medicine without mastery of the basic physics concepts. My browsing through medicalphysicsweb convinced me once again about the importance of learning how physics can be applied to medicine and biology. Perhaps I am biased, but I think that studying from the 4th edition of Intermediate Physics for Medicine and Biology is a great way to master these important topics.

Friday, February 5, 2010

Beta Decay and the Neutrino

In Section 17.4 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss beta decay and the neutrino.
The emission of a beta-particle is accompanied by the emission of a neutrino… [which] has no charge and no rest mass… [and] hardly interacts with matter at all… A particle that seemed originally to be an invention to conserve energy and angular momentum now has a strong experimental basis.
Understanding Physics:
The Electron, Proton, and Neutron,
by Isaac Asimov.
Our wording implies there is a story behind this particle “that seemed originally to be an invention to conserve energy.” Indeed, that is the case. I will let Isaac Asimov tell this tale. (Asimov's books, which I read in high school, influenced me to become a scientist.) The excerpt below is from Chapter 14 of his book Understanding Physics: The Electron, Proton, and Neutron.
In Chapter 11, disappearance in mass during the course of nuclear reactions was described as balanced by an appearance of energy in accordance with Einstein’s equation, e = mc². This balance also held in the case of the total annihilation of a particle by its anti-particle, or the production of a particle/anti-particle pair from energy.
Nevertheless, although in almost all such cases the mass-energy equivalence was met exactly, there was one notable exception in connection with radioactive radiations.

Alpha radiation behaves in satisfactory fashion. When a parent nucleus breaks down spontaneously to yield a daughter nucleus and an alpha particle, the sum of the mass of the two products does not quite equal the mass of the original nucleus. The difference appears in the form of energy—specifically, as the kinetic energy of the speeding alpha particle. Since the same particles appear as products at every breakdown of a particular parent nucleus, the mass-difference should always be the same, and the kinetic energy of the alpha particles should also always be the same. In other words, the beam of alpha particles should be monoenergetic. This was, in essence, found to be the case…

It was to be expected that the same considerations would hold for a parent nucleus breaking down to a daughter nucleus and a beta particle. It would seem reasonable to suppose that the beta particles would form a monoenergetic beam too…

Instead, as early as 1900, Becquerel indicated that beta particles emerged with a wide spread of kinetic energies. By 1914, the work of James Chadwick demonstrated the “continuous beta particle spectrum” to be undeniable.

The kinetic energy calculated for a beta particle on the basis of mass loss turned out to be a maximum kinetic energy that very few obtained. (None surpassed it, however; physicists were not faced with the awesome possibility of energy appearing out of nowhere.)

Most beta particles fell short of the expected kinetic energy by almost any amount up to the maximum. Some possessed virtually no kinetic energy at all. All told, a considerable portion of the energy that should have been present, wasn’t present, and through the 1920’s this missing energy could not be detected in any form.

Disappearing energy is as insupportable, really, as appearing energy, and though a number of physicists, including, notably, Niels Bohr, were ready to abandon the law of conservation of energy at the subatomic level, other physicists sought desperately for an alternative.

In 1931, an alternative was suggested by Wolfgang Pauli. He proposed that whenever a beta particle was produced, a second particle was also produced, and that the energy that was lacking in the beta particle was present in the second particle.

The situation demanded certain properties of the hypothetical particle. In the emission of beta particles, electric charge was conserved; that is, the net charge of the particles produced after emission was the same as that of the original particle. Pauli’s postulated particle therefore had to be uncharged. This made additional sense since, had the particle possessed a charge, it would have produced ions as it sped along and would therefore have been detectable in a cloud chamber, for instance. As a matter of fact, it was not detectable.

In addition, the total energy of Pauli’s projected particle was very small—only equal to the missing kinetic energy of the electron. The total energy of the particle had to include its mass, and the possession of so little energy must signify an exceedingly small mass. It quickly became apparent that the new particle had to have a mass of less than 1 percent of the electron and, in all likelihood, was altogether massless.

Enrico Fermi, who interested himself in Pauli’s theory at once, thought of calling the new particle a “neutron,” but Chadwick, at just about that time, discovered the massive, uncharged particle that came to be known by that name. Fermi therefore employed an Italian diminutive suffix and named the projected particle the neutrino (“little neutral one”), and it is by that name that it is known.
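In modern terms, the bookkeeping Pauli rescued is easy to write down for the decay of a free neutron (standard values, added here for context):

```latex
n \;\rightarrow\; p + e^- + \bar{\nu}_e , \qquad
Q = \left( m_n - m_p - m_e \right) c^2 \approx 0.78\ \mathrm{MeV} ,
```

and because three bodies share the energy Q, the electron’s spectrum is continuous, with the endpoint at Q corresponding to the rare case in which the neutrino carries off almost nothing.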

Friday, January 29, 2010

William Albert Hugh Rushton

This semester, I am teaching a graduate class at Oakland University on Bioelectric Phenomena (PHY 530). Rather than using a textbook, I require the students to read original papers, thereby providing insights into the history of the subject and many opportunities to learn about the structure and content of original research articles.

We began with a paper by Alan Hodgkin and Bernard Katz (“The Effect of Sodium Ions on the Electrical Activity of the Giant Axon of the Squid,” Journal of Physiology, Volume 108, Pages 37–77, 1949) that tests the hypothesis that the nerve membrane becomes selectively permeable to sodium during an action potential. We then moved on to Alan Hodgkin and Andrew Huxley’s monumental 1952 paper in which they present the Hodgkin-Huxley model of the squid nerve axon (“A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve,” Journal of Physiology, Volume 117, Pages 500–544, 1952). In order to provide a more modern view of the ion channels that underlie Hodgkin and Huxley’s model, we next read an article by Roderick MacKinnon and his group (“The Structure of the Potassium Channel: Molecular Basis of K+ Conduction and Selectivity,” Science, Volume 280, Pages 69–77, 1998). Then we read a paper by Erwin Neher, Bert Sakmann and their colleagues that described patch clamp recordings of single ion channels (“Improved Patch-Clamp Techniques for High-Resolution Current Recordings from Cells and Cell-Free Membrane Patches,” Pflügers Archiv, Volume 391, Pages 85–100, 1981).

This week I wanted to cover one-dimensional cable theory, so I chose one of my favorite papers, by Alan Hodgkin and William Rushton (“The Electrical Constants of a Crustacean Nerve Fibre,” Proceedings of the Royal Society of London, B, Volume 133, Pages 444–479, 1946). I recall reading this lovely article during my first summer as a graduate student at Vanderbilt University (where my daughter Kathy is now attending college). My mentor, John Wikswo, had notebook after notebook full of research papers about nerve electrophysiology, and I set out to read them all. Learning a subject by reading the original literature is an interesting experience. It is less efficient than learning from a textbook, but you pick up many insights that are lost when the research is presented in a condensed form. Hodgkin and Rushton’s paper contains the fascinating quote
Electrical measurements were made by applying rectangular pulses of current and recording the potential response photographically. About fifteen sets of film were obtained in May and June 1939, and a preliminary analysis was started during the following months. The work was then abandoned and the records and notes stored for six years [my italics]. A final analysis was made in 1945 and forms the basis of this paper.
During those six years, the authors were preoccupied with a little issue called World War II.

Sometimes I like to provide my students with biographical information about the authors of these papers, and I had already talked about my hero, the Nobel Prize-winning Alan Hodgkin, earlier in the semester. So I did some research on Rushton, with whom I was less familiar. It turns out he is known primarily for his work on vision. William Albert Hugh Rushton (1901–1980) has only a short Wikipedia entry, which does not even discuss his work on nerves. (Footnote: Several months ago, after reading—or rather listening to while walking my dog Suki—The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia by Andrew Lih, I became intensely interested in Wikipedia and started updating articles related to my areas of expertise. This obsession lasted for only about a week or two. I rarely make edits anymore, but I may update Rushton’s entry.) Rushton was a professor of physiology at Trinity College, Cambridge University. He became a Fellow of the Royal Society in 1948, and received the Royal Medal from that society in 1970.

Horace Barlow wrote an obituary for Rushton in the Biographical Memoirs of Fellows of the Royal Society (Volume 32, Pages 423–459, 1986). It begins
William Rushton first achieved scientific recognition for his work on the excitability of peripheral nerve where he filled the gap in the Cambridge succession between Lord Adrian, whose last paper on peripheral nerve appeared in 1922, and Alan Hodgkin, whose first paper was published in 1937. It was on the strength of this work that he was elected as a fellow of the Royal Society in 1948, but then Rushton started his second scientific career, in vision, and for the next 30 years he was dominant in a field that was advancing exceptionally fast. In whatever he was engaged he cut a striking and influential figure, for he was always interested in a new idea and had the knack of finding the critical argument or experiment to test it. He was argumentative, and often an enormously successful showman, but he also exerted much influence from the style of his private discussions and arguments. He valued the human intellect and its skillful use above everything else, and he successfully transmitted this enthusiasm to a large number of students and disciples.
Another of my favorite papers by Rushton is “A Theory of the Effects of Fibre Size in Medullated Nerve” (Journal of Physiology, Volume 115, Pages 101–122, 1951). Here, he correctly predicts many of the properties of myelinated nerve axons, such as the ratio of the inner and outer diameters of the myelin, from first principles.

Both of the Rushton papers I have cited here are also referenced in the 4th edition of Intermediate Physics for Medicine and Biology. Problem 34 in Chapter 6 is based on the Hodgkin-Rushton paper. It examines their analytical solution to the one-dimensional cable equation, which involves error functions. Was it Hodgkin or Rushton who was responsible for this elegant piece of mathematics gracing the Proceedings of the Royal Society? I can’t say for sure, but in Hodgkin’s Nobel Prize autobiography he claims he learned about cable theory from Rushton (who was 13 years older than him).
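For the curious, here is the equation in question and the shape of its solution, in common textbook notation rather than Hodgkin and Rushton’s own symbols (so treat the details as a sketch):

```latex
\lambda^2 \frac{\partial^2 V}{\partial x^2}
- \tau \frac{\partial V}{\partial t} - V = 0 ,
\qquad
\lambda = \sqrt{r_m / r_i}, \quad \tau = r_m c_m .
```

For a step of current injected at x = 0 and turned on at t = 0, the response contains terms of the form

```latex
e^{\mp x/\lambda}\,
\mathrm{erfc}\!\left( \frac{x}{2\lambda\sqrt{t/\tau}} \mp \sqrt{t/\tau} \right) ,
```

which settle, as t grows large, into the familiar steady-state decay V ∝ e^(−|x|/λ).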

William Rushton provides yet another example of how a scientist with a firm grasp of basic physics can make fundamental contributions to biology.

Friday, January 22, 2010

Summer Internships

Many readers of the 4th edition of Intermediate Physics for Medicine and Biology are undergraduate majors in science or engineering. This time of the year, these students are searching for summer internships. I have a few suggestions.

My first choice is the NIH Summer Internship Program in Biomedical Research. The intramural campus of the National Institutes of Health in Bethesda, Maryland is the best place in the world to do biomedical research. My years working there in the 1990s were wonderful. Apply now. The deadline is March 1.

The National Science Foundation supports Research Experiences for Undergraduates (REU) programs throughout the US. Click here for a list (it is long, but probably incomplete). Often NSF requires schools to not just select from their own undergraduates, but also to open some positions in their REU program to students from throughout the country. You might also try to Google “REU” and see what you come up with. Each program has different deadlines and eligibility requirements. For several years Oakland University, where I work, had an REU program run by the physics department. We have applied for funding again, but have not heard yet if we were successful. If lucky, we will run the program this summer, with a somewhat later deadline than most.

Last year, as part of the federal government’s stimulus package, the National Institutes of Health encouraged researchers supported by NIH grants to apply for a supplement to fund undergraduate students in the summer. Most of these supplements were for two years, and this will be the second summer. Therefore, I expect there will be extra opportunities for undergraduate students to do biomedical research in the coming months. Strike while the iron’s hot! The stimulus program is scheduled to end next year.

Finally, one of the best ways for undergraduate students to find opportunities to do research in the summer is to ask around among your professors. Get a copy of your department’s annual report and find out which professors have funding. Attend department seminars and colloquia to find out who is doing research that interests you. Or just show up at a faculty member’s door and ask (first read what you can about his or her research, and have your resume in hand). If you can manage it financially, consider working without pay for the first summer, just to get your foot in the door.

When I look back on my undergraduate education at the University of Kansas, one of the most valuable experiences was doing research in Professor Wes Unruh’s lab. I learned more from Unruh and his graduate students than I did in my classes. But such opportunities don’t just fall into your lap. You need to look for them. Ask around, knock on some doors, and keep your eyes open. And start now, because many of the formal internship programs have deadlines coming up soon.

If, dear reader, you are fortunate enough to get an internship this summer, but it’s far from home, then don’t forget to pack your copy of Intermediate Physics for Medicine and Biology when you go. After working all day in the lab, you can relax with it in the evening!

Good luck.

Friday, January 15, 2010

TeX

The TeXbook,
by Donald Knuth.
Russ Hobbie and I wrote the 4th edition of Intermediate Physics for Medicine and Biology using TeX, the typesetting program developed by Donald Knuth. Well, not really. We actually used LaTeX, a document markup language based on TeX. To be honest, “we” didn’t even use LaTeX: Russ did all the typesetting with LaTeX while I merely read pdf files and sent back comments and suggestions.

TeX is particularly well suited for writing equations, of which there are many in Intermediate Physics for Medicine and Biology. I used TeX in graduate school, while working in John Wikswo’s laboratory at Vanderbilt University. This was back in the days before LaTeX was invented, and writing equations in TeX was a bit like programming in machine language. I remember sitting at my desk with Knuth’s TeXbook (blue, spiral bound, and delightful), worrying about arcane details of typesetting some complicated expression. At that time, TeX was new and unique. When I first arrived at Vanderbilt in 1982, Wikswo’s version of TeX did not even have a WYSIWYG editor, and our lab did not have a laser printer, so I would make a few changes in the TeX document and then run down the hall to the computer center to inspect my printout. As you can imagine, after several iterations of this process the novelty of TeX wore off. But, oh, did our papers look good when we shipped them out to the journal (and, yes, we did mail paper copies; no electronic submission back then). Often, I thought our version looked better than what was published. By the way, Donald Knuth is a fascinating man. Check out his website at http://www-cs-faculty.stanford.edu/~knuth. He pays $2.56 to readers who find an error in his books (according to Knuth, 256 pennies is one hexadecimal dollar). Russ Hobbie used to pay a quarter for errors, and all I give is a few lousy extra credit points to my students.
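For readers who have never seen the markup, here is a toy LaTeX fragment of my own (not a line from the book’s actual source) showing how little typing a typeset equation requires:

```latex
\documentclass{article}
\begin{document}
A classic result that TeX renders beautifully:
\begin{equation}
  \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi} .
\end{equation}
\end{document}
```

Run it through latex (or pdflatex) and the integral sign, limits, spacing, and equation number all come out journal-ready, which is exactly why our papers looked so good.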

I must confess, nowadays I use the equation editor in Microsoft Word for writing equations. Word’s output doesn’t look as nice as TeX’s, but I find it easier to use. The solution manual for the 4th edition of Intermediate Physics for Medicine and Biology is written entirely using Word (email Russ or me for a copy), and so is the errata. But I did reacquaint myself with TeX when writing my Scholarpedia article about the bidomain model. Both Wikipedia and Scholarpedia use some sort of TeX hybrid for equations.

Listen to Donald Knuth describe his work.
https://www.youtube.com/embed/nyCW279KCM4

Friday, January 8, 2010

In The Beat of a Heart

In the Beat of a Heart:
Life, Energy, and the Unity of Nature,
by John Whitfield.
Over Christmas break, I read In the Beat of a Heart: Life, Energy, and the Unity of Nature, by John Whitfield. I had mixed feelings about the book. I didn’t have much interest in the parts dealing with biodiversity in tropical forests and skimmed through them rather quickly. But other parts I found fascinating. One of the main topics explored in the book is Kleiber’s law (metabolic rate scales as the 3/4 power of body mass), which Russ Hobbie and I discuss in Chapter 2 of the 4th edition of Intermediate Physics for Medicine and Biology. But the book has a broader goal: to compare and contrast the approaches of physicists and biologists to understanding life. The main idea can be summarized by the subtitle of the textbook I studied biology from as an undergraduate at the University of Kansas: The Unity and Diversity of Life. Intermediate Physics for Medicine and Biology lies on the “unity” side of this great divide, but the interplay of these two views of life makes for a remarkable story.
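The two competing scaling laws at the heart of the book are easy to state (standard forms, consistent with Chapter 2 of our book). Rubner reasoned that geometrically similar animals lose heat through their surface, so with S ∝ L² and M ∝ L³,

```latex
B_{\text{Rubner}} \propto S \propto M^{2/3},
\qquad\text{whereas empirically}\qquad
B_{\text{Kleiber}} \propto M^{3/4} .
```

The difference sounds small, but over the four decades of mass separating a mouse from a human the two exponents disagree by a factor of 10^(4/12) ≈ 2, large enough for careful calorimetry to tell them apart.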

The book begins with a Prologue about D’Arcy Thompson (Whitfield calls him “the last Victorian scientist”), author of the influential, if out-of-the-mainstream, book On Growth and Form.
This is the story—with some detours—of D’Arcy Thompson’s strand of biology and of a century-long attempt to build a unified theory, based on the laws of physics and mathematics, of how living things work. At the story’s heart is the study of something that Thompson called “a great theme”—the role of energy in life... The way that energy affects life depends on the size of living things. Size is the most important single notion in our attempt to understand energy’s role in nature. Here, again, we shall be following Thompson’s example. After its introduction, On Growth and Form ushers the reader into a physical view of living things with a chapter titled “On Magnitude,” which looks at the effects of body size on biology, a field called biological scaling.
In the Beat of a Heart examines Max Rubner’s idea that metabolism scales with surface area (2/3 power), and Max Kleiber’s modification of this rule to a 3/4 power. It then describes the attempt of physicist Geoffrey West and ecologist Brian Enquist to explain this rule by modeling the fractal networks that provide the raw materials needed to maintain metabolism. While I was familiar with much of this story before reading Whitfield’s book, I nevertheless found the historical context and biographical background engrossing. Then came the lengthy section on forest ecology (Zzzzzzzzz). I soldiered on and was rewarded by a penetrating final chapter comparing the physicist’s and biologist’s points of view.
Finding a unity of nature would not make studying the details of nature obsolete. Indeed, finding unity depends on understanding the details. The variability of life means that in biology the ability to generalize is not enough. If you’ve measured one electron, you’ve measured them all, but, as I saw in Costa Rica, to understand a forest you must be able to see the trees, and that takes a botanist. Thinkers such as Humboldt, Darwin, and Wallace gained their understanding of how nature works from years of intimate experience of nature in the flesh and the leaf. And yet they were not just interested in what their senses told them: they also tried to abstract and unify. The combination of attributes—intrepid and reflective, naturalist and mathematician—strikes me as rather rare, and becoming more so. These days scientific lone wolves such as D’Arcy Thompson are almost extinct, and it would take a truly awesome polymath to acquire the necessary suite of skills in natural history, ecology, mathematics, and physics to devise a theory as complex as fractal networks.
The book ends with a provoking question and answer that sums up the debate nicely:
Is nature beautifully simple or beautifully complex? Yes, it is.
More about In the Beat of a Heart can be found at the book’s website: http://www.inthebeatofaheart.com.