Friday, May 1, 2009

Paul Lauterbur

This week we celebrate the 80th anniversary of the birth of Paul Lauterbur (May 6, 1929–March 27, 2007), co-winner with Peter Mansfield of the 2003 Nobel Prize in Physiology or Medicine “for their discoveries concerning magnetic resonance imaging.” Lauterbur’s contribution was the introduction of magnetic field gradients, so that differences in frequency could be used to localize the spins. In Sec. 18.9 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe this technique.
Creation of the [magnetic resonance] images requires the application of gradients in the static magnetic field Bz which cause the Larmor frequency to vary with position. The first gradient is applied in the z direction [the same direction as the static magnetic field] during the π/2 pulse so that only the spins in a slice in the patient are selected (nutated into the xy plane). Slice selection is followed by gradients of Bz in the x and y directions. These also change the Larmor frequency. If the gradient is applied during the readout, the Larmor frequency of the signal varies as Bz varies with position. If the gradient is applied before the readout, it causes a position-dependent phase shift in the signal which can be detected.
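To see concretely how a gradient turns frequency into position, here is a little Python sketch of one-dimensional frequency encoding. It is my own illustration, not an example from the book; the gradient strength and field of view are values I picked as typical, and the two Gaussian “objects” are arbitrary.

```python
# 1-D frequency encoding: with a gradient Gx, the Larmor frequency becomes
# f(x) = (gamma/2pi)(B0 + Gx*x), so after demodulation each position
# contributes its own offset frequency, and a Fourier transform of the
# recorded signal recovers the spin density along x.
import numpy as np

gamma_bar = 42.58e6      # proton gyromagnetic ratio / 2pi (Hz/T)
Gx = 10e-3               # gradient strength (T/m), a typical clinical value
fov = 0.2                # field of view (m)
nx = 256

x = np.linspace(-fov/2, fov/2, nx)
rho = np.exp(-((x - 0.03)/0.01)**2) + 0.5*np.exp(-((x + 0.05)/0.02)**2)

dt = 1.0/(gamma_bar*Gx*fov)            # sample so the FOV is not aliased
t = np.arange(nx)*dt
signal = rho @ np.exp(2j*np.pi*gamma_bar*Gx*np.outer(x, t))

recon = np.abs(np.fft.fftshift(np.fft.fft(signal)))
x_recon = np.fft.fftshift(np.fft.fftfreq(nx, dt))/(gamma_bar*Gx)
print(f"reconstructed peak at x = {100*x_recon[np.argmax(recon)]:.1f} cm (true: 3.0 cm)")
```

The Fourier transform maps frequency back to position, which is the heart of Lauterbur’s idea; phase encoding works the same way, one gradient step at a time.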
Lauterbur grew up in Sidney, Ohio. He attended college at the Case Institute of Technology, now part of Case Western Reserve University in Cleveland, where he majored in chemistry. He obtained his PhD in chemistry in 1962 from the University of Pittsburgh. He was a professor at the State University of New York at Stony Brook from 1969 to 1985, during which time he published his landmark paper “Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance” (Nature, Volume 242, Pages 190–191, 1973). As the story goes, Lauterbur came up with the idea of using gradients to do magnetic resonance imaging while eating a hamburger in a Big Boy restaurant.
Principles of Magnetic Resonance Imaging: A Signal Processing Perspective, by Liang and Lauterbur.

You can learn more about magnetic resonance imaging by reading Lauterbur’s book (with Zhi-Pei Liang) Principles of Magnetic Resonance Imaging: A Signal Processing Perspective. If you are looking for a briefer introduction, consult Chapter 18 of Intermediate Physics for Medicine and Biology. Be sure to use the 4th edition if you want to learn about recent developments, such as functional MRI and diffusion tensor imaging.

Friday, April 24, 2009

Proton Therapy

Section 16.11.3 in the 4th edition of Intermediate Physics for Medicine and Biology discusses proton therapy.
Protons are also used to treat tumors. Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak.
Proton therapy has become popular recently: see articles in US News and World Report and on MSNBC. There even exists a National Association for Proton Therapy. Their website explains the main advantage of protons over X-rays.
Both standard x-ray therapy and proton beams work on the principle of selective cell destruction. The major advantage of proton treatment over conventional radiation, however, is that the energy distribution of protons can be directed and deposited in tissue volumes designated by the physicians in a three-dimensional pattern from each beam used. This capability provides greater control and precision and, therefore, superior management of treatment. Radiation therapy requires that conventional x-rays be delivered into the body in total doses sufficient to assure that enough ionization events occur to damage all the cancer cells. The conventional x-rays’ lack of charge and mass, however, results in most of their energy from a single conventional x-ray beam being deposited in normal tissues near the body’s surface, as well as undesirable energy deposition beyond the cancer site. This undesirable pattern of energy placement can result in unnecessary damage to healthy tissues, often preventing physicians from using sufficient radiation to control the cancer.

Protons, on the other hand, are energized to specific velocities. These energies determine how deeply in the body protons will deposit their maximum energy. As the protons move through the body, they slow down, causing increased interaction with orbiting electrons.
Figure 16.51 of the 4th edition of Intermediate Physics for Medicine and Biology shows the dose versus depth from a 150 MeV proton beam, including the all-important Bragg peak located many centimeters below the tissue surface. If you want to understand better why proton energy is deposited in the Bragg peak rather than being spread throughout the tissue, solve Problem 31 in Chapter 16.
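For those who would rather experiment than derive, here is a rough Python sketch of a Bragg curve. It is not Fig. 16.51; it rests on the empirical Bragg–Kleeman range–energy rule with water parameters I am assuming (α ≈ 0.0022 cm per MeV^p and p ≈ 1.77), and it ignores range straggling and nuclear interactions, so the computed peak is sharper than a measured one.

```python
# Back-of-the-envelope Bragg curve from the Bragg-Kleeman rule R = alpha*E^p.
import numpy as np

alpha, p = 2.2e-3, 1.77       # assumed values for protons in water
E0 = 150.0                    # beam energy (MeV)
R = alpha*E0**p               # range in water (cm), roughly 16 cm

z = np.linspace(0.0, 0.999*R, 500)            # depth (cm)
E = ((R - z)/alpha)**(1/p)                    # residual proton energy (MeV)
S = (R - z)**(1/p - 1)/(p*alpha**(1/p))       # stopping power -dE/dz (MeV/cm)

print(f"range R = {R:.1f} cm")
print(f"dose rate at the surface: {S[0]:.1f} MeV/cm")
print(f"dose rate near the peak : {S[-1]:.1f} MeV/cm")
```

Because 1/p − 1 is negative, the stopping power grows as the protons near the end of their range, which is just the Bragg peak of Problem 31.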

To learn more about the pros and cons of proton therapy, I suggest several point/counterpoint articles from the journal Medical Physics: “Within the Next Decade Conventional Cyclotrons for Proton Radiotherapy Will Become Obsolete and Replaced by Far Less Expensive Machines Using Compact Laser Systems for the Acceleration of the Protons,” by Chang-Ming Ma and Richard Maughan (Medical Physics, Volume 33, Pages 571–573, 2006); “Proton Therapy Is the Best Radiation Treatment Modality for Prostate Cancer,” by Michael Moyers and Jean Pouliot (Medical Physics, Volume 34, Pages 375–378, 2007); and “Proton Therapy Is Too Expensive for the Minimal Potential Improvements in Outcome Claimed,” by Robert Schulz and Alfred Smith (Medical Physics, Volume 34, Pages 1135–1138, 2007).

Friday, April 17, 2009

The Diffusion Approximation to Photon Transport

Chapter 14 in the 4th edition of Intermediate Physics for Medicine and Biology contains a section describing the diffusion approximation to photon transport.
When photons enter a substance, they may scatter many times before being absorbed or emerging from the substance. This leads to turbidity, which we see, for example, in milk or clouds. The most accurate studies of multiple scattering are done with “Monte Carlo” computer simulation, in which probabilistic calculations are used to follow a large number of photons as they repeatedly interact in the tissue being simulated. However, Monte Carlo techniques use lots of computer time. Various approximate analytical solutions also exist... One of the approximations, the diffusion approximation, is described here. It is valid when many scattering events occur for each photon absorption.
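The quoted passage mentions Monte Carlo simulation, so here is a toy random-walk version in Python. It is my sketch, not how production codes work (real programs handle anisotropic scattering, refractive-index mismatch, and much more); in particular, using the reduced scattering coefficient μ's as if the scattering were isotropic is itself an approximation.

```python
# Toy Monte Carlo photon transport in a half-infinite slab. Photons enter
# at z = 0 heading straight down; path lengths between interactions are
# exponential with mean 1/(mu_a + mu_s'); each interaction absorbs the
# photon with probability mu_a/(mu_a + mu_s'), else scatters it isotropically.
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_s = 0.08, 4.0            # 1/mm (same values as the problem below)
mu_t = mu_a + mu_s

depths = []
for _ in range(20000):
    z, cos_th = 0.0, 1.0          # start at the surface, heading into tissue
    while True:
        z += cos_th*rng.exponential(1.0/mu_t)
        if z < 0.0:               # photon escaped back out of the surface
            break
        if rng.random() < mu_a/mu_t:
            depths.append(z)      # photon absorbed at depth z
            break
        cos_th = rng.uniform(-1.0, 1.0)   # isotropic scattering
print(f"{len(depths)} of 20000 photons absorbed;"
      f" mean absorption depth = {np.mean(depths):.2f} mm")
```

How the diffuse fluence from such multiply scattered photons is distributed with depth is exactly what the new homework problem below explores.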
Optical Mapping of Cardiac Excitation and Arrhythmias, edited by Rosenbaum and Jalife.
Today, I would like to present a new homework problem about the diffusion approximation, based on a brief communication I published in the August 2008 issue of IEEE Transactions on Biomedical Engineering (Volume 55, Pages 2102–2104). I was interested in the problem because of its role in optical mapping of transmembrane potential in the heart, discussed briefly at the end of Sec. 7.10 and reviewed exhaustively in the excellent book Optical Mapping of Cardiac Excitation and Arrhythmias, edited by David Rosenbaum and Jose Jalife. Enjoy the problem, which belongs at the bottom of the left column of page 394.
Section 14.5

Problem 16 ½ Consider light with fluence rate φ0 continuously and uniformly irradiating a half-infinite slab of tissue having an absorption coefficient μa and a reduced scattering coefficient μ's. Divide the photons into two types: the incident ballistic photons that have not yet interacted with the tissue, and the diffuse photons undergoing multiple scattering. The diffuse photon fluence rate, φ, is governed by the steady state limit of the photon diffusion equation (Eq. 14.26). The source of diffuse photons is the scattering of ballistic photons, so the source term in Eq. 14.26 is s = μ's exp(-z/λunatten), where z is the depth below the tissue surface. At the surface (z=0), the diffuse photons obey the boundary condition φ = 2 D dφ/dz.
(a) Derive an analytical expression for the diffuse photon fluence rate in the tissue, φ(z).
(b) Plot φ(z) versus z for μa = 0.08 mm−1 and μ's = 4 mm−1.
(c) Evaluate λunatten and λdiffuse for these parameters.
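If you want to check your analytic answer to part (a) numerically, here is a Python sketch that solves the same boundary value problem by finite differences. I take φ0 = 1 and use D = 1/(3(μa + μ's)) and λunatten = 1/(μa + μ's), which I believe is consistent with Chapter 14; treat it as a cross-check, not as the solution itself.

```python
# Finite-difference solution of  D phi'' - mu_a phi + mu_s' exp(-z/lam_u) = 0
# with phi = 2 D dphi/dz at z = 0 and phi -> 0 deep in the tissue.
import numpy as np

mu_a, mu_s = 0.08, 4.0              # 1/mm
lam_u = 1.0/(mu_a + mu_s)           # ballistic (unattenuated) length, mm
D = 1.0/(3.0*(mu_a + mu_s))         # photon diffusion constant, mm
lam_d = np.sqrt(D/mu_a)             # diffusion length, mm

N, zmax = 1000, 10.0                # zmax >> lam_d, so phi(zmax) ~ 0
h = zmax/N
z = np.linspace(0.0, zmax, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], A[0, 1] = 1.0 + 2.0*D/h, -2.0*D/h    # surface boundary condition
A[N, N] = 1.0                                 # phi = 0 at the deep boundary
for i in range(1, N):
    A[i, i - 1] = A[i, i + 1] = D/h**2
    A[i, i] = -2.0*D/h**2 - mu_a
    b[i] = -mu_s*np.exp(-z[i]/lam_u)
phi = np.linalg.solve(A, b)

i_pk = np.argmax(phi)
print(f"lambda_unatten = {lam_u:.3f} mm, lambda_diffuse = {lam_d:.3f} mm")
print(f"phi peaks at z = {z[i_pk]:.2f} mm, with phi(peak)/phi(0) = {phi[i_pk]/phi[0]:.2f}")
```

With these parameters the code puts the peak a few tenths of a millimeter below the surface, in line with the discussion that follows.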
The most interesting aspect of this calculation is that the diffuse photon fluence rate is not maximum at the tissue surface, but rather it builds up to a peak below the surface, somewhat like the imparted energy from 10 MeV photons shown in Fig. 15.32. This has some interesting implications for optical mapping of the heart: subsurface tissue may contribute more to the optical signal than surface tissue.

If you want the solution, send me an email (roth@oakland.edu) and I will gladly supply it.

Friday, April 10, 2009

We Should All Congratulate Professor Hobbie For This Excellent Text

Peter Kahn reviewed the third edition of Intermediate Physics for Medicine and Biology in the American Journal of Physics (Volume 67, Pages 457–458, 1999). He wrote:
As a professor of physics I am upset that our biology students have such brief and superficial exposure to physics and mathematics, and that, at the same time, our physics students go through a curriculum that ignores the important role that biology is playing in modern science. We should all congratulate Professor Hobbie for this excellent text. Now it is up to us to initiate the dialogue that builds on this solid foundation.

Friday, April 3, 2009

Div, Grad, Curl, and All That

Russ Hobbie and I assume that readers of the 4th edition of Intermediate Physics for Medicine and Biology know the basics of calculus (our preface states that “calculus is used without apology”). We even introduce some concepts from vector calculus, such as the divergence, gradient, and curl. Although these vector derivatives are crucial for understanding topics such as diffusion and electricity, many readers may be unfamiliar with them. These operators are even more complicated in curvilinear coordinate systems, and in Appendix L we summarize how to write the divergence, gradient, curl, and Laplacian in rectangular, cylindrical, and spherical coordinates.
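Readers who want to play with these operators can do so symbolically. Here is a short sympy sketch (my addition, not from the book) verifying two identities that the appendix takes for granted: the curl of a gradient and the divergence of a curl both vanish.

```python
# Symbolic check, in Cartesian coordinates, that curl(grad f) = 0 and
# div(curl F) = 0 for smooth fields f and F.
from sympy import sin, exp
from sympy.vector import CoordSys3D, gradient, divergence, curl

R = CoordSys3D('R')
f = R.x**2 * sin(R.y) * exp(R.z)                    # an arbitrary scalar field
F = R.x*R.y*R.i + R.y*R.z*R.j + R.z*R.x*R.k         # an arbitrary vector field

print(curl(gradient(f)))       # prints 0 (the zero vector)
print(divergence(curl(F)))     # prints 0
```

The same module also supports cylindrical and spherical coordinate systems, so you can verify the Appendix L formulas directly if you are so inclined.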

Div, Grad, Curl, and All That, by H. M. Schey.
When I was a young physics student at the University of Kansas, Dr. Jack Culvahouse gave me a book that helped explain vector calculus: Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, by H. M. Schey. For me, this book made clear and intuitive what had been confusing and complicated. By defining the divergence and curl in terms of surface and line integrals, I suddenly could understand what these seemingly random collections of partial derivatives meant. One can hardly make sense of Maxwell’s equations of electromagnetism without vector calculus (try reading a textbook from Maxwell’s era before vector calculus was invented if you don't believe me). In fact, Schey introduces vector calculus using electromagnetism as his primary example:
In this text the subject of vector calculus is presented in the context of simple electrostatics. We follow this procedure for two reasons. First, much of vector calculus was invented for use in electromagnetic theory and is ideally suited to it. This presentation will therefore show what vector calculus is and at the same time give you an idea of what it's for. Second, we have a deep-seated conviction that mathematics—in any case some mathematics—is best discussed in a context that is not exclusively mathematical. Thus, we will soft-pedal mathematical rigor, which we think is an obstacle to learning this subject on a first exposure to it, and appeal as much as possible to physical and geometric intuition.
For readers of Intermediate Physics for Medicine and Biology who get stuck when we delve into vector calculus, I suggest setting our book aside for a few days (but only a few!) to read Div, Grad, Curl, and All That. Not only will you be able to understand our book better, but you’ll find this background useful in many other fields of physics, math, and engineering.

Friday, March 27, 2009

Sigma Xi

Here at Oakland University, this Tuesday, March 31, is our annual Sigma Xi lecture (4 P.M. in 201 Dodge Hall of Engineering). Each year, we invite a leading scientist to OU to give a lecture for a general audience. This year Dr. Vicki Chandler, Chief Program Director of the Gordon and Betty Moore Foundation, will give a talk about “Epigenetic Silencing Across Generations.” (The term “epigenetic gene silencing” describes the switching off of a gene by a mechanism other than genetic modification. That is, a gene that would be expressed, or turned on, under normal circumstances is switched off by machinery in the cell.)

For six years, I served as the president of the Oakland University chapter of Sigma Xi, the Scientific Research Society. As readers of the 4th edition of Intermediate Physics for Medicine and Biology become biomedical researchers, they should consider joining Sigma Xi. I joined as a graduate student at Vanderbilt University.
Sigma Xi is an international, multidisciplinary research society whose programs and activities promote the health of the scientific enterprise and honor scientific achievement. There are nearly 60,000 Sigma Xi members in more than 100 countries around the world. Sigma Xi chapters, more than 500 in all, can be found at colleges and universities, industrial research centers and government laboratories. The Society endeavors to encourage support of original work across the spectrum of science and technology and to promote an appreciation within society at large for the role research has played in human progress.
The mission of Sigma Xi is “to enhance the health of the research enterprise, foster integrity in science and engineering, and promote the public's understanding of science for the purpose of improving the human condition.” As a member of Sigma Xi, you automatically receive a subscription to American Scientist, the award-winning illustrated magazine of science and technology. I particularly enjoy Henry Petroski’s monthly essay on topics in engineering, and the book reviews are outstanding. The magazine alone is worth the cost of membership. Another benefit that I look forward to each day is Science in the News, a free e-mail bulletin featuring top science and technology stories. Sigma Xi also has an annual meeting, including a student research conference. Last year, the meeting was November 20–23 in Washington, DC. The society is a strong advocate of scientific research, and is worthy of support.

Finally, you have to love the society’s motto: “Companions in Zealous Research.”

Friday, March 20, 2009

The West-Brown-Enquist Model for Allometric Scaling

Chapter 2 of the 4th edition of Intermediate Physics for Medicine and Biology ends with a section on “Food Consumption, Basal Metabolic Rate, and Scaling.” Here Russ Hobbie and I discuss the famous “3/4-power law” (also known as Kleiber’s law), which relates the metabolic rate R (in watts) to the body mass M (in kg) by the equation R = 4.1 M^0.751 (Eq. 2.32c in our book). We conclude the section by writing
A number of models have been proposed to explain a 3/4-power dependence [McMahon (1973); Peters (1983); West et al. (1999); Banavar et al. (1999)]. West et al. argue that the 3/4-power dependence is universal: they derive it from a model that supplies nutrients through a branching network that reaches all parts of the organism, minimizes the energy required for distribution, and ends in capillaries (or terminal xylem in plants) that are all the same size. Whether it is universal is still debated [Kozlowski and Konarzewski (2004)]. West and Brown (2004) review quarter-power scaling in a variety of circumstances.
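To get a feel for what Eq. 2.32c implies, here is a tiny Python sketch; the masses below are rough values I am assuming, not data from our book.

```python
# Evaluate Kleiber's law, R = 4.1 * M^0.751 (Eq. 2.32c), for a few animals.
masses = {"mouse": 0.02, "human": 70.0, "elephant": 3500.0}   # body mass, kg
for animal, M in masses.items():
    R = 4.1 * M**0.751                                        # watts
    print(f"{animal:9s} M = {M:7.2f} kg   R = {R:8.1f} W   R/M = {R/M:5.2f} W/kg")
```

The metabolic rate per unit mass falls from roughly 11 W/kg for a mouse to about 0.5 W/kg for an elephant, and a 70-kg human comes out near 100 W, a reassuring sanity check.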
When we wrote this paragraph, the origin of the 3/4-power law was still being hotly debated in the literature. Readers of Intermediate Physics for Medicine and Biology might like an update.

First, this work is highly cited. West, Brown, and Enquist’s first paper in Science (“A General Model for the Origin of Allometric Scaling Laws in Biology,” Volume 276, Pages 122–126, 1997; not cited in our book) now has over 1000 citations. Their second paper, which we list in the references at the end of Chapter 2, has nearly 400 citations. The paper by Banavar, Maritan, and Rinaldo cited in Chapter 2 has over 200 citations. Clearly, these studies have had a major impact on the field.

Second, the work has generated quite a bit of discussion in the press. The December 2008 issue of The Scientist has an article by Bob Grant titled “The Powers That Might Be” about West and his colleagues and how they have coped with criticisms of their work. An interview with Geoffrey West can be found at physicsworld.com, and one with Brian Enquist at www.in-cities.com. In 2004, John Whitfield published a feature in the open access journal PLOS Biology reviewing the field (“open access” means that anyone can access the paper over the internet, without the need for a journal subscription).

Third, several recent papers in scientific journals have addressed this topic. Savage et al. have analyzed what they refer to as the WBE model in an article appearing in the open access journal PLOS Computational Biology (Volume 4, Article e1000171, 2008). The authors’ summary states

The rate at which an organism produces energy to live increases with body mass to the 3/4 power. Ten years ago West, Brown, and Enquist posited that this empirical relationship arises from the structure and dynamics of resource distribution networks such as the cardiovascular system. Using assumptions that capture physical and biological constraints, they defined a vascular network model that predicts a 3/4 scaling exponent. In our paper we clarify that this model generates the 3/4 exponent only in the limit of infinitely large organisms. Our calculations indicate that in the finite-size version of the model metabolic rate and body mass are not related by a pure power law, which we show is consistent with available data. We also show that this causes the model to produce scaling exponents significantly larger than the observed 3/4. We investigate how changes in certain assumptions about network structure affect the scaling exponent, leading us to identify discrepancies between available data and the predictions of the finite-size model. This suggests that the model, the data, or both, need reassessment. The challenge lies in pinpointing the physiological and evolutionary factors that constrain the shape of networks driving metabolic scaling.
In another paper, published in the December 2006 issue of Physics of Life Reviews (Volume 3, Pages 229–261), da Silva et al. write that
One of the most pervasive laws in biology is the allometric scaling, whereby a biological variable Y is related to the mass M of the organism by a power law, Y = Y0 M^b, where b is the so-called allometric exponent. The origin of these power laws is still a matter of dispute mainly because biological laws, in general, do not follow from physical ones in a simple manner. In this work, we review the interspecific allometry of metabolic rates, where recent progress in the understanding of the interplay between geometrical, physical and biological constraints has been achieved.

For many years, it was a universal belief that the basal metabolic rate (BMR) of all organisms is described by Kleiber’s law (allometric exponent b = 3/4). A few years ago, a theoretical basis for this law was proposed, based on a resource distribution network common to all organisms. Nevertheless, the 3/4-law has been questioned recently. First, there is an ongoing debate as to whether the empirical value of b is 3/4 or 2/3, or even nonuniversal. Second, some mathematical and conceptual errors were found [in] these network models, weakening the proposed theoretical arguments. Another pertinent observation is that the maximal aerobically sustained metabolic rate of endotherms scales with an exponent larger than that of BMR. Here we present a critical discussion of the theoretical models proposed to explain the scaling of metabolic rates, and compare the predicted exponents with a review of the experimental literature. Our main conclusion is that although there is not a universal exponent, it should be possible to develop a unified theory for the common origin of the allometric scaling laws of metabolism.
Now, five years after we included the topic in Intermediate Physics for Medicine and Biology, the controversy continues. It makes for a wonderful example of how ideas from fundamental physics can elucidate biological laws, and a warning about how complicated and messy biology can be, limiting the application of simple models. I can't tell you how this debate will ultimately be resolved. But it provides a fascinating case study in the interaction of physics and biology.

Friday, March 13, 2009

The Discovery of Technetium

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the biomedical properties of the element technetium (Tc), which plays an important role in nuclear medicine.
The most widely used isotope is 99m-Tc. As its name suggests, it does not occur naturally on earth, since it has no stable isotopes... [It decays by emitting] a nearly monoenergetic 140-keV gamma ray. Only about 10% of the energy is in the form of nonpenetrating radiation. The isotope is produced in the hospital from the decay of its parent, 99-Mo, which is a fission product of 235-U and can be separated from about 75 other fission products. The 99-Mo decays to 99m-Tc.
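The buildup of 99m-Tc from its parent in such a generator follows the two-member Bateman equation. Here is a short Python sketch; the half-lives are standard values (about 66 hours for 99-Mo and 6 hours for 99m-Tc), while the roughly 88% branching fraction of 99-Mo decays that feed the metastable state is a number I am adding, not one from our book.

```python
# 99-Mo -> 99m-Tc generator: daughter activity after an elution at t = 0.
import numpy as np

lam_Mo = np.log(2)/65.9        # 99-Mo decay constant (1/h)
lam_Tc = np.log(2)/6.0         # 99m-Tc decay constant (1/h)
branch = 0.88                  # assumed fraction of decays feeding 99m-Tc

def tc_activity(t, A_Mo0=1.0):
    """99m-Tc activity at time t (hours), per unit initial 99-Mo activity."""
    return (branch*A_Mo0*lam_Tc/(lam_Tc - lam_Mo)
            * (np.exp(-lam_Mo*t) - np.exp(-lam_Tc*t)))

t = np.linspace(0.0, 72.0, 1001)
A = tc_activity(t)
print(f"99m-Tc activity peaks about {t[np.argmax(A)]:.0f} hours after elution")
```

The daughter activity peaks roughly 23 hours after an elution, which is why hospital generators (“moly cows”) are typically milked once a day.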
The Search for the Elements, by Isaac Asimov.
Technetium has an interesting history. When Dmitri Mendeleev proposed the periodic table, he predicted that the gaps in his table corresponded to elements that had not yet been discovered. In The Search for the Elements, Isaac Asimov writes
The first element produced [by artificial transmutation] was the missing number 43 [technetium]. A claim to discovery of this element had been made in 1925 by Noddack, Tacke, and Berg, the discoverers of rhenium. They had named element number 43 “masurium” (after a district in East Prussia). But no one else was able to find masurium in the same source material, so their supposed discovery had remained a question mark. It was, in fact, just a mistake. In 1937 Emilio Gino Segre of Italy, an ardent hunter for the element, identified the real number 43.

[Ernest O.] Lawrence had bombarded a sample of molybdenum (element number 42) with protons accelerated in his cyclotron. Finally he got some radioactive stuff which he sent to Segre in Italy for analysis. Segre and an assistant, C. Perrier, traced some of the radioactivity to an element which behaved like manganese. Since the missing element 43 belonged in the vacancy in the periodic table next to manganese, they were sure this was it.

It turned out that element number 43 had several isotopes. Oddly, all of them were radioactive. There were no stable isotopes of the element!... Segre named the element number 43 technetium, from a Greek word meaning “artificial,” because it was the first element made by man.
Asimov tells the standard history of the discovery of technetium, but recently there has been a new twist to the story. John Armstrong of the National Institute of Standards and Technology (NIST) suggested that maybe masurium really was technetium. In an abstract to a NIST Sigma Xi colloquium in 2000, titled “The Disputed Discovery of Element 43 (Technetium),” Armstrong and P. H. M. Van Assche write
In 1925, Noddack, Tacke and Berg reported discovery of element Z = 43, which they named Masurium, based on line identification of x-ray emission spectra from chemically concentrated residues of various U-rich minerals. Their results were disputed and eventually the discovery of element 43 (Technetium) was generally credited to Perrier and Segre, based on their chemical separation of neutron-irradiated molybdenum in 1937. Using first principles x-ray emission spectral generation algorithms from the N.I.S.T. DTSA spectral processing program, we have simulated the x-ray spectra that would be expected using their likely analytical conditions (from their papers and contemporaneous reports) and the likely residue compositions suggested by Noddack et al. and Van Assche. The resulting spectra are in close agreement with that reported by Noddack et al., place limits on the possible residue compositions, and are supportive of the presence of detectable amounts of element 43 in their sample. Moreover, the calculated mass of element 43 shown in their spectrum is consistent with the amount that would be now expected from the spontaneous fission of U present in the ores they studied. The history of the original masurium/technetium controversy and the means used to reexamine the original record will be presented in this scientific detective story.
Was masurium really technetium? You will have to look at the evidence and decide for yourself. The story certainly is fascinating, and will interest readers of Intermediate Physics for Medicine and Biology.

Friday, March 6, 2009

NCRP Report No. 160

In past entries to this blog, I have reported on a growing controversy over radiation exposure from medical procedures. On December 7, 2007 I described a study by David Brenner and Eric Hall warning that the increased popularity of CT scans, particularly in children, can lead to an increased incidence of cancer. Then, just three weeks ago, I discussed the “Image Gently” website, created to raise awareness in the imaging community of the need to adjust radiation dose when imaging children.

This week the debate intensified, with three simultaneous press releases. On Wednesday, the National Council on Radiation Protection and Measurements (NCRP) issued a new study titled “Medical Radiation Exposure of the U.S. Population Greatly Increased Since the Early 1980s.” This report, also known as NCRP Report No. 160, updates NCRP Report No. 93, Ionizing Radiation Exposure of the Population of the United States, published in 1987. Readers of the 4th edition of Intermediate Physics for Medicine and Biology may recall that Russ Hobbie and I based much of our discussion in Chapter 16 about the risk of ionizing radiation on Report No. 93. The press release announcing Report No. 160 states that
In 2006, Americans were exposed to more than seven times as much ionizing radiation from medical procedures as was the case in the early 1980s, according to a new report on population exposure released March 3rd by the National Council on Radiation Protection and Measurements (NCRP) at its annual meeting in Bethesda, Maryland. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources.
The report triggered an immediate response from the American Association of Physicists in Medicine. Their press release, titled “NCRP Report No. 160 on Increased Average Radiation Exposure of the U.S. Population Requires Perspective and Caution,” begins
Scientists at the American Association of Physicists in Medicine (AAPM) are offering additional background information to help the public avoid misinterpreting the findings contained in a report issued today by the National Council on Radiation Protection and Measurements (NCRP), a non-profit body chartered by the U.S. Congress to make recommendations on radiation protection and measurements. The report is not without scientific controversy and requires careful interpretation.
Not to be outdone, the American College of Radiology also issued its own press release Wednesday.
A recent National Council on Radiation Protection and Measurements (NCRP) Report (NCRP Report No. 160, Ionizing Radiation Exposure of the Population of the United States) stated that the U.S. population is now exposed to seven times more radiation each year from medical imaging exams than in 1980. The American College of Radiology (ACR), Society for Pediatric Radiology (SPR), Society of Breast Imaging (SBI), and the Society of Computed Body Tomography and Magnetic Resonance (SCBT-MR) urge Americans, including elected officials and medical providers, to understand why this increase occurred, consider the Report’s information in its proper context, and support appropriate actions to help lower the radiation dose experienced each year from these exams.

“It is essential that this Report not be interpreted solely as an increase in risk to the U.S. population without also carefully considering the tremendous and undeniable benefits of medical imaging. Patients must make these risk/benefit decisions regarding their imaging care based on all the facts available and in consultation with their doctors,” said James H. Thrall, MD, FACR, chair of the ACR Board of Chancellors.
Who says medical physics isn’t exciting? Seriously, this is an important topic, and deserves the careful scrutiny of anyone interested in medical physics. As always, I recommend the 4th edition of Intermediate Physics for Medicine and Biology as a good starting point to learn the basic physics that underlies this controversy. And keep coming back to this blog for updates as the debate unfolds.

Friday, February 27, 2009

Hello to the Medical Physics 2 Class at Ball State University

Russ Hobbie and I would like to thank those instructors and students who use the 4th edition of Intermediate Physics for Medicine and Biology as the textbook for their class. Also, we greatly appreciate those careful readers who find errors in our book and inform us about them. Without our dear readers, all the work preparing the 4th edition would be pointless.

Special thanks go to Dr. Ranjith Wijesinghe, Assistant Professor of Physics and Astronomy at Ball State University in Muncie, Indiana. This semester, Ranjith is teaching APHYS 316 (Medical Physics 2) using Intermediate Physics for Medicine and Biology. As he prepares his class lectures, Ranjith emails me all the mistakes he finds in our book, which I dutifully add to the errata. I can keep track of what the class is covering by the location of the errors Ranjith finds. In mid January the class was studying Fourier series, and he found a missing “sin” in Eq. 11.26d. By early February they were analyzing images, and Ranjith noticed some missing text in the figure associated with Problem 12.7. Then in mid February they began studying ultrasound, and eagle-eyed Ranjith emailed me that the derivative in Eq. 13.2 should be a partial derivative. I’m expecting some newly-discovered typo in Chapter 14 next week.

Electric Fields of the Brain: The Neurophysics of EEG, by Paul Nunez.
Ranjith is an old friend of mine. We were graduate students together at Vanderbilt University in the late 1980s, and both worked in the lab of John Wikswo. I took care of the crayfish (which have some giant axons that are useful for studying action currents) and Ranjith looked after the frogs (whose sciatic nerve is an excellent model for analyzing the compound action potential). After leaving Vanderbilt, Ranjith was a postdoc at Tulane University with Paul Nunez, an expert in electroencephalography and author of the acclaimed textbook Electric Fields of the Brain: The Neurophysics of EEG. While a member of Nunez’s group, Ranjith coauthored several papers, including “EEG Coherency: I. Statistics, Reference Electrode, Volume Conduction, Laplacians, Cortical Imaging, and Interpretation at Multiple Scales” in the journal Electroencephalography and Clinical Neurophysiology (Volume 103, Pages 499–515, 1997). According to Google Scholar, this landmark paper has been cited 277 times, which is quite an accomplishment (and is more citations than my most cited paper has).

I hope Ranjith keeps on sending me errors he finds, and I encourage other careful readers to do so too. And a big HELLO! to Ball State students taking Medical Physics 2. The true measure of a textbook is what the students think of it. I hope you all find it useful, and best of luck to you as the end of the semester approaches. Don’t give Dr. Wijesinghe too hard a time in class. If he finishes early one day and you have a few minutes to spare, ask him for some old stories from graduate school. He has a few, if he will tell you!