Friday, May 22, 2009

Using Logarithmic Transformations When Fitting Allometric Data

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss least squares fitting. A homework problem at the end of Chapter 11 (see page 321) asks the student to fit some data to a power law.
Problem 11 Consider the data given in Problem 2.36 relating molecular weight M and molecular radius R. Assume the radius is determined from the molecular weight by a power law: R = B M^n. Fit the data to this expression to determine B and n. Hint: Take logarithms of both sides of the equation.
The solution manual (available at the book’s website; contact one of the authors for the password) outlines how taking logarithms makes the problem linear, so a simple linear least squares fit gives the solution R = 0.0534 M^0.371.

However, inquisitive students may ask, “What if I don’t follow the hint and do a least squares fit to the original power law without taking logarithms? Do I get the same result?” This becomes a more difficult problem, since you must now make a nonlinear least squares fit. Nevertheless, I solved the problem this way (using a terribly inefficient iterative guess-and-check method) and found R = 0.0619 M^0.358.
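To see how the two approaches differ in practice, here is a minimal Python sketch of both fits. The (M, R) values below are made-up placeholders, not the Problem 2.36 data, so the fitted numbers will not match those quoted above; the point is only to illustrate the two procedures.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up (M, R) pairs standing in for the Problem 2.36 data, which are not
# reproduced here; the fitted numbers will therefore differ from those in the text.
M = np.array([1.0e2, 1.0e3, 1.0e4, 1.0e5, 1.0e6])   # molecular weight
R = np.array([0.31, 0.72, 1.6, 3.8, 8.7])           # molecular radius (arbitrary units)

# Method 1 (follow the hint): take logarithms, then do a linear least squares fit.
n_log, log_B = np.polyfit(np.log(M), np.log(R), 1)
B_log = np.exp(log_B)

# Method 2 (ignore the hint): nonlinear least squares on the power law itself.
popt, _ = curve_fit(lambda m, B, n: B * m**n, M, R, p0=[B_log, n_log])
B_nl, n_nl = popt

print(f"log-transform fit:       R = {B_log:.4f} M^{n_log:.3f}")
print(f"nonlinear least squares: R = {B_nl:.4f} M^{n_nl:.3f}")
```

The two methods weight the data differently, so in general they return somewhat different values of B and n, which is exactly the discrepancy discussed below.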

Which solution is correct? Gary Packard and Thomas Boardman, both from Colorado State University, address this question in their paper “A Comparison of Methods for Fitting Allometric Equations to Field Metabolic Rates of Animals” (Journal of Comparative Physiology, B, Volume 179, Pages 175–182, 2009), and find that

the discrepancies could be caused by four sources of bias acting singly or in combination to cause exponents (and coefficients) estimated by back-transformation from logarithms to be inaccurate and misleading. First, influential outliers may go undetected in some analyses ... owing to the altered relationship between X and Y variables that accompanies logarithmic transformation ... Second, the use of logarithmic transformations may result in the fitting of mathematical functions (i.e., two-parameter power functions) that are poor descriptors of data in the original scale ... Third, a two-parameter power function ... fitted to the original data by least squares invokes a statistical model with additive error Y = aX^b + e and predicts arithmetic means for Y whereas a straight line fitted to logarithmic transformations of the data by least squares invokes an underlying model with multiplicative error Y = aX^b 10^e and predicts geometric means for the response variable ... And fourth, linear regression on nonlinear transformations like logarithms may introduce further bias into analyses by the unequal weighting of large and small values for both X and Y...

Conversion to logs results in an overall compression of the distributions for both the Y- and X-variables, but the compression is greater at the high ends of the scales than at the low ends... Consequently, linear regression on transformations gives unduly large influence to small values for Y and X and unduly small influence to large ones... This disparate influence is apparent in plots of back-transformations against the backdrop of data in their original scales, where the location of data for the largest animals had little apparent influence on fits of the lines.
Their paper concludes
Why transform? Log transformations have a long history of use in allometric research... and have been justified on grounds ranging from linearizing data to achieving better distributions for purposes of graphical display... However, most of the reasons for making such transformations disappeared with the advent of powerful PCs and sophisticated software for graphics and statistics. Indeed, the only ongoing application for log transformations in allometric research is in adjusting (“stabilizing”) distributions when residuals from analyses performed in the original scale are not distributed normally and/or when variances are not constant at all values for X. Assuming that log transformations actually linearize the data and produce the desired distributions, the regression of log Y on log X will yield evidence for a dependency between Y and X values in their original state, and statistical comparisons can be made with other samples that also are expressed in logarithmic form. However, interpretations about patterns of variation of the variables in the arithmetic domain seldom are warranted... because transformation fundamentally alters the relationship between the predictor and response variables. Interest typically is in patterns of variation of data expressed in an arithmetic scale, so this is the scale in which allometric analyses need to be performed if it is at all possible to do so.

Implications for allometric research. Accumulating evidence from the field of biology... and beyond... gives cause for concern about the accuracy and reliability of allometric equations that have been estimated in the traditional way... This concern has special bearing on the current debate about the “true” exponent for scaling of metabolism to body mass because exponents of 2/3 and 3/4 need both to be viewed with some skepticism. The aforementioned evidence also indicates that the traditional approach to allometric analysis may need to be abandoned in favor of a new research paradigm that will prevent future studies from being compromised by the insidious effects of logarithmic transformations.
In the quotations above, many of the ellipses indicate important references that I omitted to save space.

Packard and Boardman make a persuasive case that you might want to ignore our hint at the end of Problem 11. However, if you do ignore it, you had better be prepared to do nonlinear least squares fitting. See Sec. 11.2, Nonlinear Least Squares, in our book to get started.

For more about this subject, see Packard’s letter to the editor in the Journal of Theoretical Biology (Volume 257, Pages 515–518, 2009). Also, Russ Hobbie has a paper submitted to the journal Ecology that discusses a similar issue with exponential, rather than power law, least squares fits (“Single Pool Exponential Decomposition Models: Potential Pitfalls in Their Use in Ecological Studies”). Russ’s coauthors are E. Carol Adair and Sarah E. Hobbie (Russ’s daughter), both of the University of Minnesota in Saint Paul.

Friday, May 15, 2009

Current Injection into a Two-Dimensional Anisotropic Bidomain

Twenty years ago this month, Nestor Sepulveda, John Wikswo, and I published a calculation of the transmembrane potential induced when a point electrode passes current into cardiac tissue, as might happen when pacing the heart (“Current Injection into a Two-Dimensional Anisotropic Bidomain,” Biophysical Journal, Volume 55, Pages 987–999, 1989). When we wrote the paper, Sepulveda was a Research Assistant Professor and I had just gotten my PhD and was starting a one-year post doc in Wikswo’s laboratory at Vanderbilt University. We used a mathematical model of the electrical properties of cardiac tissue called the bidomain model, which was relatively new at that time. In Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe this result.
The bidomain has been used to understand the response of cardiac tissue to stimulation... [Sepulveda et al.’s] simulation explains a remarkable experimental observation. Although the speed of the wave front is greater along the fibers than perpendicular to them, if the stimulation is well above threshold, the wave front originates farther from the cathode in the direction perpendicular to the fibers—the direction in which the speed of propagation is slower. The simulations show that this is due to the anisotropy in conductivity. This is called the “dog-bone” shape of the virtual cathode. It can rotate with depth in the myocardium because the myocardial fibers change orientation. The difference in anisotropy accentuates the effect of a region of hyperpolarization surrounding the depolarization region produced by a cathode electrode. Strong point stimulation can also produce reentry waves that can interfere with the desired pacing effect.
The calculation was possible because Sepulveda had developed a finite element computer program that could solve the bidomain equations: a system of two coupled partial differential equations. Meanwhile, Wikswo was performing experiments on dog hearts with collaborators at the Vanderbilt Hospital, and had observed that the wave fronts originate from a spot farther from the electrode in the direction perpendicular to the fibers than in the direction parallel to them (“Virtual Cathode Effects During Stimulation of Cardiac Muscle: Two-Dimensional In Vivo Experiments,” Circulation Research, Volume 68, Pages 513–530, 1991). As soon as Sepulveda performed the calculation, they realized that it would explain Wikswo’s data.

I remember being so surprised that hyperpolarization would be produced just a millimeter or two away from a cathode that I quietly slipped into my office and developed a Fourier method to check Sepulveda’s finite element calculation. I got the same result: regions of hyperpolarization adjacent to the cathode. After our publication, six years passed before the prediction of hyperpolarized regions was verified experimentally, by three groups simultaneously, including researchers in Wikswo’s lab (“Virtual Electrodes in Cardiac Tissue: A Common Mechanism for Anodal and Cathodal Stimulation,” Biophysical Journal, Volume 69, Pages 2195–2210, 1995). During these years, when I was working at the National Institutes of Health, Josh Saypol, an undergraduate summer student, and I showed that the hyperpolarization could have an important effect: it could lead to reentry, a type of cardiac arrhythmia (“A Mechanism for Anisotropic Reentry in Electrically Active Tissue,” Journal of Cardiovascular Electrophysiology, Volume 3, Pages 558–566, 1992). For a simple, visual, and non-mathematical introduction to these ideas, see my paper in the Online Journal of Cardiology.

Our original publication in 1989 has now been cited in the literature 200 times. It remains one of my most cited papers (although, to be honest, I had less to do with the research than Sepulveda and Wikswo did), and is one of my favorites.

Friday, May 8, 2009

Color Vision

Color vision is one topic from biological physics that Russ Hobbie and I do not discuss in the 4th edition of Intermediate Physics for Medicine and Biology. Why? Well, the book is already rather long, and it is printed in black and white. To do justice to this topic, one really needs color pictures.

The Last Man Who Knew Everything, by Andrew Robinson.
The history of color vision is fascinating, in part because it illustrates the role that physics and physicists can play in the life sciences. The fundamental idea of trichromatic color vision was developed by two giants of 19th century physics, Thomas Young and Hermann von Helmholtz. Young was a fascinating intellectual, whom Andrew Robinson describes in his book The Last Man Who Knew Everything: Thomas Young, the Anonymous Genius Who Proved Newton Wrong and Deciphered the Rosetta Stone, Among Other Surprising Feats. Helmholtz was a leading figure in 19th century German physics (see Hermann von Helmholtz and the Foundations of Nineteenth-Century Science).

The Young-Helmholtz theory postulates three types of photoreceptors in the eye, corresponding to three different colors of light: red (long wavelength), green (intermediate wavelength), and blue (short wavelength). Other colors can be formed by a mixture of these three. For instance, yellow is a combination of red and green (which amazes me, because yellow does not look anything like what you might expect from a red-green mixture). You can make yellow light two ways: with a single wavelength of light intermediate between red and green so it excites both the red and green receptors (their response curves overlap), or with two wavelengths (one pure red and one pure green) mixed together. Your eye can’t tell the difference: in each case the red and green receptors are both excited. However, you could easily tell which is which using a prism or diffraction grating. Cyan is a mixture of green and blue (and cyan does indeed look like what you might call blue-green). Magenta is a mixture of blue and red, and is a particularly interesting case because it is not one of the colors of the rainbow: you could not, for instance, have a magenta laser, because a laser outputs a single wavelength of light—one color—but in order to produce magenta you need to excite both the red and blue receptors without exciting the green receptor. There is no way to do this without using at least two wavelengths. Of course, if you mix all three colors (that is, excite all three receptors simultaneously) you get white light.
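If you would like to see additive mixing on your own screen, the short Python sketch below draws the mixtures described above. It simply adds RGB channel values and displays the result; it assumes an ordinary RGB monitor and is meant only as an illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Additive mixing on an RGB display: each patch shows the sum of pure-color channels.
red   = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
blue  = np.array([0.0, 0.0, 1.0])

mixes = {
    "red + green = yellow": red + green,
    "green + blue = cyan": green + blue,
    "blue + red = magenta": blue + red,
    "red + green + blue = white": red + green + blue,
}

fig, axes = plt.subplots(1, len(mixes), figsize=(10, 2.5))
for ax, (label, rgb) in zip(axes, mixes.items()):
    # Display a 1x1 "image" whose single pixel is the mixed color
    ax.imshow(np.clip(rgb, 0, 1)[np.newaxis, np.newaxis, :])
    ax.set_title(label, fontsize=8)
    ax.axis("off")
plt.show()
```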

Color mixing is much easier to understand if you can visualize it. I suggest going to one of the excellent color mixing applets on the internet, such as this one, or this one. If none of this sounds much like what you learned when mixing paint in kindergarten, it’s because there you were really doing color subtraction, rather than color addition.

Once you understand color mixing, you can understand color blindness. The most common type is red-green color blindness, where either the red or green receptor is absent. If the green receptor is not present, you cannot distinguish red from green or yellow (both excite only the red receptor), although you can still distinguish red from blue or magenta. Not sure if you are colorblind? There are many websites available that offer tests, including this one and this one. Not all animals have trichromatic vision. Your dog has only two receptors, making her a dichromat.

Another physicist who worked on color vision was James Clerk Maxwell, who is best remembered for his monumental work on electromagnetic theory (“Maxwell’s Equations”), as well as his work on the kinetic theory of gases. But he also studied color vision by using wheels painted with more than one color, which when spun would produce a color mixture. Maxwell also produced the first color photograph.
The Feynman Lectures on Physics, by Richard Feynman.

We should keep in mind this admonition from Richard Feynman in Volume 1 of his famous The Feynman Lectures on Physics: “Color is not a question of the physics of light itself. Color is a sensation, and the sensation for different colors is different in different circumstances.” If you don’t believe this, see this optical illusion. You can even play tricks on your eye by creating afterimages like this one. Color vision is a fascinating subject, and a great example of the interaction between physics and physiology.

Friday, May 1, 2009

Paul Lauterbur

This week we celebrate the 80th anniversary of the birth of Paul Lauterbur (May 6, 1929–March 27, 2007), co-winner with Peter Mansfield of the 2003 Nobel Prize in Physiology or Medicine “for their discoveries concerning magnetic resonance imaging.” Lauterbur’s contribution was the introduction of magnetic field gradients, so that differences in frequency could be used to localize the spins. In Sec. 18.9 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe this technique.
Creation of the [magnetic resonance] images requires the application of gradients in the static magnetic field Bz which cause the Larmor frequency to vary with position. The first gradient is applied in the z direction [the same direction as the static magnetic field] during the π/2 pulse so that only the spins in a slice in the patient are selected (nutated into the xy plane). Slice selection is followed by gradients of Bz in the x and y directions. These also change the Larmor frequency. If the gradient is applied during the readout, the Larmor frequency of the signal varies as Bz varies with position. If the gradient is applied before the readout, it causes a position-dependent phase shift in the signal which can be detected.
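As a rough numerical illustration of slice selection (my own back-of-the-envelope sketch, not a calculation from the book), the snippet below assumes a 1.5 T magnet, a 10 mT/m gradient along z, and a π/2 pulse with a 1 kHz bandwidth, and estimates how the Larmor frequency varies with position and how thick the selected slice would be.

```python
# Illustrative numbers only (assumed, not from the book): a 1.5 T magnet with a
# 10 mT/m slice-selection gradient along z, for protons (gamma/2pi = 42.58 MHz/T).
gamma_over_2pi = 42.58e6   # Hz/T, proton gyromagnetic ratio divided by 2 pi
B0 = 1.5                   # T, static field
Gz = 10e-3                 # T/m, gradient along z

def larmor_frequency(z):
    """Larmor frequency (Hz) at position z (m) along the gradient."""
    return gamma_over_2pi * (B0 + Gz * z)

# A pi/2 pulse with a 1 kHz bandwidth centered on f(0) nutates only those spins
# whose frequencies fall in that band: a slice of thickness bandwidth/(gamma/2pi * Gz).
bandwidth = 1e3  # Hz
slice_thickness = bandwidth / (gamma_over_2pi * Gz)
print(f"Larmor frequency at z = 0: {larmor_frequency(0)/1e6:.2f} MHz")
print(f"selected slice thickness:  {slice_thickness*1e3:.2f} mm")
```

With these assumed numbers the selected slice is a couple of millimeters thick, the right order of magnitude for clinical imaging.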
Lauterbur grew up in Sidney, Ohio. He attended college at the Case Institute of Technology, now part of Case Western Reserve University in Cleveland, where he majored in chemistry. He obtained his PhD in chemistry in 1962 from the University of Pittsburgh. He was a professor at the State University of New York at Stony Brook from 1969 to 1985, during which time he published his landmark paper “Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance” (Nature, Volume 242, Pages 190–191, 1973). As the story goes, Lauterbur came up with the idea of using gradients to do magnetic resonance imaging while eating a hamburger in a Big Boy restaurant.
Principles of Magnetic Resonance Imaging: A Signal Processing Perspective, by Liang and Lauterbur.

You can learn more about magnetic resonance imaging by reading Lauterbur’s book (with Zhi-Pei Liang) Principles of Magnetic Resonance Imaging: A Signal Processing Perspective. If you are looking for a briefer introduction, consult Chapter 18 of Intermediate Physics for Medicine and Biology. Be sure to use the 4th edition if you want to learn about recent developments, such as functional MRI and diffusion tensor imaging.

Friday, April 24, 2009

Proton Therapy

Section 16.11.3 in the 4th edition of Intermediate Physics for Medicine and Biology discusses proton therapy.
Protons are also used to treat tumors. Their advantage is the increase of stopping power at low energies. It is possible to make them come to rest in the tissue to be destroyed, with an enhanced dose relative to intervening tissue and almost no dose distally (“downstream”) as shown by the Bragg peak.
Proton therapy has become popular recently: see articles in US News and World Report and on MSNBC. There even exists a National Association for Proton Therapy. Their website explains the main advantage of protons over X-rays.
Both standard x-ray therapy and proton beams work on the principle of selective cell destruction. The major advantage of proton treatment over conventional radiation, however, is that the energy distribution of protons can be directed and deposited in tissue volumes designated by the physicians in a three-dimensional pattern from each beam used. This capability provides greater control and precision and, therefore, superior management of treatment. Radiation therapy requires that conventional x-rays be delivered into the body in total doses sufficient to assure that enough ionization events occur to damage all the cancer cells. The conventional x-rays’ lack of charge and mass, however, results in most of their energy from a single conventional x-ray beam being deposited in normal tissues near the body’s surface, as well as undesirable energy deposition beyond the cancer site. This undesirable pattern of energy placement can result in unnecessary damage to healthy tissues, often preventing physicians from using sufficient radiation to control the cancer.

Protons, on the other hand, are energized to specific velocities. These energies determine how deeply in the body protons will deposit their maximum energy. As the protons move through the body, they slow down, causing increased interaction with orbiting electrons.
Figure 16.51 of the 4th edition of Intermediate Physics for Medicine and Biology shows the dose versus depth from a 150 MeV proton beam, including the all-important Bragg peak located many centimeters below the tissue surface. If you want to understand better why proton energy is deposited in the Bragg peak rather than being spread throughout the tissue, solve Problem 31 in Chapter 16.
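If you would rather see the idea than solve the problem, here is a toy Python sketch of why the dose piles up at the end of the range. It assumes the stopping power varies simply as dE/dx = k/E (the 1/v² behavior of the Bethe formula with the slowly varying logarithm ignored) and an assumed 150 mm range; it is a caricature, not the calculation asked for in Problem 31.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy model (my own simplification): take dE/dx = k/E, so the proton loses energy
# faster as it slows down. Then E(x) = E0*sqrt(1 - x/R) with range R = E0^2/(2k),
# and the dose per unit depth, -dE/dx, rises sharply near x = R: the Bragg peak.
E0 = 150.0   # MeV, initial proton energy
R = 150.0    # mm, assumed range (roughly right for 150 MeV protons in tissue)
k = E0**2 / (2 * R)

x = np.linspace(0, 0.999 * R, 500)
E = E0 * np.sqrt(1 - x / R)   # remaining energy versus depth
dose = k / E                  # -dE/dx in this toy model

plt.plot(x, dose / dose[0])
plt.xlabel("depth (mm)")
plt.ylabel("relative dose")
plt.title("Toy Bragg peak: dose rises as the proton slows")
plt.show()
```

In reality, range straggling and the logarithmic factor in the stopping power round off and broaden the peak, which is why measured Bragg curves are smooth rather than divergent.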

To learn more about the pros and cons of proton therapy, I suggest several “point/counterpoint” articles from the journal Medical Physics: “Within the Next Decade Conventional Cyclotrons for Proton Radiotherapy will Become Obsolete and Replaced by Far Less Expensive Machines using Compact Laser Systems for the Acceleration of the Protons,” by Chang-Ming Ma and Richard Maughan (Medical Physics, Volume 33, Pages 571–573, 2006); “Proton Therapy is the Best Radiation Treatment Modality for Prostate Cancer,” by Michael Moyers and Jean Pouliot (Medical Physics, Volume 34, Pages 375–378, 2007); and “Proton Therapy is Too Expensive for the Minimal Potential Improvements in Outcome Claimed,” by Robert Schulz and Alfred Smith (Medical Physics, Volume 34, Pages 1135–1138, 2007).

Friday, April 17, 2009

The Diffusion Approximation to Photon Transport

Chapter 14 in the 4th edition of Intermediate Physics for Medicine and Biology contains a section describing the diffusion approximation to photon transport.
When photons enter a substance, they may scatter many times before being absorbed or emerging from the substance. This leads to turbidity, which we see, for example, in milk or clouds. The most accurate studies of multiple scattering are done with “Monte Carlo” computer simulation, in which probabilistic calculations are used to follow a large number of photons as they repeatedly interact in the tissue being simulated. However, Monte Carlo techniques use lots of computer time. Various approximate analytical solutions also exist... One of the approximations, the diffusion approximation, is described here. It is valid when many scattering events occur for each photon absorption.
Optical Mapping of Cardiac Excitation and Arrhythmias, edited by Rosenbaum and Jalife.
Today, I would like to present a new homework problem about the diffusion approximation, based on a brief communication I published in the August 2008 issue of IEEE Transactions on Biomedical Engineering (Volume 55, Pages 2102–2104). I was interested in the problem because of its role in optical mapping of transmembrane potential in the heart, discussed briefly at the end of Sec 7.10 and reviewed exhaustively in the excellent book Optical Mapping of Cardiac Excitation and Arrhythmias, edited by David Rosenbaum and Jose Jalife. Enjoy the problem, which belongs at the bottom of the left column of page 394.
Section 14.5

Problem 16 ½ Consider light with fluence rate φ0 continuously and uniformly irradiating a half-infinite slab of tissue having an absorption coefficient μa and a reduced scattering coefficient μ's. Divide the photons into two types: the incident ballistic photons that have not yet interacted with the tissue, and the diffuse photons undergoing multiple scattering. The diffuse photon fluence rate, φ, is governed by the steady state limit of the photon diffusion equation (Eq. 14.26). The source of diffuse photons is the scattering of ballistic photons, so the source term in Eq. 14.26 is s = μ's exp(-z/λunatten), where z is the depth below the tissue surface. At the surface (z=0), the diffuse photons obey the boundary condition φ = 2 D dφ/dz.
(a) Derive an analytical expression for the diffuse photon fluence rate in the tissue, φ(z).
(b) Plot φ(z) versus z for μa=0.08 mm−1 and μ's=4 mm−1.
(c) Evaluate λunatten and λdiffuse for these parameters.
The most interesting aspect of this calculation is that the diffuse photon fluence rate is not maximum at the tissue surface, but rather it builds up to a peak below the surface, somewhat like the imparted energy from 10 MeV photons shown in Fig. 15.32. This has some interesting implications for optical mapping of the heart: subsurface tissue may contribute more to the optical signal than surface tissue.
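Here is a numerical sketch of part (b) that shows this subsurface peak. It assumes φ0 = 1, λunatten = 1/(μa + μ's), and D = 1/[3(μa + μ's)], and it solves the steady-state diffusion equation as a boundary value problem rather than deriving the analytical expression asked for in part (a).

```python
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

# Parameters from part (b) (1/mm); the incident fluence rate phi0 is set to 1.
mu_a, mu_s = 0.08, 4.0
phi0 = 1.0
lam_unatten = 1.0 / (mu_a + mu_s)      # assumed unattenuated (ballistic) length
D = 1.0 / (3.0 * (mu_a + mu_s))        # assumed photon diffusion constant
lam_diffuse = np.sqrt(D / mu_a)        # diffusion length

def rhs(z, y):
    # y[0] = phi, y[1] = dphi/dz ;  steady state: D phi'' = mu_a phi - s(z)
    s = mu_s * phi0 * np.exp(-z / lam_unatten)
    return np.vstack([y[1], (mu_a * y[0] - s) / D])

def bc(ya, yb):
    # surface: phi = 2 D dphi/dz ;  deep tissue: phi -> 0
    return np.array([ya[0] - 2.0 * D * ya[1], yb[0]])

L = 10.0  # mm, deep enough to stand in for a half-infinite slab
z = np.linspace(0.0, L, 400)
sol = solve_bvp(rhs, bc, z, np.zeros((2, z.size)))

plt.plot(sol.x, sol.y[0])
plt.xlabel("depth z (mm)")
plt.ylabel("diffuse fluence rate (arb. units)")
plt.show()
print(f"lambda_unatten = {lam_unatten:.3f} mm, lambda_diffuse = {lam_diffuse:.2f} mm")
```

The plotted fluence rate rises from its surface value to a peak a short distance below the surface before decaying over roughly the diffusion length.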

If you want the solution, send me an email (roth@oakland.edu) and I will gladly supply it.

Friday, April 10, 2009

We Should All Congratulate Professor Hobbie For This Excellent Text

Peter Kahn reviewed the third edition of Intermediate Physics for Medicine and Biology in the American Journal of Physics (Volume 67, Pages 457–458, 1999). He wrote:
As a professor of physics I am upset that our biology students have such brief and superficial exposure to physics and mathematics, and that, at the same time, our physics students go through a curriculum that ignores the important role that biology is playing in modern science. We should all congratulate Professor Hobbie for this excellent text. Now it is up to us to initiate the dialogue that builds on this solid foundation.

Friday, April 3, 2009

Div, Grad, Curl, and All That

Russ Hobbie and I assume that readers of the 4th edition of Intermediate Physics for Medicine and Biology know the basics of calculus (our preface states that “calculus is used without apology”). We even introduce some concepts from vector calculus, such as the divergence, gradient, and curl. Although these vector derivatives are crucial for understanding topics such as diffusion and electricity, many readers may be unfamiliar with them. These functions are even more complicated in curvilinear coordinate systems, and in Appendix L we summarize how to write the divergence, gradient, curl, and Laplacian in rectangular, cylindrical, and spherical coordinates.

Div, Grad, Curl, and All That, by H. M. Schey.
When I was a young physics student at the University of Kansas, Dr. Jack Culvahouse gave me a book that helped explain vector calculus: Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, by H. M. Schey. For me, this book made clear and intuitive what had been confusing and complicated. By defining the divergence and curl in terms of surface and line integrals, I suddenly could understand what these seemingly random collections of partial derivatives meant. One can hardly make sense of Maxwell’s equations of electromagnetism without vector calculus (try reading a textbook from Maxwell’s era before vector calculus was invented if you don't believe me). In fact, Schey introduces vector calculus using electromagnetism as his primary example:
In this text the subject of vector calculus is presented in the context of simple electrostatics. We follow this procedure for two reasons. First, much of vector calculus was invented for use in electromagnetic theory and is ideally suited to it. This presentation will therefore show what vector calculus is and at the same time give you an idea of what it's for. Second, we have a deep-seated conviction that mathematics—in any case some mathematics—is best discussed in a context that is not exclusively mathematical. Thus, we will soft-pedal mathematical rigor, which we think is an obstacle to learning this subject on a first exposure to it, and appeal as much as possible to physical and geometric intuition.
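To make “divergence as flux per unit volume” concrete, here is a small numerical check (my own example, not from either book): for the field F = (x, y, z) the divergence is 3 everywhere, so the outward flux through a sphere of radius R should equal 3 times the sphere’s volume.

```python
import numpy as np

# For F = (x, y, z), div F = 3, so the divergence theorem says the outward flux
# through a sphere of radius R equals 3 * (4/3) * pi * R^3.
R = 1.0
theta = np.linspace(0, np.pi, 400)       # polar angle
phi = np.linspace(0, 2 * np.pi, 800)     # azimuthal angle
T, P = np.meshgrid(theta, phi, indexing="ij")

# On the sphere F = R * r_hat, so F . n_hat = R, and dA = R^2 sin(theta) dtheta dphi
integrand = R * R**2 * np.sin(T)
flux = np.trapz(np.trapz(integrand, phi, axis=1), theta)

print("numerical flux:       ", flux)
print("div F times volume:   ", 3 * (4.0 / 3.0) * np.pi * R**3)
```

The two printed numbers agree (both are 4πR³), which is the divergence theorem doing its job for this simple field.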
For readers of Intermediate Physics for Medicine and Biology who get stuck when we delve into vector calculus, I suggest setting our book aside for a few days (but only a few!) to read Div, Grad, Curl, and All That. Not only will you be able to understand our book better, but you’ll find this background useful in many other fields of physics, math, and engineering.

Friday, March 27, 2009

Sigma Xi

Here at Oakland University, this Tuesday, March 31, is our annual Sigma Xi lecture (4 P.M. in 201 Dodge Hall of Engineering). Each year, we invite a leading scientist to OU to give a lecture for a general audience. This year Dr. Vicki Chandler, Chief Program Director of the Gordon and Betty Moore Foundation, will give a talk about “Epigenetic Silencing Across Generations.” (The term “epigenetic gene silencing” describes the switching off of a gene by a mechanism other than genetic modification. That is, a gene that would be expressed, or turned on, under normal circumstances is switched off by machinery in the cell.)

For six years, I served as the president of the Oakland University chapter of Sigma Xi, the Scientific Research Society. As readers of the 4th edition of Intermediate Physics for Medicine and Biology become biomedical researchers, they should consider joining Sigma Xi. I joined as a graduate student at Vanderbilt University.
Sigma Xi is an international, multidisciplinary research society whose programs and activities promote the health of the scientific enterprise and honor scientific achievement. There are nearly 60,000 Sigma Xi members in more than 100 countries around the world. Sigma Xi chapters, more than 500 in all, can be found at colleges and universities, industrial research centers and government laboratories. The Society endeavors to encourage support of original work across the spectrum of science and technology and to promote an appreciation within society at large for the role research has played in human progress.
The mission of Sigma Xi is “to enhance the health of the research enterprise, foster integrity in science and engineering, and promote the public's understanding of science for the purpose of improving the human condition.” As a member of Sigma Xi, you automatically receive a subscription to American Scientist, the award-winning illustrated magazine of science and technology. I particularly enjoy Henry Petroski’s monthly essay on topics in engineering, and the book reviews are outstanding. The magazine alone is worth the cost of membership. Another benefit that I look forward to each day is Science in the News, a free e-mail bulletin featuring top science and technology stories. Sigma Xi also has an annual meeting, including a student research conference. Last year, the meeting was November 20–23 in Washington, DC. The society is a strong advocate of scientific research, and is worthy of support.

Finally, you have to love the society’s motto: “Companions in Zealous Research.”

Friday, March 20, 2009

The West-Brown-Enquist Model for Allometric Scaling

Chapter 2 of the 4th edition of Intermediate Physics for Medicine and Biology ends with a section on “Food Consumption, Basal Metabolic Rate, and Scaling.” Here Russ Hobbie and I discuss the famous “3/4-power law” (also known as Kleiber’s law), which relates the metabolic rate R (in watts) to the body mass M (in kg) by the equation R = 4.1 M^0.751 (Eq. 2.32c in our book). We conclude the section by writing
A number of models have been proposed to explain a 3/4-power dependence [McMahon (1973); Peters (1983); West et al. (1999); Banavar et al. (1999)]. West et al. argue that the 3/4-power dependence is universal: they derive it from a model that supplies nutrients through a branching network that reaches all parts of the organism, minimizes the energy required for distribution, and ends in capillaries (or terminal xylem in plants) that are all the same size. Whether it is universal is still debated [Kozlowski and Konarzewski (2004)]. West and Brown (2004) review quarter-power scaling in a variety of circumstances.
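As a quick sanity check of Eq. 2.32c given above, the short Python snippet below evaluates R = 4.1 M^0.751 for a mouse, a human, and an elephant; the masses are rough, round numbers I chose for illustration.

```python
# Evaluate Kleiber's law, R = 4.1 * M^0.751 (R in watts, M in kg), for a few
# illustrative masses (rough values, chosen only for this example).
for M in [0.02, 70.0, 4000.0]:   # mouse, human, elephant
    R = 4.1 * M**0.751
    print(f"M = {M:8.2f} kg  ->  R = {R:7.1f} W")
```

The human value comes out near 100 W, the familiar rule of thumb for basal metabolic rate.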
When we wrote this paragraph, the origin of the 3/4th power law was still being hotly debated in the literature. Readers of Intermediate Physics for Medicine and Biology might like an update.

First, this work is highly cited. West, Brown, and Enquist’s first paper in Science (“A General Model for the Origin of Allometric Scaling Laws in Biology,” Volume 276, Pages 122–126, 1997; not cited in our book) now has over 1000 citations. Their second paper, which we list in the references at the end of Chapter 2, has nearly 400 citations. The paper by Banavar, Maritan and Rinaldo cited in Chapter 2 has over 200 citations. Clearly, these studies have had a major impact on the field.

Second, the work has generated quite a bit of discussion in the press. The December 2008 issue of The Scientist has an article by Bob Grant titled “The Powers That Might Be” about West and his colleagues and how they have coped with criticisms of their work. An interview with Geoffrey West can be found at physicsworld.com, and one with Brian Enquist at www.in-cities.com. In 2004, John Whitfield published a feature in the open access journal PLOS Biology reviewing the field (“open access” means that anyone can access the paper over the internet, without the need for a journal subscription).

Third, several recent papers in scientific journals have addressed this topic. Savage et al. have analyzed what they refer to as the WBE model in an article appearing in the open access journal PLOS Computational Biology (Volume 4, Article e1000171, 2008). The authors’ summary states

The rate at which an organism produces energy to live increases with body mass to the 3/4 power. Ten years ago West, Brown, and Enquist posited that this empirical relationship arises from the structure and dynamics of resource distribution networks such as the cardiovascular system. Using assumptions that capture physical and biological constraints, they defined a vascular network model that predicts a 3/4 scaling exponent. In our paper we clarify that this model generates the 3/4 exponent only in the limit of infinitely large organisms. Our calculations indicate that in the finite-size version of the model metabolic rate and body mass are not related by a pure power law, which we show is consistent with available data. We also show that this causes the model to produce scaling exponents significantly larger than the observed 3/4. We investigate how changes in certain assumptions about network structure affect the scaling exponent, leading us to identify discrepancies between available data and the predictions of the finite-size model. This suggests that the model, the data, or both, need reassessment. The challenge lies in pinpointing the physiological and evolutionary factors that constrain the shape of networks driving metabolic scaling.
In another paper, published in the December 2006 issue of Physics of Life Reviews (Volume 3, Pages 229–261), de Silva et al. write that
One of the most pervasive laws in biology is the allometric scaling, whereby a biological variable Y is related to the mass M of the organism by a power law, Y = Y0 M^b, where b is the so-called allometric exponent. The origin of these power laws is still a matter of dispute mainly because biological laws, in general, do not follow from physical ones in a simple manner. In this work, we review the interspecific allometry of metabolic rates, where recent progress in the understanding of the interplay between geometrical, physical and biological constraints has been achieved.

For many years, it was a universal belief that the basal metabolic rate (BMR) of all organisms is described by Kleiber’s law (allometric exponent b = 3/4). A few years ago, a theoretical basis for this law was proposed, based on a resource distribution network common to all organisms. Nevertheless, the 3/4-law has been questioned recently. First, there is an ongoing debate as to whether the empirical value of b is 3/4 or 2/3, or even nonuniversal. Second, some mathematical and conceptual errors were found [in] these network models, weakening the proposed theoretical arguments. Another pertinent observation is that the maximal aerobically sustained metabolic rate of endotherms scales with an exponent larger than that of BMR. Here we present a critical discussion of the theoretical models proposed to explain the scaling of metabolic rates, and compare the predicted exponents with a review of the experimental literature. Our main conclusion is that although there is not a universal exponent, it should be possible to develop a unified theory for the common origin of the allometric scaling laws of metabolism.
Now, five years after we included the topic in Intermediate Physics for Medicine and Biology, the controversy continues. It makes for a wonderful example of how ideas from fundamental physics can elucidate biological laws, and a warning about how complicated and messy biology can be, limiting the application of simple models. I can't tell you how this debate will ultimately be resolved. But it provides a fascinating case study in the interaction of physics and biology.