Friday, April 18, 2014

The Periodic Table in IPMB

The periodic table of the elements summarizes so much of science, and chemistry in particular. Of course, the periodic table is crucial in biology and medicine. How many of the over one hundred elements do Russ Hobbie and I mention in the 4th edition of Intermediate Physics for Medicine and Biology? Surveying all the elements is too big of a job for one blog entry, so let me consider just the first twenty elements: hydrogen through calcium. How many of these appear in IPMB?
1. Hydrogen. Hydrogen appears many places in IPMB, including Chapter 14 (Atoms and Light) that describes the hydrogen energy levels and emission spectrum.

2. Helium. Liquid helium is mentioned when describing SQUID magnetometers in Chapter 8 (Biomagnetism), and the alpha particle (a helium nucleus) plays a major role in Chapter 17 (Nuclear Physics and Nuclear Medicine).

3. Lithium. Chapter 7 (The Exterior Potential and the Electrocardiogram) mentions the lithium-iodide battery that powers most pacemakers, and Chapter 16 (Medical Use of X Rays) mentions lithium-drifted germanium x-ray detectors.

4. Beryllium. I can’t find beryllium anywhere in IPMB.

5. Boron. Boron neutron capture therapy is reviewed in Chapter 16 (Medical Use of X Rays).

6. Carbon. A feedback loop relating the carbon dioxide concentration in the alveoli to the breathing rate is analyzed in Chapter 10 (Feedback and Control).

7. Nitrogen. When working problems about the atmosphere, readers are instructed to consider the atmosphere to be pure nitrogen (rather than only 80% nitrogen) in Chapter 3 (Systems of Many Particles).

8. Oxygen. Oxygen is often mentioned when discussing hemoglobin, such as in Chapter 18 (Magnetic Resonance Imaging) when describing functional MRI.

9. Fluorine. The isotope Fluorine-18, a positron emitter, is used in positron emission tomography (Chapter 17, Nuclear Physics and Nuclear Medicine).

10. Neon. Not present.

11. Sodium. Sodium and sodium channels are essential for firing action potentials in nerves (Chapter 6, Impulses in Nerve and Muscle Cells).

12. Magnesium. Russ and I don’t mention magnesium by name. However, Problem 16 in Chapter 9 (Electricity and Magnetism at the Cellular Level) provides a citation for the mechanism of anomalous rectification in a potassium channel. The mechanism is block by magnesium ions.

13. Aluminum. Chapter 16 (Medical Use of X Rays) tells how sheets of aluminum are used to filter x-ray beams, removing the low-energy photons while passing the high-energy ones.

14. Silicon. Silicon x-ray detectors are considered in Chapter 16 (Medical Use of X Rays).

15. Phosphorus. The section on Distances and Sizes that starts Chapter 1 (Mechanics) considers the molecule adenosine triphosphate (ATP), which is crucial for metabolism.

16. Sulfur. The isotope technetium-99m is often combined with colloidal sulfur for use in nuclear medicine imaging (Chapter 17, Nuclear Physics and Nuclear Medicine).

17. Chlorine. Ion channels are described in Chapter 9 (Electricity and Magnetism at the Cellular Level), including chloride ion channels.

18. Argon. In Problem 32 of Chapter 16 (Medical Use of X Rays), we ask the reader to calculate the stopping power of electrons in argon.

19. Potassium. The selectivity and voltage dependence of ion channels have been studied using the Shaker potassium ion channel (Chapter 9, Electricity and Magnetism at the Cellular Level).

20. Calcium. After discussing diffusion in Chapter 4 (Transport in an Infinite Medium), in Problem 23 we ask the reader to analyze calcium diffusion when a calcium buffer is present.

Friday, April 11, 2014

Bilinear Interpolation

If you know the value of a variable at a regular array of points (x_i, y_j), you can estimate its value at intermediate positions (x, y) using an interpolation function. For bilinear interpolation, the function f(x,y) is

f(x,y) = a + b x + c y + d x y

where a, b, c, and d are constants. You can determine these constants by requiring that f(x,y) be equal to the known data at the points (x_i, y_j), (x_{i+1}, y_j), (x_i, y_{j+1}), and (x_{i+1}, y_{j+1}):

f(x_i, y_j) = a + b x_i + c y_j + d x_i y_j
f(x_{i+1}, y_j) = a + b x_{i+1} + c y_j + d x_{i+1} y_j
f(x_i, y_{j+1}) = a + b x_i + c y_{j+1} + d x_i y_{j+1}
f(x_{i+1}, y_{j+1}) = a + b x_{i+1} + c y_{j+1} + d x_{i+1} y_{j+1} .

Solving these four equations for the four unknowns a, b, c, and d, plugging those values into the equation for f(x,y), and then doing a bit of algebra gives you

f(x,y) = [ f(x_i, y_j) (x_{i+1} – x) (y_{j+1} – y) + f(x_{i+1}, y_j) (x – x_i) (y_{j+1} – y)
                            + f(x_i, y_{j+1}) (x_{i+1} – x) (y – y_j) + f(x_{i+1}, y_{j+1}) (x – x_i) (y – y_j) ] / (Δx Δy)

where x_{i+1} = x_i + Δx and y_{j+1} = y_j + Δy. To see why this makes sense, let x = x_i and y = y_j. In that case, the last three terms in this expression go to zero, and the first term reduces to f(x_i, y_j), just as you would want an interpolation function to behave. As you can check for yourself, the same is true at all four data points. If you hold y fixed, the function is a linear function of x, and if you hold x fixed, the function is a linear function of y. If you move along a slanted line, say y = e x with e a constant, the function is quadratic in x.
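
Here is a minimal Python sketch of this formula; the function name, grid spacing, and sample values are all my own choices for illustration, and nothing here comes from IPMB.

```python
# Bilinear interpolation over one grid cell, following the formula above.
def bilinear(x, y, xi, yj, dx, dy, f00, f10, f01, f11):
    """Interpolate f at (x, y) from its values at the four corners
    (xi, yj), (xi+dx, yj), (xi, yj+dy), and (xi+dx, yj+dy)."""
    return (f00 * (xi + dx - x) * (yj + dy - y)
            + f10 * (x - xi) * (yj + dy - y)
            + f01 * (xi + dx - x) * (y - yj)
            + f11 * (x - xi) * (y - yj)) / (dx * dy)

# Check the behavior on a made-up unit cell with corner values 1, 2, 3, 4:
print(bilinear(0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 2.0, 3.0, 4.0))  # 1.0, the corner value
print(bilinear(0.5, 0.5, 0.0, 0.0, 1.0, 1.0, 1.0, 2.0, 3.0, 4.0))  # 2.5, the average of the four corners
```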

If you want to try it yourself, see http://www.ajdesigner.com/phpinterpolation/bilinear_interpolation_equation.php

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce bilinear interpolation in Problem 20 of Chapter 12, in the context of computed tomography. In CT, you obtain the Fourier transform of the image at points on a polar coordinate grid (k_i, θ_j). In other words, the points lie on concentric circles in the spatial frequency plane, each of radius k_i. In order to compute a numerical two-dimensional Fourier reconstruction to recover the image, one needs the Fourier transform on a Cartesian grid (k_{x,n}, k_{y,m}). Thus, one needs to interpolate from data at (k_i, θ_j) to (k_{x,n}, k_{y,m}). In Problem 20, we suggest doing this using bilinear interpolation, and ask the reader to perform a numerical example.
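
A rough sketch of that polar-to-Cartesian step might look like the code below. Everything here is an assumption made for illustration: the Fourier samples are stored in an array F[i, j] at radii k_i = i Δk and angles θ_j = j Δθ, points beyond the largest sampled radius are simply set to zero, and none of the accuracy issues discussed next are addressed.

```python
# Bilinear interpolation from a polar grid of Fourier samples to one Cartesian point.
import numpy as np

def polar_to_cartesian(F, dk, dtheta, kx, ky):
    """Estimate the Fourier transform at (kx, ky) from polar samples F[i, j]."""
    k = np.hypot(kx, ky)
    theta = np.arctan2(ky, kx) % (2 * np.pi)
    i = int(k // dk)
    j = int(theta // dtheta)
    if i + 1 >= F.shape[0]:            # beyond the sampled disk: crude choice, return zero
        return 0.0
    j1 = (j + 1) % F.shape[1]          # the angle wraps around at 2*pi
    u = k / dk - i                     # fractional position within the cell
    v = theta / dtheta - j
    return ((1 - u) * (1 - v) * F[i, j] + u * (1 - v) * F[i + 1, j]
            + (1 - u) * v * F[i, j1] + u * v * F[i + 1, j1])

# Toy example: 64 radii and 90 angles, with every sample equal to one.
F = np.ones((64, 90))
print(polar_to_cartesian(F, dk=1.0, dtheta=2 * np.pi / 90, kx=10.0, ky=5.0))  # 1.0
```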

I like bilinear interpolation, because it is simple, intuitive, and often “good enough.” But it is not necessarily the best way to proceed. Tomographic methods arise not only in CT but also in synthetic aperture radar (SAR) (see: Munson, D. C., J. D. O’Brien, and W. K. Jenkins (1983) “A Tomographic Formulation of Spotlight-Mode Synthetic Aperture Radar,” Proceedings of the IEEE, Volume 71, Pages 917–925). In their conference proceedings paper “A Comparison of Algorithms for Polar-to-Cartesian Interpolation in Spotlight Mode SAR” (IEEE International Conference on Acoustics, Speech and Signal Processing '85, Volume 10, Pages 1364–1367, 1985), Munson et al. write
Given the polar Fourier samples, one method of image reconstruction is to interpolate these samples to a cartesian grid, apply a 2-D inverse FFT, and to then display the magnitude of the result. The polar-to-cartesian interpolation operation must be of extremely high quality to prevent aliasing . . . In an actual system implementation the interpolation operation may be much more computationally expensive than the FFT. Thus, a problem of considerable importance is the design of algorithms for polar-to-cartesian interpolation that provide a desirable quality/computational complexity tradeoff.
Along the same lines, O’Sullivan (“A Fast Sinc Function Gridding Algorithm for Fourier Inversion in Computer Tomography,” IEEE Trans. Medical Imaging, Volume 4, Pages 200–207, 1985) writes
Application of Fourier transform reconstruction methods is limited by the perceived difficulty of interpolation from the measured polar or other grid to the Cartesian grid required for efficient computation of the Fourier transform. Various interpolation schemes have been considered, such as nearest-neighbor, bilinear interpolation, and truncated sinc function FIR interpolators [3]-[5]. In all cases there is a tradeoff between the computational effort required for the interpolation and the level of artifacts in the final image produced by faulty interpolation.
There has been considerable study of this problem. For instance, see
Stark et al. (1981) “Direct Fourier reconstruction in computer tomography,” IEEE Trans. Acoustics, Speech, and Signal Processing, Volume 29, Pages 237–245.

Moraski, K. J. and D. C. Munson (1991) “Fast tomographic reconstruction using chirp-z interpolation,” 1991 Conference Record of the Twenty-Fifth Asilomar Conference on Signals, Systems and Computers, Volume 2, Pages 1052–1056.
Going into the details of this topic would take me more deeply into signal processing than I am comfortable with. Hopefully, Problem 20 in IPMB will give you a flavor for what sort of interpolation needs to be done, and the references given in this blog entry can provide an entry point to more detailed analyses.

Friday, April 4, 2014

17 Equations that Changed the World

In Pursuit of the Unknown: 17 Equations that Changed the World, by Ian Stewart.
Ian Stewart’s book In Pursuit of the Unknown: 17 Equations that Changed the World “is the story of the ascent of humanity, told through 17 equations.” Of course, my first thought was “I wonder how many of those equations are in the 4th edition of Intermediate Physics for Medicine and Biology?” Let’s see.
1. Pythagorean theorem: a² + b² = c². In Appendix B of IPMB, Russ Hobbie and I discuss vectors, and quote Pythagoras’ theorem when relating a vector’s x and y components to its magnitude.

2. Logarithms: log(xy)=log(x)+log(y). In Appendix C, we present many of the properties of logarithms, including this sum/product rule as Eq. C6. Log-log plots are discussed extensively in Chapter 2 (Exponential Growth and Decay).

3. Definition of the derivative: df/dt = lim_{h→0} [f(t+h) − f(t)]/h. We assume the reader has taken introductory calculus (the preface states “Calculus is used without apology”), so we don’t define the derivative or consider what it means to take a limit. However, in Appendix D we present the Taylor series through its first two terms, which is essentially the same equation as the definition of the derivative, just rearranged.

4. Newton’s law of gravity: F = G m₁m₂/d². Russ and I are ruthless about focusing exclusively on physics that has implications for biology and medicine. Almost all organisms live at the surface of the earth. Therefore, we discuss the acceleration of gravity, g, starting in Chapter 1 (Mechanics), but not Newton’s law of gravity.

5. The square root of minus one: i² = −1. Russ and I generally avoid complex numbers, but they are mentioned in Chapter 11 (The Method of Least Squares and Signal Analysis) as an alternative way to formulate the Fourier series. We write the equation as i = √−1, which is the same thing as i² = −1.

6. Euler’s formula for polyhedra: F − E + V = 2. We never come close to mentioning it.

7. Normal distribution: P(x) = 1/√(2πσ²) exp[−(x−μ)²/2σ²]. Appendix I is about the Gaussian (or normal) probability distribution, which is introduced in Eq. I.4.

8. Wave equation: ∂²u/∂t² = c² ∂²u/∂x². Russ and I introduce the wave equation (Eq. 13.5) in Chapter 13 (Sound and Ultrasound).

9. Fourier transform: f(k) = ∫ f(x) e^(−2πixk) dx. In Chapter 11 (The Method of Least Squares and Signal Analysis) we develop the Fourier transform in detail (Eq. 11.57), and then use it in Chapter 12 (Images) to do tomography.

10. Navier-Stokes equation: ρ (∂v/∂t + v ⋅∇ v) = -∇ p + ∇ ⋅ T + f. Russ and I analyze biological fluid mechanics in Chapter 1 (Mechanics), and write down a simplified version of the Navier-Stokes equation in Problem 28.

11. Maxwell’s equations: ∇ ⋅ E = 0, ∇ × E = −(1/c) ∂H/∂t, ∇ ⋅ H = 0, and ∇ × H = (1/c) ∂E/∂t. Chapter 6 (Impulses in Nerve and Muscle Cells), Chapter 7 (The Exterior Potential and the Electrocardiogram), and Chapter 8 (Biomagnetism) discuss each of Maxwell’s equations. In Problem 22 of Chapter 8, Russ and I ask the reader to collect all these equations together. Yes, I own a tee shirt with Maxwell’s equations on it.

12. Second law of thermodynamics: dS ≥ 0. In Chapter 3 (Systems of Many Particles), Russ and I discuss the second law of thermodynamics. We derive entropy from statistical considerations (I would have chosen S = k_B ln Ω rather than dS ≥ 0 to sum up the second law). We state in words “the total entropy remains the same or increases,” although we don’t actually write dS ≥ 0.

13. Relativity: E = mc². We don’t discuss special relativity in much detail, but we do need E = mc² occasionally, most notably when discussing pair production in Chapter 15 (Interaction of Photons and Charged Particles with Matter).

14. Schrödinger’s equation: i ħ ∂Ψ/∂t = Ĥ Ψ. Russ and I don’t write down or analyze Schrödinger’s equation, but we do mention it by name, particularly at the start of Chapter 3 (Systems of Many Particles).

15. Information theory: H = - Σ p(x) log p(x). Not mentioned whatsoever.

16. Chaos theory: x_{i+1} = k x_i (1 − x_i). Russ and I analyze chaotic behavior in Chapter 10 (Feedback and Control), including the logistic map x_{i+1} = k x_i (1 − x_i) (Eq. 10.36); a short numerical illustration appears just after this list.

17. Black-Scholes equation: ½σ²S² ∂²V/∂S² + rS ∂V/∂S + ∂V/∂t − rV = 0. Never heard of it. Something about economics and the 2008 financial crash. Nothing about it in IPMB.
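
Here is the numerical illustration of the logistic map promised in item 16. It is a minimal sketch with values I chose myself (k = 3.9 lies in the chaotic regime); it simply shows how two nearby starting points diverge, the hallmark of chaos.

```python
# Sensitivity to initial conditions in the logistic map x_{i+1} = k x_i (1 - x_i).
x, y = 0.500, 0.501          # two nearby initial conditions
k = 3.9                      # a value of k in the chaotic regime
biggest = 0.0
for i in range(30):
    x = k * x * (1 - x)
    y = k * y * (1 - y)
    biggest = max(biggest, abs(x - y))
print(biggest)               # the 0.001 initial difference has grown by orders of magnitude
```
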
Seventeen is a strange number of equations to select (a medium-sized prime number). If I were to round it out to twenty, then I would have three to select on my own. My first thought is Newton’s second law, F = ma, but Stewart mentions that this relationship underlies both the Navier-Stokes equation and the wave equation, so I guess it is already present implicitly. Here are my three:
18. Exponential equation with constant input: dy/dt = a – by. Chapter 2 of IPMB (Exponential Growth and Decay) is dedicated to the exponential function. This equation appears over and over throughout the book. Stewart discusses the exponential function briefly in his chapter on logarithms, but I am inclined to add the differential equation leading to the exponential function to the list. Among its many uses, this function is crucial for understanding the decay of radioactive isotopes in Chapter 17 (Nuclear Physics and Nuclear Medicine). (The solution of this equation is written out just after this list.)

19. Diffusion equation: ∂C/∂t = D ∂²C/∂x². To his credit, Stewart introduces the diffusion equation in his chapter on the Fourier transform, and indeed it was Fourier’s study of the heat equation (the same as the diffusion equation, with T for temperature replacing C for concentration) that motivated the development of the Fourier series. Nevertheless, the diffusion equation is so central to biology, and discussed in such detail in Chapter 4 (Transport in an Infinite Medium) of IPMB, that I had to include it. Some may argue that if we include both the wave equation and the diffusion equation, we also should add Laplace’s equation, but I consider that a special case of Maxwell’s equations, so it is already in the list.

20. Light quanta: E = hν. Although Stewart included Schrödinger’s equation of quantum mechanics, I would include this second equation containing Planck’s constant h. It summarizes the wave-particle duality of light, and is crucially important in Chapters 14 (Atoms and Light), 15 (Interaction of Photons and Charged Particles with Matter), and 16 (Medical Use of X Rays).
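
The solution of the equation in item 18, written in my own notation, is worth recording: for constant a and b,

y(t) = a/b + [y(0) − a/b] e^(−bt) ,

so y relaxes exponentially toward the steady-state value a/b with time constant 1/b.
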
Runners up include the Bloch equations since I need something from Chapter 18 (Magnetic Resonance Imaging), the Boltzmann factor (except that it is a factor, not an equation), Stokes’ law, the ideal gas law and its analog, van’t Hoff’s law, from Chapter 5 (Transport through Neutral Membranes), the Hodgkin and Huxley equations, the Poisson-Boltzmann equation in Chapter 9 (Electricity and Magnetism at the Cellular Level), the Poisson probability distribution, and Planck’s blackbody radiation law (perhaps in place of E = hν).

Overall, I think studying the 4th edition of Intermediate Physics for Medicine and Biology introduces the reader to most of the critical equations that have indeed changed the world.

Friday, March 28, 2014

The Correspondence Between Sir George Gabriel Stokes and Sir William Thomson, Baron Kelvin of Largs

I recently obtained through interlibrary loan a copy of The Correspondence Between Sir George Gabriel Stokes and Sir William Thomson, Baron Kelvin of Largs, edited by David Wilson. This wonderful collection of over 650 letters spans the years 1846 to 1901. Robert Purrington, in his book Physics in the Nineteenth Century, claims that “the Thomson-Stokes correspondence is one of the treasures of nineteenth-century scientific communication.” I thought I would share a few excerpts from these letters with the readers of this blog.

Stokes (1819-1903) was the older of the two men. For years he served as the Lucasian Professor of Mathematics at the University of Cambridge. I have discussed him in this blog before. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I mention Stokes in connection to Stokes’ law for the drag force of a sphere moving in a viscous fluid, and the Navier-Stokes equations of fluid dynamics.

Kelvin (1824-1907) was five years younger than Stokes. He was born with the name William Thomson, but was made a Lord in 1892 and was thereafter referred to as Lord Kelvin. Russ and I don’t mention Kelvin in IPMB, but we do mention the unit of absolute temperature named after him. He spent his career at the University of Glasgow in Scotland, and is remembered for many accomplishments, but primarily for his contributions to thermodynamics.

Stokes’ and Kelvin’s letters were full of mathematics (Stokes was primarily a mathematical physicist) and critiques of the many famous physicists of their era. They spent a lot of time trying to obtain copies of papers. In the days before the internet, or even the Xerox machine, making a copy of a scientific paper was not easy, and they were constantly lending out the few copies they possessed. In some years, their letters were primarily about reviewing manuscripts, as both men served as editors of journals at one time and as reviewers for journals at another. Most commonly Stokes, as editor, was trying to coax Kelvin to complete his reviews on time.

A letter of April 7, 1847—between two relatively young and up-and-coming physicists—highlights the different areas of interest of the two men.
My Dear Stokes,
Many thanks for your letters…..I have been for a long time thinking on subjects such as those you write about, and helping myself to understand them by illustrations from the theories of heat, electricity, magnetism, and especially galvanism; sometimes also water. I can strongly recommend heat for clearing the head on all such considerations, but I suppose you prefer cold water….
Yours very truly, William Thomson
In this letter from Oct 25, 1849, Kelvin congratulates Stokes on becoming the Lucasian Professor.
My Dear Professor
I have been daily expecting to hear of the election of a Lucasian Professor and whenever the Times has been in my hands I have looked for such a proceeding in the University Intelligence, and now I am glad to be able to congratulate you on the result…
Yours sincerely, William Thomson
Sometimes the two engaged in a bit of trash-talking about other scientists. In a Jan 6, 1851 letter, Stokes adds a pugnacious postscript.
My Dear Thomson,
…..
Yours very truly, G. G. Stokes.
P.S. Have you seen Prof Challis’s awful heterodoxy in the present no. of the Phil. Mag. I am half inclined to take up arms, but I fear the controversy would be endless.
Readers of IPMB will recall the cable equation in Chapter 6 that describes the electrical properties of a nerve axon. This equation was not originally derived to model an axon, but instead was proposed by Kelvin to describe a submarine telegraph cable. Kelvin was deeply involved with the trans-Atlantic cable, and in this Oct 30, 1854 letter he described this work to Stokes.
My Dear Stokes,
An application of the theory of the transmission of electricity along a submarine telegraph wire, which I omitted to mention in the haste of finishing my letter on Saturday, shows how the question raised by Faraday as to the practicability of sending distinct signals along such a length as the 2000 or 3000 miles of wire that would be required for America, may be answered. The general investigations will show exactly how much the sharpness of the signals will be worn down, and will show what the maximum strength of current through the apparatus in America, would be produced by a specified battery action on the end in England, with wire of given dimensions etc.
The following form of solution of the general equation
σ²kc dv/dt = d²v/dx² − hv
which is the first given by Fourier, enables us to compare the times until a given strength of current shall be obtained, with different dimensions etc of wire….
Yours always truly, William Thomson
Neither scientist could properly be called a biological physicist. Medicine and biology almost never appear in their letters (unless one of them is sick). Kelvin once brought up a biological topic in his Jan 28, 1856 letter.
My Dear Stokes,
…Have you seen Clerk Maxwell’s paper in the Trans R S E [Transactions of the Royal Society of Edinburgh] on colour as seen by the eye?...do you believe that the whites produced by various combinations, such as two homogenous colours, three homogeneous colours, etc. are absolutely indistinguishable from one another and from solar white by the best eye?...Are you at all satisfied with Young’s idea of triplicity in the perceptive organ?
Yours very truly, William Thomson
Stokes’ curt reply on Feb 4 indicated he could not have been less interested in the topic.
My Dear Thomson,
….I have not made any experiments on the mixture of colours, nor attended particularly to the subject….
Yours very truly, G. G. Stokes
Once Kelvin was late getting some page proofs sent to Stokes, and in a Jan 20, 1857 letter he received a stern tongue lashing (one suspects, tongue-in-cheek).
My Dear Thomson,
You are a terrible fellow and I must write you a scolding…Hoping you will be more punctual for the future I remain
Yours most sincerely, G. G. Stokes
This was not the last time Kelvin was slow in responding to Stokes, and he often apologized for being late. He was tardy once when reviewing one of Maxwell’s papers for a journal. To me, Maxwell is a giant of physics, to be spoken of in the same category as Newton and Einstein. But for Kelvin, reviewing one of Maxwell’s papers was just another chore he needed to find time to do.

Although most of the letters were about science, personal matters were sometimes mentioned. Kelvin wrote a Dec 27, 1863 letter of consolation after scarlet fever took the life of Stokes’ infant child. Stokes himself was also ill with the disease.
My Dear Stokes,
I am very sorry to hear of the loss you have had and I feel much concerned about the danger you have yourself suffered. I hope you are still improving steadily, and that you will soon be quite strong again….
Yours always truly, W. Thomson
I hope the others of your family have perfectly recovered, if not escaped the scarlet fever.
When Kelvin became a Lord, Stokes’ Jan 2, 1892 letter had a bit of fun with the event. But afterwards, his letters were always addressed to Kelvin rather than Thomson.
My Dear Lord (What?)
I write to congratulate you on the great honour Her Majesty has bestowed on you, and through you on science, by creating you a Peer. At the same time I may add my congratulations to those of my wife to The Lady Thomson, or whatever she is to be called. I was speculating whether you would be Lord Thomson, or Lord Netherhall, or Lord Largs, or what. Time will tell….
Yours sincerely, G. G. Stokes
They didn’t always agree on scientific issues. In an Oct 27, 1894 letter, the 75-year-old Stokes humorously addressed a disagreement about the behavior of a fluid in some container.
My Dear Lord Kelvin,
…..perhaps you think to demolish me by saying, Let the vessel be rigid but massless. Well. There is life in the old dog yet….
Yours sincerely, G. G. Stokes
I found it fascinating to listen as they corresponded about the important physics of their era. For instance, they discussed Rontgen’s discovery of X-rays extensively in 1896, and considered writing a joint note about their electromagnetic nature. They debated Becquerel’s discovery of radioactivity from uranium in 1897. Their last letter, in 1901, analyzed a problem from fluid dynamics and contained mathematical equations.

I didn’t have time to read all the letters, but I did spend most of a Saturday sampling many of them. The letters provide a valuable glimpse into the relationship between two intelligent yet human scientists. Wilson lists a few quotes at the start of his book, with the last one by Arthur Schuster (1932) stating
I shall always remember Lord Kelvin, as he stood at the open grave, almost overcome by his emotion, saying in a low voice: “Stokes is gone and I shall never return to Cambridge again.”

Friday, March 21, 2014

A Dozen New Homework Problems

Russ Hobbie and I are hard at work on the 5th edition of Intermediate Physics for Medicine and Biology. Sometimes we consider adding new material, trying things out and debating its merits, but in the end it doesn’t make the cut. For instance, we thought about adding a section on elasticity theory to Chapter 1, but that was going to be too long (we are constantly battling between adding important topics and keeping the book from getting too fat), so we tried writing some new homework problems to teach the material that way. But it was also too much, and eventually we gave up on the idea. For those wanting to learn more about the biological applications of elasticity theory, I recommend Y. C. Fung’s book Biomechanics: Mechanical Properties of Living Tissues (cited in IPMB), or his more general textbook A First Course in Continuum Mechanics.

I don't want to see good homework problems go to waste, so I offer them here in this blog. Twelve new homework problems. Free. They offer a way to learn a bit of elasticity theory. Enjoy. (For Problems 5–7, a short plotting sketch follows the problem set below.)
Problem 1 Consider the rod in Fig. 1.20; the x-axis is along the rod’s length and x=0 is where the rod meets the wall. Let the displacement u(x,y) be the change in position of each point in the material in response to the force F. Express the displacement as ux=Ax, uy=0, and uz=0, where A is a constant.
(a) Calculate the normal strain εn using Eq. 1.24 and Fig. 1.20.
(b) Calculate εn using the definition εn=∂ux/∂x . Is it the same as in (a)?

Problem 2 Consider the rod in Fig. 1.23; x is horizontal, y is vertical, and y=0 is where the rod meets the floor. Express the displacement as ux=By, uy=0, and uz=0, where B is a constant.
(a) Calculate the shear strain εs using Eq. 1.27 and Fig. 1.23 (assume B ≪ 1).
(b) Calculate εs using the definition εs=∂ux/∂y+∂uy/∂x. Is it the same as in (a)?
Problem 3 The normal and shear strains can be combined into a strain tensor, which is a 3 × 3 matrix. Define the diagonal components of this tensor (εxx, εyy, εzz) as the normal strain in each direction, and the off-diagonal components (εxy, εyz, εzx) as one half of the shear strain in each direction. For example, εxy=(∂ux/∂y+∂uy/∂x)/2.
(a) Derive expressions relating each component of the strain tensor to the displacement.
(b) Show that the strain tensor is symmetric (e.g., εxy=εyx).
Note: This expression for the strain tensor is correct for small strains. For large strains it is a more complicated nonlinear function of the displacement (Fung, 1993).

Problem 4 The dilatation is defined as the change in volume over the original volume, ΔV/V. For small strains, the dilatation is εxx + εyy + εzz.
(a) Calculate the dilatation for the displacement in Problem 1. Does the volume change?
(b) Calculate the dilatation for the displacement in Problem 2. Does the volume change?
(c) Calculate the dilatation for the displacement ux=Cx, uy=Cy, uz=Cz, where C is a constant. Does the volume change?
(d) Show that the dilatation is equal to the trace of the strain tensor (the trace of a matrix is the sum of the diagonal components) and also is equal to the divergence of the displacement (the divergence is defined in Chapter 4).

Problem 5 Consider the displacement ux=Dx, uy=-Dy, uz=0, where D is a constant.
(a) Sketch a plot of the displacement distribution by drawing the displacement vectors over a 5 x 5 grid centered at the origin.
(b) Sketch how a small square in the x-y plane centered at the origin is deformed.
(c) Calculate the strain tensor (defined in Problem 3) for this displacement.
(d) Calculate the dilatation (defined in Problem 4) for this displacement.

Problem 6 Repeat the analysis of Problem 5 for the displacement ux=Fy, uy=Fx, uz=0, where F is a constant.

Problem 7 Repeat the analysis of Problem 5 for the displacement ux=Hy, uy=-Hx, uz=0, where H is a constant. This is a special case of rigid body motion. Interpret this displacement physically.

Problem 8 Like the strain, the stress can be written as a 3 × 3 symmetric tensor. For an isotropic material, the relationship between the components of the stress tensor, sij, and the strain tensor, εij, is sij = λ δij (εxx + εyy + εzz) + 2μ εij, where λ and μ are the Lamé parameters, and δij is the Kronecker delta (1 if i=j, and 0 otherwise).
(a) Show that for the case in Problem 1, this relationship reduces to Eq. 1.25 where sn = sxx and εn = εxx. Express the Young’s modulus E in terms of the Lamé parameters.
(b) Show that for the case in Problem 2, this relationship reduces to Eq. 1.28 where ss = sxy and εs = 2εxy. Express the shear modulus G in terms of the Lamé parameters.
(c) Show that for the case in Problem 4c, this relationship reduces to Eq. 1.32 where the diagonal components of the stress tensor are given in terms of the pressure p as sxx = syy = szz = −p. Express the compressibility κ in terms of the Lamé parameters.

Problem 9 Figure 1.25 shows that pressure p will exert a net force on an element of fluid only if p is not uniform. Similarly, a stress will exert a net force on an element of tissue only if the stress is not uniform. The equations of mechanical equilibrium (zero net force) are ∂six/∂x+∂siy/∂y+∂siz/∂z=0, where i is either x, y, or z.
(a) Substitute the relationship from Problem 8 into these equations, and derive three equations of mechanical equilibrium written in terms of the strain tensor.
(b) Substitute the relationships between the components of the strain tensor and the displacement found in Problem 3 and derive the equations of mechanical equilibrium in terms of the displacement.

Problem 10 Figure 1.20, showing a rod subject to a force along its length, is a simplification. Actually, the cross-sectional area of the rod shrinks as the rod lengthens. A better representation of the displacement than that given in Problem 1 would be ux=Ax, uy=-Aνy, and uz=-Aνz, where A is a constant and ν is Poisson’s ratio.
(a) Use the results of Problem 4 to calculate the dilatation.
(b) What value of Poisson’s ratio corresponds to an incompressible material (zero dilatation)?
(c) For an isotropic material, −1 < ν < 0.5. How would a material with negative ν behave?
Elliott et al. (2002) measured Poisson’s ratio for articular (joint) cartilage under tension and found 1 < ν < 2. This large value is possible because cartilage is anisotropic: its properties depend on direction.

Problem 11 Many biological tissues are composed mainly of water and are therefore nearly incompressible. To analyze such a tissue, start with the stress-strain relationship in Problem 8. (Assume uz=0 and all derivatives in the z direction are zero; the case of “plane strain”.)
(a) For an incompressible tissue, εxx + εyy goes to zero and λ goes to infinity such that λ(εxx + εyy) is finite. Set it equal to –p, where p is the pressure.
(b) The displacement can be found from a stream function φ(x,y), where ux=∂φ/∂y and uy=-∂φ/∂x. Show that these definitions ensure that the dilatation is zero. Express the strain tensor in terms of φ.
(c) Write the components of the stress tensor (sxx, syy, sxy) in terms of p and φ.
(d) Use the analysis of Problem 9 to derive the equations of mechanical equilibrium in terms of p and φ.
(e) Manipulate these equations to find two new equations, one for p only and one for φ only. (Hint: try taking derivatives of the equations).

Problem 12 Start with the stress-strain relationship in Problem 8 and modify it to describe a two-dimensional sheet of cardiac muscle (Ohayon and Chadwick 1988). (Assume uz=0 and all derivatives in the z direction are zero; the case of “plane strain”.)
(a) Cardiac tissue is nearly incompressible. For an incompressible tissue, εxx + εyy goes to zero and λ goes to infinity such that λ(εxx + εyy) is finite. Set it equal to –p, where p is the pressure.
(b) Cardiac muscle can develop an active tension T along the myocardial fibers caused by the interaction of actin and myosin molecules. Assume the fibers lie along the x direction, and add the term T to the expression for sxx.
(c) The extracellular space consists of collagen fibers that can exert a shear force. Assume the collagen is isotropic, and interpret μ in Problem 8 as the collagen’s shear modulus.
(d) Derive expressions for sxx, syy, and sxy in terms of p, μ, T, and the strain tensor.
(e) Assume a solution ux=-Ax, uy=Ay, and p=P, where A and P are constants. If the tissue is free at its edges, then it must have zero stress throughout. Use this condition to derive expressions for A and P in terms of T and μ.
(f) Let T = 3 × 10⁴ Pa and μ = 10⁴ Pa (typical for cardiac tissue). Calculate values for A and P. Are the strains small? Sketch qualitatively the displacement distribution.

Ohayon, J. and R. S. Chadwick (1988) “Effects of Collagen Microstructure on the Mechanics of the Left Ventricle,” Biophys. J., Volume 54, Pages 1077–1088.
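
Here is the plotting sketch promised above for Problems 5–7. It is only an illustration: the grid size and the numerical values chosen for the constants D, F, and H are arbitrary, and it assumes numpy and matplotlib are available.

```python
# Quiver plots of the displacement fields in Problems 5-7, on a 5 x 5 grid.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
fields = {
    "Problem 5: ux = Dx, uy = -Dy": (0.3 * x, -0.3 * y),
    "Problem 6: ux = Fy, uy = Fx": (0.3 * y, 0.3 * x),
    "Problem 7: ux = Hy, uy = -Hx": (0.3 * y, -0.3 * x),
}
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (title, (ux, uy)) in zip(axes, fields.items()):
    ax.quiver(x, y, ux, uy)       # arrows show the displacement at each grid point
    ax.set_title(title)
    ax.set_aspect("equal")
plt.show()
```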

Friday, March 14, 2014

Light Scattering

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I often discuss the scattering of light. We mention four types of scattering, each differentiated by the name of the brilliant scientist who first studied it: Compton scattering, Thomson scattering, Rayleigh scattering, and Raman scattering. Let’s see if we can get these all straight.

Compton Scattering

In Chapter 15 (Interaction of Photons and Charged Particles with Matter) of IPMB, Russ and I analyze Compton scattering. This is a particularly simple case: a photon interacts with a free electron, resulting in a scattered photon of lower energy and a recoiling electron. This type of scattering is particularly important for x-rays. You might be wondering: how often do we encounter a free electron? Aren’t most electrons bound to atoms? If the incident photon has an energy much greater than the binding energy, then the electron is, to a first approximation, free and Compton scattering occurs. In the interaction of x-rays with biological tissue, Compton scattering is the dominant mechanism contributing to the interaction cross-section at intermediate energies, say one tenth to a few MeV. Since the electrons act almost as if they were free, the atomic number of the target atom is unimportant and scattering depends only on how many electrons are present (meaning the mass attenuation coefficient is nearly independent of atomic number). You don’t really want to do imaging of tissue when Compton scattering is the dominant interaction, because you don’t get much discrimination between different tissues (the weak dependence on atomic number) and, well, you get a lot of scattering that blurs the image.
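
How much energy does the scattered photon lose? The standard Compton formula (which follows from conservation of energy and momentum) gives the scattered photon energy in terms of the scattering angle; here is a quick numerical check, using a 1 MeV incident photon as an example of my own choosing.

```python
# Compton-scattered photon energy, E' = E / (1 + (E / m c^2)(1 - cos(theta))).
from math import cos, pi

mc2 = 0.511      # electron rest energy, in MeV
E = 1.0          # incident photon energy, in MeV
for theta_deg in (0, 45, 90, 180):
    theta = theta_deg * pi / 180
    Eprime = E / (1 + (E / mc2) * (1 - cos(theta)))
    print(theta_deg, round(Eprime, 3))   # at 180 degrees the photon keeps only about 0.2 MeV
```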

Compton scattering is named after Arthur Holly Compton (1892–1962), an American physicist who played a key role in the Manhattan Project. Compton scattering was important in the development of quantum mechanics. The light quanta hypothesis had been developed by Planck and Einstein, but was not widely embraced until 1923, when Compton analyzed his x-ray scattering data by treating the x-ray photon as a particle with energy and momentum, interacting with another particle, the electron. Compton won the 1927 Nobel Prize in Physics for his discovery.

Thomson Scattering

When Compton scattering occurs at such a low energy that we can ignore the difference in energy between the incident and scattered photons, the process is called Thomson scattering. We can analyze Thomson scattering by treating the incident light as an electromagnetic wave rather than a photon. The electric field accelerates the electron, causing it to radiate an electromagnetic wave at the same frequency. The direction of the electric field is important for determining the distribution of the outgoing dipole radiation, so Thomson scattering depends on the polarization of the incident light. This type of scattering is particularly important in plasma physics, where many free charged particles are present. It is not too important in biology and medicine, because usually either the photon energy is so high that Compton scattering occurs, or else the photon energy is so low that one cannot treat the electron as being free. Because the frequency of the light (and therefore the energy of the photons) does not change, Thomson scattering is a type of elastic scattering.

Thomson scattering was first analyzed by, and was named after, J. J. Thomson (1856–1940), the British physicist who discovered the electron, for which he received the Nobel Prize in Physics in 1906. I have my own connection to Thomson: academically speaking, he is my great-great-great-great-great-grandfather.

Rayleigh Scattering

Rather than scattering from a single electron, light can also scatter from an entire atom or molecule, or even from larger particles. When the wavelength of the light is much larger than the size of the particle, we get Rayleigh scattering. As with Thomson scattering, in Rayleigh scattering the light is treated as an electromagnetic wave. However, unlike Thomson scattering, in Rayleigh scattering the scatterer is not a single particle, but instead can be represented by a continuous, polarizable medium. The electric field of the light causes the induced charge distribution to oscillate at the same frequency as the incident light, resulting in the scattered light having the same frequency as the incident light. In IPMB, Russ and I refer to Rayleigh scattering as coherent scattering, because the atom responds coherently as a whole, rather than as individual charged particles. In tissue, coherent scattering dominates Compton scattering at low energies (say, below 1 keV), but such low energy photons also interact by the more important photoelectric effect, so Rayleigh scattering is often not very important. It is crucial for understanding how sunlight scatters off the molecules of the air, causing the blue color of the sky.

When I was an undergraduate at the University of Kansas, I had my first research experience in Professor Wes Unruh’s laboratory studying light scattering off of colloidal impurities in crystals. We were able to determine the size of the impurities by measuring the scattered light as a function of angle. However, these colloids tended to be large, so that you could not ignore interference between light scattered from different parts of the particle. In that case, you must use a more advanced theory, called Mie theory, to calculate the distribution of scattered light. I recall struggling to learn Mie theory from Milton Kerker’s book The Scattering of Light and Other Electromagnetic Radiation. I didn’t work much with Unruh himself, but rather was mentored by then-graduate student Robert Bunch. The first item in my CV is an abstract resulting from that research (Bunch, Roth, and Unruh, 1983, “Size Distributions of Ni and Co Colloids Within MgO,” March Meeting of the American Physical Society).

Rayleigh scattering is named after English physicist John William Strutt (1842-1919), also known as Lord Rayleigh. He was awarded the Nobel Prize for Physics in 1904 for the discovery of argon. Because one of Rayleigh’s students was J. J. Thomson, Rayleigh is my academic great-great-great-great-great-great-grandfather. Rayleigh was the second Cavendish Professor of Physics at the University of Cambridge, following Maxwell and succeeded by J. J. Thomson, Ernest Rutherford, and William Bragg; quite an impressive bunch.

Raman Scattering

In IPMB, Russ and I discuss Raman scattering in Chapter 14 (Atoms and Light). The mechanism of Raman scattering is similar to Rayleigh scattering, in that the scattering occurs off an entire molecule. However, it is unlike Rayleigh scattering in that the scattered light does not have the same frequency as the incident light (inelastic scattering). Instead, some of the energy induces transitions between different vibrational energy levels. These transitions result in the scattered light having a lower energy (Stokes) or a higher energy (Anti-Stokes). Also, because the vibrational energy levels are quantized, the spectrum of Raman scattered light consists of a series of discrete lines. This spectrum contains information about the vibrations within the molecule, and therefore about the chemical bonds.

The description of Raman scattering given above (and in IPMB) is a quantum view that depends on the presence of discrete energy levels. However, one can also develop a classical model of Raman scattering. For instance, treat a simple diatomic molecule as two atoms attached by a spring, so that the molecule has its own natural frequency of oscillation, f0. If an electric field of frequency f is incident on the molecule, it will respond by oscillating not only at frequency f (Rayleigh scattering) but also at frequencies f+f0 and f−f0 (Raman scattering). The frequency difference between adjacent lines is f0, which is the same frequency as one would expect in the infrared absorption spectrum. (For those who have read Appendix F of IPMB and are wondering why the scattered light oscillates with a component at the natural frequency, realize that the charge induced by polarization depends on the electric field, so the force on the charge--charge times electric field--depends on the square of the electric field and the problem is nonlinear.)
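
In symbols (my own notation, not taken from IPMB), the classical picture amounts to a polarizability that is modulated at the vibrational frequency. Writing the induced dipole moment as the polarizability times the electric field,

p(t) = [α0 + α1 cos(2πf0t)] E0 cos(2πft)
     = α0E0 cos(2πft) + ½α1E0 [cos(2π(f+f0)t) + cos(2π(f−f0)t)] ,

the first term is the Rayleigh line at f and the second gives the Raman sidebands at f+f0 and f−f0.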

Raman scattering was named after Indian physicist C. V. Raman (1888–1970), whose discovery led to the 1930 Nobel Prize for Physics.


Four types of scattering, named after four Nobel Prize winners. Here are some ways to keep them straight: Compton and Thomson scattering occur off a single charged particle (usually an electron), whereas Rayleigh and Raman scattering occur off an entire atom, molecule, or larger particle. Thomson and Rayleigh scattering are elastic, whereas Compton and Raman scattering are inelastic. Thomson and Rayleigh scattering are most commonly described using the classical wave theory of light, whereas Compton and Raman scattering are typically analyzed using quantum mechanics (although Raman scattering is sometimes analyzed with classical theory).

I admire all four scientists: Compton, Thomson, Rayleigh, and Raman. Who is my favorite? I like Rayleigh best. Love those Victorians.

Friday, March 7, 2014

Letters to a Young Scientist

Letters to a Young Scientist, by Edward Wilson.
I just finished reading Edward Wilson’s book Letters to a Young Scientist. (I know, I know….I don’t qualify as a young scientist anymore, but I can still enjoy the book.) Wilson is a leading biologist who established two fields of study: island biogeography and sociobiology. He is one of the world’s experts on the taxonomy of ants. Last week’s blog post about the binomial nomenclature for naming animal species was motivated in part from reading this book. You can hardly get further from physics than the taxonomy of ants, so this may seem like an odd topic to discuss in a blog about physics applied to medicine and biology. But the book considers universal themes common to all scientists.

What is Wilson’s main message for young scientists? He writes
First and foremost, I urge you to stay on the path you’ve chosen, and to travel on it as far as you can. The world needs you—badly.
How true. My favorite of Wilson’s letters was number seven, “Most Likely to Succeed.”
Conventional wisdom holds that science of the future will be more and more the product of “teamthink,” multiple minds put in close contact…But is groupthink the best way to create really new science? Risking heresy, I hereby dissent. I believe the creative process usually unfolds in a very different way. It arises and for a while germinates in a solitary brain. It commences as an idea and, equally important, the ambition of a single person who is prepared and strongly motivated to make discoveries in one domain of science or another. The successful innovator is favored by a fortunate combination of talent and circumstance… When prepared by education to conduct research, the most innovative scientists of my experience do so eagerly and with no prompting. They prefer to take first steps alone. They seek a problem to be solved, an important phenomenon previously overlooked, a cause-and-effect connection never imagined. An opportunity to be the first is their smell of blood.
I also liked the point Wilson made in letter three, “The Path to Follow.”
If a subject is already receiving a great deal of attention, if it has a glamorous aura, if its practitioners are prizewinners who receive large grants, stay away from that subject. Listen to the news coming from the current hubbub, learn how and why the subject became prominent, but in making your own long-term plans be aware it is already crowded with talented people… Take a subject instead that interests you and looks promising, and where established experts are not yet conspicuously competing with one another…You may feel lonely and insecure in your first endeavors, but all other things being equal, your best chance to make your mark and to experience the thrill of discovery will be there.
He then states a general principle using a military metaphor.
March away from the sound of the guns. Observe the fray from a distance, and while you are at it, consider making your own fray.
He continues with an observation about big science.
The sequencing of the human genome, the search for life on Mars, and the finding of the Higgs boson were each of profound importance for medicine, biology, and physics, respectively. Each required the work of thousands and cost billions. Each was worth all the trouble and expense. But on a far smaller scale, in fields and subjects less advanced, a small squad of researchers, even a single individual, can with effort devise an important experiment at relatively low cost.
I agree with Wilson on all these points. I think there is a lot to be said for small groups. And I think that too often researchers chase the latest fad. I second Wilson’s advice to march away from the sound of the guns, and to make your own fray instead.

Often those applying physics to biology and medicine are skirmishers whose goal is to probe the unknown searching for vulnerabilities, rather than to join the mass attack. My suggestion is to first get a broad education in both physics and biology, perhaps using a book like the 4th edition of Intermediate Physics for Medicine and Biology (you knew I would get the plug in somewhere), and then find some interesting but little-studied topic, and see where it leads you. And above all, have fun while you are doing it.

But don’t take my word for it. Read the book, or listen to Wilson give his advice to young scientists in his TED talk.

Edward Wilson giving a TED talk about Advice to Young Scientists.

Friday, February 28, 2014

The Encyclopedia of Life

Although I am a champion of applying physics to biomedicine, physics has little impact on some parts of biology. For instance, much of zoology and botany consist of the identification and naming of different species: taxonomy. Not too much physics there.

A giant in the field of taxonomy is the Swedish scientist Carl Linnaeus (1707-1778). Linnaeus developed the modern binomial nomenclature to name organisms. Two names are given (often in Latin), genus then species, both italicized with the genus capitalized and the species not. For example, the readers of this blog are Homo sapiens: genus = Homo and species = sapiens. My dog Suki is a member of Canis lupus. Her case is complicated, since the domestic dog is a subspecies of the wolf, Canis lupus familiaris, but because dogs and wolves can interbreed they are considered the same species and to keep things simple (a physicist’s goal, if not a biologist’s) I will just use Canis lupus. Hodgkin and Huxley performed their experiments on the giant axon from the squid, whose binomial name is Loligo forbesi (as reported in Hodgkin and Huxley, J. Physiol., Volume 104, Pages 176–195, 1945; in their later papers they just mention the genus Loligo, and I am not sure what species they used--they might have used several). My daughter Katherine studied yeast when an undergraduate biology major at Vanderbilt University, and the most common yeast species used by biologists is Saccharomyces cerevisiae. The nematode Caenorhabditis elegans is widely used as a model organism when studying the nervous system. You will often see its name shortened to C. elegans (such abbreviations are common in the Linnaean system). Another popular model system is the egg of the frog species Xenopus laevis. The mouse, Mus musculus, is the most common mammal used in biomedical research. I’m not enough of a biologist to know how viruses, such as the tobacco mosaic virus, fit into the binomial nomenclature.

Out of curiosity, I wondered what binomial names Russ Hobbie and I mentioned in the 4th edition of Intermediate Physics for Medicine and Biology. It is surprisingly difficult to say. I can’t just search my electronic version of the book, because what keyword would I search for? I skimmed through the text and found these four; there may be others. (Brownie points to any reader who can find one I missed and report it in the comments section of this blog.)
If you want to learn more about any of these species, I suggest going to the fabulous website EOL.org. The site states
The Encyclopedia of Life (EOL) began in 2007 with the bold idea to provide “a webpage for every species.” EOL brings together trusted information from resources across the world such as museums, learned societies, expert scientists, and others into one massive database and a single, easy-to-use online portal at EOL.org.

While the idea to create an online species database had existed prior to 2007, Dr. Edward O. Wilson's 2007 TED Prize speech was the catalyst for the EOL you see today. The site went live in February 2008 to international media attention. …

Today, the Encyclopedia of Life is expanding to become a global community of collaborators and contributors serving the general public, enthusiastic amateurs, educators, students and professional scientists from around the world.

Friday, February 21, 2014

Principles of Musical Acoustics

Principles of Musical Acoustics, by William Hartmann.
In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I added a new chapter (Chapter 13) about Sound and Ultrasound. This allows us to discuss acoustics and hearing, an interesting mix of physics and physiology. But one aspect of sound we don’t analyze is music. Yet there is much physics in music. In a previous blog post, I talked about Oliver Sacks’ book Musicophilia, a fascinating story about the neurophysiology of music. Unfortunately, there wasn’t a lot of physics in that work.

Last year, William Hartmann of Michigan State University (where my daughter Kathy is now a graduate student) published a book that provides the missing physics: Principles of Musical Acoustics. The Preface begins
Musical acoustics is a scientific discipline that attempts to put the entire range of human musical activity under the microscope of science. Because science seeks understanding, the goal of musical acoustics is nothing less than to understand how music “works,” physically and psychologically. Accordingly, musical acoustics is multidisciplinary. At a minimum it requires input from physics, physiology, psychology, and several engineering technologies involved in the creation and reproduction of musical sound.
My favorite chapters in Hartmann’s book are Chapter 13 on Pitch, and Chapter 14 on Localization of Sound. Chapter 13 begins
Pitch is the psychological sensation of the highness or the lowness of a tone. Pitch is the basis of melody in music and of emotion in speech. Without pitch, music would consist only of rhythm and loudness. Without pitch, speech would be monotonic—robotic. As human beings, we have astonishingly keen perception of pitch. The principal physical correlate of the psychological sensation of pitch is the physical property of frequency, and our keen perception of pitch allows us to make fine discriminations along a frequency scale. Between 100 and 10,000 Hz we can discriminate more than 2,000 different frequencies!
That is two thousand different pitches spread over a factor of one hundred in frequency (more than six octaves), meaning we can perceive pitches that differ in frequency by about 0.23%. A semitone in music (for example, the difference between a C and a C-sharp) is about 5.9%. That's pretty good: twenty-five pitches within one semitone. No wonder we have to hire piano tuners.
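
Those percentages come from simple arithmetic; here is a two-line check (just verifying the numbers quoted above):

```python
# Ratio between adjacent discriminable frequencies vs. the semitone ratio.
step = 100 ** (1 / 2000)      # 2000 steps spanning a factor of 100 in frequency
semitone = 2 ** (1 / 12)      # 12 semitones span each octave (a factor of 2)
print((step - 1) * 100)       # about 0.23 percent
print((semitone - 1) * 100)   # about 5.9 percent
```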

Pitch is perceived by “place” (different locations in the cochlea, part of the inner ear, respond to different frequencies) and by “timing” (neurons spike in synchrony with the frequency of the sound). For complex sounds, there is also a “template” theory, in which we learn to associate a collection of frequencies with a particular pitch. The perception of pitch is not a simple process.

There are some interesting differences between pitch perception in hearing and color perception in vision. For instance, on a piano play a middle C (262 Hz) and the next E (330 Hz), a factor of 1.25 higher in frequency. What you hear is not a pure tone, but a mixture of frequencies—a chord (albeit a simple one). But if you mix red light (450 THz) and green light (563 THz, again a factor of 1.25 higher in frequency), what you see is yellow, indistinguishable by eye from a single frequency of about 520 THz. I find it interesting and odd that the eye and ear differ so much in their ability to perceive mixtures of frequencies. I suspect it has something to do with the eye needing to be able to form an image, so it does not have the luxury of allocating different locations on the retina to different frequencies. On the other hand, the cochlea does not form images, so it can distribute the frequency response over space to improve pitch discrimination. I suppose if we wanted to form detailed acoustic images with our ears, we would have to give up music.

Hartmann continues, emphasizing that pitch perception is not just physics.
Attempts to build a purely mechanistic theory for pitch perception, like the place theory or the timing theory, frequently encounter problems that point up the advantages of less mechanistic theories, like the template theory. Often, pitch seems to depend on the listener’s interpretation.
Both Sacks and Hartmann discuss the phenomena of absolute, or perfect, pitch (AP). Hartmann offers this observation, which I find amazing, suggesting that we should be training our first graders in pitch recognition.
Less than 1% of the population has AP, and it does not seem possible for adults to learn AP. By contrast, most people with musical skills have RP [relative pitch], and RP can be learned at any time in life. AP is qualitatively different from RP. Because AP tends to run in families, especially musical families, it used to be thought that AP is an inherited characteristic. Most of the modern research, however, indicates that AP is an acquired characteristic, but that it can only be acquired during a brief critical interval in one’s life—a phenomenon known as “imprinting.” Ages 5–6 seem to be the most important.
My sister (who has perfect pitch) and I both started piano lessons in early grade school. I guess she took those lessons more seriously than I did.

In Chapter 14 Hartmann addresses another issue: localization of sound. It is complex, and depends on differences in timing and loudness between the two ears.
The ability to localize the source of a sound is important to the survival of human beings and other animals. Although we regard sound localization as a common, natural ability, it is actually rather complicated. It involves a number of different physical, psychological, and physiological processes. The processes are different depending on where the sound happens to be with respect to your head. We begin with sound localization in the horizontal plane.
Interestingly, localization of sound gets more difficult when echoes are present, which has implications for the design of concert halls. He writes
A potential problem occurs when sounds are heard in a room, where the walls and other surfaces in the room lead to reflections. Because each reflection from a surface acts like a new source of sound, the problem of locating a sound in a room has been compared to finding a candle in a dark room where all the walls are entirely covered with mirrors. Sounds come in from all directions and it’s not immediately evident which direction is the direction of the original source.

The way that the human brain copes with the problem of reflections is to perform a localization calculation that gives different weight to localization cues that arrive at different times. Great weight is placed on the information in the onset of the sound. This information arrives directly from the source before the reflections have a chance to get to the listener. The direct sound leads to localization cues such as ILD [interaural level difference], ITD [interaural time difference], and spectral cues that accurately indicate the source position. The brain gives much less weight to the localization cues that arrive later. It has learned that they give unreliable information about the source location. This weighting of localization cues, in favor of the earliest cues, is called the precedence effect.
The enjoyment of music is a truly complicated event, involving much physics and physiology. The Principles of Musical Acoustics is a great place to start learning about it.

Friday, February 14, 2014

Bacterial Decision Making

Medical and biological physics sometimes appear on the cover of Physics Today. For instance, this month (February 2014) the cover shows E. coli. The caption for the cover picture states
Escherichia coli bacteria have served for decades as the “hydrogen atom” of cellular decision making. In that branch of biology, researchers strive to understand the origin of cellular individuality and how a cell decides whether or not to express a particular gene in its DNA. For some of the physics involved, turn to the article by Jané Kondev on page 31.
The article begins with a description of Jacques Monod’s work with the lac operon: a stretch of DNA that regulates the lac genes responsible for lactose digestion. (This story is told in detail in Horace Freeland Judson’s masterpiece The Eighth Day of Creation.) Kondev writes
The key question I’ll address in this article is, What is the molecular basis by which a cell decides to switch a gene on? Although all the cells in figure 1b are genetically identical and experience the same environment, only one appears to be making the protein. As we’ll see, that cellular individuality is a direct consequence of molecular noise that accompanies cellular decision making. The sources of the noise and its biological consequences are currently a hot topic of research. And statistical physics is proving to be an indispensable tool for producing mathematical models capable of explaining data from experiments that look at decisions made by individual cells.
The caption of Fig. 1b reads
In the presence of a lactose surrogate, individual cells can switch from a state in which they are unable to digest lactose to a state in which they are able to consume the secondary sugar. Yellow indicates the amount of a fluorescently labeled protein, lactose permease, which is one of the enzymes needed by the cell to digest lactose.
The article then draws on several physics concepts that Russ Hobbie and I discuss in the 4th edition of Intermediate Physics for Medicine and Biology: the Boltzmann factor, the Gibbs free energy, the Poisson probability distribution, and feedback. The last of these concepts is crucial.
Thanks to that positive feedback, E. coli cells exist in two different steady states—one in which there are many permeases in the cell (the yellow cell in figure 1b), the other in which the number of permeases is low (the dark cells in 1b). Stochastic fluctuations in the expression of the lac genes—fluctuations, for instance, between an on and an off state of the promoter—can flip the switch and turn a lactose noneater to a lactose eater.
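
To see how positive feedback can produce two stable states, here is a toy calculation of my own (with invented numbers, not the model in Kondev’s article). Production of permease has a small basal rate plus a term that switches on only when permease is already abundant, while removal grows linearly with the amount of permease; wherever the two rates cross, the cell is at a steady state.

```python
# Toy bistable switch: basal production + positive feedback vs. linear removal.
import numpy as np

P = np.linspace(0, 100, 100001)                  # number of permease molecules
production = 0.2 + 4 * P**2 / (41**2 + P**2)     # basal rate plus positive feedback
removal = 0.05 * P                               # degradation and dilution

above = production > removal
crossings = P[1:][above[1:] != above[:-1]]       # where the two rate curves cross
print(crossings)   # three steady states: a stable "off" state, an unstable threshold, a stable "on" state
```
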
The article concludes
Physics-based models are leading to more stringent tests of the molecular mechanisms responsible for gene expression than those provided by the qualitative model presented in biology textbooks. They also pave the way for the design of so-called synthetic genetic circuits, in which the proteins produced by the expression of one gene affect the expression of another. Such circuits hold the promise of bacterial cells capable of producing useful chemicals or combating diseased human cells, including cancerous cells. Whether this foray of physics into biology will lead to fundamentally new biological insights about gene expression remains to be seen.
Kondev’s review offers us one more example of the importance of physics in biology and medicine. And for those of you who think E. coli bacteria are not an appropriate topic for a Valentine’s Day blog post, I say bah humbug.