Friday, September 2, 2016

Whiplash

Last week, my wife Shirley and I were in an automobile accident. We suffered no serious injuries, thank you, but the car was totaled and we were sore for several days. After the obligatory reflections on the meaning of life, I began to think critically about the biomechanics of auto accident injuries.

Our car was at a complete stop, and the idiot in the other car hit us from behind. The driver’s side air bag deployed and the impact pushed us off to the right of the road (we hit the car in front of us in the process), while the idiot’s car ended up on the opposite shoulder. It looked a little like this; we were m2 and the idiot was m1:
The collision dynamics of our car accident.
The police came and our poor car was carried off on a wrecker to a junk yard. Shirley and I walked home; the accident occurred about a quarter mile from our house.

My neck is still stiff. Presumably I suffered a classic—but not too severe—whiplash. Although Intermediate Physics for Medicine and Biology does not discuss whiplash, it does cover most of the concepts needed to understand it: acceleration, shear forces, torques, and biomechanics. Paul Davidovits describes whiplash briefly in Physics in Biology and Medicine. From the second edition:
5.7  Whiplash Injury

Neck bones are rather delicate and can be fractured by even a moderate force. Fortunately the neck muscles are relatively strong and are capable of absorbing a considerable amount of energy. If, however, the impact is sudden, as in a rear-end collision, the body is accelerated in the forward direction by the back of the seat,  and the unsupported neck is then suddenly yanked back at full speed. Here the muscles do not respond fast enough and all the energy is absorbed by the neck bones, causing the well-known whiplash injury.
You can learn more about the physics of whiplash in the paper “Kinematics of a Head-Neck Model Simulating Whiplash” published in The Physics Teacher (Volume 46, Pages 88–91, 2008).
In a typical rear-end collision, the vehicle accelerates forward when struck and the torso is pushed forward by the seat. The structural response of the cervical spine is dependent upon the acceleration-time pulse applied to the thoracic spine and interaction of the head and spinal components. During the initial phases of the impact, it is obvious that the lower cervical vertebrae move horizontally faster than the upper ones. The shear force is transmitted from the lower cervical vertebrae to the upper ones through soft tissues between adjacent vertebrae one level at a time. This shearing motion contributes to the initial development of an S-shape curvature of the neck (the upper cervical spine undergoes flexion while the lower part undergoes extension), which progresses to a C-shape curvature. At the end of the loading phase, the entire head-neck complex is under the extension mode with a single curvature. This implies the stretching of the anterior and compression of the posterior parts of the cervical spine.
Videos are available online showing what happens to the upper spine during whiplash.
Injury from whiplash depends on the acceleration. What sort of acceleration did my head undergo? I don’t know the speed of the idiot’s car, but I will guess it was 25 miles per hour, which is equal to about 11 meters per second. Most of the literature I have read suggests that the acceleration resulting from such impacts occurs in about a tenth of a second. Acceleration is change in speed divided by change in time (see Appendix B in IPMB), so (11 m/s)/(0.1 s) = 110 m/s², which is about 11 times the acceleration of gravity, or 11 g. Yikes! Honestly, I don’t know the idiot’s speed. He may have been slowing down before he hit me, but I don’t recall any skidding noises just before impact.
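If you want to check the arithmetic yourself, here is a minimal Python sketch of the estimate; the 25 mph impact speed and the 0.1 s collision time are my guesses, not measured values.

```python
# Back-of-the-envelope estimate of head acceleration in a rear-end collision.
# The impact speed (25 mph) and collision duration (0.1 s) are rough guesses.

MPH_TO_MPS = 0.44704   # conversion factor, miles per hour to meters per second
G = 9.8                # acceleration of gravity, m/s^2

impact_speed_mph = 25.0    # guessed speed of the other car
delta_t = 0.1              # assumed duration of the acceleration, in seconds

delta_v = impact_speed_mph * MPH_TO_MPS   # change in speed, about 11 m/s
acceleration = delta_v / delta_t          # average acceleration, m/s^2

print(f"change in speed: {delta_v:.1f} m/s")
print(f"average acceleration: {acceleration:.0f} m/s^2 = {acceleration / G:.1f} g")
```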

What lesson do I take from this close call with death? My hero Isaac Asimov—who wrote over 500 books in his life—was asked what he would do if told he had only six months to live. His answer was “type faster.” Sounds like good advice to me!

Our car, after the accident.

Friday, August 26, 2016

Everything's Up To Date in Kansas City

Union Station.
I posted the last two entries in this blog while on a trip to Kansas City to visit my parents. I didn’t grow up in the Kansas City area, but I did graduate from Shawnee Mission South High School in Overland Park, Kansas, and I was a physics major at the University of Kansas in Lawrence. My dad is a native of Kansas City, Missouri, while my mom moved to Kansas City, Kansas, when young and attended Wyandotte High School.


Kauffman Center for the Performing Arts.

We had a great visit, including a ride on Kansas City’s new streetcar, which travels a route along Main Street from Union Station, past the Power and Light District, within a few blocks of the Kauffman Center, to the River Market. We also had a great pork sandwich at Pigwich in the East Bottoms. Kansas City is booming.

Liberty Memorial.
Is there any connection between Kansas City and medical physics? Yes, there is. Rockhurst University, a liberal arts college located a mile west of where my dad grew up on Swope Parkway, offers an undergraduate program in the physics of medicine, which is similar to the medical physics major we offer at Oakland University. I thought the readers of Intermediate Physics for Medicine and Biology might like to see how a school other than Oakland structures its undergraduate medical physics curriculum.

From their Physics of Medicine website:
POM Program Overview: To suit your interests and career goals the POM Program has three program choices:
  • Medical Physics Major - Major Track designed for students wishing to enter graduate school in Medical Physics
  • Physics of Medicine (POM) Pre-Professional Major - Major Track designed for students wishing to enter a Medical/Healthcare Graduate Program
  • Physics of Medicine (POM) Minor— Minor designed to complement pre-healthcare or pre-medicine program
Advantages to students of the POM Program are:
  1. Deeper understanding of physics principles and their applicability to a medical or health field career.
  2. Stronger post-graduate application to competitive health field programs.
  3. Undergraduate research opportunities – potential for capstone area or future graduate work. 
  4. Value to students of interdisciplinary study, allowing them to tie together coursework in science/math with professional goals. 
All Physics of Medicine Coursework is designed to complement the Scientific Foundations for Future Physicians Report, Report of the AAMC-HHMI Committee (2009).
Some courses specific to the program are:
PH 3200 Physics of the Body I: This course expands on the physics principles developed in introductory physics courses through an in-depth study of mechanics, fluids and thermodynamics as they are applied to the human body. Areas of study include the following: biomechanics (torque, force, motion and lever systems of the body; application of vector analysis of human movement to video), thermodynamics and heat transfer (food intake and mechanical efficiency) and the pulmonary system (pressure, volume and compliance relationships). Guest speakers from the medical community will be invited. [This course appears to cover the material in Chapters 1-3 in IPMB]

PH 3210 Physics of the Body II: This course is a continuation of Physics of the Body I with a concentration on the cardiovascular system, electricity and wave motion. Areas of study include the following: cardiovascular system (heart as a force pump, blood flow and pressure), electricity in the body (action potentials, resistance-capacitance circuit of nerve impulse propagation, EEG, EKG, EMG), and sound (hearing, voice production, sound transfer and impedance, ultrasound – transmission and reflection). In addition, students complete a guided, in-depth, individual investigation on a topic pertinent to Physics of the Body. Guest speakers from the medical community will be invited. [Approximately Chapters 6, 7, and 13 in IPMB. PH 3200 and 3210 together are similar to Oakland University’s PHY 325, Biological Physics]

PH 3240 Physics of Medical Imaging: This course focuses on an introduction to areas of modern physics required for an understanding of contemporary medical diagnostic and treatment procedures. Topics include a focus on the physics underlying modern medical imaging instruments: the EM Spectrum, X-Ray, CT, Gamma Camera, SPECT, PET, MRI and hybrid instrumentation. In this course, students learn about the physics involved in how these diagnostic and therapeutic instruments work as well as the numerous physics and patient factors that contribute to the choice of instrument for diagnosis. There will be field trips to local hospitals and medical imaging facilities and invited guest speakers. [Chapters 15-18 in IPMB; similar to OU’s PHY 326, Medical Physics]

PH 4400 Optics: This course covers both the geometric and physical properties of optical principles including optics of the eye, lasers, fiber optics, and use of endoscopy in medicine. Students will complete a final optics research project in which they relate content learned to an area of optics research. [Chapter 14 in IPMB. We have no comparable course at OU. We offer a standard optics class, but with no biomedical emphasis. This class intrigues me.]

PH 4900 Statistics for the Health Sciences: This course introduces the basic principles and methods of health statistics. Emphasis is on fundamental concepts and techniques of descriptive and inferential statistics with applications in health care, medicine and public health. Core content includes research design, statistical reasoning and methods. Emphasis will be on basic descriptive and inferential methods and practical applications. Data analysis tools will include descriptive statistics and graphing, confidence intervals, basic rules of probability, hypothesis testing for means and proportions, and regression analysis. Students will use specialized statistical software to conduct data analysis of health related data sets. [Nothing exactly like this in IPMB. At OU, we require all medical physics majors to take a statistics class, taught by the Department of Mathematics and Statistics.]

PH 4900 Research in Physics of Medicine: Independent student research on coursework from Physics of Medicine Program. Students will choose topic from Physics of Medicine Program coursework to investigate further and prepare for presentation submission. This course will serve as a capstone course for Medical Physics and Physics of Medicine Pre-Professional Majors. [I am a big supporter of undergraduate research. At OU, medical physics majors can satisfy their capstone requirement by either research or our seminar class.]

MT 3260 Mathematical Modeling in Medicine: Students will build mathematical models and use these models to answer questions in various areas of medicine. Topics may include: Epidemic modeling, genetics, drug treatment, bacterial population modeling, and neural systems/networks. [IPMB is focused on mathematical modeling. I teach PHY 325 and 326 as workshops on mathematical modeling in biology and medicine.]
The Rockhurst physics of medicine minor looks like an idea I am tempted to steal. Their requirements are:
To complete the Physics of Medicine Minor:
Prerequisites: one year of introductory/general physics and Calculus I (complete in first two years)
Upper Division Courses: complete 4 upper-division POM courses (12 hrs. total)
Required:
  • PH 3200: Physics of the Body I (3 Hours, Offered Fall Semester Odd years) 
  • PH 3210: Physics of the Body II (3 Hours, Offered Spring Semester even years)
Choose 2 from the following:
  • PH 3240: Physics of Medical Imaging (3 Hours, Offered Spring Semester Odd Years) 
  • PH 4400: Optics (3 hours, Offered Fall Semester Even Years) 
  • MT 3260: Mathematical Modeling in Medicine (3 Hours, Offered Fall Semester Even years)
  • PH 4900: Statistics for the Health Sciences (3 Hours, Offered Spring semesters)
An OU version might be Biological Physics (PHY 325) and Medical Physics (PHY 326), plus their prerequisites: two semesters of introductory physics and two semesters of calculus.

The Nelson Art Gallery.
I enjoy my trips to Kansas City because there is a lot to do and see there, from the Nelson Art Gallery to Crown Center to the Liberty Memorial and the National World War I Museum. I remember in high school attending shows at the Starlight Theater in Swope Park, and watching many Kansas City Royals baseball games at Kauffman Stadium (where I saw George Brett play in the World Series!). The Truman Library is in nearby Independence, Missouri.

The Country Club Plaza.
I didn’t expect to find a hub of medical physics education in Kansas City, but there it is. In addition to the Rockhurst program, the Kansas University Medical Center has a CAMPEP-accredited clinical medical physics residency (while driving on I-35, I could see cranes putting up a new KU Med Center building), and the Stowers Institute, less than a mile north of Rockhurst and just east of the Country Club Plaza, has a strong biomedical research program. As the song says, Everything's Up To Date in Kansas City.

Kansas City celebrating the 2015 Royals World Series Championship.

Friday, August 19, 2016

How to Explain Why Unequal Anisotropy Ratios is Important Using Pictures but No Mathematics

Ten years ago, at the IEEE Engineering in Medicine and Biology Society Annual Conference in New York City, I presented a paper titled “How to Explain Why Unequal Anisotropy Ratios is Important Using Pictures but No Mathematics.” Although it was only a four-page conference proceeding, it remains one of my favorite papers.
I. Introduction 

The bidomain model describes the electrical properties of cardiac tissue. The term “bidomain” arises because the model accounts for two (“bi”) spaces (“domains”): intracellular and extracellular. Both spaces are anisotropic; the electrical conductivity depends on the direction relative to the myocardial fibers. Moreover, the intracellular space is more anisotropic than the extracellular space, a condition referred to in the literature as “unequal anisotropy ratios.” This condition has important consequences for the electrical behavior of the heart.

Many papers describe the implications of unequal anisotropy ratios. The mathematical derivations and numerical calculations in these reports emphasize the consequences of unequal anisotropy ratios, but they often fail to explain physically why these consequences occur. For example, Sepulveda et al. discovered that during unipolar stimulation, depolarization occurs under the cathode but hyperpolarization exists adjacent to it along the fiber direction. The hyperpolarized regions affect the mechanism of excitation, the shape of the strength-interval curve, and the induction of reentry. Yet, when I am asked why the hyperpolarization appears, I find it difficult to give an intuitive, nonmathematical answer.

In this paper, I try to answer the “why” questions that arise from the bidomain model. I present no new results, but many old results are clarified. My hope is that the reader will develop the intuition necessary to understand qualitatively how cardiac tissue behaves, without having to resort to lengthy mathematical derivations or numerical calculations.
Parts of this article have worked their way into Intermediate Physics for Medicine and Biology. For instance, the article explains how a wave front propagating through cardiac tissue creates a magnetic field. This analysis is reproduced as Problem 19 in Chapter 8 on biomagnetism.

Problem 50 in Chapter 7 examines the transmembrane potential induced in cardiac tissue when an electric shock is applied in the presence of an insulating obstacle. I love how this example highlights the importance of unequal anisotropy ratios.
Consider an insulating cylinder in an otherwise uniform tissue with straight fibers (Fig. 7). An electric field is applied from left to right. Far from the insulator, the current is in the x-direction and is distributed equally between the intracellular and extracellular spaces. As current approaches the insulator, it turns left to circle around the obstacle. The current then is flowing approximately perpendicular to the fibers, so most of the current will be extracellular. As the current turns right to flow once again in the x-direction, it will be parallel to the fibers and will again be distributed more or less equally between the two spaces. As current leaves and then reenters the intracellular space, it causes depolarization and then hyperpolarization. The transmembrane potential distribution surrounding the insulator is even in y and odd in x. The result is the complex pattern of polarization surrounding an insulator in cardiac tissue during electrical stimulation.
Fig. 7. Polarization caused by an insulating obstacle.
This figure explains the results observed in [18].
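To attach rough numbers to “unequal anisotropy ratios,” here is a minimal Python sketch using nominal conductivities of the sort often assumed in bidomain simulations; the specific values are illustrative and are not taken from the paper. It estimates how the current divides between the intracellular and extracellular spaces parallel and perpendicular to the fibers, which is the redistribution at the heart of the obstacle calculation above.

```python
# Nominal bidomain conductivities (S/m), of the sort often assumed in
# simulations; these particular values are illustrative, not from the paper.
sigma_iL, sigma_iT = 0.2, 0.02   # intracellular, along (L) and across (T) the fibers
sigma_eL, sigma_eT = 0.2, 0.08   # extracellular, along and across the fibers

print("intracellular anisotropy ratio:", sigma_iL / sigma_iT)   # about 10
print("extracellular anisotropy ratio:", sigma_eL / sigma_eT)   # 2.5

# Far from membranes and obstacles, if both spaces see the same electric field,
# the intracellular space carries a fraction sigma_i/(sigma_i + sigma_e) of the
# total current in each direction (a crude parallel-conductor estimate).
print("intracellular share, parallel to fibers:",
      sigma_iL / (sigma_iL + sigma_eL))        # 0.5
print("intracellular share, perpendicular to fibers:",
      sigma_iT / (sigma_iT + sigma_eT))        # 0.2
```

With these numbers the current divides about evenly between the two spaces along the fibers but flows mostly extracellularly across them, which is why current that turns to skirt the obstacle must leave the intracellular space and then reenter it, depolarizing the tissue on one side and hyperpolarizing it on the other.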
The role of theoretical analysis in biology and medicine is to make predictions that can be tested experimentally. My former PhD advisor John Wikswo and his team used optical mapping to measure the transmembrane potential around an obstacle during a shock. Their results are shown in the picture below. The bottom line: the prediction and the experiment are consistent. Physics works!

Optical mapping of the transmembrane potential around an obstacle during a shock, from Woods et al., “Virtual Electrode Effects Around an Artificial Heterogeneity During Field Stimulation of Cardiac Tissue,” Heart Rhythm, 3:751-752, 2006.
One graduate student, Marcella Woods, was involved in both of the projects I mentioned. She performed the theoretical analysis of the magnetic field produced by wave fronts in cardiac muscle under my direction when I was on the faculty of Vanderbilt University. After I left, she worked with Wikswo and carried out the experiments shown above.

Friday, August 12, 2016

Have We Reached the Athletic Limits of the Human Body?

The Olympics are in full swing this week, giving us in the United States a brief respite from our nasty presidential campaign. As you might guess, I view the Olympics through the lens of biological physics. One question that physics can help answer is: Have we reached the athletic limits of the human body? Can sprinters run faster and faster, or have we reached the physical and physiological limit? Can pole vaulters vault higher? Can long jumpers jump longer? Can swimmers swim quicker? An article in last week’s Scientific American by Bret Stetka tries to answer these questions.
At this month’s summer Olympic Games in Rio, the world's fastest man, Usain Bolt—a six-foot-five Jamaican with six gold medals and the sinewy stride of a gazelle—will try to beat his own world record of 9.58 seconds in the 100-meter dash. If he does, some scientists believe he may close the record books for good. Whereas myriad training techniques and technologies continue to push the boundaries of athletics, and although strength, speed and other physical traits have steadily improved since humans began cataloguing such things, the slowing pace at which sporting records are now broken has researchers speculating that perhaps we’re approaching our collective physiological limit—that athletic achievement is hitting a biological brick wall.
The article cites a 2008 paper by Mark Denny, the author of Air and Water, a book often cited in Intermediate Physics for Medicine and Biology. Denny suggests that there are limits, and we are closing in on them.
Are there absolute limits to the speed at which animals can run? If so, how close are present-day individuals to these limits? I approach these questions by using three statistical models and data from competitive races to estimate maximum running speeds for greyhounds, thoroughbred horses and elite human athletes. In each case, an absolute speed limit is definable, and the current record approaches that predicted maximum. While all such extrapolations must be used cautiously, these data suggest that there are limits to the ability of either natural or artificial selection to produce ever faster dogs, horses and humans. Quantification of the limits to running speed may aid in formulating and testing models of locomotion.
Yet Denny was not overly cautious in his paper. He predicted minimum times for many races, including the 100 m dash. Stetka writes
Bolt hopes to beat the researcher’s [that is, Denny’s] fastest predicted 100-meter dash time of 9.48 seconds. Unfortunately, according to Denny, the now notably older sprinter may have missed his chance. The sprinter was a chasm ahead of the pack in a semifinals race at the 2008 Beijing Olympics when he slowed up before crossing the finish line. “I think had he kept going at full speed he would’ve set an all-time, unbeatable world record,” Denny speculates.
Then Stetka quotes Denny as saying
“When I published my paper, the feedback I got was that this was going to destroy the Olympics,” he recollects. “That’s like saying the 1962 Brazilian soccer team was the best ever so no one’s ever going to watch the World Cup again. But if Bolt can run the 100 in 9.47 seconds and beat my prediction, then hats off to him. I think there’s always going to be the lure of ‘maybe someone’s going to do better.’”
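For a sense of scale, here is a minimal Python sketch comparing Bolt’s 9.58 s record with Denny’s predicted limit of 9.48 s; the times come from the quotes above, and the “lead at the line” is just kinematics.

```python
# How big is the gap between Bolt's world record and Denny's predicted limit?

distance = 100.0        # race distance, meters
t_record = 9.58         # Bolt's 2009 world record, seconds
t_limit = 9.48          # Denny's predicted minimum time, seconds

v_record = distance / t_record   # average speed at the record pace
v_limit = distance / t_limit     # average speed at the predicted limit

# Distance a record-pace runner covers in the limiting time: the lead at the line.
lead = distance - v_record * t_limit

print(f"record pace: {v_record:.2f} m/s, predicted limit pace: {v_limit:.2f} m/s")
print(f"a runner at the predicted limit would win by about {lead:.1f} m")
```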
I plan to watch the Olympics and see if humans can run faster than ever before. I’m a big fan of Mark Denny, but I’ll be rooting for Bolt (or Gatlin) to beat Denny's prediction.

Enjoy!


P.S. A long frustrated sigh goes to Michael Phelps and the other USA swimmers engaged in “cupping” therapy pseudoscience. Oh, where is Bob Park when we need him! Ignore the quackery and gibberish and focus on the swimming.

Friday, August 5, 2016

Zapping Their Brains at Home

“Zapping Their Brains at Home,” by Anna Wexler.
A couple weeks ago, Anna Wexler published an article in the New York Times titled “Zapping Their Brains at Home.”
Earlier this month, in the journal Annals of Neurology, four neuroscientists published an open letter to practitioners of do-it-yourself brain stimulation. These are people who stimulate their own brains with low levels of electricity, largely for purposes like improved memory or learning ability. The letter, which was signed by 39 other researchers, outlined what is known and unknown about the safety of such noninvasive brain stimulation, and asked users to give careful consideration to the risks.
I worked on brain stimulation when at the National Institutes of Health, and Russ Hobbie and I analyze neural stimulation in Intermediate Physics for Medicine and Biology. So what is my reaction to these do-it-yourselfers? My first thought was “Yikes…this sounds like trouble!” But the more I think about it, the less worried I am.

We are talking about transcranial direct current stimulation, which uses weak currents applied to the scalp. I have always been surprised that such tiny currents have any effect at all; see my editorial “What Does the Ratio of Injected Current to Electrode Area Not Tell Us About tDCS?” (Clinical Neurophysiology, Volume 120, Pages 1037–1038, 2009). My advice to the do-it-yourselfers is not so much “be careful” but rather “don’t get your hopes up.”
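To see why I say “don’t get your hopes up,” here is a back-of-the-envelope Python sketch using parameters typical of published tDCS protocols (roughly 1-2 mA through a 25-35 cm² sponge electrode); the specific values are assumptions for illustration, not taken from the editorial.

```python
# Back-of-the-envelope numbers for transcranial direct current stimulation.
# Typical published protocols use roughly 1-2 mA through a 25-35 cm^2 sponge
# electrode for about 20 minutes; the values below are illustrative assumptions.

current = 2e-3            # applied current, in amperes (2 mA)
electrode_area = 25e-4    # electrode area, in square meters (25 cm^2)
duration = 20 * 60        # stimulation time, in seconds (20 minutes)

current_density = current / electrode_area   # average current density at the scalp
charge = current * duration                  # total charge delivered

print(f"average current density under the electrode: {current_density:.1f} A/m^2")
print(f"charge delivered in 20 minutes: {charge:.1f} C")
```

Only part of that current reaches the brain, and it spreads out as it goes, so the current density in the cortex is smaller still.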

Of the four coauthors on the letter in Annals of Neurology, the only one I know is Alvaro Pascual-Leone, with whom I worked while at NIH and whom we cite several times in IPMB. Below I list the main points raised in the letter:
  • Stimulation affects more of the brain than a user may think 
  • Stimulation interacts with ongoing brain activity, so what a user does during tDCS changes its effects 
  • Enhancement of some cognitive abilities may come at the cost of others 
  • Changes in brain activity (intended or not) may last longer than a user may think 
  • Small differences in tDCS parameters can have a big effect 
  • tDCS effects are highly variable across different people 
  • The risk/benefit ratio is different for treating diseases versus enhancing function
What do I think of do-it-yourselfers in general? I have mixed feelings. Heaven help us if they start fooling around with heart defibrillators, which could be suicidal. For transcranial magnetic stimulation, I think the biggest risk would be the construction of a device that sends kiloamps of current through a coil. I have always thought that TMS is more dangerous for the physician (who often holds the coil) than for the patient. Moreover, the induced current in the brain is larger for TMS than for tDCS. I would be wary of do-it-yourself magnetic stimulation. But for D.I.Y.ers using relatively low-level electrical current applied to the scalp, if someone educates themselves on the technique and follows reasonable safety recommendations, then I don’t see it as a problem.

Wexler ends her article
The open letter this month is about safety. But it is also a recognition that these D.I.Y. practitioners are here to stay, at least for the time being. While the letter does not condone, neither does it condemn. It sticks to the facts and eschews paternalistic tones in favor of measured ones. The letter is the first instance I’m aware of in which scientists have directly addressed these D.I.Y. users. Though not quite an olive branch, it is a commendable step forward, one that demonstrates an awareness of a community of scientifically involved citizens.
If you want to read more by Wexler, look here and here.

My final, and admittedly self-serving, advice to the D.I.Y.ers: go buy a copy of Intermediate Physics for Medicine and Biology, so you can learn the scientific principles behind this and other techniques.

Friday, July 29, 2016

Niels Bohr and the Stopping Power of Alpha Particles

In Chapter 15 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the interaction of charged particles with electrons.
15.11.1 Interaction with Target Electrons

We first consider the interaction of the projectile with a target electron, which leads to the electronic stopping power, Se. Many authors call it the collision stopping power, Scol. There can be interactions in which a single electron is ejected from a target atom or interactions with the electron cloud as a whole (a plasmon excitation). The stopping power at higher energies, where it is nearly proportional to β−2 [β = v/c, where v is the speed of the projectile and c is the speed of light], has been modeled by Bohr, by Bethe, and by Bloch (see the review by Ahlen 1980).
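The β−2 dependence is easy to explore numerically. Below is a minimal Python sketch of a simplified Bethe-type expression for the electronic stopping power of protons in water, with no shell or density-effect corrections and a mean excitation energy of roughly 75 eV; it is only an illustration of the scaling, not the full treatment developed by Bohr, Bethe, and Bloch or reviewed by Ahlen.

```python
import math

# Simplified (Bethe-type) mass electronic stopping power for a heavy charged
# particle, with no shell or density-effect corrections. This illustrates the
# beta^-2 scaling; it is not a precise dosimetric calculation.

K = 0.307075        # MeV cm^2 / mol  (4 pi N_A r_e^2 m_e c^2)
m_e_c2 = 0.511      # electron rest energy, MeV
m_p_c2 = 938.3      # proton rest energy, MeV
Z_over_A = 0.555    # ratio of atomic number to mass number for water (approximate)
I = 75e-6           # mean excitation energy of water, MeV (about 75 eV)

def mass_stopping_power(T, z=1):
    """Approximate electronic mass stopping power (MeV cm^2/g) for a proton
    of charge z (in units of e) and kinetic energy T (in MeV)."""
    gamma = 1.0 + T / m_p_c2
    beta2 = 1.0 - 1.0 / gamma**2
    log_term = math.log(2 * m_e_c2 * beta2 * gamma**2 / I)
    return K * Z_over_A * z**2 / beta2 * (log_term - beta2)

for T in (1, 10, 100):   # proton kinetic energies in MeV
    print(f"T = {T:4d} MeV:  S/rho ~ {mass_stopping_power(T):6.1f} MeV cm^2/g")
```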
Niels Bohr’s Times: In Physics, Philosophy, and Polity, by Abraham Pais.
Bohr is, of course, the famous Niels Bohr, one of the greatest physicists of all time. I am familiar with Bohr’s model of the hydrogen atom (see Sec. 14.3), but not as much with his work on the stopping power of charged particles. It turns out that Bohr’s groundbreaking work on hydrogen grew out of his study of the stopping power of alpha particles. Moreover, the stopping power analysis was motivated by Ernest Rutherford’s experiments on the scattering of alpha particles, which established the nuclear structure of the atom. This chain of events began with the young Niels Bohr arriving in Manchester to work with Rutherford in March 1912. Abraham Pais discusses this part of Bohr’s life in his biography Niels Bohr’s Times: In Physics, Philosophy, and Polity.
Bohr finished his paper on this subject [the energy loss of alpha particles when traversing matter] only after he had left Manchester; it appeared in 1913. The problem of the stopping of electrically charged particles remained one of his lifelong interests. In 1915 he completed another paper on that subject, which includes the influence of effects due to relativity and to straggling (that is, the fluctuations in energy and in range of individual particles)…

Bohr’s 1913 paper on α-particles, which he had begun in Manchester, and which had led him to the question of atomic structure, marks the transition to his great work, also of 1913, on that same problem. While still in Manchester, he had already begun an early sketch of these entirely new ideas. The first intimation of this comes from a letter, from Manchester, to Harald [Niels’ brother]: “Perhaps I have found out a little about the structure of atoms. Don’t talk about it to anybody…It has grown out of a little information I got from the absorption of α-rays.” I leave the discussion of these beginnings to the next chapter.
On 24 July 1912 Bohr left Manchester for his beloved Denmark. His postdoctoral period had come to an end.
So the alpha particle stopping power calculation Russ and I discuss in Chapter 15 led directly to Bohr’s model of the hydrogen atom, for which he got the Nobel Prize in 1922.

Friday, July 22, 2016

Error Rates During DNA Copying

Chapter 3 of Intermediate Physics for Medicine and Biology discusses the Boltzmann factor. In the homework exercises at the end of the chapter, we include a problem in which you apply the Boltzmann factor to estimate the error rate during the copying of DNA.
Problem 30. The DNA molecule consists of two intertwined linear chains. Sticking out from each monomer (link in the chain) is one of four bases: adenine (A), guanine (G), thymine (T), or cytosine (C). In the double helix, each base from one strand bonds to a base in the other strand. The correct matches, A-T and G-C, are more tightly bound than are the improper matches. The chain looks something like this, where the last bond shown is an “error.”
A DNA molecule containing an error.
The probability of an error at 300 K is about 10−9 per base pair. Assume that this probability is determined by a Boltzmann factor e^(−U/kBT), where U is the additional energy required for a mismatch.
(a) Estimate this excess energy.
(b) If such mismatches are the sole cause of mutations in an organism, what would the mutation rate be if the temperature were raised 20° C?
This is a nice simple homework problem that provides practice with the Boltzmann factor and insight into the thermodynamics of base pair copying. Unfortunately, reality is more complicated.

Biophysics: Searching for Principles, by William Bialek.
William Bialek addresses the problem of DNA copying in his book Biophysics: Searching for Principles (Princeton University Press, 2012). He notes that the A typically binds to T. If A were to bind with G, the resulting base pair would be the wrong size and grossly disrupt the DNA double helix (A and G are both large double-ring molecules). However, if A were to bind incorrectly with C, the result would fit okay (C and T are about the same size) at the cost of eliminating one or two hydrogen bonds, which have a total energy of about 10 kBT. Bialek writes
An energy difference of ΔF ~ 10 kBT means that the probability of an incorrect base pairing should be, according to the Boltzmann distribution, e^(−ΔF/kBT) ~ 10−4. A typical protein is 300 amino acids long, which means that it is encoded by about 1000 bases; if the error probability is 10−4, then replication of DNA would introduce roughly one mutation in every tenth protein. For humans, with a billion base pairs in the genome, every child would be born with hundreds of thousands of bases different from his or her parents. If these predicted error rates seem large, they are—real error rates in DNA replication vary across organisms [see the vignette “what is the error rate in transcription and translation” in Cell Biology by the Numbers], but are in the range of 10−8–10−12, so the entire genome can be copied without almost any mistakes.
So how does the error rate become so small? There are enzymes called DNA polymerases that proofread the copied DNA and correct most errors. Because of these enzymes, the overall error rate is far smaller than the 10−4 rate you would estimate from the Boltzmann factor alone.

Our homework problem is therefore a little misleading, but it has redeeming virtues. First, the error we show in the figure is G-A, which would more severely disrupt the DNA's double helix structure. That specific error may well have a higher energy and therefore a lower error rate from the Boltzmann factor alone. Second, the problem illustrates how sensitive the Boltzmann factor is to small changes in energy. If ΔE = 10 kBT, the Boltzmann factor is e−10 = 0.5 × 10−4. If ΔE = 20 kBT, the Boltzmann factor is e−20 = 2 × 10−9. A factor of two increase in energy translates into more than a factor of 10,000 reduction in error rate. Wow!
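Here is a minimal Python sketch of that arithmetic; it also hints at the temperature dependence asked about in part (b) of the homework problem, without working the problem for you.

```python
import math

kB = 1.381e-23   # Boltzmann constant, J/K
T1 = 300.0       # temperature used in the problem, K

# Sensitivity of the Boltzmann factor to the mismatch energy.
for delta_E_in_kT in (10, 20):
    print(f"Delta E = {delta_E_in_kT} kBT: "
          f"exp(-Delta E/kBT) = {math.exp(-delta_E_in_kT):.1e}")

# Temperature dependence: infer U from an error rate of 1e-9 at 300 K,
# then re-evaluate the Boltzmann factor 20 degrees warmer.
U = -kB * T1 * math.log(1e-9)      # excess energy of a mismatch, joules
T2 = T1 + 20.0
print(f"U = {U:.2e} J = {U / (kB * T1):.1f} kBT at 300 K")
print(f"error rate at {T2:.0f} K: {math.exp(-U / (kB * T2)):.1e}")
```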

Friday, July 15, 2016

Word Clouds

I have always wondered about those funny-looking collections of different-sized, different-colored words: the word cloud. This week I learned how to create a word cloud from any text I choose using the free online software at www.wordclouds.com. Of course, I chose Intermediate Physics for Medicine and Biology. Here is what I got.

A word cloud based on Intermediate Physics for Medicine and Biology.
The word cloud speaks for itself, but let me add a few comments. First, I deleted the preface, the table of contents, and the index from a pdf copy of IPMB before submitting it. The software was having trouble with such a large input file, and reducing the size seemed to help. After the list of words and their frequencies was created, I edited it. The software is smart enough to not include common words like “the” and “is,” but I deleted others that seemed generic to me, like “consider” and “therefore.” I kept words that appeared at least 250 times, which was about 65 words. The most common word was “Fig,” as in “...spherical air sacs called alveoli (Fig. 1.1b).” The third most common was “Problem” as in “Problem 1. Estimate the number of....” I considered removing these, but illustrations and end-of-chapter exercises are an important part of the book, so they stayed. I was surprised by the second most common word: “energy.” Russ Hobbie and I did not set out to make this a unifying theme in the book, but apparently it is.
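If you would rather script the whole thing than use a website, something similar can be done with the third-party Python wordcloud package. This is a sketch of that alternative route, not how the figure above was actually made, and “ipmb.txt” is a hypothetical plain-text export of the book.

```python
# A minimal word-cloud sketch using the third-party "wordcloud" package
# (pip install wordcloud). The figure above was made at wordclouds.com;
# this is just an alternative route, and "ipmb.txt" is a hypothetical
# plain-text export of the book.

from wordcloud import WordCloud, STOPWORDS

text = open("ipmb.txt", encoding="utf-8").read()

# Drop common English words plus a few generic ones, as described above.
stopwords = STOPWORDS | {"consider", "therefore", "also", "thus"}

cloud = WordCloud(width=1200, height=800, stopwords=stopwords,
                  background_color="white").generate(text)
cloud.to_file("ipmb_wordcloud.png")
```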

I’ll let you decide if this word cloud is profound or silly. It was fun, and I like to share fun things with the readers of IPMB. Enjoy!

Friday, July 8, 2016

Cell Biology by the Numbers

Cell Biology by the Numbers, by Ron Milo and Rob Phillips.
Six years ago I wrote an entry in this blog about the bionumbers website. Now Ron Milo and Rob Phillips have turned that website into a book: Cell Biology by the Numbers. Milo and Phillips write
One of the central missions of our book is to serve as an entry point that invites the reader to explore some of the key numbers of cell biology. We hope to attract readers of all kinds—from seasoned researchers, who simply want to find the best values for some number of interest, to beginning biology students, who want to supplement their introductory course materials. In the pages that follow, we provide a broad collection of vignettes, each of which focuses on quantities that help us think about sizes, concentrations, energies, rates, information content, and other key quantities that describe the living world.
One part of the book that readers of Intermediate Physics for Medicine and Biology might find useful is their “rules of thumb.” I reproduce a few of them here
• 1 dalton (Da) = 1 g/mol ~ 1.6 × 10−24 g.
• 1 nM is about 1 molecule per bacterial volume [E. coli has a volume of about 1 μm³].
• 1 M is about one per 1 nm³.
• Under standard conditions, particles at a concentration of 1 M are ~ 1 nm apart.
• Water molecule volume ~ 0.03 nm³, (~0.3 nm)³.
• A small metabolite diffuses 1 nm in ~1 ns.
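Several of these rules of thumb are easy to verify with Avogadro’s number; here is a minimal Python sketch (the ~1 μm³ bacterial volume is the book’s rule of thumb, used here as an input).

```python
# Quick numerical checks of a few of the rules of thumb above.

N_A = 6.022e23          # Avogadro's number, 1/mol

# 1 nM in one E. coli volume (about 1 micrometer^3 = 1e-15 liter):
molecules = 1e-9 * N_A * 1e-15
print(f"molecules at 1 nM in 1 um^3: {molecules:.1f}")          # about 0.6, i.e. ~1

# Mean spacing of particles at 1 M:
number_density = 1.0 * N_A * 1e3          # molecules per m^3 (1 mol/L = 1000 mol/m^3)
spacing_nm = (1.0 / number_density) ** (1.0 / 3.0) * 1e9
print(f"mean spacing at 1 M: {spacing_nm:.1f} nm")               # about 1.2 nm

# Roughly 100 hydrogen ions per bacterium at pH 7 (1e-7 M = 100 nM):
print(f"H+ ions per um^3 at pH 7: {1e-7 * N_A * 1e-15:.0f}")     # about 60, i.e. ~100
```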
The book consists of a series of vignettes, each phrased as a question. Here is an excerpt from one.
Which is bigger, mRNA or the protein it codes for?

The role of messenger RNA molecules (mRNAs), as epitomized in the central dogma, is one of fleeting messages for the creation of the main movers and shakers of the cell—namely, the proteins that drive cellular life. Words like these can conjure a mental picture in which an mRNA is thought of as a small blueprint for the creation of a much larger protein machine. In reality, the scales are exactly the opposite of what most people would guess. Nucleotides, the monomers making up an RNA molecule, have a mass of about 330 Da. This is about three times heavier than the average amino acid mass, which weighs in at ~110 Da. Moreover, since it takes three nucleotides to code for a single amino acid, this implies an extra factor of three in favor of mRNA such that the mRNA coding a given protein will be almost an order of magnitude heavier.
It’s obvious once someone explains it to you. Here is another that I liked.
What is the pH of a cell?

…Even though hydrogen ions appear to be ubiquitous in the exercise sections of textbooks, their actual abundance inside cells is extremely small. To see this, consider how many ions are in a bacterium or mitochondrion of volume 1 μm³ at pH 7. Using the rule of thumb that 1 nM corresponds to ~ 1 molecule per bacterial cell volume, and recognizing that pH 7 corresponds to a concentration of 10−7 M (or 100 nM), this means that there are about 100 hydrogen ions per bacterial cell…This should be contrasted with the fact that there are in excess of a million proteins in that same cellular volume.
This one surprised me.
What are the concentrations of free metabolites in cells?

…The molecular census of metabolites in E. coli reveals some overwhelmingly dominant molecular players. The amino acid glutamate wins out…at about 100 mM, which is higher than all other amino acids combined…Glutamate is negatively charged, as are most of the other abundant metabolites in the cell. This stockpile of negative charges is balanced mostly by a corresponding positively charged stockpile of free potassium ions, which have a typical concentration of roughly 200 mM.
Somehow, I never realized how much glutamate is in cells. I also learned all sorts of interesting facts. For instance, a 5% by weight mixture of alcohol in water (roughly equivalent to beer) corresponds to a 1 M concentration. I guess the reason this does not wreak havoc on your osmotic balance is that alcohol easily crosses the cell membrane. Apparently yeast use the alcohol they produce to inhibit the growth of bacteria. This must be why John Snow found that during the 1854 London cholera epidemic, the guys working (and, apparently, drinking) in the brewery were immune.

I’ll give you one more example. Milo and Phillips analyze how long it will take a substrate to collide with a protein.
…Say we drop a test substrate molecule into a cytoplasm with a volume equal to that of a bacterial cell. If everything is well mixed and there is no binding, how long will it take for the substrate molecule to collide with one specific protein in the cell? The rate of enzyme substrate collisions is dictated by the diffusion limit, which as shown above, is equal to ~ 10⁹ s⁻¹M⁻¹ times the concentration. We make use of one of our tricks of the trade, which states that in E. coli, a single molecule (say, our substrate) has an effective concentration of about 1 nM (that is, 10⁻⁹ M). The rate of collisions is thus 10⁹ s⁻¹M⁻¹ × 10⁻⁹ M. That is, they will meet within a second on average. This allows us to estimate that every substrate molecule collides with each and every protein in the cell on average about once per second.
Each and every one, once per second! The beauty of this book, and the value of making these order-of-magnitude estimates, is to provide such insight. I cannot think of any book that has provided me with more insight than Cell Biology by the Numbers.

Readers of IPMB will enjoy CBbtN. It is well written and the illustrations by Nigel Orme are lovely. It may have more cell biology than readers of IPMB are used to (Russ Hobbie and I are macroscopic guys), but that is fine. For those who prefer video over text, listen to Rob Phillips and Ron Milo give their views of life in the videos below.

I’ll give Milo and Phillips the last word, which could also sum up our goals for IPMB.
We leave our readers with the hope that they will find these and other questions inspiring and will set off on their own path to biological numeracy.



Friday, July 1, 2016

The Wien Exponential Law

In Section 14.8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss blackbody radiation. Our analysis is similar to that in many modern physics textbooks. We introduce Planck’s law for Wλ(λ,T) dλ, the spectrum of power per unit area emitted by a completely black surface at temperature T and wavelength λ
Wλ(λ,T) = (2πhc²/λ⁵) × 1/(e^(hc/λkBT) − 1),
where c is the speed of light, h is Planck’s constant, and kB is Boltzmann’s constant. We then 1) express this function in terms of frequency ν instead of wavelength λ, 2) integrate over all wavelengths to derive the Stefan-Boltzmann law, and 3) show that the wavelength of peak emission decreases with temperature, often known as the Wien displacement law.

Russ and I like to provide homework problems that reinforce the concepts in the text. Ideally, the problem requires the reader to repeat many of the same steps carried out in the book, but for a slightly different case or in a somewhat different context. Below I present such a homework problem for blackbody radiation. It is based on an approximation to Planck’s law at short wavelengths derived by Wilhelm Wien.
Problem 25 ½. Consider the limit of Planck’s law, Eq. 14.33, when hc/λ is much greater than kBT, an approximation known as the Wien exponential law.
(a) Derive the mathematical form of Wλ(λ,T) in this limit.
(b) Convert Wien’s law from a function of wavelength to a function of frequency, and determine the mathematical form of Wν(ν,T).
(c) Integrate Wν(ν,T) over all frequencies to obtain the total power emitted per unit area. Compare this result with the Stefan-Boltzmann law (Eq. 14.34). Derive an expression for the Stefan-Boltzmann constant in terms of other fundamental constants.
(d) Determine the frequency νmax corresponding to the peak in Wν(ν,T). Compare νmax/T with the value obtained from Planck’s law.
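If you want to check your answers to parts (c) and (d) numerically, here is a minimal Python sketch using scipy; it is a hint, not a full solution.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# Peak of W_nu for Planck's law: maximize x^3/(e^x - 1), where x = h nu / kT.
# Setting the derivative to zero gives x = 3(1 - e^(-x)).
x_planck = brentq(lambda x: x - 3.0 * (1.0 - np.exp(-x)), 1.0, 5.0)

# Peak of W_nu for the Wien approximation: maximize x^3 e^(-x), giving x = 3 exactly.
x_wien = 3.0

print(f"h nu_max / kT:  Planck {x_planck:.4f},  Wien {x_wien:.4f}")

# Dimensionless total power: integral of x^3/(e^x - 1) dx = pi^4/15 (Planck)
# versus integral of x^3 e^(-x) dx = 6 (Wien), so the Wien approximation
# underestimates the Stefan-Boltzmann constant by roughly 8 percent.
planck_integral, _ = quad(lambda x: x**3 / np.expm1(x), 0, np.inf)
wien_integral, _ = quad(lambda x: x**3 * np.exp(-x), 0, np.inf)
print(f"integrals: Planck {planck_integral:.4f} "
      f"(pi^4/15 = {np.pi**4 / 15:.4f}), Wien {wien_integral:.4f}")
```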
Subtle is the Lord: The Science and the Life of Albert Einstein, by Abraham Pais.
The Wien exponential law predated Planck’s law by several years. In his landmark biography ‘Subtle is the Lord…’: The Science and the Life of Albert Einstein, Abraham Pais discusses 19th century attempts to describe blackbody radiation theoretically.
Meanwhile, proposals for the correct form of [Wλ(λ,T)] had begun to appear as early as the 1860s. All these guesses may be forgotten except one, Wien’s exponential law, proposed in 1896…

Experimental techniques had sufficiently advanced by then to put this formula to the test. This was done by Friedrich Paschen from Hannover, whose measurements (very good ones) were made in the near infrared, λ = 1-8 μm (and T = 400-1600 K). He published his data in January 1897. His conclusion: “It would seem very difficult to find another function…that represents the data with as few constants.” For a brief period, it appeared that Wien’s law was the final answer. But then, in the year 1900, this conclusion turned out to be premature…
And the rest, as they say, is history.