Friday, September 23, 2016

Magneto-Aerotactic Bacteria Deliver Drug-Containing Nanoliposomes to Tumour Hypoxic Regions

In Chapter 8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe magnetotactic bacteria.
Several species of bacteria contain linear strings of up to 20 particles of magnetite, each about 50 nm on a side encased in a membrane (Frankel et al. 1979; Moskowitz 1995). Over a dozen different bacteria have been identified that synthesize these intracellular, membrane-bound particles or magnetosomes (Fig. 8.25). In the laboratory the bacteria align themselves with the local magnetic field. In the problems you will learn that there is sufficient magnetic material in each bacterium to align it with the earth’s field just like a compass needle. Because of the tilt of the earth’s field, bacteria in the wild can thereby distinguish up from down.

Other bacteria that live in oxygen-poor, sulfide-rich environments contain magnetosomes composed of greigite (Fe3S4), rather than magnetite (Fe3O4). In aquatic habitats, high concentrations of both kinds of magnetotactic bacteria are usually found near the oxic–anoxic transition zone (OATZ). In freshwater environments the OATZ is usually at the sediment–water interface. In marine environments it is displaced up into the water column. Since some bacteria prefer more oxygen and others prefer less, and they both have the same kind of propulsion and orientation mechanism, one wonders why one kind of bacterium is not swimming out of the environment favorable to it. Frankel and Bazylinski (1994) proposed that the magnetic field and the magnetosomes keep the organism aligned with the field, and that they change the direction in which their flagellum rotates to move in the direction that leads them to a more favorable concentration of some desired chemical.
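The homework problem mentioned in the excerpt—showing that each bacterium contains enough magnetic material to align with the earth’s field—can be sketched numerically. The particle count and size come from the text; the magnetite magnetization, the earth’s field strength, and the temperature are nominal literature values, not taken from the excerpt:

```python
# Compare a bacterium's magnetic alignment energy (mB) with thermal energy (kT).
# Chain geometry (20 cubes, 50 nm on a side) is from the text; the magnetite
# magnetization, Earth's field, and temperature are assumed nominal values.

N = 20                # number of magnetite particles in the chain
a = 50e-9             # particle edge length (m)
M = 4.8e5             # saturation magnetization of magnetite (A/m), nominal
B = 50e-6             # Earth's magnetic field (T), nominal
kT = 1.38e-23 * 300   # thermal energy at ~300 K (J)

m = N * M * a**3      # total magnetic dipole moment (A m^2)
ratio = m * B / kT    # alignment energy relative to thermal energy

print(f"dipole moment m = {m:.2e} A m^2")
print(f"mB/kT = {ratio:.1f}")
```

With these numbers the ratio comes out well above one, which is why the chain behaves like a compass needle despite thermal agitation.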
I enjoy learning about the biology and physics of magnetotactic bacteria, but I never expected that they had anything to do with medicine. Then last month a paper published in Nature Nanotechnology discussed using these bacteria to treat cancer!
Oxygen-depleted hypoxic regions in the tumour are generally resistant to therapies. Although nanocarriers have been used to deliver drugs, the targeting ratios have been very low. Here, we show that the magneto-aerotactic migration behaviour of magnetotactic bacteria, Magnetococcus marinus strain MC-1 (ref. 4), can be used to transport drug-loaded nanoliposomes into hypoxic regions of the tumour. In their natural environment, MC-1 cells, each containing a chain of magnetic iron-oxide nanocrystals, tend to swim along local magnetic field lines and towards low oxygen concentrations based on a two-state aerotactic sensing system. We show that when MC-1 cells bearing covalently bound drug-containing nanoliposomes were injected near the tumour in severe combined immunodeficient beige mice and magnetically guided, up to 55% of MC-1 cells penetrated into hypoxic regions of HCT116 colorectal xenografts. Approximately 70 drug-loaded nanoliposomes were attached to each MC-1 cell. Our results suggest that harnessing swarms of microorganisms exhibiting magneto-aerotactic behaviour can significantly improve the therapeutic index of various nanocarriers in tumour hypoxic regions.
The IOP website physicsworld.com published an article by Belle Dumé describing this study. It begins
Bacteria that respond to magnetic fields and low oxygen levels may soon join the fight against cancer. Researchers in Canada have done experiments that show how magneto-aerotactic bacteria can be used to deliver drugs to hard-to-reach parts of tumours. With further development, the method could be used to treat a variety of solid tumours, which account for roughly 85% of all cancers.
A similar article, also by Dumé, can be found on medicalphysicsweb.com:
As cancer cells proliferate, they consume large amounts of oxygen. This results in oxygen-poor regions in a tumour. It is notoriously difficult to treat these hypoxic regions using conventional pharmaceutical nanocarriers, such as liposomes, micelles and polymeric nanoparticles.

Now, a team led by Sylvain Martel of the NanoRobotics Laboratory at the Polytechnique Montréal has developed a method that exploits the magnetotactic bacteria Magnetococcus marinus (MC-1) to overcome this problem.
Pretty cool stuff.

Friday, September 16, 2016

Rutherford Scattering and the Differential Cross Section

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the differential cross section.
We may wish to know the probability that particles…are scattered in a certain direction. We have to consider the probability that they are scattered into a small solid angle dΩ. In this case, σ is called the differential scattering cross section and is often written as
dσ/dΩ.
The units of the differential scattering cross section are m² sr⁻¹. The differential cross section depends on θ, the angle between the directions of travel of the incident and scattered particles.
Perhaps the most famous differential cross section is the Rutherford scattering formula. Ernest Rutherford (whom I have discussed before in this blog) derived this formula to explain the results of his alpha particle scattering experiments, in which he fired alpha particles at a thin metal foil and determined the angle of scattering by observing the light produced when a scattered particle hit a zinc sulfide screen. His formula assumes a nonrelativistic alpha particle scatters off a massive (no recoil), spinless, bare, positively charged target nucleus. Below is a new homework problem providing some practice with the Rutherford formula.
Problem 16 ½. An example of a differential cross section is the Rutherford scattering formula
dσ/dΩ = A csc⁴(θ/2).
(a) Plot dσ/dΩ versus θ over the range 0 to π.
(b) Repeat part (a) using semilog graph paper.
(c) The constant A is equal to
A = [qQ/(16πε₀E)]²
where q and Q are the charges of the alpha particle and nucleus, and E is the alpha particle energy. Show that A has the units of m² sr⁻¹. Hint: steradians, like radians, are dimensionless (see Appendix A).
(d) Interpret what happens physically when θ is π. What is the value of the cosecant of π/2? Write A in terms of the distance of closest approach of an alpha particle to the nucleus. Hint: see Chapter 17, Problem 2.
(e) Note that dσ/dΩ goes to infinity as θ goes to zero. Interpret this result physically. What assumption did Rutherford make that may be responsible for this unphysical behavior?
(f) Integrate dσ/dΩ over θ from 0 to π. You may need to use a good table of integrals. Explain your result (which may surprise you) physically.
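Readers who want to explore parts (a), (b), and (f) numerically can try a sketch like the one below. The constant A is set to one (arbitrary units), since neither the shape of the curve nor the divergence of the integral depends on its value:

```python
import numpy as np

# Numerical exploration of the Rutherford formula, parts (a), (b), and (f).
# A = 1 in arbitrary units; the qualitative behavior is independent of A.
A = 1.0

def dsigma_dOmega(theta):
    """Rutherford differential cross section, A * csc^4(theta/2)."""
    return A / np.sin(theta / 2)**4

# (a), (b): tabulate the curve; on a semilog plot it falls by several
# decades between small angles and theta = pi.
for theta in (0.1, 0.5, 1.0, 2.0, np.pi):
    print(f"theta = {theta:5.2f} rad  ->  dsigma/dOmega = {dsigma_dOmega(theta):10.3e}")

# (f): total cross section, sigma = 2*pi * integral of (dsigma/dOmega) sin(theta) dtheta.
# Near theta = 0 the integrand behaves like 8/theta^3, so the integral diverges
# as the lower cutoff eps shrinks toward zero.
for eps in (0.1, 0.01, 0.001):
    th = np.linspace(eps, np.pi, 200_000)
    f = np.sin(th) * dsigma_dOmega(th)
    sigma = 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(th))  # trapezoid rule
    print(f"cutoff eps = {eps}:  sigma = {sigma:.1f}")
```

The growing values of sigma as the cutoff shrinks illustrate the surprise in part (f): the total Rutherford cross section is infinite, a consequence of the assumed bare (unscreened) nuclear charge.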
The Making of the Atomic Bomb,
by Richard Rhodes.
Here is the history of the Rutherford scattering experiment, as told by Richard Rhodes in The Making of the Atomic Bomb.
[Hans] Geiger [Rutherford’s assistant] went to work on alpha scattering, aided by Ernest Marsden, then an eighteen-year-old Manchester undergraduate. They observed alpha particles coming out of a firing tube and passing through foils of such metals as aluminum, silver, gold, and platinum. The results were generally consistent with expectation: alpha particles might very well accumulate as much as two degrees of total deflection bouncing around among atoms of the plum-pudding sort [an early model of atomic structure proposed by J. J. Thomson]. But the experiment was troubled with stray particles. Geiger and Marsden thought molecules in the walls of the firing tube might be scattering them. They tried eliminating the strays by narrowing and defining the end of the firing tube with a series of graduated metal washers. That proved no help.

Rutherford wandered into the room. The three men talked over the problem. Something about it alerted Rutherford’s intuition for promising side effects. Almost as an afterthought he turned to Marsden and said, “See if you can get some effect of alpha particles directly reflected from a metal surface.” Marsden knew that a negative result was expected—alpha particles shot through thin foils, they did not bounce back from them—but that missing a positive result would be an unforgivable sin. He took great care to prepare a strong alpha source. He aimed the pencil-narrow beam of alphas at a forty-five degree angle onto a sheet of gold foil. He positioned his scintillation screen on the same side of the foil, beside the alpha beam, so that a particle bouncing back would strike the screen and register as a scintillation. Between firing tube and screen he interposed a thick lead plate so no direct alpha particles could interfere.

Immediately, and to his surprise, he found what he was looking for. “I remember well reporting the result to Rutherford,” he wrote, “…when I met him on the steps leading to his private room, and the joy with which I told him…”

Rutherford had been genuinely astonished by Marsden’s results. “It was quite the most incredible event that has ever happened to me in my life,” he said later. “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration I realized that this scattering backwards must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greatest part of the mass of the atom was concentrated in a minute nucleus.”

Friday, September 9, 2016

The Biomechanics of Solids and Fluids: The Physics of Life

“The Biomechanics of Solids and Fluids:
The Physics of Life,”
by David Alexander.
This summer a review article about biomechanics by David Alexander appeared in the European Journal of Physics: “The Biomechanics of Solids and Fluids: The Physics of Life” (Volume 37, Article 053011, 2016). It serves as an excellent supplement for much of the material in Chapter 1 (Mechanics) in Intermediate Physics for Medicine and Biology. It describes the biomechanics of solids (elasticity) and fluids (fluid mechanics).
Biomechanics borrows and extends engineering techniques to study the mechanical properties of organisms and their environments. Like physicists and engineers, biomechanics researchers tend to specialize on either fluids or solids (but some do both). For solid materials, the stress–strain curve reveals such useful information as various moduli, ultimate strength, extensibility, and work of fracture. Few biological materials are linearly elastic so modified elastic moduli are defined. Although biological materials tend to be less stiff than engineered materials, biomaterials tend to be tougher due to their anisotropy and high extensibility. Biological beams are usually hollow cylinders; particularly in plants, beams and columns tend to have high twist-to-bend ratios. Air and water are the dominant biological fluids. Fluids generate both viscous and pressure drag (normalized as drag coefficients) and the Reynolds number (Re) gives their relative importance. The no-slip conditions leads to velocity gradients (‘boundary layers’) on surfaces and parabolic flow profiles in tubes. Rather than rigidly resisting drag in external flows, many plants and sessile animals reconfigure to reduce drag as speed increases. Living in velocity gradients can be beneficial for attachment but challenging for capturing particulate food. Lift produced by airfoils and hydrofoils is used to produce thrust by all flying animals and many swimming ones, and is usually optimal at higher Re. At low Re, most swimmers use drag-based mechanisms. A few swimmers use jetting for rapid escape despite its energetic inefficiency. At low Re, suspension feeding depends on mechanisms other than direct sieving because thick boundary layers reduce effective porosity. Most biomaterials exhibit a combination of solid and fluid properties, i.e., viscoelasticity. Even rigid biomaterials exhibit creep over many days, whereas pliant biomaterials may exhibit creep over hours or minutes. 
Instead of rigid materials, many organisms use tensile fibers wound around pressurized cavities (hydrostats) for rigid support; the winding angle of helical fibers greatly affects hydrostat properties. Biomechanics researchers have gone beyond borrowing from engineers and adopted or developed a variety of new approaches—e.g., laser speckle interferometry, optical correlation, and computer-driven physical models—that are better-suited to biological situations.
One of my favorite parts of the review is the references. Alexander cites many of his own publications, including his book Nature’s Flyers: Birds, Insects, and the Biomechanics of Flight. For some reason, he didn’t cite his recent book On the Wing: Insects, Pterosaurs, Birds, Bats and the Evolution of Animal Flight. By the way, David Alexander is not the same as R. McNeill Alexander, who published Principles of Animal Locomotion, which is also cited in the review, and who died earlier this year. The review cites several works by Mark Denny, although not my favorite: Air and Water. Alexander cites over a dozen works by Steven Vogel, whose Life in Moving Fluids appears on my ideal bookshelf. Finally, he writes that “James Gordon’s book Structures, or Why Things Don’t Fall Down (Gordon 1978) is one of the most entertaining and readable introductions to a technical topic ever written.” I read Gordon’s book many years ago and had almost forgotten it. Alexander is right, it’s a gem.

In Figure 1.21, Russ Hobbie and I show a typical stress-strain curve. Alexander shows similar curves, and analyzes them in more detail. Like our book, he develops the concepts of Young’s modulus, shear modulus, strength, and Poisson’s ratio. Alexander introduces another concept: the strain energy density, which is the area under the stress-strain curve. Stress has units of N/m², and strain is dimensionless, so the strain energy density has units of N/m² = J/m³. Alexander writes “this key value measures how much work a material absorbs before breaking, and is sometimes referred to as ‘toughness’. Perhaps counterintuitively, some very hard, rigid materials are not very tough, whereas many floppy, easily extended materials are very tough.”
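The counterintuitive point about toughness can be illustrated with two made-up linear stress-strain curves, one stiff and brittle, the other compliant and extensible. The moduli and breaking strains below are illustrative values, not data from Alexander’s review:

```python
import numpy as np

# Strain energy density ("toughness") as the area under a stress-strain curve.
# Both materials here are assumed linearly elastic up to failure; the moduli
# and breaking strains are illustrative, not measured values.

def toughness(strain, stress):
    """Area under the stress-strain curve, in J/m^3 (trapezoid rule)."""
    return np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain))

# Stiff, brittle material: E = 10 GPa, breaks at 1% strain.
eps1 = np.linspace(0, 0.01, 100)
sig1 = 10e9 * eps1

# Compliant, extensible material: E = 10 MPa, breaks at 100% strain.
eps2 = np.linspace(0, 1.0, 100)
sig2 = 10e6 * eps2

print(f"stiff material:     {toughness(eps1, sig1):.2e} J/m^3")
print(f"compliant material: {toughness(eps2, sig2):.2e} J/m^3")
```

Even though the stiff material is a thousand times more rigid, the compliant one absorbs roughly ten times more energy per unit volume before breaking, just as the quote describes.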

The section on fluid dynamics covers much of the same ground as IPMB. It also discusses high Reynolds number flow, including turbulence, flow separation, boundary layers, lift, and drag. These are fascinating topics, and are vital for understanding animal flight, but they do not impact the low Reynolds number flow that Russ and I focus on.

One topic that Russ and I give a brief mention is viscoelasticity. Alexander spends more time on this interesting subject.
Most biological materials do not fit perfectly into the solid or fluid categories as engineers and physicists have usually defined them. Many biological structures that we would ordinarily consider solid actually have a time-dependent response to loading that gives them a partly fluid character. A proper Hookean material behaves the same way whether it is loaded for a second or a week: remove the load and it returns to its original shape. A viscoelastic solid, however, displays a property called creep: apply a load briefly and the material will spring back just as if it were Hookean. Apply the same load for a prolonged period, however, and the material will continue to deform gradually. When the load is removed, the material may have acquired a permanent deformation, and if so, the longer it is loaded, the greater the permanent deformation.
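Creep can be sketched with the standard Kelvin-Voigt model (a spring of modulus E in parallel with a dashpot of viscosity η), a textbook idealization rather than a model taken from Alexander’s review; the parameter values below are arbitrary:

```python
import math

# Kelvin-Voigt viscoelastic solid: spring (modulus E) in parallel with a
# dashpot (viscosity eta). Under a constant stress sigma0, the strain creeps
# toward sigma0/E with time constant tau = eta/E, instead of jumping there
# instantly as a Hookean solid would. All parameter values are illustrative.

E = 1e6        # elastic modulus (Pa)
eta = 1e7      # viscosity (Pa s)
sigma0 = 1e4   # applied step stress (Pa)
tau = eta / E  # creep time constant (s)

def strain(t):
    """Creep response to a step stress applied at t = 0."""
    return (sigma0 / E) * (1 - math.exp(-t / tau))

for t in (0.0, tau, 3 * tau, 10 * tau):
    print(f"t = {t:6.1f} s  ->  strain = {strain(t):.4f}")
```

The strain rises gradually toward its Hookean value sigma0/E over a few time constants, which is the signature of creep described in the quote.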
Alexander’s review is a great place to go for more about biomechanics after reading Chapter 1 of IPMB. I highly recommend it.

Friday, September 2, 2016

Whiplash

Last week, my wife Shirley and I were in an automobile accident. We suffered no serious injuries, thank you, but the car was totaled and we were sore for several days. After the obligatory reflections on the meaning of life, I began to think critically about the biomechanics of auto accident injuries.

Our car was at a complete stop, and the idiot in the other car hit us from behind. The driver’s side air bag deployed and the impact pushed us off to the right of the road (we hit the car in front of us in the process), while the idiot’s car ended up on the opposite shoulder. It looked a little like this; we were m2 and the idiot was m1:
The collision dynamics of our car accident.
The police came and our poor car was carried off on a wrecker to a junk yard. Shirley and I walked home; the accident occurred about a quarter mile from our house.

My neck is still stiff. Presumably I suffered a classic—but not too severe—whiplash. Although Intermediate Physics for Medicine and Biology does not discuss whiplash, it does cover most of the concepts needed to understand it: acceleration, shear forces, torques, and biomechanics. Paul Davidovits describes whiplash briefly in Physics in Biology and Medicine. From the second edition:
5.7  Whiplash Injury

Neck bones are rather delicate and can be fractured by even a moderate force. Fortunately the neck muscles are relatively strong and are capable of absorbing a considerable amount of energy. If, however, the impact is sudden, as in a rear-end collision, the body is accelerated in the forward direction by the back of the seat, and the unsupported neck is then suddenly yanked back at full speed. Here the muscles do not respond fast enough and all the energy is absorbed by the neck bones, causing the well-known whiplash injury.
You can learn more about the physics of whiplash in the paper “Kinematics of a Head-Neck Model Simulating Whiplash” published in The Physics Teacher (Volume 46, Pages 88–91, 2008).
In a typical rear-end collision, the vehicle accelerates forward when struck and the torso is pushed forward by the seat. The structural response of the cervical spine is dependent upon the acceleration-time pulse applied to the thoracic spine and interaction of the head and spinal components. During the initial phases of the impact, it is obvious that the lower cervical vertebrae move horizontally faster than the upper ones. The shear force is transmitted from the lower cervical vertebrae to the upper ones through soft tissues between adjacent vertebrae one level at a time. This shearing motion contributes to the initial development of an S-shape curvature of the neck (the upper cervical spine undergoes flexion while the lower part undergoes extension), which progresses to a C-shape curvature. At the end of the loading phase, the entire head-neck complex is under the extension mode with a single curvature. This implies the stretching of the anterior and compression of the posterior parts of the cervical spine.
Here are links to videos showing what happens to the upper spine during whiplash:




Injury from whiplash depends on the acceleration. What sort of acceleration did my head undergo? I don’t know the speed of the idiot’s car, but I will guess it was 25 miles per hour, which is equal to about 11 meters per second. Most of the literature I have read suggests that the acceleration resulting from such impacts occurs in about a tenth of a second. Acceleration is change in speed divided by change in time (see Appendix B in IPMB), so (11 m/s)/(0.1 s) = 110 m/s², which is about 11 times the acceleration of gravity, or 11 g. Yikes! Honestly, I don’t know the idiot’s speed. He may have been slowing down before he hit me, but I don’t recall any skidding noises just before impact.
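Here is that back-of-the-envelope estimate as a short calculation; the 25 mph impact speed and 0.1 s duration are the guesses from the paragraph above, not measured values:

```python
# Rough whiplash acceleration estimate: speed change over duration.
# The 25 mph speed and 0.1 s duration are guesses, as discussed in the text.

mph_to_ms = 0.44704   # miles per hour to meters per second
v = 25 * mph_to_ms    # speed change (m/s), about 11 m/s
dt = 0.1              # duration of the acceleration (s)
g = 9.8               # acceleration of gravity (m/s^2)

a = v / dt            # average acceleration (m/s^2)
print(f"speed change: {v:.1f} m/s")
print(f"acceleration: {a:.0f} m/s^2, or about {a / g:.0f} g")
```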

What lesson do I take from this close call with death? My hero Isaac Asimov—who wrote over 500 books in his life—was asked what he would do if told he had only six months to live. His answer was “type faster.” Sounds like good advice to me!

Our car, after the accident.

Friday, August 26, 2016

Everything's Up To Date in Kansas City

Union Station.
I posted the last two entries in this blog while on a trip to Kansas City to visit my parents. I didn’t grow up in the Kansas City area, but I did graduate from Shawnee Mission South High School in Overland Park, Kansas, and I was a physics major at the University of Kansas in Lawrence. My dad is a native of Kansas City, Missouri, while my mom moved to Kansas City, Kansas when young and attended Wyandotte High School.


Kauffman Center
for the Performing Arts.

We had a great visit, including a ride on Kansas City’s new streetcar, which travels a route along Main Street from Union Station, past the Power and Light District, within a few blocks of the Kauffman Center, to the River Market. We also had a great pork sandwich at Pigwich in the East Bottoms. Kansas City is booming.

Liberty Memorial.
Is there any connection between Kansas City and medical physics? Yes, there is. Rockhurst University, a liberal arts college located a mile west of where my dad grew up on Swope Parkway, offers an undergraduate program in the physics of medicine, which is similar to the medical physics major we offer at Oakland University. I thought the readers of Intermediate Physics for Medicine and Biology might like to see how a school other than Oakland structures its undergraduate medical physics curriculum.

From their Physics of Medicine website:
POM Program Overview: To suit your interests and career goals the POM Program has three program choices:
  • Medical Physics Major - Major Track designed for students wishing to enter graduate school in Medical Physics
  • Physics of Medicine (POM) Pre-Professional Major - Major Track designed for students wishing to enter a Medical/Healthcare Graduate Program
  • Physics of Medicine (POM) Minor - Minor designed to complement pre-healthcare or pre-medicine program
Advantages to students of the POM Program are:
  1. Deeper understanding of physics principles and their applicability to a medical or health field career.
  2. Stronger post-graduate application to competitive health field programs.
  3. Undergraduate research opportunities – potential for capstone area or future graduate work. 
  4. Value to students of interdisciplinary study, allowing them to tie together coursework in science/math with professional goals. 
All Physics of Medicine Coursework is designed to complement the Scientific Foundations for Future Physicians Report, Report of the AAMC-HHMI Committee (2009).
Some courses specific to the program are:
PH 3200 Physics of the Body I: This course expands on the physics principles developed in introductory physics courses through an in-depth study of mechanics, fluids and thermodynamics as they are applied to the human body. Areas of study include the following: biomechanics (torque, force, motion and lever systems of the body; application of vector analysis of human movement to video), thermodynamics and heat transfer (food intake and mechanical efficiency) and the pulmonary system (pressure, volume and compliance relationships). Guest speakers from the medical community will be invited. [This course appears to cover the material in Chapters 1-3 in IPMB]

PH 3210 Physics of the Body II: This course is a continuation of Physics of the Body I with a concentration on the cardiovascular system, electricity and wave motion. Areas of study include the following: cardiovascular system (heart as a force pump, blood flow and pressure), electricity in the body (action potentials, resistance-capacitance circuit of nerve impulse propagation, EEG, EKG, EMG), and sound (hearing, voice production, sound transfer and impedance, ultrasound – transmission and reflection). In addition, students complete a guided, in-depth, individual investigation on a topic pertinent to Physics of the Body. Guest speakers from the medical community will be invited. [Approximately Chapters 6, 7, and 13 in IPMB. PH 3200 and 3210 together are similar to Oakland University’s PHY 325, Biological Physics]

PH 3240 Physics of Medical Imaging: This course focuses on an introduction to areas of modern physics required for an understanding of contemporary medical diagnostic and treatment procedures. Topics include a focus on the physics underlying modern medical imaging instruments: the EM Spectrum, X-Ray, CT, Gamma Camera, SPECT, PET, MRI and hybrid instrumentation. In this course, students learn about the physics involved in how these diagnostic and therapeutic instruments work as well as the numerous physics and patient factors that contribute to the choice of instrument for diagnosis. There will be field trips to local hospitals and medical imaging facilities and invited guest speakers. [Chapters 15-18 in IPMB; similar to OU’s PHY 326, Medical Physics]

PH 4400 Optics: This course covers both the geometric and physical properties of optical principles including optics of the eye, lasers, fiber optics, and use of endoscopy in medicine. Students will complete a final optics research project in which they relate content learned to an area of optics research. [Chapter 14 in IPMB. We have no comparable course at OU. We offer a standard optics class, but with no biomedical emphasis. This class intrigues me.]

PH 4900 Statistics for the Health Sciences: This course introduces the basic principles and methods of health statistics. Emphasis is on fundamental concepts and techniques of descriptive and inferential statistics with applications in health care, medicine and public health. Core content includes research design, statistical reasoning and methods. Emphasis will be on basic descriptive and inferential methods and practical applications. Data analysis tools will include descriptive statistics and graphing, confidence intervals, basic rules of probability, hypothesis testing for means and proportions, and regression analysis. Students will use specialized statistical software to conduct data analysis of health related data sets. [Nothing exactly like this in IPMB. At OU, we require all medical physics majors to take a statistics class, taught by the Department of Mathematics and Statistics.]

PH 4900 Research in Physics of Medicine: Independent student research on coursework from Physics of Medicine Program. Students will choose topic from Physics of Medicine Program coursework to investigate further and prepare for presentation submission. This course will serve as a capstone course for Medical Physics and Physics of Medicine Pre-Professional Majors. [I am a big supporter of undergraduate research. At OU, medical physics majors can satisfy their capstone requirement by either research or our seminar class.]

MT 3260 Mathematical Modeling in Medicine: Students will build mathematical models and use these models to answer questions in various areas of medicine. Topics may include: Epidemic modeling, genetics, drug treatment, bacterial population modeling, and neural systems/networks. [IPMB is focused on mathematical modeling. I teach PHY 325 and 326 as workshops on mathematical modeling in biology and medicine.]
The Rockhurst physics of medicine minor looks like an idea I am tempted to steal. Their requirements are:
To complete the Physics of Medicine Minor:
Prerequisites: one year of introductory/general physics and Calculus I (complete in first two years)
Upper Division Courses: complete 4 upper-division POM courses (12 hrs. total)
Required:
  • PH 3200: Physics of the Body I (3 Hours, Offered Fall Semester Odd years) 
  • PH 3210: Physics of the Body II (3 Hours, Offered Spring Semester even years)
Choose 2 from the following:
  • PH 3240: Physics of Medical Imaging (3 Hours, Offered Spring Semester Odd Years) 
  • PH 4400: Optics (3 hours, Offered Fall Semester Even Years) 
  • MT 3260: Mathematical Modeling in Medicine (3 Hours, Offered Fall Semester Even years)
  • PH 4900: Statistics for the Health Sciences (3 Hours, Offered Spring semesters)
An OU version might be Biological Physics (PHY 325) and Medical Physics (PHY 326), plus their prerequisites: two semesters of introductory physics and two semesters of calculus.

The Nelson Art Gallery.
I enjoy my trips to Kansas City because there is a lot to do and see there, from the Nelson Art Gallery to Crown Center to the Liberty Memorial and the National World War I Museum. I remember in high school attending shows at the Starlight Theater in Swope Park, and watching many Kansas City Royals baseball games at Kauffman Stadium (where I saw George Brett play in the World Series!). The Truman Library is in nearby Independence, Missouri.

The Country Club Plaza.
I didn’t expect to find a hub of medical physics education in Kansas City, but there it is. In addition to the Rockhurst program, the Kansas University Medical Center has a CAMPEP-accredited clinical medical physics residency (while driving on I-35, I could see cranes putting up a new KU Med Center building), and the Stowers Institute, less than a mile north of Rockhurst and just east of the Country Club Plaza, has a strong biomedical research program. As the song says, Everything's Up To Date in Kansas City.

Kansas City celebrating the 2015 Royals World Series Championship.

Friday, August 19, 2016

How to Explain Why Unequal Anisotropy Ratios is Important Using Pictures but No Mathematics

Ten years ago, at the IEEE Engineering in Medicine and Biology Society Annual Conference in New York City, I presented a paper titled “How to Explain Why Unequal Anisotropy Ratios is Important Using Pictures but No Mathematics.” Although it was only a four-page conference proceeding, it remains one of my favorite papers.
I. Introduction 

The bidomain model describes the electrical properties of cardiac tissue. The term “bidomain” arises because the model accounts for two (“bi”) spaces (“domains”): intracellular and extracellular. Both spaces are anisotropic; the electrical conductivity depends on the direction relative to the myocardial fibers. Moreover, the intracellular space is more anisotropic than the extracellular space, a condition referred to in the literature as “unequal anisotropy ratios.” This condition has important consequences for the electrical behavior of the heart.

Many papers describe the implications of unequal anisotropy ratios. The mathematical derivations and numerical calculations in these reports emphasize the consequences of unequal anisotropy ratios, but they often fail to explain physically why these consequences occur. For example, Sepulveda et al. discovered that during unipolar stimulation, depolarization occurs under the cathode but hyperpolarization exists adjacent to it along the fiber direction. The hyperpolarized regions affect the mechanism of excitation, the shape of the strength-interval curve, and the induction of reentry. Yet, when I am asked why the hyperpolarization appears, I find it difficult to give an intuitive, nonmathematical answer.

In this paper, I try to answer the “why” questions that arise from the bidomain model. I present no new results, but many old results are clarified. My hope is that the reader will develop the intuition necessary to understand qualitatively how cardiac tissue behaves, without having to resort to lengthy mathematical derivations or numerical calculations.
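The “unequal anisotropy ratios” condition in the excerpt is easy to make concrete. The conductivities below are nominal values of the kind commonly used in bidomain modeling; they are illustrative assumptions, not numbers from this particular paper:

```python
# The "unequal anisotropy ratios" condition: the intracellular space is more
# anisotropic than the extracellular space. The conductivities (S/m) are
# nominal bidomain modeling values, used here only for illustration.

g_iL, g_iT = 0.2, 0.02   # intracellular: along / across the fibers
g_eL, g_eT = 0.2, 0.08   # extracellular: along / across the fibers

ratio_i = g_iL / g_iT    # intracellular anisotropy ratio
ratio_e = g_eL / g_eT    # extracellular anisotropy ratio

print(f"intracellular anisotropy ratio: {ratio_i:.1f}")
print(f"extracellular anisotropy ratio: {ratio_e:.1f}")
print("unequal anisotropy ratios:", ratio_i != ratio_e)
```

With these values the intracellular space is four times more anisotropic than the extracellular space; if the two ratios were equal, many of the interesting effects described in the paper (such as the polarization pattern around an insulating obstacle) would disappear.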
Parts of this article have worked their way into Intermediate Physics for Medicine and Biology. For instance, the article explains how a wave front propagating through cardiac tissue creates a magnetic field. This analysis is reproduced as Problem 19 in Chapter 8 on biomagnetism.

Problem 50 in Chapter 7 examines the transmembrane potential induced in cardiac tissue when an electric shock is applied in the presence of an insulating obstacle. I love how this example highlights the importance of unequal anisotropy ratios.
Consider an insulating cylinder in an otherwise uniform tissue with straight fibers (Fig. 7). An electric field is applied from left to right. Far from the insulator, the current is in the x-direction and is distributed equally between the intracellular and extracellular spaces. As current approaches the insulator, it turns left to circle around the obstacle. The current is then flowing approximately perpendicular to the fibers, so most of it is extracellular. As the current turns right to flow once again in the x-direction, it is parallel to the fibers and is again distributed more or less equally between the two spaces. As current leaves and then reenters the intracellular space, it causes depolarization and then hyperpolarization. The transmembrane potential distribution surrounding the insulator is even in y and odd in x. The result is the complex pattern of polarization surrounding an insulator in cardiac tissue during electrical stimulation.
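The redistribution of current described above can be made concrete with a toy calculation. The conductivities below are nominal values of the sort often used in bidomain simulations (they are my assumed round numbers, not taken from the paper); the point is only that the intracellular space is much more anisotropic than the extracellular space, so the intracellular share of the current drops when flow is across the fibers.

```python
# Toy estimate of how current divides between the intracellular and
# extracellular spaces, parallel vs. perpendicular to the fibers.
# Conductivities (S/m) are illustrative values, not measured data.
g_ix, g_iy = 0.2, 0.02   # intracellular: along (x) and across (y) the fibers
g_ex, g_ey = 0.2, 0.08   # extracellular: along and across the fibers

ratio_i = g_ix / g_iy    # intracellular anisotropy ratio (10)
ratio_e = g_ex / g_ey    # extracellular anisotropy ratio (2.5): unequal!

# Fraction of the total current carried intracellularly when the two
# spaces act as parallel conductors driven by the same electric field.
f_parallel = g_ix / (g_ix + g_ex)   # along the fibers: 0.5
f_perp     = g_iy / (g_iy + g_ey)   # across the fibers: 0.2

print(f"anisotropy ratios: intracellular {ratio_i:.1f}, extracellular {ratio_e:.1f}")
print(f"intracellular current fraction: parallel {f_parallel:.2f}, perpendicular {f_perp:.2f}")
```

With these numbers, current crossing the fibers is 80% extracellular, so current must leave the intracellular space on one side of the obstacle and reenter it on the other, producing the polarization pattern in Fig. 7.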
A figure from “How to explain why unequal anisotropy ratios is important using pictures but no mathematics,” showing how polarization is caused by an insulating obstacle.
Fig. 7. Polarization caused by an insulating obstacle.
This figure explains the results observed in [18].
The role of theoretical analysis in biology and medicine is to make predictions that can be tested experimentally. My former PhD advisor John Wikswo and his team used optical mapping to measure the transmembrane potential around an obstacle during a shock. Their results are shown in the picture below. The bottom line: the prediction and the experiment are consistent. Physics works!

Optical mapping to measure the transmembrane potential around an obstacle during a shock, from: Woods et al. "Virtual Electrode Effects Around an Artificial Heterogeneity During Field Stimulation of Cardiac Tissue" (Heart Rhythm, 3:751-752, 2006).
Optical mapping to measure the transmembrane potential around an obstacle during a shock,
from: Woods et al. (Heart Rhythm, 3:751-752, 2006).
One graduate student, Marcella Woods, was involved in both of the projects I mentioned. She performed the theoretical analysis of the magnetic field produced by wave fronts in cardiac muscle under my direction when I was on the faculty of Vanderbilt University. After I left, she worked with Wikswo and carried out the experiments shown above.

Friday, August 12, 2016

Have We Reached the Athletic Limits of the Human Body?

The Olympics are in full swing this week, giving us in the United States a brief respite from our nasty presidential campaign. As you might guess, I view the Olympics through the lens of biological physics. One question that physics can help answer is: Have we reached the athletic limits of the human body? Can sprinters run faster and faster, or have we reached the physical and physiological limit? Can pole vaulters vault higher? Can long jumpers jump longer? Can swimmers swim quicker? An article in last week’s Scientific American by Bret Stetka tries to answer these questions.
At this month’s summer Olympic Games in Rio, the world's fastest man, Usain Bolt—a six-foot-five Jamaican with six gold medals and the sinewy stride of a gazelle—will try to beat his own world record of 9.58 seconds in the 100-meter dash. If he does, some scientists believe he may close the record books for good. Whereas myriad training techniques and technologies continue to push the boundaries of athletics, and although strength, speed and other physical traits have steadily improved since humans began cataloguing such things, the slowing pace at which sporting records are now broken has researchers speculating that perhaps we’re approaching our collective physiological limit—that athletic achievement is hitting a biological brick wall.
The article cites a 2008 paper by Mark Denny, the author of Air and Water, a book often cited in Intermediate Physics for Medicine and Biology. Denny suggests that there are limits, and we are closing in on them.
Are there absolute limits to the speed at which animals can run? If so, how close are present-day individuals to these limits? I approach these questions by using three statistical models and data from competitive races to estimate maximum running speeds for greyhounds, thoroughbred horses and elite human athletes. In each case, an absolute speed limit is definable, and the current record approaches that predicted maximum. While all such extrapolations must be used cautiously, these data suggest that there are limits to the ability of either natural or artificial selection to produce ever faster dogs, horses and humans. Quantification of the limits to running speed may aid in formulating and testing models of locomotion.
Yet Denny was not overly cautious in his paper. He predicted minimum times for many races, including the 100 m dash. Stetka writes
Bolt hopes to beat the researcher’s [that is, Denny’s] fastest predicted 100-meter dash time of 9.48 seconds. Unfortunately, according to Denny, the now notably older sprinter may have missed his chance. The sprinter was a chasm ahead of the pack in a semifinals race at the 2008 Beijing Olympics when he slowed up before crossing the finish line. “I think had he kept going at full speed he would’ve set an all-time, unbeatable world record,” Denny speculates.
Then Stetka quotes Denny as saying
“When I published my paper, the feedback I got was that this was going to destroy the Olympics,” he recollects. “That’s like saying the 1962 Brazilian soccer team was the best ever so no one’s ever going to watch the World Cup again. But if Bolt can run the 100 in 9.47 seconds and beat my prediction, then hats off to him. I think there’s always going to be the lure of ‘maybe someone’s going to do better.’”
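Denny fits several statistical models to historical race data; as a toy illustration of the underlying idea, here is a simple asymptotic progression t(year) = t∞ + A·e^(−k·(year − 1968)), anchored to Denny's predicted limit of 9.48 s, Jim Hines's 9.95 s world record (1968), and Bolt's 9.58 s (2009). This is my own sketch of an asymptotic model, not Denny's actual analysis.

```python
import math

# Illustrative asymptotic model of the 100 m world-record progression:
#   t(year) = t_inf + A * exp(-k * (year - 1968))
# t_inf = 9.48 s is Denny's predicted minimum; A and k are chosen so the
# curve passes through the 1968 (9.95 s) and 2009 (9.58 s) records.
t_inf = 9.48
A = 9.95 - t_inf                                   # 0.47 s above the limit in 1968
k = math.log(A / (9.58 - t_inf)) / (2009 - 1968)   # decay rate per year

def record(year):
    """Modeled world-record time (s) for the 100 m dash."""
    return t_inf + A * math.exp(-k * (year - 1968))

for year in (1968, 2009, 2050):
    print(f"{year}: {record(year):.3f} s")
```

The model predicts only about 0.02 s of improvement left by 2050, which captures Denny's point: the closer the record gets to the asymptote, the slower the progress.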
I plan to watch the Olympics and see if humans can run faster than ever before. I’m a big fan of Mark Denny, but I’ll be rooting for Bolt (or Gatlin) to beat Denny's prediction.

Enjoy!


P.S. A long, frustrated sigh goes to Michael Phelps and the other USA swimmers engaged in “cupping” therapy pseudoscience. Oh, where is Bob Park when we need him! Ignore the quackery and gibberish and focus on the swimming.

Friday, August 5, 2016

Zapping Their Brains at Home

A screenshot of Zapping Their Brains at Home, by Anna Wexler.
“Zapping Their Brains at Home,”
by Anna Wexler.
A couple weeks ago, Anna Wexler published an article in the New York Times titled “Zapping Their Brains at Home.”
Earlier this month, in the journal Annals of Neurology, four neuroscientists published an open letter to practitioners of do-it-yourself brain stimulation. These are people who stimulate their own brains with low levels of electricity, largely for purposes like improved memory or learning ability. The letter, which was signed by 39 other researchers, outlined what is known and unknown about the safety of such noninvasive brain stimulation, and asked users to give careful consideration to the risks.
I worked on brain stimulation when at the National Institutes of Health, and Russ Hobbie and I analyze neural stimulation in Intermediate Physics for Medicine and Biology. So what is my reaction to these do-it-yourselfers? My first thought was “Yikes…this sounds like trouble!” But the more I think about it, the less worried I am.

We are talking about transcranial direct current stimulation, which uses weak currents applied to the scalp. I have always been surprised that such tiny currents have any effect at all; see my editorial “What Does the Ratio of Injected Current to Electrode Area Not Tell Us About tDCS?” (Clinical Neurophysiology, Volume 120, Pages 1037–1038, 2009). My advice to the do-it-yourselfers is not so much “be careful” but rather “don’t get your hopes up.”
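The quantity my editorial's title refers to is easy to compute; the editorial's point is that this average alone does not determine the electric field reaching the brain. As a back-of-the-envelope sketch (the 2 mA current and 35 cm² sponge electrode are typical published protocol values I am assuming here, not numbers from the editorial):

```python
# Average current density under a tDCS electrode.
# Assumed typical protocol: 2 mA through a 35 cm^2 sponge electrode.
I = 2e-3      # applied current, A
A = 35e-4     # electrode area, m^2 (35 cm^2)
J = I / A     # average current density, A/m^2

print(f"average current density under the electrode ≈ {J:.2f} A/m^2")
```

Most of that current is shunted through the scalp, so the current density actually reaching the cortex is far smaller, which is why the injected-current-to-area ratio by itself tells us so little.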

Of the four coauthors on the letter in Annals of Neurology, the only one I know is Alvaro Pascual-Leone, who I worked with while at NIH and who we cite several times in IPMB. Below I list the main points raised in the letter:
  • Stimulation affects more of the brain than a user may think 
  • Stimulation interacts with ongoing brain activity, so what a user does during tDCS changes its effects 
  • Enhancement of some cognitive abilities may come at the cost of others 
  • Changes in brain activity (intended or not) may last longer than a user may think 
  • Small differences in tDCS parameters can have a big effect 
  • tDCS effects are highly variable across different people 
  • The risk/benefit ratio is different for treating diseases versus enhancing function
What do I think of do-it-yourselfers in general? I have mixed feelings. Heaven help us if they start fooling around with heart defibrillators, which could be suicidal. For transcranial magnetic stimulation, I think the biggest risk would be the construction of a device that sends kiloamps of current through a coil. I have always thought that TMS is more dangerous for the physician (who often holds the coil) than for the patient. Moreover, the induced current in the brain is larger for TMS than for tDCS. I would be wary of do-it-yourself magnetic stimulation. But for D.I.Y.ers using relatively low-level electrical current applied to the scalp, if someone educates themselves on the technique and follows reasonable safety recommendations, then I don’t see it as a problem.

Wexler ends her article
The open letter this month is about safety. But it is also a recognition that these D.I.Y. practitioners are here to stay, at least for the time being. While the letter does not condone, neither does it condemn. It sticks to the facts and eschews paternalistic tones in favor of measured ones. The letter is the first instance I’m aware of in which scientists have directly addressed these D.I.Y. users. Though not quite an olive branch, it is a commendable step forward, one that demonstrates an awareness of a community of scientifically involved citizens.
If you want to read more by Wexler, look here and here.

My final, and admittedly self-serving, advice to the D.I.Y.ers: go buy a copy of Intermediate Physics for Medicine and Biology, so you can learn the scientific principles behind this and other techniques.

Friday, July 29, 2016

Niels Bohr and the Stopping Power of Alpha Particles

In Chapter 15 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the interaction of charged particles with electrons.
15.11.1 Interaction with Target Electrons

We first consider the interaction of the projectile with a target electron, which leads to the electronic stopping power, Se. Many authors call it the collision stopping power, Scol. There can be interactions in which a single electron is ejected from a target atom or interactions with the electron cloud as a whole (a plasmon excitation). The stopping power at higher energies, where it is nearly proportional to β−2 [β = v/c, where v is the speed of the projectile and c is the speed of light], has been modeled by Bohr, by Bethe, and by Bloch (see the review by Ahlen 1980).
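The β−2 dependence quoted above can be sketched numerically. Using the relativistic relation between kinetic energy and speed, β² = 1 − 1/γ² with γ = 1 + T/mc², a proton slowing from 100 MeV to 10 MeV sees its electronic stopping power rise nearly ninefold (a toy calculation assuming pure β−2 scaling, ignoring the logarithmic term in the Bethe formula):

```python
import math

MC2 = 938.272  # proton rest energy, MeV

def beta2(T):
    """beta^2 for a proton of kinetic energy T (MeV), from gamma = 1 + T/mc^2."""
    gamma = 1.0 + T / MC2
    return 1.0 - 1.0 / gamma**2

# In the regime where S_e is nearly proportional to beta^-2, the relative
# stopping power at 10 MeV versus 100 MeV is:
s_rel_10  = 1.0 / beta2(10.0)
s_rel_100 = 1.0 / beta2(100.0)
print(f"stopping power is {s_rel_10 / s_rel_100:.1f}x larger at 10 MeV than at 100 MeV")
```

This steep rise in energy loss as the particle slows is what produces the Bragg peak exploited in proton therapy.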
Niels Bohr's Times: In Physics, Philosophy, and Polity, by Abraham Pais, superimposed on Intermediate Physics for Medicine and Biology.
Niels Bohr's Times:
In Physics, Philosophy, and Polity,
by Abraham Pais.
Bohr is, of course, the famous Niels Bohr, one of the greatest physicists of all time. I am familiar with Bohr’s model of the hydrogen atom (see Sec. 14.3), but not as much with his work on the stopping power of charged particles. It turns out that Bohr’s groundbreaking work on hydrogen grew out of his study of the stopping power of alpha particles. Moreover, the stopping power analysis was motivated by Ernest Rutherford’s experiments on the scattering of alpha particles, which established the nuclear structure of the atom. This chain of events began with the young Niels Bohr arriving in Manchester to work with Rutherford in March 1912. Abraham Pais discusses this part of Bohr’s life in his biography Niels Bohr’s Times: In Physics, Philosophy, and Polity.
Bohr finished his paper on this subject [the energy loss of alpha particles when traversing matter] only after he had left Manchester; it appeared in 1913. The problem of the stopping of electrically charged particles remained one of his lifelong interests. In 1915 he completed another paper on that subject, which includes the influence of effects due to relativity and to straggling (that is, the fluctuations in energy and in range of individual particles)…

Bohr’s 1913 paper on α-particles, which he had begun in Manchester, and which had led him to the question of atomic structure, marks the transition to his great work, also of 1913, on that same problem. While still in Manchester, he had already begun an early sketch of these entirely new ideas. The first intimation of this comes from a letter, from Manchester, to Harald [Niels’ brother]: “Perhaps I have found out a little about the structure of atoms. Don’t talk about it to anybody…It has grown out of a little information I got from the absorption of α-rays.” I leave the discussion of these beginnings to the next chapter.
On 24 July 1912 Bohr left Manchester for his beloved Denmark. His postdoctoral period had come to an end.
So the alpha particle stopping power calculation Russ and I discuss in Chapter 15 led directly to Bohr’s model of the hydrogen atom, for which he got the Nobel Prize in 1922.

Friday, July 22, 2016

Error Rates During DNA Copying

Chapter 3 of Intermediate Physics for Medicine and Biology discusses the Boltzmann factor. In the homework exercises at the end of the chapter, we include a problem in which you apply the Boltzmann factor to estimate the error rate during the copying of DNA.
Problem 30. The DNA molecule consists of two intertwined linear chains. Sticking out from each monomer (link in the chain) is one of four bases: adenine (A), guanine (G), thymine (T), or cytosine (C). In the double helix, each base from one strand bonds to a base in the other strand. The correct matches, A-T and G-C, are more tightly bound than are the improper matches. The chain looks something like this, where the last bond shown is an “error.”
Drawing of a DNA molecule containing an error in the matching of bases.
A DNA molecule containing an error.
The probability of an error at 300 K is about 10−9 per base pair. Assume that this probability is determined by a Boltzmann factor e−U/kBT, where U is the additional energy required for a mismatch.
(a) Estimate this excess energy.
(b) If such mismatches are the sole cause of mutations in an organism, what would the mutation rate be if the temperature were raised 20° C?
This is a nice simple homework problem that provides practice with the Boltzmann factor and insight into the thermodynamics of base pair copying. Unfortunately, reality is more complicated.
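Without giving the full solution away, here is a quick numerical sketch of the two estimates the problem asks for, inverting the Boltzmann factor to get U and then re-evaluating it at the higher temperature:

```python
import math

# (a) The error probability p = exp(-U / (kB T)) gives U in units of kB*T.
p300 = 1e-9                       # error probability per base pair at 300 K
U_over_kT300 = -math.log(p300)    # U / (kB * 300 K), about 21
print(f"U ≈ {U_over_kT300:.1f} kB*T (at 300 K)")

# (b) Same U, but T = 320 K: the exponent shrinks by the factor 300/320.
p320 = math.exp(-U_over_kT300 * 300.0 / 320.0)
print(f"error probability at 320 K ≈ {p320:.1e}")   # a few times 10^-9
```

A 20-degree rise in temperature increases the error rate severalfold, a nice illustration of the exponential sensitivity of the Boltzmann factor.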

Biophysics: Searching for Principles, by William Bialek, superimposed on Intermediate Physics for Medicine and Biology.
Biophysics:
Searching for Principles,
by William Bialek.
William Bialek addresses the problem of DNA copying in his book Biophysics: Searching for Principles (Princeton University Press, 2012). He notes that A typically binds to T. If A were to bind with G, the resulting base pair would be the wrong size and grossly disrupt the DNA double helix (A and G are both large double-ring molecules). However, if A were to bind incorrectly with C, the result would fit okay (C and T are about the same size) at the cost of eliminating one or two hydrogen bonds, which have a total energy of about 10 kBT. Bialek writes
An energy difference of ΔF ~ 10 kBT means that the probability of an incorrect base pairing should be, according to the Boltzmann distribution, e−ΔF/kBT ~ 10−4. A typical protein is 300 amino acids long, which means that it is encoded by about 1000 bases; if the error probability is 10−4, then replication of DNA would introduce roughly one mutation in every tenth protein. For humans, with a billion base pairs in the genome, every child would be born with hundreds of thousands of bases different from his or her parents. If these predicted error rates seem large, they are—real error rates in DNA replication vary across organisms [see the vignette “what is the error rate in transcription and translation” in Cell Biology by the Numbers], but are in the range of 10−8–10−12, so the entire genome can be copied without almost any mistakes.
So how does the error rate become so small? Enzymes called DNA polymerases proofread the copied DNA and correct most errors. Because of this proofreading, the overall error rate is far smaller than the 10−4 rate you would estimate from the Boltzmann factor alone.

Our homework problem is therefore a little misleading, but it has redeeming virtues. First, the error we show in the figure is G-A, which would more severely disrupt the DNA's double helix structure. That specific error may well have a higher energy and therefore a lower error rate from the Boltzmann factor alone. Second, the problem illustrates how sensitive the Boltzmann factor is to small changes in energy. If ΔE = 10 kBT, the Boltzmann factor is e−10 = 0.5 × 10−4. If ΔE = 20 kBT, the Boltzmann factor is e−20 = 2 × 10−9. A factor of two increase in energy translates into more than a factor of 10,000 reduction in error rate. Wow!
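The arithmetic behind that "Wow!" is worth checking for yourself:

```python
import math

# Doubling the mismatch energy from 10 kB*T to 20 kB*T multiplies the
# Boltzmann factor by exp(-10): a reduction of over four orders of magnitude.
p10 = math.exp(-10.0)   # about 0.5e-4
p20 = math.exp(-20.0)   # about 2e-9
print(f"p(10 kT) = {p10:.1e}, p(20 kT) = {p20:.1e}, ratio = {p10 / p20:.0f}")
```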