Friday, July 1, 2011

Physiology is the link between the basic sciences and medicine

Textbook of Medical Physiology, by Guyton and Hall.
When Russ Hobbie and I need to cite a general physiology reference in the 4th edition of Intermediate Physics for Medicine and Biology, we often choose the Textbook of Medical Physiology. The book was originally written by Arthur Guyton, but in the most recent editions the lead author is John Hall. We cite the book in several places: 1) in Chapter 1, we reproduce one of Guyton’s figures of the human circulatory system (we reference the 8th edition, 1991); 2) in Chapter 5, we cite Guyton when discussing osmotic pressure and when analyzing countercurrent transport; 3) when discussing nerve synapses and neurotransmitters, we cite the 10th edition (2000), on which Hall is also an author; 4) in our chapter on feedback loops (Chapter 10), we reproduce a figure from the 9th edition (1995, Guyton sole author) showing how alveolar ventilation responds to exercise; and 5) in Chapter 11, on the method of least squares, we base Homework Problem 15 on data from Guyton’s textbook about the secretion of cortisol by the adrenal gland.

When I was a graduate student at Vanderbilt University, I decided to sit in on the physiology and biochemistry classes that the medical students took. The physiology class was based on Guyton’s book (likely the 6th or 7th edition). I took the class seriously, but since I had little formal coursework in biology (only one introductory class as an undergraduate at the University of Kansas, plus a high school course), and because I didn’t get as much out of the lectures as I should have, my main accomplishment was reading the Textbook of Medical Physiology, cover to cover. Unfortunately, my copy of the book has been lost (probably loaned out to someone who forgot to return it). It’s a pity, because I have fond memories of that book, and all the physiology I learned while reading it.

The 12th edition of the Textbook of Medical Physiology (2010) was published after the 4th edition of Intermediate Physics for Medicine and Biology went to press. Hall’s preface states
The first edition of the Textbook of Medical Physiology was written by Arthur C. Guyton almost 55 years ago. Unlike most major medical textbooks, which often have 20 or more authors, the first eight editions of the Textbook of Medical Physiology were written entirely by Dr. Guyton, with each new edition arriving on schedule for nearly 40 years. The Textbook of Medical Physiology, first published in 1956, quickly became the best-selling medical physiology textbook in the world. Dr. Guyton had a gift for communicating complex ideas in a clear and interesting manner that made studying physiology fun. He wrote the book to help students learn physiology, not to impress his professional colleagues.

I worked closely with Dr. Guyton for almost 30 years and had the privilege of writing parts of the 9th and 10th editions. After Dr. Guyton's tragic death in an automobile accident in 2003, I assumed responsibility for completing the 11th edition.

For the 12th edition of the Textbook of Medical Physiology, I have the same goal as for previous editions—to explain, in language easily understood by students, how the different cells, tissues, and organs of the human body work together to maintain life.

This task has been challenging and fun because our rapidly increasing knowledge of physiology continues to unravel new mysteries of body functions. Advances in molecular and cellular physiology have made it possible to explain many physiology principles in the terminology of molecular and physical sciences rather than in merely a series of separate and unexplained biological phenomena.

The Textbook of Medical Physiology, however, is not a reference book that attempts to provide a compendium of the most recent advances in physiology. This is a book that continues the tradition of being written for students. It focuses on the basic principles of physiology needed to begin a career in the health care professions, such as medicine, dentistry and nursing, as well as graduate studies in the biological and health sciences. It should also be useful to physicians and health care professionals who wish to review the basic principles needed for understanding the pathophysiology of human disease.

I have attempted to maintain the same unified organization of the text that has been useful to students in the past and to ensure that the book is comprehensive enough that students will continue to use it during their professional careers.

My hope is that this textbook conveys the majesty of the human body and its many functions and that it stimulates students to study physiology throughout their careers. Physiology is the link between the basic sciences and medicine. The great beauty of physiology is that it integrates the individual functions of all the body's different cells, tissues, and organs into a functional whole, the human body. Indeed, the human body is much more than the sum of its parts, and life relies upon this total function, not just on the function of individual body parts in isolation from the others…
If you are a physicist studying from Intermediate Physics for Medicine and Biology with little background in biology and medicine, you will need to find a good general source of information about physiology. The Guyton and Hall Textbook of Medical Physiology is a good choice. Another book Russ and I cite a lot is Textbook of Physiology by Patton, Fuchs, Hille, Scher and Steiner. However, I cannot find an edition more recent than 1989, so it would not be a good choice for getting up-to-date information.

Arthur Guyton (1919-2003) was a famous physiologist, known for his research on the circulatory system. An obituary published in The Physiologist says
Arthur Guyton’s research contributions, which include more than 600 papers and 40 books, are legendary and place him among the greatest figures in the history of cardiovascular physiology. His research covered virtually all areas of cardiovascular regulation and led to many seminal concepts that are now an integral part of our understanding of cardiovascular disorders such as hypertension, heart failure, and edema. It is difficult to discuss cardiovascular regulation without including his concepts of cardiac output and venous return, negative interstitial fluid pressure and regulation of tissue fluid volume and edema, regulation of tissue blood flow and whole body blood flow autoregulation, renal-pressure natriuresis, and long-term blood pressure regulation.

Perhaps his most important scientific contribution, however, was a unique quantitative approach to cardiovascular regulation through the application of principles of engineering and systems analysis. He had an extremely analytical mind and an uncanny ability to integrate bits and pieces of information, not only from his own research but also from others, into a quantitative conceptual framework. He built analog computers and pioneered the application of large-scale systems analyses to modeling the cardiovascular system before digital computers were available. With the advent of digital computers, his cardiovascular models expanded dramatically in the 1960’s and 70’s to include the kidneys and body fluids, hormones, autonomic nervous system, as well as cardiac and circulatory functions. He provided the first comprehensive systems analysis of blood pressure regulation and used this same quantitative approach in all areas of his research, leading to new insights that are now part of the everyday vocabulary of cardiovascular researchers.

Many of his concepts were revolutionary and were initially met with skepticism, and even ridicule, when they were first presented. When he first presented his mathematical model of cardiovascular function at the Council for High Blood Pressure Research meeting in 1968, the responses of some of the hypertension experts, recorded at the end of the article, reflected a tone of disbelief and even sarcasm. Guyton’s systems analysis had predicted a dominant role for the renal pressure natriuresis mechanism in long-term blood pressure regulation, a concept that seemed heretical to most investigators at that time. One of the leading figures in hypertension research commented “I realize that it is an impertinence to question a computer and systems analysis, but the answers they have given to Guyton seem authoritarian and revolutionary.” Guyton’s concepts were authoritarian and revolutionary, but after 35 years of experimental studies by investigators around the world, they have also proved to be very powerful in explaining diverse physiological and clinical observations. His far-reaching concepts will continue to be the foundation for generations of cardiovascular physiologists.
If you’re interested in the interface between physics and physiology, you’ll find the Guyton and Hall Textbook of Medical Physiology to be a valuable resource.

Friday, June 24, 2011

William Beaumont

I spent last weekend at Mackinac Island in northern Lake Huron. It’s an interesting little place that you reach by ferry and that does not allow any vehicles (except for a few fire engines and ambulances). The ferry ride is dominated by a view of the Mackinac Bridge (the “Mighty Mac”) connecting the upper and lower peninsulas of Michigan. It is a gorgeous piece of engineering (read about its construction in Henry Petroski’s book Engineers of Dreams: Great Bridge Builders and the Spanning of America). On the island, people walk, bike, and ride in horse-drawn carriages. An old 18th-century fort dominates the coastline on the south side of the island, and the nearby town has many shops and restaurants (a cynic might call the town a tourist trap). We visited the fort, observed the firing of a Civil War-era cannon, had a carriage tour, stopped at “Arch Rock,” and saw the famous Grand Hotel. Last week happened to be the island’s annual Lilac Festival, which included a literal “dog and pony show” (the theme this year was board games, and the little terriers carrying big Scrabble pieces on their backs won first prize).

Buying fudge is a Mackinac Island tradition. We stopped at one of the iconic fudge shops, Murdick’s Fudge, and bought a few slabs. One of the Murdick clan, Ryan Murdick, attended Oakland University, where I teach, and obtained a master’s degree in physics. He and I published several papers together, including one about the bidomain model of the electrical properties of cardiac tissue (see Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology), one about magnetocardiography (the magnetic field produced by the heart, Chapter 8), and one about how eddy currents induced in electroencephalogram electrodes can influence measurements of the magnetoencephalogram (Chapter 8; the effect on the MEG is very small). The papers are:
Murdick, R. and B. J. Roth (2003) “Magneto-encephalogram Artifacts Caused by Electro-encephalogram Electrodes,” Medical and Biological Engineering and Computing, Volume 41, Pages 203–205.

Murdick, R. A. and B. J. Roth (2004) “A Comparative Model of Two Mechanisms From Which a Magnetic Field Arises in the Heart,” Journal of Applied Physics, Volume 95, Pages 5116–5122.

Roth, B. J., S. G. Patel, and R. A. Murdick (2006) “The Effect of the Cut Surface During Electrical Stimulation of a Cardiac Wedge Preparation,” IEEE Transactions on Biomedical Engineering, Volume 53, Pages 1187–1190.
I wasn’t expecting to find material for this blog on Mackinac Island, but I did. In 1822, Alexis St. Martin was accidentally shot in the abdomen in a small trading post near Fort Mackinac. Dr. William Beaumont was summoned to treat St. Martin, and was able to save his life. However, the wound healed in an odd way, leaving an opening providing access to the inside of his stomach.

Readers of the 4th edition of Intermediate Physics for Medicine and Biology will appreciate what happened next. The resourceful Beaumont took advantage of the situation to conduct experiments on digestion. He tied different foods to a string, threaded them into St. Martin’s stomach, left them to digest for a while, and then pulled them out to see what had happened. These ground-breaking experiments were instrumental in establishing how digestion works. I toured a small museum dedicated to Beaumont, which describes these experiments in graphic (perhaps too graphic) detail.

I am interested in Beaumont not just because of his experiments studying digestion. Beaumont Hospital, in Royal Oak Michigan, is the clinical partner for a new medical school recently established at Oakland University. The first class of students at the Oakland University William Beaumont School of Medicine arrives this August. This will be a landmark event in OU’s history, and we are all excited about it.

I can’t help but be intrigued by the juxtaposition of these two stories: William Beaumont’s experiments on Alexis St. Martin, and the establishment of a new medical school bearing Beaumont’s name. St. Martin lived into his 80s. I expect our new medical school will have a similarly long and productive life.

Friday, June 17, 2011

Opus 200

In August 2007 I began posting entries to this blog, in order to highlight topics discussed in the 4th edition of Intermediate Physics for Medicine and Biology. Since then, I’ve posted an entry every Friday morning, without fail. This is my 200th post (excluding two rare non-Friday entries).

Why do I keep this blog? First, I hope it sells books. Second, I want a way to keep the book up-to-date. Third, some topics Russ and I only mention in passing, and this blog lets me explore these issues in more detail. Fourth, in the blog I often feature past scientists who contributed to the intersection between physics and biology. Fifth and finally, I enjoy it. I like writing, and I find the topics fascinating.

I get some help. Russ Hobbie often sends me ideas and suggestions. My daughter Kathy posted some key entries when I was in Paris and had very limited computer access. I particularly like comments (thanks Debbie). Feel free to voice your opinion. (However, I’m glad the bozo who posted links to porno sites in the comments has stopped.) I hope the readers find this blog useful.

Remember, the book website contains many useful items, including an errata (listing all known errors in the book), a reprint of our 2009 Resource Letter that appeared in the American Journal of Physics, a link to an interview with Russ Hobbie that appeared in the December 2006 Newsletter of the Division of Biological Physics, which is part of the American Physical Society, and (my personal favorite) a link to Russ Hobbie’s MacDose video on YouTube.

Finally, if you use Facebook, you can join the group “Intermediate Physics for Medicine and Biology” and receive these postings about the book there.

Friday, June 10, 2011

National Academies Press

Getting correct and detailed information about the applications of physics to biology and medicine is important. The 4th edition of Intermediate Physics for Medicine and Biology is an excellent source of such information. Yet I know that you, dear reader, are probably saying: “Yes, but I want a FREE source of information.” Well, for those cheapskates like me, there’s some good news this week from the National Academies Press (forwarded to me via Russ Hobbie). First, what is the National Academies Press? Their website explains:
The National Academies Press (NAP) was created by the National Academies to publish the reports issued by the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council, all operating under a charter granted by the Congress of the United States. The NAP publishes more than 200 books a year on a wide range of topics in science, engineering, and health, capturing the most authoritative views on important issues in science and health policy. The institutions represented by the NAP are unique in that they attract the nation’s leading experts in every field to serve on their award-winning panels and committees. The nation turns to the work of NAP for definitive information on everything from space science to animal nutrition.
Now, what’s the good news? An email from the NAP states
As of June 2, 2011, all PDF versions of books published by the National Academies Press (NAP) will be downloadable free of charge to anyone. This includes our current catalog of more than 4,000 books plus future reports published by NAP.

Free access to our online content supports the mission of NAP—publisher for the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council—to improve government decision making and public policy, increase public education and understanding, and promote the acquisition and dissemination of knowledge in matters involving science, engineering, technology, and health. In 1994, we began offering free content online. Before today’s announcement, all PDFs were free to download in developing countries, and 65 percent of them were available for free to any user.

Like no other organization, the National Academies can enlist the nation’s foremost scientists, engineers, health professionals, and other experts to address the scientific and technical aspects of society’s most pressing problems through the authoritative and independent reports published by NAP. We invite you to sign up for MyNAP —a new way for us to deliver free downloads of this content to loyal subscribers like you, to offer you customized communications, and to reward you with exclusive offers and discounts on our printed books.
Intermediate Physics for Medicine and Biology cites several NAP reports. For instance, in Section 9.10 about the possible effects of weak external electric and magnetic fields, Russ and I cite and quote from the NAP report Possible Health Effects of Exposure to Residential Electric and Magnetic Fields. I tested the website (free just seemed too good to be true), and was able to download a pdf version of the document with no charge (although I did have to give them my email address when I logged in). I got 379 pages of expert analysis about the biological effects of powerline fields. Russ and I quote the bottom line of this report in our book:
There is no convincing evidence that exposure to 60-Hz electric and magnetic fields causes cancer in animals... There is no evidence of any adverse effects on reproduction or development in animals, particularly mammals, from exposure to power-frequency 50- or 60-Hz electric or magnetic fields.
In Chapter 16 on the medical use of X rays, we cite three of the Biological Effects of Ionizing Radiation (BEIR) reports: V, VI, and VII. These reports provide important background about the linear nonthreshold model of radiation exposure. Then in Chapter 17 on nuclear physics and nuclear medicine we cite BEIR reports IV and VI when discussing radiation exposure caused by radon gas. The full citations listed in our book are:
BEIR IV (1988) Committee on the Biological Effects of Ionizing Radiations. Health Risks of Radon and Other Internally Deposited Alpha-Emitters. Washington, D.C., National Academy Press.

BEIR Report V (1990) Committee on the Biological Effects of Ionizing Radiation. Health Effects of Exposure to Low Levels of Ionizing Radiation. Washington, DC, National Academy Press.

BEIR VI (1999) Committee on Health Risks of Exposure to Radon. Health Effects of Exposure to Radon. Washington, D.C., National Academy Press.

BEIR Report VII (2005) Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation. Health Risks from Exposure to Low Levels of Ionizing Radiation. Washington, DC, National Academy Press.
Besides the reports cited in our book, there are many others you might like to read. In a previous blog entry, I discussed the report BIO2010: Transforming Undergraduate Education for Future Research Biologists, published by NAP. You can download a copy free. It discusses how we should teach physics to future life scientists. In another blog entry I discussed the book In the Beat of a Heart, which explores biological scaling. It is also published by the NAP.

Yet another report, published just last year, that will be of interest to readers of Intermediate Physics for Medicine and Biology is the NAP report Research at the Intersection of the Physical and Life Sciences. The report summary explains the goals of the report.
Today, while it still is convenient to classify most research in the natural sciences as either biological or physical, more and more scientists are quite deliberately and consciously addressing problems lying at the intersection of these traditional areas. This report focuses on their efforts. As directed by the charges in the statement of task (see Appendix A), the goals of the committee in preparing this report are several fold. The first goal is to provide a conceptual framework for assessing work in this area—that is, a sense of coherence for those not engaged in this research about the big objectives of the field and why it is worthy of attention from fellow scientists and programmatic focus by funding agencies. The second goal is to assess current work using that framework and to point out some of the more promising opportunities for future efforts, such as research that could significantly benefit society. The third and final goal of the report is to set out strategies for realizing those benefits—ways to enable and enhance collaboration so that the United States can take full advantage of the opportunities at this intersection.
An older report that covers much of the material that is in the last half of Intermediate Physics for Medicine and Biology is Mathematics and Physics of Emerging Biomedical Imaging (1996). Finally, yet another useful report is Advancing Nuclear Medicine Through Innovation (2007).

All this and more is now available at no cost. Who says there’s no such thing as a free lunch?

Friday, June 3, 2011

Jean Perrin and Avogadro’s Number

Regular readers of this blog may recall that last summer I visited Paris for my 25th wedding anniversary, which was followed by a string of blog entries about famous French scientists. During this trip, my wife and I toured the Pantheon, where we saw the burial site of French scientist Jean Baptiste Perrin (1870–1942). Russ Hobbie and I mention Perrin in a footnote on page 85 of the 4th edition of Intermediate Physics for Medicine and Biology.
The Boltzmann factor provided Jean Perrin with the first means to determine Avogadro’s number [N_A]. The density of particles in the atmosphere is proportional to exp(−mgy/k_B T), where mgy is the gravitational potential energy of the particles. Using particles for which m was known, Perrin was able to determine [Boltzmann’s constant] k_B for the first time. Since the gas constant R was already known, Avogadro’s number was determined from the relationship R = N_A k_B.
This brief footnote does not do justice to Perrin’s extensive accomplishments. He played a key role in establishing that matter is not a continuum, but rather is made out of atoms. He performed experiments not only on the exponential distribution of particles (described above, and also known as sedimentation equilibrium), but also on Brownian motion. Russ and I describe this phenomenon in Chapter 4:
This movement of microscopic-sized particles, resulting from bombardment by much smaller invisible atoms, was first observed by English botanist Robert Brown in 1827 and is called Brownian motion.
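The recipe in the footnote above is easy to sketch numerically. The sketch below uses illustrative numbers of my own choosing (not Perrin’s actual data): micron-scale grains suspended in water, with the buoyancy-corrected mass playing the role of m. It generates the exponential height distribution from a known Boltzmann constant, then inverts the distribution to recover k_B and hence Avogadro’s number, just as the footnote describes.

```python
import math

# Illustrative parameters (assumptions, not Perrin's measurements):
# micron-scale resin grains suspended in water.
r = 0.21e-6          # particle radius (m)
rho_p = 1200.0       # particle density (kg/m^3)
rho_w = 1000.0       # water density (kg/m^3)
T = 293.0            # temperature (K)
g = 9.8              # gravitational acceleration (m/s^2)
R = 8.314            # gas constant (J/(mol K)), known independently

# Buoyancy-corrected particle mass that appears in exp(-mgy/kT)
m_eff = (4.0 / 3.0) * math.pi * r**3 * (rho_p - rho_w)

# "Measure" particle counts at two heights using the Boltzmann factor
# with the true k_B; in the lab these counts came from observing
# grains under a microscope.
k_true = 1.381e-23
y1, y2 = 5e-6, 35e-6                       # observation heights (m)
n1 = 1000.0
n2 = n1 * math.exp(-m_eff * g * (y2 - y1) / (k_true * T))

# Invert the exponential law: k_B = m g (y2 - y1) / (T ln(n1/n2)),
# then N_A = R / k_B.
k_B = m_eff * g * (y2 - y1) / (T * math.log(n1 / n2))
N_A = R / k_B
print(f"N_A = {N_A:.3e} per mole")
```

Because the synthetic counts are generated from the same exponential law, the inversion recovers N_A ≈ 6.0 × 10²³ exactly; in Perrin’s hands, with real counts, the agreement with other determinations was the point.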
Molecular Reality: A Perspective on the Scientific Work of Jean Perrin, by Mary Jo Nye.

One can learn more about Perrin in the book Molecular Reality: A Perspective on the Scientific Work of Jean Perrin, by Mary Jo Nye. I would not rank this book with the best histories of science I have read (my top three would be The Making of the Atomic Bomb, The Eighth Day of Creation, and The Maxwellians), or among the best scientific biographies (such as Subtle is the Lord: The Science and Life of Albert Einstein). However, it did provide some valuable insight into Perrin’s achievements. Nye states in her introduction that
What has struck me in a perusal of the literature on these topics [discoveries in physics during the early 20th century] is the tendency to assume what so many of the physical scientists of this pivotal period did not for one minute assume—the discontinuity of the matter which underlies visible reality. In looking back upon the discoveries and theories of particles, one perhaps fails to realize that the focus was not simply upon the nature of the molecules, ions and atoms, but upon the very fact of their existence…

In analyzing the role of Jean Perrin in the eventual acceptance of this assumption among the outspoken majority of the scientific community, I have concentrated upon the period of experimental, theoretical, philosophical and popular science which climaxed with the Solvay conference of 1911 and with the publication of Perrin’s book Les Atomes [read an online English translation here] in 1913…

In conclusion, I have discussed the reception of Perrin’s scientific experimentation and propagandisation on the subject of molecular reality, especially his work on Brownian movement, which climaxed in 1913 with the completion of a number of national and international conferences and the publication of Les Atomes. Though Perrin himself did not view his task as completed at that time, the question was no longer central to the basic working assumptions of scientists, and polemics on this question were no longer an impediment or impetus to the progress of general scientific conceptualization. That Perrin’s role was historically essential to this denouement cannot, in my opinion, be doubted.
Nye’s first chapter on 19th-century background contains a little too much philosophy of science for my taste. But her historical review does indicate that, despite what our footnote says, Perrin did not provide the first estimate of Avogadro’s number, but rather a definitive early measurement of that value. Her second chapter, “Young Perrin: Initial Investigations,” was better, and the book really captured my attention in the third chapter, “The Essential Debate.”
The exponential law which Perrin announced in 1908, describing the vertical distribution of a colloid at equilibrium, was the fruit of laborious experiments on Brownian movement after several years of apprenticeship in the study of colloids. Included in his first 1908 paper on Brownian movement was a successful application of the concepts of osmotic pressure and mean kinetic energy to the visible Brownian particles, as well as a convincing calculation of Avogadro’s number. These endeavours were but the prelude to a five-year drama devoted to the erection of an unassailable edifice to house the dictum of molecular reality, a structure buttressed at its most vulnerable point of criticism by the observed laws of visible Brownian movement.
I was particularly fascinated by how Perrin knew the mass of the particles he studied.
In order to find m, Perrin utilized Stokes’ law [see Section 4.5 of Intermediate Physics for Medicine and Biology], applying it to a column of the emulsion in a vertical capillary tube, and observing the fact that when the emulsion is very far from equilibrium, the Brownian granules in the upper layers of the column fall as if they were droplets of a cloud. Using Stokes’ formula relating the velocity of a spherical droplet, its radius, and the viscosity of the medium, Perrin found the radius of the granules [on the order of a micron].
Then from the known density, he could determine the mass. Perrin had to go to great lengths to obtain particles with a uniform distribution of radii, starting with 1200 grams of particles and, after repeated centrifugation, ending with less than a gram of uniform particles.
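The inversion Perrin performed is straightforward to sketch. Assuming illustrative values for the fluid viscosity and the observed settling speed of the granule cloud (these numbers are mine, not Perrin’s), Stokes’ law gives the radius, and the known density then gives the mass:

```python
import math

# Illustrative values (assumptions, not Perrin's data)
eta = 1.0e-3      # viscosity of water (Pa s)
rho_p = 1200.0    # particle density (kg/m^3)
rho_f = 1000.0    # fluid density (kg/m^3)
g = 9.8           # gravitational acceleration (m/s^2)
v = 5.0e-8        # observed settling speed of the cloud (m/s)

# Stokes' law for a falling sphere: v = 2 r^2 (rho_p - rho_f) g / (9 eta).
# Invert it for the radius:
r = math.sqrt(9.0 * eta * v / (2.0 * (rho_p - rho_f) * g))

# From the known density, the mass follows:
m = (4.0 / 3.0) * math.pi * r**3 * rho_p
print(f"radius = {r*1e6:.2f} microns, mass = {m:.2e} kg")
```

With these made-up but plausible numbers the radius comes out a few tenths of a micron, consistent with the “order of a micron” granules described above.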

In 1926, Jean Perrin won the Nobel Prize in Physics “for his work on the discontinuous structure of matter, and especially for his discovery of sedimentation equilibrium.”

Friday, May 27, 2011

e, The Story of a Number

On page 33 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce the constant e.
The number e is approximately equal to 2.71828… and is called the “base of the natural logarithms.” Like π (3.14159…), e has a long history [Maor (1994)].
e: The Story of a Number, by Eli Maor.
The citation is to the delightful book e: The Story of a Number, by Eli Maor. In his preface, Maor explains why he wrote the book.
My goal is to tell the story of e on a level accessible to readers with only a modest background in mathematics. I have minimized the use of mathematics in the text itself, delegating several proofs and derivations to the appendixes. Also, I have allowed myself to digress from the main subject on occasion to explore some side issues of historical interest. These include biographical sketches of the many figures who played a role in the history of e, some of whom are rarely mentioned in textbooks. Above all, I want to show the great variety of phenomena—from physics and biology to art and music—that are related to the exponential function ex, making it a subject of interest in fields well beyond mathematics.
Our Chapter 2, about exponential growth, centers on the exponential and logarithm functions, and our Appendix C lists many of the properties of these functions. Maor explores all sorts of interesting facts about e. For instance, 878/323 is a very good rational approximation to this irrational number. You can recall the first ten digits of e by remembering 2.7 (Andrew Jackson)² [Jackson was elected president in 1828]. In his Chapter 13, Maor presents some beautiful continued fractions for e that I will not attempt to reproduce here using HTML.
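Both memory aids are easy to check with a few lines of Python:

```python
import math

# The rational approximation 878/323 matches e to within about 1.6e-5
approx = 878 / 323
print(f"{approx:.7f} vs e = {math.e:.7f}")

# The Andrew Jackson mnemonic: 2.7, then 1828 twice
# (Jackson was elected president in 1828)
mnemonic = 2.718281828
print(f"mnemonic error: {abs(mnemonic - math.e):.1e}")
```

The fraction agrees with e through the fourth decimal place, and the mnemonic gives the first ten digits exactly.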

When developing the Fourier series in Chapter 11 of Intermediate Physics for Medicine and Biology, Russ and I note that “the remarkable property of imaginary numbers that makes them useful in this context is that e^(iθ) = cos θ + i sin θ.” (Here, i is the square root of minus one.) Maor sets θ = π to obtain an equation studied by the Swiss mathematician Leonhard Euler

e^(iπ) = −1 ,

and claims
it must surely rank among the most beautiful formulas in all of mathematics. Indeed, by rewriting it as e^(πi) + 1 = 0, we obtain a formula that connects the five most important constants of mathematics (and also the three most important mathematical operations—addition, multiplication, and exponentiation). These five constants symbolize the four major branches of classical mathematics: arithmetic, represented by 0 and 1; algebra, by i; geometry, by π; and analysis, by e.
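Euler’s formula, and its special case at θ = π, can be verified numerically with Python’s built-in complex arithmetic (up to floating-point roundoff):

```python
import cmath
import math

# Euler's formula e^(i*theta) = cos(theta) + i*sin(theta),
# checked at an arbitrary angle...
theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-15

# ...and at theta = pi, which gives e^(i*pi) = -1
z = cmath.exp(1j * cmath.pi)
print(abs(z + 1))   # tiny, on the order of 1e-16
```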
Taking a less aesthetic view, Russ and I downplay the use of complex exponentials in Intermediate Physics for Medicine and Biology.
The Fourier transform is usually written in terms of complex exponentials. We have avoided using complex exponentials. They are not necessary for anything done in this book. The sole advantage of complex exponentials is to simplify the notation. The actual calculations must be done with real numbers.
Another reason I often steer clear of complex exponentials is that I place great importance on being able to visualize physically what a mathematical expression is saying, and I find trigonometric functions far easier to envision than complex exponentials. So, while I concede the abstract beauty of the formula e^(iπ) = −1, I don’t find it so useful when thinking about physics.
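The claim that “the actual calculations must be done with real numbers” can be illustrated by computing a discrete Fourier coefficient two ways: once with real sines and cosines, once with a complex exponential. The sampled signal below is my own toy example, not one from the book.

```python
import cmath
import math

# Toy periodic signal sampled at N points over one period:
# DC level 1.5, plus cosine amplitude 2.0 and sine amplitude 0.5
# at the third harmonic
N = 64
samples = [1.5 + 2.0 * math.cos(2 * math.pi * 3 * j / N)
           + 0.5 * math.sin(2 * math.pi * 3 * j / N) for j in range(N)]

k = 3  # harmonic of interest

# Real-number route: cosine and sine sums give a_k and b_k
a_k = (2.0 / N) * sum(samples[j] * math.cos(2 * math.pi * k * j / N)
                      for j in range(N))
b_k = (2.0 / N) * sum(samples[j] * math.sin(2 * math.pi * k * j / N)
                      for j in range(N))

# Complex-exponential route: one sum, same information
c_k = (2.0 / N) * sum(samples[j] * cmath.exp(-2j * math.pi * k * j / N)
                      for j in range(N))

# The complex coefficient is shorthand for the real pair: c_k = a_k - i b_k
assert abs(c_k.real - a_k) < 1e-12 and abs(-c_k.imag - b_k) < 1e-12
print(a_k, b_k)   # recovers the amplitudes 2.0 and 0.5
```

Either route yields the same two real numbers; the complex form merely packages them into one sum, which is exactly the notational convenience (and nothing more) that the passage above describes.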

While educating his readers about e, Maor also introduces them to many famous mathematicians, including Archimedes, Napier, Newton, Gauss, the Bernoullis, and above all Euler, who is apparently one of Maor’s favorites.
Leonhard Euler (1707–1783) is unquestionably the Mozart of mathematics, a man whose immense output—not yet published in full—is estimated to fill at least seventy volumes. Euler left hardly an area of mathematics untouched, putting his mark on such diverse fields as analysis, number theory, mechanics and hydrodynamics, cartography, topology, and the theory of lunar motion.
Maor discusses the uses of logarithms and exponentials in biology. He talks about the logarithmic spiral and its role in growth, for instance, of a nautilus shell. He also makes an interesting comparison between the ear and the eye.
The remarkable sensitivity of the human ear to frequency changes is matched by its audible range—from about 20 cycles per second to about 20,000 (the exact limits vary somewhat with age). In terms of pitch, this corresponds to about ten octaves (an orchestra rarely uses more than seven). By comparison, the eye is sensitive to a wavelength range from 4,000 to 7,000 angstroms (10^−8 cm)—a range of less than two “octaves.” [Doesn’t Maor mean: less than one “octave”?]
I’m particularly fond of Maor’s recreation of a meeting between Bach and one of the Bernoullis:
Let us imagine a meeting between Johann Bernoulli (Johann I, that is) and Johann Sebastian Bach. The year is 1740. Each is at the peak of his fame. Bach, at the age of fifty-five, is organist, composer, and Kapellmeister (musical director) at St. Thomas’s Church in Leipzig. Bernoulli, at seventy-three, is the most distinguished professor of the University of Basel.
The resulting imagined conversation is fascinating and amusing. Musicians interested in the “equal tempered scale” will enjoy this section.

I will close this blog entry the same way Maor ended his book, by letting e take a final bow. Here is e to one hundred decimal places:

2.7182818284590452353
60287471352662497757
24709369995957496696
76277240766303535475
94571382178525166427
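These digits are easy to reproduce by summing the series e = 1/0! + 1/1! + 1/2! + … ; here is a minimal sketch using Python’s standard decimal module (the precision settings and cutoff are my own choices):

```python
from decimal import Decimal, getcontext

# Sum the series e = 1/0! + 1/1! + 1/2! + ... with 110 digits of
# working precision; terms below 10^-105 no longer affect digit 100.
getcontext().prec = 110
e, term, n = Decimal(0), Decimal(1), 0
while term > Decimal(10) ** -105:
    e += term
    n += 1
    term /= n

print(str(e)[:102])  # "2." followed by the 100 decimal places above
```

The factorial in the denominator grows so fast that the loop finishes after only about seventy terms.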

Friday, May 20, 2011

Non-Newtonian Fluids and the Rheology of Blood

In Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I explain the difference between a Newtonian fluid and a non-Newtonian fluid.
A fluid can support a viscous shear stress if the shear strain is changing. One way to create such a situation is to immerse two parallel plates, each of area S, in the fluid, and to move one parallel to the other … The variation of velocity between the plates gives rise to a velocity gradient dv_x/dy

In order to keep the top plate moving and the bottom plate stationary, it is necessary to exert a force of magnitude F on each plate: to the right on the upper plate and to the left on the lower plate. The resulting shear stress or force per unit area is in many cases proportional to the velocity gradient:

F/S = η dv_x/dy .   (1.33)

The constant η is called the coefficient of viscosity … Fluids that are described by Eq. 1.33 are called Newtonian fluids. Many fluids are not Newtonian.
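To see Eq. 1.33 in action, here is a quick worked example; the viscosity is roughly that of water, but the gradient and plate area are illustrative numbers of my own, not values from the book:

```python
# Shear stress in a Newtonian fluid, Eq. 1.33: F/S = eta * dv_x/dy
eta = 1.0e-3      # viscosity of water, Pa*s (approximate, room temperature)
dvx_dy = 100.0    # velocity gradient between the plates, 1/s
S = 0.01          # area of each plate, m^2

stress = eta * dvx_dy   # shear stress F/S, in Pa
F = stress * S          # force needed on each plate, in N
print(stress, F)        # 0.1 Pa and 0.001 N
```

For a Newtonian fluid, doubling the velocity gradient simply doubles the stress; the whole point of the non-Newtonian examples below is that this proportionality fails.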
At the end of the chapter, we give an example of a biologically important non-Newtonian fluid.
Blood is not a Newtonian fluid. The viscosity depends strongly on the fraction of volume occupied by red cells (the hematocrit).
An excellent review of blood’s fluid behavior can be found in the article “Rheology of Blood” by Edward Merrill (Physiological Reviews, Volume 49, Pages 863–888, 1969). Rheology is the part of fluid mechanics that deals with non-Newtonian fluids. Merrill explains clearly the difference between a Newtonian fluid with a high viscosity and a non-Newtonian fluid.
A Newtonian liquid is one in which the viscosity, at fixed temperature and pressure, is independent of the shear stress. Thus, a non-Newtonian liquid is one in which the viscosity depends on shear stress. Water and honey are Newtonian, but many aqueous suspensions of fine particulate matter such as water-base paint, plaster, and oil emulsions are non-Newtonian. The distinction is qualitatively obvious if one imagines two spoons, one in a pot of honey (Newtonian) and the other in a pot of mayonnaise (non-Newtonian emulsion). The honey is harder to stir (has a higher viscosity) than the mayonnaise, but when the spoons are removed and held above the pots, the honey continues to drizzle off its spoon, whereas the mayonnaise coating the other spoon clings indefinitely to it without flow, thus exhibiting “infinite” viscosity.
An important concept when discussing the rheology of blood is yield stress. Merrill explains
Blood … exhibits a “yield stress.” This means that, if … one increases from zero the stress, but keeps it less than a critical value, the response will be elastic … On removal of the stress, the shape of the blood film will be unaltered, i.e., no flow will have occurred. However, if the yield stress is exceeded, irreversible deformation will occur.
In other words, it acts like a solid at low stress, and a fluid at high stress. Merrill concludes by discussing the physiological significance of the non-Newtonian nature of blood.
In summary, the relevance of blood rheology to physiological fluid mechanics is to make stopping of flows easier, starting of flows more difficult, and slow flows more energy consuming than would be expected if blood were a simple, cell-less, micromolecular fluid of equal viscosity—and these effects are increasingly emphasized with increase of hematocrit and fibrinogen concentration.
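To make the yield-stress idea concrete, here is a sketch of the simplest constitutive law with a yield stress, the Bingham plastic (blood itself is usually fit better by the related Casson equation); the numbers below are illustrative, not Merrill’s data:

```python
def bingham_shear_rate(stress, tau_y, eta_p):
    """Shear rate of a Bingham plastic under a given shear stress.
    Below the yield stress tau_y the material does not flow (rate = 0);
    above it, the excess stress drives flow with plastic viscosity eta_p."""
    if stress <= tau_y:
        return 0.0
    return (stress - tau_y) / eta_p

# Illustrative parameters (roughly the order of magnitude for blood)
tau_y = 0.005   # yield stress, Pa
eta_p = 0.003   # plastic viscosity, Pa*s

print(bingham_shear_rate(0.004, tau_y, eta_p))  # 0.0: solid-like, no flow
print(bingham_shear_rate(0.020, tau_y, eta_p))  # 5.0: flows once yielded
```

Below the yield stress the model behaves like Merrill’s mayonnaise on a spoon; above it, like a fluid whose effective viscosity falls as the stress rises.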
Besides blood, another dramatic example of a non-Newtonian fluid is a mixture of corn starch and water. My Oakland University colleague Alberto Rojo (whose office is next door to mine) has made a fun video demonstrating how you can “walk on water” by taking advantage of this mixture’s non-Newtonian properties. The effect is fascinating.

Alberto Rojo walks on a mixture of corn starch and water.

Friday, May 13, 2011

Drawing Figures

Two of my favorite figures in the 4th edition of Intermediate Physics for Medicine and Biology are Fig. 7.13 (the extracellular potential produced by an action potential along a nerve axon) and Fig. 8.14 (the magnetic field produced by the same axon). John Wikswo and I prepared these figures when I was in graduate school at Vanderbilt University.
Fig. 7.13 of Intermediate Physics for Medicine and Biology, 4th edition. The exterior potential calculated using the method of Clark and Plonsey.
Fig. 7.13. The exterior potential calculated using the method of Clark and Plonsey.
From Intermediate Physics for Medicine and Biology, 4th edition.
Fig. 8.14 of Intermediate Physics for Medicine and Biology, 4th edition. A three-dimensional plot of the magnetic field around the crayfish axon.
Fig. 8.14. A three-dimensional plot of the magnetic field around the crayfish axon.
From Intermediate Physics for Medicine and Biology, 4th edition.
Soon after entering graduate school in 1982, I took a class taught by John based on Russ Hobbie’s first edition of Intermediate Physics for Medicine and Biology. Clearly, the book had a significant influence on my subsequent career. (I remember the bright yellow cover of the first edition: my office is probably one of the few places outside of Minnesota where all four editions of the book sit proudly, side-by-side, on a bookshelf.) When preparing the second edition, Russ added a chapter on biomagnetism, and asked John to contribute a figure showing the magnetic field produced by an axon. Of course, this is just the sort of work graduate students are good for, and I was given the task of preparing the figure (actually two figures, as we decided to make a similar figure for the extracellular potential). This was not a big job, because I already had access to the computer code that my friend Jim Woosley had written for his master’s thesis, and which I used when preparing our paper “The Magnetic Field of a Single Axon: A Volume Conductor Model,” (Woosley, Roth, and Wikswo, 1985, Mathematical Biosciences, Volume 76, Pages 1–36).

In the mid-1980s, three-dimensional graphics programs were not as common as they are now, but we had one and I was able to create the figure. What we didn’t have was a publication-quality printer or software to prepare and manipulate figures. Therefore, once I had the plots created, they went to the drafting room to be finished. John usually had one or more undergraduates hired for the sole task of preparing figures. I don’t remember exactly who worked on the two figures for the 2nd edition, but it may have been David Barach, son of Vanderbilt physics professor John Barach. The draftsman’s job was to retrace the figure, thereby providing a higher quality appearance than a dot-matrix printer could achieve. As I recall, his job was also to remove hidden lines (I don’t think that our 3-d graphics program was “smart” enough to remove hidden lines on its own). He also labeled all the axes using some really neat rub-on letters that John was able to purchase in both Roman and Greek fonts (note the “μ” in μV in Fig. 7.13). I remember David working on figures at a large, slanted drafting table, using very high quality, vellum-like paper. He had rulers, triangles, and “French curves” of all types. First the drawing was done in pencil, and then traced with black ink. Once finished, additional copies were made photographically by a center in the Vanderbilt Medical School dedicated to such work. Before Photoshop, PowerPoint, and other such programs, that is the way figures were prepared. John had a policy that all graduate students had to get some experience at the drafting table, which I didn’t mind at all. At the risk of sounding like a Luddite who is nostalgic for the days of buggy whips, I think those figures have a little more personality and visual appeal than computer-generated figures drawn today.

The figures appeared in the second edition of Russ’s book, and have continued on through subsequent editions (including the 4th edition, on which I have the high honor of becoming a coauthor). Figures like that required much time and expense to prepare, and are difficult to edit. But my, it was more fun to really “draw” those figures than it is to churn out figures using graphics software.

Friday, May 6, 2011

Central Slice Theorem and Ronald Bracewell

Chapter 12 of the 4th edition of Intermediate Physics for Medicine and Biology deals with images and tomography. One of the key ideas in tomography is the “central slice theorem.” Russ Hobbie and I write in Section 12.4 that
The Fourier transform of the projection at angle θ is equal to the two-dimensional Fourier transform of the object, evaluated in the direction θ in Fourier transform space. This result is known as the projection theorem or the central slice theorem (Problem 17). The transforms of a set of projections at many different angles provide values of C and S [the cosine and sine parts of the 2-d Fourier transform] throughout the k_x k_y plane [frequency space] that can be used in Eq. 12.9a [the definition of the 2-d Fourier transform] to calculate f(x,y).
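The theorem is easy to verify numerically. Here is a minimal NumPy sketch for the simplest angle, θ = 0, where the projection is just a sum along y; the 64 × 64 random “object” is only a stand-in:

```python
import numpy as np

# Central slice theorem at theta = 0: the 1-d Fourier transform of the
# projection (sum over y) equals the k_y = 0 row of the object's 2-d
# Fourier transform.
rng = np.random.default_rng(0)
f = rng.random((64, 64))          # an arbitrary "object" f(x, y)

projection = f.sum(axis=0)        # project along y (theta = 0)
slice_1d = np.fft.fft(projection)         # FT of the projection
central = np.fft.fft2(f)[0, :]            # central row of the 2-d FT

assert np.allclose(slice_1d, central)
```

For other angles one would first rotate the object (or interpolate along a tilted line in frequency space), but the θ = 0 case already captures the essence of the theorem.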
I consider the central slice theorem to be one of the most important concepts in medical imaging. How was this fundamental idea first developed? The answer to that question provides a fascinating example of how physics and engineering can contribute to medicine.

Ronald Bracewell first developed the central slice theorem while working in the field of radio astronomy. His 2007 New York Times obituary states
Ronald N. Bracewell, an astronomer and engineer who used radio telescopes to make early images of the Sun’s surface, in work that also led to advances in medical imaging, died on Aug. 12 at his home in Stanford, Calif. He was 86…

With his colleagues at Stanford University in the 1950s, Dr. Bracewell designed a specialized radio telescope, called a spectroheliograph, to receive and evaluate microwaves emitted by the Sun…

Later, in the 1970s, the techniques and a formula devised by Dr. Bracewell were applied by other scientists in developing X-ray imaging of tumors, called tomography, and other forms of medical imaging that scan electromagnetic and radio waves. Dr. Bracewell advised researchers at Stanford and other institutions, but did not conduct laboratory research in the field.
The Fourier Transform  and Its Applications,  by Ronald Bracewell, superimposed on Intermediate Physics for Medicine and Biology.
The Fourier Transform
and Its Applications,
by Ronald Bracewell.
Russ and I cite Bracewell’s 1990 paper “Numerical Transforms” (Science, Volume 248, Pages 697–704). The central slice theorem was published in 1956 in the Australian Journal of Physics (Volume 9, Pages 198–217). Early in his career Bracewell published a lot in that journal, which is now defunct but maintains a website with free access to all the papers. Bracewell also wrote a marvelous book: The Fourier Transform and Its Applications (originally published in 1965; the revised 2nd edition was published by McGraw-Hill, New York, 1986). When writing this blog entry, I checked this book out of Kresge Library here at Oakland University. Once I opened it, I realized it is an old friend. I am sure I read this book in graduate school. It contains many pictures that allow the student to gain an intuition about the Fourier transform, an extraordinarily valuable skill to develop. The introduction states
The present work began as a pictorial guide to Fourier transforms to complement the standard lists of pairs of transforms expressed mathematically. It quickly became apparent that the commentary would far outweigh the pictorial list in value, but the pictorial dictionary of transforms is nevertheless important, for a study of the entries reinforces the intuition, and many valuable and common types of function are included which, because of their awkwardness when expressed algebraically, do not occur in other lists.
The text also does a fine job describing convolutions.
Convolution is used a lot here. Experience shows that it is a fairly tricky concept when it is presented bluntly under its integral definition, but it becomes easy if the concept of a functional is first understood.
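One of the central results in Bracewell’s book, the convolution theorem, is also easy to check numerically: the Fourier transform of a (circular) convolution is the product of the individual transforms. A small NumPy sketch, with the convolution computed directly from its definition:

```python
import numpy as np

N = 32
rng = np.random.default_rng(1)
f = rng.random(N)
g = rng.random(N)

# Circular convolution computed directly from the defining sum
conv = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                 for n in range(N)])

# Convolution theorem: FT of the convolution = product of the FTs
assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g))
```

The explicit double loop is deliberately naive; seeing it agree with the product of two fast Fourier transforms is a good way to build the intuition Bracewell prizes.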
Many of the ideas that Russ and I present in Chapter 11 of Intermediate Physics for Medicine and Biology are examined in more detail in Bracewell’s book. I recommend it as a reference to keep at your side as you plow through the mathematics of Fourier analysis.

Finally, Bracewell’s view of homework problems, as stated in his Preface to the second edition, mirrors my own.
A good problem assigned at the right stage can be extremely valuable for the student, but a good problem is hard to compose. Among the collection of supplementary problems now included at the end of the book are several that go beyond being mathematical exercises by inclusion of technical background or by asking for opinions.

Friday, April 29, 2011

Bursting

Last week in this blog I talked briefly about bursting in pancreatic beta cells. A bursting cell fires several action potential spikes consecutively, followed by an extended quiescent period, followed again by another burst of action potentials, and so on. One of the first and best-known models for bursting was developed by James Hindmarsh and Malcolm Rose (“A Model of Neuronal Bursting Using Three Coupled First Order Differential Equations,” Proceedings of the Royal Society of London, B, Volume 221, Pages 87–102, 1984). Their analysis was an extension of the FitzHugh-Nagumo model, with an additional variable governed by a very slow time constant. Their system of equations is

dx/dt = y − x³ + 3x² − z + I

dy/dt = 1 − 5x² − y

dz/dt = 0.001 [4(x + 1.6) − z]

where x is the membrane potential (appropriately made dimensionless), y is a recovery variable (like a sodium channel inactivation gate), z is the slow bursting variable, and I is an external stimulus current. For some values of I, this model predicts bursting behavior.
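As a sketch (not taken from the Hindmarsh-Rose paper), the three equations above can be integrated with a simple forward-Euler loop; the step size, initial conditions, and I = 2 are my own illustrative choices:

```python
import numpy as np

def hindmarsh_rose(I=2.0, dt=0.01, steps=200_000):
    """Forward-Euler integration of the Hindmarsh-Rose equations.
    Returns the membrane-potential variable x at each time step."""
    x, y, z = -1.6, 0.0, 0.0
    xs = np.empty(steps)
    for n in range(steps):
        dx = y - x**3 + 3 * x**2 - z + I
        dy = 1 - 5 * x**2 - y
        dz = 0.001 * (4 * (x + 1.6) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[n] = x
    return xs

v = hindmarsh_rose()
print(v.min(), v.max())  # x swings over several units as it spikes
```

Plotting v against time reveals the bursting: clusters of spikes on the fast (x, y) time scale, gated on and off by the slow variable z.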

Bursting: The Genesis of
Rhythm in the Nervous System,
by Stephen Coombes and Paul Bressloff.
There is an entire book dedicated to this topic: Bursting: The Genesis of Rhythm in the Nervous System, by Stephen Coombes and Paul Bressloff (World Scientific, 2005). The first chapter, co-written by Hindmarsh, provides a little of the history behind the Hindmarsh-Rose model:
The collaboration that led to the Hindmarsh-Rose model began in 1979 shortly after Malcolm Rose joined Cardiff University. The particular project was to model the synchronization of firing of two snail neurons in a relatively simple way that did not use the full Hodgkin-Huxley equations... A natural choice at the time was to use equations of the FitzHugh [type]…

A problem with this choice was that these equations do not provide a very realistic description of the rapid firing of the neuron compared to the relatively long interval between firing. Attempts were made to achieve a more realistic description by making the time constants … voltage dependent. In particular so the rates of change of x and y were much smaller in the subthreshold or recovery phase. These were not convincing and it was not until Malcolm raised the question about whether the FitzHugh equations could account for “tail current reversal” that progress was made.

The modification of the FitzHugh equations to account for tail current reversal was crucial for the development of the Hindmarsh-Rose model.
For those not familiar with the FitzHugh-Nagumo model, see Problem 33 in Chapter 10 of the 4th edition of Intermediate Physics for Medicine and Biology, or see the Scholarpedia article by FitzHugh himself, written before he died in 2007. If you want to see some bursting patterns, check out this YouTube video. It is not great, but you will get the drift of what the model predicts.

My friend Artie Sherman also had a chapter in the bursting book, titled “Beyond Synchronization: Modulatory and Emergent Effects of Coupling in Square-Wave Bursting.” He has been working on bursting in pancreatic beta cells for years, as a member (and now chief) of the Laboratory of Biological Modeling in the Mathematical Research Branch, the National Institute of Diabetes and Digestive and Kidney Diseases, part of the National Institutes of Health. His work is the best I am aware of for modeling bursting.