## Friday, December 28, 2018

### The Pitfalls of Using Handbooks and Formulae

 Structures: Or Why Things Don't Fall Down, by J. E. Gordon.
Last week I discussed James Gordon’s book Structures: Or Why Things Don’t Fall Down. The book contains several appendices. The first appendix is ostensibly about using handbooks and formulas to make structural calculations.
Over the last 150 years the theoretical elasticians have analysed the stresses and deflections of structures of almost every conceivable shape when subjected to all sorts and conditions of loads…Fortunately a great deal of this information has been reduced to a set of standard cases or examples the answers to which can be expressed in the form of quite simple formulae.
Then, to my surprise, Gordon changes tack and warns about pitfalls when using these formulas. His counsel, however, applies to all calculations, not just mechanical ones. In fact, his advice is invaluable for any young scientist or engineer. Below, I quote parts of this appendix. Read carefully, and whenever you encounter a word specific to mechanics substitute a general one, or one related to your own field.
[Formulae] must be used with caution.
 Appendix 1 of Structures.
1. Make sure that you really understand what the formula is about.
2. Make sure that it really does apply to your particular case.
3. Remember, remember, remember, that these formulae take no account of stress concentrations or other special local conditions.
After this, plug the appropriate loads and dimensions into the formula—making sure that the units are consistent and that the noughts are right. [“Noughts” are zeros; the Englishman Gordon is saying to check that you haven’t gained or lost a factor of ten, that is, that the decimal point is in the right place.] Then do a little elementary arithmetic and out will drop a figure representing a stress or a deflection.

Now look at this figure with a nasty suspicious eye and think if it looks and feels right. In any case you had better check your arithmetic; are you sure that you haven’t dropped a two?...

If the structure you propose to have made is an important one, the next thing to do, and a very right and proper thing, is to worry about it like blazes. When I was concerned with the introduction of plastic components into aircraft I used to lie awake night after night worrying about them, and I attribute the fact that none of these components ever gave trouble almost entirely to the beneficent effects of worry. It is confidence that causes accidents and worry which prevents them. So go over your sums not once or twice but again and again and again.
 Structures: Or Why Things Don't Fall Down.
This is the attitude I try to instill in my students when teaching from Intermediate Physics for Medicine and Biology. I implore them to think before they calculate, and then think again to judge if their answer makes sense. Students sometimes submit an answer to a homework problem (almost always given to five or six significant figures) that is absurd because they didn't look at their answer with a “nasty suspicious eye.” I insist they "remember, remember, remember" the assumptions and limitations of a mathematical model and its resulting formulas. Maybe Gordon goes a little overboard with his “night after night” of lost sleep, but at least he cares enough about his calculation to wonder “again and again and again” if it is correct. A little worry is indeed a “right and proper thing.”

Who would have expected such wisdom tucked away in an appendix about handbooks and formulae?

## Friday, December 21, 2018

### Structures: Or Why Things Don't Fall Down

 Structures: Or Why Things Don't Fall Down, by James Gordon.
When I was in graduate school, I read a fascinating book by James Gordon titled Structures: Or Why Things Don’t Fall Down. It showed me how engineers think about mechanics. Recently, I reread Structures and read for the first time its companion volume The New Science of Strong Materials: Or Why You Don’t Fall Through the Floor. I enjoyed both books thoroughly.

In Chapter 1 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss two mechanical properties of a material: stiffness and strength. Stiffness describes how much a material lengthens when pulled (that is, strains when stressed), and is quantified by its Young’s modulus. Strength measures how much stress a material can withstand before failing. Gordon summarizes these ideas succinctly.
A biscuit is stiff but weak, steel is stiff and strong, nylon is flexible and strong, raspberry jelly is flexible and weak. The two properties together describe a solid about as well as you can reasonably expect two figures to do.
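Gordon’s “two figures” are easy to put to work. Below is a minimal sketch (the material values are round textbook numbers, not taken from Gordon) that converts a stiffness and a strength into the strain a material can reach before breaking:

```python
# Sketch: Young's modulus (stiffness) and tensile strength as the
# "two figures" describing a solid. Values are round textbook numbers.
materials = {
    # name: (Young's modulus in GPa, tensile strength in MPa)
    "steel": (200.0, 400.0),
    "nylon": (3.0, 75.0),
}

def strain_at_stress(E_GPa, stress_MPa):
    """Fractional elongation (strain) produced by a given tensile stress,
    assuming linear (Hookean) behavior."""
    return stress_MPa / (E_GPa * 1000.0)  # convert GPa to MPa

for name, (E, strength) in materials.items():
    # Strain just before failure -- a rough idealization that ignores
    # yielding, so it only illustrates the stiff/strong distinction.
    print(f"{name}: strain at failure ~ {strain_at_stress(E, strength):.1%}")
```

Steel (stiff and strong) breaks at a tiny strain; nylon (flexible and strong) stretches an order of magnitude more before failing.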
Just two figures, however, are not sufficient to characterize a material, especially when it's used to build a structure.
The worst sin in an engineering material is not lack of strength or lack of stiffness, desirable as these properties are, but lack of toughness, that is to say, lack of resistance to the propagation of cracks.
Toughness is the opposite of brittleness, and is related to, but not identical to, ductility. It is quantified by the work of fracture—the energy needed to produce a new surface by propagation of a crack through the material—a concept introduced by Alan Griffith during his research on fracture mechanics.
A strained material contains strain energy which would like to be released just as a raised weight contains potential energy and would like to fall…The relief of strain energy …. [is] proportional to the square of the crack length…On the other side of the account book is the surface energy…needed to form the new surfaces and clearly increases as only the first power of the depth of the crack…When the crack is shallow it is consuming more energy as surface energy than it is releasing as relaxed strain energy and therefore conditions are unfavorable for it to propagate. As the crack gets longer however these conditions are reversed and beyond the ‘critical Griffith length’ lg the crack is producing more energy than it is consuming, so it may start to run away in an explosive manner.
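Gordon’s energy bookkeeping can be captured in a few lines. In this sketch the two constants are arbitrary (not material data); the point is only that a release term growing as the square of the crack length must eventually overtake a surface-energy cost growing linearly, defining a critical Griffith length:

```python
import numpy as np

# Toy energy balance behind the Griffith length: released strain energy
# grows as a^2, surface energy cost grows linearly with crack length a.
# The constants below are illustrative, not material properties.
k_release = 2.0   # strain energy released per a^2 (arbitrary units)
k_surface = 1.0   # surface energy cost per unit crack length (arbitrary units)

a = np.linspace(0.0, 2.0, 2001)
# Net energy cost of a crack of length a (cost minus release).
net_energy = k_surface * a - k_release * a**2

# Analytic critical length: where d(release)/da first equals d(surface)/da,
# i.e. 2*k_release*a = k_surface  ->  a_c = k_surface / (2*k_release).
a_c_analytic = k_surface / (2.0 * k_release)

# Numerically: the first point where growing the crack LOWERS total energy,
# so the crack can "run away in an explosive manner."
dU = np.gradient(net_energy, a)
a_c_numeric = a[np.argmax(dU < 0)]

print(a_c_analytic, a_c_numeric)
```

Below the critical length the crack costs energy to grow and is stable; beyond it, growth is energetically downhill.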
In heterogeneous materials, internal interfaces act as crack stoppers. This makes wood exceptionally tough; its cellular, fibrous structure prevents a crack from propagating. Toughness is important in biological materials that must undergo large strains without breaking. Wood is not dense (compared to, say, steel), so you get lots of toughness for little weight, which is one reason wood is so popular as a building material. On the other hand, wood isn’t very stiff, and it swells, burns, and rots.

Gordon provides deep insight into the behavior of structures and materials. Consider the stress in the wall of a cylindrical pressure vessel (a long cylinder with spherical end caps). The circumferential stress in the cylinder's wall is given by the Law of Laplace (see IPMB, Chapter 1, Problem 18). The longitudinal stress is equal to the stress in the end caps (the stress in a sphere is two times that in a cylinder, see Problem 19). Thus
the circumferential stress in the wall of a cylindrical pressure vessel is twice the longitudinal stress...One consequence of this must have been observed by everyone who has ever fried a sausage. When the filling inside the sausage swells and the skin bursts, the split is almost always longitudinal.
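The factor of two is a one-line calculation from the thin-walled pressure-vessel formulas (hoop stress pr/t, longitudinal stress pr/2t); the numbers below are illustrative, not from Gordon:

```python
# Hoop (circumferential) and longitudinal stress in a thin-walled
# cylindrical pressure vessel. Illustrative numbers.
p = 100e3   # internal pressure, Pa
r = 0.05    # cylinder radius, m
t = 0.002   # wall thickness, m

hoop = p * r / t            # circumferential stress
longitudinal = p * r / (2 * t)  # longitudinal stress (same as a sphere of radius r)

print(hoop / longitudinal)  # always 2: why a sausage splits lengthwise
```

Because the hoop stress is double the longitudinal stress, the skin always fails along the length of the sausage first.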
Then Gordon develops this theme.
 Figure 5 from Structures.
If we make a tube or cylinder from such a material [as rubber] and then inflate it, by means of an internal pressure, so as to involve a circumferential strain of 50 per cent or more, then the inflation or swelling process will become unstable, and the tube will bulge out...into a spherical protrusion which a doctor would describe as an “aneurism”....Since veins and arteries do, in fact, generally operate at strains around 50 per cent, and since, as any doctor will tell you, one of the conditions it is most desirable to avoid in blood-vessels is the production of aneurisms, any sort of rubbery elasticity is quite unsuitable....The only sort of elasticity which is completely stable under fluid pressures at high strains is that which is represented by Figure 5 [showing the stress increasing exponentially with the strain]. With minor variations, this shape of stress-strain curve is very common indeed for animal tissue....Materials with this [exponential] type of stress-strain curve are extremely difficult to tear. One reason is, perhaps, that the strain energy stored under such a curve--and therefore available to propagate fracture...is minimized."
He continues
Perhaps partly for these reasons the molecular structure of animal tissue does not often resemble that of rubber or artificial plastics. Most of these natural materials are highly complex, and in many cases they are of a composite nature, with at least two components; that is to say, they have a continuous phase or matrix which is reinforced by means of strong fibres of filaments of another substance. In a good many animals this continuous phase or matrix contains a material called 'elastin', which has a very low modulus and a [flat] stress-strain curve...The elastin is, however, reinforced by an arrangement of bent and zig-zagged fibres of collagen...a protein, very much the same as tendon, which has a high modulus...Because the reinforcing fibres are so much convoluted, when the material is in its resting or low-strain condition they contribute very little to its resistance to extension, and the initial elastic behavior is pretty well that of elastin. However, as the composite tissue stretches the collagen fibres begin to come taut; thus in the extended state the modulus of the material is that of the collagen, which more or less accounts for Figure 5.
As you probably can tell, Gordon writes wonderfully and explains mechanics so it's understandable to a layman. His writing is a model of clarity.

 Structures: Or Why Things Don't Fall Down.
Structures was my first exposure to continuum mechanics, but certainly not my last. I was a member of the Mechanical Engineering Section when I worked at the National Institutes of Health, so I was surrounded by outstanding mechanical engineers. My friend Peter Basser—himself a mechanical engineer—would lend me his books, and I recall reading classics such as Love’s A Treatise on the Mathematical Theory of Elasticity and Schlichting’s Boundary Layer Theory. I was impressed by Basser’s model of infusion-induced swelling in the brain and Richard Chadwick’s studies of cardiac biomechanics (Richard was another member of our Mechanical Engineering Section). In many ways, NIH provided a liberal education in physics applied to biology and medicine.

Throughout my career, most of my research has focused on bioelectricity and biomagnetism. Recently, however, I have been working on problems in biomechanics. But that is another story.

## Friday, December 14, 2018

### Computerized Transverse Axial Scanning (Tomography)

 “Computerized Transverse Axial Scanning (Tomography).”
Section 16.8 of Intermediate Physics for Medicine and Biology discusses computed tomography. Russ Hobbie and I describe the history of this technique.
The Nobel Prize in Physiology or Medicine was shared in 1979 by a physicist, Allan Cormack, and an engineer, Godfrey Hounsfield…. Hounsfield, working independently [of Cormack], built the first clinical [computed tomography] machine, which was installed in 1971. It was described in 1973 in the British Journal of Radiology. The Nobel Prize acceptance speeches (Cormack 1980; Hounsfield 1980) are interesting to read. A neurologist, William Oldendorf, had been working independently on the problem but did not share in the Nobel Prize…
Oddly, Russ and I did not include Hounsfield’s 1973 paper in our list of references. I decided to dig it up and have a look. The reference and abstract are:
Hounsfield GN (1973) Computerized transverse axial scanning (tomography): Part I. Description of the system. Br J Radiol 46:1016-1022

This article describes a technique in which X-ray transmission readings are taken through the head at a multitude of angles: from these data, absorption values of the material contained within the head are calculated on a computer and presented as a series of pictures of slices of the cranium. The system is approximately 100 times more sensitive than conventional X-ray systems to such an extent that variations in soft tissues of nearly similar density can be displayed.
1. This is Hounsfield’s most highly cited paper, with 4667 citations according to Google Scholar. That's a respectable number (ten times more than any of my papers have), yet seems curiously small for a Nobel Prize-winning advance.
2. Hounsfield’s paper is the first of a trilogy. Hounsfield is not a coauthor on the other two; they report clinical studies using the new technique.
3. Hounsfield lists his institution as “Central Research Laboratories of EMI Limited.” EMI is famous in the music industry; it is the recording label responsible for the early hits of the Beatles.
4. Hounsfield’s paper has only three references: two to his own preliminary reports and one to an article by Oldendorf. He didn’t cite Cormack’s papers.
5. Hounsfield sounds most impressed not by recreating three-dimensional images from two-dimensional projections (which to me is the big advance) but instead by the increased sensitivity of the technique to small differences in x-ray absorption coefficient.
6. Figure 3, illustrating the scanning device and sequence, is similar to Fig. 16.25 in IPMB.
 Fig. 16.25 of IPMB.
7. Hounsfield measured 160 points in each translation and performed 180 rotations. Each two-dimensional image was represented by an 80 × 80 grid of pixels.
8. The reconstruction method was different from the two Russ and I analyze in Chapter 12 of IPMB: i) Fourier transform reconstruction and ii) filtered back-projection. Instead, Hounsfield just fit his data using the least squares method (see Section 11.1 of IPMB). Hounsfield writes “Each beam path [in the CT scan], therefore, forms one of a series of 28,800 simultaneous equations, in which there are 6,400 variables and, providing that there are more equations than variables, the values of each [pixel] …. can be solved.”
9. The Hounsfield unit was introduced in Fig. 9, but he did not, of course, call it that. Interestingly, his definition is different from the one used today. Equation 16.25 in IPMB gives the Hounsfield unit as

H = 1000 (μtissue − μwater)/μwater

where μtissue and μwater are x-ray attenuation coefficients. In his paper, Hounsfield defines the unit the same way, except he replaces 1000 with 500.
10. The article describes preliminary experiments using an iodine-containing contrast agent and digital subtraction, analogous to Fig. 16.23 in IPMB.
11. The computer equipment pictured in Hounsfield’s paper looks big and clunky today. I can only guess what paltry computing power he had available for these first reconstructions.
12. I love the British Journal of Radiology, also known as BJR. (What journal did you think that Bradley John Roth would like?)
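Two of the observations above (8 and 9) lend themselves to a quick sketch. This toy version is my own construction, not Hounsfield’s: it solves a miniature over-determined ray-sum system by least squares (his had 28,800 equations and 6,400 unknowns; mine has 6 and 4) and defines the Hounsfield unit with the modern scale factor:

```python
import numpy as np

# (1) Reconstruction by least squares, in miniature: a 2x2 "image"
# (4 unknown pixels) probed by 6 ray sums.
true_image = np.array([1.0, 2.0, 3.0, 4.0])  # pixels [a, b, c, d]

# Each row of A sums the pixels a ray passes through.
A = np.array([
    [1, 1, 0, 0],  # top row:        a + b
    [0, 0, 1, 1],  # bottom row:     c + d
    [1, 0, 1, 0],  # left column:    a + c
    [0, 1, 0, 1],  # right column:   b + d
    [1, 0, 0, 1],  # one diagonal:   a + d
    [0, 1, 1, 0],  # other diagonal: b + c
], dtype=float)

measurements = A @ true_image
recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print(recovered)  # recovers the original pixel values

# (2) The Hounsfield unit, modern scale factor 1000
# (Hounsfield's 1973 paper used 500 in the same expression).
def hounsfield_unit(mu_tissue, mu_water, scale=1000.0):
    return scale * (mu_tissue - mu_water) / mu_water

print(hounsfield_unit(1.0, 1.0))  # water: 0 HU
print(hounsfield_unit(0.0, 1.0))  # zero attenuation (air-like): -1000 HU
```

With more equations than unknowns, least squares recovers the image exactly when the data are noise-free, just as Hounsfield describes.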
I’ll conclude with Hounsfield’s final paragraph. To my ear, it sounds like classic British understatement.
It is possible that this technique may open up a new chapter in X-ray diagnosis. Previously, various tissues could only be distinguished from one another if they differed appreciably in density. In this procedure absolute values of the absorption coefficient of the tissues are obtained. The increased sensitivity of computerized X-ray section scanning thus enables tissues of similar density to be separated and a picture of the soft tissue structure within the cranium to be built up.

## Friday, December 7, 2018

### Imaging and Velocimetry of the Human Retinal Circulation with Color Doppler Optical Coherence Tomography

 Page 394 of IPMB.
Section 14.7 of Intermediate Physics for Medicine and Biology discusses optical coherence tomography (OCT). Russ Hobbie and I write
Optical range measurements using the time delay of reflected or backscattered light from pulses of a few femtosecond (10-15 s) duration can be used to produce images similar to those of ultrasound…Since it is difficult to measure time intervals that short, most measurements are done using interference properties of the light. Optical coherence tomography is conceptually similar to range measurements but uses interference measurements…It is widely used in ophthalmology….
 Fig. 14.15 of IPMB.
The basic apparatus [for OCT] is shown in Fig. 14.15….The light pulse travels over an optical fiber to a 50/50 beam splitter. Part travels to the sample, where it is reflected back to the 50/50 coupler and then to the detector. The other half of the light goes to the reference mirror, where it is also reflected back to the detector. Changing the position of the reference mirror changes the depth of the image plane in the sample….

 Fig. 14.17 of IPMB.
It is possible to make many kinds of images. Fig. 14.17 shows the parabolic velocity profile of blood flowing in a retinal blood vessel 176 μm diameter. It was obtained by measuring the Doppler shift in light scattered from moving blood cells.
 Yazdanfar et al. (2000).
Figure 14.17 is from the paper “Imaging and Velocimetry of the Human Retinal Circulation with Color Doppler Optical Coherence Tomography,” by Siavash Yazdanfar, Andrew Rollins, and Joseph Izatt (Optics Letters, Volume 25, Pages 1448–1450, 2000). [For some reason Russ and I did not include the year in our citation—another item for the errata].
Abstract: Noninvasive monitoring of blood flow in retinal microcirculation may elucidate the progression and treatment of ocular disorders, including diabetic retinopathy, age-related macular degeneration, and glaucoma. Color Doppler optical coherence tomography (CDOCT) is a technique that allows simultaneous micrometer-scale resolution cross-sectional imaging of tissue microstructure and blood flow in living tissues. CDOCT is demonstrated for the first time in living human subjects for bidirectional blood-flow mapping of retinal vasculature.
I like Fig. 14.17 because it combines ideas from Chapters 1, 11, 13, and 14 of IPMB. It also highlights the excellent spatial resolution you can obtain with OCT.

The illustration below shows the geometry associated with Fig. 14.17. The light is reflected by blood cells moving at speed v, causing a Doppler shift in its frequency. By adjusting the reference mirror, different depths are selected. The vessel makes an angle θ relative to the incident light. As the depth is scanned across the vessel, the Doppler shift determines the blood flow profile.
 The geometry associated with IPMB Fig. 14.17.
To help the reader learn more about the physics of OCT and Fig. 14.17, I have written two new homework problems. The solutions are included at the bottom of the post (upside down, to encourage readers to solve the problems themselves first). Enjoy!
Section 14.7

Problem 24 ⅓. This problem and the next explore the physics behind Fig. 14.17, which shows the velocity profile in a blood vessel measured using color Doppler optical coherence tomography. The data is based on an article by Yazdanfar et al. (2000). For this problem ignore the index of refraction of the tissue and assume θ = 60°.

(a) If the wavelength, λ, of the incident light is 832 nm and the wavelength bandwidth, Δλ, is 15 nm, determine the frequency, f, and the frequency bandwidth, Δf, in THz.

(b) The coherence time, τcoh, is equal to 1/(πΔf). Calculate τcoh in fs, and the coherence length, x2 − x1, in microns. The coherence length determines the spatial resolution of the measurement.

(c) Use Eq. 13.42 to derive an expression for the speed of blood flow in the direction of the light, v', in terms of the Doppler frequency shift, df. Assume that the speed of light, c, is much greater than v'. Calculate v' if df = 4 kHz. The Doppler technique measures the component of motion in the direction of the light. Determine the speed v along the vessel.
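For readers who want to check their answers to Problem 24 ⅓, here is a sketch of the arithmetic. I am assuming the standard Doppler result for light reflected from a moving scatterer, df = 2v′f/c (equivalently v′ = λ df/2), is the content of Eq. 13.42, and that θ is the angle between the vessel and the light:

```python
import math

# Problem 24 1/3, worked numerically (a sketch, not the official solution).
c = 3.0e8      # speed of light, m/s
lam = 832e-9   # center wavelength, m
dlam = 15e-9   # wavelength bandwidth, m

# (a) frequency and frequency bandwidth
f = c / lam                  # ~360 THz
df_band = c * dlam / lam**2  # ~6.5 THz (from |df/dlam| = c/lam^2)

# (b) coherence time and coherence length
tau_coh = 1.0 / (math.pi * df_band)  # ~49 fs
L_coh = c * tau_coh                  # ~15 microns: the spatial resolution

# (c) blood speed from a 4 kHz Doppler shift, assuming df = 2 v' f / c
doppler = 4e3                               # Hz
v_along_light = lam * doppler / 2.0         # component along the light, ~1.7 mm/s
theta = math.radians(60.0)
v_blood = v_along_light / math.cos(theta)   # speed along the vessel, ~3.3 mm/s

print(f / 1e12, df_band / 1e12, tau_coh * 1e15, L_coh * 1e6, v_blood * 1e3)
```

Try the problem on paper first; the script is only a check on the arithmetic.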

Problem 24 ⅔. The Doppler shift, df, of OCT data as a function of depth z across a blood vessel is given below. For viscous flow in a tube (Sec. 1.17), the blood speed varies parabolically across the vessel cross section (Eq. 1.37). Fit a parabola of the form df = Az² + Bz + C to these data, and determine the constants A, B, and C. Use these constants to find the peak value of df in this vessel, the location of the center of the vessel, and the vessel diameter (the width of the parabola where df = 0). The measured diameter corresponds to an oblique section at θ = 60°. Correct this result to get the true diameter.
z (mm)    df (kHz)
0.15 3.26
0.20 5.20
0.25 6.12
0.30 6.02
0.35 4.89
0.40 2.75
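A quick way to check Problem 24 ⅔ is to fit the parabola numerically (again, solve it on paper first). The geometric correction at the end, multiplying the measured width by cos θ, is my reading of the oblique-section geometry; it reproduces the roughly 176 μm diameter quoted from IPMB Fig. 14.17:

```python
import numpy as np

# Fit df = A z^2 + B z + C to the Doppler-shift data of Problem 24 2/3.
z = np.array([0.15, 0.20, 0.25, 0.30, 0.35, 0.40])   # depth, mm
df = np.array([3.26, 5.20, 6.12, 6.02, 4.89, 2.75])  # Doppler shift, kHz

A, B, C = np.polyfit(z, df, 2)  # coefficients, highest degree first

z_center = -B / (2 * A)                        # vertex: center of the vessel
df_peak = A * z_center**2 + B * z_center + C   # peak Doppler shift
width = np.sqrt(B**2 - 4 * A * C) / abs(A)     # distance between the df = 0 roots

# Oblique-section correction (an assumption about the geometry):
# the depth scan crosses the vessel at theta = 60 degrees, so the
# true diameter is the measured width times cos(theta).
true_diameter = width * np.cos(np.radians(60.0))

print(A, B, C)
print(z_center, df_peak, width, true_diameter)
```

The fit gives a vessel centered near z ≈ 0.27 mm with a peak shift of about 6.2 kHz, and the corrected diameter comes out near 0.17 mm.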
I will give the final word to Yazdanfar, Rollins, and Izatt, who conclude
In summary, CDOCT has been applied for what is believed to be the first time to retinal blood-flow mapping in the human eye. Depth-resolved quantification of retinal hemodynamics may prove helpful in understanding the pathogenesis of several ocular diseases. Unlike fluorescein angiography, CDOCT is entirely noninvasive and does not require dilation of the pupil. Furthermore, CDOCT operates at longer wavelengths than does laser Doppler velocimetry, so light exposure times can be safely increased. CDOCT is believed to be the first technique for determining, with micrometer-scale resolution, the depth, diameter, and flow rate of blood vessels within the living retina.
 Solution to new Homework Problem 24 ⅓
 Solution to new Homework Problem 24 ⅔.