Friday, July 26, 2024

Why Does Inductance Not Play a Bigger Role in Biology?

In this blog, I talk a lot about topics discussed in Intermediate Physics for Medicine and Biology. Almost as interesting is what topics are NOT discussed in IPMB. One example is inductance.

It’s odd that inductance is not examined in more detail in IPMB, because it is one of my favorite physics topics. To be fair, Russ Hobbie and I do discuss electromagnetic induction: how a changing magnetic field induces an electric field and consequently creates eddy currents. That process underlies transcranial magnetic stimulation, and is analyzed extensively in Chapter 8. However, what I want to focus on today is inductance: the constant of proportionality relating a changing current (I) to an induced electromotive force, ℰ (it’s similar to a voltage, although there are subtle differences). The self-inductance of a circuit element is usually denoted L, as in the equation

             ℰ = −L dI/dt .

The word “inductance” appears only twice in IPMB. When deriving the cable equation of a nerve axon, Russ and I write
This rather formidable looking equation is called the cable equation or telegrapher’s equation. It was once familiar to physicists and electrical engineers as the equation for a long cable, such as a submarine cable, with capacitance and leakage resistance but negligible inductance.

Joseph Henry
(1797–1878)

Then, in Homework Problem 44 of Chapter 8, Russ and I ask the reader to calculate the mutual inductance between a nerve axon and a small, toroidal pickup coil. The mutual inductance between two circuit elements can be found by calculating the magnetic flux threading one element divided by the current in the other element. This means the units of inductance are tesla meter squared (flux) over ampere (current), which is given the nickname the henry (H), after American physicist Joseph Henry.

The inductance plays a key role in some biomedical devices. For example, during transcranial magnetic stimulation a magnetic stimulator passes a current pulse through a coil held near the head, inducing an eddy current in the brain. The self-inductance of the coil determines the rate of rise of the current pulse. Another example is the toroidal pickup coil mentioned earlier, where the mutual inductance is the magnetic flux induced in the coil divided by the current in an axon.

Interestingly, the magnetic permeability, μ0, is related to the inductance. In fact, the units of μ0 can be expressed in henries per meter (H/m, an inductance per unit length). If you are using a coaxial cable in an electrical circuit to make electrophysiological measurements, the inductance introduced by the cable is equal to μ0 times the length of the cable times a dimensionless factor that depends on things like the geometry of the cable.
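As a sanity check on that claim, here’s a short Python sketch. For a coaxial cable the standard low-frequency result is L = (μ0/2π) ℓ ln(b/a), where a and b are the inner and outer conductor radii, so the dimensionless geometry factor is ln(b/a)/2π. The radii and length below are illustrative values I made up, not numbers from IPMB.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, in H/m

def coax_inductance(length, outer_radius, inner_radius):
    # Standard low-frequency result for a coaxial cable:
    # L = (mu0 / (2*pi)) * length * ln(b/a).
    # The dimensionless geometry factor is ln(b/a) / (2*pi).
    return MU0 * length * math.log(outer_radius / inner_radius) / (2 * math.pi)

# A one-meter cable with a 3:1 radius ratio (made-up numbers)
L = coax_inductance(1.0, 3.0e-3, 1.0e-3)  # roughly 0.22 microhenries
```

Even a meter of cable contributes only a fraction of a microhenry, which is why the cable’s inductance is usually negligible in electrophysiological measurements.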

In a circuit, the inductance will induce an electromotive force that opposes any change in the current. It’s a conservative process that acts to keep the current from changing easily; it’s the electrical analogue of mechanical inertia. An inductor sometimes acts like a “choke,” preventing high-frequency current from passing through a circuit (say, a few-microsecond-long spike caused by a nearby lightning strike) while having little effect on low-frequency current (say, the 60 Hz current associated with our power distribution system). You can use inductors to create high- and low-pass filters (although capacitors are more commonly used nowadays).

Why do inductors play such a small role in biology? The self-inductance of a circuit is typically equal to μ0 times ℓ, where ℓ is a characteristic distance, so L ≈ μ0ℓ. What can you do to make the inductance larger? First, you could use iron or some other material with a large magnetic permeability, so instead of the magnetic permeability being μ0 (the permeability of free space) it is μ (which can be many thousands of times larger than μ0). Another way to increase the inductance is to wind a conductor with many (N) turns of wire; the self-inductance generally increases as N². Finally, you can just make the circuit larger (increase ℓ). However, biological materials contain little or no iron or other ferromagnetic materials, so the magnetic permeability is just μ0. Rarely do you find lots of turns of wire (some would say the myelin wrapping around a nerve axon is a biological example with large N, but there is little evidence that current flows around the axon within the myelin sheath). And most biological electrical circuits are small (say, on the order of millimeters or centimeters). If we take the permeability of biological tissue (4π × 10⁻⁷ H/m) times a size of 10 cm (0.1 m), we get an inductance of about 10⁻⁷ H. That’s a pretty small inductance.

Why do I say that 10⁻⁷ H is small? Let’s calculate the electromotive force induced by a current changing in a circuit. Most biological currents are small (John Wikswo and I measured currents of a microamp in a large crayfish nerve axon, and rarely are biological currents larger than this). They also don’t change too rapidly; nerves work on a time scale on the order of a millisecond. So the magnitude of the induced electromotive force is

             ℰ = L dI/dt = (10⁻⁷ H)(10⁻⁶ A)/(10⁻³ s) = 10⁻¹⁰ V.

Nerves work using voltages on the order of tens or hundreds of millivolts. So, the induced electromotive force is a billion times too small to affect nerve conduction. Sure, some of my assumptions might be too conservative, but even if you find a trick to make ℰ a thousand times larger, it is still a million times too small to be important.
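Here is that order-of-magnitude estimate as a few lines of Python, using the numbers from the text: L ≈ μ0 times a 10 cm size, a microamp of current, and a millisecond time scale.

```python
import math

MU0 = 4 * math.pi * 1e-7      # permeability of free space, in H/m
L = MU0 * 0.1                 # ~10 cm circuit, so L is roughly 1e-7 H
dI_dt = 1e-6 / 1e-3           # 1 microamp changing over 1 ms, in A/s
emf = L * dI_dt               # induced electromotive force, in volts

membrane_scale = 0.1          # nerves work with signals of ~100 mV
ratio = membrane_scale / emf  # how many times too small the EMF is
```

The induced EMF comes out near 10⁻¹⁰ V, about a billion times smaller than the ~100 mV scale of nerve signals, matching the estimate above.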

There is one more issue. An electrical circuit with inductance L and resistance R will typically have a time constant of L/R. Regardless of the inductance, if the resistance is large the time constant will be small, and inductive effects will happen so quickly that they won’t really matter. If you want small resistance, use copper wires, whose conductivity is tens of millions of times greater than that of saltwater. If you’re stuck with saline or other body fluids, the resistance will be high and the time constant will be short.
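To put numbers on the L/R argument, here’s a rough comparison using my own illustrative geometry (not from the book): a 10 cm current path with a 1 mm² cross section, made of copper versus physiological saline.

```python
# Compare L/R time constants for a copper path vs. a saline path
# of the same geometry (illustrative numbers, my own assumptions).
L = 1e-7                  # H, the estimate from the post
sigma_copper = 6.0e7      # S/m, conductivity of copper
sigma_saline = 1.4        # S/m, roughly physiological saline
length, area = 0.1, 1e-6  # 10 cm path, 1 mm^2 cross section

def resistance(sigma):
    # Resistance of a uniform conductor: R = length / (sigma * area)
    return length / (sigma * area)

tau_copper = L / resistance(sigma_copper)  # time constant with copper
tau_saline = L / resistance(sigma_saline)  # time constant with saline
```

The saline time constant is on the order of a picosecond, tens of millions of times shorter than the copper one, so inductive effects in tissue are over long before anything biological can respond.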

In summary, the reason why inductance is unimportant in biology is that there is no iron to increase the magnetic field, no copper to lower the resistance, no large number of turns of wire, the circuits are small, and the current changes too slowly. Inductive effects are tiny in biology, which is why we rarely discuss them in Intermediate Physics for Medicine and Biology.

Joseph Henry: Champion of American Science

https://www.youtube.com/watch?v=1t0nTCBG7jY&t=758s

Inductors explained

https://www.youtube.com/watch?v=KSylo01n5FY

Friday, July 19, 2024

Happy Birthday, Robert Plonsey!

Wednesday was the 100th anniversary of Robert Plonsey’s birth. He is one of the most highly cited authors in Intermediate Physics for Medicine and Biology.

Plonsey was born on July 17, 1924, in New York City. He served in the Navy during World War II and then obtained his PhD in electrical engineering from Berkeley. In 1957 he joined the Case Institute of Technology (now part of Case Western Reserve University) as an assistant professor. In 1983 he moved from Case to Duke University, joining their biomedical engineering department.

Plonsey and Barr, Biophys. J.,
45:557–571, 1984.
To honor Plonsey’s birthday, I want to look at one of my favorite papers: “Current Flow Patterns in Two-Dimensional Anisotropic Bisyncytia with Normal and Extreme Conductivities.” He and his Duke collaborator Roger Barr published it forty years ago, in the March 1984 issue of the Biophysical Journal (Volume 45, Pages 557–571). The abstract is given below.
Cardiac tissue has been shown to function as an electrical syncytium in both intracellular and extracellular (interstitial) domains. Available experimental evidence and qualitative intuition about the complex anatomical structure support the viewpoint that different (average) conductivities are characteristic of the direction along the fiber axis, as compared with the cross-fiber direction, in intracellular as well as extracellular space. This report analyzes two-dimensional anisotropic cardiac tissue and achieves integral equations for finding intracellular and extracellular potentials, longitudinal currents, and membrane currents directly from a given description of the transmembrane voltage. These mathematical results are used as a basis for a numerical model of realistic (though idealized) two-dimensional cardiac tissue. A computer simulation based on the numerical model was executed for conductivity patterns including nominally normal ventricular muscle conductivities and a pattern having the intra- or extracellular conductivity ratio along x, the reciprocal of that along y. The computed results are based on assuming a simple spatial distribution for [the transmembrane potential], usually a circular isochrone, to isolate the effects on currents and potentials [on] variations in conductivities without confounding propagation differences. The results are in contrast to the many reports that explicitly or implicitly assume isotropic conductivity or equal conductivity ratios along x and y. Specifically, with reciprocal conductivities, most current flows in large loops encompassing several millimeters, but only in the resting (polarized) region of the tissue; further, a given current flow path often includes four or more rather than two transmembrane excursions.
The nominally normal results showed local currents predominantly with only two transmembrane passages; however, a substantial part of the current flow patterns in two-dimensional anisotropic bisyncytia may have qualitative as well as quantitative properties entirely different from those of one-dimensional strands.
This article was one of the first to analyze cardiac tissue using the bidomain model. In 1984 (the year before I published my first scientific paper as a young graduate student at Vanderbilt University) the bidomain model was only a few years old. Plonsey and Barr cited Otto Schmitt, Walter Miller, David Geselowitz, and Les Tung as the originators of the bidomain concept. One of Plonsey and Barr’s key insights was the role of anisotropy, and in particular the role of differences of anisotropy in the intracellular and extracellular spaces (sometimes referred to as “unequal anisotropy ratios”), in determining the tissue behavior. In their calculation, they assumed a known transmembrane potential wavefront and calculated the potentials and currents in the intracellular and extracellular spaces.

Plonsey and Barr found that for isotropic tissue, and for tissue with equal anisotropy ratios, the intracellular and extracellular currents were equal and opposite, so the net current (intracellular plus extracellular) was zero. However, for nominal conductivities that have unequal anisotropy ratios they found the net current did not cancel, but instead formed loops that extended well outside the region of the wave front.

Looking back at this paper after several decades, the computational technique seems cumbersome and the plots of the current distributions look primitive. However, Plonsey and Barr were among the first to examine these issues, and when you’re first you can be forgiven if the analysis isn’t as polished as in subsequent reports.

When Plonsey and Barr’s paper was published, my graduate advisor John Wikswo realized that the large current loops they predicted would produce a measurable magnetic field. I’ve told that story before in this blog. Plonsey’s article led directly to Nestor Sepulveda and Wikswo’s paper on the biomagnetic field signature of the bidomain model, indirectly to my adoption of the bidomain model for studying a strand of cardiac tissue, and ultimately to the Sepulveda/Roth/Wikswo analysis of unipolar electrical stimulation of cardiac tissue.

Happy birthday, Robert Plonsey. We miss ya!

Friday, July 12, 2024

Taylor Series

The Taylor series is particularly useful for analyzing how functions behave in limiting cases. This is essential when translating a mathematical expression into physical intuition, and I would argue that the ability to do such translations is one of the most important skills an aspiring physicist needs. Below I give a dozen examples from Intermediate Physics for Medicine and Biology, selected to give you practice with Taylor series. In each case, expand the function in the dimensionless variable that I specify. For every example—and this is crucial—interpret the result physically. Think of this blog post as providing a giant homework problem about Taylor series.

Find the Taylor series of:
  1. Eq. 2.26 as a function of bt (this is Problem 26 in Chapter 2). The function is the solution for decay plus input at a constant rate. You will need to look up the Taylor series for an exponential, either in Appendix D or in your favorite math handbook. I suspect you’ll find this example easy.
  2. Eq. 4.69 as a function of ξ (this is Problem 47 in Chapter 4). Again, the Taylor series for an exponential is required, but this function—which arises when analyzing drift and diffusion—is more difficult than the last one. You’ll need to use the first four terms of the Taylor expansion.
  3. The argument of the inverse sine function in the equation for C(r,z) in Problem 34 of Chapter 4, as a function of z/a (assume r is less than a). This expression arises when calculating the concentration during diffusion from a circular disk. Use your Taylor expansion to show that the concentration is uniform on the disk surface (z = 0). This calculation may be difficult, as it involves two different Taylor series. 
  4. Eq. 5.26 as a function of ax. Like the first problem, this one is not difficult and merely requires expanding the exponential. However, there are two equations to analyze, arising from the study of countercurrent transport.
  5. Eq. 6.10 as a function of z/c (assume c is less than b). You will need to look up or calculate the Taylor series for the inverse tangent function. This expression indicates the electric field near a rectangular sheet of charge. For z = 0 the electric field is constant, just as it is for an infinite sheet.
  6. Eq. 6.75b as a function of b/a. This equation gives the length constant for a myelinated nerve axon with outer radius b and inner radius a. You will need the Taylor series for ln(1+x). The first term of your expansion should be the same as Eq. 6.75a: the length constant for an unmyelinated nerve with radius a and membrane thickness b.
  7. The third displayed equation of Problem 46 in Chapter 7 as a function of t/tC. This expression is for the strength-duration curve when exciting a neuron. Interestingly, the short-duration behavior is not the same as for the Lapicque strength-duration curve, which is the first displayed equation of Problem 46.
  8. Eq. 9.5 as a function of [M']/[K]. Sometimes it is tricky to even see how to express the function in terms of the required dimensionless variable. In this case, divide both sides of Eq. 9.5 by [K], to get [K']/[K] in terms of [M']/[K]. This problem arises from analysis of Donnan equilibrium, when a membrane is permeable to potassium and chloride ions but not to large charged molecules represented by M’.
  9. The expression inside the brackets in Eq. 12.42 as a function of ξ. The first thing to do is to find the Taylor expansion of sinc(ξ), which is equal to sin(ξ)/ξ. This function arises when solving tomography problems using filtered back projection.
  10. Eq. 13.39 as a function of a/z. The problem is a little confusing, because you want the limit of large (not small) z, so that a/z goes to zero. The goal is to show that the intensity falls off as 1/z² for an ultrasonic wave in the Fraunhofer zone.
  11. Eq. 14.33 as a function of λkBT/hc. This problem really is to determine how the blackbody radiation function behaves as a function of wavelength λ, for short wavelength (high energy) photons. You are showing that Planck's blackbody function does not suffer from the ultraviolet catastrophe.
  12. Eq. 15.18 as a function of x. (This is Problem 15 in Chapter 15). This function describes how the Compton cross section depends on photon energy. Good luck! (You’ll need it).
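If you want to check your hand expansions numerically, a few lines of Python will do it. As a sketch of the first problem, I’ll assume the decay-plus-constant-input solution has the form y(t) = (a/b)(1 − e^(−bt)) (check Eq. 2.26 for the exact expression) and compare it with its own Taylor series for small bt:

```python
import math

def y(t, a, b):
    # Assumed form of the decay-plus-constant-input solution
    return (a / b) * (1.0 - math.exp(-b * t))

def y_taylor(t, a, b, terms=3):
    # Expanding the exponential gives
    # y = a*t - a*b*t**2/2 + a*b**2*t**3/6 - ...
    return sum((-1)**n * a * b**n * t**(n + 1) / math.factorial(n + 1)
               for n in range(terms))

a, b, t = 2.0, 5.0, 0.01     # bt = 0.05, well inside the small-bt regime
exact = y(t, a, b)
approx = y_taylor(t, a, b)   # three-term expansion, very close to exact
```

The leading term is simply a·t: for short times the decay hasn’t kicked in yet, and the quantity grows linearly at the input rate. That is exactly the kind of physical interpretation these exercises are after.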

Brook Taylor
Who was Taylor? Brook Taylor (1685–1731) was an English mathematician and a fellow of the Royal Society. He was a champion of Newton’s version of the calculus over Leibniz’s, and he disputed with Johann Bernoulli. He published a book on mathematics in 1715 that contained his series.

Friday, July 5, 2024

Depth of Field and the F-Stop

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I briefly discuss depth of field: the distance between the nearest and the furthest objects that are in focus in an image captured with a lens. However, we don’t go into much detail. Today, I want to explain depth of field in more—ahem—depth, and explore its relationship to other concepts like the f-stop. Rather than examine these ideas quantitatively using lots of math, I’ll explain them qualitatively using pictures.

Consider a simple optical system consisting of a converging lens, an aperture, and a screen to detect the image. This configuration looks like what you might find in a camera with the screen being film (oh, how 20th century) or an array of light detectors. Yet, it also could be the eye, with the aperture representing the pupil and the screen being the retina. We’ll consider a generic object positioned to the left of the focal point of the lens. 

To determine where the image is formed, we can draw three light rays. The first leaves the object horizontally and is refracted by the lens so it passes through the focal point on the right. The second passes through the center of the lens and is not refracted. The third passes through the focal point on the left and after it is refracted by the lens it travels horizontally. Where these three rays meet is where the image forms. Ideally, you would put your screen at this location and record a nice crisp image. 


Suppose you are really interested in another object (not shown) to the right of the one in the picture above. Its image would be to the right of the image shown, so that is where we place our screen. In that case, the image of our first object would not be in focus. Instead, it would form a blur where the three rays hit the screen. The questions for today are: how bad is this blurring and what can we do to minimize it?

So far, we haven’t talked about the aperture. All three of our rays drawn in red pass through the aperture. Yet, these aren’t the only rays coming from the object. There are many more, shown in blue below. Ones that hit the lens near its top or bottom never reach the screen because they are blocked by the aperture. The size of the blurry spot on the screen is specified by a dimensionless number called the f-stop: the ratio of the focal length of the lens to the aperture diameter. It is usually written f/#, where # is the numerical value of the f-stop. In the picture below, the aperture diameter is twice the focal length, so the f-stop is f/0.5.

We can reduce the blurriness of the out-of-focus object by partially closing the aperture. In the illustration below, the aperture is narrower and now has a diameter equal to the focal length, so the f-stop is f/1. More rays are blocked from reaching the screen, and the size of the blur is decreased. In other words, our image looks closer to being in focus than it did before.

It seems like we got something for nothing. Our image is crisper just by narrowing the aperture. Why not narrow it further? We can, and the figure below has an f-stop of f/2. The blurring is reduced even more. But we have paid a price. The narrower the aperture, the less light reaches the screen, so your image is dimmer. And this is a bigger effect than you might think from my illustration, because the amount of light goes as the square of the aperture diameter (think in three dimensions). To make up for the lack of light, you could detect the light for a longer time. In a camera, the shutter speed indicates how long the aperture is open and light reaches the screen. Usually as the f-stop is increased (the aperture is narrowed), the shutter speed is changed so the light hits the screen for a longer time. If you are taking a picture of a stationary object, this is not a problem. If the object is moving, you will get a blurry image not because the image is out of focus on the screen, but because the image is moving across the screen. So, there are tradeoffs. If you want a large depth of field and you don’t mind using a slow shutter speed, use a narrow aperture (a large f-stop). If you want to get a picture of a fast-moving object using a fast shutter speed, your image may be too dim unless you use a wide aperture (small f-stop), and you will have to sacrifice depth of field.
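That square-law tradeoff is easy to play with in code. This little sketch (my own, using f-numbers from the discussion above) computes how much longer the shutter must stay open after stopping down:

```python
def relative_exposure(f_number):
    # Light reaching the screen scales as the square of the aperture
    # diameter, which for fixed focal length goes as 1/f_number**2.
    return 1.0 / f_number**2

def shutter_scale(n_old, n_new):
    # Factor by which exposure time must grow to compensate
    # for stopping down from f/n_old to f/n_new.
    return relative_exposure(n_old) / relative_exposure(n_new)

factor = shutter_scale(2.0, 8.0)  # stopping down from f/2 to f/8
```

Going from f/2 to f/8 cuts the light by (8/2)² = 16, so the shutter must stay open 16 times longer for the same exposure; with a moving subject, that is exactly the motion-blur price described above.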

With your eye, there is no shutter speed. The eye is open all the time, and your pupil adjusts its radius to let in the proper amount of light. If you are looking at objects in dim light, your pupil will open up (have a larger radius) and you will have problems with depth of field. In bright light the pupil will narrow and images will appear crisper. If you are like me and you want to read some fine print but you forgot where you put your reading glasses, the next best thing is to try reading under a bright light.

Most photojournalists use fairly large f-stops, like f/8 or f/16, and a shutter speed of perhaps 5 ms. The human eye has an f-stop between f/2 (dim light) and f/8 (bright light). So, my illustrations above aren’t really typical; the aperture is generally much narrower.