
Friday, February 14, 2025

Sine and Cosine Integrals and the Delta Function

The cover of Intermediate Physics for Medicine and Biology.
Trigger warning: This post is for mature audiences only; it may contain Fourier transforms and Dirac delta functions.

In Chapter 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I examine some properties of Fourier transforms. In particular, we consider three integrals of sines and cosines. After some analysis, we conclude that these integrals are related to the Dirac delta function, δ(ω – ω’), which is infinite at ω = ω’ and zero everywhere else (it’s a strange function consisting of one really tall, thin spike).
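The three integral relations have the form shown below. I'm writing them schematically; the normalization may differ by factors of two from the printed edition, depending on convention.

```latex
\begin{align*}
\int_0^{\infty} \cos(\omega t)\cos(\omega' t)\,dt &= \pi\,\delta(\omega-\omega') \\
\int_0^{\infty} \sin(\omega t)\sin(\omega' t)\,dt &= \pi\,\delta(\omega-\omega') \\
\int_0^{\infty} \sin(\omega t)\cos(\omega' t)\,dt &= 0
\end{align*}
```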

Are these equations correct? I now believe that they’re almost right, but not entirely. I propose that instead they should be 
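In the same schematic notation (with the same caveat about normalization), the proposed relations read:

```latex
\begin{align*}
\int_0^{\infty} \cos(\omega t)\cos(\omega' t)\,dt &= \tfrac{\pi}{2}\left[\delta(\omega-\omega') + \delta(\omega+\omega')\right] \\
\int_0^{\infty} \sin(\omega t)\sin(\omega' t)\,dt &= \tfrac{\pi}{2}\left[\delta(\omega-\omega') - \delta(\omega+\omega')\right] \\
\int_0^{\infty} \sin(\omega t)\cos(\omega' t)\,dt &= 0
\end{align*}
```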


You’re probably thinking “what a pity, the second three equations look more complicated than the first three.” I agree. But let me explain why I think they’re better. Hang on, it’s a long story.

Let’s go back to our definition of the Fourier transform in Eq. 11.57 of IPMB
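In the sine–cosine form, the transform pair is (schematically; see IPMB for the exact normalization):

```latex
\begin{align*}
y(t) &= \int_0^{\infty} \left[ C(\omega)\cos(\omega t) + S(\omega)\sin(\omega t) \right] d\omega \\
C(\omega) &= \frac{1}{\pi}\int_{-\infty}^{\infty} y(t)\cos(\omega t)\,dt ,
\qquad
S(\omega) = \frac{1}{\pi}\int_{-\infty}^{\infty} y(t)\sin(\omega t)\,dt
\end{align*}
```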

The first thing to note is that y(t) consists of two parts. The first depends on cos(ωt), which is an even function, meaning cos(–ωt) = cos(ωt). There’s an integral over ω, implying that many different frequencies contribute to y(t), weighted by the function C(ω). But one thing we know for sure is that when you add up the contributions from all these many frequencies, the result must be an even function (the sum of even functions is an even function). The second part depends on sin(ωt), which is an odd function, sin(–ωt) = – sin(ωt). Again, when you add up all the contributions from these many frequencies weighted by S(ω), you must get an odd function. So, we can say that we’re writing y(t) as the sum of an even part, yeven(t), and an odd part, yodd(t). In that case, we can rewrite our Fourier transform expressions as
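In symbols, the decomposition reads:

```latex
\begin{align*}
y(t) &= y_{\text{even}}(t) + y_{\text{odd}}(t) \\
y_{\text{even}}(t) &= \int_0^{\infty} C(\omega)\cos(\omega t)\,d\omega ,
\qquad
y_{\text{odd}}(t) = \int_0^{\infty} S(\omega)\sin(\omega t)\,d\omega
\end{align*}
```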

We should be able to take our expression for yeven(t), put our expression for C(ω) into it, and then—if all works as it should—get back yeven(t). Let’s try it and see if it works. To start I’ll just rewrite the first of the four equations listed above

Now for C(ω) I’ll use the third of the four equations listed above. In that expression, there is an integral over t, but t is a dummy variable (it’s an “internal” variable; after you do the integral, the result does not depend on t), so to keep things from getting confusing we’ll call the dummy variable by another name, t'

Next switch the order of the integrals, so the integral over t' is on the outside and the integral over ω is on the inside

Ha! There, inside the bracket, is one of those integrals we’re talking about. Okay, the variables ω and t are swapped, but otherwise it’s the same. So, let’s put in our new expression for the integral

The 2π’s cancel, and a factor of one half comes out. An integral containing a delta function just picks out the value where the argument of the delta function is zero. We get
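The sifting property of the delta function (the integral of f(t′) δ(t′ – t) over t′ is just f(t)) then gives, schematically:

```latex
y_{\text{even}}(t)
  = \tfrac{1}{2}\int_{-\infty}^{\infty} y_{\text{even}}(t')
    \left[\delta(t'-t) + \delta(t'+t)\right] dt'
  = \tfrac{1}{2}\left[ y_{\text{even}}(t) + y_{\text{even}}(-t) \right]
```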


But, we know that yeven(t) is an even function, meaning yeven(–t) equals yeven(t). So finally


It works! We go “around the loop” and get back our original function.

You could perform another calculation just like this one but for yodd(t). Stop reading and do it, to convince yourself that again you get back to where you started from, yodd(t) = yodd(t).

Now, you folks who are really on the ball might realize that if you had used the old delta function relationships given in IPMB (the first three equations in this post), they would also work. (Again, try it and see.) So why use my fancy new formulas? Well, if you have an integral that adds up a bunch of cos(ωt), you know you’re gonna get an even function. There’s no way it can be equal to δ(ω – ω’), because that function is neither even nor odd. So, it just doesn’t make sense to say that summing up a bunch of even functions gives you something that isn’t even. In my new formula, that sum of two delta functions is an even function. The same argument holds for the integral with sin(ωt), which must be odd.


Finally (and this is what got me started down this rabbit hole), you often see the delta function written as

Jackson even gives this equation, so it MUST be correct. (For those of you who aren’t physicists, John David Jackson wrote the highly regarded graduate textbook Classical Electrodynamics, known by physics graduate students simply as “Jackson.”)

In Jackson’s equation, i is the square root of minus one. So, this representation of the delta function uses complex numbers. You won’t see it in IPMB because Russ and I avoid complex numbers (I hate them).

Let’s use the Euler formula e^(iθ) = cos θ + i sin θ to change the integral in Jackson’s delta function expression to

Now use a couple of trig identities, cos(A – B) = cosA cosB + sinA sinB and sin(A – B) = sinA cosB – cosA sinB, to get

This is really four integrals,
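Written out (all integrals over t from –∞ to ∞), the real part contributes two integrals and the imaginary part two more:

```latex
\int e^{i(\omega-\omega')t}\,dt
 = \int \cos(\omega t)\cos(\omega' t)\,dt
 + \int \sin(\omega t)\sin(\omega' t)\,dt
 + i\int \sin(\omega t)\cos(\omega' t)\,dt
 - i\int \cos(\omega t)\sin(\omega' t)\,dt
```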


Then, using the relations between these integrals and the delta function given in IPMB (the first three equations at the top of this post), you get that the sum of these integrals is equal to


which is obviously wrong; we started with 2πδ and ended up with 4πδ. Even worse, do the same calculation for δ(ω + ω') with a plus instead of a minus in front of the ω'. I’ll leave it to you to work out the details, but you’ll get zero! Again, nonsense. However, if you use the integral relations I propose above (second set of three integrals at the top of this blog), everything works just fine (try it).

Gene Surdutovich, my new coauthor for the sixth edition of IPMB, and I are still deciding how to discuss this issue in the new edition (which we are hard at work on but I doubt will be out within the next year). I don’t want to get bogged down in mathematical minutiae that aren’t essential to our book’s goals, but I want our discussion to be correct. Once the sixth edition is published, you can see how we handle it.

I haven’t seen my new delta function/Fourier integral relationships in any other textbook or math handbook. This makes me nervous. Are they correct? Moreover, Intermediate Physics for Medicine and Biology does not typically contain new mathematical results. Maybe I haven’t looked hard enough to see if someone else published these equations (if you’ve seen them before, let me know where…please!). Maybe I’ll find these results in Morse and Feshbach (another one of those textbooks known to all physics graduate students) or some other mathematical tome. I need to make a trip to the Oakland University library to look through their book collection, but right now it’s too cold and snowy (we got about four to five inches of the white stuff in the last 48 hours).

Friday, January 10, 2025

The Physics of Birds


In this second installment of my series on the physics of native gardening, I’ll talk about the physics of birds. We get a lot of birds in our yard. Robins visit the lawn and our crabapple tree. Too many house sparrows come to our bird feeders; they’re invasive pests. We see lots of blue jays, those big bullies, as well as goldfinches, downy woodpeckers, and black-capped chickadees. Every fall we know that winter is approaching when the dark-eyed juncos come down to Michigan from Canada. Canada geese fly overhead, but they never stop at our house.

Flight

I often see birds high in the sky, soaring through the air without flapping their wings. I suspect many are red-tailed hawks, but I’ve never gotten close enough to one to say for sure. How does soaring work? First, it requires a thermal updraft. The sun heats the earth and the earth heats the air next to it, resulting in a temperature gradient: the air near the ground is hotter than the air high above. Hot air is less dense and therefore tends to rise. This unstable situation results in thermal updrafts. Hot air at one location will rise, and then as it cools it will sink at some nearby location. The hawk can glide in the uprising air, so it slowly sinks with respect to the air but rises with respect to the ground. Once high up, it can then glide anywhere while searching for food, until it is low enough that it must seek another updraft.

Life in Moving Fluids, by Steven Vogel, superimposed on the cover of Intermediate Physics for Medicine and Biology.
Life in Moving Fluids,
by Steven Vogel.
Most birds don’t soar but instead flap their wings to fly. This flapping is complicated enough that I’ll let Steven Vogel—my favorite expert on biological fluid dynamics—explain it. The following excerpt is from his wonderful book Life in Moving Fluids.
In birds, bats, and insects, flapping wings combine the functions that airplanes divide between fixed wings and propellers—in a sense, they’re closer to helicopters than to airplanes, and it’s all too easy to be misled by our habit of calling the propulsive appendages “wings” rather than “propeller blades.” But they aren’t quite like ordinary propellers either, since flapping wings produce both thrust and lift directly, rather than producing thrust directly and getting lift by diverting some of the thrust to pay for the drag of fixed, lift-producing wings. The composite function, as well as their reciprocating rather than rotational motion, mean that the motion of flapping wings is inevitably complex… The downstroke moves a wing forward as well as downward and produces mainly upward force but usually some rearward force as well. The upstroke goes backward as well as upward, producing mainly rearward force but often some upward force.

Scaling

Scaling: Why is Animal Size So Important?, by Knut Schmidt-Nielsen, superimposed on the cover of Intermediate Physics for Medicine and Biology.
Scaling,
by Knut Schmidt-Nielsen.
Each summer my wife puts out a feeder filled with sugar water, and near it we plant red cardinal flowers, to attract hummingbirds. The hummingbirds are tiny and are constantly eating. In Intermediate Physics for Medicine and Biology, Russ Hobbie and I explain how an animal’s metabolic rate scales with its size and mass. The heat produced from metabolism increases with the volume of the animal, but heat is lost by an animal at its surface. As you compare larger animals to smaller ones, the volume increases with size faster than the surface area does. This means that large animals have trouble getting rid of excess heat, while small animals find it difficult to stay warm. The tiny hummingbird is smaller than other birds, so it tends to cool down quickly (it has a large surface-to-volume ratio). To keep warm, it has to boost its metabolism, which means it must eat a lot. A high metabolic rate requires not only much food but also oxygen, which implies that the hummingbird’s heart must pump a lot of blood. The heart rate of a hummingbird can be upwards of 1000 beats per minute (a normal heart rate for a human is 60 to 100 bpm).
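The surface-to-volume argument is easy to make quantitative. Here is a minimal sketch, treating each bird as a sphere of uniform density (the masses are illustrative round numbers, not measured data):

```python
import math

def surface_to_volume_ratio(mass_g, density_g_per_cm3=1.0):
    """Surface-to-volume ratio (1/cm) for a sphere of the given mass.

    For a sphere this ratio is exactly 3/radius, so it grows as the
    animal shrinks.
    """
    volume = mass_g / density_g_per_cm3                      # cm^3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)  # cm
    area = 4.0 * math.pi * radius ** 2                        # cm^2
    return area / volume

# Assumed round-number masses for illustration
hummingbird = surface_to_volume_ratio(4.0)    # a ~4 g hummingbird
crow = surface_to_volume_ratio(450.0)         # a ~450 g crow
```

Because the ratio scales as mass to the minus one-third power, the hummingbird’s surface-to-volume ratio comes out about five times the crow’s, which is why it loses heat so much faster per gram of tissue.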

Scaling relationships like we just saw for the hummingbird are common in biology. If you want to learn more about this topic, I suggest Knut Schmidt-Nielsen’s fascinating book Scaling: Why is Animal Size so Important?

Drinking

My favorite bird is the mourning dove. We sometimes will have eight or more of these sweet, gentle birds around our bird feeder. I love their low-pitched coo… coo… coooooooooo song. They mate for life.

Doves are unique among birds in the way they drink. Most birds fill their bill with water and then gravity pulls it down to their stomach. Sometimes they tilt their head back to help the water flow. Mourning doves, on the other hand, suck water into their bill, like we suck water through a straw. Professor Gart Zweers, from the University of Leiden, took high-speed x-ray photos, and concluded that doves draw a partial vacuum which pulls the water up.

Singing

Bird songs are analyzed using plots of time and frequency. As discussed in Chapter 11 of Intermediate Physics for Medicine and Biology, you can resolve any function of time into its component frequencies: Fourier analysis. If you plot the instantaneous frequency versus time, you get a sonogram. The higher the frequency, the higher the pitch that we hear. The northern cardinal’s song starts on a high pitch (around 4 kilohertz, which is about the frequency of the highest note on a piano) and then slurs downward an octave (to 2 kilohertz) in about half a second.
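You can make your own sonogram of an idealized cardinal-like slur. This sketch assumes a pure logarithmic chirp sweeping from 4 kHz down one octave in half a second (a real cardinal’s song is far richer), then computes its spectrogram:

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 44100                                   # sampling rate (Hz)
t = np.linspace(0.0, 0.5, int(fs * 0.5), endpoint=False)

# A logarithmic sweep falls by equal musical intervals per unit time,
# so 4 kHz -> 2 kHz in 0.5 s is a one-octave downward slur.
song = chirp(t, f0=4000.0, t1=0.5, f1=2000.0, method='logarithmic')

freqs, times, Sxx = spectrogram(song, fs=fs, nperseg=1024)

start_peak = freqs[np.argmax(Sxx[:, 0])]     # dominant pitch at the start
end_peak = freqs[np.argmax(Sxx[:, -1])]      # dominant pitch at the end
```

Plotting `Sxx` versus `times` and `freqs` gives the sonogram: a single ridge sliding from about 4 kHz down to about 2 kHz.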

Trevisan and Mindlin (Philosophical Transactions A, Volume 367, Pages 3239–3254, 2009) have modeled the bird’s vocal organ, called the syrinx. Their model might be familiar to physics students: it is Newton’s second law, force equals mass times acceleration. The important parameters that enter the model are the mass, stiffness, and a constant characterizing the dissipation or attenuation of the motion. The dissipation can be nonlinear, leading to all sorts of complex dynamics. The model predicts an oscillatory behavior (like that for a mass on a spring). Furthermore, the beak acts as a resonance tube (somewhat like an organ pipe).
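A toy version of such a model is a van der Pol-style oscillator: Newton’s second law with a linear stiffness and a nonlinear dissipation term that pumps small oscillations and damps large ones, settling onto a limit cycle. This is my own illustrative sketch (with assumed parameter values), not Trevisan and Mindlin’s actual equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0               # strength of the nonlinear dissipation (assumed)
omega0 = 2.0 * np.pi   # natural frequency; 1 Hz, just for illustration

def labium(t, y):
    """x'' = mu (1 - x^2) x' - omega0^2 x : negative damping at small
    amplitude, positive damping at large amplitude."""
    x, v = y
    return [v, mu * (1.0 - x**2) * v - omega0**2 * x]

sol = solve_ivp(labium, (0.0, 30.0), [0.1, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)

# After transients die away, the motion sits on a sustained oscillation
t_late = np.linspace(25.0, 30.0, 2000)
amp = np.max(np.abs(sol.sol(t_late)[0]))
```

Even though the initial displacement is tiny, the oscillation grows to a self-sustained amplitude near 2, the classic van der Pol limit cycle; that self-excited, sustained tone is the qualitative behavior a vocal organ needs.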

We get majestic red cardinals visiting our birdfeeders all the time. Next time you hear a cardinal singing, think of all the physics going on.

Magnetoreception

Are Electromagnetic Fields Making Me Ill? superimposed on the cover of Intermediate Physics for Medicine and Biology.
Are Electromagnetic Fields
Making Me Ill?
Many birds make long migrations, and one wonders how they find their way. One method is magnetoreception: the sensing of magnetic fields. Most organisms cannot detect magnetic fields, but some birds can. Magnetoreception is possible because the birds have small particles of magnetite, called magnetosomes, which provide a magnetic moment that can interact with a magnetic field. I discussed magnetoreception in my book Are Electromagnetic Fields Making Me Ill?
In 1963, German zoologist Wolfgang Wiltschko placed European robins inside a chamber and turned on a magnetic field comparable in strength to the earth’s field. He did not expect a response, but to his surprise the birds oriented with the field… The robins proved adept at sensing magnetic signals during their annual migration.

Some researchers believe there are other mechanisms for magnetoreception besides magnetite particles. I wrote

A few animals, including the European robin, may take advantage of free radical reactions instead of magnetite for magnetoreception. Sonke Johnsen and Kenneth Lohmann [Physics Today, Volume 61, Pages 29–35, 2008], after reviewing the data, conclude that “the current evidence for the radical-pair hypothesis is maddeningly circumstantial…” The jury is still out on this issue.
To tell you the truth, I’m skeptical that free radical reactions are important.

Another animal that may detect the earth’s magnetic field and use it to navigate is the bee. Next week we will continue this series on the physics of native gardening by examining the physics of bees.

 Northern cardinal song

https://www.youtube.com/watch?v=e_b4PkcpDe0

Friday, August 9, 2024

A Comparison of Two Models for Calculating the Electrical Potential in Skeletal Muscle

Roth and Gielen,
Annals of Biomedical Engineering,
Volume 15, Pages 591–602, 1987
Today I want to tell you how Frans Gielen and I wrote the paper “A Comparison of Two Models for Calculating the Electrical Potential in Skeletal Muscle” (Annals of Biomedical Engineering, Volume 15, Pages 591–602, 1987). It’s not one of my more influential works, but it provides insight into the kind of mathematical modeling I do.

The story begins in 1984 when Frans arrived as a post doc in John Wikswo’s Living State Physics Laboratory at Vanderbilt University in Nashville, Tennessee. I had already been working in Wikswo’s lab since 1982 as a graduate student. Frans was from the Netherlands and I called him “that crazy Dutchman.” My girlfriend (now wife) Shirley and I would often go over to Frans and his wife Tiny’s apartment to play bridge. I remember well when they had their first child, Irene. We all became close friends, and would go camping in the Great Smoky Mountains together.

Frans had received his PhD in biophysics from Twente University. In his dissertation he had developed a mathematical model of the electrical conductivity of skeletal muscle. His model was macroscopic, meaning it represented the electrical behavior of the tissue averaged over many cells. It was also anisotropic, so that the conductivity was different if measured parallel or perpendicular to the muscle fiber direction. His PhD dissertation also reported many experiments he performed to test his model. He used the four-electrode method, where two electrodes pass current into the tissue and two others measure the resulting voltage. When the electrodes are placed along the muscle fiber direction, he found that the resulting conductivity depended on the electrode separation. If the current-passing electrodes were very close together then the current was restricted to the extracellular space, resulting in a low conductivity. If, however, the electrodes were farther apart then the current would distribute between the extracellular and intracellular spaces, resulting in a high conductivity.

When Frans arrived at Vanderbilt, he collaborated with Wikswo and me to revise his model. It seemed odd to have the conductivity (a property of the tissue) depend on the electrode separation (a property of the experiment). So we expressed the conductivity using Fourier analysis (a sum of sines and cosines of different frequencies), and let the conductivity depend on the spatial frequency, k. Frans’s model already had the conductivity depend on the temporal frequency, ω, because of the muscle fiber’s membrane capacitance. So our revised model had the conductivity σ be a function of both k and ω: σ = σ(k,ω). Our new model had the same behavior as Frans’s original one: for high spatial frequencies the current remained in the extracellular space, but for low spatial frequencies it redistributed between the extracellular and intracellular spaces. The three of us published this result in an article titled “Spatial and Temporal Frequency-Dependent Conductivities in Volume-Conduction Calculations for Skeletal Muscle” (Mathematical Biosciences, Volume 88, Pages 159–189, 1988; the research was done in January 1986, although the paper wasn’t published until April of 1988).
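The two limits just described can be captured by a hypothetical one-line conductivity function. This is purely my illustrative sketch (the functional form, parameter names, and values are assumptions, not the published model): at high k the current stays extracellular, at low k it also fills the intracellular space.

```python
def sigma_parallel(k, sigma_e=0.5, sigma_i=0.3, lam=0.1):
    """Toy effective conductivity along the fibers (arbitrary units).

    sigma_e : extracellular conductivity (always available)
    sigma_i : intracellular conductivity (available only at low k)
    lam     : assumed length scale over which current redistributes
    """
    # At k = 0 both spaces conduct in parallel (sigma_e + sigma_i);
    # as k*lam grows, the intracellular contribution rolls off.
    return sigma_e + sigma_i / (1.0 + (k * lam) ** 2)
```

The key point the model captures is that the measured conductivity is a property of the tissue expressed as a function of k, rather than a number that mysteriously depends on electrode spacing: a given electrode separation simply samples a particular band of spatial frequencies.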

Meanwhile, I was doing experiments using tissue from the heart. My goal was to calculate the magnetic field produced by a strand of cardiac muscle. Current could flow inside the cardiac cells, in the perfusing bath surrounding the strand, or in the extracellular space between the cells. I was stumped about how to incorporate the extracellular space until I read Les Tung’s PhD dissertation, in which he introduced the “bidomain model.” Using this model and Fourier analysis, I was able to derive equations for the magnetic field and test them in a series of experiments. Wikswo and I published these results in the article “A Bidomain Model for the Extracellular Potential and Magnetic Field of Cardiac Tissue” (IEEE Transactions of Biomedical Engineering, Volume 33, Pages 467–469, 1986).

By the summer of 1986 I had two mathematical models for the electrical conductivity of muscle. One was a “monodomain” model (representing an averaging over both the intracellular and extracellular spaces) and one was a “bidomain” model (in which the intracellular and extracellular spaces were each individually averaged over many cells). It was strange to have two models, and I wondered how they were related. One was for skeletal muscle, in which each muscle cell is long and thin but not coupled to its neighbors. The other was for cardiac muscle, which is a syncytium where all the cells are coupled through intercellular junctions. I can remember going into Frans’s office and grumbling that I didn’t know how these two mathematical representations were connected. As I was writing the equations for each model on his chalkboard, it suddenly dawned on me that the main difference between the two models was that for cardiac tissue current could flow perpendicular to the fiber direction by passing through the intercellular junctions, whereas for skeletal muscle there was no intracellular path transverse to the uncoupled fibers. What if I took the bidomain model for cardiac tissue and set the transverse, intracellular conductivity equal to zero? Wouldn’t that, in some way, be equivalent to the skeletal muscle model?

I immediately went back to my own office and began to work out the details. This calculation starts on page 85 of my Vanderbilt research notebook #15, dated June 13, 1986. There were several false starts, work scratched out, and a whole page crossed out with a red pen. But by page 92 I had shown that the frequency-dependent conductivity model for skeletal muscle was equivalent to the bidomain model for cardiac muscle if I set the bidomain transverse intracellular conductivity to zero, except for one strange factor that included the membrane impedance, which represented current traveling transverse to the skeletal muscle fibers by shunting across the cell membrane. But this extra factor was important only at high temporal frequencies (when capacitance shorted out the membrane) and otherwise was negligible. I proudly marked the end of my analysis with “QED” (quod erat demonstrandum; Latin for “that which was to be demonstrated,” which often appears at the end of a mathematical proof).

Two pages (85 and 92) from my Research Notebook #15 (June, 1986).

Frans and I published this result in the Annals of Biomedical Engineering, and it is the paper I cite at the top of this blog post. Wikswo was not listed as an author; I think he was traveling that summer, and when he returned to the lab we already had the manuscript prepared, so he let us publish it just under our names. The abstract is given below:

We compare two models for calculating the extracellular electrical potential in skeletal muscle bundles: one a bidomain model, and the other a model using spatial and temporal frequency-dependent conductivities. Under some conditions the two models are nearly identical. However, under other conditions the model using frequency-dependent conductivities provides a more accurate description of the tissue. The bidomain model, having been developed to describe syncytial tissues like cardiac muscle, fails to provide a general description of skeletal muscle bundles due to the non-syncytial nature of skeletal muscle.

Frans left Vanderbilt in December, 1986 and took a job with the Netherlands section of the company Medtronic, famous for making pacemakers and defibrillators. He was instrumental in developing their deep brain stimulation treatment for Parkinson’s disease. I graduated from Vanderbilt in August 1987, stayed for one more year working as a post doc, and then took a job at the National Institutes of Health in Bethesda, Maryland.

Those were fun times working with Frans Gielen. He was a joy to collaborate with. I’ll always remember that June day when—after brainstorming with Frans—I proved how those two models were related.

Short bios of Frans and me published in an article with Wikswo in the IEEE Trans. Biomed. Eng.,
cited on page 237 of Intermediate Physics for Medicine and Biology.
 

Friday, July 7, 2023

Integral of the Bessel Function

Have you ever been reading a book, making good progress with everything making sense, when you suddenly stop and say “wait… what?” That happened to me recently as I was reading Homework Problem 31 in Chapter 12 of Intermediate Physics for Medicine and Biology. (Wait…what? I’m a coauthor of IPMB! How could there be any surprises for me?) The problem is about calculating the two-dimensional Fourier transform of 1/r, and it supplies the following Bessel function identity 

An equation for the integral of the Bessel function J0(kr).

The function J0 is a Bessel function of the first kind of order zero. What surprised me is that if you let x = kr, you get that the integral of the Bessel function is one,

An equation for the integral of the Bessel function J0(x), which equals one.

Really? Here’s a plot of J0(x).

A plot of the J0(x) Bessel function versus x.

It oscillates like crazy and the envelope of those oscillations falls off very slowly. In fact, an asymptotic expansion for J0 at large x is

An asymptotic expression for the J0 Bessel function at large argument.

The leading factor of 1/√x decays so slowly that its integral from zero to infinity does not converge. Yet, when you include the cosine so the function oscillates, the integral does converge. Here’s a plot of

An expression for the integral of the Bessel function J0(x') from 0 to x.

A plot of the integral of the J0 Bessel function.

The integral approaches one at large x, but very slowly. So, the expression given in the problem is correct, but I sure wouldn’t want to do any numerical calculations using it, where I had to truncate the endpoint of the integral to something less than infinity. That would be a mess!
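You can watch this slow convergence numerically. The sketch below accumulates the integral of J0 out to a large but finite endpoint (1000, an arbitrary choice) and checks how close it has crept toward one; I also include J1, since its partial integral equals 1 − J0(X) exactly:

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

# Fine grid out to x = 1000: dx = 0.0005, far smaller than the ~2*pi
# oscillation period of the Bessel functions, so trapezoid is accurate.
x = np.linspace(0.0, 1000.0, 2_000_001)

partial_J0 = trapezoid(jv(0, x), x)   # should be near 1, but only within
                                      # ~sqrt(2/(pi*1000)) ~ 0.025
partial_J1 = trapezoid(jv(1, x), x)   # exactly 1 - J0(1000), since J1 = -J0'
```

Even at an endpoint of 1000, the partial integrals still miss one by a couple percent, confirming that truncating the upper limit in a numerical calculation really would be a mess.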

Here’s another interesting fact. Bessel functions come in many orders—J0, J1, J2, etc.—and they all integrate to one.

Who’s responsible for these strangely-behaved functions? They’re named after the German astronomer Friedrich Bessel but they were first defined by the Swiss mathematician Daniel Bernoulli (1700–1782), a member of the brilliant Bernoulli family. The Bernoulli equation, mentioned in Chapter 1 of IPMB, is also named for Daniel Bernoulli. 

There was a time when I was in graduate school that I was obsessed with Bessel functions, especially modified Bessel functions that don’t oscillate. I’m not so preoccupied by them now, but they remain my favorite of the many special functions encountered in physics.

Friday, January 27, 2023

The Preface as Poetry

Alexander Pope
I mentioned before in this blog that I’m reading Will and Ariel Durant’s eleven-volume masterpiece The Story of Civilization. Recently, in Volume Nine about The Age of Voltaire, they discussed Alexander Pope, the eighteenth century English poet who, among other things, translated the Iliad into English poetry using iambic pentameter heroic couplets. Thus inspired, I decided to translate the preface of Intermediate Physics for Medicine and Biology from prose to poetry. I’d tell you to “Enjoy!” but I’m not sure that’ll be possible for such doggerel.

In the preface to the third edition, 
Russell Hobbie set out on a mission. 
For the two years before seventy three 
He audited the University
Of Minnesota’s medical courses. 
He found a lack of physics discourses. 
An intermediate physics class would 
For these students be so useful and good. 
This book is the result of when he taught 
Such a course that students needed a lot. 
He hoped that even those physics teachers 
Scared of bio would value its features. 
And doctors would find it a good reference 
With the physics concepts never too dense. 
Because the book was used by those whose tools 
From math were not vast he set down four rules: 
Calculus would be used without regret, 
And reviewed in detail when not seen yet. 
Readers should know the vocabulary 
But it’s told from the start, it’s not scary. 
He did not skip steps in derivations, 
And shunned any weirdo math notations. 
Hobbie added someone to help him write 
The Fourth Edition, and they did not fight. 
They wrote a new chapter, but made sure that 
The new edition did not get too fat. 
They added more than one homework problem, 
And a solution manual for them. 
Chapter One reviews biomechanics
Stress and strain, also fluid dynamics
Then Two discusses the exponential 
A math function that’s truly essential.
Next Three deals with temperature and heat
Biothermodynamics it does treat. 
Diffusion’s the topic of Chapter Four 
A random walk, and drift, and so much more. 
Chapter Five describes flow across membranes
And osmosis from ions that it contains. 
Six covers bioelectricity
And a model by Hodgkin and Huxley
Chapter Seven contains the EKG
And defibrillation is really key. 
Chapter Eight deals with all things magnetic
Interesting, but not too poetic. 
Nine begins with the model of Donnan
Then Nernst-Planck, Debye-Huckel, and so on. 
Chapter Ten has lots of mathematics
With feedback and chaos and dynamics
Eleven derives Fourier transforms.
Least squares and correlations, it brainstorms. 
Next Twelve analyzes tomography
Which allows you to image in 3D!
Chapter Thirteen considers ultrasound
A topic that you’ll find really profound. 
Next Chapter Fourteen summarizes light
Mix red and green and blue and you get white!
Fifteen considers photons and matter: 
Photoelectric and Compton Scatter
Chapter Sixteen is about the x-ray
Where the unit for dose is named the gray
Seventeen is about technetium
Nuclear medicine, and even then some. 
Then Eighteen is not about anything 
But magnetic resonance imaging
Biophysics is a subject quite broad, 
Of which we are always completely awed. 
Our book has grown and become large enough, 
To fit molecular stuff would be tough. 
We would like to get any corrections,
Or even hear about your suggestions. 
And last we thank our long-suffering wives, 
We know that this book disrupted their lives.
The Age of Voltaire,
by Will and Ariel Durant.

 The Iliad, translated by Alexander Pope (the heroic couplets start at 2:20).

https://www.youtube.com/watch?v=28RNGOCIzYI

 

Friday, March 11, 2022

Numerical Recipes is Online

Numerical Recipes, by Press, Teukolsky, Vetterling, and Flannery, superimposed on Intermediate Physics for Medicine and Biology.
Numerical Recipes,
by Press, Teukolsky, Vetterling, and Flannery.
In Intermediate Physics for Medicine and Biology, Russ Hobbie and I often refer to Numerical Recipes: The Art of Scientific Computing, by William Press, Saul Teukolsky, William Vetterling, and Brian Flannery. We usually cite the second edition of the book with programs written in C (1995), but the copy on my bookshelf is the second edition using Fortran 77 (1992). For those of you who don’t own a copy of this wonderful book, did you know you can read it online?

The address of the Numerical Recipes website is easy to remember: numerical.recipes. There you will find free copies of the second editions of Numerical Recipes for Fortran 77, Fortran 90, C, and C++ (2002). If you want easy, quick access to the third edition (2007), you will have to pay a fee. But if you are willing to put up with brief delays and annoying messages (which the authors call “nags”), you also can read the third edition for free.

The text below is from the Preface to the third edition.
“I was just going to say, when I was interrupted...” begins Oliver Wendell Holmes in the second series of his famous essays, The Autocrat of the Breakfast Table. The interruption referred to was a gap of 25 years. In our case, as the autocrats of Numerical Recipes, the gap between our second and third editions has been “only” 15 years. Scientific computing has changed enormously in that time.

The first edition of Numerical Recipes was roughly coincident with the first commercial success of the personal computer. The second edition came at about the time that the Internet, as we know it today, was created. Now, as we launch the third edition, the practice of science and engineering, and thus scientific computing, has been profoundly altered by the mature Internet and Web. It is no longer difficult to find somebody’s algorithm, and usually free code, for almost any conceivable scientific application. The critical questions have instead become, “How does it work?” and “Is it any good?” Correspondingly, the second edition of Numerical Recipes has come to be valued more and more for its text explanations, concise mathematical derivations, critical judgments, and advice, and less for its code implementations per se.

Recognizing the change, we have expanded and improved the text in many places in this edition and added many completely new sections. We seriously considered leaving the code out entirely, or making it available only on the Web. However, in the end, we decided that without code, it wouldn’t be Numerical Recipes. That is, without code you, the reader, could never know whether our advice was in fact honest, implementable, and practical. Many discussions of algorithms in the literature and on the Web omit crucial details that can only be uncovered by actually coding (our job) or reading compilable code (your job). Also, we needed actual code to teach and illustrate the large number of lessons about object-oriented programming that are implicit and explicit in this edition.
Russ and I cited Numerical Recipes in IPMB when we discussed integration, least squares fitting, random number generators, partial differential equations, the fast Fourier transform, aliasing, the correlation function, the power spectral density, and bilinear interpolation. Over the years, in my own research I have consulted the book about other topics, including solving systems of linear equations, evaluation of special functions, and computational analysis of eigensystems.

I highly recommend Numerical Recipes to anyone doing numerical computing. I found the book to be indispensable.

Friday, September 10, 2021

Is Shot Noise Also White Noise?

In Chapters 9 and 11 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss shot noise.

9.8.1 Shot Noise

The first (and smallest) limitation [on our ability to measure current] is called shot noise. It is due to the fact that the charge is transported by ions that move randomly and independently through the channels....

11.16.2 Shot Noise

Chapter 9 also mentioned shot noise, which occurs because the charge carriers have a finite charge, so the number of them passing a given point in a circuit in a given time fluctuates about an average value. One can show that shot noise is also white noise [my italics].
Introduction to Membrane Noise, by Louis DeFelice
How does one show that shot noise is white noise (independent of frequency)? I’m going to follow Lou DeFelice’s explanation in his book Introduction to Membrane Noise (cited in IPMB). I won’t give a rigorous proof. Instead, I’ll first state Campbell’s theorem (without proving it), and then show that the whiteness of shot noise is a consequence of that theorem.

Campbell’s Theorem

To start, I’ll quote DeFelice, but I will change the names of a few variables.
Suppose N impulses i(t) arrive randomly in the time interval T. The sum of these will result in a random noise signal I(t). This is shown qualitatively in Figure 78.1.

Below is my version of Fig. 78.1.

A diagram illustrating the sum of N impulses, i(t), each shown in red, arriving randomly in the time interval T. The blue curve represents their sum, I(t), and the green dashed line represents the average, <I(t)>. Adapted from Fig. 78.1 in Introduction to Membrane Noise by Louis DeFelice.

DeFelice shows that the average of I(t), which I’ll denote <I(t)>, is

$$\langle I(t) \rangle = \frac{N}{T}\int_{-\infty}^{\infty} i(t)\,dt$$
Here he lets T and N both be large, but their ratio (the average rate at which the impulses arrive) remains finite.

He then shows that the variance of I(t), called σI², is

$$\sigma_I^2 = \frac{N}{T}\int_{-\infty}^{\infty} i^2(t)\,dt$$
Finally, he writes

In order to calculate the spectral density of I(t) from i(t) we need Rayleigh’s theorem [also known as Parseval’s theorem]…
$$\int_{-\infty}^{\infty} i^2(t)\,dt = \int_{-\infty}^{\infty} |\hat{i}(f)|^2\,df$$
where î(f) is the Fourier transform of i(t) [and f is the frequency].

He concludes that the spectral density SI(f) is given by

$$S_I(f) = 2\,\frac{N}{T}\,|\hat{i}(f)|^2$$

These three results (for the average, the variance, and the spectral density) constitute Campbell’s theorem.
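Campbell's theorem is easy to check numerically. Below is a minimal simulation (my own sketch, not from DeFelice's book; the exponential pulse shape, rate, and durations are arbitrary illustrative choices). It superposes randomly timed impulses and compares the resulting mean and variance with the theorem's predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 500.0     # total observation time
rate = 100.0  # average number of impulses per unit time, N/T
tau = 0.5     # decay time of each impulse, i(t) = exp(-t/tau) for t >= 0
dt = 0.01     # time step

# Impulses arrive randomly (Poisson process); bin the arrival times
N = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, N)
counts, _ = np.histogram(arrivals, bins=int(T / dt), range=(0.0, T))

# Superpose the impulses by convolving the arrival counts with i(t)
kernel = np.exp(-np.arange(0.0, 10.0 * tau, dt) / tau)
I = np.convolve(counts, kernel)[: len(counts)]

# Campbell's theorem predicts
#   <I>      = (N/T) * integral of i(t)   dt = rate * tau
#   sigma^2  = (N/T) * integral of i(t)^2 dt = rate * tau / 2
mean_I, var_I = I.mean(), I.var()
print(mean_I, rate * tau)        # simulated vs predicted mean
print(var_I, rate * tau / 2.0)   # simulated vs predicted variance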

Shot Noise

Now, let’s analyze shot noise by using Campbell’s theorem assuming the impulse is a delta function (zero everywhere except at t = 0 where it’s infinite). Set i(t) = q δ(t), where q is the charge of each discrete charge carrier.

First, the average <I(t)> is simply Nq/T, or the total charge divided by the total time. 

Second, the variance is the integral of the delta function squared. When any function is multiplied by a delta function and then integrated over time, you get that function evaluated at time zero. So, the integral of the square of the delta function gives the delta function itself evaluated at zero, which is infinity. Yikes! The variance of shot noise is infinite.

Third, to get the spectral density of shot noise we need the Fourier transform of the delta function. 

$$S_I(f) = 2\,\frac{N}{T}\,q^2 = 2 q \langle I \rangle$$
The key point is that SI(f) is independent of frequency; it’s white.

DeFelice ends with

This [the expression for the spectral density] is the formula for shot noise first derived by Schottky (1918, pp. 541-567) in 1918. Evidently, the variance defined as
$$\sigma_I^2 = \int_0^{\infty} S_I(f)\,df$$
is again infinite; this is a consequence of the infinitely small width of the delta function.
As DeFelice reminds us, shot noise is white because the delta function is infinitely narrow. As soon as you assume i(t) has some width (perhaps the time it takes for a charge to cross the membrane), the spectrum will fall off at high frequencies, the variance won’t be infinite (thank goodness!), and the noise won’t be white. The bottom line is that shot noise is white because the Fourier transform of a delta function is a constant.
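The effect of pulse width is easy to see numerically. The sketch below (with arbitrary illustrative numbers of my choosing) compares the power spectrum of a wide rectangular pulse with that of a narrow pulse carrying the same charge: the narrow pulse's spectrum stays flat, nearly white, out to much higher frequencies.

```python
import numpy as np

q = 1.0          # charge carried by each pulse
dt = 1e-4        # sampling interval (seconds)
n_total = 20000  # 2 s of record

def power_spectrum(width):
    # Rectangular impulse of duration `width` with total area q
    n = max(int(round(width / dt)), 1)   # samples inside the pulse
    i = np.zeros(n_total)
    i[:n] = q / (n * dt)
    ihat = np.fft.rfft(i) * dt           # approximates the continuous transform
    f = np.fft.rfftfreq(n_total, dt)
    return f, np.abs(ihat) ** 2

f, S_wide = power_spectrum(0.1)      # 100 ms pulse
_, S_narrow = power_spectrum(0.001)  # 1 ms pulse, nearly a delta function

# For a true delta function, |ihat|^2 = q^2 at every frequency.
i100 = np.argmin(np.abs(f - 100.0))  # index of f = 100 Hz
print(S_narrow[0], S_narrow[i100])   # both close to q^2 = 1: nearly white
print(S_wide[0], S_wide[i100])       # flat at f = 0, but rolled off by 100 Hz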

Conclusion

Perhaps you’re thinking I haven’t helped you all that much. I merely changed your question from “why is shot noise white” to “how do I prove Campbell’s theorem.” You have a point. Maybe proving Campbell’s theorem can be the story of another post.

I met Lou DeFelice in 1984, when I was a graduate student at Vanderbilt University and he came to give a talk. In the summer of 1986, my PhD advisor John Wikswo and I traveled to Emory University to visit DeFelice and Robert DeHaan. During that trip, Wikswo and I were walking across the Emory campus when Wikswo decided he knew a short cut (he didn’t). He left the sidewalk and entered a forest, with me following behind him. After what seemed like half an hour of wandering through a thicket, we emerged from the woods at a back entrance to the Yerkes Primate Research Center. We’re lucky we weren’t arrested.

DeFelice joined the faculty at Vanderbilt in 1995, and we both worked there in the late 1990s. He was a physicist by training, but spent most of his career studying electrophysiology. Sadly, in 2016 he passed away.

Friday, August 20, 2021

The Central Slice Theorem: An Example

The central slice theorem is key to understanding tomography. In Intermediate Physics for Medicine and Biology, Russ Hobbie and I ask the reader to prove the central slice theorem in a homework problem. Proofs are useful for their generality, but I often understand a theorem better by working an example. In this post, I present a new homework problem that guides you through every step needed to verify the central slice theorem. This example contains a lot of math, but once you get past the calculation details you will find it provides much insight.

The central slice theorem states that taking a one-dimensional Fourier transform of a projection is equivalent to taking the two-dimensional Fourier transform of the object and evaluating it along one direction in frequency space. Our “object” will be a mathematical function (representing, say, the x-ray attenuation coefficient as a function of position). Here is a summary of the process, cast as a homework problem.

Section 12.4 

Problem 21½. Verify the central slice theorem for the object

(a) Calculate the projection of the object using Eq. 12.29,

Then take a one-dimensional Fourier transform of the projection using Eq. 11.59,
 
(b) Calculate the two-dimensional Fourier transform of the object using Eq. 12.11a,
Then transform (kx, ky) to (θ, k) by converting from Cartesian to polar coordinates in frequency space.
(c) Compare your answers to parts (a) and (b). Are they the same?


I’ll outline the solution to this problem, and leave it to the reader to fill in the missing steps. 

 
Fig. 12.12 from Intermediate Physics for Medicine and Biology, showing how to do a projection.

The Projection 

Figure 12.12 shows that the projection is an integral of the object along various lines in the direction θ, as a function of displacement perpendicular to each line, x'. The integral becomes


Note that you must replace x and y by the rotated coordinates x' and y':

$$x = x' \cos\theta - y' \sin\theta\,, \qquad y = x' \sin\theta + y' \cos\theta\,.$$

You can verify that x² + y² = x'² + y'².

After some algebra, you’re left with integrals involving $e^{-by'^2}$ (Gaussian integrals) such as those analyzed in Appendix K of IPMB. The three you’ll need are

$$\int_{-\infty}^{\infty} e^{-bu^2}\,du = \sqrt{\frac{\pi}{b}}\,, \qquad \int_{-\infty}^{\infty} u\, e^{-bu^2}\,du = 0\,, \qquad \int_{-\infty}^{\infty} u^2\, e^{-bu^2}\,du = \frac{1}{2}\sqrt{\frac{\pi}{b^3}}\,.$$
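The standard Gaussian integrals used at this step can be spot-checked numerically (b = 2 is an arbitrary test value of mine):

```python
import numpy as np

b = 2.0
u = np.linspace(-20.0, 20.0, 400001)   # wide enough that the tails vanish
du = u[1] - u[0]

# Riemann sums approximating the three Gaussian integrals
I0 = np.sum(np.exp(-b * u**2)) * du          # should equal sqrt(pi/b)
I1 = np.sum(u * np.exp(-b * u**2)) * du      # should equal 0 (odd integrand)
I2 = np.sum(u**2 * np.exp(-b * u**2)) * du   # should equal (1/2) sqrt(pi/b^3)

print(I0, np.sqrt(np.pi / b))
print(I1)
print(I2, 0.5 * np.sqrt(np.pi / b**3))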

The resulting projection is


Think of the projection as a function of x', with the angle θ being a parameter.

 

The One-Dimensional Fourier Transform

The next step is to evaluate the one-dimensional Fourier transform of the projection

The variable k is the spatial frequency. This integral isn’t as difficult as it appears. The trick is to complete the square of the exponent


Then make a variable substitution u = x' + ik/(2b). Finally, use those Gaussian integrals again. You get


This is our big result: the one-dimensional Fourier transform of the projection. Our next goal is to show that it’s equal to the two-dimensional Fourier transform of the object evaluated in the direction θ.
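The workhorse fact behind this step, obtained by completing the square, is that the Fourier transform of a Gaussian is another Gaussian: ∫ exp(−bx²) exp(−ikx) dx = √(π/b) exp(−k²/4b). Here is a quick numerical spot-check (b and the k values are arbitrary test choices of mine):

```python
import numpy as np

b = 1.5
x = np.linspace(-30.0, 30.0, 600001)  # wide enough that the tails vanish
dx = x[1] - x[0]

for k in (0.0, 1.0, 2.5):
    # Riemann sum approximating the continuous Fourier integral
    numeric = np.sum(np.exp(-b * x**2) * np.exp(-1j * k * x)) * dx
    exact = np.sqrt(np.pi / b) * np.exp(-k**2 / (4.0 * b))
    print(k, abs(numeric - exact))    # differences are tiny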

Two-Dimensional Fourier Transform

To calculate the two-dimensional Fourier transform, we must evaluate the double integral


The variables kx and ky are again spatial frequencies, and they make up a two-dimensional domain we call frequency space.

You can separate this double integral into the product of an integral over x and an integral over y. Solving these requires—guess what—a lot of algebra, completing the square, and Gaussian integrals. But the process is straightforward, and you get


Select One Direction in Frequency Space

If we want to focus on one direction in frequency space, we must convert to polar coordinates: kx = k cosθ and ky = k sinθ. The result is 

This is exactly the result we found before! In other words, we can take the one-dimensional Fourier transform of the projection, or the two-dimensional Fourier transform of the object evaluated in the direction θ in frequency space, and we get the same result. The central slice theorem works.

I admit, the steps I left out involve a lot of calculations, and not everyone enjoys math (why not?!). But in the end you verify the central slice theorem for a specific example. I hope this helps clarify the process, and provides insight into what the central slice theorem is telling us.
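If you would rather check the theorem numerically than algebraically, here is a discrete version using a sampled Gaussian object (the grid size and width b are arbitrary choices of mine). At θ = 0 the projection is a sum along y, and its one-dimensional discrete Fourier transform should equal the ky = 0 row of the object's two-dimensional transform:

```python
import numpy as np

n = 256    # grid points per side
L = 10.0   # physical size of the grid
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

b = 1.0
obj = np.exp(-b * (X**2 + Y**2))   # the Gaussian object

proj = obj.sum(axis=1) * dx        # projection at theta = 0 (integrate over y)

F_proj = np.fft.fft(proj)               # 1D transform of the projection
F_slice = np.fft.fft2(obj)[:, 0] * dx   # ky = 0 slice of the 2D transform

print(np.allclose(F_proj, F_slice))     # the two agree: central slice theorem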