Friday, August 30, 2019

The Book of Why

The Book of Why: The New Science of Cause and Effect, by Judea Pearl and Dana MacKenzie, superimposed on Intermediate Physics for Medicine and Biology.
The Book of Why,
by Judea Pearl.
At Russ Hobbie’s suggestion, I read The Book of Why, by Judea Pearl. This book presents a new way of analyzing data, using causal inference in addition to more traditional, hypothesis-free statistical methods. In his introduction, Pearl writes
If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do. I hope that the new science of causal inference will enable us to better understand how we do it, because there is no better way to understand ourselves than by emulating ourselves. In the age of computers, this new understanding also brings with it the prospect of amplifying our innate abilities so that we can make better sense of data, be it big or small.
I had a hard time with this book, mainly because I’m not a fan of statistics. Rather than asking “why” questions, I usually ask “what if” questions. In other words, I build mathematical models and then analyze them and make predictions. Intermediate Physics for Medicine and Biology has a similar approach. For instance, what if drift and diffusion both act in a pore; which will dominate under what circumstances (Section 4.12 in IPMB)? What if an ultrasonic wave impinges on an interface between tissues having different acoustic impedances; what fraction of the energy in the wave is reflected (Section 13.3)? What if you divide up a round of radiation therapy into several small fractions; will this preferentially spare healthy tissue (Section 16.9)? Pearl asks a different type of question: the data shows that smokers are more likely to get lung cancer; why? Does smoking cause lung cancer, or is there some confounding effect responsible for the correlation (for instance, some people have a gene that makes them both more susceptible to lung cancer and more likely to smoke)?
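The ultrasound question, at least, has a tidy closed-form answer: at normal incidence the fraction of the incident energy reflected at an interface is ((Z2 − Z1)/(Z2 + Z1))², where Z1 and Z2 are the acoustic impedances. A few lines of Python make the point; the impedance values below are rough round numbers for illustration, not the tabulated values from the book.

```python
def reflected_fraction(z1, z2):
    """Fraction of incident ultrasound energy reflected at a flat
    interface between media of acoustic impedance z1 and z2
    (normal incidence)."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# rough acoustic impedances in kg/(m^2 s), for illustration only
Z = {"fat": 1.4e6, "muscle": 1.7e6, "bone": 7.8e6, "air": 4.3e2}

print(reflected_fraction(Z["fat"], Z["muscle"]))   # soft tissues are well matched
print(reflected_fraction(Z["muscle"], Z["bone"]))  # bone reflects strongly
print(reflected_fraction(Z["muscle"], Z["air"]))   # nearly all reflected: why coupling gel is needed
```

Soft-tissue interfaces reflect less than a percent of the energy, bone roughly 40%, and a tissue-air interface nearly everything.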

Although I can’t say I’ve mastered Pearl’s statistical methods for causal inference, I do like the way he adopts a causal model before analyzing the data. Apparently, for a long time statisticians analyzed data using no hypotheses, just statistical tests. If they found a correlation, they could not infer causation: does smoking cause lung cancer, or does lung cancer cause smoking? Pearl draws many causal diagrams to make his causation assumptions explicit. He then uses these illustrations to derive his statistical model. These drawings remind me of the Feynman diagrams that we physicists use to calculate the behavior of elementary particles.

Simpson’s Paradox

Just when my interest in The Book of Why was waning, Pearl shocked me back to attention with Simpson’s paradox.
Imagine a doctor—Dr. Simpson, we’ll call him—reading in his office about a promising new drug (Drug D) that seems to reduce the risk of a heart attack. Excitedly, he looks up the researcher’s data online. His excitement cools a little when he looks at the data on male patients and notices that their risk of a heart attack is actually higher if they take Drug D. “Oh well,” he says, “Drug D must be very effective for women.”

But then he turns to the next table, and his disappointment turns to bafflement. “What is this?” Dr. Simpson exclaims. “It says here that women who took Drug D were also at higher risk of a heart attack. I must be losing my marbles! This drug seems to be bad for women, bad for men, but good for people.”
To illustrate this effect, consider the example analyzed by Pearl. In a clinical trial some patients received a drug (treatment) and some didn’t (control). Patients who subsequently had heart attacks are indicated by red boxes, and patients who did not by blue boxes. In the figure below, the data is analyzed by gender: males and females.

An example of Simpson's paradox, showing men and women being divided into treatment and control groups. Based on The Book of Why, by Judea Pearl and Dana MacKenzie.

One of the twenty females (5%) in the control group had a heart attack, while three of the forty (7.5%) in the treatment group did. For women, the drug caused heart attacks! For the males, twelve of the forty men in the control group (30%) suffered heart attacks, and eight of the twenty (40%) in the treatment group did. The drug caused heart attacks for the men too!

Now combine the data for men and women.

An example of Simpson's paradox, showing men and women pooled together into treatment and control groups. Based on The Book of Why, by Judea Pearl and Dana MacKenzie.

In the control group, 13 out of 60 patients had a heart attack (22%). In the treatment group, 11 out of 60 did (18%). The drug prevented heart attacks! This seems impossible, but if you don’t believe me, count the boxes; it’s not a trick. What do we make of this? As Pearl says, “A drug can’t simultaneously cause me and you to have a heart attack and at the same time prevent us both from having heart attacks.”

To resolve the paradox, Pearl notes that this was not a randomized clinical trial. Patients decided for themselves whether to take the drug, and women chose it more often than men. This preference for taking the drug is what Pearl calls a “confounder.” The chance of having a heart attack is much greater for men than for women, but more women than men elected to join the treatment group. The treatment group was therefore overweighted with low-risk women, and the control group was overweighted with high-risk men, so when the data were pooled the treatment group appeared to have fewer heart attacks than the control group. In other words, the difference between treatment and control got mixed up with the difference between men and women; the apparent effectiveness of the drug in the pooled data is an artifact of confounding, not a real protective effect. A randomized trial would have shown similar results for men and women, and the pooled data would have agreed with them: the drug causes heart attacks.
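If you’d rather count the boxes with a computer, a few lines of Python reproduce the reversal from the numbers above (the rates print as 5.0% and 7.5% for women, 30.0% and 40.0% for men, yet 21.7% versus 18.3% pooled):

```python
# Heart-attack counts (attacks, patients) from Pearl's example,
# in which patients chose for themselves whether to take the drug
groups = {
    "female": {"control": (1, 20),  "treatment": (3, 40)},
    "male":   {"control": (12, 40), "treatment": (8, 20)},
}

def rate(attacks, total):
    return attacks / total

for sex, arms in groups.items():
    for arm, (attacks, total) in arms.items():
        print(f"{sex:6s} {arm:9s}: {100 * rate(attacks, total):5.1f}%")

# pool the two sexes: sum attacks and patients in each arm
pooled = {arm: (sum(groups[sex][arm][0] for sex in groups),
                sum(groups[sex][arm][1] for sex in groups))
          for arm in ("control", "treatment")}
for arm, (attacks, total) in pooled.items():
    print(f"pooled {arm:9s}: {100 * rate(attacks, total):5.1f}%")
```

Within each sex the treatment arm fares worse, yet the pooled treatment arm fares better; the arithmetic is honest, and the confounding does the rest.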

Mathematics

The Book of Why contains only a little mathematics; Pearl tries to make the discussion accessible to a wide audience. He does, however, use lots of math in his research. His opinion of math is similar to mine and to IPMB’s.
Many people find formulas daunting, seeing them as a way of concealing rather than revealing information. But to a mathematician, or to a person who is adequately trained in the mathematical way of thinking, exactly the reverse is true. A formula reveals everything: it leaves nothing to doubt or ambiguity. When reading a scientific article, I often catch myself jumping from formula to formula, skipping the words altogether. To me, a formula is a baked idea. Words are ideas in the oven.
One goal of IPMB is to help students gain skill in mathematical modeling so that formulas reveal rather than conceal information. I often tell my students that formulas aren’t merely things you stick numbers into to get other numbers. Formulas tell a story. This idea is vitally important. I suspect Pearl would agree.

Modeling

The causal diagrams in The Book of Why aid Pearl in deriving the correct statistical equations needed to analyze data. Toy models in IPMB aid students in deriving the correct differential equations needed to predict behavior. I see modeling as central to both activities: you start with an underlying hypothesis about what causes what, you translate that into mathematics, and then you learn something about your system. As Pearl notes, statistics does not always take this approach.
In certain circles there is an almost religious faith that we can find the answers to these questions in the data itself, if only we are sufficiently clever at data mining. However, readers of this book will know that this hype is likely to be misguided. The questions I have just asked are all causal, and causal questions can never be answered from data alone. They require us to formulate a model of the process that generates the data, or at least some aspects of that process. Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.
I enjoyed The Book of Why, even if I didn’t entirely understand it. It was skillfully written, thanks in part to coauthor Dana MacKenzie. It’s the sort of book that, once finished, I should go back and read again because it has something important to teach me. If I liked statistics more I might do that. But I won’t.

Friday, August 23, 2019

Happy Birthday, Godfrey Hounsfield!

Godfrey Hounsfield (1919-2004).
Godfrey Hounsfield
(1919-2004).
Wednesday, August 28, is the hundredth anniversary of the birth of Godfrey Hounsfield, the inventor of the computed tomography scanner.

In Intermediate Physics for Medicine and Biology, Russ Hobbie and I write
The history of the development of computed tomography is quite interesting (Kalender 2011). The Nobel Prize in Physiology or Medicine was shared in 1979 by a physicist, Allan Cormack, and an engineer, Godfrey Hounsfield…The Nobel Prize acceptance speeches (Cormack 1980; Hounsfield 1980) are interesting to read.
To celebrate the centenary of Hounsfield’s birth, I’ve collected excerpts from his interesting Nobel Prize acceptance speech.
When we consider the capabilities of conventional X-ray methods, three main limitations become obvious. Firstly, it is impossible to display within the framework of a two-dimensional X-ray picture all the information contained in the three-dimensional scene under view. Objects situated in depth, i.e. in the third dimension, superimpose, causing confusion to the viewer.

Secondly, conventional X-rays cannot distinguish between soft tissues. In general, a radiogram differentiates only between bone and air, as in the lungs. Variations in soft tissues such as the liver and pancreas are not discernible at all and certain other organs may be rendered visible only through the use of radio-opaque dyes.

Thirdly, when conventional X-ray methods are used, it is not possible to measure in a quantitative way the separate densities of the individual substances through which the X-ray has passed. The radiogram records the mean absorption by all the various tissues which the X-ray has penetrated. This is of little use for quantitative measurement.

Computed tomography, on the other hand, measures the attenuation of X-ray beams passing through sections of the body from hundreds of different angles, and then, from the evidence of these measurements, a computer is able to reconstruct pictures of the body’s interior...
The technique’s most important feature is its [enormous] sensitivity. It allows soft tissue such as the liver and kidneys to be clearly differentiated, which radiographs cannot do…
It can also very accurately measure the values of X-ray absorption of tissues, thus enabling the nature of tissue to be studied.
These capabilities are of great benefit in the diagnosis of disease, but CT additionally plays a role in the field of therapy by accurately locating, for example, a tumour so indicating the areas of the body to be irradiated and by monitoring the progress of the treatment afterwards...
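Hounsfield’s third point, that a single radiogram records only a sum of absorptions, is easy to see in a toy calculation. In the sketch below, a hypothetical 2-by-2 “phantom” of attenuation coefficients is recovered from line integrals measured along several directions, something no single view could do; real CT reconstruction uses hundreds of angles and far more sophisticated algorithms.

```python
# A 2x2 "phantom" of attenuation coefficients, unknown to the scanner
a, b, c, d = 0.2, 0.5, 0.1, 0.4

# What the scanner measures: line integrals (sums of mu) along each ray
row1 = a + b            # horizontal ray through the top row
col1 = a + c            # vertical ray through the left column
diag = a + d            # one diagonal ray
total = a + b + c + d   # sum of both horizontal rays

# Reconstruction: row1 + col1 + diag = 2a + total, so a is determined,
# and the remaining pixels follow by subtraction
a_rec = (row1 + col1 + diag - total) / 2
b_rec, c_rec, d_rec = row1 - a_rec, col1 - a_rec, diag - a_rec
print(a_rec, b_rec, c_rec, d_rec)
```

Views from enough different angles turn an underdetermined sum into a solvable system, which is the heart of computed tomography.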
Famous scientists and engineers often have fascinating childhoods. Learn about Hounsfield’s youth by reading these excerpts from his Nobel biographical statement.
I was born and brought up near a village in Nottinghamshire and in my childhood enjoyed the freedom of the rather isolated country life. After the first world war, my father had bought a small farm, which became a marvellous playground for his five children… At a very early age I became intrigued by all the mechanical and electrical gadgets which even then could be found on a farm; the threshing machines, the binders, the generators. But the period between my eleventh and eighteenth years remains the most vivid in my memory because this was the time of my first attempts at experimentation, which might never have been made had I lived in a city… I constructed electrical recording machines; I made hazardous investigations of the principles of flight, launching myself from the tops of haystacks with a home-made glider; I almost blew myself up during exciting experiments using water-filled tar barrels and acetylene to see how high they could be waterjet propelled…

Aeroplanes interested me and at the outbreak of the second world war I joined the RAF as a volunteer reservist. I took the opportunity of studying the books which the RAF made available for Radio Mechanics and looked forward to an interesting course in Radio. After sitting a trade test I was immediately taken on as a Radar Mechanic Instructor and moved to the then RAF-occupied Royal College of Science in South Kensington and later to Cranwell Radar School. At Cranwell, in my spare time, I sat and passed the City and Guilds examination in Radio Communications. While there I also occupied myself in building large-screen oscilloscope and demonstration equipment as aids to instruction...

It was very fortunate for me that, during this time, my work was appreciated by Air Vice-Marshal Cassidy. He was responsible for my obtaining a grant after the war which enabled me to attend Faraday House Electrical Engineering College in London, where I received a diploma.
I joined the staff of EMI in Middlesex in 1951, where I worked for a while on radar and guided weapons and later ran a small design laboratory. During this time I became particularly interested in computers, which were then in their infancy… Starting in about 1958 I led a design team building the first all-transistor computer to be constructed in Britain, the EMIDEC 1100...

I was given the opportunity to go away quietly and think of other areas of research which I thought might be fruitful. One of the suggestions I put forward was connected with automatic pattern recognition and it was while exploring various aspects of pattern recognition and their potential, in 1967, that the idea occurred to me which was eventually to become the EMI-Scanner and the technique of computed tomography...
Happy birthday, Godfrey Hounsfield. Your life and work made a difference.

 Watch “The Scanner Story,” a documentary made by EMI 
about their early computed tomography brain scanners.
The video, filmed in 1978, shows its age but is engaging.

Part Two of “The Scanner Story.”

Friday, August 16, 2019

This View of Life

What’s the biggest idea in science that’s not mentioned in Intermediate Physics for Medicine and Biology? Most of the grand principles of physics appear: quantum mechanics, special relativity, the second law of thermodynamics. The foundations of chemistry are included, such as atomic theory and radioactive decay. Many basic concepts from mathematics are discussed, like calculus and chaos theory. Fundamentals of biology are also present, like the structure of DNA.

In my opinion, the biggest scientific idea never mentioned in Intermediate Physics for Medicine and Biology, not even once, is evolution. As Theodosius Dobzhansky said, “nothing in biology makes sense except in the light of evolution.” So why is evolution absent from IPMB?

A simple, if not altogether satisfactory, answer is that no single book can cover everything. As Russ Hobbie and I write in the preface to IPMB, “This book has become long enough.”

At a deeper level, however, physicists focus on principles that are common to all organisms, principles that unify our view of life. Evolutionary biologists, on the other hand, delight in explaining how diverse organisms come about through the quirks and accidents of history. Russ and I come from physics, and we emphasize unity over diversity.

Ever Since Darwin, by Stephen Jay Gould, superimposed on Intermediate Physics for Medicine and Biology.
Ever Since Darwin,
by Stephen Jay Gould.
Suppose you want to learn more about evolution; how would you do it? I suggest reading books by Stephen Jay Gould (1941-2002), and in particular his collections of essays. I read these years ago and loved them, both for the insights into evolution and for the beauty of the writing. In the prologue of Gould’s first collection—Ever Since Darwin—he says
These essays, written from 1974-1977, originally appeared in my monthly column for Natural History Magazine, entitled “This View of Life.” They range broadly from planetary and geological to social and political history, but they are united (in my mind at least) by the common thread of evolutionary theory—Darwin’s version. I am a tradesman, not a polymath; what I know of planets and politics lies at their intersection with biological evolution.
Is evolution truly missing from Intermediate Physics for Medicine and Biology? Although it’s not discussed explicitly, ideas about how physics constrains evolution are implicit. For instance, one homework problem in Chapter 4 instructs the student to “estimate how large a cell …can be before it is limited by oxygen transport.” Doesn’t this problem really analyze how diffusion impacts natural selection? Another problem in Chapter 3 asks “could a fish be warm blooded and still breathe water [through gills]?” Isn’t this really asking why mammals such as dolphins and whales, which have evolved to live in the water, must nevertheless come to the surface to breathe air? Indeed, many ideas analyzed in IPMB are relevant to evolution.

In Ever Since Darwin, Gould dedicates an essay (Chapter 21, “Size and Shape”) to scaling. Russ and I discuss scaling in Chapter 1 of IPMB. Gould explains that
Animals are physical objects. They are shaped to their advantage by natural selection. Consequently, they must assume forms best adapted to their size. The relative strength of many fundamental forces (gravity, for example) varies with size in a regular way, and animals respond by systematically altering their shapes.
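Gould’s point can be made quantitative with the simplest scaling argument, the kind Russ and I use in Chapter 1 of IPMB: under isometric scaling, strength grows with cross-sectional area (as length squared) while weight grows with volume (as length cubed), so the strength-to-weight ratio falls as one over length. The sketch below is illustrative only; real animals, as Gould emphasizes, change shape rather than scale isometrically.

```python
def strength_to_weight(scale):
    """Isometric scaling: strength tracks cross-sectional area (scale^2),
    weight tracks volume (scale^3), so their ratio falls as 1/scale."""
    strength = scale ** 2
    weight = scale ** 3
    return strength / weight

for scale in (1, 10, 100):   # an animal 1x, 10x, 100x as long
    print(scale, strength_to_weight(scale))
```

A creature scaled up a hundredfold without changing shape would have one percent of its original strength-to-weight ratio, which is why elephants don’t look like giant mice.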
The Panda's Thumb, by Stephen Jay Gould, superimposed on Intermediate Physics for Medicine and Biology.
The Panda's Thumb,
by Stephen Jay Gould.
Gould returns to the topic of scaling in an essay on “Our Allotted Lifetimes,” Chapter 29 in his collection titled The Panda’s Thumb. This chapter contains mathematical expressions (rare in Gould’s essays but common in IPMB) analyzing how breathing rate, heart rate and lifetime scale with size. In his next essay (Chapter 30, “Natural Attraction: Bacteria, the Birds and the Bees”), Gould addresses another topic covered in IPMB: magnetotactic bacteria. He writes
In the standard examples of nature’s beauty—the cheetah running, the gazelle escaping, the eagle soaring, the tuna coursing, and even the snake slithering or the inchworm inching—what we perceive as graceful form also represents an excellent solution to a problem in physics. When we wish to illustrate the concept of adaptation in evolutionary biology, we often try to show that organisms “know” physics—that they have evolved remarkably efficient machines for eating and moving.
Gould knew one of my heroes, Isaac Asimov. In his essay on magnetotactic bacteria, Gould describes how he and Asimov discussed topics similar to those in Edward Purcell’s article “Life at Low Reynolds Number” cited in IPMB.
The world of a bacterium is so unlike our own that we must abandon all our certainties about the way things are and start from scratch. Next time you see Fantastic Voyage... ponder how the miniaturized adventurers would really fare as microscopic objects within a human body... As Isaac Asimov pointed out to me, their ship could not run on its propeller, since blood is too viscous at such a scale. It should have, he said, a flagellum—like a bacterium.
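Asimov’s remark is really a statement about the Reynolds number, the dimensionless ratio of inertial to viscous forces that Purcell made famous. A quick estimate makes the point; the speeds and sizes below are rough guesses, good only for orders of magnitude.

```python
def reynolds(density, speed, length, viscosity):
    """Reynolds number: the ratio of inertial to viscous forces."""
    return density * speed * length / viscosity

# water, SI units; speeds and sizes are rough guesses for illustration
rho, eta = 1.0e3, 1.0e-3
swimmer = reynolds(rho, speed=1.0, length=1.0, viscosity=eta)       # ~1e6
bacterium = reynolds(rho, speed=30e-6, length=1e-6, viscosity=eta)  # ~3e-5
print(swimmer, bacterium)
```

Ten orders of magnitude separate the human swimmer from the bacterium; at such tiny Reynolds numbers inertia is irrelevant and a propeller is useless, just as Asimov said.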
I’m fond of essays, which often provide more insight than journal articles and textbooks. Gould’s 300 essays appeared in every issue of Natural History between 1974 and 2001; he never missed a month. Asimov also had a monthly essay in The Magazine of Fantasy and Science Fiction, and his streak lasted over thirty years, from 1959 to 1992. My twelve-year streak in this blog seems puny compared to these ironmen. Had Gould and Asimov been born a half century later, I wonder whether they would have been bloggers.

Gould ends his prologue to The Panda’s Thumb by quoting The Origin of Species, written by his hero Charles Darwin. There in the final paragraph of this landmark book we find a juxtaposition of physics and biology.
Charles Darwin chose to close his great book with a striking comparison that expresses this richness. He contrasted the simpler system of planetary motion, and its result of endless, static cycling, with the complexity of life and its wondrous and unpredictable change through the ages:
There is a grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

Listen to Stephen Jay Gould talk about evolution.
https://www.youtube.com/embed/049WuppYa20

 National Public Radio remembers Stephen Jay Gould (May 22, 2002).
https://www.youtube.com/embed/7mTirfwTMsU

Friday, August 9, 2019

Arthur Sherman wins the Winfree Prize

A photo of Arthur Sherman, winner of the Arthur T. Winfree Prize from the Society for Mathematical Biology.
Arthur Sherman
My friend Arthur Sherman—whom I knew when I worked at the National Institutes of Health in the 1990s—has won the Arthur T. Winfree Prize from the Society for Mathematical Biology. The SMB website states
Arthur Sherman, National Institute of Diabetes and Digestive and Kidney Diseases, will receive the Arthur T. Winfree Prize for his work on biophysical mechanisms underlying insulin secretion from pancreatic beta-cells. Since insulin plays a key role in maintaining blood glucose, this is of basic physiological interest and is also important for understanding the causes and treatment of type 2 diabetes, which arises from a combination of defects in insulin secretion and insulin action. The Arthur T. Winfree Prize was established in memory of Arthur T. Winfree’s contributions to mathematical biology. This prize is to honor a theoretician whose research has inspired significant new biology. The Winfree Prize consists of a cash prize of $500 and a certificate given to the recipient. The winner is expected to give a talk at the Annual Meeting of the Society for Mathematical Biology (Montreal 2019).
Russ Hobbie and I discuss the glucose-insulin negative feedback loop in Chapter 10 of Intermediate Physics for Medicine and Biology. I’ve written previously in this blog about Winfree.

Read how Sherman explains his research in lay language on a NIDDK website.
Insulin is a hormone that allows the body to use carbohydrates for quick energy. This spares fat for long-term energy storage and protein for building muscle and regulating cellular processes. Without sufficient insulin many tissues, such as muscle, cannot use glucose, the product of digestion of carbohydrates, as a fuel. This leads to diabetes, a rise in blood sugar that damages organs. It also leads to heart disease, kidney failure, blindness, and finally, premature death. We use mathematics to study how the beta cells of the pancreas know how much glucose is available and how much insulin to secrete, as well as how failure of various components of insulin secretion contributes to the development of diabetes.
When I was at NIH, Sherman worked with John Rinzel studying bursting. Here’s a page from my research notebook, showing my notes from a talk that Artie (as we called him then) gave thirty years ago. A sketch of a bursting pancreatic beta cell is in the bottom right corner.

A photo of my notes from my NIH Research Notebook 1, March 30, 1989, taken during a talk by Arthur Sherman.
From my NIH Research Notebook 1, March 30, 1989.
I recommend the video of a talk by Sherman that you can view at https://video.mbi.ohio-state.edu/video/player/?id=338. His abstract says
I will trace the history of models for bursting, concentrating on square-wave bursters descended from the Chay-Keizer model for pancreatic beta cells. The model was originally developed on a biophysical and intuitive basis but was put into a mathematical context by John Rinzel’s fast-slow analysis. Rinzel also began the process of classifying bursting oscillations based on the bifurcations undergone by the fast subsystem, which led to important mathematical generalization by others. Further mathematical work, notably by Terman, Mosekilde and others, focused rather on bifurcations of the full bursting system, which showed a fundamental role for chaos in mediating transitions between bursting and spiking and between bursts with different numbers of spikes. The development of mathematical theory was in turn both a blessing and a curse for those interested in modeling the biological phenomena—having a template of what to expect made it easy to construct a plethora of models that were superficially different but mathematically redundant. This may also have steered modelers away from alternative ways of achieving bursting, but instructive examples exist in which unbiased adherence to the data led to discovery of new bursting patterns. Some of these had been anticipated by the general theory but not previously instantiated by Hodgkin-Huxley-based examples. A final level of generalization has been the addition of multiple slow variables. While often mathematically reducible to models with a one-variable slow subsystem, such models also exhibit novel resetting properties and enhanced dynamic range. Analysis of the dynamics of such models remains a current challenge for mathematicians.
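To see what fast-slow bursting looks like, here is a minimal sketch, not Sherman’s beta-cell model but the generic Hindmarsh-Rose burster with standard textbook parameter values: a slow variable z sweeps the fast spiking subsystem into and out of its oscillatory regime, producing clusters of spikes separated by quiet intervals.

```python
def hindmarsh_rose(I=2.0, r=0.001, t_total=2000.0, dt=0.02):
    """Integrate the Hindmarsh-Rose neuron model with classical
    fourth-order Runge-Kutta. Returns lists of times and x values."""
    s, x_rest = 4.0, -1.6

    def deriv(u):
        x, y, z = u
        return (y - x**3 + 3*x**2 + I - z,   # fast membrane variable
                1 - 5*x**2 - y,              # fast recovery variable
                r * (s * (x - x_rest) - z))  # slow adaptation variable

    u = (-1.6, 0.0, 2.0)
    times, xs = [], []
    for i in range(int(t_total / dt)):
        times.append(i * dt)
        xs.append(u[0])
        k1 = deriv(u)
        k2 = deriv(tuple(u[j] + 0.5*dt*k1[j] for j in range(3)))
        k3 = deriv(tuple(u[j] + 0.5*dt*k2[j] for j in range(3)))
        k4 = deriv(tuple(u[j] + dt*k3[j] for j in range(3)))
        u = tuple(u[j] + dt/6*(k1[j] + 2*k2[j] + 2*k3[j] + k4[j])
                  for j in range(3))
    return times, xs

times, xs = hindmarsh_rose()
# spike times: upward crossings of x = 1, skipping the initial transient
spikes = [times[i] for i in range(1, len(xs))
          if xs[i-1] < 1.0 <= xs[i] and times[i] > 200]
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(len(spikes))
```

The interspike intervals come in two populations, short ones within a burst and long ones between bursts, which is the signature of the square-wave bursting that Rinzel’s fast-slow analysis explains.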
Congratulations to Arthur Sherman, for this well-deserved honor.

Arthur Sherman giving a talk at the  Colorado School of Mines, October 2017.

https://www.youtube.com/watch?v=kcfHLYxsrYg

Friday, August 2, 2019

Can Magnetic Resonance Imaging Detect Electrical Activity in Your Brain?

Can magnetic resonance imaging detect electrical activity in your brain? If so, it would be a breakthrough in neural recording, providing better spatial resolution than electroencephalography or magnetoencephalography. Functional magnetic resonance imaging (fMRI) is already used to detect brain activity, but it records changes in blood flow (BOLD, or blood-oxygen-level-dependent, imaging), which is an indirect measure of electrical signaling. MRI ought to be able to detect brain function directly; bioelectric currents produce their own biomagnetic fields that should affect a magnetic resonance image. Russ Hobbie and I discuss this possibility in Section 18.12 of Intermediate Physics for Medicine and Biology.

The magnetic field produced in the brain is tiny; a nanotesla or less. In an article I wrote with my friend Ranjith Wijesinghe of Ball State University and his students (Medical and Biological Engineering and Computing, Volume 50, Pages 651‐657, 2012), we concluded
MRI measurements of neural currents in dendrites [of neurons] may be barely detectable using current technology in extreme cases such as seizures, but the chance of detecting normal brain function is very small. Nevertheless, MRI researchers continue to develop clever new imaging methods, using either sophisticated pulse sequences or data processing. Hopefully, this paper will outline the challenges that must be overcome in order to image dendritic activity using MRI.
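A back-of-the-envelope Biot-Savart estimate shows why the field is so small. Treating the net neural current as a long straight wire is a crude assumption, and the microamp current and few-millimeter distance below are rough guesses, but the order of magnitude is instructive.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)

def line_current_field(current, distance):
    """Magnetic field of a long straight current (Ampere's law)."""
    return MU0 * current / (2 * math.pi * distance)

# rough guesses: ~1 microamp of net intracellular current, a few mm away
b = line_current_field(current=1e-6, distance=3e-3)
print(b)  # ~6.7e-11 T, well under a nanotesla
```

Even this generous estimate gives less than a tenth of a nanotesla, consistent with the paper’s conclusion that normal brain activity sits at the very edge of detectability.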
Toward Direct MRI of Neuro-Electro-Magnetic Oscillations in the Human Brain, Truong et al., Magn. Reson. Med. 81:3462-3475, 2019, superimposed on Intermediate Physics for Medicine and Biology.
Truong et al. (2019) “Toward Direct
MRI of Neuro-Electro-Magnetic
Oscillations in the Human Brain,”
Magn. Reson. Med.
81:3462-3475.
Since we published those words seven years ago, has anyone developed a clever pulse sequence or a fancy data processing method that allows imaging of biomagnetic fields in the brain? Yes! Or, at least, maybe. Researchers in Allen Song’s laboratory published a paper titled “Toward Direct MRI of Neuro-Electro-Magnetic Oscillations in the Human Brain” in the June 2019 issue of Magnetic Resonance in Medicine. I reproduce the abstract below.
Purpose: Neuroimaging techniques are widely used to investigate the function of the human brain, but none are currently able to accurately localize neuronal activity with both high spatial and temporal specificity. Here, a new in vivo MRI acquisition and analysis technique based on the spin-lock mechanism is developed to noninvasively image local magnetic field oscillations resulting from neuroelectric activity in specifiable frequency bands.

Methods: Simulations, phantom experiments, and in vivo experiments using an eyes-open/eyes-closed task in 8 healthy volunteers were performed to demonstrate its sensitivity and specificity for detecting oscillatory neuroelectric activity in the alpha‐band (8‐12 Hz). A comprehensive postprocessing procedure was designed to enhance the neuroelectric signal, while minimizing any residual hemodynamic and physiological confounds.

Results: The phantom results show that this technique can detect 0.06-nT magnetic field oscillations, while the in vivo results demonstrate that it can image task-based modulations of neuroelectric oscillatory activity in the alpha-band. Multiple control experiments and a comparison with conventional BOLD functional MRI suggest that the activation was likely not due to any residual hemodynamic or physiological confounds.

Conclusion: These initial results provide evidence suggesting that this new technique has the potential to noninvasively and directly image neuroelectric activity in the human brain in vivo. With further development, this approach offers the promise of being able to do so with a combination of spatial and temporal specificity that is beyond what can be achieved with existing neuroimaging methods, which can advance our ability to study the functions and dysfunctions of the human brain.
I’ve been skeptical of work by Song and his team in the past; see for instance my article with Peter Basser (Magn. Reson. Med. 61:59-64, 2009) critiquing their “Lorentz Effect Imaging” idea. However, I’m optimistic about this recent work. I’m not expert enough in MRI to judge all the technical details—and there are lots of technical details—but the work appears sound.

The key to their method is “spin-lock,” which I discussed before in this blog. To understand spin-lock, let’s compare it with a typical MRI π/2 pulse (see Section 18.5 in IPMB). Initially, the spins are in equilibrium along a static magnetic field Bz (blue in the left panel of the figure below). To be useful for imaging, you must rotate the spins into the x-y plane so they precess about the z axis at the Larmor frequency (typically a radio frequency of many megahertz, with the exact frequency depending on Bz). If you apply an oscillating magnetic field Bx (red) perpendicular to Bz, with a frequency equal to the Larmor frequency and for just the right duration, you will rotate all the spins into the x-y plane (green). The behavior is simpler if instead of viewing it from the static laboratory frame of reference (x, y, z) we view it from a frame of reference rotating at the Larmor frequency (x', y', z'). At the end of the π/2 pulse the spins point in the y' direction and appear static (they will eventually relax back to equilibrium, but we ignore that slow process in this discussion). If you’re having trouble visualizing the rotating frame, see Fig. 18.7 in IPMB; the z and z' axes are the same and it’s the x-y plane that’s rotating.
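A quick calculation shows the numbers involved; the 10 microtesla B1 below is an arbitrary illustrative value, not one taken from the paper.

```python
import math

GAMMA = 2 * math.pi * 42.58e6  # proton gyromagnetic ratio (rad/s per tesla)

def larmor_mhz(bz):
    """Larmor frequency (MHz) for protons in a static field bz (tesla)."""
    return GAMMA * bz / (2 * math.pi) / 1e6

def half_pi_pulse_seconds(b1):
    """Duration of a pi/2 pulse: time to rotate the spins 90 degrees
    about the rotating-frame field B1."""
    return (math.pi / 2) / (GAMMA * b1)

print(larmor_mhz(3.0))               # ~127.7 MHz in a 3 T scanner
print(half_pi_pulse_seconds(10e-6))  # ~0.6 ms for a 10-microtesla B1
```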

After the π/2 pulse, you would normally continue your pulse sequence by measuring the free induction decay or creating an echo. In a spin-lock experiment, however, after the π/2 pulse ends you apply a circularly polarized magnetic field By' (blue in the right panel) at the Larmor frequency. In the rotating frame, By' appears static along the y' direction. Now you have a situation in the rotating frame that’s similar to the situation you had originally in the laboratory frame: a static magnetic field and spins aligned with it, seemingly in equilibrium. What magnetic field during the spin-lock plays the role of the radiofrequency field during the π/2 pulse? You need a magnetic field in the z' direction that oscillates at the spin-lock frequency: the frequency at which the spins precess about By'. An oscillating neural magnetic field Bneural (red) would do the job. It must oscillate at the spin-lock frequency, which depends on the strength of the circularly polarized magnetic field By'. Song and his team adjusted the magnitude of By' so the spin-lock frequency matched the frequency of alpha waves in the brain (about 10 Hz). This causes the spins to rotate from y' to z' (green). Once you accumulate spins in the z' direction, turn off the spin lock and you are back to where you started (spins in the z direction in a static field Bz), except that the number of these spins depends on the strength of the neural magnetic field and the duration of the spin-lock. A neural magnetic field not at resonance—that is, at any frequency other than the spin-lock frequency—will not rotate spins to the z' axis. You now have an exquisitely sensitive method of detecting an oscillating biomagnetic field, analogous to using a lock-in amplifier to isolate a particular frequency in a signal.
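The resonance at the heart of this trick can be demonstrated by integrating the Bloch equations (without relaxation) in the rotating frame. In the sketch below, a 1 nT “neural” field oscillating along z' tips the magnetization away from the spin-lock field only when its frequency matches the 10 Hz spin-lock frequency; the field values and durations are illustrative choices of mine, not Song’s actual parameters.

```python
import math

GAMMA = 2 * math.pi * 42.58e6  # proton gyromagnetic ratio (rad/s per tesla)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def simulate_spin_lock(f_neural, b0=1e-9, f_sl=10.0, t_total=2.0, dt=1e-4):
    """Bloch-equation sketch in the rotating frame, relaxation ignored.

    The spin-lock field B1 lies along y'; a hypothetical neural field
    b0*cos(2*pi*f_neural*t) lies along z'. B1 is sized so spins precess
    about it at f_sl. Returns the largest |Mz'| reached during the run."""
    b1 = 2 * math.pi * f_sl / GAMMA   # gamma*B1/(2*pi) = f_sl

    def dm(m_, t_):
        b = (0.0, b1, b0 * math.cos(2 * math.pi * f_neural * t_))
        c = cross(m_, b)
        return (GAMMA * c[0], GAMMA * c[1], GAMMA * c[2])

    m = (0.0, 1.0, 0.0)               # magnetization along y' after the pi/2 pulse
    max_mz = 0.0
    for i in range(int(t_total / dt)):
        t = i * dt
        # classical fourth-order Runge-Kutta step
        k1 = dm(m, t)
        k2 = dm(tuple(m[j] + 0.5*dt*k1[j] for j in range(3)), t + 0.5*dt)
        k3 = dm(tuple(m[j] + 0.5*dt*k2[j] for j in range(3)), t + 0.5*dt)
        k4 = dm(tuple(m[j] + dt*k3[j] for j in range(3)), t + dt)
        m = tuple(m[j] + dt/6*(k1[j] + 2*k2[j] + 2*k3[j] + k4[j])
                  for j in range(3))
        max_mz = max(max_mz, abs(m[2]))
    return max_mz

on_resonance = simulate_spin_lock(f_neural=10.0)   # matches the spin-lock frequency
off_resonance = simulate_spin_lock(f_neural=25.0)  # any other frequency
print(on_resonance, off_resonance)
```

The on-resonance field steadily accumulates magnetization along z' while the off-resonance field accomplishes almost nothing, which is exactly the lock-in-amplifier behavior described above.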

Comparison of a π/2 pulse and spin-lock during magnetic resonance imaging.
Comparison of a π/2 pulse and spin-lock.
There’s a lot more to Song’s method than I’ve described, including complicated techniques to eliminate any contaminating BOLD signal. But it seems to work.

Will this method revolutionize neural imaging? Time will tell. I worry about how well it can detect neural fields that are not oscillating at a single frequency. Nevertheless, Song’s experiment—together with the work from Okada’s lab that I discussed three years ago—may mean we’re on the verge of something big: using MRI to directly measure neural magnetic fields.

In the conclusion of their article, Song and his coworkers strike an appropriate balance between acknowledging the limitations of their method and speculating about its potential. Let’s hope their final sentence comes true.
Our initial results provide evidence suggesting that MRI can be used to noninvasively and directly image neuroelectric oscillations in the human brain in vivo. This new technique should not be viewed as being aimed at replacing existing neuroimaging techniques, which can address a wide range of questions, but it is expected to be able to image functionally important neural activity in ways that no other technique can currently achieve. Specifically, it is designed to directly image neuroelectric activity, and in particular oscillatory neuroelectric activity, which BOLD fMRI cannot directly sample, because it is intrinsically limited by the temporal smear and temporal delay of the hemodynamic response. Furthermore, it has the potential to do so with a high and unambiguous spatial specificity, which EEG/MEG cannot achieve, because of the limitations of the inverse problem. We expect that our technique can be extended and optimized to directly image a broad range of intrinsic and driven neuronal oscillations, thereby advancing our ability to study neuronal processes, both functional and dysfunctional, in the human brain.