Friday, March 10, 2017

My Honors College Class: The Making of the Atomic Bomb

The Making of the Atomic Bomb, by Richard Rhodes.
This semester I am teaching a class in Oakland University’s Honors College called “The Making of the Atomic Bomb,” based on Richard Rhodes’s book by the same name. The class is a mixture of nuclear physics, a history of the Manhattan Project, and a discussion about World War II (today we discuss Pearl Harbor). I became interested in this topic from the writings of Cameron Reed of Alma College here in Michigan.

The Honors College students are outstanding, but they are from disciplines throughout the university and do not necessarily have strong math and science backgrounds. Therefore the mathematics in this class is minimal, but nevertheless we do two or three quantitative examples. For instance, Chadwick’s discovery of the neutron in 1932 was based on conclusions drawn from particle collisions, relying primarily on conservation of energy and momentum. When we analyze Chadwick’s experiment in my Honors College class, we consider the head-on collision of two particles of mass M1 and M2. Before the collision, the incoming particle M1 has kinetic energy T and the target particle M2 is at rest. After the collision, M1 has kinetic energy T1 and M2 has kinetic energy T2.

Intermediate Physics for Medicine and Biology examines an identical situation in Section 15.11 on Charged-Particle Stopping Power.
The maximum possible energy transfer Wmax can be calculated using conservation of energy and momentum. For a collision of a projectile of mass M1 and kinetic energy T with a target particle of mass M2 which is initially at rest, a nonrelativistic calculation gives
Wmax = 4M1M2T/(M1 + M2)².
One important skill I teach my Honors College students is how to extract a physical story from a mathematical expression. One way to begin is to introduce some dimensionless parameters. Let t be the ratio of kinetic energy picked up by M2 after the collision to the incoming kinetic energy T, so t = T2/T or, using the notation in IPMB, t = Wmax/T (the subscript “max” arises because this maximum value of T2 corresponds to a head-on collision; a glancing blow will result in a smaller T2). Also, let m be the ratio of M1 to M2, so m = M1/M2. A little algebra results in the simpler-looking equation
t = 4m/(1 + m)².
The goal is to unmask the physical behavior hidden in this equation. The best way to proceed is to examine limiting cases. There are three that are of particular interest.

m much less than 1. When m is small (think of a fast-moving proton colliding with a stationary lead nucleus) the denominator is approximately one, so t = 4m. Because m is small, so is t. This means the proton merely bounces back elastically as if striking a brick wall. Little energy is transferred to the lead nucleus.
An illustration showing how a light mass behaves when it hits a stationary heavy mass.
m much greater than 1. When m is large (think of a fast-moving lead nucleus smashing into a stationary proton) the denominator is approximately m², so t = 4/m. Because m is large, t is small. This means the lead nucleus continues on as if the proton were not even there, with little loss of energy. The proton flies off at a high speed, but because of its small mass it carries off negligible energy.
An illustration showing how a heavy mass behaves when it hits a stationary light mass.
m equal to 1. When m is one (think of a neutron colliding with a proton, which was the situation examined by Chadwick), the denominator becomes 4, and t = 1. All of the energy of the neutron is transferred to the proton. The neutron stops and the proton flies off at the same speed the neutron flew in.
An illustration showing how a mass behaves when it hits a stationary mass.
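If you like to check such limits with numbers, here is a minimal sketch (in Python) that simply evaluates t = 4m/(1 + m)² for the three cases above; the mass ratios are rough values, with lead standing in for a heavy nucleus.

def energy_fraction(m):
    """Fraction of the incident kinetic energy transferred, t = 4m/(1 + m)^2."""
    return 4 * m / (1 + m) ** 2

cases = {
    "proton hitting lead (m ~ 1/207)": 1 / 207,
    "lead hitting proton (m ~ 207)": 207,
    "neutron hitting proton (m ~ 1)": 1.0,
}

for label, m in cases.items():
    print(f"{label}: t = {energy_fraction(m):.3f}")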
A mantra I emphasize to my students is that equations are not just things you put numbers into to get other numbers. Equations tell a physical story. Being able to extract this story from an equation is one of the most important abilities a student must learn. Never pass up a chance to reinforce this skill.

Friday, March 3, 2017

Glucose, Mannitol, Sucrose, and Raffinose

The structure of glucose.
You would think by now I would know everything in Intermediate Physics for Medicine and Biology; after all, I’m one of the authors. So when thumbing through the book the other day (doesn’t everyone thumb through IPMB when they have a spare moment?) I came across Figure 4.11, showing a log-log plot of the diffusion constant as a function of molecular radius. Four data points stand out—glucose, mannitol, sucrose, and raffinose—because they are plotted as open rather than solid circles. This figure was drawn originally by Russ Hobbie and has appeared in every edition of IPMB. I got to wondering “why did Russ choose to plot those four molecules out of the thousands available?” And then, more specifically, I found myself asking “just what is raffinose anyways?”

Figure 4.11 of Intermediate Physics for Medicine and Biology, showing the diffusion constant of a molecule as a function of the size of the molecule.

To figure all this out, I grabbed the textbook I read in graduate school while auditing the biochemistry class taken by Vanderbilt medical students (Biochemistry, by the late Geoffrey Zubay). These molecules are carbohydrates or, more simply, sugars. Glucose is the canonical example; this six-carbon molecule C6H12O6 is “the single most important substrate for energy metabolism” and in humans it is “the single most important sugar in the blood”. It usually exists in a ring conformation. It is a monosaccharide because it consists of a single ring. Fructose and galactose are other monosaccharides; all three have the same formula, C6H12O6, but the arrangement of the atoms is slightly different.

Mannitol differs from glucose by having two extra hydrogen atoms: C6H14O6. Technically it’s a sugar alcohol rather than a sugar. You’d think it would act similarly to glucose, but it doesn’t. Mannitol is relatively inert in humans. It doesn’t cross the blood-brain barrier (I discussed the implications of this previously in this blog) and it is not reabsorbed by the kidney like glucose is, so it acts as an osmotic diuretic. In Fig. 4.11, the mannitol and glucose data almost overlap, and it is hard to tell which data point is which. According to a paper by Bashkatov et al. (2003), glucose has a larger diffusion coefficient than mannitol, so glucose must be the data point above and to the left, and mannitol below and to the right.
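The trend in Fig. 4.11 is roughly what the Stokes-Einstein relation, D = kBT/(6πηa), predicts for small molecules in water. Here is a rough estimate for a glucose-sized molecule (the radius below is an assumed, approximate value, so treat the result as an order-of-magnitude check).

import math

k_B = 1.38e-23   # Boltzmann constant, J/K
T   = 293.0      # room temperature, K
eta = 1.0e-3     # viscosity of water, Pa s
a   = 0.36e-9    # assumed hydrodynamic radius for glucose, m

D = k_B * T / (6 * math.pi * eta * a)   # Stokes-Einstein relation
print(f"D ~ {D:.1e} m^2/s")             # roughly 6e-10 m^2/s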

Sucrose is a disaccharide, which means it is two monosaccharides bound together through a “glycosidic linkage”. It’s common table sugar, and consists of a molecule of glucose bound to a molecule of fructose. Russ probably chose to plot sucrose as a typical disaccharide. Two other disaccharides he could have chosen are lactose (glucose + galactose) and maltose (glucose + glucose).

Raffinose is a trisaccharide, consisting of galactose + glucose + fructose. Therefore, Russ’s choice of plotting glucose, sucrose, and raffinose makes sense: the most important monosaccharide, disaccharide, and trisaccharide. A fun fact about raffinose is that the human digestive tract does not have the enzyme needed to digest it. However, certain gas-producing bacteria in our gut can digest it, resulting in flatulence. You probably won’t be surprised to learn that beans often contain a lot of raffinose.

So, Russ is a clever fellow. He hid a short review of carbohydrate biochemistry in Fig. 4.11. Who knew?

Friday, February 24, 2017

Benefits and Barriers of Accommodating Intraocular Lenses

In Chapter 14 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss vision. In particular, we analyze the refraction of light by the lens of the eye, and examine different disorders such as hyperopia and myopia. We then write
This ability of the lens to change shape and provide additional converging power is called accommodation…. As we age, the accommodation of the eye decreases... A normal viewing distance of 25 cm or less requires 4 diopters or more of accommodation... This limit is usually reached in the early 40s. To make up for the lack of accommodation, one can place a converging lens in front of the eye when viewing nearby objects (reading glasses).
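A quick check of the numbers in that passage: an eye corrected for distant objects needs an extra 1/d diopters of converging power to focus at a distance d, so reading at 25 cm takes 4 diopters. A two-line sketch (in Python):

for d in (1.00, 0.50, 0.25):   # viewing distances in meters
    print(f"d = {d:.2f} m requires {1/d:.0f} diopters of accommodation")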
When a patient has a cataract, their lens becomes cloudy. A common surgical procedure is to remove the opaque lens and replace it with an artificial intraocular lens (IOL). A conventional IOL is designed to supply the correct power to provide clear distance vision, but it cannot accommodate. Reading glasses provide one option for close vision, but many patients find them to be an inconvenient nuisance.

Researchers are now racing to create accommodating IOLs. A recent review by Jay Pepose, Joshua Burke, and Mujtaba Qazi discusses the “Benefits and Barriers of Accommodating Intraocular Lenses” (Current Opinion in Ophthalmology, Volume 28, Pages 3–8, 2017).
Presbyopia [the loss of accommodation] and cataract development are changes that ubiquitously affect the aging population. Considerable effort has been made in the development of intraocular lenses (IOLs) that allow correction of presbyopia postoperatively. The purpose of this review is to examine the benefits and barriers of accommodating IOLs, with a focus on emerging technologies.
Apparently the current accommodating intraocular lenses don’t function by changing their focal length, but rather by being pushed forward when the eye muscles responsible for accommodation contract. They only provide about 1 diopter of accommodation, which is not enough to avoid reading glasses. The review concludes
Such limitations [of the presently available accommodating IOLs] may be circumvented in the future by accommodative design strategies that rely more on shape-related changes in the surfaces of the IOLs or in refractive index than by forward translation alone. Fibrosis and contraction of the capsular bag, which can alter the position of the IOL optic or the performance of an accommodating IOL represent other challenges, and at least one accommodating IOL … has been designed for implantation in the ciliary sulcus. Approval of accommodating IOLs capable of delivering three or more diopters of accommodation would allow a full range of intermediate and near vision without the compromise of photic phenomenon or loss of contrast inherent to other optical strategies, and perhaps also allow refractive targeting that could minimize hyperopic surprises by taking advantage of this expanded amplitude of accommodation.
Some “accommodating” IOLs are multifocal, providing two focal lengths and therefore two images simultaneously, one for distance vision and one for reading. The brain then sorts out the mess. Apparently this is not as difficult as it sounds.

I predict that accommodating intraocular lenses will soon become very sophisticated. Cataract surgery is performed on millions of people each year; it is common in the elderly population, which is growing dramatically as baby boomers age; in principle the problem is not complex: you just need to make a lens that can adjust its focal length by about ten percent; compared to other medical devices like pacemakers and defibrillators, accommodating IOLs should be cheap; and new nanotechnologies plus knowledge gained from the miniaturization of other medical devices may pave the way to rapid advances. Accommodating intraocular lenses may soon become an example of how to successfully apply physics to solve a problem in medicine.

Friday, February 17, 2017

Sir Peter Mansfield (1933-2017)

MRI pioneer Peter Mansfield died last week. Russ Hobbie and I mention Mansfield in Chapter 18 of Intermediate Physics for Medicine and Biology:
Many more techniques are available for imaging with magnetic resonance than for x-ray computed tomography. They are described by Brown et al. (1994), by Cho et al. (1993), by Vlaardingerbroek and den Boer (2004), and by Liang and Lauterbur (2000). One of these authors, Paul C. Lauterbur, shared with Sir Peter Mansfield the 2003 Nobel Prize in physiology or medicine for the invention of magnetic resonance imaging.
Mansfield made many contributions to the development of MRI, including the invention of echo-planar imaging. Russ and I write
Echo-planar imaging (EPI) eliminates the π pulses [normally used to rotate the spins in the x-y plane to form a spin echo]. It requires a magnet with a very uniform magnetic field, so that T2 [the transverse relaxation time, which is determined in part by dephasing of the spins in the x-y plane] (in the absence of a gradient) is only slightly greater than T2* [the experimentally observed transverse relaxation time]. The gradient fields are larger, and the gradient pulse durations shorter, than in conventional imaging. The goal is to complete all the k-space [all the points kx-ky in the spatial frequency domain] measurements in a time comparable to T2*. In EPI the echoes are not created using π pulses. Instead, they are created by dephasing the spins at different positions along the x axis using a Gx gradient, and then reversing that gradient to rephase the spins, as shown in Fig. 18.32.
An MRI pulse sequence for echo planar imaging, from Intermediate Physics for Medicine and Biology.
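To get a feel for how the alternating Gx gradient traces out k-space line by line, here is a minimal sketch (in Python) that computes k(t) = (γ/2π)∫G dt for an EPI-style readout. The gradient amplitudes and timings are made-up illustrative values, and the prephasing lobes that would center the trajectory on the k-space origin are omitted.

import numpy as np

gamma_bar = 42.58e6     # proton gyromagnetic ratio / 2 pi, Hz/T
dt        = 1e-6        # time step, s
G_read    = 20e-3       # readout (Gx) gradient amplitude, T/m (assumed)
t_read    = 0.32e-3     # duration of one readout lobe, s (assumed)
G_blip    = 20e-3       # phase-encode (Gy) blip amplitude, T/m (assumed)
t_blip    = 16e-6       # blip duration, s (assumed)
n_lines   = 8           # number of k-space lines to acquire

n_read, n_blip = int(t_read / dt), int(t_blip / dt)
Gx, Gy = [], []
for line in range(n_lines):
    sign = 1 if line % 2 == 0 else -1     # alternate the sign of Gx each line
    Gx += [sign * G_read] * n_read
    Gy += [0.0] * n_read
    if line < n_lines - 1:                # short Gy blip between readout lobes
        Gx += [0.0] * n_blip
        Gy += [G_blip] * n_blip

kx = gamma_bar * np.cumsum(Gx) * dt       # k(t) = (gamma/2 pi) times the integral of G dt
ky = gamma_bar * np.cumsum(Gy) * dt
print(f"kx sweeps between {kx.min():.0f} and {kx.max():.0f} cycles/m")
print(f"ky steps up to {ky.max():.0f} cycles/m over {n_lines} lines")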

Mansfield tells about his first presentation on echo-planar imaging in his autobiography, The Long Road to Stockholm.
It was during the course of 1976 that Raymond Andrew convened a meeting in Nottingham of interested people in imaging…Most attendees brought us up to date with their images and gave us short talks on the goals that they were pursuing. Although my group had made considerable headway in a whole range of topics, I chose to speak about an entirely new imaging method that I had worked out theoretically but for which I had really no experimental results. The technique was called echo planar imaging (EPI), a condensation of planar imaging using spin echoes. I spoke for something like half an hour, talking in great detail, and at the end of the talk the audience seemed to be left in stunned silence. There were no questions, there was no discussion at all, and it was almost as though I had never spoken. In fact I had given a detailed talk about how one could produce very rapid images in a typically one shot process lasting, conservatively, for something like 40 or 50 milliseconds.
You can learn more about Mansfield in obituaries in the New York Times, in The Scientist, and from the BBC. Also, the Nobel Prize website has much information including a biography and his Nobel Prize address. Below, watch and listen to Mansfield talk about MRI.




Friday, February 10, 2017

Good, Fast, Cheap: Pick Any Two

When I worked at the National Institutes of Health, one of my coworkers had a sign in his office that read “Good, Fast, Cheap: Pick Any Two.” A recent video about “Building tomorrow's MRI--faster, smaller, and cheaper” reminded me of that saying. The video is part of a series called Science Happens! by Carl Zimmer.


Russ Hobbie and I describe magnetic resonance imaging in Chapter 18 of Intermediate Physics for Medicine and Biology. However, we don’t discuss the possibilities related to low-field MRI. The Science Happens! website says
Matthew Rosen and his colleagues at the Martinos Center for Biomedical Imaging in Boston want to liberate the MRI. They’re hacking a new kind of scanner that’s fast, small, and cheap. Using clever algorithms, they can use a weak magnetic field to get good images of our brains and other organs. Someday, people may not have to go to hospital for an MRI. The scanners may show up in sports arenas, battlefields, and even the backs of ambulances.
A longer, more technical video of Rosen describing his work is given below.


For more details, see Rosen’s open access article “Low-Cost High-Performance MRI” (Scientific Reports, 5:15177, 2015).
Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5–3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (less than 10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (less than $50,000) and robust portable devices.
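For a sense of scale, the proton Larmor frequency f = (γ/2π)B falls from about 128 MHz at 3 T to a few hundred kilohertz at 6.5 mT, which presumably is part of what makes the magnet and detection electronics so much simpler. A quick check (field values taken from the abstract):

gamma_bar = 42.58e6                  # proton gyromagnetic ratio / 2 pi, Hz/T
for B in (6.5e-3, 1.5, 3.0):         # tesla
    print(f"B = {B:6.4f} T  ->  Larmor frequency = {gamma_bar * B / 1e6:8.3f} MHz")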
$50,000 is expensive by Manu Prakash's standards, but for an MRI device $50k is darn cheap! Recalling my friend’s motto, I think that Rosen has picked Fast and Cheap, but he’s gotten Pretty Good too, which is not a bad trade-off (all of engineering is trade-offs). If you want Super Good, spend the million bucks.

Finally, Rosen isn’t the only one interested in reinventing magnetic resonance imaging. Michael Garwood at Russ’s own University of Minnesota is also working on smaller, lighter, cheaper MRI.

Let the race begin. Humanity will be the winner.

Enjoy!

Friday, February 3, 2017

Alan Perelson wins the 2017 Max Delbruck Prize in Biological Physics

Alan Perelson, of Los Alamos National Laboratory, has been named the winner of the 2017 Max Delbruck Prize in Biological Physics by the American Physical Society. His award was “for profound contributions to theoretical immunology, which bring insight and save lives.”

One skill Russ Hobbie and I try to develop in students using Intermediate Physics for Medicine and Biology is the ability to translate words into mathematics. Below I present a new homework problem based on one of Perelson’s most highly cited papers (Perelson et al., 1996, Science, 271:1582–1586), which provides practice in this important technique. This exercise asks the student to make a mathematical model of the immune system that explains how T-cells—a type of white blood cell—respond to HIV infection.
Section 10.8

Problem 37 1/2. A model of HIV infection includes the concentration of uninfected T-cells, T, the concentration of infected T-cells, T*, and the concentration of virions, V.

(a) Write a pair of coupled differential equations for T* and V based on the following assumptions:
  • If no virions are present, the immune system removes infected T-cells at a rate δ.
  • If no infected T-cells are present, the immune system removes virions at a rate c.
  • Infected T-cells are produced at a rate proportional to the product of the concentrations of uninfected T-cells and virions; let the constant of proportionality be k.
  • Virions are produced at a rate proportional to the concentration of infected T-cells, with a constant of proportionality Nδ, where N is the number of virions produced per infected T-cell.
(b) In steady state, determine the concentration of uninfected T-cells.
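For readers who want to check part (b) numerically, here is a minimal sketch (in Python, simple Euler integration) of the two equations from part (a), with the uninfected concentration T held fixed. All parameter values below are invented for illustration and are not taken from Perelson et al.; the point is only that the virion population grows when T exceeds c/(Nk) and decays when it falls below.

k     = 2.4e-5   # infection rate constant (assumed)
delta = 0.5      # removal rate of infected T-cells, 1/day (assumed)
c     = 3.0      # clearance rate of virions, 1/day (assumed)
N     = 480      # virions produced per infected T-cell (assumed)

def final_virion_count(T, days=10.0, dt=1e-3):
    """Euler-integrate dT*/dt = k V T - delta T*, dV/dt = N delta T* - c V."""
    Tstar, V = 0.0, 10.0
    for _ in range(int(days / dt)):
        dTstar = k * V * T - delta * Tstar
        dV     = N * delta * Tstar - c * V
        Tstar, V = Tstar + dt * dTstar, V + dt * dV
    return V

T_ss = c / (N * k)   # part (b): steady state requires T = c/(N k)
print(f"steady-state uninfected T-cell concentration, c/(Nk): {T_ss:.1f}")
print(f"virions after 10 days with T just below this value: {final_virion_count(0.9 * T_ss):.1f}")
print(f"virions after 10 days with T just above this value: {final_virion_count(1.1 * T_ss):.1f}")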
One of Perelson’s coauthors on the 1996 paper was David Ho. Yes, the David Ho who was Time Magazine’s Man of the Year in 1996.

For those who prefer video, watch Perelson discuss immunology for physicists.


Friday, January 27, 2017

The Genetic Effects of Radiation

The Genetic Effects of Radiation, by Isaac Asimov.
My local public library had its quarterly used book sale last weekend, and as usual I went to search for Isaac Asimov books. I collect Asimov’s books in part to pay homage to the huge influence he had on me as a teenager. He was the main reason I became a scientist. No luck this time; I came back from the sale empty-handed. It’s difficult for me to find Asimov books that I don’t have, because I have so many (most bought second-hand for a pittance). Nevertheless, he wrote over 500 books, and I own far fewer than that, so I always have a chance.

You would think that I would at least know about all his books, even if I don’t own a copy. Yet somehow, I was unaware of (or had forgotten about) his book The Genetic Effects of Radiation, although it is a topic closely related to Intermediate Physics for Medicine and Biology. If interested, you can download a pdf of the book free (and I think legally) here. Below I present an excerpt about natural background radiation.
Background Radiation

Ionizing radiation in low intensities is part of our natural environment. Such natural radiation is referred to as background radiation. Part of it arises from certain constituents of the soil. Atoms of the heavy metals, uranium and thorium, are constantly, though very slowly, breaking down and in the process giving off alpha rays, beta rays, and gamma rays. These elements, while not among the most common, are very widely spread; minerals containing small quantities of uranium and thorium are to be found nearly everywhere.

In addition, all the earth is bombarded with cosmic rays from outer space and with streams of high-energy particles from the sun.

Various units can be used to measure the intensity of this background radiation. The roentgen, abbreviated r, and named in honor of the discoverer of X rays, Wilhelm Roentgen, is a unit based on the number of ions produced by radiation. Rather more convenient is another unit that has come more recently into prominence. This is the rad (an abbreviation for “radiation absorbed dose”) that is a measure of the amount of energy delivered to the body upon the absorption of a particular dose of ionizing radiation. One rad is very nearly equal to one roentgen.

Since background radiation is undoubtedly one of the factors in producing spontaneous mutations, it is of interest to try to determine how much radiation a man or woman will have absorbed from the time he is first conceived to the time he conceives his own children. The average length of time between generations is taken to be about 30 years, so we can best express absorption of background radiation in units of rads per 30 years.

The intensity of background radiation varies from place to place on the earth for several reasons. Cosmic rays are deflected somewhat toward the magnetic poles by the earth’s magnetic field. They are also absorbed by the atmosphere to some extent. For this reason, people living in equatorial regions are less exposed to cosmic rays than those in polar regions; and those in the plains, with a greater thickness of atmosphere above them, are less exposed than those on high plateaus.

Then, too, radioactive minerals may be spread widely, but they are not spread evenly. Where they are concentrated to a greater extent than usual, background radiation is abnormally high.

Thus, an inhabitant of Harrisburg, Pennsylvania, may absorb 2.64 rads per 30 years, while one of Denver, Colorado, a mile high at the foot of the Rockies, may absorb 5.04 rads per 30 years. Greater extremes are encountered at such places as Kerala, India, where nearby soil, rich in thorium minerals, so increases the intensity of background radiation that as much as 84 rads may be absorbed in 30 years.

In addition to high-energy radiation from the outside, there are sources within the body itself. Some of the potassium and carbon atoms of our body are inevitably radioactive. As much as 0.5 rad per 30 years arises from this source.

Rads and roentgens are not completely satisfactory units in estimating the biological effects of radiation. Some types of radiation—those made up of comparatively large particles, for instance—are more effective in producing ions and bring about molecular changes with greater ease than do electromagnetic radiations delivering equal energy to the body. Thus if 1 rad of alpha particles is absorbed by the body, 10 to 20 times as much biological effect is produced as there would be in the absorption of 1 rad of X rays, gamma rays, or beta particles.

Sometimes, then, one speaks of the relative biological effectiveness (RBE) of radiation, or the roentgen equivalent, man (rem). A rad of X rays, gamma rays, or beta particles has a rem of 1, while a rad of alpha particles has a rem of 10 to 20.

If we allow for the effect of the larger particles (which are not very common under ordinary conditions) we can estimate that the gonads of the average human being receive a total dose of natural radiation of about 3 rems per 30 years. This is just about an irreducible minimum.
This is typical Asimov. Let me add a few observations.
  1. Asimov rarely wrote specifically about medical physics, although he wrote much about related topics. I think The Genetic Effects of Radiation is closer to IPMB than his other books.
  2. The Genetic Effects is over 50 years old; it is out of date. For example, it uses the archaic units of rad and rem instead of gray and sievert (100 rem = 1 Sv). Moreover, radon gas is now known to make the largest contribution to background radiation, but the word “radon” never appeared in The Genetic Effects. Yet, I was surprised how much has not changed. I think a reader of IPMB would still find much useful information in Asimov’s book.
  3. The text is aimed at a general audience, rather than an expert. The book is not a replacement for, say, Radiobiology for the Radiologist, or even IPMB. Yet, for a 16-year-old kid (as I was when devouring Asimov’s books about science), the level is just right.
  4. The discussion is not mathematical. At times Asimov writes about mathematical results, but he rarely presents equations. Certainly Asimov’s writing has vastly fewer equations than IPMB.
  5. Asimov doesn’t use a lot of figures. The Genetic Effects contains more pictures than most of his books.
  6. This excerpt illustrates the "clear, cool voice of Asimov." I admire the clarity of his writing. No one explains things better.
And now I have to return to my search for more Asimov books. I am particularly a fan of his science essay collections originally published in Fantasy and Science Fiction. I have most of them, but somehow missed the very first: Fact and Fancy (1962). Garage sale season will be here in a few months. Hope springs eternal.

Friday, January 20, 2017

Manu Prakash and the Paperfuge

Three years ago I wrote a blog post about a crazy Stanford engineer, Manu Prakash, who developed a paper origami microscope called “foldscope” costing less than a dollar. LESS THAN A DOLLAR!

Well, he’s done it again. Now his team has invented a hand-held, lightweight centrifuge called “paperfuge” costing under 20 cents. UNDER 20 CENTS!!!

I’m thinking of buying one if I can scrape up the cash. Buddy, can you spare a dime?

Excuse me if you have already heard about paperfuge on social media; it’s been popping up a lot on Facebook and Twitter. I hate to jump on a bandwagon, but this is so amazing I have to tell you about it.

The physics of the centrifuge is described in a series of homework problems in the first chapter of Intermediate Physics for Medicine and Biology. Please, don’t think that topics relegated to the end-of-the-chapter exercises are less important than subjects discussed in the text. Sometimes key issues such as the centrifuge lend themselves to the homework, and you learn more by actively doing the problems than by passively reading prose (even mine!).

But let me get back to Prakash. His team recently published a paper in Nature Biomedical Engineering titled “Hand-Powered Ultralow-Cost Paper Centrifuge” (read it online here). The abstract says:
In a global-health context, commercial centrifuges are expensive, bulky and electricity-powered, and thus constitute a critical bottleneck in the development of decentralized, battery-free point-of-care diagnostic devices. Here, we report an ultralow-cost (20 cents), lightweight (2 g), human-powered paper centrifuge (which we name “paperfuge”) designed on the basis of a theoretical model inspired by the fundamental mechanics of an ancient whirligig (or buzzer toy; 3,300 BC). The paperfuge achieves speeds of 125,000 r.p.m. (and equivalent centrifugal forces of 30,000 g), with theoretical limits predicting 1,000,000 r.p.m. We demonstrate that the paperfuge can separate pure plasma from whole blood in less than 1.5 min, and isolate malaria parasites in 15 min. We also show that paperfuge-like centrifugal microfluidic devices can be made of polydimethylsiloxane, plastic and 3D-printed polymeric materials. Ultracheap, power-free centrifuges should open up opportunities for point-of-care diagnostics in resource-poor settings and for applications in science education and field ecology.
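Out of curiosity, here is a quick check of those numbers using the familiar centripetal acceleration a = ω²r (the same physics as the IPMB centrifuge problems). The abstract doesn’t say at what radius the 30,000 g applies, so the sketch below simply solves for the radius implied if 125,000 rpm and 30,000 g refer to the same point; that pairing is my assumption.

import math

rpm    = 125_000                     # spin rate from the abstract
omega  = rpm * 2 * math.pi / 60      # angular speed, rad/s
g      = 9.81                        # m/s^2
target = 30_000 * g                  # quoted centrifugal acceleration, m/s^2

r = target / omega ** 2              # a = omega^2 r, solved for r
print(f"omega = {omega:.0f} rad/s, implied radius = {r * 1000:.1f} mm")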
I’m telling you, the paperfuge is huge. My gosh, Prakash; this whirligig may make it big. I’m a fan of can-do Manu and his breakthrough; it’s a real coup.

You also might like the “News and Views” editorial that accompanies their paper. Plus, articles lauding the paperfuge are all over the internet, such as this one in PhysicsWorld and this one on the National Public Radio website.

For those of you who prefer video, check out this great clip put out by Stanford University. 


To learn more about foldscope, watch Prakash’s TED talk.


Finally, here is a video about his MacArthur genius grant.


Enjoy!

Friday, January 13, 2017

Tony Barker receives the International Brain Stimulation Award

In Chapter 8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss transcranial magnetic stimulation:
Since a changing magnetic field generates an induced electric field, it is possible to stimulate nerve or muscle cells without using electrodes. The advantage is that for a given induced current deep within the brain, the currents in the scalp that are induced by the magnetic field are far less than the currents that would be required for electrical stimulation. Therefore transcranial magnetic stimulation (TMS) is relatively painless….

One of the earliest investigations was reported by Barker, Jalinous and Freeston (1985). They used a solenoid in which the magnetic field changed by 2 T in 110 μs to apply a stimulus to different points on a subject’s arm and skull. The stimulus made a subject’s finger twitch after the delay required for the nerve impulse to travel to the muscle. For a region of radius a = 10 mm in material of conductivity 1 S m−1, the induced current density for the field change in Barker’s solenoid was 90 A m−2. (This is for conducting material inside the solenoid; the field falls off outside the solenoid, so the induced current is less.) This current density is large compared to current densities in nerves (Chap. 6).
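Here is a short sketch reproducing that 90 A m−2 estimate, assuming the field change is uniform over a circular region so that Faraday’s law gives E = (a/2)(dB/dt) at radius a.

dB_dt = 2 / 110e-6      # field change from the quote: 2 T in 110 microseconds, T/s
a     = 10e-3           # radius of the conducting region, m
sigma = 1.0             # conductivity, S/m

E = 0.5 * a * dB_dt     # induced electric field at radius a, V/m
J = sigma * E           # induced current density, A/m^2
print(f"E = {E:.0f} V/m, J = {J:.0f} A/m^2")   # about 90 A/m^2, matching the quote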
Tony Barker was the lead engineer who developed the first useful magnetic stimulator. For this ground-breaking work, he recently received the International Brain Stimulation Award. An Institute of Physics and Engineering in Medicine news article states
The pioneer of Transcranial Magnetic Stimulation of the brain has become the first recipient of a new international award.

Professor Tony Barker, a Fellow of the Institute of Physics and Engineering in Medicine, has been awarded the International Brain Stimulation Award by publisher Elsevier.

The award acknowledges outstanding contributions to the field of brain stimulation. These contributions may be in basic, translational, or clinical aspects of neuromodulation, and must have had a profound influence in shaping this field of neuroscience and medicine.

Professor Barker, who recently retired after 38 years at Sheffield Teaching Hospitals’ Department of Medical Physics and Clinical Engineering, led the small team which developed the Transcranial Magnetic Stimulation (TMS) technique in the early 1980s.

He first started his research on using time-varying magnetic fields to induce current flow in tissue in order to depolarize neurons. Prior to this effort, direct electrical stimulation, with electrodes placed on the scalp (or other body part), was the principal method used to induce neuronal depolarization. This method, however, had several flaws and the high intensity of electrical stimulation is often painful. Magnetic fields, in contrast, pass through the scalp and skull unimpeded and give much more precise results.

In 1985, Professor Barker, together with his colleagues Dr Reza Jalinous and Professor Ian Freeston, reported the first demonstration of TMS. They produced twitching in a specific area of the hand in human volunteers by applying TMS to the motor cortex in the opposite hemisphere that controls movement of that muscle. This demonstrated that TMS was capable of stimulating a precise area of the brain and without the pain of electrical stimulation. Moreover, they did this with awake-alert human volunteers.

Today, TMS has become a vital tool in neuroscience, since, depending on stimulation parameters, specific brain areas can either be excited or inhibited. The TMS technique has evolved into a critical tool in basic neuroscience investigation, in the study of brain abnormalities in disease states, and in the treatment of a host of neurological and psychiatric conditions….

Professor Barker will receive his award at the 2nd International Brain Stimulation Conference in March, which is being held in Barcelona. He will also give a plenary lecture at the conference entitled Transcranial Magnetic Stimulation—past, present and future.
According to Google Scholar, Barker’s paper “Non-invasive magnetic stimulation of human motor cortex” in The Lancet has over 3400 citations, reflecting its impact (Oops…the citation to this article in IPMB left out the word “motor” in the title; yet another item for the errata). The figure below is from Barker’s Scholarpedia article about TMS.

The Sheffield group with the stimulator that first achieved transcranial magnetic stimulation, February 1985. From left to right: Reza Jalinous, Ian Freeston, and Tony Barker.

Friday, January 6, 2017

Neuroskeptic

Over the Christmas break I discovered a blog written under the pen name Neuroskeptic.
Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
Neuroskeptic’s interests overlap topics covered in Intermediate Physics for Medicine and Biology. For instance, Neuroskeptic often writes about functional magnetic resonance imaging, a technique Russ Hobbie and I describe in Chapter 18 of IPMB.
“The term functional magnetic resonance imaging (fMRI) usually refers to a technique developed in the 1990s that allows one to study structure and function simultaneously. The basis for fMRI is inhomogeneities in the magnetic field caused by the differences in the magnetic properties of oxygenated and deoxygenated hemoglobin. No external contrast agent is required. Oxygenated hemoglobin is less paramagnetic than deoxyhemoglobin. If we make images before and after a change in the blood flow to a small region of tissue (perhaps caused by a change in its metabolic activity), the difference between the two images is due mainly to changes in the blood oxygenation. One usually sees an increase in blood flow to a region of the brain when that region is active. This BOLD contrast in the two images provides information about the metabolic state of the tissue, and therefore about the tissue function (Ogawa et al. 1990; Kwong et al. 1992).”
Neuroskeptic’s author is obviously an expert in this method, but is suspicious about some of its claims. Often he—I will use the masculine pronoun for convenience, but I have no idea about his gender—analyzes new papers in the field. For instance, in his recent New Year’s Eve blog post he writes
Earlier this year, neuroscience was shaken by the publication in PNAS of “Cluster Failure: Why fMRI Inferences for Spatial Extent have Inflated False-Positive Rates.” In this paper, Anders Eklund, Thomas E. Nichols and Hans Knutsson reported that commonly used software for analysing fMRI data produces many false-positives. But now, Boston College neuroscientist Scott D. Slotnick has criticized Eklund et al.’s alarming conclusions in a new piece in Cognitive Neuroscience. In my view, while Slotnick makes some valid points, he falls short of debunking Eklund et al.’s worrying findings.
Another area Neuroskeptic analyzes is functional electrical stimulation. In Chapter 7 of IPMB, Russ and I write
stimulating electrodes…may be used for electromyographic studies; for stimulating muscles to contract called functional electrical stimulation (Peckham and Knutson 2005); for a cochlear implant to partially restore hearing (Zeng et al. 2008); deep brain stimulation for Parkinson’s disease (Perlmutter and Mink 2006); for cardiac pacing (Moses and Mullin 2007); and even for defibrillation (Dosdall et al. 2009). The electrodes may be inserted in cells, placed in or on a muscle, or placed on the skin.
Two recent Neuroskeptic posts (here and here) analyze a controversial method of electrical stimulation called transcranial direct current stimulation (tDCS). I share his doubts about this technique, in which weak currents (about 1 mA) are applied to the scalp. In a post last August, I hinted at some of my concerns, but my suspicions continue to grow. The electric fields produced in the brain by a 1 mA current to the scalp are minuscule.

Neuroskeptic also wonders if nonionizing radiation can cause cancer, a topic covered extensively in Chapter 9 of IPMB. He writes
Does non-ionizing radiation pose a health risk? Everyone knows that ionizing radiation, like gamma rays, can cause cancer by damaging DNA. But the scientific consensus is that there is no such risk from non-ionizing radiation such as radiowaves or Wi-Fi. Yet according to a remarkable new paper from Magda Havas, the risk is real: it’s called “When Theory and Observation Collide: Can Non-Ionizing Radiation Cause Cancer?”... Non-ionizing radiation such as radiowaves and microwaves consists of photons, just like visible light, but at a lower frequency. Because the energy of a photon is proportional to its frequency, very high frequency photons (like gamma rays) have enough energy to disrupt atoms…But visible light can’t do this, and still less can microwaves or radiowaves. There’s no known mechanism by which such low-energy photons could harm living tissue – except that they can heat tissue up in high doses, but the amount of heating produced by radio and wireless devices is tiny.
Neuroskeptic is not merely a debunker. Sometimes he examines promising new methods, but always with a questioning eye. For instance, his review of a paper developing new contrast agents for MRI is fascinating.
In a new paper called “Molecular fMRI,” MIT researchers Benjamin B. Bartelle, Ali Barandov, and Alan Jasanoff discuss technological advances that could provide neuroscientists with new tools for mapping the brain.

Currently, one of the leading methods of measuring brain activity is functional MRI (fMRI)…. Recent work, however, holds out the hope that a future “molecular fMRI” could be developed to extend the power of fMRI…. Molecular fMRI would involve the use of a molecular probe, a form of “contrast agent,” which would modulate the MRI signal in response to specific conditions.
I would be interested in knowing what Neuroskeptic (I always want to type “the Neuroskeptic” but he never uses the definite article before his name, so I won’t either) thinks about claims of using the biomagnetic field as the gradient field in MRI, as I discussed in my June 2016 blog post.

Everyone has their own gimmick, and Neuroskeptic’s is that he keeps his real identity secret. Some of you are thinking: “Oh, I wish Roth would be anonymous! We hear far too much about him and his little dog Suki and his own research in this blog.” Well, sorry. It’s too late to change now, so you are stuck hearing about Suki and me in addition to learning about physics in medicine and biology. But if you want a well-written, anonymous, and sometimes dissenting view of neuroscience, read the Neuroskeptic.

Friday, December 30, 2016

The Story of the World in 100 Species

The Story of the World in 100 Species, by Christopher Lloyd.
I recently finished reading The Story of the World in 100 Species. The author Christopher Lloyd writes in the introduction
This book is a jargon-free attempt to explain the phenomenon we call life on Earth. It traces the history of life from the dawn of evolution to the present day through the lens of one hundred living things that have changed the world. Unlike Charles Darwin’s theory published more than 150 years ago, it is not chiefly concerned with the “origin of species,” but with the influence and impacts that living things have had on the path of evolution, on each other and on our mutual environment, planet Earth.
Of course, I began to wonder how many of the top hundred species Russ Hobbie and I mention in Intermediate Physics for Medicine and Biology. Lloyd lists the species in order of impact. The number 1 species is the earthworm. As Darwin understood, you would have little agriculture without worms churning the soil. The highest ranking species that was mentioned in IPMB is number 2, algae, which produces much of the oxygen in our atmosphere. According to Lloyd, algae might provide the food (ick!) and fuel we need in the future.

Number 6 is ourselves: humans. Although the species name Homo sapiens never appears in IPMB, several chapters—those dealing with medicine—discuss us. Number 8 yeast (specifically, S. cerevisiae) is not in IPMB, although it is mentioned previously in this blog. Number 15 is the fruit fly Drosophila melanogaster, which made the list primarily because it is an important model species for research. IPMB mentions D. melanogaster when discussing ion channels.

Cows are number 17; a homework problem in IPMB contains the phrase “consider a spherical cow.” The flea is number 18, and is influential primarily for spreading diseases such as the Black Death. In IPMB, we analyze how fleas survive high accelerations. Wheat reaches number 19 and is one of several grains on the list. In Chapter 11, Russ and I write: “Examples of pairs of variables that may be correlated are wheat price and rainfall, ….” I guess that wheat is in IPMB, although the appearance is fairly trivial. Like yeast, number 20 C. elegans, a type of roundworm, is never mentioned in IPMB but does appear previously in this blog because it is such a useful model. I am not sure if number 21, the oak tree, is in IPMB. My electronic pdf of the book has my email address, roth@oakland.edu, as a watermark at the bottom of every page. Oak is not in the appendix, and I am pretty sure Russ and I never mention it, but I haven’t the stamina to search the entire pdf, clicking on each page. I will assume oak does not appear.

Number 24, grass, gets a passing mention: in a homework problem about predator-prey models, we write that “rabbits eat grass…foxes eat only rabbits.” When I searched the book for number 25 ant, I found constant, quantum, implant, elephant, radiant, etc. I gave up after examining just a few pages. Let’s say no for ant. Number 28 rabbit is in that predator-prey problem. Number 32 rat is in my favorite J. B. S. Haldane quote “You can drop a mouse down a thousand-yard mine shaft; and arriving at the bottom, it gets a slight shock and walks away. A rat is killed, and man is broken, a horse splashes.” Number 33 bee is in the sentence “Bees, pigeons, and fish contain magnetic particles,” and number 38 shark is in the sentence “It is possible that the Lorentz force law allows marine sharks, skates, and rays to orient in a magnetic field.” My favorite species, number 42 dog, appears many times. I found number 44 elephant when searching for ant. I am not sure about number 46 cat (complicated, scattering, indicate, cathode, … you search the dadgum pdf!). It doesn’t matter; I am a dog person and don’t care for cats.

Number 53 apple; IPMB suggests watching Russ Hobbie in a video about the program MacDose at the website https://itunes.apple.com/us/itunes-u/photon-interactions-simulation/id448438300?mt=10. No way am I counting that; you gotta draw the line somewhere. Number 58 horse; “…horse splashes…”. Number 59 sperm whale; we mention whales several times, but don’t specify the species—I’m counting it. Number 61 chicken appears in one of my favorite homework problems: “compare the mass and metabolic requirements…of 180 people…with 12,600 chickens…” Number 65 red fox; see predator-prey problem. Number 67 tobacco; IPMB mentions it several times. Number 71 tea; I doubt it but am not sure (instead, steady, steam, ….). Number 77 HIV; see Fig. 1.2. Number 85 coffee; see footnote 7, page 258.

Altogether, IPMB includes twenty of the hundred species (algae, human, fruit fly, cow, flea, wheat, grass, rabbit, rat, bee, shark, dog, elephant, horse, whale, chicken, fox, tobacco, HIV, coffee), which is not as many as I expected. We will have to put more into the 6th edition (top candidates: number 9 influenza, number 10 penicillium, number 14 mosquito, number 26 sheep, number 35 maize aka corn).

Were any important species missing from Lloyd’s list? He includes some well-known model organisms (S. cerevisiae, D. melanogaster, C. elegans) but inexplicably leaves out the bacterium E. coli (Fig. 1.1 in IPMB). Also, I am a bioelectricity guy, so I would include Hodgkin and Huxley’s squid with its giant axon. Otherwise, I think Lloyd’s list is pretty complete.

If you want to get a unique perspective on human history, learn some biology, and better appreciate evolution, I recommend The Story of the World in 100 Species.

Friday, December 23, 2016

Implantable microcoils for intracortical magnetic stimulation

When I worked at the National Institutes of Health in the 1990s, I studied transcranial magnetic stimulation. Russ Hobbie and I discuss magnetic stimulation in Chapter 8 of Intermediate Physics for Medicine and Biology. Pass a pulse of current through a coil held near the head; the changing magnetic field of the coil induces an electric field in the brain that stimulates neurons. Typical magnetic stimulation coils are constructed from several turns of wire, each turn carrying kiloamps of current in pulses that last for a tenth of a millisecond. Most are a few centimeters in size. Researchers have tried to make smaller coils, but these typically require even larger currents, resulting in magnetic forces that tear the coil apart as well as prohibitive Joule heating.

Imagine my surprise when Russ told me about a recently published paper describing magnetic stimulation using microcoils, written by Seung Woo Lee and his colleagues (“Implantable microcoils for intracortical magnetic stimulation,” Science Advances, 2:e1600889, 2016). Frankly, I am not sure what to make of this paper. On the one hand, the authors describe a careful study in which they perform all the control experiments I would have insisted on had I reviewed the paper for the journal (I did not). On the other hand, it just doesn’t make sense. 

Lee et al. built a coil by bending a 50 micron insulated copper wire into a single tight turn having a diameter of about 100 microns (see figure). Their current pulse lasted a few tenths of a millisecond, and had a peak current of….drum roll, please….about fifty milliamps. Yes, that would be some four or five orders of magnitude smaller than the kiloamp currents used in traditional transcranial magnetic stimulation. Can this be? If true, it is a breakthrough, opening up the use of magnetic stimulation with implanted coils at the single neuron level.

Figure from Lee et al., Science Advances, 2:e1600889, 2016.

Why am I skeptical? You can calculate the induced electric field E from the product of μo/4π times the rate of change of the current times an integral over the coil,
E = (μ0/4π) (dI/dt) ∮ dl/R,
where R is the distance from a point on the coil to the point where you calculate E. The constant μ0/4π is 10⁻⁷ V s/A m. The rate of change of the current is about 0.05 A/0.0001 s = 500 A/s. The product of these two factors is roughly 5 × 10⁻⁵ V/m. The difficult part of the calculation is the integral. However, it is dimensionless and if the coil size and distance to the field point are similar it should be on the order of unity. Maybe a strange geometry could provide a factor of two, or π, or even ten, but you don’t expect a dimensionless integral like this one to be orders of magnitude larger than one (Lee et al. derived an expression for this integral containing a logarithm, and we all know how slowly that function changes). So, the electric field induced by such a microcoil should be on the order of 10⁻⁴ V/m. Hause has estimated an electric field threshold for a neuron of about 10 V/m. How do you account for the missing factor of 100,000?
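That back-of-the-envelope estimate takes only a few lines of code (with the dimensionless integral simply set to one):

mu0_over_4pi = 1e-7          # V s / (A m)
dI_dt        = 0.05 / 1e-4   # 50 mA in about 0.1 ms, so roughly 500 A/s
integral     = 1.0           # dimensionless line integral, taken to be of order one

E = mu0_over_4pi * dI_dt * integral      # induced electric field, V/m
threshold = 10.0                         # Hause's estimated neural threshold, V/m
print(f"E ~ {E:.0e} V/m, threshold/E ~ {threshold / E:.0e}")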

Lee et al. focus on the gradient of the electric field, rather than on the electric field itself. The gradient of the electric field plays an important role when performing traditional magnetic stimulation of a long straight axon, as you might find in the median nerve of the arm. However, when the spatial extent over which the electric field varies is smaller than the length constant, the relationship between the transmembrane potential and the electric field gradient becomes complicated. Also, in the brain neurons bend, branch, and bulge, so that the electric field may be the more appropriate quantity to use when estimating threshold. Yet, the electric field induced by a microcoil is really small.

So what is going on? I don’t know. As I said, the authors do several control experiments, and their data is convincing. My hunch is that they stimulated by capacitive coupling, but they examined that possibility and claim it is not the mechanism. I don’t have an answer, but their results are too strange to believe and too important to ignore. One thing I know for sure: the experiments need to be consistent with the fundamental physical laws outlined in Intermediate Physics for Medicine and Biology.

Friday, December 16, 2016

Optical Magnetic Detection of Single-Neuron Action Potentials using Quantum Defects in Diamond

A figure from Barry et al., Optical Magnetic Detection of Single-Neuron Action Potentials Using Quantum Defects in Diamond, PNAS, 113:14133–14138, 2016.
Last week in this blog, I discussed using a wire-wound toroid to measure the magnetic field of a nerve axon. In the comments to my November 25 post, my friend Raghu Parthasarathy (of the Eighteenth Elephant) pointed me to a recent article by Barry et al.: “Optical magnetic detection of single-neuron action potentials using quantum defects in diamond” (Proceedings of the National Academy of Sciences, 113:14133–14138, 2016). I liked the article, and not just because it cited five of my papers. It presents a new method for measuring biomagnetic fields that does not require toroids or SQUID magnetometers, the only two methods discussed in Chapter 8 of Intermediate Physics for Medicine and Biology.  You can read it online; it’s open access.

Barry et al.’s abstract states:
Magnetic fields from neuronal action potentials (APs) pass largely unperturbed through biological tissue, allowing magnetic measurements of AP dynamics to be performed extracellularly or even outside intact organisms. To date, however, magnetic techniques for sensing neuronal activity have either operated at the macroscale with coarse spatial and/or temporal resolution—e.g., magnetic resonance imaging methods and magnetoencephalography—or been restricted to biophysics studies of excised neurons probed with cryogenic or bulky detectors that do not provide single-neuron spatial resolution and are not scalable to functional networks or intact organisms. Here, we show that AP magnetic sensing can be realized with both single-neuron sensitivity and intact organism applicability using optically probed nitrogen-vacancy (NV) quantum defects in diamond, operated under ambient conditions and with the NV diamond sensor in close proximity (∼10 μm) to the biological sample. We demonstrate this method for excised single neurons from marine worm and squid, and then exterior to intact, optically opaque marine worms for extended periods and with no observed adverse effect on the animal. NV diamond magnetometry is noninvasive and label-free and does not cause photodamage. The method provides precise measurement of AP waveforms from individual neurons, as well as magnetic field correlates of the AP conduction velocity, and directly determines the AP propagation direction through the inherent sensitivity of NVs to the associated AP magnetic field vector.
Here is my poor attempt to explain how their technique works; I must confess I don’t understand it completely. Nitrogen-vacancy defects in a diamond create a spin system that you can use for optically detected electron spin resonance. You shine light onto the system and detect the fluorescence with a photodiode. The shift in the magnetic resonance spectrum contained in the fluoresced light provides a measurement of the magnetic field. For our purposes we can think of the system as a black box that measures the magnetic field; a new type of magnetometer.

Barry et al.’s paper started me thinking about the relative advantages and disadvantages of toroids versus optical methods for measuring magnetic fields of nerves.
  1. A disadvantage of toroids is that you have to thread the nerve through the toroid center, which usually requires cutting the nerve. My PhD advisor John Wikswo created a “clip-on” toroid that avoids any cutting, but that technique never really caught on. In the optical method, you just drape the nerve over the detector like you would lie down on a bed. Winner: optical method.
  2. The toroid appears to provide a better signal-to-noise ratio than the optical method. Figure 3a in Roth and Wikswo (1985) shows that a signal of about 300 pT peak-to-peak can be detected with a signal-to-noise ratio of roughly 20 with no averaging (I don’t need to look at our 1985 paper to know this, I have a framed copy of this data on display in my office). Figure 2c of Barry et al. (2016) shows a 4000 pT peak-to-peak signal measured optically with a signal-to-noise ratio of about 10 after 600 averages. Perhaps this comparison is unfair because in 1985 we had spent several years optimizing our toroids, whereas this is a first measurement using the optical technique. Nevertheless, I think the toroids have a better signal-to-noise ratio (see the rough single-shot comparison after this list). Winner: toroids.
  3. The optical technique appears to have better spatial resolution, although not dramatically so. Our toroids were typically one or two millimeters in size. In the optical method, the sensing layer is only 13 microns thick, but the length over which the detection occurs is two millimeters, so their method corresponds to a wide but small-diameter toroid. The spatial resolution of both methods could probably be improved, but once the size of the recording device is less than the length over which the action potential upstroke extends (about a millimeter) there is little to be gained by making the detector smaller. Both methods integrate the magnetic signal over the area of the device. Interestingly, the solution to last week’s new homework problem--integrating the biomagnetic field over the cross section of the toroid--is derived on page 8 of Barry et al.’s supplementary information. Winner: optical method.
  4. The optical method has better temporal resolution. Both, however, have  temporal resolution adequate to record action potentials with an upstroke of a tenth of a millisecond or longer. The toroid does not record the time course of the magnetic field directly, but rather a mixture of the magnetic field and its derivative, and the technique requires a calibration pulse to get the “frequency compensation” correct. As best I can tell, the optical method measures the magnetic field directly. However, I believe any errors arising from frequency compensation of the toroids are small, and that the temporal resolution of both methods is fine. Winner: optical method.
  5. The toroids are encapsulated in epoxy, so they are biocompatible. I don’t know how biocompatible the optical method is, but I suspect it could be made biocompatible if a thin insulating layer covered the detecting layer, with a very small and probably negligible reduction in spatial resolution. Winner: tie.
  6. The question of convenience is tricky. I am accustomed to using the toroids, so that to me they are not difficult to use, whereas I have never tried the optical method. Nevertheless, I think toroids are more convenient. The optical method requires a static bias magnetic field, and toroids do not. Toroid measurements are insensitive to the exact position of the nerve within the toroid, whereas the optical method appears to be sensitive to the exact placement of the nerve on or near the detector. The optical method requires a spectroscopic analysis of the light signal; the toroid only needs an amplifier to record the current induced in the toroid winding. Finally, the optical method is based on magnetic splitting of spin states and magnetic resonance, whereas toroids rely on good old Faraday induction—my kind of physics. Although I consider toroids more convenient, I would not want to defend that opinion against a skeptical scientist holding a different view, because it is more a matter of taste than science. Winner: toroids.
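Here is the rough single-shot comparison promised in item 2, assuming the signal-to-noise ratio improves as the square root of the number of averages (uncorrelated noise):

import math

toroid_snr  = 20       # Roth and Wikswo (1985), single shot, no averaging
optical_snr = 10       # Barry et al. (2016), after 600 averages
n_averages  = 600

optical_snr_single = optical_snr / math.sqrt(n_averages)
print(f"toroid, single shot:  SNR ~ {toroid_snr}")
print(f"optical, single shot: SNR ~ {optical_snr_single:.2f}")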
The result is 3 for the optical method, 2 for toroids, and 1 tie. However, the optical method victories for spatial and temporal resolution were close calls, whereas the toroid victory for signal-to-noise ratio was a landslide. This is not the electoral college: I declare toroids to be the overall winner. (It’s my blog!)

Perhaps a more interesting comparison is for situations where you want to make two-dimensional measurements of current distributions. Barry et al. discuss how the optical method might be extended to do 2D imaging. The main competition would be the traditional approach of Wikswo’s microSQUID or nanoSQUID: an array of small superconducting coils placed over a sample. But in that case you need cryogenics to keep the coils cold, and I could easily see how a room-temperature array of quantum defect detectors in diamond, recorded optically with a camera, might be simpler.

Both the optical method and toroids require placing a detector near the nerve, so both are invasive. However, if (IF!) the biomagnetic field can be used as the gradient field for magnetic resonance imaging (see my June 3 post) then that technique becomes totally noninvasive. Winner: MRI.