Screen could offer better safety tests for new chemicals

Using specialized liver cells, a new test can quickly detect potentially cancer-causing DNA damage.

MIT biological engineers developed a test that can reveal high levels of DNA damage in human cells.
Image: courtesy of the researchers

Anne Trafton | MIT News Office
December 17, 2019

An estimated 80,000 industrial chemicals are currently in use, in products such as clothing, cleaning solutions, carpets, and furniture. For the vast majority of these chemicals, scientists have little or no information about their potential to cause cancer.

The detection of DNA damage in cells can predict whether cancer will develop, but tests for this kind of damage have limited sensitivity. A team of MIT biological engineers has now come up with a new screening method that they believe could make such testing much faster, easier, and more accurate.

The National Toxicology Program, a government research agency that identifies potentially hazardous substances, is now working on adopting the MIT test to evaluate new compounds.

“My hope is that they use it to identify potential carcinogens and we get them out of our environment, and prevent them from being produced in massive quantities,” says Bevin Engelward, a professor of biological engineering at MIT and the senior author of the study. “It can take decades between the time you’re exposed to a carcinogen and the time you get cancer, so we really need predictive tests. We need to prevent cancer in the first place.”

Engelward’s lab is now working on further validating the test, which makes use of human liver-like cells that metabolize chemicals very similarly to real human liver cells and produce a distinctive signal when DNA damage occurs.

Le Ngo, a former MIT graduate student and postdoc, is the lead author of the paper, which appears today in the journal Nucleic Acids Research. Other MIT authors of the paper include postdoc Norah Owiti, graduate student Yang Su, former graduate student Jing Ge, Singapore-MIT Alliance for Research and Technology graduate student Aoli Xiong, professor of electrical engineering and computer science Jongyoon Han, and professor emerita of biological engineering Leona Samson.

Carol Swartz, John Winters, and Leslie Recio of Integrated Laboratory Systems are also authors of the paper.

Detecting DNA damage

Currently, tests for the cancer-causing potential of chemicals involve exposing mice to the chemical and then waiting to see whether they develop cancer, which takes about two years.

Engelward has spent much of her career developing ways to detect DNA damage in cells, which can eventually lead to cancer. One of these devices, the CometChip, reveals DNA damage by placing the DNA in an array of microwells on a slab of polymer gel and then exposing it to an electric field. DNA strands that have been broken travel farther, producing a comet-shaped tail.

While the CometChip is good at detecting breaks in DNA, as well as DNA damage that is readily converted into breaks, it can’t pick up another type of damage known as a bulky lesion. These lesions form when chemicals stick to a strand of DNA and distort the double helix structure, interfering with gene expression and cell division. Chemicals that cause this kind of damage include aflatoxin, which is produced by fungi and can contaminate peanuts and other crops, and benzo[a]pyrene, which can form when food is cooked at high temperatures.

Engelward and her students decided to try to adapt the CometChip so that it could pick up this type of DNA damage. To do that, they took advantage of cells’ DNA repair pathways to generate strand breaks. Typically, when a cell discovers a bulky lesion, it will try to repair it by cutting out the lesion and then replacing it with a new piece of DNA.

“If there’s something glommed onto the DNA, you have to rip out that stretch of DNA and then replace it with fresh DNA. In that ripping process, you’re creating a strand break,” Engelward says.

To capture those broken strands, the researchers treated cells with two compounds that prevent them from synthesizing new DNA. This halts the repair process and generates unrepaired single-stranded DNA that the Comet test can detect.

The researchers also wanted to make sure that their test, which is called HepaCometChip, would detect chemicals that only become hazardous after being modified in the liver through a process called bioactivation.

“A lot of chemicals actually are inert until they get metabolized by the liver,” Ngo says. “In the liver you have a lot of metabolizing enzymes, which modify the chemicals so that they become more easily excreted by the body. But this process sometimes produces intermediates that can turn out to be more toxic than the original chemical.”

To detect those chemicals, the researchers had to perform their test in liver cells. Human liver cells are notoriously difficult to grow outside the body, but the MIT team was able to incorporate a type of liver-like cell called HepaRG, developed by a company in France, into the new test. These cells produce many of the same metabolic enzymes found in normal human liver cells, and like human liver cells, they can generate potentially harmful intermediates that create bulky lesions.

Enhanced sensitivity

To test their new system, the researchers first exposed the liver-like cells to UV light, which is known to produce bulky lesions. After verifying that they could detect such lesions, they tested the system with nine chemicals, seven of which are known to cause single-stranded DNA breaks or bulky lesions, and found that the test accurately detected all seven.

“Our new method enhances the sensitivity, because it should be able to detect any damage a normal Comet test would detect, and also adds on the layer of the bulky lesions,” Ngo says.

The whole process takes between two days and a week, offering a significantly faster turnaround than studies in mice.

The researchers are now working on further validating the test by comparing its performance with historical data from mouse carcinogenicity studies, with funding from the National Institutes of Health.

They are also working with Integrated Laboratory Systems, a company that performs toxicology testing, to potentially commercialize the technology. Engelward says the HepaCometChip could be useful not only for manufacturers of new chemical products, but also for drug companies, which are required to test new drugs for cancer-causing potential. The new test could offer a much easier and faster way to perform those screens.

“Once it’s validated, we hope it will become a recommended test by the FDA,” she says.

The research was funded by the National Institute of Environmental Health Sciences, including the NIEHS Superfund Basic Research Program, and the MIT Center for Environmental Health Sciences.

Widening metal tolerance for hydrogels

MIT graduate student Seth Cazzell shows that controlling pH enables reversible hydrogel formation across a wider range of metal concentrations.

Inspired by the tissues that keep mussels attached to rocks underwater, MIT graduate student Seth Cazzell (pictured) and Associate Professor Niels Holten-Andersen found that controlling pH enables reversible hydrogel formation.
Photo: Denis Paiste/Materials Research Laboratory

Denis Paiste | Materials Research Laboratory
December 23, 2019

Researchers seeking to develop self-healing hydrogels have long sought to mimic the natural ability of mussels to generate strong, flexible threads underwater that allow the mussels to stick to rocks.

The natural process that gives these mussel threads, known as byssal threads, the ability to break apart and re-form is purely chemical, not biological, MIT graduate student Seth Cazzell noted in a presentation at the Materials Research Society fall meeting in Boston on Dec. 5.

The critical step in the process is the chemical binding of polymer chains to a metal atom (a protein-to-metal bond in the case of the mussel). These links are called cross-linked metal coordination bonds. Their greatest strength occurs when each metal atom binds to three polymer chains, and they form a network that results in a strong hydrogel.

In a recently published PNAS paper, Cazzell and associate professor of materials science and engineering Niels Holten-Andersen demonstrated a method to create a self-healing hydrogel in a wider range of metal concentrations through the use of competition controlled by the pH, or acidity and alkalinity, of the environment. Cazzell is a former National Defense Science and Engineering Graduate Fellow.

In their computational model system, Cazzell showed that in the absence of pH-controlled competition, excess metal — typically iron, aluminum, or nickel — overwhelms the ability of the polymer to form strong cross-links. In the presence of too much metal, the polymers bind singly to metal atoms instead of forming cross-linked complexes, and the material remains a liquid.

One commonly studied mussel-inspired metal-coordinating ligand is catechol. In this study, a modified catechol, nitrocatechol, was attached to polyethylene glycol. By studying the nitrocatechol system coordinated with iron, as well as a second model hydrogel system (histidine coordinated with nickel), Cazzell experimentally confirmed that strong cross-links can be induced to form even at excess metal concentrations. This supports the team’s computational evidence that hydroxide ions (negatively charged hydrogen-oxygen pairs) compete with the polymer for binding to metal.

In these solutions, polymers can bind to metal atoms singly, in pairs, or in threes. When more metal atoms bind to hydroxide ions, fewer metal atoms are available to the polymer, which increases the likelihood that the polymer chains will bind to the remaining metal atoms in the strong triple cross-links that produce the desired putty-like gel.
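Cazzell’s competition argument can be illustrated with a back-of-the-envelope speciation model. The stepwise binding constants below are hypothetical placeholders, not values from the study; the sketch only shows how the balance between single and triple binding shifts with the amount of free ligand available per metal atom.

```python
# Toy metal-ligand speciation via a binding polynomial. With stepwise
# constants K1 >= K2 >= K3, the relative abundance of bare metal (M),
# mono (ML), bis (ML2), and tris (ML3) complexes at a given free-ligand
# concentration is proportional to 1 : K1*L : K1*K2*L^2 : K1*K2*K3*L^3.
def speciation(free_ligand, k1, k2, k3):
    """Fractions of [M, ML, ML2, ML3] at a free-ligand concentration (M)."""
    weights = [1.0,
               k1 * free_ligand,
               k1 * k2 * free_ligand ** 2,
               k1 * k2 * k3 * free_ligand ** 3]
    total = sum(weights)
    return [w / total for w in weights]

K1, K2, K3 = 1e9, 1e7, 1e5   # hypothetical stepwise constants (1/M)

# Scarce free ligand (excess metal): mono complexes dominate -> liquid.
low = speciation(1e-8, K1, K2, K3)
# Abundant free ligand: tris complexes dominate -> cross-linked gel.
high = speciation(1e-4, K1, K2, K3)
print([f"{f:.2f}" for f in low])
print([f"{f:.2f}" for f in high])
```

Anything that removes free metal from solution, such as hydroxide binding it, effectively raises the ligand-to-metal ratio and pushes the system toward the tris-dominated, gel-forming regime.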

“What we really like about this study is we’re not looking at biology directly, but we think it’s giving us nice evidence of something that might be happening in biology. So it’s an example of materials science informing what we think the organism is actually using to build these materials,” Cazzell says.

In simulations, Cazzell plotted the effect of the hydroxide competitor on strong hydrogel formation and found that as competitor strength increases, “we can enter into a range where we can form a gel almost anywhere.” But, he says, “Eventually the competitor gets too strong, and you lose the ability to form a gel at all.”

These results have potential for use in advanced 3D printing of synthetic tissues and other biomedical applications.

This work was supported by the National Science Foundation through the MIT Materials Research Laboratory’s Materials Research Science and Engineering Center program, and by the U.S. Office of Naval Research.

The billion-year belch

Michael Calzadilla and colleagues describe a violent black hole outburst that provides new insight into galaxy cluster evolution.

Giant cavities in the X-ray emitting intracluster medium (shown in blue, as observed by NASA’s Chandra X-ray Observatory) have been carved out by a black hole outburst. X-ray data are overlaid on top of optical data from the Hubble Space Telescope (in red/orange), where the central galaxy that is likely hosting the culprit supermassive black hole is also visible.
Image courtesy of the researchers.

Fernanda Ferreira | School of Science
December 23, 2019

Billions of years ago, in the center of a galaxy cluster far, far away (15 billion light-years, to be exact), a black hole spewed out jets of plasma. As the plasma rushed out of the black hole, it pushed away material, creating two large cavities 180 degrees from each other. In the same way you can calculate the energy of an asteroid impact by the size of its crater, Michael Calzadilla, a graduate student at the MIT Kavli Institute for Astrophysics and Space Research (MKI), used the size of these cavities to figure out the power of the black hole’s outburst.

In a recent paper in The Astrophysical Journal Letters, Calzadilla and his coauthors describe the outburst in galaxy cluster SPT-CLJ0528-5300, or SPT-0528 for short. Combining the volume and pressure of the displaced gas with the age of the two cavities, they were able to calculate the total energy of the outburst. At greater than 10^54 joules, equivalent to the energy of about 10^38 nuclear bombs, this is the most powerful outburst reported in a distant galaxy cluster. Coauthors of the paper include MKI research scientist Matthew Bayliss and assistant professor of physics Michael McDonald.
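The cavity method itself is simple to sketch. For a cavity filled with relativistic plasma, the energy needed to inflate it against the surrounding gas pressure is roughly the enthalpy 4pV, and dividing by the cavity’s age gives the mean power of the outburst. The pressure, size, and age below are illustrative placeholders, not the paper’s measurements.

```python
# Back-of-the-envelope cavity-enthalpy estimate (illustrative numbers).
import math

KPC_M = 3.086e19          # metres per kiloparsec
GYR_S = 3.156e16          # seconds per gigayear

def cavity_energy(pressure_pa, radius_kpc):
    """Enthalpy 4pV of one spherical cavity of the given radius."""
    volume = (4.0 / 3.0) * math.pi * (radius_kpc * KPC_M) ** 3
    return 4.0 * pressure_pa * volume

def outburst_power(pressure_pa, radius_kpc, age_gyr, n_cavities=2):
    """Total energy of all cavities, and that energy over their age."""
    e_total = n_cavities * cavity_energy(pressure_pa, radius_kpc)
    return e_total, e_total / (age_gyr * GYR_S)

# Hypothetical cluster: 1e-11 Pa gas pressure, 100-kpc cavities, 0.1 Gyr old.
energy, power = outburst_power(1e-11, 100.0, 0.1)
print(f"E ~ {energy:.1e} J, mean power ~ {power:.1e} W")
```

Even with made-up but plausible inputs, the arithmetic lands in the 10^54-joule regime the article quotes, which is why cavity sizes are such a direct proxy for outburst energy.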

The universe is dotted with galaxy clusters, collections of hundreds and even thousands of galaxies that are permeated with hot gas and dark matter. At the center of each cluster is a black hole, which goes through periods of feeding, when it gobbles up plasma from the cluster, followed by periods of explosive outburst, when it shoots out jets of plasma once it has had its fill. “This is an extreme case of the outburst phase,” says Calzadilla of their observation of SPT-0528. Even though the outburst happened billions of years ago, before our solar system had even formed, it took around 6.7 billion years for light from the galaxy cluster to travel all the way to Chandra, NASA’s orbiting X-ray observatory.

Because galaxy clusters are full of gas, early theories predicted that as the gas cooled, the clusters would see high rates of star formation, since stars need cool gas to form. However, these clusters are not as cool as predicted and, as such, weren’t producing new stars at the expected rate. Something was preventing the gas from fully cooling. The culprits were supermassive black holes, whose outbursts of plasma keep the gas in galaxy clusters too warm for rapid star formation.

The recorded outburst in SPT-0528 has another peculiarity that sets it apart from other black hole outbursts. It’s unnecessarily large. Astronomers think of the process of gas cooling and hot gas release from black holes as an equilibrium that keeps the temperature in the galaxy cluster — which hovers around 18 million degrees Fahrenheit — stable. “It’s like a thermostat,” says McDonald. The outburst in SPT-0528, however, is not at equilibrium.

According to Calzadilla, if you compare how much power is released as gas cools onto the black hole with how much power is contained in the outburst, the outburst is vastly overdoing it. In McDonald’s analogy, the outburst in SPT-0528 is a faulty thermostat. “It’s as if you cooled the air by 2 degrees, and the thermostat’s response was to heat the room by 100 degrees,” McDonald explains.

Earlier in 2019, McDonald and colleagues released a paper looking at a different galaxy cluster, one that displays completely opposite behavior to that of SPT-0528. Instead of an unnecessarily violent outburst, the black hole in this cluster, dubbed Phoenix, isn’t able to keep the gas from cooling. Unlike the majority of known galaxy clusters, Phoenix is full of young star nurseries.

“With these two galaxy clusters, we’re really looking at the boundaries of what is possible at the two extremes,” McDonald says of SPT-0528 and Phoenix. He and Calzadilla will also characterize the more normal galaxy clusters, in order to understand the evolution of galaxy clusters over cosmic time. To explore this, Calzadilla is characterizing 100 galaxy clusters.

The reason for characterizing such a large collection of galaxy clusters is that each telescope image captures a cluster at a single moment in time, whereas cluster evolution plays out over cosmic time. These clusters cover a range of distances and ages, allowing Calzadilla to investigate how the properties of clusters change over cosmic time. “These are timescales that are much bigger than a human timescale or what we can observe,” explains Calzadilla.

The research is similar to that of a paleontologist trying to reconstruct the evolution of an animal from a sparse fossil record. But instead of bones, Calzadilla is studying galaxy clusters, ranging from SPT-0528, with its violent plasma outburst, on one end to Phoenix, with its rapid cooling, on the other. “You’re looking at different snapshots in time,” says Calzadilla. “If you build big enough samples of each of those snapshots, you can get a sense of how a galaxy cluster evolves.”

Researchers produce first laser ultrasound images of humans

Technique may help remotely image and assess health of infants, burn victims, and accident survivors in hard-to-reach places.

A new ultrasound technique uses lasers to produce images beneath the skin, without making contact with the skin as conventional ultrasound probes do. The new laser ultrasound technique was used to produce an image (left) of a human forearm (above), which was also imaged using conventional ultrasound (right).
Image courtesy of the researchers

Jennifer Chu | MIT News Office
December 19, 2019

For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient’s skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.

Conventional ultrasound doesn’t expose patients to harmful radiation as X-ray and CT scanners do, and it’s generally noninvasive. But it does require contact with a patient’s body, and as such, may be limiting in situations where clinicians might want to image patients who don’t tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.

Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn’t require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient’s skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.

In a paper published today in the Nature journal Light: Science & Applications, the team reports generating the first laser ultrasound images of humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimeters below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a meter away.

“We’re at the beginning of what we could do with laser ultrasound,” says Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. “Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient.”

Early concepts for noncontact laser ultrasound for medical imaging originated from a Lincoln Laboratory program established by Rob Haupt of the Active Optical Systems Group and Chuck Wynn of the Advanced Capabilities and Technologies Group, who are co-authors on the new paper along with Matthew Johnson. From there, the research grew via collaboration with Anthony and his students, Xiang (Shawn) Zhang, who is now an MIT postdoc and is the paper’s first author, and recent doctoral graduate Jonathan Fincke, who is also a co-author. The project combined the Lincoln Laboratory researchers’ expertise in laser and optical systems with the Anthony group’s experience with advanced ultrasound systems and medical image reconstruction.

Yelling into a canyon — with a flashlight

In recent years, researchers have explored laser-based methods in ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned at a particular wavelength, that penetrates the skin and is absorbed by blood vessels.

The blood vessels rapidly expand and relax — instantly heated by a laser pulse then rapidly cooled by the body back to their original size — only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.

While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What’s more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.

Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam’s light into sound waves at the surface of the skin, in order to image deeper in the body. 

Based on their research, the team selected lasers with a wavelength of 1,550 nanometers, which is strongly absorbed by water (and is eye- and skin-safe with a large safety margin). As skin is essentially composed of water, the team reasoned that it should efficiently absorb this light, heating up and expanding in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.

The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometers to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves.  This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser’s frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.
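The detection step can be sketched with the standard laser-Doppler relation: a surface moving at velocity v shifts reflected light of wavelength λ by 2v/λ. The displacement amplitude and ultrasound frequency below are illustrative assumptions, not measurements from the study.

```python
# Rough laser-Doppler-vibrometer picture of the detection laser
# (illustrative numbers, not the paper's). A surface moving at velocity
# v Doppler-shifts the reflected beam by delta_f = 2 * v / wavelength.
import math

WAVELENGTH_M = 1550e-9    # detection laser wavelength (1,550 nm)

def doppler_shift(surface_velocity_m_s):
    """Frequency shift of light reflected from a moving surface (Hz)."""
    return 2.0 * surface_velocity_m_s / WAVELENGTH_M

def peak_velocity(amplitude_m, ultrasound_freq_hz):
    """Peak velocity of a surface oscillating sinusoidally."""
    return 2.0 * math.pi * ultrasound_freq_hz * amplitude_m

# Hypothetical echo: 1-nanometre skin displacement at 1 MHz ultrasound.
v_peak = peak_velocity(1e-9, 1e6)
shift_hz = doppler_shift(v_peak)
print(f"peak velocity {v_peak:.2e} m/s -> frequency shift ~ {shift_hz:.0f} Hz")
```

The point of the sketch is that even nanometre-scale surface motion produces a kilohertz-scale frequency shift, which is a measurable signal for a sensitive interferometric detector.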

“It’s like we’re constantly yelling into the Grand Canyon while walking along the wall and listening at different locations,” Anthony says. “That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against — and the yelling is done with a flashlight.”

In-home imaging

The researchers first used the new setup to image metal objects embedded in a gelatin mold roughly resembling skin’s water content. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue — in this case, pig skin — where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.

Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully noncontact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.

The researchers plan to improve their technique, and they are looking for ways to boost the system’s performance to resolve fine features in the tissue. They are also looking to hone the detection laser’s capabilities. Further down the road, they hope to miniaturize the laser setup, so that laser ultrasound might one day be deployed as a portable device.

“I can imagine a scenario where you’re able to do this in the home,” Anthony says. “When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state.” 

This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command’s Military Operational Medicine Research Program.

A new way to remove contaminants from nuclear wastewater

Method concentrates radionuclides in a small portion of a nuclear plant’s wastewater, allowing the rest to be recycled.

A small-scale device, seen here, was used in the lab to demonstrate the effectiveness of the new shockwave-based system for removing radioactive contaminants from the cooling water in nuclear powerplants.
Image courtesy of the researchers

David L. Chandler | MIT News Office
December 19, 2019

Nuclear power continues to expand globally, propelled, in part, by the fact that it produces few greenhouse gas emissions while providing steady power output. But along with that expansion comes an increased need for dealing with the large volumes of water used for cooling these plants, which becomes contaminated with radioactive isotopes that require special long-term disposal.

Now, a method developed at MIT provides a way of substantially reducing the volume of contaminated water that needs to be disposed of, instead concentrating the contaminants and allowing the rest of the water to be recycled through the plant’s cooling system. The proposed system is described in the journal Environmental Science and Technology, in a paper by graduate student Mohammad Alkhadra, professor of chemical engineering Martin Bazant, and three others.

The method makes use of a process called shock electrodialysis, which uses an electric field to generate a deionization shockwave in the water. The shockwave pushes the electrically charged particles, or ions, to one side of a tube filled with charged porous material, so that a concentrated stream of contaminants can be separated from the rest of the water. The group discovered that two radionuclide contaminants — isotopes of cobalt and cesium — can be selectively removed from water that also contains boric acid and lithium. After the water stream is cleansed of its cobalt and cesium contaminants, it can be reused in the reactor.

The shock electrodialysis process was initially developed by Bazant and his co-workers as a general method of removing salt from water, as demonstrated in their first scalable prototype four years ago. Now, the team has focused on this more specific application, which could help improve the economics and environmental impact of working nuclear power plants. In ongoing research, they are also continuing to develop a system for removing other contaminants, including lead, from drinking water.

Not only is the new system inexpensive and scalable to large sizes, but in principle it also can deal with a wide range of contaminants, Bazant says. “It’s a single device that can perform a whole range of separations for any specific application,” he says.

In their earlier desalination work, the researchers used measurements of the water’s electrical conductivity to determine how much salt was removed. In the years since then, the team has developed other methods for detecting and quantifying the details of what’s in the concentrated radioactive waste and the cleaned water.

“We carefully measure the composition of all the stuff going in and out,” says Bazant, who is the E.G. Roos Professor of Chemical Engineering as well as a professor of mathematics. “This really opened up a new direction for our research.” They began to focus on separation processes that would be useful for health reasons or that would result in concentrating material that has high value, either for reuse or to offset disposal costs.

The method they developed works for sea water desalination, but it is a relatively energy-intensive process for that application. The energy cost is dramatically lower when the method is used for ion-selective separations from dilute streams such as nuclear plant cooling water. For this application, which also requires expensive disposal, the method makes economic sense, he says. It also hits both of the team’s targets: dealing with high-value materials and helping to safeguard health. The scale of the application is also significant — a single large nuclear plant can circulate about 10 million cubic meters of water per year through its cooling system, Alkhadra says.

For their tests of the system, the researchers used simulated nuclear wastewater based on a recipe provided by Mitsubishi Heavy Industries, which sponsored the research and is a major builder of nuclear plants. In the team’s tests, after a three-stage separation process, they were able to remove 99.5 percent of the cobalt radionuclides in the water while retaining about 43 percent of the water in cleaned-up form so that it could be reused. As much as two-thirds of the water can be reused if the cleanup level is cut back to 98.3 percent of the contaminants removed, the team found.
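The trade-off between removal level and water recovery follows from a simple mass balance: whatever contaminant leaves the cleaned stream must end up, more concentrated, in the smaller reject stream. The sketch below reproduces that arithmetic using the 99.5 percent removal and 43 percent recovery figures quoted above; the function is an illustration of the bookkeeping, not the team’s actual process model.

```python
# Toy mass balance for splitting a feed into a cleaned stream and a
# concentrated reject stream (illustrative bookkeeping only; the actual
# separation is the shock-electrodialysis process described above).
def split_stream(feed_conc, removal_fraction, water_recovery):
    """Return (clean_conc, reject_conc) after one separation pass.

    removal_fraction: fraction of contaminant diverted to the reject.
    water_recovery:   fraction of the feed water kept as cleaned product.
    """
    clean_conc = feed_conc * (1.0 - removal_fraction) / water_recovery
    reject_conc = feed_conc * removal_fraction / (1.0 - water_recovery)
    return clean_conc, reject_conc

# Article figures: 99.5% removal while recovering 43% of the water.
clean, reject = split_stream(feed_conc=1.0, removal_fraction=0.995,
                             water_recovery=0.43)
print(f"clean stream ~{clean:.4f}x feed, reject ~{reject:.2f}x feed")
```

Relaxing the removal target sends less contaminant to the reject stream, which is why the team could recover up to two-thirds of the water at the lower 98.3 percent cleanup level.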

While the overall method has many potential applications, the nuclear wastewater separation is “one of the first problems we think we can solve [with this method] that no other solution exists for,” Bazant says. No other practical, continuous, economical method has been found for separating out the radioactive isotopes of cobalt and cesium, the two major contaminants of nuclear wastewater, he adds.

While the method could be used for routine cleanup, it could also make a big difference in dealing with more extreme cases, such as the millions of gallons of contaminated water at the damaged Fukushima Daiichi power plant in Japan, where the accumulation of that contaminated water has threatened to overwhelm the containment systems designed to prevent it from leaking into the adjacent Pacific. While the new system has so far been tested only at much smaller scales, Bazant says that such large-scale decontamination systems based on this method might be possible “within a few years.”

The research team also included MIT postdocs Kameron Conforti and Tao Gao and graduate student Huanhuan Tian.

Chemists glimpse the fleeting “transition state” of a reaction

New technique for observing reaction products offers insights into the chemical mechanisms that formed them.

MIT chemists have devised a way to observe the transition state of the chemical reaction that occurs when vinyl cyanide is broken apart by an ultraviolet laser.
Image: Christine Daniloff, MIT

Anne Trafton | MIT News Office
December 16, 2019

During a chemical reaction, the molecules involved in the reaction gain energy until they reach a “point of no return” known as a transition state.

Until now, no one has glimpsed this state, as it lasts for only a few femtoseconds (quadrillionths of a second). However, chemists at MIT, Argonne National Laboratory, and several other institutions have now devised a technique that allows them to determine the structure of the transition state by detailed observation of the products that result from the reaction.

“We’re looking at the consequences of the event, which have encoded in them the actual structure of the transition state,” says Robert Field, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT. “It’s an indirect measurement, but it’s among the most direct classes of measurement that have been possible.”

Field and his colleagues used millimeter-wave spectroscopy, which can measure the rotational-vibrational energy of reaction product molecules, to determine the structure of the products of the breakdown of vinyl cyanide caused by ultraviolet light. Using this approach, they identified two different transition states for the reaction and found evidence that additional transition states may be involved.

Field is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. The lead author is Kirill Prozument, a former MIT postdoc who is now at Argonne National Laboratory.

A central concept of chemistry

For any chemical reaction to occur, the reacting molecules must receive an input of energy that enables the activated molecules to reach a transition state, from which the products are formed.

“The transition state is a central concept of chemistry,” Field says. “Everything we think about in reactions really hinges on the structure of the transition state, which we cannot directly observe.”

In a paper published in 2015, Field and his colleagues used laser spectroscopy to characterize the transition state for a different type of reaction known as an isomerization, in which a molecule undergoes a change of shape.

In their new study, the researchers explored another style of reaction, using ultraviolet laser radiation to break molecules of vinyl cyanide into acetylene and other products. Then, they used millimeter-wave spectroscopy to observe the vibrational level population distribution of the reaction products a few millionths of a second after the reaction occurred.

Using this technique, the researchers were able to determine nascent populations of molecules in different levels of vibrational energy — a measure of how much the atoms of a molecule move relative to each other. Those vibrational energy levels also encode the geometry of the molecules when they were born at the transition state, specifically, how much bending excitation there is in the bond angles between hydrogen, carbon, and nitrogen atoms.

This also allowed the researchers to distinguish between two slightly different products of the reaction — hydrogen cyanide (HCN), in which a central carbon atom is bound to hydrogen and nitrogen, and hydrogen isocyanide (HNC), in which nitrogen is the central atom, bound to carbon and hydrogen.

“This is the fingerprint of what the structure was during the instant that the molecule was released,” Field says. “Previous methods of looking at reactions were blind to the vibrational populations, and they were blind to the difference between HCN and HNC.”

The researchers found both HCN and HNC, which are produced via different transition states, among the reaction products. This suggests that both of those transition states, which represent different mechanisms of reaction, are in play when vinyl cyanide is broken apart by the ultraviolet laser.

“This implies that there are two different mechanisms competing for transition states, and we’re able to separate the reaction into these different mechanisms,” Field says. “This is a completely new technique, a new way of going to the heart of what happens in a chemical reaction.”

The new technique allows scientists to explore the transition state in a way that has previously not been possible, says Arthur Suits, a professor of chemistry at the University of Missouri.

“In this work, the researchers use the powerful new technique of broadband rotational spectroscopy to monitor the nascent vibrational distributions of the products of a photodissociation reaction, thereby gaining deep insight into two different transition states,” says Suits, who was not involved in the study. “Broadband rotational spectroscopy continues to amaze us with unexpected applications such as this glimpse of the elusive transition state, and other exciting advances driven by this technique are no doubt on the way.”

Additional mechanisms

The researchers’ data shows that there are additional reaction mechanisms beyond those two, but more study is needed to determine their transition state structures.

Field and Prozument are now using this technique to study the reaction products of the pyrolytic breakdown of acetone. They also hope to use it to explore how triazine, a six-membered ring of alternating carbon and nitrogen atoms, breaks down into three molecules of HCN, in particular, whether all three products form simultaneously (a “triple whammy”) or sequentially.

The research was funded by the Department of Energy, the Petroleum Research Fund, and the National Science Foundation. Other authors of the paper include Joshua Baraban PhD ’13 of Ben-Gurion University; G. Barratt Park PhD ’15 of the Max Planck Institute for Biophysical Chemistry; Rachel Shaver SM ’13; P. Bryan Changala of the University of Colorado at Boulder; John Muenter of the University of Rochester; Stephen Klippenstein of Argonne National Laboratory; and Vladimir Chernyak of Wayne State University.

When laser beams meet plasma: New data addresses gap in fusion research

Researchers used the Omega Laser Facility at the University of Rochester’s Laboratory for Laser Energetics to make highly detailed measurements of laser-heated plasmas. Credit: University photo / J. Adam Fenster

DECEMBER 2, 2019

by University of Rochester

New research from the University of Rochester will enhance the accuracy of computer models used in simulations of laser-driven implosions. The research, published in the journal Nature Physics, addresses one of the challenges in scientists’ longstanding quest to achieve fusion.

In laser-driven inertial confinement fusion (ICF) experiments, such as the experiments conducted at the University of Rochester’s Laboratory for Laser Energetics (LLE), short beams consisting of intense pulses of light—pulses lasting mere billionths of a second—deliver energy to heat and compress a capsule of hydrogen fuel. Ideally, this process would release more energy than was used to heat the system.

Laser-driven ICF experiments require that many laser beams propagate through a plasma—a hot soup of free moving electrons and ions—to deposit their radiation energy precisely at their intended target. But, as the beams do so, they interact with the plasma in ways that can complicate the intended result.

“ICF necessarily generates environments in which many laser beams overlap in a hot plasma surrounding the target, and it has been recognized for many years that the laser beams can interact and exchange energy,” says David Turnbull, an LLE scientist and the first author of the paper.

To accurately model this interaction, scientists need to know exactly how the energy from the laser beam interacts with the plasma. While researchers have offered theories about the ways in which laser beams alter a plasma, none has ever before been demonstrated experimentally.

Now, researchers at the LLE, along with their colleagues at Lawrence Livermore National Laboratory in California and the Centre National de la Recherche Scientifique in France, have directly demonstrated for the first time how laser beams modify the conditions of the underlying plasma, in turn affecting the transfer of energy in fusion experiments.

“The results are a great demonstration of the innovation at the Laboratory and the importance of building a solid understanding of laser-plasma instabilities for the national fusion program,” says Michael Campbell, the director of the LLE.


Researchers often use supercomputers to study the implosions involved in fusion experiments. It is important, therefore, that these computer models accurately depict the physical processes involved, including the exchange of energy from the laser beams to the plasma and eventually to the target.

For the past decade, researchers have used computer models describing the mutual laser beam interaction involved in laser-driven fusion experiments. However, the models have generally assumed that the plasma’s electron energies follow a type of equilibrium known as a Maxwellian distribution—the equilibrium one would expect when no lasers are present.

“But, of course, lasers are present,” says Dustin Froula, a senior scientist at the LLE.

Froula notes that scientists predicted almost 40 years ago that lasers alter the underlying plasma conditions in important ways. In 1980, a theory was presented that predicted these non-Maxwellian distribution functions in laser plasmas due to the preferential heating of slow electrons by the laser beams. In subsequent years, Rochester graduate Bedros Afeyan ’89 (Ph.D.) predicted that the effect of these non-Maxwellian electron distribution functions would change how laser energy is transferred between beams.
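
That 1980 prediction can be sketched concretely: preferential heating of slow electrons flattens the Maxwellian shape, exp(-(v/vm)^2), into a “super-Gaussian”, exp(-(v/vm)^m) with m as large as about 5. The following is an illustrative numerical comparison only (normalized shapes at equal mean kinetic energy, not the paper’s data), showing the depletion of slow electrons:

```python
import math

def f_iso(v: float, m: float, vm: float) -> float:
    """Unnormalized isotropic speed distribution exp(-(v/vm)^m)."""
    return math.exp(-((v / vm) ** m))

def norm_3d(m: float, vm: float) -> float:
    """3D normalization: integral of 4*pi*v^2 * exp(-(v/vm)^m) dv
    over v in [0, inf), which equals 4*pi*vm^3 * Gamma(3/m) / m."""
    return 4 * math.pi * vm ** 3 * math.gamma(3 / m) / m

# Maxwellian shape: m = 2, vm = 1, giving mean square speed <v^2> = 3/2.
# Super-Gaussian with m = 5, rescaling vm so <v^2> matches the Maxwellian
# (using <v^2> = vm^2 * Gamma(5/m) / Gamma(3/m)):
m = 5
vm_sg = math.sqrt(1.5 * math.gamma(3 / m) / math.gamma(5 / m))

v_slow = 0.25  # a "slow" electron on this normalized speed scale
f_maxwellian = f_iso(v_slow, 2, 1.0) / norm_3d(2, 1.0)
f_super_gauss = f_iso(v_slow, m, vm_sg) / norm_3d(m, vm_sg)
print(f_super_gauss < f_maxwellian)  # True: slow electrons are depleted
```

At equal temperature (mean kinetic energy), the super-Gaussian plasma holds measurably fewer slow electrons, which is exactly the population that dominates inverse-bremsstrahlung absorption and cross-beam energy transfer.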

But lacking experimental evidence to verify that prediction, researchers did not account for it in their simulations.

Turnbull, Froula, and physics and astronomy graduate student Avram Milder conducted experiments at the Omega Laser Facility at the LLE to make highly detailed measurements of the laser-heated plasmas. The results of these experiments show for the first time that the distribution of electron energies in a plasma is affected by their interaction with the laser radiation and can no longer be accurately described by prevailing models.

The new research not only validates a longstanding theory, but it also shows that laser-plasma interaction strongly modifies the transfer of energy.

“New inline models that better account for the underlying plasma conditions are currently under development, which should improve the predictive capability of integrated implosion simulations,” Turnbull says.

Helping machines perceive some laws of physics

Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI.

An MIT-invented model demonstrates an understanding of some basic “intuitive physics” by registering “surprise” when objects in simulations move in unexpected ways, such as rolling behind a wall and not reappearing on the other side.
Image: Christine Daniloff, MIT

Rob Matheson | MIT News Office
December 2, 2019

Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick.

Now MIT researchers have designed a model that demonstrates an understanding of some basic “intuitive physics” about how objects should behave. The model could be used to help build smarter artificial intelligence and, in turn, provide information to help scientists understand infant cognition.

The model, called ADEPT, observes objects moving around a scene and makes predictions about how the objects should behave, based on their underlying physics. While tracking the objects, the model outputs a signal at each video frame that correlates to a level of “surprise” — the bigger the signal, the greater the surprise. If an object ever dramatically mismatches the model’s predictions — by, say, vanishing or teleporting across a scene — its surprise levels will spike.

In response to videos showing objects moving in physically plausible and implausible ways, the model registered levels of surprise that matched levels reported by humans who had watched the same videos.  

“By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport,” says first author Kevin A. Smith, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). “We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.”

Joining Smith on the paper are co-first authors Lingjie Mei, an undergraduate in the Department of Electrical Engineering and Computer Science, and BCS research scientist Shunyu Yao; Jiajun Wu PhD ’19; CBMM investigator Elizabeth Spelke; Joshua B. Tenenbaum, a professor of computational cognitive science, and researcher in CBMM, BCS, and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and CBMM investigator Tomer D. Ullman PhD ’15.

Mismatched realities

ADEPT relies on two modules: an “inverse graphics” module that captures object representations from raw images, and a “physics engine” that predicts the objects’ future representations from a distribution of possibilities.

Inverse graphics basically extracts information about objects — such as shape, pose, and velocity — from pixel inputs. This module captures frames of video as images and uses inverse graphics to extract this information from objects in the scene. But it doesn’t get bogged down in the details: ADEPT requires only some approximate geometry of each shape to function. In part, this helps the model generalize its predictions to new objects, not just those it’s trained on.

“It doesn’t matter if an object is rectangle or circle, or if it’s a truck or a duck. ADEPT just sees there’s an object with some position, moving in a certain way, to make predictions,” Smith says. “Similarly, young infants also don’t seem to care much about some properties like shape when making physical predictions.”

These coarse object descriptions are fed into a physics engine — software that simulates behavior of physical systems, such as rigid or fluidic bodies, and is commonly used for films, video games, and computer graphics. The researchers’ physics engine “pushes the objects forward in time,” Ullman says. This creates a range of predictions, or a “belief distribution,” for what will happen to those objects in the next frame.
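
To make the “belief distribution” idea concrete, here is a minimal hypothetical sketch (not ADEPT’s actual engine): push one tracked object a frame forward under constant velocity plus a little dynamics noise, and treat the resulting samples as the belief over its next position.

```python
import random

def predict_next(pos: float, vel: float, dt: float = 1 / 30,
                 n_samples: int = 200, noise: float = 0.01) -> list:
    """Sample next-frame positions: a toy belief distribution built by
    propagating one object forward in time with Gaussian dynamics noise."""
    return [pos + vel * dt + random.gauss(0.0, noise) for _ in range(n_samples)]

# An object at position 1.0 moving at 0.5 units/sec, one video frame ahead:
belief = predict_next(pos=1.0, vel=0.5)
print(len(belief))  # 200 sampled predictions centered near 1.0 + 0.5/30
```

The observed next frame is then aligned against this set of predictions; a large mismatch — for instance, no object anywhere near the predicted positions — is what feeds the surprise signal.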

Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.

ADEPT then resamples from its belief distribution and notes a very low probability that the object had simply vanished. If there’s a low enough probability, the model registers great “surprise” as a signal spike. Basically, surprise is inversely proportional to the probability of an event occurring. If the probability is very low, the signal spike is very high.  
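
One standard way to formalize “surprise inversely proportional to probability” is Shannon surprisal, −log p. This is an illustrative choice; the article does not specify ADEPT’s exact surprise measure:

```python
import math

def surprise(p: float) -> float:
    """Shannon surprisal: the lower the probability assigned to what was
    actually observed, the larger the spike. Illustrative only — not
    necessarily ADEPT's exact formula."""
    return -math.log(p)

# A predicted reappearance vs. a near-impossible vanishing act:
print(round(surprise(0.9), 3))    # 0.105 — small signal
print(round(surprise(1e-6), 1))   # 13.8 — large spike
```

The logarithm also makes surprisal additive over independent observations, which is why it is the usual choice in models of this kind.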

“If an object goes behind a wall, your physics engine maintains a belief that the object is still behind the wall. If the wall goes down, and nothing is there, there’s a mismatch,” Ullman says. “Then, the model says, ‘There’s an object in my prediction, but I see nothing. The only explanation is that it disappeared, so that’s surprising.’”

Violation of expectations

In developmental psychology, researchers run “violation of expectations” tests in which infants are shown pairs of videos. One video shows a plausible event, with objects adhering to their expected notions of how the world works. The other video is the same in every way, except objects behave in a way that violates expectations. Researchers will often use these tests to measure how long the infant looks at a scene after an implausible action has occurred. The longer they stare, researchers hypothesize, the more they may be surprised or interested in what just happened.

For their experiments, the researchers created several scenarios based on classical developmental research to examine the model’s core object knowledge. They recruited 60 adults to watch 64 videos of known physically plausible and physically implausible scenarios. Objects, for instance, will move behind a wall and, when the wall drops, they’ll still be there or they’ll be gone. The participants rated their surprise at various moments on a scale of 0 to 100. Then, the researchers showed the same videos to the model. Specifically, the scenarios examined the model’s ability to capture notions of permanence (objects do not appear or disappear for no reason), continuity (objects move along connected trajectories), and solidity (objects cannot move through one another).

ADEPT matched humans particularly well on videos where objects moved behind walls and disappeared when the wall was removed. Interestingly, the model also matched surprise levels on videos that humans weren’t surprised by but maybe should have been. For example, in a video where an object moving at a certain speed disappears behind a wall and immediately comes out the other side, the object might have sped up dramatically when it went behind the wall or it might have teleported to the other side. In general, humans and ADEPT were both less certain about whether that event was or wasn’t surprising. The researchers also found traditional neural networks that learn physics from observations — but don’t explicitly represent objects — are far less accurate at differentiating surprising from unsurprising scenes, and their picks for surprising scenes don’t often align with humans.

Next, the researchers plan to delve further into how infants observe and learn about the world, with aims of incorporating any new findings into their model. Studies, for example, show that infants up until a certain age actually aren’t very surprised when objects completely change in some ways — such as if a truck disappears behind a wall, but reemerges as a duck.

“We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents,” Smith says.

Designing humanity’s future in space

The Space Exploration Initiative’s latest research flight explores work and play in microgravity.

Ariel Ekblaw, founder and lead of the Space Exploration Initiative, tests the latest iteration of her TESSERAE self-assembling architecture onboard a parabolic research flight.
Photo: Steve Boxall/ZERO-G

Janine Liberty | MIT Media Lab
November 26, 2019

How will dancers perform in space? How will scientists do lab experiments without work tables? How will artists pursue crafting in microgravity? How can exercise, gastronomy, research, and other uniquely human endeavors be reimagined for the unique environment of space? These are the questions that drove the 14 projects aboard the MIT Media Lab Space Exploration Initiative’s second parabolic research flight.

Just past the 50th anniversary of the Apollo moon landing, humanity’s life in space isn’t so very far away. Virgin Galactic just opened its spaceport with the goal of launching space tourists within months, not years; Blue Origin’s New Shepard rocket is gearing up to carry its first human passengers to the edge of space, with New Glenn and a moon mission not far behind. We are nearing a future where trained, professional astronauts aren’t the only people who will regularly leave Earth. The new Space Age will reach beyond the technical and scientific achievements of getting people into space and keeping them alive there; the next frontier is bringing our creativity, our values, our personal pursuits and hobbies with us, and letting them evolve into a new culture unique to off-planet life.

But unlike the world of Star Trek, there’s no artificial gravity capability in sight. Any time spent in space will, for the foreseeable future, mean life without weight, and without the rules of gravity that govern every aspect of life on the ground. Through its annual parabolic flight charter with the ZERO-G Research Program, the Space Exploration Initiative (SEI) is actively anticipating and solving for the challenges of microgravity.

Space for everyone

SEI’s first zero-gravity flight, in 2017, set a high bar for the caliber of the projects, but it was also a learning experience in doing research in 20-second bursts of microgravity. In preparation for an annual research flight, SEI founder and lead Ariel Ekblaw organized MIT’s first graduate course for parabolic flights (Prototyping Our Sci-Fi Space Future: Zero Gravity Flight Class) with the goal of preparing researchers for the realities of parabolic flights, from the rigors of the preflight test readiness review inspections to project hardware considerations and mid-flight adjustments.

The class also served to take some of the intimidation factor out of the prospect of space research and focused on democratizing access to microgravity testbed environments. 

“The addition of the course helped us build bridges across other departments at MIT and take the time to document and open-source our mentorship process for robust, creative, and rigorous experiments,” says Ekblaw.

SEI’s mission of democratizing access to space is broad: It extends to actively recruiting researchers, artists, and designers, whose work isn’t usually associated with space, as well as ensuring that the traditional engineering and hard sciences of space research are open to people of all genders, nationalities, and identities. This proactive openness was manifest in every aspect of this year’s microgravity flight. 

While incubated in the Media Lab, the Space Exploration Initiative now supports research across MIT. Paula do Vale Pereira, a grad student in MIT’s Department of Aeronautics and Astronautics (AeroAstro), was on board to test out automated actuators for CubeSats. Tim McGrath and Jeremy Strong, also from AeroAstro, built an erg machine specially designed for exercise in microgravity. Chris Carr and Maria Zuber, of the Department of Earth, Atmospheric and Planetary Sciences, flew to test out the latest iteration of their Electronic Life-detection Instrument (ELI) research.

Research specialist Maggie Coblentz is pursuing her fascination with food in space — including the world’s first molecular gastronomy experiment in microgravity. She also custom-made an astronaut’s helmet specially designed to accommodate a multi-course tasting menu, allowing her to experiment with different textures and techniques to make both food and eating more enjoyable on long space flights. 

“The function of food is not simply to provide nourishment — it’s a key creature comfort in spaceflight and will play an even more significant role on long-duration space travel and future life in space habitats. I hope to uncover new food cultures and food preparation techniques by evoking the imagination and sense of play in space, Willy Wonka style,” says Coblentz.

With Sensory Synchrony, a project supported by NASA’s Translational Research Institute for Space Health, Abhi Jain and fellow researchers in the Media Lab’s Fluid Interfaces group investigated vestibular neuromodulation techniques for mitigating the effects of motion sickness caused by the sensory mismatch in microgravity. The team will iterate on the data from this flight to consider possibilities for novel experiences using augmented and virtual reality in microgravity environments.

The Space Enabled research group is testing how paraffin wax behaves as a liquid in microgravity, exploring it as an affordable, accessible alternative satellite fuel. Their microgravity experiment, run by Juliet Wanyiri, aimed to determine the speed threshold, and corresponding voltage, needed for the wax to form into a shape called an annulus, which is one of the preferred geometric shapes to store satellite fuel. “This will help us understand what design might be appropriate to use wax as a satellite fuel for an on-orbit mission in the future,” explains Wanyiri.

Xin Liu flew for the second time this year, with a new project that continues her explorations into the relationship between couture, movement, and self-expression when an artist is released from the constraints of gravity. This year’s project, Mollastica, is a mollusk-inspired costume designed to swell and float in microgravity. Liu also motion-captured a body performance to be rendered later for a “deep-sea-to-deep-space” video work.

The human experience

The extraordinary range of fields, goals, projects, and people represented on this year’s microgravity flight speaks to the unique role the Space Exploration Initiative is already starting to play in the future of space. 

For designer and researcher Alexis Hope, the flight offered the opportunity to discover how weightlessness affects the creative process — how it changes not only the art, but also the artist. Her project, Space/Craft, was an experiment in zero-g sculpture: exploring the artistic processes and possibilities enabled by microgravity by using a hot glue gun to “draw in 3D.”

Like all of the researchers aboard the flight, Hope found the experience both challenging and inspiring. Her key takeaway, she says, is excitement for all the unexplored possibilities of art, crafting, and creativity in space.

“Humans always find a way to express themselves creatively, and I expect no different in a zero-gravity environment,” she says. “I’m excited for new materials that will behave in interesting ways in a zero-gravity environment, and curious about how those new materials might inspire future artists to create novel structures, forms, and physical expressions.”

Ekblaw herself spent the flight testing out the latest iteration of TESSERAE, her self-assembling space architecture prototype. The research has matured extensively over the last year and a half, including a recent suborbital test flight with Blue Origin and an upcoming International Space Station mission to take place in early 2020. 

All of the research projects from this year’s flight — as well as some early results, the projects from the Blue Origin flight, and the early prototypes for the ISS mission — were on display at a recent SEI open house at the Media Lab. 

For Ekblaw, the great challenge and the great opportunity in these recurring research flights is helping researchers to keep their projects and goals realistic in the moment, while keeping SEI’s gaze firmly fixed on the future. 

“While parabolic flights are already a remarkable experience, this year was particularly meaningful for us. We had the immense privilege of finalizing our pre-flight testing over the exact days when Neil Armstrong, Buzz Aldrin, and Mike Collins were in microgravity on their way to the moon,” she says. “This 50th anniversary of Apollo 11 reminds us that the next 50 years of interplanetary civilization beckons. We are all now part of this — designing, building, and testing artifacts for our human, lived experience of space.”

The plot thickens for a hypothetical X17 particle

The NA64 experiment at CERN (Image: CERN)

NOVEMBER 29, 2019

by Ana Lopes, CERN

Fresh evidence of an unknown particle that could carry a fifth force of nature gives the NA64 collaboration at CERN a new incentive to continue searches.

In 2015, a team of scientists spotted an unexpected glitch, or “anomaly”, in a nuclear transition that could be explained by the production of an unknown particle. About a year later, theorists suggested that the new particle could be evidence of a new fundamental force of nature, in addition to electromagnetism, gravity and the strong and weak forces. The findings caught worldwide attention and prompted, among other studies, a direct search for the particle by the NA64 collaboration at CERN.

A new paper from the same team, led by Attila Krasznahorkay at the Atomki institute in Hungary, now reports another anomaly, in a similar nuclear transition, that could also be explained by the same hypothetical particle.

The first anomaly spotted by Krasznahorkay’s team was seen in a transition of beryllium-8 nuclei. This transition emits a high-energy virtual photon that transforms into an electron and its antimatter counterpart, a positron. Examining the number of electron–positron pairs at different angles of separation, the researchers found an unexpected surplus of pairs at a separation angle of about 140º. In contrast, theory predicts that the number of pairs decreases with increasing separation angle, with no excess at a particular angle. Krasznahorkay and colleagues reasoned that the excess could be explained by the production of a new particle with a mass of about 17 million electronvolts (MeV), the “X17” particle, which would then transform into an electron–positron pair.
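
The link between the excess angle and the 17 MeV mass follows from two-body kinematics: for a relativistic e+/e− pair, the invariant mass satisfies m²c⁴ ≈ 2·E₊·E₋·(1 − cos θ). A rough check, assuming (illustratively) a symmetric split of the roughly 18 MeV beryllium-8 transition energy and neglecting the electron rest mass:

```python
import math

def pair_invariant_mass(e_plus_mev: float, e_minus_mev: float,
                        opening_angle_deg: float) -> float:
    """Invariant mass (MeV) of an e+/e- pair with the electron rest mass
    neglected: m^2 = 2 * E+ * E- * (1 - cos(theta))."""
    theta = math.radians(opening_angle_deg)
    return math.sqrt(2 * e_plus_mev * e_minus_mev * (1 - math.cos(theta)))

# ~18 MeV transition split as 9 MeV per lepton, pairs opening at ~140 degrees:
print(round(pair_invariant_mass(9.0, 9.0, 140.0), 1))  # ≈ 16.9 MeV
```

So a bump near 140º in the angular distribution is just what a ~17 MeV intermediate particle decaying to an electron–positron pair would produce.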

The latest anomaly reported by Krasznahorkay’s team, in a paper that has yet to be peer-reviewed, is also in the form of an excess of electron–positron pairs, but this time the excess is from a transition of helium-4 nuclei. “In this case, the excess occurs at an angle of 115º but it can also be interpreted by the production of a particle with a mass of about 17 MeV,” says Krasznahorkay. “The result lends support to our previous result and the possible existence of a new elementary particle,” he adds.

Sergei Gninenko, spokesperson for the NA64 collaboration at CERN, which has not found signs of X17 in its direct search, says: “The Atomki anomalies could be due to an experimental effect, a nuclear physics effect or something completely new such as a new particle. To test the hypothesis that they are caused by a new particle, both a detailed theoretical analysis of the compatibility between the beryllium-8 and the helium-4 results as well as independent experimental confirmation is crucial.”

The NA64 collaboration searches for X17 by firing a beam of tens of billions of electrons from the Super Proton Synchrotron accelerator onto a fixed target. If X17 did exist, the interactions between the electrons and nuclei in the target would sometimes produce this particle, which would then transform into an electron–positron pair. The collaboration has so far found no indication that such events took place, but its datasets allowed them to exclude part of the possible values for the strength of the interaction between X17 and an electron. The team is now upgrading their detector for the next round of searches, which are expected to be more challenging but at the same time more exciting, says Gninenko.

Among other experiments that could also hunt for X17 in direct searches is the LHCb experiment. Jesse Thaler, a theoretical physicist from the Massachusetts Institute of Technology, says: “By 2023, the LHCb experiment should be able to make a definitive measurement to confirm or refute the interpretation of the Atomki anomalies as arising from a new fundamental force. In the meantime, experiments such as NA64 can continue to chip away at the possible values for the hypothetical particle’s properties, and every new analysis brings with it the possibility (however remote) of discovery.”
