Machine Learning Improves Fusion Experiment Predictions at NIF

Lawrence Livermore researchers combine HPC resources such as the Sierra supercomputer (left) and the National Ignition Facility (right) to understand complex problems in fusion.

Physicists at Lawrence Livermore National Laboratory (LLNL) have combined a new machine learning strategy with their computer models to drastically improve the accuracy of simulations of Inertial Confinement Fusion (ICF) experiments at the National Ignition Facility (NIF). The technique leverages past experiments to calibrate simulations, enabling researchers to predict the outcome of new experiments with higher accuracy than simulations alone.

The goal of ICF is to produce neutrons by compressing a small capsule filled with deuterium and tritium, creating conditions suitable for fusion reactions. ICF experiments are complex and expensive, so researchers rely on computer models to establish an appropriate setup before testing their hypotheses. Unfortunately, because of the many approximations needed to reduce computational cost, simulations are not fully predictive across the whole design space and deviate from measurements for the most demanding shots. Now, thanks to this new technique, feeding past experimental data into the models will improve predictions of future performance.

The method consists of a deep neural network trained on simulation results and then transfer-learned with experimental data. Neural networks are popular machine learning models consisting of a series of nonlinear functions that are fitted to data, and they perform well at many tasks. The network's parameters, its weights and biases, are adjusted to minimize the error between the predicted output and the experimental results; training proceeds by showing the model a large set of inputs together with their corresponding outputs. The work at NIF is based on a type of neural network known as an autoencoder, which is often used for data compression. Transfer learning, in turn, is a machine learning technique that takes a model trained to solve one task and partially re-trains it to tackle a different but related problem for which there is not enough training data.
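The idea of pretraining on plentiful simulation data and then recalibrating a small part of the model on sparse experimental data can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual model: it uses a tiny random-feature network (a stand-in for a pretrained deep network), fits its output layer to abundant "simulation" data, then freezes the hidden layer and refits only the output layer on a handful of "experimental" points that deviate systematically from the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Simulation" data: plentiful, but a biased model of the true response.
x_sim = np.linspace(-1, 1, 200)[:, None]
y_sim = np.sin(3 * x_sim)                     # simulated response surface

# "Experiment" data: sparse, and offset from the simulation (model error).
x_exp = rng.uniform(-1, 1, (15, 1))
y_exp = np.sin(3 * x_exp) + 0.3               # reality deviates from simulation

# Step 1: "pretrain" on simulation data. The hidden layer uses fixed random
# weights (a stand-in for learned features); only the output layer is fitted.
H = 10
W1 = rng.normal(size=(1, H))
b1 = rng.normal(size=H)
phi = lambda x: np.tanh(x @ W1 + b1)          # frozen hidden-layer features
w2_sim, *_ = np.linalg.lstsq(phi(x_sim), y_sim, rcond=None)

# Step 2: transfer learning. Keep the hidden layer frozen and refit only the
# output layer on the few experimental points.
w2_exp, *_ = np.linalg.lstsq(phi(x_exp), y_exp, rcond=None)

# Compare the two models at a new design point.
x_new = np.array([[0.5]])
print("simulation-only prediction:", (phi(x_new) @ w2_sim).item())
print("transfer-learned prediction:", (phi(x_new) @ w2_exp).item())
print("'experimental' truth:", (np.sin(3 * x_new) + 0.3).item())
```

Because the transfer step refits the output layer directly against the experimental measurements, the calibrated model tracks the experiments more closely than the simulation-only fit, which is the essence of the approach described in the article.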

A remarkable advantage of the approach, as stated by lead author Kelly Humbird, is that the model gets more accurate every time an experiment is carried out and new data is acquired. Design physicist Luc Peterson added: "In this sense, even experiments that don't perform as expected are good experiments, since we can quantitatively learn from each new experience." Humbird confirms that "we've observed the prediction error generally decreasing as we continue to acquire more data."

These results were presented at the 62nd APS Division of Plasma Physics 2020 meeting held in November 2020 and reported in the paper “Cognitive simulation models for inertial confinement fusion: Combining simulation and experimental data” that can be accessed here.
