The best way to understand how a material behaves under certain conditions is to test it. For instance, an engineering project may rely on understanding a material’s strength—its resistance to permanent deformation—across a range of temperatures and strain rates. Gathering all relevant experimental data could be challenging or even impossible due to the time and cost required, material availability, or the difficulty of recreating certain conditions in a laboratory setting. Researchers therefore rely on models informed by available data to predict how materials will perform under untested conditions.
Lawrence Livermore National Laboratory has a vested interest in understanding the accuracy of these models and the data that feed them. Material property models play a foundational role in a range of Livermore’s science and engineering research endeavors, including stockpile stewardship, the National Nuclear Security Administration’s program to ensure the safety and reliability of the nation’s nuclear stockpile. Materials modeler Nathan Barton explains, “As we shift manufacturing and design approaches to more modern methods, we need to quantify uncertainty to maintain confidence in our nuclear stockpile and our stockpile modernization activities. Understanding the uncertainties gives us increased confidence in the experimental results and the models informed by the experimental data.”
Led by Livermore materials scientist Jeff Florando and supported by the Laboratory Directed Research and Development (LDRD) program, a team of Laboratory statisticians, computational modelers, and materials scientists, including Barton, has been developing a statistical framework to help researchers better assess the relationship between model uncertainties and experimental data. In an earlier effort, Florando helped build the Material Implementation, Database, and Analysis Source (MIDAS), a central repository for material strength-related data and models. (S&TR January/February 2012, A Comprehensive Resource for Modeling, Simulation, and Experiments.) “My role in developing MIDAS helped me realize we needed to do a better job understanding uncertainties in material strength research,” says Florando. “MIDAS helps us create material strength model parameterizations, but the simulations are deterministic—they give us an answer that is based on the parameters we put in them.”
The latest framework, based on Bayesian methodology, allows uncertainties to be updated as new and different types of strength data become available and can be used to determine which future experiment has the greatest potential to reduce uncertainty. Methods developed by the team have informed experimental planning efforts within the Laboratory’s Weapons and Complex Integration (WCI) organization as well as research ventures exploring how materials evolve and degrade.
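The core Bayesian idea can be illustrated with a toy calculation. In the minimal Python sketch below, a Gaussian belief about a single strength parameter is revised as each new measurement arrives; the numbers are invented, and the simple conjugate update stands in for the team's far richer calibration machinery.

```python
# A minimal sketch of Bayesian updating: a Gaussian belief about a
# hypothetical yield stress narrows as measurements arrive. Values are
# invented; this is not the Laboratory's actual calibration method.

mean, var = 320.0, 40.0**2        # assumed prior belief (MPa, MPa^2)
noise_var = 10.0**2               # assumed measurement noise variance

for y in [301.5, 307.2, 304.8]:   # hypothetical new measurements
    # Standard conjugate normal-normal update: the posterior mean is a
    # precision-weighted average of the prior belief and the new datum.
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    mean = post_var * (mean / var + y / noise_var)
    var = post_var
    print(f"after y={y}: mean={mean:.1f} MPa, std={var**0.5:.1f} MPa")
```

With each measurement, the posterior standard deviation shrinks, which is the sense in which new data "update" the uncertainty.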
Rigorous Model Comparison
The first step in the statistical framework—sensitivity analysis—determines which inputs to a given strength model most strongly influence the output of the model under a specific set of conditions, highlighting the most significant variables and identifying those with little effect, which can remain fixed. Shrinking the number of variables from the dozens or even hundreds involved in a strength model significantly reduces the computational cost, paving the way for the next step: calibration.
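As an illustration of variance-based sensitivity analysis, the sketch below estimates first-order Sobol indices for a toy flow-stress function loosely patterned on a Johnson-Cook form. The function, parameter ranges, and test conditions are all hypothetical and are not drawn from the Laboratory's models.

```python
# A minimal sketch of variance-based (Sobol) sensitivity analysis using
# only NumPy. The flow-stress function is a toy stand-in; parameters and
# ranges are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def flow_stress(p, strain=0.1, rate=1e3, T_star=0.3):
    """Toy flow stress; p columns are [A, B, n, C, m] (hypothetical)."""
    A, B, n, C, m = p.T
    return (A + B * strain**n) * (1.0 + C * np.log(rate)) * (1.0 - T_star**m)

names = ["A", "B", "n", "C", "m"]
lo = np.array([200.0, 100.0, 0.1, 0.005, 0.8])   # assumed lower bounds
hi = np.array([400.0, 500.0, 0.5, 0.05,  1.2])   # assumed upper bounds

N, d = 20_000, 5
A_mat = lo + (hi - lo) * rng.random((N, d))      # two independent samples
B_mat = lo + (hi - lo) * rng.random((N, d))
fA, fB = flow_stress(A_mat), flow_stress(B_mat)
var_f = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    AB = A_mat.copy()
    AB[:, i] = B_mat[:, i]                       # swap in column i from B
    S1 = np.mean(fB * (flow_stress(AB) - fA)) / var_f  # first-order index
    print(f"S1[{name}] = {S1:.3f}")
```

Parameters whose first-order index is near zero contribute little output variance under these conditions and can be held fixed in the calibration step.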
After using sensitivity analysis to determine which parameters to vary, researchers apply Bayesian calibration, training several models on a subset of experimental data and quantifying how well the trained models predict data from elsewhere in the data set. Each model is tested hundreds or thousands of times, predicting results across a range of conditions, and the models are ranked by their overall prediction error. The framework is flexible about what constitutes a data set: in an initial demonstration, Florando and colleagues combined data from two experiments on the deformation of tantalum, one at low strain rates and the other at high, to ensure sufficient coverage for model cross-validation. Tantalum is a material of interest because its crystal structure remains stable over a wide range of pressure and temperature conditions.
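A stripped-down example of the calibrate-then-cross-validate loop follows: it fits a one-parameter rate-sensitivity model to synthetic stress data using a grid posterior, holds out one strain-rate condition, and scores the posterior-predictive error there. The model form, noise level, and data are all invented for illustration.

```python
# A minimal sketch of Bayesian calibration with hold-out cross-validation,
# assuming Gaussian measurement noise and a toy one-parameter model
# sigma(rate) = 300 * (1 + c*log(rate)). Nothing here mirrors the
# Laboratory's actual models or data.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "experimental" stresses at several strain rates (hypothetical).
rates = np.array([1e-3, 1e-1, 1e1, 1e3, 1e5])
truth = 300.0 * (1.0 + 0.02 * np.log(rates))
data = truth + rng.normal(0.0, 5.0, size=rates.size)

train, test = [0, 1, 2, 4], 3                 # hold out one condition

def model(c, r):
    return 300.0 * (1.0 + c * np.log(r))

# Grid posterior over the rate-sensitivity parameter c (flat prior).
c_grid = np.linspace(0.0, 0.05, 501)
resid = data[train] - model(c_grid[:, None], rates[train])
log_like = -0.5 * np.sum((resid / 5.0) ** 2, axis=1)
post = np.exp(log_like - log_like.max())
post /= post.sum()

# Posterior-predictive mean at the held-out condition -> prediction error.
pred = np.sum(post * model(c_grid, rates[test]))
print(f"held-out prediction error: {abs(pred - data[test]):.2f}")
```

Repeating this hold-out scoring across many conditions, and across several competing model forms, yields the ranking by overall prediction error described above.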
To reduce computational demand, Livermore researchers created surrogate versions of the finite element models that emulate behavior only under a subset of conditions. “Often we are trying to replicate complex physics, which requires computationally expensive 3D simulations,” explains Florando. “So, we create a surrogate model that matches the physics data of the full model in a very narrow regime but runs much more cheaply. We can use statistical tools to do tens of thousands of runs with the surrogate model to explore the parameter space in this regime.”
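A Gaussian-process emulator is one common way to build such a surrogate. In the hypothetical sketch below, a handful of "expensive" model runs train an emulator that can then be evaluated thousands of times essentially for free; the smooth test function stands in for a costly 3D simulation.

```python
# A minimal sketch of a Gaussian-process surrogate trained on a few
# "expensive" model evaluations. The test function and kernel length
# scale are assumptions for illustration.
import numpy as np

def expensive_model(x):
    """Stand-in for a costly simulation over a narrow regime."""
    return np.sin(3.0 * x) + 0.5 * x**2

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between 1D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# A few training runs of the full model...
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_model(x_train)

# ...then thousands of cheap surrogate evaluations across the regime.
x_query = np.linspace(0.0, 2.0, 10_000)
K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter for stability
weights = np.linalg.solve(K, y_train)
y_surrogate = rbf(x_query, x_train) @ weights             # GP posterior mean

err = np.max(np.abs(y_surrogate - expensive_model(x_query)))
print(f"max surrogate error over the regime: {err:.4f}")
```

The fidelity of the surrogate holds only within the narrow regime spanned by the training runs, which is exactly the trade Florando describes: accuracy in a limited regime in exchange for speed.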
The calibration and cross-validation element of the framework enables researchers to identify which of the tested strength models provides the most accurate predictions overall, as well as under specific experimental conditions. For instance, while all strength models in one study appeared to provide similarly accurate strength predictions for very high temperature conditions, one of the models was significantly less accurate than the others for room-temperature conditions. Such insights into the relative strengths and weaknesses of a model help researchers assess how confident they can be in the accuracy of that model’s output. Most importantly, the comparison helps engineers and scientists anticipate the accuracy of future predictions, that is, how well a given model will generate new strength projections in a certain pressure, temperature, and strain rate regime. This insight aids researchers in selecting the optimal model for a given project.
Experimental Design Enhancement
The sensitivity analysis and calibration elements of the statistical framework not only uncover sources of uncertainty in the strength models but also help guide and optimize experimental design decisions. Having determined which input parameters have the greatest influence on model results, researchers can shape future experiments to decrease uncertainty in a given parameter. Further, if model cross-validation reveals that a model predicts poorly for a certain range of conditions, experiments could focus on collecting more data at those conditions to improve model performance. Results of the analysis might also suggest that the model form needs to be refined to better capture experimental observations. Another round of Bayesian calibration and cross-validation incorporating the additional data (or an update of the model) could help determine which models provide the best predictions and under which conditions, given the new information.
The team has incorporated statistical methods into the framework for evaluating, based on previous experimental data and model performance, which experiment will give the greatest reduction in parameter uncertainty. Florando says, “We can use the framework to help inform two important questions: If I had data in a different phase space, how would that change the answer? And given these choices of experiments, which one best reduces the overall uncertainty? Working with stress–strain curves in which we had confidence, we used this approach to pick the strain rate experiment that would best help lower the uncertainty.” In the future, Florando would like to see this approach used to discriminate among a more heterogeneous set of experiments. For instance, would it be better to run a higher strain rate test or a higher temperature test in a given context?
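One simple way to score candidate experiments is a preposterior calculation: simulate plausible outcomes of each candidate measurement, compute the posterior each outcome would produce, and average the resulting parameter variance. The sketch below applies this idea to the toy one-parameter model used earlier; the candidate strain rates, noise level, and prior are all assumptions, not the team's actual method.

```python
# A minimal sketch of choosing the next experiment by expected reduction
# in posterior variance (preposterior Monte Carlo). Model, prior, noise,
# and candidate strain rates are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
noise = 5.0
c_grid = np.linspace(0.0, 0.05, 301)
prior = np.full(c_grid.size, 1.0 / c_grid.size)   # flat prior over c

def model(c, rate):
    return 300.0 * (1.0 + c * np.log(rate))

def expected_post_var(rate, n_sim=2000):
    """Average posterior variance of c over simulated outcomes at `rate`."""
    c_draws = rng.choice(c_grid, size=n_sim, p=prior)
    y_sim = model(c_draws, rate) + rng.normal(0.0, noise, n_sim)
    resid = y_sim[:, None] - model(c_grid[None, :], rate)
    post = np.exp(-0.5 * (resid / noise) ** 2)
    post /= post.sum(axis=1, keepdims=True)
    means = post @ c_grid
    variances = post @ c_grid**2 - means**2
    return variances.mean()

candidates = [1e-2, 1e0, 1e2, 1e4, 1e6]           # candidate strain rates
scores = {r: expected_post_var(r) for r in candidates}
best = min(scores, key=scores.get)
print(f"candidate with greatest expected uncertainty reduction: {best:g}/s")
```

In this toy problem, experiments at strain rates far from 1/s are most informative because the rate-sensitivity parameter only enters through the logarithm of the rate; the same machinery, applied to richer models, ranks real candidate experiments.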
The team notes that while these methods will likely help researchers gather more useful data more efficiently, they will not necessarily reduce the number of experiments required: minor differences in experimental setup and some random variation in the results always exist. The findings do, however, foster a fresh outlook on experimental design. Statistician Ana Kupresanin explains, “Incorporating statistical rigor into your approaches requires changing methodologies and ways of thinking and questioning every data set you work with, even if the data comes from another scientist. Different parameters and materials with slightly different properties are involved in each experiment. If the goal in the experiment is to characterize variability, you need to collect enough, and representative enough, samples to make the method work. The experimental setup must be in line with the methodology.”
Building Block for Big Codes
Material strength models are typically incorporated into larger, more complex, and more computationally intensive physics codes. Understanding how uncertainty in the input parameters of the strength models affects the output of these larger codes is a difficult task, and researchers are still evaluating the best path forward. According to the team, the statistical framework provides a foundation for that effort. Says statistician Katie Schmidt, “By improving the accuracy of our models of these different materials, we are creating the building blocks for larger and more complex models.”
Uncertainty quantification is a growing field within the stockpile stewardship program, and efforts are already underway to apply the tools and methods developed in this LDRD project to specific problems within the program as well as to other national security-relevant materials science projects. “We have seen a real and healthy continuation of this work in the program space, which is gratifying,” observes Barton. “Ideas from our project have helped inform WCI strategic planning, including experimental choices.” The team is also looking to incorporate some of their statistics tools into MIDAS.
A rewarding part of the project for the team was bringing together researchers from diverse disciplines—computational modeling, statistics, and materials science—and all career stages—from postdoctoral scholar to senior scientist. Says Schmidt, “This was my first project as a postdoctoral scholar at Lawrence Livermore. I knew nothing about materials science coming in, but I was able to absorb a lot. Now I’m working on other materials science projects.” In addition to Schmidt, postdocs engaged in the project include Jason Bernstein, David Rivera, Amanda Muyskens, Matthew Nelms, and William Schill. Florando adds, “This project gave me a deeper appreciation for statisticians and statistics. I’m thinking of ways to incorporate statistics into my other projects.”
—Rose Hansen
Key Words: Bayesian statistics, Laboratory Directed Research and Development (LDRD) program, Material Implementation, Database, and Analysis Source (MIDAS), material strength, stockpile stewardship, uncertainty quantification.
For further information contact Jeff Florando at (925) 422-0698 (florando1 [at] llnl.gov).