COMPUTER simulation has become an important tool for finding solutions in almost every field, from physics, astrophysics, meteorology, chemistry, and biology to economics, psychology, and social science. Running simulations allows researchers to explore ideas, gain insights into new technology, and estimate the performance of systems too complex for conventional experimental analysis. For example, automotive engineers perform complex calculations to consider design adjustments before they produce the first physical model. Aerospace engineers use simulations to evaluate proposed combinations of aircraft features instead of building and testing prototype models for each possibility.
Research teams at Lawrence Livermore are now applying the power of high-performance computing (HPC) to improve energy systems throughout the country. Laboratory scientist Julio Friedmann, who leads the carbon-management program for the Global Security Principal Directorate, says, “The energy and environmental challenges facing the nation are so immense, urgent, and complex that high-performance computing is one of the most important tools we have to accelerate the development and deployment of solutions. Simulations will give us the confidence to move ahead more rapidly, and we don’t have the luxury of learning the slow way.”
HPC will provide U.S. industry with a competitive advantage in solving environmental challenges, achieving energy independence, and reducing the nation’s reliance on imported fossil fuels. In addition, using these computational tools to explore technology solutions will save time and money by helping utility companies reduce capital expenditures, avoid industrial failures, and prevent damage to power-generation equipment.
A Foundry for Solutions
With HPC, design engineers can scale up designs through simulated prototypes much more quickly. “HPC allows us to skip steps in the scaling process,” Friedmann says. “Without these simulations, we’d have to keep building larger prototypes from a benchtop to a 10-kilowatt model on up to 100 kilowatts, 1 megawatt, and so forth.”
In support of stockpile stewardship, the Laboratory has already created many complex simulation tools and developed the expertise to run them effectively on massively parallel computer systems. The successful application of HPC to help maintain a reliable nuclear weapons stockpile has increased confidence in the power and effectiveness of these tools. As a result, large and small firms throughout the energy industry are interested in tapping into the Laboratory’s HPC resources.
“Utilities and those involved with improving energy efficiency work with computational tools every day,” says John Grosh, deputy associate director for Computation’s programs. “They are frequently hampered, though, because they are running applications on desktop computers or small server systems. The computational horsepower offered by our machines is 1,000 to 100,000 times greater than what they have available.” HPC simulations can examine complex scenarios with fine resolution and high fidelity—that is, with the level of detail and accuracy required to ensure that simulated results emulate reality.
In looking for partnership opportunities that are most suitable for addressing national problems, Friedmann has found many energy projects in which HPC simulations could play an important primary or supporting role, improving the quality of solutions and the rate of deployment. He notes, however, that simulation and modeling are not the goal. “They are the medium by which we deliver solutions to problems,” he says. “Like a foundry, we want to forge solutions to address threats to American competitiveness and energy security.”
These challenges are providing a wide range of opportunities where HPC simulations can make a difference. One Laboratory effort is focused on predicting how the intermittent nature of renewable energy sources such as wind and solar power will affect electricity generation. In another project, Livermore researchers are developing HPC simulations to evaluate the environmental implications of new technologies such as those for enhanced energy production and carbon capture and sequestration. Says Friedmann, “Delivering solutions to these problems is our measure of success.”
A New Look at Today’s Technology
“Before we induce hydraulic fracturing or stimulate gas flow in an underground network, we need to evaluate the effects of our proposed techniques,” says Ryerson. “Then we can refine the best methods to get improved energy extraction in a safe and environmentally responsible manner.”
Jeff Roberts, who leads Livermore’s Renewable Energy Program, notes that this multidisciplinary effort builds on the Laboratory’s expertise in seismology and rock mechanics as well as HPC. “Our existing codes were not designed to simulate fracture generation in tightly coupled geologic materials—for example, areas where underground water flows through different rock layers,” says Roberts. “A key challenge in resolving this issue has been to develop a simulation framework that allows us to explore the interactions between fluids and solids during the fracturing process.”
Livermore researchers are also part of the Greater Philadelphia Innovation Cluster (GPIC), a collaboration designed to help organizations build, retrofit, and operate facilities for greater energy efficiency. “We need better insight into how buildings consume energy and lose heat,” says Grosh. “Simulation tools can help us gain this understanding at higher fidelity.” With that information, engineers, architects, and operators can modify designs to improve a facility’s energy efficiency.
As part of this project, Laboratory researchers are developing algorithms and other computational tools to quantify the uncertainties in the energy simulations they are running. Uncertainty quantification is a growing field of science that focuses on quantifying the accuracy of simulated results, in particular, which predicted outcomes are most likely to occur. (See S&TR, July/August 2010, Narrowing Uncertainties.) Determining the quantitative level of model accuracy is especially difficult because calculations include approximations for some physical processes and not all features of a system can be exactly known. By quantifying the uncertainty and numerical errors in simulations of a facility’s energy consumption, Livermore researchers and their GPIC partners can develop more robust and effective building controls.
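The core idea of uncertainty quantification can be illustrated with a toy Monte Carlo calculation. The building-energy model and parameter ranges below are invented for illustration, not the GPIC codes; the point is that uncertain inputs propagated through a model yield a distribution of outcomes, not a single number:

```python
import random
import statistics

def daily_energy_kwh(insulation_r, hvac_cop, occupancy):
    """Toy building-energy model (illustrative only)."""
    heating_load = 500.0 / insulation_r      # kWh lost through the envelope
    hvac_energy = heating_load / hvac_cop    # electricity to replace that loss
    plug_loads = 2.0 * occupancy             # kWh from occupants' equipment
    return hvac_energy + plug_loads

def monte_carlo(n_samples=10_000, seed=42):
    """Propagate uncertain inputs through the model and summarize the spread."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        r = rng.uniform(10, 20)       # insulation R-value is not exactly known
        cop = rng.gauss(3.0, 0.3)     # HVAC efficiency varies with conditions
        occ = rng.randint(20, 60)     # occupancy changes day to day
        results.append(daily_energy_kwh(r, cop, occ))
    results.sort()
    return {
        "mean": statistics.mean(results),
        "p05": results[int(0.05 * n_samples)],   # 5th-percentile outcome
        "p95": results[int(0.95 * n_samples)],   # 95th-percentile outcome
    }

if __name__ == "__main__":
    s = monte_carlo()
    print(f"mean {s['mean']:.0f} kWh, 90% interval "
          f"[{s['p05']:.0f}, {s['p95']:.0f}] kWh")
```

The interval between the percentiles, rather than the mean alone, is what tells a building operator how much confidence to place in a predicted savings figure.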
Forecasts in the Wind
To improve the numerical resolution of the simulated results, the team applies finer mesh grids over the zone of interest, a process called nesting. By nesting the grid resolution, researchers can see in detail how changes in global circulation patterns and local terrain affect the thermal cycling that drives winds on a daily cycle. Postdoctoral scholar Katie Lundquist is working to incorporate the Immersed Boundary Method into the base WRF code. This method will more precisely represent complex terrain, such as mountains, foothills, and other topographic changes that WRF does not resolve, and thus improve the accuracy of the simulated results.
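Grid nesting can be sketched with a short calculation. WRF refines each nested domain by an integer parent-to-nest ratio (3 is typical); the domain sizes below are invented for illustration:

```python
def nest_resolutions(outer_dx_km, ratio=3, levels=3):
    """Horizontal grid spacing of each nested domain.

    Each nest refines its parent's spacing by `ratio`, concentrating
    resolution over the zone of interest instead of the whole domain.
    """
    return [outer_dx_km / ratio**i for i in range(levels)]

def points_per_dimension(domain_km, dx_km):
    """Grid points needed to span a domain at a given spacing."""
    return int(domain_km / dx_km) + 1

# Illustrative numbers: a 27-km outer grid nested down to 3 km over a wind site.
for dx in nest_resolutions(27.0):
    print(f"dx = {dx:g} km -> {points_per_dimension(270.0, dx)} points per side")
```

Because each refinement multiplies the point count in every horizontal dimension (and shrinks the allowable time step), fine resolution everywhere is prohibitively expensive; nesting spends it only where it matters.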
Miller’s team is also developing computational tools to model atmospheric turbulence. Gusts are a form of turbulence that can significantly alter the availability of wind energy at a site as well as the stability and uniformity of wind currents—characteristics that can affect a power plant’s production capabilities. In addition, says Miller, “A wind gust strong enough to heel a sailboat over can be trouble for a turbine,” causing component fatigue or even failure ahead of a turbine’s rated lifetime.
In-depth analysis of wind patterns provides valuable information for determining where to locate large wind-turbine farms. Building a wind farm requires considerable capital expenditures, and choosing a site can affect a developer’s return on investment. HPC simulations can incorporate field data as well as historical averages of wind patterns to characterize potential locations and predict the amount of power each one could produce.
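One way such a characterization is summarized is the capacity factor, the average output as a fraction of rated power. The simplified power curve below is illustrative, not a real turbine's, and the wind samples are hypothetical:

```python
def turbine_power_kw(wind_mps, rated_kw=2000.0,
                     cut_in=3.0, rated_speed=12.0, cut_out=25.0):
    """Simplified turbine power curve (illustrative, not a real turbine's).

    Below cut-in and above cut-out the turbine produces nothing; between
    cut-in and rated speed, output scales roughly with the cube of wind speed.
    """
    if wind_mps < cut_in or wind_mps > cut_out:
        return 0.0
    if wind_mps >= rated_speed:
        return rated_kw
    return rated_kw * (wind_mps**3 - cut_in**3) / (rated_speed**3 - cut_in**3)

def capacity_factor(wind_samples_mps, rated_kw=2000.0):
    """Average output as a fraction of rated power over measured wind samples."""
    avg = sum(turbine_power_kw(w, rated_kw)
              for w in wind_samples_mps) / len(wind_samples_mps)
    return avg / rated_kw

# Hypothetical hourly wind speeds (m/s) at a candidate site:
samples = [0.0, 4.5, 7.0, 9.5, 12.0, 14.0, 11.0, 6.0]
print(f"estimated capacity factor: {capacity_factor(samples):.2f}")
```

Fed with simulated or historical wind data for each candidate site, a calculation like this turns raw wind statistics into the energy-production estimates that drive siting decisions.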
Livermore simulations will also evaluate how wind forecasts for an area can predict energy production at a particular wind farm, information utility companies can use to fine-tune the balance between supply and demand. Many utilities supplement peak load requirements with gas turbines to ensure that the amount of power supplied to the grid remains steady even as wind patterns change. When demand for power peaks, as it would on a hot, still day when many people turn on their air conditioners, gas turbines generate the peak energy needed to help meet those demands. At other times, wind alone can generate the power required.
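The balancing act described above reduces, at its simplest, to covering whatever demand the wind cannot meet with dispatchable generation. A minimal sketch, with invented hourly numbers:

```python
def gas_backup_schedule(demand_mw, wind_mw):
    """For each hour, the gas generation needed to cover demand wind cannot meet."""
    return [max(0.0, d - w) for d, w in zip(demand_mw, wind_mw)]

# Hypothetical hourly demand and wind output (MW) during a hot, still afternoon:
demand = [900, 1100, 1400, 1600, 1500, 1200]
wind   = [800,  900,  600,  300,  400,  900]
print(gas_backup_schedule(demand, wind))  # -> [100, 200, 800, 1300, 1100, 300]
```

The better the wind forecast, the earlier each hour's gap is known, and the more economically the gas turbines can be scheduled to fill it.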
With timely, accurate predictions of these changing conditions, utilities could make adjustments more quickly and better control their operating costs. The simulations developed to date do not run in real time, but researchers at Livermore and elsewhere are refining the models to operate faster.
Before supercomputers, the energy industry relied on experimental data and observations, both of which are expensive to acquire. Researchers must gather enough samples to guarantee that results are statistically valid. As an example, Miller describes an effort to collect data on offshore wind power. The average cost for an offshore meteorological tower is $5 million, and surveying the entire length of the California coast would require 1,000 towers, a $5-billion proposition. “Computer simulations are wildly cheaper than that project would be,” says Miller. He notes that field samples are still necessary, providing data to validate model accuracy. “If a simulation starts to diverge from reality, we can use field data to tune the model into alignment, even as it’s running.”
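The in-flight tuning Miller describes is often done by nudging (Newtonian relaxation), a standard data-assimilation technique in which the running model state is relaxed toward each new observation; the numbers below are illustrative:

```python
def nudge(model_value, observed_value, gain=0.1):
    """Pull a running model state toward a field observation.

    `gain` controls how strongly observations correct the model:
    a gain of 0 ignores the data, a gain of 1 replaces the model outright.
    """
    return model_value + gain * (observed_value - model_value)

# A drifting modeled wind speed corrected by a tower measurement each step:
model, observed = 6.0, 8.0
for step in range(5):
    model = nudge(model, observed, gain=0.3)
    print(f"step {step}: model = {model:.2f} m/s")
```

Each correction closes a fraction of the gap between model and measurement, so the simulation converges toward reality without being restarted.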
Putting Carbon in Its Place
To design the catalyst, Lightstone’s team is borrowing methodologies from the pharmaceutical industry. In searching for an effective, broad-spectrum antibiotic, drug developers must identify key interactions between small molecules that bind to specific proteins. Designing the synthetic lung catalyst involves making and breaking chemical bonds as well. HPC tools allow the team to quickly analyze candidate compounds. “Our goal is to give the experimentalists a lot of suggestions for effective molecular combinations,” says Lightstone. “Then we provide a fast, iterative feedback loop to modify the options.”
Without HPC, the turnaround time would make this work impractical. “We’d have to do it the old-fashioned way—think of an idea and try it in the lab,” says Lightstone. If researchers relied only on trial and error, they would have to synthesize samples of each candidate molecule to be tested, a difficult and time-consuming process. Instead, using HPC simulations, they can design hundreds of possible combinations and synthesize only the most promising candidates. After creating the catalyst, the researchers will hand it off to Babcock and Wilcox, an international provider of energy products and services, for small-scale systems testing.
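The screen-then-synthesize workflow can be sketched in a few lines. The candidate names and binding scores below are entirely hypothetical, and a real screen would compute each score with physics-based simulation rather than look it up:

```python
def screen_candidates(candidates, score_fn, top_k=5):
    """Rank candidate compounds by a computed score and keep the best few.

    Only the top-ranked candidates go on to synthesis and lab testing;
    the rest are discarded without ever being made.
    """
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:top_k]

# Hypothetical candidates with precomputed binding scores (higher is better):
library = [("cand-%03d" % i, score) for i, score in
           enumerate([0.12, 0.87, 0.45, 0.91, 0.33, 0.78, 0.05, 0.66])]
best = screen_candidates(library, score_fn=lambda c: c[1], top_k=3)
print([name for name, _ in best])  # -> ['cand-003', 'cand-001', 'cand-005']
```

Lab results for the synthesized winners then feed back into the scoring model, which is the fast, iterative loop Lightstone describes.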
Roberts adds that HPC is also important for evaluating the effects of carbon sequestration technologies. “We need to improve our understanding of fluid flow in underground reservoirs,” he says. “For example, where does carbon dioxide go when it’s pumped into the subsurface? And how does that fluid movement affect the surrounding geologic layers?” Evaluating new technologies for carbon capture and sequestration is a long-term, complex process, but HPC simulations speed it up significantly. Says Friedmann, “With simulations, we expect to cut the deployment cycle in half, reducing a 10- or 15-year timeline to only 5.”
A Thousand Scenarios in a Day
To help utility companies determine what resources are needed for the electric grid of tomorrow, Livermore scientists are using HPC simulations to model the impacts when generation capacity is increased by adding a large number of intermittent wind and solar resources to the grid. Instead of building conventional generating capacity to back up these intermittent resources, grid operators could rely on techniques such as distributed energy storage or demand response, in which consumers shut off appliances on request to reduce the system’s load.
“The advent of distributed storage, generation, and demand response has increased the number of grid state and control variables by orders of magnitude,” says Livermore scientist Thomas Edmunds, who works in the Engineering Directorate. “We need larger-scale planning and operations models to optimize the performance of these systems.” A Laboratory Directed Research and Development project led by Edmunds is focused on developing optimization algorithms for this application.
He notes that HPC can also contribute to grid reliability. Grid managers must operate the system in a fault-tolerant mode with generating levels set such that no single failure will cause a widespread blackout. To ensure reliability, researchers must analyze many independent models of the grid with different failure modes. “This problem is ideal for high-performance computing,” says Edmunds.
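This kind of reliability analysis is commonly called an N-1 contingency check, and it maps naturally onto parallel hardware because each single-failure case is an independent calculation. A minimal serial sketch, with an invented generator fleet and a simple reserve rule standing in for a full grid model:

```python
def survives_failure(generators_mw, failed_unit, demand_mw, reserve_margin=0.1):
    """Check whether the remaining units can still cover demand plus a reserve."""
    remaining = sum(mw for name, mw in generators_mw if name != failed_unit)
    return remaining >= demand_mw * (1 + reserve_margin)

def n_minus_1_check(generators_mw, demand_mw):
    """Test every single-unit failure. Each case is independent, so on an
    HPC system the cases can be farmed out to separate processors."""
    return {name: survives_failure(generators_mw, name, demand_mw)
            for name, _ in generators_mw}

# Invented fleet: unit name and capacity in MW.
fleet = [("coal-1", 600), ("gas-1", 400), ("gas-2", 400), ("wind-1", 300)]
print(n_minus_1_check(fleet, demand_mw=1100))
```

In this toy fleet, losing the largest unit leaves too little capacity, flagging a configuration a grid manager would have to reinforce before relying on it.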
Legislation enacted in California calls for 33 percent of the state’s energy supply to come from renewable resources by 2020. Mathematician Carol Meyers of the Engineering Directorate is working on a study initiated by the California Public Utilities Commission and managed by the California Independent System Operator (CAISO) to help determine how the 2020 standard will affect operations of the state’s grid. “Potentially billions of dollars are at stake in terms of backup generation and transmission costs to incorporate renewable resources on a large scale,” says Meyers. “The power utilities need to better understand all of the issues involved so they can adapt to distributed generation.”
For the CAISO study, Meyers and software developers at Energy Exemplar adapted the company’s PLEXOS energy simulator to run on the Laboratory’s supercomputers. “PLEXOS is front-end software that generates the mathematical model for our simulations,” says Meyers. In demonstration runs on the Hyperion test bed, PLEXOS looked at the 2,100 generators across the entire western grid, plus a large number of load, storage, transmission, and reserve requirements. The resulting model included more than 225,000 variables and 400,000 constraints and took several days to compute a single yearlong scenario.
“We dug into the model to determine what slowed it down,” says Meyers. The bottleneck was in the mixed-integer programming solver, which takes a mathematical description of the variables, constraints, and objective function and solves the model. “IBM provided licenses for CPLEX, their state-of-the-art mixed-integer optimization software,” she says. Adding CPLEX allowed the researchers to run simulations in parallel. When combined with the Laboratory’s HPC processing power, the modified PLEXOS could simulate a thousand scenarios a day. The development team then modified the mathematics routines behind the model to improve variable interactions. The resulting calculations ran four times faster.
“HPC has the potential to be game-changing in the energy industry,” says Meyers. “It not only answers existing questions but also expands the very nature of questions to be asked.” The team’s future work involves streamlining the PLEXOS–HPC user interface, modifying the optimization routines, and collaborating with IBM to extend CPLEX to run on massively parallel systems. The work has already proven valuable, serving as the demonstration test case for a proposal to simulate the possible consequences of end-to-end changes to the energy system.
Reduced Barriers to Partnership
The new High-Performance Computing Innovation Center (HPCIC) is also helping to extend the Laboratory’s HPC capabilities to energy-related work. Part of the Livermore Valley Open Campus adjacent to Lawrence Livermore and Sandia national laboratories, HPCIC is a public–private partnership whose mission is to boost American industrial competitiveness, scientific research, education, and national security by broadening the adoption and application of supercomputing technology. (See S&TR, March 2011, New Campus Set to Transform Two National Laboratories.) The center provides partnering organizations with access to secure supercomputer resources and computational expertise that would otherwise be unavailable.
HPCIC projects will focus on big, complex challenges and opportunities in the energy sector as well as in climate science, health care, manufacturing, and bioscience. The center will allow industrial partners to access the full range of scientific, algorithmic, and application support available at the national laboratories. Grosh notes that although many companies develop codes that run on desktop computers, the ability to write for modest to large computing systems is much rarer outside the national laboratories, the national security community, and a few select industries. HPCIC will expand access to these computational resources so that industrial partners can perform virtual prototyping and testing, conduct multidisciplinary science research, optimize software applications, and develop system architecture for next-generation computers. “With this new capability, we foresee transforming the way U.S. industry uses HPC and providing an innovation advantage to the energy sector,” says Grosh.
Friedmann adds, “Supercomputing centers are popping up around the country, and they’re all looking for applications in manufacturing and energy and for software that is ready to run on their machines. Working with them to apply our expertise in HPC is a natural outgrowth of the Laboratory’s mission. We have an opportunity to merge diverse projects into a coherent effort and create a knowledge pipeline for tackling important national issues. The growth potential for the Laboratory is immense.”
Key Words: carbon capture and sequestration, clean energy, energy sector, high-performance computing (HPC) simulation, High-Performance Computing Innovation Center (HPCIC), smart electric grid, wind energy.
Lawrence Livermore National Laboratory
UCRL-TR-52000-11-12 | December 8, 2011