


Computational Innovation Boosts Manufacturing
As part of the High Performance Computing for Manufacturing (HPC4Mfg) Program, researchers have combined high-performance computing (HPC) and additive manufacturing to design and build new devices and materials with unique physical and microstructural properties. This artist’s rendering shows a novel material’s octet truss structure created using microstereolithography.


Manufacturing industries create a vast array of products, from steel I-beams to Blu-ray™ players to paperback books. However, ensuring the long-term vitality and economic competitiveness of U.S. industries in an increasingly globalized marketplace will require more energy-efficient processes and better material conservation. In 2015, the Advanced Manufacturing Office, within the Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE), and Lawrence Livermore instituted the High Performance Computing for Manufacturing (HPC4Mfg) Program to advance clean-energy technologies, increase the efficiency of manufacturing processes, accelerate innovation, reduce the time it takes to bring new technologies to market, and improve the quality of products.

The program unites the world-class high-performance computing (HPC) resources and expertise of Lawrence Livermore, Lawrence Berkeley, and Oak Ridge national laboratories with U.S. manufacturers to deliver solutions that could revolutionize the manufacturing industry. Jeff Roberts, Livermore’s deputy director for Energy and Climate Security, who oversees EERE projects at the Laboratory, says, “We are showing companies how they can use high-performance computing and national laboratory expertise to become more competitive and bring new products to market faster by reducing energy, waste, and rejected parts.” Peg Folta, a deputy program manager in the Laboratory’s Global Security Principal Directorate and founding director of the HPC4Mfg Program, adds, “The national laboratories have experts in advanced modeling, simulation, and data analysis. We match our people with industry partners to address technical challenges targeted by the manufacturers.”

Under the guidance of Folta and her successor, Lori Diachin, the department head for Information Technology in Livermore’s Computation Directorate, the HPC4Mfg Program solicits proposals on manufacturing challenges twice a year. Once concept papers are selected, principal investigators (PIs) from within the three partner laboratories team up with the industry partners to produce full proposals. This year the program will add to its list of participating laboratories, making more PIs available to help execute individual projects. Each $3 million solicitation provides up to $300,000 per funded project. Folta notes, “Through extensive outreach, we encourage manufacturers during the submittal and re-submittal process, identifying projects that best utilize high-performance computing to address issues with high impact.” She adds that the proposals provide insight into the challenges facing these companies. “We found a surprising number of unique materials projects were being proposed, as well as ones related to unusual welding issues.” Roberts explains, “The Laboratory is world recognized for its expertise in materials science, additive manufacturing, and HPC codes and simulations. The ability to help predict materials performance is becoming increasingly important, and the Laboratory shines in this area.”

Yearlong demonstration projects based on the funded proposals are aimed at showing industry partners how HPC can address their manufacturing challenges and provide a high return on investment. Solutions developed through these projects can then be fully implemented through industry, consortium, or government funding, or a combination thereof. The entire process has been designed to streamline public–private partnerships and provide a means for sharing what has been learned with the broader industry, while protecting intellectual property. Three of the program’s initial five seedling projects serve as prominent examples of what can be achieved through the HPC4Mfg collaborative model. (See the box below.)

In the steel-making process, iron ore and coke pellets are fed through the top of a blast furnace into the device’s main body, or shaft. Oxygen and coal are heated and injected into the raceway section of the furnace. The raw materials ignite and descend to the bottom of the furnace over 6 to 8 hours, liquefying into slag (waste material) and pig iron. The slag and iron are drained from the hearth at regular intervals. As part of the “Virtual Blast Furnace” project, Livermore is helping parallelize a series of Purdue-developed simulation codes to model furnace processes.

It’s a Blast (Furnace, That Is)

Steel is used in many industry sectors, including transportation, home goods, energy, and construction. The steel industry is also the fourth largest energy-consuming industry in the nation. Thus, decreasing its energy consumption by even a small percentage could yield big cost savings and reduce environmental impacts.

Livermore computational physicist Aaron Fisher is working with Purdue University Northwest’s Center for Innovation through Visualization and Simulation (CIVS) and a steel manufacturing consortium on the HPC4Mfg “Virtual Blast Furnace” project. One goal is to help reduce steel manufacturers’ reliance on coke—a coal-based fuel with high carbon content. To manufacture steel, iron ore is combined with coke in a blast furnace, then heated and melted, creating molten pig iron.

“If we could optimize the smelting process to reduce the average amount of coke used to produce a ton of hot metal by 5 percent, the industry could save $80 million a year,” says Fisher. Another aspect of their research focuses on increasing the energy efficiency of the centuries-old furnace process. Fisher notes, “Twenty furnaces in the U.S. produce all the country’s steel, and those furnaces consume about 65 percent of the energy in the steel-making process.”

Much of steel manufacturing is still an art, dependent on the experience of skilled workers to understand the conditions inside the furnace. “We can’t place sensors inside the furnace to gather data because the furnace temperatures are too high,” says Fisher. To help operators and manufacturers better understand this heat-intensive environment, CIVS created a series of simulation codes to model three independent sections of the furnace. The Shaft code models gas flow and reactions through alternating layers of iron ore and coke. The Raceway code models the fuel injection flow and combustion occurring below the shaft. The Hearth code models the behavior of the bottom part of the furnace where the liquid iron and the slag (waste material) flow out of tapholes. These codes run serially on a dedicated desktop computer at Purdue. It takes one week of 24/7 operations for the codes to complete a two-dimensional (2D) simulation, and 1 to 2 months to run a three-dimensional (3D) simulation—and that’s without a crash or power outage.

The lack of serious computational power means the models are limited. For instance, they cannot describe dynamic processes, such as those occurring during startup, shutdown, and periods of instability. Also, the three models must be connected manually, in sequence, to simulate the whole furnace. Fisher explains, “We proposed integrating the individual codes into a single one that can run on Livermore’s HPC clusters. By using a thousand processors in parallel, we could run the code nearly a thousand times faster and reduce the run time of a 3D simulation to less than one day.”
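The projected gain assumes the merged code parallelizes almost perfectly. As a rough, purely illustrative check (the serial fractions below are hypothetical placeholders, and the 45-day baseline simply stands in for the one- to two-month desktop runs described above), Amdahl’s law shows how close to the ideal thousandfold speedup the integrated code must come to finish a 3D simulation in well under a day:

    # Back-of-the-envelope scaling estimate for the integrated furnace code.
    # The serial fractions are hypothetical; the 45-day baseline stands in for
    # the 1- to 2-month 3D desktop runs described in the article.

    def amdahl_speedup(n_procs, serial_fraction):
        """Speedup on n_procs processors when a fixed fraction of the work
        cannot be parallelized (Amdahl's law)."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    baseline_days = 45.0
    for serial_fraction in (0.0, 0.0005, 0.001):
        s = amdahl_speedup(1000, serial_fraction)
        hours = baseline_days / s * 24.0
        print(f"serial fraction {serial_fraction:.4f}: speedup ~{s:,.0f}x, "
              f"3D run time ~{hours:.1f} hours")

Even a tenth of a percent of unavoidably serial work cuts the ideal thousandfold gain roughly in half, which is why the team is redesigning the codes for parallel execution rather than simply relinking them.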

The team conducted two demonstration studies. In the first study, Livermore developed the capability to launch multiple instances of the Shaft code in parallel. Each 2D simulation had different input parameters for the amount of oxygen enrichment, the natural gas-injection rate, and the hot blast temperature. The study showed that with Livermore’s HPC clusters, potentially hundreds of simulations could be run for analyzing coke consumption rates and furnace stability in significantly less time than with CIVS’s current computing resources. In the second demonstration, the team focused on the steel industry’s “ladle” operation, in which molten iron and alloying materials are poured into a huge cup and stirred to make steel. Foundries are interested in shortening the stirring time while increasing how much molten steel can be produced. One stirring method involves injecting neutral gases to facilitate the mixing of iron and alloy materials. Using its computing cluster, a desktop machine, and an off-the-shelf engineering code, CIVS took two weeks to model the gas-stirring process at low resolution. Livermore proposed using 2,000 processors and a finer mesh with more zones to achieve greater detail. A scaling study with these simulations showed that the Livermore clusters could run problems of this type (and larger) 35 times faster than the CIVS cluster and 1,400 times faster than the dedicated desktop.
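The first of these studies is, in computational terms, an embarrassingly parallel parameter sweep: each Shaft-code run is independent, so the cases can be launched concurrently with no communication between them. The Python sketch below only illustrates the pattern; the executable name, command-line flags, and parameter values are all hypothetical, and on a production cluster this role would typically fall to the batch scheduler rather than a single script.

    # Illustrative parameter sweep: many independent 2D Shaft-code runs, each
    # with its own oxygen enrichment, natural-gas injection rate, and hot blast
    # temperature. The executable name, flags, and values are hypothetical.

    import itertools
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    oxygen_enrichment = [0.25, 0.28, 0.31]        # O2 volume fraction (hypothetical)
    gas_injection     = [100.0, 150.0, 200.0]     # natural gas rate (hypothetical units)
    blast_temperature = [1100.0, 1200.0, 1300.0]  # hot blast temperature, deg C (hypothetical)

    def run_case(params):
        """Launch one Shaft-code instance and report its exit status."""
        o2, gas, temp = params
        label = f"o2_{o2}_gas_{gas}_T_{temp}"
        result = subprocess.run(
            ["./shaft_code", f"--o2={o2}", f"--gas={gas}",
             f"--blast-temp={temp}", f"--output=results_{label}.dat"],
            capture_output=True,
        )
        return label, result.returncode

    if __name__ == "__main__":
        cases = list(itertools.product(oxygen_enrichment, gas_injection, blast_temperature))
        with ProcessPoolExecutor() as pool:  # one worker per available processor core
            for label, code in pool.map(run_case, cases):
                print(label, "ok" if code == 0 else f"failed ({code})")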

The project team is now merging the three CIVS codes and redesigning them to run in parallel on Livermore’s machines. Fisher says, “Our long-term vision is to use HPC and advanced numerical and computational methods to develop an interactive, virtual blast furnace that combines a comprehensive, integrated, high-fidelity, dynamic, multiphysics model with fast data visualization.” Once the codes are running as one, the team plans to partner with members of the steel consortium to study ways to improve furnace operation.

In a demonstration study, Livermore researchers ran more than 200 simulations of critical blast furnace parameters, such as oxygen gas enrichment, natural gas injection rate, and hot blast temperature. Simulation outputs (shown here) were analyzed for coke consumption rate and furnace stability.



Livermore researchers established the versatility and speed of using a commercial, off-the-shelf code running on the Laboratory’s supercomputing clusters to simulate the steel industry’s ladle operation. The simulation was run using a range of mesh sizes and different numbers of central processing units. The color scale represents the percentage of alloyed steel in the fluid. Red is fully mixed steel, and blue is the slag layer.

Paper's Pressing Problem

Turning wood pulp into paper is the third largest energy-consuming process in manufacturing. The Agenda 2020 Technology Alliance, a nonprofit organization that aims to identify and solve challenges in the pulp and paper industry, turned to HPC4Mfg to explore ways to cut costs and save energy. Livermore’s Yue Hao and Wei Wang partnered with Lawrence Berkeley’s David Trebotich and Agenda 2020’s Jun Xu and David Turpin to improve the industry’s energy efficiency.

In the paper-pressing process, wet, porous paper pulp is fed onto a moving belt of fine-mesh screening that holds a felt layer. The felt–pulp layers are squeezed through rollers and passed over steam-heated cylinders to remove the remaining water. Hao explains, “Reducing the amount of energy required in the drying process by 20 percent could save the industry $250 million annually.”

One way to save considerable energy might be to reduce the “re-wetting” of the pulp that occurs during the pressing process. The rollers squeeze water out of the paper at the pinch point (called the nip), and the felt soaks up the water. However, as the layers leave the rollers and the pressure eases, the pulp sucks up some of the residual moisture from the felt, re-wetting the paper. Hao says, “The industry determined it needed an accurate numerical model to understand the physics of re-wetting, so it could take steps to design more energy-efficient equipment.”

The exact mechanism for re-wetting is not well understood. Thus, Agenda 2020 turned to the simulation experts at Livermore and Berkeley to create a multiphysics modeling framework. Using existing industry data, including felt measurements, computerized tomography (CT) images of the felt, and paper-machine press data, the two national laboratories developed a coupled-physics simulation framework to determine how water flows through porous paper pulp during and after the pressing process. Berkeley developed a 3D model to look at pore-scale flow behaviors in the felt, running the model on up to 50,000 central processing units at the National Energy Research Scientific Computing Center. Livermore then used the results of Berkeley’s pore-scale simulation to constrain its continuum model, which integrates all multiscale data into a single model.
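One simple way to picture the hand-off between the two scales: the pore-scale simulations yield effective properties, such as a permeability for the felt, which the continuum model then uses in bulk flow equations. The sketch below is only conceptual; every number in it is a placeholder, and the actual Livermore framework couples deformation and two-phase flow in far more detail.

    # Conceptual multiscale hand-off: fit an effective permeability to
    # (hypothetical) pore-scale output, then use it in a continuum Darcy-flow
    # estimate of water flux through the felt in the press nip.

    import numpy as np

    # Hypothetical pore-scale results: superficial water velocity (m/s) at
    # several imposed pressure gradients (Pa/m) across a small felt sample.
    pressure_gradients  = np.array([1e6, 2e6, 4e6, 8e6])
    pore_scale_velocity = np.array([1.1e-4, 2.3e-4, 4.4e-4, 9.1e-4])

    mu_water = 1.0e-3  # dynamic viscosity of water, Pa*s

    # Darcy's law, u = (k / mu) * dP/dx, gives the effective permeability k
    # as the fitted slope times the viscosity.
    slope = np.polyfit(pressure_gradients, pore_scale_velocity, 1)[0]
    k_eff = slope * mu_water
    print(f"effective felt permeability ~ {k_eff:.2e} m^2")

    # Continuum-scale estimate: water flux per unit area through the felt,
    # given an assumed nip pressure drop across an assumed felt thickness.
    nip_pressure_drop = 5.0e6   # Pa (hypothetical)
    felt_thickness    = 2.0e-3  # m (hypothetical)
    flux = (k_eff / mu_water) * nip_pressure_drop / felt_thickness
    print(f"estimated superficial water velocity in the nip ~ {flux:.2e} m/s")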

Hao notes that the problem was complicated, involving a very narrow physical space and short timeframes. The results from the initial continuum model clearly showed the deformation and dryness of the paper as it traverses rollers and provided a detailed numerical view of the process—an essential first step to optimizing paper drying. Next, industry must be encouraged to continue supporting model development, which will require obtaining better CT images of the paper to improve model fidelity. A CT machine with much higher resolution than what the industry currently has available will be needed since the paper is only 400 micrometers thick. “As new information and data are added,” says Hao, “the model can be modified to more closely reflect the reality of the process.”

In paper processing, wet paper pulp and felt layers are pressed between rollers at high speed to remove water from the paper. “Re-wetting” occurs after the paper and felt leave the high-pressure area of the nip. In collaboration with the Agenda 2020 Technology Alliance, Lawrence Livermore and Lawrence Berkeley national laboratories are exploring the re-wetting process with the goal of maximizing water removal and minimizing power consumption.

HPC4energy: Precursor to HPC4Mfg

Started in 2015, the High Performance Computing for Manufacturing (HPC4Mfg) Program is fairly new, but it had its genesis in previous Laboratory efforts. Four years earlier, Lawrence Livermore hosted a meeting on high-performance computing (HPC) for the Council on Competitiveness Technology Leadership and Strategy Initiative advisory committee. The Council on Competitiveness is a nonprofit, nonpartisan, nongovernmental organization made up of corporate chief executive officers (CEOs), university presidents, and labor leaders. During that meeting, the council’s president and CEO called HPC an “innovation accelerator,” adding it offered an extraordinary opportunity for the United States to design products faster and to minimize the time needed for creating and testing prototypes. Soon after, Livermore created the HPC4energy incubator, a one-year pilot program for accelerating energy technology development and boosting U.S. competitiveness in the global marketplace by bringing together industry and the Laboratory’s scientific and computing resources. The pilot concluded in 2013 to praise from the Laboratory and industry participants alike (see S&TR, June 2013, Scaling Up Energy Innovation through Advanced Computing; and June 2012, Incubator Busy Growing Energy Technologies).


Growing High-Quality Crystals

Gallium nitride (GaN) is an emerging semiconductor material making inroads in many technological areas, such as solid-state lighting and power electronics. One application that most people are familiar with is the Blu-ray player, which uses a violet laser diode on a GaN substrate to read Blu-ray discs. For GaN-based light‑emitting diodes, GaN layers are typically deposited on a nonnative substrate such as sapphire or silicon carbide, leading to lattice strain—displacement of atoms—between the two materials and reducing device reliability and performance. GaN-based devices that use a GaN substrate (known as GaN-on-GaN technology) operate at higher power and with higher efficiency than those made with traditional semiconductor materials. As a result, they also have the potential to drastically cut energy consumption in consumer applications.
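The strain penalty of a nonnative substrate can be seen from a quick lattice-mismatch calculation. The sketch below uses commonly cited room-temperature lattice constants (and the usual 30-degree in-plane rotation of GaN grown on sapphire); the numbers are indicative only, but they make clear why a native GaN substrate removes the mismatch entirely.

    # Rough lattice-mismatch comparison for GaN on common substrates.
    # Lattice constants (angstroms) are commonly cited room-temperature values;
    # for sapphire, GaN grows with a 30-degree in-plane rotation, so the
    # relevant substrate spacing is a_sapphire / sqrt(3).

    import math

    a_gan      = 3.189                 # GaN a-axis lattice constant
    a_sic      = 3.081                 # 6H-SiC a-axis lattice constant
    a_sapphire = 4.758 / math.sqrt(3)  # effective spacing seen by rotated GaN

    def mismatch(a_film, a_substrate):
        """Fractional lattice mismatch of the film relative to the substrate."""
        return (a_film - a_substrate) / a_substrate

    for name, a_sub in [("sapphire", a_sapphire), ("6H-SiC", a_sic), ("GaN (native)", a_gan)]:
        print(f"GaN on {name:12s}: mismatch ~ {mismatch(a_gan, a_sub) * 100:5.1f} %")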

The challenge to making GaN-on-GaN devices a widespread reality is finding scalable ways to grow high-quality crystals of the material quickly and inexpensively. Semiconducting materials are typically grown using melt techniques. However, GaN crystals cannot be grown using such methods because the material’s melting temperature is exceedingly high (2,500 degrees Celsius), and high pressures are needed to keep the material from decomposing into its two elemental constituents. The most common GaN-production process is hydride vapor-phase epitaxy (HVPE), which involves reacting ammonia with gallium chloride at about 1,100 degrees Celsius. Although this process has high growth rates, it is also expensive and usually results in crystals with too many defects for many applications.

The SORAA company, the world’s leading developer of solid-state lighting based on GaN substrates, is working on a promising GaN crystal-growth technology that could reduce production costs of high-quality GaN-on-GaN light-emitting diodes by 20 percent and enable the development of next-generation power electronics, such as controllers for motors in hybrid cars. The company is developing a small, high-pressure autoclave or reactor for ammonothermal growth, where ammonia (a solvent) helps reduce the required growth temperature. (The process is similar to a highly successful technique wherein water is used as a solvent to grow quartz crystals.) However, with the current reactor setup, the productivity is still rather low. In addition, the autoclave can operate at pressures exceeding 700 megapascals (7,000 atmospheres) and at temperatures as high as 750 degrees Celsius. Given the corrosive nature of ammonia and various chemicals used, the reactor must be made of high-strength steels or corrosion-resistant alloys and metals. These conditions, combined with the infeasibility of measuring the environment within the reactor, make it extremely difficult to understand what is happening inside the autoclave.

SORAA teamed with Livermore through HPC4Mfg to better understand the growth process using multiphysics simulations run on the Laboratory’s HPC systems. Livermore computer scientist Nick Killingsworth, the PI for the project, turned to the licensed code StarCCM+ to simulate the reactor, improve throughput, and run higher fidelity models. “SORAA has the experience, but they just needed some simulation power to help them understand the dynamics inside the reactor,” he explains. Using the Laboratory’s SYRAH supercomputer and StarCCM+, the team ran simulations incorporating more mesh points to better understand the flow within the ammonothermal reactor, completing each simulation in two to three days. Previous simulations run on SORAA’s 12-processor workstation took an entire week to complete.

Results from the higher fidelity simulations revealed a much more complicated flow structure in the autoclave than anticipated. Modeling the flow and temperature profile along the walls of the reactor showed a flow that was transient—changing over time—and turbulent. The results improved predictions of local temperatures and flow velocities within the reactor, providing valuable insight. As a result, SORAA is now in a better position to optimize the uniform growth of GaN crystals. Killingsworth says, “This new high-fidelity model could save years of trial-and-error experimentation that are typically needed to bring a process into large-scale commercial production.” Once large crystals can be grown quickly and with fewer defects, the door will be open for wider use of GaN in high-power electronics and other applications.

Lawrence Berkeley used industry-derived data and measurements collected from scanning electron microscopy and computerized tomography (CT) to create a pore-scale model of water flow behaviors in the felt. Results were then fed into a multiscale continuum model developed by Livermore. The model shows the deformation and dryness of the compressed paper and felt. Red represents higher dryness and blue represents lower dryness.



The SORAA reactor for growing high-quality gallium-nitride (GaN) crystals is filled with supercritical ammonia and is hotter at the bottom than at the top. The polycrystalline GaN “nutrient” is placed on the bottom, where the temperature and solubility are highest. Dissolved GaN is transported by free convection up through a baffle to the cool end of the system. As it cools, the material’s solubility decreases and it deposits on seed crystals. The GaN-depleted solvent then sinks to the bottom of the reactor and the cycle is repeated. The crystals grow larger with each loop through the system.



High-fidelity simulations of the flow within the GaN reactor, developed as part of the collaboration between Livermore and SORAA, show more complicated, turbulent flow structures compared to previous work. Color gradient indicates magnitude and direction of velocity vectors within the reactor. Complicated vertical structures near the reactor walls were found to vary with time.

HPC4Mfg Running at Steady State

Since its inception, the HPC4Mfg Program has expanded from the first five seedling projects to more than 28 projects involving three national laboratories. As the program continues to solicit proposals twice a year, it aims to build an HPC–manufacturing community through industry outreach and academic involvement. The first annual “Industry Engagement Day,” to be held in March 2017, will bring together companies, laboratories, academia, consortia, and state and local government officials to learn about the benefits of HPC adoption in manufacturing and how HPC capabilities at the national laboratories can support manufacturing companies. Participants will also be able to discuss current manufacturing challenges. In addition, the event will provide a venue for HPC4Mfg Program managers to receive feedback on work done thus far, helping refine and improve collaborations.

Long-term, the HPC4Mfg Program will also play a role in fostering future talent for the manufacturing industry. The program has been successful in recruiting postdoctoral researchers and students to the Laboratory, and has provided input on ways to improve engineering curricula at universities. Expanded programs are being planned to encourage students, professors, and industry professionals to train the next-generation manufacturing workforce.

Already, the partnerships achieved through the HPC4Mfg Program are proving fruitful. “Our projects address diverse challenges,” says Folta, “helping with process optimization, design improvement, and introduction of new computational tools where they can benefit most. We continue to reach out to companies and are working to ensure that after each project is completed, HPC is being woven into the fabric of manufacturing.” The program strives to make connections and help the nation’s most energy-intensive industries become more energy-efficient and globally competitive, culminating in a win–win for U.S. manufacturing, the national laboratories, and the nation.

—Ann Parker

Key Words: ammonothermal growth, Blu-ray player, crystal growth, economic competitiveness, energy efficiency, gallium nitride (GaN), GaN-on-GaN technology, high-performance computing (HPC), HPC4energy, High Performance Computing for Manufacturing (HPC4Mfg) Program, light-emitting diode, paper manufacturing, semiconductor, steel industry.

For further information contact Lori Diachin (925) 422-7130 (diachin2@llnl.gov) or Peg Folta (925) 422-7708 (folta2@llnl.gov).