AI Leadership for National Security


El Capitan’s innovative processors, built by Advanced Micro Devices, Inc., are well suited for AI-assisted data analysis of large-scale simulations.

AI marks a turning point in the current technological era, a force that is not only transforming daily life but also revolutionizing entire sectors. As the application of AI across industries accelerates the pace of development, national security must remain at the cutting edge as well, a task that requires extensive collaboration to deploy the nation’s most critical resources.

“We’re seeing AI penetrate all aspects of modern life—science, mathematics, literature, media—and the national security space is no exception,” says Brian Giera, the director of Livermore’s Data Science Institute (DSI). “Understanding what the rest of the world is working on informs the refinements we need to deploy in our space.” Adds Brian Spears, the director of Livermore’s AI Innovation Incubator (AI3) and Cognitive Simulation Institutional Initiative, “AI tools have become so powerful that they’re transforming and accelerating the way science itself is done, and the entity that succeeds in capturing these technologies for scientific innovation first is likely to gain advantage for itself in a way that will not be overcome short of another scientific revolution.” In pursuit of this advantage, Lawrence Livermore leads efforts to leverage AI through varied scientific research, diligent work with external stakeholders, and internal knowledge base development.

Poised for Leadership

The national advancement of AI at a competitive pace requires strength in data, modeling, computational capabilities, and applications—the four pillars of AI for science. The Department of Energy (DOE) provides this strength through existing infrastructure and expertise, with Lawrence Livermore playing a critical role. “We already have diverse AI research and a portfolio in AI with many programmatic hooks. Some projects have used AI for over a decade,” says Giera. “People at the Laboratory can definitely claim they worked on AI before AI was cool.” 

Livermore is a significant contributor to the first pillar: data. The Laboratory possesses around 1,000 trillion tokens of scientific data, approximately 100 times the amount OpenAI used to train the GPT-4 large language model. According to Spears, no model in the world can yet make sense of such a volume of data, making that data a key asset. Livermore also stands as a leader in the second pillar—modeling—not only using million-line simulation codes to verify the predictions of AI models but also bringing an institutional understanding of scalable machine learning (ML) that takes models to the large scales necessary for solving increasingly complex national security problems. Managing and making sense of this data, and training and then running such resource-intensive models, requires computational capabilities—the third pillar—on an even grander scale. In addition to El Capitan, the world’s fastest supercomputer and the National Nuclear Security Administration’s (NNSA’s) first exascale computer, Livermore brings its entire computing enterprise to bear on national security missions. (See S&TR, December 2024, Introducing El Capitan.)

This core combination of data, modeling, and computational capabilities comes together in pursuit of applications, the fourth pillar of AI for science. According to Spears, national security efforts such as strategic deterrence, biodefense, manufacturing, and fusion science provide AI models with some of the most complicated reasoning tasks imaginable—tasks that enable models’ continuous improvement. Cindy Gonzales, DSI’s deputy director, says, “The computational heft that Livermore brings to the table is that we work in a context to support national security missions. We’re generating models of the highest consequence alongside leading experts with the knowledge and expertise needed to create an effective system to defend our country.” Adds Spears, “Offering this expertise to ourselves and to our private partners for a strategic U.S. ecosystem sets the north star toward which we build out all our capabilities.” 

Discover, Design, Manufacture, Deploy

The complexity of high-consequence national security missions means that efforts to generate technological solutions for the nation’s most pressing problems often unfold in stages. Aligning all of its capabilities, Livermore applies AI to groundbreaking projects in every stage of the “Discover, Design, Manufacture, and Deploy” (DDMD) framework for national security science. “Modern AI solutions are a way to accelerate the path from concept to solution in our strategic space,” says Spears.

For example, in the realm of discovery, the Laboratory’s Generative Unconstrained Intelligent Drug Engineering (GUIDE) program harnesses ML in its process for therapeutic development. Through AI-assisted supercomputing, the GUIDE platform narrows vast numbers of antibody candidates to those most viable as defenses against a given antigen. This approach significantly decreases the amount of experimental testing required and the time to discover a successful therapeutic. (See S&TR, September 2024, GUIDEing Drug Development.) The program has been applied in the fight against COVID-19 and offers potential for defense against other emerging biological threats.
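The screening pattern described above can be reduced to a minimal sketch: a trained model scores each candidate, and only the top-ranked fraction proceeds to wet-lab experiments. Everything here—the scoring function, the feature encoding, the candidate pool—is invented for illustration and is not the GUIDE platform’s actual code:

```python
import random

random.seed(0)

def predicted_binding_score(candidate):
    # Placeholder for a trained ML surrogate: in a real pipeline, a model
    # would estimate antibody-antigen binding. Here we fake a score from a
    # candidate "feature vector" so the filtering logic is runnable.
    return sum(candidate) / len(candidate)

# Hypothetical pool of candidate antibodies, each encoded as a feature vector.
pool = [[random.random() for _ in range(8)] for _ in range(10_000)]

# Rank the whole pool by predicted score and keep only the top fraction,
# shrinking the set that would go on to experimental testing.
top_k = 50
ranked = sorted(pool, key=predicted_binding_score, reverse=True)
shortlist = ranked[:top_k]

print(f"screened {len(pool)} candidates down to {len(shortlist)}")
```

The value of the pattern lies entirely in the quality of the surrogate: if the model’s scores correlate with true binding, the experimental burden drops by orders of magnitude.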

AI offers major benefits for engineering and design, as evidenced by the Laboratory’s DarkStar Strategic Initiative. DarkStar investigates applications of AI in scientific problems of complex hydrodynamics, shockwave physics, and energetic materials. (See S&TR, October/November 2024, Beginning at the End.) A notable result was the creation of the AI-aided inverse design approach, which points researchers to the best engineering-based design solutions by working backward from the desired result: specifically controlled material dynamics. “Demonstrating that complex systems can be developed directly from a final state and the initial design resolved by satisfying several constraints simultaneously via AI and machine learning allowed us to make groundbreaking discoveries in the area of hydrodynamic instability,” says Jon Belof, DarkStar project lead.
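The inverse-design idea—start from the desired final state and recover the design that produces it, subject to constraints—can be sketched in a toy form. The forward model, parameters, and constraint below are invented for illustration; DarkStar’s actual models and methods are far more sophisticated:

```python
def forward(thickness, velocity):
    # Toy forward model: two design knobs map to a final "state"
    # (final position and peak pressure, arbitrary invented units).
    position = 2.0 * thickness + 0.5 * velocity
    pressure = thickness * velocity
    return position, pressure

def loss(params, target):
    pos, pres = forward(*params)
    mismatch = (pos - target[0]) ** 2 + (pres - target[1]) ** 2
    # Constraint penalty: physically meaningful designs are non-negative.
    penalty = sum(max(0.0, -p) ** 2 for p in params)
    return mismatch + 10.0 * penalty

def inverse_design(target, start=(2.0, 5.0), lr=0.002, steps=30_000, eps=1e-6):
    # Gradient descent on the design parameters, working backward from the
    # target final state; gradients come from finite differences here, but a
    # differentiable surrogate would supply them directly at scale.
    params = list(start)
    for _ in range(steps):
        base = loss(params, target)
        grads = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grads.append((loss(bumped, target) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

target = forward(3.0, 4.0)   # the final state we want to reach
design = inverse_design(target)
print(design)
```

Note that inverse problems are often non-unique (this toy has two designs that reach the same state), which is one reason constraints must be imposed alongside the target.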

AI can improve manufacturing efficiency and quality for national security-relevant projects. For example, the Scorpius accelerator, to be installed at the Nevada National Security Site, will enable the radiographic imaging of dynamic subcritical experiments with plutonium, simulating late stages of a nuclear implosion and providing data about the effects of aging and manufacturing methods on nuclear weapons. (See S&TR, April 2021, Shining a Bright Light on Plutonium, and March 2025, Taking the Pulse of Stockpile Stewardship.) AI and ML ensure high-quality images are produced as Livermore manufactures the 984 pulsed-power cells, called line replaceable units (LRUs), that the accelerator will ultimately comprise. The team uses AI to model the environment of the pulsers and optimize them to create clean waveforms on every pulse, despite differing conditions. Working in tandem with hardware testing, this model improves the manufacturing process and ensures that the hundreds of LRUs will work together to generate images effectively.
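The calibration problem the article describes—tuning each unit so its pulse matches a clean reference despite differing conditions—can be illustrated with a hedged sketch. The waveform model, the thermal-droop coefficient, and the voltage search are all invented stand-ins, not the Scorpius team’s actual model:

```python
import math

def simulated_waveform(voltage, temperature, n=100):
    # Hypothetical reduced model of one pulsed-power cell (LRU): output
    # amplitude drifts with temperature, and charge voltage is the knob
    # we can tune per unit. Coefficients are invented.
    gain = 1.0 - 0.002 * (temperature - 20.0)   # assumed thermal droop
    return [voltage * gain * math.exp(-t / 40.0) * math.sin(t / 5.0)
            for t in range(n)]

def waveform_error(wave, reference):
    # Mean squared deviation from the desired clean pulse.
    return sum((a - b) ** 2 for a, b in zip(wave, reference)) / len(wave)

reference = simulated_waveform(10.0, 20.0)   # the desired "clean" pulse

def calibrate(temperature):
    # For a unit's operating condition, pick the charge voltage that best
    # reproduces the reference pulse -- a stand-in for per-LRU tuning.
    candidates = [8.0 + 0.01 * i for i in range(400)]   # 8.0 .. 12.0 V
    return min(candidates,
               key=lambda v: waveform_error(simulated_waveform(v, temperature),
                                            reference))

hot_setting = calibrate(35.0)
print(f"compensating voltage at 35 C: {hot_setting:.2f}")
```

In practice the model would be fit against hardware test data, closing the loop between simulation and manufacturing that the article describes.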

A digital twin can be compared to its physical counterpart, then the resulting data can be fed back into the manufacturing pipeline. In addition, digital twins can be used to simulate a part under different conditions. For example, a digital mesh of a woodpile structure printed by a direct-ink-writing machine (left) was simulated (right) when uncured and subjected to self-weight, providing a representation of its deformation and range of stresses (represented by different colors) under these conditions.

The centerpiece of the deployment stage at Livermore is its work on stockpile material aging and compatibility to certify the safety of assets deployed in the nuclear stockpile. Although “deploy” is the most recently applied capability at the Laboratory, future AI projects will capitalize on existing aging data and build better predictive models to identify the need for new materials, mitigate aging conditions, and more. In addition, digital twins—exact digital versions of physical parts with part-specific understanding—assist with deployment by tracing part behavior over time and providing further aging data. 
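The predictive-aging idea—fit a degradation model to surveillance data, then forecast when a material property will cross a service threshold—can be sketched simply. The data points, the exponential-decay form, and the threshold below are all invented for illustration; real stockpile aging data and models are far more complex:

```python
import math

# Hypothetical surveillance data: (age in years, retained property fraction).
# These values are invented solely to illustrate the modeling pattern.
observations = [(0, 1.00), (5, 0.90), (10, 0.82), (15, 0.74), (20, 0.67)]

# Fit property ~ exp(-k * age) by linear least squares on log(property).
xs = [age for age, _ in observations]
ys = [math.log(frac) for _, frac in observations]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
k = -sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)

def predicted_fraction(age):
    return math.exp(-k * age)

# Forecast the age at which the property falls below a service threshold,
# flagging when mitigation or new material would be needed.
threshold = 0.5
replacement_age = math.log(1.0 / threshold) / k
print(f"k = {k:.4f}, threshold crossed near year {replacement_age:.1f}")
```

Digital twins extend this pattern by supplying part-specific histories, so the fitted model tracks an individual part rather than a population average.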

Wide-Reaching Efforts

In addition to Livermore’s existing scientific applications of AI, external and internal efforts include engaging with partners and building a national AI program. DOE is working to secure funding for a nationwide AI initiative intended to accelerate national security missions and keep the U.S. competitive in the race to capture AI tools for science. With the potential for a multibillion-dollar investment in the three NNSA laboratories, the initiative would expand Livermore’s computing and personnel capabilities significantly in pursuit of this goal. “Sandia, Lawrence Livermore, and Los Alamos national laboratories have had a fantastic partnership over the last couple of years, and we’re all on the same page trying to advance a national AI initiative,” says Jason Pruet, the director of the AI Office Council at Los Alamos. “One dimension of AI’s importance in the national security space is its potential for enabling us to solve challenges we had not expected to be able to solve for generations.”

AI3 is another external vehicle for partnerships and conversations surrounding AI. Through relationships with companies providing each component of AI infrastructure—OpenAI for large models, Microsoft for applications of models to Laboratory-relevant workflows, NVIDIA as a producer of computing hardware, and Hewlett Packard Enterprise as an integrator for computers, for example—Livermore supports the scale-up of AI technologies that would not otherwise be possible. Such relationships provide value in both directions: the Laboratory remains in touch with future technologies that might help advance the national security mission, and the companies gain an understanding of problems in the national security space that can make their models better. “A transformational science technology is being driven at enormous scale outside the Laboratory,” says Spears. “Public–private partnerships are the central focus of AI3. We must understand and steer what’s going on outside Lawrence Livermore. We also must be able to pick up those capabilities and pull them inside for our Laboratory missions.”

Complementary to AI3, DSI contributes to both internal and external efforts surrounding AI. Externally, DSI fosters partnerships with academic institutions through programs such as the Data Science Summer Institute and, for the University of California system specifically, the Data Science Challenge, each designed to engage students in solving challenging, real-world data science and AI problems and to build an informed workforce pipeline. Livermore and DSI also have a rich partnership with Case Western Reserve University through an NNSA-funded Center of Excellence. “The Center is a type of academic open space to which we bring partners from all over the NNSA complex and help them learn materials data science,” says Giera. “They’re experts at looking at materials data, the nuances associated with that, and deploying AI or machine learning in those spaces.” Internally, DSI contributes to Livermore’s workforce development and AI Community of Practice—an umbrella organization encompassing AI at the Laboratory—through its consulting service, seminar series, staff training program, and variety of workshops.

Public–private partnerships are a key element of the Laboratory’s AI Innovation Incubator, which communicates a vision for activities to grow external collaboration and internal AI capabilities.

Other internal efforts aligned with the AI Community of Practice are designed to prepare the entire Laboratory workforce to use AI for the benefit of national security. aiEDGE (AI Education for Development, Growth, and Excellence) encourages Laboratory staff to use AI tools, lowering the barrier to entry through accessible training modules, seminars, sample prompts, and shared success stories. LivChat, an internal AI tool similar to ChatGPT, further encourages employee participation. Greg Herweg, chief technology officer for the LivIT (Livermore Information Technology) program, says, “We have high hopes that, from a day-to-day productivity perspective, AI will help employees deliver more, whether they have a scientific role or an operational role. If we’re more productive, then we’re going to be more competitive with other enterprises.”

In a near future in which AI is an inevitable element of emerging technologies for national security, Livermore’s efforts position the Laboratory, DOE, and the nation at the leading edge. “AI is proliferating more rapidly than anything we’ve seen previously, and we’re at an inflection point as a global society in how we embrace or don’t embrace it,” says Gonzales. “Being in the AI and machine-learning fields and tackling some of the nation’s and the world’s hardest problems is exciting.” Adds Spears, “I’ve used these models, and I’ve seen what we can achieve that we’ve not been able to do before. That level of capability in the hands of a hundred or a thousand people, or shared with 20 or 30 thousand DOE scientists, means that we’re going to do transformational things.”

—Lilly Ackerman

For further information contact Brian Spears (925) 423-4825 (spears9 [at] llnl.gov).