With three more R&D 100 awards in 2024, Livermore brings visibility to powerful laser optics and high-performance computing software technologies.
The words technology innovation may bring to mind Silicon Valley, billion-dollar venture capital investments, and the constant spin-up of research and development (R&D) projects. At Lawrence Livermore, the phrase underpins the everyday activities that lead to scientific discovery. Solving challenging national security problems requires investment in, and a philosophy of, innovation.
Awards for innovation are not the Laboratory’s primary R&D objective, but they do increase the visibility of new technologies while highlighting potential opportunities for partnerships, commercialization, and adoption. R&D World magazine’s annual awards recognize the top 100 scientific and technological inventions in categories such as Mechanical/Materials, Process/Prototyping, and Software/Services.
The Laboratory has won at least one R&D 100 Award nearly every year since 1978 for a total of 182. (See the box below, "A Winning Record.") “The R&D 100 awards are judged by an independent group of industry players who scrutinize our science and discoveries,” states Deputy Director of the Innovation and Partnerships Office (IPO), Elsie Quaite-Randall. “We compete on a global scale, which is quite different from awards given within the Department of Energy (DOE) or at the Laboratory itself.”
A Winning Record
As always, Science & Technology Review (S&TR) keeps pace with—and provides a record of—the Laboratory’s history of innovation. (See the article, Celebrating 30 Years of Science & Technology Review, in this issue.) In the three decades since S&TR’s 1995 debut, each year’s scientific editor pencils into the production schedule potential coverage of any R&D 100 Award-winning technologies. Winning teams have developed a variety of science and engineering solutions, from linear solvers and artificial retinas to corrosion-resistant coatings and pathogen detection devices.
Charlotte Eng, a Business Development Executive in Livermore’s Innovation and Partnerships Office, says, “Our researchers work in many different areas, so it’s no surprise that the technologies that gain recognition are wide-ranging and reflect our high-caliber workforce across the board.”
Even the magazine’s long-running predecessor publication, Energy & Technology Review (E&TR), featured R&D 100 Award winners, with an archived example available on the current S&TR website. (See E&TR, April 1994, R&D 100 Award Winners.) For instance, one of the 1993 winning teams developed a single-shot transient digitizer to record electrical data generated by Livermore’s Nova laser, with future use intended at the proposed National Ignition Facility. Another winning team designed and built a two-color digital camera system for astronomy applications, specifically the search for dark matter.
Common Threads
Livermore teams brought home three R&D 100 awards in 2024, each underscoring two of the Laboratory’s most prominent advancements in recent years: fusion ignition and exascale computing. “Our record of wins illustrates the Laboratory’s commitment to mission-driven scientific breakthroughs that lead to real-world, impactful innovation,” says IPO Director Matthew Garrett. (See the box below, "Open for Collaboration.")
The purpose and goals of the National Ignition Facility (NIF) have demanded decades of development in lasers, optics, and high-energy-density science alongside new diagnostic and analytic techniques. As the world’s largest and most energetic laser system, NIF has also inspired smaller-scale innovations with impact beyond the Laboratory’s gates. One key R&D area is advanced optics, where factors such as size, shape, fabrication, defect mitigation, and beam quality compel researchers to improve upon traditional methods and materials. The 2024 award-winning optic design illustrates the enduring quest for high-performing optical solutions. “The EXUDE (EXtreme Ultralow-Power Dispersive Element) Elite team has capitalized and improved upon spectral beam combining, showcasing our ability to meet industry needs in this space,” states IPO Business Development Executive for Livermore’s laser and optics portfolio, Alex Hess.
Amid a 70-year history of fielding advanced computing systems, Livermore researchers have developed software solutions that streamline complex processes, often shifting the responsibility for computational efficiency from application codes to supporting software tools and libraries. Many of the Laboratory’s award-winning software projects take this approach, which reduces and often eliminates the need for code rewriting. Like the two 2024 winning software projects, many are open source, so the wider high-performance computing (HPC) community—at DOE laboratories and beyond—can benefit from these proven technologies. (See S&TR, March 2024, Energizing Enclaves.) “In my 20 years at the Laboratory, I’ve seen computing- and software-related teams regularly win these awards because the technology is constantly advancing,” notes Mary Holden-Sanchez, IPO Business Development Executive for Livermore’s software portfolio. Garrett adds, “With the recent unveiling of El Capitan and ongoing innovation in computing and software, Livermore continues to commit to leading the HPC field in ways that benefit national security and advance science and technology overall.”
Open for Collaboration
Innovation does not happen in a vacuum. Livermore’s multidisciplinary teams collaborate with each other, industry partners, universities, and national laboratories to drive science and technology forward. The ever-expanding roster of commercial partnerships encourages creative solutions to market challenges using mission-based innovations. These partnerships are pivotal in the Laboratory’s mission to pursue big ideas—so much so that an entire office exists to foster those engagements: the Innovation and Partnerships Office (IPO).
“National laboratories play a vital role in U.S. economic competitiveness,” explains IPO Deputy Director Elsie Quaite-Randall. “The Department of Energy has heavily invested in building technical capabilities that commercial companies can’t build on their own, such as our supercomputers and the access to use them.” As Livermore researchers make new discoveries and develop new tools for the national security mission, other real-world needs come into focus.
For example, Livermore’s DYNA3D code was developed in the 1970s to predict the structural response of weapons under impact conditions. The project eventually evolved into a commercial product called LS-DYNA, a mainstay in the automotive industry for crash-test simulation. (See S&TR, June 2017, Ready, Set, Innovate! Entrepreneurship Flourishes at the Laboratory.)
By coordinating intellectual property (IP) protection activities such as patents and copyrights, IPO manages the Laboratory’s technology transfer and its vast IP portfolios through licensing and research collaborations. “Tech transfer enables Livermore’s award-winning research to make a tangible impact on the everyday lives of Americans. Early-stage scientific breakthroughs can mature through collaboration and become impactful innovations in a range of industries,” explains IPO Director Matthew Garrett. Quaite-Randall points out that the benefits flow in the other direction, too. “The Laboratory is quick to grab onto new technologies in the commercial sector that may be useful in serving our mission,” she notes. Large language models are a recent example.
Upgrading Laser Optics
High-power laser sources with maximum beam quality are in high demand for material processing techniques such as cutting, drilling, and welding, which require beams to propagate over long distances with minimal undesirable spreading. Pushing a single laser (which operates at a single wavelength of light) to sufficiently high power to deliver the needed intensity on target creates problems, including waste heat and loss of beam quality, so researchers have explored other power-scaling approaches such as spectral beam combining (SBC). SBC works like a prism in reverse: instead of a white light source entering a prism and exiting separated into many individual colors, multiple laser inputs of different wavelengths strike a single critical optical element and exit combined into one beam, providing the benefit of increased power from preexisting small source lasers.
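The geometry behind SBC can be sketched with the classical diffraction grating equation. The short script below is a simplified, hypothetical illustration; the groove spacing, wavelengths, output angle, and sign convention are assumptions for demonstration, not EXUDE Elite’s actual design parameters. It computes the distinct incidence angle each source wavelength needs so that all beams diffract into one shared output direction:

```python
import math

def incidence_angle_deg(wavelength_nm, groove_nm, output_deg, order=1):
    """Solve the grating equation  sin(theta_in) + sin(theta_out) = m*lambda/d
    (one common sign convention) for the incidence angle that diffracts a
    given wavelength into the shared combined-beam direction."""
    s = order * wavelength_nm / groove_nm - math.sin(math.radians(output_deg))
    return math.degrees(math.asin(s))

# Hypothetical combiner: fiber lasers near 1 micron, 1000-nm groove spacing,
# all beams emerging along a common 30-degree output direction.
groove, out = 1000.0, 30.0
angles = {w: incidence_angle_deg(w, groove, out) for w in (1030, 1050, 1070)}
for w, a in angles.items():
    print(f"{w} nm source aimed at {a:.2f} degrees")  # each color gets its own angle
```

Because each wavelength arrives at its own angle yet leaves along the same direction, the combined output carries the summed power of all the sources, which is the core idea an SBC optic such as EXUDE implements.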
EXUDE Elite builds on the 2014 R&D 100 Award–winning EXUDE technology. (See S&TR, October/November 2014, The Power of Combined Laser Light.) The original EXUDE is an electrically efficient, near-diffraction-limited surface-relief grating structure embedded into the top layer of a reflective multilayer dielectric thin film, which lent itself to SBC fiber laser systems with multikilowatt outputs. While a powerful technology, technological developments, particularly for optical elements, outpaced EXUDE’s capabilities. “Lasers have advanced many orders of magnitude, yet the optic itself did not advance. As a result, the optic became the limiting factor for spectral beam combining technology,” says Hoang Nguyen, principal investigator for EXUDE Elite. “Until the invention of EXUDE Elite, everybody was still using the same original EXUDE component.”
EXUDE Elite advances SBC optics and enables laser output powers to scale up to megawatt levels—a 20-fold improvement over the original EXUDE. Livermore researchers Nguyen and James Nissen of the original EXUDE team, as well as Michael Rushford, Candis Jackson, Sean Tardif, and Brad Hickman, created the modern version, which combines laser beams through transmission rather than reflection. In the original reflective design, individual source laser beams reflect off the optic as a combined beam; in EXUDE Elite, beams enter the optical component separately and emerge from the other side as one beam. This configuration allows for a drastically more compact design than that of EXUDE.
Made entirely of bulk fused silica, EXUDE Elite provides a 100-fold improvement in damage threshold over EXUDE. The original technology’s delicate surface-relief grating structure and multilayer dielectric film stack lack robustness and require a larger laser beam diameter to reduce the irradiance, or optical power per unit area, to which the optic is exposed. EXUDE Elite’s sturdy fused-silica optic tolerates a smaller, more intense laser beam, reducing the cost and size of the system. “Having an optically small system means that optical aberrations can be made less sensitive to the high-quality output beam,” says Rushford. “High quality is a measure of how well the beam can focus and be transmitted through additional optical systems.”
EXUDE Elite’s capacity for monolithic assembly enhances its robustness. The short distance between the optic and the required transform lens enables the two components to be coupled into one compact system, with the optic fixed in position relative to the transform lens within a hollow tube. This specific assembly, the monolithic spectral beam combiner (MOSPEC), is patented by Livermore. In this tube arrangement, the optic’s critical grating structure can face inward, isolating it from the environmental contamination that plagues the original EXUDE and providing ruggedness and ease of alignment.
Such improvements address the challenges the original EXUDE encountered throughout its decade as the state-of-the-art optical component for SBC systems. EXUDE Elite also meets industry and research demand for power scaling of single-output lasers to hundreds of kilowatts and even megawatts. Notably, its output beam quality makes it applicable in laser systems for material processing at long distances—for example, the technology would benefit worker safety in such circumstances as the dismantling of the Baltimore bridge after its 2024 collapse. “Much of this work is extremely dangerous, where workers must come in close contact with heavy beams of metal, physically cut the beams apart, and put them on some sort of rig to carry them away,” says Nguyen. “Any applications involving cutting or welding something at a safe distance could use the laser optic technology that we’ve produced.”
The team is optimistic about EXUDE Elite’s commercialization following in the footsteps of its predecessor, which has generated an estimated $4 billion in industrial revenue since 2016. “Laser and optic technologies have a clear path for technology transfer out of DOE laboratories,” says Hess. “Since our technologies have high functionality and durability standards for deployment to meet mission requirements, they often have high untapped commercial potential.” Adds Nguyen, “We’re looking forward to licensing this technology to manufacturing systems companies, and we’re also looking at different architectures to make it even smaller, more readily available, and more cost-effective for them.”
An illustration of Livermore’s ability to foster continuous innovation, EXUDE Elite’s 2024 R&D 100 Award recognizes the team for pushing the cutting edge even after having achieved it once before. “This award shows what the Laboratory is capable of and how we’re moving forward continually,” says Tardif. Adds Rushford, “EXUDE Elite is, in my opinion, going to be a game changer. But, I also think it is in no way the climax to what can be done by this group.”
Busting Supercomputer Bottlenecks
Large-scale HPC applications are difficult to execute on massively parallel supercomputers. Using these machines to their full capability requires elaborate choreography among the hardware components, the system software, and the application itself—imagine a supercharged airport coordinating thousands of planes across shared runways and gates. Livermore computer scientists have long been developing software solutions that tackle different aspects of HPC operations, bringing to fruition a robust software ecosystem that adapts to the most demanding applications and newest machines, including the El Capitan exascale system. (See S&TR, February 2021, The Exascale Software Portfolio.)
Scientific applications may take days or even weeks to compute, so computational efficiency is a key research area. For instance, as physically separate machines, the supercomputer and the parallel file system send data back and forth while the application runs. This dynamic read/write communication between computational and data storage processes is known as input/output (I/O). One award-winning software technology, UnifyFS, solves many problems associated with I/O slowdowns.
Led by computer scientist Kathryn Mohror, the UnifyFS team developed a software intermediary that accelerates an application’s I/O handoffs. “I/O innovation hasn’t kept up with the leaps in HPC computational performance, which means I/O performance can actually worsen with each generation of HPC systems,” says Mohror, a distinguished member of Livermore’s technical staff. UnifyFS spins up a temporary file system to handle I/O operations more directly and efficiently, thus minimizing delays in providing simulation results.
UnifyFS eliminates a bottleneck called contention, which occurs when parallel processes submit too many simultaneous I/O requests to the parallel file system. By using resources on the compute nodes themselves for ephemeral storage, UnifyFS shortens the time and distance to read/write data for a single application, even if more than one application is running on the same supercomputer. The software then automatically transfers node-local data to long-term storage in the parallel file system.
“Today’s HPC systems typically have some form of node-local storage, such as solid-state drives, attached to every node. Users can configure UnifyFS according to storage type including overflow storage,” explains UnifyFS developer Cameron Stanavige. “Our software really shines when reading and writing to shared files or when a program writes a lot of data, then immediately passes it to another program to read.” With UnifyFS managing the storage options, application teams do not need to rewrite their codes to handle storage themselves. Otherwise, Mohror adds, “Application developers would have to take on a huge, time-consuming coding effort specific to the HPC system.”
UnifyFS also works around the outdated I/O semantics of the portable operating system interface (POSIX), a decades-old standard that can stymie modern HPC workloads. POSIX dictates common programming interfaces for variations of the Unix operating system, and most HPC codes include legacy POSIX function calls that effectively “lock” data files when read or written, slowing down concurrent I/O operations. UnifyFS intercepts these calls and assumes strict POSIX adherence is unnecessary for the majority of HPC applications, thereby never locking a file. For the few applications that truly require POSIX semantics, users can configure this feature within UnifyFS to be more “POSIX-like” while still benefiting from accelerated I/O.
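The ideas above can be illustrated with a toy sketch. UnifyFS itself intercepts POSIX calls made by compiled HPC applications; the Python class below is not its real implementation or API, and the `/unifyfs` prefix, class, and method names are hypothetical. It only mimics the concept: paths under a mount prefix are transparently served from fast node-local scratch space, with no file locking, and finished files are later migrated to the slower, persistent parallel file system.

```python
import os
import shutil
import tempfile

class EphemeralFS:
    """Toy sketch of the UnifyFS concept (not its real implementation):
    redirect I/O under a mount prefix to node-local scratch, then flush
    completed files to the persistent 'parallel file system' directory."""

    def __init__(self, prefix, pfs_dir):
        self.prefix = prefix               # e.g., a "/unifyfs" mount point
        self.pfs_dir = pfs_dir             # long-term parallel file system
        self.scratch = tempfile.mkdtemp()  # stands in for a node-local SSD

    def _resolve(self, path):
        # The "interception" step: mounted paths map to node-local scratch.
        if path.startswith(self.prefix):
            return os.path.join(self.scratch, path[len(self.prefix):].lstrip("/"))
        return path

    def open(self, path, mode="r"):
        return open(self._resolve(path), mode)

    def flush(self):
        # Migrate node-local data to persistent storage once I/O completes.
        for name in os.listdir(self.scratch):
            shutil.copy2(os.path.join(self.scratch, name),
                         os.path.join(self.pfs_dir, name))

pfs = tempfile.mkdtemp()
fs = EphemeralFS("/unifyfs", pfs)
with fs.open("/unifyfs/checkpoint.dat", "w") as f:
    f.write("simulation state")            # fast node-local write
fs.flush()                                 # later transfer to the parallel FS
with open(os.path.join(pfs, "checkpoint.dat")) as f:
    print(f.read())                        # data has reached persistent storage
```

The real software does this at the level of system calls across thousands of nodes, which is why applications gain the speedup without changing a line of their own code.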
From the user’s perspective, UnifyFS is easy to install without a system administrator’s help. Users can integrate it with the resource management software that orchestrates how an application runs across a supercomputer’s nodes. UnifyFS is open source, production-ready, and portable to different types of HPC systems.
Perhaps most significantly, UnifyFS scales as larger workloads consume more compute nodes. In benchmarking tests, the software outpaced traditional file systems’ I/O speed. For example, the team saw an 18-times speedup on 512 nodes of Oak Ridge National Laboratory’s (ORNL) Summit supercomputer when using UnifyFS compared to the traditional parallel file system. When running the Flash-X multiphysics code on 256 nodes of ORNL’s exascale machine Frontier, I/O was 10 times faster with UnifyFS than with the parallel file system. In both cases, UnifyFS freed up bandwidth even as node utilization increased, delivering highly efficient simulation results.
As HPC integrations with cloud computing gain popularity, the team tested applications on an Amazon Web Services (AWS) cloud instance, which resulted in 12.5-times faster write operations with UnifyFS than without—a meaningful gain considering the expense of paying for a high-performance file system. “We’ve successfully installed UnifyFS on an AWS cluster and tested it with small-scale I/O programs, and work is in progress on larger scale AWS clusters with heavy I/O needs,” says Stanavige. “Improved results would mean more cloud-compute time per dollar for the scientists running their applications.”
Even applications that integrate AI workloads can benefit from UnifyFS’s scalability boost. The team tested the software with AI training workloads on Livermore’s Corona supercomputer and discovered that training runs optimized with UnifyFS took just one-fifth the time of runs using only the parallel file system.
The UnifyFS team includes researchers from Livermore, ORNL, and the National Center for Supercomputing Applications at the University of Illinois. Mohror notes, “Our team developed UnifyFS from the ground up and worked very hard to turn it into a production-ready file system. I can’t express how proud I am that we were recognized with this award.” In addition to the 2024 R&D 100 Award, they won the Best Open Source Software Award at the 2023 IEEE International Parallel and Distributed Processing Symposium.
This R&D 100 Award is not the first for some of Livermore’s UnifyFS contributors. Mohror, Stanavige, Tony Hutter, and Adam Moody also won in 2019 for the Scalable Checkpoint/Restart software framework. (See S&TR, March 2024, Evolving at the Speed of Exascale; and July 2020, Resiliency in Computer Applications.) Mohror says, “We’re honored to be recognized for our continued efforts toward helping users get the best performance out of HPC systems. The Laboratory is truly a world leader in HPC, and I’m proud we are a part of it.”
Optimizing Memory Management
The latest generation of supercomputers—such as the Laboratory’s El Capitan and Tuolumne systems—offers unprecedented computational power and simulation resolution. (See S&TR, December 2024, GUIDEing Drug Development.) Underpinning these capabilities is an intricate interplay between hardware components and software libraries, which must accommodate ever-increasing application demands. Developed at Livermore for the DOE’s Exascale Computing Project (ECP), the award-winning UMap software library steps in to manage access to application data in the memory-storage hierarchy.
Large-scale HPC applications leverage hierarchical data storage tiers, each with a specialized purpose: deep storage on hard-disk and solid-state drives; intermediate tiers of dynamic random-access memory in dual in-line memory modules, storage-class memory, and non-volatile memory; and fast memory close to the central processing units inside the compute nodes. Data generated by an application moves to and from these tiers and the main memory space depending on whether, where, and when it is needed as the computational workflow proceeds.
Perhaps unsurprisingly, inefficiencies abound. Datastores come with their own settings, configurations, access rules, and latency characteristics. When multiple applications use the same datastores, transferring data between main memory (where the compute nodes can act on it) and storage (where it persists) becomes even more challenging—as does efficient access to stored data. UMap’s unique memory-mapping interface optimizes an application’s utilization of the supercomputer’s datastores. “Data analytics is becoming even more closely intertwined with traditional HPC algorithms, and the need for higher bandwidth and higher capacity memory is critical,” states project lead Maya Gokhale, a distinguished member of the Laboratory’s technical staff.
Standard system operations require multiple steps to move data from any datastore into main memory where it can be accessed more quickly. Instead, UMap creates a virtual memory space for an application to reference data wherever it resides for easier, more flexible access. When applications access data from this virtual memory region, UMap caches subsets, or chunks, of that data so they are readily available when the application needs them—analogous to a web browser’s caching of web pages for faster loading. UMap tracks the usage of these chunks (also called pages), dynamically evicting them from the cache if no longer needed or, alternatively, pinning frequently used chunks to prevent eviction. Furthermore, UMap reduces the overhead of page faults, which are generated when the application needs a data chunk (page) that is not in main memory. Users can tailor rules for fetching, pinning, and evicting data chunks as well as define a page size suitable for their use cases.
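The caching policy described above can be sketched in a few lines. UMap is a C++ library that services page faults on a virtual memory region; the Python class below is only a conceptual illustration of its policies (paged fetches from a backing store, least-recently-used eviction, and pinning), and all names and parameters here are hypothetical, not UMap’s actual interface.

```python
from collections import OrderedDict

class PageCache:
    """Conceptual sketch of UMap-style caching (not UMap's real API):
    data is fetched from a slow backing store in fixed-size chunks,
    the least-recently-used unpinned chunk is evicted when the cache
    fills, and pinned chunks are never evicted."""

    def __init__(self, backing, page_size=4, capacity=3):
        self.backing = backing        # stands in for a slow datastore
        self.page_size = page_size    # user-tunable, as in UMap
        self.capacity = capacity      # how many chunks fit in fast memory
        self.pages = OrderedDict()    # page number -> bytes, in LRU order
        self.pinned = set()
        self.faults = 0               # misses, i.e., "page faults" served

    def pin(self, page_no):
        self.pinned.add(page_no)      # keep a hot chunk resident

    def _fetch(self, page_no):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)   # hit: mark recently used
        else:
            self.faults += 1                  # miss: load from backing store
            start = page_no * self.page_size
            self.pages[page_no] = self.backing[start:start + self.page_size]
            while len(self.pages) > self.capacity:
                for victim in self.pages:     # evict oldest unpinned chunk
                    if victim not in self.pinned:
                        del self.pages[victim]
                        break
                else:
                    break                     # everything pinned; allow overfill
        return self.pages[page_no]

    def read(self, offset):
        page_no, rel = divmod(offset, self.page_size)
        return self._fetch(page_no)[rel]

store = bytes(range(64))                      # pretend datastore
cache = PageCache(store)
cache.pin(0)                                  # page 0 must stay resident
for off in (0, 5, 9, 13, 0):                  # final read of page 0 is a hit
    cache.read(off)
print(cache.faults)                           # faults occur only on first touch
```

The real library does this transparently underneath ordinary memory accesses, so the application simply dereferences its data while UMap handles fetching, pinning, and eviction behind the scenes.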
Additionally, UMap provides three datastore handlers that improve the memory-mapping solutions built into many HPC operating systems. FileStore lets users dictate the granularity of pages cached in memory; SparseStore optimizes concurrent access to sparse data pages; and NetworkStore enables access to datastores on remote servers. Besides these handlers, UMap is extensible to new types of datastores and therefore scalable to new computing architectures, such as El Capitan’s near-node data storage modules.
UMap’s optimizations ease the burden on application developers, who do not need to rewrite their codes to manage these activities. Nor do application teams have to learn all the nuances of interacting with existing or future datastores. “Users only need to be familiar with UMap’s interface, even if the datastores change,” states Ivy Peng, UMap developer and now associate professor of computer science at the KTH Royal Institute of Technology in Sweden. “Our software also provides customizations to suit an application’s unique requirements, which a default system-level solution cannot do.”
The Livermore Metagenomics Analysis Toolkit (LMAT) application, which detects and characterizes organisms in a biological sample, takes advantage of UMap’s features. (See S&TR, October/November 2015, Two-Part Microbial Detection Enhances Bioidentification.) Accessing and searching LMAT’s enormous database of genomic sequences can quickly strain an HPC system and its datastores. With UMap integrated into LMAT, users can quickly search larger swaths of this data without compromising their search criteria or scope. As users send more queries to the LMAT application, UMap speeds up data retrieval accordingly, demonstrating sustained high performance well beyond the operating system’s memory-mapping capability.
Released as open source, UMap is a cost-effective, compute-efficient solution that has outperformed traditional memory-mapping solutions nearly twofold. Researchers across DOE have seen these benefits in a range of applications, including when UMap is paired with complementary HPC tools. For instance, ECP’s ExaGraph co-design center used UMap in conjunction with the Laboratory-led Metall persistent memory allocator for complex graph analytics applications. When used alongside the Livermore-developed Caliper introspection system, UMap helps application developers better understand memory interactions and performance issues.
UMap introduces a new memory-storage paradigm. “Our interest in the intersection of computer system architectures and data science began more than a decade ago,” explains Gokhale. “High-performance data analytics applications need fast access to very large datasets, and the latency gap has been shrinking between certain types of memory and storage. We are delighted that the awards committee recognized UMap’s capabilities.”
Memory and storage tiers were not as sophisticated a decade ago. As the hardware evolved, UMap arose from the need for a consistent user interface to manage memory-mapping tasks. “We have to always ask what’s next,” states Peng, who worked at the Laboratory from 2019 to 2022. “Power consumption and manufacturing costs are major concerns in the HPC field, so innovation is a constant. To keep up with the rapid pace, we have to look forward with an open mind.”
During her prior tenure at Los Alamos National Laboratory, Gokhale was part of the 2006 R&D 100 Award–winning team behind the Trident compiler framework, a technology targeting a different aspect of HPC performance. She points out, “Advances in HPC require much more than better processors and faster networks. The need for extensibility, plugin architectures, and scalability that were foundational requirements in the design of UMap continue to be of highest importance in the rapidly diversifying HPC space.”
Lab of Opportunity
Livermore’s award-winning technologies serve as both exemplars of what has been accomplished and heralds of what is to come. The nationwide recognition gives Laboratory teams a larger audience for their work, ultimately driving more public and commercial engagement with national laboratories, DOE missions, and R&D conducted in the national interest. Quaite-Randall affirms, “We don’t make products. We make discoveries. It’s amazing what our researchers are doing.”
For the IPO team, guiding researchers through the next phase of their pioneering technologies is a gratifying aspect of mission delivery. “I am truly excited for each team when they receive a prestigious award because so much work has gone into the research,” says the IPO Business Development Executive responsible for coordinating Livermore’s award submissions, Charlotte Eng. Garrett agrees, “Our researchers’ dedication to applying science and technology to make the world a safer place shines through in their motivation to innovate in bold ways.”
—Holly Auten and Lilly Ackerman
For further information contact Elsie Quaite-Randall (925) 423-5210 (quaiterandall1 [at] llnl.gov).




