Working in the high-performance computing (HPC) field can feel like riding in a rocket. It takes you on an exhilarating, white-knuckle ride into uncharted space. You simultaneously appreciate the technology currently propelling you forward and wonder how far and fast the rocket can go. Pushing technological and scientific boundaries is how we explore new possibilities.
The next phase of computing technology is known as the exascale era, when computers will be able to process an exaflop—a quintillion (10^18) calculations per second. With a vast increase in compute power, storage, and memory over existing systems, exascale computing will allow us to peer more deeply into the inner workings of physical systems. We will uncover scientific surprises and learn more than ever before. Such insights will help advance Lawrence Livermore's missions.
This potential drives our participation in the Department of Energy’s Exascale Computing Project (ECP), in which Livermore staff hold key leadership positions and participate in innumerable projects. As the feature article titled "The Exascale Software Portfolio" describes, the ECP is forging a path toward a comprehensive and reliable exascale computing ecosystem that will benefit research in national security, foundational science, healthcare, energy, and other areas.
Hardware commands most of the attention in discussions about HPC and exascale capabilities. Indeed, supercomputers are tangible, visible assets that enable researchers to study scientific processes in detail. Livermore has a track record of standing up uniquely capable systems, and we are fortunate to house and manage some of the world’s most powerful machines. For example, Sierra and Lassen are our largest heterogeneous systems, which use both central processing units (CPUs) and graphics processing units (GPUs) to perform calculations more quickly and efficiently than their predecessors. Leveraging GPUs is one step toward a more economical architecture that addresses the challenges in transistor technology associated with the ending of Moore’s Law and Dennard scaling. Debuting in just a few years, our first exascale system, El Capitan, will consume considerably less electrical power (around 30 megawatts) with GPUs than it would without (at least 90 megawatts).
As HPC systems become more complicated, so too does the programming of applications that run on them. We constantly make tradeoffs between hardware decisions and software design, and in some ways, software is the more difficult part of achieving exascale computing. The lifespan of software must be considered well beyond a single project's timeline or any one machine. We saw in the transition from earlier systems to Sequoia's CPU-based architecture, and again to Sierra's hybrid GPU–CPU design, that porting codes from one type of machine to another is a monumental task. We need to design software today that will still be relevant on HPC systems 10 or even 20 years from now—even though we have limited clarity about what those systems might look like.
If we are able to design and build software that is robust against upcoming changes to computer architecture, we will have greatly reduced our mortgage for the future. Our very large software portfolio will only be able to run dependably on new machines if we embrace a development model of reusing components while providing a middle layer that insulates scientific applications from the details of the underlying hardware. The feature article describes many of our innovative software libraries and tools, including those developed under the RADIUSS (Rapid Application Development via an Institutional Universal Software Stack) initiative. The ECP and RADIUSS share this sustainability strategy, focusing on modular components and practical implementation to plan for an uncertain future.
The exascale threshold is often touted as the finish line in HPC, but it is merely a mileage marker in an ongoing competition with the rest of the world. Even as we prepare for El Capitan, we must continue innovating for the computing landscape beyond GPUs and exaflops. Creative changes in architecture will have major implications for how we build software. The next phase of computing technology is always on the horizon. No one knows yet what it will look like, but we at Lawrence Livermore aim to find out.