
Crossing Computational Frontiers

When Dawn, Lawrence Livermore’s latest supercomputer, was installed in 2009, scientists at the Institute for Scientific Computing Research (ISCR) were among the first to put it to work. In doing so, they demonstrated an innovative parallelization strategy for simulating, at unprecedented resolution and scale, the hot, dense plasmas that will occur during fast ignition, the implosion–explosion process used to ignite a fusion reaction at the National Ignition Facility. Their strategy helped solve one of the most difficult scaling problems in numerical simulation: the efficient parallel calculation of long-range interactions. Long-range forces (such as electrostatic or gravitational) are relevant to a variety of modeling scenarios. Previous efforts to develop a fully scalable solution to this complex calculation have failed on less powerful machines.

ISCR’s mission is to assemble and maintain multidisciplinary teams of researchers who work with Livermore project specialists in designing computer applications that address program needs. The institute also collaborates with students and other guests to advance computational capabilities. (See the box below.) Teams choose test-bed projects to push the envelope of application performance on today’s supercomputers. Through this work, ISCR is creating a scientific computing capability that supports the Laboratory’s missions and is developing the technology and expertise needed to effectively utilize next-generation computers, which are expected to expand from thousands to millions of processors.

Computers keep getting larger and more powerful, and Livermore excels at maximizing their potential. “Big computers do not arrive as turnkey devices,” says ISCR Director Fred Streitz, who also leads the modeling and simulations group in Livermore’s Physical and Life Sciences Directorate. “To use these machines to their utmost, exploring more complex problems with ever-increasing resolution, scientists must rethink old approaches and devise new algorithms.”

Streitz is quick to credit his team of computational specialists for the institute’s success. He lauds team leader Jim Glosli and team members Bor Chan, Milo Dorr, Erik Draeger, Jean-Luc Fattebert, Liam Krauss, David Richards, Tom Spelce, and Michael Surh for their ingenuity and dedication in completing the fast-ignition simulation. “Computers run 24/7,” says Streitz, “and the researchers operating these machines sometimes must do nearly the same.”

Glosli, Richards, and Streitz are veterans of a previous campaign, which ran for six continuous weeks and used the entire BlueGene/L supercomputer—more than 200,000 processors. In that groundbreaking simulation, they modeled up to 62.5 billion atoms of liquid aluminum flowing across liquid copper. For their effort, they were honored with the 2007 Gordon Bell Prize and with the Laboratory Director’s Science and Technology Award in 2008.

Between them, ISCR team members have earned eight Gordon Bell prizes over the past five years. Named for C. Gordon Bell, one of the founders of supercomputing, this prize annually recognizes outstanding achievement in high-performance computing, with an emphasis on rewarding innovative science applications. Bell established the prize in 1987 to encourage the further development of parallel processing, the computer design philosophy that has driven high-performance computing since the 1980s.

Dawn simulation of plasma heated by a proton beam.
Even with the computational power of the Dawn supercomputer, researchers can simulate only a thin slice of hot, highly energized argon-doped deuterium–tritium plasma as it is heated by a proton beam. After the beam passes through, the center (purple) is the hottest region of the plasma, and temperatures decrease toward the outer edges (red).

BlueGene/L simulation of liquid copper flowing across liquid aluminum.
All of the processors on the giant BlueGene/L supercomputer ran for six weeks to produce this 1-cubic-micrometer simulation of liquid copper flowing across liquid aluminum like waves reaching a shore.

Hosting Visitors, Gaining Collaborators

The Institute for Scientific Computing Research (ISCR) manages an extensive visitor program for the Laboratory’s Computation Directorate, hosting many guests and postdoctoral researchers throughout the year. For many years, outreach was the institute’s primary goal, and it remains an important function. “We want our visiting students, postdocs, and faculty to be as actively involved in our projects as possible,” says ISCR Director Fred Streitz.

Such collaborations expose students and faculty to the stimulating and challenging work environment of a national laboratory and generate considerable goodwill for Lawrence Livermore. Many students and postdoctoral researchers later join the Laboratory as staff researchers.

The institute’s largest effort is the Summer Visitor Program, which this year brought 100 students and 20 faculty members to Livermore. In any one year, about half of the visitors might come from foreign countries, making logistics quite complicated. Faculty can arrange for sabbatical visits that last up to 12 months. ISCR also serves as the host to computer science graduate students whose academic program is funded through the Lawrence Scholar Program.

Candidates selected for the Summer Visitor Program are hired as summer employees and assigned to work with Laboratory mentors on specific projects. The nature of the project and the assigned work are chosen to complement each candidate’s background and skills. Visits may last from 4 to 30 weeks (sometimes continuing into the fall semester). During that time, participants have an opportunity to learn more about their chosen field and related research areas through numerous seminars and informal interactions with staff and other visitors to the institute.

ISCR also provides opportunities for shorter-duration visits. Scientists and scholars from academia or industry often need only intermittent access to the Laboratory’s staff and resources, and funding is not the critical issue. Such arrangements allow visitors and ISCR scientists to explore research areas of mutual interest. According to Streitz, “Our administrative staff stays incredibly busy making arrangements for all these programs and various visitors.”

A New Supercomputing Era
The Dawn supercomputer, which Streitz says is “shockingly powerful,” comes from the same IBM lineage as BlueGene/L, which held the title of world’s fastest supercomputer from November 2004 to May 2008. Dawn can perform 500 trillion floating-point operations per second (teraflops) and is laying the applications foundation for Sequoia, a 20-petaflops (or 20 quadrillion floating-point operations per second) machine scheduled for delivery in 2011.

Sequoia will process calculations designed to build more accurate physical models of nuclear weapon detonations and will strengthen predictive capabilities by running very large suites of complex simulations. This work is a cornerstone of the National Nuclear Security Administration’s Stockpile Stewardship Program to ensure the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without underground testing.

Lab scientist Rhys Ulerich with summer students Daniel Osei-Kuffuor and Hilari Tiedeman.
Students (from left) Daniel Osei-Kuffuor and Hilari Tiedeman work with Livermore scientist Rhys Ulerich as part of the annual Summer Visitor Program hosted by the Institute for Scientific Computing Research.

Improving Traffic Flow on the Processors
Igniting plasma to achieve fusion requires heating and compressing the target fuel to extremely high temperatures and pressures. To achieve ignition conditions, researchers must understand and control the energy flow in and out of the laser system as well as between various plasma components, such as ions, electrons, and photons. A direct particle simulation of inhomogeneous nonequilibrium plasma can capture the many-body physics of the energy flow and provide that understanding, but the calculation is computationally challenging. The key hurdle—deemed impossible by some—is developing efficient methods to scale the routines that solve for the effect of long-range Coulomb forces so these mechanisms can be simulated at the relevant sizes.
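
To see why the long-range piece is so demanding, consider the most direct approach: every charged particle interacts with every other charged particle, so the work grows as the square of the particle count. The short sketch below (Python, illustrative only and not the team’s production code) makes that cost concrete for a small system.

    import numpy as np

    def coulomb_energy(positions, charges):
        """Direct O(N^2) Coulomb sum: q_i * q_j / r_ij over all unique pairs."""
        n = len(charges)
        energy = 0.0
        for i in range(n - 1):
            # Distances from particle i to every particle j > i.
            r = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
            energy += np.sum(charges[i] * charges[i + 1:] / r)
        return energy

    rng = np.random.default_rng(0)
    n = 1000
    pos = rng.random((n, 3))             # n particles in a unit box
    q = rng.choice([-1.0, 1.0], size=n)  # a mix of positive and negative charges
    print(coulomb_energy(pos, q))        # already ~500,000 pair terms for 1,000 particles

At the particle counts relevant to fast ignition, such a direct sum is out of reach, which is why efficient, scalable long-range solvers matter.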

Impossible to some, but not for the ISCR team. In analyzing the problem, the researchers noted that the Coulomb solver has two main computational pieces that, although largely independent, have different scaling behaviors. This observation led them to develop a heterogeneous decomposition process so that scientists can flexibly map, or “tune,” the computational pieces to subsets of the hardware. The tunable approach provided excellent scaling of the Coulomb problem to thousands of processors. As a result, computational experts can for the first time incorporate long-range physics into extremely large-scale simulations.
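
As a rough illustration of how pieces of a calculation can be mapped to subsets of the hardware (a minimal sketch assuming the mpi4py Python bindings; the solver routines and the one-quarter split are hypothetical placeholders, not the team’s implementation), an MPI communicator can be split so that one group of processors handles the long-range piece while the rest handle the short-range particle-particle piece, and the two run concurrently:

    from mpi4py import MPI

    def solve_long_range(comm):
        # Placeholder for a mesh-based long-range Coulomb solve on this group.
        return comm.allreduce(1.0)

    def solve_short_range(comm):
        # Placeholder for the short-range particle-particle work on this group.
        return comm.allreduce(2.0)

    world = MPI.COMM_WORLD
    rank, size = world.Get_rank(), world.Get_size()

    # Dedicate roughly a quarter of the machine to the long-range solver
    # (a made-up ratio; in practice the mapping is tuned to the problem).
    n_long = max(1, size // 4)
    color = 0 if rank < n_long else 1           # 0 = long-range group, 1 = short-range group
    group = world.Split(color=color, key=rank)  # each group gets its own communicator

    # The two pieces proceed concurrently, each on its own subset of processors.
    partial = solve_long_range(group) if color == 0 else solve_short_range(group)

    # Combine contributions from both groups at the end of the time step.
    total = world.allreduce(partial)
    if rank == 0:
        print("combined result:", total)

Because each group has its own communicator, each piece can be parallelized in the way that suits it best, which is the kind of flexible, tunable mapping described above.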

Think of driving down a winding, two-lane road. No matter how fast you want to drive, the slowest car determines the speed for all cars in your lane. Computer scientists faced this problem in extending sequential applications to calculate complex interactions. Solving that problem led to parallel computing, which is like adding more lanes to the highway.

Even with today’s massively parallel technology, the analogy still applies. “Using a single strategy to parallelize a simulation limits overall scalability because the least scalable component still holds back the entire code,” says Glosli. “In our analogy, it’s like having a truck in every lane.” By assigning parts of a calculation to different processors—in essence, limiting trucks to just a few lanes—the team developed an approach for parallelization that allows multiple force terms to be computed concurrently. The overall calculation now scales effectively across hundreds of thousands of processors, maximizing the power of Dawn. The team was recognized as a 2009 Gordon Bell Prize finalist for developing this strategy.
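
A toy scaling model shows how much having a truck in every lane can cost. The numbers below are invented for illustration and are not measurements from Dawn: assume a short-range force term that scales almost ideally and a long-range term that stops scaling beyond a few thousand processors.

    # Hypothetical workloads and scaling limits, chosen only to illustrate the idea.
    T_SHORT = 1000.0   # one-processor cost of the short-range force term (seconds)
    T_LONG = 50.0      # one-processor cost of the long-range force term (seconds)
    P_CAP = 4096       # processor count beyond which the long-range solver stops scaling
    P = 100_000        # total processors available

    def t_short(p):
        return T_SHORT / p                # near-ideal strong scaling

    def t_long(p):
        return T_LONG / min(p, P_CAP)     # processors beyond P_CAP are wasted

    # Single strategy: every processor works on both terms, one after the other,
    # so the poorly scaling long-range term sets the pace for the whole step.
    single = t_short(P) + t_long(P)

    # Heterogeneous decomposition: give the long-range term only the processors it
    # can use, hand the rest to the short-range term, and run the two concurrently.
    hetero = max(t_short(P - P_CAP), t_long(P_CAP))

    print(f"single strategy:       {single:.4f} s per step")
    print(f"heterogeneous mapping: {hetero:.4f} s per step ({single / hetero:.1f}x faster)")

In this made-up case, running the two force terms concurrently on separate processor subsets nearly doubles the throughput of each time step. The real gains depend on the machine and the physics, but the principle is the same: no single lane has to carry every truck.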

“The heterogeneous decomposition of the computational problem and optimal mapping to hardware has far-reaching implications for scientific computing,” notes Glosli. “It likely will affect the way future computer codes are developed for massively parallel environments.” The flexibility of this approach allows more complicated models to be developed, and the technique can be applied on current and next-generation machines. The team has also developed methods to include shorter-range physical processes, such as radiation, recombination, ionization, and fusion, in the code. Says Streitz, “Our goal for this project is to deliver a comprehensive simulation tool for computing correlations and transport properties in burning plasma.”

—Katie Walter

Key Words: Dawn, Gordon Bell Prize, Institute for Scientific Computing Research (ISCR), National Ignition Facility, plasma simulation, Sequoia, Summer Visitor Program.

For further information contact Fred Streitz (925) 423-3236 (streitz1@llnl.gov).

