Seismic waves from earthquakes, volcanoes, and man-made explosions can propagate through Earth’s interior for thousands of kilometers. Tens to hundreds of events per day are routinely detected by hundreds of seismometers arrayed around the world. These seismic signals are carefully scrutinized to determine whether any one of the events was an underground nuclear explosion, which would violate the Comprehensive Nuclear-Test-Ban Treaty (CTBT). CTBT is a key global tool for preventing the proliferation of nuclear weapons.
Lawrence Livermore scientists have long been leaders in the seismic research that supports international treaties limiting nuclear explosive testing, which date back more than a half-century. Treaty verification research includes developing complex models of geologic structure that are used to understand how seismic waves are distorted while propagating from source to receiver. The models are also used in basic earth science research to deduce the past and future evolution of our planet, including the formation and movement of giant tectonic plates. That effort has identified new features deep inside Earth, including an ancient, buried tectonic plate beneath the Indian Ocean.
Livermore scientists and engineers are also conducting chemical explosives tests to refine models of seismic wave generation and using advanced “big data” technologies—that is, data-intensive computational science—to glean new insights from the vast amounts of seismic data accumulated over six decades of nuclear monitoring. Finally, researchers are using supercomputer simulations to better understand ground motions from earthquakes as a means to help California prepare for major temblors expected to strike the state in the future.
Sweeping the Globe
One of the Laboratory’s most significant accomplishments in the seismic sciences is a revolutionary seismic monitoring technology called the Regional Seismic Travel Time (RSTT) model and computing code. Developed with colleagues from Los Alamos and Sandia national laboratories, RSTT improves the accuracy of locating seismic events by incorporating a three-dimensional (3D) model of Earth’s crust and upper mantle, along with the regional data needed for enhanced detection, into monitoring systems established from the 1960s to the present. RSTT also offers blazing speed: in less than 1 millisecond, the system calculates the seismic-wave travel times that are used to locate seismic events through triangulation. Since 2010, RSTT has been provided to all CTBT member states so that their monitoring organizations can consistently determine the location of a seismic event.
The model earned Livermore seismologist Stephen Myers the Department of Energy’s prestigious E. O. Lawrence Award in 2014. A researcher in Livermore’s Global Security directorate, Myers was recognized for leading RSTT development as part of Livermore’s enduring nuclear nonproliferation efforts.
For more than three decades, several nations conducted aboveground nuclear tests. In 1963, the Limited Test Ban Treaty—signed and ratified by the United States, the United Kingdom, and the Soviet Union—banned nuclear explosions in the atmosphere, underwater, and in outer space, although the last aboveground nuclear explosive test was not conducted until 1980, by China. The Threshold Test Ban Treaty, which entered into force in 1990, limited underground nuclear weapons explosions by the United States and the Soviet Union to 150 kilotons.
Signed by President Clinton and other heads of state in 1996, CTBT banned all nuclear explosions and provided the framework for international monitoring to detect underground nuclear explosions. This monitoring network uses four technologies: seismic, hydroacoustic, infrasound, and radionuclide. Of these, seismic sensors are the most sensitive to signals from explosions underground, which is the most likely environment for future nuclear explosions, announced or not.
The Preparatory Commission for the CTBT Organization, headquartered in Vienna, Austria, is making preparations to implement CTBT. The organization’s International Data Center receives data from monitoring stations and forwards both raw and analyzed data to member states. The U.S. National Data Center (NDC) at Patrick Air Force Base in Florida is responsible for U.S. monitoring under the treaty. The National Nuclear Security Administration (NNSA) laboratories provide data-analysis algorithms and technology needed for the U.S. NDC to lower monitoring thresholds and improve monitoring performance.
Several nations, including India, Pakistan, and North Korea, have not signed CTBT, and all three have conducted underground tests since the treaty was signed. The most recent announced underground nuclear test, conducted by North Korea in 2013, generated a magnitude-5.1 seismic event. As with North Korea’s two earlier underground explosions, in 2006 and 2009, member states received information about the explosion’s location, magnitude, time, and depth within two hours, before North Korea had even announced the test.
Myers explains that as underground tests become smaller, as evidenced with the North Korean explosions, the monitoring task grows increasingly difficult. He says, “As we entered the CTBT era, we saw the need to focus on smaller events, but these are not reliably detected at distances typified by established global monitoring networks [a distance of approximately 3,000 to 10,000 kilometers, also known as the teleseismic range].” Very small seismic events may only be detectable using seismic stations close to the test location.
Seismic Waves Tell the Tale
Seismic waves reflect and refract from geologically distinct layers: the crust, the upper mantle, the lower mantle, and the core. The speed of seismic waves depends on the elastic properties and density of the material through which they travel. Cold, stiff rocks allow seismic waves to travel quickly, whereas soft, molten rocks slow them down. The two types of seismic waves most useful for locating nuclear explosions are P-waves (compressional waves) and S-waves (shear waves). P-waves travel faster than S-waves and therefore arrive first. The layered structure of Earth also creates several propagation paths, so a seismogram contains several P-wave and S-wave arrivals. To use these waves to pinpoint an event’s location, scientists must understand details of Earth’s structure and the physics of how that structure affects wave propagation.
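To make the location arithmetic concrete, the sketch below turns each station’s S-minus-P arrival-time difference into a distance and then triangulates an epicenter from three stations. It is a minimal illustration in Python: the constant average velocities, station coordinates, and arrival times are hypothetical, and operational systems such as RSTT use full Earth models rather than constants.

```python
import numpy as np

# A toy location workflow: (1) turn each station's S-minus-P time into a
# distance, (2) find the point whose distances to all stations best match.
# Velocities, stations, and times below are hypothetical.
VP = 6.0  # assumed average crustal P-wave speed, km/s
VS = 3.5  # assumed average crustal S-wave speed, km/s

def distance_from_sp(sp_seconds):
    """d/VS - d/VP = S-P time  =>  d = (S-P) / (1/VS - 1/VP)."""
    return sp_seconds / (1.0 / VS - 1.0 / VP)

stations = np.array([[0.0, 0.0], [120.0, 10.0], [40.0, 90.0]])  # x, y in km
sp_times = np.array([12.0, 9.5, 14.2])                          # seconds
dists = distance_from_sp(sp_times)

# Least-squares grid search: the epicenter is where the distance circles
# from all three stations (nearly) intersect.
xs, ys = np.meshgrid(np.linspace(-50, 150, 201), np.linspace(-50, 150, 201))
misfit = sum((np.hypot(xs - sx, ys - sy) - d) ** 2
             for (sx, sy), d in zip(stations, dists))
iy, ix = np.unravel_index(np.argmin(misfit), misfit.shape)
print(f"best-fit epicenter: ({xs[iy, ix]:.0f} km, {ys[iy, ix]:.0f} km)")
```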
For decades scientists successfully employed a simple one-dimensional (1D), radially symmetric model of Earth to monitor the large nuclear explosions that were allowed under the Limited Test Ban Treaty. These strong signals propagate to teleseismic distances mainly through Earth’s lower mantle, where velocity perturbations are minimal. “These explosions were observed all over the world,” notes Myers. “The P- and S-waves were separated nicely in time and gave us clear, uncomplicated signals.” The 1D model predicts long-range P-wave travel times with an error of less than 0.3 percent.
However, detectable signals from a small explosion propagate in Earth’s upper mantle and crust and involve regional distances—less than 2,000 kilometers—between event and sensor, with the error in travel time prediction increasing to as high as 10 percent. In other words, incorporating regional data into a global monitoring system could significantly increase error in event location. Furthermore, using regional data alone could result in a predicted location being off by many tens of kilometers. Nevertheless, the CTBT monitoring system cannot afford to ignore regional data because the amplitude of the signal from a very small event could dip below the background noise level recorded by a seismometer at teleseismic distances.
Concerned with the large errors inherent in predictions based on regional signals and a 1D model, the U.S. NDC tasked the NNSA laboratory team led by Myers to develop a new technology that would enable incorporation of regional data into the monitoring system without increasing event location error. “Our goal,” says Myers, “was to confidently monitor at lower thresholds while maintaining location accuracy.” Making the task even more challenging, the U.S. NDC stipulated that the new model must be computationally efficient and “lightning fast” on a single-processor computer.
A Breakthrough in Three Dimensions
In 2006 Myers formed a scientific team of experts in signal propagation, signal analysis, and computations. By 2010, the team had developed a model that accounts for crust and upper-mantle variations in seismic velocity by dividing Earth’s surface into approximately 41,000 nodes that form the vertices of triangular tiles. Node spacing is approximately 1 degree of arc (about 111 kilometers), and the vertical profile of seismic velocity stored at each node is interpolated to render a 3D model of Earth’s crust and upper mantle. This 3D grid of seismic wave velocities depicts geologic structure, including variations in the depth of, and the abrupt increase in wave velocity at, the boundary between the crust and mantle, called the Moho discontinuity. The model is used to compute the arrival times of waves that refract below the Moho discontinuity, as well as waves that are trapped in the crust.
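The Python sketch below illustrates the kind of interpolation just described: each node stores a velocity-versus-depth profile, and a point inside a triangular tile receives a barycentric-weighted blend of its three corner profiles. The node coordinates, depths, and velocities are hypothetical stand-ins, not values from the actual RSTT model.

```python
import numpy as np

# Toy profile interpolation on a triangular tessellation. All coordinates,
# depths, and velocities here are hypothetical.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # tile corners, degrees
depths = np.array([0.0, 10.0, 20.0, 35.0, 60.0])        # profile depths, km
profiles = np.array([                                   # P velocity (km/s) per node
    [5.0, 6.0, 6.4, 6.8, 8.0],
    [5.2, 6.1, 6.5, 6.9, 8.1],
    [4.9, 5.9, 6.3, 6.7, 8.0],
])

def barycentric_weights(p, tri):
    """Solve p - a = w1*(b - a) + w2*(c - a); weights are (1-w1-w2, w1, w2)."""
    a, b, c = tri
    m = np.column_stack((b - a, c - a))
    w1, w2 = np.linalg.solve(m, np.asarray(p) - a)
    return np.array([1.0 - w1 - w2, w1, w2])

def velocity_profile_at(p):
    """Blend the corner profiles; result is one velocity per depth."""
    return barycentric_weights(p, nodes) @ profiles

print(velocity_profile_at([0.3, 0.3]))  # interpolated velocities at each depth
```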
“RSTT represents the first time that a 3D model of Earth was specifically designed for monitoring,” Myers says. As a result, it reduces regional travel time error to the level of teleseismic error, thus allowing smaller events to be confidently located in monitoring systems. “It took the incorporation of regional data into the global monitoring system to lower monitoring thresholds to the point where people were more comfortable with seismic event location under the CTBT,” he says.
An important advantage of RSTT is that it narrows the search area for an event by approximately a factor of 10 compared with the standard 1D global model. CTBT calls for an international on-site inspection if a dispute arises over whether a treaty-violating underground nuclear explosion has occurred. Such an inspection is limited to 1,000 square kilometers, equivalent to a circle with a radius of about 18 kilometers (since the area of a circle is πr², r = √(1,000/π) ≈ 17.8 kilometers). (See Supporting an Exercise of Global Importance.) Clearly, the more accurate the location estimate, the smaller the inspection area, and the easier the inspectors’ job will be.
Event Type: Equally Important
Scientists determining the location of a seismic event must also distinguish an explosion from a host of possible nonnuclear events, such as conventional weapon explosions, mine explosions, earthquakes, and volcanic activity. “Travel times are to location as amplitudes are to identification of events,” says Myers. Typically, scientists examine the ratio of P-wave to S-wave amplitude as an indicator of the seismic event type. An explosion radiates seismic waves outward from a point and typically generates stronger P-waves than S-waves, whereas earthquakes, which involve lateral slip on fault planes, predominantly generate S-waves. However, some earthquakes may appear to be explosions because S-waves can greatly decrease in amplitude when propagating through underground regions of strongly attenuating rock.
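A minimal sketch of that amplitude-ratio idea appears below: measure the peak amplitude in a window after each arrival and compare the ratio. The window length, arrival times, decision threshold, and synthetic trace are all hypothetical; operational discriminants are calibrated by region and frequency band.

```python
import numpy as np

# Toy P/S amplitude-ratio discriminant on a synthetic, explosion-like trace
# (strong P, weak S). All parameters below are hypothetical.
def ps_ratio(trace, dt, p_arrival, s_arrival, window=5.0):
    """Ratio of peak absolute amplitudes in windows after the P and S arrivals."""
    def peak(t0):
        i0, i1 = int(t0 / dt), int((t0 + window) / dt)
        return np.max(np.abs(trace[i0:i1]))
    return peak(p_arrival) / peak(s_arrival)

dt = 0.01
t = np.arange(0.0, 60.0, dt)
trace = (1.0 * np.exp(-((t - 10.0) / 0.5) ** 2)      # strong P arrival at 10 s
         + 0.3 * np.exp(-((t - 25.0) / 0.5) ** 2)    # weak S arrival at 25 s
         + 0.01 * np.random.default_rng(0).standard_normal(t.size))

r = ps_ratio(trace, dt, p_arrival=10.0, s_arrival=25.0)
print(f"P/S = {r:.1f} ->", "explosion-like" if r > 1.0 else "earthquake-like")
```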
“The underground nuclear tests conducted by North Korea challenged traditional methods for determining event type,” says Myers. Fortunately, incorporating regional data reduces that ambiguity. “Regional data helped to unambiguously determine that the North Korean events were indeed nuclear explosions.”
In light of the demonstrated success of RSTT in 2010, the U.S. NDC recommended to the U.S. State Department that RSTT be shared with other nations. Four international workshops have been held to date to educate scientists from 66 nations on its use. “With everyone using RSTT we have a consistent framework, so we get consistent answers about the location and nature of an event,” explains Myers. Widespread use of the technology has also led to significant improvements in the model by incorporating regional data that were previously the domain of individual countries. The RSTT program runs on a laptop computer, a feature important to many international users. Myers adds that most national seismic organizations are focused on monitoring local earthquakes, and RSTT is also directly applicable to that purpose.
A Code That Goes Deep
The RSTT method is fast, reliable, trusted, and accessible, but Livermore scientists are constantly pushing to incorporate higher-fidelity data to achieve even greater location accuracy and a lower detection threshold. An improved model called LLNL-Global 3D (G3D) extends the velocity profiles of the crust and upper mantle used in RSTT down to features in Earth’s lower mantle, all the way to the core–mantle boundary. LLNL-G3D retains the same basic node structure but does not incorporate the travel-time approximations that RSTT uses to achieve millisecond computational time. Instead, it calculates the best 3D travel-time solution for waves that propagate from the surface through the deepest parts of Earth.
Seismologist Nathan Simmons, who has led the development of LLNL-G3D, says, “RSTT is well suited for fast 3D travel-time estimates of waves that do not travel too deep into the upper mantle. Most of the time it’s a very good approximation. However, we wanted to build a more explicit representation of the whole Earth that takes into account really complicated and deep geologic structure.” The new model image, which he compares to a geologic computed tomography (CT) scan, allows 3D travel-time prediction for energy confined to the shallow mantle, as well as for energy traveling deep into Earth. The enhanced geologic detail improves seismic epicenter accuracy to about 4 to 5 kilometers in well-sampled cases. This improvement is significant for several regions, such as the Middle East, where RSTT cannot fully capture the effects of complicated geologic structure.
Although LLNL-G3D is still in its basic research phase, Simmons anticipates it will see widespread use within a decade. One disadvantage of this more-accurate model, however, is that it requires 100 to 1,000 times longer to compute a travel time than is required by ultrafast RSTT.
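To see why a full 3D computation costs so much more than a precomputed approximation, consider the sketch below, which integrates slowness (the reciprocal of velocity) along a path sampled through a 3D velocity volume. This is a simplified illustration with a hypothetical two-layer grid and a straight ray; production whole-Earth codes trace bending ray paths through far larger, irregular models.

```python
import numpy as np

# Toy travel-time calculation through a 3D velocity grid: sum segment
# lengths divided by local velocity along a straight path. The grid,
# velocities, and ray geometry here are hypothetical.
def travel_time(velocity, start, end, n=1000):
    """Integrate slowness (1/velocity) along a straight sampled path."""
    pts = np.linspace(start, end, n)                         # (n, 3) samples
    seg = np.linalg.norm(np.subtract(end, start)) / (n - 1)  # km per segment
    idx = np.clip(pts.astype(int), 0, np.array(velocity.shape) - 1)
    v = velocity[idx[:, 0], idx[:, 1], idx[:, 2]]            # nearest-cell speed
    return np.sum(seg / v[:-1])

vel = np.full((100, 100, 100), 8.0)   # 8-km/s block, 1-km cells
vel[:, :, 50:] = 10.0                 # faster material below 50 km depth
print(f"travel time: {travel_time(vel, (0, 0, 0), (99, 0, 99)):.1f} s")
```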
Studying the 2014 Napa Quake
Seismologist Artie Rodgers is using Livermore supercomputers to simulate the ground motion of the magnitude-6.0 earthquake that struck the southern Napa Valley in California on August 24, 2014, rupturing a 12-kilometer stretch of the West Napa Fault. One fatality, 100 injuries, and more than $100 million in damage resulted from the quake, the largest to hit the San Francisco Bay Area since the magnitude-6.9 Loma Prieta event in October 1989.
The simulations by Rodgers and his team are testing four different rupture models that attempt to describe how seismic waves were generated in the geologic strata underlying the greater Napa area. Two models are from the University of California (UC) at Berkeley, one from UC Santa Barbara, and one from the California Institute of Technology. Rodgers combines each model with a three-dimensional (3D) subsurface geologic model developed by the U.S. Geological Survey (USGS) in Menlo Park. “Broadly, all four models describe the same thing, but they differ in details because each is based on a different method and type of seismic data,” he says. “We want to see whether one provides better predictions of the recorded motions.”
The Bay Area is home to hundreds of seismometers and Global Positioning System sensors, and so the quake generated copious high-quality data, which were available almost immediately to earthquake scientists and engineers. At the same time, current simulation capabilities far exceed what existed in 1989, allowing Rodgers’ team to perform higher resolution simulations than have generally been conducted. “We’re trying to improve our understanding of the shaking and damage that can accompany earthquakes,” says Rodgers. “Advanced simulations can give stakeholders, policymakers, and the public a more accurate idea of what to expect from future quakes on this and other fault lines.”
The simulations are performed with the Seismic Wave 4th Order (SW4) code developed by Livermore computer scientists Anders Petersson and Bjorn Sjogreen. The simulations run on the Laboratory’s Cab supercomputer, a 430-teraflop Linux system, and take 6 hours to complete with 4,000 microprocessors running in parallel. “What used to be record-breaking calculations are now routine, but as we learn more we want to push these runs to higher resolution,” Rodgers says.
The simulations demonstrate the effectiveness of 3D models: shaking can be seen to move slowly across the sedimentary structure of the Napa Valley, with seismic waves effectively trapped as they bounce around the valley. Such detail is missing from simpler simulations that lack detailed 3D structure.
Rodgers notes that the Napa quake caused widespread damage, especially to unreinforced masonry buildings, but nowhere near the devastation expected from a magnitude-7 quake along other Bay Area faults. He adds that with strong ties to California universities and USGS, and with its 3D simulation expertise and some of the world’s most powerful high-performance computing resources, “Livermore is uniquely situated to help California understand shaking in the future and prepare for damaging quakes.”
Physics Experiments Fill the Gap
Myers emphasizes that the historic collection of seismic data from underground nuclear explosions may not be representative of all future nuclear seismograms. In particular, future explosions may be conducted in different rock types and may not adhere to established relationships between explosion yield and emplacement depth. To fill this gap in experimental data, the NNSA laboratories launched the Source Physics Experiment, a series of underground chemical explosive tests conducted at NNSA’s Nevada National Security Site. These nonnuclear explosions allow researchers to study, under controlled conditions, how explosions generate seismic waves.
“We’re looking at explosion yield and depth of device burial because the historical record provides little or no data on small and deep explosions,” says Myers. The series of experiments is providing data in a controlled environment to test physics models. Heavily instrumented explosions are being conducted first in granite, a material that is strong and heavily fractured. The fractures appear to enhance the generation of S-waves, which can make an explosion look more like an earthquake. The explosions in granite will be followed by explosions in alluvium, which is very weak and has fewer fractures.
The test results are being used to refine computational models of seismic wave generation using Livermore’s GEODYN code. “We want to make sure our codes match not only the historic data set but also new data from these small explosions,” comments Myers.
Tapping the Power of Big Data
The CTBT’s global monitoring network, which is nearing completion, will consist of 50 primary and 120 auxiliary seismic stations, but thousands of other sensors worldwide are also recording or detecting seismic signals. “Seismic sensors are ubiquitous and are generating prodigious amounts of information that must be stored and should be processed,” says Myers. Currently, Lawrence Livermore stores about 600 terabytes of seismic information, encompassing 5 billion seismic measurements.
“If we could make use of all this data, we could do some pretty incredible things,” Myers states. The process of efficiently storing, sorting through, and finding new meaning in enormous volumes of data is a key objective of big data (that is, data-intensive analysis), and Livermore scientists have been among the leaders of this relatively new field. They have already reduced the time it takes to cross-correlate Livermore’s trove of seismic data from a month to a day. “With big-data techniques we can now test new ideas in a day,” he notes.
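As an illustration of the kernel inside such big-data workloads, the sketch below cross-correlates a known waveform template against a continuous record to find a buried event. All signals are synthetic, and the normalization shown is one common choice, not necessarily the approach used on Livermore’s archive.

```python
import numpy as np
from scipy.signal import correlate

# Toy template matching: slide a known waveform over a continuous record
# and flag where the normalized correlation peaks. Signals are synthetic;
# archive-scale versions run this over decades of data.
rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * 2.0 * np.linspace(0.0, 1.0, 100))  # 2-Hz wavelet
record = 0.1 * rng.standard_normal(10_000)
record[4200:4300] += template                                    # hidden event

cc = correlate(record, template, mode="valid")
# Normalize by the energy in each sliding window and in the template,
# so a perfect match scores 1.0.
window_energy = np.convolve(record ** 2, np.ones(template.size), mode="valid")
ncc = cc / np.sqrt(window_energy * np.sum(template ** 2))
print(f"best match at sample {np.argmax(ncc)}, correlation {ncc.max():.2f}")
```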
One way to make use of big data would be to compare incoming seismic data associated with an underground explosion with the historic record of underground nuclear explosions and earthquakes over the past several decades. The result could lower monitoring thresholds and make possible 100-meter accuracy for locating underground tests at known test sites and 1-kilometer accuracy for other areas. “We’re only in the infancy of using big data,” Myers says, “but we see how it has the potential to transform the way we perform international monitoring.”
Discoveries in Plate Tectonics
The tools that Livermore scientists have developed to generate ever more-accurate tomographic images of Earth’s interior for nuclear explosion monitoring have had significant byproducts in other areas of science. For instance, increased resolution has recently revealed new, intriguing features that provide evidence for how tectonic plates move, collide, and are subducted into the lower mantle, improving our understanding of how Earth evolved. (Subduction is the process by which one tectonic plate slides under another plate and sinks into the mantle.)
One of the newly discovered features in the LLNL-G3D model is a tectonic plate located beneath the southern Indian Ocean, stretching southward from Indonesia to the submerged volcanic Kerguelen Plateau near Antarctica, and eastward beneath Tasmania. This plate, called the Southeast Indian Slab, resembles the ancient Farallon Plate that was discovered by seismologists in the late 1980s. The plate’s anomalously high seismic velocity indicates cold, stiff material that was once at Earth’s surface.
The Southeast Indian Slab is believed to have sunk mostly during the Jurassic period 150–200 million years ago, when the Indian and Australian subcontinents were close to one another in what is now the southern Indian Ocean. The plate had gone unnoticed until spotted by Simmons, who credits his discovery to advances in data-processing techniques, better data from numerous published reports, and more-advanced imaging methods. “We do a lot to the data we receive that enhances our ability to reveal new structures,” says Simmons.
The research suggests that subducted plates can remain stuck within the upper mantle far longer than previously expected before pushing through to the lower mantle. Simmons says there are intriguing links potentially connecting this ancient subduction episode with volcanic processes in the Indian Ocean and the breakup of supercontinents that existed in the distant past.
Advancing Science—and Security
Livermore research is improving the ability to understand and predict signals from earthquakes. It is also enhancing scientific understanding of the complex structure of Earth, how it has evolved over hundreds of millions of years, and how it continues to evolve. Most important for national security, by enabling the use of regional data in existing monitoring systems, Livermore technology lowers monitoring thresholds and engages the international community as never before in the effort to ensure compliance with the CTBT’s nonproliferation regime. With technologies such as RSTT and LLNL-G3D, Livermore seismic research is improving the ability of the United States and its nonproliferation partners to monitor nuclear explosions anywhere in the world.
—Arnie Heller
Key Words: Comprehensive Nuclear-Test-Ban Treaty (CTBT), data-intensive computation, E. O. Lawrence Award, earthquakes, LLNL-Global 3D (G3D) model, nonproliferation, nuclear testing, P-wave, Regional Seismic Travel Time (RSTT) model, S-wave, seismic modeling, Source Physics Experiment (SPE).
For further information contact Stephen Myers (925) 423-4988 (myers30@llnl.gov).