
The Big One, in 3-D

Early on a Sunday in June 1992, a magnitude 7.3 earthquake near the San Andreas fault shook Southern California. It leveled homes, sparked fires, cracked roads and caused one death. Its tremors reached Nevada, New Mexico, Colorado and Idaho.

Named the Landers earthquake after a Mojave Desert community near the epicenter, the event was a strike-slip quake, the kind that occurs when massive blocks of Earth’s lithosphere slide horizontally past each other kilometers below the surface. It was the strongest earthquake in the contiguous United States in 40 years.

Twenty-five years later, a team of scientists has used high-performance computing (HPC) to simulate the Landers quake in three dimensions and at high resolution. The work builds on the team’s simulation of a hypothetical magnitude 7.7 earthquake, also on and around the southern San Andreas.

The research has shown how one earthquake can deliver building-collapsing shaking to some areas but not others, and the Landers simulation helps solve a long-standing puzzle of earthquake science. The work was supported by a Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE) award and by the Southern California Earthquake Center (SCEC).

Key to the research team’s success is its ability to model nonlinear phenomena in three dimensions using a code that can be scaled up to run well on large HPC systems, namely the Blue Waters machine at the University of Illinois at Urbana-Champaign and Titan at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science user facility. The team also used two additional OLCF systems to process data and to store results and input data files.

The code is a new version of anelastic wave propagation (AWP-ODC) – the ODC for developers Kim Olsen and Steven Day at San Diego State University (SDSU) and Yifeng Cui at the San Diego Supercomputer Center. Daniel Roten, a computational research seismologist at SDSU, led the studies. He began working with the HPC-scalable version of AWP-ODC as a postdoctoral fellow with the code’s development team. At that time, the group was one of three simulation teams that modeled a magnitude 7.8 quake on the southern San Andreas fault as part of California’s ShakeOut Earthquake Scenario, which became the basis for emergency response exercises.

Many earthquake simulations use either linear models of forces in three dimensions or nonlinear models of forces in one or two dimensions. These demand less computer time and memory than three-dimensional nonlinear models, but they do not capture the true relationships between the forces and their effects. Linear models, for example, typically predict more violent shaking than actually occurs, producing inaccurate risk and hazard assessments.
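
The gap between the two approaches shows up even in a toy calculation. In a linear model, stress grows in lockstep with strain no matter how hard the ground is pushed; a nonlinear, elastoplastic material stops resisting once its strength is exceeded. The Python sketch below uses a simple stress cap to stand in for plasticity; the modulus and yield values are invented for illustration and are not the team’s parameters.

```python
# Toy comparison of linear vs. nonlinear (elastoplastic) shear response.
# A simplified sketch of the idea, not the AWP-ODC formulation.
import numpy as np

G = 1.0e9          # shear modulus, Pa (hypothetical value)
tau_yield = 2.0e6  # yield stress, Pa (hypothetical rock strength)

# A 2-Hz strain pulse standing in for strong ground deformation.
t = np.linspace(0.0, 1.0, 1000)
gamma = 5.0e-3 * np.sin(2.0 * np.pi * 2.0 * t)

# Linear model: stress grows without bound as strain grows.
tau_linear = G * gamma

# Simplified nonlinear model: stress is capped at the yield stress;
# the excess deformation becomes permanent (plastic) strain instead.
tau_plastic = np.clip(tau_linear, -tau_yield, tau_yield)

print(f"peak linear stress:    {np.abs(tau_linear).max():.2e} Pa")
print(f"peak nonlinear stress: {np.abs(tau_plastic).max():.2e} Pa")
```

The linear model overshoots the rock’s strength by more than a factor of two, one reason purely linear simulations tend to overpredict strong shaking.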

‘This phenomenon can now be explained by our simulations.’

Besides an earthquake’s magnitude and distance from its source, local geology also affects how severely the ground will shake. The new AWP-ODC version captures this effect more accurately than ever. The simulations represent not just body waves moving through tectonic plates but also the later-arriving surface waves, some of which shake the ground at frequencies high enough to damage buildings. Previous models could not represent surface waves shaking the ground at frequencies above two cycles per second. AWP-ODC can now represent waves of two to four cycles per second – the frequencies that cause structural damage.
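
Doubling the resolved frequency is far costlier than it sounds. Finite-difference wave codes need several grid points per wavelength of the shortest wave they track, so higher frequencies force finer grids in all three dimensions. Here is a back-of-the-envelope sketch; the points-per-wavelength count, minimum shear velocity and domain size are illustrative assumptions, not the team’s actual setup.

```python
def grid_points(f_max_hz, v_min_m_s=500.0, ppw=8, domain_km=(200, 100, 50)):
    """Grid needed to resolve f_max_hz: spacing h = v_min / (ppw * f_max)."""
    h = v_min_m_s / (ppw * f_max_hz)  # required grid spacing, meters
    nx, ny, nz = (int(d * 1000 / h) for d in domain_km)
    return h, nx * ny * nz

for f_max in (2.0, 4.0):
    h, n = grid_points(f_max)
    print(f"f_max = {f_max:.0f} Hz: spacing {h:.2f} m, about {n:.2e} points")

# Doubling the top frequency halves the spacing, so the point count
# grows eightfold (2**3), and the time step must shrink as well.
```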

“Soft soils, such as clays or sands, increase the shaking and trap seismic waves, which leads to longer shaking duration,” Roten says. The more cracked, broken and crushed the rock is under an area, the greater the risk of high-frequency surface waves and catastrophic damage. SCEC uses these findings and other insights to improve detailed hazard maps of areas surrounding the fault.

The long-standing puzzle the Landers quake simulation addressed pertained to strike-slip earthquakes, in which rocks deep underground slide past each other, or slip, causing surface rock and soils to shift with them. Slips can cause dramatic surface changes, such as broken roads that are shifted so the lanes no longer line up.

But after studying the Landers earthquake and other strike-slip quakes with magnitudes higher than 7, scientists realized these observations were not as straightforward as they seemed. “Geologists and geophysicists were surprised to see that the slip at depth, inferred from satellite observations, is larger than slip observed at the surface, from shifts measured by geologists in the field,” Roten says.

When material above a strike-slip earthquake doesn’t slide along for the ride, the shortfall is called a shallow slip deficit, or SSD. In some places, the surface doesn’t appear to slip at all. Researchers thought the surface might slowly continue to slip and catch up with the underlying rock between earthquakes. But field studies showed that this so-called after-slip (also called fault creep) was minimal and could not bring the surface back in line with the underlying rock.
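
The deficit itself is a simple ratio: how far surface slip falls short of the slip inferred at depth. A minimal sketch, with hypothetical slip values chosen only for illustration:

```python
def shallow_slip_deficit(surface_slip_m, depth_slip_m):
    """Fraction by which surface slip falls short of slip at depth."""
    return 1.0 - surface_slip_m / depth_slip_m

# Hypothetical example: 3 m of slip at depth, 2 m observed at the surface.
ssd = shallow_slip_deficit(surface_slip_m=2.0, depth_slip_m=3.0)
print(f"shallow slip deficit: {ssd:.0%}")  # prints 33%
```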

Other research groups had suggested that SSD occurred when stresses that could cause the surface to slip were absorbed by inelastic off-fault deformation – that is, fracturing in the rock around the strike-slip fault. “Our simulations capture exactly this inelastic, plastic deformation, and the fault-slip and off-fault deformation predicted by our simulations do reproduce the observed patterns if the rock strength is properly selected,” Roten says. “So you could say that this phenomenon can now be explained by our simulations.”
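
In broad strokes, an elastoplastic code tests the stress in each grid cell against the local rock strength at every time step and, wherever the stress exceeds it, caps the stress and books the excess as permanent deformation. The sketch below assumes a Drucker-Prager-style, pressure-dependent yield stress, a common choice for modeling off-fault plasticity; the cohesion and friction values are invented, and this illustrates the general technique rather than the AWP-ODC implementation.

```python
# Sketch of a plastic yield check of the kind used in elastoplastic
# wave-propagation codes. Assumes a Drucker-Prager-style criterion;
# all material values are illustrative assumptions.
import numpy as np

def yield_stress(mean_stress_pa, cohesion_pa=5.0e6, friction_angle_deg=30.0):
    """Pressure-dependent strength: rock resists more under compression.
    Uses the convention that compressive mean stress is negative."""
    phi = np.radians(friction_angle_deg)
    return cohesion_pa * np.cos(phi) - mean_stress_pa * np.sin(phi)

def return_to_yield(shear_stress_pa, mean_stress_pa):
    """Cap shear stress at the yield surface; the shed stress corresponds
    to permanent plastic deformation in the rock around the fault."""
    return min(shear_stress_pa, yield_stress(mean_stress_pa))

# A cell a few kilometers deep (values hypothetical): 90 MPa of shear
# stress against roughly 69 MPa of strength gets capped at the strength.
print(f"{return_to_yield(9.0e7, -1.3e8):.2e} Pa")
```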

The team intends to use AWP-ODC to reveal more about how earthquakes work as it continues to develop the code for faster HPC systems. Roten says they developed the nonlinear method for CPU systems, which use standard central processing units, before they ported the implementation to the GPU version of the code, one employing graphics processing units to accelerate calculations. The code runs about four times faster on a GPU system than on CPUs, he adds. The team optimized the code to run even faster and to reduce the amount of memory it required, since GPUs have less memory available than CPUs.

“Parallel file systems and parallel I/O (input/output) are also important for these simulations, as we are dealing with a lot of input and output data,” Roten says. “Our input source file alone had a size of 52 terabytes.”
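
Moving that much data hinges on every compute node reading its own slice of the input concurrently rather than funneling everything through a single process. Here is a minimal sketch of the pattern using mpi4py’s MPI-IO bindings; the file name, float32 record layout and even split across ranks are assumptions for illustration.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()

# Open one large binary input file collectively.
fh = MPI.File.Open(comm, "source_input.bin", MPI.MODE_RDONLY)
n_floats = fh.Get_size() // 4   # file assumed to hold float32 records
chunk = n_floats // nranks      # assumes an even split across ranks

buf = np.empty(chunk, dtype=np.float32)
# Collective read: each rank pulls its own slice at its own byte offset,
# letting the parallel file system serve all ranks simultaneously.
fh.Read_at_all(rank * chunk * 4, buf)
fh.Close()

print(f"rank {rank} of {nranks} read {buf.nbytes} bytes")
```

Launched under an MPI runner such as mpiexec, each rank issues its read at a distinct offset, so no single node becomes a bottleneck for a multi-terabyte input.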

The GPU version does not yet handle all the features the simulations need. For now, the code takes advantage of Blue Waters’ mix of CPU and faster GPU nodes while the team adapts it to run exclusively on Titan’s GPU nodes. “Titan would have enough GPUs to further scale up the problem, which is what we plan to do,” Roten says.

Bill Cannon
