Jeroen Tromp’s geophysics group at Princeton University is using one of science’s most powerful supercomputers to map underground plate tectonic activity with real earthquake waves and their artificially created reverse echoes.
It’s “a way of doing a CAT scan of the Earth,” he says. “We want to understand how the planet we live on works as a heat engine. And this type of imaging helps.”
Tromp, Princeton’s Blair Professor of Geology and professor of applied and computational mathematics, is working with three co-investigators and 210 million processor hours on Oak Ridge National Laboratory’s Titan supercomputer on what’s called “full waveform inversion on a global scale.” The computer hours have come from the U.S. Department of Energy’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program since 2015.
Using waves from 250 earthquakes as recorded by several thousand seismographic stations worldwide, the researchers have produced “a first-generation model based on this idea,” Tromp says. The group’s recent Geophysical Journal International paper reveals this work to date in simulation-generated images of magma plumes, hotspots, subduction zones and fault lines.
The simulations capture key features of plate tectonics, the theory describing how blocks of Earth’s crust get shoved around and are recycled or destroyed as molten magma creates mountains and new seafloor – all driven by heat from Earth’s underlying core and mantle.
This paper is “the best work we’ve done thus far in this field,” he says. “The goal now is to scale this up.”
Tromp says the idea of waveform inversion in geophysics, a nonlinear way to fill in what’s missing from seismic data maps, was developed in the mid-1980s. But “it is only now that we have both the software and computational resources that these kinds of approaches have finally become feasible.”
Beginning in the early 2000s, when he was a professor at the California Institute of Technology, Tromp began using the technique his team employs today, known as the adjoint state method. The method lets modelers combine seismic waves from real earthquakes with imaginary waves created in computers – fake echoes that trace how the original signals would be reflected and refracted if they could be time-reversed. This adds detail much as a computed tomography (CT) scanner sharpens its anatomical picture by shooting X-rays from many directions.
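The time-reversal idea behind the adjoint state method can be sketched with a toy one-dimensional acoustic simulation – a hypothetical illustration, not SPECFEM3D itself. A wave from a “quake” is recorded at a distant “station”; the recording is then reversed in time and re-injected at the station, and the energy refocuses at the original source location:

```python
import numpy as np

def simulate(nx, nt, dx, dt, c, src, wavelet, rec):
    """Second-order finite-difference solver for the 1-D acoustic wave
    equation; injects `wavelet` at grid point `src` and returns the
    time series recorded at grid point `rec`."""
    u_prev, u = np.zeros(nx), np.zeros(nx)
    r = (c * dt / dx) ** 2                 # squared Courant number (stable for <= 1)
    trace = np.zeros(nt)
    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        u_next = 2.0 * u - u_prev + r * lap
        u_next[src] += dt ** 2 * wavelet[it]   # point-source term
        u_prev, u = u, u_next
        trace[it] = u[rec]
    return trace

nx, nt, dx, dt, c = 500, 300, 1.0, 0.5, 1.0
src, rec = 200, 280                        # 80 grid units apart -> 160-step travel time
t = np.arange(nt) * dt
a = (np.pi * 0.05 * (t - 25.0)) ** 2
ricker = (1.0 - 2.0 * a) * np.exp(-a)      # standard Ricker source wavelet, peak at step 50

# Forward run: "earthquake" at src, seismogram recorded at rec.
seis = simulate(nx, nt, dx, dt, c, src, ricker, rec)

# Adjoint-style run: inject the time-reversed seismogram back at the
# receiver and listen at the original source location.
refocus = simulate(nx, nt, dx, dt, c, rec, seis[::-1], src)

# The back-propagated energy peaks at the source near step (nt - 1) - 50,
# mirroring the wavelet's peak at step 50 in the forward run.
print(int(np.argmax(np.abs(refocus))))
```

In the real inversion the signal injected backward is not the raw seismogram but the difference between observed and simulated data, and the interference of the forward and backward wavefields builds a sensitivity map showing where the model should change.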
Tailored software modeling makes all this work, and Princeton has developed its own special adjoint code, called SPECFEM3D, over several years, Tromp says. “It’s an open-source software package that simulates three-dimensional seismic wave propagation basically on all scales, from ultrasonic waves all the way to the long-period surface waves generated by the largest earthquakes.”
SPECFEM3D is “as realistic as we can possibly make it,” he adds, modeling a rotating, elliptical Earth and accounting for topography, ocean depths, ocean basins and variations in crustal thickness. To make sure their adjoint model still reflects reality, the researchers measure “misfit functions” – the differences between simulated and observed waveforms – using the clearest wave signals in real seismograms.
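One common form of misfit is a least-squares difference between observed and simulated traces, summed only over time windows where the signal is clean. The function name and windowing scheme below are illustrative assumptions, not SPECFEM3D’s actual interface:

```python
import numpy as np

def windowed_misfit(obs, syn, dt, windows):
    """Least-squares waveform misfit between observed and synthetic
    seismograms, summed only over (start, end) windows in seconds
    where the observed signal is judged clear."""
    chi = 0.0
    for t0, t1 in windows:
        i0, i1 = int(t0 / dt), int(t1 / dt)
        chi += 0.5 * dt * np.sum((syn[i0:i1] - obs[i0:i1]) ** 2)
    return chi

dt = 0.1
t = np.arange(0.0, 60.0, dt)
obs = np.sin(2 * np.pi * t / 20.0)           # stand-in "observed" seismogram
syn = np.sin(2 * np.pi * (t - 1.0) / 20.0)   # simulation arriving 1 s late

perfect = windowed_misfit(obs, obs, dt, [(10.0, 40.0)])
late = windowed_misfit(obs, syn, dt, [(10.0, 40.0)])
print(perfect, late)   # zero for a perfect fit; positive when waveforms differ
```

The inversion then nudges the Earth model to drive this number down, iteration after iteration.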
Geoscientists base their tomographic maps and models on differences in earthquake wave speeds. Waves move slower through warmer materials such as magma ascending toward Earth’s surface, Tromp says, than they do through cooler materials, such as recycled oceanic plates sinking beneath other plates into the mantle. Also, one type of earthquake signal – a shear wave – slows when it encounters water-rich material. But shear waves cannot penetrate regions that are entirely liquid, he adds. There “all you have are sound and compressional waves.”
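That speed-temperature relationship translates directly into the arrival-time anomalies tomographers work from. A minimal back-of-the-envelope sketch, with illustrative numbers rather than real Earth values:

```python
# Ray path split into segments of (length in km, wave speed in km/s).
# A warm, slow anomaly in the middle segment delays the arrival
# relative to a uniform 8 km/s background -- illustrative values only.
path = [(100.0, 8.0), (50.0, 7.2), (100.0, 8.0)]

t_path = sum(length / speed for length, speed in path)
t_background = sum(length / 8.0 for length, _ in path)
delay = t_path - t_background

print(round(delay, 3))   # positive: the wave through the warm zone arrives late
```

A cold, fast slab in the same segment would produce a negative delay instead; mapping many such anomalies from crossing ray paths is what builds the tomographic image.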
The anisotropy, or directional dependence, of wave movements provides additional guidance. Tromp calls that “an incredibly informative parameter that shows you how plates are moving in the upper mantle and how (heat) convection is taking place.”
Before they moved to Titan, Tromp and various co-researchers tested adjoint tomography on smaller regions and on less-powerful supercomputers.
The first effort, described in a 2009 Science paper, used a Caltech supercomputer cluster to evaluate waves from 142 crustal earthquakes in Southern California. “That study really taught us how to do these kinds of inversions,” Tromp says. “This type of modeling is of tremendous value. You need a very good three-dimensional model of Southern California to try and simulate what might happen during the next large earthquake on the San Andreas fault.”
The second regional study, employing a Princeton supercomputer cluster, evaluated signals from 190 earthquakes within Europe’s crust and upper mantle recorded by 745 seismographic stations, their 2012 paper in Nature Geoscience said.
Next, the researchers enlisted supercomputers at the Texas Advanced Computing Center at the University of Texas at Austin to study 227 crustal and mantle earthquakes recorded by 1,869 stations in East Asia, their 2015 paper in the Journal of Geophysical Research noted.
“In Western Europe we were able to assimilate on the order of 120,000 pieces of information,” Tromp says. “In Southeast Asia we scaled it up to 1.8 million.” All three studies uncovered new geological features. Additional regional surveys are now underway in North America, the Caribbean and Antarctica.
The Titan supercomputer has a theoretical peak performance of more than 27,000 trillion calculations per second. Using it, the Tromp team hopes to scale up the adjoint modeling simulations “by a factor of 20.”
He’s found working with Titan’s 299,008 AMD CPU cores and 18,688 NVIDIA graphics processing units can sometimes be “very difficult.” So he’s pinning great hopes on its planned successor – Oak Ridge’s energy-saving Summit supercomputer, to arrive in 2018. At five times the power of Titan but with far fewer processors, Summit’s modeling could turn out to be the computational equivalent of a seismic event.