Simulating complex systems on supercomputers requires that scientists get hundreds of thousands, even millions of processor cores working together in parallel. Managing cooperation on this scale is no simple task.

One challenge is dividing the workload evenly among processor cores. Unfortunately, complexity isn’t distributed evenly across space and time in real-world systems. For example, in biology, a cell nucleus packs far more molecules into a small space than the dilute, watery cytoplasm that surrounds it. Simulating a nucleus therefore requires far more computing power and time than modeling the cell’s other parts. Such situations create a mismatch in which some cores are asked to pull more weight than others.

To solve these load imbalances, Christoph Junghans, a staff scientist at the Department of Energy’s Los Alamos National Laboratory (LANL), and his colleagues are developing algorithms with many applications across high-performance computing (HPC).

“If you’re doing any kind of parallel simulation, and you have a bit of imbalance, all the other cores have to wait for the slowest one,” Junghans says, a problem that compounds as the computing system’s size grows. “The bigger you go on scale, the more these tiny imbalances matter.” On a system like LANL’s Trinity supercomputer, up to 999,999 cores could idle, waiting on a single one to complete a task.
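Junghans’ point can be made concrete with a toy calculation: in a bulk-synchronous simulation, each time step lasts as long as the slowest core, so a single straggler leaves every other core idle. A minimal sketch, with made-up numbers rather than Trinity measurements:

```python
# Toy model of load imbalance (illustrative only, not LANL code).
# 1,000 cores each run a task; one core's task takes twice as long.
times = [1.00] * 999 + [2.00]   # per-core task times, arbitrary units

step = max(times)               # the step ends when the slowest core finishes
busy = sum(times)               # total useful work actually done
idle_fraction = 1 - busy / (step * len(times))

print(f"step time: {step}, idle fraction: {idle_fraction:.2%}")
```

Here one straggler wastes nearly half the machine for the duration of the step, which is why tiny imbalances matter more as core counts climb.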

To work around these imbalances, scientists must devise ways to break apart, or decompose, a problem’s most complex components into smaller portions. Multiple processors can then tackle those subdomains.
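One simple way to picture such a decomposition is a greedy contiguous split of per-cell costs, so that each core receives roughly equal total work rather than an equal number of cells. This is an illustrative sketch of the general idea, not any specific LANL algorithm; the function name and greedy strategy are assumptions for demonstration:

```python
def decompose(costs, ncores):
    """Greedy contiguous split: group cell indices so each
    subdomain carries roughly equal total cost (toy example)."""
    target = sum(costs) / ncores
    domains, current, acc = [], [], 0.0
    for i, c in enumerate(costs):
        current.append(i)
        acc += c
        # close a subdomain once it reaches its fair share of work,
        # leaving at least one subdomain for the remaining cells
        if acc >= target and len(domains) < ncores - 1:
            domains.append(current)
            current, acc = [], 0.0
    domains.append(current)
    return domains

# One expensive cell (cost 5) and five cheap ones split across two cores:
# the heavy cell gets a core to itself, balancing the load at 5 vs. 5.
print(decompose([5, 1, 1, 1, 1, 1], 2))
```

A naive even split by cell count would instead give one core a load of 7 and the other a load of 3, recreating the straggler problem.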

The work could help researchers efficiently use exascale computers, which can perform one billion billion calculations per second, or one exaflops. Such machines are not yet available, but the Department of Energy is developing them; they would include 100 times more cores than are found in most current supercomputers. Through a process known as co-design, teams of researchers are seeking ways to devise hardware and software together so that current supercomputers and future exascale systems carry out complex calculations as efficiently as possible. Fixing load imbalance is part and parcel of co-design.

“Everybody is trying to find out where the problems would lie in running simulations and calculations on a super big [machine] that nobody has seen before,” says Junghans, deputy leader of LANL’s co-design team. Fixing load imbalances could make it easier to simulate various physical phenomena such as turbulent flows and materials at a range of scales, from watery biological solutions to plastics and metals.

Junghans’ collaborators include researchers from the Max Planck Institute for Polymer Research (MPI-P) in Mainz, Germany, led by Horacio Vargas Guzman. One approach, pioneered at MPI-P by Kurt Kremer’s group, models complex mixtures of molecules using the adaptive resolution scheme, or AdResS. This method divides simulations into areas of high and low resolution, based on how much information and complexity each area needs. AdResS is useful for these problems, but such a scheme is “especially prone to this load imbalance,” Junghans says.

‘Where can we change or modify the algorithms so that we can solve problems on new hardware?’

Junghans and his MPI-P colleagues developed a new approach – called the heterogeneous spatial domain decomposition algorithm, or HeSpaDDA – that takes this process a step further. It assesses those low- and high-resolution areas and rearranges them to distribute the processing workload. The researchers tested it in two different simulations modeled with AdResS. In one case, they examined the protein ubiquitin’s behavior in water. They also used this algorithm combination to study a model fluid system with two phases (known as a Lennard-Jones binary fluid). The combination of HeSpaDDA and AdResS sped up these simulations by up to 150 percent.
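The core idea, giving expensive high-resolution regions a larger share of the processors, can be sketched as proportional allocation. This is a hypothetical illustration of that principle, not the published HeSpaDDA algorithm; `allocate_cores` and its rounding fix-up are invented for the example:

```python
def allocate_cores(region_costs, total_cores):
    """Assign cores to regions in proportion to their estimated
    computational cost (toy illustration, not HeSpaDDA itself)."""
    total = sum(region_costs)
    # every region gets at least one core, heavier regions get more
    alloc = [max(1, round(total_cores * c / total)) for c in region_costs]
    # repair rounding drift so the allocations use exactly total_cores
    while sum(alloc) > total_cores:
        alloc[alloc.index(max(alloc))] -= 1
    while sum(alloc) < total_cores:
        alloc[alloc.index(max(alloc))] += 1
    return alloc

# A dense high-resolution region costing 8 units next to two dilute
# regions costing 1 unit each, spread over 10 cores:
print(allocate_cores([8, 1, 1], 10))
```

An even split would park most of those 10 cores in the cheap regions, where they would finish early and wait; weighting by cost keeps them all busy.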

These molecular dynamics simulations are important for advances in biomedicine, drug development, biomembranes, fluid mechanics, crystal growth, and polymer research. The researchers reported their results in November 2017 in the journal Physical Review E.

Junghans and colleagues from LANL have also worked to solve load imbalances that arise in simulations of other types of matter. For example, they have developed an algorithm that redistributes the simulation workload in the heterogeneous multiscale method, which is useful for modeling solid, metallic systems. This technique could be used to simulate a shock wave traveling through metal, Junghans says.

Unlike the adaptive resolution method, which breaks up simulations into cube-like subdomains, the heterogeneous multiscale method constructs a mesh-like structure around the modeled system. As calculations at various points in the mesh progress, the algorithm divides the complex domain into more manageable chunks. Like adaptive resolution, this method can still have load imbalances, Junghans notes.

Load imbalances also show up on a cosmic scale. At the Supercomputing 2016 Conference, or SC16, researchers showed how they solved load imbalances while simulating a binary star system similar to the one detected by LIGO, the Laser Interferometer Gravitational-Wave Observatory. That work involved a method called smoothed-particle hydrodynamics. The scientists involved were Ph.D. students from LANL’s ISTI/ASC co-design summer school, which brings together future scientists to work on interdisciplinary computing challenges. Junghans and his LANL colleague Robert Pavel co-lead the program.

Co-design has been a big focus of the DOE’s Advanced Scientific Computing Research (ASCR) program in the run-up to exascale HPC. “For us, co-design basically means looking at a problem, and the algorithms to solve that problem, and the hardware,” Junghans says, and answering this question: “Where can we change or modify the algorithms so that we can solve problems on new hardware?”

At the moment, Junghans and his colleagues are working on simulations that use hundreds of processors, though they plan to scale that up significantly. “We have to fix problems at a smaller scale before we’re ready” to move onward, he says. “This will solve one issue, but when you scale up, there will be other problems.”

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security LLC for the Department of Energy’s National Nuclear Security Administration.

Bill Cannon
