
Subterranean blues

Divining just how contaminants such as radioactive particles behave as they move from soils to groundwater in complex subterranean worlds has long challenged computer modelers and their simulations.

With the help of high-performance computers, however, national laboratory and university researchers are working together to stretch models of how reactive chemicals move through porous media. Their work may help the Department of Energy in its cleanup of wastes left behind by nuclear materials production.

For Peter Lichtner, a physicist at Los Alamos National Laboratory (LANL), and Glenn Hammond, a computational geohydrologist at Pacific Northwest National Laboratory (PNNL), the target is a complex real-world problem: contaminants in sediment at the Hanford Nuclear Reservation in Washington state. Their main tool is PFLOTRAN –– what they and other modelers call “a parallel multiphase and multicomponent reactive geochemical transport code.”

Lichtner is principal investigator for the project, which is part of the groundwater science application area in the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) program. His team’s work is creating a better understanding of what happens as water moves underground, carrying contaminants that react and change.

A groundwater plume at the site carries uranium into the nearby Columbia River. The project focuses on a subsection of the reservation, the Hanford 300 site, where nuclear reactor fuel was made and nuclear materials were disposed of from World War II until shortly before the site’s last reactor shut down in 1987.

The model predicts moving contaminants’ behavior, in three dimensions, over periods ranging from hours to years.

A variety of chemical species, from water and bicarbonate to uranium, are part of the scenario, moving through sediment that contains everything from fine sand to inches-long cobbles under varying saturation conditions.

Detailed modeling of contaminant leaching from the site requires computer processor cores – and plenty of them.


“In these groundwater models, you have to deal with three-dimensional problems, so you need a lot of (computer) nodes” to break down and solve the complex equations that go into them, Lichtner says. “To do that on a single processor you’d either run out of memory or time. It would take months or years.”
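A back-of-envelope calculation shows why a single processor runs out of memory. The sketch below uses the domain dimensions given later in the article (1,350 by 2,500 by 20 meters) and the roughly 12 chemical species the team simulates; the 1-meter grid resolution is an assumption for illustration, not the project’s actual discretization.

```python
# Illustrative memory estimate for a 3D reactive transport grid.
# Domain dimensions and species count come from the article; the
# 1-meter resolution is assumed purely for illustration.
nx, ny, nz = 1350, 2500, 20      # grid cells at an assumed 1 m resolution
species = 12                     # chemical species tracked (from the article)
bytes_per_value = 8              # one double-precision number

cells = nx * ny * nz
unknowns = cells * species
gigabytes = unknowns * bytes_per_value / 1e9

print(f"{cells:,} cells, {unknowns:,} unknowns, "
      f"~{gigabytes:.1f} GB per solution vector")
# → 67,500,000 cells, 810,000,000 unknowns, ~6.5 GB per solution vector
```

And that is just one copy of the solution vector; an implicit solver needs several such vectors plus a sparse matrix, which is why the problem must be split across many nodes.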

In the early 1990s, Westinghouse Hanford, the site manager at the time, used a simpler but then-current flow model. It predicted uranium would clear from Hanford 300 in 10 to 15 years. Almost 20 years later, contamination levels are nearly unchanged. Adding details –– the site’s underground layers, heterogeneities of the subsurface, the complexity of the chemical types, or species, underground and other factors –– requires more and more equations and thus more and more computational power.

It is not just the complicated subsurface and the many chemical processes that make this a challenging model to build. The behavior of the Columbia River along the site also is complex.

Says Hammond, “If you look at the plot of the river stage, which is the elevation of the river, over a year, it’s all over the place.”

PFLOTRAN’s model covers a domain 1,350 by 2,500 by 20 meters aligned with the Columbia River. The actual Hanford 300 area, near the south end of the reservation, is made up of a 1.35 square-kilometer industrial site and about 2.6 square kilometers of surrounding land.

The source of the waste is several hundred meters inland, but uranium reaches groundwater that encounters river water flowing into and out of the porous ground. The river’s rise and fall does more than carry contaminants away: The daily up-and-down cycling of the river stage works like a tide, carrying materials from just above the water table into the groundwater and depositing them in the porous sediment, where uranium attaches strongly to soil.

Lichtner and Hammond’s model has access to 15 years of river-stage data, as well as information collected over shorter spans from the Hanford 300 site’s many monitoring wells. Unfortunately, the well data was not collected over a uniform period, making it difficult to match it with river information.

“We have about a year’s worth of well data that we can coordinate with the river data,” Hammond says. “There are 8,760 hours in a year, and we’re modeling multiple years. We’re performing some calculations to see how necessary it is to have that kind of detailed information. We’re doing some smoothing of the data, which allows bigger time steps.”
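The kind of smoothing Hammond describes can be sketched with a simple moving average over hourly river-stage readings. Everything here is illustrative: the data is synthetic, and the one-day averaging window is an assumption, not the project’s actual choice.

```python
import numpy as np

# Synthetic stand-in for a year of hourly river-stage readings (meters):
# a daily tidal-like cycle plus noise. Purely illustrative data.
rng = np.random.default_rng(0)
hours = 8760
stage = (105
         + 0.5 * np.sin(2 * np.pi * np.arange(hours) / 24)
         + rng.normal(0, 0.1, hours))

# Smooth with a 24-hour moving average (window size is an assumption),
# trading fine temporal detail for the ability to take bigger time steps.
window = 24
kernel = np.ones(window) / window
smoothed = np.convolve(stage, kernel, mode="valid")

print(len(stage), "hourly values ->", len(smoothed), "smoothed values")
# → 8760 hourly values -> 8737 smoothed values
```

The trade-off the team weighs is visible even in this toy version: averaging over a full day suppresses exactly the daily cycling that drives the tide-like uranium transport, so the window width matters.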

Computational power is not the only limiting factor in improving the site model: Hourly environmental sampling is costly and prone to errors and breakdowns of the data-collecting equipment. With more work, the team hopes to work around this problem to further improve the model.

“As the model becomes more sophisticated and we apply more computational power,” Hammond says, “we can actually put in some random datasets and try to make it more realistic. The model may not be perfect, but it’s going to be a lot closer.”

Simplification can only get one so far, however. Whereas many groundwater models approach problems with simplified assumptions, understanding the chemical and physical complexities of the Hanford site requires a full 3D model.

“Right now we’re simulating about 12 (chemical species), but there’s a point where you can’t simplify the chemistry any further,” Hammond says.

The team is trying to capture “the difference in chemistry between the river water, which should have a low carbonate concentration, and the groundwater, which is higher in carbonate. That can greatly affect the mobility of uranium. There are also differences in pH and other species as well.”

The fundamental complexity of the problem requires moving beyond simplified models that can run on a workstation and into parallel computing. Developing PFLOTRAN, the code at the core of the model, extended existing flow and transport codes to run efficiently on parallel processors. It built on elements of PETSc, a suite of tools for the parallel solution of equations in scientific applications, allowing relatively rapid development. PETSc is part of another SciDAC project, TOPS, or Towards Optimal Petascale Simulation.

“The advantage of PETSc is that it leaves to the application scientists like us a lot of time to work on the application rather than the parallelization,” Lichtner says. “It does a lot of the work for you.”

Originally developed as two separate codes –– PFLOW for multiphase flow and PTRAN for reactive transport –– PFLOTRAN can now run both elements together, coupled or uncoupled, using continuum-scale mass and energy conservation equations to make accurate predictions of contaminant transport.
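The article does not reproduce PFLOTRAN’s equations, but as a hedged sketch, a continuum-scale mass conservation equation for a transported component $j$ in partially saturated porous media takes roughly this generic form (the symbols and lumped source term are illustrative, not PFLOTRAN’s exact formulation):

```latex
\frac{\partial}{\partial t}\bigl(\varphi\, s\, \Psi_j\bigr)
  + \nabla \cdot \bigl(\mathbf{q}\, \Psi_j - \varphi\, s\, D\, \nabla \Psi_j\bigr)
  = Q_j
```

Here $\varphi$ is porosity, $s$ is saturation, $\Psi_j$ is the total concentration of component $j$, $\mathbf{q}$ is the Darcy flux, $D$ is a diffusion/dispersion tensor, and $Q_j$ lumps together sources and sinks from chemical reactions. Coupling arises because the flow solution determines $\mathbf{q}$ and $s$, while the reactions feed back through $Q_j$.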

PFLOTRAN helps modelers generalize many geochemical flow and transport problems. Although originally designed for work on radionuclide transport, it can model reactive flow in other systems.

“The code handles flows –– supercritical flows, CO2 phase, water, brine,” Hammond says.

It has already been used to model carbon sequestration – cleaning up power plant emissions by storing carbon dioxide deep underground as carbonate minerals or as gas dissolved in ground water.

“The same chemistry algorithms we use at Hanford, we can also use to describe the CO2-brine interaction and the minerals in the formation when you’re modeling injection of CO2 underground,” Hammond says.

“Any investment in the CO2 work benefits the waste problem and helps us develop new process models for legacy waste issues.”

The improved and parallelized code is very stable, and with SciDAC’s 2006 investment in the project, PFLOTRAN can now be used on petascale computers capable of a quadrillion calculations or more per second. The researchers will try it out with a 2009 Innovative and Novel Computational Impact on Theory and Experiment (INCITE) award of 10.5 million processor hours on Oak Ridge National Laboratory’s Cray XTs and half a million on PNNL’s HP Chinook. The University of Illinois also is collaborating.

Approaching flow problems with top supercomputers has changed things, Hammond says. “INCITE (has been) phenomenal. It’s allowed us to simulate a lot of science that wouldn’t have been possible without it. It’s opened doors to new science.”

Using the parallelization strengths of PETSc’s code elements and getting priority runs on open-science computers has moved simulating the site toward real-world impact. As the model gets more and more reliable, it can help inform better approaches to manage the Hanford site’s cleanup.

“We’ll get the information to DOE managers,” Lichtner says, “and then they can decide if a particular remediation strategy is cost effective or would itself cause more pollution.”

Bill Cannon

