Profiles in Computing

Piecing together HPC and a career

Barbara Helland. Image courtesy of the Krell Institute.

Like many computer scientists, Barbara Helland’s career began at a helpdesk. She worked with Iowa State University faculty and staff to move code from paper to punch cards to running on room-size mainframes. That led her to what’s now known as the Department of Energy’s Ames National Laboratory, based at Iowa State, where she helped experimental researchers collect data.

“I had to learn how to make everything work,” Helland says – maintaining printers, keeping computers connected, and troubleshooting problems with hardware, software or the experiments themselves. “Figuring out what was wrong was not an easy task back then. But it was fun.” And it launched her scientific computing career.

Helland eventually applied that broad expertise to managing large DOE workforce and computing initiatives, ultimately becoming associate director for the Office of Science’s Advanced Scientific Computing Research (ASCR) program before retiring last month. ASCR is now searching for her successor.

Family influenced some of Helland’s early choices and interests. Her mother was an administrative assistant, first for a chemical company and later in securities. Realizing that computers were here to stay, she learned a new software system late in her career and encouraged Helland, who liked math, to study computer science.

Helland honed her program management skills while working with the Ames Lab’s James Corones, at the dawn of the high-performance computing (HPC) era. Among several initiatives, Corones founded Ames Lab’s Scalable Computing Laboratory. Helland started as the team’s system administrator, but her horizons quickly expanded as she visited larger DOE facilities to collaborate on projects and programs.

Her first educational program, Adventures in Supercomputing, encouraged underrepresented students in the 1990s to take more math and science by connecting high school students in Iowa, Tennessee and New Mexico with HPC resources. Helland had personal motivations for her involvement: “My daughter was convinced that she couldn’t do math.” Today, Helland’s daughter does multivariate statistical analysis.

Corones helped establish the DOE Computational Science Graduate Fellowship in 1991. Helland, who spent 25 years at Ames Lab, helped manage the fellowship for about 10 years, moving to the Krell Institute when Corones founded it in 1997. The experience acquainted Helland with computational scientists across the national laboratories and helped secure National Science Foundation grants for the nonprofit company. In 2004, Ed Oliver, then Office of Science associate director for ASCR, recruited her to headquarters, asking her to help launch the leadership computing facilities.

At the time, HPC in the United States was at a crossroads, Helland says. Corporate interest had peaked in the early 2000s: companies weren’t buying enough of these sophisticated systems, and the internet drew attention elsewhere. But Japan was building supercomputers that were consistently rated the world’s fastest, prompting policymakers to worry about global competitiveness.

‘We had to have scientific accomplishments on day one, when that machine was ready to go.’

To reinvigorate U.S. interest in HPC, Congress authorized the Office of Science to invest in large-scale computing facilities. The labs had previously bought and installed systems built by Cray, IBM and others. With the new initiative, DOE labs worked closely with vendors to develop the most advanced computers in the world, Helland says, and leveraged the same project management principles the Office of Science had used to build large experimental facilities, such as massive accelerators and neutron sources. The leadership computing program started with $25 million to upgrade systems at Oak Ridge National Laboratory and launch the Oak Ridge Leadership Computing Facility in 2004. A second leadership computing facility followed at Argonne National Laboratory in 2006. The goal: build a world-leading petascale system – capable of a quadrillion calculations per second – for open science research by 2009.

That process didn’t just need to show the systems were on time and on budget, Helland says. “To make this work, we had to have scientific accomplishments on day one, when that machine was ready to go.” Unlike systems at the National Energy Research Scientific Computing Center, which were designated for a wide array of Office of Science computational projects, the leadership computing facilities would apply extensive resources to simulate large problems that would be unwieldy or impossible via experiment or observation and would be open to the entire research community, including industry.

Petascale computing required hardware and software innovations that addressed energy challenges. Simply adding more processing nodes would eventually produce power-hungry systems, each drawing hundreds of megawatts. To increase energy efficiency, engineers incorporated graphics processing units (GPUs), chips first used for video games. But that also came with hurdles: reviewers noted that GPUs lacked the error correction needed to ensure simulations’ accuracy. “So we stepped back and said, ‘Let’s see if we can find some scientific research that can actually make use of these and figure out what we need to do.’” ASCR launched the Center for Accelerated Application Readiness and worked closely with NVIDIA and Cray to improve GPUs for scientific use and to build scientific codes that could use them reliably.

When Helland arrived at DOE headquarters, the SciDAC (Scientific Discovery through Advanced Computing) program was already supporting software innovation by uniting teams of mathematicians and computer scientists to develop algorithms capable of parsing scientific problems across supercomputers’ many processors. With the leadership computing facilities, ASCR added the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Founded in 2003, it provides scientists across academia, the national labs and industry with HPC access and expertise for research in climate, astrophysics, fluid dynamics, computational biology and other fields.

In 2006, DOE added the ASCR Leadership Computing Challenge (ALCC), which let researchers work on computational projects that weren’t yet ready for an INCITE award. The program has also supported research during national emergencies. The ALCC was critical during the SARS-CoV-2 pandemic: in early 2020 it allowed the leadership computing facilities to participate in the HPC COVID-19 Consortium for virus modeling, vaccine research and drug discovery.

Hardware, software and scientific-discovery innovations provided the foundation for the Exascale Computing Project (ECP), launched in 2017, which is producing the Oak Ridge Leadership Computing Facility’s Frontier and the Argonne Leadership Computing Facility’s Aurora.

Helland says the large collaborations that ECP and other programs, like SciDAC, fostered will “be a legacy of exascale. Yes, the machine is nice, but the machines are temporary. Something else is always going to come to take its place. But it’s the software and the people that we’ve developed here” that will endure, supporting advances in artificial intelligence, machine learning and big-science projects that take on national challenges – such as DOE’s Energy Earthshots Initiative to realize the nation’s 2050 net-zero carbon emission goal.

Critical problems are on the horizon as Helland departs. Systems must continue to become more energy efficient and more flexible to adapt to supply-chain issues and other constraints. Leadership-class systems must also adapt to change faster than the multiyear timescales needed to build them. Scientific computing is also becoming more intertwined with big experiments – using AI, for example, to process data as experiments happen can help researchers pivot when something doesn’t go as expected. “With AI,” she notes, “you can quickly ship the data off to have a machine look at it. And the AI can tell you what to look at next.”

The closer connections between computing and experiments might mean increasing and interconnecting midrange systems at the national laboratories, Helland says. Meanwhile, larger systems – including those beyond exascale – will rely on research and partnerships with vendors to speed the development of microelectronics, quantum processors and other technologies, areas in which ASCR and the Office of Science are already investing. “We’re right at the beginning of the exascale era,” Helland says.

 

Note: DOE seeks a leader to drive U.S. innovation in HPC and scientific computing. For more information, go to USAJOBS. 

Bill Cannon
