Part of the Science at the Exascale series.
When someone mentions national security and computation, the phrase “war games” probably pops into many heads. Indeed, the U.S. government uses powerful computers to run such strategic simulations.
But war strategy is not the first thing that comes to mind for many security experts (see sidebar, “National security workshop sets ambitious agenda”), including Robert Rosner, senior fellow at the University of Chicago’s Computation Institute and professor of physics and of astronomy and astrophysics.
When Rosner considers the potential contribution from extreme-scale computing for national security, his mind “is on certification of the stockpile. That is a very important issue.”
The new Strategic Arms Reduction Treaty with the Russian Federation, signed earlier this year, caps deployed strategic nuclear warheads at 1,550. Since the United States stopped testing nuclear weapons in 1992, simulation is the only way to guarantee the remaining weapons’ safety and reliability. But ensuring reliability is a complex proposition – especially given the stockpile’s age.
Simulating the nuclear stockpile involves calculating multiple, concurrent physical processes. Computing power “beyond petascale will enable yet more fidelity in the calculations” and the inclusion of even more complicated physics, Rosner says.
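The article doesn’t detail the numerical methods behind those calculations, but the flavor of coupling concurrent physical processes can be sketched with a toy example. The snippet below advances a one-dimensional reaction-diffusion model by operator splitting, alternating a diffusion update and a reaction update on the same field each timestep; every number and equation in it is illustrative, not drawn from any weapons code.

```python
# Toy illustration of operator splitting for coupled physics (not a weapons code):
# each timestep alternates a diffusion update and a reaction update on one field.
import numpy as np

nx, dx, dt, steps = 100, 1.0, 0.1, 500
diffusivity, rate = 1.0, 0.05
u = np.zeros(nx)
u[nx // 2] = 1.0          # initial hot spot in the middle of the domain

def diffusion_step(u):
    # Explicit finite-difference update for du/dt = D * d2u/dx2 (periodic boundaries).
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    return u + dt * diffusivity * lap / dx**2

def reaction_step(u):
    # Simple first-order decay, standing in for a second, coupled physical process.
    return u * np.exp(-rate * dt)

for _ in range(steps):
    u = diffusion_step(u)  # physics process 1
    u = reaction_step(u)   # physics process 2, coupled through the shared field

print("total remaining quantity:", u.sum())
```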
‘If an application on an exascale computer uses 100 million cores that can run 16 threads, you have to keep a billion balls in the air simultaneously.’
Fred Streitz, director of Lawrence Livermore National Laboratory’s (LLNL) Institute for Scientific Computing Research, says that many “people think that with the ridiculously large computers that we have we should be able to simulate anything. But our problems are fantastically complicated.”
Take, for instance, the challenge of keeping the nuclear stockpile in working order. Suppose, Streitz says, that you are in your garage, and your job is to certify that your car will start if you put the key in the ignition and turn it. Moreover, your car must not start without your key, and it must not start if someone else uses your key. In addition, your car is aging, because it’s been sitting idle in your garage since 1950. Yet, even after 60 years, you need to guarantee that your car will start but only when it is supposed to.
Meeting such a challenge when it comes to the nuclear stockpile demands computing and knowledge. “Every time we jump to a next generation of computer,” Streitz says, “we open another window through which we can look to discover new science, and that helps in our quest to understand materials and phenomena with a high degree of accuracy.”
National-security researchers need that accuracy because the task grows increasingly complicated. Not only is the stockpile aging; it also has been treated like the proverbial cat with nine lives, suggests Mark Seager, former assistant department head for advanced computing technology in LLNL’s Integrated Computing and Communications Department and now at Intel.
“The stockpile was designed during a time when weapons systems were routinely replaced with newer ones. So the weapons were designed with a specific lifetime,” Seager says. “That lifetime has been extended repeatedly.” The actual devices, meanwhile, keep growing older, diverging more and more from the nuclear test data that were gathered during their development.
The timescale these simulations must span includes not only years of aging but also the near-instantaneous reactions of weapon operation. For example, Seager says, “Nuclear fusion takes place on the femtosecond (a quadrillionth of a second) scale, but the detonation takes place in microseconds to milliseconds (millionths to thousandths of a second). That’s a huge set of timescales.”
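A rough back-of-the-envelope calculation (mine, not Seager’s) shows why that range is so punishing: holding the timestep at the femtosecond scale across a microsecond-to-millisecond detonation would take on the order of a billion to a trillion steps.

```python
femtosecond = 1e-15   # timescale of the fusion reactions Seager describes (seconds)
microsecond = 1e-6    # lower end of the detonation timescale (seconds)
millisecond = 1e-3    # upper end of the detonation timescale (seconds)

# Steps needed to cover the whole event if the timestep stayed at the fastest scale.
print(f"{microsecond / femtosecond:.0e} femtosecond steps per microsecond")  # 1e+09
print(f"{millisecond / femtosecond:.0e} femtosecond steps per millisecond")  # 1e+12
```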
Stockpile simulations also must go beyond nuclear physics. They must include, for instance, simulations of other materials in the weapons. Over time, the strength, brittleness and other features of the materials might change.
National security can depend not only on studying stockpiled weapons but also on tracking weapons in flight.
Paul Messina, director of science at Argonne National Laboratory’s Leadership Computing Facility, describes a project from a couple of decades ago in which the U.S. military wanted to track missiles launched by the then-Soviet Union.
“We had 20 minutes to determine where it was going and to detect the decoys that might be spewed out halfway through the flight. Then, you needed to react very quickly to see if you could perhaps kill the live missile before it landed.
“It required parallel algorithms to do things quickly enough.”
Today’s national security environment brings new challenges. Instead of traditional war, national security experts must worry more about opponents who are terrorists, Rosner says. “That involves a more complex social milieu.”
National security computing needs, however, do not always involve weapons or even war, says Messina, who co-chaired a 2009 workshop that considered extreme-scale computing for national security. “Another application that would require lots of things running extremely fast is a crisis situation.” For example, the security of U.S. lives could depend on being able to determine when and where a tsunami would hit the United States and how severe the impact would be. “If that tsunami is going across the Pacific, you’d have about eight hours to do this.”
National security also could depend on quickly mining huge databases of information. For example, some government agencies track satellite images and data flows around the clock. Finding something specific requires sophisticated tools and fast database searches.
“If one uses many individual disks for storage and parallel file systems and decent database systems software, you have a chance of doing this,” Messina says.
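Messina doesn’t spell out an implementation, but the idea of spreading a search across many disks and a parallel file system can be sketched in miniature: partition the data, scan the partitions concurrently and merge the hits. The directory layout and search term below are hypothetical.

```python
# Minimal sketch of a partitioned parallel search (hypothetical paths and query).
from multiprocessing import Pool
from pathlib import Path

PARTITIONS = sorted(Path("/data/satellite_feed").glob("part-*.txt"))  # one file per disk/stripe
QUERY = "anomaly"                                                     # hypothetical search term

def scan_partition(path):
    # Each worker scans its own partition independently, as it would on its own disk.
    hits = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            if QUERY in line:
                hits.append((path.name, lineno))
    return hits

if __name__ == "__main__":
    with Pool() as pool:                      # one process per available core
        results = pool.map(scan_partition, PARTITIONS)
    matches = [hit for part in results for hit in part]
    print(f"{len(matches)} matches across {len(PARTITIONS)} partitions")
```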
The jump from today’s computing power to exascale is not an entirely new kind of challenge for high-performance computing.
“Every generation in computing increases the complexity of the system,” Seager says. He recalls working on codes to achieve 1,000-way parallelism for the ASCI White supercomputer, then 8,000-way parallelism for Purple and 360,000-way parallelism for Blue Gene/L. In 2011, he expects to push that to 1.6 million-way parallelism for Sequoia, which IBM will deliver to LLNL. For exascale, though, Seager expects 1-billion- to 10-billion-way parallelism.
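Seager’s progression is easier to appreciate as ratios. The quick calculation below, using the figures he cites, shows that the step to exascale concurrency dwarfs every earlier jump.

```python
# Degrees of parallelism Seager cites, machine by machine.
parallelism = {
    "ASCI White": 1_000,
    "Purple": 8_000,
    "Blue Gene/L": 360_000,
    "Sequoia": 1_600_000,
    "exascale (low estimate)": 1_000_000_000,
}

machines = list(parallelism.items())
for (prev_name, prev), (name, cur) in zip(machines, machines[1:]):
    print(f"{prev_name} -> {name}: {cur / prev:,.0f}x more parallelism")
# The final jump, to billion-way parallelism, is a more than 600-fold leap --
# far larger than the factor-of-10 steps Seager describes as already difficult.
```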
That power will bring fundamental changes to national security-related simulations. “Every factor of 10 improvement in computing-delivered performance brings an entirely new vista of problems that we can solve and physics that we can investigate,” Seager says, but “to scale up by a factor of 10 in parallelism isn’t easy.”
Reaching exascale speeds requires overcoming a range of obstacles, including reducing the power such machines would need. But the ultimate challenge could be getting software and hardware to work as a team.
Messina explains: “If an application on an exascale computer uses 100 million cores that can run 16 threads, you have to keep a billion balls in the air simultaneously. That’s not out of the question, but it needs system software that is carefully tuned, and languages and message passing and so on must be enhanced and implemented better.” Moreover, the hardware must provide the features the software needs to run at top speed.
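Messina’s ‘billion balls’ figure is simply the product of cores and threads. The arithmetic below works it out and, using a hypothetical trillion-cell problem that is not from the article, shows how thin each thread’s share of the work becomes – one reason system software, languages and message passing all have to keep pace.

```python
cores = 100_000_000        # cores in Messina's exascale example
threads_per_core = 16      # hardware threads per core in his example

total_threads = cores * threads_per_core
print(f"{total_threads:,} simultaneous threads")     # 1,600,000,000

# Hypothetical follow-on (not a figure from the article): spread a trillion-cell
# problem over those threads and each one owns very little work, so any load
# imbalance or slow message shows up immediately.
cells = 1_000_000_000_000
print(f"{cells // total_threads} cells per thread")  # 625
```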
So bringing exascale capability to national security work requires co-design – hardware, system-software and applications experts collaborating at all stages of the development process.
“As vendors are designing the systems that are the next generation or two out,” Seager says, “we are anticipating it and ramping up our codes to be there.” That co-design, though, demands more than disclosure; it also requires compromise. “If we can make tradeoffs in hardware, system software or applications, that allows many more choices because we can make those choices across the entire co-design space.”
To make all of those interacting pieces work, he adds, computer researchers and their vendor partners must plan a decade ahead.
Co-design requires a system for developing co-designers. Rosner says training students and postdoctoral researchers provides “the raw material for the national security system. Not many universities let graduate students work on classified research, so we need to train them by working on related problems.”
Other countries already appear to combine high-performance computing and training. “The Chinese are not just trying to catch up,” Rosner says. “They’ve caught up.” Moreover, China appears poised to push into the lead on some national security technology. For example, China already runs specialized programs in hacking, Rosner says.
In some ways, China faces fewer constraints, having “no boundary between civilian and defense work,” Rosner says. “They are very formidable, and they are the ones that we get to worry about now.”