For humans, the everyday act of starting a car or truck, pulling it into traffic and turning it right or left while avoiding obstacles and pedestrians is – with training and experience – second nature. We manage it even as traffic, weather, terrain, lighting and setting change rapidly.
Not so for the computers and algorithms that would assume those tasks in autonomous vehicles (AVs) that manufacturers are developing.
“The industry wants to create an autonomous vehicle that can ultimately go anywhere at any time, much like humans” but with little human intervention, says Robert Patton, a researcher in the Computer Science and Mathematics Division and leader of the Learning Systems Group in the Data and Artificial Intelligence Systems Section at Oak Ridge National Laboratory (ORNL).
“The challenge is that humans are very good at dealing with ambiguities and changes to rules and changes to behavior of other folks,” Patton continues, “whereas machines like things to be orderly and clearly defined. So how do you blend those two?”
Autonomous driving is built on artificial intelligence. To train AI algorithms, researchers present them with scenarios that combine the many variables human drivers routinely navigate, so the machine is prepared to cope with new, previously unseen situations. The programs perceive their environment through cameras and sensors and attempt to make decisions about steering, speed and braking that keep travel safe for human passengers.
It’s tough, however, “for an AI developer to address all possible scenarios and say, ‘yup, covered them all,’” Patton says. Instead, the algorithms often are prepared to address a narrow range of locations and conditions. Going beyond that to myriad unique variable combinations is far more difficult. “It’s an 80-20 type of thing, where you might be able to develop for 80 percent” of situations “but that 20 percent is really, really hard.”
Patton, with a team of ORNL and General Motors researchers, is addressing that problem with 150,000 node-hours on Summit, an IBM AC922 system at the lab’s Oak Ridge Leadership Computing Facility. The Department of Energy’s Advanced Scientific Computing Research Leadership Computing Challenge program provided the time.
With Summit’s parallel processing power, the team has simultaneously analyzed hundreds of thousands of scenarios and virtually driven the equivalent of hundreds of thousands of miles in just a few hours to evaluate AV systems.
‘We get to break things, and that’s always fun.’
Mark Coletti, a staff scientist in the Learning Systems Group of the lab’s Computer Science and Mathematics Division, part of the Computing and Computational Sciences Directorate, developed the software, aptly named Gremlin. It finds faults in AV systems – scenarios in which they fail. Automotive engineers will use those results to improve their algorithms, perhaps with added training.
“We get to break things,” Coletti says, “and that’s always fun.” Machine-learning algorithms, the foundation of artificial intelligence, “may have a variety of weaknesses. You may not even be aware of what those weaknesses are, and you would use something like Gremlin” to identify them.
Flaws often originate in poor training data. For example, early AV systems trained on real-life driving scenarios didn’t handle turns well, because most driving is straight ahead. Meanwhile, General Motors researchers Jordan Chipka and Ajay Deshpande told the Oak Ridge team that their AV programs struggle with changing lighting conditions. “We used that sort of as a guiding principle with Gremlin,” focusing on scenarios that varied light levels and directions, Patton says.
Gremlin is based on evolutionary algorithms, which mimic how biological organisms adapt and survive. “You have a kind of DNA, where the DNA represents a proposed solution to the problem,” says Coletti, who joined ORNL in 2015 after decades of work as a software engineer for technology contractor SAIC and then earning a doctorate from George Mason University. For the AV problem, the metaphoric genes in each proposed solution corresponded to values for lighting, traffic and so on.
After randomly generating an initial set of parent solutions with varying genes, the code cloned them into offspring solutions and mutated them by perturbing values for some of the genes, making them distinct from the parents. For example, the genes may have incrementally reduced or increased rain or wind, Coletti says. Gremlin then iteratively applied mutation and selection to refine a population of solutions. Each set of conditions was created in CARLA, an open-source simulator for testing and training AV systems, on Summit and thrown against AV models to see how they coped.
Summit’s massive parallel processing capability let the ORNL team test thousands of scenario iterations simultaneously. Gradually, with subsequent mutations, the population of scenarios converged on solutions, “a set of gene values that tells you specific conditions that the model is struggling with,” Coletti says, such as right-hand turns or wet roads. With such information, engineers can refine their models’ training or the algorithms themselves.
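The mutation-and-selection loop Coletti describes follows a standard evolutionary-algorithm pattern. A minimal sketch in Python illustrates the idea, with hypothetical gene names and a stand-in fitness function in place of real CARLA simulation runs (Gremlin’s actual implementation differs):

```python
import random

# Hypothetical scenario "genes": each would map to a simulator parameter.
GENE_RANGES = {"sun_angle": (0.0, 90.0), "rain": (0.0, 1.0), "traffic": (0.0, 50.0)}

def random_genome():
    """A parent solution: one random value per gene."""
    return {g: random.uniform(lo, hi) for g, (lo, hi) in GENE_RANGES.items()}

def mutate(genome, scale=0.1):
    """Clone a parent, then perturb one gene value within its range."""
    child = dict(genome)
    gene = random.choice(list(child))
    lo, hi = GENE_RANGES[gene]
    child[gene] = min(hi, max(lo, child[gene] + random.gauss(0, scale * (hi - lo))))
    return child

def failure_score(genome):
    # Stand-in for driving an AV model in simulation. Here we simply
    # assume low sun angles (glare) and heavy rain cause more failures.
    return (90.0 - genome["sun_angle"]) / 90.0 + genome["rain"]

def evolve(pop_size=20, generations=50):
    """Iteratively mutate and select, keeping the scenarios the model
    struggles with most -- the fault-finding objective."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(p) for p in population]
        population = sorted(population + offspring,
                            key=failure_score, reverse=True)[:pop_size]
    return population[0]  # hardest scenario found

worst = evolve()
```

In a real run, each call to `failure_score` would be a full simulated drive, which is why the massively parallel evaluation on Summit matters: every genome in every generation can be scored at once.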
Porting CARLA to Summit proved challenging, Coletti says, because it was written for the x86 processors found in most personal computers and many high-performance machines. Summit’s nodes instead house a combination of multicore, multithreading IBM POWER9 processors and NVIDIA Volta graphics processing units, chips that accelerate calculations. Oak Ridge software developer Quentin Haas oversaw recompiling the code to run on the giant supercomputer.
Patton, who joined ORNL as a postdoctoral researcher in 2003, says his team’s goal is to use specific applications such as AVs to advance artificial intelligence technology overall. With Gremlin, “we can do so many more scenarios to test and evaluate autonomous vehicles” than manual testing can while using supercomputing to parallelize operations. “Within hours we can do so much more than could be done manually.”
Though Gremlin was designed for autonomous vehicle testing, the ORNL team has generalized it, and other researchers have inquired about adapting it for their projects. Coletti, meanwhile, has posted the open-source code to a GitHub repository, making it available to other users. Now, he chuckles, “no machine-learning model is safe” from its fault-finding mischief.