How AI Is Making Autonomous Vehicles Safer

AI is used to simulate real-world conditions to safety-test autonomous vehicles. Stanford researchers surveyed these algorithms and find reason for optimism, but say work remains.

What are the best ways to test autonomous vehicles? | iStock/hapabapa

A lot is at stake every time an autonomous car or plane strikes off on its own. Engineers place great trust in the intelligent systems that see and sense the world and help self-controlled vehicles steer clear of virtually every hazard that might come their way.

“Our main challenge as a field is how do we guarantee that these amazing capabilities of AI systems—driverless cars and pilotless planes—are safe before we deploy them in places where human lives are at stake?” says Anthony Corso, a postdoctoral scholar in aeronautics and astronautics and executive director of the Stanford Center for AI Safety.

“The systems themselves are extremely complex, but the environments we are asking them to operate in are incredibly complex, too,” Corso says. “Machine learning has enabled robotic driving in downtown San Francisco, for example, but it’s a huge computational problem that makes validation all the harder.”

Road tests are the ultimate arbiter of safety, but they typically come only at the very last stages of the design cycle and are freighted with the same sort of risks to human life that researchers are hoping to avoid. No engineer would want to be responsible for a road test that claimed a life, or even destroyed valuable property, in the name of proving that the technology was safe.

Read the full study, "A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems"


In light of these practical challenges, designers of autonomous vehicles have come to rely on simulation to test their vehicles’ ability to steer clear of danger. But are these simulations up to the challenge? In a paper published in the Journal of Artificial Intelligence Research, Corso and colleagues at Stanford and NASA provide an overview of these “black-box safety validation” algorithms. They find reason for optimism that simulation may one day deliver the necessary level of confidence, but work remains.

Assessing the “Black Box”

The designers of self-driving cars, pilotless planes, and other computationally intensive autonomous vehicles have turned to so-called black-box validation methods. Black-box algorithms stand in contrast to their white-box siblings: white-box validation seeks what is known as “formal verification” of a system’s safety, not only finding potential points of failure but, ideally, proving the absolute absence of failure as well.

This higher standard, however, is computationally intensive to the point of impracticality and does not scale well to large, complex problems like autonomous vehicles. There is simply too much going on to calculate everything needed for white-box-level confidence. By cutting a few computational corners, black-box approaches hope to surmount these challenges.

Corso likens it to a video game played in reverse, where the testing algorithm is the player and victory is defined as failure—a crash—but in a simulated world, of course, with no risk to life or property. Only then, when you know when and why a system has failed, can you address such situations in the safety mechanisms built into the vehicle.

“The algorithms take an adversarial approach, trying to find weakness. Our hope is that we don’t find failure. The longer that black-box techniques churn away, running through possible scenarios, trying to create weaknesses and not finding them, the greater our confidence grows in the system’s overall safety,” Corso says of the philosophy that drives the field.
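To make that search concrete, here is a minimal sketch of black-box falsification in Python. The scenario parameters, the toy braking model, and the function names are hypothetical stand-ins invented for illustration, and plain random sampling stands in for the more sophisticated, guided searches real tools use; the essential idea is only that the tester observes nothing but pass or fail.

```python
import random

def run_scenario(scenario):
    """Toy stand-in for a black-box simulator (hypothetical, for illustration).
    A car travelling at ego_speed_mps notices a pedestrian gap_m metres ahead
    after perception_delay_s seconds, then brakes hard. Returns True if the
    car cannot stop in time, i.e., the scenario is a failure."""
    v = scenario["ego_speed_mps"]
    brake = 6.0  # assumed braking deceleration, m/s^2
    stopping_distance = v * scenario["perception_delay_s"] + v * v / (2 * brake)
    return stopping_distance > scenario["gap_m"]

def sample_scenario():
    """Draw one scenario from an assumed operating envelope."""
    return {
        "ego_speed_mps": random.uniform(5.0, 20.0),
        "perception_delay_s": random.uniform(0.1, 1.5),
        "gap_m": random.uniform(10.0, 60.0),
    }

def falsify(budget=10_000):
    """Falsification: hunt for *any* scenario in which the system fails.
    The simulator is treated as a black box; only pass/fail is observed."""
    for _ in range(budget):
        scenario = sample_scenario()
        if run_scenario(scenario):
            return scenario   # a counterexample to hand back to the designers
    return None               # no failure found within the simulation budget

if __name__ == "__main__":
    print("Counterexample found:", falsify())
```

In a real validation pipeline the scenario space is vastly larger and the search is guided rather than random, but the pass/fail interface is the same.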

Triangulation

To provide the highest confidence possible, validation algorithms perform a sort of triangulation on failure. At the first tier, used for the most risk-averse industries like aviation, validation algorithms search for any way a system might fail, an approach known as falsification. “Falsification asks: Can you find me any example where the system fails?” Corso says.

It’s a deliberately low bar for what counts as a failure, set to provide the greatest possible assurance. For self-driving cars, however, that threshold is too low. “With an autonomous car operating in an urban environment, you can always find some pathological situation that’s going to cause a crash,” Corso says. “So we raise the bar a bit.”

The next tier, then, involves finding the failures that are most likely to occur, guiding the design team as they make their systems as safe as possible. The third tier estimates the probability of various forms of failure, to assess how likely any one outcome is relative to another.

“These techniques kind of build on top of each other to increase confidence in overall system safety,” Corso says.
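Continuing the hypothetical sketch above, the two higher tiers can be layered on the same pass/fail interface: a made-up likelihood model ranks the failures that were found by how plausible they are, and a plain Monte Carlo estimate approximates the overall failure probability. Real tools use cleverer search and variance-reduction techniques; this only illustrates how the tiers build on one another.

```python
# Builds on run_scenario() and sample_scenario() from the sketch above.

def scenario_likelihood(scenario):
    """Hypothetical model of how plausible a scenario is in everyday driving:
    here we simply assume high speeds and long perception delays are rarer."""
    speed_factor = (scenario["ego_speed_mps"] - 5.0) / 15.0      # 0 (common) .. 1 (rare)
    delay_factor = (scenario["perception_delay_s"] - 0.1) / 1.4  # 0 (common) .. 1 (rare)
    return (1.0 - 0.9 * speed_factor) * (1.0 - 0.9 * delay_factor)

def most_likely_failure(budget=20_000):
    """Second tier: of all the failures found, keep the most plausible one,
    since that is the failure a design team would want to fix first."""
    best, best_likelihood = None, 0.0
    for _ in range(budget):
        scenario = sample_scenario()
        if run_scenario(scenario):
            likelihood = scenario_likelihood(scenario)
            if likelihood > best_likelihood:
                best, best_likelihood = scenario, likelihood
    return best

def failure_probability(budget=20_000):
    """Third tier: Monte Carlo estimate of how often the system fails when
    scenarios are drawn from the assumed operating envelope."""
    failures = sum(run_scenario(sample_scenario()) for _ in range(budget))
    return failures / budget
```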

Toward Safer Systems

The survey does not necessarily make value judgments on the black-box tools reviewed, but rather compares how each addresses the problem, what assumptions its creators have built in, and what its relative strengths and weaknesses are, so that autonomous systems designers can choose the approach that best fits their needs.

However, of the nine systems reviewed that are currently available for use, Corso notes, only two provide anything more than falsification: just one offers most-likely failure testing and another offers probability estimation. So, he says, there’s room for improvement.

Overall, Corso and colleagues can’t yet put a stamp of approval on any of the options, but he can see where the field is headed. The most exciting direction, he says, is “compositional validation”: testing individual components separately, like the visual perception and proximity sensing systems, to learn how each component fails. Knowing more about how subcomponents fail, Corso says, can be leveraged to improve confidence in overall system safety.
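As a purely illustrative sketch of the compositional idea, not drawn from the survey itself: if every whole-system failure involves at least one subcomponent failure, then per-component failure estimates obtained with the same black-box machinery yield a simple union bound on the system’s failure probability. The component names and test harnesses below are hypothetical, and real compositional methods are considerably more sophisticated.

```python
def component_failure_probability(sample_input, fails, budget=20_000):
    """Monte Carlo failure-rate estimate for one subcomponent in isolation,
    given a component-specific input sampler and pass/fail check
    (both hypothetical placeholders here)."""
    return sum(fails(sample_input()) for _ in range(budget)) / budget

def system_failure_bound(component_failure_probs):
    """Union bound: assuming every system-level failure involves at least one
    component-level failure, the system failure probability is at most the
    sum of the component failure probabilities (capped at 1)."""
    return min(1.0, sum(component_failure_probs))

# Usage sketch with hypothetical per-component harnesses:
# p_perception = component_failure_probability(sample_camera_frame, perception_fails)
# p_planning   = component_failure_probability(sample_traffic_state, planner_fails)
# print("System failure probability is at most",
#       system_failure_bound([p_perception, p_planning]))
```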

“A few approaches we mentioned have started to touch on this concept,” Corso says, “but I think it will require a lot more work. In their current state, these whole-system algorithms in and of themselves are insufficient to put a formal stamp of approval on them just yet.”
