Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles

09/04/2022
by Ayoosh Bansal, et al.

As Autonomous Vehicle (AV) development has progressed, concerns regarding the safety of passengers and agents in their environment have risen. Each real-world traffic collision involving autonomously controlled vehicles has compounded this concern. Open-source autonomous driving implementations show a software architecture with complex interdependent tasks, heavily reliant on machine learning and Deep Neural Networks (DNN), which are vulnerable to non-deterministic faults and corner cases. These complex subsystems work together to fulfill the mission of the AV while also maintaining safety. Although significant progress is being made towards increasing the empirical reliability of and confidence in these systems, the inherent limitations of DNN verification pose an as-yet insurmountable challenge to providing deterministic safety guarantees in AVs. We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as AVs. SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system. While independently fulfilling their primary roles, the partially functionally redundant mission and safety tasks aid each other, synergistically improving the combined system. The synergistic safety layer uses only verifiable and logically analyzable software to fulfill its tasks. Close coordination with the mission layer allows easier and earlier detection of safety-critical faults in the system. SR simplifies the mission layer's optimization goals and improves its design. SR enables the safe deployment of high-performance, though inherently unverifiable, machine learning software. In this work, we first present the design and features of the SR architecture and then evaluate the efficacy of the solution, focusing on the crucial problem of obstacle existence detection faults in AVs.
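The core mechanism described above, a simple verifiable safety layer running alongside an unverifiable DNN-based mission layer and overriding it when an obstacle-existence fault is detected, can be sketched roughly as follows. This is a minimal illustrative sketch in Python, not the paper's implementation; all names here (Obstacle, safety_layer_detect, cross_check, the 1 m matching tolerance, the SAFE_STOP fallback) are hypothetical placeholders.

# Illustrative sketch only: an unverifiable mission-layer detector and a simple,
# analyzable safety-layer detector run in parallel and cross-check each other.
# Names and thresholds are hypothetical, not taken from the paper.

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass
class Obstacle:
    x: float  # longitudinal distance (m)
    y: float  # lateral offset (m)

def safety_layer_detect(lidar_points: Sequence[Tuple[float, float]],
                        max_range: float = 50.0) -> List[Obstacle]:
    """Verifiable stand-in: treat every in-range LiDAR return as an obstacle."""
    return [Obstacle(x, y) for (x, y) in lidar_points
            if (x * x + y * y) ** 0.5 <= max_range]

def cross_check(mission_obs: List[Obstacle],
                safety_obs: List[Obstacle],
                tolerance: float = 1.0) -> List[Obstacle]:
    """Return obstacles the safety layer sees but the mission layer missed."""
    return [s for s in safety_obs
            if not any(abs(s.x - m.x) <= tolerance and abs(s.y - m.y) <= tolerance
                       for m in mission_obs)]

def control_step(camera_frame,
                 lidar_points: Sequence[Tuple[float, float]],
                 mission_detect: Callable[[object], List[Obstacle]]) -> str:
    """One perception/decision cycle with the safety layer as the final arbiter."""
    mission_obs = mission_detect(camera_frame)      # DNN-based, unverifiable
    safety_obs = safety_layer_detect(lidar_points)  # simple, logically analyzable
    if cross_check(mission_obs, safety_obs):
        # Obstacle-existence fault in the mission layer: fall back to a
        # verified safe maneuver (represented here by a label only).
        return "SAFE_STOP"
    # Otherwise the mission layer plans normally, informed by both detections.
    return "MISSION_PLAN"

if __name__ == "__main__":
    # The stubbed DNN misses the pedestrian at (10, 0); the safety layer
    # catches the omission and forces the safe fallback.
    dnn_stub = lambda frame: [Obstacle(30.0, 2.0)]
    print(control_step(camera_frame=None,
                       lidar_points=[(30.0, 2.0), (10.0, 0.0)],
                       mission_detect=dnn_stub))  # prints SAFE_STOP

The point of the sketch is only the division of labor: the safety path is simple enough to be exhaustively analyzed, while the mission path remains free to use high-performance learned components.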


