Safe Deep Q-Network for Autonomous Vehicles at Unsignalized Intersection

by Kasra Mokhtari, et al.

We propose a safe deep reinforcement learning (DRL) approach for autonomous vehicle (AV) navigation through crowds of pedestrians while making a left turn at an unsignalized intersection. Our method uses two long short-term memory (LSTM) models that are trained to generate the perceived state of the environment and the future trajectories of pedestrians, given noisy observations of their movement. A future collision prediction algorithm based on the predicted trajectories of the ego vehicle and the pedestrians masks unsafe actions whenever the system predicts a collision. The performance of our approach is evaluated in two experiments using the high-fidelity CARLA simulation environment. The first experiment tests our method at intersections similar to the training intersection, and the second tests it at intersections with a different topology. In both experiments, our method never collides with a pedestrian while still navigating the intersection at a reasonable speed.
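The core safety mechanism described above can be illustrated with a minimal sketch: predict the future positions of the ego vehicle and a pedestrian, flag a potential collision, and mask the unsafe action before the greedy Q-value selection. This is not the paper's implementation; the constant-velocity rollout stands in for the trained LSTM predictors, and the function names, safety radius, and fallback action index are assumptions made for illustration.

```python
import math

def predict_positions(pos, vel, horizon, dt=0.1):
    # Constant-velocity rollout over `horizon` steps (a stand-in for the
    # paper's LSTM trajectory predictors, which handle noisy observations).
    return [(pos[0] + vel[0] * t * dt, pos[1] + vel[1] * t * dt)
            for t in range(1, horizon + 1)]

def collision_predicted(ego_traj, ped_traj, safety_radius=2.0):
    # Flag a future collision if the ego vehicle and a pedestrian come
    # within `safety_radius` meters of each other at the same timestep.
    return any(math.dist(e, p) < safety_radius
               for e, p in zip(ego_traj, ped_traj))

def masked_greedy_action(q_values, unsafe_actions, fallback=0):
    # Pick the highest-Q action among those not masked as unsafe.
    # If every action is masked, fall back to a default safe action
    # (index 0 is assumed here, e.g. full brake).
    safe = [a for a in range(len(q_values)) if a not in unsafe_actions]
    if not safe:
        return fallback
    return max(safe, key=lambda a: q_values[a])
```

For example, with an ego vehicle and a pedestrian on a head-on course, `collision_predicted` returns `True`, and `masked_greedy_action` then skips the otherwise-greedy action in favor of the best remaining safe one.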


