COOR-PLT: A hierarchical control model for coordinating adaptive platoons of connected and autonomous vehicles at signal-free intersections based on deep reinforcement learning

07/01/2022
by Duowei Li, et al.

Platooning and coordination are two implementation strategies frequently proposed for traffic control of connected and autonomous vehicles (CAVs) at signal-free intersections in place of conventional traffic signals. However, few studies have attempted to integrate both strategies to better facilitate CAV control at signal-free intersections. To this end, this study proposes a hierarchical control model, named COOR-PLT, to coordinate adaptive CAV platoons at a signal-free intersection based on deep reinforcement learning (DRL). COOR-PLT has a two-layer framework. The first layer uses a centralized control strategy to form adaptive platoons; the optimal size of each platoon is determined by considering multiple objectives (i.e., efficiency, fairness, and energy saving). The second layer employs a decentralized control strategy to coordinate multiple platoons passing through the intersection. Each platoon is labeled with either coordinated status or independent status, upon which its passing priority is determined. Deep Q-network (DQN), an efficient DRL algorithm, is adopted to determine platoon sizes and passing priorities in the two layers respectively. The model is validated and examined in the Simulation of Urban Mobility (SUMO) simulator. The simulation results demonstrate that the model is able to: (1) achieve satisfactory convergence performance; (2) adaptively determine platoon size in response to varying traffic conditions; and (3) completely avoid deadlocks at the intersection. Comparison with other control methods demonstrates the benefit of combining adaptive platooning with DRL-based coordination. The model also outperforms several state-of-the-art methods in reducing travel time and fuel consumption under different traffic conditions.
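The two-layer structure described in the abstract can be sketched in code. The paper itself does not publish an implementation; the following is a minimal, illustrative Python sketch in which a tiny linear Q-value approximator stands in for the paper's DQN, and the state features, action sets (`PLATOON_SIZES`, `PRIORITY`), and class/function names are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyQNet:
    """Minimal linear Q-value approximator (a stand-in for the paper's DQN)."""
    def __init__(self, n_features, n_actions, lr=0.01):
        self.W = np.zeros((n_features, n_actions))
        self.lr = lr

    def q_values(self, state):
        # One Q-value per discrete action.
        return state @ self.W

    def update(self, state, action, target):
        # One-step temporal-difference update toward a target Q-value.
        td_error = target - self.q_values(state)[action]
        self.W[:, action] += self.lr * td_error * state

# Hypothetical discrete action sets for the two layers.
PLATOON_SIZES = [1, 2, 3, 4, 5]            # layer 1: platoon size
PRIORITY = ["independent", "coordinated"]  # layer 2: passing status

def epsilon_greedy(qnet, state, n_actions, eps=0.1):
    # Explore with probability eps, otherwise exploit the current Q-estimates.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(qnet.q_values(state)))

# Layer 1: centralized agent choosing an adaptive platoon size.
layer1 = TinyQNet(n_features=3, n_actions=len(PLATOON_SIZES))
# Layer 2: decentralized per-platoon agent assigning passing status.
layer2 = TinyQNet(n_features=2, n_actions=len(PRIORITY))

# Toy approach-lane state: [queue length, mean speed, waiting time], normalized.
state1 = np.array([0.6, 0.3, 0.5])
a1 = epsilon_greedy(layer1, state1, len(PLATOON_SIZES))
size = PLATOON_SIZES[a1]

# Toy per-platoon state: [distance to stop line, conflicting demand].
state2 = np.array([0.4, 0.7])
a2 = epsilon_greedy(layer2, state2, len(PRIORITY))
status = PRIORITY[a2]

# One toy training step: nudge layer 1 toward a reward-derived target.
layer1.update(state1, a1, target=1.0)

print(size, status)
```

In the actual model each layer would learn from multi-objective rewards (efficiency, fairness, energy) observed in SUMO; this sketch only shows how two DQN-style decision layers, one centralized and one decentralized, can be composed.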

Related research

06/24/2022 · Modeling Adaptive Platoon and Reservation Based Autonomous Intersection Control: A Deep Reinforcement Learning Approach
As a strategy to reduce travel delay and enhance energy efficiency, plat...

01/31/2022 · CoTV: Cooperative Control for Traffic Light Signals and Connected Autonomous Vehicles using Deep Reinforcement Learning
The target of reducing travel time only is insufficient to support the d...

08/21/2022 · Development of a CAV-based Intersection Control System and Corridor Level Impact Assessment
This paper presents a signal-free intersection control system for CAVs b...

11/05/2020 · A Hysteretic Q-learning Coordination Framework for Emerging Mobility Systems in Smart Cities
Connected and automated vehicles (CAVs) can alleviate traffic congestion...

08/07/2019 · Large-scale traffic signal control using machine learning: some traffic flow considerations
This paper uses supervised learning, random search and deep reinforcemen...

05/05/2022 · HARL: A Novel Hierachical Adversary Reinforcement Learning for Automoumous Intersection Management
As an emerging technology, Connected Autonomous Vehicles (CAVs) are beli...

03/05/2023 · D-HAL: Distributed Hierarchical Adversarial Learning for Multi-Agent Interaction in Autonomous Intersection Management
Autonomous Intersection Management (AIM) provides a signal-free intersec...
