Turning Traffic Monitoring Cameras into Intelligent Sensors for Traffic Density Estimation

by Zijian Hu, et al.

Accurate traffic state information plays a pivotal role in Intelligent Transportation Systems (ITS), and it is an essential input to various smart mobility applications such as signal coordination and traffic flow prediction. The current practice to obtain traffic state information is through specialized sensors such as loop detectors and speed cameras. In most metropolitan areas, traffic monitoring cameras have been installed to monitor the traffic conditions on arterial roads and expressways, and the collected videos or images are mainly used for visual inspection by traffic engineers. Unfortunately, the data collected from traffic monitoring cameras are affected by the 4L characteristics: Low frame rate, Low resolution, Lack of annotated data, and Located in complex road environments. Therefore, despite the great potential of traffic monitoring cameras, the 4L characteristics hinder them from providing useful traffic state information (e.g., speed, flow, density). This paper focuses on the traffic density estimation problem, as it is widely applicable to various traffic surveillance systems. To the best of our knowledge, there is a lack of a holistic framework for addressing the 4L characteristics and extracting traffic density information from traffic monitoring camera data. To this end, this paper proposes a framework for estimating traffic density using uncalibrated traffic monitoring cameras with 4L characteristics. The proposed framework consists of two major components: camera calibration and vehicle detection. The camera calibration method estimates the real-world distance represented by pixels in the images and videos, and vehicle counts are extracted by the deep-learning-based vehicle detection method. Combining the two components, high-granularity traffic density can be estimated. To validate the proposed framework, two case studies were conducted in Hong Kong and Sacramento.
The results show that the Mean Absolute Error (MAE) in camera calibration is less than 0.2 meters out of 6 meters, and the accuracy of vehicle detection under various conditions is approximately 90%. Overall, the MAE for the estimated density is 9.04 veh/km/lane in Hong Kong and 1.30 veh/km/lane in Sacramento. The research outcomes can be used to calibrate speed-density fundamental diagrams, and the proposed framework can provide accurate and real-time traffic information without installing additional sensors.




CAROM – Vehicle Localization and Traffic Scene Reconstruction from Monocular Cameras on Road Infrastructures

Traffic monitoring cameras are powerful tools for traffic management and...

Identifying High Accuracy Regions in Traffic Camera Images to Enhance the Estimation of Road Traffic Metrics: A Quadtree Based Method

The growing number of real-time camera feeds in urban areas has made it ...

High-Resolution Traffic Sensing with Autonomous Vehicles

The last decades have witnessed the breakthrough of autonomous vehicles ...

Traffic Prediction Framework for OpenStreetMap using Deep Learning based Complex Event Processing and Open Traffic Cameras

Displaying near-real-time traffic information is a useful feature of dig...

DeepWiTraffic: Low Cost WiFi-Based Traffic Monitoring System Using Deep Learning

A traffic monitoring system (TMS) is an integral part of Intelligent Tra...

Framework for Highway Traffic Profiling using Connected Vehicle Data

The connected vehicle (CV) data could potentially revolutionize the traf...

On-ramp and Off-ramp Traffic Flows Estimation Based on A Data-driven Transfer Learning Framework

To develop the most appropriate control strategy and monitor, maintain, ...
