Exploring OpenStreetMap Availability for Driving Environment Understanding

03/11/2019
by Yang Zheng, et al.

With the great achievements of artificial intelligence, vehicle technologies have advanced significantly from human-centric driving towards fully automated driving. An intelligent vehicle should be able to understand the driver's perception of the environment as well as the controlling behavior of the vehicle. Since high-definition digital map information has become available to provide rich environmental context about static roads, buildings, and traffic infrastructure, it is worthwhile to explore map data capabilities for driving task understanding. As an alternative to commercially used maps, OpenStreetMap (OSM) data is a free and open dataset, which makes it uniquely suited for exploratory research. This study focuses on two tasks that leverage OSM for driving environment understanding. First, driving scenario attributes are retrieved from OSM elements and combined with vehicle dynamic signals for driving event recognition. Utilizing steering angle changes and a Bi-directional Recurrent Neural Network (Bi-RNN), a driving sequence is segmented and classified into lane-keeping, lane-change-left, lane-change-right, turn-left, and turn-right events. Second, for autonomous driving perception, OSM data can be used to render virtual street views, which serve as prior knowledge to be fused with vision/laser systems for road semantic segmentation. Five different types of road masks are generated from OSM, images, and Lidar points, and fused to characterize the drivable space from the driver's perspective. As an alternative data-driven approach based on a Fully Convolutional Network (FCN), OSM availability for deep learning methods is discussed to reveal its potential for compensating street-view images and for automatic road semantic annotation.
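
A rough sketch of the first task is given below: a bidirectional recurrent network tags every time step of a driving sequence with one of the five events. The PyTorch LSTM, the hidden size, the 3-dimensional feature vector, and the toy sequence length are illustrative assumptions, not the authors' exact Bi-RNN configuration.

```python
# Minimal sketch: per-time-step driving event tagging with a bidirectional RNN.
# Layer sizes, sequence length and the feature set are illustrative assumptions.
import torch
import torch.nn as nn

NUM_EVENTS = 5      # lane-keep, lane-change-left, lane-change-right, turn-left, turn-right
NUM_FEATURES = 3    # e.g. steering angle, steering rate, an OSM-derived road attribute

class BiRNNEventTagger(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # The bidirectional LSTM reads the sequence forwards and backwards,
        # so every time step sees both past and future steering context.
        self.rnn = nn.LSTM(NUM_FEATURES, hidden_size,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, NUM_EVENTS)

    def forward(self, x):            # x: (batch, time, NUM_FEATURES)
        out, _ = self.rnn(x)         # out: (batch, time, 2 * hidden_size)
        return self.head(out)        # per-time-step event logits

# Toy usage: a batch of 8 sequences with 200 time steps each.
model = BiRNNEventTagger()
signals = torch.randn(8, 200, NUM_FEATURES)
events = model(signals).argmax(dim=-1)   # predicted event label per time step
```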
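
For the second task, the sketch below shows one plausible way to rasterise OSM road geometry into a binary drivable-space mask and fuse it with a Lidar-derived ground mask. The paper renders masks at the driver's perspective and fuses five mask types; this simplified version uses a top-down grid, a fixed lane width, and a single logical-AND fusion rule, all of which are assumptions for illustration.

```python
# Hedged sketch: rasterise OSM way centrelines into a road mask and fuse it
# with a Lidar ground mask. Grid size, resolution, lane width and the AND
# fusion rule are illustrative assumptions, not the paper's exact pipeline.
import numpy as np
import cv2

GRID = 400    # 400 x 400 cells
RES = 0.25    # metres per cell -> a 100 m x 100 m patch around the ego vehicle

def osm_road_mask(ways_xy, lane_width_m=3.5):
    """Rasterise OSM way centrelines (already projected to the ego frame,
    in metres) into a binary drivable-space mask."""
    mask = np.zeros((GRID, GRID), dtype=np.uint8)
    for way in ways_xy:   # each way: list of (x, y) coordinates in metres
        px = np.int32([[x / RES + GRID // 2, y / RES + GRID // 2]
                       for x, y in way]).reshape(-1, 1, 2)
        cv2.polylines(mask, [px], False, 1, max(1, int(lane_width_m / RES)))
    return mask

def fuse_masks(osm_mask, lidar_ground_mask):
    """Simple fusion rule: drivable where the map says 'road' AND the Lidar
    ground segmentation agrees."""
    return np.logical_and(osm_mask > 0, lidar_ground_mask > 0).astype(np.uint8)

# Toy usage: one synthetic straight road and an all-ground Lidar mask.
ways = [[(-40.0, 0.0), (40.0, 0.0)]]
fused = fuse_masks(osm_road_mask(ways), np.ones((GRID, GRID), np.uint8))
```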
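
The data-driven alternative can be sketched as a standard fully convolutional segmentation network supervised by OSM-rendered road masks acting as automatic (weak) labels. The torchvision FCN-ResNet50 backbone, the two-class setup, and the input resolution are illustrative choices, not the paper's exact model.

```python
# Hedged sketch: an FCN for binary road segmentation, using OSM-rendered
# road masks as automatic (possibly noisy) training labels.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(num_classes=2)              # classes: road / not-road

images = torch.randn(2, 3, 256, 512)             # batch of street-view frames
osm_masks = torch.randint(0, 2, (2, 256, 512))   # OSM-derived road labels

logits = model(images)["out"]                    # (2, 2, 256, 512)
loss = F.cross_entropy(logits, osm_masks)        # supervise with OSM labels
loss.backward()                                  # standard gradient step follows
```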

Related research

03/06/2019 · A Lane-Change Path Planner and its application with a monocular camera
Human drivers utilize the visual cues from the road to perform some ...

01/31/2018 · Dynamics of Driver's Gaze: Explorations in Behavior Modeling & Maneuver Prediction
The study and modeling of driver's gaze dynamics is important because, i...

03/20/2019 · Affordance Learning In Direct Perception for Autonomous Driving
Recent development in autonomous driving involves high-level computer vi...

05/07/2023 · Bi-Mapper: Holistic BEV Semantic Mapping for Autonomous Driving
A semantic map of the road scene, covering fundamental road elements, is...

04/16/2020 · Where can I drive? Deep Ego-Corridor Estimation for Robust Automated Driving
Lane detection is an essential part of the perception module of any auto...

04/15/2021 · Street-Map Based Validation of Semantic Segmentation in Autonomous Driving
Artificial intelligence for autonomous driving must meet strict requirem...

11/10/2022 · Driver Maneuver Detection and Analysis using Time Series Segmentation and Classification
The current paper implements a methodology for automatically detecting v...
