You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings

by Timothy Chase Jr., et al.
University at Buffalo

The detection of hazardous terrain during the planetary landing of spacecraft plays a critical role in assuring vehicle safety and mission success. A cheap and effective way of detecting hazardous terrain is through the use of visual cameras, which ensure operational ability from atmospheric entry through touchdown. Because spacecraft are constrained in both resources and computational power, traditional techniques for visual hazardous terrain detection rely on template matching and registration against pre-built hazard maps. Although successful on previous missions, this approach is restricted by the specificity of its templates and limited by the fidelity of the underlying hazard map, both of which require extensive pre-flight effort and cost to obtain and develop. Terrestrial systems that perform a similar task in applications such as autonomous driving use state-of-the-art deep learning techniques to successfully localize and classify navigation hazards. Advancements in spacecraft co-processors aimed at accelerating deep learning inference now enable the application of these methods in space for the first time. In this work, we introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique for autonomous spacecraft planetary landings. Through unsupervised domain adaptation, we tailor YOCO for training entirely in simulation, removing the need for real-world annotated data and expensive mission surveying phases. We further improve the transfer of representative terrain knowledge between simulation and the real world through visual similarity clustering. We demonstrate the utility of YOCO through a series of terrestrial and extraterrestrial simulation-to-real experiments and show substantial improvements in the ability to both detect and accurately classify instances of planetary terrain.
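The abstract describes visual similarity clustering only at a high level, so as a minimal illustrative sketch (not the paper's actual method), grouping items by feature-space proximity can be done with plain k-means. In practice the inputs would be learned embeddings of terrain image patches rather than the toy 2-D vectors used here; the function name and deterministic initialization are assumptions for reproducibility of the example.

```python
import math

def visual_similarity_clusters(features, k, iters=20):
    """Group feature vectors by Euclidean proximity (plain k-means).

    A hypothetical stand-in for a visual similarity clustering step;
    real usage would cluster embeddings of terrain image patches,
    not raw 2-D points.
    """
    # Deterministic initialization: first k features become the seeds.
    centers = list(features[:k])
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        groups = [[] for _ in range(k)]
        for f in features:
            i = min(range(k), key=lambda c: math.dist(f, centers[c]))
            groups[i].append(f)
        # Recompute each center as the mean of its assigned vectors;
        # an empty group keeps its previous center.
        centers = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Example: two well-separated toy "terrain" clusters.
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, groups = visual_similarity_clusters(pts, k=2)
```

On this toy data the two recovered groups match the two spatial clusters, so clustering could be used to select which simulated terrain examples best represent each real-world terrain mode.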




