Robots Understanding Contextual Information in Human-Centered Environments using Weakly Supervised Mask Data Distillation

12/15/2020
by Daniel Dworakowski, et al.

Contextual information in human environments, such as signs, symbols, and objects, provides important cues that robots can use for exploration and navigation. To identify and segment this contextual information from the complex images obtained in these environments, data-driven methods such as Convolutional Neural Networks (CNNs) are used. However, these methods require large amounts of human-labeled data, which is slow and costly to obtain. Weakly supervised methods address this limitation by generating pseudo segmentation labels (PSLs). In this paper, we present the novel Weakly Supervised Mask Data Distillation (WeSuperMaDD) architecture for autonomously generating PSLs using CNNs that were not specifically trained for context segmentation, e.g., CNNs trained for object classification or image captioning. WeSuperMaDD uniquely generates PSLs from learned image features obtained from sparse data of limited diversity, which is common in robot navigation tasks in human-centered environments (malls, grocery stores). Our proposed architecture uses a new mask refinement system that automatically searches for the PSL with the fewest foreground pixels that still satisfies cost constraints, removing the need for handcrafted heuristic rules. Extensive experiments validated the performance of WeSuperMaDD in generating PSLs for datasets containing text of various scales, fonts, and perspectives in multiple indoor/outdoor environments. A comparison with Naive, GrabCut, and Pyramid methods showed a significant improvement in label and segmentation quality. Moreover, a context segmentation CNN trained using the WeSuperMaDD architecture achieved measurable improvements in accuracy compared to one trained with Naive PSLs. Our method also achieved performance comparable to existing state-of-the-art text detection and segmentation methods on real datasets, without requiring segmentation labels for training.
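To make the mask refinement idea concrete, below is a minimal Python sketch of the selection step described above: candidate pseudo segmentation masks are generated (here, by thresholding a feature/activation map at several levels), candidates that violate a cost constraint are discarded, and the mask with the fewest foreground pixels is returned. The function names, the thresholding strategy, and the coverage-based cost are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def candidate_masks_from_feature_map(feature_map, thresholds):
    """Illustrative candidate generation: binarize a (H, W) feature/activation
    map at several thresholds to produce candidate PSL masks.
    (Assumed procedure for illustration only.)"""
    return [(feature_map >= t).astype(np.uint8) for t in thresholds]

def select_psl(candidates, cost_fn, max_cost):
    """Illustrative refinement step: among candidates whose cost satisfies the
    constraint, return the mask with the fewest foreground pixels."""
    feasible = [m for m in candidates if cost_fn(m) <= max_cost]
    if not feasible:
        return None  # no candidate satisfies the cost constraint
    return min(feasible, key=lambda m: int(np.count_nonzero(m)))

# Example usage with a hypothetical cost: penalize masks that cover too little
# of a coarse detection region (box_mask), so the sparsest plausible mask wins.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_map = rng.random((64, 64))            # stand-in for CNN features
    box_mask = np.zeros((64, 64), dtype=np.uint8)
    box_mask[16:48, 16:48] = 1                    # stand-in detection region

    def coverage_cost(mask):
        inter = np.count_nonzero(mask & box_mask)
        return 1.0 - inter / max(np.count_nonzero(box_mask), 1)

    candidates = candidate_masks_from_feature_map(
        feature_map, thresholds=[0.3, 0.4, 0.5, 0.6, 0.7])
    psl = select_psl(candidates, coverage_cost, max_cost=0.55)
    print(None if psl is None else int(np.count_nonzero(psl)))
```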


