CREATE: Multimodal Dataset for Unsupervised Learning, Generative Modeling and Prediction of Sensory Data from a Mobile Robot in Indoor Environments
The CREATE database comprises 14 hours of multimodal recordings from a mobile robotic platform based on the iRobot Create. Its sensors cover vision, audition, motors and proprioception. The dataset was designed around a mobile robot that can learn multimodal representations of its environment through its ability to navigate it. Navigation also lets the robot learn the dependencies and relationships between its different modalities (e.g. vision and audition), since these reflect both the external environment and the robot's internal state. The dataset is expected to support multiple uses, such as multimodal unsupervised object learning, multimodal prediction, and egomotion/causality detection.
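Because the modalities are sampled at different rates, working with such recordings typically requires aligning streams by timestamp. As a rough illustration only (not the dataset's official API or file layout), a nearest-timestamp alignment between two modalities might look like the sketch below, assuming a per-sample timestamp array is available for each sensor:

```python
# Hypothetical sketch: pair each sample of one modality with the
# temporally nearest sample of another. The CREATE dataset's actual
# storage format and access API may differ.
import numpy as np

def align_nearest(src_times, dst_times):
    """For each source timestamp, return the index of the nearest
    destination timestamp (simple nearest-neighbour alignment).
    Both arrays must be sorted in ascending order."""
    idx = np.searchsorted(dst_times, src_times)
    idx = np.clip(idx, 1, len(dst_times) - 1)
    left = dst_times[idx - 1]
    right = dst_times[idx]
    # Step back one index where the left neighbour is closer.
    idx -= src_times - left < right - src_times
    return idx

# Example: pair each camera frame with the closest odometry reading.
frame_t = np.array([0.00, 0.10, 0.20, 0.30])  # 10 Hz vision (assumed rate)
odom_t = np.array([0.00, 0.02, 0.04, 0.31])   # irregular proprioception
print(align_nearest(frame_t, odom_t))          # -> [0 2 3 3]
```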