TartanAir: A Dataset to Push the Limits of Visual SLAM
- Wenshan Wang,
- Delong Zhu,
- Xiangwei Wang,
- Yaoyu Hu,
- Yuheng Qiu,
- Chen Wang,
- Yafei Hu,
- Ashish Kapoor,
- Sebastian Scherer
ArXiv
We present a challenging dataset, TartanAir, for robot navigation tasks and beyond. The data is collected in photo-realistic simulation environments with various lighting conditions, weather, and moving objects. By collecting data in simulation, we are able to obtain multi-modal sensor data and precise ground-truth labels, including stereo RGB images, depth images, segmentation, optical flow, camera poses, and LiDAR point clouds. We set up a large number of environments with various styles and scenes, covering challenging viewpoints and diverse motion patterns that are difficult to achieve with physical data collection platforms.
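For readers who want to use the ground-truth labels directly, the minimal sketch below shows one way the pose and depth data could be loaded with NumPy. The file names and per-trajectory layout (a `pose_left.txt` file with one "x y z qx qy qz qw" line per frame, and per-frame depth stored as `.npy` arrays) are assumptions based on the public dataset documentation, not something guaranteed by this summary; adjust the paths to match your download.

```python
# Minimal sketch of reading TartanAir ground truth with NumPy.
# NOTE: the file names below (pose_left.txt, depth_left/000000_left_depth.npy)
# are assumed from the dataset's documented layout and may differ per release.
import numpy as np

def load_poses(pose_file):
    """Load camera poses: one line per frame, 'x y z qx qy qz qw'."""
    return np.loadtxt(pose_file)  # expected shape: (num_frames, 7)

def load_depth(depth_file):
    """Load a per-pixel depth map stored as a float32 .npy array (meters)."""
    return np.load(depth_file)

if __name__ == "__main__":
    poses = load_poses("P000/pose_left.txt")
    depth = load_depth("P000/depth_left/000000_left_depth.npy")
    print(poses.shape, depth.shape, depth.dtype)
```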
Publication downloads
TartanAir
June 24, 2020
TartanAir dataset: AirSim Simulation Dataset for Simultaneous Localization and Mapping (theairlab.org/tartanair-dataset)
Teaching a robot to see and navigate with simulation
The ability to see and navigate is a critical operational requirement for robots and autonomous systems. However, building a real-world autonomous system that can operate safely at scale is a very difficult task. The partnership between Microsoft Research and Carnegie Mellon University continues to advance the state of the art in autonomous systems through research focused on solving real-world challenges such as autonomous mapping, navigation, and inspection of underground, urban, and industrial environments. Simultaneous Localization and Mapping (SLAM) is one of the most fundamental capabilities a robot needs. We explore how SLAM is fundamentally different and more complicated because of the sequential nature of recognizing landmarks (such as buildings and trees) in a dynamic physical environment while driving or flying through it versus…