arXiv:2108.10869

DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras

Published on Aug 24, 2021
Authors: Zachary Teed, Jia Deng

AI-generated summary

DROID-SLAM is a deep-learning-based SLAM system that applies recurrent iterative updates to camera pose and pixelwise depth through a Dense Bundle Adjustment layer, demonstrating superior accuracy and robustness compared to previous methods.

Abstract

We introduce DROID-SLAM, a new deep-learning-based SLAM system. DROID-SLAM consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. DROID-SLAM is accurate, achieving large improvements over prior work, and robust, suffering substantially fewer catastrophic failures. Despite being trained on monocular video, it can leverage stereo or RGB-D video to achieve improved performance at test time. Our open-source code is available at https://github.com/princeton-vl/DROID-SLAM.
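To make the architecture concrete, below is a minimal, hypothetical PyTorch sketch of the loop the abstract describes: a GRU-style update operator refines a hidden state and predicts revised correspondences plus per-pixel confidence weights, and an optimization step then updates camera pose and depth. All names here (ToyUpdateGRU, toy_dba_step) are invented for illustration, and the toy weighted gradient step merely stands in for the paper's differentiable Gauss-Newton Dense Bundle Adjustment solve over a frame graph; see the linked repository for the real implementation.

import torch

class ToyUpdateGRU(torch.nn.Module):
    """Update operator: refines a per-pixel hidden state and predicts a
    flow revision plus a confidence weight for each correspondence."""
    def __init__(self, dim=64):
        super().__init__()
        self.gru = torch.nn.GRUCell(dim, dim)
        self.flow_head = torch.nn.Linear(dim, 2)    # (dx, dy) revision
        self.weight_head = torch.nn.Linear(dim, 2)  # confidence in (0, 1)

    def forward(self, hidden, features):
        hidden = self.gru(features, hidden)
        return hidden, self.flow_head(hidden), torch.sigmoid(self.weight_head(hidden))

def toy_dba_step(pose, depth, target_flow, weight, lr=0.1):
    """Stand-in for the Dense Bundle Adjustment layer. The paper solves a
    damped Gauss-Newton system; this toy version takes one weighted
    gradient step on a fake reprojection residual."""
    # Fake "induced flow" from pose and depth, just to couple the variables;
    # the real layer reprojects points between keyframes.
    induced = pose[:2].unsqueeze(0) * depth.unsqueeze(1)           # (N, 2)
    residual = (weight * (induced - target_flow) ** 2).mean()
    g_pose, g_depth = torch.autograd.grad(residual, (pose, depth))
    return pose - lr * g_pose, depth - lr * g_depth

N, dim = 8, 64
pose = torch.zeros(6, requires_grad=True)       # se(3)-style pose vector
depth = torch.ones(N, requires_grad=True)       # per-pixel depth variables
hidden = torch.zeros(N, dim)
features = torch.randn(N, dim)                  # stands in for correlation lookups
update = ToyUpdateGRU(dim)

for _ in range(4):                              # recurrent refinement iterations
    hidden, flow_rev, weight = update(hidden, features)
    # Detached here for simplicity; in the real system the DBA layer is
    # differentiable end to end, so gradients flow back into the update operator.
    pose, depth = toy_dba_step(pose, depth, flow_rev.detach(), weight.detach())

The point the sketch tries to convey is the alternation: the network never regresses pose directly. It predicts correspondences and confidence weights, and the geometry (pose and depth) is recovered by the optimization layer at every iteration.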
