[CVPR 2025] Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization
Paper: https://huggingface.co/papers/2412.08376
Code: https://github.com/ffrivera0/reloc3r
Reloc3r is a simple yet effective camera pose estimation framework that combines a pre-trained two-view relative camera pose regression network with a multi-view motion averaging module.
Trained on approximately 8 million posed image pairs, Reloc3r achieves surprisingly good performance and generalization ability, producing high-quality camera pose estimates in real-time.
Table of Contents
- TODO List
- Installation
- Usage
- Evaluation on Relative Camera Pose Estimation
- Evaluation on Visual Localization
- Training
- Citation
- Acknowledgments
TODO List
- Release pre-trained weights and inference code.
- Release evaluation code for ScanNet1500, MegaDepth1500 and Cambridge datasets.
- Release sample code for self-captured images and videos.
- Release training code and data.
- Release evaluation code for other datasets.
- Release the accelerated version for visual localization.
- Release Gradio Demo.
Installation
- Clone Reloc3r
git clone --recursive https://github.com/ffrivera0/reloc3r.git
cd reloc3r
# if you have already cloned reloc3r:
# git submodule update --init --recursive
- Create the environment using conda
conda create -n reloc3r python=3.11 cmake=3.14.0
conda activate reloc3r
conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia # use the correct version of cuda for your system
pip install -r requirements.txt
# optional: you can also install additional packages to:
# - add support for HEIC images
pip install -r requirements_optional.txt
- Optional: Compile the cuda kernels for RoPE
# Reloc3r relies on RoPE positional embeddings for which you can compile some cuda kernels for faster runtime.
cd croco/models/curope/
python setup.py build_ext --inplace
cd ../../../
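For intuition, here is a minimal pure-PyTorch sketch of 1D rotary position embedding. The curope kernels above implement an optimized (2D) variant of the same idea, so treat this only as an illustration, not the repository's implementation:
import torch

def rope_1d(x, positions, base=10000.0):
    # Rotate consecutive feature pairs by position-dependent angles.
    # x: (..., seq, dim) with even dim; positions: (..., seq).
    dim = x.shape[-1]
    freqs = base ** (-torch.arange(0, dim, 2, dtype=x.dtype) / dim)  # (dim/2,)
    angles = positions[..., None] * freqs                            # (..., seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out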
- Optional: Manually download the Reloc3r-224/Reloc3r-512 checkpoints. Otherwise, the pre-trained weights are downloaded automatically when you run the evaluation and demo code below.
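If you prefer to fetch the weights yourself, something like the following works, assuming the checkpoints are hosted on Hugging Face; the repo id and filename below are assumptions, so adjust them to the actual hosting:
from huggingface_hub import hf_hub_download

# repo_id and filename are assumed -- check the checkpoint links above for the real ones
ckpt_path = hf_hub_download(repo_id="siyan824/reloc3r-512", filename="Reloc3r-512.pth")
print(ckpt_path)  # local cache path of the downloaded checkpoint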
Usage
Using Reloc3r, you can estimate camera poses for your own images and videos.
For relative pose estimation, try the demo code in wild_relpose.py. We provide some image pairs used in our paper.
# replace the args with your paths
python wild_relpose.py --v1_path ./data/wild_images/zurich0.jpg --v2_path ./data/wild_images/zurich1.jpg --output_folder ./data/wild_images/
Visualize the relative pose
# replace the args with your paths
python visualization.py --mode relpose --pose_path ./data/wild_images/pose2to1.txt
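You can also inspect the estimated pose directly by loading the saved text file. We assume here that pose2to1.txt stores a 4x4 homogeneous transform; note that the translation from two-view pose regression is meaningful only up to scale:
import numpy as np

pose_2to1 = np.loadtxt("./data/wild_images/pose2to1.txt")  # assumed 4x4 matrix
R, t = pose_2to1[:3, :3], pose_2to1[:3, 3]
print("relative rotation:\n", R)
print("translation direction (scale-free):", t / np.linalg.norm(t))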
For visual localization, the demo code in wild_visloc.py estimates absolute camera poses from sampled frames in self-captured videos.
The demo simply uses the first and last frames as the database, so all images must share overlapping regions. Purely linear camera motion is not supported, since triangulating the query camera center degenerates when all camera centers are collinear (see the sketch below). We provide some videos as examples.
# replace the args with your paths
python wild_visloc.py --video_path ./data/wild_video/ids.MOV --output_folder ./data/wild_video
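For reference, here is a minimal numpy sketch of the motion-averaging geometry behind this demo. It is not the repository's actual API; the world-to-camera convention and all function names are assumptions. It also shows why linear motion fails: the query center is triangulated from rays through the database centers, which is degenerate when the rays are (near-)parallel:
import numpy as np

def avg_rotation(Rs):
    # Chordal L2 mean: project the sum of rotations back onto SO(3) via SVD.
    U, _, Vt = np.linalg.svd(sum(Rs))
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

def triangulate_center(centers, dirs):
    # Least-squares point closest to all rays c_i + s * d_i.
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in zip(centers, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)  # singular if all rays are parallel

def query_pose(db_poses, rel_poses):
    # db_poses: [(R_i, t_i)] world-to-camera poses of the database frames.
    # rel_poses: [(R_qi, t_qi)] regressed query poses relative to frame i;
    # each t_qi is only a direction (its scale is unobservable from two views).
    R_q = avg_rotation([R_qi @ R_i for (R_i, _), (R_qi, _) in zip(db_poses, rel_poses)])
    centers = [-R_i.T @ t_i for R_i, t_i in db_poses]
    dirs = [-(R_qi @ R_i).T @ t_qi for (R_i, _), (R_qi, t_qi) in zip(db_poses, rel_poses)]
    C_q = triangulate_center(centers, dirs)
    return R_q, -R_q @ C_q  # world-to-camera (R, t) of the query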
Visualize the absolute poses
# replace the args with your paths
python visualization.py --mode visloc --pose_folder ./data/wild_video/ids_poses/
Evaluation on Relative Camera Pose Estimation
To reproduce our evaluation on ScanNet1500 and MegaDepth1500, download the datasets here and unzip them to ./data/.
Then run the following script. You will obtain results similar to those presented in our paper.
bash scripts/eval_relpose.sh
To achieve faster inference, set --amp=1. This enables evaluation with fp16, which increases speed from 24 FPS to 40 FPS on an RTX 4090 with Reloc3r-512, without any accuracy loss.
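Under standard PyTorch AMP, this roughly corresponds to wrapping the forward pass in an fp16 autocast region; the snippet below is illustrative only, with model, view1, and view2 as placeholders for the actual evaluation code:
import torch

# 'model', 'view1', 'view2' are placeholders, not names from this repository
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    pred = model(view1, view2)  # forward pass runs in fp16 where safe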
Evaluation on Visual Localization
To reproduce our evaluation on Cambridge, download the dataset here and unzip it to ./data/cambridge/.
Then run the following script. You will obtain results similar to those presented in our paper.
bash scripts/eval_visloc.sh
Training
We follow DUSt3R to process the training data. Download the datasets: CO3Dv2, ScanNet++, ARKitScenes, BlendedMVS, MegaDepth, DL3DV, RealEstate10K.
For each dataset, we provide a preprocessing script in the datasets_preprocess directory, and an archive containing the list of pairs when needed. You must download the datasets yourself from their official sources, agree to their licenses, and run the preprocessing script.
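As a rough illustration of the workflow (the actual script names and flags live in the datasets_preprocess directory and may differ), processing a dataset looks like:
# hypothetical example -- substitute the real script and arguments
python datasets_preprocess/preprocess_scannetpp.py --scannetpp_dir /path/to/scannetpp --output_dir ./data/scannetpp_processed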
We provide a sample script to train Reloc3r with ScanNet++ on a single RTX 3090 GPU.
bash scripts/train_small.sh
To reproduce our training for Reloc3r-512 with 8 H800 GPUs, run the following script.
bash scripts/train.sh
These scripts are not strictly equivalent to what was used to train Reloc3r, but they should be close enough.
Citation
If you find our work helpful in your research, please consider citing:
@article{reloc3r,
title={Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization},
author={Dong, Siyan and Wang, Shuzhe and Liu, Shaohui and Cai, Lulu and Fan, Qingnan and Kannala, Juho and Yang, Yanchao},
journal={arXiv preprint arXiv:2412.08376},
year={2024}
}
Acknowledgments
Our implementation is based on several awesome repositories, including CroCo and DUSt3R. We thank the respective authors for open-sourcing their code.