Although various visual localization approaches exist, such as scene coordinate regression and pose regression, these methods often struggle with high memory consumption or extensive optimization requirements. To address these challenges, we leverage recent advances in novel view synthesis, in particular 3D Gaussian Splatting (3DGS), which compactly encodes both 3D geometry and scene appearance through its spatial features. Our method builds on the dense description maps produced by XFeat, a lightweight keypoint detection and description network. We distill these dense keypoint descriptors into the 3DGS representation to improve its spatial understanding, enabling more accurate camera pose prediction through 2D-3D correspondences. After estimating an initial pose, we refine it using a photometric warping loss. Benchmarking on popular indoor and outdoor datasets shows that our approach surpasses state-of-the-art Neural Render Pose (NRP) methods, including NeRFMatch and PNeRFLoc.
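As a rough sketch of the distillation idea (the feature rasterizer and the exact loss form are assumptions here, not the paper's verbatim formulation): each Gaussian carries a learnable descriptor, the splatting renderer produces a dense feature map for each training view, and that map is pulled toward XFeat's dense description map with a per-pixel cosine loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(rendered_feat, xfeat_desc):
    """Per-pixel cosine alignment between a feature map rendered from the
    3DGS model and XFeat's dense description map for the same view.

    Both inputs are [C, H, W]. The cosine form is an assumption; an L1
    term over raw features would serve the same purpose.
    """
    rendered = F.normalize(rendered_feat, dim=0)  # unit length per pixel
    teacher = F.normalize(xfeat_desc, dim=0)
    # 1 - cosine similarity, averaged over all pixels
    return (1.0 - (rendered * teacher).sum(dim=0)).mean()
```

During scene optimization, such a term would be added to the usual 3DGS photometric loss, so appearance and descriptors are learned jointly.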
We model the scene with a feature-based 3D Gaussian Splatting (3DGS) representation, grounding keypoint descriptors in 3D for fast and reliable coarse pose estimation. Descriptors from the XFeat network enable localization in both static and dynamic environments. At test time, we estimate an initial coarse pose by matching sparse 2D keypoints from the query image to 3D points in the 3DGS model with a simple but effective greedy matching strategy, followed by a Perspective-n-Point (PnP) solver inside a RANSAC loop. We then refine the pose by aligning a rendered image with the input query using an RGB warping loss, improving accuracy through test-time optimization. The pipeline works end to end, requiring no additional learnable modules or complex processing steps.
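To make the test-time procedure concrete, below is a minimal sketch of the coarse stage. The plain nearest-neighbor assignment and the 0.8 similarity threshold are illustrative simplifications of the greedy matching step; `cv2.solvePnPRansac` provides the PnP-in-RANSAC solve.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def coarse_pose(query_desc, query_kpts, gauss_desc, gauss_xyz, K):
    """Greedy 2D-3D matching followed by PnP inside a RANSAC loop.

    query_desc: [N, C] XFeat descriptors at sparse query keypoints (torch)
    query_kpts: [N, 2] pixel coordinates of those keypoints (torch)
    gauss_desc: [M, C] distilled per-Gaussian descriptors (torch)
    gauss_xyz:  [M, 3] Gaussian centers (torch)
    K:          [3, 3] camera intrinsics as a float64 numpy array
    """
    # Cosine similarity between every query descriptor and every Gaussian.
    sim = F.normalize(query_desc, dim=1) @ F.normalize(gauss_desc, dim=1).T
    best_sim, best_idx = sim.max(dim=1)  # greedy: best Gaussian per keypoint
    keep = best_sim > 0.8                # illustrative similarity threshold

    pts2d = query_kpts[keep].cpu().numpy().astype(np.float64)
    pts3d = gauss_xyz[best_idx[keep]].cpu().numpy().astype(np.float64)

    # Robust pose from the putative 2D-3D correspondences (needs >= 4 points).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, distCoeffs=None, reprojectionError=3.0)
    R, _ = cv2.Rodrigues(rvec)           # axis-angle -> 3x3 rotation
    return R, tvec
```

Given the coarse estimate, refinement is a short test-time optimization. Our loss is an RGB warping loss; the sketch below substitutes a direct photometric L1 between the render and the query, which shows the optimization loop but not the exact warping term, and `render_fn` stands in for an assumed differentiable 3DGS renderer.

```python
def so3_exp(w):
    """Rodrigues' formula: axis-angle vector [3] -> rotation matrix [3, 3]."""
    theta = torch.sqrt((w * w).sum() + 1e-12)
    k = w / theta
    K = torch.stack([
        torch.stack([w.new_zeros(()), -k[2], k[1]]),
        torch.stack([k[2], w.new_zeros(()), -k[0]]),
        torch.stack([-k[1], k[0], w.new_zeros(())]),
    ])
    return torch.eye(3) + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def refine_pose(render_fn, query_rgb, R_init, t_init, iters=100, lr=1e-3):
    """Nudge the coarse pose to minimize a photometric loss against the query.

    render_fn(R, t) must differentiably render an RGB image [3, H, W]
    from the 3DGS model at rotation R [3, 3] and translation t [3] (torch).
    """
    w = torch.zeros(3, requires_grad=True)   # rotation update (axis-angle)
    dt = torch.zeros(3, requires_grad=True)  # translation update
    opt = torch.optim.Adam([w, dt], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        rendered = render_fn(so3_exp(w) @ R_init, t_init + dt)
        loss = (rendered - query_rgb).abs().mean()  # L1 photometric loss
        loss.backward()
        opt.step()
    with torch.no_grad():
        return so3_exp(w) @ R_init, t_init + dt
```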
We evaluate our method in indoor scenarios using the 7Scenes dataset.
Each image below is split by a slider: the left side shows the ground-truth (GT) query image, while the right side shows the render from our estimated, refined pose.
Comparing the two sides reveals how closely the estimated pose matches the ground truth.
We evaluate our method on the Cambridge Landmarks dataset for outdoor scenarios, which pose challenges such as varying lighting and diverse architectural features. Our approach estimates camera poses accurately even in these complex urban environments.
Additionally, we evaluate our method in challenging dynamic environments using the outdoor Phototourism dataset and the indoor Sitcoms3D dataset.
These more complex and variable scenes demonstrate the robustness and versatility of our approach across different types of environments.
@misc{sidorov2024gsplatlocgroundingkeypointdescriptors,
  title         = {GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for Improved Visual Localization},
  author        = {Gennady Sidorov and Malik Mohrat and Ksenia Lebedeva and Ruslan Rakhimov and Sergey Kolyubin},
  year          = {2024},
  eprint        = {2409.16502},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2409.16502},
}