VGGT-SLAM: Dense RGB SLAM optimized on the SL(4) manifold

SPARK Lab researchers have built a new mapping system that creates 3D maps from ordinary video footage. Its alignment step accounts for the projective distortions that can arise when stitching map pieces together, so the final map stays consistent even over long video sequences.

The result: more accurate 3D maps built from long videos, without needing a high-end GPU.

Authors: Dominic Maggio, Hyungtae Lim, and Luca Carlone
Citation: Accepted to the 2025 Conference on Neural Information Processing Systems (NeurIPS 2025)

Abstract:
We present VGGT-SLAM, a dense RGB SLAM system constructed by incrementally and globally aligning submaps created from the feed-forward scene reconstruction approach VGGT using only uncalibrated monocular cameras. While related works align submaps using similarity transforms (i.e., translation, rotation, and scale), we show that such approaches are inadequate in the case of uncalibrated cameras. In particular, we revisit the idea of reconstruction ambiguity, where given a set of uncalibrated cameras with no assumption on the camera motion or scene structure, the scene can only be reconstructed up to a 15-degree-of-freedom projective transformation of the true geometry. This inspires us to recover a consistent scene reconstruction across submaps by optimizing over the SL(4) manifold, thus estimating 15-degree-of-freedom homography transforms between sequential submaps while accounting for potential loop closure constraints. As verified by extensive experiments, we demonstrate that VGGT-SLAM achieves improved map quality using long video sequences that are infeasible for VGGT due to its high GPU requirements.
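
To make the geometry concrete, below is a minimal NumPy/SciPy sketch (not the authors' implementation; all function names are illustrative) of two ingredients the abstract names: estimating a 4x4 projective transform (homography) between overlapping submaps from corresponding 3D points via a standard DLT-style linear solve, rescaling it to determinant 1 so it lies in SL(4), and a retraction that perturbs an SL(4) element along its Lie algebra, the traceless 4x4 matrices, as a manifold optimizer would. The DLT solve here is a generic textbook estimator, not necessarily the exact one used in VGGT-SLAM.

import numpy as np
from scipy.linalg import expm

def to_homogeneous(points):
    """Lift an Nx3 array of Euclidean points to Nx4 homogeneous coordinates."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def project_to_sl4(H):
    """Rescale a 4x4 matrix so det(H) = 1, placing it in SL(4).
    Requires det(H) > 0: det(s*H) = s^4 * det(H) can never flip sign."""
    d = np.linalg.det(H)
    assert d > 0, "alignment assumed to be orientation-preserving"
    return H / d ** 0.25

def estimate_homography(src_pts, dst_pts):
    """DLT-style estimate of H with dst ~ H @ src (up to projective scale),
    from >= 5 corresponding 3D points in two overlapping submaps."""
    X, Y = to_homogeneous(src_pts), to_homogeneous(dst_pts)
    rows = []
    for x, y in zip(X, Y):
        # y ~ H x  <=>  y_k * (Hx)_l - y_l * (Hx)_k = 0 for all pairs k < l
        for k in range(4):
            for l in range(k + 1, 4):
                row = np.zeros((4, 4))
                row[l] += y[k] * x
                row[k] -= y[l] * x
                rows.append(row.ravel())
    A = np.asarray(rows)
    # vec(H) spans the (generically one-dimensional) null space of A:
    # take the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(4, 4)
    return project_to_sl4(H)

def apply_homography(H, points):
    """Map Nx3 points through H and dehomogenize (divide by the 4th coord)."""
    Y = to_homogeneous(points) @ H.T
    return Y[:, :3] / Y[:, 3:4]

def retract(H, xi):
    """Retraction on SL(4): move H along a tangent direction xi in sl(4),
    the traceless 4x4 matrices, since det(expm(xi)) = exp(tr(xi)) = 1."""
    return H @ expm(xi)

As the abstract describes, such submap-to-submap homographies play the role that similarity transforms play in the calibrated setting: together with loop closure constraints, they feed a global optimization over SL(4) that brings all submaps into a single consistent projective frame.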