Robust Neural Rendering in the Wild with Asymmetric Dual 3D Gaussian Splatting
Abstract
A novel Asymmetric Dual 3DGS framework improves 3D reconstruction by training dual models with consistency constraints and divergent masking, outperforming existing methods with high efficiency.
3D reconstruction from in-the-wild images remains a challenging task due to inconsistent lighting conditions and transient distractors. Existing methods typically rely on heuristic strategies to handle such low-quality training data; these strategies often struggle to produce stable, consistent reconstructions and frequently introduce visual artifacts. In this work, we propose Asymmetric Dual 3DGS, a novel framework that leverages the stochastic nature of these artifacts: they tend to vary across different training runs due to minor randomness. Specifically, our method trains two 3D Gaussian Splatting (3DGS) models in parallel, enforcing a consistency constraint that encourages convergence on reliable scene geometry while suppressing inconsistent artifacts. To prevent the two models from collapsing into similar failure modes due to confirmation bias, we introduce a divergent masking strategy that applies two complementary masks: a multi-cue adaptive mask and a self-supervised soft mask. This leads to an asymmetric training process for the two models, reducing shared error modes. In addition, to improve training efficiency, we introduce a lightweight variant called Dynamic EMA Proxy, which replaces one of the two models with a dynamically updated Exponential Moving Average (EMA) proxy and employs an alternating masking strategy to preserve divergence. Extensive experiments on challenging real-world datasets demonstrate that our method consistently outperforms existing approaches while achieving high efficiency. Code and trained models will be released.
Community
3DGS in the wild with dual consistency regularization
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS (2025)
- Learning Fine-Grained Geometry for Sparse-View Splatting via Cascade Depth Loss (2025)
- Intern-GS: Vision Model Guided Sparse-View 3D Gaussian Splatting (2025)
- SuperGS: Consistent and Detailed 3D Super-Resolution Scene Reconstruction via Gaussian Splatting (2025)
- DropoutGS: Dropping Out Gaussians for Better Sparse-view Rendering (2025)
- TSGS: Improving Gaussian Splatting for Transparent Surface Reconstruction via Normal and De-lighting Priors (2025)
- DeclutterNeRF: Generative-Free 3D Scene Recovery for Occlusion Removal (2025)