NeurAR

Neural Uncertainty for Autonomous 3D Reconstruction with Implicit Neural Representations


Yunlong Ran, Jing Zeng, Shibo He, Jiming Chen, Lincheng Li, Yingfeng Chen, Gim Hee Lee, Qi Ye



Paper Code

Overview Video



Abstract

Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems. However, applying them to autonomous 3D reconstruction, where robots are required to explore a scene and plan a view path for the reconstruction, has not been studied. In this paper, we explore for the first time the possibility of using implicit neural representations for autonomous 3D scene reconstruction by addressing two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one. For the first challenge, a proxy of Peak Signal-to-Noise Ratio (PSNR) is proposed to quantify a viewpoint quality. The proxy is acquired by treating the color of a spatial point in a scene as a random variable under a Gaussian distribution rather than a deterministic one; the variance of the distribution quantifies the uncertainty of the reconstruction and composes the proxy. For the second challenge, the proxy is optimized jointly with the parameters of an implicit neural network for the scene. With the proposed view quality criterion, we can then apply the new representations to autonomous 3D reconstruction. Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.

Method

We model the color to be regressed for a spatial point in a scene as a random variable following a Gaussian distribution. The Gaussian distribution models the uncertainty of the reconstruction, and its variance quantifies that uncertainty. When the regression network converges, the variance of the distribution equals the squared error between the predicted color and the ground-truth color; the integral of the uncertainty of points in the frustum of a viewpoint can then be taken as a proxy of PSNR to measure the quality of candidate viewpoints.
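The idea above can be sketched in a few lines. The snippet below is a simplified illustration, not the paper's implementation: `gaussian_nll_loss` shows a standard Gaussian negative-log-likelihood loss under which the learned variance approaches the squared color error at convergence, and `viewpoint_uncertainty` shows one plausible way to accumulate per-point variance over the rays of a viewpoint's frustum (using volume-rendering weights as a stand-in for the integral). All function and variable names here are hypothetical.

```python
import numpy as np

def gaussian_nll_loss(pred_rgb, pred_var, gt_rgb, eps=1e-6):
    # Negative log-likelihood of the ground-truth color under a Gaussian
    # with predicted mean and learned variance. Minimizing this drives the
    # variance toward the squared prediction error, so it serves as an
    # uncertainty estimate. (Illustrative sketch, not the paper's exact loss.)
    var = pred_var + eps  # avoid division by zero / log(0)
    return np.mean(0.5 * (gt_rgb - pred_rgb) ** 2 / var + 0.5 * np.log(var))

def viewpoint_uncertainty(var_samples, weights):
    # Proxy of viewpoint quality: weight each sampled point's variance by
    # its volume-rendering weight along the ray, then sum over all rays in
    # the candidate viewpoint's frustum. Higher value = more uncertain view.
    per_ray = np.sum(weights * var_samples, axis=-1)  # [num_rays]
    return float(per_ray.sum())
```

In this sketch, a candidate viewpoint with a larger accumulated variance is considered more informative and therefore a better next view for the reconstruction.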

a: loss curves of the ray-set based formulation, where Ratio is uncertainty/MSE. b: our ray-set based uncertainty formulation. c: an alternative single-ray based uncertainty formulation, which works but not as well as the ray-set based one. Please refer to our paper for more formulation details. Under the guidance of uncertainty, NeurAR can easily perform active 3D reconstruction.
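The active reconstruction loop guided by uncertainty can be summarized as greedy next-best-view selection: score every candidate viewpoint with the uncertainty proxy and move to the highest-scoring one. The sketch below is a hypothetical simplification (function names are assumptions, not the paper's API); the actual planner in the paper is more involved.

```python
import numpy as np

def select_next_view(candidates, uncertainty_fn):
    # Greedy next-best-view selection: evaluate the uncertainty proxy for
    # each candidate viewpoint and pick the most uncertain (most informative).
    scores = [uncertainty_fn(v) for v in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```

After capturing an image at the selected view, the implicit scene network and its uncertainty are updated, and the selection repeats until the overall uncertainty falls below a budget or threshold.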



Bibtex
@article{ran2023neurar,
  title={NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction With Implicit Neural Representations},
  author={Ran, Yunlong and Zeng, Jing and He, Shibo and Chen, Jiming and Li, Lincheng and Chen, Yingfeng and Lee, Gim Hee and Ye, Qi},
  journal={IEEE Robotics and Automation Letters},
  volume={8},
  number={2},
  pages={1125--1132},
  year={2023},
  publisher={IEEE}
}