Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur

1University of Hong Kong, 2Google

Our method aims to render high-fidelity and temporally consistent novel views in large-scale indoor scenes with defective training data.

Abstract

Rendering novel view images is highly desirable for many applications. Despite recent progress, it remains challenging to render high-fidelity and view-consistent novel views of large-scale scenes from in-the-wild images that contain inevitable artifacts (e.g., motion blur). To this end, we develop a hybrid neural rendering model in which an image-based representation and a neural 3D representation join forces to render high-quality, view-consistent images. Moreover, images captured in the wild inevitably contain artifacts such as motion blur, which deteriorate the quality of rendered images. Accordingly, we propose strategies that simulate blur effects on the rendered images to mitigate the negative influence of blurry training images, and that reduce the importance of such images during training based on precomputed quality-aware weights. Extensive experiments on real and synthetic data demonstrate that our model surpasses state-of-the-art point-based methods for novel view synthesis.
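
To make the idea concrete, below is a minimal PyTorch sketch of the two training-time strategies mentioned in the abstract: blurring the rendered image before comparing it to a (possibly blurry) training photo, and scaling each image's loss by a precomputed quality-aware weight. The function names, the box blur kernel, and the L1 photometric loss are illustrative assumptions for this sketch, not the paper's actual implementation.

# Minimal sketch (assumptions: simple box blur kernel, L1 photometric loss,
# scalar per-image quality weight). Not the authors' actual implementation.
import torch
import torch.nn.functional as F


def simulate_blur(rendered: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    """Convolve a sharp rendering with a blur kernel so that the supervision
    better matches a blurry ground-truth photo.

    rendered: (B, 3, H, W) sharp rendering
    kernel:   (k, k) normalized blur kernel shared across color channels
    """
    k = kernel.shape[-1]
    # Expand the kernel to a depthwise filter: one copy per color channel.
    weight = kernel.view(1, 1, k, k).repeat(3, 1, 1, 1)
    return F.conv2d(rendered, weight, padding=k // 2, groups=3)


def quality_aware_loss(rendered, target, kernel, weight):
    """Photometric loss that (1) blurs the rendering before comparison and
    (2) scales the loss by a precomputed per-image quality weight in [0, 1]."""
    blurred = simulate_blur(rendered, kernel)
    return weight * F.l1_loss(blurred, target)


if __name__ == "__main__":
    rendered = torch.rand(1, 3, 64, 64)    # sharp output of the renderer
    target = torch.rand(1, 3, 64, 64)      # (possibly blurry) training photo
    kernel = torch.ones(5, 5) / 25.0       # stand-in for an estimated blur kernel
    w = torch.tensor(0.3)                  # low weight => image judged blurry
    print(quality_aware_loss(rendered, target, kernel, w).item())

In this sketch, a low quality weight shrinks the gradient contribution of a blurry photo, while the blur simulation keeps the sharp rendering from being penalized for not matching blur in the training image.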

Pipeline


Video

Results on ScanNet




Results on the NeRF Synthetic Dataset

BibTeX

@inproceedings{dai2023hybrid,
  title={Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur},
  author={Dai, Peng and Zhang, Yinda and Yu, Xin and Lyu, Xiaoyang and Qi, Xiaojuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}