Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty

Saining Zhang*1,2 Baijun Ye*1,3 Xiaoxue Chen1
Yuantao Chen1 Zongzheng Zhang1 Cheng Peng1,4 Yongliang Shi1 Hao Zhao†1
1Institute for AI Industry Research (AIR), Tsinghua University 2Nanyang Technological University
3IIIS, Tsinghua University 4Beijing Institute of Technology

TL;DR: We introduce an uncertainty-aware 3D-GS training paradigm that effectively uses aerial imagery to enhance novel view synthesis (NVS) of road views.

Qualitative results of our Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty and several baseline methods.

Abstract

Robust and realistic rendering for large-scale road scenes is essential in autonomous driving simulation. Recently, 3D Gaussian Splatting (3D-GS) has made groundbreaking progress in neural rendering, but the general fidelity of large-scale road scene renderings is often limited by the input imagery, which usually has a narrow field of view and focuses mainly on the street-level local area. Intuitively, data from the drone's perspective can provide a complementary viewpoint to data from the ground vehicle's perspective, enhancing the completeness of scene reconstruction and rendering. However, naively training with aerial and ground images, which exhibit large view disparity, poses a significant convergence challenge for 3D-GS and yields no remarkable improvement on road views. To enhance novel view synthesis of road views and to use the aerial information effectively, we design an uncertainty-aware training method that lets aerial images assist in synthesizing the areas where ground images yield poor learning outcomes, instead of weighting all pixels equally in 3D-GS training as prior work did. We are the first to introduce cross-view uncertainty into 3D-GS by matching the car-view ensemble-based rendering uncertainty to aerial images, weighting the contribution of each pixel to the training process. Additionally, to quantify evaluation metrics systematically, we assemble a high-quality synthesized dataset comprising both aerial and ground images for road scenes. Through comprehensive results, we show that: (1) Jointly training on aerial and ground images improves the representation ability of 3D-GS when test views are shifted and rotated, but performs poorly on the held-out road-view test. (2) Our method mitigates this weakness of joint training and quantitatively outperforms the other baselines on both held-out tests and scenes involving view shifting and rotation on our datasets. (3) Qualitatively, our method shows great improvements in rendering road-scene details.

Motivation & Methods



Results for various models trained with ground images only (G) or with both aerial and ground images (A+G). Incorporating aerial images mitigates the decline in road-view synthesis metrics under view shifting and rotation compared with using ground data alone. However, aerial images do not improve the results on the held-out test of road-scene synthesis.



Overview of Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty. We first adopt ensemble-based rendering uncertainty to quantify how well the 3D Gaussians learn from the ground images. Next, the ground uncertainty is projected into the aerial views to build the cross-view uncertainty. Finally, the cross-view uncertainty is introduced into the training of the 3D Gaussians as a per-pixel weight on the aerial images in the loss function, alongside the original 3D-GS rendering loss on the ground images.
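
As a concrete illustration of the first two steps, here is a minimal PyTorch sketch under our own assumptions: several independently trained 3D-GS models render the same ground view, the per-pixel variance across those renderings serves as the ensemble-based uncertainty, and the rendered ground-view depth together with known camera parameters splats that uncertainty into an aerial image. Function names, tensor shapes, and the depth-based projection are illustrative assumptions, not the released implementation.

# Sketch of ensemble-based uncertainty and its projection to an aerial view.
# All names and shapes are illustrative; a real pipeline would obtain the
# renderings and depth from a 3D-GS rasterizer.
import torch

def ensemble_uncertainty(renders: torch.Tensor) -> torch.Tensor:
    """Per-pixel uncertainty of one ground view from an ensemble of renderings.

    renders: (M, 3, H, W) renderings of the same ground view from M
             independently trained 3D-GS models.
    returns: (H, W) uncertainty map (RGB-averaged variance across the ensemble).
    """
    return renders.var(dim=0, unbiased=False).mean(dim=0)

def project_to_aerial(uncert, depth, K_g, c2w_g, K_a, w2c_a, aerial_hw):
    """Splat a ground-view uncertainty map into an aerial image plane.

    uncert, depth: (H, W) ground-view uncertainty and rendered depth.
    K_g, c2w_g:    ground intrinsics (3, 3) and camera-to-world pose (4, 4).
    K_a, w2c_a:    aerial intrinsics (3, 3) and world-to-camera pose (4, 4).
    aerial_hw:     (Ha, Wa) aerial image size.
    """
    H, W = uncert.shape
    Ha, Wa = aerial_hw
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()

    # Back-project ground pixels to 3D points using the rendered depth.
    cam_pts = (torch.linalg.inv(K_g) @ pix.T) * depth.reshape(1, -1)
    world = c2w_g @ torch.cat([cam_pts, torch.ones(1, cam_pts.shape[1])], dim=0)

    # Re-project the 3D points into the aerial camera.
    aerial_cam = (w2c_a @ world)[:3]
    uv = K_a @ aerial_cam
    uv = uv[:2] / uv[2:].clamp(min=1e-6)

    # Accumulate uncertainty at the aerial pixels hit by valid projections.
    cross = torch.zeros(Ha, Wa)
    ui, vi = uv[0].round().long(), uv[1].round().long()
    valid = (aerial_cam[2] > 0) & (ui >= 0) & (ui < Wa) & (vi >= 0) & (vi < Ha)
    cross.index_put_((vi[valid], ui[valid]), uncert.reshape(-1)[valid],
                     accumulate=True)
    return cross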

Visualization of the cross-view uncertainty.
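
During training, this cross-view uncertainty map acts as a per-pixel weight on the aerial images. The sketch below shows one plausible form of the combined objective, assuming the map has already been normalized into weights; the D-SSIM term of the standard 3D-GS photometric loss is omitted for brevity, and the normalization is our assumption rather than the paper's exact formulation.

# Sketch of the uncertainty-weighted objective: the usual 3D-GS loss is kept
# for ground images, while each aerial pixel is weighted by the cross-view
# uncertainty so aerial supervision concentrates on poorly learned regions.
import torch
import torch.nn.functional as F

def ground_loss(pred, gt, lam=0.2):
    """Standard 3D-GS photometric loss on a ground image (D-SSIM term omitted;
    in 3D-GS the full loss is (1 - lam) * L1 + lam * D-SSIM with lam = 0.2)."""
    return (1 - lam) * F.l1_loss(pred, gt)

def aerial_loss(pred, gt, weight):
    """Cross-view-uncertainty-weighted L1 loss on an aerial image.

    pred, gt: (3, H, W) rendered and ground-truth aerial images.
    weight:   (H, W) weights derived from the cross-view uncertainty; larger
              where the ground views were learned poorly.
    """
    per_pixel = (pred - gt).abs().mean(dim=0)  # (H, W) per-pixel L1
    return (weight * per_pixel).sum() / weight.sum().clamp(min=1e-6)

A training step would then sum ground_loss over the sampled ground views and aerial_loss over the sampled aerial views before backpropagating through the Gaussian rasterizer.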

Results


+0.1m denotes raising the camera by 0.1 meter; 5°d denotes tilting it down by 5 degrees; A* denotes HD aerial images. (G) and (A+G) denote training with ground images only or with both aerial and ground images, respectively.


Comparison on NYC Dataset



Comparison on SF Dataset

BibTeX

@article{zhang2024drone,
  title={Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty},
  author={Zhang, Saining and Ye, Baijun and Chen, Xiaoxue and Chen, Yuantao and Zhang, Zongzheng and Peng, Cheng and Shi, Yongliang and Zhao, Hao},
  journal={arXiv preprint arXiv:2408.15242},
  year={2024}
}