Uncertainty Modelling in Multiview Implicit 3D Reconstruction

Figure: Render of a NeRF (cropped), trained on an image sequence captured with a smartphone camera.
Team: M. Heiken, M. Mehltretter, C. Heipke
Year: 2023

The process of photogrammetric 3D reconstruction traditionally consists of a chain of distinct operations: feature extraction and matching, bundle adjustment, dense matching, and surface reconstruction, e.g. via meshing. Many works employ deep learning to improve the quality and performance of individual steps in this chain.
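For orientation, the sketch below runs this chain with COLMAP's command-line interface. All paths are placeholders and the option lists are abbreviated, so this is an illustration of the chain rather than a complete recipe.

```python
import subprocess

# One command per stage of the classical chain (paths are placeholders).
steps = [
    ["colmap", "feature_extractor", "--database_path", "db.db",
     "--image_path", "images"],                                  # feature extraction
    ["colmap", "exhaustive_matcher", "--database_path", "db.db"],  # feature matching
    ["colmap", "mapper", "--database_path", "db.db",
     "--image_path", "images", "--output_path", "sparse"],       # bundle adjustment
    ["colmap", "image_undistorter", "--image_path", "images",
     "--input_path", "sparse/0", "--output_path", "dense"],      # prepare dense workspace
    ["colmap", "patch_match_stereo", "--workspace_path", "dense"],  # dense matching
    ["colmap", "stereo_fusion", "--workspace_path", "dense",
     "--output_path", "dense/fused.ply"],                        # fuse depth maps
    ["colmap", "poisson_mesher", "--input_path", "dense/fused.ply",
     "--output_path", "mesh.ply"],                               # surface reconstruction
]
for cmd in steps:
    subprocess.run(cmd, check=True)
```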

The step from a sparse to a dense reconstruction is particularly expensive, however, as depth maps have to be computed for many overlapping image pairs in a many-to-many relationship. Recent work on implicit representations for novel view synthesis (Mildenhall et al. 2020) circumvents this step by relating each input image directly to a representation in object space, effectively producing a dense object reconstruction as a by-product.
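A minimal NumPy sketch of the underlying volume-rendering quadrature illustrates this direct link between pixels and object space: each pixel's ray is sampled in object space, a field (here a hand-written toy stand-in for the trained network) returns density and colour per sample, and alpha compositing yields the pixel colour. Names such as `render_ray` and `toy_field` are illustrative, not from any particular codebase.

```python
import numpy as np

def render_ray(field, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Composite a pixel colour along one ray using the NeRF quadrature
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction        # sample points in object space
    sigma, rgb = field(pts)                      # density and colour per sample
    delta = (far - near) / (n_samples - 1)       # uniform sample spacing
    alpha = 1.0 - np.exp(-sigma * delta)         # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance T_i
    weights = trans * alpha                      # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)

def toy_field(pts):
    """Toy stand-in for a trained radiance field: a solid red unit sphere."""
    inside = np.linalg.norm(pts, axis=-1) < 1.0
    sigma = np.where(inside, 10.0, 0.0)
    rgb = np.tile([1.0, 0.0, 0.0], (len(pts), 1))
    return sigma, rgb

# A ray through the sphere composites to (almost) pure red.
print(render_ray(toy_field, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```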

Learning such an implicit representation from a set of input images has been shown to be very fast (Müller et al. 2022). A joint treatment of neural implicit surfaces and radiance fields has been investigated in UNISURF (Oechsle et al. 2021).
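In UNISURF's formulation, the density-based opacity of the quadrature above is replaced by a learned occupancy o(x) in [0, 1], which ties the rendered radiance field to an implicit surface (its 0.5 level set). A sketch of the resulting compositing rule, with `occupancy` and `colour` standing in for the trained networks:

```python
import numpy as np

def render_ray_unisurf(occupancy, colour, origin, direction,
                       near=0.1, far=4.0, n=64):
    """UNISURF-style compositing with weights
    w_i = o(x_i) * prod_{j<i} (1 - o(x_j)),
    so a hard surface (occupancy jumping from 0 to 1) puts all weight on the
    first occupied sample along the ray."""
    t = np.linspace(near, far, n)
    pts = origin + t[:, None] * direction
    o = occupancy(pts)                                          # (n,) in [0, 1]
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - o[:-1])))   # prob. of reaching x_i
    w = trans * o
    return (w[:, None] * colour(pts)).sum(axis=0)
```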

The objective of this project is to develop a strategy for quantifying the uncertainty of the learned implicit object representation in object space, so that these new techniques can qualify as an alternative to the traditional photogrammetric process in geodetic and industrial applications.
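As a point of departure, one generic way to obtain such object-space uncertainty, explicitly an assumption here and not the strategy developed in the project, is an ensemble of independently trained fields whose disagreement at a 3D point serves as an uncertainty proxy:

```python
import numpy as np

def density_uncertainty(fields, pts):
    """Mean and spread of the predicted density across ensemble members,
    evaluated at arbitrary points in object space."""
    sigmas = np.stack([f(pts) for f in fields])  # (K, n_points)
    return sigmas.mean(axis=0), sigmas.std(axis=0)

# Toy ensemble: spheres whose radius varies slightly across members,
# mimicking the disagreement of independently trained networks.
rng = np.random.default_rng(0)
fields = [
    (lambda pts, r=1.0 + 0.05 * rng.standard_normal():
        np.where(np.linalg.norm(pts, axis=-1) < r, 10.0, 0.0))
    for _ in range(8)
]
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.98], [0.0, 0.0, 2.0]])
mean, std = density_uncertainty(fields, pts)
print(std)  # largest near the surface, where the members disagree
```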