The Semantic Photogrammetry project aims to boost the photogrammetric pipeline by integrating semantic information into the processing steps. Labelled images used as input to the photogrammetric pipeline have enormous potential to improve 3D reconstruction results. We experiment with semantic information at various steps, from feature matching to dense 3D reconstruction.
In particular, for multi-view stereo (MVS), we incorporate plane priors derived from semantic labels into the 3D reconstruction process. Class-specific shape priors are used within the depth computation to support the reconstruction of problematic areas. Class-specific reconstructions and semantically enhanced 3D point clouds are also generated.
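To illustrate how a plane prior can drive depth computation, the sketch below converts a plane hypothesis (e.g. fitted to sparse points of a planar semantic class such as "wall") into a per-pixel prior depth. The function name, the plane parameterisation n·X + d = 0 in camera coordinates, and the pinhole model are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

def plane_prior_depth(u, v, K, plane_n, plane_d):
    """Depth at pixel (u, v) implied by a plane n . X + d = 0 in camera
    coordinates (hypothetical helper, pinhole camera with intrinsics K)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected viewing ray
    denom = plane_n @ ray
    if abs(denom) < 1e-9:       # ray (nearly) parallel to the plane: no prior
        return None
    depth = -plane_d / denom    # X = depth * ray satisfies the plane equation
    return depth if depth > 0 else None  # reject intersections behind the camera
```

Such a prior depth can then seed or regularise the per-pixel depth hypotheses in the dense matcher for pixels carrying the corresponding class label.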
Standard dense matching algorithms often fail to correctly reconstruct depth in areas of low texture, as their similarity measures are not robust enough to handle depth inconsistencies and matching ambiguities. Such textureless areas, lacking reliable data for depth estimation, are common in urban scenes with smooth, homogeneous building facades and on indoor surfaces. To overcome this barrier, higher-level scene-understanding constraints have to be introduced to promote the propagation of correct depth estimates between adjacent pixels. The idea of this framework is based on the fact that semantics can successfully indicate textureless areas via the class label of the scene (e.g. "wall"), where depth miscalculations frequently occur.
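One simple way to realise this idea is to blend the photometric matching cost with a prior-agreement penalty only on pixels whose semantic label marks a typically textureless class. The label id, weight `lam`, and function below are hypothetical, a minimal sketch of the semantic weighting rather than the project's actual cost function.

```python
import numpy as np

WALL = 1  # hypothetical label id for a textureless class such as "wall"

def semantic_cost(photo_cost, depth, prior_depth, labels, lam=0.5):
    """Blend per-pixel photometric cost with a plane-prior penalty.
    On 'wall' pixels the prior term (|depth - prior_depth|) is weighted in,
    discouraging depth estimates that stray from the class-specific plane;
    elsewhere the photometric cost is used unchanged (illustrative sketch)."""
    prior_penalty = np.abs(depth - prior_depth)
    w = np.where(labels == WALL, lam, 0.0)  # prior active only on labelled pixels
    return (1.0 - w) * photo_cost + w * prior_penalty
```

In a PatchMatch-style matcher, a cost of this form would be evaluated per depth hypothesis, so hypotheses consistent with the semantic plane prior win in ambiguous, low-texture regions.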