3DOMcity Benchmark

Overview

We are pleased to present the 3DOMcity photogrammetric contest, a novel multi-purpose benchmark for assessing the performance of the entire image-based pipeline for 3D urban reconstruction and 3D data classification. The innovative aspects introduced by 3DOMcity are threefold:

1. MODULAR & MULTI-PURPOSE CONCEPT

it involves multiple tasks covering the entire 3D reconstruction pipeline, which can be performed either independently of each other or grouped together.

2. METROLOGICAL CONTEXT 

the performance assessment is carried out within a controlled laboratory environment, which offers a privileged context for traceable measurements.

3. VERY HIGH SPATIAL RESOLUTION

the publicly available datasets feature a very fine GSD (imagery) and a high point density (point clouds), enabling the assessment of how well algorithms can deliver a detailed 3D reconstruction.

The benchmark includes a set of 420 nadir and oblique aerial images (6016 x 4016 pixels) acquired in a controlled environment over an ad-hoc 3D artefact (ca. 80 x 80 cm in plan, ca. 20 cm in height) that simulates a typical urban scenario. Each camera station consists of six images: two nadir-looking views (one in portrait mode and one in landscape mode) and four oblique-looking views (forward, backward, left and right), acquired with the camera principal axis tilted by 45°. The image overlap is 80/65% (along/across-track) for the nadir images and 85/70% for the oblique images. At a mean acquisition height of 1.03 m, the ground sampling distance (GSD) is 0.12 mm in the nadir images and varies from 0.13 mm to 0.27 mm in the oblique views.
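
For intuition, the nadir GSD follows from the usual pinhole relation GSD = pixel pitch x distance / focal length. The Python sketch below reproduces the reported nadir value; the pixel pitch and focal length used here are assumed, illustrative numbers, not the actual specifications of the benchmark camera:

    # Nadir GSD from the pinhole relation; the sensor parameters below
    # are assumed for illustration, not the actual benchmark camera specs.
    def gsd_mm(distance_m, focal_mm, pixel_pitch_mm):
        # Object-space footprint of one pixel = pixel pitch scaled by
        # the distance-to-focal-length ratio.
        return pixel_pitch_mm * (distance_m * 1000.0) / focal_mm

    # With an assumed 6 µm pixel pitch and 50 mm lens at 1.03 m height,
    # the nadir GSD comes out at ~0.12 mm, matching the figure above.
    print(f"nadir GSD: {gsd_mm(1.03, 50.0, 0.006):.2f} mm")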

Reference data

The evaluation tests are performed using the following reference data: point clouds acquired with a triangulation-based laser scanner, a set of circular/coded targets, two calibrated scale bars, two printed scale bars, one calliper and one rigid measuring tape. 

Task 1: Image Orientation

The photogrammetric network adjustment is the first task of the contest. Starting from the set of images and reference measurements, participants are asked to perform the block triangulation and submit the adjusted orientation parameters. The evaluation metrics are precision in image space, accuracy in object space and relative accuracy of the exterior orientation parameters (EOP).
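
As an illustration of how object-space accuracy is typically quantified, the sketch below computes per-axis and 3D RMSE of triangulated check points against their reference coordinates. The file names and layout are hypothetical; the official procedure is the one described in the benchmark documentation:

    import numpy as np

    # Hypothetical (n, 3) arrays: check-point coordinates triangulated
    # by the participant vs. their reference values (assumed file layout).
    estimated = np.loadtxt("checkpoints_estimated.txt")
    reference = np.loadtxt("checkpoints_reference.txt")

    residuals = estimated - reference                       # per-point XYZ errors
    rmse_xyz = np.sqrt((residuals ** 2).mean(axis=0))       # RMSE per axis
    rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())  # full 3D RMSE

    print("RMSE X/Y/Z:", rmse_xyz)
    print("RMSE 3D:", rmse_3d)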

Task 2: Dense Image Matching (DIM)

The generation of a 3D point cloud is the second task of the contest. It can be performed either independently of Task 1, by downloading the undistorted images and reference EOP, or in combination with Task 1, by first assessing the accuracy of the image orientations and then using them to compute and evaluate the subsequent DIM result. Evaluation metrics will include accuracy (how close the submitted result is to the reference one) and completeness (how much of the reference result is reconstructed in the submitted one).
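
A common way to realise these two metrics is via symmetric nearest-neighbour distances between the submitted and reference clouds. Below is a minimal Python sketch, assuming plain XYZ text files and an illustrative distance threshold; the official threshold and file formats are those defined in the benchmark documentation:

    import numpy as np
    from scipy.spatial import cKDTree

    # Accuracy/completeness sketch for cloud-to-cloud comparison; file
    # names and threshold are assumptions, not the official protocol.
    submitted = np.loadtxt("submitted_cloud.xyz")[:, :3]
    reference = np.loadtxt("reference_cloud.xyz")[:, :3]
    tau = 0.5  # distance threshold in mm (illustrative)

    # Accuracy: how close submitted points lie to the reference cloud.
    d_sub_to_ref, _ = cKDTree(reference).query(submitted)
    accuracy = np.mean(d_sub_to_ref <= tau)

    # Completeness: how much of the reference is covered by the submission.
    d_ref_to_sub, _ = cKDTree(submitted).query(reference)
    completeness = np.mean(d_ref_to_sub <= tau)

    print(f"accuracy: {accuracy:.3f}, completeness: {completeness:.3f}")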

Task 3: Point Cloud Classification

The classification of the 3D point cloud (ca. 28 million points, ca. 0.16 mm average distance between points, i.e. ca. 1700 points/m²) is the third task of the 3DOMcity photogrammetric contest. A reference 3D point cloud, a training set and an evaluation set are provided as input data. Participants are asked to classify the point cloud into the following classes of interest: ground, grass, shrub, tree, facade, and roof. The classification results will be analysed for accuracy assessment by computing: confusion matrix, precision, recall, F1 score, true negative rate and balanced accuracy.
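
All of these metrics derive from the per-class confusion matrix. The following sketch computes them with scikit-learn and NumPy, assuming hypothetical per-point label files; the official evaluation follows the procedure in the benchmark documentation:

    import numpy as np
    from sklearn.metrics import (confusion_matrix,
                                 precision_recall_fscore_support,
                                 balanced_accuracy_score)

    labels = ["ground", "grass", "shrub", "tree", "facade", "roof"]
    # Hypothetical one-label-per-line files with ground truth and predictions.
    y_true = np.loadtxt("eval_labels.txt", dtype=str)
    y_pred = np.loadtxt("predicted_labels.txt", dtype=str)

    cm = confusion_matrix(y_true, y_pred, labels=labels)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=labels, zero_division=0)

    # True negative rate per class, one-vs-rest, from the confusion matrix.
    fp = cm.sum(axis=0) - np.diag(cm)
    fn = cm.sum(axis=1) - np.diag(cm)
    tn = cm.sum() - (fp + fn + np.diag(cm))
    tnr = tn / (tn + fp)

    print(cm)
    for name, p, r, f, t in zip(labels, precision, recall, f1, tnr):
        print(f"{name:7s} P={p:.3f} R={r:.3f} F1={f:.3f} TNR={t:.3f}")
    print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))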

Important information 

Further details on the benchmark datasets, tasks and evaluation procedures can be downloaded here (last updated on Oct 9).

To access the benchmark data, please fill in the form available here and send it to 3dom@fbk.eu.

Citation

When using the 3DOMcity dataset in your research and publications, please cite:

E. Özdemir, I. Toschi, and F. Remondino, 2019: A multi-purpose benchmark for photogrammetric urban 3D reconstruction in a controlled environment. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., Vol. XLII-1/W2, pp. 53–60.