Dynamic Control Information for the Relative Positioning of Nodes in a Sensor Network

Team:  M. Coenen, F. Rottensteiner
Year:  2016
Funding:  Deutsche Forschungsgemeinschaft (DFG)
Duration:  01.12.2016 – 01.12.2019
Completed:  yes

The highly dynamic nature of street environments is one of the biggest challenges for autonomous driving applications. The precise reconstruction of moving objects, especially of other cars, is fundamental to ensure safe navigation and to enable applications such as interactive motion planning and collaborative positioning. Cameras provide a cost-effective way to perceive a vehicle's surroundings. Against this background, this project uses stereo images acquired by camera rigs mounted on moving vehicles as observations, with the goal of detecting other sensor nodes, i.e. other vehicles in this case, and determining their relative poses. This information, termed dynamic control information, can serve as input for collaborative positioning approaches, which require vehicle-to-vehicle observations: communicating the relative poses resulting from this project between vehicles delivers valuable observations for enhancing the positioning collaboratively.

The retrieval of the 3D pose and shape of objects from images is an ill-posed problem, since the projection from 3D to the 2D image leaves many ambiguities about the 3D object. A common approach to object reconstruction is to use a deformable shape model as a shape prior and to align it with the object in the image, e.g. by matching entities of the deformable 3D model, such as surfaces, keypoints, edges, or contours, to their corresponding entities inferred from the image. In this project, we use such a deformable vehicle model and present a method that fully reconstructs vehicles in 3D from street-level stereo image pairs, allowing the derivation of precise 3D pose and shape parameters. Starting from vehicles initially detected with a state-of-the-art detection approach, the contributions of this project are as follows.
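To make the idea of a deformable shape prior concrete, the following minimal Python sketch shows a PCA-style model: a mean vehicle shape deformed by a weighted basis and posed in the scene. The class name, the array layout, and the parameterisation are illustrative assumptions, not the project's actual model.

```python
import numpy as np

class DeformableShapeModel:
    """PCA-style shape prior: mean shape plus weighted deformation basis (illustrative)."""

    def __init__(self, mean_shape, basis):
        self.mean_shape = mean_shape  # (V, 3) mean vertex positions
        self.basis = basis            # (K, V, 3) principal deformation directions

    def shape(self, gamma):
        # Deform the mean shape with the K shape coefficients gamma.
        return self.mean_shape + np.tensordot(gamma, self.basis, axes=1)

    def transform(self, gamma, R, t):
        # Pose the deformed model in the scene: X_world = R @ X_model + t.
        return self.shape(gamma) @ R.T + t
```

Aligning such a model with image observations then amounts to estimating the pose (R, t) and the shape coefficients gamma.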

A novel multi-task convolutional neural network (CNN) is developed that simultaneously detects vehicle keypoints and vehicle wireframe edges (cf. left column of the figure) and also outputs a probability distribution for the vehicle's orientation as well as the vehicle's type (e.g. compact car, estate car, sedan, van, etc.). In the scope of this project, a novel hierarchical class and classifier structure is defined for the orientation estimation, together with a novel loss for the detection of keypoints and wireframes.
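The following PyTorch sketch illustrates the multi-task structure of such a network: a shared backbone feeding two dense branches (keypoint heatmaps, wireframe edge maps) and two global branches (orientation distribution, vehicle type). The backbone, layer sizes, and the numbers of keypoints, edges, orientation bins, and types are placeholder assumptions; the project's actual architecture, hierarchical orientation classifier, and loss are not reproduced here.

```python
import torch
import torch.nn as nn

class MultiTaskVehicleCNN(nn.Module):
    """Shared backbone with two dense and two global output branches (illustrative)."""

    def __init__(self, n_keypoints=36, n_edges=24,
                 n_orientation_bins=8, n_types=6):
        super().__init__()
        # Shared convolutional backbone (placeholder for a real feature extractor).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Dense heads: one heatmap per keypoint, one map per wireframe edge.
        self.keypoint_head = nn.Conv2d(128, n_keypoints, 1)
        self.wireframe_head = nn.Conv2d(128, n_edges, 1)
        # Global heads: logits over orientation bins and over vehicle types.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.orientation_head = nn.Linear(128, n_orientation_bins)
        self.type_head = nn.Linear(128, n_types)

    def forward(self, x):
        f = self.backbone(x)
        g = self.pool(f).flatten(1)
        return {
            "keypoints": self.keypoint_head(f),       # keypoint heatmaps
            "wireframe": self.wireframe_head(f),      # wireframe edge maps
            "orientation": self.orientation_head(g),  # logits over bins
            "type": self.type_head(g),                # logits over types
        }
```

Sharing one backbone across the four tasks lets the dense and global branches regularise each other, which is the usual motivation for a multi-task design.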

For the purpose of vehicle reconstruction, a comprehensive probabilistic model is formulated, based mainly on the outputs of the multi-task CNN but also on 3D and scene information derived from the stereo data. More specifically, the probabilistic model incorporates multiple likelihood functions that simultaneously fit the surface of the deformable 3D model to the stereo-reconstructed 3D points, match model keypoints to the detected keypoints, and align the model wireframe to the wireframe inferred by the CNN. In addition to the observation likelihoods, state prior terms, based on inferred scene knowledge as well as on the probability distributions for orientation and vehicle type derived by the CNN, act as regularizers for the target pose and shape parameters in the probabilistic model.
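As a rough illustration of how such terms combine, the sketch below writes a negative log posterior as a weighted sum of data terms and priors over pose and shape, reusing the DeformableShapeModel sketch from above. The individual terms are deliberately simplified stand-ins (e.g. a nearest-vertex distance instead of a proper point-to-surface likelihood, 3D keypoint residuals instead of image-space matching, no wireframe term), so this is not the project's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def unpack(params):
    # State vector: planar position (tx, tz), heading, and shape coefficients.
    t = np.array([params[0], 0.0, params[1]])
    c, s = np.cos(params[2]), np.sin(params[2])
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # rotation about up-axis
    return R, t, params[3:]

def energy(params, obs, model, w=(1.0, 1.0, 0.1)):
    R, t, gamma = unpack(params)
    verts = model.transform(gamma, R, t)  # DeformableShapeModel from above
    # Surface term: squared distance from each stereo 3D point to its
    # nearest model vertex (crude stand-in for a point-to-surface likelihood).
    d = np.linalg.norm(obs["points3d"][:, None, :] - verts[None, :, :], axis=2)
    e_surf = np.sum(d.min(axis=1) ** 2)
    # Keypoint term: residuals between selected model vertices and
    # triangulated keypoint detections (simplified to 3D space here).
    e_kp = np.sum((verts[obs["kp_ids"]] - obs["keypoints3d"]) ** 2)
    # Priors: penalise deviation from the CNN's orientation mode and
    # large shape deformations (Gaussian shape prior).
    e_prior = (params[2] - obs["cnn_heading"]) ** 2 + np.sum(gamma ** 2)
    return w[0] * e_surf + w[1] * e_kp + w[2] * e_prior

# MAP estimate: minimise the negative log posterior over pose and shape, e.g.
# x0 = np.zeros(3 + K); result = minimize(energy, x0, args=(obs, model))
```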

The developed method is evaluated on publicly available benchmark data and on a dataset of our own, recorded in the context of the i.c.sens mapathons. The latter was created by labelling parts of the data acquired during the mapathons, manually fitting CAD models to the observations, thus generating reference data for the evaluation of this project. The results show that, depending on the level of occlusion, up to 98.9 % of orientations and up to 80.6 % of positions are estimated correctly by our method. The average errors amount to 3.1° for orientation and 33 cm for position. A qualitative result of a fitted vehicle model is shown in the right column of the figure.

Publications

Coenen, M.; Rottensteiner, F.; Heipke, C. (2017): Detection and 3D modelling of vehicles from mobile mapping stereo images. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1, pp. 505-512.

Coenen, M.; Rottensteiner, F.; Heipke, C. (2018): Recovering the 3D pose and shape of vehicles from stereo images. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2, pp. 73-80.