Invariant Descriptor Learning for Feature-Based Image Matching

Team:  L. Chen, F. Rottensteiner, C. Heipke
Year:  2013
Funding:  China Scholarship Council (CSC); NVIDIA GPU Grant Program
Duration:  since 2013
Is Finished:  yes

Motivation

Feature-based image matching is typically composed of three steps: feature detection, feature description, and descriptor matching. The performance of image matching depends chiefly on the degree of invariance against geometric and photometric transformations that the feature descriptor can achieve. Classical feature descriptors, such as SIFT and SURF, are designed manually and cannot cope with severe geometric distortions. In this project, we construct a machine learning model that learns to design descriptors by feeding the network matched and unmatched image features, so that better invariance against such distortions can be achieved.
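The matching step mentioned above is commonly implemented as a nearest-neighbour search between descriptor sets, often combined with Lowe's ratio test (as popularized with SIFT) to reject ambiguous correspondences. A minimal pure-Python sketch; the toy descriptor values and the 0.8 threshold are illustrative, not part of this project's method:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only if the nearest/second-nearest distance ratio is
    below `ratio` (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((euclidean(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors: a[0] has one clear nearest neighbour (b[1]),
# while a[1] lies between two near-identical candidates and is rejected.
a = [[0.0, 1.0], [5.0, 5.0]]
b = [[4.9, 5.1], [0.1, 1.0], [5.1, 4.9]]
print(ratio_test_match(a, b))  # → [(0, 1)]
```

The ratio test discards exactly the ambiguous case: the second feature in `a` is almost equally close to two candidates, so no match is reported for it.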

Goal

The goal of this research is to develop a more invariant descriptor by means of machine learning and to apply it in applications that classical descriptors cannot handle.

Current method

Siamese-CNN-based descriptor learning
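A Siamese network passes matched and unmatched patch pairs through two weight-sharing branches and trains a pairwise loss on the resulting descriptors. The sketch below illustrates one common choice for such a loss, the contrastive (hinge) loss; the single linear map standing in for the CNN branches, the margin value, and the toy inputs are assumptions for illustration, not the network trained in this project:

```python
import math

def embed(patch, weights):
    """Stand-in for a CNN branch: one shared linear map. Sharing the same
    weights across both inputs is what makes the network 'Siamese'."""
    return [sum(w * x for w, x in zip(row, patch)) for row in weights]

def contrastive_loss(desc1, desc2, label, margin=1.0):
    """label = 1 for a matched pair, 0 for an unmatched pair.
    Matched pairs are pulled together (squared distance); unmatched
    pairs are pushed apart until they exceed the margin (hinge)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(desc1, desc2)))
    if label == 1:
        return d ** 2
    return max(0.0, margin - d) ** 2

W = [[1.0, 0.0], [0.0, 1.0]]                 # toy shared weights
p, q, r = [0.2, 0.4], [0.25, 0.38], [0.9, 0.1]
loss_match = contrastive_loss(embed(p, W), embed(q, W), label=1)
loss_nonmatch = contrastive_loss(embed(p, W), embed(r, W), label=0)
print(loss_match, loss_nonmatch)
```

Minimizing this loss over many matched/unmatched pairs drives the shared branches toward descriptors that are close for corresponding features and far apart otherwise, which is the invariance property sought here.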

Dataset


Publications

Chen, L.; Rottensteiner, F.; Heipke, C. (2014): Learning image descriptors for matching based on Haar features. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3, ISPRS Technical Commission III Symposium, Zurich, pp. 61-66.

Chen, L.; Rottensteiner, F.; Heipke, C. (2015): Feature descriptor by convolution and pooling autoencoders. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W2, pp. 31-38.

Chen, L.; Rottensteiner, F.; Heipke, C. (2016): Invariant Descriptor Learning Using a Siamese Convolutional Neural Network. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences III-3, pp. 11-18.