Invariant Descriptor learning for feature-based image matching
Funded by: China Scholarship Council (CSC); NVIDIA GPU Grant Program
Feature-based image matching typically comprises three steps: feature detection, feature description and descriptor matching. The matching performance depends centrally on how invariant the feature descriptor is against geometric and photometric transformations. Classical feature descriptors, such as SIFT and SURF, are designed manually and cannot cope with severe geometric distortions. In this project, we construct a machine learning model that learns descriptors from matched and unmatched image features fed to the network, thereby achieving better invariance against such distortions.
The goal of this research is to learn a more invariant descriptor and to apply it in applications that classical descriptors cannot handle.
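To illustrate the descriptor matching step mentioned above, here is a minimal sketch of nearest-neighbour matching with Lowe's ratio test, the standard acceptance criterion used with classical descriptors such as SIFT. The function name, array layout and ratio threshold are illustrative assumptions, not part of the project.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    desc_a : (N, D) array of descriptors from the first image.
    desc_b : (M, D) array of descriptors from the second image (M >= 2).
    Returns a list of (i, j) index pairs that pass the ratio test.
    """
    # Pairwise Euclidean distances between all descriptor pairs.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        nearest, second = row[order[0]], row[order[1]]
        # Accept only if the best match is clearly better than the runner-up.
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test rejects ambiguous correspondences: a feature is matched only when its nearest neighbour is substantially closer than the second-nearest one.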
Siamese-CNN-based descriptor learning
- Brown UBC dataset: http://phototour.cs.washington.edu/patches/default.htm
- HPatches (Homography patches dataset): https://github.com/featw/hpatches
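A Siamese network is trained on pairs of patches from datasets such as those above, pulling descriptors of matched patches together and pushing non-matched ones apart. A minimal sketch of the contrastive loss commonly used for this, stated here in NumPy for clarity; the function name, margin value and label convention are assumptions for illustration, not the project's exact formulation:

```python
import numpy as np

def contrastive_loss(d1, d2, label, margin=1.0):
    """Contrastive loss for a batch of descriptor pairs.

    d1, d2 : (N, D) descriptor arrays from the two Siamese branches
             (both branches share the same weights).
    label  : (N,) array, 1 for matching pairs, 0 for non-matching pairs.
    """
    dist = np.linalg.norm(d1 - d2, axis=1)
    # Matching pairs: penalise any distance between the descriptors.
    pos = label * dist ** 2
    # Non-matching pairs: penalise only distances below the margin (hinge).
    neg = (1 - label) * np.maximum(margin - dist, 0.0) ** 2
    return float(np.mean(pos + neg))
```

Minimising this loss over matched and unmatched patch pairs is what drives the learned descriptor towards invariance: transformations present in the training pairs no longer separate corresponding patches in descriptor space.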
Chen, L.; Rottensteiner, F.; Heipke, C. (2014): Learning image descriptors for matching based on Haar features. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3, ISPRS Technical Commission III Symposium, Zurich, pp. 61-66.
Chen, L.; Rottensteiner, F.; Heipke, C. (2015): Feature descriptor by convolution and pooling autoencoders. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W2, pp. 31-38.
Chen, L.; Rottensteiner, F.; Heipke, C. (2016): Invariant Descriptor Learning Using a Siamese Convolutional Neural Network. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences III-3, pp. 11-18.