Variance-preserving Deep Metric Learning for Content-based Image Retrieval

Publication: Contribution to journal › Journal article › Research › peer review

  • Nikolaos Passalis, Aristotle University of Thessaloniki, Tampere University
  • Alexandros Iosifidis
  • Moncef Gabbouj, Tampere University of Technology
  • Anastasios Tefas, Aristotle University of Thessaloniki

Supervised deep metric learning has led to spectacular results in several Content-based Information Retrieval (CBIR) applications. The success of these approaches gradually fostered the belief that image retrieval and classification are merely slightly different variations of the same problem. However, recent evidence suggests that learning a highly discriminative representation for a (limited) set of training classes removes valuable information from the representation, potentially harming both in-domain and out-of-domain retrieval precision. In this paper, we propose a regularized discriminative deep metric learning method that aims not only to learn a representation that discriminates between different classes, but also to encode the latent generative factors separately for each class, overcoming this limitation. This allows for modeling the in-class variance and, as a result, maintaining the ability to represent both sub-classes of the in-domain data and objects belonging to classes outside the training domain. The effectiveness of the proposed method over existing supervised and unsupervised representation/metric learning approaches is demonstrated under different in-domain and out-of-domain setups on three challenging image datasets.
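The abstract does not give the paper's exact objective, but the core idea (a discriminative pull toward class centers regularized so that in-class variance is not collapsed) can be illustrated with a minimal, hypothetical sketch. The function name, the hinge-style variance term, and the `alpha`/`target_var` parameters below are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def variance_preserving_loss(embeddings, labels, alpha=0.5, target_var=1.0):
    """Illustrative (hypothetical) objective: pull samples toward their
    class centroid (discriminative term) while penalizing collapse of the
    average in-class variance below a target (variance-preserving term)."""
    loss_disc, loss_var = 0.0, 0.0
    classes = np.unique(labels)
    for c in classes:
        X = embeddings[labels == c]              # samples of class c
        center = X.mean(axis=0)
        # discriminative term: mean squared distance to the class center
        loss_disc += np.mean(np.sum((X - center) ** 2, axis=1))
        # variance term: hinge penalty if in-class variance falls too low
        var = np.mean(np.var(X, axis=0))         # avg per-dimension variance
        loss_var += max(0.0, target_var - var)
    n = len(classes)
    return loss_disc / n + alpha * loss_var / n
```

A fully collapsed class (all embeddings identical) incurs zero discriminative loss but the full variance penalty, which is exactly the trade-off the paper argues a purely discriminative objective ignores.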

Journal: Pattern Recognition Letters
Pages (from-to): 8-14
Number of pages: 7
Status: Published - 1 March 2020

