Conference paper

Facial Attribute Guided Deep Cross-Modal Hashing for Face Image Retrieval

Full-text URI

Document type

Text/Conference Paper

Additional information

Date

2018

Journal title

Journal ISSN

Volume title

Publisher

Köllen Druck+Verlag GmbH

Abstract

Hashing-based image retrieval approaches have attracted much attention due to their fast query speed and low storage cost. In this paper, we propose an Attribute-based Deep Cross-Modal Hashing (ADCMH) network that takes the facial attribute modality as a query to retrieve relevant face images. The ADCMH network efficiently generates compact binary codes that preserve the similarity between the two modalities (i.e., the facial attribute and image modalities) in the Hamming space. ADCMH is an end-to-end deep cross-modal hashing network that jointly learns similarity-preserving features and compensates for the quantization error incurred when the continuous representations of the modalities are hashed into binary codes. Experimental results on two standard datasets with facial attribute and image modalities indicate that our ADCMH face image retrieval model outperforms most current attribute-guided face image retrieval approaches, which are based on hand-crafted features.
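
As an illustration of the retrieval mechanism described in the abstract, the following minimal Python/NumPy sketch shows how binary codes from the two modalities could be compared in the Hamming space. The trained encoders are replaced here by random stand-in embeddings, and the 48-bit code length and helper names (to_binary_codes, hamming_distances) are illustrative assumptions, not part of the published ADCMH architecture.

import numpy as np

def to_binary_codes(embeddings):
    # Quantize continuous embeddings to {0, 1} codes with a zero threshold,
    # mimicking the final binarization step of a hashing network.
    return (embeddings > 0).astype(np.uint8)

def hamming_distances(query_code, gallery_codes):
    # Count differing bits between the single query code and each gallery code.
    return np.count_nonzero(gallery_codes != query_code, axis=1)

rng = np.random.default_rng(0)
code_bits = 48                                             # illustrative code length
attribute_embedding = rng.standard_normal(code_bits)       # stand-in for the attribute-branch output
image_embeddings = rng.standard_normal((1000, code_bits))  # stand-in for the image-branch outputs

query_code = to_binary_codes(attribute_embedding)
gallery_codes = to_binary_codes(image_embeddings)

# Rank gallery images by Hamming distance; the smallest distances are retrieved first.
ranking = np.argsort(hamming_distances(query_code, gallery_codes))
print("Top-5 retrieved gallery indices:", ranking[:5])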

Description

Taherkhani, Fariborz; Talreja, Veeru; Kazemi, Hadi; Nasrabadi, Nasser (2018): Facial Attribute Guided Deep Cross-Modal Hashing for Face Image Retrieval. BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group. Bonn: Köllen Druck+Verlag GmbH. PISSN: 1617-5468. ISBN: 978-3-88579-676-4. Darmstadt, 26-28 September 2018.

Citation

DOI

Tags