Title: Facial Attribute Guided Deep Cross-Modal Hashing for Face Image Retrieval
Authors: Taherkhani, Fariborz; Talreja, Veeru; Kazemi, Hadi; Nasrabadi, Nasser
Editors: Brömme, Arslan; Busch, Christoph; Dantcheva, Antitza; Rathgeb, Christian; Uhl, Andreas
Date available: 2019-06-17
Date of issue: 2018
ISBN: 978-3-88579-676-4
URI: https://dl.gi.de/handle/20.500.12116/23782
Abstract: Hashing-based image retrieval approaches have attracted much attention due to their fast query speed and low storage cost. In this paper, we propose an Attribute-based Deep Cross Modal Hashing (ADCMH) network, which takes the facial attribute modality as a query to retrieve relevant face images. The ADCMH network can efficiently generate compact binary codes that preserve the similarity between the two modalities (i.e., facial attributes and images) in the Hamming space. ADCMH is an end-to-end deep cross-modal hashing network, which jointly learns similarity-preserving features and also compensates for the quantization error incurred by hashing the continuous representations of the modalities into binary codes. Experimental results on two standard datasets with facial attribute and image modalities indicate that our ADCMH face image retrieval model outperforms most current attribute-guided face image retrieval approaches, which are based on handcrafted features.
Language: en
Keywords: Facial Attributes; Face Image Retrieval; Deep Hashing Network
Type: Text/Conference Paper
ISSN: 1617-5468
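The abstract's core idea can be illustrated with a minimal sketch: continuous network outputs are binarized into Hamming-space codes, and a quantization penalty measures the gap that training is meant to compensate for. This is a generic illustration of binary hashing, not the ADCMH architecture itself; all function names below are hypothetical.

```python
import numpy as np

def hash_codes(features):
    """Binarize continuous feature vectors into +/-1 hash codes by sign thresholding."""
    return np.where(features > 0.0, 1.0, -1.0)

def hamming_distance(codes_a, codes_b):
    """Number of differing bits between two +/-1 code vectors."""
    return int(np.sum(codes_a != codes_b))

def quantization_loss(features):
    """Mean squared gap between continuous outputs and their binary codes.
    Penalizing this term during training pushes outputs toward +/-1,
    reducing the error introduced when codes are finally binarized."""
    return float(np.mean((features - hash_codes(features)) ** 2))
```

For example, the continuous output `[0.9, -0.8, 0.2]` binarizes to `[1, -1, 1]`, and its quantization loss is nonzero until the network learns to emit near-binary values.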