Browsing by Author "Taherkhani, Fariborz"
1 - 2 of 2
- Conference paper: Facial Attribute Guided Deep Cross-Modal Hashing for Face Image Retrieval (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Taherkhani, Fariborz; Talreja, Veeru; Kazemi, Hadi; Nasrabadi, Nasser. Hashing-based image retrieval approaches have attracted much attention due to their fast query speed and low storage cost. In this paper, we propose an Attribute-based Deep Cross-Modal Hashing (ADCMH) network, which takes the facial attribute modality as a query to retrieve relevant face images. The ADCMH network efficiently generates compact binary codes that preserve the similarity between the two modalities (i.e., facial attributes and images) in the Hamming space. ADCMH is an end-to-end deep cross-modal hashing network that jointly learns similarity-preserving features and compensates for the quantization error incurred when hashing the continuous representations of the modalities into binary codes. Experimental results on two standard datasets with facial attribute-image modalities indicate that our ADCMH face image retrieval model outperforms most current attribute-guided face image retrieval approaches, which are based on hand-crafted features. (A minimal code sketch of this hashing objective follows after this list.)
- Conference paper: Unsupervised Facial Geometry Learning for Sketch to Photo Synthesis (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Kazemi, Hadi; Taherkhani, Fariborz; Nasrabadi, Nasser M. Face sketch-photo synthesis is a critical application in law enforcement and the digital entertainment industry, where the goal is to learn the mapping between a face sketch and its corresponding photo-realistic image. However, the limited amount of paired sketch-photo training data usually prevents current frameworks from learning a robust mapping between the geometry of sketches and that of their matching photo-realistic images. Consequently, in this work, we present an approach for learning to synthesize a photo-realistic image from a face sketch in an unsupervised fashion. In contrast to current unsupervised image-to-image translation techniques, our framework leverages a novel perceptual discriminator to learn the geometry of the human face. Learning this facial prior information empowers the network to remove geometrical artifacts in the face sketch. We demonstrate that simultaneously optimizing the face photo generator network with the proposed perceptual discriminator, in combination with a texture-wise discriminator, results in a significant improvement in the quality and recognition rate of the synthesized photos. We evaluate the proposed network by conducting extensive experiments on multiple baseline sketch-photo datasets. (A minimal sketch of the two-discriminator setup follows after this list.)
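The first entry describes a cross-modal hashing objective: two modality encoders produce continuous codes whose inner products should agree with pairwise similarity, plus a quantization penalty so that binarizing the codes with sign() loses little information. The following is a minimal PyTorch sketch of that idea only; the layer sizes, the 48-bit code length, and the loss weighting are illustrative assumptions, not the authors' actual ADCMH architecture.

```python
# Sketch of a deep cross-modal hashing objective in the spirit of ADCMH.
# All module shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

HASH_BITS = 48  # assumed code length

class AttributeEncoder(nn.Module):
    """Maps a binary facial-attribute vector to a continuous hash code."""
    def __init__(self, num_attrs=40, bits=HASH_BITS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_attrs, 512), nn.ReLU(),
            nn.Linear(512, bits), nn.Tanh(),  # tanh keeps codes near {-1, +1}
        )
    def forward(self, a):
        return self.net(a)

class ImageEncoder(nn.Module):
    """Tiny CNN stand-in for the image branch."""
    def __init__(self, bits=HASH_BITS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, bits), nn.Tanh())
    def forward(self, x):
        return self.head(self.features(x))

def cross_modal_hash_loss(f_attr, f_img, sim, quant_weight=0.1):
    """Similarity-preserving pairwise loss plus a quantization penalty.

    sim[i, j] = 1 if attribute vector i and image j belong to the same
    identity, else 0. Inner products of the continuous codes are pushed
    to agree with sim; the quantization term pulls codes toward +/-1 so
    little is lost when they are later binarized with sign().
    """
    logits = 0.5 * (f_attr @ f_img.t())  # scaled cross-modal inner products
    sim_loss = nn.functional.binary_cross_entropy_with_logits(logits, sim)
    quant_loss = ((f_attr - f_attr.sign()) ** 2).mean() + \
                 ((f_img - f_img.sign()) ** 2).mean()
    return sim_loss + quant_weight * quant_loss

# Toy usage: one training step on a random batch.
attr_enc, img_enc = AttributeEncoder(), ImageEncoder()
a = torch.randint(0, 2, (8, 40)).float()  # 8 attribute queries
x = torch.rand(8, 3, 64, 64)              # 8 face images
sim = torch.eye(8)                        # attribute vector i matches image i
loss = cross_modal_hash_loss(attr_enc(a), img_enc(x), sim)
loss.backward()
with torch.no_grad():
    binary_codes = img_enc(x).sign()      # Hamming-space retrieval codes
```

Retrieval then reduces to ranking database images by Hamming distance between their stored binary codes and the code produced for an attribute query, which is what makes the query fast and the storage cheap.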
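The second entry pairs a perceptual discriminator (judging realism in a deep feature space, a proxy for facial geometry) with a texture-wise discriminator scoring local patches. The sketch below shows one common way to realize that pairing; the frozen VGG-16 feature extractor and all layer sizes are stand-in assumptions, not the paper's actual design.

```python
# Sketch of a perceptual discriminator plus a PatchGAN-style texture
# discriminator for a sketch-to-photo generator. VGG-16 features are an
# assumed stand-in for the paper's perceptual feature space.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualDiscriminator(nn.Module):
    """Discriminates in a frozen pretrained feature space rather than pixels."""
    def __init__(self):
        super().__init__()
        # Up to relu3_3; downloads ImageNet weights on first use.
        self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)            # frozen feature extractor
        self.head = nn.Sequential(             # trainable critic on features
            nn.Conv2d(256, 128, 3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )
    def forward(self, img):
        return self.head(self.features(img))

class TextureDiscriminator(nn.Module):
    """PatchGAN-style critic scoring local texture realism per patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),   # one logit per patch
        )
    def forward(self, img):
        return self.net(img)

def generator_adv_loss(fake_photo, d_perc, d_tex, bce=nn.BCEWithLogitsLoss()):
    """Generator tries to fool both critics simultaneously."""
    p = d_perc(fake_photo)
    t = d_tex(fake_photo)
    return bce(p, torch.ones_like(p)) + bce(t, torch.ones_like(t))

# Toy usage: score a stand-in generator output with both critics.
d_perc, d_tex = PerceptualDiscriminator(), TextureDiscriminator()
fake = torch.rand(2, 3, 128, 128)
g_loss = generator_adv_loss(fake, d_perc, d_tex)
```

The design intuition matches the abstract: the frozen-feature critic penalizes deviations in global facial structure that a pixel- or patch-level critic misses, while the patch critic keeps local skin and hair texture realistic.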