Browsing by Author "Nasrabadi, Nasser"
- Conference paper: Deep Sparse Feature Selection and Fusion for Textured Contact Lens Detection (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Poster, Domenick; Nasrabadi, Nasser; Riggan, Benjamin.
  Distinguishing between images of irises wearing textured lenses and those wearing transparent lenses or no lenses is a challenging problem due to the subtle, fine-grained visual differences. Our approach builds upon existing hand-crafted image features and neural network architectures by optimally selecting and combining the most useful set of features into a single model. We build multiple parallel sub-networks corresponding to the various feature descriptors and learn the best subset of features through group sparsity. We avoid overfitting such a wide and deep model through a selective transfer learning technique and a novel group Dropout regularization strategy. This model achieves roughly a fourfold improvement over the state of the art on three benchmark textured lens datasets and matches the near-perfect state-of-the-art accuracy on two others. Furthermore, the generic nature of the architecture allows it to be extended to other image features, forms of spoofing attacks, or problem domains. (A minimal sketch of the group-sparse fusion idea follows this listing.)
- Conference paper: Facial Attribute Guided Deep Cross-Modal Hashing for Face Image Retrieval (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Taherkhani, Fariborz; Talreja, Veeru; Kazemi, Hadi; Nasrabadi, Nasser.
  Hashing-based image retrieval approaches have attracted much attention due to their fast query speed and low storage cost. In this paper, we propose an Attribute-based Deep Cross Modal Hashing (ADCMH) network which takes the facial attribute modality as a query to retrieve relevant face images. The ADCMH network can efficiently generate compact binary codes that preserve the similarity between the two modalities (i.e., facial attributes and images) in the Hamming space. ADCMH is an end-to-end deep cross-modal hashing network that jointly learns similarity-preserving features and compensates for the quantization error incurred by hashing the continuous representations of the modalities into binary codes. Experimental results on two standard datasets with facial attribute-image modalities indicate that our ADCMH face image retrieval model outperforms most current attribute-guided face image retrieval approaches, which are based on hand-crafted features. (A sketch of a generic cross-modal hashing objective follows this listing.)
- Conference paper: Identical Twins as a Facial Similarity Benchmark for Human Facial Recognition (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021). McCauley, John; Soleymani, Sobhan; Williams, Brady; Nasrabadi, Nasser; Dawson, Jeremy.
  The problem of distinguishing identical twins and non-twin look-alikes in automated facial recognition (FR) applications has become increasingly important with the widespread adoption of facial biometrics. This work presents an application of one of the largest twin datasets compiled to date to address two FR challenges: 1) determining a baseline measure of facial similarity between identical twins, and 2) applying this similarity measure to determine the impact of doppelgangers, or look-alikes, on FR performance for large face datasets. The facial similarity measure is determined via a deep Siamese convolutional neural network. The proposed network provides a quantitative similarity score for any two given faces and has been applied to large-scale face datasets to identify similar face pairs. (A sketch of a Siamese similarity scorer follows this listing.)
- Conference paper: Interoperability of Contact and Contactless Fingerprints Across Multiple Fingerprint Sensors (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021). Williams, Brady; McCauley, John; Dando, John; Nasrabadi, Nasser; Dawson, Jeremy.
  Contactless fingerprinting devices have grown in popularity in recent years due to the speed and convenience of capture. Moreover, because of the global COVID-19 pandemic, the need for safe and hygienic options for fingerprint capture is more pressing than ever. However, contactless systems face challenges in interoperability and matching performance, as shown in other works. In this paper, we present a contactless-versus-contact interoperability assessment of several contactless devices, including cellphone fingerphoto capture. In addition to evaluating the match performance of each contactless sensor, this paper presents an analysis of the impact of finger size and skin melanin content on contactless match performance. AUC results indicate that the contactless match performance of the newest contactless devices is approaching that of contact fingerprints. In addition, match scores indicate that, while not as sensitive to melanin content, contactless fingerprint matching may be affected by finger size. (A sketch of the score-based AUC evaluation follows this listing.)
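The first entry above (textured contact lens detection) describes parallel feature sub-networks fused under a group-sparsity penalty, with a group Dropout strategy to curb overfitting. The following is a minimal sketch of that general idea, assuming PyTorch; the branch dimensions, the L2,1-style penalty weight, and the group-drop probability are illustrative assumptions, not values from the paper.

```python
# Sketch of group-sparse feature fusion with group Dropout (PyTorch).
# Feature dimensions, hidden size, and regularization weight are illustrative.
import torch
import torch.nn as nn

class GroupSparseFusion(nn.Module):
    def __init__(self, feature_dims, hidden_dim=64, num_classes=2, p_group_drop=0.3):
        super().__init__()
        # One small sub-network per feature descriptor (hand-crafted or CNN-based).
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in feature_dims]
        )
        self.classifier = nn.Linear(hidden_dim * len(feature_dims), num_classes)
        self.p_group_drop = p_group_drop
        self.hidden_dim = hidden_dim

    def forward(self, features):
        outs = []
        for branch, x in zip(self.branches, features):
            h = branch(x)
            # Group Dropout: randomly silence an entire branch during training.
            if self.training and torch.rand(1).item() < self.p_group_drop:
                h = torch.zeros_like(h)
            outs.append(h)
        return self.classifier(torch.cat(outs, dim=1))

    def group_sparsity_penalty(self):
        # L2,1-style penalty: sum of per-branch L2 norms of the classifier weights,
        # encouraging entire feature groups to be switched off.
        groups = self.classifier.weight.split(self.hidden_dim, dim=1)
        return sum(g.norm(p=2) for g in groups)

# Usage with two hypothetical descriptors (e.g., a 256-d texture histogram and a 512-d CNN embedding).
model = GroupSparseFusion(feature_dims=[256, 512])
feats = [torch.randn(8, 256), torch.randn(8, 512)]
logits = model(feats)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,))) \
       + 1e-3 * model.group_sparsity_penalty()
loss.backward()
```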
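The ADCMH entry describes jointly learning similarity-preserving codes for two modalities while compensating for quantization error. Below is a hedged sketch of a generic cross-modal hashing objective in PyTorch; the network shapes, code length, and loss weighting are assumptions and do not reproduce the paper's architecture.

```python
# Sketch of a cross-modal hashing objective: a similarity-preserving term plus a
# quantization term pushing continuous codes toward binary {-1, +1}.
# Feature sizes (512-d image embeddings, 40 attributes) and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

code_len = 48
img_net = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, code_len), nn.Tanh())
attr_net = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, code_len), nn.Tanh())

def hashing_loss(img_feat, attr_feat, sim, alpha=0.5):
    """sim[i, j] = 1 if image i and attribute vector j belong together, else 0."""
    u = img_net(img_feat)    # continuous image codes in (-1, 1)
    v = attr_net(attr_feat)  # continuous attribute codes in (-1, 1)
    # Similarity preservation: inner products should track the pairwise labels.
    theta = u @ v.t() / 2
    likelihood = (F.softplus(theta) - sim * theta).mean()
    # Quantization error: distance of continuous codes from their binarized versions.
    quant = ((u - u.sign()) ** 2).mean() + ((v - v.sign()) ** 2).mean()
    return likelihood + alpha * quant

# Usage with random stand-in features; matched pairs sit on the diagonal.
img_feat = torch.randn(16, 512)
attr_feat = torch.randint(0, 2, (16, 40)).float()
loss = hashing_loss(img_feat, attr_feat, torch.eye(16))
loss.backward()
# At retrieval time, sign(u) and sign(v) give binary codes compared in Hamming space.
```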
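The identical-twins entry computes a facial similarity score with a deep Siamese convolutional network. The sketch below shows the general pattern, assuming PyTorch, with a tiny stand-in backbone and a cosine-similarity head; the paper's actual backbone, training data, and loss are not reproduced here.

```python
# Sketch of a Siamese similarity scorer: one shared embedding network applied to
# both faces, followed by cosine similarity. The backbone is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseScorer(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        # Tiny stand-in backbone; a real system would use a deep face CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, face_a, face_b):
        # Both faces pass through the same weights (the "Siamese" part).
        ea = F.normalize(self.backbone(face_a), dim=1)
        eb = F.normalize(self.backbone(face_b), dim=1)
        # Cosine similarity in [-1, 1]; higher means the faces look more alike.
        return (ea * eb).sum(dim=1)

# Usage: score a batch of face pairs (e.g., twin vs. non-twin pairs).
model = SiameseScorer()
scores = model(torch.randn(4, 3, 112, 112), torch.randn(4, 3, 112, 112))
print(scores.shape)  # torch.Size([4])
```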
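The interoperability entry reports AUC from contact-versus-contactless match scores, broken down by covariates such as finger size. Below is a small illustrative sketch, assuming NumPy and scikit-learn, with synthetic scores standing in for real matcher output; it is not the paper's evaluation pipeline.

```python
# Sketch of an AUC-based interoperability evaluation: genuine vs. impostor match
# scores, overall and per covariate group. All data here are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)    # same-finger contact vs. contactless scores
impostor = rng.normal(0.3, 0.1, 5000)  # different-finger scores

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
print("overall AUC:", roc_auc_score(labels, scores))

# Break down by a hypothetical covariate (e.g., small vs. large finger area).
finger_size = rng.choice(["small", "large"], size=scores.size)
for group in ("small", "large"):
    mask = finger_size == group
    print(group, "AUC:", roc_auc_score(labels[mask], scores[mask]))
```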