Listing by keyword "gender classification"
1 - 2 of 2
- Conference paper: Gender and Kinship by Model-Based Ear Biometrics (BIOSIG 2019 - Proceedings of the 18th International Conference of the Biometrics Special Interest Group, 2019). Meng, Di; Nixon, Mark S.; Mahmoodi, Sasan. Many studies in biometrics have shown how identity can be determined, including from images of ears. In this paper, we show how to model an ear and how gender often appears to be manifest in ear structure, as does kinship (family relationship). We describe a new model-based approach for viewpoint correction and ear description to enable this analysis. We show that, with the new technique achieving satisfactory basic recognition capability (recognizing individuals with performance similar to the state of the art), gender classification achieves 67.2% and kinship 40.4% rank-1 recognition on ears from subjects with unconstrained pose.
- Conference paper: Transferability Analysis of an Adversarial Attack on Gender Classification to Face Recognition (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021). Rezgui, Zohra; Bassit, Amina. Modern biometric systems base their decisions on the outcome of machine learning (ML) classifiers trained to make accurate predictions. Such classifiers are vulnerable to diverse adversarial attacks that alter the classifiers' predictions by adding a crafted perturbation. According to the ML literature, these attacks are transferable among models that perform the same task. However, models performing different tasks, but sharing the same input space and the same model architecture, have never been included in transferability scenarios. In this paper, we analyze this phenomenon for the special case of VGG16-based biometric classifiers. Concretely, we study the effect of the white-box FGSM attack on a gender classifier and compare several defense methods as countermeasures. Then, in a black-box manner, we attack a pre-trained face recognition classifier using adversarial images generated by the FGSM. Our experiments show that this attack is transferable from a gender classifier to a face recognition classifier, where both were independently trained.
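The FGSM attack referenced in the second abstract perturbs an input in the direction of the sign of the loss gradient with respect to that input. A minimal sketch of the idea follows; as an assumption for illustration, a toy logistic-regression "classifier" stands in for the paper's VGG16-based gender classifier, and all names (`fgsm_perturb`, the weight values) are hypothetical, not from the paper.

```python
import math

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM for a sigmoid classifier with cross-entropy loss.

    For this model the input gradient of the loss is (p - y) * w,
    so the adversarial example is x + eps * sign((p - y) * w).
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy demo with hypothetical weights: for true label y = 1, the
# perturbation should push the predicted probability downward.
w = [0.5, -1.2, 0.8]
b = 0.1
x = [1.0, 0.3, -0.5]
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.25)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

The paper's transferability question is whether `x_adv` crafted against one model (here, the gender classifier) also degrades a different model trained on the same input space, which is what its black-box experiments on face recognition test.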