P315 - BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group
Listing by author: "Bassit, Amina"
- Conference paper: Bloom Filter vs Homomorphic Encryption: Which approach protects the biometric data and satisfies ISO/IEC 24745? (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021). Bassit, Amina; Hahn, Florian; Zeinstra, Chris; Veldhuis, Raymond; Peter, Andreas.
  Abstract: Bloom filters (BF) and homomorphic encryption (HE) are popular modern techniques used to design biometric template protection (BTP) schemes, which aim to protect sensitive biometric information during storage and comparison. In practice, however, many BTP schemes based on BF or HE violate at least one of the privacy requirements of the international standard ISO/IEC 24745: irreversibility, unlinkability, and confidentiality. In this paper, we investigate state-of-the-art BTP schemes based on these two approaches and assess their relative strengths and weaknesses with respect to the three requirements of ISO/IEC 24745. Our investigation shows that the choice between BF and HE depends on the setting in which the BTP scheme will be deployed and on the level of trustworthiness of the parties processing the protected template. In particular, HE enhanced with verifiable computation techniques can satisfy the privacy requirements of ISO/IEC 24745 in a trustless setting.
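As background for the abstract above, the following is a minimal Python sketch of the generic Bloom filter data structure (insertion and probabilistic membership testing). It is illustrative only, not the paper's BTP construction; the parameters m and k and the sample items are assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indexes(self, item):
        # Derive k bit positions by hashing the item with a per-function salt.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def contains(self, item):
        # May return a false positive, but never a false negative.
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("feature_block_0110")            # hypothetical biometric feature block
print(bf.contains("feature_block_0110"))  # True
print(bf.contains("feature_block_1001"))  # False (with high probability)
```

Because set bits directly reflect the enrolled data, comparing two filters built from the same biometrics can reveal similarity, which is why linkability is a central concern in the paper's ISO/IEC 24745 assessment.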
- Conference paper: Transferability Analysis of an Adversarial Attack on Gender Classification to Face Recognition (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021). Rezgui, Zohra; Bassit, Amina.
  Abstract: Modern biometric systems base their decisions on the outcome of machine learning (ML) classifiers trained to make accurate predictions. Such classifiers are vulnerable to diverse adversarial attacks that alter the classifiers' predictions by adding a crafted perturbation. According to the ML literature, these attacks are transferable among models that perform the same task. However, models that perform different tasks but share the same input space and model architecture have not previously been considered in transferability scenarios. In this paper, we analyze this phenomenon for the special case of VGG16-based biometric classifiers. Concretely, we study the effect of the white-box FGSM (fast gradient sign method) attack on a gender classifier and compare several defense methods as countermeasures. Then, in a black-box manner, we attack a pre-trained face recognition classifier using adversarial images generated by FGSM. Our experiments show that the attack transfers from the gender classifier to the face recognition classifier, even though the two were trained independently.
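As background for the abstract above, the following is a minimal PyTorch sketch of FGSM, which perturbs an input one step in the direction of the sign of the loss gradient. The toy linear classifier, input shape, and perturbation budget epsilon are illustrative assumptions; the paper itself attacks VGG16-based classifiers:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier (the paper uses VGG16-based models).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # input image in [0, 1]
y = torch.tensor([1])                             # ground-truth label
epsilon = 0.03                                    # perturbation budget (assumed)

# FGSM: one gradient step on the input, in the direction that increases the loss.
loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# In a transferability experiment, x_adv (crafted against this white-box model)
# would then be fed to an independently trained black-box classifier.
print((x_adv - x.detach()).abs().max().item())  # bounded by epsilon
```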