Authors: Chouchane, Oubaida; Panariello, Michele
Editors: Damer, Naser; Gomez-Barrero, Marta; Raja, Kiran; Rathgeb, Christian; Sequeira, Ana F.; Todisco, Massimiliano; Uhl, Andreas
Date available: 2023-12-12
Date issued: 2023
ISBN: 978-3-88579-733-3
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/43290
Language: en
Keywords: Soft biometric privacy; Demographic bias; Fairness; Speech and speaker recognition
Title: Fairness and Privacy in Voice Biometrics: A Study of Gender Influences Using wav2vec 2.0
Type: Text/Conference Paper

Abstract: This study investigates the impact of gender information on utility, privacy, and fairness in voice biometric systems, guided by the mandates of the General Data Protection Regulation (GDPR), which underscore the need to minimize the processing and storage of private and sensitive data and to ensure fairness in automated decision-making systems. We fine-tune the wav2vec 2.0 model for speaker verification and evaluate potential gender-related privacy vulnerabilities in the process. An adversarial technique is applied during fine-tuning to obscure gender information within the speaker embeddings, thus bolstering privacy. Results on the VoxCeleb datasets indicate that our adversarial model increases privacy against uninformed attacks (AUC of 46.80%), yet slightly diminishes speaker verification performance (EER of 3.89%) compared to the non-adversarial model (EER of 2.37%). The model's efficacy diminishes against informed attacks (AUC of 96.27%). A preliminary analysis of system performance is conducted to identify potential gender bias, highlighting the need for continued research into fairness and the delicate interplay between utility, privacy, and fairness in voice biometric systems.
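The adversarial fine-tuning summarized in the abstract can be realized in several ways; one common pattern is a gradient-reversal gender classifier attached to the speaker embeddings, sketched below in PyTorch. This is a minimal illustration under assumptions, not the authors' implementation: the GradReverse and AdversarialGenderHead classes, and the encoder, speaker_head, and batch-key names, are all hypothetical.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; scales the gradient by -lambda
        on the backward pass, so the encoder is trained to *defeat*
        the gender classifier."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    class AdversarialGenderHead(nn.Module):
        """Gender classifier fed through gradient reversal; hypothetical
        sketch of the adversarial branch described in the abstract."""
        def __init__(self, embed_dim: int, lam: float = 1.0):
            super().__init__()
            self.lam = lam
            self.classifier = nn.Sequential(
                nn.Linear(embed_dim, 128),
                nn.ReLU(),
                nn.Linear(128, 2),  # two gender classes
            )

        def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
            reversed_emb = GradReverse.apply(embeddings, self.lam)
            return self.classifier(reversed_emb)

    # Hypothetical training step: `encoder` stands in for a wav2vec 2.0
    # speaker encoder producing fixed-size embeddings, `speaker_head`
    # for the speaker verification objective; batch keys are illustrative.
    def training_step(encoder, speaker_head, gender_head, batch, optimizer):
        ce = nn.CrossEntropyLoss()
        emb = encoder(batch["waveform"])                    # (B, embed_dim)
        spk_loss = ce(speaker_head(emb), batch["speaker"])  # utility objective
        adv_loss = ce(gender_head(emb), batch["gender"])    # adversary objective
        # Minimizing the sum trains the gender classifier normally, while
        # the reversed gradient pushes the encoder to make gender
        # unpredictable from the embeddings.
        loss = spk_loss + adv_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Under this setup, the tension reported in the abstract is expected: removing gender cues from the embeddings (better privacy against an uninformed attacker) can also discard speaker-discriminative information, which is consistent with the EER rising from 2.37% to 3.89%.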