P339 - BIOSIG 2023 - Proceedings of the 22nd International Conference of the Biometrics Special Interest Group
Latest publications
- Conference paper: Fairness and Privacy in Voice Biometrics: A Study of Gender Influences Using wav2vec 2.0 (BIOSIG 2023, 2023) Oubaida Chouchane, Michele Panariello. This study investigates the impact of gender information on utility, privacy, and fairness in voice biometric systems, guided by the General Data Protection Regulation (GDPR) mandates, which underscore the need to minimize the processing and storage of private and sensitive data and to ensure fairness in automated decision-making systems. We adopt an approach that involves fine-tuning the wav2vec 2.0 model for speaker verification tasks, evaluating potential gender-related privacy vulnerabilities in the process. An adversarial technique is applied during fine-tuning to obscure gender information within the speaker embeddings, thus bolstering privacy. Results on the VoxCeleb datasets indicate that our adversarial model increases privacy against uninformed attacks (AUC of 46.80%), yet slightly diminishes speaker verification performance (EER of 3.89%) compared to the non-adversarial model (EER of 2.37%). The model's efficacy is reduced against informed attacks (AUC of 96.27%). A preliminary analysis of system performance is conducted to identify potential gender bias, highlighting the need for continued research to understand and enhance fairness, and the delicate interplay between utility, privacy, and fairness in voice biometric systems.
- Conference paper: Assessing the Human Ability to Recognize Synthetic Speech in Ordinary Conversation (BIOSIG 2023, 2023) Daniel Prudký, Anton Firc. This work assesses the human ability to recognize synthetic speech (deepfakes). The paper describes an experiment in which we communicated with respondents using voice messages. We presented the respondents with a cover story about testing the user-friendliness of voice messages while secretly sending them a pre-prepared deepfake recording during the conversation. We examined their reactions, their knowledge of deepfakes, and how many could correctly identify which message was the deepfake. The results show that none of the respondents reacted in any way to the fraudulent deepfake message, and only one retrospectively admitted to noticing something specific. On the other hand, after the nature of the experiment was revealed, 83.9% of respondents correctly identified the voice message that contained the deepfake. Thus, although the deepfake recording was clearly identifiable among the others, no one reacted to it. In summary, we show that the human ability to recognize voice deepfakes is not at a level we can trust. It is very difficult for people to distinguish between real and fake voices, especially if they do not expect them.
- Conference paper: Unified Face Image Quality Score based on ISO/IEC Quality Components (BIOSIG 2023, 2023) Praveen Kumar Chandaliya, Kiran Raja. Face image quality assessment is crucial in the face enrolment process to obtain high-quality face images for the reference database. Neglecting quality control adversely impacts the accuracy and efficiency of face recognition systems (FRS), as images may be captured with poor perceptual quality. In this work, we present a holistic combination of the 21 component quality measures proposed in "ISO/IEC CD 29794-5" and identify the varying nature of the different measures across datasets. This variance is seen across both capture-related and subject-related measures, making it tedious for a human observer to validate each component metric when judging the quality of an enrolment image. Motivated by this observation, we propose an efficient method of combining the quality components into one unified score using a simple supervised learning approach. The proposed approach for predicting face recognition performance from the resulting unified face image quality assessment (FIQA) score was comprehensively evaluated using three datasets representing diverse quality factors. We extensively evaluate the proposed approach using the Error-vs-Discard Characteristic (EDC) and show its applicability using five different FRS. The evaluation indicates promising results for combining multiple component scores into a unified score for broader application in face image enrolment in FRS.
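The fusion idea in the abstract above — collapsing the 21 component quality measures into one unified score with a simple supervised model — can be sketched as follows. The component values, the regression target, and the least-squares model here are illustrative assumptions, not the authors' actual pipeline:

```python
# Minimal sketch: fuse 21 face-image quality components into one unified
# score via supervised learning. All data here is synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 200 images x 21 component quality measures in [0, 100].
X = rng.uniform(0, 100, size=(200, 21))

# Hypothetical supervision target: a per-image recognition-utility proxy
# (e.g. derived from comparison scores); a noisy weighted sum stands in.
true_w = rng.uniform(0, 1, size=21)
y = X @ true_w + rng.normal(0, 5, size=200)

# Fit ordinary least squares: unified_score(x) = x @ w + b.
A = np.hstack([X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]

def unified_score(components: np.ndarray) -> float:
    """Collapse 21 component measures into one scalar quality score."""
    return float(components @ w + b)

print(unified_score(X[0]))
```

A linear model keeps the unified score interpretable — each component's learned weight indicates how strongly it drives predicted recognition performance — though any supervised regressor could take its place.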
- Conference paper: Human-centered evaluation of anomalous events detection in crowded environments (BIOSIG 2023, 2023) Giulia Orrù, Elia Porcedda. Anomaly detection in crowd analysis refers to the ability to detect events and behaviours of people that deviate from normality. Anomaly detection techniques are developed to support human operators in various monitoring and investigation activities. So far, anomaly detectors' performance evaluation has been derived from the rate of correctly classified individual frames, according to the labels given by the annotator. This evaluation does not make the system's performance easy to appreciate, especially from a human operator's viewpoint. In this paper, we propose a novel evaluation approach called "Trigger-Level evaluation" that is shown to be human-centered and closer to the user's perception of the system's performance. In particular, we define two new performance metrics to aid the evaluation of the usability of anomaly detectors in real time.
- Conference paper: Facial image reconstruction and its influence to face recognition (BIOSIG 2023, 2023) Filip Pleško, Tomas Goldmann. This paper focuses on reconstructing damaged facial images using GAN neural networks. In addition, the effect of generating the missing part of the face on face recognition is investigated. The main objective of this work is to observe whether it is possible to increase the accuracy of face recognition by generating missing parts while maintaining a low false accept rate (FAR). A new model for generating the missing parts of a face is proposed. For face recognition, state-of-the-art solutions from the DeepFace library and the QMagFace solution are used.
- Conference paper: Benchmarking fixed-length Fingerprint Representations across different Embedding Sizes and Sensor Types (BIOSIG 2023, 2023) Tim Rohwedder, Daile Osorio Roig. Traditional minutiae-based fingerprint representations consist of a variable-length set of minutiae. This necessitates a more complex comparison, with the drawback of high computational cost in one-to-many comparison. Recently, deep neural networks have been proposed to extract fixed-length embeddings from fingerprints. In this paper, we explore to what extent the fingerprint texture information contained in such embeddings can be reduced in dimension while preserving high biometric performance. This is of particular interest, since it would allow the number of operations per comparison to be reduced. We also study the impact of fingerprint texture information on recognition performance for two sensor types, i.e. optical and capacitive. Furthermore, the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed. Experimental results on a publicly available database reveal an optimal embedding size of 512 feature elements for the texture-based part of fixed-length fingerprint representations. In addition, differences in performance between sensor types can be observed. The source code of all experiments presented in this paper is publicly available at https://github.com/tim-rohwedder/fixed-length-fingerprint-extractors, so our work can be fully reproduced.
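Why embedding size matters for one-to-many search can be illustrated with a small sketch: the cost of each comparison scales linearly with the embedding dimension. The random embeddings and plain truncation below are stand-ins for illustration only; the benchmarked networks are trained to produce embeddings at each target size rather than truncated:

```python
# Sketch: comparing fixed-length fingerprint embeddings at two sizes.
# Real embeddings would come from a trained extractor; these are random.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, the usual score for fixed-length embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

full = rng.normal(size=(2, 1024))   # two full-length 1024-d embeddings
reduced = full[:, :512]             # illustrative 512-d variant

s_full = cosine(full[0], full[1])
s_reduced = cosine(reduced[0], reduced[1])

# Each comparison costs O(d) multiply-adds, so a one-to-many search over
# N references costs O(N * d): halving d halves the search workload.
print(s_full, s_reduced)
```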
- Conference paper: Automatic validation of ICAO compliance regarding head coverings: an inclusive approach concerning religious circumstances (BIOSIG 2023, 2023) Carla Guerra, João S. Marcos. This paper contributes a dataset and an algorithm that automatically verifies compliance with the ICAO requirements on the use of head coverings in facial images used in machine-readable travel documents. The methods found in the literature ignore that some coverings may be accepted for religious or cultural reasons, and essentially only look for the presence of hats or caps. Our approach specifically includes the religious cases and distinguishes the head coverings that might be considered compliant. We built a dataset composed of facial images of 500 identities to accommodate this type of accessory. That data was used to fine-tune and train a classification model based on the YOLOv8 framework, achieving state-of-the-art results with an accuracy of 99.1% and an EER of 5.7%.
- Conference paper: BIOSIG 2023 - Complete Volume (BIOSIG 2023, 2023)
- Conference paper: DEFT: A new distance-based feature set for keystroke dynamics (BIOSIG 2023, 2023) Nuwan Kaluarachchi, Sevvandi Kandanaarachchi, Kristen Moore, Arathi Arakala. Keystroke dynamics is a behavioural biometric used for user identification and authentication. We propose a new set of features based on the distance between keys on the keyboard, a concept that has not been considered before in keystroke dynamics. We combine flight times, a popular metric, with the distance between keys on the keyboard and call them Distance Enhanced Flight Time (DEFT) features. This novel approach provides comprehensive insights into a person's typing behaviour, surpassing typing velocity alone. We build a DEFT model by combining DEFT features with other previously used keystroke dynamics features. The DEFT model is designed to be device-agnostic, allowing us to evaluate its effectiveness across three commonly used devices: desktop, mobile, and tablet. The DEFT model outperforms existing state-of-the-art methods when evaluated across two datasets. We obtain accuracy rates exceeding 99% and equal error rates below 10% on all three devices.
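The core DEFT idea — pairing a digraph's flight time with the physical distance between its two keys — can be sketched as follows. The key-coordinate layout and the particular way of combining time with distance are illustrative assumptions, not the paper's exact feature definition:

```python
# Sketch: a distance-enhanced flight-time feature for one key digraph.
# Approximate positions of a few keys on a QWERTY layout, in key widths
# (x) and row heights (y); values are illustrative, not standardized.
KEY_POS = {
    "q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0),
    "a": (0.3, 1.0), "s": (1.3, 1.0), "d": (2.3, 1.0),
}

def key_distance(k1: str, k2: str) -> float:
    """Euclidean distance between two keys on the layout."""
    (x1, y1), (x2, y2) = KEY_POS[k1], KEY_POS[k2]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def deft(flight_time_ms: float, k1: str, k2: str) -> float:
    """Combine a digraph's flight time with its key distance.

    One plausible combination is time per unit distance (how long the
    finger travel took relative to how far it was); the paper's DEFT
    features may combine the two quantities differently.
    """
    d = key_distance(k1, k2)
    return flight_time_ms / d if d > 0 else flight_time_ms

print(deft(120.0, "q", "d"))  # flight time normalised by the q->d distance
```

The intuition: a 120 ms flight time means something different for adjacent keys than for keys across the keyboard, so normalising by distance separates typists whose raw flight times alone look alike.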
- Conference paper: Synthetic Latent Fingerprint Generation Using Style Transfer (BIOSIG 2023, 2023) Amol S. Joshi, Ali Dabouei. Limited data availability is a challenging problem in the latent fingerprint domain. Synthetically generated fingerprints are vital for training data-hungry neural-network-based algorithms. Conventional methods distort clean fingerprints to generate synthetic latent fingerprints. We propose a simple and effective approach using style transfer and image blending to synthesize realistic latent fingerprints. Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information from the input contact-based fingerprints while possessing characteristics similar to real latent fingerprints. Additionally, we show that the generated fingerprints exhibit several qualities and styles, suggesting that the proposed method can generate multiple samples from a single fingerprint.