
P339 - BIOSIG 2023 - Proceedings of the 22nd International Conference of the Biometrics Special Interest Group


Sorted by: Newest publications

1 - 10 of 33
  • Conference paper
    Fairness and Privacy in Voice Biometrics: A Study of Gender Influences Using wav2vec 2.0
    (BIOSIG 2023, 2023) Oubaida Chouchane, Michele Panariello
    This study investigates the impact of gender information on utility, privacy, and fairness in voice biometric systems, guided by the General Data Protection Regulation (GDPR) mandates, which underscore the need for minimizing the processing and storage of private and sensitive data, and ensuring fairness in automated decision-making systems. We adopt an approach that involves the fine-tuning of the wav2vec 2.0 model for speaker verification tasks, evaluating potential gender-related privacy vulnerabilities in the process. An adversarial technique is implemented during the fine-tuning process to obscure gender information within the speaker embeddings, thus bolstering privacy. Results from VoxCeleb datasets indicate our adversarial model increases privacy against uninformed attacks (AUC of 46.80%), yet slightly diminishes speaker verification performance (EER of 3.89%) compared to the non-adversarial model (EER of 2.37%). The model's efficacy reduces against informed attacks (AUC of 96.27%). Preliminary analysis of system performance is conducted to identify potential gender bias, thus highlighting the need for continued research to understand and enhance fairness, and the delicate interplay between utility, privacy, and fairness in voice biometric systems.
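    The abstract reports speaker verification performance as equal error rate (EER). As a minimal illustrative sketch (not the authors' code), the EER can be estimated from lists of genuine and impostor comparison scores by finding the threshold where false accept and false reject rates meet; the scores below are fabricated for demonstration:

    ```python
    import numpy as np

    def compute_eer(genuine, impostor):
        """Equal Error Rate: the operating point where FAR == FRR (approximately).

        genuine  -- similarity scores for mated comparisons
        impostor -- similarity scores for non-mated comparisons
        """
        thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
        best_gap, eer = np.inf, 1.0
        for t in thresholds:
            far = np.mean(impostor >= t)   # impostors wrongly accepted
            frr = np.mean(genuine < t)     # genuine speakers wrongly rejected
            if abs(far - frr) < best_gap:
                best_gap, eer = abs(far - frr), (far + frr) / 2
        return eer

    # Fabricated example scores (higher = more similar).
    genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
    impostor = np.array([0.2, 0.3, 0.4, 0.75, 0.1])
    print(compute_eer(genuine, impostor))  # 0.2: FAR and FRR cross at 20%
    ```

    A finer estimate would interpolate between thresholds, but scanning the observed scores suffices to illustrate the metric.
    
    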
  • Conference paper
    Unified Face Image Quality Score based on ISO/IEC Quality Components
    (BIOSIG 2023, 2023) Praveen Kumar Chandaliya, Kiran Raja
    Face image quality assessment is crucial in the face enrolment process to obtain high-quality face images in the reference database. Neglecting quality control adversely impacts the accuracy and efficiency of face recognition systems, as images may be captured with poor perceptual quality. In this work, we present a holistic combination of 21 component quality measures proposed in "ISO/IEC CD 29794-5" and identify the varying nature of different measures across different datasets. The variance is seen across both capture-related and subject-related measures, which makes it tedious for a human observer to validate each component metric when judging the quality of the enrolment image. Motivated by this observation, we propose an efficient method of combining quality components into one unified score using a simple supervised learning approach. The proposed approach for predicting face recognition performance based on the obtained unified face image quality assessment (FIQA) score was comprehensively evaluated using three datasets representing diverse quality factors. We extensively evaluate the proposed approach using the Error-vs-Discard Characteristic (EDC) and show its applicability using five different face recognition systems (FRS). The evaluation indicates promising results of the proposed approach, combining multiple component scores into a unified score for broader application in face image enrolment in FRS.
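    The combination step described above, collapsing 21 component measures into one unified score with simple supervised learning, can be sketched with an ordinary least-squares fit. The data, target, and weights below are fabricated purely to demonstrate the fitting mechanics and are not the paper's model or training labels:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative stand-in: 200 face images, each with 21 component quality
    # scores in [0, 100]. The paper derives its target from recognition
    # performance; here we fabricate one as a noisy weighted sum.
    X = rng.uniform(0, 100, size=(200, 21))
    true_w = rng.uniform(0, 1, size=21)
    y = X @ true_w + rng.normal(0, 1.0, size=200)

    # Simple supervised combination: least-squares fit of one weight per
    # component measure, yielding a single unified quality score.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def unified_score(components, weights=w):
        """Collapse the 21 component measures into one scalar quality score."""
        return float(components @ weights)

    print(unified_score(X[0]))
    ```

    Any regressor would do for this step; linear least squares just makes the "one weight per component" idea explicit.
    
    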
  • Conference paper
    A RISE-based explainability method for genuine and impostor face verification
    (BIOSIG 2023, 2023) Naima Bousnina, Joao Ascenso
    Heat Map (HM)-based explainable Face Verification (FV) aims to visually interpret the decision-making of black-box FV models. Despite impressive results, state-of-the-art HM-based FV explainability methods mainly address genuine verification, generating visual explanations that reveal the similar face regions that contributed most to acceptance decisions. However, the similar face regions may not be the only critical regions for the model decision, notably when rejection decisions are made. To address this issue, this paper proposes a more complete FV explainability method, providing meaningful HM-based explanations for both genuine and impostor verification and the associated acceptance and rejection decisions. The proposed method adapts the RISE algorithm to FV to generate Similarity Heat Maps (S-HMs) and Dissimilarity Heat Maps (D-HMs), which offer reliable explanations for all types of FV decisions. Qualitative and quantitative experimental results show the effectiveness of the proposed FV explainability method beyond state-of-the-art benchmarks.
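    A rough sketch of the RISE-style idea described here, assuming only black-box access to a similarity function: random binary masks occlude the probe image, and each mask is accumulated weighted by the resulting similarity (for an S-HM) or dissimilarity (for a D-HM). The toy verifier, mask parameters, and image sizes are illustrative assumptions, not the paper's implementation (RISE proper also upsamples smooth low-resolution masks):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rise_fv_heatmaps(probe, reference, similarity, n_masks=500, p_keep=0.5):
        """RISE-style saliency for face verification (illustrative sketch).

        Occlude the probe with random binary masks; weight each mask by the
        similarity s of the masked probe to the reference (S-HM) or by the
        dissimilarity 1 - s (D-HM), then average over masks.
        """
        h, w = probe.shape
        s_hm = np.zeros((h, w))
        d_hm = np.zeros((h, w))
        for _ in range(n_masks):
            mask = (rng.random((h, w)) < p_keep).astype(float)
            s = similarity(probe * mask, reference)
            s_hm += s * mask
            d_hm += (1.0 - s) * mask
        return s_hm / n_masks, d_hm / n_masks

    # Toy "verifier": similarity in [0, 1] as one minus mean absolute distance.
    def toy_similarity(a, b):
        return 1.0 - np.abs(a - b).mean()

    probe = rng.random((8, 8))
    s_hm, d_hm = rise_fv_heatmaps(probe, probe.copy(), toy_similarity)
    ```

    Pixels whose occlusion lowers the similarity end up bright in the S-HM; pixels whose occlusion raises the dissimilarity dominate the D-HM, which is what makes rejection decisions explainable too.
    
    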
  • Conference paper
    Facial image reconstruction and its influence to face recognition
    (BIOSIG 2023, 2023) Filip Pleško, Tomas Goldmann
    This paper focuses on reconstructing damaged facial images using generative adversarial networks (GANs). In addition, the effect of generating the missing part of the face on face recognition is investigated. The main objective of this work is to observe whether it is possible to increase the accuracy of face recognition by generating missing parts while maintaining a low false accept rate (FAR). A new model for generating the missing parts of a face is proposed. For face recognition, state-of-the-art solutions from the DeepFace library and the QMagFace solution are used.
  • Conference paper
    Assessing the Human Ability to Recognize Synthetic Speech in Ordinary Conversation
    (BIOSIG 2023, 2023) Daniel Prudký, Anton Firc
    This work assesses the human ability to recognize synthetic speech (deepfakes). This paper describes an experiment in which we communicated with respondents using voice messages. We presented the respondents with a cover story about testing the user-friendliness of voice messages while secretly sending them a pre-prepared deepfake recording during the conversation. We examined their reactions, their knowledge of deepfakes, and how many could correctly identify which message was the deepfake. The results show that none of the respondents reacted in any way to the fraudulent deepfake message, and only one retrospectively admitted to noticing something specific. On the other hand, the voice message that contained the deepfake was correctly identified by 83.9% of respondents after the nature of the experiment was revealed. Thus, the results show that although the deepfake recording was clearly identifiable among the others, no one reacted to it. In summary, we show that the human ability to recognize voice deepfakes is not at a level we can trust: it is very difficult for people to distinguish between real and fake voices, especially when they do not expect them.
  • Conference paper
    A Wrist-worn Diffuse Optical Tomography Biometric System
    (BIOSIG 2023, 2023) Satya Sai Siva Rama Krishna Akula, Sumanth Dasari
    We present a diffuse optical tomography (DOT)-based biometric system that does not depend on external traits such as the face, but rather on interior anatomical information, for better privacy and security. The DOT scanner takes the form of a wearable over the lower forearm and wrist, where anatomical structures in the optical path of the scanner optodes serve as the basis for unique biometric patterns. Our DOT scanner is low-cost and uses commercial off-the-shelf (COTS) near-infrared LEDs and sensors. To supplement the DOT, our design also incorporates wrist vein imaging as a secondary modality. This paper details the design of the wristband, data collection, and machine learning-based analysis to show the utility of DOT as a stand-alone biometric modality and the efficacy of fusing the DOT and wrist vein modalities. Our early experimental findings show promise, achieving a high area under the receiver operating characteristic curve (0.989).
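    The fusion of the two modalities and the reported area-under-ROC figure can be illustrated with a simple score-level sum-rule sketch. The per-comparison scores, the fusion weight, and the rank-statistic AUC estimator below are illustrative assumptions, not the authors' pipeline:

    ```python
    import numpy as np

    def auc(genuine, impostor):
        """Area under the ROC curve via the rank statistic: the probability
        that a random genuine score exceeds a random impostor score."""
        g = np.asarray(genuine)[:, None]
        i = np.asarray(impostor)[None, :]
        return float((g > i).mean() + 0.5 * (g == i).mean())

    # Hypothetical per-comparison scores from the two modalities.
    dot_g,  dot_i  = np.array([0.8, 0.7, 0.9]), np.array([0.4, 0.5, 0.3])
    vein_g, vein_i = np.array([0.6, 0.9, 0.7]), np.array([0.5, 0.2, 0.4])

    # Simple weighted sum-rule fusion at score level (weight is illustrative).
    w = 0.6
    fused_g = w * dot_g + (1 - w) * vein_g
    fused_i = w * dot_i + (1 - w) * vein_i
    print(auc(fused_g, fused_i))  # 1.0 on this tiny separable example
    ```

    Sum-rule fusion is only one option; the point is that fusing calibrated scores from complementary modalities can only help when their errors are not perfectly correlated.
    
    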
  • Conference paper
    Generalizability and Application of the Skin Reflectance Estimate Based on Dichromatic Separation (SREDS)
    (BIOSIG 2023, 2023) Joseph A Drahos, Richard Plesh
    Face recognition (FR) systems have become widely used and readily available in recent years. However, differential performance between certain demographics has been identified in popular FR models. Skin tone differences between demographics can be one of the factors contributing to the differential performance observed in face recognition models. Skin tone metrics provide an alternative to self-reported race labels when such labels are lacking or unavailable, e.g., in large-scale face recognition datasets. In this work, we provide a further analysis of the generalizability of the Skin Reflectance Estimate based on Dichromatic Separation (SREDS) against other skin tone metrics and provide a use case for substituting race labels with SREDS scores in a privacy-preserving learning solution. Our findings suggest that SREDS consistently yields a skin tone metric with lower variability within each subject and that SREDS values can be used as an alternative to self-reported race labels with minimal drop in performance. Finally, we provide a publicly available, open-source implementation of SREDS to help the research community, available at https://github.com/JosephDrahos/SREDS
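    The "lower variability within each subject" claim can be illustrated by the kind of comparison presumably involved: the mean per-subject standard deviation of a skin tone metric across each subject's images. The data and metric values below are fabricated purely for illustration and are unrelated to SREDS itself:

    ```python
    import numpy as np

    def within_subject_variability(scores_by_subject):
        """Mean per-subject standard deviation of a skin tone metric.

        scores_by_subject -- dict mapping subject id to that subject's
        metric values over their images; lower result means the metric is
        more consistent for the same person.
        """
        return float(np.mean([np.std(s) for s in scores_by_subject.values()]))

    # Two hypothetical metrics measured on the same two subjects.
    metric_stable = {"s1": [0.30, 0.31, 0.29], "s2": [0.70, 0.72, 0.71]}
    metric_noisy  = {"s1": [0.30, 0.45, 0.20], "s2": [0.70, 0.55, 0.80]}

    print(within_subject_variability(metric_stable))
    print(within_subject_variability(metric_noisy))
    ```

    A metric with low within-subject spread but clear between-subject separation is what makes it usable as a stand-in for demographic labels.
    
    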
  • Conference paper
    Benchmarking fixed-length Fingerprint Representations across different Embedding Sizes and Sensor Types
    (BIOSIG 2023, 2023) Tim Rohwedder, Daile Osorio Roig
    Traditional minutiae-based fingerprint representations consist of a variable-length set of minutiae. This necessitates a more complex comparison procedure, incurring high computational cost in one-to-many comparison. Recently, deep neural networks have been proposed to extract fixed-length embeddings from fingerprints. In this paper, we explore to what extent the dimension of the fingerprint texture information contained in such embeddings can be reduced while preserving high biometric performance. This is of particular interest, since it would allow reducing the number of operations incurred per comparison. We also study the impact of fingerprint texture information on recognition performance for two sensor types, i.e., optical and capacitive. Furthermore, the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed. Experimental results on a publicly available database reveal an optimal embedding size of 512 feature elements for the texture-based part of fixed-length fingerprint representations. In addition, differences in performance between sensor types are observed. The source code of all experiments presented in this paper is publicly available at https://github.com/tim-rohwedder/fixed-length-fingerprint-extractors, so our work can be fully reproduced.
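    Why a smaller embedding directly cuts one-to-many comparison cost can be seen in a minimal sketch: with fixed-length, L2-normalized embeddings, searching an entire gallery reduces to one matrix-vector product whose cost scales linearly with the embedding dimension. The gallery size and random embeddings below are hypothetical, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def l2_normalize(x, axis=-1):
        """Scale vectors to unit length so dot product equals cosine similarity."""
        return x / np.linalg.norm(x, axis=axis, keepdims=True)

    # Hypothetical gallery of 10,000 enrolled fingerprints with 512-d
    # fixed-length embeddings (the dimension the paper finds optimal).
    gallery = l2_normalize(rng.standard_normal((10_000, 512)))
    probe = l2_normalize(rng.standard_normal(512))

    # One-to-many search is a single matrix-vector product: one multiply-add
    # per feature element per enrolled subject, so halving the embedding
    # size halves the comparison work.
    scores = gallery @ probe           # cosine similarities
    best = int(np.argmax(scores))      # rank-1 candidate
    ```

    Variable-length minutiae sets, by contrast, need pairwise minutiae alignment per comparison, which is what the fixed-length representation avoids.
    
    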
  • Conference paper
    Human-centered evaluation of anomalous events detection in crowded environments
    (BIOSIG 2023, 2023) Giulia Orrù, Elia Porcedda
    Anomaly detection in crowd analysis refers to the ability to detect events and people's behaviours that deviate from normality. Anomaly detection techniques are developed to support human operators in various monitoring and investigation activities. So far, the performance evaluation of anomaly detectors has been derived from the rate of correctly classified individual frames, according to the labels given by the annotator. This evaluation does not make the system's performance easy to appreciate, especially from a human operator's viewpoint. In this paper, we propose a novel evaluation approach called "Trigger-Level evaluation" that is shown to be human-centered and closer to the user's perception of the system's performance. In particular, we define two new performance metrics to aid the evaluation of the usability of anomaly detectors in real time.
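    One plausible reading of an event-level ("trigger-level") metric, the abstract does not spell out the paper's exact definitions, is to group consecutive anomalous frames into events and score detection per event rather than per frame. The sketch below is an assumption-labeled illustration of that idea, not the authors' metrics:

    ```python
    def frames_to_triggers(labels):
        """Group consecutive positive frames into events ('triggers').

        labels -- binary per-frame sequence; returns (start, end) index
        pairs, end exclusive.
        """
        triggers, start = [], None
        for i, v in enumerate(labels):
            if v and start is None:
                start = i
            elif not v and start is not None:
                triggers.append((start, i))
                start = None
        if start is not None:
            triggers.append((start, len(labels)))
        return triggers

    def trigger_recall(truth, pred):
        """Fraction of ground-truth events overlapped by at least one
        predicted anomalous frame (illustrative event-level metric)."""
        events = frames_to_triggers(truth)
        hits = sum(any(pred[s:e]) for s, e in events)
        return hits / len(events) if events else 1.0

    truth = [0, 1, 1, 0, 0, 1, 1, 1, 0]
    pred  = [0, 0, 1, 0, 0, 0, 0, 0, 0]
    print(trigger_recall(truth, pred))  # 0.5: one of two events detected
    ```

    Under such a metric, flagging a single frame of a long anomalous event counts as a detection, which is closer to how an operator watching an alert feed experiences the system than a per-frame accuracy rate.
    
    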
  • Conference paper
    Automatic validation of ICAO compliance regarding head coverings: an inclusive approach concerning religious circumstances
    (BIOSIG 2023, 2023) Carla Guerra, João S. Marcos
    This paper contributes a dataset and an algorithm that automatically verifies compliance with the ICAO requirements related to the use of head coverings in facial images used on machine-readable travel documents. The methods found in the literature ignore that some coverings might be accepted for religious or cultural reasons and essentially only check for the presence of hats/caps. Our approach specifically includes the religious cases and distinguishes the head coverings that might be considered compliant. We built a dataset composed of facial images of 500 identities to accommodate these types of accessories. That data was used to fine-tune and train a classification model based on the YOLOv8 framework, and we achieved state-of-the-art results with an accuracy of 99.1% and an EER of 5.7%.