P306 - BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group

Newest publications (1 - 10 of 33)
  • Conference paper
    Unit-Selection Based Facial Video Manipulation Detection
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Nielsen, V; Khodabakhsh, Ali; Busch, Christoph
    Advancements in video synthesis technology have caused major concerns over the authenticity of audio-visual content. A video manipulation method that is often overlooked is inter-frame forgery, in which segments (or units) of an original video are reordered and rejoined while the cut-points are covered with transition effects. Subjective tests have shown how susceptible viewers are to mistaking such content for authentic footage. To support research on the detection of such manipulations, we introduce a large-scale dataset of 1000 morph-cut videos that were generated by automating the popular video editing software Adobe Premiere Pro. Furthermore, we propose a novel differential detection pipeline and achieve an outstanding frame-level detection accuracy of 95%.
  • Conference paper
    Biometric System for Mobile Validation of ID And Travel Documents
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Medvedev, V; Gonçalves, Nuno; Cruz, Leandro
    Current trends in the security of ID and travel documents call for portable and efficient validation applications that rely on biometric recognition. Such tools allow any authority or citizen to validate documents and authenticate citizens without the need for expensive and sometimes unavailable proprietary devices. In this work, we present a novel, compact and efficient approach to validating ID and travel documents in offline mobile applications. The approach employs an in-house biometric template that is extracted from the original portrait photo (either full frontal or token frontal) and then stored on the ID document using a machine readable code (MRC). The ID document can then be validated with the developed application on a mobile device with a digital camera. The similarity score is estimated using an artificial neural network (ANN). Results show that we achieve a validation accuracy of up to 99.5%, with a corresponding false match rate of 0.0047 and a false non-match rate of 0.00034.
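The final validation step described in this abstract (comparing the template decoded from the document's machine readable code against a template extracted from the live camera capture) can be sketched as below. The paper estimates the similarity score with an ANN; this sketch substitutes plain cosine similarity, and the function name and threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def validate_document(stored_template, live_template, threshold=0.6):
    """Hypothetical sketch: compare the template decoded from the MRC on the
    document with the template extracted from the live capture.
    The paper uses an ANN for the similarity score; cosine similarity
    stands in for it here."""
    a = np.asarray(stored_template, dtype=float)
    b = np.asarray(live_template, dtype=float)
    score = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return score, bool(score >= threshold)
```

In a real system the threshold would be chosen to meet a target false match rate, which trades off against the false non-match rate.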
  • Conference paper
    Simulation of Print-Scan Transformations for Face Images based on Conditional Adversarial Networks
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Mitkovski, Aleksandar; Merkle, Johannes; Rathgeb, Christian; Tams, Benjamin; Bernardo, Kevin; Haryanto, Nathania E.; Busch, Christoph
    In many countries, printing and scanning of face images is frequently performed as part of the issuance process of electronic travel documents, e.g. ePassports. Image alterations induced by such print-scan transformations may negatively affect the performance of various biometric subsystems, in particular image manipulation detection. Consequently, corresponding training data is needed in order to achieve robustness against said transformations. However, manual printing and scanning is time-consuming and costly. In this work, we propose a simulation of print-scan transformations for face images based on a Conditional Generative Adversarial Network (cGAN). To this end, subsets of two public face databases are manually printed and scanned using different printer-scanner combinations. A cGAN is then trained to perform an image-to-image translation which simulates the corresponding print-scan transformations. The goodness of the simulation is evaluated with respect to image quality, biometric sample quality and performance, as well as human assessment.
  • Conference paper
    Touchless Fingerprint Sample Quality: Prerequisites for the Applicability of NFIQ2.0
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Priesnitz, Jannis; Rathgeb, Christian; Buchmann, Nicolas; Busch, Christoph
    The impact of fingerprint sample quality on biometric performance is undisputed. For touch-based fingerprint data, the effectiveness of the NFIQ2.0 quality estimation method is well documented in the scientific literature. Due to the increasing use of touchless fingerprint recognition systems, a thorough investigation of the usefulness of NFIQ2.0 for touchless fingerprint data is of interest. In this work, we investigate whether NFIQ2.0 quality scores are predictive of the error rates associated with the biometric performance of touchless fingerprint recognition. For this purpose, we propose a touchless fingerprint preprocessing that favours quality estimation with NFIQ2.0, which was designed for touch-based fingerprint data. Comparisons are made between NFIQ2.0 score distributions obtained from touch-based and touchless fingerprint data of the publicly available FVC06, MCYT, PolyU, and ISPFDv1 databases. Further, the predictive power regarding biometric performance is evaluated in terms of Error-versus-Reject Curves (ERCs) using an open-source fingerprint recognition system. Under constrained capture conditions, NFIQ2.0 is found to be an effective tool for touchless fingerprint quality estimation, provided an adequate preprocessing is applied.
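The Error-versus-Reject Curve evaluation mentioned in this abstract can be sketched in a few lines: genuine comparisons are sorted by sample quality, the lowest-quality fraction is rejected, and the false non-match rate (FNMR) of the remainder is recomputed. Function names and inputs below are illustrative, not the authors' code; for a predictive quality measure the FNMR falls as the reject fraction grows:

```python
import numpy as np

def erc(genuine_scores, quality_scores, threshold, reject_fractions):
    """Error-versus-Reject Curve sketch: FNMR among the genuine comparisons
    that remain after rejecting the lowest-quality fraction r."""
    order = np.argsort(quality_scores)              # ascending sample quality
    scores = np.asarray(genuine_scores, dtype=float)[order]
    n = len(scores)
    fnmr = []
    for r in reject_fractions:
        kept = scores[int(r * n):]                  # drop lowest-quality r
        fnmr.append(float(np.mean(kept < threshold)) if len(kept) else 0.0)
    return fnmr
```

If low-quality samples are indeed the ones that fail to match, the curve drops steeply near zero reject fraction, which is exactly the behaviour that indicates a predictive quality measure.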
  • Conference paper
    Improved Liveness Detection in Dorsal Hand Vein Videos using Photoplethysmography
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Schuiki, Johannes; Uhl, Andreas
    In this study, a previously published infrared finger vein liveness detection scheme is tested for its applicability to dorsal hand vein videos. A custom database consisting of five different types of presentation attacks, recorded with transillumination as well as reflected-light illumination, is examined. Additionally, two different methods for liveness detection are presented in this work. All described methods employ the concept of generating a signal from the change in average pixel illumination over time, known as photoplethysmography. Feature vectors for classifying a given video sequence are generated using spectral analysis of the resulting time series. Experimental results show the effectiveness of the proposed methods.
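The core signal-generation idea described here (average pixel intensity per frame, followed by spectral analysis) can be sketched as follows. The heart-rate band limits and the single energy-ratio feature are illustrative assumptions, not the authors' exact feature set:

```python
import numpy as np

def ppg_band_energy(frames, fps, band=(0.5, 4.0)):
    """Photoplethysmography sketch: build a time series from the mean pixel
    intensity of each frame, then measure how much spectral energy falls in
    a plausible heart-rate band (0.5-4 Hz, i.e. 30-240 bpm).

    frames: array of shape (T, H, W) with grey-level frames; fps: frame rate.
    A live hand produces a periodic pulse signal, so the band energy ratio
    is high; a static replica produces almost none.
    """
    signal = frames.reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / (spectrum.sum() + 1e-12)
```

A classifier would use several such spectral features rather than this single ratio, but the pipeline (mean intensity, detrending, FFT) is the same.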
  • Conference paper
    Fisher Vector Encoding of Dense-BSIF Features for Unknown Face Presentation Attack Detection
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) González-Soler, Lázaro J.; Gomez-Barrero, Marta; Busch, Christoph
    The task of determining whether a sample stems from a real subject (i.e., it is a bona fide presentation) or from an artificial replica (i.e., it is an attack presentation) is a mandatory requirement for biometric capture devices, and it has received a lot of attention in the recent past. Nowadays, most face Presentation Attack Detection (PAD) approaches report a good detection performance when evaluated on known Presentation Attack Instruments (PAIs) and acquisition conditions, in contrast to more challenging scenarios where unknown attacks are included in the evaluation. In those more realistic scenarios, the existing approaches are in many cases unable to detect unknown PAI species. In this work, we introduce a new feature space based on Fisher vectors, computed from compact Binarised Statistical Image Features (BSIF) histograms, which allows finding semantic feature subsets from known samples in order to enhance the detection of unknown attacks. This new representation, evaluated over three freely available facial databases, shows results competitive with the state of the art: a BPCER100 under 17% together with an AUC over 98% can be achieved in the presence of unknown attacks.
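A first-order Fisher vector encoding under a diagonal-covariance GMM, the standard way to aggregate local descriptors such as dense BSIF histograms into one fixed-length vector, can be sketched as below. The GMM parameters are assumed to be given (fitted on training descriptors), and this simplified version omits the second-order (variance) terms of the full encoding:

```python
import numpy as np

def fisher_vector(descriptors, means, covs, weights):
    """First-order Fisher vector sketch under a diagonal-covariance GMM.

    descriptors: (N, D) local descriptors; means, covs: (K, D); weights: (K,).
    Returns an L2-normalised vector of length K*D built from the
    posterior-weighted deviations of descriptors from component means.
    """
    x = descriptors[:, None, :]                           # (N, 1, D)
    # per-component diagonal-Gaussian log-likelihoods plus log mixture weight
    log_p = (-0.5 * ((x - means) ** 2 / covs
                     + np.log(2 * np.pi * covs)).sum(-1)
             + np.log(weights))                           # (N, K)
    log_p -= log_p.max(axis=1, keepdims=True)             # numeric stability
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)             # soft assignments
    dev = (gamma[:, :, None] * (x - means) / np.sqrt(covs)).mean(axis=0)
    fv = (dev / np.sqrt(weights)[:, None]).ravel()        # (K*D,)
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalisation
```

The resulting vector is what a linear classifier would then score for bona fide versus attack presentations.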
  • Conference paper
    BIOSIG 2020 - Complete Volume
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020)
  • Conference paper
    Can Generative Colourisation Help Face Recognition?
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Drozdowski, Pawel; Fischer, Daniel; Rathgeb, Christian; Geissler, Julian; Knedlik, Jan; Busch, Christoph
    Generative colourisation methods can be applied to automatically convert greyscale images to realistic-looking colour images. In a face recognition system, such techniques might be employed as a pre-processing step in scenarios where one or both of the face images to be compared are only available in greyscale format. In an experimental setup which reflects said scenarios, we investigate whether generative colourisation can improve face sample utility and the overall biometric performance of face recognition. To this end, subsets of the FERET and FRGCv2 face image databases are converted to greyscale and colourised by applying two versions of the DeOldify colourisation algorithm. Face sample quality assessment is done using the FaceQnet quality estimator. Biometric performance measurements are conducted for the widely used ArcFace system with its built-in face detector and reported according to standardised metrics. The obtained results indicate that, for the tested systems, the application of generative colourisation improves neither face image quality nor recognition performance. However, generative colourisation was found to aid the face detection and subsequent feature extraction of the face recognition system used, which results in a decrease of the overall false reject rate.
  • Conference paper
    Compact Models for Periocular Verification Through Knowledge Distillation
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Boutros, Fadi; Damer, Naser; Fang, Meiling; Raja, Kiran; Kirchbuchner, Florian; Kuijper, Arjan
    Despite the wide use of deep neural networks for periocular verification, achieving small deep learning models with high performance that can be deployed on devices with low computational power remains a challenge. With computational cost in mind, we present in this paper a lightweight deep learning model, DenseNet-20, with only 1.1 million trainable parameters, based on the DenseNet architecture. Further, we present an approach to enhance the verification performance of DenseNet-20 via knowledge distillation. In experiments on the VISPI dataset, captured with two different smartphones (iPhone and Nokia), we show that introducing knowledge distillation into the DenseNet-20 training phase outperforms training the same model without it: the Equal Error Rate (EER) is reduced from 8.36% to 4.56% on iPhone data, from 5.33% to 4.64% on Nokia data, and from 20.98% to 15.54% on cross-smartphone data.
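The knowledge-distillation objective commonly used in such a setup (Hinton-style matching of temperature-softened teacher and student outputs, mixed with the ordinary hard-label loss) can be sketched as below. The temperature and mixing weight are illustrative defaults, not the paper's settings:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)       # numeric stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Knowledge-distillation sketch: KL divergence between the softened
    teacher and student distributions, mixed with cross-entropy on the
    hard labels. The T**2 factor keeps gradient magnitudes comparable."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

During training, the large teacher network is frozen and only the compact student (here, a DenseNet-20-sized model) is updated to minimise this loss.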
  • Conference paper
    On the assessment of face image quality based on handcrafted features
    (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Henniger, Olaf; Fu, Biying; Chen, Cong
    This paper studies face image quality assessment, i.e. predicting the utility of face images for automated recognition. The utility of frontal face images from a publicly available dataset was assessed by comparing them with each other using commercial off-the-shelf face recognition systems. Multiple face image features delineating face symmetry and characteristics of the capture process were analysed to find features predictive of utility. The selected features were used to build system-specific and generic random forest classifiers.
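One example of a handcrafted symmetry feature of the kind analysed in this paper is a simple left-right mirror difference on an aligned face crop; this is an illustrative stand-in, not one of the authors' actual features:

```python
import numpy as np

def symmetry_score(face, eps=1e-12):
    """Left-right symmetry of an aligned grey-level face image: 1 minus the
    mean absolute difference between the image and its horizontal mirror,
    normalised by the intensity range. Frontal, evenly lit faces score
    close to 1; pose or illumination asymmetry lowers the score."""
    face = np.asarray(face, dtype=float)
    mirrored = face[:, ::-1]
    diff = np.abs(face - mirrored).mean()
    return 1.0 - diff / (face.max() - face.min() + eps)
```

Features like this, computed per image, would then feed the random forest classifiers that map them to a predicted utility.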