P306 - BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group
Latest Publications
- Conference Paper: Unit-Selection Based Facial Video Manipulation Detection (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Nielsen, V.; Khodabakhsh, Ali; Busch, Christoph. Advancements in video synthesis technology have caused major concerns over the authenticity of audio-visual content. A video manipulation method that is often overlooked is inter-frame forgery, in which segments (or units) of an original video are reordered and rejoined while cut-points are covered with transition effects. Subjective tests have shown that viewers are susceptible to mistaking such content for authentic material. In order to support research on the detection of such manipulations, we introduce a large-scale dataset of 1000 morph-cut videos that were generated by automating the popular video editing software Adobe Premiere Pro. Furthermore, we propose a novel differential detection pipeline and achieve an outstanding frame-level detection accuracy of 95%.
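The abstract does not spell out the differential detection pipeline, so the following is only a minimal, hypothetical sketch in the same spirit: screen for candidate cut-points by looking for spikes in the frame-to-frame intensity difference. All names and thresholds here are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: flag candidate cut-points in a video via spikes in the
# frame-to-frame intensity difference. Not the paper's pipeline, just a baseline.
import cv2
import numpy as np

def frame_difference_signal(video_path: str) -> np.ndarray:
    """Mean absolute grey-level difference between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diffs.append(float(np.mean(np.abs(grey - prev))))
        prev = grey
    cap.release()
    return np.asarray(diffs)

def candidate_cut_points(diffs: np.ndarray, k: float = 6.0) -> np.ndarray:
    """Frame indices whose difference exceeds a robust median + k*MAD threshold."""
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med)) + 1e-8
    return np.where(diffs > med + k * mad)[0]
```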
- Conference Paper: Simulation of Print-Scan Transformations for Face Images based on Conditional Adversarial Networks (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Mitkovski, Aleksandar; Merkle, Johannes; Rathgeb, Christian; Tams, Benjamin; Bernardo, Kevin; Haryanto, Nathania E.; Busch, Christoph. In many countries, printing and scanning of face images is frequently performed as part of the issuance process of electronic travel documents, e.g., ePassports. Image alterations induced by such print-scan transformations may negatively affect the performance of various biometric subsystems, in particular image manipulation detection. Consequently, corresponding training data is needed in order to achieve robustness towards said transformations. However, manual printing and scanning is time-consuming and costly. In this work, we propose a simulation of print-scan transformations for face images based on a Conditional Generative Adversarial Network (cGAN). To this end, subsets of two public face databases are manually printed and scanned using different printer-scanner combinations. A cGAN is then trained to perform an image-to-image translation which simulates the corresponding print-scan transformations. The quality of the simulation is evaluated with respect to image quality, biometric sample quality and performance, as well as human assessment.
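As a rough illustration of the image-to-image cGAN setup described above, here is a minimal pix2pix-style sketch of one generator update (adversarial loss plus L1 reconstruction). The toy networks, the loss weighting and the assumption of a pix2pix-like formulation are mine, not taken from the paper.

```python
# Minimal pix2pix-style cGAN generator step for print-scan simulation.
# Architecture and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for the actual generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """PatchGAN-like discriminator conditioned on the digital input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

def generator_step(G, D, opt_G, digital, printed_scanned, lam=100.0):
    """One generator update: fool D and stay close (L1) to the real print-scan."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake = G(digital)                      # simulated print-scan image
    pred = D(digital, fake)                # discriminator sees (input, output)
    loss = bce(pred, torch.ones_like(pred)) + lam * l1(fake, printed_scanned)
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()
```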
- Conference Paper: Biometric System for Mobile Validation of ID And Travel Documents (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Medvedev, V.; Gonçalves, Nuno; Cruz, Leandro. Current trends in the security of ID and travel documents require portable and efficient validation applications that rely on biometric recognition. Such tools allow any authority or citizen to validate documents and authenticate citizens without the need for expensive and sometimes unavailable proprietary devices. In this work, we present a novel, compact and efficient approach to validating ID and travel documents in offline mobile applications. The approach employs an in-house biometric template that is extracted from the original portrait photo (either full frontal or token frontal) and then stored on the ID document using a machine-readable code (MRC). The ID document can then be validated with the developed application on a mobile device with a digital camera. The similarity score is estimated using an artificial neural network (ANN). Results show that we achieve a validation accuracy of up to 99.5%, with a corresponding false match rate of 0.0047 and a false non-match rate of 0.00034.
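A rough sketch of the store-in-MRC and compare flow follows. The paper's template format and ANN comparator are proprietary; here a QR code stands in for the MRC and plain cosine similarity stands in for the ANN, so everything below is an assumption.

```python
# Hypothetical sketch: quantise a face embedding, store it in a QR-like MRC,
# and later compare it to a live capture. Cosine similarity replaces the ANN.
import base64
import numpy as np
import qrcode  # assumption: the MRC is something QR-like

def template_to_mrc(embedding: np.ndarray, path: str) -> None:
    """Quantise the embedding to uint8 and store it in a QR code image."""
    emb = embedding / (np.linalg.norm(embedding) + 1e-8)
    q = np.clip(emb * 127.5 + 127.5, 0, 255).astype(np.uint8)
    qrcode.make(base64.b64encode(q.tobytes()).decode()).save(path)

def mrc_payload_to_template(payload: str) -> np.ndarray:
    """Recover the stored embedding from the decoded MRC payload."""
    q = np.frombuffer(base64.b64decode(payload), dtype=np.uint8)
    return (q.astype(np.float32) - 127.5) / 127.5

def validate(stored: np.ndarray, live: np.ndarray, threshold: float = 0.6) -> bool:
    """Accept if the cosine similarity of stored and live embeddings is high enough."""
    s = stored / np.linalg.norm(stored)
    l = live / np.linalg.norm(live)
    return float(np.dot(s, l)) >= threshold
```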
- Conference Paper: Fisher Vector Encoding of Dense-BSIF Features for Unknown Face Presentation Attack Detection (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) González-Soler, Lázaro J.; Gomez-Barrero, Marta; Busch, Christoph. The task of determining whether a sample stems from a real subject (i.e., it is a bona fide presentation) or comes from an artificial replica (i.e., it is an attack presentation) is a mandatory requirement for biometric capture devices, and it has received a lot of attention in the recent past. Nowadays, most face Presentation Attack Detection (PAD) approaches report good detection performance when they are evaluated on known Presentation Attack Instruments (PAIs) and acquisition conditions, in contrast to more challenging scenarios where unknown attacks are included in the evaluation. For those more realistic scenarios, the existing approaches are in many cases unable to detect unknown PAI species. In this work, we introduce a new feature space based on Fisher vectors, computed from compact Binarised Statistical Image Features (BSIF) histograms, which allows finding semantic feature subsets from known samples in order to enhance the detection of unknown attacks. This new representation, evaluated over three freely available facial databases, shows results that are among the top state of the art: a BPCER100 under 17% together with an AUC over 98% can be achieved in the presence of unknown attacks.
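The Fisher vector step itself is a standard encoding and can be sketched compactly. The BSIF histogram extraction is omitted (it requires the learned ICA filters), so any dense local descriptors can stand in; GMM size and normalisation choices below are assumptions, not the paper's settings.

```python
# Sketch of standard Fisher vector encoding over local descriptors
# (the role played by dense BSIF histograms in the paper).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(train_descriptors: np.ndarray, n_components: int = 64) -> GaussianMixture:
    """Fit the diagonal-covariance GMM that acts as the visual vocabulary."""
    return GaussianMixture(n_components, covariance_type="diag",
                           max_iter=200, random_state=0).fit(train_descriptors)

def fisher_vector(gmm: GaussianMixture, X: np.ndarray) -> np.ndarray:
    """Encode one sample's descriptors X (N x D) as a power/L2-normalised FV."""
    N, _ = X.shape
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_   # diagonal covariances
    gamma = gmm.predict_proba(X)                               # responsibilities, N x K
    diff = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    g_mu = (gamma[:, :, None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (gamma[:, :, None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                     # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                   # L2 normalisation
```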
- Conference Paper: Improved Liveness Detection in Dorsal Hand Vein Videos using Photoplethysmography (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Schuiki, Johannes; Uhl, Andreas. In this study, a previously published infrared finger vein liveness detection scheme is tested for its applicability to dorsal hand vein videos. A custom database consisting of five different types of presentation attacks, recorded with transillumination as well as reflected-light illumination, is examined. Additionally, two different methods for liveness detection are presented in this work. All methods described employ the concept of generating a signal from the change in average pixel illumination, which is referred to as photoplethysmography. Feature vectors for classifying a given video sequence are generated using spectral analysis of this time series. Experimental results show the effectiveness of the proposed methods.
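A minimal sketch of the signal extraction described above: the per-frame mean intensity forms a 1-D time series that is then described by its spectrum. The binning scheme and the choice of classifier input are assumptions; the paper's concrete features may differ.

```python
# Sketch of PPG-style feature extraction: per-frame mean intensity -> FFT bands.
import cv2
import numpy as np

def mean_intensity_signal(video_path: str) -> np.ndarray:
    """Average grey-level of every frame in the video."""
    cap = cv2.VideoCapture(video_path)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        values.append(float(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()))
    cap.release()
    return np.asarray(values)

def spectral_features(signal: np.ndarray, fps: float, n_bins: int = 16) -> np.ndarray:
    """Normalised power in n_bins frequency bands of the detrended signal."""
    x = signal - signal.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    edges = np.linspace(0, freqs[-1], n_bins + 1)
    feats = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return feats / (feats.sum() + 1e-12)
```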
- Conference Paper: Touchless Fingerprint Sample Quality: Prerequisites for the Applicability of NFIQ2.0 (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Priesnitz, Jannis; Rathgeb, Christian; Buchmann, Nicolas; Busch, Christoph. The impact of fingerprint sample quality on biometric performance is undisputed. For touch-based fingerprint data, the effectiveness of the NFIQ2.0 quality estimation method is well documented in the scientific literature. Due to the increasing use of touchless fingerprint recognition systems, a thorough investigation of the usefulness of NFIQ2.0 for touchless fingerprint data is of interest. In this work, we investigate whether NFIQ2.0 quality scores are predictive of the error rates associated with the biometric performance of touchless fingerprint recognition. For this purpose, we propose a touchless fingerprint preprocessing that favours NFIQ2.0 quality estimation, which was designed for touch-based fingerprint data. Comparisons are made between NFIQ2.0 score distributions obtained from touch-based and touchless fingerprint data of the publicly available FVC06, MCYT, PolyU, and ISPFDv1 databases. Further, the predictive power regarding biometric performance is evaluated in terms of Error-versus-Reject Curves (ERCs) using an open-source fingerprint recognition system. Under constrained capture conditions, NFIQ2.0 is found to be an effective tool for touchless fingerprint quality estimation if adequate preprocessing is applied.
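The Error-versus-Reject Curve evaluation mentioned above can be sketched as follows: progressively discard the lowest-quality genuine comparisons and track how the false non-match rate (FNMR) drops at a fixed decision threshold. Taking the pair quality as the minimum NFIQ2.0 score of the two samples is a common convention assumed here, not a detail from the abstract.

```python
# Sketch of an Error-versus-Reject Curve (ERC) for quality predictiveness.
import numpy as np

def erc(genuine_scores, pair_quality, threshold, reject_fractions):
    """FNMR at a fixed threshold after rejecting the lowest-quality fraction of pairs."""
    scores = np.asarray(genuine_scores, dtype=float)
    quality = np.asarray(pair_quality, dtype=float)   # e.g. min NFIQ2.0 of each pair
    order = np.argsort(quality)                       # worst quality first
    fnmr = []
    for r in reject_fractions:
        keep = order[int(r * len(order)):]            # drop the worst r-fraction
        fnmr.append(float(np.mean(scores[keep] < threshold)))
    return np.asarray(fnmr)

# If quality scores are predictive, FNMR should fall as the reject fraction grows:
#   erc(scores, quality, threshold=0.5, reject_fractions=np.linspace(0.0, 0.3, 7))
```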
- Conference Paper: BIOSIG 2020 - Complete Volume (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020)
- Conference Paper: Can Generative Colourisation Help Face Recognition? (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Drozdowski, Pawel; Fischer, Daniel; Rathgeb, Christian; Geissler, Julian; Knedlik, Jan; Busch, Christoph. Generative colourisation methods can be applied to automatically convert greyscale images into realistic-looking colour images. In a face recognition system, such techniques might be employed as a pre-processing step in scenarios where either one or both face images to be compared are only available in greyscale format. In an experimental setup which reflects said scenarios, we investigate whether generative colourisation can improve face sample utility and the overall biometric performance of face recognition. To this end, subsets of the FERET and FRGCv2 face image databases are converted to greyscale and colourised using two versions of the DeOldify colourisation algorithm. Face sample quality assessment is done using the FaceQnet quality estimator. Biometric performance measurements are conducted for the widely used ArcFace system with its built-in face detector and reported according to standardised metrics. The obtained results indicate that, for the tested systems, the application of generative colourisation improves neither face image quality nor recognition performance. However, generative colourisation was found to aid face detection and the subsequent feature extraction of the face recognition system, which results in a decrease of the overall false reject rate.
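A hedged sketch of the evaluation harness implied above: measure the false reject rate when probes are kept in colour, reduced to greyscale, or re-colourised. `embed` (face image to feature vector) and `colourise` (greyscale to colour) are placeholders for the actual recognition model and colouriser; neither the metric details nor the threshold choice are taken from the paper.

```python
# Sketch: compare false reject rates for colour, greyscale and colourised probes.
import numpy as np
from PIL import Image

def to_greyscale(img: Image.Image) -> Image.Image:
    """Greyscale conversion, kept 3-channel so downstream models still accept it."""
    return img.convert("L").convert("RGB")

def false_reject_rate(mated_pairs, embed, threshold, probe_transform=None):
    """FRR over mated (reference, probe) image pairs at a fixed similarity threshold."""
    rejects = 0
    for ref_img, probe_img in mated_pairs:
        if probe_transform is not None:
            probe_img = probe_transform(probe_img)
        r, p = embed(ref_img), embed(probe_img)
        sim = float(np.dot(r, p) / (np.linalg.norm(r) * np.linalg.norm(p)))
        rejects += sim < threshold
    return rejects / len(mated_pairs)

# Usage idea (colourise is the external colourisation model, e.g. DeOldify):
#   frr_grey   = false_reject_rate(pairs, embed, t, to_greyscale)
#   frr_colour = false_reject_rate(pairs, embed, t,
#                                  lambda im: colourise(to_greyscale(im)))
```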
- Conference Paper: Compact Models for Periocular Verification Through Knowledge Distillation (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Boutros, Fadi; Damer, Naser; Fang, Meiling; Raja, Kiran; Kirchbuchner, Florian; Kuijper, Arjan. Despite the wide use of deep neural networks for periocular verification, achieving small deep learning models with high performance that can be deployed on devices with low computational power remains a challenge. To keep the computational cost low, we present in this paper a lightweight deep learning model, DenseNet-20, based on the DenseNet architecture with only 1.1 million trainable parameters. Further, we present an approach to enhance the verification performance of DenseNet-20 via knowledge distillation. With experiments on the VISPI dataset, captured with two different smartphones (iPhone and Nokia), we show that introducing knowledge distillation into the DenseNet-20 training phase outperforms the same model trained without knowledge distillation: the Equal Error Rate (EER) is reduced from 8.36% to 4.56% on iPhone data, from 5.33% to 4.64% on Nokia data, and from 20.98% to 15.54% on cross-smartphone data.
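For readers unfamiliar with knowledge distillation, here is a minimal sketch of the standard Hinton-style soft-label objective: the student is trained on a weighted sum of hard-label cross-entropy and a temperature-softened KL divergence against the teacher's outputs. The temperature and weighting values below are illustrative assumptions, not the paper's settings.

```python
# Sketch of a standard knowledge-distillation loss (soft labels from a teacher).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Weighted sum of hard-label cross-entropy and soft-label KL divergence."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean") * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft

# In the training loop the teacher runs frozen, without gradients:
#   with torch.no_grad():
#       teacher_logits = teacher(images)
#   loss = distillation_loss(student(images), teacher_logits, labels)
```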
- Conference Paper: Effects of sample stretching in face recognition (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Hedberg, Mathias Fredrik. Face stretching can occur both intentionally and unintentionally when preparing a face sample for enrollment in a face recognition system. In this paper, we assess what effects horizontal and vertical stretching have on face recognition algorithms. Basic closed-set identification tests revealed that holistic face recognition algorithms performed poorly compared to feature-based recognition algorithms when classifying non-stretched samples against templates based on stretched samples.
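A small sketch of the manipulation studied above and a rank-1 closed-set check against it. `embed` again stands in for an actual recognition model, and the interpolation and evaluation details are assumptions rather than the paper's protocol.

```python
# Sketch: non-uniform stretching of a face sample plus rank-1 identification.
import numpy as np
from PIL import Image

def stretch(img: Image.Image, x_factor: float = 1.0, y_factor: float = 1.0):
    """Resize non-uniformly, e.g. x_factor=1.2 stretches the face horizontally."""
    w, h = img.size
    return img.resize((int(w * x_factor), int(h * y_factor)), Image.BILINEAR)

def rank1_accuracy(gallery_embeddings, gallery_ids, probes, probe_ids, embed):
    """Closed-set identification: nearest gallery template by cosine similarity."""
    G = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
    correct = 0
    for img, true_id in zip(probes, probe_ids):
        p = embed(img)
        p = p / np.linalg.norm(p)
        correct += gallery_ids[int(np.argmax(G @ p))] == true_id
    return correct / len(probes)

# Usage idea: enrol stretched samples, then probe with non-stretched ones:
#   gallery = np.stack([embed(stretch(im, x_factor=1.3)) for im in enrol_images])
#   acc = rank1_accuracy(gallery, enrol_ids, probe_images, probe_ids, embed)
```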