P282 - BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group
Listing by title (entries 1-10 of 32)
- Conference paper: Advanced Face Presentation Attack Detection on Light Field Database (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Chiesa, Valeria; Dugelay, Jean-Luc. In recent years, several works have focused on the impact of new sensors on face recognition. Particular interest has been devoted to technologies able to capture the depth of the scene, such as light field cameras. Alongside person identification algorithms, new anti-spoofing methods customized for specific devices have to be investigated. In this paper, a new algorithm for presentation attack detection on a light field face database is proposed. While the distance between subject and camera is not relevant information for standard 2D spoofing attacks, it can be important when using 3D cameras. We show through three experiments that the proposed method, based on depth map elaboration, outperforms existing presentation attack detection algorithms on light field images. (An illustrative sketch of a depth-based spoof cue follows this listing.)
- Conference paper: A benchmark database of visible and thermal paired face images across multiple variations (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Mallat, Khawla; Dugelay, Jean-Luc. Although visible-spectrum face recognition systems have grown into a major area of research, they still face serious challenges when operating in uncontrolled environments. In an attempt to overcome these limitations, thermal imagery has been investigated as a promising direction for extending face recognition technology. However, the limited number of databases acquired in the thermal spectrum restricts its exploration. In this paper, we introduce a database of face images acquired simultaneously in the visible and thermal spectra under several variations: illumination, expression, pose and occlusion. We then present a comparative study of face recognition performance on both modalities for each variation, as well as the impact of bimodal fusion. We show that the thermal spectrum rivals the visible spectrum not only in the presence of illumination changes, but also under expression and pose changes.
- Conference paper: Benefits of Gaussian Convolution in Gait Recognition (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Marsico, Maria De; Mecca, Alessio. The first and still popular approach to gait recognition applies computer vision techniques to appearance-based features of walking patterns. More recently, wearable sensors have become attractive. The accelerometer is the most widely used, being embedded in widespread mobile devices. Related techniques do not suffer from problems such as occlusion and point of view, but rather from intra-subject variations caused by walking speed, ground type, shoes, etc. Nevertheless, we can often recognize a person from their walking pattern, and this motivates the search for robust features able to sufficiently characterize this trait. This paper presents preliminary experiments using convolution with Gaussian kernels to extract relevant gait elements. The experiments use the large public ZJU-gaitacc dataset and achieve improved results compared with previous works using the same dataset. (An illustrative Gaussian smoothing sketch follows this listing.)
- Conference paper: Biometric Transaction Authentication using Smartphones (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Stokkenes, Martin; Ramachandra, Raghavendra; Busch, Christoph. Secure and robust authentication of users and customers is critical, as an increasing number of services from the banking, health and government sectors are made available as online services. Recent developments in biometrics, e.g. biometric systems in smartphones, have contributed to higher adoption of the technology as a viable authentication factor in modern systems. In this work, we propose an approach for authenticating transactions in an online bank using a combination of Bloom filters and error-correcting codes. First, protected biometric templates are generated, using Bloom filters, from faces detected in images captured with smartphones. Second, a key shared between a smartphone and a bank server is encoded using an error-correcting code. The encoded key is then secured on the smartphone using the protected biometric templates. Authentication of a banking transaction is realised by unlocking the secured key with a protected biometric template that is close to the template used to lock the key. Experiments are performed on a database consisting of images and videos captured using an iPhone 6S. (An illustrative key-binding sketch follows this listing.)
- Conference paper: Deep Domain Adaptation for Face Recognition using images captured from surveillance cameras (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Banerjee, Samik; Bhattacharjee, Avishek; Das, Sukhendu. Learning based on convolutional neural networks (CNNs), or deep learning, has been a major research area with applications in face recognition (FR). However, the performance of algorithms designed for FR is unsatisfactory when surveillance conditions severely degrade the test probes. The work presented in this paper makes three contributions. First, it proposes a novel adaptive-CNN architecture of deep learning refurbished for domain adaptation (DA) to overcome the difference in feature distributions between the gallery and probe samples. The proposed architecture consists of three components: a feature module (FM), an adaptive module (AM) and a classification module (CM). Second, a novel two-stage algorithm for Mutually Exclusive Training (2-MET), based on stochastic gradient descent, is proposed. The final stage of training in 2-MET freezes the layers of the FM and CM while updating (tuning) only the parameters of the AM using a few probe (target) samples. This helps the proposed deep-DA CNN bridge the disparities between the distributions of the gallery and probe samples, resulting in an enhanced domain-invariant representation for efficient deep-DA learning and classification. The third contribution comes from rigorous experiments performed on three benchmark real-world surveillance face datasets with various kinds of degradation. These reveal the superior performance of the proposed adaptive-CNN architecture with 2-MET training over many recent state-of-the-art CNN and DA techniques, in terms of Rank-1 recognition rates and ROC and CMC metrics. (An illustrative layer-freezing sketch follows this listing.)
- Conference paper: Deep Sparse Feature Selection and Fusion for Textured Contact Lens Detection (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Poster, Domenick; Nasrabadi, Nasser; Riggan, Benjamin. Distinguishing between images of irises wearing textured lenses and those wearing transparent lenses or no lenses is a challenging problem due to the subtle and fine-grained visual differences. Our approach builds upon existing hand-crafted image features and neural network architectures by optimally selecting and combining the most useful set of features into a single model. We build multiple parallel sub-networks corresponding to the various feature descriptors and learn the best subset of features through group sparsity. We avoid overfitting such a wide and deep model through a selective transfer learning technique and a novel group Dropout regularization strategy. This model achieves roughly a fourfold improvement in performance over the state of the art on three benchmark textured lens datasets and matches the near-perfect state-of-the-art accuracy on two others. Furthermore, the generic nature of the architecture allows it to be extended to other image features, forms of spoofing attacks, or problem domains. (An illustrative group-sparsity sketch follows this listing.)
- Conference paper: Enhanced low-latency speaker spotting using selective cluster enrichment (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Patino, Jose; Delgado, Héctor; Evans, Nicholas. Low-latency speaker spotting (LLSS) calls for the rapid detection of known speakers within multi-speaker audio streams. While previous work showed the potential to develop efficient LLSS solutions by combining speaker diarization and speaker detection within an online processing framework, it failed to move significantly beyond the traditional definition of diarization. This paper shows that the latter needs rethinking and that a diarization sub-system tailored to the end application, rather than to the minimisation of the diarization error rate, can improve LLSS performance. The proposed selective cluster enrichment algorithm guides the diarization system to better model segments within a multi-speaker audio stream and hence to detect a given target speaker more reliably. The LLSS solution reported in this paper shows that target speakers can be detected with a 16% equal error rate after having been active in multi-speaker audio streams for only 15 seconds.
- Conference paper: Estimating the Data Origin of Fingerprint Samples (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Schuch, Patrick; May, Jan Marek; Busch, Christoph. The data origin (i.e. acquisition technique and acquisition mode) can have a significant impact on the appearance and characteristics of a fingerprint sample. This dataset bias can be challenging for processes such as biometric feature extraction. Much effort can be put into data normalization or into processes able to deal with almost any input data. The performance of the former might suffer from this general applicability; the latter loses information by definition. If the data origin of a fingerprint can be identified reliably, the sample can be dispatched to a specialized process. Six classification methods are evaluated for their ability to distinguish between fifteen different datasets. Acquisition technique and acquisition mode can be classified very accurately, and most of the datasets can be distinguished reliably.
- Conference paper: Facial Attribute Guided Deep Cross-Modal Hashing for Face Image Retrieval (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Taherkhani, Fariborz; Talreja, Veeru; Kazemi, Hadi; Nasrabadi, Nasser. Hashing-based image retrieval approaches have attracted much attention due to their fast query speed and low storage cost. In this paper, we propose an Attribute-based Deep Cross Modal Hashing (ADCMH) network, which takes the facial attribute modality as a query to retrieve relevant face images. The ADCMH network can efficiently generate compact binary codes that preserve the similarity between the two modalities (i.e., facial attributes and images) in the Hamming space. ADCMH is an end-to-end deep cross-modal hashing network that jointly learns similarity-preserving features and compensates for the quantization error caused by hashing the continuous representations of the modalities into binary codes. Experimental results on two standard datasets with facial attribute-image modalities indicate that our ADCMH face image retrieval model outperforms most current attribute-guided face image retrieval approaches, which are based on hand-crafted features. (An illustrative Hamming-retrieval sketch follows this listing.)
- Conference paper: Fake Face Detection Methods: Can They Be Generalized? (BIOSIG 2018 - Proceedings of the 17th International Conference of the Biometrics Special Interest Group, 2018). Khodabakhsh, Ali; Ramachandra, Raghavendra; Raja, Kiran; Wasnik, Pankaj; Busch, Christoph. With advancements in technology, it is now possible to create seamless representations of human faces for fake media, leveraging the large-scale availability of videos. These fake faces can be used to conduct impersonation attacks on targeted subjects. The availability of open-source software and a variety of commercial applications makes it possible to generate fake videos of a particular target subject in a number of ways. In this article, we evaluate the generalizability of fake face detection methods through a series of studies that benchmark detection accuracy. To this end, we have collected a new database of more than 53,000 images, from 150 videos, originating from multiple sources of digitally generated fakes, including Computer Graphics Image (CGI) generation and many tampering-based approaches. In addition, we have included more than 3,200 images from the widely used Swap-Face application commonly available on smartphones. Extensive experiments are carried out using both texture-based handcrafted detection methods and deep-learning-based detection methods to assess their suitability. Through this set of evaluations, we attempt to answer whether current fake face detection methods generalize.
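The light field entry above argues that subject-to-camera depth, which 2D pipelines cannot see, becomes a useful cue against planar spoofs. The fragment below is only an illustrative sketch of such a depth-based cue, not the paper's depth map elaboration method: it assumes a depth map and a face mask are already available as NumPy arrays, and the helper names and threshold value are hypothetical.

```python
import numpy as np

def depth_spread_score(depth_map: np.ndarray, face_mask: np.ndarray) -> float:
    """Robust spread of depth values inside the face region (depth-map units).
    A printed photo or screen replay is nearly planar, so its spread is tiny."""
    face_depths = depth_map[face_mask]
    return float(np.percentile(face_depths, 95) - np.percentile(face_depths, 5))

def is_presentation_attack(depth_map, face_mask, flatness_threshold=0.01):
    # flatness_threshold is a hypothetical value; in practice it would be tuned
    # on a development set of bona fide and attack captures.
    return depth_spread_score(depth_map, face_mask) < flatness_threshold
```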
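The gait entry above extracts gait elements by convolving accelerometer walking signals with Gaussian kernels. Below is a minimal sketch of that smoothing step, assuming the signal is a 1D NumPy array of accelerometer magnitudes; the kernel width is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Discrete Gaussian kernel, normalized to sum to one."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth_gait_signal(signal: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Convolve an accelerometer magnitude signal with a Gaussian kernel,
    attenuating sensor noise while preserving step-level structure."""
    kernel = gaussian_kernel(sigma, radius=int(3 * sigma))
    # mode="same" keeps the output aligned with the input samples.
    return np.convolve(signal, kernel, mode="same")

# Example: smooth a noisy synthetic walking signal (~1.8 steps per second).
t = np.linspace(0, 10, 1000)
raw = np.sin(2 * np.pi * 1.8 * t) + 0.3 * np.random.randn(t.size)
smoothed = smooth_gait_signal(raw, sigma=5.0)
```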
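The transaction-authentication entry above locks a key shared with the bank server under a Bloom-filter-protected face template and an error-correcting code, so that a sufficiently similar probe template unlocks it. The sketch below shows only the generic key-binding idea under strong simplifications: the protected template is reduced to a plain binary vector and a toy repetition code stands in for the (unspecified) error-correcting code, so this is not the authors' construction.

```python
import numpy as np

REP = 5  # repetition factor of the toy error-correcting code

def ecc_encode(key_bits: np.ndarray) -> np.ndarray:
    """Toy repetition code: repeat every key bit REP times."""
    return np.repeat(key_bits, REP)

def ecc_decode(code_bits: np.ndarray) -> np.ndarray:
    """Majority-vote decoding of the repetition code."""
    return (code_bits.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)

def lock_key(key_bits, template_bits):
    """Bind the encoded key to a binary (protected) template via XOR."""
    return np.bitwise_xor(ecc_encode(key_bits), template_bits)

def unlock_key(locked_key, probe_template_bits):
    """Recover the key with a probe template; small template differences
    are absorbed by the error-correcting code."""
    return ecc_decode(np.bitwise_xor(locked_key, probe_template_bits))

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 32, dtype=np.uint8)              # key shared with the server
enrolment = rng.integers(0, 2, 32 * REP, dtype=np.uint8)  # protected enrolment template
probe = enrolment.copy()
probe[::10] ^= 1  # mimic biometric noise: at most one flipped bit per code block
assert np.array_equal(unlock_key(lock_key(key, enrolment), probe), key)
```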
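The domain-adaptation entry above trains its final 2-MET stage by freezing the feature (FM) and classification (CM) modules and updating only the adaptive module (AM) on a few probe samples. Below is a minimal PyTorch sketch of that freezing pattern; the three modules are small placeholders, not the architecture from the paper, and the probe batch is random data.

```python
import torch
import torch.nn as nn

# Stand-in modules: the actual FM/AM/CM layers are defined in the paper.
fm = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
am = nn.Sequential(nn.Linear(16, 16), nn.ReLU())
cm = nn.Linear(16, 10)  # e.g. 10 enrolled identities

# Final 2-MET stage: freeze the feature and classification modules ...
for p in list(fm.parameters()) + list(cm.parameters()):
    p.requires_grad = False

# ... and update only the adaptive module with SGD on a few probe samples.
optimizer = torch.optim.SGD(am.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

probe_images = torch.randn(8, 3, 64, 64)   # placeholder probe batch
probe_labels = torch.randint(0, 10, (8,))  # placeholder labels

logits = cm(am(fm(probe_images)))
loss = criterion(logits, probe_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # only AM parameters change
```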
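The contact-lens entry above selects among parallel per-descriptor sub-networks through group sparsity. The sketch below illustrates one common way to express that idea, a group-lasso style penalty in which each descriptor branch forms one group; the branch shapes, descriptor dimensions and regularization weight are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GroupFusion(nn.Module):
    """Fuse several per-descriptor feature vectors with one linear block per
    descriptor; group sparsity can then switch whole descriptors off."""
    def __init__(self, descriptor_dims, fused_dim=64, num_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(nn.Linear(d, fused_dim) for d in descriptor_dims)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, features):  # features: one tensor per descriptor
        fused = sum(branch(f) for branch, f in zip(self.branches, features))
        return self.classifier(torch.relu(fused))

    def group_sparsity_penalty(self):
        # One L2 norm per descriptor branch, summed: driving a whole norm to
        # zero removes that descriptor from the fusion (group-lasso behaviour).
        return sum(branch.weight.norm(p=2) for branch in self.branches)

model = GroupFusion(descriptor_dims=[59, 256, 128])  # e.g. LBP, BSIF, CNN features
feats = [torch.randn(4, d) for d in (59, 256, 128)]
labels = torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(feats), labels) + 1e-3 * model.group_sparsity_penalty()
loss.backward()
```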
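The cross-modal hashing entry above retrieves face images from an attribute query by comparing compact binary codes in Hamming space. Below is a minimal sketch of that retrieval step, assuming continuous embeddings already produced by some attribute and image encoders; binarizing by sign and ranking by Hamming distance is the generic cross-modal hashing recipe, not the specific ADCMH network.

```python
import numpy as np

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Turn a continuous embedding into a binary hash code (0/1 per bit)."""
    return (embedding > 0).astype(np.uint8)

def hamming_rank(query_code: np.ndarray, gallery_codes: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by Hamming distance to the query."""
    distances = np.count_nonzero(gallery_codes != query_code, axis=1)
    return np.argsort(distances)

rng = np.random.default_rng(1)
attribute_embedding = rng.standard_normal(48)        # from the attribute branch
image_embeddings = rng.standard_normal((1000, 48))   # from the image branch
ranking = hamming_rank(binarize(attribute_embedding), binarize(image_embeddings))
top10 = ranking[:10]  # indices of the ten closest face images in Hamming space
```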