Authors: Pereira, Joao Afonso; Sequeira, Ana F.; Pernes, Diogo; Cardoso, Jaime S.
Editors: Brömme, Arslan; Busch, Christoph; Dantcheva, Antitza; Raja, Kiran; Rathgeb, Christian; Uhl, Andreas
Date accessioned: 2020-09-16
Date available: 2020-09-16
Date issued: 2020
ISBN: 978-3-88579-700-5
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/34325
Language: en
Keywords: Fingerprint presentation attack detection; adversarial learning; transfer learning
Title: A robust fingerprint presentation attack detection method against unseen attacks through adversarial learning
Type: Text/Conference Paper

Abstract: Fingerprint presentation attack detection (PAD) methods achieve impressive performance in the current literature. However, the generalisation of fingerprint PAD remains an open challenge, requiring methods able to cope with sophisticated and unseen attacks as potential intruders become more capable. This work addresses the problem by applying a regularisation technique based on adversarial training and representation learning, specifically designed to improve the model's PAD generalisation capacity to unseen attacks. In the adopted approach, the model jointly learns the representation and the classifier from the data, while explicitly imposing invariance of the high-level representations with respect to the type of attack, yielding a robust PAD. The adversarial training methodology is evaluated in two scenarios: i) a handcrafted feature extraction method combined with a Multilayer Perceptron (MLP); and ii) an end-to-end solution using a Convolutional Neural Network (CNN). The experimental results demonstrate that the adopted regularisation strategies equip the neural networks with increased PAD robustness. The adversarial approach particularly improves the CNN models' capacity to detect attacks in the unseen-attack scenario, yielding markedly lower APCER error rates than state-of-the-art methods under similar conditions.
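
The adversarial regularisation described in the abstract (jointly learning the representation and the classifier while forcing the high-level features to be invariant to the attack type) is commonly realised with a gradient-reversal layer. The sketch below illustrates that idea in PyTorch; it is a minimal illustration under assumed details, not the paper's implementation: the network sizes, the attack-type label scheme, the loss weight `lambd`, and the training hyperparameters are all placeholders.

```python
# Minimal sketch of adversarial representation learning for PAD via
# gradient reversal. All layer sizes, label schemes, and hyperparameters
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class PADModel(nn.Module):
    def __init__(self, in_dim, feat_dim=128, n_attack_types=3, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Encoder: stands in for either the handcrafted-feature MLP
        # or a CNN backbone from the two evaluated scenarios.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        # Main PAD head: bona fide vs. presentation attack.
        self.pad_head = nn.Linear(feat_dim, 2)
        # Adversarial head: tries to predict the attack type; the reversed
        # gradient pushes the encoder to make that prediction impossible.
        self.adv_head = nn.Linear(feat_dim, n_attack_types)

    def forward(self, x):
        z = self.encoder(x)
        pad_logits = self.pad_head(z)
        adv_logits = self.adv_head(GradReverse.apply(z, self.lambd))
        return pad_logits, adv_logits

# One training step: the encoder minimises the PAD loss while, through the
# reversed gradient, maximising the attack-type loss (invariance pressure).
model = PADModel(in_dim=512)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 512)            # batch of feature vectors (dummy data)
y_pad = torch.randint(0, 2, (32,))  # bona fide / attack labels (dummy)
y_type = torch.randint(0, 3, (32,)) # attack-type labels (dummy); in practice
                                    # bona fide samples would be masked out
                                    # of the adversarial loss

pad_logits, adv_logits = model(x)
loss = ce(pad_logits, y_pad) + ce(adv_logits, y_type)
opt.zero_grad()
loss.backward()
opt.step()
```

In this formulation the encoder receives the negated gradient of the attack-type loss, so training improves the main PAD objective while degrading the adversary's ability to identify the attack type: the explicit invariance of the high-level representations that the abstract describes.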