Authors: Ferreira, Pedro M.; Sequeira, Ana F.; Pernes, Diogo; Rebelo, Ana; Cardoso, Jaime S.
Editors: Brömme, Arslan; Busch, Christoph; Dantcheva, Antitza; Rathgeb, Christian; Uhl, Andreas
Date available: 2020-09-15
Date issued: 2019
ISBN: 978-3-88579-690-9
URI: https://dl.gi.de/handle/20.500.12116/34238

Abstract: Despite the high performance of current presentation attack detection (PAD) methods, robustness to unseen attacks remains an under-addressed challenge. This work approaches the problem by enforcing the learning of bona fide presentations while making the model less dependent on the presentation attack instrument species (PAIS). The proposed model comprises an encoder, mapping input features to latent representations, and two classifiers operating on these underlying representations: (i) the task classifier, which predicts the class label (bona fide or attack); and (ii) the species classifier, which predicts the PAIS. During training, the encoder learns to help the task classifier while trying to fool the species classifier. In addition, a training objective enforcing the similarity of the latent distributions of different species is added, leading to a PAI-species-independent model. The experimental results demonstrated that the proposed regularisation strategies equipped the neural network with increased PAD robustness. The adversarial model obtained better loss and accuracy, as well as improved error rates, in the detection of attack and bona fide presentations.

Language: en
Keywords: iris presentation attack detection; open-set; adversarial learning; transfer learning
Title: Adversarial learning for a robust iris presentation attack detection method against unseen attack presentations
Type: Text/Conference Paper
ISSN: 1617-5468
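The abstract describes an encoder trained adversarially (aiding the task classifier, fooling the species classifier) plus a term pulling the latent distributions of different PAI species together. The sketch below illustrates how such a combined objective could be composed; the function name, the weighting coefficients `lam` and `mu`, and the use of a simple mean-matching penalty as the distribution-similarity term are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pad_objective(task_loss, species_loss, latents_by_species, lam=0.1, mu=0.05):
    """Hypothetical combined objective for the encoder.

    task_loss          -- scalar loss of the bona-fide/attack classifier
    species_loss       -- scalar loss of the PAI-species classifier
    latents_by_species -- list of (n_i, d) arrays of latent codes, one per species
    """
    # Adversarial term: the encoder minimises the task loss while
    # *maximising* the species-classifier loss (the gradient-reversal idea).
    adversarial = task_loss - lam * species_loss

    # Similarity term: penalise divergence between the per-species latent
    # means, a crude moment-matching proxy for distribution alignment.
    means = [z.mean(axis=0) for z in latents_by_species]
    similarity = sum(
        np.sum((means[i] - means[j]) ** 2)
        for i in range(len(means))
        for j in range(i + 1, len(means))
    )
    return adversarial + mu * similarity

# Toy usage: two species whose latent means already coincide,
# so only the adversarial term contributes: 1.0 - 0.1 * 0.5 = 0.95.
total = pad_objective(1.0, 0.5, [np.zeros((4, 2)), np.zeros((4, 2))])
```

In a real implementation the adversarial term is typically realised with a gradient-reversal layer rather than an explicit subtraction, so that the species classifier itself is still trained to minimise its own loss.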