Conference Paper
A robust fingerprint presentation attack detection method against unseen attacks through adversarial learning
Document type
Text/Conference Paper
Additional information
Date
2020
Publisher
Gesellschaft für Informatik e.V.
Abstract
Fingerprint presentation attack detection (PAD) methods report impressive performance in the current literature. However, fingerprint PAD generalisation remains an open challenge, requiring methods that can cope with sophisticated and unseen attacks as potential intruders become more capable. This work addresses the problem by applying a regularisation technique based on adversarial training and representation learning, specifically designed to improve the model's PAD generalisation capacity to an unseen attack. In the adopted approach, the model jointly learns the representation and the classifier from the data, while explicitly imposing invariance of the high-level representations with respect to the type of attack, yielding a robust PAD. The adversarial training methodology is evaluated in two different scenarios: i) a handcrafted feature extraction method combined with a Multilayer Perceptron (MLP); and ii) an end-to-end solution using a Convolutional Neural Network (CNN). The experimental results demonstrate that the adopted regularisation strategies equip the neural networks with increased PAD robustness. The adversarial approach particularly improved the CNN models' capacity for attack detection in the unseen-attack scenario, showing markedly better APCER error rates than state-of-the-art methods under similar conditions.
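To illustrate the kind of adversarial regularisation the abstract describes, the minimal PyTorch sketch below uses a domain-adversarial setup: a shared feature extractor feeds a PAD classifier and an attack-type branch whose gradients are reversed, so the learned representation is pushed to be invariant to the attack type. The network layout, layer sizes, and the weighting factor `lam` are illustrative assumptions, not the authors' exact architecture or training recipe.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialPAD(nn.Module):
    """Shared features with a PAD head and an adversarial attack-type head (sketch)."""
    def __init__(self, n_attack_types, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(          # small placeholder CNN backbone
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.pad_head = nn.Linear(32, 2)                   # bona fide vs. attack
        self.attack_head = nn.Linear(32, n_attack_types)   # adversarial branch

    def forward(self, x):
        z = self.features(x)
        pad_logits = self.pad_head(z)
        # Reversed gradients make the representation uninformative about attack type.
        attack_logits = self.attack_head(GradReverse.apply(z, self.lam))
        return pad_logits, attack_logits

# One joint training step: minimise the PAD loss while the reversed branch
# penalises features that still separate the known attack types.
model = AdversarialPAD(n_attack_types=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)        # dummy fingerprint patches (hypothetical size)
y_pad = torch.randint(0, 2, (8,))    # bona fide / attack labels
y_type = torch.randint(0, 3, (8,))   # attack-species labels

pad_logits, attack_logits = model(x)
loss = criterion(pad_logits, y_pad) + criterion(attack_logits, y_type)
opt.zero_grad()
loss.backward()
opt.step()
```

The same adversarial branch could equally be attached to an MLP operating on handcrafted features, mirroring the paper's first evaluation scenario.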