Conference Paper
Compact Models for Periocular Verification Through Knowledge Distillation
Document Type
Text/Conference Paper
Additional Information
Date
2020
Publisher
Gesellschaft für Informatik e.V.
Abstract
Despite the wide use of deep neural networks for periocular verification, building small deep learning models with high performance that can be deployed on devices with low computational power remains a challenge. To address the computational cost, we present in this paper a lightweight deep learning model, DenseNet-20, based on the DenseNet architecture with only 1.1 million trainable parameters. Further, we present an approach to enhance the verification performance of DenseNet-20 via knowledge distillation. With experiments on the VISPI dataset, captured with two different smartphones (iPhone and Nokia), we show that introducing knowledge distillation into the DenseNet-20 training phase outperforms the same model trained without knowledge distillation: the Equal Error Rate (EER) is reduced from 8.36% to 4.56% on iPhone data, from 5.33% to 4.64% on Nokia data, and from 20.98% to 15.54% on cross-smartphone data.
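To make the distillation setup described in the abstract concrete, the following is a minimal PyTorch sketch of a standard Hinton-style knowledge distillation objective, where a compact student learns from both ground-truth labels and the soft outputs of a frozen teacher. The toy teacher/student architectures, temperature, and loss weighting here are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 as in
    # Hinton et al. (2015). Temperature and alpha are assumed values.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard-target term: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Toy stand-ins for the two networks; in the paper's setting these
# would be a larger pretrained teacher and the compact DenseNet-20.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                        nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

images = torch.randn(8, 3, 32, 32)    # dummy batch
labels = torch.randint(0, 10, (8,))   # dummy class labels

teacher.eval()
with torch.no_grad():                 # the teacher only supplies soft targets
    teacher_logits = teacher(images)

student_logits = student(images)
loss = distillation_loss(student_logits, teacher_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Only the student's parameters are updated; the teacher's knowledge enters the student solely through the softened logits in the loss, which is what allows the small model to approach the larger model's verification performance.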