Browsing by author "Araki, Toshinori"
1 - 2 of 2
- Conference paper: New security definitions for biometric authentication with template protection: toward covering more threats against authentication systems (BIOSIG 2013, 2013). Isshiki, Toshiyuki; Araki, Toshinori; Mori, Kengo; Obana, Satoshi; Ohki, Tetsushi; Sakamoto, Shizuo.
  Existing studies on the security of biometric authentication with template protection have considered only adversaries who obtain protected templates. Since biometric authentication systems transmit data other than the protected templates, we need to consider how to secure such systems against adversaries who obtain that data as well. In this paper, we classify adversaries in biometric authentication with template protection into three types according to their knowledge: (1) protected template data, (2) data transmitted during authentication, and (3) both types of data. We also propose a new security metric, unforgeability, which provides authentication security against adversaries impersonating someone else even when they cannot obtain the biometric information of a claimant. We then give security definitions against each type of adversary in our classification. Finally, we propose a biometric authentication scheme with template protection that is irreversible against all types of adversaries.
- Conference paper: On Brightness Agnostic Adversarial Examples Against Face Recognition Systems (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021). Singh, Inderjeet; Momiyama, Satoru; Kakizaki, Kazuya; Araki, Toshinori.
  This paper introduces a novel adversarial example generation method against face recognition systems (FRSs). An adversarial example (AX) is an image with deliberately crafted noise that causes incorrect predictions by a target system. The AXs generated by our method remain robust under real-world brightness changes. Our method performs non-linear brightness transformations while leveraging the concept of curriculum learning during the attack generation procedure. Through comprehensive experimental investigations in the digital and physical world, we demonstrate that our method outperforms conventional techniques. Furthermore, this method enables practical risk assessment of FRSs against brightness agnostic AXs.