Authors: Khodabakhsh, Ali; Loiselle, Hugo
Editors: Brömme, Arslan; Busch, Christoph; Dantcheva, Antitza; Raja, Kiran; Rathgeb, Christian; Uhl, Andreas
Date: 2020-09-16
ISBN: 978-3-88579-700-5
URL: https://dl.gi.de/handle/20.500.12116/34322

Abstract: There is a long history of exploiting the visual similarity of look-alikes for fraud and deception. This similarity, along with the application of physical and digital cosmetics, greatly challenges the recognition ability of average humans. Face recognition systems are no exception in this regard and are vulnerable to such similarities. In contrast to physiological face recognition, behavioral face recognition is often overlooked due to the outstanding success of the former. However, a person's behavior can provide an additional source of discriminative information about identity when physiological attributes are not reliable. In this study, we propose a novel biometric recognition system based solely on facial behavior for differentiating look-alikes under unconstrained recording conditions. To this end, we organized a dataset of 85,656 utterances from 1,000 look-alike pairs based on videos collected in the wild, large enough for the development of deep learning solutions. Our selection criteria ensure that, for the collected videos, both state-of-the-art biometric systems and human judges fail at recognition. Furthermore, to exploit the advantage of large-scale data, we introduce a novel action-independent biometric recognition system trained with a triplet loss to create generalized behavioral identity embeddings. We achieve a look-alike recognition equal error rate of 7.93% relying solely on behavior descriptors extracted from facial landmark movements.
The proposed method can have applications in face recognition as well as presentation attack detection and Deepfake detection.

Language: en
Keywords: Behavioral Biometrics; Face Recognition; Look-alike face; Facial Motion; Triplet Loss
Title: Action-Independent Generalized Behavioral Identity Descriptors for Look-alike Recognition in Videos
Type: Text/Conference Paper
ISSN: 1617-5468
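The abstract mentions training with a triplet loss to obtain generalized behavioral identity embeddings. As a rough illustration (not the authors' actual architecture or data), the sketch below shows the standard triplet-loss objective on toy embedding vectors: an anchor is pulled toward a positive sample of the same identity and pushed away from a negative sample (e.g. the look-alike) by at least a margin. The function name, margin value, and toy vectors are all hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors:
    max(||a - p||^2 - ||a - n||^2 + margin, 0).
    Zero when the positive is already closer than the negative by `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0)

# Toy unit-normalized 4-D "behavioral" embeddings (illustrative only).
a = np.array([1.0, 0.0, 0.0, 0.0])                    # anchor identity
p = np.array([0.9, 0.1, 0.0, 0.0]); p /= np.linalg.norm(p)  # same identity
n = np.array([0.0, 1.0, 0.0, 0.0])                    # look-alike identity

loss = triplet_loss(a, p, n)  # positive is much closer, so the loss is 0
```

In the paper's setting, the anchor and positive would be behavior descriptors extracted from facial landmark movements of the same person, while the negative would come from the paired look-alike, so that the learned embedding separates identities that physiological face recognition confuses.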