Authors: Khodabakhsh, Ali; Ramachandra, Raghavendra; Raja, Kiran; Wasnik, Pankaj; Busch, Christoph
Editors: Brömme, Arslan; Busch, Christoph; Dantcheva, Antitza; Rathgeb, Christian; Uhl, Andreas
Title: Fake Face Detection Methods: Can They Be Generalized?
Type: Text/Conference Paper
Date issued: 2018
Date available: 2019-06-17
ISBN: 978-3-88579-676-4
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/23809
Language: en
Keywords: Fake Face; Presentation Attack Detection; Dataset; Generalization; Transfer Learning

Abstract: With advancements in technology, it is now possible to create seamless representations of human faces for fake media, leveraging the large-scale availability of videos. These fake faces can be used to conduct impersonation attacks on targeted subjects. The availability of open-source software and a variety of commercial applications makes it possible to generate fake videos of a particular target subject in a number of ways. In this article, we evaluate the generalizability of fake face detection methods through a series of studies benchmarking detection accuracy. To this end, we have collected a new database of more than 53,000 images, from 150 videos, originating from multiple sources of digitally generated fakes, including Computer Graphics Image (CGI) generation and several tampering-based approaches. In addition, we have included more than 3,200 images from the widely used Swap-Face application that is commonly available on smartphones. Extensive experiments are carried out using both texture-based handcrafted detection methods and deep-learning-based detection methods to assess their suitability. Through this set of evaluations, we attempt to answer whether current fake face detection methods can be generalized.