Authors: Dong, Xingbo; Jin, Zhe; Guo, Zhenhua; Teoh, Andrew Beng Jin
Editors: Brömme, Arslan; Busch, Christoph; Damer, Naser; Dantcheva, Antitza; Gomez-Barrero, Marta; Raja, Kiran; Rathgeb, Christian; Sequeira, Ana; Uhl, Andreas
Date available: 2021-10-04
Date issued: 2021
ISBN: 978-3-88579-709-8
URI: https://dl.gi.de/handle/20.500.12116/37474
Abstract: Face recognition based on deep convolutional neural networks (CNNs) has achieved superior accuracy. Despite the high discriminability of the deep features generated by CNNs, their vulnerability is often overlooked, which raises security and privacy concerns, in particular the risk of reconstructing face images from deep templates. In this paper, we propose a method to generate high-definition (HD) face images from deep features. Specifically, the deep features extracted by a CNN are mapped to the input (latent vector) of a pre-trained StyleGAN2 using a regression model. HD face images can then be generated from the latent vector by the pre-trained StyleGAN2 model. To evaluate our method, we derived face features from the generated HD face images and compared them against the bona fide face features. As a face image reconstruction method, our approach is simple, yet the experimental results demonstrate its effectiveness: it achieves an attack performance as high as TAR = 46.08% (18.30%) @ FAR = 0.1 threshold under the type-I (type-II) attack setting. Moreover, the experimental results also indicate that 50.7% of the generated HD face images can pass one commercial off-the-shelf (COTS) liveness detection system.
Language: en
Keywords: Face template security; Face image reconstruction; Features to face images
Title: Towards Generating High Definition Face Images from Deep Templates
Type: Text/Conference Paper
ISSN: 1617-5468
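The core of the pipeline described in the abstract is a learned regression from the CNN feature space to the generator's latent space, after which a pre-trained StyleGAN2 synthesizes the image. A minimal sketch of such a feature-to-latent mapping using ridge-regularized least squares in NumPy is shown below; the dimensions, regularization strength, and variable names are illustrative assumptions, not the paper's actual model, and the training pairs here are random stand-ins for real (feature, latent) data.

```python
import numpy as np

# Illustrative dimensions (assumptions): 512-d CNN features mapped to
# 512-d StyleGAN2 latent vectors, trained on N_TRAIN paired samples.
FEAT_DIM, LATENT_DIM, N_TRAIN = 512, 512, 1000

rng = np.random.default_rng(0)
# Stand-ins for real training data: deep templates and matching latents.
features = rng.standard_normal((N_TRAIN, FEAT_DIM))
latents = rng.standard_normal((N_TRAIN, LATENT_DIM))

# Ridge-regularized linear regression W: feature -> latent.
# Solves (X^T X + lam*I) W = X^T Y for the weight matrix W.
lam = 1e-2
gram = features.T @ features + lam * np.eye(FEAT_DIM)
W = np.linalg.solve(gram, features.T @ latents)

# Map an unseen (e.g. leaked) template to a latent code; in the full
# pipeline this vector would be fed to a pre-trained StyleGAN2 generator.
probe = rng.standard_normal(FEAT_DIM)
latent_pred = probe @ W
assert latent_pred.shape == (LATENT_DIM,)
```

In practice the paper's regression model need not be linear; the sketch only illustrates the idea of fitting a mapping on paired (feature, latent) data and applying it to a compromised template.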