Title: From attributes to faces: a conditional generative network for face generation
Authors: Wang, Yaohui; Dantcheva, Antitza; Bremond, Francois
Editors: Brömme, Arslan; Busch, Christoph; Dantcheva, Antitza; Rathgeb, Christian; Uhl, Andreas
Date issued: 2018
Date available: 2019-06-17
Type: Text/Conference Paper
Language: en
Keywords: Attributes; Soft Biometrics; Generative Adversarial Networks
ISBN: 978-3-88579-676-4
ISSN: 1617-5469
URI: https://dl.gi.de/handle/20.500.12116/23800

Abstract: Recent advances in computer vision have aimed at extracting and classifying auxiliary biometric information such as age, gender, and health attributes, referred to as soft biometrics or attributes. Here we explore the inverse problem, namely face generation based on attribute labels, which is of interest due to related applications in law enforcement and entertainment. Specifically, we propose a method based on a deep conditional generative adversarial network (DCGAN), which introduces additional data (e.g., labels) to determine specific representations of the generated images. We present experimental results of the method, trained on the CelebA dataset, and validate them using two GAN quality metrics, three face detectors, and one commercial off-the-shelf (COTS) attribute classifier. While these are early results, our findings indicate the method's ability to generate realistic faces from attribute labels.
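The abstract's core idea, conditioning a DCGAN-style generator on attribute labels so the labels determine the representation of the generated face, can be illustrated with a minimal sketch. The paper does not specify its architecture here, so the layer sizes, the 40-dimensional CelebA attribute vector, and the class name ConditionalGenerator below are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Sketch of a label-conditioned DCGAN generator: noise + attributes -> 64x64 RGB face.

    Hypothetical layer widths; the paper's exact architecture is not given in the abstract.
    """
    def __init__(self, noise_dim=100, num_attrs=40, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # Project the concatenated (noise, label) vector to a 4x4 feature map.
            nn.ConvTranspose2d(noise_dim + num_attrs, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 4x4 -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 8x8 -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # 32x32 -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, labels):
        # Conditioning: concatenate the attribute vector to the noise vector,
        # then reshape to a 1x1 spatial map for the transposed convolutions.
        x = torch.cat([z, labels], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

# Usage: generate one face conditioned on a binary attribute vector.
g = ConditionalGenerator()
z = torch.randn(1, 100)
attrs = torch.zeros(1, 40)
attrs[0, 20] = 1.0          # e.g., switch on one attribute (index is illustrative)
img = g(z, attrs)           # tensor of shape (1, 3, 64, 64)
```

In this kind of conditioning, the discriminator would likewise receive the attribute labels alongside the real or generated image, so that the adversarial game is played per attribute configuration rather than over the unconditional face distribution.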