Listing by keyword "face image quality"
- Conference paper: Can Generative Colourisation Help Face Recognition? (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Drozdowski, Pawel; Fischer, Daniel; Rathgeb, Christian; Geissler, Julian; Knedlik, Jan; Busch, Christoph
  Generative colourisation methods can automatically convert greyscale images into realistic-looking colour images. In a face recognition system, such techniques might be employed as a pre-processing step in scenarios where one or both of the face images to be compared are available only in greyscale. In an experimental setup reflecting these scenarios, we investigate whether generative colourisation can improve face sample utility and the overall biometric performance of face recognition. To this end, subsets of the FERET and FRGCv2 face image databases are converted to greyscale and colourised using two versions of the DeOldify colourisation algorithm. Face sample quality is assessed with the FaceQnet quality estimator. Biometric performance measurements are conducted for the widely used ArcFace system with its built-in face detector and reported according to standardised metrics. The obtained results indicate that, for the tested systems, generative colourisation improves neither face image quality nor recognition performance. However, generative colourisation was found to aid face detection and the subsequent feature extraction of the face recognition system, resulting in a decrease of the overall false reject rate.
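The pre-processing pipeline described in this abstract could be sketched roughly as follows. This is a minimal illustration only: the colouriser is a hypothetical stand-in for a generative model such as DeOldify, and the pixel-level cosine similarity is a toy proxy for the embedding comparison a real system like ArcFace would perform.

```python
import numpy as np

def to_greyscale(img):
    # Luminance conversion using ITU-R BT.601 weights.
    return img @ np.array([0.299, 0.587, 0.114])

def colourise_stub(grey):
    # Hypothetical placeholder for a generative colouriser
    # (e.g. DeOldify); here it simply replicates the grey channel.
    return np.stack([grey] * 3, axis=-1)

def cosine_similarity(a, b):
    # Toy comparison score between two images (flattened).
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Simulate the experimental setup on a small random "image".
rng = np.random.default_rng(0)
colour_img = rng.random((8, 8, 3))   # original colour sample
grey = to_greyscale(colour_img)      # degrade to greyscale
recoloured = colourise_stub(grey)    # re-colourise before comparison
score = cosine_similarity(colour_img, recoloured)
```

In the paper's actual setup, the comparison score would come from a face recognition model's embeddings rather than raw pixels, and quality would be estimated separately with FaceQnet.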
- Conference paper: The relative contributions of facial parts qualities to the face image utility (BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, 2021) Fu, Biying; Chen, Cong; Henniger, Olaf; Damer, Naser
  Face image quality assessment predicts the utility of a face image for automated face recognition. A high-quality face image can achieve good performance in identification or verification tasks. Some recent face image quality assessment algorithms are built on deep-learning-based approaches, which rely on face embeddings of aligned face images. Such face embeddings fuse complex information into a single feature vector and are therefore challenging to disentangle. The semantic context, however, can provide better interpretable insights into neural-network decisions. We investigate the effects of face subregions (semantic contexts) and link the general image quality of face subregions to face image utility. The evaluation is performed on two challenging large-scale datasets (LFW and VGGFace2) with three face recognition solutions (FaceNet, SphereFace, and ArcFace). In total, we applied four face image quality assessment methods and one general image quality assessment method to four face subregions (eyes, mouth, nose, and the tightly cropped face region) as well as the aligned faces. In addition, the effect of fusing different face subregions was investigated to increase the robustness of the outcomes.
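The subregion analysis in this abstract could be sketched along these lines. Everything here is illustrative: the subregion coordinates are hypothetical boxes for an aligned face crop, and the gradient-based sharpness measure is a generic stand-in for the quality assessment methods the paper actually evaluates.

```python
import numpy as np

# Hypothetical subregion boxes for a 112x112 aligned face,
# given as (row_start, row_end, col_start, col_end).
# These coordinates are illustrative, not taken from the paper.
SUBREGIONS = {
    "eyes":  (30, 55, 15, 97),
    "nose":  (45, 80, 35, 77),
    "mouth": (75, 100, 30, 82),
    "face":  (10, 102, 10, 102),  # tightly cropped face region
}

def sharpness(patch):
    # Simple general image quality proxy: mean gradient magnitude.
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def subregion_qualities(face):
    # Score each semantic subregion of the aligned face independently.
    return {name: sharpness(face[r0:r1, c0:c1])
            for name, (r0, r1, c0, c1) in SUBREGIONS.items()}

def fuse(qualities):
    # Score-level fusion of subregion qualities by averaging.
    return float(np.mean(list(qualities.values())))

# Run on a random stand-in for a greyscale aligned face.
rng = np.random.default_rng(0)
face = rng.random((112, 112))
qualities = subregion_qualities(face)
fused = fuse(qualities)
```

Relating such per-subregion scores (and their fusion) to verification performance of systems like FaceNet, SphereFace, or ArcFace is the kind of utility analysis the abstract describes.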