Listing by author "Hahmann, Ferdinand"
- Conference paper: Combination of facial landmarks for robust eye localization using the discriminative generalized Hough transform (BIOSIG 2013, 2013). Hahmann, Ferdinand; Böer, Gordon; Schramm, Hauke.
  The Discriminative Generalized Hough Transform (DGHT) is a general and robust automated object localization method that has been shown to achieve state-of-the-art success rates in application areas such as medical image analysis and person localization. In this contribution the framework is enhanced with a novel facial landmark combination technique, which is introduced theoretically and evaluated on an eye localization task using a public database. The technique applies individually trained DGHT models to localize different facial landmarks, combines the resulting Hough spaces into a 3D feature matrix, and applies a specifically trained higher-level DGHT model for the final localization based on these features (see the first sketch after this list). In addition, the framework is improved by a task-specific multi-level approach that adjusts the zooming-in strategy with respect to relevant structures and confusable objects. With the new system the iris localization rate increased from 96.6% to 97.9% on 3830 evaluation images. This result is promising, since the head-pose variation in the database is quite large and the applied error measure considers the worse of the left- and right-eye localization attempts.
- Conference paper: Model interpolation for eye localization using the discriminative generalized Hough transform (BIOSIG 2012, 2012). Hahmann, Ferdinand; Ruppertshofen, Heike; Böer, Gordon; Schramm, Hauke.
  The Discriminative Generalized Hough Transform (DGHT) is a general method for localizing arbitrary objects with a well-defined shape and has been successfully applied in medical image processing. In this contribution the framework is used for eye localization on the public PUT face database. The DGHT combines the Generalized Hough Transform (GHT) with a discriminative training procedure that generates GHT shape models with individual positive and negative model point weights. Based on a set of training images with annotated target points, the individual votes of the model points in the Hough space are combined in a maximum-entropy probability distribution, and the free parameters are optimized with respect to the training error rate. The estimated model-point-specific weights reflect the model structures that are important for distinguishing the target object from other confusable image parts. The point weights also allow irrelevant parts of the model to be identified and eliminated, making room for new model point candidates drawn from training images with high localization error. This iterative training procedure of weight estimation, point elimination, testing on training images, and incorporation of new model point candidates is repeated until a stopping criterion is reached. Furthermore, the DGHT framework incorporates a multi-level approach in which the search region is reduced in six zooming steps, using individually trained shape models at each level. To further improve robustness, the DGHT framework is, for the first time, extended by a linear interpolation of the trained left- and right-eye models (see the second sketch after this list). An evaluation on the PUT face database showed a success rate of 99% for iris detection in frontal-view images and 97% when the test set contains large head-pose variability.
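
The landmark-combination idea from the 2013 paper can be illustrated with a small sketch: per-landmark Hough accumulators are stacked into a 3D feature matrix and fused into a single vote map. The function name `combine_hough_spaces`, the accumulator size, and the weights below are hypothetical; the published system applies a trained higher-level DGHT model to the feature matrix rather than the fixed weighted sum used here.

```python
# Minimal sketch (not the authors' implementation) of combining per-landmark
# Hough spaces into a 3D feature matrix and fusing them into one vote map.
import numpy as np

def combine_hough_spaces(hough_spaces, landmark_weights):
    """Stack 2D Hough accumulators (one per facial landmark) into a 3D feature
    matrix and fuse them with per-landmark weights into a single vote map."""
    feature_matrix = np.stack(hough_spaces, axis=-1)          # shape (H, W, L)
    combined = feature_matrix @ np.asarray(landmark_weights)  # weighted sum -> (H, W)
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return (x, y), combined

# Hypothetical toy data: three 64x64 accumulators from landmark-specific models.
rng = np.random.default_rng(0)
spaces = [rng.random((64, 64)) for _ in range(3)]
(best_x, best_y), _ = combine_hough_spaces(spaces, landmark_weights=[0.5, 0.3, 0.2])
print("estimated eye position:", best_x, best_y)
```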
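
The 2012 abstract describes three mechanisms that a second sketch can illustrate under simplifying assumptions: GHT voting with individually weighted model points, a maximum-entropy style normalization of the Hough space (approximated here by a softmax), and a linear interpolation of the trained left- and right-eye models (assumed here to have a point-for-point correspondence). All names, offsets, and weights below are invented toy values, not data from the papers.

```python
# Minimal sketch (under simplifying assumptions, not the published code) of
# weighted GHT voting, a softmax over the Hough space, and linear interpolation
# of two eye models. A model point is an (dx, dy, weight) offset from the target.
import numpy as np

def weighted_ght_vote(feature_points, model_points, shape):
    """Each (dx, dy, w) model point casts a vote of weight w for every detected
    feature point, accumulating evidence for the target position."""
    hough = np.zeros(shape)
    for fx, fy in feature_points:
        for dx, dy, w in model_points:
            x, y = fx - dx, fy - dy
            if 0 <= x < shape[1] and 0 <= y < shape[0]:
                hough[y, x] += w               # weights may be positive or negative
    return hough

def hough_to_probability(hough):
    """Softmax over all Hough cells, standing in for the trained
    maximum-entropy probability distribution."""
    e = np.exp(hough - hough.max())
    return e / e.sum()

def interpolate_models(left_model, right_model, alpha=0.5):
    """Linear interpolation of two models, assuming identical point ordering
    (a hypothetical simplification of the paper's model interpolation)."""
    return [(int(round((1 - alpha) * lx + alpha * rx)),
             int(round((1 - alpha) * ly + alpha * ry)),
             (1 - alpha) * lw + alpha * rw)
            for (lx, ly, lw), (rx, ry, rw) in zip(left_model, right_model)]

# Hypothetical toy data: a few edge points and two tiny eye models.
features = [(10, 12), (14, 12), (12, 10)]
left_eye = [(-2, 0, 1.0), (2, 0, 1.0), (0, -2, 0.5)]
right_eye = [(-2, 0, 0.8), (2, 0, 1.2), (0, -2, 0.5)]
model = interpolate_models(left_eye, right_eye, alpha=0.5)
prob = hough_to_probability(weighted_ght_vote(features, model, shape=(24, 24)))
print("most probable target cell:", np.unravel_index(np.argmax(prob), prob.shape))
```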