
Störfaktorbezogenes Grenzwertverhalten biometrischer Gesichtserkennung (Disturbance-related threshold behavior of biometric face recognition)

Author: Daum, Henning
Referees: Encarnação, José L.; Malsburg, Christoph von der

München: Verlag Dr. Hut, 2007, VIII, 239 pp.
Also published as: Darmstadt, TU, Diss., 2006
ISBN: 3-89963-487-X
ISBN: 978-3-89963-487-7
Fraunhofer IGD
Keywords: security; biometric; face recognition; testing; evaluation

Biometric recognition has grown into a viable alternative authentication method in recent years. Owing to its intuitive use while maintaining or even enhancing security, it has been integrated into many applications such as passports and other identification documents. Face recognition (FR) has drawn special attention, as it is also used by humans and is therefore already established in many areas. Several tests and evaluations have examined the recognition performance of FR algorithms and have demonstrated high identification and verification rates for good frontal images as used in controlled environments. The next challenge is uncontrolled, disturbed images of degraded quality. The common performance metrics used for evaluations on undisturbed images are not well suited to comparing algorithm performance on such challenging data. This work therefore addresses the testing and evaluation of biometric face recognition algorithms with respect to disturbed images. First, methods for simulating certain disturbances, such as reduced contrast, resolution, and sharpness, are developed by means of computer graphics manipulation. Afterwards, a framework for conducting large-scale evaluations is specified and developed. The most important step is the analysis of the collected recognition results. Common metrics analyze the extraction of the characteristic facial features (enrollment) and the comparison of those features separately, which is difficult for evaluations using disturbed images: a disturbance can influence both steps, as an image may not be enrolled at all because feature extraction fails, or fewer features may be extracted and lead to a degraded comparison result. To be able to compare algorithms with such different behavior, an integrated metric reflecting both possibilities is developed in this work.
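The abstract mentions simulating disturbances such as contrast, resolution, and sharpness by image manipulation, without giving the thesis's concrete procedures. As a hedged sketch of the general idea (function names and the simple box-blur stand-in for sharpness loss are illustrative choices, not taken from the work), three such disturbances can be applied to a grayscale image held as a list of pixel rows:

```python
def reduce_contrast(img, factor, mid=128):
    # Scale each pixel's deviation from mid-grey; factor < 1 lowers contrast.
    return [[int(mid + (p - mid) * factor) for p in row] for row in img]

def downsample(img, step):
    # Keep every step-th pixel in both axes to simulate lower resolution.
    return [row[::step] for row in img[::step]]

def box_blur(img):
    # 3x3 box blur with edge clamping, a crude stand-in for loss of sharpness.
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            row.append(sum(vals) // len(vals))
        out.append(row)
    return out
```

Applying such transforms with a controlled parameter (here `factor` or `step`) is what makes large-scale evaluation possible: the same source image can be degraded to a known, reproducible degree before being fed to each algorithm.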
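The integrated metric developed in the thesis is not spelled out in the abstract. A standard convention from biometric performance testing that combines the two failure modes described above (failure to enroll and false rejection) in one number is the generalized false reject rate; the following minimal sketch assumes that convention and is not the thesis's own formula:

```python
def generalized_frr(n_attempts, n_enroll_failures, n_false_rejects):
    # Failure-to-enroll rate: images where feature extraction fails outright.
    fte = n_enroll_failures / n_attempts
    # False reject rate among the images that did enroll.
    frr = n_false_rejects / (n_attempts - n_enroll_failures)
    # An un-enrollable image counts as a rejection, so both failure
    # modes contribute to a single, comparable error figure.
    return fte + (1 - fte) * frr
```

With such a combined figure, an algorithm that rejects hard images at enrollment can be compared fairly against one that enrolls them but then produces degraded comparison scores.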