Comparison-Level Mitigation of Ethnic Bias in Face Recognition

Terhörst, Philipp; Tran, Mai Ly; Damer, Naser; Kirchbuchner, Florian; Kuijper, Arjan


Institute of Electrical and Electronics Engineers (IEEE):
8th International Workshop on Biometrics and Forensics, IWBF 2020. Proceedings : April 29-30, 2020, Porto, Portugal
Piscataway, NJ: IEEE, 2020
ISBN: 978-1-7281-6232-4
ISBN: 978-1-7281-6233-1
6 pp.
International Workshop on Biometrics and Forensics (IWBF) <8, 2020, Porto>
Conference Paper
Fraunhofer IGD
biometrics; face recognition; ATHENE; Lead Topic: Smart City; Lead Topic: Visual Computing as a Service; Research Line: Computer vision (CV); Research Line: Human computer interaction (HCI); Biometric features; automatic identification system (AIS); Bias; fairness; CRISP

Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works have shown that these systems are strongly biased against demographic sub-groups. Previous works introduced approaches that aim at learning less biased representations. However, applying these approaches in real applications requires a complete replacement of the templates in the database. This replacement procedure further requires that a face image of each enrolled individual is stored as well. In this work, we propose the first bias-mitigating solution that works on the comparison level of a biometric system. We propose a fairness-driven neural network classifier for the comparison of two biometric templates, replacing the system's similarity function. This fair classifier is trained with a novel penalization term in the loss function to introduce the criteria of group and individual fairness into the decision process. This penalization term forces the score distributions of different ethnicities to be similar, leading to a reduction of the intra-ethnic performance differences. Experiments were conducted on two publicly available datasets and evaluated the performance for four different ethnicities. The results showed that for both fairness criteria, our proposed approach significantly reduces ethnic bias while preserving high recognition ability. Our model, built on individual fairness, achieves bias reduction rates between 15.35% and 52.67%. In contrast to previous work, our solution is easy to integrate into existing systems by simply replacing the system's similarity function with our fair template comparison approach.
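The penalization idea described in the abstract, i.e. adding a term to the classifier's loss that pushes the comparison-score distributions of different ethnic groups toward each other, can be sketched as follows. This is a minimal illustration only: the function name, the binary cross-entropy base loss, the variance-of-group-means penalty, and the weight `lam` are assumptions made for the sketch, not the paper's exact formulation.

```python
import numpy as np

def fair_comparison_loss(scores, labels, groups, lam=1.0):
    """Illustrative comparison-level loss (assumed form, not the paper's):
    binary cross-entropy on genuine/impostor comparison scores, plus a
    group-fairness penalty that shrinks differences between per-group
    score statistics."""
    scores = np.clip(scores, 1e-7, 1 - 1e-7)
    # Base recognition objective: cross-entropy on comparison decisions.
    bce = -np.mean(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))
    # Fairness penalty: variance of the per-group mean scores, a simple
    # stand-in for forcing similar score distributions across ethnicities.
    group_means = np.array([scores[groups == g].mean() for g in np.unique(groups)])
    penalty = np.var(group_means)
    return bce + lam * penalty

# Balanced case: both groups have the same score distribution, penalty is 0.
balanced = fair_comparison_loss(
    np.array([0.9, 0.1, 0.9, 0.1]),
    np.array([1, 0, 1, 0]),
    np.array([0, 0, 1, 1]),
)
# Skewed case: group 0 scores high, group 1 scores low; penalty is positive.
skewed = fair_comparison_loss(
    np.array([0.9, 0.9, 0.1, 0.1]),
    np.array([1, 1, 0, 0]),
    np.array([0, 0, 1, 1]),
)
```

With identical per-comparison cross-entropy in both cases, the skewed grouping incurs the extra penalty, so `skewed > balanced`; increasing `lam` trades recognition accuracy against inter-group score similarity, mirroring the bias/performance trade-off the abstract reports.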