Fraunhofer-Gesellschaft

Publica

Hier finden Sie wissenschaftliche Publikationen aus den Fraunhofer-Instituten.

Mitigating Ethnic Bias in Face Recognition Models through Fair Template Comparison

 
Author: Tran, Mai Ly
Supervisors: Kuijper, Arjan; Terhörst, Philipp

Darmstadt, 2019, 62 pp.
Darmstadt, TU, Master Thesis, 2019
English
Master Thesis
Fraunhofer IGD
Lead Topic: Smart City; Research Line: Computer vision (CV); biometrics; face recognition; machine learning

Abstract
Face recognition systems find many uses in daily life. For example, they can unlock your phone or automatically tag a person in a photo, but they are also used in other application fields such as security environments or surveillance. However, there is a significant problem with these systems: they are often biased. They make far more mistakes on women and darker-skinned people than on men and light-skinned people. This bias stems from training data that is heavily skewed towards light-skinned men; the systems learn from this data and reflect its bias. As face recognition systems become more prevalent, solving this problem becomes increasingly important, especially where mistakes can have a large impact, such as when the systems are used to identify criminals and entire groups of people are discriminated against. The important question is: how can the bias be reduced as much as possible so that the systems become fairer while maintaining sufficient recognition performance? There are several ways to tackle bias. Previous approaches introduced balanced datasets or removed features that may lead to bias. However, they often face the challenge of collecting enough data for a balanced dataset, or suffer performance drops. This is especially true for minority groups, as it is intrinsically hard to collect more data for them; as a result, the bias against minority groups is even stronger. In this thesis, the focus is on reducing the ethnic bias of face recognition systems through a fair template comparison method: we propose applying two different fairness concepts during the training of template comparison models by adding them as penalization terms to the loss function. The first concept, group fairness, aims at equalizing groups, while the second concept, individual fairness, aims at equal treatment of similar individuals. Our approach is evaluated on two different datasets.
The template comparison is realized with logistic regression and neural network models. The experiments show not only the influence of the fairness terms, but also that a fairer system can be achieved without a significant drop in face recognition performance.
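The idea of training a template comparison model with a fairness penalty added to the loss can be sketched as follows. This is a minimal illustration, not the thesis' exact formulation: the penalty form (a squared gap between the two groups' mean comparison scores, a simple group-fairness surrogate), the weight `lam`, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

def fair_logistic_loss(w, X, y, groups, lam=1.0):
    """Binary cross-entropy plus a group-fairness penalty that pulls the
    mean comparison scores of the two groups together. The squared-gap
    penalty is an illustrative choice, not the thesis' exact term."""
    scores = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid comparison scores
    eps = 1e-9                                       # guard against log(0)
    bce = -np.mean(y * np.log(scores + eps)
                   + (1 - y) * np.log(1 - scores + eps))
    gap = scores[groups == 0].mean() - scores[groups == 1].mean()
    return bce + lam * gap ** 2                      # fairness term penalizes the score gap

# Tiny synthetic stand-in for template-comparison features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # comparison feature vectors
y = (rng.random(200) < 0.5).astype(float)            # genuine (1) / impostor (0) pairs
groups = (rng.random(200) < 0.5).astype(int)         # group id per comparison pair

# Plain gradient descent with numerical gradients
# (a real system would use an autodiff framework).
w = np.zeros(5)
for _ in range(100):
    grad = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = 1e-5
        grad[i] = (fair_logistic_loss(w + e, X, y, groups)
                   - fair_logistic_loss(w - e, X, y, groups)) / 2e-5
    w -= 0.1 * grad
```

The weight `lam` controls the trade-off the abstract describes: larger values push the groups' score distributions closer together at some cost in raw recognition accuracy, while `lam = 0` recovers plain logistic-regression training.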

URL: http://publica.fraunhofer.de/documents/N-575307.html