2023
Conference Paper
Title
Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection
Abstract
Face recognition (FR) systems continue to spread in our daily lives, accompanied by an increasing demand for higher explainability and interpretability of FR systems, which are mainly based on deep learning. While bias across demographic groups in FR systems has already been studied, the bias of explainability tools has not yet been investigated. As such tools aim at steering further development and enabling a better understanding of computer vision problems, the possible existence of bias in their outcome can lead to a chain of biased decisions. In this paper, we explore the existence of bias in the outcome of explainability tools by investigating the use case of face presentation attack detection. By applying two different explainability tools to models with different levels of bias, we investigate the bias in the outcome of such tools. Our study shows that these tools exhibit clear signs of gender bias in the quality of their explanations.
Keyword(s)
Branche: Information Technology
Research Line: Computer vision (CV)
Research Line: Human computer interaction (HCI)
Research Line: Machine learning (ML)
LTA: Interactive decision-making support and assistance systems
LTA: Machine intelligence, algorithms, and data structures (incl. semantics)
Image interpretations
Biometrics
Bias
Face recognition
Machine learning
ATHENE
CRISP