Title: Frequency Matters: Explaining Biases of Face Recognition in the Frequency Domain
Authors: Huber, Marco; Boutros, Fadi; Damer, Naser
Type: Conference paper
Date issued: 2025
Available online: 2025-05-27
Language: English
DOI: 10.1007/978-3-031-92089-9_18
Handle: https://publica.fraunhofer.de/handle/publica/488000

Abstract: Face recognition (FR) models are vulnerable to performance variations across demographic groups. The causes of these performance differences are unclear due to the highly complex deep learning-based structure of FR models. Several works have explored possible roots of gender and ethnicity bias, identifying semantic factors such as hairstyle, make-up, or facial hair as possible sources. Motivated by recent findings on the importance of frequency patterns in convolutional neural networks, we explain bias in face recognition using state-of-the-art frequency-based explanations. Our extensive results show that different frequencies are important to FR models depending on the ethnicity of the samples.

Keywords: Biometrics; Face recognition; Machine learning; Computer vision; Deep learning; ATHENE
Branch: Information Technology
Research lines: Computer vision (CV); Human computer interaction (HCI); Machine learning (ML)
LTAs: Interactive decision-making support and assistance systems; Machine intelligence, algorithms, and data structures (incl. semantics); Generation, capture, processing, and output of images and 3D models
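To illustrate the general idea of frequency-based probing described in the abstract, the sketch below masks radial frequency bands of a face image in the DCT domain and measures how much each band's removal perturbs an FR model's embedding. This is only a minimal, assumed implementation of the occlusion-in-frequency-space concept, not the authors' exact method; `fr_model`, `band_importance`, and the radial band split are illustrative placeholders.

```python
# Minimal sketch of a frequency-band importance probe (assumed, illustrative).
import numpy as np
from scipy.fft import dctn, idctn


def band_importance(image, fr_model, n_bands=8):
    """Estimate how much each radial DCT frequency band contributes to an
    FR model's embedding of `image` (H x W grayscale, float array)."""
    h, w = image.shape
    coeffs = dctn(image, norm="ortho")  # 2D DCT-II of the face image
    # Radial distance of each coefficient from the DC term, normalized to [0, 1].
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2) / np.sqrt(2)
    ref = fr_model(image)  # embedding of the unmodified image
    scores = []
    for b in range(n_bands):
        lo, hi = b / n_bands, (b + 1) / n_bands
        masked = coeffs.copy()
        masked[(radius >= lo) & (radius < hi)] = 0.0  # remove one frequency band
        probe = fr_model(idctn(masked, norm="ortho"))  # embed the band-suppressed image
        cos = np.dot(ref, probe) / (np.linalg.norm(ref) * np.linalg.norm(probe))
        scores.append(1.0 - cos)  # larger embedding drift = more important band
    return np.array(scores)


# Usage with a toy stand-in for an FR backbone (flatten + random projection);
# a real study would use a trained FR network and compare scores across
# demographic groups.
rng = np.random.default_rng(0)
proj = rng.standard_normal((112 * 112, 128))
toy_model = lambda img: img.reshape(-1) @ proj
face = rng.random((112, 112))
print(band_importance(face, toy_model))
```

Comparing such per-band importance scores between demographic groups is one simple way to surface the paper's core observation that different frequency ranges matter to FR models depending on the ethnicity of the samples.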