Title: Efficient Explainable Face Verification based on Similarity Score Argument Backpropagation
Authors: Huber, Marco; Luu, Anh Thi; Terhörst, Philipp; Damer, Naser
Type: Conference paper
Date: 2024-04-17 (2024)
Handle: https://publica.fraunhofer.de/handle/publica/466138
DOI: 10.1109/WACV57701.2024.00467
Language: en

Abstract: Explainable face recognition is gaining growing attention as the technology gains ground in security-critical applications. Understanding why two face images are matched or not matched by a given face recognition system is important to operators, users, and developers in order to increase trust and accountability, develop better systems, and highlight unfair behavior. In this work, we propose a similarity score argument backpropagation (xSSAB) approach that backpropagates the arguments supporting or opposing the face-matching decision to visualize spatial maps indicating similar and dissimilar areas as interpreted by the underlying face recognition model. Furthermore, we present Patch-LFW, a new explainable face verification benchmark that enables, together with a novel evaluation protocol, the first quantitative evaluation of the validity of similarity and dissimilarity maps in explainable face recognition approaches. We compare our efficient approach to state-of-the-art approaches, demonstrating a superior trade-off between efficiency and performance. The code as well as the proposed Patch-LFW benchmark is publicly available at: https://github.com/marcohuber/xSSAB

Keywords: Branche: Information Technology; Research Line: Computer vision (CV); Research Line: Human computer interaction (HCI); Research Line: Machine learning (ML); LTA: Interactive decision-making support and assistance systems; LTA: Machine intelligence, algorithms, and data structures (incl. semantics); LTA: Generation, capture, processing, and output of images and 3D models; Biometrics; Face recognition; Machine learning; Deep learning; ATHENE
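The abstract describes backpropagating the arguments of a similarity score to obtain supporting and opposing evidence for a match. A minimal sketch of that idea at the embedding level, assuming cosine similarity between face embeddings (the function names and the NumPy formulation here are illustrative assumptions, not the authors' implementation; in the full method the gradient would be propagated further through the network to the input pixels to form spatial maps):

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def score_arguments(a: np.ndarray, b: np.ndarray):
    """Split the similarity score into per-dimension arguments.

    Each element of a*b/(|a||b|) is one additive argument of the score:
    positive entries support the match decision, negative entries oppose
    it, and all arguments sum exactly to the cosine similarity.
    (Illustrative decomposition, not the paper's exact formulation.)
    """
    contrib = a * b / (np.linalg.norm(a) * np.linalg.norm(b))
    supporting = np.clip(contrib, 0.0, None)
    opposing = np.clip(contrib, None, 0.0)
    return supporting, opposing


def similarity_gradient(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Analytic gradient of cos(a, b) with respect to embedding a.

    d/da [a.b / (|a||b|)] = b/(|a||b|) - cos(a, b) * a/|a|^2.
    In a full pipeline this is the starting signal that would be
    backpropagated through the face recognition model to pixel space.
    """
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    cos = float(a @ b / (na * nb))
    return b / (na * nb) - cos * a / na**2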