Title: Disentangling Morphed Identities for Face Morphing Detection
Authors: Eduarda Caldeira; Pedro C. Neto; Tiago Gonçalves; Naser Damer; Ana F. Sequeira; Jaime S. Cardoso
Type: Journal article
Date: 2024 (record created 2024-03-26)
License: CC BY 4.0
Language: English
Handle: https://publica.fraunhofer.de/handle/publica/464492
DOIs: 10.24406/publica-2842 (https://doi.org/10.24406/publica-2842); 10.1016/j.sctalk.2024.100331
Keywords: Biometrics; Face recognition; Machine learning; Morphing attack; ATHENE
Subject classification: Branche: Information Technology; Branche: Bioeconomics and Infrastructure; Research Line: Computer vision (CV); Research Line: Human computer interaction (HCI); Research Line: Machine learning (ML); LTA: Interactive decision-making support and assistance systems; LTA: Machine intelligence, algorithms, and data structures (incl. semantics); LTA: Generation, capture, processing, and output of images and 3D models

Abstract: Morphing attacks continue to threaten biometric systems, especially face recognition systems. Over time, they have become simpler to perform and more realistic; as a result, the use of deep learning systems to detect these attacks has grown. At the same time, there is constant concern about the lack of interpretability of deep learning models. Balancing performance and interpretability has been a difficult task for scientists. However, by leveraging domain information and imposing some constraints, we have developed IDistill, an interpretable method with state-of-the-art performance that provides information on both the separation of identities in morph samples and their contribution to the final prediction. The domain information is learnt by an autoencoder and distilled into a classifier system in order to teach it to separate identity information. Compared to other methods in the literature, IDistill outperforms them on three out of five databases and is competitive on the remaining two.
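The record describes the method only at this high level. As an illustration of the autoencoder-to-classifier distillation idea mentioned in the abstract, the following is a minimal PyTorch-style sketch. All names (IdentityAutoencoder, MorphClassifier, training_step), the 512-D input embeddings, the split of the latent code into two identity-related parts, and the MSE distillation loss are assumptions made for illustration, not details of IDistill's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher: an autoencoder whose bottleneck is split into two
# identity-related latents, mimicking the "identity separation" idea.
class IdentityAutoencoder(nn.Module):
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(2 * latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        z1, z2 = z.chunk(2, dim=-1)   # two identity-related latents
        recon = self.decoder(z)
        return z1, z2, recon

# Hypothetical student: a classifier that predicts bona fide vs. morph and is
# additionally supervised to reproduce the teacher's separated latents.
class MorphClassifier(nn.Module):
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.id_head = nn.Linear(256, 2 * latent_dim)   # distillation target
        self.cls_head = nn.Linear(2 * latent_dim, 1)    # morph / bona fide

    def forward(self, x):
        h = self.backbone(x)
        id_feats = self.id_head(h)
        logit = self.cls_head(id_feats)
        return id_feats, logit

def training_step(teacher, student, x, y, alpha=0.5):
    """One illustrative step: classification loss plus a distillation loss
    that pulls the student's identity features toward the frozen teacher's."""
    with torch.no_grad():
        z1, z2, _ = teacher(x)
        target = torch.cat([z1, z2], dim=-1)
    id_feats, logit = student(x)
    cls_loss = F.binary_cross_entropy_with_logits(logit.squeeze(-1), y.float())
    kd_loss = F.mse_loss(id_feats, target)
    return cls_loss + alpha * kd_loss

# Example usage (with precomputed 512-D face embeddings as inputs):
# teacher = IdentityAutoencoder(); student = MorphClassifier()
# loss = training_step(teacher, student, x_batch, y_batch)
```

The design choice sketched here, supervising the student's intermediate features with the teacher's separated latents rather than with class labels alone, is what would let the classifier expose identity-separation information alongside its final prediction; the paper's exact losses and architectures may differ.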