Authors: Kuijper, Arjan; Terhörst, Phillip; Bierbaum, Florian

Date: 2022-12-15 (published 2022)

Handle: https://publica.fraunhofer.de/handle/publica/430044

Title: Investigating the Generalizability of MasterFace Attacks on Face Recognition Systems

Title (German): Untersuchung der Generalisierbarkeit von MasterFace Angriffen auf Gesichtserkennungssysteme

Type: bachelor thesis

Language: en

Keywords: Lead Topic: Smart City; Research Line: Computer Vision (CV); Research Line: Machine Learning (ML); Biometric identification systems; Face recognition; Generalization; Deep learning

Abstract: A MasterFace attack aims at generating a face image that can be successfully matched against as many people as possible and therefore represents a large potential security risk for face recognition systems, especially since generating a MasterFace does not require access to any information about the subjects it is matched against. Previous works proposed methods for generating MasterFaces and showed that they can match a large portion of the identities in their testing data. However, these works conducted limited evaluation experiments with small testing sample sizes, older face recognition models, and limited cross-dataset and cross-model evaluations. Since this makes it hard to show the generalizability of MasterFace attacks, in this work we analyze the generalizability of MasterFace attacks empirically and theoretically. The empirical analysis includes six state-of-the-art face recognition models and cross-dataset and cross-model evaluation on three testing datasets with much larger sample sizes and variance. This investigation showed very low generalizability of the MasterFaces when their effectiveness was tested with face recognition models and datasets different from the ones they were trained and tested on. To be precise, in our experiments their effectiveness was comparable to that of zero-effort imposter attacks, meaning a random sample from the dataset has a very similar success rate to a MasterFace. In the theoretical analysis, we investigated how many identities an optimal MasterFace can theoretically match in an embedding space with perfect identity separation. For this, we estimated how many unique identities can fit in such an embedding space and how many of these a MasterFace could potentially match. This investigation resulted in an insignificant coverage for the MasterFace, indicating that the effectiveness of MasterFace attacks will further decrease on future face recognition systems, since these aim for better identity separation. We conclude that MasterFace attacks pose no threat to face recognition systems unless the attacker has access to the deployed face recognition model and dataset. However, we suggest that MasterFace attacks can instead be used to understand and improve the robustness of face recognition systems. The results of this work led to a paper publication [49].
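The empirical comparison against zero-effort imposters described in the abstract can be illustrated with a minimal sketch. The code below is not from the thesis; it assumes precomputed, L2-normalized face embeddings, and the threshold value, array shapes, and names are hypothetical stand-ins. It measures the fraction of gallery identities matched by a candidate MasterFace embedding versus a randomly drawn gallery sample (the zero-effort baseline).

import numpy as np

def match_rate(probe, gallery, threshold=0.4):
    """Fraction of gallery embeddings whose cosine similarity to the
    probe exceeds the decision threshold (unit vectors assumed)."""
    sims = gallery @ probe          # cosine similarity for L2-normalized vectors
    return float(np.mean(sims >= threshold))

rng = np.random.default_rng(0)

# Stand-in data: one 512-d embedding per identity, unit-normalized.
# In a real evaluation these would come from a face recognition model.
gallery = rng.normal(size=(10_000, 512))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Hypothetical MasterFace embedding (here just another random vector).
masterface = rng.normal(size=512)
masterface /= np.linalg.norm(masterface)

# Zero-effort imposter baseline: a random sample from the gallery itself.
zero_effort = gallery[rng.integers(len(gallery))]

print("MasterFace match rate: ", match_rate(masterface, gallery))
print("Zero-effort match rate:", match_rate(zero_effort, gallery))

In the cross-model setting the abstract describes, the two printed rates would be close to each other, which is exactly the finding that a MasterFace offers no advantage over a random probe.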
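The theoretical coverage estimate can likewise be made concrete with a standard back-of-the-envelope model, which is our own reading rather than the thesis's exact derivation: assume identities are points on the unit hypersphere $S^{d-1}$ and a match is a cosine similarity above $\cos\theta$. A single MasterFace then covers at most the spherical cap of half-angle $\theta$, whose fraction of the total surface area is

\[
C(\theta) \;=\; \frac{A_{\mathrm{cap}}(\theta)}{A\!\left(S^{d-1}\right)}
\;=\; \frac{\int_{0}^{\theta} \sin^{d-2}\varphi \,\mathrm{d}\varphi}
           {\int_{0}^{\pi} \sin^{d-2}\varphi \,\mathrm{d}\varphi}.
\]

For $\theta < \pi/2$, $C(\theta)$ decays exponentially as the embedding dimension $d$ grows, while a sphere-packing argument lets roughly $1/C(\theta/2)$ well-separated identities fit in the same space. The MasterFace's relative coverage therefore shrinks toward zero as identity separation improves, consistent with the abstract's conclusion about future face recognition systems.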