Accurate and robust neural networks for face morphing attack detection
Artificial neural networks tend to use only the information they need for a task. For example, to recognize a rooster, a network might consider only the rooster's red comb and wattle and ignore the rest of the animal. This makes networks vulnerable to attacks on their decision-making process and can degrade their generalization. Thus, this phenomenon has to be considered during the training of networks, especially in safety- and security-related applications. In this paper, we propose neural network training schemes, based on different alterations of the training data, that increase robustness and generalization. Specifically, we limit the amount and position of the information available to the neural network for its decision and study the effects on accuracy, generalization, and robustness against semantic and black-box attacks for the particular example of face morphing attacks. In addition, we exploit layer-wise relevance propagation (LRP) to analyze the differences in the decision-making processes of the differently trained neural networks. A face morphing attack is an attack on a biometric facial recognition system in which the system is fooled into matching two different individuals with the same synthetic face image. Such a synthetic image can be created by aligning and blending images of the two individuals that should be matched with it. We train neural networks for face morphing attack detection using our proposed training schemes and show that they improve robustness against attacks on neural networks. Using LRP, we show that the improved training forces the networks to develop and use reliable models for all regions of the analyzed image. This redundancy in representation is of crucial importance for security-related applications.
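The morph creation described above can be illustrated with a minimal sketch. This is not the paper's pipeline: a realistic morph first warps both faces onto shared facial landmarks before blending; here we assume the two images are already aligned and simply cross-dissolve the pixel values with a hypothetical `morph` helper.

```python
import numpy as np

def morph(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two pre-aligned face images into a single morph.

    Assumption: both images have the same shape and are already
    landmark-aligned; a real pipeline performs the warping step first.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("images must be aligned to the same shape")
    # Cross-dissolve in floating point to avoid uint8 overflow,
    # then cast back to the input dtype.
    blended = alpha * img_a.astype(np.float64) + (1.0 - alpha) * img_b.astype(np.float64)
    return blended.astype(img_a.dtype)
```

With `alpha = 0.5`, each pixel of the morph is the average of the two subjects' pixels, which is the simplest form of the blending step mentioned in the abstract.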