Deep learning-based face recognition and the robustness to perspective distortion
Face recognition technology is spreading into a wide range of applications, driven mainly by social acceptance and the performance boost achieved by deep learning-based solutions in recent years. Perspective distortion is an understudied distortion in face recognition: when imaging 3D objects, it causes effects such as converging verticals, with a severity that depends on the distance to the object. The effect of this distortion on face recognition was previously studied for algorithms based on hand-crafted features, showing a clear negative effect on verification performance. Previously proposed solutions compensated for the distortion at the face image level, which requires knowing the camera settings and capturing a high-quality image. This work investigates the effect of perspective distortion on the performance of a deep learning-based face recognition solution. It also provides a device parameter-independent solution that decreases this effect by creating more perspective-robust face representations. This is achieved by training the deep learning model on perspective-diverse data, without increasing the size of the training data. Experiments performed on the deep model at hand and a specifically collected database show that perspective distortion degrades face verification performance if it is not considered in the training process, and that this degradation can be reduced by our proposal of creating robust face representations through proper selection of the training data.