The Relative Contributions of Facial Parts Qualities to the Face Image Utility
Face image quality assessment predicts the utility of a face image for automated face recognition: a high-quality face image is expected to yield accurate identification or verification. Several recent face image quality assessment algorithms build on deep-learning approaches that rely on face embeddings of aligned face images. Such embeddings fuse complex information into a single feature vector and are therefore difficult to disentangle. Semantic context, however, can provide more interpretable insights into neural-network decisions. We investigate the effects of face subregions (semantic contexts) and link the general image quality of face subregions to face image utility. The evaluation is performed on two challenging large-scale datasets (LFW and VGGFace2) with three face recognition solutions (FaceNet, SphereFace, and ArcFace). In total, we applied four face image quality assessment methods and one general image quality assessment method to four face subregions (eyes, mouth, nose, and tightly cropped face region) as well as to the aligned faces. In addition, we investigated fusing the qualities of different face subregions to increase the robustness of the outcomes.
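The per-subregion evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the landmark coordinates, crop size, and the sharpness measure (mean gradient magnitude as a stand-in for a general image quality metric) are all assumptions introduced here for clarity.

```python
import numpy as np

def crop_region(img, center, size):
    """Crop a square subregion around an assumed landmark center (y, x)."""
    y, x = center
    h = size // 2
    return img[max(0, y - h):y + h, max(0, x - h):x + h]

def quality_proxy(region):
    """Toy general image quality score: mean gradient magnitude (sharpness).

    Stands in for a real general IQA method; not the metric used in the paper.
    """
    gy, gx = np.gradient(region.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# Toy grayscale "aligned face" with hypothetical landmark positions.
rng = np.random.default_rng(0)
face = rng.random((112, 112))
landmarks = {
    "eyes": (38, 56),
    "nose": (64, 56),
    "mouth": (88, 56),
    "cropped_face": (56, 56),
}

# Score each face subregion separately, as in the per-subregion analysis.
scores = {name: quality_proxy(crop_region(face, pos, 32))
          for name, pos in landmarks.items()}

# A simple fusion of subregion qualities, e.g. their mean.
fused = float(np.mean(list(scores.values())))
```

Linking such per-subregion scores to face image utility would then amount to correlating them with recognition performance of embedding-based systems such as ArcFace.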