2025
Conference Paper not in Proceedings
Title
ViT-FIQA: Assessing Face Image Quality using Vision Transformers
Abstract
Face Image Quality Assessment (FIQA) aims to predict the utility of a face image for face recognition (FR) systems. State-of-the-art FIQA methods mainly rely on convolutional neural networks (CNNs), leaving the potential of Vision Transformer (ViT) architectures underexplored. This work proposes ViT-FIQA, a novel approach that extends standard ViT backbones, originally optimized for FR, through a learnable quality token designed to predict a scalar utility score for any given face image. The learnable quality token is concatenated with the standard image patch tokens, and the whole sequence is processed via global self-attention by the ViT encoders to aggregate contextual information across all patches. At the output of the backbone, ViT-FIQA branches into two heads: (1) the patch tokens are passed through a fully connected layer to learn discriminative face representations via a margin-penalty softmax loss, and (2) the quality token is fed into a regression head to learn to predict the face sample's utility. Extensive experiments on challenging benchmarks and several FR models, including both CNN- and ViT-based architectures, demonstrate that ViT-FIQA consistently achieves top-tier performance. These results underscore the effectiveness of transformer-based architectures in modeling face image utility and highlight the potential of ViTs as a scalable foundation for future FIQA research. https://cutt.ly/irHlzXUC
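To make the described architecture concrete, the following is a minimal PyTorch-style sketch of the idea from the abstract: a learnable quality token is prepended to the patch tokens, the full sequence is processed by a transformer encoder with global self-attention, and two heads branch off at the output. All module names, dimensions, the toy linear patchifier, and the mean pooling over patch tokens are illustrative assumptions, not the authors' implementation, which builds on an FR-optimized ViT backbone and trains the representation head with a margin-penalty softmax loss.

```python
import torch
import torch.nn as nn


class ViTFIQASketch(nn.Module):
    """Illustrative sketch of the ViT-FIQA idea (not the paper's code).

    A learnable quality token is concatenated with the patch tokens and
    processed jointly by the ViT encoder. The patch tokens feed a face
    representation head (trained with a margin-penalty softmax in the
    paper), while the quality token feeds a regression head predicting
    a scalar utility score.
    """

    def __init__(self, dim=512, depth=12, heads=8, num_patches=196, embed_dim=512):
        super().__init__()
        # Toy stand-in for a patch embedder (16x16 RGB patches flattened).
        self.patch_embed = nn.Linear(3 * 16 * 16, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        # Learnable quality token, analogous to a class token.
        self.quality_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.id_head = nn.Linear(dim, embed_dim)  # discriminative face representation
        self.quality_head = nn.Linear(dim, 1)     # scalar utility regression

    def forward(self, patches):
        # patches: (batch, num_patches, 3*16*16)
        x = self.patch_embed(patches)
        q = self.quality_token.expand(x.size(0), -1, -1)
        x = torch.cat([q, x], dim=1) + self.pos_embed  # prepend quality token
        x = self.encoder(x)                            # global self-attention
        quality = self.quality_head(x[:, 0]).squeeze(-1)    # from quality token
        embedding = self.id_head(x[:, 1:].mean(dim=1))      # from patch tokens
        return embedding, quality
```

Under this sketch, the embedding would be supervised by a margin-penalty softmax loss (e.g., an ArcFace-style loss) and the quality output by a regression loss against a utility target, matching the two-branch training described in the abstract.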
Open Access
Rights
Use according to copyright law
Language
English
Keyword(s)
Sector: Infrastructure and Public Services
Research Line: Computer vision (CV)
Research Line: Human computer interaction (HCI)
Research Line: Machine learning (ML)
LTA: Machine intelligence, algorithms, and data structures (incl. semantics)
Biometrics
Face Recognition
Machine learning
Artificial intelligence (AI)
ATHENE