2024
Conference Paper
Title
QUD: Unsupervised Knowledge Distillation for Deep Face Recognition
Abstract
In this paper, we present an unsupervised knowledge distillation (KD) approach, QUD, for face recognition. QUD uses a queue of features within a contrastive learning setup to guide the student model toward a feature representation that is similar to its counterpart obtained from the teacher and dissimilar to the representations stored in the queue. At each training iteration, the queue is updated by pushing a batch of feature representations obtained from the teacher and dequeuing the oldest entries. We additionally incorporate a temperature into the contrastive loss to control how sensitive contrastive learning is to the samples in the queue that are treated as negatives. The proposed unsupervised QUD approach requires neither access to the dataset used to train the teacher model nor identity labels for the training data. We demonstrate the effectiveness of the proposed approach through several sensitivity studies with different teacher architectures and different student training datasets in the KD framework. On mainstream benchmarks, our unsupervised QUD achieves highly competitive results compared to the state of the art (SOTA), even outperforming SOTA on several benchmarks. Code and pre-trained models are available at https://github.com/jankolf/QUD.
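The abstract describes an InfoNCE-style contrastive distillation loss over a FIFO queue of teacher features. Below is a minimal PyTorch sketch of that mechanism, written from the abstract alone, not the authors' implementation: the class name QUDLoss, the feature dimension, queue size, and temperature defaults are illustrative assumptions; the official code is at the repository linked above.

```python
# Minimal sketch of a queue-based unsupervised KD loss, assuming an
# InfoNCE-style formulation. Names and defaults are illustrative, not
# taken from the QUD paper or repository.
import torch
import torch.nn.functional as F

class QUDLoss(torch.nn.Module):
    def __init__(self, feat_dim=512, queue_size=8192, tau=0.07):
        super().__init__()
        self.tau = tau  # temperature: controls sensitivity to queued negatives
        # FIFO queue of (normalized) teacher features serving as negatives.
        self.register_buffer(
            "queue", F.normalize(torch.randn(queue_size, feat_dim), dim=1)
        )
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _enqueue(self, teacher_feats):
        # Push the current teacher batch, overwriting the oldest entries.
        b = teacher_feats.shape[0]
        p = int(self.ptr)
        idx = torch.arange(p, p + b, device=teacher_feats.device) % self.queue.shape[0]
        self.queue[idx] = teacher_feats
        self.ptr[0] = (p + b) % self.queue.shape[0]

    def forward(self, student_feats, teacher_feats):
        s = F.normalize(student_feats, dim=1)
        t = F.normalize(teacher_feats.detach(), dim=1)
        # Positive: student feature vs. teacher feature of the same image.
        pos = (s * t).sum(dim=1, keepdim=True)   # (B, 1)
        # Negatives: student features vs. all queued teacher features.
        neg = s @ self.queue.t()                  # (B, K)
        logits = torch.cat([pos, neg], dim=1) / self.tau
        # The positive sits at index 0 of each row.
        labels = torch.zeros(s.shape[0], dtype=torch.long, device=s.device)
        loss = F.cross_entropy(logits, labels)
        self._enqueue(t)  # update the queue after computing the loss
        return loss
```

Because the only positive for each student feature is the teacher feature of the same image, no identity labels are required, which matches the unsupervised setting described in the abstract.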
Funder
Bundesministerium für Bildung und Forschung (BMBF)
Conference
Keyword(s)
Industry: Information Technology
Research Line: Computer vision (CV)
Research Line: Human computer interaction (HCI)
Research Line: Machine learning (ML)
LTA: Interactive decision-making support and assistance systems
LTA: Machine intelligence, algorithms, and data structures (incl. semantics)
LTA: Generation, capture, processing, and output of images and 3D models
Biometrics
Face recognition
Machine learning
Deep learning
Efficiency
ATHENE