2025
Conference Paper
Title

Comparison of Knowledge Distillation Methods for Low-complexity Multi-microphone Speech Enhancement using the FT-JNF Architecture

Abstract
Multi-microphone speech enhancement using deep neural networks (DNNs) has progressed significantly in recent years. However, many proposed DNN-based speech enhancement algorithms cannot be implemented on devices with limited hardware resources. Simply lowering the complexity of such systems by reducing the number of parameters often results in worse performance. Knowledge Distillation (KD) is a promising approach for reducing DNN model size while preserving performance. In this paper, we consider the recently proposed Frequency-Time Joint Non-linear Filter (FT-JNF) architecture and investigate several KD methods to train smaller (student) models from a large pre-trained (teacher) model. Five KD methods are evaluated, based on direct output matching, the self-similarity of intermediate layers, and fused multi-layer losses. Experimental results on a simulated dataset using a compact array with five microphones show that three KD methods substantially improve the performance of student models compared to training without KD. A student model with only 25% of the teacher model's parameters achieves comparable PESQ scores at 0 dB SNR. Furthermore, a reduction of up to 96% in model size can be achieved with only a minimal decrease in PESQ scores.
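
The abstract mentions direct output matching as one of the evaluated KD methods. The following is a minimal sketch of such a loss for a teacher-student setup; the function name, the use of an L1 criterion, and the weighting parameter alpha are illustrative assumptions and not the paper's actual training configuration.

```python
import torch
import torch.nn as nn

# Minimal sketch of direct output-matching knowledge distillation for a
# speech enhancement model. Loss choice (L1) and the weight alpha are
# assumptions for illustration, not the setup used in the paper.
def kd_output_matching_loss(student_out: torch.Tensor,
                            teacher_out: torch.Tensor,
                            target: torch.Tensor,
                            alpha: float = 0.5) -> torch.Tensor:
    """Blend the student's supervised loss with a teacher-matching term."""
    criterion = nn.L1Loss()
    supervised = criterion(student_out, target)          # student vs. clean target
    distillation = criterion(student_out, teacher_out)   # student vs. teacher output
    return (1.0 - alpha) * supervised + alpha * distillation

# Usage sketch inside a training step (teacher frozen, student trained):
# with torch.no_grad():
#     teacher_out = teacher_model(noisy_multichannel)
# student_out = student_model(noisy_multichannel)
# loss = kd_output_matching_loss(student_out, teacher_out, clean_target)
# loss.backward()
```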
Author(s)
Metzger, Robert
Ohlenbusch, Mattes  
Fraunhofer-Institut für Digitale Medientechnologie IDMT  
Doclo, Simon  
Fraunhofer-Institut für Digitale Medientechnologie IDMT  
Rollwage, Christian  
Fraunhofer-Institut für Digitale Medientechnologie IDMT  
Mainwork
Speech Communication. 16th ITG Conference 2025  
Conference
Conference on Speech Communication 2025  
Language
English
Institute
Fraunhofer-Institut für Digitale Medientechnologie IDMT