Fraunhofer-Gesellschaft
2022
Conference Paper
Title

ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

Abstract
The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demand make deep learning prohibitive for resource-constrained hardware platforms such as mobile devices. Recent efforts aim to reduce these overheads while preserving model performance as much as possible, and include parameter-reduction techniques, parameter quantization, and lossless compression. In this chapter, we develop and describe a novel quantization paradigm for DNNs that leverages concepts of explainable AI (XAI) and of information theory: instead of assigning weight values based solely on their distances to the quantization clusters, the assignment function additionally considers weight relevances obtained from Layer-wise Relevance Propagation (LRP) and the information content of the clusters (entropy optimization). The ultimate goal is to preserve the most relevant weights in the quantization clusters of highest information content. Experimental results show that this novel Entropy-Constrained and XAI-adjusted Quantization (ECQˣ) method generates ultra-low-precision (2–5 bit) and simultaneously sparse neural networks while maintaining or even improving model performance. Due to the reduced parameter precision and the high number of zero elements, the resulting networks are highly compressible in terms of file size, by up to 103× compared to the full-precision unquantized DNN model. Our approach was evaluated on different types of models and datasets (including Google Speech Commands, CIFAR-10, and Pascal VOC) and compared with previous work.
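The relevance- and entropy-aware assignment described in the abstract can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation: the function name `ecqx_assign`, the trade-off parameter `lam`, and the particular way relevance scales the distance term are all assumptions made for the sketch.

```python
import numpy as np

def ecqx_assign(weights, relevances, centroids, lam=0.1):
    """Sketch of an entropy-constrained, relevance-aware cluster assignment.

    Each weight is assigned to the centroid minimizing a cost that combines
    its squared distance to the centroid, scaled by the weight's LRP
    relevance, and the information content -log2(p_c) of the cluster
    (clusters holding more weights are cheaper to encode)."""
    # (N, K) matrix of squared distances from each weight to each centroid
    d = (weights[:, None] - centroids[None, :]) ** 2
    # estimate cluster probabilities from a plain nearest-centroid pass
    nearest = d.argmin(axis=1)
    probs = np.bincount(nearest, minlength=len(centroids)) / len(weights)
    info = -np.log2(np.maximum(probs, 1e-12))  # bits per cluster
    # relevance-scaled distance plus entropy (rate) penalty
    cost = relevances[:, None] * d + lam * info[None, :]
    return cost.argmin(axis=1)

# Example: assign weights to a 3-level grid; low-relevance weights are
# pushed toward the cheap (high-probability) clusters as lam grows.
w = np.linspace(-1.0, 1.0, 50)
r = np.abs(w)                       # stand-in for LRP relevance scores
c = np.array([-1.0, 0.0, 1.0])
assignments = ecqx_assign(w, r, c)
```

With `lam=0` and strictly positive relevances, the rule reduces to ordinary nearest-centroid quantization; the entropy term is what biases low-relevance weights into the most populated (often zero-valued) cluster, producing the sparsity the abstract describes.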
Author(s)
Becking, Daniel
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI  
Dreyer, Maximilian
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI  
Samek, Wojciech  
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI  
Müller, Karsten
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI  
Lapuschkin, Sebastian Roland
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI  
Mainwork
xxAI - Beyond Explainable AI
Project(s)
BIFOLD  
Funder
Bundesministerium für Bildung und Forschung -BMBF-
Conference
International Conference on Machine Learning (ICML) 2020  
Workshop "Extending Explainable AI Beyond Deep Models and Classifiers" 2020  
Open Access
DOI
10.1007/978-3-031-04083-2_14
Additional link
Full text
Language
English
Institute
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI
Keyword(s)
  • Efficient Deep Learning
  • Explainable AI (XAI)
  • Layer-wise Relevance Propagation (LRP)
  • Neural Network Compression
  • Neural Network Quantization