2022
Conference Paper
Title
Explainable Acoustic Scene Classification: Making Decisions Audible
Abstract
This study presents a sonification method that provides "audible explanations" to improve the transparency of the decision-making processes of convolutional neural networks designed for acoustic scene classification (ASC). First, a deep neural network (DNN) based on the ResNet architecture is proposed. Second, Grad-CAM and guided backpropagation images are computed for a given input signal. These images are then used to produce frequency-selective filters that retain signal components in the input that contribute to the decision of the trained DNN. The test results demonstrate that the proposed model outperforms two baseline models. The reconstructed audio waveform is interpretable by the human ear, serving as a valuable tool to examine and possibly improve ASC models.
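The abstract describes a pipeline: classify a spectrogram with a ResNet-based CNN, compute a relevance map (Grad-CAM, guided backpropagation), derive frequency-selective filters from it, and reconstruct an audible waveform. The following is a minimal sketch of the Grad-CAM-as-mask idea only, not the paper's implementation: it assumes a torchvision ResNet-18 with untrained weights, a log-magnitude STFT spectrogram as input, and toy audio, and it omits the guided-backpropagation step and the authors' specific filter design.

```python
# Minimal Grad-CAM "audible explanation" sketch (toy data, untrained weights).
import torch
import torch.nn.functional as F
import torchvision

# ResNet-18 adapted to single-channel (spectrogram) input; the paper's exact
# architecture and training setup are not reproduced here.
model = torchvision.models.resnet18(num_classes=10)
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.eval()

# Hooks on the last convolutional block to capture activations and gradients.
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

# Placeholder input: one second of audio at 16 kHz.
wave = torch.randn(16000)
n_fft, hop = 512, 128
win = torch.hann_window(n_fft)
spec = torch.stft(wave, n_fft=n_fft, hop_length=hop, window=win, return_complex=True)
logmag = torch.log1p(spec.abs())              # (freq, time) log-magnitude spectrogram
x = logmag[None, None]                        # (batch, channel, freq, time)

# Forward pass, then backward pass for the predicted class.
logits = model(x)
cls = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, cls].backward()

# Grad-CAM: weight activation maps by globally averaged gradients, ReLU, upsample.
w = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=logmag.shape, mode="bilinear", align_corners=False)
cam = (cam / (cam.max() + 1e-8))[0, 0]        # relevance map in [0, 1], (freq, time)

# Use the relevance map as a time-frequency mask and invert to an audible signal.
masked = spec * cam
explanation = torch.istft(masked, n_fft=n_fft, hop_length=hop, window=win,
                          length=wave.numel())
```

With a trained ASC model, `explanation` would retain only the time-frequency content that drove the prediction, which is the kind of "audible explanation" the abstract refers to.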
Author(s)
Bicer, H. Nazim
Götz, Philipp
Tuna, Cagdas
Fraunhofer-Institut für Integrierte Schaltungen IIS  
Habets, Emanuël A.P.
Mainwork
International Workshop on Acoustic Signal Enhancement, IWAENC 2022. Proceedings  
Conference
International Workshop on Acoustic Signal Enhancement 2022  
DOI
10.1109/IWAENC53105.2022.9914699
Language
English
Keyword(s)
  • acoustic scene classification
  • convolutional neural networks
  • explainable artificial intelligence
  • Grad-CAM
  • guided backpropagation
  • interpretability
  • ResNet
  • spectrogram
