Date
October 15, 2023
Type
Conference Paper
Title
Extracting Interpretable Hierarchical Rules from Deep Neural Networks’ Latent Space

Abstract
Deep neural networks, known for their superior learning capabilities, excel in identifying complex relationships between inputs and outputs, leveraging hierarchical, distributed data processing. Despite their impressive performance, these networks often resemble ‘black boxes’ due to their highly intricate internal structure and representation, raising challenges in terms of safety, ethical standards, and social norms. Decompositional rule extraction techniques have sought to address these issues by delving into the latent space and retrieving a broad set of symbolic rules. However, the interpretability of these rules is often hampered by their size and complexity. In this paper, we introduce EDICT (Extracting Deep Interpretable Concepts using Trees), a novel approach for rule extraction which employs a hierarchy of decision trees to mine concepts learned in a neural network, thereby generating highly interpretable rules. Evaluations across multiple datasets reveal that our method extracts rules with greater speed and interpretability compared to existing decompositional rule extraction techniques. Simultaneously, our approach demonstrates competitive performance in classification accuracy and model fidelity.
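
To make the approach concrete, here is a minimal sketch of the decompositional idea the abstract describes, not the authors' EDICT implementation: one shallow decision tree mimics the network's prediction from its last hidden layer, and further trees ground each latent "concept" in the layer below, so the rules compose hierarchically. The sketch assumes a scikit-learn MLP; the helper `layer_activations` and the median-threshold concept binarization are hypothetical simplifications.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# An opaque network whose latent space we want to explain.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                    random_state=0).fit(X, y)

def layer_activations(net, X):
    """Forward pass that records every hidden layer's activations."""
    acts, h = [], X
    for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0)  # ReLU, MLPClassifier's default
        acts.append(h)
    return acts

acts = layer_activations(net, X)
y_net = net.predict(X)  # fidelity target: explain the network, not the labels

# Top of the hierarchy: a shallow tree predicting the network's decision
# from the last hidden layer's activations.
top_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(acts[-1], y_net)

# Lower level: binarize the last layer's units into crude "concepts" and fit
# one tree per concept on the preceding layer, grounding each concept in the
# representation below it.
concepts = (acts[-1] > np.median(acts[-1], axis=0)).astype(int)
lower_trees = [
    DecisionTreeClassifier(max_depth=3, random_state=0).fit(acts[0], concepts[:, j])
    for j in range(concepts.shape[1])
]

print(export_text(top_tree))        # rules over latent concepts
print(export_text(lower_trees[0]))  # rules grounding concept 0 one layer down
```

Substituting each concept test in the top tree's rules with the rule set of the corresponding lower tree yields the kind of hierarchical, human-readable rule chains the abstract refers to; fidelity can be measured as agreement between the composed rules and `net.predict`.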
Author(s)
Wang, Ya
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Paschke, Adrian  
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Mainwork
Rules and Reasoning. 7th International Joint Conference, RuleML+RR 2023. Proceedings  
Conference
International Joint Conference on Rules and Reasoning 2023  
DOI
10.1007/978-3-031-45072-3_17
Language
English
Institute(s)
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS
Keyword(s)
  • Neural network interpretability
  • Rule-based explanations
  • Decompositional rule extraction