Year
2019
Type
Conference Paper
Title

Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

Abstract
One obstacle that has so far prevented the adoption of machine learning models, particularly in critical areas, is their lack of explainability. In this work, a practical approach to gaining explainability of deep artificial neural networks (NNs) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Applying L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests on different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity compared to other regularizers.
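The approach described in the abstract can be pictured roughly as follows: train the network with an added orthogonality penalty, then fit a small decision tree to the trained network's predictions. The snippet below is a minimal sketch, assuming the regularizer takes the form ||WᵀW − I||₁ summed over the network's weight matrices; the architecture, the regularization strength lam, and the tree depth are illustrative placeholders, not values from the paper.

```python
# Sketch of the abstract's two-step approach (assumptions, not the
# paper's exact configuration):
# (1) train a feed-forward NN whose loss includes an L1 penalty on the
#     deviation of each weight matrix's Gram matrix from the identity,
# (2) fit a compact decision tree surrogate to the network's predictions.
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def l1_orthogonal_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of ||W^T W - I||_1 over all linear layers (assumed form)."""
    penalty = torch.tensor(0.0)
    for m in model.modules():
        if isinstance(m, nn.Linear):
            gram = m.weight.t() @ m.weight
            eye = torch.eye(gram.size(0))
            penalty = penalty + (gram - eye).abs().sum()
    return penalty

# Synthetic stand-in data; the paper evaluates on several real data sets.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4  # assumed regularization strength

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X_t), y_t)
    loss = loss + lam * l1_orthogonal_penalty(model)
    loss.backward()
    opt.step()

# Surrogate: a small decision tree fitted to the NN's hard predictions.
with torch.no_grad():
    nn_labels = model(X_t).argmax(dim=1).numpy()
tree = DecisionTreeClassifier(max_depth=4).fit(X, nn_labels)
fidelity = tree.score(X, nn_labels)  # agreement between tree and NN
print(f"surrogate fidelity: {fidelity:.3f}")
```

Here, fidelity is measured as the surrogate tree's agreement with the network's predictions, which is the notion of fidelity the abstract refers to.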
Author(s)
Schaaf, Nina
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA  
Huber, Marco  
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA  
Maucher, Johannes
Hochschule der Medien Stuttgart
Mainwork
18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019. Proceedings  
Conference
International Conference on Machine Learning and Applications (ICMLA) 2019  
Open Access
DOI
10.1109/ICMLA.2019.00016
Language
English
Institute(s)
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA
Keyword(s)
  • Explainable Artificial Intelligence (XAI)
  • neural network