
Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization
| Wani, M.A.; Institute of Electrical and Electronics Engineers (IEEE): 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019. Proceedings, December 16-19, 2019, Boca Raton, Florida, USA. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2019. ISBN: 978-1-7281-4550-1; ISBN: 978-1-7281-4551-8; ISBN: 978-1-7281-4549-5. pp. 42-49 |
| International Conference on Machine Learning and Applications (ICMLA) <18, 2019, Boca Raton/Fla.> |
| English |
| Conference paper |
| Fraunhofer IPA |
| Explainable Artificial Intelligence (XAI); neural net; neural network |
Abstract
One obstacle that has so far prevented the adoption of machine learning models, particularly in critical domains, is their lack of explainability. In this work, a practicable approach to making deep artificial neural networks (NNs) explainable through an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests on different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity compared to other regularizers.
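The general workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the regularizer adds a per-layer L1 orthogonality penalty of the form ||WᵀW − I||₁ to the task loss, and that the surrogate tree is fitted to the trained network's predictions to measure fidelity. The architecture, the penalty weight `beta`, and helper names such as `l1_ortho_penalty` and `fit_surrogate_tree` are illustrative assumptions.

```python
# Minimal sketch (PyTorch + scikit-learn) of L1-orthogonal regularization
# followed by decision-tree surrogate extraction. Hyperparameters and names
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

class MLP(nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def l1_ortho_penalty(model):
    """Sum of ||W^T W - I||_1 over all linear layers (assumed form of the regularizer)."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Linear):
            w = m.weight                      # shape (out_features, in_features)
            gram = w.t() @ w                  # (in_features, in_features)
            eye = torch.eye(gram.size(0), device=w.device)
            penalty = penalty + (gram - eye).abs().sum()
    return penalty

def train(model, loader, epochs=10, beta=1e-3, lr=1e-3):
    """Train with cross-entropy plus the L1-orthogonality penalty."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = ce(model(x), y) + beta * l1_ortho_penalty(model)
            loss.backward()
            opt.step()

def fit_surrogate_tree(model, X, max_depth=4):
    """Fit a small decision tree to the trained NN's predicted labels (fidelity target)."""
    with torch.no_grad():
        y_nn = model(torch.as_tensor(X, dtype=torch.float32)).argmax(dim=1).numpy()
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(X, y_nn)
    return tree
```

Fidelity can then be measured as the agreement between the tree's and the network's predictions on held-out data, while the tree's depth or node count serves as the complexity measure referred to in the abstract.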