Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

Authors: Schaaf, Nina; Huber, Marco; Maucher, Johannes
Type: conference paper
Published: 2019 (record added 2022-03-14)
DOI: 10.1109/ICMLA.2019.00016
Handle: https://publica.fraunhofer.de/handle/publica/406785
Language: English
Keywords: Explainable Artificial Intelligence (XAI); neural network

Abstract: One obstacle that has so far prevented the adoption of machine learning models, particularly in critical areas, is their lack of explainability. In this work, a practicable approach to making deep artificial neural networks (NNs) explainable is presented, using an interpretable surrogate model based on decision trees. Simply fitting a decision tree to a trained NN usually yields unsatisfactory results in terms of accuracy and fidelity. Applying L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests on different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity than other regularizers.
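The core idea named in the abstract, an orthogonality penalty on the weight matrices, can be illustrated with a minimal sketch. The snippet below assumes the penalty is the L1 norm of W^T W - I for a layer's weight matrix W; the function name and exact formulation are illustrative assumptions, not necessarily the paper's precise loss term.

```python
import numpy as np

def l1_orthogonal_penalty(W):
    """Illustrative L1-orthogonality penalty for a weight matrix W.

    Computes the elementwise L1 norm of W^T W - I, which is zero
    exactly when the columns of W are orthonormal. The precise
    formulation in the paper may differ; this is a sketch only.
    """
    gram = W.T @ W
    return np.abs(gram - np.eye(gram.shape[0])).sum()

# A matrix with orthonormal columns incurs a (near-)zero penalty,
# while a generic dense matrix is penalized heavily.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(8, 4)))
dense = np.random.default_rng(1).normal(size=(8, 4))
print(l1_orthogonal_penalty(Q) < 1e-9)      # orthonormal columns
print(l1_orthogonal_penalty(dense) > 1.0)   # generic matrix
```

During NN training, such a term would be added (with a weighting coefficient) to the task loss for each layer, nudging weight matrices toward orthogonality so that the trained network remains approximable by small decision trees.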