
Forcing Interpretability for Deep Neural Networks through Rule-based Regularization

 
Authors: Burkart, Nadia; Huber, Marco; Faller, Philipp M.

Published in:

Wani, M.A.; Institute of Electrical and Electronics Engineers (IEEE):
18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019. Proceedings : December 16-19, 2019, Boca Raton, Florida, USA
Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2019
ISBN: 978-1-7281-4550-1
ISBN: 978-1-7281-4551-8
ISBN: 978-1-7281-4549-5
pp. 700-705
International Conference on Machine Learning and Applications (ICMLA) <18, 2019, Boca Raton/Fla.>
English
Conference paper
Fraunhofer IPA
Fraunhofer IOSB
artificial intelligence; Explainable Artificial Intelligence (XAI); machine learning; neural networks; interpretable machine learning; explainability; rule-based regularization

Abstract
Remarkable progress in machine learning is driving research in many application domains. In some of these domains, it is mandatory that the output of machine learning algorithms be interpretable. In this paper, we propose a rule-based regularization technique to enforce interpretability for neural networks (NN). For this purpose, we train a rule-based surrogate model simultaneously with the NN. From the surrogate, a metric quantifying its degree of explainability is derived and fed back into the training of the NN as a regularization term. We evaluate our model on four datasets and compare it to unregularized models as well as to a decision-tree (DT) baseline. The rule-based regularization approach achieves interpretability while maintaining competitive accuracy.
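
The abstract outlines the training loop (fit a rule-based surrogate alongside the network, derive an explainability metric from it, and feed that back as a regularizer) without giving implementation details. The following PyTorch/scikit-learn sketch is one plausible reading, not the authors' code: a small decision tree is periodically refit as the rule-based surrogate, a fidelity term toward its (fixed) predictions serves as the differentiable regularizer, and the tree's node count is the logged explainability proxy. The helper names fit_surrogate and regularized_loss, the refit schedule, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.tree import DecisionTreeClassifier

def fit_surrogate(net, X, max_leaf_nodes=16):
    """Refit a small decision tree to mimic the network's current labels."""
    with torch.no_grad():
        y_net = net(X).argmax(dim=1).cpu().numpy()
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
    tree.fit(X.cpu().numpy(), y_net)
    return tree

def regularized_loss(net, tree, X, y, lam=0.1):
    """Task loss plus a penalty tying the network to its rule-based surrogate.

    The surrogate's predictions are treated as fixed targets, so the penalty
    is differentiable w.r.t. the network; tree.tree_.node_count can be logged
    as a rough measure of how explainable the network currently is.
    """
    logits = net(X)
    task_loss = F.cross_entropy(logits, y)
    y_surr = torch.as_tensor(tree.predict(X.cpu().numpy()),
                             dtype=torch.long, device=X.device)
    fidelity = F.cross_entropy(logits, y_surr)  # agreement with the rules
    return task_loss + lam * fidelity

# Illustrative training loop on toy data: refit the surrogate every few
# steps so the extracted rules track the network as it changes.
net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X, y = torch.randn(256, 4), torch.randint(0, 3, (256,))
tree = fit_surrogate(net, X)
for step in range(200):
    if step % 20 == 0:
        tree = fit_surrogate(net, X)
    opt.zero_grad()
    loss = regularized_loss(net, tree, X, y)
    loss.backward()
    opt.step()
```

How exactly the explainability metric enters the loss (here only through a fixed weight lam) is the main point where this sketch may diverge from the method evaluated in the paper.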

URL: http://publica.fraunhofer.de/dokumente/N-581049.html