Title: Explanation Framework for Intrusion Detection
Type: conference paper
Authors: Burkart, Nadia; Franz, Maximilian; Huber, Marco F.
Year: 2021
License: CC BY 4.0
Language: English
DOI: 10.1007/978-3-662-62746-4_9
Repository DOI: 10.24406/publica-r-409765
URL: https://publica.fraunhofer.de/handle/publica/409765
Keywords: intrusion detection; explainable machine learning; counterfactual explanations; detection; Explainable Artificial Intelligence (XAI); machine learning; artificial intelligence

Abstract: Machine learning and deep learning are widely used in various applications to assist or even replace human reasoning. For instance, a machine learning based intrusion detection system (IDS) monitors a network for malicious activity or specific policy violations. We propose that IDSs attach a sufficiently understandable report to each alert so that the operator can review alerts more efficiently. This work complements an IDS with a framework for creating explanations. The explanations support the human operator in understanding alerts and reveal potential false positives. The focus lies on counterfactual instances and explanations based on locally faithful decision boundaries.
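The abstract names counterfactual instances as one focus of the framework. As a loose illustration of that idea only (not the authors' method), the sketch below greedily perturbs the features of a flagged alert until a toy classifier no longer predicts "malicious"; the model, the feature names, and the search procedure are all hypothetical assumptions.

```python
# Hypothetical sketch, not the paper's implementation: a naive greedy
# counterfactual search for a flagged alert. Model, features, and
# search strategy are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for network-flow features: [duration, bytes_sent, packet_rate]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0.8).astype(int)  # 1 = "malicious"

clf = LogisticRegression().fit(X, y)

def counterfactual(x, target=0, step=0.05, max_iter=200):
    """Nudge one feature at a time until the classifier flips to `target`."""
    x = x.copy()
    for _ in range(max_iter):
        if clf.predict(x.reshape(1, -1))[0] == target:
            return x  # a nearby instance the IDS would *not* flag
        # Probe a small move in each feature direction; keep the best one.
        candidates = []
        for i in range(x.size):
            for delta in (-step, step):
                x_try = x.copy()
                x_try[i] += delta
                p = clf.predict_proba(x_try.reshape(1, -1))[0, target]
                candidates.append((p, x_try))
        x = max(candidates, key=lambda c: c[0])[1]
    return None  # no counterfactual found within the budget

alert = X[clf.predict(X) == 1][0]  # a flow the model flags as malicious
cf = counterfactual(alert)
print("alert:         ", np.round(alert, 2))
print("counterfactual:", np.round(cf, 2))
```

The returned instance answers the operator's question "what would have to change for this flow not to be flagged?", which is the core of a counterfactual explanation.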