Fraunhofer-Gesellschaft
April 25, 2025
Journal Article
Title

Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems

Abstract
The increase in malicious cyber activities has generated the need to produce effective tools for the field of digital forensics and incident response. Artificial intelligence (AI) and its subfields, specifically machine learning (ML) and deep learning (DL), have shown great potential to aid the task of processing and analyzing large amounts of information. However, models generated by DL are often considered "black boxes," a term that reflects the difficulty users face in understanding how these models arrive at their results. This research seeks to address the challenges of transparency, explainability, and reliability posed by black-box models in digital forensics. To accomplish this, explainable artificial intelligence (XAI) is explored as a solution; this approach seeks to make DL models more interpretable and understandable to humans. The SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) methods are implemented and evaluated as model-agnostic techniques to explain the predictions of the generated models for forensic analysis. By applying these methods to XGBoost and TabNet models trained on the UNSW-NB15 dataset, the results indicated distinct global feature-importance rankings between the two model types and revealed greater consistency of local explanations for the tree-based XGBoost model than for the deep-learning-based TabNet. This study aims to make the decision-making process in these models transparent and to assess the confidence and consistency of XAI-generated explanations in a forensic context.
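SHAP, mentioned in the abstract, attributes a model's prediction to its input features via Shapley values from cooperative game theory. As a minimal illustration of the underlying computation only — not the authors' implementation, and using a hypothetical toy linear model in place of XGBoost or TabNet — the exact Shapley attribution for a single instance can be sketched in plain Python:

```python
import itertools
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for one prediction.

    model    -- callable mapping a feature vector to a scalar prediction
    x        -- the instance to explain
    baseline -- reference values substituted for "absent" features
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                # Shapley kernel weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                # Marginal contribution of feature i to this coalition
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical toy linear model: f(v) = 2*v0 + 3*v1 - v2
model = lambda v: 2.0 * v[0] + 3.0 * v[1] - 1.0 * v[2]
phi = shapley_values(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
print(phi)  # ≈ [2.0, 3.0, -1.0]
```

For a linear model each attribution reduces to w_j · (x_j − baseline_j), which makes the output easy to check by hand; the shap library's TreeExplainer computes equivalent values efficiently for tree ensembles such as XGBoost rather than enumerating all coalitions.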
Author(s)
Hermosilla, Pamela
Pontificia Universidad Católica de Valparaíso, Chile
Díaz, Mauricio
Pontificia Universidad Católica de Valparaíso, Chile
Berríos, Sebastián
Pontificia Universidad Católica de Valparaíso, Chile
Allende-Cid, Héctor  
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Journal
Computers  
Project(s)
The Lamarr Institute for Machine Learning and Artificial Intelligence  
Funder
Bundesministerium für Bildung und Forschung (BMBF)
Open Access
File(s)
Download (5.76 MB)
Rights
CC BY 4.0: Creative Commons Attribution
DOI
10.3390/computers14050160
10.24406/publica-5119
Additional link
Full text
Language
English
Keyword(s)
  • Forensic Analysis
  • XAI
  • UNSW-NB15
  • SHAP
  • LIME
  • XGBoost
  • TabNet