Title: Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
Authors: Pamela Hermosilla; Sebastián Berríos; Héctor Allende-Cid
Type: Journal article
Date issued: 2025-06-30 (deposited: 2025-08-20)
License: CC BY 4.0
Handle: https://publica.fraunhofer.de/handle/publica/490813
DOI: 10.3390/app15137329; 10.24406/publica-5116 (https://doi.org/10.24406/publica-5116)
Language: English

Abstract: The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme Gradient Boosting (XGBoost) and Attentive Interpretable Tabular Learning (TabNet), using the UNSW-NB15 dataset. XGBoost achieved 97.8% validation accuracy and outperformed TabNet in explanation stability and global coherence. In addition to classification performance, we evaluate the fidelity, consistency, and forensic relevance of the explanations. The results confirm the complementary strengths of SHAP and LIME, supporting their combined use in building transparent, auditable, and trustworthy AI systems in digital forensic applications.

Keywords: explainable artificial intelligence (XAI); intrusion detection system (IDS); digital forensics; SHAP; LIME; interpretability evaluation
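
The abstract describes applying SHAP and LIME to an XGBoost classifier trained on UNSW-NB15. The following is a minimal sketch of that kind of workflow, not the authors' actual pipeline: the CSV path, the 'label' column name, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch: SHAP and LIME explanations for an XGBoost intrusion detector.
# Assumptions (not from the paper): dataset path, 'label' column, hyperparameters.
import pandas as pd
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

# Load a numeric feature table; UNSW-NB15 also contains categorical columns
# (proto, service, state) that a real pipeline would encode first.
df = pd.read_csv("UNSW_NB15_training-set.csv")  # assumed path
X = df.drop(columns=["label"]).select_dtypes("number")
y = df["label"]  # 0 = normal, 1 = attack
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Train the XGBoost classifier (illustrative hyperparameters).
model = xgb.XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))

# Global and local attributions with SHAP (TreeExplainer for tree ensembles).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_val)
shap.summary_plot(shap_values, X_val, show=False)  # global feature ranking

# Local, perturbation-based explanation of a single flow with LIME.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["normal", "attack"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_val.iloc[0].values, model.predict_proba, num_features=10
)
print(lime_exp.as_list())  # (feature condition, weight) pairs for this instance
```

For the TabNet model compared in the study, `shap.TreeExplainer` does not apply; a kernel- or gradient-based SHAP explainer would be needed, while the LIME call remains model-agnostic.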