Fraunhofer-Gesellschaft
September 11, 2025
Journal Article
Title

Enhancing Accountability, Resilience, and Privacy of Intelligent Networks with XAI

Abstract
In the rapidly evolving landscape of networking and security, the adoption of artificial intelligence (AI) is accelerating to meet the demands of real-time, data-driven applications. Current AI development processes predominantly prioritize model utility metrics such as accuracy, precision, and recall, often overlooking critical trustworthiness aspects like accountability, resilience to adversarial attacks, and privacy. To address this gap, we propose a novel AI/Machine Learning (ML) development process that systematically integrates trustworthiness metrics alongside traditional model utility measures. Our process emphasizes the iterative development of trustworthy AI models by balancing performance, accountability, resilience, and privacy through the incorporation of eXplainable AI (XAI) techniques. We validate the effectiveness of our methodology across four distinct networking and security use cases. In encrypted traffic classification, LightGBM emerges as the most practical model, offering a strong balance of utility, accountability, and robustness despite Neural Networks achieving the highest raw performance. For malware detection, feature reduction in the MalDoc model yields a minimal utility loss (<0.7%) while substantially enhancing resilience to evasion attacks (10-80%). In assessing privacy trade-offs in Federated Learning, we observe that although strong differential privacy significantly degrades utility (up to 70% on MNIST), it enables early-stage privacy protection without fully masking poisoned client behaviour, which remains detectable through SHAP and t-SNE-based analysis. Lastly, in a smart healthcare emergency e-call scenario, our 1D CNN model achieves not only strong predictive performance (96.21% accuracy, 91.55% precision, 93.13% recall) but also provides stable and interpretable explanations using LRP and SHAP, with LRP demonstrating higher consistency across ECG segments. 
Therefore, unlike prior studies that focus on isolated aspects such as accountability or resilience, our work proposes a holistic, quantifiable process that balances the trade-offs among model utility, accountability, resilience, and privacy to support the development of trustworthy AI models in communication systems.
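The utility/privacy trade-off noted in the abstract arises from the standard mechanism used in differentially private federated learning: each client's model update is clipped to a fixed norm and Gaussian noise calibrated to that bound is added before (or during) aggregation. The sketch below is a minimal, illustrative NumPy version of that aggregation step, not code from the paper; the function name and parameters (`clip_norm`, `noise_multiplier`) are assumptions chosen for clarity.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Differentially private federated averaging (Gaussian mechanism sketch).

    Each client update is clipped to L2 norm `clip_norm`, the clipped
    updates are averaged, and Gaussian noise scaled to the clipping
    bound is added to the average.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    average = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the sensitivity
    # (clip_norm) and inversely proportional to the number of clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return average + rng.normal(0.0, sigma, size=average.shape)
```

Raising `noise_multiplier` strengthens the privacy guarantee but degrades utility, which is the trade-off the abstract quantifies (up to a 70% utility drop on MNIST under strong differential privacy); notably, the noise does not fully mask a poisoned client's influence, which is why SHAP- and t-SNE-based analysis can still detect it.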
Author(s)
Senevirathna, Thulitha
Sandeepa, Chamara
Siniarski, Bartlomiej
Nguyen, Manh-Dung
Marchal, Samuel
Boerger, Michell  
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Liyanage, Madhusanka
Wang, Shen
Journal
IEEE Open Journal of the Communications Society  
Open Access
DOI
10.1109/OJCOMS.2025.3608784
Language
English
Keyword(s)
  • Artificial intelligence
  • Privacy
  • Resilience
  • Measurement
  • Iterative methods