November 8, 2024
Conference Paper
Title

Limitations of Feature Attribution in Long Text Classification of Standards

Abstract
Managing complex AI systems requires insight into a model's decision-making processes. Understanding how these systems arrive at their conclusions is essential for ensuring reliability. In the field of explainable natural language processing, many approaches have been developed and evaluated. However, experimental analysis of explainability for text classification has been largely constrained to short texts and binary classification. In this applied work, we study explainability for a real-world task where the goal is to assess the technological suitability of standards. This prototypical use case is characterized by long documents, technical language, and a multi-label setting, making it a complex modeling challenge. We provide an analysis of approximately 1,000 documents with human-annotated evidence. We then present experimental results with two explanation methods, evaluating the plausibility and runtime of explanations. We find that the average runtime for explanation generation is at least 5 minutes and that the model explanations do not overlap with the ground truth. These findings reveal limitations of current explanation methods. In a detailed discussion, we identify possible reasons and ways to address them along three dimensions: task, model, and explanation method. We conclude with risks and recommendations for the use of feature attribution methods in similar settings.
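The plausibility evaluation described in the abstract compares model explanations against human-annotated evidence. The Python snippet below is a minimal illustrative sketch, not the authors' implementation, of one common way such agreement can be quantified: select the top-k attributed tokens and compute token-level F1 against the annotated evidence tokens. The top-k heuristic, the function names, and the toy data are all assumptions introduced here for illustration.

# Illustrative sketch (assumed approach): plausibility as token-level overlap
# between an explanation's top-attributed tokens and human-annotated evidence.

from typing import List, Set


def top_k_tokens(attributions: List[float], k: int) -> Set[int]:
    """Return the indices of the k tokens with the highest attribution scores."""
    ranked = sorted(range(len(attributions)), key=lambda i: attributions[i], reverse=True)
    return set(ranked[:k])


def token_f1(predicted: Set[int], gold: Set[int]) -> float:
    """Token-level F1 between explanation tokens and annotated evidence tokens."""
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Toy example: a 12-token document with evidence annotated on tokens 3-6 and
# attribution scores produced by some explanation method (hypothetical values).
scores = [0.01, 0.02, 0.05, 0.40, 0.35, 0.30, 0.02, 0.01, 0.00, 0.03, 0.02, 0.01]
evidence = {3, 4, 5, 6}

explanation = top_k_tokens(scores, k=4)
print(f"Top-k tokens: {sorted(explanation)}  F1 vs. evidence: {token_f1(explanation, evidence):.2f}")

Under these assumptions, an F1 of 0 over the evaluation set would correspond to the finding reported above that model explanations do not overlap with the ground truth.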
Author(s)
Beckh, Katharina
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS
Rachel Jacob, Joann
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS
Seeliger, Adrian
Deutsches Institut für Normung e. V. (DIN)
Rüping, Stefan
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS
Nejad, Najmeh Mousavi
Mainwork
AAAI Fall Symposia 2024. Proceedings  
Project(s)
The Lamarr Institute for Machine Learning and Artificial Intelligence  
Funder
Bundesministerium für Bildung und Forschung -BMBF-  
Conference
Association for the Advancement of Artificial Intelligence (AAAI Fall Symposium) 2024  
DOI
10.1609/aaaiss.v4i1.31765
Language
English
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Keyword(s)
  • text classification
  • machine learning
  • language models
  • explainable machine learning
  • trustworthy machine learning
  • standards