2025
Journal Article
Title

Re-interpreting rules interpretability

Abstract
Trustworthy machine learning requires a high level of interpretability of machine learning models, yet many models are inherently black boxes. Training interpretable models instead, or using them to mimic the black-box model, seems like a viable solution. In practice, however, these interpretable models are still unintelligible due to their size and complexity. In this paper, we present an approach to explain the logic of large interpretable models that can be represented as sets of logical rules by a simple, and thus intelligible, descriptive model. The coarseness of this descriptive model and its fidelity to the original model can be controlled, so that a user can understand the original model in varying levels of depth. We showcase and discuss this approach on three real-world problems from healthcare, material science, and finance.
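The trade-off the abstract describes, summarizing a large rule set by a coarser descriptive model whose fidelity to the original can be measured, can be sketched in a few lines. This is a hypothetical illustration only, not the paper's actual algorithm: the toy rules, thresholds, and the prefix-based coarsening are invented for the example.

```python
import random

# Hypothetical toy setup: a "rule" is (conditions, label), where conditions
# is a list of (feature_index, threshold) pairs that must all be exceeded.
def make_model(rules, default=0):
    """Return a predictor that fires the first matching rule."""
    def predict(x):
        for conds, label in rules:
            if all(x[i] > t for i, t in conds):
                return label
        return default
    return predict

# A larger rule set standing in for an interpretable-but-unwieldy model.
full_rules = [
    ([(0, 0.9), (1, 0.9)], 1),
    ([(0, 0.6)], 1),
    ([(1, 0.8)], 1),
]
full_model = make_model(full_rules)

def fidelity(model, descriptive, samples):
    """Fraction of samples on which the two models agree."""
    return sum(model(x) == descriptive(x) for x in samples) / len(samples)

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(2000)]

# Coarser descriptive models: keep only the first k rules. Smaller k gives
# a simpler (more intelligible) model at the cost of lower fidelity.
for k in range(1, len(full_rules) + 1):
    coarse = make_model(full_rules[:k])
    print(k, round(fidelity(full_model, coarse, samples), 3))
```

Printing fidelity for each coarseness level shows the dial the abstract mentions: with all rules retained the descriptive model agrees with the original everywhere, and dropping rules degrades agreement gradually.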
Author(s)
Adilova, Linara  
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Kamp, Michael  
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Andrienko, Gennady
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Andrienko, Natalia
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Journal
International Journal of Data Science and Analytics
Open Access
DOI
10.1007/s41060-023-00398-5
Additional full text version
Landing Page
Language
English
Keyword(s)
  • Descriptive model
  • Generalization
  • Global explanation
  • Interpretability