Fraunhofer-Gesellschaft
2024
Editorial
Title

Guest Editorial: New Developments in Explainable and Interpretable Artificial Intelligence

Abstract
This special issue brings together seven articles that address different aspects of explainable and interpretable artificial intelligence (AI). Over the years, machine learning (ML) and AI models have demonstrated strong performance across many tasks. This has sparked interest in deploying these methods in critical applications such as health and finance. However, to be deployable in the field, ML and AI models must be trustworthy. Explainable and interpretable AI are two areas of research that have become increasingly important for ensuring the trustworthiness, and hence the deployability, of advanced AI and ML methods. Interpretable AI refers to models that obey domain-specific constraints so that they are more understandable to humans; in essence, they are not black-box models. Explainable AI, on the other hand, refers to methods that are typically used to explain the behavior of a separate black-box model.
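The distinction drawn in the abstract can be made concrete with a small sketch (not from the editorial itself; model names, weights, and features below are hypothetical): an interpretable model carries its explanation in its own structure, whereas a black-box model needs a separate, post-hoc explanation method such as perturbation-based feature attribution.

```python
def interpretable_model(features):
    """A linear scoring model: its weights themselves are the explanation
    (hypothetical weights for illustration only)."""
    weights = {"age": 0.2, "income": 0.5, "debt": -0.7}
    return sum(weights[name] * value for name, value in features.items())

def black_box(features):
    """Stand-in for an opaque model (e.g., a deep network) whose internal
    logic is not directly inspectable."""
    return features["age"] * features["income"] - features["debt"] ** 2

def explain_by_perturbation(model, features, delta=1.0):
    """Post-hoc explanation: estimate each feature's local influence by
    perturbing it and observing the change in the model's output."""
    base = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        influence[name] = model(perturbed) - base
    return influence

x = {"age": 30.0, "income": 4.0, "debt": 2.0}
print(explain_by_perturbation(black_box, x))
# {'age': 4.0, 'income': 30.0, 'debt': -5.0}
```

For `interpretable_model` no such machinery is needed: reading its weights already tells us how each feature contributes, which is the sense in which such models are "not black-box" in the abstract.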
Author(s)
Subbalakshmi, Koduvayur P.
Samek, Wojciech  
Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut HHI  
Hu, Xia Ben
Journal
IEEE Transactions on Artificial Intelligence
DOI
10.1109/TAI.2024.3356669
Language
English