2023
Journal Article
Title
Explainable AI for Bioinformatics: Methods, Tools and Applications
Abstract
Artificial intelligence (AI) systems utilizing deep neural networks and machine learning (ML) algorithms are widely used for solving critical problems in bioinformatics, biomedical informatics and precision medicine. However, complex ML models are often perceived as opaque, black-box methods, making it difficult to understand the reasoning behind their decisions. This lack of transparency is a challenge for end-users and decision-makers as well as for AI developers. In sensitive areas such as healthcare, explainability and accountability are not only desirable properties but also legally required for AI systems that can have a significant impact on human lives. Fairness is another growing concern: algorithmic decisions should not show bias or discrimination towards certain groups or individuals based on sensitive attributes. Explainable AI (XAI) aims to overcome the opaqueness of black-box models and to provide transparency in how AI systems make decisions. Interpretable ML models can explain how they make predictions and identify the factors that influence their outcomes. However, most state-of-the-art interpretable ML methods are domain-agnostic and have evolved from fields such as computer vision, automated reasoning or statistics, making direct application to bioinformatics problems difficult without customization and domain adaptation. In this paper, we discuss the importance of explainability and algorithmic transparency in the context of bioinformatics. We provide an overview of model-specific and model-agnostic interpretable ML methods and tools and outline their potential limitations. We discuss how existing interpretable ML methods can be customized and fitted to bioinformatics research problems. Further, through case studies in bioimaging, cancer genomics and text mining, we demonstrate how XAI methods can improve transparency and decision fairness. Our review aims to provide valuable insights and to serve as a starting point for researchers who want to enhance explainability and decision transparency while solving bioinformatics problems. GitHub: https://github.com/rezacsedu/XAI-for-bioinformatics.
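The abstract's distinction between model-specific and model-agnostic interpretable ML methods can be made concrete with a short sketch: a model-agnostic method treats the trained model as a black box and probes it only through its inputs and predictions. The Python example below illustrates one such method, permutation feature importance via scikit-learn, on a synthetic dataset standing in for, say, a gene-expression matrix; it is an editorial illustration under those assumptions, not code from the paper or its GitHub repository.

# Model-agnostic explanation via permutation feature importance.
# Editorial sketch: the synthetic data stands in for a real bioinformatics
# dataset (e.g., a gene-expression matrix); not taken from the paper's repo.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "expression matrix": 500 samples x 20 features, binary labels.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box classifier works; the explanation never inspects its internals.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in held-out
# accuracy; large drops mark features the predictions actually depend on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=30, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature_{i}: mean drop {result.importances_mean[i]:.3f} "
          f"(+/- {result.importances_std[i]:.3f})")

Because the method only permutes inputs and re-scores predictions, the same code applies unchanged whether the underlying model is a random forest, a gradient-boosted ensemble or a deep neural network.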
Author(s)
Karim, Md. Rezaul (Fraunhofer-Institut für Angewandte Informationstechnik FIT)
Islam, Tanhim
Shajalal, Md
Beyan, Oya
Lange, Christoph (Fraunhofer-Institut für Angewandte Informationstechnik FIT)
Cochez, Michael
Rebholz-Schuhmann, Dietrich
Decker, Stefan (Fraunhofer-Institut für Angewandte Informationstechnik FIT)
Journal
Briefings in Bioinformatics
DOI
10.1093/bib/bbad236
Language
English
Institute
Fraunhofer-Institut für Angewandte Informationstechnik FIT
Keyword(s)
  • bioinformatics
  • deep learning
  • explainable AI
  • interpretable machine learning
  • machine learning
  • NLP