Fraunhofer-Gesellschaft
2024
Conference Paper
Title

Algorithmic Fairness in Healthcare Data with Weighted Loss and Adversarial Learning

Abstract
Fairness with respect to sensitive or protected attributes such as race, gender, and age group is of great importance in the healthcare domain, and group fairness is considered one of the principal criteria. However, most prevailing mitigation techniques emphasize tuning the training algorithm while overlooking the fact that the training data itself may be the primary source of biased outcomes. In this work, we address two sensitive attributes (age group and gender) with empirical evaluations of systemic inflammatory response syndrome (SIRS) classification on a dataset extracted from electronic health records (EHRs), with the essential goal of improving equity in outcomes. Machine learning (ML)-based technologies are becoming increasingly prevalent in hospitals; our approach therefore responds to the demand for frameworks that account for performance trade-offs across sensitive patient attributes during model training and allow organizations to deploy their ML resources in ways that are aware of potential fairness and equity issues. We experiment with several strategies to reduce disparities in algorithmic performance with respect to gender and age group. Specifically, we combine a sample- and label-balancing technique using a weighted loss with adversarial learning on an observational cohort derived from EHRs to build a "fair" SIRS classification model with minimized discrepancy in error rates across groups. We show experimentally that our strategy can align the distribution of SIRS classification outcomes of models built from high-dimensional EHR data across several groups simultaneously.
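The record contains no code, so the following is only a minimal illustration of the sample- and label-balancing component ("weighted loss") described in the abstract. It assumes inverse group-frequency weighting, which is a common choice; the paper's exact weighting scheme may differ, the function names are ours, and the adversarial-learning branch of the method is not shown here.

```python
import math
from collections import Counter

def inverse_frequency_weights(groups):
    """Per-sample weights inversely proportional to group frequency, so that
    under-represented groups (e.g. a small age band) count equally in the loss.
    `groups` holds one sensitive-attribute value per sample."""
    counts = Counter(groups)
    n = len(groups)
    raw = [n / counts[g] for g in groups]
    mean = sum(raw) / n
    return [w / mean for w in raw]  # normalize so the average weight is 1

def weighted_bce(y_true, y_prob, weights, eps=1e-12):
    """Binary cross-entropy with per-sample weights: the 'weighted loss' that
    rebalances the classifier's objective across groups and labels."""
    total = 0.0
    for y, p, w in zip(y_true, y_prob, weights):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return -total / len(y_true)
```

In the full method, a loss of this form would be trained jointly with an adversary that tries to predict the sensitive attribute from the model's representations, penalizing the classifier when it succeeds.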
Author(s)
Das, Pronaya Prosun
Fraunhofer-Institut für Toxikologie und Experimentelle Medizin ITEM  
Mast, Marcel
Peter L. Reichertz Institut für Medizinische Informatik
Wiese, Lena
Fraunhofer-Institut für Toxikologie und Experimentelle Medizin ITEM  
Jack, Thomas
Hannover Medical School
Wulff, Antje
Peter L. Reichertz Institut für Medizinische Informatik
Mainwork
Lecture Notes in Networks and Systems
Funder
Bundesministerium für Gesundheit  
Conference
Intelligent Systems Conference, IntelliSys 2023
DOI
10.1007/978-3-031-47715-7_18
Language
English
Keyword(s)
  • Adversarial learning
  • Bias
  • EHR
  • Fairness
  • Healthcare
  • Neural networks
  • SIRS
