The Effect of Adversarial Debiasing on Model Performance
Götte, Gesa Marie
Conference paper, 2023 (record available 2024-01-11)
Handle: https://publica.fraunhofer.de/handle/publica/458650
DOI: 10.18420/inf2023_01
Scopus ID: 2-s2.0-85181075543
Language: en
Keywords: Debiasing; Fair AI

Abstract: This paper explores the effect of adversarial debiasing on the performance of machine learning models. As concerns about fairness in algorithmic decision-making grow, techniques for detecting and mitigating biases in ML models have been developed; however, fairness often comes at the cost of model performance. This study investigates the impact of adversarial debiasing on model performance across different scenarios of potential sampling biases and target distributions. Simulated data with varying structural and sampling parameters are used to evaluate the models' performance. The results show that while adversarial debiasing can lead to significant improvements in certain scenarios, it can also impair performance or make no significant difference compared to models trained without debiasing.
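
For context, the following is a minimal sketch of the general adversarial debiasing idea referenced in the abstract, not the paper's implementation: a predictor is trained on the main task while an adversary tries to recover a protected attribute from the predictor's output, and the predictor is penalized for whatever the adversary can recover. All data, network shapes, and the penalty weight alpha are illustrative assumptions.

# Adversarial debiasing sketch (illustrative only; assumes synthetic data and
# arbitrary hyperparameters, not the setup evaluated in the paper).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: features X, task labels y, and a binary protected attribute a.
n, d = 1000, 10
X = torch.randn(n, d)
a = (torch.rand(n) < 0.5).float()                               # protected attribute
y = ((X[:, 0] + 0.5 * a + 0.1 * torch.randn(n)) > 0).float()    # task label, correlated with a

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # strength of the fairness penalty (assumed value)

for epoch in range(200):
    # 1) Adversary step: predict the protected attribute from the task logit.
    logits = predictor(X).detach()
    adv_loss = bce(adversary(logits).squeeze(1), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Predictor step: minimize the task loss while maximizing the adversary's loss.
    logits = predictor(X)
    task_loss = bce(logits.squeeze(1), y)
    fair_penalty = bce(adversary(logits).squeeze(1), a)
    pred_loss = task_loss - alpha * fair_penalty
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()

The alternating updates with a sign flip on the adversary's loss are one common way to realize the underlying min-max objective; gradient-reversal layers are an alternative. The weight alpha governs the fairness-performance trade-off that the paper studies.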