Title: Perception of biases in machine learning in production research
Authors: Götte, Gesa Marie; Antons, Oliver; Herzog, Andreas; Arlinghaus, Julia C.
Type: conference paper
Dates: 2024-12-18; 2024-12-18; 2024-11-04
Handle: https://publica.fraunhofer.de/handle/publica/480844
DOI: 10.33968/2024.78
Language: en
Keywords: Responsible AI; Fair AI; Production; Bias; Bias in ML; AI in Production

Abstract: Factories are evolving into Cyber-Physical Production Systems that produce vast volumes of data, which can be leveraged with modern computational power. However, a careless integration of machine learning (ML) can lead to overly simplistic or false pattern extraction, i.e. biased ML applications. Especially when models are trained on big data, this poses a significant risk for ML deployment. Research has shown that sources of undesired bias exist throughout the entire ML life cycle and in the feedback loop between humans, data, and the ML model. In recent years, methods to detect, mitigate, and prevent such undesired biases in order to achieve "fair" ML solutions have been developed and consolidated into toolboxes. In this article, we use a structured literature review to address the underappreciated role of biases in ML for production applications and to highlight the ambiguity of the term "bias". The review emphasizes the need for research on ML biases in production and identifies the most relevant blind spots to date. Filling these blind spots with research and guidelines for incorporating bias screening, treatment, and risk assessment into the ML life cycle of industrial applications promises to enhance their robustness, resilience, and trustworthiness.