Title: Towards Trustworthy AI Engineering - A Case Study on integrating an AI audit catalog into MLOps processes
Type: conference paper
Authors: Helmer, Lennard; Martens, Claudio; Wegener, Dennis; Akila, Maram; Becker, Daniel; Abbas, Sermad
License: CC BY 4.0
Date available: 2024-07-30
Date issued: 2024-07-29
Handle: https://publica.fraunhofer.de/handle/publica/472283
DOI: https://doi.org/10.24406/publica-3479
DOI: 10.1145/3643691.3648584
Language: en
Keywords: MLOps; Machine Learning; Engineering; Trustworthy AI; Software Engineering; Development; Case study
DDC: 000 Computer science, information and general works

Abstract: In recent years, Machine Learning Operations (MLOps) has become increasingly important as more and more Machine Learning (ML) based applications are brought into production. With this widespread adoption, attention must be paid to an application's trustworthiness. Numerous methods and tools have already been developed in the area of trustworthy AI. However, their integration into the MLOps cycle, and in particular into the pipeline engineering process, is still missing. To address this open problem, we analysed an AI audit catalog and translated the respective requirements into a healthcare IT service provider's MLOps process. In this work, we describe the translation process and present the insights obtained via a case study. Our work highlights the necessary considerations for professionals and the scientific community when dealing with similar challenges in trustworthy AI engineering and operations, and provides clear recommendations.