Rights: Under Copyright
Authors: Beckert, Bernd; Hoxha, Julia; Kellmeyer, Philipp; Philipp, Patrick
Date available: 2024-09-18
Publication year: 2023
Handle: https://publica.fraunhofer.de/handle/publica/475214
DOI: https://doi.org/10.24406/publica-3667

Abstract:
AI promises efficiency gains and innovations in the health sector. In the future, AI-based apps will be able to recognise symptoms of illness and warn patients before an emergency occurs, and doctors will be able to have medical data, such as EEG data, interpreted in an automated way, or to make therapy recommendations based on a large number of studies that are automatically evaluated by the AI. But what about the reliability of such applications? How robust is the data being used for training? What about transparency and explainability? Do the data reinforce established patterns of discrimination? What happens when the system "learns" new patterns based on new data and changes its output? Questions like these are being discussed under the heading of "Trustworthy AI". The concept, which was developed by a high-level expert group of the EU in 2019, comprises seven dimensions ranging from "human oversight" to "accountability". Since the concept was formulated, there have been many attempts to translate it into practical guidelines to make it applicable to the development and implementation of AI. However, practical examples and best practices are still rare. This contribution presents three exemplary implementations of AI in the health sector: a chatbot app used as a digital patient companion (1), a diagnostic pattern recognition system (2), and a clinical decision support system (3).

Language: en
Keywords: Trustworthy AI; implementation; AI-based health applications; AI chatbot; diagnostic pattern recognition; clinical decision support; empirical analysis of implementation; best practices; MDR; AI Act
Title: Trustworthy AI in the health sector: Challenges and solutions illustrated by three real-world examples
Type: presentation