Publication

Guideline for Designing Trustworthy Artificial Intelligence

2023-02; Poretschkin, Maximilian; Schmitz, Anna; Akila, Maram; Adilova, Linara; Becker, Daniel; Cremers, Armin B.; Hecker, Dirk; Houben, Sebastian; Mock, Michael; Rosenzweig, Julia; Sicking, Joachim; Schulz, Elena; Voß, Angelika; Wrobel, Stefan; Loh, Silke; Stolberg, Evelyn; Tomala, Annette Daria

Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society. Prominent use cases include applications in medical diagnostics, predictive maintenance and, in the future, autonomous driving. However, it is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards and are effectively protected against new AI risks. For instance, AI bears the risk of unfair treatment of individuals when processing personal data, e.g., to support credit lending or staff recruitment decisions. Serious false predictions resulting from minor disturbances in the input data are another example - for instance, when pedestrians are not detected by an autonomous vehicle due to image noise. The emergence of these new risks is closely linked to the fact that the process for developing AI applications, particularly those based on Machine Learning (ML), strongly differs from that of conventional software. This is because the behavior of AI applications is essentially learned from large volumes of data and is not predetermined by fixed programmed rules.
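To make the perturbation risk tangible, the following Python sketch (a purely illustrative assumption, not material from the guideline) fits a toy classifier whose parameters are learned from synthetic data rather than fixed rules, and shows that a prediction close to the decision boundary can flip when a small amount of noise is added to the input.

```python
# Minimal sketch (hypothetical, not from the guideline): a toy linear
# classifier whose decision can flip under small input noise, illustrating
# why perturbation robustness is audited for ML-based systems.
import numpy as np

rng = np.random.default_rng(0)

# Toy "learned" decision rule: weights estimated from synthetic data
# rather than predetermined by fixed programmed rules.
X = rng.normal(size=(200, 32))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w = np.linalg.lstsq(X, y - 0.5, rcond=None)[0]  # crude least-squares fit

def predict(x):
    """Return 1 ('pedestrian') if the learned score is positive."""
    return int(x @ w > 0)

# A clean input that lies close to the decision boundary ...
x_clean = X[np.argmin(np.abs(X @ w))]
# ... and the same input with small additive noise ("image noise").
x_noisy = x_clean + 0.05 * rng.normal(size=x_clean.shape)

print("clean prediction:", predict(x_clean))
print("noisy prediction:", predict(x_noisy))  # may differ from the clean one
```

In an actual audit, such sensitivity checks would be run against the deployed model and realistic noise models; the toy example only stands in for that idea.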

Publication

Trustworthy Use of Artificial Intelligence

2019-07; Cremers, Armin B.; Englander, Alex; Gabriel, Markus; Hecker, Dirk; Mock, Michael; Poretschkin, Maximilian; Rosenzweig, Julia; Rostalski, Frauke; Sicking, Joachim; Volmer, Julia; Voosholz, Jan; Voß, Angelika; Wrobel, Stefan

This publication forms a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it highlights the resulting challenges, which can be tackled only through interdisciplinary dialog between IT, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific audit areas for the trustworthy use of artificial intelligence: fairness, transparency, autonomy and control, data protection, as well as security and reliability, each addressing ethical and legal requirements. The latter are further substantiated with the aim of making them operationalizable.

Publication

Data Ecosystems: A New Dimension of Value Creation Using AI and Machine Learning

2022-07-22; Hecker, Dirk; Voß, Angelika; Wrobel, Stefan

Machine learning and artificial intelligence have become crucial factors for the competitiveness of individual companies and entire economies. Yet their successful deployment requires access to a large volume of training data that is often not available even to the largest corporations. The rise of trustworthy federated digital ecosystems will significantly improve data availability for all participants and will thus allow a quantum leap in the widespread adoption of artificial intelligence by companies of all sizes and in all sectors of the economy. In this chapter, we will explain how AI systems are built with data science and machine learning principles and describe how this leads to AI platforms. We will detail the principles of distributed learning, which are a perfect match for the principles of distributed data ecosystems, and discuss how trust, as a central value proposition of modern ecosystems, carries over to creating trustworthy AI systems.
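As a rough illustration of the distributed-learning principle the chapter refers to, here is a minimal Python sketch (a hypothetical example, not the chapter's implementation) of federated averaging: each participant fits a model on its private data, and only the model parameters are shared and aggregated.

```python
# Minimal sketch (illustrative assumption, not the chapter's implementation):
# federated averaging, where each data holder fits a local model and only the
# parameters -- not the raw training data -- are shared and aggregated.
import numpy as np

rng = np.random.default_rng(1)

def local_fit(X, y):
    """Least-squares fit on one participant's private data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three participants in a federated data ecosystem, each with private data
# drawn from the same underlying relationship y = X @ w_true + noise.
w_true = rng.normal(size=8)
local_weights, sizes = [], []
for _ in range(3):
    X = rng.normal(size=(100, 8))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    local_weights.append(local_fit(X, y))
    sizes.append(len(y))

# Aggregation step: the shared model is the data-size-weighted average of
# the local parameters; no raw records leave any participant.
w_global = np.average(local_weights, axis=0, weights=sizes)
print("error of federated model:", np.linalg.norm(w_global - w_true))
```

Weighting the aggregate by local dataset size is the standard federated-averaging choice; in practice the fit-and-aggregate step is iterated over many rounds, which this sketch omits.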

Publication

Vertrauenswürdiger Einsatz von Künstlicher Intelligenz

2019; Cremers, Armin B.; Englander, Alex; Gabriel, Markus; Hecker, Dirk; Mock, Michael; Poretschkin, Maximilian; Rosenzweig, Julia; Rostalski, Frauke; Sicking, Joachim; Volmer, Julia; Voosholz, Jan; Voß, Angelika; Wrobel, Stefan

This publication serves as a basis for the interdisciplinary development of a certification of Artificial Intelligence. In view of the rapid development of Artificial Intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it makes clear that the resulting challenges can be mastered only in an interdisciplinary dialog between computer science, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific fields of action for the trustworthy use of Artificial Intelligence: they comprise fairness, transparency, autonomy and control, data protection, as well as security and reliability, and in doing so address ethical and legal requirements. The latter are further substantiated with the aim of making them operationalizable.

Publication

Leitfaden zur Gestaltung vertrauenswürdiger Künstlicher Intelligenz (KI-Prüfkatalog)

2021; Poretschkin, Maximilian; Schmitz, Anna; Akila, Maram; Adilova, Linara; Becker, Daniel; Cremers, Armin B.; Hecker, Dirk; Houben, Sebastian; Mock, Michael; Rosenzweig, Julia; Sicking, Joachim; Schulz, Elena; Voß, Angelika; Wrobel, Stefan