  • Publication
    Guideline for Designing Trustworthy Artificial Intelligence
    (Fraunhofer IAIS, 2023-02)
    Cremers, Armin B.; Houben, Sebastian; Sicking, Joachim; Loh, Silke; Stolberg, Evelyn; Tomala, Annette Daria
    Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society. Prominent use cases include applications in medical diagnostics, predictive maintenance and, in the future, autonomous driving. However, it is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards and are effectively protected against new AI risks. For instance, AI bears the risk of unfair treatment of individuals when processing personal data, e.g., to support credit lending or staff recruitment decisions. Serious false predictions resulting from minor disturbances in the input data are another example - for instance, when pedestrians are not detected by an autonomous vehicle due to image noise. The emergence of these new risks is closely linked to the fact that the process for developing AI applications, particularly those based on Machine Learning (ML), strongly differs from that of conventional software. This is because the behavior of AI applications is essentially learned from large volumes of data and is not predetermined by fixed programmed rules.
  • Publication
    Trustworthy Use of Artificial Intelligence
    (2019-07)
    Cremers, Armin B.; Englander, Alex; Gabriel, Markus; Rostalski, Frauke; Sicking, Joachim; Volmer, Julia; Voosholz, Jan
    This publication forms a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it highlights the resulting challenges that can be tackled only through interdisciplinary dialog between IT, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific audit areas for the trustworthy use of artificial intelligence: fairness, transparency, autonomy and control, data protection, as well as security and reliability, while addressing ethical and legal requirements. The latter are further substantiated with the aim of making them operationalizable.