  • Publication
    Guideline for Designing Trustworthy Artificial Intelligence
    (Fraunhofer IAIS, 2023-02)
    Cremers, Armin B.; Houben, Sebastian; Sicking, Joachim; Loh, Silke; Stolberg, Evelyn; Tomala, Annette Daria
    Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society. Prominent use cases include applications in medical diagnostics, predictive maintenance and, in the future, autonomous driving. However, it is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards and are effectively protected against new AI risks. For instance, AI bears the risk of unfair treatment of individuals when processing personal data, e.g., to support credit lending or staff recruitment decisions. Serious false predictions resulting from minor disturbances in the input data are another example, for instance when pedestrians are not detected by an autonomous vehicle due to image noise. The emergence of these new risks is closely linked to the fact that the process for developing AI applications, particularly those based on Machine Learning (ML), strongly differs from that of conventional software. This is because the behavior of AI applications is essentially learned from large volumes of data and is not predetermined by fixed programmed rules.
  • Publication
    Wasserstein Dropout
    (2022-09-08)
    Sicking, Joachim; Pintz, Maximilian Alexander; Fischer, Asja
    Despite its importance for safe machine learning, uncertainty quantification for neural networks is far from solved. State-of-the-art approaches to estimating neural uncertainties are often hybrid, combining parametric models with explicit or implicit (dropout-based) ensembling. We take another pathway and propose Wasserstein dropout, a novel, purely non-parametric approach to uncertainty quantification for regression tasks. Technically, it captures aleatoric uncertainty by means of dropout-based sub-network distributions. This is accomplished by a new objective which minimizes the Wasserstein distance between the label distribution and the model distribution. An extensive empirical analysis shows that Wasserstein dropout outperforms state-of-the-art methods, on vanilla test data as well as under distributional shift, in terms of producing more accurate and stable uncertainty estimates.
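    The central quantity in the objective above is the Wasserstein distance between a label distribution and the distribution of dropout sub-network predictions. For 1-D empirical distributions with equal sample counts, the squared 2-Wasserstein distance reduces to sorting both sample sets and averaging squared differences of the order statistics. The sketch below illustrates only this generic building block, not the paper's exact training objective; the function name and the stand-in "dropout predictions" are illustrative assumptions.

    ```python
    import numpy as np

    def empirical_w2(samples_a, samples_b):
        """Squared 2-Wasserstein distance between two 1-D empirical
        distributions with equal sample counts: sort both sample sets
        and average the squared differences of their order statistics."""
        a = np.sort(np.asarray(samples_a, dtype=float))
        b = np.sort(np.asarray(samples_b, dtype=float))
        assert a.shape == b.shape, "equal sample counts required"
        return float(np.mean((a - b) ** 2))

    # Hypothetical example: K dropout forward passes for one input
    # (random numbers standing in for sub-network outputs), compared
    # against K draws from the label distribution.
    rng = np.random.default_rng(0)
    dropout_preds = rng.normal(loc=1.0, scale=0.5, size=64)
    label_samples = rng.normal(loc=1.0, scale=0.5, size=64)
    loss = empirical_w2(dropout_preds, label_samples)
    ```

    In a training loop, a differentiable version of this quantity (or a moment-based approximation of it) would serve as the per-sample loss driving the dropout sub-networks toward the label distribution.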
  • Publication
    A Novel Regression Loss for Non-Parametric Uncertainty Optimization
    (2021)
    Sicking, Joachim; Pintz, Maximilian; Fischer, Asja
    Quantification of uncertainty is one of the most promising approaches to establish safe machine learning. Despite its importance, it is far from being generally solved, especially for neural networks. One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice. However, it can underestimate the uncertainty. We propose a new objective, referred to as second-moment loss (SML), to address this issue. While the full network is encouraged to model the mean, the dropout networks are explicitly used to optimize the model variance. We intensively study the performance of the new objective on various UCI regression datasets. Compared to state-of-the-art deep ensembles, SML leads to comparable prediction accuracies and uncertainty estimates while requiring only a single model. Under distribution shift, we observe moderate improvements. As a side result, we introduce an intuitive Wasserstein-distance-based uncertainty measure that is non-saturating and thus makes it possible to resolve quality differences between any two uncertainty estimates.
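    The division of labor described above (full network fits the mean, dropout sub-networks fit the variance) can be sketched as a two-term loss. This is a hedged, simplified reading rather than the paper's exact formula: the full prediction incurs a squared error against the label, while the spread of the dropout predictions around the full prediction is fit to the residual. Function and argument names are illustrative.

    ```python
    import numpy as np

    def second_moment_loss(y, full_pred, dropout_preds):
        """Simplified second-moment-style objective (illustrative, not
        the paper's exact formula): the full network's prediction is
        fit to the label, while the deviation of the dropout sub-network
        predictions from the full prediction is fit to the residual
        |y - full_pred|, so dropout spread tracks predictive error."""
        dropout_preds = np.asarray(dropout_preds, dtype=float)
        mean_term = (full_pred - y) ** 2
        residual = abs(y - full_pred)
        spread_term = np.mean((np.abs(dropout_preds - full_pred) - residual) ** 2)
        return float(mean_term + spread_term)
    ```

    Under this reading, a perfectly calibrated model has zero loss exactly when its full prediction hits the label and its dropout spread matches the remaining error.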
  • Publication
    Trustworthy Use of Artificial Intelligence
    (2019-07)
    Cremers, Armin B.; Englander, Alex; Gabriel, Markus; Rostalski, Frauke; Sicking, Joachim; Volmer, Julia; Voosholz, Jan
    This publication forms a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it highlights the resulting challenges, which can be tackled only through interdisciplinary dialog between IT, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific audit areas for the trustworthy use of artificial intelligence. They comprise fairness, transparency, autonomy and control, data protection, as well as security and reliability, while addressing ethical and legal requirements. The latter are further substantiated with the aim of operationalizability.
  • Publication
    Vertrauenswürdiger Einsatz von Künstlicher Intelligenz
    (Fraunhofer IAIS, 2019)
    Cremers, Armin B.; Englander, Alex; Gabriel, Markus; Rostalski, Frauke; Sicking, Joachim; Voosholz, Jan
    This publication serves as a basis for the interdisciplinary development of a certification scheme for Artificial Intelligence. In view of the rapid development of Artificial Intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it makes clear that the resulting challenges can only be mastered through interdisciplinary dialog between computer science, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific fields of action for the trustworthy use of Artificial Intelligence: they comprise fairness, transparency, autonomy and control, data protection, as well as security and reliability, and in doing so address ethical and legal requirements. The latter are further concretized with the aim of operationalizability.
  • Publication
    Efficient Decentralized Deep Learning by Dynamic Model Averaging
    (2019)
    Sicking, Joachim; Hüger, Fabian; Schlicht, Peter
    We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources. The proposed protocol handles different phases of model training equally well and quickly adapts to concept drifts. This leads to a reduction of communication by an order of magnitude compared to periodically communicating state-of-the-art approaches. Moreover, we derive a communication bound that scales well with the hardness of the serialized learning problem. The reduction in communication comes at almost no cost, as the predictive performance remains virtually unchanged; indeed, the proposed protocol retains the loss bounds of periodically averaging schemes. An extensive empirical evaluation confirms a major improvement in the trade-off between model performance and communication, which could benefit numerous decentralized learning applications, such as autonomous driving, or voice recognition and image classification on mobile phones.
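    The communication savings described above come from averaging models dynamically rather than periodically: nodes keep training locally and only synchronize when their parameters have drifted far enough from the last jointly agreed model. The sketch below is a minimal illustration of that idea under simplifying assumptions (parameters as flat numpy vectors, a mean-squared-drift trigger); the function name, the drift criterion, and the threshold are illustrative, not the paper's exact protocol.

    ```python
    import numpy as np

    def dynamic_averaging_round(local_models, reference, threshold):
        """One round of a dynamic-averaging sketch: each node measures
        how far its parameters have drifted from the last shared
        reference model; only if the average squared drift exceeds the
        threshold do the nodes communicate, average their parameters,
        and adopt the average as the new reference."""
        local_models = [np.asarray(m, dtype=float) for m in local_models]
        drift = np.mean([np.sum((m - reference) ** 2) for m in local_models])
        if drift <= threshold:
            # quiet round: keep training locally, no communication
            return local_models, reference, False
        avg = np.mean(local_models, axis=0)
        return [avg.copy() for _ in local_models], avg, True
    ```

    With a well-chosen threshold, most rounds are quiet, which is where the order-of-magnitude communication reduction over fixed-period averaging would come from.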