  • Publication
    The why and how of trustworthy AI
    Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, there is a need for criteria to validate whether the quality of AI applications is sufficient for their intended use. In both the academic community and the societal debate, agreement has emerged, under the term “trustworthiness”, on the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized remains largely open. In this paper, we consider trustworthy AI from two perspectives: the product perspective and the organizational perspective. For the former, we present an AI-specific risk analysis and outline how verifiable arguments for the trustworthiness of an AI application can be developed. For the latter, we explore how an AI management system can be employed to assure the trustworthiness of an organization with respect to its handling of AI. Finally, we argue that achieving AI trustworthiness requires coordinated measures from both the product and the organizational perspective.
  • Publication
    Wie Computer Sprachen lernen (How Computers Learn Languages)
    ( 2022-08-05)
    Paass, Gerhard
    Whether voice assistants, chatbots, or the automatic analysis of documents: the rapid developments in AI have made language technologies ubiquitous. But how does AI manage to grasp the subtleties of human language?
  • Publication
    Data Ecosystems: A New Dimension of Value Creation Using AI and Machine Learning
    Machine learning and artificial intelligence have become crucial factors for the competitiveness of individual companies and entire economies. Yet their successful deployment requires access to a volume of training data that is often not available even to the largest corporations. The rise of trustworthy federated digital ecosystems will significantly improve data availability for all participants and thus enable a quantum leap in the adoption of artificial intelligence by companies of all sizes and in all sectors of the economy. In this chapter, we explain how AI systems are built on data science and machine learning principles and describe how this leads to AI platforms. We detail the principles of distributed learning, which are a perfect match for the principles of distributed data ecosystems, and discuss how trust, as a central value proposition of modern ecosystems, carries over to creating trustworthy AI systems.
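    The distributed-learning principle mentioned here can be illustrated with a minimal federated-averaging (FedAvg) sketch, assuming a toy linear-regression setting: each participant trains on its private data and shares only model weights, which a coordinator aggregates. All names, the model, and the data below are illustrative assumptions, not taken from the chapter.

    ```python
    # Minimal FedAvg sketch (illustrative): participants share weights, not data.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One participant: a few gradient steps on private data (linear model)."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        return w

    def fedavg_round(global_w, clients):
        """Coordinator: average locally updated weights, weighted by data size."""
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(updates, axis=0, weights=sizes / sizes.sum())

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):  # three participants; raw data never leaves its owner
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

    w = np.zeros(2)
    for _ in range(20):
        w = fedavg_round(w, clients)
    print(w)  # approaches [2, -1] without pooling any raw data
    ```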
  • Publication
    Assurance Methodology for In-vehicle AI
    ( 2022-07-08)
    Blank, Frédérik; Hüger, Fabian; Stauner, Thomas
    The application of AI is a key enabler for highly automated driving. Initiated by the VDA, a consortium of OEMs, suppliers, technology providers, and scientific institutions is developing, in the project “KI Absicherung” (safe AI), a methodology for a novel safety argumentation that systematically identifies insufficiencies of AI-based functions, makes them measurable, and mitigates them. The project stems from the “VDA-Leitinitiative” (flagship initiative). The aim is an industry consensus on a methodical approach, which is demonstrated using the example of pedestrian detection.
  • Publication
    Methodik zur Absicherung von KI im Fahrzeug (Assurance Methodology for In-vehicle AI)
    ( 2022-07-08)
    Blank, Frédérik; Hüger, Fabian; Stauner, Thomas
    The use of AI is a key element on the road to automated driving. Initiated by the VDA, a consortium of OEMs, suppliers, technology providers, and research institutions is developing a methodology for a novel safety argumentation that systematically identifies, measures, and mitigates weaknesses of AI-based functions. The project “KI Absicherung”, which emerged from the VDA-Leitinitiative, is intended to establish an industry consensus on a methodical approach, demonstrated using the example of pedestrian detection.
  • Publication
    Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
    ( 2022-06-18)
    Houben, Sebastian; Albrecht, Stefanie; Bär, Andreas; Brockherde, Felix; Feifel, Patrick; Fingscheidt, Tim; Ghobadi, Seyed Eghbal; Hammam, Ahmed; Haselhoff, Anselm; Hauser, Felix; Heinzemann, Christian; Hoffmann, Marco; Kapoor, Nikhil; Kappel, Falk; Klingner, Marvin; Kronenberger, Jan; Küppers, Fabian; Löhdefink, Jonas; Mlynarski, Michael; Mualla, Firas; Pavlitskaya, Svetlana; Pohl, Alexander; Ravi-Kumar, Varun; Rottmann, Matthias; Sämann, Timo; Schneider, Jan David; Schulz, Elena; Schwalbe, Gesina; Srivastava, Toshika; Varghese, Serin; Weber, Michael; Wirkert, Sebastian; Woehrle, Matthias
    Deploying modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse, ranging from a lack of generalization and insufficient interpretability to implausible predictions and directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns: properties that preclude their deployment, as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies and then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: the former may profit from the broad range of machine learning topics covered and from the discussions of the limitations of recent methods, while the latter may gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and on strategies for advancing existing approaches accordingly.
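    One of the safety concerns cataloged above, directed attacks by means of malicious inputs, can be made concrete with a minimal sketch of the classic fast gradient sign method (FGSM). The tiny network and random input are placeholders standing in for a real perception model; this is an illustration, not code from the chapter.

    ```python
    # FGSM sketch: a one-step, gradient-directed perturbation of the input.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 10, requires_grad=True)  # benign input (placeholder)
    y = torch.tensor([0])                       # its assumed true label

    loss = loss_fn(model(x), y)
    loss.backward()                             # gradient w.r.t. the input

    epsilon = 0.1                               # attack budget
    x_adv = x + epsilon * x.grad.sign()         # malicious input

    # If the prediction flips on x_adv, a small directed perturbation has
    # changed the output -- the kind of insufficiency the survey catalogs.
    print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
    ```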
  • Publication
    ScrutinAI: A Visual Analytics Approach for the Semantic Analysis of Deep Neural Network Predictions
    ( 2022-06-02)
    Haedecke, Elena Gina
    We present ScrutinAI, a visual analytics approach that exploits semantic understanding for the analysis of deep neural network (DNN) predictions, focusing on models for object detection and semantic segmentation. Typical fields of application for such models, e.g. autonomous driving or healthcare, have a high demand for detecting and mitigating data- and model-inherent shortcomings. Our approach aims to help analysts use their semantic understanding to identify and investigate potential weaknesses in DNN models. ScrutinAI therefore includes interactive visualizations of the model's inputs and outputs, interactive plots with linked brushing, and data filtering with textual queries on descriptive metadata. The tool fosters hypothesis-driven knowledge generation, which aids in understanding the model's inner reasoning. Insights gained during the analysis process mitigate the "black-box" character of the DNN and thus support model improvement and the construction of a safety argumentation for AI applications. We present a case study on the investigation of DNN models for pedestrian detection from the automotive domain.
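    The textual-query filtering over descriptive metadata that the abstract mentions can be approximated, purely for illustration, with a small pandas sketch; all column names and values below are invented, and this is not ScrutinAI's actual interface.

    ```python
    # Illustrative stand-in for metadata filtering during weakness analysis.
    import pandas as pd

    detections = pd.DataFrame({
        "image_id":   [1, 1, 2, 3, 3],
        "class":      ["pedestrian", "car", "pedestrian", "pedestrian", "car"],
        "confidence": [0.91, 0.88, 0.42, 0.67, 0.95],
        "occlusion":  ["none", "none", "heavy", "partial", "none"],
        "matched_gt": [True, True, False, True, True],
    })

    # Analyst hypothesis: the model misses heavily occluded pedestrians.
    suspects = detections.query(
        "`class` == 'pedestrian' and occlusion == 'heavy' and not matched_gt"
    )
    print(suspects)  # candidate failure cases to inspect visually
    ```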
  • Publication
    Safety Assurance of Machine Learning for Perception Functions
    ( 2022-06)
    Hellert, Christian; Hüger, Fabian; Rohatschek, Andreas
    The latest generation of safety standards applicable to automated driving systems requires both qualitative and quantitative safety acceptance criteria to be defined and argued. At the same time, the use of machine learning (ML) functions is increasingly seen as a prerequisite to achieving the necessary levels of perception performance in the complex operating environments of these functions. This inevitably leads to the question of which supporting evidence must be presented to demonstrate the safety of ML-based automated driving systems. This chapter discusses the challenge of deriving suitable acceptance criteria for the ML function and describes how such evidence can be structured to support a convincing safety assurance case for the system. In particular, we show how a combination of methods can be used to estimate the overall machine learning performance, as well as to evaluate and reduce the impact of ML-specific insufficiencies, both during design and operation.
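    As a hedged illustration of a quantitative acceptance criterion of the kind discussed here: the sketch below checks whether a pedestrian miss rate is demonstrably below a target on a finite test set, using a one-sided Clopper-Pearson bound. The numbers and the criterion itself are assumptions for illustration, not values from the chapter.

    ```python
    # Checking a (hypothetical) quantitative acceptance criterion with a
    # one-sided 95% Clopper-Pearson upper bound on the true miss rate.
    from scipy.stats import beta

    misses, trials = 12, 10_000      # missed detections on a test set (made up)
    target_miss_rate = 0.002         # hypothetical acceptance criterion

    upper = beta.ppf(0.95, misses + 1, trials - misses)

    print(f"observed {misses / trials:.4f}, 95% upper bound {upper:.4f}")
    print("criterion met" if upper <= target_miss_rate else "criterion not met")
    ```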
  • Publication
    SEC-Learn: Sensor Edge Cloud for Federated Learning
    ( 2022-04-27)
    Antes, Christoph; Johnson, David S.; Jung, Matthias; Kutter, Christoph; Loroch, Dominik M.; Laleni, Nelli; Leugering, Johannes; Martín Fernández, Rodrigo; Mateu, Loreto; Mojumder, Shaown; Wallbott, Paul
    Due to the slowdown of Moore’s Law and Dennard scaling, new, disruptive computer architectures are needed. One such approach is neuromorphic computing, which is inspired by the functionality of the human brain. In this position paper, we present the planned SEC-Learn ecosystem, which combines neuromorphic embedded architectures with federated learning in the cloud, and combines performance with data protection and energy efficiency.
  • Publication
    A generalized Weisfeiler-Lehman graph kernel
    ( 2022-04-27)
    Schulz, Till Hendrik; Welke, Pascal
    After more than a decade, Weisfeiler-Lehman graph kernels are still among the most prevalent graph kernels due to their remarkable predictive performance and low time complexity. They are based on a fast iterative partitioning (relabeling) of vertices, originally designed for deciding graph isomorphism with one-sided error. The Weisfeiler-Lehman graph kernels retain this idea and compare the resulting vertex labels with respect to equality. This binary-valued comparison is, however, arguably too rigid for defining suitable graph kernels for certain graph classes. To overcome this limitation, we propose a generalization of Weisfeiler-Lehman graph kernels that takes into account a more natural, finer-grained notion of similarity between Weisfeiler-Lehman labels than equality. We show that the proposed similarity can be calculated efficiently by means of the Wasserstein distance between certain vectors representing Weisfeiler-Lehman labels. This and other facts give rise to the natural choice of partitioning the vertices with the Wasserstein k-means algorithm. We empirically demonstrate on the Weisfeiler-Lehman subtree kernel, one of the most prominent Weisfeiler-Lehman graph kernels, that our generalization significantly outperforms it and other state-of-the-art graph kernels in terms of predictive performance on datasets containing structurally more complex graphs beyond the typically considered molecular graphs.
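    The core idea can be sketched in a few lines: one classical Weisfeiler-Lehman relabeling step, plus a graded label comparison via the Wasserstein distance instead of the binary equality test. As a simplification, each label is represented here by the multiset of its neighbors' integer labels, a stand-in for the paper's vector representation of labels; this is not the authors' implementation.

    ```python
    # Sketch: classical WL relabeling plus a graded (Wasserstein) comparison.
    from scipy.stats import wasserstein_distance

    def wl_step(adj, labels):
        """One WL iteration: hash each vertex's label together with the sorted
        multiset of its neighbors' labels into a new compressed integer label."""
        sig = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v]))) for v in adj}
        table = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        return {v: table[sig[v]] for v in adj}

    def soft_label_distance(adj, labels, v, w):
        """Graded alternative to the binary equality test: Wasserstein distance
        between the neighbor-label multisets of v and w (simplified)."""
        return wasserstein_distance(
            [labels[u] for u in adj[v]], [labels[u] for u in adj[w]]
        )

    # Toy graph: a path 0-1-2-3 with uniform initial labels.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    labels = wl_step(adj, {v: 0 for v in adj})  # splits endpoints from inner vertices
    print(labels)
    print(soft_label_distance(adj, labels, 0, 1))  # a degree of similarity, not 0/1
    ```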