  • Patent
    Method and Devices for Automatic, Cooperative Maneuvers
    (2023-03-23)
    Häfner, Bernhard; Schepker, Henning F.; Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.; Bayerische Motoren Werke AG -BMW-, München
    Disclosed is a method comprising the steps of: obtaining joint environment information for at least two machines; generating, on the basis of the environment information, a plurality of cooperative maneuvers, each comprising a maneuver for each of the two machines; evaluating each cooperative maneuver against a predefined quality criterion; selecting a cooperative maneuver that attains a predetermined value of the quality criterion, in particular the best value; and providing the selected cooperative maneuver to the two machines.
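    A minimal sketch of the generate-evaluate-select scheme the claim describes; the maneuver representation, generator, and quality criterion below are hypothetical stand-ins, not the patented method:

    ```python
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class CooperativeManeuver:
        # One planned maneuver per machine, keyed by machine id
        # (hypothetical representation; the claim fixes no data type).
        maneuvers: Dict[str, object]

    def select_cooperative_maneuver(
        environment: dict,
        generate: Callable[[dict], List[CooperativeManeuver]],
        quality: Callable[[CooperativeManeuver], float],
    ) -> CooperativeManeuver:
        # Generate a plurality of candidate cooperative maneuvers from the
        # shared environment information of at least two machines.
        candidates = generate(environment)
        # Evaluate each candidate against the predefined quality criterion
        # and select the one attaining the best value; the result is then
        # provided to the participating machines.
        return max(candidates, key=quality)
    ```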
  • Publication
    Effects of defects in automated fiber placement laminates and its correlation to automated optical inspection results
    (2023)
    Böckl, B.; Wedel, Andre; Misik, Adam; Drechsler, K.
    Automated Fiber Placement (AFP) is a widely used production process for manufacturing large-scale CFRP parts. However, manufacturing defects such as gaps or overlaps remain a common problem in today’s AFP production environments. This study investigates the effect of different defect configurations on the mechanical performance (i.e., tensile strength, flexural strength, and shear strength) of AFP laminates. The results are then linked to the data generated “inline” by a ply inspection system. We use Pearson correlation to relate the measured defect volume to the strength of samples containing different types of defects. A clear knockdown in tensile strength was found for specimens with gaps or overlaps that caused a high amount of fiber undulation in the laminate. The sensor data analysis showed a similar trend: specimens with a high defect volume had significantly lower tensile strength, and a correlation coefficient of -0.98 between the two quantities was calculated. The obtained results are a promising step towards automated quality inspection for the AFP process.
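    A minimal sketch of the correlation analysis the abstract describes; the defect-volume and strength arrays are hypothetical placeholders, not the study's data:

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical measurements: inline-sensor defect volume per specimen
    # and the corresponding tensile strength from mechanical testing (MPa).
    defect_volume = np.array([12.1, 85.4, 40.2, 5.3, 63.8])
    tensile_strength = np.array([912.0, 701.5, 803.2, 935.8, 748.9])

    # Pearson correlation relates measured defect volume to strength; a value
    # near -1 (the paper reports -0.98) indicates that higher defect volume
    # goes with lower tensile strength.
    r, p_value = pearsonr(defect_volume, tensile_strength)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
    ```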
  • Publication
    Concept Correlation and its Effects on Concept-Based Models
    (2023)
    Monnet, Maureen
    Concept-based learning approaches for image classification, such as Concept Bottleneck Models, aim to enable interpretation and increase robustness by directly learning high-level concepts that are used to predict the main class. They achieve test accuracies competitive with standard end-to-end models. However, with multiple concepts per image and binary concept annotations (without concept localization), it is not evident whether the output of the concept model is truly based on the predicted concepts or on other features in the image. Additionally, high correlations between concepts would allow a model to predict a concept with high test accuracy by simply using a correlated concept as a proxy. In this paper, we analyze these correlations between concepts in the CUB and GTSRB datasets and propose methods beyond test accuracy for evaluating their effects on the performance of a concept-based model trained on this data. To this end, we also perform a more detailed analysis of the effects of concept correlation using synthetically generated datasets of 3D shapes. We find that high concept correlation increases the risk that a model is unable to distinguish the correlated concepts. Yet simple techniques, such as loss weighting, show promising initial results for mitigating this issue.
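    A minimal sketch of this kind of concept-correlation analysis, assuming a hypothetical binary concept-annotation matrix (datasets such as CUB provide per-image binary concept labels):

    ```python
    import numpy as np

    # Hypothetical annotations: rows are images, columns are binary concepts
    # (1 = concept present). Real datasets like CUB have hundreds of concepts.
    annotations = np.array([
        [1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 0],
    ])

    # Pairwise Pearson correlation between concept columns. Entries near +/-1
    # flag concept pairs a model could use as proxies for one another.
    corr = np.corrcoef(annotations, rowvar=False)
    high = np.argwhere(np.triu(np.abs(corr) > 0.9, k=1))
    for i, j in high:
        print(f"concepts {i} and {j} are highly correlated: r = {corr[i, j]:.2f}")
    ```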
  • Publication
    Sensing and Machine Learning for Automotive Perception: A Review
    (2023)
    Pandharipande, Ashish; Dauwels, Justin; Gurbuz, Sevgi Z.; Ibanez-Guzman, Javier; Li, Guofa; Piazzoni, Andrea; Wang, Pu; Santra, Avik
    Automotive perception involves understanding the external driving environment as well as the internal state of the vehicle cabin and occupants using sensor data. It is critical to achieving high levels of safety and autonomy in driving. This paper provides an overview of the sensor modalities commonly used for perception, such as cameras, radar, and LiDAR, along with the associated data processing techniques. Critical aspects of perception are considered, including architectures for processing data from single or multiple sensor modalities, sensor data processing algorithms and the role of machine learning techniques, methodologies for validating the performance of perception systems, and safety. The technical challenges for each aspect are analyzed, with an emphasis on machine learning approaches given their potential impact on improving perception. Finally, future research opportunities that would support the wider deployment of automotive perception are outlined.
  • Publication
    Development and Certification of Clinical AI Software
    (2023)
    Ahmidi, Narges; Mareis, Leopold
    Current interest in Artificial Intelligence (AI) is largely driven by the impressive performance of large language models such as ChatGPT, which has attracted considerable media attention. Although numerous AI solutions have already been developed for various clinical applications such as radiology, pathology, colonoscopy, and cancer therapy, only a few have so far been deployed in clinical practice. This raises the question of why that is. To shed light on this situation, the present article provides a brief overview of how AI works and describes the process of developing and certifying an AI system. It also outlines the challenges encountered in guaranteeing the reliability and safety of clinical AI systems.
  • Publication
    Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
    Explaining black-box Artificial Intelligence (AI) models is a cornerstone of trustworthy AI and a prerequisite for its use in safety-critical applications, so that AI models can reliably assist humans in critical decisions. However, instead of trying to explain our models post hoc, we need models that are interpretable by design, built on a reasoning process similar to a human's that exploits meaningful high-level concepts such as shapes, texture, or object parts. Learning such concepts is often hindered by the need to specify and annotate them explicitly up front. Instead, prototype-based learning approaches such as ProtoPNet claim to discover visually meaningful prototypes in an unsupervised way. In this work, we propose a set of properties that such prototypes have to fulfill to enable human analysis, e.g. as part of a reliable model assessment case, and analyse existing methods in the light of these properties. Using a ‘Guess who?’ game, we find that these prototypes still have a long way to go towards definitive explanations. We quantitatively validate our findings by conducting a user study indicating that many of the learnt prototypes are not considered useful for human understanding. We discuss the missing links in the existing methods and present a potential real-world application motivating the need to progress towards truly human-interpretable prototypes.
  • Publication
    Out-of-Distribution Detection for Reinforcement Learning Agents with Probabilistic Dynamics Models
    (2023)
    Schmoeller da Roza, Felippe; Günnemann, Stephan
    Reliability of reinforcement learning (RL) agents is a largely unsolved problem. Especially in situations that differ substantially from their training environment, RL agents often exhibit unpredictable behavior, potentially leading to performance loss, safety violations, or catastrophic failure. Reliable decision-making agents should therefore be able to raise an alert whenever they encounter situations they have never seen before and do not know how to handle. While this problem, known as out-of-distribution (OOD) detection, has received considerable attention in other domains such as image classification or sensory data analysis, it is less frequently studied in the context of RL. In fact, there is not even a common understanding of what OOD actually means in RL. In this work, we bridge this gap and approach the topic of OOD in RL from a general perspective. We formulate OOD in RL as severe perturbations of the Markov decision process (MDP). To detect such perturbations, we introduce a predictive algorithm utilizing probabilistic dynamics models and bootstrapped ensembles. Since existing benchmarks are sparse and limited in their complexity, we also propose a set of evaluation scenarios with OOD occurrences. A detailed analysis of our approach shows superior detection performance compared to existing baselines from related fields.
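    A minimal sketch of OOD scoring with a bootstrapped ensemble of probabilistic dynamics models, in the spirit of the approach described above; the ensemble interface and the combined score are illustrative assumptions, not the paper's exact algorithm:

    ```python
    import numpy as np

    def ood_score(ensemble, state, action, next_state):
        """Score how surprising an observed transition is to the ensemble.

        `ensemble` is assumed to be a list of probabilistic dynamics models,
        each with a predict(state, action) -> (mean, std) interface, e.g.
        Gaussian networks trained on bootstrapped subsets of the data.
        """
        means, stds = zip(*(m.predict(state, action) for m in ensemble))
        means, stds = np.array(means), np.array(stds)

        # Average negative log-likelihood of the observed next state under
        # each member: high values mean the transition is poorly explained.
        nll = 0.5 * (((next_state - means) / stds) ** 2
                     + 2 * np.log(stds) + np.log(2 * np.pi))
        nll = nll.sum(axis=-1).mean()

        # Disagreement between member means captures epistemic uncertainty,
        # which tends to grow away from the training distribution.
        disagreement = means.std(axis=0).mean()
        return nll + disagreement  # flag OOD when this exceeds a threshold
    ```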
  • Publication
    Randomized Smoothing (almost) in Real Time?
    (2023)
    Seferis, Emmanouil; Kollias, Stefanos
    Certifying the robustness of Deep Neural Networks (DNNs) is very important in safety-critical domains. Randomized Smoothing (RS) has recently been proposed as a scalable, model-agnostic method for robustness verification; it has achieved excellent results and has been extended to a large variety of adversarial perturbation scenarios. However, RS carries a hidden cost at inference time, since it requires passing tens of thousands of perturbed samples through the DNN to perform the verification. In this work, we address this challenge and explore what it would take to perform RS much faster, perhaps even in real time, and what happens as we decrease the number of samples by orders of magnitude. Surprisingly, we find that the reduction in average certified radius is not too large even if we decrease the number of samples by two orders of magnitude or more. This could pave the way for real-time robustness certification under suitable settings. We perform a detailed analysis, both theoretical and experimental, and show promising results on the standard CIFAR-10 and ImageNet datasets.
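    A minimal sketch of the standard RS certification step whose sample count the paper studies (following the widely used Cohen et al. procedure); the classifier interface and parameter values are illustrative:

    ```python
    import numpy as np
    from scipy.stats import beta, norm

    def certify(classifier, x, sigma=0.25, n=100_000, alpha=0.001):
        """Certified L2 radius via randomized smoothing.

        `classifier(batch)` is assumed to return predicted class labels for
        a batch of inputs; `n` is the number of perturbed samples, whose
        reduction the paper investigates. Returns (top_class, radius).
        """
        # Pass n noise-perturbed copies of the input through the classifier.
        noisy = x[None, ...] + sigma * np.random.randn(n, *x.shape)
        labels = classifier(noisy)

        # Count the most frequent class and lower-bound its probability with
        # a one-sided Clopper-Pearson interval at confidence 1 - alpha.
        classes, counts = np.unique(labels, return_counts=True)
        top = counts.argmax()
        p_lower = beta.ppf(alpha, counts[top], n - counts[top] + 1)

        if p_lower <= 0.5:
            return classes[top], 0.0  # abstain: no radius certified
        # The certified radius grows with sigma and with the confidence in
        # the top class; fewer samples shrink p_lower and hence the radius.
        return classes[top], sigma * norm.ppf(p_lower)
    ```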
  • Publication
    Safe, Ethical and Sustainable: Framing the Argument
    (2023)
    McDermid, John Alexander; Porter, Zoe
    The authors have previously articulated the need to think beyond safety to encompass ethical and environmental (sustainability) concerns, and to address these concerns through the medium of argumentation. However, the scope of concerns is very large and there are other challenges such as the need to make trade-offs between incommensurable concerns. The paper outlines an approach to these challenges through suitably framing the argument and illustrates the approach by considering alternative concept designs for an autonomous mobility service.
  • Publication
    Towards Probabilistic Safety Guarantees for Model-Free Reinforcement Learning
    (2023)
    Schmoeller da Roza, Felippe; Günnemann, Stephan
    Improving safety in model-free Reinforcement Learning is necessary if we expect to deploy such systems in safety-critical scenarios. However, most existing constrained Reinforcement Learning methods have no formal guarantees for their constraint-satisfaction properties. In this paper, we present the theoretical formulation of a safety layer that encapsulates model epistemic uncertainty over a distribution of constraint model approximations and can provide probabilistic guarantees of constraint satisfaction.
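    A minimal sketch of an uncertainty-aware safety-layer pattern consistent with the abstract's description; the ensemble of constraint models, the cost threshold, and the action-filtering rule are illustrative assumptions rather than the paper's formulation:

    ```python
    import numpy as np

    def safe_action(policy_action, candidate_actions, constraint_models,
                    state, cost_limit=0.0, confidence=0.95):
        """Filter the agent's action through a safety layer.

        `constraint_models` is assumed to be an ensemble of approximations
        of the constraint-cost function, each with a
        predict(state, action) -> cost interface; the spread across members
        captures epistemic uncertainty about the true constraint.
        """
        def upper_cost(a):
            # Pessimistic cost estimate: a high quantile over the ensemble,
            # so constraint satisfaction holds with (approximately) the
            # requested probability.
            costs = np.array([m.predict(state, a) for m in constraint_models])
            return np.quantile(costs, confidence)

        # Keep the policy's action if even the pessimistic estimate is safe.
        if upper_cost(policy_action) <= cost_limit:
            return policy_action
        # Otherwise fall back to the safest available candidate action.
        return min(candidate_actions, key=upper_cost)
    ```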