  • Publication
    Towards Uncertainty Reduction Tactics for Behavior Adaptation
    An autonomous system must continuously adapt its behavior to its context in order to fulfill its goals in dynamic environments. Obtaining information about the context, however, often yields only partial knowledge with a high degree of uncertainty. Enabling systems to actively reduce these uncertainties at run-time by performing additional actions, such as changing a mobile robot’s position to gain additional perspectives and improve perception, can increase their performance. However, incorporating such techniques into behavior plans is not trivial, as the potential benefit of these so-called tactics depends strongly on the specific context. In this paper, we present an analysis of the performance improvement that can theoretically be achieved with uncertainty reduction tactics. Furthermore, we describe a modeling methodology based on probabilistic data types that makes it possible to estimate the suitability of a tactic in a given situation. This methodology is a first step towards enabling autonomous systems to use uncertainty reduction in practice and to plan behavior with better performance.
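    The core trade-off behind tactic selection can be sketched as a simple expected-utility comparison. This is a minimal illustration only; the function names, rewards, and probabilities below are assumptions, not the paper's actual probabilistic model:

    ```python
    # Hypothetical sketch: deciding whether an uncertainty reduction tactic
    # (e.g., repositioning a robot for a second viewpoint) pays off.
    # All names and numbers are illustrative assumptions.

    def expected_utility(p_correct: float, reward: float, penalty: float) -> float:
        """Expected utility of acting on a perception that is correct
        with probability p_correct."""
        return p_correct * reward + (1.0 - p_correct) * penalty

    def tactic_is_worthwhile(p_now: float, p_after: float, tactic_cost: float,
                             reward: float = 10.0, penalty: float = -20.0) -> bool:
        """A tactic is worthwhile if the gain in expected utility from the
        improved perception exceeds the cost of executing the tactic."""
        gain = (expected_utility(p_after, reward, penalty)
                - expected_utility(p_now, reward, penalty))
        return gain > tactic_cost

    # Under high uncertainty, a tactic (cost 2.0) lifting confidence from
    # 0.6 to 0.9 pays off; with already-confident perception it does not.
    print(tactic_is_worthwhile(0.6, 0.9, tactic_cost=2.0))    # True
    print(tactic_is_worthwhile(0.95, 0.97, tactic_cost=2.0))  # False
    ```

    The same comparison illustrates why the benefit is context-dependent: the identical tactic is justified in one situation and wasteful in another.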
  • Publication
    Adaptively Managing Reliability of Machine Learning Perception under Changing Operating Conditions
    Autonomous systems are deployed in various contexts, which makes the role of the surrounding environment and operational context increasingly vital, e.g., for autonomous driving. To account for changing operating conditions, an autonomous system must adapt its behavior to maintain safe operation and a high level of autonomy. Machine Learning (ML) components are increasingly used to perceive an autonomous system’s environment, but their reliability strongly depends on the actual operating conditions, which are hard to predict. We therefore propose a novel approach that learns the influence of the prevalent operating conditions and uses this knowledge to optimize the reliability of perception through self-adaptation. We evaluate the proposed approach in a perception case study for autonomous driving and demonstrate that, in contrast to the state of the art, it improves perception under varying operating conditions. Besides the advantage of interpretability, our results show the superior reliability of the ML-based perception.
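    One way to picture "learning the influence of operating conditions" is a per-condition running estimate of perception success, used to select the most reliable perception mode at run-time. This is a minimal sketch; the condition names, modes, and counts are assumptions, not the paper's actual learning method:

    ```python
    # Hypothetical sketch: tracking perception reliability per operating
    # condition and using it for self-adaptation.
    from collections import defaultdict

    class ReliabilityModel:
        """Tracks, per (operating condition, perception mode), a running
        success rate with a Laplace prior of one success and one failure."""
        def __init__(self):
            self.successes = defaultdict(lambda: 1)
            self.failures = defaultdict(lambda: 1)

        def update(self, condition: str, mode: str, detected_correctly: bool):
            key = (condition, mode)
            if detected_correctly:
                self.successes[key] += 1
            else:
                self.failures[key] += 1

        def reliability(self, condition: str, mode: str) -> float:
            key = (condition, mode)
            return self.successes[key] / (self.successes[key] + self.failures[key])

        def best_mode(self, condition: str, modes: list) -> str:
            """Self-adaptation step: pick the mode estimated most reliable
            under the currently prevailing condition."""
            return max(modes, key=lambda m: self.reliability(condition, m))

    model = ReliabilityModel()
    # Simulated field observations: camera-only degrades in fog,
    # a lidar-assisted mode much less so (illustrative data).
    for _ in range(20):
        model.update("fog", "camera_only", detected_correctly=False)
        model.update("fog", "camera_plus_lidar", detected_correctly=True)
    print(model.best_mode("fog", ["camera_only", "camera_plus_lidar"]))  # camera_plus_lidar
    ```

    Because the learned quantities are plain per-condition success rates, this kind of model stays interpretable, in line with the advantage the abstract highlights.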
  • Publication
    Fuzzy Interpretation of Operational Design Domains in Autonomous Driving
    (2022-07) Oboril, Fabian; Buerkle, Cornelius
    The evolution towards autonomous driving involves operating safely in open-world environments. For this, autonomous vehicles and their Autonomous Driving System (ADS) are designed and tested for specific, so-called Operational Design Domains (ODDs). When moving from prototypes to real-world mobility solutions, however, autonomous vehicles will face changing scenarios and operational conditions that they must handle safely. In this work, we propose a fuzzy-based approach that accounts for changing operational conditions of autonomous driving using smaller ODD fragments, called μODDs. This enables an ADS to smoothly adapt its driving behavior and maintain safety as operational conditions shift. We evaluate our solution in simulated vehicle-following scenarios that pass through different μODDs, modeled as weather changes. The results show that our approach can handle operational domain changes without endangering safety, while allowing improved utility optimization.
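    The fuzzy idea can be sketched as membership functions over a weather signal, with driving parameters blended by membership weight so behavior changes smoothly across μODD boundaries. The μODD names, membership shapes, and time-gap values below are illustrative assumptions, not the paper's calibration:

    ```python
    # Hypothetical sketch of a fuzzy interpretation of μODD fragments:
    # trapezoidal memberships over rain intensity and a membership-weighted
    # blend of a driving parameter (target time gap).

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: 0 below a, rises to 1 on [b, c], falls to 0 at d."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    # μODD fragments over rain intensity (mm/h), each with a target time gap (s).
    MU_ODDS = {
        "clear":      (lambda r: trapezoid(r, -1.0, -0.5, 0.5, 2.0), 1.5),
        "light_rain": (lambda r: trapezoid(r, 0.5, 2.0, 6.0, 10.0), 2.0),
        "heavy_rain": (lambda r: trapezoid(r, 6.0, 10.0, 50.0, 51.0), 3.0),
    }

    def blended_time_gap(rain_mm_h: float) -> float:
        """Weighted average of per-μODD time gaps: smooth across transitions
        instead of jumping at a crisp ODD boundary."""
        weights = {name: mu(rain_mm_h) for name, (mu, _) in MU_ODDS.items()}
        total = sum(weights.values())
        return sum(weights[name] * gap for name, (_, gap) in MU_ODDS.items()) / total

    print(blended_time_gap(0.0))   # 1.5  (fully "clear")
    print(blended_time_gap(8.0))   # 2.5  (halfway between light and heavy rain)
    ```

    At 8 mm/h the vehicle is half in "light_rain" and half in "heavy_rain", so the commanded time gap sits between the two crisp values rather than switching abruptly.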
  • Publication
    Safety Implications of Runtime Adaptation to Changing Operating Conditions
    (2022) Oboril, Fabian; Buerkle, Cornelius
    With further advancements in autonomous driving, larger application scenarios, so-called Operational Design Domains (ODDs), will be addressed. Autonomous vehicles will likely experience varying operating conditions in such broader ODDs, but the implications of changing operating conditions on safety and the required adaptation remain an open challenge. In our work, we investigate, as an example, a vehicle-following scenario passing through altering operating conditions, using Responsibility Sensitive Safety (RSS) as a formal model to define appropriate longitudinal following distances. We provide a deeper analysis of the influence of switching the safety model parameter values to adapt to new operating conditions. As our findings show that hard switches between operating conditions can lead to critical situations, we propose an approach for continuously adapting the safety model parameters, allowing for a safe and more comfortable transition. In our evaluation, we use driving simulations to compare hard parameter switching with our proposed gradual adaptation. Our results highlight the implications of changing operating conditions on driving safety. Moreover, we provide a solution that adapts the safety model parameters of an autonomous vehicle in such a way that safety model violations during the transition are avoided.
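    The contrast between hard switching and gradual adaptation can be illustrated with the published RSS longitudinal distance formula. The formula itself follows the RSS model; the parameter values and the linear blending schedule are illustrative assumptions, not the paper's exact method:

    ```python
    # Sketch: RSS minimum longitudinal following distance, plus a gradual
    # (linear) blend of safety parameters across an operating-condition change.

    def rss_min_distance(v_rear, v_front, rho, a_max_accel, b_min_brake, b_max_brake):
        """Minimum safe distance (m): worst case, the rear vehicle accelerates
        during reaction time rho, then brakes gently (b_min_brake) while the
        front vehicle brakes hard (b_max_brake)."""
        d = (v_rear * rho
             + 0.5 * a_max_accel * rho ** 2
             + (v_rear + rho * a_max_accel) ** 2 / (2 * b_min_brake)
             - v_front ** 2 / (2 * b_max_brake))
        return max(0.0, d)

    def blend(p_old, p_new, t, t_transition):
        """Linearly interpolate a parameter over a transition window instead
        of switching it hard at the condition boundary."""
        alpha = min(max(t / t_transition, 0.0), 1.0)
        return (1 - alpha) * p_old + alpha * p_new

    # Dry vs. wet road: on a wet road, assume weaker worst-case braking for
    # both vehicles (illustrative values).
    dry = dict(rho=1.0, a_max_accel=2.0, b_min_brake=4.0, b_max_brake=8.0)
    wet = dict(rho=1.0, a_max_accel=2.0, b_min_brake=2.5, b_max_brake=5.0)

    v = 20.0  # both vehicles at 20 m/s
    d_dry = rss_min_distance(v, v, **dry)
    d_wet = rss_min_distance(v, v, **wet)
    # Halfway through a 5 s transition window:
    d_mid = rss_min_distance(v, v, rho=1.0, a_max_accel=2.0,
                             b_min_brake=blend(4.0, 2.5, 2.5, 5.0),
                             b_max_brake=blend(8.0, 5.0, 2.5, 5.0))
    print(d_dry, d_mid, d_wet)  # the required distance grows smoothly, not in one jump
    ```

    A hard switch would demand the full wet-road distance instantly, which a following vehicle cannot satisfy at once; blending lets the gap open up continuously.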
  • Publication
    Dependable and Efficient Cloud-Based Safety-Critical Applications by Example of Automated Valet Parking
    (2021) Shekhada, Dhavalkumar; Ishigooka, Tasuku; Otsuka, Satoshi; Mizuochi, Mariko
    Future embedded systems and services will be seamlessly connected and will interact on all levels with the infrastructure and the cloud. For safety-critical applications this means that it is not sufficient to ensure dependability in a single embedded system; instead, the complete service chain must be covered, including all involved embedded systems as well as services running in the edge or the cloud. For the development of such Cyber-Physical Systems-of-Systems (CPSoS), engineers must consider all kinds of dependability requirements. For example, ensuring safety at the expense of reliability or availability requirements is not an option. In fact, it is the engineers' task to optimize the CPSoS' performance without violating any safety goals. In this paper, we identify the main challenges of developing CPSoS based on several industrial use cases and present our novel approach for designing cloud-based safety-critical applications with optimized performance, using an automated valet parking system as an example. The evaluation shows that our monitoring and recovery solution achieves superior performance compared to current methods, while meeting the system's safety demands in case of connectivity-related faults.
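    The monitoring-and-recovery idea for connectivity-related faults can be sketched as a heartbeat watchdog that degrades to a local safe mode when the cloud link goes stale. The timeout value, mode names, and fallback policy below are illustrative assumptions:

    ```python
    # Hypothetical sketch of a connectivity monitor with safe-state recovery
    # for a cloud-guided vehicle function such as automated valet parking.

    class ConnectivityMonitor:
        """Tracks heartbeats from a cloud planning service; when a heartbeat
        is overdue, the vehicle degrades to a local safe mode instead of
        acting on stale cloud commands."""
        def __init__(self, timeout_s: float):
            self.timeout_s = timeout_s
            self.last_heartbeat = 0.0

        def heartbeat(self, now: float):
            self.last_heartbeat = now

        def mode(self, now: float) -> str:
            if now - self.last_heartbeat <= self.timeout_s:
                return "CLOUD_GUIDED"     # full performance: follow cloud trajectory
            return "LOCAL_SAFE_STOP"      # recovery: brake to standstill on-board

    monitor = ConnectivityMonitor(timeout_s=0.5)
    monitor.heartbeat(now=10.0)
    print(monitor.mode(now=10.3))  # CLOUD_GUIDED
    print(monitor.mode(now=11.0))  # LOCAL_SAFE_STOP (heartbeat overdue)
    ```

    The design point is that safety never rests on the connection being up: losing the link costs performance (a safe stop), not safety.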
  • Publication
    Towards Dynamic Safety Management for Autonomous Systems
    Safety assurance of autonomous systems is one of the key current challenges of safety engineering. Given the specific characteristics of autonomous systems, we must deal with many uncertainties that make it difficult or even impossible to predict the system's behaviour in all potential operational situations. Simply applying established static safety approaches would result in very strict worst-case assumptions, making it impossible to develop autonomous systems at reasonable cost. This paper therefore introduces the idea of dynamic safety management, which enables a system to assess its safety and to self-optimize its performance at runtime. Considering the current risk of the actual context at runtime, instead of being bound to strict worst-case assumptions, provides the essential basis for the development of safe and yet cost-efficient autonomous systems.
  • Publication
    Towards safety-awareness and dynamic safety management
    Future safety-critical systems will be highly automated or even autonomous, and they will dynamically cooperate with other systems as part of a comprehensive ecosystem. Together with the increasing use of artificial intelligence, this introduces uncertainties on different levels, which undermine the application of established safety engineering methods and standards. These uncertainties can be tackled by making systems safety-aware and enabling them to manage themselves accordingly. This paper introduces a corresponding conceptual dynamic safety management framework that incorporates monitoring facilities and runtime safety models to create safety-awareness. Based on this, planning and execution of safe system optimizations can be carried out by means of self-adaptation. We illustrate our approach by applying it to the dynamic safety assurance of a single car.
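    The monitor/plan/execute pattern behind dynamic safety management can be sketched as a runtime safety model that maps monitored context to a currently permitted bound, within which the system optimizes its performance. The context variables and the safety rule below are illustrative assumptions, not the framework's actual models:

    ```python
    # Hypothetical sketch: a runtime safety model instead of a static
    # design-time worst case, with self-optimization inside its bound.

    def runtime_safety_model(context: dict) -> float:
        """Runtime safety model: maximum speed (m/s) currently permitted
        given the monitored context."""
        v_max = 25.0  # nominal limit under benign conditions
        if context.get("visibility_m", 1000.0) < 100.0:
            v_max = min(v_max, 12.0)   # poor visibility: slow down
        if context.get("road_wet", False):
            v_max = min(v_max, 16.0)   # wet road: longer braking distances
        return v_max

    def plan_speed(desired: float, context: dict) -> float:
        """Plan step: drive as fast as desired, but never beyond what the
        runtime safety model currently permits."""
        return min(desired, runtime_safety_model(context))

    print(plan_speed(25.0, {"visibility_m": 500.0, "road_wet": False}))  # 25.0
    print(plan_speed(25.0, {"visibility_m": 50.0,  "road_wet": True}))   # 12.0
    ```

    A static worst-case approach would impose the 12 m/s limit everywhere; evaluating the safety model at runtime recovers the full performance whenever the context allows it.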
  • Publication
    Towards integrating undependable self-adaptive systems in safety-critical environments
    Modern cyber-physical systems (CPS) integrate increasingly powerful computing platforms to master novel applications and adapt to changing situations. A striking example is the recent progression in the automotive market towards autonomous driving. Powerful artificial intelligence algorithms must be executed on high-performance parallelized platforms. However, this cannot yet be done safely, as platforms stemming from the consumer electronics (CE) world still lack the required dependability and safety mechanisms. In this paper, we present a concept for integrating undependable self-adaptive subsystems into safety-critical environments. For this, we introduce self-adaptation envelopes, which manage undependable system parts and integrate them into a dependable system. We evaluate our approach in a comprehensive case study on autonomous driving. Thereby, we show that the potential failures of the AUTOSAR Adaptive platform, as an exemplary undependable system, can be handled by our concept. Overall, we outline a way to integrate inherently undependable adaptive systems into safety-critical CPS.
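    The envelope idea can be pictured as a dependable wrapper that sanity-checks the output of an undependable adaptive subsystem and substitutes a safe default on any failure. This is a minimal sketch; the class name, acceptance bounds, and fallback command are assumptions, not the paper's actual envelope design:

    ```python
    # Hypothetical sketch of a "self-adaptation envelope": a dependable
    # wrapper around an undependable planner running on a CE-grade platform.

    class AdaptationEnvelope:
        """Wraps an undependable planner: if it raises an exception or
        returns a command outside the acceptance bounds, a dependable
        fallback command is used instead."""
        def __init__(self, planner, accel_bounds=(-8.0, 3.0), fallback=-2.0):
            self.planner = planner
            self.lo, self.hi = accel_bounds
            self.fallback = fallback   # dependable default: gentle braking

        def command(self, state: dict) -> float:
            try:
                accel = self.planner(state)
            except Exception:
                return self.fallback   # subsystem crashed: degrade safely
            if not (self.lo <= accel <= self.hi):
                return self.fallback   # implausible output: reject it
            return accel

    def flaky_planner(state):
        """Stand-in for an undependable adaptive subsystem."""
        if state.get("fault"):
            raise RuntimeError("adaptive platform failure")
        return state["accel"]

    env = AdaptationEnvelope(flaky_planner)
    print(env.command({"accel": 1.0}))                 # 1.0  : passed through
    print(env.command({"accel": 99.0}))                # -2.0 : out-of-bounds rejected
    print(env.command({"accel": 0.0, "fault": True}))  # -2.0 : crash handled
    ```

    Only the envelope itself has to be developed to a dependable standard; the wrapped subsystem may fail arbitrarily without violating the safety contract.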