  • Publication
    Approach for Argumenting Safety on Basis of an Operational Design Domain
    (2024) ; Zeller, Marc ; Schoenhaar, Hannes ; ;
    The Operational Design Domain (ODD) is a representative model of the real world in which an Automated Driving System (ADS) is intended to operate. The definition of the ODD is a crucial part of the development process for such an artificial intelligence (AI)-enabled system, because the ODD is the basis for several critical development activities, such as defining system-level requirements, test & verification, and building a well-founded safety case for an AI-based ADS. Since an inadequately defined ODD poses a major safety concern for the entire development, an ODD must be defined completely and consistently during the development process. In this work, we present an approach for defining and maintaining the ODD during the development of safety-critical AI-based ADS functionalities and provide evidence to argue its sufficient completeness and consistency. We demonstrate the feasibility of our approach with an industrial use case of a fully automated system in the railway domain.
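    As a loose illustration of the idea (not the paper's actual method), the sketch below captures an ODD as machine-checkable attribute constraints and screens it for obvious completeness and consistency gaps; the attribute names and taxonomy are invented for this example.

```python
# Hypothetical sketch: an ODD as a set of attribute constraints that can be
# screened for basic completeness (every taxonomy attribute is covered) and
# consistency (no empty or contradictory ranges). Attribute names are invented.

TAXONOMY = {"weather", "illumination", "track_gradient", "max_speed_kmh"}

odd = {
    "weather": {"clear", "rain"},          # enumerated values
    "illumination": {"day", "twilight"},
    "max_speed_kmh": (0, 120),             # numeric range (min, max)
}

def completeness_gaps(odd, taxonomy):
    """Taxonomy attributes for which the ODD defines no operating condition."""
    return taxonomy - odd.keys()

def consistency_issues(odd):
    """Constraints that can never be satisfied (empty sets or inverted ranges)."""
    issues = []
    for name, constraint in odd.items():
        if isinstance(constraint, tuple) and constraint[0] > constraint[1]:
            issues.append(f"{name}: empty numeric range {constraint}")
        if isinstance(constraint, set) and not constraint:
            issues.append(f"{name}: no admissible values")
    return issues

print("missing attributes:", completeness_gaps(odd, TAXONOMY))
print("inconsistencies:", consistency_issues(odd))
```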
  • Publication
    Automatic Deduction of the Impact of Context Variability on System Safety Goals
    (2024) ; ; Trapp, Mario
    Autonomous systems, such as trains with a high grade of automation, need to function safely in their operational context. One hindrance to the development of such systems is the high degree of variability of this context: different context variants can have a substantial impact on the safety goals the system must fulfill to function with sufficiently low residual risk. In this paper, we propose a method for modeling and reasoning about the context variability of an autonomous system and its impact on the system’s safety. We build upon contextual goal models to model the refinement of safety goals and their dependence on the environment. By introducing an explicit model of the context variability to be expected, we transform the challenge of safety in variable environments into a satisfiability modulo theories problem. This allows us to find inconsistencies and check whether a concrete context variant would allow for safe operation of the system. We demonstrate our approach with a use case from the railway domain and show its applicability to an automatic train operation system in different contexts based on map data.
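    To make the underlying idea concrete, here is a minimal SMT sketch using the z3 Python bindings (the solver choice, variables, and constraint are illustrative assumptions, not taken from the paper): a concrete context variant is asserted together with a safety-goal refinement condition, and the solver reports whether the variant permits safe operation.

```python
# Minimal SMT sketch (illustrative variables, not the paper's model):
# check whether a concrete context variant still allows a safety goal to be met.
# Requires the z3-solver package.
from z3 import Real, Bool, Solver, Implies, sat

fog = Bool("fog")                      # context variable
visibility_m = Real("visibility_m")    # context variable
braking_dist_m = Real("braking_dist_m")

s = Solver()
# Context variant under consideration: foggy, 80 m visibility.
s.add(fog, visibility_m == 80)
# Property of the chosen operating speed.
s.add(braking_dist_m == 120)
# Safety goal refinement: in fog, the train must be able to stop within visible range.
s.add(Implies(fog, visibility_m > braking_dist_m))

print("context variant permits safe operation" if s.check() == sat
      else "inconsistent: safety goal cannot be met in this context")
```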
  • Publication
    Online Identification of Operational Design Domains of Automated Driving System Features
    (2024) ; ; Trapp, Mario
    The Operational Design Domain (ODD) consists of operating conditions under which an Automated Driving System (ADS) feature is intended to be deployed and should satisfy safety and performance requirements. Creating human-interpretable and monitorable ODD specifications for ADS features, comprising black-box and non-deterministic Machine Learning (ML) components, is complicated owing to the unknown impact of possibly infinite operational contexts on system requirement fulfillment. Furthermore, these ML components may be updated to address unforeseen operational contexts encountered after feature deployment, thus necessitating further updates to the ODD. This paper proposes a novel approach for online ODD identification, i.e., discovering operating conditions wherein the ADS feature satisfies system requirements, using fuzzy behavior oracles. Our data-driven approach involves human-interpretable representation of operational contexts, facilitating the semi-automatic generation of conditional ODD statements and updates to ODD post-feature deployment. The feasibility of our approach is validated with a case study on a Lane Change Assist ADS feature, which exhibits a 55% improvement in scalability, allowing its deployment in a broader ODD.
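    Purely as an illustration (not the paper's oracle), the sketch below grades requirement satisfaction per operating condition with a fuzzy score and derives a conditional inside/outside-ODD verdict from logged episodes; the episodes, thresholds, and condition attributes are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): a fuzzy "behavior oracle"
# that grades requirement satisfaction per operating condition and derives a
# conditional ODD statement from logged driving episodes.

def fuzzy_satisfaction(min_gap_m, required_gap_m, tolerance_m=5.0):
    """1.0 = requirement clearly met, 0.0 = clearly violated, linear in between."""
    return max(0.0, min(1.0, (min_gap_m - required_gap_m + tolerance_m) / tolerance_m))

# Hypothetical episodes: (operating condition, minimum gap kept during lane change)
episodes = [
    ({"rain": False, "traffic": "light"}, 34.0),
    ({"rain": False, "traffic": "dense"}, 28.0),
    ({"rain": True,  "traffic": "light"}, 26.0),
    ({"rain": True,  "traffic": "dense"}, 18.0),
]

REQUIRED_GAP_M = 25.0
by_condition = {}
for cond, gap in episodes:
    key = (cond["rain"], cond["traffic"])
    by_condition.setdefault(key, []).append(fuzzy_satisfaction(gap, REQUIRED_GAP_M))

for (rain, traffic), scores in by_condition.items():
    avg = sum(scores) / len(scores)
    verdict = "inside ODD" if avg >= 0.8 else "outside ODD"
    print(f"rain={rain}, traffic={traffic}: satisfaction={avg:.2f} -> {verdict}")
```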
  • Publication
    DevOps in Robotics: Challenges and Practices
    (2023) Sawczuk da Silva, Alexandre ; ; ; Rothe, Johannes ; Ihrke, Christoph
    DevOps, which refers to a set of practices for streamlining software development and operations, is becoming increasingly popular as businesses strive to adopt a loosely coupled architecture that supports frequent software delivery. As a result, DevOps is also gaining traction in other domains and their associated architectures, including robotics, though research in this area is still lacking. To address this gap, this paper investigates how to adapt key DevOps principles from the domain of software engineering to the domain of robotics. In order to demonstrate the feasibility of this in practice, an industrial robotics case study is conducted. The results indicate that the adoption of these principles is also beneficial for robotic software architectures, though general DevOps approaches may require some adaptation to match the existing infrastructure.
  • Publication
    Adaptively Managing Reliability of Machine Learning Perception under Changing Operating Conditions
    Autonomous systems are deployed in various contexts, which makes the role of the surrounding environment and operational context increasingly vital, e.g., for autonomous driving. To account for these changing operating conditions, an autonomous system must adapt its behavior to maintain safe operation and a high level of autonomy. Machine Learning (ML) components are generally being introduced for perceiving an autonomous system’s environment, but their reliability strongly depends on the actual operating conditions, which are hard to predict. Therefore, we propose a novel approach to learn the influence of the prevalent operating conditions and use this knowledge to optimize the reliability of perception through self-adaptation. Our proposed approach is evaluated in a perception case study for autonomous driving. We demonstrate that our approach is able to improve perception under varying operating conditions, in contrast to the state of the art. Besides the advantage of interpretability, our results show the superior reliability of the ML-based perception.
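    As a rough illustration of this kind of self-adaptation (the data, perception variants, and selection rule are invented, not the paper's method), the sketch below estimates per-condition reliability from past outcomes and selects the perception variant with the highest estimated reliability at run time.

```python
# Sketch only (hypothetical data and configurations): estimate the reliability of
# perception variants per operating condition from past outcomes, then adapt by
# selecting the variant with the highest estimated reliability at run time.
from collections import defaultdict

# (operating condition, perception variant, detection correct?)
observations = [
    ("rain", "camera_only", False), ("rain", "camera_only", True),
    ("rain", "camera_radar", True), ("rain", "camera_radar", True),
    ("clear", "camera_only", True), ("clear", "camera_only", True),
    ("clear", "camera_radar", True), ("clear", "camera_radar", False),
]

stats = defaultdict(lambda: [0, 0])   # (condition, variant) -> [correct, total]
for condition, variant, correct in observations:
    stats[(condition, variant)][0] += int(correct)
    stats[(condition, variant)][1] += 1

def best_variant(condition):
    """Pick the variant with the highest observed reliability for this condition."""
    scores = {v: c / t for (cond, v), (c, t) in stats.items() if cond == condition}
    return max(scores, key=scores.get), scores

for condition in ("rain", "clear"):
    variant, scores = best_variant(condition)
    print(condition, "->", variant, scores)
```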
  • Publication
    Towards Uncertainty Reduction Tactics for Behavior Adaptation
    An autonomous system must continuously adapt its behavior to its context in order to fulfill its goals in dynamic environments. Obtaining information about the context, however, often yields only partial knowledge with a high degree of uncertainty. Enabling systems to actively reduce these uncertainties at run-time by performing additional actions, such as changing a mobile robot’s position to improve perception with additional perspectives, can increase their performance. However, incorporating these techniques by adapting behavior plans is not trivial, as the potential benefit of such so-called tactics highly depends on the specific context. In this paper, we present an analysis of the performance improvement that can theoretically be achieved with uncertainty reduction tactics. Furthermore, we describe a modeling methodology based on probabilistic data types that makes it possible to estimate the suitability of a tactic in a given situation. This methodology is a first step towards enabling autonomous systems to use uncertainty reduction in practice and to plan behavior with better performance.
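    The decision problem behind such tactics can be illustrated with a small expected-value sketch (the numbers and the repositioning tactic are hypothetical and not taken from the paper): the expected utility gain of executing a tactic is weighed against its cost before it is added to the behavior plan.

```python
# Illustrative sketch (numbers and tactic are hypothetical): weigh the expected
# utility gain of an uncertainty reduction tactic against its cost before
# deciding whether to include it in the behavior plan.

def expected_net_benefit(p_resolves, gain_if_resolved, tactic_cost):
    """Expected utility improvement of executing the tactic."""
    return p_resolves * gain_if_resolved - tactic_cost

# Mobile robot example: move sideways to get a second perspective on an
# object that is currently classified with high uncertainty.
p_resolves = 0.7          # chance the new viewpoint disambiguates the object
gain_if_resolved = 10.0   # utility of acting on a confident classification
tactic_cost = 2.5         # time/energy cost of repositioning

benefit = expected_net_benefit(p_resolves, gain_if_resolved, tactic_cost)
print(f"expected net benefit: {benefit:.1f} ->",
      "apply tactic" if benefit > 0 else "skip tactic")
```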
  • Publication
    Self-Adaptive Service Deployment for Resilience of Smart Manufacturing Architectures
    (2023) ; Sawczuk da Silva, Alexandre ; Knissel, Tim ;
    Recent advances in the manufacturing sector, including the edge-to-cloud continuum, machine learning, and digitalization, can enable smart manufacturing solutions such as control optimization and predictive maintenance. One challenge in new system architectures is efficient resource management under changing conditions while meeting process requirements, such as latency, when deploying software services. To address this, we propose an approach for self-adaptive service deployment that increases the resilience of smart manufacturing systems. We combine self-adaptation principles with run-time models, which describe the system in the form of the standardized Asset Administration Shell, to enable flexible software architectures for manufacturing. The proposed solution comprises the continuous adaptation of the service deployment in response to system changes, such as resource exhaustion or failure, to ensure optimized operation. An evaluation of an example manufacturing use case shows that the proposed solution leads to lower execution latency and continued production in situations with scarce resources, e.g., caused by failures, compared to less flexible deployment approaches.
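    A highly simplified adaptation-loop sketch (node names, thresholds, and the plain-dictionary stand-in for the Asset Administration Shell based run-time model are all assumptions) illustrates the basic idea: when the current host no longer satisfies a service's resource requirement, the deployment is re-planned onto an admissible node.

```python
# Highly simplified sketch (invented nodes and thresholds): a MAPE-style loop that
# re-deploys a latency-critical service when its current host runs out of resources.
# The dictionary stands in for the Asset Administration Shell based run-time model.

runtime_model = {
    "nodes": {"edge-1": {"cpu_free": 0.05, "latency_ms": 4},
              "edge-2": {"cpu_free": 0.60, "latency_ms": 6},
              "cloud":  {"cpu_free": 0.90, "latency_ms": 35}},
    "deployment": {"quality_control": "edge-1"},
    "requirements": {"quality_control": {"max_latency_ms": 10, "min_cpu_free": 0.10}},
}

def adapt(model, service):
    req = model["requirements"][service]
    current = model["nodes"][model["deployment"][service]]
    if current["cpu_free"] >= req["min_cpu_free"]:
        return  # monitored state still satisfies the requirement
    # Plan: pick the best admissible node; Execute: update the deployment.
    candidates = [(name, n) for name, n in model["nodes"].items()
                  if n["cpu_free"] >= req["min_cpu_free"]
                  and n["latency_ms"] <= req["max_latency_ms"]]
    if candidates:
        target = max(candidates, key=lambda item: item[1]["cpu_free"])[0]
        model["deployment"][service] = target

adapt(runtime_model, "quality_control")
print(runtime_model["deployment"])   # {'quality_control': 'edge-2'}
```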
  • Publication
    Fuzzy Interpretation of Operational Design Domains in Autonomous Driving
    (2022-07) ; ; ; Oboril, Fabian ; Buerkle, Cornelius
    The evolution towards autonomous driving involves operating safely in open-world environments. For this, autonomous vehicles and their Autonomous Driving System (ADS) are designed and tested for specific, so-called Operational Design Domains (ODDs). When moving from prototypes to real-world mobility solutions, however, autonomous vehicles will face changing scenarios and operational conditions that they must handle safely. In this work, we propose a fuzzy-based approach that accounts for changing operational conditions of autonomous driving based on smaller ODD fragments, called μODDs. This enables an ADS to smoothly adapt its driving behavior and maintain safety under shifting operational conditions. We evaluate our solution in simulated vehicle following scenarios passing through different μODDs, modeled by weather changes. The results show that our approach is capable of handling operational domain changes without endangering safety, while allowing for improved utility optimization.
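    The flavor of such a fuzzy interpretation can be sketched as follows (the membership functions and time-gap values are invented and not the paper's parameters): membership in two μODDs blends a driving parameter instead of switching it abruptly at an operating-domain boundary.

```python
# Illustrative sketch (membership functions and parameter values are invented):
# fuzzy membership in two muODDs, here "dry" and "wet", blends the target
# following time gap instead of switching it abruptly at a domain boundary.

def membership_dry(rain_intensity):
    """1.0 for no rain, fading linearly to 0.0 at heavy rain (intensity 1.0)."""
    return max(0.0, 1.0 - rain_intensity)

def membership_wet(rain_intensity):
    return 1.0 - membership_dry(rain_intensity)

TIME_GAP_S = {"dry": 1.8, "wet": 2.6}   # per-muODD following time gaps

def blended_time_gap(rain_intensity):
    m_dry, m_wet = membership_dry(rain_intensity), membership_wet(rain_intensity)
    return (m_dry * TIME_GAP_S["dry"] + m_wet * TIME_GAP_S["wet"]) / (m_dry + m_wet)

for intensity in (0.0, 0.25, 0.5, 1.0):
    print(f"rain intensity {intensity:.2f} -> time gap {blended_time_gap(intensity):.2f} s")
```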
  • Publication
    Concept for Safe Interaction of Driverless Industrial Trucks and Humans in Shared Areas
    (2022-06-17) ; ; ; Ishigooka, Tasuku ; Otsuka, Satoshi ; Mizuochi, Mariko
    Humans still need to access the same areas as automated systems, like in warehouses, if full automation is not feasible or economical. In such shared areas, critical interactions are inevitable. The automation of vehicles is usually tied to an argument of improved safety. However, current standards still rely in part on the awareness of humans to avoid collisions. At the same time, modern intelligent warehouses are equipped with additional sensors that can help to automate safety. Blind corners, where the view is obscured, are particularly critical and, moreover, their location can change when goods are moved. Therefore, we generalize a concept for safe interactions at known blind corners to movements in the entire warehouse. We propose an architecture that uses infrastructure sensors to prevent human-robot collisions, using automated forklifts as instances of driverless industrial trucks. This includes a safety-critical function based on wireless communication, which might sporadically be unavailable or disturbed. Therefore, the proposed architecture is able to mitigate these faults and gracefully degrades the system’s performance if required. In our extensive evaluation, we simulate varying warehouse settings to verify our approach and to estimate the impact on an automated forklift’s performance.
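    One possible reading of the graceful-degradation idea is sketched below (timings, speed limits, and the watchdog design are assumptions for illustration, not the paper's architecture): a vehicle-side watchdog lowers the permitted speed when messages from the infrastructure sensors stop arriving, rather than failing unsafely.

```python
# Simplified sketch (timings and speed limits are invented): a forklift-side
# watchdog that degrades the speed limit when messages from the infrastructure
# sensors stop arriving, instead of failing unsafely.
import time

MSG_TIMEOUT_S = 0.5
NOMINAL_SPEED = 2.0      # m/s with live infrastructure coverage
DEGRADED_SPEED = 0.5     # m/s when coverage is unknown (blind-corner assumption)

class InfrastructureWatchdog:
    def __init__(self):
        self.last_msg_time = time.monotonic()
        self.human_detected = False

    def on_message(self, human_detected):
        self.last_msg_time = time.monotonic()
        self.human_detected = human_detected

    def allowed_speed(self):
        if self.human_detected:
            return 0.0                       # stop: person reported in shared area
        if time.monotonic() - self.last_msg_time > MSG_TIMEOUT_S:
            return DEGRADED_SPEED            # communication lost: degrade gracefully
        return NOMINAL_SPEED

watchdog = InfrastructureWatchdog()
watchdog.on_message(human_detected=False)
print(watchdog.allowed_speed())              # 2.0 while messages are fresh
time.sleep(0.6)
print(watchdog.allowed_speed())              # 0.5 after the timeout elapses
```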
  • Publication
    Safety Implications of Runtime Adaptation to Changing Operating Conditions
    (2022) ; ; ; Oboril, Fabian ; Buerkle, Cornelius
    With further advancements in autonomous driving, larger application scenarios, so-called Operational Design Domains (ODDs), will also be addressed. Autonomous vehicles will likely experience varying operating conditions in such broader ODDs. The implications of changing operating conditions on safety and the required adaptation are, however, an open challenge. In our work, we investigate, as an example, a vehicle following scenario passing through altering operating conditions and use Responsibility Sensitive Safety (RSS) as a formal model to define appropriate longitudinal following distances. We provide a deeper analysis of the influence of switching the safety model parameter values to adapt to new operating conditions. As our findings show that hard switches of operating conditions can lead to critical situations, we propose an approach for continuously adapting the safety model parameters, allowing for a safe and more comfortable transition. In our evaluation, we use driving simulations to compare the hard switching of parameters with our proposal of gradual adaptation. Our results highlight the implications of changing operating conditions on driving safety. Moreover, we provide a solution for adapting the safety model parameters of an autonomous vehicle in such a way that safety model violations during the transition can be avoided.
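    For reference, the sketch below computes the published RSS minimum longitudinal following distance while linearly blending the braking assumptions between two operating conditions; the parameter values and the linear blending are illustrative simplifications, not the paper's evaluated adaptation strategy.

```python
# Sketch of the idea (parameter values are illustrative, and the linear blending is
# a simplified stand-in for gradual adaptation): compute the RSS minimum
# longitudinal distance while interpolating braking assumptions between two
# operating conditions instead of switching them instantaneously.

def rss_min_distance(v_rear, v_front, rho, a_accel_max, b_brake_min, b_brake_max):
    """Minimum safe longitudinal distance according to the RSS formulation."""
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + (v_rear + rho * a_accel_max) ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(0.0, d)

DRY = {"b_brake_min": 4.0, "b_brake_max": 8.0}    # assumed decelerations in m/s^2
WET = {"b_brake_min": 2.5, "b_brake_max": 5.0}

def blended(alpha):  # alpha: 0.0 = fully dry conditions, 1.0 = fully wet
    return {k: (1 - alpha) * DRY[k] + alpha * WET[k] for k in DRY}

for alpha in (0.0, 0.5, 1.0):
    params = blended(alpha)
    d = rss_min_distance(v_rear=20.0, v_front=20.0, rho=1.0,
                         a_accel_max=3.0, **params)
    print(f"alpha={alpha:.1f}: required following distance {d:.1f} m")
```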