Now showing 1 - 10 of 86
  • Publication
    Towards the Concept of Trust Assurance Case
    Trust is a fundamental aspect in enabling a smooth adoption of robotic technological innovations in our societies. While Artificial Intelligence (AI) is capable of uplifting digital contributions to our societies while protecting environmental resources, its ethical and technical trust dimensions pose significant challenges for a sustainable evolution of robotic systems. Inspired by the safety assurance case, in this paper we introduce the concept of the trust assurance case, together with the implementation of its ethical and technical principles, directed towards assuring a trustworthy, sustainable evolution of AI-enabled robotic systems.
  • Publication
    Open Dependability Exchange Metamodel: A Format to Exchange Safety Information
    Safety-relevant systems are becoming ever more complex, and they typically contain components from different manufacturers that have been integrated along the supply chain. Safety assurance is highly challenging in this context, with model-based approaches being a potential remedy. To unlock the potential of such approaches, a data format is needed to represent the safety information in multi-tier supply chains in a tool-independent way. This paper presents the Open Dependability Exchange (ODE) (https://github.com/Digital-Dependability-Identities/ODE) metamodel developed in the H2020 DEIS project, which captures the essence of, and relations between, the safety-related artifacts created during the entire development lifecycle. The different parts of the ODE provide coverage for architectural modeling, hazard and risk analysis, failure logic modeling (such as FME(D)A, FTA, and Markov chains), and safety requirements. The ODE enables the exchange of safety information between the different phases of the safety engineering lifecycle, as well as across organizations in multi-tier supply chains. Moreover, it enables the creation, integration, and validation of safety information using different vendors' tools, regardless of the specific tool's methodology.
  • Publication
    Plug-and-Produce... Safely!
    (2022) Huck, Tom P.; Ledermann, Christoph; Schlosser, Patrick; Schmidt, Andreas
    To enable resilient, innovative, and sustainable industrialization, adopting the Industry 4.0 (I4.0) paradigm is essential, as it enables distributed, reconfigurable production environments. Fast reconfiguration, and hence flexibility, is further achieved by employing human-robot collaboration, but this poses challenges with respect to human worker safety, whose assurance currently assumes static systems. While industrial practice is moving towards service-oriented approaches for the nominal function (producing goods), the safety assurance process is not yet ready for this new world, which demands continuous, collaborative, on-demand assurance [21]. In this paper, we present an end-to-end model-based safety assurance lifecycle (using Conditional Safety Certificates [30]) to bring the assurance process closer to the demands of I4.0 and overcome this paradigm mismatch. We detail the different steps of our approach and provide a worked example for an industrial human-robot collaboration use case.
  • Publication
    StaDRe and StaDRo: Reliability and Robustness Estimation of ML-Based Forecasting Using Statistical Distance Measures
    (2022) Ambekar, Akshatha; Aslansefat, Koorosh
    Reliability estimation of Machine Learning (ML) models is becoming a crucial subject. This is particularly the case when such models are deployed in safety-critical applications, as decisions based on model predictions can result in hazardous situations. In this regard, recent research has proposed methods to achieve safe, dependable, and reliable ML systems. One such method, proposed in earlier work as SafeML, consists of detecting and analyzing distributional shift and then measuring how such systems respond to it. This work focuses on the use of SafeML for time series data, and on reliability and robustness estimation of ML forecasting methods using statistical distance measures. To this end, the distance measures based on the Empirical Cumulative Distribution Function (ECDF) proposed in SafeML are explored to measure Statistical-Distance Dissimilarity (SDD) across time series. We then propose the SDD-based Reliability Estimate (StaDRe) and SDD-based Robustness (StaDRo) measures. With the help of a clustering technique, the similarity between the statistical properties of the data seen during training and the forecasts is identified. The proposed method is capable of providing a link between dataset SDD and Key Performance Indicators (KPIs) of the ML models.
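    The ECDF-based distance underlying the SDD idea can be illustrated with a minimal sketch. This is not the authors' implementation; the Kolmogorov-Smirnov statistic is used here merely as one well-known example of an ECDF-based distance, and the data windows are made up for illustration.

```python
import bisect

def ks_distance(a, b):
    """Kolmogorov-Smirnov statistic: the largest gap between the
    empirical CDFs of two samples, evaluated at every observed point."""
    xa, xb = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    return max(abs(bisect.bisect_right(xa, x) / len(xa)
                   - bisect.bisect_right(xb, x) / len(xb))
               for x in points)

train = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5]     # window seen during training
shifted = [x + 0.3 for x in train]         # simulated distributional shift
print(ks_distance(train, train))           # 0.0 -- identical windows
print(ks_distance(train, shifted))         # ~0.667 -- large dissimilarity
```

    A large distance between the training window and a forecast window signals that the statistical properties have drifted, which measures like StaDRe and StaDRo then relate to the model's KPIs.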
  • Publication
    Towards the Concept of Trust Assurance Case
    Trust is a fundamental aspect in enabling self-adaptation of intelligent systems and in paving the way towards a smooth adoption of technological innovations in our societies. While Artificial Intelligence (AI) is capable to uplift the human contribution to our societies while protecting environmental resources, its ethical and technical trust dimensions bring significant challenges for a sustainable self-adaptive evolution in the domain of safety-critical systems. Inspired from the safety assurance case, in this paper we introduce the concept of trust assurance case together with the implementation of its ethical and technical principles directed towards assuring a trustworthy sustainable evolution of safety-critical AI-controlled systems.
  • Publication
    Keep Your Distance: Determining Sampling and Distance Thresholds in Machine Learning Monitoring
    (2022) Farhad, Al-Harith; Schmidt, Andreas; Aslansefat, Koorosh
    Machine Learning (ML) has provided promising results in recent years across different applications and domains. However, in many cases, qualities such as reliability or even safety need to be ensured. To this end, one important aspect is to determine whether or not ML components are deployed in situations that are appropriate for their application scope. For components whose environments are open and variable, for instance those found in autonomous vehicles, it is therefore important to monitor the operational situation in order to determine its distance from the ML component's trained scope. If that distance is deemed too great, the application may choose to consider the ML component's outcome unreliable and switch to alternatives, e.g. using human operator input instead. SafeML is a model-agnostic approach for performing such monitoring, using distance measures based on statistical testing of the training and operational datasets. Setting SafeML up properly is limited by the lack of a systematic approach for determining, for a given application, how many operational samples are needed to yield reliable distance information, as well as an appropriate distance threshold. In this work, we address these limitations with a practical approach and demonstrate its use on a well-known traffic sign recognition problem and on an example using the CARLA open-source automotive simulator.
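    One general way to make the threshold question concrete is to calibrate against distances observed within the training data itself. The sketch below is a hedged illustration of that generic idea, not the paper's method; the function names, window size, trial count, and quantile are all illustrative assumptions.

```python
import bisect
import random

def ks_distance(a, b):
    """Max gap between the empirical CDFs of two samples."""
    xa, xb = sorted(a), sorted(b)
    return max(abs(bisect.bisect_right(xa, x) / len(xa)
                   - bisect.bisect_right(xb, x) / len(xb))
               for x in sorted(set(a) | set(b)))

def calibrate_threshold(train, window, trials, quantile, rng):
    """Threshold = a high quantile of the distances between randomly
    drawn in-distribution windows of the training data."""
    dists = sorted(
        ks_distance(train[i:i + window], train[j:j + window])
        for i, j in ((rng.randrange(len(train) - window),
                      rng.randrange(len(train) - window))
                     for _ in range(trials)))
    return dists[int(quantile * (len(dists) - 1))]

rng = random.Random(0)
train = [rng.gauss(0, 1) for _ in range(1000)]
thr = calibrate_threshold(train, window=50, trials=200, quantile=0.95, rng=rng)

ops_shifted = [rng.gauss(3, 1) for _ in range(50)]   # clearly out of scope
print(ks_distance(train[:50], ops_shifted) > thr)    # True -> flag as unreliable
```

    Operational batches whose distance exceeds the calibrated threshold would then be treated as outside the trained scope, triggering a fallback such as human operator input.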
  • Publication
    Engineering Dynamic Risk and Capability Models to Improve Cooperation Efficiency Between Human Workers and Autonomous Mobile Robots in Shared Spaces
    (2022) Ogata, Takehito; Otsuka, Satoshi; Ishigooka, Tasuku
    Coexistence or even cooperation of autonomous mobile robots (AMRs) and humans is a key ingredient for future visions of production, warehousing, and smart logistics. Before these visions can become reality, one of the fundamental challenges to be tackled is safety assurance. Existing safety concepts have significant drawbacks: they either physically separate operation spaces completely or stop the AMR if its planned trajectory overlaps with a risk area constructed around a human worker based on a worst-case assumption. In the best case, this leads to less-than-optimal performance; in the worst case, an application idea might prove completely infeasible. A general solution is to replace static worst-case assumptions with dynamic safety reasoning capabilities. This paper introduces a corresponding solution concept based on dynamic risk and capability models, which enables safety assurance while allowing for continuous optimization of performance properties.
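    The gap between a static worst-case risk area and dynamic reasoning can be made tangible with a toy stopping-distance calculation. This is purely illustrative; the formula and every number are assumptions, not the paper's risk or capability models.

```python
def risk_radius(speed, reaction_time, deceleration):
    """Distance (m) the AMR needs to come to a stop from `speed` (m/s):
    reaction-time travel plus braking distance at constant deceleration."""
    return speed * reaction_time + speed ** 2 / (2 * deceleration)

# Static concept: always assume the worst-case approach speed.
worst_case = risk_radius(speed=2.0, reaction_time=0.5, deceleration=1.0)

# Dynamic concept: use the AMR's *current* (slow) speed instead.
dynamic = risk_radius(speed=0.5, reaction_time=0.5, deceleration=1.0)

print(worst_case)  # 3.0 m  -- large area, frequent stops near workers
print(dynamic)     # 0.375 m -- much smaller area, fewer unnecessary stops
```

    Shrinking the risk area whenever the runtime state permits is what lets a dynamic model preserve safety while recovering throughput lost to worst-case stops.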
  • Publication
    The Concept of Ethical Digital Identities
    (2022) Buhnova, B.; Jacobi, F.
    Dynamic changes within cyberspace are greatly impacting human lives and our societies. Emerging evidence indicates that without ethical oversight of technological progress, intelligent solutions created to improve and enhance our lives can easily be turned against humankind. For complex AI-socio-technical ecosystems where humans, AI (Artificial Intelligence), and systems interact without a common language for building trust, this paper introduces a methodological concept of Ethical Digital Identities for supporting the ethical evaluation of intelligent digital assets.
  • Publication
    SafeDrones: Real-Time Reliability Evaluation of UAVs Using Executable Digital Dependable Identities
    (2022) Aslansefat, Koorosh; Nikolaou, Panagiota; Walker, Martin; Kolios, Panayiotis; Michael, Maria K.; Theocharides, Theocharis; Ellinas, Georgios; Papadopoulos, Yiannis
    The use of Unmanned Aerial Vehicles (UAVs) offers many advantages across a variety of applications. However, safety assurance is a key barrier to widespread usage, especially given the unpredictable operational and environmental factors UAVs experience, which are hard to capture solely at design time. This paper proposes a new reliability modeling approach called SafeDrones to help address this issue by enabling runtime reliability and risk assessment of UAVs. It is a prototype instantiation of the Executable Digital Dependable Identity (EDDI) concept, which aims to create a model-based solution for real-time, data-driven dependability assurance for multi-robot systems. By providing real-time reliability estimates, SafeDrones allows UAVs to adapt their missions accordingly.
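    How a runtime reliability estimate can feed mission adaptation can be sketched in a few lines. This is a deliberately simplified illustration using a basic exponential reliability model R(t) = exp(-λt); the function names, stress factor, and decision threshold are assumptions, not SafeDrones' actual models.

```python
import math

def reliability(base_failure_rate, stress_factor, mission_time):
    """Probability of surviving `mission_time` hours, with the base
    failure rate scaled by a runtime-observed stress factor
    (e.g. battery temperature or wind, measured in flight)."""
    lam = base_failure_rate * stress_factor  # stress raises the effective rate
    return math.exp(-lam * mission_time)

def decide(reliability_estimate, threshold=0.95):
    """Adapt the mission: continue, or abort if reliability drops."""
    return "continue" if reliability_estimate >= threshold else "return-to-base"

r_nominal = reliability(base_failure_rate=1e-3, stress_factor=1.0, mission_time=2.0)
r_stressed = reliability(base_failure_rate=1e-3, stress_factor=50.0, mission_time=2.0)
print(decide(r_nominal))   # "continue"       (R ~ 0.998)
print(decide(r_stressed))  # "return-to-base" (R ~ 0.905)
```

    Recomputing the estimate as conditions change is what turns a design-time reliability figure into the runtime, data-driven assessment the EDDI concept aims for.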
  • Publication
    Bridging Trust in Runtime Open Evaluation Scenarios
    (2021) Buhnova, Barbora; Marchetti, Eda
    Solutions to specific challenges within software engineering activities can greatly benefit from human creativity. For example, evidence of trust derived from creative virtual evaluation scenarios can support the trust assurance of fast-paced runtime adaptation of intelligent behavior. Following this vision, in this paper we introduce a methodological and architectural concept that weaves the creative and social aspects of gaming into software engineering activities, more precisely into the virtual evaluation of system behavior. A particular trait of the introduced concept is that it reinforces cooperation between technological and social intelligence.