  • Publication
    Towards the Concept of Trust Assurance Case
    Trust is a fundamental aspect in enabling a smooth adoption of robotic technical innovations in our societies. While Artificial Intelligence (AI) is capable of uplifting digital contributions to our societies while protecting environmental resources, its ethical and technical trust dimensions bring significant challenges for the sustainable evolution of robotic systems. Inspired by the safety assurance case, in this paper we introduce the concept of the trust assurance case, together with the implementation of its ethical and technical principles, directed towards assuring a trustworthy and sustainable evolution of AI-enabled robotic systems.
  • Publication
    Predictive Simulation within the Process of Building Trust
    (2022); Buhnova, B.
    The emerging dynamic architectures of autonomous digital ecosystems raise new challenges in the process of assuring trust and safety. In particular, the admission of software smart agents into autonomous dynamic ecosystems will become a significant future topic. In this work we propose the concept of predictive simulation, which builds on the concept of a virtual Hardware-in-the-Loop (vHiL) testbed to support rapid runtime evaluation of software smart agents in autonomous digital ecosystems. Based on this testbed, we introduce a novel strategy for building trust in software components that enter an ecosystem as black boxes: rather than executing their potentially malicious behavior directly, we execute their corresponding digital twins, which are abstract models fed with real-time data.
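    A minimal sketch of the predictive-simulation idea described in this abstract, assuming a stream of real-time snapshots and a caller-supplied safety property; every name below is hypothetical and not taken from the paper:

      # Hedged sketch: admit a black-box agent by executing its digital twin
      # (an abstract model fed with real-time data) instead of the agent itself.

      class DigitalTwin:
          """Abstract behavioral model of a software smart agent."""
          def __init__(self, behavior_model):
              self.behavior_model = behavior_model  # e.g., a learned or specified model

          def predict(self, snapshot):
              # Run the abstract model on the current ecosystem state.
              return self.behavior_model(snapshot)

      def predictive_simulation(twin, live_feed, safety_check, horizon=100):
          """Simulate the twin over a stream of real-time snapshots and deny
          admission as soon as a predicted decision violates safety."""
          for _, snapshot in zip(range(horizon), live_feed):
              decision = twin.predict(snapshot)
              if not safety_check(snapshot, decision):
                  return False  # predicted behavior is unsafe: deny admission
          return True  # safe over the whole horizon: admit tentatively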
  • Publication
    Towards the Concept of Trust Assurance Case
    Trust is a fundamental aspect in enabling self-adaptation of intelligent systems and in paving the way towards a smooth adoption of technological innovations in our societies. While Artificial Intelligence (AI) is capable of uplifting the human contribution to our societies while protecting environmental resources, its ethical and technical trust dimensions bring significant challenges for a sustainable self-adaptive evolution in the domain of safety-critical systems. Inspired by the safety assurance case, in this paper we introduce the concept of the trust assurance case, together with the implementation of its ethical and technical principles, directed towards assuring a trustworthy and sustainable evolution of safety-critical AI-controlled systems.
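    By analogy with safety assurance cases (e.g., Goal Structuring Notation), a trust assurance case can be pictured as a tree of claims backed by sub-claims and evidence. The sketch below is purely illustrative; its node names and evidence items are assumptions, not content of the paper:

      # Illustrative only: a trust assurance case as a claim/evidence tree,
      # mirroring the structure commonly used in safety assurance cases.
      from dataclasses import dataclass, field

      @dataclass
      class Claim:
          text: str
          sub_claims: list = field(default_factory=list)  # decomposed claims
          evidence: list = field(default_factory=list)    # supporting artifacts

      trust_case = Claim(
          "The AI-controlled system evolves in a trustworthy, sustainable way",
          sub_claims=[
              Claim("Ethical trust principles are upheld",
                    evidence=["bias audit report", "human-oversight review"]),
              Claim("Technical trust principles are upheld",
                    evidence=["runtime monitor logs", "verification results"]),
          ],
      )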
  • Publication
    Bridging Trust in Runtime Open Evaluation Scenarios
    (2021); Buhnova, Barbora; Marchetti, Eda
    Solutions to specific challenges within software engineering activities can greatly benefit from human creativity. For example, evidence of trust derived from creative virtual evaluation scenarios can support the trust assurance of fast-paced runtime adaptation of intelligent behavior. Following this vision, in this paper we introduce a methodological and architectural concept that weaves the creative and social aspects of gaming into software engineering activities, more precisely into the virtual evaluation of system behavior. A particular trait of the introduced concept is that it reinforces cooperation between technological and social intelligence.
  • Publication
    Creating Trust in Collaborative Embedded Systems
    Effective collaboration of embedded systems relies strongly on the assumption that all components of the system, and the system itself, operate as expected. A level of trust is established based on that assumption. To verify and validate these assumptions, we propose a systematic procedure that starts at the design phase and spans the runtime of the systems. At design time, we propose system evaluation in purely virtual environments, allowing multiple system behaviors to be executed in a variety of scenarios. At runtime, we suggest performing predictive simulation to gain insights into the system's decision-making process. This enables trust to be established in each system taking part in a cooperation. When cooperation takes place in open, uncertain environments, the negotiation protocols between collaborative systems must be monitored at runtime. By engaging in various negotiation protocols, the participants assign roles, schedule tasks, and combine their world views to allow more resilient perception and planning. In this chapter, we describe two complementary monitoring approaches to address the decentralized nature of collaborative embedded systems.
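    One way to picture the runtime monitoring of negotiation protocols mentioned in this abstract is a state-machine monitor replayed over the observed message trace; the protocol states and messages below are invented for illustration and do not come from the chapter:

      # Hedged sketch: check that an observed negotiation follows an expected
      # protocol (role assignment, task scheduling, world-view sharing).
      VALID_TRANSITIONS = {
          ("idle", "propose_role"): "roles_proposed",
          ("roles_proposed", "accept_role"): "roles_assigned",
          ("roles_assigned", "schedule_task"): "tasks_scheduled",
          ("tasks_scheduled", "share_worldview"): "cooperating",
      }

      def monitor_negotiation(message_trace):
          """Replay observed messages against the protocol; report the first
          message with no valid transition from the current state."""
          state = "idle"
          for msg in message_trace:
              state_after = VALID_TRANSITIONS.get((state, msg))
              if state_after is None:
                  return f"violation: '{msg}' not allowed in state '{state}'"
              state = state_after
          return f"trace conforms, final state: {state}"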
  • Publication
    Towards Creation of Automated Prediction Systems for Trust and Dependability Evaluation
    (2020); Chren, Stanislav; Aktouf, Oum-El-Kheir; Larsson, Alf; Chillarege, Ram
    We advance the ability to design reliable Cyber-Physical Systems of Systems (CPSoSs) by integrating artificial intelligence into the engineering methods of these systems. Current practice relies heavily on independent validation of software and hardware components, with only limited evaluation during engineering integration activities. Furthermore, the changing landscape of real-time adaptive systems allows software components to be dynamically included or redistributed within a Cyber-Physical System (CPS), with mostly unknown implications for the overall system's integrity, reliability, and security. This paper introduces an approach consisting of scientific and engineering processes that enable the development of concepts for automated prediction systems for evaluating the dependability and trust of CPSoSs. This significantly advances the security and reliability design process by opening the door to far more relevant design strategies and the opportunity to develop protocols, methods, and tools aimed at dealing with a wide variety of platforms with poorly calibrated reliability characteristics.
  • Publication
    Building trust in the untrustable
    Trust is a major aspect of the relationship between humans and autonomous safety-critical systems, such as autonomous vehicles. Although human errors may pose higher risks, failures of autonomous systems are perceived more strongly by the general population, which hinders the adoption of autonomous safety-critical systems. It is therefore necessary to devise approaches for systematically building trust in autonomous functions and thereby facilitate the adoption process. In this paper, we introduce a method and a framework for incrementally building trust in the context of autonomous driving. Within the envisioned solution, we employ the psychological narrative behind trust building through the formation of new habits, and introduce a method where trust is established gradually, for both the human and the autonomous safety-critical system, via reputation building and the step-by-step integration of smart software agents replacing human actions.
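    A toy sketch of such incremental trust building, assuming a scalar reputation score and per-function hand-over thresholds; the stages, update rule, and numbers are all assumptions for illustration, not the paper's method:

      # Reputation rises slowly on success and drops sharply on failure,
      # mirroring how trust is easier to lose than to gain; driving functions
      # are handed over from the human one by one as reputation grows.
      AUTOMATION_STAGES = [
          ("lane_keeping", 0.3),
          ("adaptive_cruise", 0.5),
          ("lane_change", 0.7),
          ("full_autonomy", 0.9),
      ]

      def update_reputation(reputation, success, gain=0.05, penalty=0.20):
          # Clamp the score to [0, 1] after each observed outcome.
          return min(1.0, reputation + gain) if success else max(0.0, reputation - penalty)

      def enabled_functions(reputation):
          # Only functions whose threshold the current reputation meets are automated.
          return [name for name, threshold in AUTOMATION_STAGES if reputation >= threshold]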
  • Publication
    Towards Runtime Monitoring for Malicious Behaviors Detection in Smart Ecosystems
    (2019); Giandomenico, Felicita Di; Lonetti, Francesca; Marchetti, Eda; Jahic, Jasmin
    The behavior of a Smart Ecosystem is reflected in the control decisions of its entities of different natures, especially of its software components. Malicious behavior in particular requires closer attention. This paper discusses the challenges related to the evaluation of software smart agents and proposes a first solution that leverages monitoring facilities for a) assuring conformity between the software agent and its digital twin through real-time evaluation, and b) validating the decisions of the digital twins at runtime in a predictive simulation.
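    The conformity check in a) can be pictured as comparing, step by step, the decision actually taken by the agent with the decision predicted by its digital twin; the interface below is an assumption for illustration, not the paper's API:

      # Hedged sketch: flag steps where the real agent diverges from its twin,
      # treating divergence as a candidate for malicious behavior.
      def conformity_monitor(agent_decisions, twin_decisions, tolerance=0):
          for step, (real, predicted) in enumerate(zip(agent_decisions, twin_decisions)):
              if abs(real - predicted) > tolerance:
                  yield step  # agent diverged from its twin at this step

      # Example: the agent's third decision deviates from the twin's prediction.
      list(conformity_monitor([1, 2, 9], [1, 2, 3]))  # -> [2]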
  • Publication
    Towards creation of a reference architecture for trust-based digital ecosystems
    (2019); Chren, Stanislav; Bühnová, Barbora; Dimitrov, Dimitar
    With progressing digitalization and the trend towards autonomous computing, systems tend to form digital ecosystems in which each autonomous system aims at achieving its own goals. Within a highway ecosystem, for example, autonomous vehicles could deploy smart agents in the form of software applications. This would enable cooperative driving and ultimately the formation of vehicle platoons that reduce air friction and fuel consumption. In the smart grid domain, software-defined virtual power plants could be established to enable the remote and autonomous collaboration of various units, such as smart meters, data concentrators, and distributed energy resources, in order to optimize power generation, demand-side energy, and power storage. Effective collaboration within these emerging digital ecosystems relies strongly on the assumption that all components of the ecosystem operate as expected, and a level of trust among them is established on that basis. In this paper, we present the idea of trust-based digital ecosystems, built upon the concept of a digital twin of the ecosystem as a machine-readable representation of the system and of its goals and trust at runtime. This creates a demand for a reference architecture for trust-based digital ecosystems that captures their main concepts and relationships. By modeling the goals of the actors and systems, a reference architecture can provide a basis for analyzing the competitive forces that influence the health of an ecosystem.
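    A minimal sketch of the core concepts such a reference architecture would capture, namely actors with machine-readable goals and runtime trust relations; all names and values are hypothetical:

      from dataclasses import dataclass, field

      @dataclass
      class Actor:
          """An autonomous system as represented in the ecosystem's digital twin."""
          name: str
          goals: list                                  # machine-readable goals
          trust: dict = field(default_factory=dict)    # peer name -> score in [0, 1]

      vehicle = Actor("vehicle_42", ["join platoon", "minimize fuel consumption"])
      lead = Actor("lead_7", ["keep platoon stable"])
      vehicle.trust[lead.name] = 0.8  # trust established and updated at runtime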
  • Publication
    (Do Not) Trust in Ecosystems
    (2019); Buhnova, Barbora
    In the context of Smart Ecosystems, systems engage in dynamic cooperation with other systems to achieve their goals. Expedient operation is only possible when all systems cooperate as expected. This requires a level of trust between the components of the ecosystem. New systems that join the ecosystem therefore first need to build up a level of trust. Humans derive trust from behavioral reputation in key situations. In Smart Ecosystems (SES), the reputation of a system or system component can likewise be based on observation of its behavior. In this paper, we introduce a method and a test platform that support the virtual evaluation of decisions at runtime, thereby supporting trust building within SES. The key idea behind the platform is that it employs and evaluates Digital Twins, which are executable models of system components, to learn about component behavior in observed situations. Trust in the Digital Twin then builds up over time based on the behavioral compliance of the real system component with its Digital Twin. We use the context of automotive ecosystems and examine concepts for building up the reputation of control algorithms of smart agents that are dynamically downloaded at runtime to individual autonomous vehicles within the ecosystem.
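    A hedged sketch of the compliance-based trust building described in this abstract; the moving-average update rule and its parameter are assumptions, not the paper's formula:

      # Trust in a Digital Twin accumulates from the behavioral compliance of
      # the real component with the twin's predictions over time.
      def compliance_trust(observed, predicted, alpha=0.1):
          trust = 0.0
          for real, expected in zip(observed, predicted):
              compliant = 1.0 if real == expected else 0.0
              trust = (1 - alpha) * trust + alpha * compliant  # moving average
          return trust  # in [0, 1]; higher means closer behavioral compliance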