  • Publication
    Bridging Trust in Runtime Open Evaluation Scenarios
    (2021); Buhnova, Barbora; Marchetti, Eda
    Solutions to specific challenges within software engineering activities can greatly benefit from human creativity. For example, evidence of trust derived from creative virtual evaluation scenarios can support the trust assurance of fast-paced runtime adaptation of intelligent behavior. Following this vision, in this paper we introduce a methodological and architectural concept that weaves the creative and social aspects of gaming into software engineering activities, more precisely into the virtual evaluation of system behavior. A particular trait of the introduced concept is that it reinforces cooperation between technological and social intelligence.
  • Publication
    Creating Trust in Collaborative Embedded Systems
    Effective collaboration of embedded systems relies strongly on the assumption that all components of the system and the system itself operate as expected. A level of trust is established based on that assumption. To verify and validate these assumptions, we propose a systematic procedure that starts at the design phase and spans the runtime of the systems. At design time, we propose system evaluation in purely virtual environments, allowing multiple system behaviors to be executed in a variety of scenarios. At runtime, we suggest performing predictive simulation to gain insight into the system's decision-making process. This enables trust to be created in the systems taking part in a cooperation. When cooperation is performed in open, uncertain environments, the negotiation protocols between collaborative systems must be monitored at runtime. By engaging in various negotiation protocols, the participants assign roles, schedule tasks, and combine their world views to allow more resilient perception and planning. In this chapter, we describe two complementary monitoring approaches that address the decentralized nature of collaborative embedded systems.
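    As a rough sketch of the runtime side of this procedure, the Python fragment below compares a system's observed decisions against the decisions predicted by an executable behavior model; the class and function names (PredictiveSimulationMonitor, predict, check) are illustrative assumptions, not the chapter's actual tooling.

      from dataclasses import dataclass, field

      @dataclass
      class PredictiveSimulationMonitor:
          # Hypothetical executable behavior model: maps an observed state to the expected decision.
          predict: object
          tolerance: float = 0.0
          mismatches: list = field(default_factory=list)

          def check(self, state, observed_decision):
              """Return True if the observed decision matches the prediction within tolerance."""
              expected = self.predict(state)
              ok = abs(observed_decision - expected) <= self.tolerance
              if not ok:
                  self.mismatches.append((state, expected, observed_decision))
              return ok

      # Illustrative use: a platooning controller whose model predicts the gap it should keep.
      monitor = PredictiveSimulationMonitor(predict=lambda state: state["target_gap"], tolerance=0.5)
      monitor.check({"target_gap": 10.0}, observed_decision=10.3)   # True: behavior as predicted
      monitor.check({"target_gap": 10.0}, observed_decision=14.0)   # False: mismatch is recorded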
  • Publication
    Towards Creation of Automated Prediction Systems for Trust and Dependability Evaluation
    (2020); Chren, Stanislav; Aktouf, Oum-El-Kheir; Larsson, Alf; Chillarege, Ram
    We advance the ability to design reliable Cyber-Physical Systems of Systems (CPSoSs) by integrating artificial intelligence into the engineering methods of these systems. Current practice relies heavily on independent validation of software and hardware components, with only limited evaluation during engineering integration activities. Furthermore, the changing landscape of real-time adaptive systems allows software components to be dynamically included or re-distributed within a Cyber-Physical System (CPS), with largely unknown implications for the overall system's integrity, reliability, and security. This paper introduces an approach consisting of scientific and engineering processes that enable the development of concepts for automated prediction systems for evaluating the dependability and trust of CPSoSs. This significantly moves the security and reliability design process ahead by opening the door to far more relevant design strategies and the opportunity to develop protocols, methods, and tools aimed at dealing with a wide variety of platforms with poorly calibrated reliability characteristics.
  • Publication
    Building trust in the untrustable
    Trust is a major aspect in the relationship between humans and autonomous safety-critical systems, such as autonomous vehicles. Although human errors may cause higher risks, failures of autonomous systems are more strongly perceived by the general population, which hinders the adoption of autonomous safety-critical systems. It is therefore necessary to devise approaches for systematically building trust in autonomous functions and thereby facilitate the adoption process. In this paper, we introduce a method and a framework for incrementally building trust in the context of autonomous driving. Within the envisioned solution, we employ the psychological narrative behind trust building through the formation of new habits and introduce a method where trust is established gradually for both the human and the autonomous safety-critical system via reputation building and step-by-step integration of smart software agents replacing human actions.
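    The step-by-step handover the paper envisions could, in rough terms, look like the sketch below, where increasing levels of automation are unlocked only once a reputation score clears a threshold; the levels, thresholds, and update rule are invented purely for illustration.

      # Hypothetical automation levels and thresholds; the paper does not prescribe concrete values.
      AUTOMATION_LEVELS = [
          (0.0, "driver assistance"),
          (0.6, "conditional automation"),
          (0.9, "full automation"),
      ]

      def update_reputation(reputation, success, gain=0.05, penalty=0.20):
          """Grow reputation slowly on successful autonomous actions, drop it sharply on failures."""
          return min(1.0, reputation + gain) if success else max(0.0, reputation - penalty)

      def allowed_level(reputation):
          """Highest automation level whose threshold the current reputation clears."""
          current = AUTOMATION_LEVELS[0][1]
          for threshold, level in AUTOMATION_LEVELS:
              if reputation >= threshold:
                  current = level
          return current

      # Illustrative run: trust, and with it the permitted degree of automation, grows step by step.
      reputation = 0.0
      for success in [True] * 15:
          reputation = update_reputation(reputation, success)
      print(allowed_level(reputation))   # "conditional automation" after 15 successful steps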
  • Publication
    Towards Runtime Monitoring for Malicious Behaviors Detection in Smart Ecosystems
    (2019); Giandomenico, Felicita Di; Lonetti, Francesca; Marchetti, Eda; Jahic, Jasmin
    The behavior of a Smart Ecosystem is reflected in the control decisions of entities of different natures, especially its software components, and malicious behavior in particular requires closer attention. This paper discusses the challenges related to the evaluation of software smart agents and proposes a first solution that leverages monitoring facilities for a) assuring conformity between the software agent and its digital twin in a real-time evaluation, and b) validating the decisions of the digital twins during runtime in a predictive simulation.
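    A minimal sketch of check a), assuming the agent and its digital twin expose comparable numeric output traces (an assumption, not an interface taken from the paper): the monitor pairs the two streams and flags divergences as potential malicious deviations.

      def monitor_conformity(agent_outputs, twin_outputs, tolerance=1e-3):
          """Pair each agent output with the digital twin's output and flag divergences."""
          for step, (agent_value, twin_value) in enumerate(zip(agent_outputs, twin_outputs)):
              conforms = abs(agent_value - twin_value) <= tolerance
              # A real monitor would raise an alarm or quarantine the agent on a violation.
              yield step, agent_value, twin_value, conforms

      # Illustrative traces; step 2 would be reported as a potential malicious deviation.
      agent_trace = [0.10, 0.20, 0.35]
      twin_trace = [0.10, 0.20, 0.30]
      violations = [step for step, a, t, ok in monitor_conformity(agent_trace, twin_trace) if not ok]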
  • Publication
    Towards creation of a reference architecture for trust-based digital ecosystems
    (2019); Chren, Stanislav; Bühnová, Barbora; Dimitrov, Dimitar
    With progressing digitalization and the trend towards autonomous computing, systems tend to form digital ecosystems in which each autonomous system aims at achieving its own goals. Within a highway ecosystem, for example, autonomous vehicles could deploy smart agents in the form of software applications. This would enable cooperative driving and ultimately the formation of vehicle platoons that reduce air friction and fuel consumption. In the smart grid domain, software-defined virtual power plants could be established to enable remote and autonomous collaboration of various units, such as smart meters, data concentrators, and distributed energy resources, in order to optimize power generation, demand-side energy, and power storage. Effective collaboration within these emerging digital ecosystems relies strongly on the assumption that all components of the ecosystem operate as expected, and a level of trust among them is established based on that assumption. In this paper, we present the idea of trust-based digital ecosystems, built upon the concept of a digital twin of the ecosystem as a machine-readable representation of the system and a representation of its goals and trust at runtime. This creates demand for a reference architecture for trust-based digital ecosystems that captures their main concepts and relationships. By modeling the goals of the actors and systems, a reference architecture can provide a basis for analyzing the competitive forces that influence the health of an ecosystem.
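    One way such a machine-readable ecosystem twin could be recorded, covering actors, their goals, and pairwise trust levels at runtime, is sketched below; the data structure is an assumption for illustration and not the reference architecture itself.

      from dataclasses import dataclass, field

      @dataclass
      class Actor:
          name: str
          goals: list                               # e.g. ["reduce fuel consumption"]

      @dataclass
      class EcosystemTwin:
          actors: dict = field(default_factory=dict)   # actor name -> Actor
          trust: dict = field(default_factory=dict)    # (truster, trustee) -> level in [0, 1]

          def add_actor(self, actor):
              self.actors[actor.name] = actor

          def set_trust(self, truster, trustee, level):
              self.trust[(truster, trustee)] = level

      # Illustrative highway ecosystem with two platooning vehicles.
      twin = EcosystemTwin()
      twin.add_actor(Actor("vehicle_A", ["reduce air friction"]))
      twin.add_actor(Actor("vehicle_B", ["reduce fuel consumption"]))
      twin.set_trust("vehicle_A", "vehicle_B", 0.7)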
  • Publication
    (Do Not) Trust in Ecosystems
    (2019); Buhnova, Barbora
    In the context of Smart Ecosystems, systems engage in dynamic cooperation with other systems to achieve their goals. Expedient operation is only possible when all systems cooperate as expected. This requires a level of trust between the components of the ecosystem. New systems that join the ecosystem therefore first need to build up a level of trust. Humans derive trust from behavioral reputation in key situations. In Smart Ecosystems (SES), the reputation of a system or system component can also be based on observation of its behavior. In this paper, we introduce a method and a test platform that support virtual evaluation of decisions at runtime, thereby supporting trust building within SES. The key idea behind the platform is that it employs and evaluates Digital Twins, which are executable models of system components, to learn about component behavior in observed situations. The trust in the Digital Twin then builds up over time based on the behavioral compliance of the real system component with its Digital Twin. In this paper, we use the context of automotive ecosystems and examine the concepts for building up reputation on control algorithms of smart agents dynamically downloaded at runtime to individual autonomous vehicles within the ecosystem.
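    The core mechanism, trust that accumulates while the real component keeps matching its Digital Twin, could be approximated by the running average sketched below; the update rule, weights, and acceptance threshold are assumptions made for illustration.

      def update_trust(trust, complied, weight=0.1):
          """Exponential moving average over compliance observations (1.0 = matched the Digital Twin)."""
          observation = 1.0 if complied else 0.0
          return (1 - weight) * trust + weight * observation

      trust = 0.5                            # neutral prior for a component that just joined
      for complied in [True, True, True, False, True]:
          trust = update_trust(trust, complied)
      accept_agent = trust >= 0.6            # hypothetical threshold; trust ends near 0.61 here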
  • Publication
    Prototyping automotive smart ecosystems
    The breakthrough of smart ecosystems formed by open adaptive systems requires new testing methods. The need for safety and security on the roads is pushed ahead by new developments in the area of autonomous driving, so frameworks that support testing of control functions are needed. We present a prototype platform for automotive smart ecosystems that enables testing of smart ecosystems with a special focus on visualization and integration with the real world. An abstract and a detailed description of the platform components are presented, together with a rationale for the chosen components and a description of their interfaces. The platform provides a meaningful visualization of scenarios that verify and validate the behavioral interaction between components of the real and virtual worlds. Index Terms: Prototype Platform, Simulation, Smart Ecosystems, Automotive, Visualization, System of Systems, Software Ecosystems, Testing, Virtual Testing.
  • Publication
    Accelerated simulated fault injection testing
    (2017); Jahic, Jasmin; Dropmann, Christoph; Munk, Peter; Rakshith, Amarnath; Thaden, Eike
    Fault injection testing approaches assess the reliability of execution environments for critical software. They support the early testing of safety concepts that mitigate the impact of hardware failures on software behavior. The growing use of platform software for embedded systems raises the need to verify safety concepts that execute on top of operating systems and middleware platforms. Current fault injection techniques consider the resulting software stack as one black box and attempt to test the reaction of all components in the context of faults. This leads to very high software complexity and consequently requires a very large number of fault injection experiments. Testing the software components, such as control functions, operating systems, and middleware, individually would lead to a significant reduction in the number of experiments required. In this paper, we illustrate our novel approach to fault injection testing, which considers the components of a software stack individually, enables re-use of previously collected evidence, allows testing to be focused on highly critical parts of the control software, and significantly lowers the number of experiments required.
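    A rough sketch of the component-wise idea, under the assumption that experiments are identified by (component, fault) pairs: each layer of the stack is tested against its own fault list, and previously collected evidence is re-used to skip experiments that are already covered. The campaign structure and all names are illustrative, not the paper's actual method.

      # Hypothetical layered stack and fault list; both would come from the actual safety concept.
      STACK = ["control_function", "middleware", "operating_system"]
      FAULTS = ["bit_flip", "delayed_message", "dropped_message"]

      def inject_and_observe(component, fault):
          """Placeholder for a real fault-injection experiment on a single component."""
          return "mitigated"                 # e.g. the safety mechanism caught the injected fault

      def run_campaign(stack, faults, prior_evidence):
          """Run only the (component, fault) experiments not already covered by earlier evidence."""
          results = dict(prior_evidence)     # re-use previously collected results
          for component in stack:
              for fault in faults:
                  if (component, fault) in results:
                      continue               # evidence already exists, skip the experiment
                  results[(component, fault)] = inject_and_observe(component, fault)
          return results

      # Re-using evidence for the operating system cuts the campaign from 9 experiments to 6.
      prior = {("operating_system", f): "mitigated" for f in FAULTS}
      results = run_campaign(STACK, FAULTS, prior)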