  • Publication
    Towards the Concept of Trust Assurance Case
    Trust is a fundamental aspect in enabling the smooth adoption of robotic technological innovations in our societies. While Artificial Intelligence (AI) is capable of uplifting digital contributions to our societies while protecting environmental resources, its ethical and technical trust dimensions pose significant challenges for the sustainable evolution of robotic systems. Inspired by the safety assurance case, in this paper we introduce the concept of the trust assurance case, together with the implementation of its ethical and technical principles, directed towards assuring a trustworthy and sustainable evolution of AI-enabled robotic systems.
  • Publication
    Towards the Concept of Trust Assurance Case
    Trust is a fundamental aspect in enabling the self-adaptation of intelligent systems and in paving the way towards the smooth adoption of technological innovations in our societies. While Artificial Intelligence (AI) is capable of uplifting the human contribution to our societies while protecting environmental resources, its ethical and technical trust dimensions pose significant challenges for sustainable self-adaptive evolution in the domain of safety-critical systems. Inspired by the safety assurance case, in this paper we introduce the concept of the trust assurance case, together with the implementation of its ethical and technical principles, directed towards assuring a trustworthy and sustainable evolution of safety-critical AI-controlled systems.
  • Publication
    The Concept of Ethical Digital Identities
    (2022) Buhnova, B.; Jacobi, F.
    Dynamic changes within cyberspace are greatly impacting human lives and our societies. Emerging evidence indicates that without ethical oversight of technological progress, intelligent solutions created to improve and enhance our lives can easily be turned against humankind. In complex AI-socio-technical ecosystems, where humans, AI (Artificial Intelligence) and systems interact without a common language for building trust, this paper introduces the methodological concept of Ethical Digital Identities for supporting the ethical evaluation of intelligent digital assets.
  • Publication
    Bridging Trust in Runtime Open Evaluation Scenarios
    (2021) Buhnova, Barbora; Marchetti, Eda
    Solutions to specific challenges within software engineering activities can greatly benefit from human creativity. For example, evidence of trust derived from creative virtual evaluation scenarios can support the trust assurance of fast-paced runtime adaptation of intelligent behavior. Following this vision, in this paper we introduce a methodological and architectural concept that weaves the creative and social aspects of gaming into software engineering activities, more precisely into the virtual evaluation of system behavior. A particular trait of the introduced concept is that it reinforces cooperation between technological and social intelligence.
  • Publication
    Goals within Trust-based Digital Ecosystems
    (2021) Purohit, Akanksha; Buhnova, Barbora
    Within a digital ecosystem, systems and actors form coalitions for achieving common and individual goals. In a constant motion of collaborative and competitive forces, and faced with the risk of malicious attacks, ecosystem participants require strong guarantees of their collaborators' trustworthiness. Evidence of trustworthy behavior derived from runtime executions can provide these trust guarantees, given that a clear definition and delimitation of trust concerns exists. Without them, a basis for negotiating expectations, quantifying achievements and identifying strategic attacks cannot be established, and the attainment of strategic benefits relies solely on vulnerable collaborations. In this paper, we examine the relationship between goals and trust, and we introduce a formalism for goal representation. We delimit the trust concerns with anti-goals. The anti-goals set the boundaries within which we structure the trust analysis and build up evidence of motivated attacks.
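    The goal/anti-goal idea from the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual formalism: every name, field, and event below is an assumption. Anti-goals delimit behavior that must not occur, and a collaborator's runtime trace is checked against those boundaries to accumulate trust evidence.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Goal:
        """A goal an ecosystem participant wants to achieve (illustrative)."""
        name: str
        condition: str  # informal description of the desired state

    @dataclass
    class AntiGoal:
        """Delimits a trust concern: behavior that must NOT occur."""
        name: str
        violated_by: set  # observed runtime events that indicate a violation

    def trust_evidence(anti_goals, observed_events):
        """Return the anti-goals whose boundary the observed events violate."""
        return [ag.name for ag in anti_goals if ag.violated_by & observed_events]

    # Hypothetical anti-goals and a runtime trace of a collaborator.
    ags = [
        AntiGoal("no-data-leak", violated_by={"export_unencrypted"}),
        AntiGoal("no-resource-hogging", violated_by={"cpu_overuse"}),
    ]
    print(trust_evidence(ags, {"export_unencrypted", "heartbeat"}))  # ['no-data-leak']
    ```

    A clean trace (no intersecting events) yields an empty list, i.e. no evidence of a motivated attack against the delimited trust concerns.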
  • Publication
    Towards Creation of Automated Prediction Systems for Trust and Dependability Evaluation
    (2020) Chren, Stanislav; Aktouf, Oum-El-Kheir; Larsson, Alf; Chillarege, Ram
    We advance the ability to design reliable Cyber-Physical Systems of Systems (CPSoSs) by integrating artificial intelligence into the engineering methods of these systems. Current practice relies heavily on independent validation of software and hardware components, with only limited evaluation during engineering integration activities. Furthermore, the changing landscape of real-time adaptive systems allows software components to be dynamically included or redistributed within a Cyber-Physical System (CPS), with mostly unknown implications for the overall system's integrity, reliability and security. This paper introduces an approach, consisting of scientific and engineering processes, that enables the development of concepts for automated prediction systems for evaluating the dependability and trust of CPSoSs. This significantly advances the security and reliability design process by opening the door to far more relevant design strategies and to the development of protocols, methods, and tools aimed at dealing with a wide variety of platforms with poorly calibrated reliability characteristics.
  • Publication
    Building trust in the untrustable
    Trust is a major aspect in the relationship between humans and autonomous safety-critical systems, such as autonomous vehicles. Although human errors may cause higher risks, failures of autonomous systems are more strongly perceived by the general population, which hinders the adoption of autonomous safety-critical systems. It is therefore necessary to devise approaches for systematically building trust in autonomous functions and thereby facilitate the adoption process. In this paper, we introduce a method and a framework for incrementally building trust in the context of autonomous driving. Within the envisioned solution, we employ the psychological narrative behind trust building through the formation of new habits and introduce a method where trust is established gradually for both the human and the autonomous safety-critical system via reputation building and step-by-step integration of smart software agents replacing human actions.
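    The incremental, reputation-based trust building described above can be sketched as a simple update rule. This is a hypothetical illustration, not the paper's method: the function names, rates, and thresholds are all assumptions. Trust grows slowly with each successful action of a smart software agent and drops sharply on failure, and the level of autonomy delegated to the agent follows the score step by step.

    ```python
    def update_reputation(reputation, outcome_ok, gain=0.05, penalty=0.2):
        """Adjust a [0, 1] reputation score: slow gain on success, fast loss on failure."""
        if outcome_ok:
            return min(1.0, reputation + gain * (1.0 - reputation))
        return max(0.0, reputation - penalty)

    def delegation_level(reputation, thresholds=(0.3, 0.6, 0.9)):
        """Map the reputation score to a step of autonomy handed over to the agent."""
        return sum(reputation >= t for t in thresholds)

    # Habit formation: repeated successful takeovers slowly raise the score;
    # a single failure sets the adoption process back noticeably.
    rep = 0.5
    rep = update_reputation(rep, outcome_ok=True)   # 0.525
    rep = update_reputation(rep, outcome_ok=False)  # 0.325
    ```

    The asymmetry between `gain` and `penalty` mirrors the observation in the abstract that failures of autonomous systems are perceived more strongly than human errors.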
  • Publication
    Predictive Runtime Simulation for Building Trust in Cooperative Autonomous Systems
    Future autonomous systems will also be cooperative systems. They will interact with each other, with traffic infrastructure, with cloud services and with other systems. In such an open ecosystem, trust is of fundamental importance, because cooperation between systems is key for many innovative applications and services. Without an adequate notion of trust, as well as the means to maintain and use it, the full potential of autonomous systems cannot be unlocked. In this paper, we discuss what constitutes trust in autonomous cooperative systems and sketch out a corresponding multifaceted notion of trust. We then discuss a predictive runtime simulation approach as a building block for trust and elaborate on means to secure this approach.
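    A toy version of such a predictive runtime check, under an assumed constant-velocity motion model and with invented names and margins, might forward-simulate the announced plans of two cooperating systems and extend trust only if their predicted separation stays above a safety margin.

    ```python
    def predict_positions(pos, vel, dt, steps):
        """Forward-simulate a constant-velocity plan from an initial 2D state."""
        return [(pos[0] + vel[0] * dt * i, pos[1] + vel[1] * dt * i)
                for i in range(1, steps + 1)]

    def min_separation(plan_a, plan_b):
        """Smallest step-wise distance between two predicted trajectories."""
        return min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(plan_a, plan_b))

    # Two vehicles approaching head-on: the simulated plans collide,
    # so the predictive check withholds trust in the announced cooperation.
    a = predict_positions((0.0, 0.0), (1.0, 0.0), dt=0.5, steps=10)
    b = predict_positions((10.0, 0.0), (-1.0, 0.0), dt=0.5, steps=10)
    safe = min_separation(a, b) > 2.0  # False for this pair of plans
    ```

    A real predictive runtime simulation would use richer dynamics and uncertainty models; the point of the sketch is only the decision pattern: simulate first, then cooperate.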