Fraunhofer-Gesellschaft
July 2024
Conference Paper
Title

The SPATIAL Architecture: Design and Development Experiences from Gauging and Monitoring the AI Inference Capabilities of Modern Applications

Abstract
Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor trustworthy properties of AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights into the AI decision process. This information is then used by human operators to comprehend possible issues that influence the performance of AI models and to adjust or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems implementing AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a roadmap of ongoing challenges that require attention to achieve robust trustworthy analysis of AI and greater engagement of human oversight.
Author(s)
Ottun, Abdul-Rasheed
Marasinghe, Rasinthe
Elemosho, Toluwani
Liyanage, Mohan
Ragab, Mohamad
Bagave, Prachi
Westberg, Marcus
Asadi, Mehrdad
Boerger, Michell
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS
Sandeepa, Chamara
Senevirathna, Thulitha
Siniarski, Bartlomiej
Liyanage, Madhusanka
La, Vinh Hoa
Nguyen, Manh-Dung
Montes De Oca, Edgardo
Oomen, Tessa
Gonçalves, João Fernando Ferreira
Tanasković, Illija
Klopanovic, Sasa
Kourtellis, Nicolas
Soriente, Claudio
Pridmore, Jason
Cavalli, Ana Rosa
Draskovic, Drasko
Marchal, Samuel
Wang, Shennan
Noguero, David Solans
Tcholtchev, Nikolay Vassilev
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS
Ding, Aaron Yi
Flores, Huber
Mainwork
Proceedings of the 2024 IEEE 44th International Conference on Distributed Computing Systems, ICDCS 2024
Conference
International Conference on Distributed Computing Systems 2024  
Open Access
DOI
10.1109/ICDCS60910.2024.00092
Additional full text version
Landing Page
Language
English
Keyword(s)
  • Trustworthy AI
  • AI Act
  • Industrial Use Case
  • Accountability
  • Resilience
  • Human Oversight