Year
2022
Document Type
Conference Paper
Title
Trustworthy Artificial Intelligence
Abstract
Trust and trustworthiness are hard to define. There are many aspects that can increase or decrease trust in an Artificial Intelligence system. This is why bodies such as the High-Level Expert Group on AI (HLEG) and the European Commission, with its Artificial Intelligence Act, are putting forward guidelines and regulations that demand trustworthiness and help to define it better. One aspect that can increase trust in a system is making the system more transparent. For AI systems, this can be achieved through Explainable AI (XAI), which aims to explain learning systems. This article will list some requirements from the HLEG and the European Artificial Intelligence Act and will go further into transparency and how it can be achieved through explanations. At the end, we will cover personalized explanations, how they could be achieved, and how they could benefit users.
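As an illustration of the kind of transparency technique the abstract refers to, the following is a minimal, hypothetical sketch (not taken from the article) of explaining a learning system with permutation feature importance in Python using scikit-learn; the dataset, model, and parameters are assumptions chosen only for the example.

    # Hypothetical sketch: explain a trained model by measuring how much its
    # accuracy drops when each input feature is randomly shuffled
    # (permutation importance). Dataset and model are illustrative assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # A larger mean score drop means the model relies more on that feature,
    # giving users a coarse, global explanation of the model's behaviour.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")

Personalized explanations, as mentioned in the abstract, would go further and adapt which features or examples are presented to the individual user.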
Author(s)
Becker, Maximilian  
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Mainwork
Proceedings of the 2021 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory  
Conference
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation and Institute for Anthropomatics, Vision and Fusion Laboratory (Joint Workshop) 2021  
DOI
10.5445/IR/1000148316
Language
English
Institute
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB