Fraunhofer-Gesellschaft
2024
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title

The European Artificial Intelligence Act

Title Supplement
Overview and Recommendations for Compliance
Abstract
The European Union’s AI Act establishes ethical guidelines and a regulatory framework for the development, deployment, and use of Artificial Intelligence (AI) systems in the European Union. In this whitepaper, we provide an overview of the key provisions outlined in the EU AI Act, addressing risk classification, stakeholder considerations, and requirements, with a particular focus on safety-critical and therefore high-risk AI systems. Under the EU AI Act, AI systems are categorized by risk level, ranging from minimal to unacceptable risk, each with its corresponding set of regulatory obligations. High-risk AI systems are subject to rigorous requirements spanning risk management, data governance, transparency, and human oversight. This whitepaper delves into the specifics of Articles 9 to 15 of the EU AI Act, covering the requirements for high-risk AI systems while also identifying gaps with respect to existing safety standards. We propose a framework for bridging these gaps by deriving concrete requirements from the EU AI Act, inspired by contract-based design. We leverage our expertise in trustworthy AI and safety to develop a framework for deriving argumentation trees for generic properties of Machine Learning (ML) systems. We demonstrate this framework on three practical use cases across various sectors. Our work illustrates how AI systems in safety-critical domains such as automotive, industrial automation, and healthcare can meet regulatory standards while upholding ethical principles. The EU AI Act represents a significant step towards fostering trust, accountability, and responsible innovation in AI technologies. By following systematic verification processes for EU AI Act requirements, stakeholders can navigate the complex AI landscape with confidence, ensuring the ethical development and deployment of AI systems while safeguarding human interests and values.
Author(s)
Heidemann, Lena  
Fraunhofer-Institut für Kognitive Systeme IKS  
Herd, Benjamin
Fraunhofer-Institut für Kognitive Systeme IKS  
Kelly, Jessica
Fraunhofer-Institut für Kognitive Systeme IKS  
Mata, Núria
Fraunhofer-Institut für Kognitive Systeme IKS  
Tsai, Wan-Ting
Fraunhofer-Institut für Kognitive Systeme IKS  
Zafar, Shanza Ali
Fraunhofer-Institut für Kognitive Systeme IKS  
Zamanian, Alireza
Fraunhofer-Institut für Kognitive Systeme IKS  
Corporate Author
Fraunhofer-Institut für Kognitive Systeme IKS  
Project(s)
IKS-Ausbauprojekt  
Funder
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie  
DOI
10.24406/publica-3899
File(s)
Heidemann_TheEuropeanArtificialIntelligenceAct_2405_Whitepaper.pdf (4.98 MB)
Rights
Under Copyright
Language
English
Fraunhofer-Institut für Kognitive Systeme IKS  
Fraunhofer Group
Fraunhofer-Verbund IUK-Technologie  
Keyword(s)
  • artificial intelligence
  • AI
  • European Union
  • EU
  • ethics
  • ethical guideline
  • regulatory framework
  • regulation
  • risk
  • safety critical
  • safety standard
  • trustworthy artificial intelligence
  • AI act
  • artificial intelligence act