2023
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title

Complexity, uncertainty and the Safety of ML

Title Supplement
Position Paper published on HAL science ouverte
Abstract
There is currently much debate about whether applications based on Machine Learning (ML) can be made demonstrably safe. We assert that our ability to argue the safety of ML-based functions depends on the complexity of the function's task and environment, on the observations (training and test data) used to develop the function, and on the complexity of the ML models themselves. Our inability to adequately address this complexity inevitably leads to uncertainties in the specification of the safety requirements, in the performance of the ML models, and in our assurance argument itself. By understanding each of these dimensions as a continuum, can we better judge what level of safety can be achieved for a particular ML-based function?
Author(s)
Burton, Simon  
Fraunhofer-Institut für Kognitive Systeme IKS  
Herd, Benjamin
Fraunhofer-Institut für Kognitive Systeme IKS  
Project(s)
ML4Safety  
Funder
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.  
Conference
International Conference on Computer Safety, Reliability and Security 2023  
DOI
10.24406/publica-1982
File(s)
Burton_ComplexityUncertaintyAndTheSafetyOfML_2309_SAFECOMP_PositionPaper_av.pdf (119.62 KB)
Rights
Under Copyright
Language
English
Institute
Fraunhofer-Institut für Kognitive Systeme IKS
Fraunhofer Group
Fraunhofer-Verbund IUK-Technologie  
Keyword(s)
  • artificial intelligence
  • AI
  • machine learning
  • ML
  • safety
  • safety assurance
  • uncertainty
  • complexity
