2020
Conference Paper
Title

Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection?

Abstract
Reliable information about the uncertainty of predictions from deep neural networks could greatly facilitate their utilization in safety-critical applications. Current approaches for uncertainty quantification usually focus on in-distribution data, where a high uncertainty should be assigned to incorrect predictions. In contrast, we focus on out-of-distribution data where a network cannot make correct predictions and therefore should always report high uncertainty. In this paper, we compare several state-of-the-art uncertainty quantification methods for deep neural networks regarding their ability to detect novel inputs. We evaluate them on image classification tasks with regard to metrics reflecting requirements important for safety-critical applications. Our results show that a portion of out-of-distribution inputs can be detected with reasonable loss in overall accuracy. However, current uncertainty quantification approaches alone are not sufficient for an overall reliable out-of-distribution detection.
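The paper benchmarks several existing uncertainty quantification methods; purely as an illustration of the general setup, the sketch below shows one common way predictive uncertainty is turned into an out-of-distribution detector: score each input by the entropy of an MC-dropout predictive distribution and reject inputs whose score exceeds a threshold. The model, the number of forward passes, and the threshold are placeholder assumptions, not the paper's experimental configuration.

```python
import torch
import torch.nn.functional as F


def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of a categorical predictive distribution, one value per input."""
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)


@torch.no_grad()
def mc_dropout_probs(model: torch.nn.Module, x: torch.Tensor, passes: int = 20) -> torch.Tensor:
    """Mean softmax over several stochastic forward passes (MC dropout).

    Only the dropout layers are switched to training mode, so each pass samples a
    slightly different sub-network while batch-norm statistics stay frozen.
    """
    model.eval()
    dropout_types = (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)
    for m in model.modules():
        if isinstance(m, dropout_types):
            m.train()  # keep dropout sampling active at inference time
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(passes)])
    model.eval()  # restore a fully deterministic model
    return probs.mean(dim=0)


def detect_ood(model: torch.nn.Module, x: torch.Tensor, threshold: float) -> torch.Tensor:
    """Flag inputs whose predictive entropy exceeds the threshold as out-of-distribution."""
    scores = predictive_entropy(mc_dropout_probs(model, x))
    return scores > threshold
```

Comparing such flags against ground-truth in-/out-of-distribution labels (for example with the area under the ROC curve) then quantifies how well the uncertainty separates the two; the rejection threshold is typically chosen on held-out in-distribution data.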
Author(s)
Schwaiger, Adrian  
Fraunhofer-Institut für Kognitive Systeme IKS  
Sinhamahapatra, Poulami  
Fraunhofer-Institut für Kognitive Systeme IKS  
Gansloser, Jens  
Fraunhofer-Institut für Kognitive Systeme IKS  
Roscher, Karsten  
Fraunhofer-Institut für Kognitive Systeme IKS  
Mainwork
Workshop on Artificial Intelligence Safety, AISafety 2020. Proceedings. Online resource  
Project(s)
ADA-Center
Funder
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie StMWi  
Conference
Workshop on Artificial Intelligence Safety (AISafety 2020), 2021
International Joint Conference on Artificial Intelligence (IJCAI), 29th, Online, 2020
Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2020
Open Access
DOI
10.24406/publica-fhg-408442
File(s)
N-596770.pdf (835.45 KB)
Rights
CC BY 4.0: Creative Commons Attribution
Language
English
Institute(s)
Fraunhofer-Institut für Kognitive Systeme IKS
Keyword(s)
  • deep learning
  • artificial intelligence
  • AI
  • AI safety
  • Safe AI
  • Out-of-Distribution Detection
  • novelty detection
  • uncertainty quantification
  • uncertainty estimation
  • perception
  • deep neural networks
  • Safe Intelligence