
Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection?

Authors: Schwaiger, Adrian; Sinhamahapatra, Poulami; Gansloser, Jens; Roscher, Karsten

Fulltext: urn:nbn:de:0011-n-5967703 (835 KByte PDF)
MD5 fingerprint: 07070b6dc2a2d06ff1ea33be7003d7b4
License: CC BY
Created on: 24.7.2020


Espinoza, H. (Ed.):
Workshop on Artificial Intelligence Safety, AISafety 2020. Proceedings. Online resource: Co-located with the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Yokohama, Japan, January 2021
Published online, 2020 (CEUR Workshop Proceedings 2640)
ISSN: 1613-0073 (E-ISSN)
Paper 18, 8 pp.
Workshop on Artificial Intelligence Safety (AISafety 2020) <2021, Online>
International Joint Conference on Artificial Intelligence (IJCAI) <29, 2020, Online>
Pacific Rim International Conference on Artificial Intelligence (PRICAI) <17, 2020, Online>
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi)
BAYERN DIGITAL II; 20-3410-2-9-8; ADA-Center
ADA Lovelace Center for Analytics, Data and Applications
English
Conference paper, electronic publication
Fraunhofer IKS
deep learning; artificial intelligence; AI; AI safety; Safe AI; Out-of-Distribution Detection; novelty detection; uncertainty quantification; uncertainty estimation; perception; deep neural networks; Safe Intelligence

Abstract
Reliable information about the uncertainty of predictions from deep neural networks could greatly facilitate their use in safety-critical applications. Current approaches to uncertainty quantification usually focus on in-distribution data, where high uncertainty should be assigned to incorrect predictions. In contrast, we focus on out-of-distribution data, where a network cannot make correct predictions and should therefore always report high uncertainty. In this paper, we compare several state-of-the-art uncertainty quantification methods for deep neural networks regarding their ability to detect novel inputs. We evaluate them on image classification tasks using metrics that reflect requirements important for safety-critical applications. Our results show that a portion of out-of-distribution inputs can be detected at a reasonable loss in overall accuracy. However, current uncertainty quantification approaches alone are not sufficient for overall reliable out-of-distribution detection.
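
For illustration only (not taken from the paper itself): one common instance of the kind of uncertainty-based out-of-distribution detection the abstract describes is to threshold the predictive entropy of a Monte Carlo dropout ensemble. The sketch below assumes a PyTorch classifier that contains dropout layers; the helper names, `n_samples`, and `threshold` are hypothetical and would need to be tuned on held-out data.

```python
# Illustrative sketch, not the authors' code: flag out-of-distribution
# inputs by thresholding predictive entropy under Monte Carlo dropout.
import torch
import torch.nn.functional as F

def predictive_entropy(model: torch.nn.Module, x: torch.Tensor,
                       n_samples: int = 20) -> torch.Tensor:
    """Average the softmax output over several stochastic forward passes
    (dropout kept active), then return the entropy of the mean prediction."""
    model.train()  # keeps dropout layers sampling at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def flag_ood(model: torch.nn.Module, x: torch.Tensor,
             threshold: float = 1.0) -> torch.Tensor:
    """Report an input as out-of-distribution when its predictive entropy
    exceeds the threshold; 1.0 is a placeholder, not a recommended value."""
    return predictive_entropy(model, x) > threshold
```

As the abstract states, a detector of this kind catches only a portion of out-of-distribution inputs; the paper's conclusion is that no single uncertainty-based approach is sufficient on its own.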

URL: http://publica.fraunhofer.de/dokumente/N-596770.html