
From Black-box to White-box: Examining Confidence Calibration under different Conditions

Schwaiger, Franziska; Henne, Maximilian; Küppers, Fabian; Schmoeller Roza, Felippe; Roscher, Karsten; Haselhoff, Anselm

Full text: urn:nbn:de:0011-n-6336141 (3.1 MByte PDF)
MD5 Fingerprint: 5ed6ca9446dbb7c063512d879106c0da
License: CC BY
Created on: 2.4.2021

Espinoza, H.:
Workshop on Artificial Intelligence Safety, SafeAI 2021. Proceedings. Online resource: Co-located with the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), Virtual, February 8, 2021
Online im WWW, 2021 (CEUR Workshop Proceedings 2808)
Paper 13, 8 pp.
Workshop on Artificial Intelligence Safety (SafeAI) <2021, Online>
Conference on Artificial Intelligence (AAAI) <35, 2021, Online>
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie StMWi

Conference paper, electronic publication
Fraunhofer IKS
calibration; neural networks; object recognition; safety engineering; safety critical

Confidence calibration is a major concern when applying artificial neural networks in safety-critical applications. Since most research in this area has focused on classification in the past, confidence calibration in the scope of object detection has gained more attention only recently. Based on previous work, we study the miscalibration of object detection models with respect to image location and box scale. Our main contribution is to additionally consider the impact of box selection methods such as non-maximum suppression on calibration. We investigate the default intrinsic calibration of object detection models and how it is affected by these post-processing techniques. For this purpose, we distinguish between black-box calibration with non-maximum suppression and white-box calibration with raw network outputs. Our experiments reveal that post-processing strongly affects confidence calibration. We show that non-maximum suppression has the potential to degrade initially well-calibrated predictions, leading to overconfident and thus miscalibrated models.
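To make the notion of miscalibration in the abstract concrete, the following is a minimal sketch of the expected calibration error (ECE), a standard metric for the gap between a model's confidence and its empirical accuracy. The function name, binning scheme, and toy data are illustrative assumptions, not taken from the paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin.

    confidences: list of predicted confidences in [0, 1]
    correct:     list of 0/1 flags, 1 if the prediction was correct
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins (lo, hi]; the first bin also includes 0.0
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        # each bin contributes its |confidence - accuracy| gap,
        # weighted by the fraction of samples it holds
        ece += (len(idx) / n) * abs(conf - acc)
    return ece

# Toy example: five predictions at confidence 0.8, four of which are
# correct, so confidence matches accuracy and the ECE is (near) zero.
confs = [0.8, 0.8, 0.8, 0.8, 0.8]
hits = [1, 1, 1, 1, 0]
print(expected_calibration_error(confs, hits))
```

In the paper's terms, evaluating this metric on the scores after non-maximum suppression corresponds to the black-box setting, while evaluating it on the raw network outputs corresponds to the white-box setting.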