Title: Wasserstein Dropout
Authors: Sicking, Joachim; Akila, Maram; Pintz, Maximilian Alexander; Wirtz, Tim; Wrobel, Stefan; Fischer, Asja
Type: journal article
Dates: 2022-10-14; 2024
Handle: https://publica.fraunhofer.de/handle/publica/427657
DOI: 10.1007/s10994-022-06230-8
Language: English
Keywords: Safe Machine Learning; Regression Neural Networks; Uncertainty Estimation; Aleatoric Uncertainty; Dropout; Object Detection
DDC: 000 Computer science, information & general works :: 000 Computer science, knowledge & systems :: 006 Special computer methods

Abstract: Despite its importance for safe machine learning, uncertainty quantification for neural networks is far from solved. State-of-the-art approaches to estimating neural uncertainties are often hybrid, combining parametric models with explicit or implicit (dropout-based) ensembling. We take a different pathway and propose Wasserstein dropout, a novel, purely non-parametric approach to uncertainty quantification for regression tasks. Technically, it captures aleatoric uncertainty by means of dropout-based sub-network distributions, trained with a new objective that minimizes the Wasserstein distance between the label distribution and the model distribution. An extensive empirical analysis shows that Wasserstein dropout outperforms state-of-the-art methods in producing accurate and stable uncertainty estimates, both on vanilla test data and under distributional shift.
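
To make the abstract's idea concrete, below is a minimal PyTorch sketch of a Wasserstein-dropout-style training step. It is an illustration under stated assumptions, not the paper's exact derivation: we fit a Gaussian to the outputs of several dropout sub-networks and penalize the closed-form 2-Wasserstein distance to a proxy label distribution whose mean is the label y and whose standard deviation is crudely approximated by the absolute residual |mu - y| (since only one label per input is available). The network, hyperparameters, and the residual-based target spread are all assumptions for illustration.

```python
# Sketch of a Wasserstein-dropout-style objective (simplified surrogate,
# NOT the paper's exact estimator; see the assumptions in the lead-in).
import torch
import torch.nn as nn


class DropoutMLP(nn.Module):
    """Small regression MLP; the dropout layers induce sub-networks."""

    def __init__(self, in_dim=1, hidden=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


def wasserstein_dropout_loss(model, x, y, n_subnets=5):
    """Squared Gaussian 2-Wasserstein surrogate between the dropout
    sub-network output distribution and a proxy label distribution."""
    # n_subnets stochastic forward passes = n_subnets dropout sub-networks
    # (dropout stays active during training).
    preds = torch.stack([model(x) for _ in range(n_subnets)])  # (L, B, 1)
    mu = preds.mean(dim=0)                   # sub-network mean per input
    sigma = preds.std(dim=0, unbiased=True)  # sub-network spread per input
    # Proxy label distribution: mean y, std ~ |mu - y| (an assumption here,
    # standing in for the unknown per-point label noise).
    target_sigma = (mu - y).abs().detach()
    # Closed-form W2^2 between 1-D Gaussians: (mu1 - mu2)^2 + (s1 - s2)^2.
    return ((mu - y) ** 2 + (sigma - target_sigma) ** 2).mean()


# Usage: an ordinary training loop on toy data. The model is kept in train
# mode so dropout shapes the sub-network distribution that the loss matches
# to the (proxy) label distribution.
model = DropoutMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 1)
y = x.pow(3) + 0.3 * torch.randn(128, 1)  # noisy toy regression targets
for _ in range(100):
    opt.zero_grad()
    loss = wasserstein_dropout_loss(model, x, y)
    loss.backward()
    opt.step()
```

At test time, the same dropout sampling yields a predictive distribution: the mean of the sub-network outputs serves as the point estimate and their spread as the (aleatoric) uncertainty estimate, which is what the abstract's non-parametric claim refers to.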