Authors: Haedecke, Elena Gina; Pintz, Maximilian Alexander
Published: 2022-09 (record made available 2022-10-07)
URL: https://publica.fraunhofer.de/handle/publica/427397
Title: Transparency and Reliability Assurance Methods for Safeguarding Deep Neural Networks - A Survey
Type: paper
Language: en
Keywords: Trustworthy AI; Deep Neural Networks; Safeguarding AI; Transparency; Robustness; Uncertainty Estimation; Survey

Abstract: In light of deep neural network applications emerging in diverse fields such as industry, healthcare, and finance, weaknesses and failures of these models may bear unacceptable risks. Methods are needed that enable developers to discover and mitigate such weaknesses in order to develop trustworthy Machine Learning (ML), especially in safety-critical application areas. This requires insight into the rapidly growing variety of methods for correcting different deficiencies. Unlike similar work that focuses on a single topic, we consider three areas of action directly associated with the development and evaluation of ML models: transparency, uncertainty estimation, and robustness. We provide an overview and comparative assessment of current approaches for building reliable and transparent models, targeted at ML developers.