Considering Reliability of Deep Learning Function to Boost Data Suitability and Anomaly Detection
The increasing demand for Deep Neural Networks (DNNs) in safety-critical systems, such as autonomous vehicles, makes the suitability of training data increasingly important. First, we focus on how to extract the relevant data content for ensuring DNN reliability. We then identify error categories and propose mitigation measures with an emphasis on data suitability. Despite all efforts to boost data suitability, not all possible variations of a real application can be identified. Hence, we analyse the case of unknown out-of-distribution data, for which we propose complementing data suitability with online anomaly detection using FACER, which supervises the behaviour of the DNN.