Year
2021
Document Type
Master Thesis
Title
Concept Learning for Image Classification in the Low Data Regime
Abstract
With the recent popularity of neural networks, the question inevitably arises of how these powerful models can be safely deployed in practical applications. Many of these applications are safety-critical, so erroneous decisions can have catastrophic consequences, and deployment therefore requires special precautions. Explainable AI tries to mitigate this issue by providing understandable reasoning for how a given prediction was reached internally. Human-understandable explanations make it possible to debug a model and to inspect its internal reasoning. This work analyzes a specific architecture in depth, namely Concept Bottleneck models. Their intrinsic explainability is achieved by an added bottleneck layer whose neurons correspond one-to-one to predefined concepts. However, training such an architecture in practice can be challenging since it requires a densely annotated dataset: each sample needs to be annotated with a class label and a predefined set of concepts describing the image. The latter, in particular, is exceptionally costly in practice. This work introduces a semi-supervised learning schedule that substantially lowers the amount of annotation needed to train Concept Bottleneck models. We show that, with the proposed schedule, it is sufficient to annotate only the class labels and about 30% of the concepts to match the baseline trained on the fully annotated dataset. Furthermore, we analyze another essential precaution for the application of Concept Bottleneck models in practice, namely out-of-distribution detection. Knowing whether an input sample comes from the same distribution as the training data is essential, since neural networks have been shown to make highly confident predictions on the incorrect class for out-of-distribution samples. We propose a new variant of confidence thresholding, targeted explicitly at Concept Bottleneck models: instead of using the confidence of the class prediction, it operates on the concept confidence scores. We show that with this technique we outperform traditional softmax thresholding by a large margin. Lastly, we evaluate the limitations of the Concept Bottleneck architecture in its original form to guide future research on this topic. In particular, we show that these networks are often sensitive to correlations between concepts in the dataset. As a result, they do not always provide faithful information about their internal reasoning, since they can infer the presence of a concept from its correlation with others.
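To make the described architecture concrete, the following is a minimal sketch of a Concept Bottleneck model. The choice of PyTorch and all names (ConceptBottleneckModel, concept_head) are illustrative assumptions, not the thesis code; the sketch only shows the one-to-one mapping between bottleneck neurons and predefined concepts, with the class prediction computed from the concepts alone.

    import torch
    import torch.nn as nn

    class ConceptBottleneckModel(nn.Module):
        """Backbone -> concept bottleneck -> linear classifier."""

        def __init__(self, backbone, n_features, n_concepts, n_classes):
            super().__init__()
            self.backbone = backbone                       # any image encoder
            self.concept_head = nn.Linear(n_features, n_concepts)
            self.classifier = nn.Linear(n_concepts, n_classes)

        def forward(self, x):
            features = self.backbone(x)
            # Each bottleneck unit corresponds to exactly one predefined
            # concept; a sigmoid turns its logit into a confidence score.
            concepts = torch.sigmoid(self.concept_head(features))
            # The class prediction is computed from the concepts alone,
            # which is what makes the model intrinsically explainable.
            class_logits = self.classifier(concepts)
            return concepts, class_logits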
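The concept-based confidence thresholding for out-of-distribution detection can be sketched in the same spirit, continuing from the model above. The aggregation rule below (mean distance from the 0.5 decision boundary) is an assumption for illustration only, as the abstract does not specify how the per-concept scores are combined.

    def is_out_of_distribution(concepts, threshold):
        """Flag samples whose concept predictions are collectively uncertain.

        `concepts` holds per-concept sigmoid scores in [0, 1], as returned
        by the model sketch above; a score near 0.5 means the model is
        unsure whether that concept is present.
        """
        # Rescale the distance from the 0.5 boundary to a [0, 1] confidence.
        confidence = (concepts - 0.5).abs() * 2
        # Aggregate over concepts (the mean is one plausible choice) and
        # compare against a threshold calibrated on in-distribution data.
        return confidence.mean(dim=1) < threshold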
Thesis Note
München, TU, Master Thesis, 2021
Author(s)
Advisor(s)
Publishing Place
München