2023
Conference Paper
Title
Category Differences Matter: A Broad Analysis of Inter-Category Error in Semantic Segmentation
Abstract
In current evaluation schemes for semantic segmentation, metrics are computed as if every misprediction were equally wrong, paying little attention to how false predictions relate to the object category of the ground truth. In this work, we propose the Critical Error Rate (CER) as a supplement to current evaluation metrics; it measures the rate of predictions that fall outside the category of the ground-truth class. We conduct a series of experiments evaluating the behavior of different network architectures under various evaluation setups, including domain shift, the introduction of novel classes, and a mixture of these, and from these experiments we identify essential criteria for network generalization. Furthermore, we ablate the impact of using different class taxonomies when evaluating out-of-category error.
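The out-of-category error the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class-to-category mapping and the function name are hypothetical, and the metric here is simply the fraction of mispredicted pixels whose predicted class falls in a different coarse category than the ground-truth class.

```python
import numpy as np

# Hypothetical coarse taxonomy mapping fine class IDs to categories
# (the grouping is illustrative, not the paper's taxonomy).
CLASS_TO_CATEGORY = {
    0: "flat",     # e.g. road
    1: "flat",     # e.g. sidewalk
    2: "vehicle",  # e.g. car
    3: "vehicle",  # e.g. truck
    4: "human",    # e.g. person
}

def critical_error_rate(pred, gt):
    """Fraction of mispredicted pixels whose predicted class lies
    outside the category of the ground-truth class."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    wrong = pred != gt                      # mispredicted pixels
    if not wrong.any():
        return 0.0
    to_cat = np.vectorize(CLASS_TO_CATEGORY.get)
    # Among the wrong pixels, count those crossing a category boundary.
    cross_category = to_cat(pred[wrong]) != to_cat(gt[wrong])
    return float(cross_category.mean())

# Example: two errors, one within-category (truck->car), one
# cross-category (road->person), so CER = 0.5.
pred = np.array([0, 2, 4, 1])
gt = np.array([0, 3, 0, 1])
print(critical_error_rate(pred, gt))  # -> 0.5
```

Under this reading, a within-category confusion (e.g. car vs. truck) does not count as a critical error, which is exactly the distinction standard per-class metrics ignore.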