Authors: Henninger, Sofia; Kellner, Maximilian; Rombach, Benedikt; Reiterer, Alexander
Date: 2024-09-09
Year: 2024
Handle: https://publica.fraunhofer.de/handle/publica/475027
DOI: 10.3390/jimaging10090220

Abstract: The utilization of robust, pre-trained foundation models enables simple adaptation to specific ongoing tasks. In particular, the recently developed Segment Anything Model (SAM) has demonstrated impressive results in the context of semantic segmentation. Recognizing that data collection is generally time-consuming and costly, this research aims to determine whether the use of these foundation models can reduce the need for training data. To assess the models' behavior under conditions of reduced training data, five test datasets for semantic segmentation are utilized. The study concentrates on traffic sign segmentation and analyzes the results in comparison to Mask R-CNN, the field's leading model for this task. The findings indicate that SAM does not surpass the leading model for this specific task, regardless of the quantity of training data. Nevertheless, a knowledge-distilled student architecture derived from SAM exhibits no reduction in accuracy when trained on data that have been reduced by 95%.

Language: en
Keywords: Semantic segmentation; Segment Anything Model; Mask R-CNN; Training data reduction; Traffic signs
Title: Reducing Training Data Using Pre-Trained Foundation Models
Type: journal article