February 12, 2026
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title
Efficient Segment Anything with Depth-Aware Fusion and Limited Training Data
Title Supplement
Published on arXiv
Abstract
Segment Anything Models (SAM) achieve impressive universal segmentation performance but require massive datasets (e.g., 11M images) and rely solely on RGB inputs. Recent efficient variants reduce computation but still depend on large-scale training. We propose a lightweight RGB-D fusion framework that augments EfficientViT-SAM with monocular depth priors. Depth maps are generated with a pretrained estimator and fused with RGB features at a mid-level stage through a dedicated depth encoder. Trained on only 11.2k samples (less than 0.1% of SA-1B), our method achieves higher accuracy than EfficientViT-SAM, showing that depth cues provide strong geometric priors for segmentation. Our results demonstrate that such priors enable data-efficient and lightweight segmentation suitable for resource-limited scenarios.
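The abstract does not fix an implementation, so the following PyTorch sketch is only a rough illustration of what mid-level RGB-D fusion could look like. All names (DepthEncoder, MidLevelFusion), the feature width, and the downsampling factor are assumptions, not the authors' code: the rgb_feat tensor stands in for EfficientViT-SAM image-encoder features, and depth for a pretrained monocular estimator's prediction.

import math
import torch
import torch.nn as nn

class DepthEncoder(nn.Module):
    # Hypothetical lightweight depth encoder: strided 3x3 convs that map a
    # 1-channel depth map to the spatial size and width of the RGB features.
    def __init__(self, feat_dim=256, stride=16):
        super().__init__()
        layers, ch = [], 1
        for _ in range(int(math.log2(stride))):  # each conv halves H and W
            nxt = max(ch * 2, 32)
            layers += [nn.Conv2d(ch, nxt, 3, stride=2, padding=1), nn.GELU()]
            ch = nxt
        layers.append(nn.Conv2d(ch, feat_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, depth):        # depth: (B, 1, H, W)
        return self.net(depth)       # -> (B, feat_dim, H/stride, W/stride)

class MidLevelFusion(nn.Module):
    # Hypothetical mid-level fusion: concatenate depth and RGB features
    # along channels, then project back to the RGB width with a 1x1 conv.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.proj = nn.Conv2d(2 * feat_dim, feat_dim, 1)

    def forward(self, rgb_feat, depth_feat):
        return self.proj(torch.cat([rgb_feat, depth_feat], dim=1))

# Toy forward pass: random tensors stand in for EfficientViT-SAM image
# features and a monocular depth prediction at the input resolution.
rgb_feat = torch.randn(1, 256, 64, 64)
depth = torch.randn(1, 1, 1024, 1024)
fused = MidLevelFusion(256)(rgb_feat, DepthEncoder(256)(depth))
print(fused.shape)  # torch.Size([1, 256, 64, 64])

Concatenation followed by a 1x1 projection is only one possible fusion choice; gated addition or cross-attention would slot in at the same point without changing the surrounding pipeline.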
Author(s)