2013
Conference Paper
Title
Depth-adaptive supervoxels for RGB-D video segmentation
Abstract
In this paper we present a method for automatic segmentation of RGB-D video streams provided by combined colour and depth sensors such as the Microsoft Kinect. To this end, we fuse position and normal information from the depth sensor with colour information to compute temporally stable, depth-adaptive superpixels, which we then link into a graph of strand-like spatiotemporal, depth-adaptive supervoxels. We apply spectral graph clustering to this supervoxel graph to partition it into spatiotemporal segments. Experimental evaluation on several challenging scenarios demonstrates that our two-layer RGB-D video segmentation technique produces excellent video segmentation results.
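The final step, spectral graph clustering on the supervoxel graph, can be illustrated with a minimal sketch. This is not the paper's implementation: the affinity weights, the tiny two-clique toy graph, and the restriction to two segments are illustrative assumptions. For a bisection into two segments, spectral clustering reduces to thresholding the Fiedler vector (the eigenvector of the second-smallest eigenvalue) of the normalized graph Laplacian:

```python
import numpy as np

def spectral_bisect(W):
    """Split the nodes of a symmetric affinity matrix W into two segments.

    Sketch of spectral graph bisection; W would hold pairwise supervoxel
    similarities in a real pipeline (hypothetical weights here).
    """
    d = W.sum(axis=1)                       # node degrees
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(W)) - d_is[:, None] * W * d_is[None, :]
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # second-smallest eigenvector
    return (fiedler > 0).astype(int)        # sign gives the two segments

# Toy graph: two tightly connected triples joined by one weak edge,
# standing in for two groups of supervoxels belonging to different objects.
W = np.array([
    [0.0, 1.0, 1.0, 0.01, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0,  0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0,  0.0, 0.0],
    [0.01, 0.0, 0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0,  0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0,  1.0, 0.0],
])
labels = spectral_bisect(W)
```

The weak 0.01 edge makes the graph connected while keeping the cut between the two triples cheap, so the Fiedler vector changes sign exactly at that cut; partitioning into more than two segments would instead embed each node with the first k eigenvectors and cluster the rows.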