Combination of time-of-flight depth and stereo using semiglobal optimization
A growing number of modern computer vision applications, such as object recognition, collision avoidance, and scene understanding, demand accurate and dense 3D representations of their environment. To improve existing procedures for 3D data acquisition, this paper proposes a novel method for combining a stereo and a Time-of-Flight camera system. By calibrating the two sensor systems to each other, valid measurements from the 2.5D Time-of-Flight sensor are converted into disparity guesses within the stereo system. These disparity guesses then constrain the correspondence search of the stereo matching algorithm. It is empirically shown that the proposed method effectively enhances the results of stereo vision, especially in structureless areas where stereo correspondence search fails. The method is evaluated on the camera system of the service robot Care-O-bot®3.
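The conversion from Time-of-Flight depth to stereo disparity guesses follows the standard pinhole relation d = f·B/Z for a rectified pair. A minimal sketch of this step, assuming illustrative focal-length and baseline values and NaN-based invalid-measurement handling (not the paper's actual implementation):

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert depth values (metres) to disparity guesses (pixels)
    via d = f * B / Z. Invalid (non-positive) depth measurements are
    mapped to NaN so they place no constraint on stereo matching."""
    depth = np.asarray(depth_m, dtype=np.float64)
    disparity = np.full_like(depth, np.nan)
    valid = depth > 0
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

# Illustrative parameters: 520 px focal length, 12 cm baseline.
guesses = depth_to_disparity([0.0, 1.0, 2.4], focal_px=520.0, baseline_m=0.12)
# guesses[0] is NaN (invalid depth); guesses[1] = 62.4 px; guesses[2] = 26.0 px
```

In a semiglobal matching setting, such guesses would restrict the disparity range searched per pixel rather than replace the matching cost.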