Spatially-adaptive filtering in a model-based machine vision approach to robust workpiece tracking
The introduction of model-based machine vision into the feedback loop of a robot manipulator usually implies that edge elements extracted from the current digitized video frame are matched to segments of a workpiece model which has been projected into the image plane according to the current estimate of the relative pose between the recording video camera and the workpiece to be tracked. In the case in which two (nearly) parallel projected model segments are close to each other in the image plane, the association between edge elements and model segments can become ambiguous. Since mismatches are likely to distort the state-variable update step of a Kalman-filter-based tracking process, suboptimal state estimates may result which can potentially jeopardize the entire tracking process. In order to avoid such problems, spatially adjacent projected model segments in the image plane have been suppressed by an augmented version of a hidden-line algorithm and thereby excluded from the edge element association and matching step - see, e.g., Tonko et al., ICRA'97, pp. 3166-3171. Here, we study an alternative approach towards increasing the robustness of a machine-vision-based tracker, exploiting the fact that a gradient filter can be adapted to the spatial characteristics of the local gray-value variation: the filter mask is compressed in the gradient direction and enlarged perpendicular to it, i.e. along the edge. We investigate the effect which such a spatially adaptive gradient filter for the extraction of edge elements from image frames exerts upon the association between edge elements and model segments, upon the subsequent fitting process in the update step, and thereby upon the robustness of the entire model-based tracking approach.
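The spatially adaptive gradient filter described above can be illustrated by a minimal sketch (not the authors' implementation): an anisotropic Gaussian-derivative mask whose standard deviation is small across the edge (the gradient direction) and large along the edge. The function name and parameter values below are hypothetical choices for illustration.

```python
import numpy as np

def oriented_gradient_kernel(theta, sigma_across=1.0, sigma_along=3.0, size=15):
    """Anisotropic Gaussian-derivative mask oriented at angle theta.

    The mask is compressed across the edge (gradient direction,
    sigma_across) and elongated along the edge (sigma_along),
    as suggested by the adaptive-filter idea in the abstract.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates: u points along the gradient, v along the edge.
    u = np.cos(theta) * x + np.sin(theta) * y
    v = -np.sin(theta) * x + np.cos(theta) * y
    g = np.exp(-0.5 * ((u / sigma_across) ** 2 + (v / sigma_along) ** 2))
    # Differentiating the Gaussian along u yields an edge-sensitive mask.
    kernel = -u / sigma_across ** 2 * g
    # Normalize so responses are comparable across orientations.
    return kernel / np.abs(kernel).sum()
```

Because the mask integrates gray values along the edge while differentiating across it, nearby parallel edges perturb the response less than with an isotropic mask, which is the robustness effect studied here.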