2014
Conference Paper
Title
On maximum geometric finger-tip recognition distance using depth sensors
Abstract
Depth sensor data is commonly used as the basis for Natural User Interfaces (NUI). The recent availability of different camera systems at affordable prices has caused a significant uptake in the research community, e.g., for building hand-pose or gesture-based controls in various scenarios and with different algorithms. The limited resolution and noise of the utilized cameras naturally constrain the distance between camera and user at which a meaningful interaction can still be designed. We therefore conducted extensive accuracy experiments to explore the maximum distance that allows for recognizing the finger-tips of an average-sized hand using three popular depth cameras (SwissRanger SR4000, Microsoft Kinect for Windows, and the Alpha Development Kit of the Kinect for Windows 2), two geometric algorithms, and a manual image analysis. In our experiment, the palm faces the sensor with all five fingers extended and is moved at distances of 0.5 m to 3.5 m from the sensor. Quantitative data is collected on the number of finger-tips recognized in the binary hand-outline image for each sensor, using the two algorithms. For qualitative analysis, samples of the hand outline are also collected. The quantitative results proved inconclusive due to false positives and negatives caused by noise. In turn, our qualitative analysis, achieved by inspecting the hand-outline images manually, provides a conclusive understanding of the depth data quality. We find that recognition works reliably up to 1.5 m (SR4000, Kinect) and 2.4 m (Kinect 2). These insights are generally applicable for designing NUIs that rely on depth sensor data.
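The abstract does not specify which two geometric algorithms were evaluated. As a hedged illustration of the general class of technique, the following Python/OpenCV sketch counts fingertip candidates in a binary hand-outline image using convex-hull and convexity-defect geometry, a common geometric approach to this problem. The function name, thresholds, and the choice of OpenCV are assumptions for illustration, not the paper's actual method.

```python
import cv2
import numpy as np

def count_fingertips(mask: np.ndarray) -> int:
    """Count fingertip candidates in a binary hand-outline mask.

    Hypothetical sketch: convex-hull / convexity-defect geometry,
    not necessarily either algorithm evaluated in the paper.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)  # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    valleys = 0
    for start, end, far, depth in defects[:, 0]:
        p_start = hand[start][0].astype(float)
        p_end = hand[end][0].astype(float)
        p_far = hand[far][0].astype(float)
        a = np.linalg.norm(p_end - p_start)   # hull edge spanning the defect
        b = np.linalg.norm(p_far - p_start)   # defect point to span start
        c = np.linalg.norm(p_far - p_end)     # defect point to span end
        # Angle at the defect point (law of cosines); a sharp angle and a
        # deep defect indicate the valley between two extended fingers.
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-9))
        if angle < np.pi / 2 and depth > 2560:  # depth is fixed-point, 1/256 px
            valleys += 1
    # n valleys between extended fingers imply n + 1 fingertips.
    return valleys + 1 if valleys else 0
```

At the distances reported in the abstract, the relevant failure mode is that sensor noise erodes the hand outline, so the defect-depth and angle thresholds start admitting false valleys or merging real ones, which is consistent with the inconclusive quantitative counts the authors describe.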