2014
Conference Paper
Title
Supporting annotation of anatomical landmarks using automatic scale selection
Abstract
The effectiveness of appearance-based person models strongly depends on a sufficiently large number of high-quality training samples. Generating training data in the form of bounding boxes is already a time-consuming task; if more complex person models are used, such as part-based models or models suitable for human pose estimation, the labeling process becomes infeasible. In the context of pose estimation, motion capture is often used to generate ground-truth data. A major problem with this approach is that motion capture is usually performed in artificial environments with only a few persons. It is therefore difficult to train classifiers that can localize anatomical landmarks on a moving person. To solve this problem, we propose a semi-automatic workflow for generating annotations of anatomical landmarks, based on tracking and automatic scale selection. The contribution of the paper is twofold. First, different tracking methods are evaluated with respect to how well they follow anatomical structures on a moving person. Second, to determine the spatial extent of anatomical landmarks, some simple but effective scale-selection methods are proposed. The resulting person models are intended to provide a suitable basis for learning regression models for monocular pose estimation, as well as for training part-based models directly. Results of a comprehensive quantitative evaluation on the UMPM dataset are presented, and we also show qualitative results on two challenging YouTube sequences.
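The abstract does not specify which scale-selection methods the paper proposes. As an illustration only, the sketch below shows one classical approach to automatic scale selection: picking the scale whose scale-normalized Laplacian-of-Gaussian response is strongest at an annotated landmark position (Lindeberg-style). The function name `select_scale` and the candidate scale range are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def select_scale(image, y, x, sigmas=np.linspace(1.0, 12.0, 23)):
    """Return the sigma with the strongest scale-normalized
    Laplacian-of-Gaussian response at landmark (y, x).

    Hypothetical sketch: not the method from the paper, which only
    states that "simple but effective" scale selection is used.
    """
    responses = []
    for sigma in sigmas:
        # sigma**2 normalizes the LoG magnitude so responses are
        # comparable across scales (Lindeberg's scale normalization)
        log = (sigma ** 2) * gaussian_laplace(image.astype(float), sigma)
        responses.append(abs(log[y, x]))
    return sigmas[int(np.argmax(responses))]
```

For a Gaussian blob of standard deviation s, the normalized response at the blob center peaks at sigma ≈ s, so the selected scale directly estimates the spatial extent of a blob-like landmark.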
Open Access
Rights
Under Copyright
Language
English