Articulated atlas for segmentation of the skeleton from head & neck CT datasets
In this paper, a novel articulated atlas for the fully automated segmentation of the skeleton from head & neck CT datasets is presented. An individual atlas describing shape and appearance is created for each bone. Principal Component Analysis is used to learn the spatial relations between these atlases, resulting in a unified articulated atlas. Transformations are parameterized via the matrix exponential to enable the linear combinations required for learning. The adaptation to test images considers appearance, distance to bone structures, and the trained articulation space. For evaluation, an atlas built from 10 manually labeled training images was applied to 46 clinically acquired head & neck CT datasets. Visual inspection showed that the adaptation process succeeded in 74% of the cases. In a second experiment, leave-one-out validation was used to quantify the segmentation accuracy. The successfully adapted cases yielded an average volume overlap error of 30.67% and an average symmetric surface distance of 0.76 mm.
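The motivation for the matrix-exponential parameterization can be illustrated with a small sketch (not the authors' implementation): averaging transformation matrices entry-wise generally leaves the transformation group, whereas averaging in the log-domain and mapping back with the matrix exponential does not. The 2-D rotations below are illustrative stand-ins for the inter-bone transformations learned by the atlas.

```python
import numpy as np
from scipy.linalg import expm, logm

def rotation2d(theta):
    """2-D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Two example rigid transformations (hypothetical inter-bone poses)
A = rotation2d(0.2)
B = rotation2d(0.6)

# Naive entry-wise average: no longer a rotation (determinant shrinks below 1)
naive = 0.5 * (A + B)

# Log-domain average: take matrix logarithms, combine linearly,
# then map back with the matrix exponential. The result is again
# a valid rotation, here by the mean angle 0.4 rad.
log_mean = expm(0.5 * (logm(A) + logm(B)))
```

In this parameterization, linear operations such as the PCA used to build the articulation space can be applied to the log-domain coefficients while the reconstructed transformations remain valid.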