
How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation

Authors: Nowak, S.; Rüger, S.

Published in:

Association for Computing Machinery -ACM-, Special Interest Group on Multimedia -SIGMM-:
ACM SIGMM International Conference on Multimedia Information Retrieval, MIR 2010. Proceedings. CD-ROM : March 29 - 31, 2010, Philadelphia, PA, USA
New York: ACM, 2010
ISBN: 978-1-605-58815-5
pp.557-566
International Conference on Multimedia Information Retrieval (MIR) <11, 2010, Philadelphia/Pa.>
English
Conference Paper
Fraunhofer IDMT

Abstract
The creation of gold standard datasets is a costly business. Ideally, more than one judgment per document is obtained to ensure high annotation quality. In this context, we explore how much annotations from experts differ from each other, how different sets of annotations influence the ranking of systems, and whether these annotations can be obtained with a crowdsourcing approach. This study is applied to annotations of images with multiple concepts. A subset of the images employed in the latest ImageCLEF Photo Annotation competition was manually annotated by expert annotators and by non-experts through Mechanical Turk. The inter-annotator agreement is computed at an image-based and a concept-based level using majority vote, accuracy and kappa statistics. Further, the Kendall τ and Kolmogorov-Smirnov correlation tests are used to compare the rankings of systems for different ground truths and different evaluation measures in a benchmark scenario. Results show that while the agreement between experts and non-experts varies depending on the measure used, its influence on the ranked lists of the systems is rather small. To sum up, the majority vote, applied to generate one annotation set out of several opinions, is able to filter noisy judgments of non-experts to some extent. The resulting annotation set is of comparable quality to the annotations of experts.
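
For readers who want a concrete picture of the measures named in the abstract, the following is a minimal illustrative sketch, not the authors' implementation: it aggregates several binary judgments per image with a majority vote, computes accuracy and Cohen's kappa against an expert annotation, and compares two system score lists under different ground truths with Kendall's τ. All variable names and the toy data are hypothetical.

```python
# Sketch: majority-vote aggregation, accuracy, Cohen's kappa, and Kendall's tau.
import numpy as np
from scipy.stats import kendalltau

def majority_vote(judgments: np.ndarray) -> np.ndarray:
    """Collapse a (n_annotators, n_items) 0/1 matrix to one label per item."""
    return (judgments.mean(axis=0) >= 0.5).astype(int)

def cohen_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Chance-corrected agreement between two binary annotation vectors."""
    p_o = np.mean(a == b)                          # observed agreement (accuracy)
    p_a1, p_b1 = a.mean(), b.mean()                # marginal label frequencies
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)    # agreement expected by chance
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Toy example: three non-expert annotators labelling eight images for one
# concept, compared against a single expert annotation (hypothetical data).
non_experts = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                        [1, 0, 1, 0, 0, 1, 1, 0],
                        [1, 1, 1, 1, 0, 0, 1, 0]])
expert = np.array([1, 0, 1, 1, 0, 0, 1, 0])

aggregated = majority_vote(non_experts)
print("accuracy:", np.mean(aggregated == expert))
print("kappa:   ", cohen_kappa(aggregated, expert))

# Comparing two system score lists (e.g. evaluated against expert vs. crowd
# ground truth) via Kendall's tau rank correlation.
scores_expert_gt = [0.71, 0.64, 0.58, 0.49, 0.40]
scores_crowd_gt  = [0.69, 0.66, 0.55, 0.50, 0.38]
tau, p_value = kendalltau(scores_expert_gt, scores_crowd_gt)
print("Kendall tau:", tau)
```

A τ close to 1 would indicate that the two ground truths rank the systems almost identically, which is the kind of effect the paper reports when comparing expert and crowdsourced annotations.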

URL: http://publica.fraunhofer.de/documents/N-151406.html