Fraunhofer-Gesellschaft

Publica


The CLEF 2011 photo annotation and concept-based retrieval tasks

 
Authors: Nowak, Stefanie; Nagel, Karolin; Liebetrau, Judith

Fulltext (PDF)

Petras, V.:
CLEF 2011 Working Notes. Online resource. Conference on Multilingual and Multimodal Information Access Evaluation, Amsterdam, The Netherlands, September 19-22, 2011
Amsterdam, 2011 (CEUR Workshop Proceedings 1177)
pp. 39-64
Conference on Multilingual and Multimodal Information Access Evaluation (CLEF) <2011, Amsterdam>
Bundesministerium für Wirtschaft und Technologie BMWi
1MQ07017; THESEUS
English
Conference Paper, Electronic Publication
Fraunhofer IDMT
photo annotation; assessment; CLEF

Abstract
The ImageCLEF 2011 Photo Annotation and Concept-based Retrieval Tasks pose the challenge of automatically annotating Flickr images with 99 visual concepts and of retrieving images based on query topics. Participants were provided with a training set of 8,000 images including annotations, EXIF data, and Flickr user tags. The annotation challenge was performed on 10,000 images, while the retrieval challenge considered 200,000 images. Both tasks differentiate among approaches that consider solely visual information, approaches that rely only on textual information in the form of image metadata and user tags, and multi-modal approaches that combine both information sources. The relevance assessments were acquired with a crowdsourcing approach, and the evaluation followed two paradigms: per concept and per example. In total, 18 research teams participated in the annotation challenge with 79 submissions. The concept-based retrieval task was tackled by 4 teams that submitted a total of 31 runs. Summarizing the results, the annotation task was solved best with a MiAP of 0.443 in the multimodal configuration, with a MiAP of 0.388 in the visual configuration, and with a MiAP of 0.346 in the textual configuration. The concept-based retrieval task was solved best with a MAP of 0.164 using multimodal information and manual intervention in the query formulation. The best completely automated approach achieved a MAP of 0.085 using solely textual information. The results indicate that while the annotation task shows promising results, the concept-based retrieval task is much harder to solve, especially for specific information needs.
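The MiAP and MAP figures quoted above are means over per-concept (annotation) or per-topic (retrieval) average precision. As a minimal sketch of how such a score is computed, the following shows non-interpolated average precision over a ranked result list; the function names and toy data are illustrative assumptions, not the actual ImageCLEF evaluation code or ground truth:

```python
def average_precision(ranked, relevant):
    """Non-interpolated AP: average of precision values at each rank
    where a relevant item is retrieved, normalized by the number of
    relevant items (unretrieved relevant items contribute zero)."""
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0


def mean_average_precision(runs):
    """MAP: mean of the per-query AP values.
    `runs` is a list of (ranked_results, relevant_set) pairs,
    one pair per query topic (or per concept for a MiAP-style score)."""
    aps = [average_precision(ranked, relevant) for ranked, relevant in runs]
    return sum(aps) / len(aps)


# Toy example: one topic with relevant items ranked 1st and 3rd.
ap = average_precision(["a", "b", "c", "d"], {"a", "c"})  # (1/1 + 2/3) / 2
```

Averaging per topic (MAP) rewards runs that rank relevant images early for every query, whereas averaging per concept (as in MiAP for the annotation task) weights each of the 99 concepts equally regardless of how many images carry it.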

URL: http://publica.fraunhofer.de/documents/N-367229.html