
8th challenge on question answering over linked data (QALD-8)

Usbeck, R.; Ngonga Ngomo, A.-C.; Conrads, F.; Röder, M.; Napolitano, G.


In: Choi, K.-S. (Ed.):
Joint Proceedings of the 4th Workshop on Semantic Deep Learning (SemDeep-4) and NLIWoD4: Natural Language Interfaces for the Web of Data (NLIWOD-4) and 9th Question Answering over Linked Data challenge (QALD-9), co-located with the 17th International Semantic Web Conference (ISWC 2018). Monterey, California, United States of America, October 8-9, 2018
Monterey, Calif.: CEUR, 2018 (CEUR Workshop Proceedings 2241)
URN: urn:nbn:de:0074-2241-6
ISSN: 1613-0073
Workshop on Semantic Deep Learning (SemDeep) <4, 2018, Monterey/Calif.>
Natural Language Interfaces for the Web of Data Workshop (NLIWOD) <4, 2018, Monterey/Calif.>
International Semantic Web Conference (ISWC) <17, 2018, Monterey/Calif.>
Conference paper, electronic publication
Fraunhofer IAIS

The QALD-8 challenge focused on the successful and long-running multilingual QA task. For the first time, participating teams were required to provide web services for their systems in order to take part in the challenge, which will in turn support comparable research in the future. We also changed the underlying evaluation platform to accommodate comparable experiments via web services, in contrast to the former XML/JSON file submissions. This raised the entrance requirements for participating teams but ensures long-term comparability of system performance and a fair, open challenge. In the future, we will further simplify the participation process and offer leaderboards prior to the actual challenge, so that participants can gauge their performance beforehand. Based on feedback from the authors, we will likely add new key performance indicators for a system's ability to recognize which questions it cannot answer, and we will take confidence scores for answers into account. Moreover, we will remove most of the curve-ball questions to restore the original character of the QALD challenge, which provides a clean and linguistically challenging benchmark.
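To illustrate the submission format change described above, the following is a minimal sketch of what a QA web service might return instead of a static XML/JSON file. The helper name `make_qald_response`, the variable name `uri`, and the exact JSON shape are assumptions modeled on the SPARQL-results-style answer documents used in earlier QALD editions, not the official challenge schema.

```python
import json


def make_qald_response(question, answer_uris):
    """Wrap answer URIs in a SPARQL-results-style JSON document.

    Hypothetical helper: the overall shape mimics the W3C SPARQL JSON
    results format often used for QALD answers; field names are assumed.
    """
    return {
        "question": question,
        "answers": {
            "head": {"vars": ["uri"]},
            "results": {
                "bindings": [
                    {"uri": {"type": "uri", "value": uri}}
                    for uri in answer_uris
                ]
            },
        },
    }


# A challenge web service would serve such a document over HTTP in
# response to each benchmark question sent by the evaluation platform.
resp = make_qald_response(
    "Who developed Skype?",
    ["http://dbpedia.org/resource/Skype_Technologies"],
)
print(json.dumps(resp, indent=2))
```

Serving this document from an HTTP endpoint, rather than uploading it as a file, is what lets the evaluation platform re-run experiments against live systems and keep results comparable over time.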