Title: Advancing Knowledge-Enhanced Conversational Systems Leveraging Language Models
Author: Rony, Md Rashad Al Hasan
Contributors: Lehmann, Jens; Wrobel, Stefan
Type: doctoral thesis
Language: en
Rights: Under Copyright
Dates: 2023-11-28; 2023-09-18
URN: urn:nbn:de:hbz:5-72215
Handle: https://publica.fraunhofer.de/handle/publica/457311
DOI: https://doi.org/10.24406/publica-2215
DDC: 000 Computer science, information & general works :: 000 Computer science, knowledge & systems :: 004 Data processing; computer science

Abstract:

Large language models empowering recent conversational systems such as Alexa and Siri require external knowledge to generate informative and accurate dialogues. This knowledge may be provided in structured or unstructured form, such as knowledge graphs, documents, and databases. Language models typically face several issues when attempting to incorporate knowledge for conversational question answering: 1) they are unable to capture the relationships between facts in structured knowledge, 2) they lack the capability to handle dynamic knowledge in a multi-domain conversational setting, 3) because of the scarcity of unsupervised approaches to question answering over knowledge graphs (KGQA), systems often require large amounts of training data, and 4) the complexities and dependencies involved in the KGQA process make it difficult to generate a formal query for question answering. All of these issues result in uninformative and incorrect answers. Furthermore, an evaluation metric that can capture various aspects of a system's response, such as semantic, syntactic, and grammatical acceptability, is necessary to ensure the quality of such conversational question answering systems.

Addressing these shortcomings, this thesis proposes techniques for incorporating structured and unstructured knowledge into pre-trained language models to improve conversational question answering systems. First, we propose a novel task-oriented dialogue system that introduces a structure-aware knowledge embedding and a knowledge graph-weighted attention masking strategy to help a language model select relevant facts from a KG for informative dialogue generation. Experimental results on benchmark datasets demonstrate significant improvements over previous baselines. Next, we introduce an unsupervised KGQA system that leverages several pre-trained language models to improve the essential components of KGQA (i.e., entity and relation linking), along with a novel tree-based algorithm for extracting answer entities from a KG. The proposed techniques relax the need for training data to improve KGQA performance. Then, we introduce a generative system that combines the benefits of end-to-end and modular systems, leveraging a GPT-2 language model to learn graph-specific information (i.e., entities and relations) in its parameters and to generate SPARQL queries for extracting answer entities from a KG. The system encodes linguistic features of a question to understand complex question patterns and generate accurate SPARQL queries. Afterward, we present a system demonstrator for question answering over unstructured documents about climate change, in which pre-trained language models index unstructured text documents into a dense vector space for document retrieval and question answering. Finally, we propose an automatic evaluation metric that incorporates several core aspects of natural language understanding (language competence, and syntactic and semantic variation). A comprehensive evaluation demonstrates the effectiveness of the proposed metric over state-of-the-art approaches.
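As an aside to the first contribution above, the following is a minimal PyTorch sketch of one plausible reading of a knowledge graph-weighted attention masking strategy, not the thesis's actual implementation. The function name kg_weighted_attention and the kg_weights relevance tensor are illustrative assumptions.

import torch
import torch.nn.functional as F

def kg_weighted_attention(q, k, v, kg_weights):
    # q, k, v: (batch, seq, dim) query/key/value projections.
    # kg_weights: (batch, seq, seq) relevance scores in (0, 1], higher for
    # token pairs connected through relevant KG facts (illustrative input).
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    # Adding log-weights acts as a soft attention mask: positions with
    # near-zero KG relevance receive a large negative bias before softmax.
    scores = scores + torch.log(kg_weights.clamp(min=1e-9))
    attn = F.softmax(scores, dim=-1)
    return attn @ v

# Toy usage with random tensors:
q = k = v = torch.randn(1, 4, 8)
w = torch.rand(1, 4, 4)
out = kg_weighted_attention(q, k, v, w)  # shape (1, 4, 8)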
Overall, our contributions demonstrate that effectively incorporating external knowledge into a language model significantly improves the performance of conversational question answering. We have made all resources and code used in the proposed systems publicly available.
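The dense retrieval step of the climate-change demonstrator can be sketched similarly. The snippet below is a minimal stand-in that assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint; the record does not specify the encoder or corpus actually used, and the two documents are hypothetical.

from sentence_transformers import SentenceTransformer, util

# Hypothetical two-document corpus standing in for the climate-change collection.
documents = [
    "Global mean sea level rose by roughly 20 cm over the past century.",
    "Atmospheric CO2 concentrations have risen sharply since the industrial era.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "How much has sea level risen?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine-similarity search over the dense index; the top passage would then
# be handed to a reader model for answer extraction.
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=1)
print(documents[hits[0][0]["corpus_id"]])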