Publication

Uncovering Inconsistencies and Contradictions in Financial Reports using Large Language Models

2023-12, Deußer, Tobias; Leonhard, David; Hillebrand, Lars Patrick; Berger, Armin; Khaled, Mohamed; Heiden, Sarah; Dilmaghani, Tim; Kliem, Bernd; Loitz, Rüdiger; Bauckhage, Christian; Sifa, Rafet

Correctly identifying and correcting contradictions and inconsistencies within financial reports is a fundamental component of the audit process. To streamline and automate this critical task, we introduce a novel approach that combines large language models with an embedding-based paragraph clustering methodology. We assess our approach on three distinct datasets, two annotated and one unannotated, all within a zero-shot framework. Our findings are highly promising: the approach improves both the effectiveness and the efficiency of the auditing process, ultimately reducing the time required for a thorough and reliable financial report audit.
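
As a rough sketch of the clustering step the abstract describes: the snippet below embeds paragraphs, groups them by semantic similarity, and emits candidate pairs for a downstream zero-shot LLM contradiction check. The embedding model, cluster count, and pairing heuristic are illustrative assumptions, not the authors' actual configuration.

```python
# Illustrative sketch: embedding-based paragraph clustering to generate
# candidate pairs for contradiction review. Model name and cluster count
# are assumptions for demonstration only.
from itertools import combinations

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

paragraphs = [
    "Revenue for fiscal year 2022 amounted to EUR 4.2 million.",
    "Total revenue in 2022 was EUR 3.9 million.",
    "The company employs 120 people at its headquarters.",
]

# Embed each paragraph into a dense vector space.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(paragraphs)

# Cluster so that only semantically related paragraphs are compared.
kmeans = KMeans(n_clusters=2, n_init="auto", random_state=0)
labels = kmeans.fit_predict(embeddings)

# Within each cluster, emit candidate pairs for a zero-shot LLM to judge.
for cluster_id in set(labels):
    members = [p for p, l in zip(paragraphs, labels) if l == cluster_id]
    for a, b in combinations(members, 2):
        print(f"Check for contradiction:\n  A: {a}\n  B: {b}")
```

Restricting pairwise comparison to within-cluster paragraphs keeps the number of LLM calls manageable for long reports, since naive all-pairs comparison grows quadratically with document length.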

Publication

Contradiction Detection in Financial Reports

2023-01-23, Deußer, Tobias; Pielka, Maren; Pucknat, Lisa; Jacob, Basil; Dilmaghani, Tim; Nourimand, Mahdis; Kliem, Bernd; Loitz, Rüdiger; Bauckhage, Christian; Sifa, Rafet

Finding and amending contradictions in a financial report is crucial for the publishing company and its financial auditors. To automate this process, we introduce a novel approach that incorporates informed pre-training into its transformer-based architecture to infuse the model with additional part-of-speech knowledge. Furthermore, we fine-tune the model on the public Stanford Natural Language Inference Corpus and on our proprietary financial contradiction dataset. It achieves a contradiction detection F1 score of 89.55% on our real-world financial contradiction dataset, beating several baselines by a considerable margin. During model selection we also test various financial-document-specific transformer models and find that they underperform more general embedding approaches.
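
The paper's POS-informed model and its proprietary fine-tuning data are not public, so the sketch below only illustrates the underlying natural language inference formulation with an off-the-shelf MNLI model (roberta-large-mnli as a stand-in): a sentence pair is scored and the contradiction probability is read off.

```python
# Illustrative sketch of NLI-based contradiction detection. The checkpoint
# is a generic stand-in, not the paper's POS-informed model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Revenue for fiscal year 2022 amounted to EUR 4.2 million."
hypothesis = "Total revenue in 2022 was EUR 3.9 million."

# Encode the sentence pair and score it in one forward pass.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment.
probs = torch.softmax(logits, dim=-1)[0]
print(f"contradiction probability: {probs[0].item():.3f}")
```

In a full pipeline, pairs flagged with a high contradiction probability would be surfaced to auditors for review; the decision threshold would be tuned on labeled data such as the proprietary dataset the abstract mentions.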