  • Publication
    Uncovering Inconsistencies and Contradictions in Financial Reports using Large Language Models
    (2023-12)
    Leonhard, David; Berger, Armin; Khaled, Mohamed; Heiden, Sarah; Dilmaghani, Tim; Kliem, Bernd; Loitz, Rüdiger
    Correct identification and correction of contradictions and inconsistencies within financial reports constitute a fundamental component of the audit process. To streamline and automate this critical task, we introduce a novel approach leveraging large language models and an embedding-based paragraph clustering methodology. This paper assesses our approach across three distinct datasets, including two annotated datasets and one unannotated dataset, all within a zero-shot framework. Our findings reveal highly promising results that significantly enhance the effectiveness and efficiency of the auditing process, ultimately reducing the time required for a thorough and reliable financial report audit.
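    A minimal sketch of the paragraph-clustering idea, assuming TF-IDF embeddings and KMeans as stand-ins (the paper's actual embedding model and clustering method are not given here): paragraphs that land in the same cluster become candidate pairs for an LLM contradiction check.

      # Hypothetical sketch: cluster report paragraphs so that statements about
      # the same fact land together; within-cluster pairs become candidate
      # contradictions to hand to an LLM for a verdict.
      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      paragraphs = [
          "Revenue increased by 12% to EUR 4.1 million in 2022.",
          "In 2022, revenue grew 12 percent to EUR 4.3 million.",
          "Headcount remained stable at 250 employees.",
      ]

      embeddings = TfidfVectorizer().fit_transform(paragraphs)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

      candidates = [(i, j) for i in range(len(paragraphs))
                    for j in range(i + 1, len(paragraphs)) if labels[i] == labels[j]]
      print(candidates)  # ideally [(0, 1)]: the two conflicting revenue figures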
  • Publication
    Improving Zero-Shot Text Matching for Financial Auditing with Large Language Models
    (2023-08-22)
    Berger, Armin; Dilmaghani, Tim; Khaled, Mohamed; Kliem, Bernd; Loitz, Rüdiger; Leonhard, David
    Auditing financial documents is a tedious and time-consuming process. As of today, it can already be simplified by employing AI-based solutions to recommend relevant text passages from a report for each legal requirement of rigorous accounting standards. However, these methods need to be fine-tuned regularly, and they require abundant annotated data, which is often lacking in industrial environments. Hence, we present ZeroShotALI, a novel recommender system that leverages a state-of-the-art large language model (LLM) in conjunction with a domain-specifically optimized transformer-based text-matching solution. We find that a two-step approach, first retrieving the best-matching document sections per legal requirement with a custom BERT-based model and then filtering these candidates with an LLM, yields significant performance improvements over existing approaches.
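    The two-step retrieve-then-filter idea can be sketched as follows; the MiniLM encoder and the llm_filter stub are illustrative stand-ins for the authors' custom BERT matcher and LLM, not their actual components.

      # Step 1: rank sections by embedding similarity, keep the top k.
      # Step 2: let an LLM accept or reject each retrieved section.
      from sentence_transformers import SentenceTransformer, util

      encoder = SentenceTransformer("all-MiniLM-L6-v2")

      def retrieve(requirement, sections, k=3):
          scores = util.cos_sim(encoder.encode(requirement),
                                encoder.encode(sections))[0].tolist()
          return sorted(range(len(sections)), key=lambda i: -scores[i])[:k]

      def llm_filter(requirement, section):
          # Stub: replace with a real LLM call answering yes/no.
          return "revenue" in section.lower()

      sections = ["Revenue recognition follows IFRS 15.",
                  "The board met four times in 2022."]
      requirement = "Disclose the revenue recognition policy."
      hits = retrieve(requirement, sections, k=2)
      matches = [sections[i] for i in hits if llm_filter(requirement, sections[i])]
      print(matches)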
  • Publication
    How Does Knowledge Injection Help in Informed Machine Learning?
    Informed machine learning describes the injection of prior knowledge into learning systems. It can help to improve generalization, especially when training data is scarce. However, the field is so application-driven that general analyses about the effect of knowledge injection are rare. This makes it difficult to transfer existing approaches to new applications, or to estimate potential improvements. Therefore, in this paper, we present a framework for quantifying the value of prior knowledge in informed machine learning. Our main contributions are threefold. Firstly, we propose a set of relevant metrics for quantifying the benefits of knowledge injection, comprising in-distribution accuracy, out-of-distribution robustness, and knowledge conformity. We also introduce a metric that combines performance improvement and data reduction. Secondly, we present a theoretical framework that represents prior knowledge in a function space and relates it to data representations and a trained model. This suggests that the distances between knowledge and data influence potential model improvements. Thirdly, we perform a systematic experimental study with controllable toy problems. All in all, this helps to find general answers to the question of how knowledge injection helps in informed machine learning.
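    One concrete instance of knowledge injection on a controllable toy problem, assuming a known symmetry prior (this illustrates the general idea, not the paper's framework): when the label is known to satisfy f(x) = f(-x), each scarce training sample can be mirrored, and the accuracy gap between the baseline and the informed model quantifies the value of the prior.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)

      def make_data(n):
          x = rng.uniform(-1, 1, size=(n, 1))
          return x, (np.abs(x[:, 0]) > 0.5).astype(int)

      x_train, y_train = make_data(12)   # scarce training data
      x_test, y_test = make_data(2000)

      baseline = DecisionTreeClassifier(random_state=0).fit(x_train, y_train)

      # Inject the symmetry prior by augmenting with mirrored samples.
      x_aug = np.vstack([x_train, -x_train])
      y_aug = np.concatenate([y_train, y_train])
      informed = DecisionTreeClassifier(random_state=0).fit(x_aug, y_aug)

      print("baseline accuracy:", baseline.score(x_test, y_test))
      print("informed accuracy:", informed.score(x_test, y_test))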
  • Publication
    A New Aligned Simple German Corpus
    (2023-07)
    Toborek, Vanessa; Busch, Moritz; Boßert, Malte; Welke, Pascal
    "Leichte Sprache", the German counterpart to Simple English, is a regulated language aiming to facilitate complex written language that would otherwise stay inaccessible to different groups of people. We present a new sentence-aligned monolingual corpus for Simple German - German. It contains multiple document-aligned sources which we have aligned using automatic sentence-alignment methods. We evaluate our alignments based on a manually labelled subset of aligned documents. The quality of our sentence alignments, as measured by the F1-score, surpasses previous work. We publish the dataset under CC BY-SA and the accompanying code under MIT license.
  • Publication
    Contradiction Detection in Financial Reports
    (2023-01-23)
    Pucknat, Lisa; Jacob, Basil; Dilmaghani, Tim; Nourimand, Mahdis; Kliem, Bernd; Loitz, Rüdiger
    Finding and amending contradictions in a financial report is crucial for the publishing company and its financial auditors. To automate this process, we introduce a novel approach that incorporates informed pre-training into its transformer-based architecture to infuse the model with additional part-of-speech knowledge. Furthermore, we fine-tune the model on the public Stanford Natural Language Inference Corpus and our proprietary financial contradiction dataset. It achieves an exceptional contradiction detection F1 score of 89.55% on our real-world financial contradiction dataset, beating several baselines by a considerable margin. During model selection we also test various financial-document-specific transformer models and find that they underperform the more general embedding approaches.
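    For intuition, an off-the-shelf MNLI model can already flag contradicting statement pairs; the paper's model additionally uses part-of-speech-informed pre-training and proprietary fine-tuning, which this sketch omits.

      # Off-the-shelf NLI stand-in: flag a pair of report statements when the
      # model predicts the CONTRADICTION label.
      from transformers import pipeline

      nli = pipeline("text-classification", model="roberta-large-mnli")

      premise = "Revenue increased by 12 percent in 2022."
      hypothesis = "Revenue declined in 2022."
      result = nli({"text": premise, "text_pair": hypothesis})
      print(result)  # -> label 'CONTRADICTION' with a confidence score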
  • Publication
    An Empirical Evaluation of the Rashomon Effect in Explainable Machine Learning
    (2023)
    Müller, Sebastian; Toborek, Vanessa; Jakobs, Matthias; Welke, Pascal
    The Rashomon Effect describes the following phenomenon: for a given dataset there may exist many models with equally good performance but with different solution strategies. The Rashomon Effect has implications for Explainable Machine Learning, especially for the comparability of explanations. We provide a unified view on three different comparison scenarios and conduct a quantitative evaluation across different datasets, models, attribution methods, and metrics. We find that hyperparameter tuning plays a role and that metric selection matters. Our results provide empirical support for previously anecdotal evidence and exhibit challenges for both scientists and practitioners.
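    The effect is easy to reproduce in miniature: two models with near-identical accuracy can still rank features differently. The dataset, model family, and attribution method below are illustrative choices, not the paper's experimental setup.

      import numpy as np
      from scipy.stats import spearmanr
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=600, n_features=10,
                                 n_informative=6, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      importances = []
      for seed in (1, 2):
          m = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
          print(f"seed {seed} accuracy: {m.score(X_te, y_te):.3f}")
          r = permutation_importance(m, X_te, y_te, n_repeats=10, random_state=0)
          importances.append(r.importances_mean)

      # Similar accuracy, possibly diverging explanations:
      rho, _ = spearmanr(importances[0], importances[1])
      print("attribution rank correlation:", rho)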
  • Publication
    Towards Automated Regulatory Compliance Verification in Financial Auditing with Large Language Models
    (2023)
    Berger, Armin; Leonhard, David; Bell Felix de Oliveira, Thiago; Dilmaghani, Tim; Khaled, Mohamed; Kliem, Bernd; Loitz, Rüdiger
    The auditing of financial documents, historically a labor-intensive process, stands on the precipice of transformation. AI-driven solutions have made inroads into streamlining this process by recommending pertinent text passages from financial reports to align with the legal requirements of accounting standards. However, a glaring limitation remains: these systems commonly fall short in verifying if the recommended excerpts indeed comply with the specific legal mandates. Hence, in this paper, we probe the efficiency of publicly available Large Language Models (LLMs) in the realm of regulatory compliance across different model configurations. We place particular emphasis on comparing cutting-edge open-source LLMs, such as Llama-2, with their proprietary counterparts like OpenAI's GPT models. This comparative analysis leverages two custom datasets provided by our partner PricewaterhouseCoopers (PwC) Germany. We find that the open-source Llama-2 70B model demonstrates outstanding performance in detecting non-compliance or true negative occurrences, beating all of its proprietary counterparts. Nevertheless, proprietary models such as GPT-4 perform the best in a broad variety of scenarios, particularly in non-English contexts.
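    A model-agnostic sketch of such a compliance check; the prompt wording and the injected llm callable are assumptions for illustration, not the paper's protocol, and any chat model client (GPT-4, Llama-2, ...) can be plugged in.

      from typing import Callable

      def check_compliance(requirement: str, excerpt: str,
                           llm: Callable[[str], str]) -> bool:
          # Ask an LLM whether an excerpt satisfies a legal requirement.
          prompt = (
              "You are a financial auditor.\n"
              f"Requirement: {requirement}\n"
              f"Report excerpt: {excerpt}\n"
              "Does the excerpt satisfy the requirement? Answer YES or NO."
          )
          return llm(prompt).strip().upper().startswith("YES")

      # A canned answer keeps the sketch runnable without API access.
      print(check_compliance(
          "Disclose the revenue recognition policy.",
          "Revenue is recognised per IFRS 15 when control transfers.",
          llm=lambda prompt: "YES",
      ))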
  • Publication
    KPI-EDGAR: A Novel Dataset and Accompanying Metric for Relation Extraction from Financial Documents
    (2022-12)
    Ali, Syed Musharraf; Nurchalifah, Desiana Dien; Jacob, Basil
    We introduce KPI-EDGAR, a novel dataset for Joint Named Entity Recognition and Relation Extraction building on financial reports uploaded to the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, where the main objective is to extract Key Performance Indicators (KPIs) from financial documents and link them to their numerical values and other attributes. We further provide four accompanying baselines for benchmarking potential future research. Additionally, we propose a new way of measuring the success of said extraction process by incorporating a word-level weighting scheme into the conventional F1 score to better model the inherently fuzzy borders of the entity pairs of a relation in this domain.
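    One way such word-level weighting can soften exact-match F1, giving partial credit at fuzzy entity borders; the concrete weighting scheme below (uniform word weights, best-overlap matching) is illustrative, not the paper's definition.

      def weighted_f1(predicted, gold, weight=lambda w: 1.0):
          # Credit each predicted phrase with the weighted overlap of its words
          # against its best-matching gold phrase (and vice versa for recall).
          def overlap(a, b):
              return sum(weight(w) for w in set(a.split()) & set(b.split()))
          def mass(phrase):
              return sum(weight(w) for w in set(phrase.split()))
          tp_p = sum(max((overlap(p, g) for g in gold), default=0.0)
                     for p in predicted)
          tp_g = sum(max((overlap(g, p) for p in predicted), default=0.0)
                     for g in gold)
          precision = tp_p / sum(mass(p) for p in predicted) if predicted else 0.0
          recall = tp_g / sum(mass(g) for g in gold) if gold else 0.0
          denom = precision + recall
          return 2 * precision * recall / denom if denom else 0.0

      # "income" earns partial credit against "operating income":
      print(weighted_f1(["net revenue", "income"],
                        ["net revenue", "operating income"]))  # ~0.86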
  • Publication
    From Open Set Recognition Towards Robust Multi-class Classification
    The challenges and risks of deploying deep neural networks (DNNs) in the open-world are often overlooked and potentially result in severe outcomes. With our proposed informer approach, we leverage autoencoder-based outlier detectors with their sensitivity to epistemic uncertainty by ensembling multiple detectors each learning a different one-vs-rest setting. Our results clearly show informer’s superiority compared to DNN ensembles, kernel-based DNNs, and traditional multi-layer perceptrons (MLPs) in terms of robustness to outliers and dataset shift while maintaining a competitive classification performance. Finally, we show that informer can estimate the overall uncertainty within a prediction and, in contrast to any of the other baselines, break the uncertainty estimate down into aleatoric and epistemic uncertainty. This is an essential feature in many use cases, as the underlying reasons for the uncertainty are fundamentally different and can require different actions.
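    A minimal sketch of the one-vs-rest detector ensemble, assuming MLPRegressor autoencoders and a fixed rejection threshold as simplifications of the paper's informer architecture: classify by the best-reconstructing detector, and reject when every detector is surprised.

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.neural_network import MLPRegressor

      X, y = load_iris(return_X_y=True)

      # One autoencoder per class, each trained only on that class's samples.
      detectors = {c: MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000,
                                   random_state=0).fit(X[y == c], X[y == c])
                   for c in np.unique(y)}

      def reconstruction_errors(x):
          return {c: float(np.mean((ae.predict(x[None])[0] - x) ** 2))
                  for c, ae in detectors.items()}

      e = reconstruction_errors(X[0])
      predicted_class = min(e, key=e.get)   # best-reconstructing detector
      is_outlier = min(e.values()) > 0.5    # illustrative rejection threshold
      print(predicted_class, is_outlier)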
  • Publication
    Quantum Circuit Evolution on NISQ Devices
    Variational quantum circuits build the foundation for various classes of quantum algorithms. In a nutshell, the weights of a parametrized quantum circuit are varied until the empirical sampling distribution of the circuit is sufficiently close to a desired outcome. Numerical first-order methods are applied frequently to fit the parameters of the circuit, but most of the time, the circuit itself, that is, the actual composition of gates, is fixed. Methods for optimizing the circuit design jointly with the weights have been proposed, but empirical results are rather scarce. Here, we consider a simple evolutionary strategy that addresses the trade-off between finding appropriate circuit architectures and parameter tuning. We evaluate our method both via simulation and on actual quantum hardware. Our benchmark problems include the transverse field Ising Hamiltonian and the Sherrington-Kirkpatrick spin model. Despite the shortcomings of current noisy intermediate-scale quantum hardware, we find only a minor slowdown on actual quantum machines compared to simulations. Moreover, we investigate which mutation operations most significantly contribute to the optimization. The results provide intuition on how randomized search heuristics behave on actual quantum hardware and lay out a path for further refinement of evolutionary quantum gate circuits.
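    A toy version of the idea, assuming a hand-rolled two-qubit statevector simulator and a (1+1) evolution strategy that mutates either a gate angle or the gate list; the paper's experiments use larger problem instances and real NISQ hardware.

      import numpy as np

      I2 = np.eye(2, dtype=complex)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Z = np.array([[1, 0], [0, -1]], dtype=complex)
      CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                       [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

      def rx(t):
          return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                           [-1j * np.sin(t / 2), np.cos(t / 2)]])

      def rz(t):
          return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

      def run(circuit):
          psi = np.zeros(4, dtype=complex)
          psi[0] = 1.0
          for gate, qubit, theta in circuit:
              if gate == "cx":              # fixed CNOT: control 0, target 1
                  u = CNOT
              else:
                  g = rx(theta) if gate == "rx" else rz(theta)
                  u = np.kron(g, I2) if qubit == 0 else np.kron(I2, g)
              psi = u @ psi
          return psi

      # Transverse-field Ising Hamiltonian on two qubits: H = -Z0 Z1 - X0 - X1.
      H = -np.kron(Z, Z) - np.kron(X, I2) - np.kron(I2, X)

      def energy(circuit):
          psi = run(circuit)
          return float(np.real(psi.conj() @ H @ psi))

      rng = np.random.default_rng(0)

      def mutate(circuit):
          child = list(circuit)
          if child and rng.random() < 0.5:  # perturb one gate angle ...
              i = int(rng.integers(len(child)))
              g, q, t = child[i]
              child[i] = (g, q, t + rng.normal(0.0, 0.3))
          else:                             # ... or grow the circuit
              g = ("rx", "rz", "cx")[int(rng.integers(3))]
              child.append((g, int(rng.integers(2)), float(rng.normal(0.0, 1.0))))
          return child

      # (1+1) evolution strategy: accept the mutant if it is no worse.
      parent = [("rx", 0, 0.1), ("rx", 1, 0.1)]
      for _ in range(500):
          child = mutate(parent)
          if energy(child) <= energy(parent):
              parent = child

      print("gates:", len(parent), "energy:", round(energy(parent), 4))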