  • Publication
    Controlled Randomness Improves the Performance of Transformer Models
    (2024-03-19)
    Zhao, Cong; Krämer, Wolfgang; Leonhard, David
    During the pre-training step of natural language models, the main objective is to learn a general representation of the pre-training dataset, usually requiring large amounts of textual data to capture the complexity and diversity of natural language. In contrast, the data available to solve a specific downstream task is in most cases dwarfed by the aforementioned pre-training dataset, especially in domains where data is scarce. We introduce controlled randomness, i.e. noise, into the training process to improve the fine-tuning of language models and explore the effect of adding such targeted noise to the parameters of these models. We find that adding this noise can improve performance on our two downstream tasks: joint named entity recognition and relation extraction, and text summarization.
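A minimal sketch of the idea of injecting controlled noise during fine-tuning, assuming a PyTorch model; the noise scale, the point at which the noise is applied, and the training loop are illustrative assumptions rather than the paper's exact setup.

```python
import torch

def add_parameter_noise(model: torch.nn.Module, sigma: float = 1e-3) -> None:
    """Inject controlled Gaussian noise into every trainable parameter (illustrative)."""
    with torch.no_grad():
        for param in model.parameters():
            if param.requires_grad:
                param.add_(torch.randn_like(param) * sigma)

def fine_tune(model, data_loader, optimizer, loss_fn, sigma: float = 1e-3) -> None:
    """Hypothetical fine-tuning loop: noise is injected before every update step."""
    model.train()
    for inputs, labels in data_loader:
        add_parameter_noise(model, sigma)   # the "controlled randomness"
        loss = loss_fn(model(inputs), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```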
  • Publication
    Uncovering Inconsistencies and Contradictions in Financial Reports using Large Language Models
    (2023-12)
    Leonhard, David; Berger, Armin; Khaled, Mohamed; Heiden, Sarah; Dilmaghani, Tim; Kliem, Bernd; Loitz, Rüdiger
    Correct identification and correction of contradictions and inconsistencies within financial reports constitute a fundamental component of the audit process. To streamline and automate this critical task, we introduce a novel approach leveraging large language models and an embedding-based paragraph clustering methodology. This paper assesses our approach across three distinct datasets, including two annotated datasets and one unannotated dataset, all within a zero-shot framework. Our findings reveal highly promising results that significantly enhance the effectiveness and efficiency of the auditing process, ultimately reducing the time required for a thorough and reliable financial report audit.
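A rough sketch of an embedding-based paragraph clustering pipeline followed by zero-shot LLM contradiction checks; the embedding model, the clustering method, the prompt, and the ask_llm callable are assumptions, not the implementation used in the paper.

```python
from itertools import combinations
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_paragraphs(paragraphs, n_clusters=20):
    """Group report paragraphs by topical similarity so only related ones are compared."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
    embeddings = encoder.encode(paragraphs)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    clusters = {}
    for paragraph, label in zip(paragraphs, labels):
        clusters.setdefault(label, []).append(paragraph)
    return clusters

def find_contradictions(clusters, ask_llm):
    """ask_llm(prompt) -> str is a placeholder for any zero-shot LLM call."""
    findings = []
    for paragraphs in clusters.values():
        for a, b in combinations(paragraphs, 2):
            prompt = ("Do the following two statements from a financial report "
                      f"contradict each other? Answer yes or no.\nA: {a}\nB: {b}")
            if ask_llm(prompt).strip().lower().startswith("yes"):
                findings.append((a, b))
    return findings
```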
  • Publication
    Improving Zero-Shot Text Matching for Financial Auditing with Large Language Models
    (2023-08-22)
    Berger, Armin; Dilmaghani, Tim; Khaled, Mohamed; Kliem, Bernd; Loitz, Rüdiger; Leonhard, David
    Auditing financial documents is a very tedious and time-consuming process. As of today, it can already be simplified by employing AI-based solutions to recommend relevant text passages from a report for each legal requirement of rigorous accounting standards. However, these methods need to be fine-tuned regularly, and they require abundant annotated data, which is often lacking in industrial environments. Hence, we present ZeroShotALI, a novel recommender system that leverages a state-of-the-art large language model (LLM) in conjunction with a domain-specifically optimized transformer-based text-matching solution. We find that a two-step approach, which first retrieves the best-matching document sections per legal requirement with a custom BERT-based model and then filters these selections using an LLM, yields significant performance improvements over existing approaches.
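A minimal sketch of the two-step idea, with a generic sentence-embedding retriever standing in for the custom BERT-based matcher and a placeholder ask_llm callable for the LLM filter; model names and prompts are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

def retrieve_candidates(requirement, sections, top_k=5):
    """Step 1: embedding-based retrieval, standing in for the custom BERT matcher."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder
    req_emb = encoder.encode(requirement, convert_to_tensor=True)
    sec_emb = encoder.encode(sections, convert_to_tensor=True)
    scores = util.cos_sim(req_emb, sec_emb)[0]
    top = scores.topk(k=min(top_k, len(sections)))
    return [sections[i] for i in top.indices.tolist()]

def filter_with_llm(requirement, candidates, ask_llm):
    """Step 2: zero-shot LLM filter; ask_llm(prompt) -> str is a placeholder call."""
    kept = []
    for section in candidates:
        prompt = (f"Legal requirement: {requirement}\n"
                  f"Report section: {section}\n"
                  "Does the section address this requirement? Answer yes or no.")
        if ask_llm(prompt).strip().lower().startswith("yes"):
            kept.append(section)
    return kept
```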
  • Publication
    Contradiction Detection in Financial Reports
    (2023-01-23)
    Pucknat, Lisa; Jacob, Basil; Dilmaghani, Tim; Nourimand, Mahdis; Kliem, Bernd; Loitz, Rüdiger
    Finding and amending contradictions in a financial report is crucial for the publishing company and its financial auditors. To automate this process, we introduce a novel approach that incorporates informed pre-training into its transformer-based architecture to infuse the model with additional Part-Of-Speech knowledge. Furthermore, we fine-tune the model on the public Stanford Natural Language Inference Corpus and our proprietary financial contradiction dataset. It achieves an exceptional contradiction detection F1 score of 89.55% on our real-world financial contradiction dataset, beating several baselines by a considerable margin. During the model selection process, we also test various financial-document-specific transformer models and find that they underperform the more general embedding approaches.
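A hedged sketch of the fine-tuning stage on SNLI with Hugging Face Transformers; the base model, hyperparameters, and the omitted Part-Of-Speech-informed pre-training are assumptions, not the paper's exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"   # assumed base model; POS-informed pre-training omitted

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

# SNLI fine-tuning; entries with label -1 are unlabelled and dropped.
snli = load_dataset("snli").filter(lambda x: x["label"] != -1).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nli-finetune", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=snli["train"],
    eval_dataset=snli["validation"],
)
trainer.train()
# A second pass on a proprietary financial contradiction dataset would follow the same pattern.
```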
  • Publication
    KPI-EDGAR: A Novel Dataset and Accompanying Metric for Relation Extraction from Financial Documents
    (2022-12)
    Ali, Syed Musharraf; Nurchalifah, Desiana Dien; Jacob, Basil
    We introduce KPI-EDGAR, a novel dataset for Joint Named Entity Recognition and Relation Extraction building on financial reports uploaded to the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, where the main objective is to extract Key Performance Indicators (KPIs) from financial documents and link them to their numerical values and other attributes. We further provide four accompanying baselines for benchmarking potential future research. Additionally, we propose a new way of measuring the success of said extraction process by incorporating a word-level weighting scheme into the conventional F1 score to better model the inherently fuzzy borders of the entity pairs of a relation in this domain.
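One plausible reading of a word-level-weighted F1, shown as a sketch: predicted and gold entity pairs receive partial credit according to word overlap. The actual KPI-EDGAR metric may weight words differently; this is only an illustration.

```python
def word_overlap_f1(predicted_pairs, gold_pairs):
    """Word-level F1 over (KPI, value) entity pairs; partial word overlap earns
    partial credit, one way to soften fuzzy entity borders (illustrative only)."""
    def overlap(pred, gold):
        pred_words, gold_words = set(pred.split()), set(gold.split())
        if not pred_words or not gold_words:
            return 0.0
        return len(pred_words & gold_words) / max(len(pred_words), len(gold_words))

    # Match every predicted pair to its best gold pair (greedy, illustrative).
    tp = sum(
        max((overlap(p_kpi, g_kpi) * overlap(p_val, g_val)
             for g_kpi, g_val in gold_pairs), default=0.0)
        for p_kpi, p_val in predicted_pairs
    )
    precision = tp / len(predicted_pairs) if predicted_pairs else 0.0
    recall = tp / len(gold_pairs) if gold_pairs else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# A partially matching prediction receives a score between 0 and 1.
print(word_overlap_f1([("net revenue", "1.2 billion")],
                      [("total net revenue", "1.2 billion")]))
```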
  • Publication
    Utilizing Representation Learning for Robust Text Classification Under Datasetshift
    Within One-vs-Rest (OVR) classification, a classifier differentiates a single class of interest (COI) from the rest, i.e. any other class. By extending the scope of the rest class to corruptions (dataset shift), aspects of outlier detection gain relevance. In this work, we show that adversarially trained autoencoders (ATA), representative of autoencoder-based outlier detection methods, yield tremendous robustness improvements over traditional neural network methods such as multi-layer perceptrons (MLP) and common ensemble methods, while maintaining competitive classification performance. In contrast, our results also reveal that deep learning methods solely optimized for classification tend to fail completely when exposed to dataset shift.
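A minimal sketch of autoencoder-based one-vs-rest detection via reconstruction error; the architecture, the threshold, and the omitted adversarial training component are assumptions.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Small autoencoder trained only on the class of interest (COI)."""
    def __init__(self, input_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit(model, coi_data, epochs=100, lr=1e-3):
    """Train the autoencoder to reconstruct COI samples only."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(coi_data), coi_data)
        loss.backward()
        optimizer.step()

def predict_coi(model, x, threshold=0.1):
    """Low reconstruction error -> COI; high error -> 'rest', including corrupted inputs."""
    with torch.no_grad():
        error = ((model(x) - x) ** 2).mean(dim=1)
    return error < threshold
```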
  • Publication
    Combining Machine Learning and Simulation to a Hybrid Modelling Approach: Current and Future Directions
    In this paper, we describe the combination of machine learning and simulation towards a hybrid modelling approach. Such a combination of data-based and knowledge-based modelling is motivated by applications that are partly based on causal relationships, while other effects result from hidden dependencies that are represented in huge amounts of data. Our aim is to bridge the knowledge gap between the two individual communities from machine learning and simulation to promote the development of hybrid systems. We present a conceptual framework that helps to identify potential combined approaches and employ it to give a structured overview of different types of combinations using exemplary approaches of simulation-assisted machine learning and machine-learning assisted simulation. We also discuss an advanced pairing in the context of Industry 4.0 where we see particular further potential for hybrid systems.
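A toy sketch of simulation-assisted machine learning, one of the combinations discussed above: a (here trivial) simulator generates labelled data for a fast data-based surrogate model. The simulator and model choice are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate(x):
    """Stand-in for an expensive knowledge-based simulator (here just a formula)."""
    return np.sin(3 * x) + 0.5 * x ** 2

# Simulation-assisted machine learning: the simulator provides labelled training
# data for a fast data-based surrogate model.
x_train = np.random.uniform(-2, 2, size=(500, 1))
y_train = simulate(x_train).ravel()

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(x_train, y_train)

print(surrogate.predict([[1.0]]), simulate(np.array([1.0])))
```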
  • Publication
    Triple Classification Using Regions and Fine-Grained Entity Typing
    (2019)
    Dong, Tiansi; Wang, Zhigang; Li, Juanzi; Cremers, Armin B.
    A triple in a knowledge graph takes the form (head, relation, tail). Triple Classification is used to determine the truth value of an unknown triple. This is a hard task for 1-to-N relations using the vector-based embedding approach. We propose a new region-based embedding approach using fine-grained type chains. A novel geometric process is presented to extend the vectors of pre-trained entities into n-balls (n-dimensional balls) under the condition that head balls shall contain their tail balls. Our algorithm achieves zero energy loss and therefore serves as a case study of perfectly imposing tree structures into vector space. An unknown triple (h, r, x) is predicted as true when x's n-ball is located in the r-subspace of h's n-ball, following the same construction as the known tails of h. The experiments are based on large datasets derived from the benchmark datasets WN11, FB13, and WN18. Our results show that the performance of the new method is related to the length of the type chain and the quality of the pre-trained entity embeddings, and that long chains with well-trained entity embeddings outperform other methods in the literature.
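A minimal sketch of the geometric membership test, assuming an n-ball is represented by a center vector and a radius; how the r-subspace ball is actually constructed in the paper is not reproduced here.

```python
import numpy as np

class NBall:
    """An n-dimensional ball given by a center vector and a radius."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

def contains(outer: NBall, inner: NBall) -> bool:
    """outer fully contains inner iff dist(centers) + r_inner <= r_outer."""
    return np.linalg.norm(outer.center - inner.center) + inner.radius <= outer.radius

# Hypothetical example: the head entity's r-subspace ball and a candidate tail ball.
head_r_region = NBall(center=[0.0, 0.0, 0.0], radius=1.0)
tail_ball = NBall(center=[0.2, 0.1, 0.0], radius=0.5)

# Predict the triple (h, r, x) as true when x's ball lies inside h's r-region.
print(contains(head_r_region, tail_ball))   # True
```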
  • Publication
    Two Attempts to Predict Author Gender in Cross-Genre Settings in Dutch
    (2019)
    Brito, Eduardo
    This paper describes the systems designed by the Fraunhofer IAIS team at the CLIN29 shared task on cross-genre gender detection in Dutch. We show two alternative classification approaches: a rather standard one consisting of feature engineering and a random forest classifier, and an alternative one involving an LSTM classifier. Both are enhanced by an LDA model trained on stems. We considered various features such as the frequency of function words, part-of-speech tags, and sentiment, among others. We achieved 53.77% average accuracy in the cross-genre settings.
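A small sketch of the "standard" approach, i.e. handcrafted features feeding a random forest; the Dutch function-word list and the labels are toy assumptions, and the LDA enhancement is omitted.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Function-word counts as simple stylometric features (word list is illustrative).
DUTCH_FUNCTION_WORDS = ["de", "het", "een", "en", "van", "ik", "je", "dat", "niet", "maar"]

pipeline = make_pipeline(
    CountVectorizer(vocabulary=DUTCH_FUNCTION_WORDS),
    RandomForestClassifier(n_estimators=300, random_state=0),
)

# Toy data, just to show the fit/predict interface of the feature-based classifier.
texts = ["ik denk dat het een goed idee is",
         "de resultaten van het onderzoek zijn duidelijk"]
labels = ["F", "M"]

pipeline.fit(texts, labels)
print(pipeline.predict(["het is niet de vraag maar het antwoord"]))
```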
  • Publication
    Towards Shortest Paths via Adiabatic Quantum Computing
    Since the first working quantum computers are now available, accelerated development of this technology may be expected. This will likely impact graph and network analysis, because quantum computers promise fast solutions for many problems in these areas. In this paper, we explore the use of adiabatic quantum computing for finding shortest paths. We devise an Ising energy minimization formulation for this task and discuss how to set up a system of quantum bits to find minimum energy states of the model. In simulation experiments, we numerically solve the corresponding Schrödinger equations and observe that our approach works well. This provides evidence that shortest path computation can at least be assisted by quantum computers.
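A toy sketch of the shortest-path-as-energy-minimization idea: edge-selection bits with flow-conservation penalties form a QUBO-style energy, minimized here by brute force instead of an adiabatic quantum computer. The penalty construction is a common textbook formulation and may differ from the paper's Ising model.

```python
from itertools import product

# Tiny directed graph (hypothetical example): (u, v) -> edge weight
edges = {("s", "a"): 1.0, ("a", "t"): 1.0, ("s", "t"): 3.0}
nodes = ["s", "a", "t"]
source, target = "s", "t"
PENALTY = 10.0   # weight of the flow-conservation constraint terms

def energy(bits):
    """QUBO-style energy: total path length plus penalties for violated flow constraints."""
    cost = sum(weight * x for ((u, v), weight), x in zip(edges.items(), bits))
    for n in nodes:
        out_flow = sum(x for ((u, v), _), x in zip(edges.items(), bits) if u == n)
        in_flow = sum(x for ((u, v), _), x in zip(edges.items(), bits) if v == n)
        demand = 1 if n == source else (-1 if n == target else 0)
        cost += PENALTY * (out_flow - in_flow - demand) ** 2
    return cost

# Brute-force search for the minimum-energy edge selection; an adiabatic quantum
# computer would instead relax into this minimum of the energy landscape.
best = min(product([0, 1], repeat=len(edges)), key=energy)
path_edges = [edge for edge, x in zip(edges, best) if x]
print(path_edges, energy(best))   # [('s', 'a'), ('a', 't')] 2.0
```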