  • Publication
    Towards Automated Regulatory Compliance Verification in Financial Auditing with Large Language Models
    (2023)
    Berger, Armin; Leonhard, David; Bell Felix de Oliveira, Thiago; Dilmaghani, Tim; Khaled, Mohamed; Kliem, Bernd; Loitz, Rüdiger
    The auditing of financial documents, historically a labor-intensive process, stands on the precipice of transformation. AI-driven solutions have made inroads into streamlining this process by recommending pertinent text passages from financial reports to align with the legal requirements of accounting standards. However, a glaring limitation remains: these systems commonly fall short in verifying if the recommended excerpts indeed comply with the specific legal mandates. Hence, in this paper, we probe the efficiency of publicly available Large Language Models (LLMs) in the realm of regulatory compliance across different model configurations. We place particular emphasis on comparing cutting-edge open-source LLMs, such as Llama-2, with their proprietary counterparts like OpenAI's GPT models. This comparative analysis leverages two custom datasets provided by our partner PricewaterhouseCoopers (PwC) Germany. We find that the open-source Llama-2 70 billion model demonstrates outstanding performance in detecting non-compliance or true negative occurrences, beating all of its proprietary counterparts. Nevertheless, proprietary models such as GPT-4 perform the best in a broad variety of scenarios, particularly in non-English contexts.
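    To make the verification step concrete, the following is a minimal sketch of how an LLM could be prompted to judge whether a recommended excerpt satisfies a legal requirement. The prompt wording, label parsing, and the generic `llm` callable are illustrative assumptions, not the prompts or models used in the paper.

```python
# Minimal sketch of a compliance-verification prompt, assuming a generic
# text-generation callable llm(prompt) -> str (e.g. a Llama-2 or GPT wrapper).
# Prompt wording and label parsing are illustrative, not taken from the paper.

def build_prompt(requirement: str, excerpt: str) -> str:
    return (
        "You are an auditing assistant.\n"
        f"Legal requirement: {requirement}\n"
        f"Report excerpt: {excerpt}\n"
        "Does the excerpt satisfy the requirement? Answer 'compliant' or 'non-compliant'."
    )

def check_compliance(llm, requirement: str, excerpt: str) -> bool:
    answer = llm(build_prompt(requirement, excerpt)).strip().lower()
    # Treat anything that is not explicitly 'compliant' as a (safer) negative.
    return answer.startswith("compliant")

if __name__ == "__main__":
    # Dummy backend so the sketch runs without model weights or API keys.
    dummy_llm = lambda prompt: "non-compliant"
    print(check_compliance(dummy_llm, "Disclose total revenue.", "The weather was nice."))
```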
  • Publication
    From Open Set Recognition Towards Robust Multi-class Classification
    The challenges and risks of deploying deep neural networks (DNNs) in the open world are often overlooked and potentially result in severe outcomes. With our proposed informer approach, we leverage autoencoder-based outlier detectors with their sensitivity to epistemic uncertainty by ensembling multiple detectors each learning a different one-vs-rest setting. Our results clearly show informer’s superiority compared to DNN ensembles, kernel-based DNNs, and traditional multi-layer perceptrons (MLPs) in terms of robustness to outliers and dataset shift while maintaining a competitive classification performance. Finally, we show that informer can estimate the overall uncertainty within a prediction and, in contrast to any of the other baselines, break the uncertainty estimate down into aleatoric and epistemic uncertainty. This is an essential feature in many use cases, as the underlying reasons for the uncertainty are fundamentally different and can require different actions.
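    A minimal sketch of the one-vs-rest ensembling idea follows: one outlier detector per class, each scoring how well a sample fits "its" class, with samples that fit no class flagged as unknown. The detector interface and rejection rule are illustrative assumptions, not the paper's implementation.

```python
# One-vs-rest ensemble of per-class outlier detectors (illustrative sketch).
class DummyDetector:
    """Stands in for a trained autoencoder; real detectors return a reconstruction error."""
    def __init__(self, center):
        self.center = center
    def reconstruction_error(self, x):
        return abs(x - self.center)

def ensemble_predict(detectors, x, reject_threshold):
    errors = {label: det.reconstruction_error(x) for label, det in detectors.items()}
    best = min(errors, key=errors.get)
    # If even the best-fitting class reconstructs x poorly, flag it as unknown.
    return (None if errors[best] > reject_threshold else best), errors

detectors = {"cat": DummyDetector(0.0), "dog": DummyDetector(5.0)}
print(ensemble_predict(detectors, 0.3, reject_threshold=1.0))   # ('cat', ...)
print(ensemble_predict(detectors, 42.0, reject_threshold=1.0))  # (None, ...) -> outlier
```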
  • Publication
    Towards Generating Financial Reports from Tabular Data Using Transformers
    Financial reports are commonplace in the business world, but are long and tedious to produce. These reports mostly consist of tables with written sections describing these tables. Automating the process of creating these reports, even partially, has the potential to save a company time and resources that could be spent on more creative tasks. Some software exists which uses conditional statements and sentence templates to generate the written sections. This solution lacks creativity and innovation when compared to recent advancements in NLP and deep learning. We instead implement a transformer network to solve the task of generating this text. By generating matching pairs between tables and sentences found in financial documents, we created a dataset for our transformer. We were able to achieve promising results, with the final model reaching a BLEU score of 63.3. Generated sentences are natural, grammatically correct and mostly faithful to the information found in the tables.
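    Below is a minimal sketch of how table-sentence training pairs could be collected by matching the numbers mentioned in a sentence against the figures in a table row. This heuristic is an assumption in the spirit of the dataset construction described above; the paper's actual matching procedure may differ.

```python
# Pair sentences with table rows that contain all of the sentence's figures.
import re

def numbers_in(text: str) -> set:
    return set(re.findall(r"\d+(?:[.,]\d+)?", text))

def match_sentences_to_table(sentences, table_rows):
    pairs = []
    for sentence in sentences:
        sent_nums = numbers_in(sentence)
        for row in table_rows:
            row_nums = numbers_in(" ".join(str(cell) for cell in row))
            if sent_nums and sent_nums <= row_nums:  # every figure in the sentence appears in the row
                pairs.append((row, sentence))
    return pairs

print(match_sentences_to_table(
    ["Revenue rose to 12.5 million euros."],
    [["Revenue", "12.5", "11.0"], ["Costs", "4.2", "3.9"]],
))
```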
  • Publication
    KPI-BERT: A Joint Named Entity Recognition and Relation Extraction Model for Financial Reports
    (2022)
    Dilmaghani, Tim; Kliem, Bernd; Loitz, Rüdiger
    We present KPI-BERT, a system which employs novel methods of named entity recognition (NER) and relation extraction (RE) to extract and link key performance indicators (KPIs), e.g. "revenue" or "interest expenses", of companies from real-world German financial documents. Specifically, we introduce an end-to-end trainable architecture that is based on Bidirectional Encoder Representations from Transformers (BERT) combining a recurrent neural network (RNN) with conditional label masking to sequentially tag entities before it classifies their relations. Our model also introduces a learnable RNN-based pooling mechanism and incorporates domain expert knowledge by explicitly filtering impossible relations. We achieve a substantially higher prediction performance on a new practical dataset of German financial reports, outperforming several strong baselines including a competing state-of-the-art span-based entity tagging approach.
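    The following is a minimal sketch of the "filter impossible relations" idea: only entity-type pairs that are plausible a priori are passed on to the relation classifier. The type names and allowed pairs are illustrative placeholders, not the paper's actual schema.

```python
# Filter relation candidates by entity-type compatibility before classification.
ALLOWED_PAIRS = {("kpi", "value"), ("kpi", "time_period"), ("value", "unit")}

def candidate_relations(entities):
    """entities: list of (span_text, entity_type) tuples produced by the tagger."""
    candidates = []
    for head_text, head_type in entities:
        for tail_text, tail_type in entities:
            if (head_type, tail_type) in ALLOWED_PAIRS:
                candidates.append((head_text, tail_text))
    return candidates

print(candidate_relations([("revenue", "kpi"), ("12.5m", "value"), ("2021", "time_period")]))
```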
  • Publication
    Bounding open space risk with decoupling autoencoders in open set recognition
    One-vs-Rest (OVR) classification aims to distinguish a single class of interest (COI) from other classes. The concept of novelty detection and robustness to dataset shift becomes crucial in OVR when the scope of the rest class is extended from the classes observed during training to unseen and possibly unrelated classes, a setting referred to as open set recognition (OSR). In this work, we propose a novel architecture, namely decoupling autoencoder (DAE), which provides a proven upper bound on the open space risk and minimizes open space risk via a dedicated training routine. Our method is benchmarked within three different scenarios, each isolating different aspects of OSR, namely plain classification, outlier detection, and dataset shift. The results conclusively show that DAE achieves robust performance across all three tasks. This level of cross-task robustness is not observed for any of the seven potent baselines from the OSR, OVR, outlier detection, and ensembling domains which, apart from ATA (Lübbering et al., From imbalanced classification to supervised outlier detection problems: adversarially trained auto encoders. In: Artificial neural networks and machine learning-ICANN 2020, 2020), tend to fail on either one of the tasks. Similar to DAE, ATA is based on autoencoders and uses the reconstruction error to predict the inlierness of a sample. However, unlike DAE, it does not provide any uncertainty scores and therefore lacks rudimentary means of interpretation. Our adversarial robustness and local stability results further support DAE's superiority in the OSR setting, emphasizing its applicability in safety-critical systems.
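    To illustrate the general mechanism of scoring inlierness by reconstruction error and bounding the accepted region, here is a minimal sketch using a linear autoencoder (PCA) as a stand-in; DAE's actual architecture, its open space risk bound, and its training routine are not reproduced.

```python
# Reconstruction-error-based inlier scoring with a bounded acceptance region.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))                    # class-of-interest samples
X_test = np.vstack([rng.normal(size=(5, 10)),           # further inliers
                    rng.normal(5.0, 1.0, size=(5, 10))])  # shifted outliers

ae = PCA(n_components=3).fit(X_train)                   # linear "autoencoder"
def recon_error(X):
    return np.linalg.norm(X - ae.inverse_transform(ae.transform(X)), axis=1)

threshold = np.quantile(recon_error(X_train), 0.95)     # bound the accepted region
print(recon_error(X_test) <= threshold)                 # True = treated as class of interest
```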
  • Publication
    Towards automating Numerical Consistency Checks in Financial Reports
    (2022)
    Dilmaghani, Tim; Kliem, Bernd; Loitz, Rüdiger
    We introduce KPI-Check, a novel system that automatically identifies and cross-checks semantically equivalent key performance indicators (KPIs), e.g. "revenue" or "total costs", in real-world German financial reports. It combines a financial named entity and relation extraction module with a BERT-based filtering and text pair classification component to extract KPIs from unstructured sentences before linking them to synonymous occurrences in the balance sheet and profit & loss statement. The tool achieves a high matching performance of 73.00% micro F1 on a hold-out test set and is currently being deployed for a globally operating major auditing firm to assist the auditing procedure of financial statements.
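    As a minimal sketch of the numerical cross-check step: once a KPI extracted from a sentence has been linked to a line item in the financial statements, the two figures can be normalized and compared. The normalization rules below (German number format, "Mio." scaling) and the tolerance are illustrative assumptions, not the system's actual logic.

```python
# Compare a KPI figure from running text against its linked statement value.
def normalize(value: str) -> float:
    scale = 1e6 if "mio" in value.lower() else 1.0
    digits = value.lower().replace("mio.", "").strip()
    return float(digits.replace(".", "").replace(",", ".")) * scale  # "1.234,5" -> 1234.5

def is_consistent(sentence_value: str, statement_value: str, rel_tol: float = 0.005) -> bool:
    a, b = normalize(sentence_value), normalize(statement_value)
    return abs(a - b) <= rel_tol * max(abs(a), abs(b), 1.0)

print(is_consistent("12,5 Mio.", "12.500.000,00"))   # True
```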
  • Publication
    Solving Subset Sum Problems using Quantum Inspired Optimization Algorithms with Applications in Auditing and Financial Data Analysis
    (2022)
    Gerlach, Thore Thassilo; Kliem, Bernd
    Many applications in automated auditing and the analysis and consistency check of financial documents can be formulated in part as the subset sum problem: Given a set of numbers and a target sum, find the subset of numbers that sums up to the target. The problem is NP-hard and classical solving algorithms are therefore not practical to use in many real applications. We tackle the problem as a QUBO (quadratic unconstrained binary optimization) problem and show how gradient descent on Hopfield Networks reliably finds solutions for both artificial and real data. We outline how this algorithm can be applied by adiabatic quantum computers (quantum annealers) and specialized hardware (field programmable gate arrays) for digital annealing and run experiments on quantum annealing hardware.
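    The QUBO formulation mentioned above amounts to minimizing (Σᵢ aᵢxᵢ − T)² over binary xᵢ. Here is a minimal sketch that builds the corresponding QUBO matrix and solves a toy instance by brute force; the Hopfield-network gradient descent and annealer runs from the paper are not reproduced.

```python
# Subset sum as a QUBO: minimize (sum_i a_i x_i - T)^2 over binary x.
import itertools
import numpy as np

def subset_sum_qubo(numbers, target):
    a = np.asarray(numbers, dtype=float)
    Q = np.outer(a, a)                            # cross terms a_i * a_j
    np.fill_diagonal(Q, a * a - 2 * target * a)   # uses x_i^2 = x_i for binary x_i
    return Q

def brute_force(Q):
    n = Q.shape[0]
    return min((np.array(x) for x in itertools.product([0, 1], repeat=n)),
               key=lambda x: x @ Q @ x)

numbers, target = [3, 5, 8, 13], 16
x = brute_force(subset_sum_qubo(numbers, target))
print(x, np.dot(numbers, x))   # e.g. selects {3, 13}, which sums to 16
```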
  • Publication
    Towards Bundle Adjustment for Satellite Imaging via Quantum Machine Learning
    (2022-01-01)
    Gerlach, Thore Thassilo; Hugues, Romain; Barbaresco, F.
    We are given a set of images, all showing views of the same area at different points in time and from different viewpoints. The task is the alignment of all images such that relevant information, e.g., poses, changes, and terrain, can be extracted from the fused image. In this work, we focus on quantum methods for keypoint extraction and feature matching, due to the demanding computational complexity of these sub-tasks. To this end, k-medoids clustering, kernel density clustering, nearest neighbor search, and kernel methods are investigated and it is explained how these methods can be re-formulated for quantum annealers and gate-based quantum computers. Experimental results obtained on digital quantum emulation hardware, quantum annealers, and quantum gate computers show that classical systems still deliver superior results. However, the proposed methods are ready for the current and upcoming generations of quantum computing devices which have the potential to outperform classical systems in the near future.
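    For context, here is a minimal classical sketch of the feature-matching sub-task (mutual nearest neighbours between two sets of keypoint descriptors). The paper's contribution is re-formulating such steps as QUBOs for annealers and gate-based machines, which is not shown here.

```python
# Classical mutual-nearest-neighbour matching of keypoint descriptors.
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    a_to_b = d.argmin(axis=1)   # best match in B for each descriptor in A
    b_to_a = d.argmin(axis=0)   # best match in A for each descriptor in B
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

rng = np.random.default_rng(1)
A, B = rng.normal(size=(5, 8)), rng.normal(size=(6, 8))
print(mutual_nn_matches(A, B))
```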
  • Publication
    An Optimization for Convolutional Network Layers Using the Viola-Jones Framework and Ternary Weight Networks
    Neural networks have the potential to be extremely powerful for computer vision related tasks, but can be computationally expensive. Classical methods, by comparison, tend to be relatively lightweight, albeit not as powerful. In this paper, we propose a method of combining parts from a classical system, called the Viola-Jones Object Detection Framework, with a modern ternary neural network to improve the efficiency of a convolutional neural net by replacing convolutional filters with a set of custom ones inspired by the framework. This reduces the number of operations needed for computing feature values with negligible effects on overall accuracy, allowing for a more optimized network.
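    As a rough illustration of what a Viola-Jones-inspired ternary filter looks like, the sketch below builds a Haar-like two-rectangle kernel with weights in {−1, 0, +1} and applies it as a convolution. The specific kernel and its use here are illustrative, not the paper's exact construction.

```python
# A Haar-like, ternary {-1, 0, +1} filter applied as a convolution kernel.
import numpy as np
from scipy.signal import correlate2d

# 4x4 two-rectangle (edge) feature: left half subtracts, right half adds.
haar_edge = np.hstack([-np.ones((4, 2)), np.ones((4, 2))]).astype(np.int8)

image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # vertical edge in the middle of the image
response = correlate2d(image, haar_edge, mode="valid")
print(haar_edge)
print(response.max())                    # strongest response where the filter straddles the edge
```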
  • Publication
    Decoupling Autoencoders for Robust One-vs-Rest Classification
    One-vs-Rest (OVR) classification aims to distinguish a single class of interest from other classes. The concept of novelty detection and robustness to dataset shift becomes crucial in OVR when the scope of the rest class extends from the classes observed during training to unseen and possibly unrelated classes. In this work, we propose a novel architecture, namely Decoupling Autoencoder (DAE), to tackle the common issue of robustness w.r.t. out-of-distribution samples which is prevalent in classifiers such as multi-layer perceptrons (MLP) and ensemble architectures. Experiments on plain classification, outlier detection, and dataset shift tasks show DAE to achieve robust performance across these tasks compared to the baselines, which tend to fail completely when exposed to dataset shift. While DAE and the baselines yield rather uncalibrated predictions on the outlier detection and dataset shift task, we found that DAE calibration is more stable across all tasks. Therefore, calibration measures applied to the classification task could also improve the calibration of the outlier detection and dataset shift scenarios for DAE.
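    Since the abstract turns on how well predictions are calibrated, here is a minimal sketch of one common way to quantify calibration (expected calibration error over confidence bins); the paper's exact calibration measures and any recalibration step are not reproduced.

```python
# Expected calibration error: gap between confidence and accuracy, averaged over bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # bin weight = fraction of samples in the bin
    return ece

print(expected_calibration_error([0.9, 0.8, 0.55, 0.95], [1, 1, 0, 0]))
```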