Fine-Tuning Large Language Models for Compliance Checks

Type: Conference paper
Authors: Bell Felix de Oliveira, Thiago; Leonhard, David; Bashir, Ali Hamza; Dilmaghani, Tim; Khaled, Mohamed; Warning, Ulrich; Loitz, Rüdiger; Halscheidt, Sandra; Birr, Jana Lilian; Berger, Armin; Sifa, Rafet; Berghaus, David
Publication year: 2024
Deposited: 2025-01-29
Rights: Under copyright
Language: English
Keywords: compliance check; large language models; audit; recommender systems
DOI: 10.1109/BigData62323.2024.10825159
Repository DOI: 10.24406/publica-4177 (https://doi.org/10.24406/publica-4177)
Handle: https://publica.fraunhofer.de/handle/publica/483087

Abstract: The auditing of financial documents, traditionally a labor-intensive task, is a promising field of application for Artificial Intelligence. Recommendation systems are capable of suggesting the most relevant passages from financial reports that meet the legal requirements of accounting standards. However, testing whether the compliance requirements are satisfied is a non-trivial task. In this work, we tackle this problem from two directions. Our first approach leverages Large Language Models, which we fine-tune specifically for compliance checks. Our results show an improvement in performance over generic baseline LLMs. A disadvantage of LLMs is their high inference cost. For this reason, we explore a second approach in which we use smaller models that come with reduced running costs. Despite their smaller size, these models also show promising predictive performance.
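
Note: the record above contains only the abstract; the paper's code and data are not part of it. Purely as an illustration of the second approach mentioned in the abstract (a smaller fine-tuned model for compliance checks), the sketch below fine-tunes a compact encoder for binary compliance classification with Hugging Face Transformers. The model name, the CSV files, and the "text"/"label" columns are placeholders chosen for this sketch, not the authors' actual setup.

    # Minimal sketch (assumptions, not the paper's code): fine-tune a small encoder
    # to predict whether a report passage satisfies a compliance requirement (0/1).
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "distilbert-base-uncased"  # placeholder small model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Hypothetical CSVs with columns "text" (passage plus requirement) and "label" (0/1).
    dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

    def tokenize(batch):
        # Truncate/pad each example to the model's maximum input length.
        return tokenizer(batch["text"], truncation=True, padding="max_length")

    dataset = dataset.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="compliance-check-model",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["test"],
    )
    trainer.train()
    print(trainer.evaluate())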