Effectiveness of Whisper's Fine-Tuning for Domain-Specific Use Cases in the Industry
Document type: conference paper

Authors: Pawlowicz, Daniel; Weber, Jule; Dukino, Claudia
Date: 2025-05-16 (published 2025)
License: CC BY-NC-ND 4.0
DOI: https://doi.org/10.24406/publica-4666
Handle: https://publica.fraunhofer.de/handle/publica/487606
Further identifiers: 10.5220/0013378100003890 (DOI); 2-s2.0-105001968969 (Scopus)
Language: en
Keywords: Whisper; Fine-Tuning; Domain; Speech-to-Text Transcription

Abstract: The integration of Speech-to-Text (STT) technology has the potential to enhance the efficiency of industrial workflows. However, standard speech models perform suboptimally in domain-specific use cases. To gain user trust, accurate transcription is essential, and it can be achieved by fine-tuning the model to the specific domain. OpenAI's Whisper was selected as the initial model and subsequently fine-tuned with domain-specific real-world recordings. The results of the study show that the fine-tuned model outperforms the initial model in transcribing technical jargon. The fine-tuned model achieved a validation loss of 1.75 and a Word Error Rate (WER) of 1. In addition to improving accuracy, this approach addresses the challenges of noise and speaker variability that are common in real-world industrial environments. The study demonstrates the efficacy of fine-tuning the Whisper model on new vocabulary containing technical jargon, underscoring the value of model adaptation for domain-specific use cases.
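The abstract describes the approach only at a high level. For orientation, the snippet below is a minimal sketch of how domain-specific Whisper fine-tuning and WER evaluation can be set up with the Hugging Face Transformers, Datasets, and Evaluate libraries; it is not the authors' pipeline. The checkpoint (openai/whisper-small), the data directory (domain_recordings/ with train/ and test/ subfolders and a metadata.csv containing a "transcription" column), the output path, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of domain-specific Whisper fine-tuning with Hugging Face
# Transformers/Datasets/Evaluate. Not the authors' pipeline: checkpoint,
# data paths, and hyperparameters are illustrative assumptions.
from dataclasses import dataclass
from typing import Any, Dict, List

import torch
import evaluate
from datasets import load_dataset, Audio
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

MODEL_NAME = "openai/whisper-small"  # assumed base checkpoint
processor = WhisperProcessor.from_pretrained(MODEL_NAME, language="english", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME)

# Hypothetical domain corpus: an "audiofolder" with train/ and test/ subfolders
# and a metadata.csv mapping each recording to a reference transcript.
dataset = load_dataset("audiofolder", data_dir="domain_recordings/")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # Log-Mel input features for the encoder; transcript token ids as labels.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

dataset = dataset.map(prepare, remove_columns=dataset["train"].column_names)

@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    """Pads audio features and label ids separately; masks label padding in the loss."""
    processor: Any

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        input_features = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]  # BOS is re-added automatically during generation
        batch["labels"] = labels
        return batch

wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    # WER = (substitutions + deletions + insertions) / reference word count
    label_ids = pred.label_ids
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred.predictions, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-domain-finetuned",  # assumed output path
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=2000,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=DataCollatorSpeechSeq2SeqWithPadding(processor),
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())  # reports validation loss and WER on the held-out split
```

With this setup, trainer.evaluate() returns the validation loss alongside the WER computed by compute_metrics, i.e. the fraction of word-level substitutions, deletions, and insertions relative to the reference word count; these are the two quantities reported in the abstract, though the values there come from the authors' own data and configuration.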