2025
Conference Paper
Title
Effectiveness of Whisper's Fine-Tuning for Domain-Specific Use Cases in the Industry
Abstract
The integration of Speech-to-Text (STT) technology has the potential to improve the efficiency of industrial workflows. However, standard speech models perform suboptimally on domain-specific use cases. To gain user trust, transcriptions must be accurate, which can be achieved by fine-tuning the model to the specific domain. OpenAI’s Whisper was selected as the base model and subsequently fine-tuned on domain-specific real-world recordings. The results of the study show that the fine-tuned model outperforms the initial model in transcribing technical jargon, achieving a validation loss of 1.75 and a Word Error Rate (WER) of 1. Beyond improving accuracy, this approach addresses the noisy conditions and speaker variability that are common in real-world industrial settings. The present study demonstrates the efficacy of fine-tuning the Whisper model to new vocabulary containing technical jargon, underscoring the value of model adaptation for domain-specific use cases.
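The abstract reports transcription quality as a Word Error Rate (WER). As a hedged illustration of how such a score is typically computed, the sketch below uses the Hugging Face `evaluate` library; the library choice and the example sentences are assumptions for illustration, not material from the paper.

```python
# Minimal sketch of scoring transcriptions with Word Error Rate (WER) using the
# Hugging Face `evaluate` library (backed by jiwer). The reference/prediction
# sentences are illustrative stand-ins for domain-specific transcripts, not data
# from the paper.
import evaluate

wer_metric = evaluate.load("wer")

references = ["tighten the flange bolts to forty newton metres"]
predictions = ["tighten the flange bolts to fourteen newton metres"]

# WER = (substitutions + insertions + deletions) / number of reference words
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```

In practice, the same metric can be plugged into a fine-tuning loop (e.g. as a `compute_metrics` callback in a Hugging Face `Seq2SeqTrainer`) to track transcription accuracy on a held-out validation split during training.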
Open Access
Rights
CC BY-NC-ND 4.0: Creative Commons Attribution-NonCommercial-NoDerivatives
Language
English