
Multi-Staged Cross-Lingual Acoustic Model Adaption for Robust Speech Recognition in Real-World Applications - A Case Study on German Oral History Interviews

Authors: Gref, Michael; Walter, Oliver; Schmidt, Christoph Andreas; Behnke, Sven; Köhler, Joachim

Full text: urn:nbn:de:0011-n-5904526 (256 KB PDF)
MD5 fingerprint: f3e826a534b9d6fed51e3a05b52ea6b9
License: CC BY-NC
Created: June 4, 2020

Calzolari, N.; European Language Resources Association -ELRA-, Paris:
12th Language Resources and Evaluation Conference, LREC 2020. Proceedings. Online resource: Marseille, May 11-16, 2020
Paris: ELRA, 2020
Language Resources and Evaluation Conference (LREC) <12, 2020, Marseille>
Funding: Bundesministerium für Bildung und Forschung BMBF (Germany)
Forschungsinfrastrukturen für die Geistes- und qualitativen Sozialwissenschaften; 01UG1511B; KA3
Kölner Zentrum für Analyse und Archivierung audiovisueller Daten
Conference paper, electronic publication
Fraunhofer IAIS
acoustic modeling; acoustic model adaption; cross-lingual; digital humanities; oral history; speech recognition; transfer learning; under-resourced speech recognition

While recent automatic speech recognition systems achieve remarkable performance when large amounts of adequate, high-quality annotated speech data are used for training, the same systems often achieve only unsatisfactory results on tasks in domains that deviate greatly from the conditions represented by the training data. For many real-world applications, there is a lack of sufficient data that can be used directly to train robust speech recognition systems. To address this issue, we propose and investigate an approach that performs a robust acoustic model adaptation to a target domain in a cross-lingual, multi-staged manner. Our approach enables the exploitation of large-scale training data from other domains, in both the same and other languages. We evaluate our approach on the challenging task of German oral history interviews, where we achieve a relative word error rate reduction of more than 30% compared to a model trained from scratch only on the target domain, and of 6-7% compared to a model trained robustly on 1000 hours of same-language out-of-domain training data.
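The staged-adaptation idea from the abstract can be illustrated with a deliberately simplified sketch: a model is first trained on a large "cross-lingual" corpus, then adapted on same-language out-of-domain data, and finally fine-tuned on a small in-domain set, each stage initializing from the previous weights. This is only an analogy using a toy logistic-regression "model" and synthetic data, not the paper's actual acoustic-model pipeline; all function names and data are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def train(X, y, w=None, lr=0.1, steps=200):
    """Toy stand-in for acoustic-model training (logistic regression).
    Passing `w` continues training from an existing model, which is
    how each adaptation stage warm-starts from the previous one."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # gradient step on log-loss
    return w

def log_loss(X, y, w):
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
w_true = rng.normal(size=8)  # shared structure across all "domains"

# Stage 1 data: large cross-lingual corpus (synthetic stand-in)
X_src = rng.normal(size=(2000, 8))
y_src = (X_src @ w_true > 0).astype(float)
# Stage 2 data: same-language, out-of-domain (slightly shifted)
X_ood = rng.normal(size=(400, 8)) + 0.2
y_ood = (X_ood @ w_true > 0).astype(float)
# Stage 3 data: tiny in-domain "oral history" set
X_tgt = rng.normal(size=(50, 8)) + 0.4
y_tgt = (X_tgt @ w_true > 0).astype(float)

w = train(X_src, y_src)                        # stage 1: source training
w = train(X_ood, y_ood, w=w)                   # stage 2: out-of-domain adaptation
w_staged = train(X_tgt, y_tgt, w=w, steps=50)  # stage 3: in-domain fine-tuning
w_scratch = train(X_tgt, y_tgt, steps=50)      # baseline: target data only
```

With this construction, the staged model starts stage 3 already close to a good solution, so its target-domain loss ends up below that of the from-scratch baseline — mirroring, in spirit, the abstract's comparison against a model trained only on the target domain.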