Authors: Gref, Michael; Schmidt, Christoph Andreas; Behnke, Sven; Köhler, Joachim
Record date: 2022-03-14
Year of publication: 2019
Handle: https://publica.fraunhofer.de/handle/publica/405113
DOI: 10.1109/ICME.2019.00142
Abstract: In automatic speech recognition, often little training data is available for specific challenging tasks, but training of state-of-the-art automatic speech recognition systems requires large amounts of annotated speech. To address this issue, we propose a two-staged approach to acoustic modeling that combines noise and reverberation data augmentation with transfer learning to robustly address challenges such as difficult acoustic recording conditions, spontaneous speech, and speech of elderly people. We evaluate our approach using the example of German oral history interviews, where an average relative word error rate reduction of 19.3% is achieved.
Language: en
Keywords: acoustic signal processing; learning (artificial intelligence); speech recognition; spontaneous speech; German oral history interview; acoustic modeling adaption; robust speech recognition; annotated speech; reverberation data augmentation; automatic speech recognition system; acoustic recording condition; elderly people speech; word error rate; transfer learning; training; training data; history; data model; reverberation; adaptation model; domain adaption; multi-condition training; data augmentation; oral history
Classification: 005; 006; 629
Title: Two-Staged Acoustic Modeling Adaption for Robust Speech Recognition by the Example of German Oral History Interviews
Type: conference paper
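
The abstract describes combining noise and reverberation data augmentation with transfer learning. As an illustration only, the following is a minimal Python sketch of the augmentation idea (additive noise at a target SNR plus convolution with a room impulse response); it is not the authors' implementation, and the file names and SNR value are hypothetical.

import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf

def add_noise(speech, noise, snr_db):
    """Mix noise into speech at the given signal-to-noise ratio (dB)."""
    # Tile or truncate the noise so it matches the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def add_reverb(speech, rir):
    """Simulate reverberation by convolving speech with a room impulse response."""
    rir = rir / (np.max(np.abs(rir)) + 1e-12)  # normalize the RIR
    return fftconvolve(speech, rir, mode="full")[: len(speech)]

# Hypothetical input files; any mono recordings at a common sample rate would do.
speech, sr = sf.read("clean_utterance.wav")
noise, _ = sf.read("background_noise.wav")
rir, _ = sf.read("room_impulse_response.wav")

augmented = add_noise(add_reverb(speech, rir), noise, snr_db=10)
sf.write("augmented_utterance.wav", augmented, sr)

In a two-staged setup as summarized above, augmented data of this kind would be used for multi-condition training of the acoustic model, followed by transfer learning (fine-tuning) on the in-domain oral history data.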