Fraunhofer-Gesellschaft
2018
Conference Paper
Title

Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask

Abstract
Singing voice separation based on deep learning relies on time-frequency masking. In many cases the masking process is not a learnable function, or it is not encapsulated in the deep learning optimization. Consequently, most existing methods rely on a post-processing step using generalized Wiener filtering. This work proposes a method that learns and optimizes (during training) a source-dependent mask and does not need the aforementioned post-processing step. We introduce a recurrent inference algorithm, a sparse transformation step to improve the mask generation process, and a learned denoising filter. Results show an increase of 0.49 dB in signal-to-distortion ratio and 0.30 dB in signal-to-interference ratio, compared to previous state-of-the-art approaches for monaural singing voice separation.
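The core idea in the abstract is that the time-frequency mask is applied inside the network via a skip-filtering connection, rather than as a separate Wiener-filtering step. A minimal sketch of that mechanism, assuming NumPy and a toy stand-in for the network's mask logits (function names and shapes here are illustrative, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    """Squash logits into [0, 1] so they can act as a mask."""
    return 1.0 / (1.0 + np.exp(-x))

def skip_filtering(mixture_mag, mask_logits):
    """Skip-filtering connection: the mixture magnitude spectrogram is
    routed around the network and multiplied element-wise by the
    predicted source-dependent mask, so masking is a learnable part of
    the pipeline instead of a post-processing step."""
    mask = sigmoid(mask_logits)   # source-dependent mask in [0, 1]
    return mask * mixture_mag     # estimated singing-voice magnitude

# Toy example: 4 time frames x 3 frequency bins.
rng = np.random.default_rng(0)
mix = np.abs(rng.standard_normal((4, 3)))   # mixture magnitudes (nonnegative)
logits = rng.standard_normal((4, 3))        # stand-in for a network's output
voice_est = skip_filtering(mix, logits)
```

Because the mask lies in [0, 1] and the mixture magnitudes are nonnegative, the estimate never exceeds the mixture in any time-frequency bin; in the paper this multiplicative output is what the training loss is computed against.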
Author(s)
Mimilakis, S.I.
Drossos, K.
Santos, J.F.
Schuller, G.
Virtanen, T.
Bengio, Y.
Published in
IEEE International Conference on Acoustics, Speech, and Signal Processing 2018. Proceedings
Conference
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2018
DOI
10.1109/ICASSP.2018.8461822
Language
English
Fraunhofer-Institut für Digitale Medientechnologie IDMT