Fraunhofer-Gesellschaft
2018
Conference Paper
Title

Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask

Abstract
Singing voice separation based on deep learning relies on the usage of time-frequency masking. In many cases the masking process is not a learnable function or is not encapsulated into the deep learning optimization. Consequently, most of the existing methods rely on a post-processing step using the generalized Wiener filtering. This work proposes a method that learns and optimizes (during training) a source-dependent mask and does not need the aforementioned post-processing step. We introduce a recurrent inference algorithm, a sparse transformation step to improve the mask generation process, and a learned denoising filter. Obtained results show an increase of 0.49 dB for the signal-to-distortion ratio and 0.30 dB for the signal-to-interference ratio, compared to previous state-of-the-art approaches for monaural singing voice separation.
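The generalized Wiener filtering mentioned above can be illustrated with a minimal sketch: given magnitude-spectrogram estimates of the singing voice and of the accompaniment, a soft mask in [0, 1] is formed from their relative energy and applied to the mixture spectrogram. This is an illustrative example with toy values, not code from the paper; the function name and the additivity of the magnitudes are simplifying assumptions.

```python
import numpy as np

def wiener_mask(source_mag, residual_mag, alpha=2.0):
    """Generalized Wiener mask from magnitude estimates (illustrative).

    alpha=2.0 corresponds to the classic power-spectrogram Wiener filter;
    eps avoids division by zero in silent bins.
    """
    eps = 1e-8
    num = source_mag ** alpha
    return num / (num + residual_mag ** alpha + eps)

# Toy magnitude spectrograms (frequency bins x time frames), illustrative values
voice = np.array([[3.0, 0.5], [1.0, 2.0]])
accomp = np.array([[1.0, 1.5], [2.0, 0.5]])
mixture = voice + accomp  # simplifying assumption: magnitudes add

mask = wiener_mask(voice, accomp)  # soft mask, values in [0, 1]
voice_est = mask * mixture         # masked mixture magnitude
```

The proposed method replaces this fixed, non-learnable post-processing with a source-dependent mask that is optimized inside the network during training.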
Author(s)
Mimilakis, S.I.  
Drossos, K.
Santos, J.F.
Virtanen, T.
Bengio, Y.
Schuller, G.  
Mainwork
IEEE International Conference on Acoustics, Speech, and Signal Processing 2018. Proceedings  
Conference
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2018  
Open Access
DOI
10.1109/ICASSP.2018.8461822
Language
English
Institute
Fraunhofer-Institut für Digitale Medientechnologie IDMT
Keyword(s)
  • automatic music analysis