2018
Conference Paper
Title

Classification vs. Regression in Supervised Learning for Single Channel Speaker Count Estimation

Abstract
The task of estimating the maximum number of concurrent speakers from single channel mixtures is important for various audio-based applications, such as blind source separation, speaker diarisation, audio surveillance or auditory scene classification. Building upon powerful machine learning methodology, we develop a Deep Neural Network (DNN) that estimates a speaker count. While DNNs efficiently map input representations to output targets, it remains unclear how to best handle the network output to infer integer source count estimates, as a discrete count estimate can be tackled either as a regression or as a classification problem. In this paper, we investigate this important design decision and also address complementary parameter choices such as the input representation. We evaluate a state-of-the-art DNN audio model based on a Bi-directional Long Short-Term Memory network architecture for speaker count estimation. Through experimental evaluations aimed at identifying the best overall strategy for the task, we show results for five-second speech segments in mixtures of up to ten speakers.
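The following minimal sketch illustrates the design decision discussed in the abstract: a BLSTM-based network whose output is handled either as classification (one class per speaker count) or as regression (a single continuous count). It is not the authors' exact model; the layer sizes, input features, time pooling, and the framing of a five-second segment are assumptions for illustration only.

```python
# Hypothetical sketch of a BLSTM speaker-count estimator with two output modes.
# All hyperparameters (feature size, hidden units, max count of 10) are assumed.
import torch
import torch.nn as nn

class SpeakerCountBLSTM(nn.Module):
    def __init__(self, n_features=201, hidden=60, max_speakers=10, mode="classification"):
        super().__init__()
        self.mode = mode
        self.blstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                             num_layers=2, batch_first=True, bidirectional=True)
        # Classification: logits over counts 0..max_speakers; regression: one scalar.
        out_dim = max_speakers + 1 if mode == "classification" else 1
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):
        # x: (batch, time, features), e.g. a magnitude spectrogram of the mixture
        h, _ = self.blstm(x)
        h = h.mean(dim=1)              # pool over time to one embedding per segment
        y = self.head(h)
        if self.mode == "classification":
            return y                   # class logits, trained with cross-entropy
        return y.squeeze(-1)           # continuous count, trained with e.g. MSE

# Usage: a regression-mode estimate is rounded to an integer count at test time.
model = SpeakerCountBLSTM(mode="regression")
x = torch.randn(4, 250, 201)           # 4 segments, assumed ~5 s of frames each
count_estimate = model(x).round().clamp(min=0)
```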
Author(s)
Stöter, F.-R.
Chakrabarty, S.
Edler, B.
Habets, E.A.P.
Mainwork
IEEE International Conference on Acoustics, Speech, and Signal Processing 2018. Proceedings  
Conference
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2018  
Open Access
DOI
10.1109/ICASSP.2018.8462159
Language
English
Fraunhofer-Institut für Integrierte Schaltungen IIS  