Fraunhofer-Gesellschaft
2012
Conference Paper
Title

Automatic and Manual Annotation of Time-Varying Perceptual Properties in Movie Soundtracks

Abstract
In this paper, we present an automated tool, developed as part of the SyncGlobal project, for time-continuous prediction of loudness and brightness in movie soundtracks. We also present the novel AnnotationTool, with which manual time-continuous annotations can be performed. We rate well-known audio features on their ability to represent two perceptual attributes, loudness and brightness. A regression model is trained on the manual annotations and the acoustic features in order to model both attributes. Five different regression methods are implemented, and their success in tracking the two percepts is studied. Using Support Vector Regression (SVR), a coefficient of determination (R²) of 0.91 is achieved for loudness and 0.35 for brightness, outperforming Friberg et al. [1].
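The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the feature matrix and annotation curve below are synthetic stand-ins, and the hyperparameters are assumptions. It shows the core idea of fitting SVR to frame-wise acoustic features against a continuous annotation and scoring with R².

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical frame-wise acoustic features (e.g. RMS energy, spectral
# centroid) and a synthetic manual loudness annotation curve.
n_frames = 500
features = rng.normal(size=(n_frames, 2))
annotation = 0.8 * features[:, 0] + 0.1 * rng.normal(size=n_frames)

# Hold out part of the annotated material for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    features, annotation, test_size=0.25, random_state=0)

# Support Vector Regression, the best-performing of the five methods
# compared in the paper (kernel and C are illustrative choices).
model = SVR(kernel="rbf", C=1.0)
model.fit(X_train, y_train)

# Coefficient of determination (R²) on the held-out frames.
r2 = r2_score(y_test, model.predict(X_test))
print(f"R² = {r2:.2f}")
```

The same loop would be repeated per regression method and per attribute (loudness, brightness) to produce the comparison reported in the paper.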
Author(s)
Dhandhania, Vedant
Abeßer, Jakob  
Kruspe, Anna
Großmann, Holger  
Mainwork
9th Sound and Music Computing Conference, SMC 2012. Proceedings  
Conference
Sound and Music Computing Conference (SMC) 2012  
Language
English
Institute
Fraunhofer-Institut für Digitale Medientechnologie IDMT
Keyword(s)
  • music classification
  • music annotation