Dhandhania, Vedant; Abeßer, Jakob; Kruspe, Anna; Großmann, Holger
Published: 2012 · Deposited: 2022-03-12
https://publica.fraunhofer.de/handle/publica/379559

Abstract: In this paper, we present an automated tool, developed as part of the SyncGlobal project, for time-continuous prediction of loudness and brightness in soundtracks. We also present the novel AnnotationTool, with which manual time-continuous annotations can be performed. We evaluate well-known audio features as representations of two perceptual attributes, loudness and brightness. A regression model is trained on the manual annotations and the acoustic features in order to model both attributes. Five different regression methods are implemented and their success in tracking the two percepts is studied. Using Support Vector Regression (SVR), a coefficient of determination (R²) of 0.91 is achieved for loudness and 0.35 for brightness, outperforming Friberg et al. [1].

Language: en
Keywords: music classification; music annotation
Title: Automatic and Manual Annotation of Time-Varying Perceptual Properties in Movie Soundtracks
Type: conference paper
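As a minimal sketch of the evaluation pipeline the abstract describes (training a regressor on frame-wise acoustic features against manual annotations and scoring it with R²), the following uses scikit-learn's SVR. The features, target, and split here are synthetic placeholders, not the paper's actual data or feature set.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 analysis frames x 4 acoustic features
# (imagine RMS energy, spectral centroid, etc.).
X = rng.normal(size=(500, 4))

# Synthetic target correlated with the features, standing in for
# manual loudness annotations aligned to the feature frames.
y = X @ np.array([0.8, 0.3, -0.2, 0.1]) + 0.1 * rng.normal(size=500)

# Chronological split: fit on the first 80% of frames,
# evaluate on the remaining held-out frames.
split = int(0.8 * len(y))
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X[:split], y[:split])

# Coefficient of determination (R^2) on the held-out frames,
# the same metric the paper reports per attribute.
r2 = r2_score(y[split:], model.predict(X[split:]))
print(f"R^2 on held-out frames: {r2:.2f}")
```

The same loop could be repeated for each of the five regression methods and each attribute (loudness, brightness) to reproduce the kind of comparison the paper reports.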