Multi-input Architecture and Disentangled Representation Learning for Multi-dimensional Modeling of Music Similarity
In the context of music information retrieval, similarity-based methods are useful for a variety of tasks that follow a query-by-example paradigm. Music, however, naturally decomposes into a set of semantically meaningful factors of variation. Current representation learning strategies pursue the disentanglement of such factors from deep representations, resulting in highly interpretable models. This makes it possible to model the perception of music similarity, which is highly subjective and multi-dimensional. While prior work has focused on metadata-driven similarity, we propose to directly model the human notion of multi-dimensional music similarity. To this end, we introduce a multi-input deep neural network architecture that simultaneously processes mel-spectrogram, CENS chromagram, and tempogram representations in order to extract informative features for six disentangled musical dimensions: genre, mood, instrument, era, tempo, and key. We evaluated the proposed music similarity approach on a triplet prediction task and found that the multi-input architecture outperforms a state-of-the-art method. Furthermore, we present a novel multi-dimensional analysis to evaluate the influence of each disentangled dimension on the perception of music similarity.
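To make the disentanglement idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): a flat embedding is partitioned into named sub-spaces, one per musical dimension, and a triplet comparison is carried out within a single sub-space. The dimension names come from the abstract; the sub-space sizes and distance choice are illustrative assumptions.

```python
import numpy as np

# Illustrative sub-space sizes per disentangled dimension (assumed, not from the paper).
DIMENSIONS = {"genre": 32, "mood": 32, "instrument": 32,
              "era": 16, "tempo": 8, "key": 8}

def split_embedding(z):
    """Slice a flat embedding into per-dimension sub-embeddings."""
    parts, start = {}, 0
    for name, size in DIMENSIONS.items():
        parts[name] = z[start:start + size]
        start += size
    return parts

def triplet_agreement(anchor, positive, negative, dim):
    """True if the anchor is closer to the positive than to the negative
    within the chosen disentangled sub-space (Euclidean distance)."""
    a, p, n = (split_embedding(x)[dim] for x in (anchor, positive, negative))
    return np.linalg.norm(a - p) < np.linalg.norm(a - n)

# Stand-in embeddings; in the paper these would come from the multi-input network.
rng = np.random.default_rng(0)
total = sum(DIMENSIONS.values())
z_anchor, z_pos, z_neg = (rng.normal(size=total) for _ in range(3))
print(triplet_agreement(z_anchor, z_pos, z_neg, "tempo"))
```

Evaluating triplets per sub-space in this way is what allows the influence of each dimension (e.g. tempo vs. genre) on perceived similarity to be analyzed separately.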