Abeßer, J.; Nowakowski, M.; Weiß, C. (2021)
Towards Deep Learning Strategies for Transcribing Electroacoustic Music
Conference paper
DOI: 10.1007/978-3-030-70210-6_3
Handle: https://publica.fraunhofer.de/handle/publica/411658
Record dates: 2022-03-14; 2024-04-15; 2022-03-14
Language: en
Keywords: Automatic Music Analysis; 621006

Abstract: Electroacoustic music is experienced primarily through auditory perception, as it is not usually based on a prescriptive score. For the analysis of such pieces, transcriptions are sometimes created to illustrate events and processes graphically in a readily comprehensible way. These are usually based on the spectrogram of the recording. Although the manual generation of transcriptions is often time-consuming, they provide a useful starting point for anyone interested in a work. Deep-learning algorithms that learn to recognize characteristic spectral patterns through supervised learning are a promising technology for automating this task. This paper investigates the labeling of sound objects in electroacoustic music recordings. We test several neural-network architectures that enable the classification of sound objects, using musicological and signal-processing methods. We also outline future perspectives on how our results can be improved and applied to a new gradient-based visualization approach.
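
The abstract notes that such transcriptions are usually based on the spectrogram of the recording, which is also the input representation a supervised deep-learning classifier would consume. As a minimal sketch of that representation (the function name, window, and parameters here are illustrative assumptions, not taken from the paper), a magnitude spectrogram can be computed with a short-time Fourier transform:

```python
import numpy as np

def stft_magnitude(signal, frame_len=1024, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform.

    Returns an array of shape (n_frames, frame_len // 2 + 1); each row is the
    magnitude spectrum of one overlapping frame of the input signal.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz sine at 16 kHz: the spectral energy should
# concentrate near FFT bin 440 * 1024 / 16000 ≈ 28.
sr = 16000
t = np.arange(sr) / sr
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))
peak_bin = int(np.argmax(spec.mean(axis=0)))
```

Patches of such a spectrogram are what a convolutional network would be trained on to recognize characteristic spectral patterns of sound objects.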