Authors: Schelenz, Lukas; Rajanna, Shobha; Gosalci, Denis; Heublein, Lucas; Pirkl, Jonas; Ott, Jonathan; Ott, Felix; Mutschler, Christopher; Feigl, Tobias
Date: 2025-07-16
Handle: https://publica.fraunhofer.de/handle/publica/489628
DOI: 10.1109/PLANS61210.2025.11028353
Scopus ID: 2-s2.0-105009249712

Abstract: Forecasting within signal processing pipelines is crucial for mitigating delays, particularly in predicting the dynamic movements of objects such as NBA players. This task poses significant challenges due to the inherently interactive and unpredictable nature of sports, where abrupt changes in velocity and direction are prevalent. Traditional approaches, including (S)ARIMA(X), Kalman filters (KF), and particle filters (PF), often struggle to model the non-linear dynamics present in such scenarios. Machine learning (ML) methods, such as long short-term memory (LSTM) networks, graph neural networks (GNNs), and Transformers, offer greater flexibility and accuracy but frequently fail to explicitly capture the interplay between temporal dependencies and contextual interactions, which are critical in chaotic sports environments.

In this paper, we evaluate these models and assess their strengths and weaknesses. Experimental results reveal key performance trade-offs across input history length, generalizability, and the ability to incorporate contextual information. ML-based methods demonstrated substantial improvements over linear models across forecast horizons of up to 2 s. Among the tested architectures, our hybrid LSTM augmented with contextual information achieved the lowest final displacement error (FDE) of 1.51 m, outperforming the temporal convolutional neural network (TCNN), graph attention network (GAT), and Transformer baselines, while also requiring less data and training time than GAT and Transformers.
Our findings indicate that no single architecture excels across all metrics, emphasizing the need for task-specific considerations in trajectory prediction for fast-paced, dynamic environments such as NBA gameplay.

Language: en
Keywords: Deep Learning; Graph Attention Network; Human Behaviour; Long Short-term Memory; Machine Learning; Recurrent Network; Self-attention; Sport Analytics; Time-series; Trajectory Forecasting; Transformer
Title: Exploitation of Hidden Context in Dynamic Movement Forecasting: A Neural Network Journey from Recurrent to Graph Neural Networks and General Purpose Transformers
Type: conference paper
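The abstract reports results in terms of final displacement error (FDE), the standard trajectory-forecasting metric measuring the Euclidean distance between the predicted and ground-truth positions at the final forecast step, averaged over trajectories. A minimal sketch of how FDE is typically computed (the function name and array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def final_displacement_error(pred, truth):
    """Mean Euclidean distance between predicted and ground-truth
    final positions, averaged over a batch of trajectories.

    pred, truth: arrays of shape (batch, timesteps, 2), e.g. court
    coordinates in metres.
    """
    # Only the final forecast step is compared (that is what makes it FDE,
    # as opposed to ADE, which averages over all forecast steps).
    diff = pred[:, -1, :] - truth[:, -1, :]
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# Toy example: two predicted trajectories that end at the origin, while the
# ground truth ends at (3, 4), so each final error is a 3-4-5 distance of 5 m.
pred = np.zeros((2, 5, 2))
truth = np.zeros((2, 5, 2))
truth[:, -1, :] = [3.0, 4.0]
# final_displacement_error(pred, truth) -> 5.0
```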