2025
Conference Paper
Title

Exploitation of Hidden Context in Dynamic Movement Forecasting: A Neural Network Journey from Recurrent to Graph Neural Networks and General Purpose Transformers

Abstract
Forecasting within signal processing pipelines is crucial for mitigating delays, particularly in predicting the dynamic movements of objects such as NBA players. This task poses significant challenges due to the inherently interactive and unpredictable nature of sports, where abrupt changes in velocity and direction are prevalent. Traditional approaches, including (S)ARIMA(X), Kalman filters (KF), and particle filters (PF), often struggle to model the non-linear dynamics present in such scenarios. Machine learning (ML) methods, such as long short-term memory (LSTM) networks, graph neural networks (GNNs), and Transformers, offer greater flexibility and accuracy but frequently fail to explicitly capture the interplay between temporal dependencies and contextual interactions, which are critical in chaotic sports environments.

In this paper, we evaluate these models and assess their strengths and weaknesses. Experimental results reveal key performance trade-offs across input history length, generalizability, and the ability to incorporate contextual information. ML-based methods demonstrated substantial improvements over linear models across forecast horizons of up to 2 s. Among the tested architectures, our hybrid LSTM augmented with contextual information achieved the lowest final displacement error (FDE) of 1.51 m, outperforming a temporal convolutional neural network (TCNN), a graph attention network (GAT), and Transformers, while also requiring less data and training time than the GAT and Transformers. Our findings indicate that no single architecture excels across all metrics, emphasizing the need for task-specific considerations in trajectory prediction for fast-paced, dynamic environments such as NBA gameplay.
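The abstract reports results in terms of the final displacement error (FDE), the standard trajectory-forecasting metric: the Euclidean distance between the predicted and the ground-truth position at the final timestep of the forecast horizon. A minimal illustrative sketch (toy coordinates, not data from the paper):

```python
import math

def final_displacement_error(pred, gt):
    """FDE: Euclidean distance between the last predicted and the last
    ground-truth 2D position of a trajectory, in the same units (metres)."""
    (px, py), (gx, gy) = pred[-1], gt[-1]
    return math.hypot(px - gx, py - gy)

# Toy 2D court trajectories in metres; values are illustrative only.
predicted = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5)]
actual    = [(0.0, 0.0), (1.1, 0.4), (2.9, 2.7)]
print(final_displacement_error(predicted, actual))  # prints 1.5
```

Only the final positions enter the metric, which is why FDE complements average displacement error over the whole horizon.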
Author(s)
Schelenz, Lukas; Rajanna, Shobha; Gosalci, Denis; Heublein, Lucas; Pirkl, Jonas; Ott, Jonathan; Ott, Felix; Mutschler, Christopher; Feigl, Tobias
(all: Fraunhofer-Institut für Integrierte Schaltungen IIS)
Mainwork
IEEE/ION Position, Location and Navigation Symposium, PLANS 2025  
Conference
Position, Location and Navigation Symposium 2025  
DOI
10.1109/PLANS61210.2025.11028353
Language
English
Keyword(s)
  • Deep Learning
  • Graph Attention Network
  • Human Behaviour
  • Long Short-term Memory
  • Machine Learning
  • Recurrent Network
  • Self-attention
  • Sport Analytics
  • Time-series
  • Trajectory Forecasting
  • Transformer