Title: Analysis of Neural Network Inference Response Times on Embedded Platforms
Authors: Huber, Patrick; Göhner, Ulrich; Trapp, Mario; Zender, Jonathan; Lichtenberg, Rabea
Year: 2024
Record dates: 2025-01-10; 2025-08-26; 2025-01-10
URL: https://publica.fraunhofer.de/handle/publica/481219
DOI: 10.1109/ASIANComNet63184.2024.10811052
Type: conference paper
Language: en
Keywords: artificial neural network; ANN; ANN inference; tensorflow lite; embedded systems; benchmarking; response time

Abstract: The response time of Artificial Neural Network (ANN) inference is of utmost importance in embedded applications, particularly in continuous stream processing. Predictive-maintenance applications require timely predictions of state changes. This study enables the reader to estimate the response time of a given model based on the underlying platform and emphasizes the relevance of benchmarking generic ANN applications on edge devices. We analyze the influence of network parameters, activation functions, and single- and multithreading on execution times. Potential side effects, such as clock-rate variance and other hardware-related influences, are outlined and accounted for. The results underline the complexity of task-partitioning and scheduling strategies while emphasizing the necessity of precisely coordinating these parameters to achieve optimal performance on any platform. This study shows that cutting-edge frameworks do not necessarily perform the required coordination automatically for all configurations, which may negatively impact performance.
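
A minimal sketch of the kind of response-time measurement the abstract refers to, using TensorFlow Lite's Python interpreter with a configurable thread count. The model file `model.tflite`, the thread and run counts, and the warm-up/percentile reporting are illustrative assumptions, not the paper's actual benchmark harness:

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical model file; any TensorFlow Lite model can be substituted.
MODEL_PATH = "model.tflite"
NUM_THREADS = 4      # set to 1 for a single-threaded baseline
WARMUP_RUNS = 10
TIMED_RUNS = 100

# Load the model with a fixed thread count.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH, num_threads=NUM_THREADS)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]

# Random input matching the model's expected shape and dtype.
dummy_input = np.random.random_sample(input_details["shape"]).astype(
    input_details["dtype"])

# Warm-up runs exclude one-off initialization costs from the measurement.
for _ in range(WARMUP_RUNS):
    interpreter.set_tensor(input_details["index"], dummy_input)
    interpreter.invoke()

# Timed runs: wall-clock response time per single inference.
latencies_ms = []
for _ in range(TIMED_RUNS):
    interpreter.set_tensor(input_details["index"], dummy_input)
    start = time.perf_counter()
    interpreter.invoke()
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

print(f"threads={NUM_THREADS} "
      f"mean={np.mean(latencies_ms):.3f} ms "
      f"p95={np.percentile(latencies_ms, 95):.3f} ms")
```

Repeating the run with `NUM_THREADS = 1` versus higher counts gives the single- vs. multithreaded comparison mentioned in the abstract; the warm-up iterations and percentile reporting are one simple way to account for clock-rate variance and other hardware-related effects.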