Authors: Ehrhardt, Jonas; Ramonat, Malte; Heesch, Rene; Balzereit, Kaja; Diedrich, Alexander; Niggemann, Oliver
Date issued: 2022-11-17
Year: 2022
Handle: https://publica.fraunhofer.de/handle/publica/428861
DOI: 10.1109/etfa52439.2022.9921546
Title: An AI benchmark for Diagnosis, Reconfiguration & Planning
Type: conference paper
Language: en
Keywords: benchmark; dataset; planning; reconfiguration; diagnosis; CPS; CPPS; Cyber-Physical Production System; machine learning; artificial intelligence

Abstract: To improve the autonomy of Cyber-Physical Production Systems (CPPS), a growing number of Artificial Intelligence (AI) approaches is being developed. However, implementations of such approaches are often validated on individual use cases, offering little to no comparability. Although CPPS automation spans a variety of problem domains, existing benchmarks usually focus on single or partial problems. Additionally, they often neglect to test AI-specific performance indicators, such as asymptotic complexity scenarios or runtimes. In this paper we identify a minimum common set of requirements for AI benchmarks in the domain of CPPS and introduce a comprehensive benchmark applicable to diagnosis, reconfiguration, and planning approaches from AI. The benchmark consists of a grid of datasets derived from 16 simulations of modular CPPS from process engineering, featuring multiple functionalities, complexities, and individual and superposed faults. We evaluate the benchmark on state-of-the-art AI approaches for diagnosis, reconfiguration, and planning. The benchmark is made publicly available on GitHub.