Title: Synthetic Data Generation for Bridging Sim2Real Gap in a Production Environment
Authors: Rawal, Parth Kapil; Sompura, Mrunal; Hintze, Wolfgang
Dates: 2024-01-25 (made available); 2023-11-18 (issued)
Handle: https://publica.fraunhofer.de/handle/publica/459334
DOI: 10.48550/arXiv.2311.11039
Language: en
Type: paper
Keywords: synthetic data; photorealistic rendering; production; sim2real gap; object detection

Abstract: Synthetic data has lately been used for training deep neural networks in computer vision applications such as object detection, object segmentation, and 6D object pose estimation. Domain randomization plays an important role here in reducing the simulation-to-reality gap. However, this generalization might not be effective in specialized domains like a production environment involving complex assemblies. Either the individual parts, trained with synthetic images, are integrated into much larger assemblies, making them indistinguishable from their counterparts and resulting in false positives, or they are partially occluded just enough to give rise to false negatives. Domain knowledge is vital in these cases and, if conceived effectively while generating synthetic data, can show a considerable improvement in bridging the simulation-to-reality gap. This paper focuses on synthetic data generation procedures for parts and assemblies used in a production environment. The basic procedures for synthetic data generation and their various combinations are evaluated and compared on images captured in a production environment, where results show up to 15% improvement using combinations of basic procedures. Reducing the simulation-to-reality gap in this way can help utilize the true potential of robot-assisted production using artificial intelligence.