Benchmarking Learnable Mesh and Texture Representations for Immersive Digital Twins

Type: conference paper
Rights: Under Copyright
Authors: Müller, Linus; Bätz, Michel; Berg, Andre; Gray, Timothy; Gul, Muhammad Shahzeb Khan; Schinabeck, Christian; Keinert, Joachim
Date issued: 2025-06-30 (record available: 2025-10-22)
URI: https://publica.fraunhofer.de/handle/publica/497658
DOI: 10.1109/ICMEW68306.2025.11152025; 10.24406/publica-5841 (https://doi.org/10.24406/publica-5841)
Language: en
Keywords: Mesh; Texture; NeRF; Gaussian Splats; Real-Time Rendering; Digital Twins

Abstract: Neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) use volumetric scene representations to achieve impressive visual results in the field of novel-view synthesis. However, traditional 3D pipelines are dominated by textured meshes, supported by hardware-assisted rendering and a huge software ecosystem. In this paper, we show that mesh-based workflows can also benefit from these novel reconstruction methods by evaluating mesh reconstruction algorithms paired with view-dependent textures in terms of texture sharpness, surface accuracy, and real-time rendering performance. For that purpose, we employ a modular 3D reconstruction pipeline and use it to benchmark not only publicly available datasets but also four new high-quality datasets of our own. Finally, we highlight its applicability in XR applications for virtual trade shows.