June 30, 2025
Conference Paper
Title
Benchmarking Learnable Mesh and Texture Representations for Immersive Digital Twins
Abstract
Neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) use volumetric scene representations to achieve impressive visual results in novel-view synthesis. However, traditional 3D pipelines are dominated by textured meshes, which are supported by hardware-accelerated rendering and a vast software ecosystem. In this paper, we show that mesh-based workflows can also benefit from these novel reconstruction methods by evaluating mesh reconstruction algorithms paired with view-dependent textures in terms of texture sharpness, surface accuracy, and real-time rendering performance. To that end, we employ a modular 3D reconstruction pipeline and use it to benchmark not only publicly available datasets but also four new high-quality datasets of our own. Finally, we highlight its applicability in XR applications for virtual trade shows.
Author(s)
Rights
Use according to copyright law
Language
English