YCB-Ev: Event-Vision Dataset for 6 DoF Object Pose Estimation
Authors: Rojtberg, Pavel; Pöllabauer, Thomas
Type: conference paper
Published: 2025 (deposited 2025-03-21)
Handle: https://publica.fraunhofer.de/handle/publica/485737
DOI: 10.1007/978-981-97-8705-0_33
Language: en

Abstract: Our work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data, enabling the evaluation of 6DoF object pose estimation algorithms across these modalities. The dataset provides ground truth 6DoF object poses for the same 21 YCB objects [1] used in the YCB-Video (YCB-V) dataset [19], allowing algorithm performance to be evaluated when transferred across datasets. It consists of 21 synchronized event and RGB-D sequences, amounting to a total of 7:43 min of video. Notably, 12 of these sequences feature the same object arrangement as the YCB-V subset used in the BOP challenge [16]. Our dataset is the first to provide ground truth 6DoF pose data for event streams. Furthermore, we evaluate the generalization capabilities of two state-of-the-art algorithms, pre-trained for the BOP challenge, on our novel YCB-V sequences. The proposed dataset is available at https://github.com/paroj/ycbev.

Keywords: Branche: Automotive Industry; Research Line: Computer vision (CV); LTA: Generation, capture, processing, and output of images and 3D models; Object pose estimation