Fraunhofer-Gesellschaft

Publica


Point Cloud Hand-Object Segmentation Using Multimodal Imaging with Thermal and Color Data for Safe Robotic Object Handover

Authors: Zhang, Y.; Müller, S.; Stephan, B.; Gross, H.-M.; Notni, G.

Full text

Sensors. Online journal 21 (2021), No. 16, Art. 5676, 16 pp.
https://www.mdpi.com/journal/sensors
ISSN: 1424-8220
English
Journal article, electronic publication
Fraunhofer IOF
multimodal imaging; thermal; deep neural network; hand segmentation; point cloud segmentation

Abstract
This paper presents an application of neural networks operating on multimodal 3D data (3D point cloud, RGB, thermal) to effectively and precisely segment human hands and held objects, enabling a safe human–robot object handover. We discuss the problems encountered in building a multimodal sensor system, with a focus on the calibration and alignment of a set of cameras including RGB, thermal, and NIR cameras. We propose the use of a copper–plastic chessboard calibration target with an internal active light source (near-infrared and visible light). After brief heating, the calibration target can be captured simultaneously and legibly by all cameras. Based on the multimodal dataset captured by our sensor system, PointNet, PointNet++, and RandLA-Net are used to verify the effectiveness of applying multimodal point cloud data for hand–object segmentation. These networks were trained on various data modes (XYZ, XYZ-T, XYZ-RGB, and XYZ-RGB-T). The experimental results show a significant improvement in the segmentation performance of XYZ-RGB-T (mean Intersection over Union: 82.8% by RandLA-Net) compared with the other three modes (77.3% by XYZ-RGB, 35.7% by XYZ-T, 35.7% by XYZ); notably, the Intersection over Union for the hand class alone reaches 92.6%.
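The four data modes compared in the abstract differ only in which per-point channels are stacked next to the coordinates, and the reported scores are per-class Intersection over Union values. A minimal NumPy sketch of both ideas; all function names, shapes, and normalizations here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def make_mode(xyz, rgb=None, thermal=None):
    """Concatenate per-point features into one (N, C) array.

    xyz:     (N, 3) point coordinates
    rgb:     (N, 3) color values, e.g. normalized to [0, 1] (optional)
    thermal: (N, 1) normalized temperature channel (optional)
    """
    parts = [xyz]
    if rgb is not None:
        parts.append(rgb)
    if thermal is not None:
        parts.append(thermal)
    return np.concatenate(parts, axis=1)

def class_iou(pred, gt, cls):
    """Intersection over Union for one class, given per-point labels."""
    inter = np.sum((pred == cls) & (gt == cls))
    union = np.sum((pred == cls) | (gt == cls))
    return inter / union

# Toy point cloud with 4 points.
n = 4
xyz = np.random.rand(n, 3)
rgb = np.random.rand(n, 3)
t = np.random.rand(n, 1)

print(make_mode(xyz).shape)                       # XYZ       -> (4, 3)
print(make_mode(xyz, thermal=t).shape)            # XYZ-T     -> (4, 4)
print(make_mode(xyz, rgb=rgb).shape)              # XYZ-RGB   -> (4, 6)
print(make_mode(xyz, rgb=rgb, thermal=t).shape)   # XYZ-RGB-T -> (4, 7)
```

The mean IoU reported in the abstract would then be the average of `class_iou` over all classes (e.g. hand, object, background).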

URL: http://publica.fraunhofer.de/dokumente/N-640102.html