Fraunhofer-Gesellschaft

Publica


Non-Planar Inside-Out Dense Light-Field Dataset and Reconstruction Pipeline

 
Authors: Zakeri, Faezeh Sadat; Durmush, A.; Ziegler, M.; Bätz, M.; Kleinert, J.

Full text: urn:nbn:de:0011-n-5616982 (1.1 MByte PDF)
MD5 fingerprint: 43fba30d993a6dc6186b9fb715f0e81f
© IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Created on: 15.10.2019


Institute of Electrical and Electronics Engineers (IEEE); IEEE Signal Processing Society:
IEEE International Conference on Image Processing, ICIP 2019. Proceedings: 22-25 September 2019, Taipei, Taiwan
Taipei, Taiwan: IEEE, 2019
ISBN: 978-1-5386-6249-6
ISBN: 978-1-5386-6250-2
pp. 1059-1063
International Conference on Image Processing (ICIP) 2019, Taipei
Funding: European Commission (EC); H2020; Grant 676401; ETN-FPI (European Training Network on Full Parallax Imaging)
English
Conference paper, electronic publication
Fraunhofer IIS
Keywords: light-field; non-planar; dense dataset; inside-out capture; view rendering

Abstract
Light-field imaging captures light rays from many directions and thus provides full spatio-angular information about the real world. This enables image processing algorithms that deliver immersive user experiences such as virtual reality (VR). To develop and evaluate reconstruction algorithms, a precise and dense light-field dataset of the real world that can serve as ground truth is desirable. In this paper, a non-planar capture is performed and a view-rendering pipeline is implemented. The acquired dataset comprises two scenes captured by a high-precision industrial robot with an attached color camera looking outward. The robot arm moves along a cylindrical path covering a field of view of 125 degrees with an angular step size of 0.01 degrees. Both scenes and their corresponding geometric calibration parameters will be made available with the publication of the paper. The images are pre-processed in several steps. At a view resolution of 5168×3448 pixels, the disparity between two adjacent views is less than 1.6 pixels, and the parallax between foreground and background objects is less than 0.6 pixels. Furthermore, the pre-processed data is used in a view-rendering experiment to demonstrate an exemplary use case, and the rendered results are evaluated both visually and objectively.
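The angular sampling above determines the number of captured views directly, while the physical baseline between adjacent views also depends on the radius of the cylindrical path, which the abstract does not state. The following minimal Python sketch, assuming a hypothetical radius RADIUS_M, illustrates both quantities:

```python
import math

# Capture parameters stated in the abstract.
FOV_DEG = 125.0   # total angular extent of the cylindrical path
STEP_DEG = 0.01   # angular step between adjacent camera positions

# Hypothetical path radius in metres; this value is NOT given in the abstract.
RADIUS_M = 0.5

# Number of views along the arc (fence-post count: steps + 1).
num_views = int(round(FOV_DEG / STEP_DEG)) + 1

# Chord length between two adjacent camera positions on the cylinder,
# i.e. the stereo baseline of neighbouring views.
step_rad = math.radians(STEP_DEG)
baseline_m = 2.0 * RADIUS_M * math.sin(step_rad / 2.0)

print(f"views captured:        {num_views}")        # 12501
print(f"adjacent baseline (m): {baseline_m:.3e}")   # ~8.727e-05 for r = 0.5 m
```

Such a sub-millimetre baseline between neighbouring views is consistent with the reported sub-pixel disparity of less than 1.6 pixels at the full 5168×3448 resolution.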

URL: http://publica.fraunhofer.de/dokumente/N-561698.html