Authors: Rasool, Shahzad; Sourin, Alexei
Date added: 2022-03-05
Date issued: 2016
Handle: https://publica.fraunhofer.de/handle/publica/246984
DOI: 10.1007/s00371-016-1224-1
Abstract: Video interaction is a common way of communicating in cyberspace. It can be made more immersive by incorporating the haptic modality. Using commonly available depth-sensing controllers such as Microsoft Kinect, depth information about a scene can be captured in real time together with the video. In this paper, we present a method for real-time haptic interaction with videos containing depth data. Forces are computed from the depth information. Spatial and temporal filtering of the depth stream provides stability of the force feedback delivered to the haptic device. Fast collision detection allows the proposed approach to run in real time. We present an analysis of the various factors that affect algorithm performance and illustrate the usefulness of the approach by highlighting possible application scenarios.
Language: en
Keywords: tangible images; tangible video; Kinect; haptic rendering
Lead Topics: Visual Computing as a Service; Digitized Work
Research Lines: Human computer interaction (HCI); Computer graphics (CG)
DDC: 006
Title: Real-time haptic interaction with RGBD video streams
Type: journal article
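
Note: The abstract mentions two implementation-level ideas, temporal/spatial filtering of the depth stream for stable force feedback and forces computed from the depth data, without giving the actual formulas. The sketch below is a minimal illustration under assumed choices only (an exponential moving-average temporal filter, a penalty/spring force along the viewing axis, illustrative constants ALPHA and STIFFNESS, and a pinhole projection of the haptic interaction point into the depth image); it is not the authors' algorithm.

```python
# Minimal sketch of depth-stream filtering and depth-based force computation.
# All constants and the force model are assumptions, not taken from the paper.
import numpy as np

ALPHA = 0.7       # temporal smoothing weight (assumed value)
STIFFNESS = 0.8   # spring constant, N per metre of penetration (assumed value)

def temporal_filter(prev_filtered: np.ndarray, new_frame: np.ndarray) -> np.ndarray:
    """Exponential moving average over successive depth frames.

    Smoothing across frames suppresses sensor noise that would otherwise be
    felt as jitter at the haptic device.
    """
    return ALPHA * prev_filtered + (1.0 - ALPHA) * new_frame

def force_from_depth(filtered_depth: np.ndarray,
                     proxy_xyz: tuple[float, float, float],
                     fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Penalty force along the viewing axis for a haptic interaction point.

    The proxy position (metres, camera frame) is projected into the depth
    image with a pinhole model; if the proxy lies deeper than the surface at
    that pixel, a spring force pushes it back toward the viewer.
    """
    x, y, z = proxy_xyz
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    h, w = filtered_depth.shape
    if not (0 <= u < w and 0 <= v < h):
        return np.zeros(3)                 # proxy outside the depth image
    penetration = z - filtered_depth[v, u] # positive when behind the surface
    if penetration <= 0.0:
        return np.zeros(3)                 # no contact, no force
    return np.array([0.0, 0.0, -STIFFNESS * penetration])
```

In such a scheme the filter constant trades latency against stability, and a spatial filter (e.g. averaging over a pixel neighbourhood) could be applied to each frame before the temporal step; the paper's own filtering and collision-detection details should be taken from the article itself.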