Title: Temporal prediction and spatial regularization in differential optical flow
Authors: Hoeffken, M.; Oberhoff, D.; Kolesnik, M.
Published: 2011
Record available: 2022-03-11
URI: https://publica.fraunhofer.de/handle/publica/373428
DOI: 10.1007/978-3-642-23687-7_52
Type: Conference paper
Language: en
DDC: 004

Abstract: In this paper we present an extension to the Bayesian formulation of multi-scale differential optical flow estimation by Simoncelli et al. [1]. We exploit the observation that optical flow is consistent across consecutive frames, so propagating information over time should improve the quality of the flow estimate. This propagation is formulated by inserting additional Kalman filters that filter the flow over time by tracking the movement of each pixel. To stabilize these filters and the overall estimation, we insert a spatial regularization into the prediction step. Through the recursive nature of the filter, the regularization is able to fill in missing information over extended spatial extents. We benchmark our algorithm, implemented in the NVIDIA CUDA framework to exploit the processing power of modern graphics processing units (GPUs), against a state-of-the-art variational flow estimation algorithm that is also implemented in CUDA. The comparison shows that, while the variational method yields somewhat higher precision, our method is more than an order of magnitude faster and can therefore operate in real time on live video streams.
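
The recursion the abstract describes, one Kalman filter per pixel whose prediction step propagates the state along its own estimated motion and applies a spatial regularizer before fusing the next frame's measurement, can be sketched roughly as below. This is a minimal NumPy/SciPy illustration, not the authors' CUDA implementation: the scalar noise variances Q and R, the box-filter regularizer, and the function names predict/update are all illustrative assumptions.

    import numpy as np
    from scipy.ndimage import map_coordinates, uniform_filter

    H, W = 64, 64     # frame size (illustrative)
    Q, R = 0.02, 0.1  # assumed process / measurement noise variances

    flow = np.zeros((2, H, W))   # per-pixel state: flow components (u, v)
    P = np.ones((H, W))          # per-pixel scalar error variance

    def predict(flow, P):
        # Propagate each pixel's filter along its own estimated motion
        # (backward warp: the state arriving at x came from x - flow(x)).
        ys, xs = np.mgrid[0:H, 0:W].astype(float)
        coords = np.stack([ys - flow[1], xs - flow[0]])
        flow = np.stack([map_coordinates(f, coords, order=1, mode='nearest')
                         for f in flow])
        P = map_coordinates(P, coords, order=1, mode='nearest') + Q
        # Spatial regularization inside the prediction step: smoothing lets
        # reliable neighbours fill in missing information over time.
        flow = uniform_filter(flow, size=(1, 5, 5))
        P = uniform_filter(P, size=5)
        return flow, P

    def update(flow, P, z):
        # Standard scalar Kalman update against the per-frame flow
        # measurement z of shape (2, H, W).
        K = P / (P + R)
        return flow + K * (z - flow), (1.0 - K) * P

Each frame would call predict followed by update, with the current multi-scale differential flow estimate as the measurement z. Apart from the smoothing, every pixel's filter is independent, which is presumably what makes the scheme map so naturally onto GPU parallelism.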