Investigation on Combining 3D Convolution of Image Data and Optical Flow to Generate Temporal Action Proposals
Workshop paper presented at the CRPV Workshop
In this paper, several variants of two-stream architectures for temporal action proposal generation in long, untrimmed videos are presented. Inspired by recent advances in human action recognition that combine 3D convolutions with two-stream networks, and building on the Single-Stream Temporal Action Proposals (SST) architecture, four two-stream architectures are investigated that process sequences of images on one stream and sequences of optical-flow images on the other. The four architectures fuse the two streams at different depths in the model; for each of them, a broad range of parameters is investigated systematically and an optimal parametrization is determined empirically. Experiments on the THUMOS'14 dataset, which contains untrimmed videos of 20 different sporting activities for temporal action proposal evaluation, show that all four two-stream architectures outperform the original single-stream SST and achieve state-of-the-art results. Additional experiments reveal that the improvements are not tied to one particular method of computing optical flow: replacing the method of Brox with FlowNet2 still yields improvements.
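As a rough illustration of the fusion idea described above, the simplest variant (combining the two streams only at the output) can be sketched as a weighted average of per-proposal confidence scores from an RGB stream and an optical-flow stream. This is a minimal, hypothetical sketch, not the authors' implementation; the names `fuse_late`, `rgb_scores`, and `flow_scores` are assumptions introduced for illustration.

```python
# Illustrative sketch only (not the paper's code): late fusion of the
# per-proposal confidence scores produced by two separate streams.
# All names here are hypothetical.

def fuse_late(rgb_scores, flow_scores, weight=0.5):
    """Fuse two streams' confidence scores for the same set of temporal
    proposals by a weighted average, a common late-fusion baseline."""
    assert len(rgb_scores) == len(flow_scores)
    return [weight * r + (1.0 - weight) * f
            for r, f in zip(rgb_scores, flow_scores)]

# Example: confidences for three candidate proposals at one time step.
rgb_scores = [0.9, 0.2, 0.6]   # from the image (RGB) stream
flow_scores = [0.7, 0.4, 0.8]  # from the optical-flow stream
fused = fuse_late(rgb_scores, flow_scores)
print([round(s, 2) for s in fused])
```

The architectures in the paper differ in *where* this combination happens; fusing earlier in the network lets the 3D convolutions learn joint appearance-and-motion features instead of combining the streams only at the score level.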