Deep learning prototype domains for person re-identification
Person re-identification (re-id) is the task of matching occurrences of the same person across different cameras, poses, lighting conditions, and a multitude of other factors that alter visual appearance. Typically, this is achieved by learning either optimal features or distance metrics adapted to the specific pairs of camera views dictated by pairwise labelled training datasets. In this work, we formulate a novel deep-learning approach to automatic prototype-domain discovery for domain-perceptive person re-id. The approach scales to new and unseen scenes without requiring new training data. We learn a separate re-id model for each discovered prototype-domain and, during deployment, use the person probe image to automatically select the model of the closest prototype-domain. Our approach requires neither supervised nor unsupervised transfer learning, i.e. no data from the target domains. Extensive evaluations are carried out using automatically detected bounding boxes with low resolution and partial occlusion on two large-scale re-id benchmarks, CUHK-SYSU and PRW. Our approach significantly outperforms state-of-the-art unsupervised methods and is competitive with supervised methods that use labelled test-domain data.
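The deployment step above, selecting the re-id model of the prototype-domain closest to the probe image, can be sketched as a nearest-centroid lookup in feature space. This is a minimal illustrative sketch, not the paper's implementation: the feature dimensionality, the Euclidean distance choice, and the names `select_domain`, `centroids`, and `probe` are all assumptions for illustration.

```python
import numpy as np

def select_domain(probe_feature, centroids):
    """Return the index of the prototype-domain whose centroid is
    closest to the probe image's feature vector (illustrative sketch;
    the distance measure is an assumed Euclidean metric)."""
    dists = np.linalg.norm(centroids - probe_feature, axis=1)
    return int(np.argmin(dists))

# Toy example: three hypothetical prototype-domain centroids in a
# 4-d feature space; a probe feature lands nearest the second one.
centroids = np.array([[0.0, 0.0, 0.0, 0.0],
                      [1.0, 1.0, 1.0, 1.0],
                      [5.0, 5.0, 5.0, 5.0]])
probe = np.array([0.9, 1.1, 1.0, 0.8])
domain = select_domain(probe, centroids)  # index of the chosen domain's re-id model
```

In a full system, `domain` would index into the set of per-prototype-domain re-id models, and matching against the gallery would then proceed with the selected model only.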