View-invariant person re-identification with an implicit shape model
In this paper, we approach the task of appearance-based person re-identification for scenarios where no biometric features can be used. For that, we build on a person re-identification approach that combines the Implicit Shape Model (ISM) with SIFT features. This approach builds identity models of persons during tracking and employs these models for re-identification. We apply this approach, which has until now been evaluated only in the infrared spectrum, to data acquired in the visible spectrum. Furthermore, we evaluate the view independence of the re-identification approach and introduce methods that extend its view invariance. Specifically, we (i) propose a method for online view determination of a tracked person, (ii) use this online view determination to generate view-specific identity models of persons, which increase model distinctiveness in re-identification, and (iii) introduce a method to convert identity models between views to increase view independence.