Title: The MTA Dataset for Multi Target Multi Camera Pedestrian Tracking by Weighted Distance Aggregation
Type: Conference paper
Authors: Köhl, Philipp; Specker, Andreas; Schumann, Arne; Beyerer, Jürgen
Published: 1.8.2020
Rights: Under Copyright
Language: English
DOI: 10.1109/CVPRW50498.2020.00529
Handle: https://publica.fraunhofer.de/handle/publica/408502

Abstract: Existing multi target multi camera tracking (MTMCT) datasets are small in terms of the number of identities and video length. Creating new real-world datasets is hard because privacy has to be guaranteed and the labeling is tedious. Therefore, in the scope of this work, a mod for GTA V was developed and used to record a simulated MTMCT dataset called Multi Camera Track Auto (MTA). The MTA dataset contains over 2,800 person identities, 6 cameras, and a video length of over 100 minutes per camera. Additionally, an MTMCT system was implemented to provide a baseline for the created dataset. The system's pipeline consists of stages for person detection, person re-identification, single camera multi target tracking, track distance calculation, and track association. The track distance calculation comprises a weighted aggregation of the following distances: a single camera time constraint, a multi camera time constraint using overlapping camera areas, an appearance feature distance, a homography matching with pairwise camera homographies, and a linear prediction based on the velocity and the time difference of tracks. Using all partial distances, we were able to surpass the results of state-of-the-art single camera trackers by +13% IDF1 score. The MTA dataset, code, and baselines are available at github.com/schuar-iosb/mta-dataset.
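
To make the weighted distance aggregation described in the abstract concrete, the following Python sketch shows how several partial track distances (a single camera time constraint, the appearance distance, the homography matching, and the linear prediction) could be combined into one weighted cost and used for cross-camera track association. The Track class, its field names, the project helper, the weights dictionary, the max_dist gating, and the use of the Hungarian algorithm for the association step are all illustrative assumptions rather than the authors' implementation (the released code is at github.com/schuar-iosb/mta-dataset); the multi camera time constraint is omitted for brevity.

```python
# Minimal sketch of a weighted track distance aggregation and association step.
# All names, weights, and the Hungarian matching are illustrative assumptions.
from dataclasses import dataclass

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cosine


@dataclass
class Track:
    camera: int
    start: float           # timestamp of the first detection in the track
    end: float             # timestamp of the last detection in the track
    feature: np.ndarray    # mean appearance (re-id) embedding
    first_pos: np.ndarray  # 2-D position at track start
    last_pos: np.ndarray   # 2-D position at track end
    velocity: np.ndarray   # estimated 2-D velocity


def project(H, point):
    """Map a 2-D point through a 3x3 pairwise camera homography."""
    p = H @ np.append(point, 1.0)
    return p[:2] / p[2]


def track_distance(a, b, H_ab, weights):
    """Weighted aggregation of partial distances between tracks a and b."""
    partial = {
        # Single camera time constraint: same-camera tracks must not overlap in time.
        "sc_time": float(a.camera == b.camera and a.end >= b.start and b.end >= a.start),
        # Appearance feature distance between the mean re-id embeddings.
        "appearance": cosine(a.feature, b.feature),
        # Homography matching: project a's last position into b's camera plane
        # and compare with b's first position.
        "homography": np.linalg.norm(project(H_ab, a.last_pos) - b.first_pos),
        # Linear prediction: extrapolate a's motion over the time gap to b.
        "prediction": np.linalg.norm(
            a.last_pos + a.velocity * (b.start - a.end) - b.first_pos
        ),
    }
    return sum(weights[name] * dist for name, dist in partial.items())


def associate(tracks_a, tracks_b, H_ab, weights, max_dist=1e3):
    """Associate tracks across two cameras by minimizing the aggregated distances."""
    cost = np.array([[track_distance(a, b, H_ab, weights) for b in tracks_b]
                     for a in tracks_a])
    rows, cols = linear_sum_assignment(np.minimum(cost, max_dist))
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
```

In a full system the weights would be tuned and the time constraints would more likely act as hard gates that forbid an association outright; in this sketch the constraint simply contributes a weighted penalty to keep the example short.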