TUB Multi-Object and Multi-Camera Tracking Dataset
- [Figure] Sample images of the virtual world under different illumination settings (dawn, day, dusk). © Fachgebiet Nachrichtenübertragung
The TU Berlin Multi-Object and Multi-Camera Tracking Dataset (MOCAT) is a synthetic dataset for training and testing detection and tracking systems in a virtual world. A key advantage of this dataset is its complete and accurate ground truth, including pixel-accurate object masks. All sequences are rendered three times, each with different illumination settings, so the influence of illumination on the algorithm under test can be measured directly. Each sequence provides 8 to 10 different camera views with partly overlapping fields of view, including camera calibration information. The ground truth also contains the world position of each object, so multi-camera tracking performance can be evaluated as well. All sequences contain vehicles, animals, and pedestrians as objects to detect and track.
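Since each camera view ships with calibration information and the ground truth stores object positions in world coordinates, per-camera detections can be related to the shared world frame via a standard pinhole projection. The sketch below is purely illustrative: the function name and the calibration values are assumptions, not the dataset's actual file format or API.

```python
import numpy as np

def project_to_camera(world_point, K, R, t):
    """Project a 3D world point into pixel coordinates using the
    standard pinhole model x = K (R X + t)."""
    cam = R @ world_point + t   # world frame -> camera frame
    img = K @ cam               # camera frame -> image plane
    return img[:2] / img[2]     # perspective divide -> (u, v) pixels

# Hypothetical calibration: 800 px focal length, principal point (320, 240),
# identity rotation, camera translated 5 m along the optical axis.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

uv = project_to_camera(np.array([0.0, 0.0, 0.0]), K, R, t)
print(uv)  # the world origin maps to the principal point (320, 240)
```

With such a projection per view, a world-coordinate ground-truth track can be compared against image-plane tracker output in every overlapping camera.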
Related Publications
- Erik Bochinski, Volker Eiselein, Thomas Sikora
Training a Convolutional Neural Network for Multi-Class Object Detection Using Solely Virtual World Data
IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), Colorado Springs, CO, USA, Aug. 23-26, 2016, pp. 278-285
Electronic ISBN: 978-1-5090-3811-4, Print-on-Demand ISBN: 978-1-5090-3812-1, DOI: 10.1109/AVSS.2016.7738056