Learning Pedestrian Detection from Virtual Worlds
Giuseppe Amato, Luca Ciampi, Fabrizio Falchi, Claudio Gennaro, Nicola Messina

Abstract

In this paper, we present a real-time pedestrian detection system trained in a virtual environment. Pedestrian detection is a very popular research topic with countless practical applications, and in recent years there has been increasing interest in deep learning architectures for performing this task. However, the availability of large labeled datasets is key to training such algorithms effectively. For this reason, in this work we introduce ViPeD, a new synthetically generated set of images extracted from a realistic 3D video game, where labels can be generated automatically by exploiting the 2D pedestrian positions extracted from the graphics engine. We exploit this new synthetic dataset to fine-tune a state-of-the-art, computationally efficient Convolutional Neural Network (CNN). A preliminary experimental evaluation, comparing against existing approaches trained on real-world images, shows encouraging results.

Paper (Preprint PDF, 3.3MB)

The paper was presented at ICIAP 2019.

Dataset

(section under construction)

This dataset is an extension of the JTA (Joint Track Auto) dataset. For this reason, we do not publish the images directly; instead, we release the annotations and the Python code used for the image augmentation process, as sketched below.
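To give a sense of what an augmentation pipeline of this kind looks like, the minimal Python sketch below takes a rendered JTA frame, applies a mild blur and sensor-like noise to bring it closer to real camera footage, and reads pedestrian bounding boxes from an annotation file. The filenames, the JSON annotation layout, and the specific transformations are illustrative assumptions, not the released implementation; please refer to the linked code for the actual procedure.

    import json
    from pathlib import Path

    import numpy as np
    from PIL import Image, ImageFilter


    def augment_frame(img: Image.Image,
                      blur_radius: float = 1.0,
                      noise_std: float = 5.0) -> Image.Image:
        """Apply a mild Gaussian blur and additive Gaussian noise.

        Illustrative stand-in for the released augmentation code;
        the actual transformations may differ.
        """
        blurred = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
        arr = np.asarray(blurred).astype(np.float32)
        arr += np.random.normal(0.0, noise_std, size=arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))


    def load_boxes(ann_path: Path):
        """Read pedestrian bounding boxes from a hypothetical JSON file
        shaped like [{"x": ..., "y": ..., "w": ..., "h": ...}, ...]."""
        with open(ann_path) as f:
            return [(b["x"], b["y"], b["w"], b["h"]) for b in json.load(f)]


    if __name__ == "__main__":
        # Hypothetical filenames, for illustration only.
        frame = Image.open("jta_frame_000001.jpg")
        augmented = augment_frame(frame)
        augmented.save("viped_frame_000001.jpg")
        boxes = load_boxes(Path("jta_frame_000001.json"))
        print(f"{len(boxes)} pedestrians in this frame")

Because the bounding boxes come directly from the graphics engine, this kind of pixel-level augmentation leaves the labels untouched, which is one of the main advantages of generating training data synthetically.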

Here you can find more details about the ViPeD dataset, together with links to the code and the annotations.

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.