2021
Conference Paper
Title
Towards Lower Precision Quantization for Pedestrian Detection in Crowded Scenario
Abstract
Automatic pedestrian detection in real-world uncooperative scenarios is a well-known problem in computer vision that has recently regained visibility due to physical-distancing requirements. It remains a very challenging task, especially in crowded areas. Owing to diverse technical and privacy concerns, embedded systems such as smart cameras and small drones are becoming ubiquitous, yet state-of-the-art detection models are not designed for on-edge processing in such resource-constrained environments. Quantization techniques are therefore required: by reducing a model's weights to low precision, they not only compress the model effectively but also enable low-bit-width arithmetic, which in turn can be accelerated by specialized hardware. However, applying an effective quantization scheme while maintaining accuracy is challenging. In this work we first establish Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ) baselines for 8-bit uniform quantization of RetinaNet for person detection on the extremely challenging PANDA dataset. These baselines achieve nearly lossless accuracy, with the 8-bit PTQ model providing about a 5× speedup in CPU inference and a 4× reduction in model size. Further experiments with more aggressive 4- and 2-bit quantization schemes reveal several challenges that lead to severe instabilities. We apply both uniform and non-uniform quantization to overcome them and provide insights and strategies for fully quantizing the model to 4 and 2 bits. Through this process we systematically evaluate the sensitivity of individual parts of RetinaNet to very-low-precision quantization. Finally, we show the robustness of quantization to limited amounts of data.
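For illustration, the following is a minimal sketch of b-bit uniform (affine) quantization with simple per-tensor min-max calibration, for the bit widths the abstract discusses (8, 4, 2). It is not the paper's implementation; the function names and the calibration choice are assumptions made here for the example.

```python
# Illustrative sketch only: per-tensor, min-max calibrated uniform quantization.
# Not the paper's code; shows why lower bit widths lose more precision.
import numpy as np

def uniform_quantize(x: np.ndarray, bits: int):
    """Map a float tensor to unsigned b-bit integers; return (q, scale, zero_point)."""
    qmin, qmax = 0, 2 ** bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) or 1.0   # guard against constant tensors
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map the integers back to approximate real values."""
    return scale * (q.astype(np.float32) - zero_point)

if __name__ == "__main__":
    w = np.random.randn(256).astype(np.float32)      # stand-in for a layer's weights
    for bits in (8, 4, 2):
        q, s, zp = uniform_quantize(w, bits)
        err = np.abs(w - dequantize(q, s, zp)).mean()
        print(f"{bits}-bit uniform quantization, mean abs error: {err:.4f}")
```

Running this toy example shows the quantization error growing sharply from 8 to 4 to 2 bits, which is the basic effect behind the instabilities the abstract reports for aggressive low-precision schemes.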