Real-time detection of objects in the 3D scene is one of the tasks an autonomous agent needs to perform to understand its surroundings. While recent Deep Learning-based solutions achieve satisfactory performance, their high computational cost makes them intractable in real-life settings where computations must be performed on embedded platforms. In this paper, we analyze the efficiency of two popular voxel-based 3D object detection methods that offer a good compromise between high performance and speed, focusing on two aspects: their ability to detect objects located at large distances from the agent, and their ability to operate in real time on embedded platforms equipped with high-performance GPUs. Our experiments show that these methods mostly fail to detect distant small objects due to the sparsity of the input point clouds at large distances. Moreover, models trained only on near objects achieve similar or better performance than those trained on all objects in the scene, indicating that the models learn object appearance representations mostly from near objects. Our findings suggest that a considerable part of the computation of existing methods is spent on regions of the scene that do not contribute to successful detection. Consequently, these methods can achieve a speed-up of 40-60% by restricting operation to near objects, while sacrificing little in performance.
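The range restriction behind the reported 40-60% speed-up can be sketched as a simple crop of the LiDAR point cloud before voxelization. The sketch below is illustrative only: the `crop_point_cloud` helper and the 35 m cut-off are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np

def crop_point_cloud(points, max_range=35.0):
    """Keep only LiDAR returns within max_range metres (x-y plane) of the sensor.

    points: (N, 3) array of [x, y, z] coordinates.
    Cropping the cloud before voxelization shrinks the detector's input grid,
    which is where a range-restricted detector saves computation.
    Note: max_range=35.0 is a hypothetical value chosen for this example.
    """
    dist = np.linalg.norm(points[:, :2], axis=1)  # horizontal distance per point
    return points[dist <= max_range]

# Toy cloud: two near points and one distant point.
cloud = np.array([
    [ 1.0,  2.0, 0.1],   # near
    [10.0, -5.0, 0.3],   # near
    [60.0, 20.0, 0.5],   # far: dropped by the crop
])
near = crop_point_cloud(cloud, max_range=35.0)
```

Because voxel-based detectors scale with the extent of the occupied grid, discarding far-range points (which the experiments show rarely yield successful detections anyway) directly reduces the number of voxels the backbone must process.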
|2021 International Conference on Emerging Techniques in Computational Intelligence (ICETCI)
|Naresh Mallenahalli, Arya Bhattacharya, Sabrina Senatore, Atul Negi, Akira Hirose
|Published - Aug. 2021
|2021 International Conference on Emerging Techniques in Computational Intelligence, ICETCI 2021 - Virtual, Hyderabad, India
Duration: 25 Aug. 2021 → 27 Aug. 2021