Date: December 04, 2023. — A team of researchers from Incheon National University (INU), Korea, has developed a novel IoT-enabled, deep-learning-based end-to-end 3D object detection system that can improve the detection capabilities of autonomous vehicles even under unfavorable conditions. Their paper was published in the journal IEEE Transactions on Intelligent Transportation Systems.
Autonomous vehicles are projected to transform the transportation industry by providing safer, more comfortable, and more environmentally responsible travel. However, one of the most difficult challenges for self-driving cars is reliably detecting and avoiding objects, pedestrians, and other vehicles across a variety of road conditions. Current detection methods rely on sensors such as LiDAR, radar, and cameras, all of which are susceptible to adverse weather, unstructured roads, and occlusion. The study set out to address this challenge.
The researchers adapted the state-of-the-art YOLOv3 (You Only Look Once, version 3) algorithm, a fast and accurate method for detecting objects in 2D, and extended it to identify objects in 3D. To improve the system's performance and dependability, they incorporated Internet of Things (IoT) technology, which enables devices to exchange data and communicate over the internet.
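The paper does not spell out the network internals, but the general idea of extending a YOLO-style 2D detector to 3D can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prediction layout (grid-cell offsets, anchor priors, a height coordinate, and a yaw angle) and the anchor values are assumptions chosen for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Box3D:
    x: float; y: float; z: float   # box center
    w: float; l: float; h: float   # width, length, height
    yaw: float                     # heading angle (radians)
    conf: float                    # objectness confidence

def decode_cell(raw, cell_x, cell_y, cell_size=1.0, anchor=(1.6, 3.9, 1.5)):
    """Decode one grid cell's raw network outputs into a 3D box, YOLO-style.

    raw = [tx, ty, tz, tw, tl, th, tyaw, tconf]; anchor is an assumed
    (w, l, h) size prior, roughly car-shaped, in meters.
    """
    tx, ty, tz, tw, tl, th, tyaw, tconf = raw
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    # As in 2D YOLO, the sigmoid keeps the predicted center inside its cell.
    x = (cell_x + sig(tx)) * cell_size
    y = (cell_y + sig(ty)) * cell_size
    z = tz                          # vertical position regressed directly
    # Dimensions scale the anchor prior exponentially, as in YOLOv2/v3.
    w, l, h = (a * math.exp(t) for a, t in zip(anchor, (tw, tl, th)))
    return Box3D(x, y, z, w, l, h, tyaw, sig(tconf))
```

With all-zero raw outputs, the decoded box sits at the center of its grid cell with the anchor's dimensions and a confidence of 0.5, which makes the role of each term easy to verify by hand.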
The proposed system accepts point-cloud data and RGB images as input and outputs bounding boxes with confidence scores and labels for visible obstacles. It can detect a wide range of objects, including cars, trucks, buses, bicycles, motorcycles, pedestrians, and traffic signs, and it handles occlusion, rotation, and scale variations.
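The output described above (labeled boxes with confidence scores over a fixed set of classes) can be sketched as a simple post-processing step. The class names, detection tuple layout, and 0.5 threshold here are illustrative assumptions, not details from the paper.

```python
# Assumed class set, mirroring the categories listed in the article.
CLASSES = {"car", "truck", "bus", "bicycle",
           "motorcycle", "pedestrian", "traffic_sign"}

def filter_detections(detections, conf_threshold=0.5):
    """Keep detections with a known label and confidence above the threshold.

    Each detection is assumed to be a tuple:
    (label, confidence, (x, y, z, w, l, h, yaw)).
    """
    return [d for d in detections
            if d[0] in CLASSES and d[1] >= conf_threshold]

# Hypothetical raw detections from one frame.
dets = [
    ("car",        0.92, (10.0,  2.1, 0.0, 1.8,  4.5, 1.5, 0.1)),
    ("pedestrian", 0.34, ( 5.0, -1.0, 0.0, 0.6,  0.6, 1.7, 0.0)),
    ("bus",        0.81, (20.0,  0.5, 0.0, 2.5, 12.0, 3.2, 0.0)),
]
kept = filter_detections(dets)  # car and bus survive the 0.5 threshold
```

In a real pipeline this thresholding would be followed by non-maximum suppression to remove duplicate boxes for the same object.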
The researchers evaluated their system on the Lyft dataset, which contains road data collected from 20 autonomous vehicles following a planned route in Palo Alto, California, over a four-month period. Compared with several existing approaches, their system achieved higher accuracy and lower latency.
The researchers believe their system could boost the widespread adoption and dependability of autonomous vehicles, and they note that its applicability extends beyond vehicular autonomy to domains such as robotics, surveillance, and gaming. Professor Gwanggil Jeon, the lead author of the paper, emphasizes, “Our proposed system functions in real-time, augmenting the object detection capabilities of autonomous vehicles, thereby ensuring smoother and safer navigation through traffic.”
The researchers intend to develop their system further by integrating additional data sources, such as GPS, IMU, and odometry readings, and by employing more powerful deep learning models, such as Transformers and Graph Neural Networks. They also plan to work with industry partners to evaluate the system under realistic conditions and commercialize their invention.