INU Researchers Develop Novel Deep Learning-Based Detection System for Autonomous Vehicles

The new system, aided by the Internet of Things, improves the detection capabilities of autonomous vehicles even under unfavorable conditions.

Autonomous vehicles hold the promise of tackling traffic congestion, enhancing traffic flow through vehicle-to-vehicle communication, and revolutionizing the travel experience by offering comfortable and safe journeys. Additionally, integrating autonomous driving technology into electric vehicles could contribute to more eco-friendly transportation solutions.


A critical requirement for the success of autonomous vehicles is their ability to detect and navigate around obstacles, pedestrians, and other vehicles across diverse environments. Current autonomous vehicles employ smart sensors such as LiDAR (Light Detection and Ranging) for a 3D view of the surroundings and depth information, RADAR (Radio Detection and Ranging) for detecting objects at night and in cloudy weather, and a set of cameras providing RGB images and a 360-degree view, collectively forming a comprehensive dataset that includes a 3D point cloud. However, these sensors often face challenges such as reduced detection capabilities in adverse weather, on unstructured roads, or under occlusion.
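To make the sensor fusion idea concrete, the sketch below shows one standard way LiDAR point clouds and camera RGB images are related: projecting 3D points onto the image plane with a pinhole camera model. This is an illustrative example only, not code from the paper; the intrinsic parameters (fx, fy, cx, cy) are made-up values.

```python
# Minimal sketch of LiDAR-to-camera fusion via pinhole projection.
# Coordinate convention: x right, y down, z forward (camera frame).
# The intrinsics below are illustrative placeholders, not from the paper.

def project_to_image(points, fx=720.0, fy=720.0, cx=640.0, cy=360.0):
    """Project 3D points into pixel coordinates (u, v).

    Points at or behind the camera plane (z <= 0) are skipped,
    since they cannot appear in the image.
    """
    pixels = []
    for x, y, z in points:
        if z <= 0:
            continue
        u = fx * x / z + cx
        v = fy * y / z + cy
        pixels.append((u, v))
    return pixels

# Three hypothetical LiDAR returns; the last one is behind the camera.
lidar_points = [(0.0, 0.0, 10.0), (1.0, -0.5, 5.0), (2.0, 1.0, -3.0)]
print(project_to_image(lidar_points))  # two projected pixels
```

Once points are mapped to pixels this way, each 3D point can be paired with the RGB values at its projected location, which is one common route to the combined point-cloud-plus-image input the article describes.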

To overcome these shortcomings, an international team of researchers led by Professor Gwanggil Jeon from the Department of Embedded Systems Engineering at Incheon National University (INU), Korea, has recently developed a groundbreaking Internet-of-Things-enabled, deep learning-based, end-to-end 3D object detection system. "Our proposed system operates in real time, enhancing the object detection capabilities of autonomous vehicles, making navigation through traffic smoother and safer," explains Prof. Jeon. Their paper was made available online on October 17, 2023, and published in Volume 24, Issue 11 of the journal IEEE Transactions on Intelligent Transportation Systems in November 2023.

The proposed system is built on YOLOv3 (You Only Look Once), a deep learning object detection technique that is among the most widely used state-of-the-art approaches for 2D visual detection. The researchers first applied this model to 2D object detection and then modified the YOLOv3 technique to detect 3D objects. Using both point cloud data and RGB images as input, the system outputs bounding boxes with confidence scores and labels for visible obstacles.
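The output format described above (boxes, confidence scores, labels) is typically cleaned up with confidence filtering and non-maximum suppression before use. The sketch below illustrates that standard YOLO-style post-processing step; it is a generic example under assumed thresholds, not the authors' implementation.

```python
# Illustrative YOLO-style post-processing (not the paper's code):
# filter detections by confidence, then greedily suppress overlapping
# 2D boxes of the same class. Boxes are (x1, y1, x2, y2) in pixels.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    """detections: list of (box, confidence, label) tuples."""
    dets = [d for d in detections if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for d in dets:
        # Keep a detection only if it does not heavily overlap an
        # already-kept detection of the same class.
        if all(iou(d[0], k[0]) < iou_thresh for k in kept if k[2] == d[2]):
            kept.append(d)
    return kept

# Hypothetical raw detections from a single frame.
dets = [((10, 10, 50, 50), 0.9, "car"),
        ((12, 12, 52, 52), 0.8, "car"),        # duplicate of the first
        ((100, 100, 140, 140), 0.7, "pedestrian"),
        ((0, 0, 20, 20), 0.3, "car")]          # below confidence threshold
print(nms(dets))  # one "car" box and one "pedestrian" box survive
```

The same filtering logic carries over to 3D detection, with the 2D IoU replaced by an overlap measure between 3D bounding boxes.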

To assess the system's performance, the team conducted experiments using the Lyft dataset, which comprises road information captured by 20 autonomous vehicles traveling a predetermined route in Palo Alto, California, over a four-month period. The results demonstrated that YOLOv3 achieves high accuracy, surpassing other state-of-the-art architectures. Notably, the overall accuracies for 2D and 3D object detection were an impressive 96% and 97%, respectively.

Prof. Jeon emphasizes the potential impact of this enhanced detection capability: "By improving detection capabilities, this system could propel autonomous vehicles into the mainstream. The introduction of autonomous vehicles has the potential to transform the transportation and logistics industry, offering economic benefits through reduced dependence on human drivers and the introduction of more efficient transportation methods."

Furthermore, the present work is expected to drive research and development in various technological fields such as sensors, robotics, and artificial intelligence. Going forward, the team aims to explore additional deep learning algorithms for 3D object detection, recognizing that current efforts focus largely on 2D images.

In summary, this groundbreaking study could pave the way for the widespread adoption of autonomous vehicles and, in turn, a more environmentally friendly and comfortable mode of transport.

Reference

Title of original paper: A Smart IoT Enabled End-to-End 3D Object Detection System for Autonomous Vehicles
Journal: IEEE Transactions on Intelligent Transportation Systems
DOI: https://doi.org/10.1109/TITS.2022.3210490


About Incheon National University
Website: http://www.inu.ac.kr/mbshome/mbs/inuengl/index.html

Featured Product

Helios™2 Ray Time-of-Flight Camera Designed for Unmatched Performance in Outdoor Lighting Conditions

The Helios2 Ray camera is powered by Sony's DepthSense IMX556PLR ToF image sensor and is specifically engineered for exceptional performance in challenging outdoor lighting environments. Equipped with 940 nm VCSEL laser diodes, the Helios2 Ray generates real-time 3D point clouds, even in direct sunlight, making it suitable for a wide range of outdoor applications. The Helios2 Ray offers the same IP67 and Factory Tough™ design as the standard Helios2 camera, featuring a 640 x 480 depth resolution at distances of up to 8.3 meters and a frame rate of 30 fps.