
Research on Cross-modal Feature Fusion and Multi-target Recognition Integration Method for Autonomous Driving

Ling Wu, Tingting Gao, Na Li, Shaojie Gao, Ziheng Yang

East University of Heilongjiang

Abstract:

In this paper, we propose an integrated method for cross-modal feature fusion and multi-target recognition in autonomous driving. By fusing data from multiple sensors such as cameras, radar, and lidar, the method exploits the complementary strengths of the different sensors and improves the accuracy and robustness of target detection. In practical applications, vision sensors provide rich image information that helps identify the appearance characteristics of targets; radar measures the distance and speed of targets, supplying dynamic information to the autonomous vehicle; and lidar builds a high-precision 3D model of the environment, helping the vehicle better perceive the surrounding spatial structure. In addition, we design an ensemble learning framework to optimize multi-target recognition performance. By integrating several different recognition models, the framework combines the strengths of each model and reduces the limitations of any single model, achieving more accurate and reliable multi-target recognition. Experimental results show that the proposed method achieves significant performance improvements on multiple autonomous-driving datasets, providing strong support for the further development of autonomous driving technology.
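To make the two components concrete, the sketch below (Python with PyTorch) illustrates one plausible reading of the approach: a gated fusion module that projects camera, radar, and lidar feature vectors into a shared space and weights them adaptively, followed by score-level ensembling over several recognition models. All module names, dimensions, and the weighting scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Gated fusion of per-modality feature vectors (illustrative sketch)."""
    def __init__(self, cam_dim=512, radar_dim=64, lidar_dim=256,
                 fused_dim=256, num_classes=10):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.cam_proj = nn.Linear(cam_dim, fused_dim)
        self.radar_proj = nn.Linear(radar_dim, fused_dim)
        self.lidar_proj = nn.Linear(lidar_dim, fused_dim)
        # Learn per-modality weights so the network can emphasize
        # the most reliable sensor for a given scene.
        self.gate = nn.Sequential(nn.Linear(3 * fused_dim, 3),
                                  nn.Softmax(dim=-1))
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, cam_feat, radar_feat, lidar_feat):
        c = self.cam_proj(cam_feat)
        r = self.radar_proj(radar_feat)
        l = self.lidar_proj(lidar_feat)
        w = self.gate(torch.cat([c, r, l], dim=-1))   # (batch, 3) weights
        fused = w[:, 0:1] * c + w[:, 1:2] * r + w[:, 2:3] * l
        return self.classifier(fused)                 # per-class logits

def ensemble_predict(models, inputs, weights=None):
    """Weighted average of softmax scores from several recognition models.

    Each model is assumed to map the same input batch to per-class logits.
    """
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    probs = None
    for model, w in zip(models, weights):
        p = w * torch.softmax(model(inputs), dim=-1)
        probs = p if probs is None else probs + p
    return probs.argmax(dim=-1), probs  # predicted class per sample, scores

Learned gating lets the network down-weight a degraded modality (for example, the camera at night), and score-level averaging is the simplest ensemble rule; the paper's actual fusion and integration strategies may differ.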


Key Words:

autonomous driving; cross-modal feature fusion; multi-target recognition; ensemble learning; sensor data

