A re-calibration method for object detection with multi-modal alignment bias in autonomous driving

Multi-modal object detection in autonomous driving has achieved great breakthroughs thanks to the fusion of complementary information from different sensors. Previous work generally assumes that the calibration between sensors such as LiDAR and the camera is precise. In reality, however, the calibration matrices are fixed when a vehicle leaves the factory, and vibration, bumps, and data lags may subsequently introduce calibration bias. Since research on how calibration affects fusion detection performance is relatively scarce, multi-sensor detection methods with flexible dependence on calibration have always been attractive. In this paper, we conduct experiments on the SOTA detection method EPNet++ and show that even a slight calibration bias can severely degrade detection performance. We also propose a re-calibration model based on semantic segmentation that can be combined with a detection algorithm to improve detection performance and robustness under multi-modal calibration bias.
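
To make the effect of calibration bias concrete, below is a minimal sketch (not from the paper): it projects a few LiDAR points into a camera image with a pinhole model and measures how far the pixels move when a 1-degree rotational bias is added to the extrinsics. The intrinsic matrix, the extrinsics, and the point coordinates are hypothetical placeholders, not the paper's calibration.

```python
# Minimal sketch of how a small extrinsic-rotation bias shifts LiDAR points
# projected into the camera image. All matrix and point values are hypothetical.
import numpy as np


def rotation_y(deg):
    """Rotation about the camera's vertical (y) axis by `deg` degrees, a yaw-like bias."""
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])


def project(points, K, R, t):
    """Project Nx3 points into pixel coordinates using extrinsics (R, t) and intrinsics K."""
    pts_cam = points @ R.T + t        # sensor frame -> camera frame
    uvw = pts_cam @ K.T               # camera frame -> homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> pixel coordinates


# Hypothetical pinhole intrinsics and identity extrinsics (placeholder calibration).
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

# A few points roughly 20-40 m in front of the sensor (z is the optical axis here).
points = np.array([[ 1.0,  0.5, 20.0],
                   [-2.0,  0.2, 30.0],
                   [ 3.0, -0.5, 40.0]])

uv_exact = project(points, K, R, t)
uv_biased = project(points, K, rotation_y(1.0) @ R, t)  # 1-degree rotational bias on extrinsics

# Pixel displacement of each projected point caused by the biased calibration.
print(np.linalg.norm(uv_biased - uv_exact, axis=1))
```

With this placeholder focal length, a 1-degree rotation already shifts the projections by more than ten pixels at 20-40 m range, which illustrates why even slight calibration bias can misalign image features with LiDAR points during fusion.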

https://arxiv.org/abs/2405.16848

https://arxiv.org/pdf/2405.16848.pdf
