SCaRL: A Synthetic Multi-Modal Dataset for Autonomous Driving


We present SCaRL, a novel synthetically generated multi-modal dataset that enables the training and validation of autonomous driving solutions. Multi-modal datasets are essential to attain the robustness and high accuracy required by autonomous systems in applications such as autonomous driving. As deep learning-based solutions become more prevalent for object detection, classification, and tracking tasks, there is growing demand for datasets that combine camera, lidar, and radar sensors. However, existing real and synthetic datasets for autonomous driving lack synchronized data collection from a complete sensor suite. SCaRL provides synchronized synthetic data from RGB, semantic/instance, and depth cameras; Range-Doppler-Azimuth/Elevation maps and raw data from radar; and 3D point clouds / 2D maps of semantic, depth, and Doppler data from coherent lidar. SCaRL is a large dataset based on the CARLA simulator that provides data for diverse, dynamic scenarios and traffic conditions. It is the first autonomous driving dataset to include synthetic synchronized data from coherent lidar and MIMO radar sensors. The dataset can be accessed here: this https URL
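The Range-Doppler maps mentioned above come from standard FMCW radar signal processing: a 2D FFT over the raw data cube, with fast time yielding range bins and slow time (across chirps) yielding Doppler bins. The sketch below illustrates that conventional pipeline with NumPy; the array shapes and windowing choices are illustrative assumptions, not SCaRL's actual data format or API.

```python
import numpy as np

# Illustrative sketch (NOT SCaRL's actual format): deriving a Range-Doppler
# map from a raw FMCW radar data cube via a 2D FFT. Shapes are assumptions.
rng = np.random.default_rng(0)

n_chirps, n_samples = 64, 256                       # slow time x fast time
raw = rng.standard_normal((n_chirps, n_samples))    # stand-in for raw ADC data

# Range FFT along fast time (per chirp), with a Hann window to reduce leakage.
range_fft = np.fft.fft(raw * np.hanning(n_samples), axis=1)

# Doppler FFT along slow time (across chirps); fftshift centers zero velocity.
rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# Log-magnitude power, as typically visualized.
rd_power_db = 20 * np.log10(np.abs(rd_map) + 1e-12)
print(rd_power_db.shape)  # Doppler bins x range bins
```

The same idea extends to the Azimuth/Elevation maps the abstract lists: a further FFT across the MIMO virtual antenna array resolves angle in addition to range and Doppler.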

https://arxiv.org/abs/2405.17030

https://arxiv.org/pdf/2405.17030.pdf
