DINO-SD: Champion Solution for ICRA 2024 RoboDepth Challenge
Surround-view depth estimation is a crucial task that aims to acquire the depth maps of the surrounding views. It has many applications in real-world scenarios such as autonomous driving, AR/VR, and 3D reconstruction. However, since most autonomous driving datasets are collected in daytime scenarios, depth models perform poorly in the face of out-of-distribution (OoD) data. While some works try to improve the robustness of depth models under OoD data, these methods either require additional training data or lack generalizability. In this report, we introduce DINO-SD, a novel surround-view depth estimation model. DINO-SD does not need additional data and has strong robustness. It achieved the best performance in Track 4 of the ICRA 2024 RoboDepth Challenge.
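Depth estimation benchmarks such as this challenge typically rank models with standard error and accuracy metrics (Abs Rel, Sq Rel, RMSE, and the δ < 1.25 accuracy). The sketch below is illustrative, not the challenge's official evaluation code; the function name and the depth bounds are assumptions.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray,
                  min_depth: float = 0.1, max_depth: float = 80.0) -> dict:
    """Standard depth-estimation metrics over valid ground-truth pixels.

    Pixels with ground truth outside (min_depth, max_depth) are ignored,
    as is common when LiDAR ground truth is sparse or clipped.
    """
    mask = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[mask], gt[mask]
    ratio = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(pred - gt) / gt)),   # mean absolute relative error
        "sq_rel": float(np.mean((pred - gt) ** 2 / gt)),     # mean squared relative error
        "rmse": float(np.sqrt(np.mean((pred - gt) ** 2))),   # root mean squared error
        "a1": float(np.mean(ratio < 1.25)),                  # fraction within threshold 1.25
    }

# A perfect prediction scores zero error and a1 = 1.0.
gt = np.array([[10.0, 20.0], [30.0, 40.0]])
print(depth_metrics(gt.copy(), gt))
```

On OoD inputs (e.g. night or corrupted images), a non-robust model's Abs Rel rises and its a1 accuracy drops sharply relative to clean daytime data, which is what robustness benchmarks measure.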

https://arxiv.org/abs/2405.17102

https://arxiv.org/pdf/2405.17102.pdf
