Long-term robustness of perception across varying environments has been a bottleneck for lifelong trustworthy autonomy in outdoor mobile robotics and autonomous driving. Although monocular depth prediction is a well-studied perception task, there is little work on robust depth prediction across different environments, e.g. changing illumination and seasons, owing to the lack of a diverse real-world dataset covering such scenarios and a corresponding benchmark. To this end, we introduce the SeasonDepth Prediction Challenge, the first open-source challenge focusing on depth prediction performance under different environmental conditions.

The SeasonDepth Prediction Challenge is based on our new monocular depth prediction dataset, SeasonDepth, which contains multi-traverse outdoor images from changing environments. To quantitatively evaluate the accuracy and robustness of monocular depth prediction across dramatically changing environments, we set up two tracks with a training set of 7 slices spanning 12 different environmental conditions, using both the mean and variance of performance as evaluation metrics. We believe our competition, dataset, and benchmark will help long-term robust perception research flourish in the research community.

Challenge Tracks

For this ICRA 2022 Competition, we host two tracks: a supervised learning track and a self-supervised learning track. We also provide high-quality demonstrations as a tutorial for several baseline algorithms. Anyone can access the leaderboard of each track once the test set is released, and participants can submit their predicted depth maps to our website to compete for the top spot.


Supervised Learning Track

  • Participants may make full use of all depth maps released in the SeasonDepth dataset to train their models


Self-Supervised Learning Track

  • Only monocular image sequences may be used for model training, without depth-map supervision


The RGB images and ground-truth depth have been released for the training and validation sets of the challenge. For the test set, only RGB images will be released; the corresponding ground truth is withheld and used to evaluate submissions. The training and validation sets contain 7 multi-environment slices of images under 12 different environments, and we reserve one additional slice as the test set for the challenge. Beyond our released training and validation sets, we place no restrictions on the use of third-party public datasets or pretrained models in the competition. Each participant will be graded on the 6 metrics of the SeasonDepth benchmark on the test set. The evaluation code and instructions can be found in the evaluation toolkit, so participants can conveniently evaluate their performance themselves before submitting to our challenge website. Note that the grading metrics are scale-invariant, operating on relative depth values, which makes them compatible with both supervised and self-supervised learning-based methods.
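To make the scale-invariance point concrete, a common convention for scaleless depth evaluation is to align the prediction to the ground truth by median scaling before computing an error such as absolute relative error (AbsRel). The sketch below illustrates that convention only; the function name and the median-scaling choice are assumptions, and the official evaluation toolkit is the authoritative implementation.

```python
import numpy as np

def scale_invariant_abs_rel(pred, gt, eps=1e-6):
    """Median-align a relative depth prediction to ground truth,
    then compute AbsRel = mean(|pred - gt| / gt).

    Because the prediction is rescaled by median(gt) / median(pred),
    any global scale in `pred` cancels out, so supervised (metric)
    and self-supervised (scaleless) predictions are graded alike.
    Illustrative sketch, not the official toolkit.
    """
    mask = gt > eps                      # evaluate only valid depth pixels
    pred, gt = pred[mask], gt[mask]
    pred = pred * (np.median(gt) / np.median(pred))  # remove global scale
    return float(np.mean(np.abs(pred - gt) / gt))
```

For example, a prediction that is exactly twice the ground truth everywhere scores a perfect 0.0 under this metric, since the median alignment removes the constant factor.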



Supervised Learning Track

  • First prize: $500
  • Second prize: $300
  • Third prize: $200


Self-Supervised Learning Track

  • First prize: $500
  • Second prize: $300
  • Third prize: $200


  • Training Dataset Released

    Training and validation set available here

  • Test Set Released

    Test set available now

  • Submission Deadline

    Don't forget to include a link to your code in your submission

  • Award Decision Announcement

Workshop on Trustworthy Autonomy and Robotics

Keynote Speakers

Fisher Yu

ETH Zurich

Yan Chang


Invited Speakers

Heng Yang


Jiachen Li


Antonio Loquercio

UC Berkeley

Peng Yin


Wenshuo Wang


Wei Zhan

UC Berkeley

Linyi Li



Hanjiang Hu

Ph.D. @ CMU

Jiacheng Zhu

Ph.D. @ CMU

Zuxin Liu

Ph.D. @ CMU

Wenhao Ding

Ph.D. @ CMU

Shuai Wang

Master @ CMU

Jiarun Wei

Master @ CMU

Baoquan Yang

Undergraduate @ SJTU

Zhijian Qiao

Master @ SJTU

Ding Zhao

Assistant Professor @ CMU

Bo Li

Assistant Professor @ UIUC

Hesheng Wang

Professor @ SJTU

Sponsored By AgileX Robotics