Leaderboard | The SeasonDepth Prediction Challenge

We use the following final score to rank the submitted methods, based on their ranks on all six metrics:

final_score = 0.5 * (RANK(absrel_avg) + RANK(a1_avg)) + 0.4 * (RANK(absrel_var) + RANK(a1_var)) + 0.1 * (RANK(absrel_rng) + RANK(a1_rng))
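The final score is a weighted sum of per-metric ranks, so a lower score is better when RANK assigns 1 to the best method on a metric. Below is a minimal sketch of this computation, assuming ranks follow the arrows in the tables (↓ means lower values rank better, ↑ means higher values rank better); the data structures and metric keys mirror the formula above, and any example values are illustrative, not official leaderboard data.

```python
# Minimal sketch of the final-score computation described above.
# Assumption: RANK assigns 1 to the best method on each metric, following the
# arrows in the leaderboard tables (down-arrow = lower is better, up-arrow = higher is better).

METRICS = {
    # metric name -> (weight of its rank, True if lower is better)
    "absrel_avg": (0.5, True),
    "a1_avg":     (0.5, False),
    "absrel_var": (0.4, True),
    "a1_var":     (0.4, True),
    "absrel_rng": (0.1, True),
    "a1_rng":     (0.1, True),
}

def rank(values, lower_is_better):
    """Return 1-based ranks for a list of metric values (1 = best)."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=not lower_is_better)
    ranks = [0] * len(values)
    for pos, idx in enumerate(order, start=1):
        ranks[idx] = pos
    return ranks

def final_scores(results):
    """results: {method_name: {metric_name: value}} -> {method_name: final_score}."""
    names = list(results)
    scores = {name: 0.0 for name in names}
    for metric, (weight, lower_is_better) in METRICS.items():
        values = [results[name][metric] for name in names]
        for name, r in zip(names, rank(values, lower_is_better)):
            scores[name] += weight * r
    return scores

# Usage (illustrative values only):
# final_scores({"team_a": {"absrel_avg": 0.13, "a1_avg": 0.84, ...}, ...})
```

Under this reading, methods are ordered by ascending final score.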

The deadline (May 20th, 23:59 EST) has passed, and the winners have been finalized based on the leaderboard below. Congratulations!

Supervised Learning Track

| Rank (by Name)* | Name | Method | URL | Submission Date | Mean AbsRel ↓ | Mean a1 ↑ | Variance AbsRel (10⁻²) ↓ | Variance a1 (10⁻²) ↓ | Relative Range AbsRel ↓ | Relative Range a1 ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | GD-VTC | DAHT | code | 05/20 | 0.130 | 0.844 | 0.022 | 0.114 | 0.317 | 0.644 |
| 2 | HUSTEREO | SwinMono | code | 05/19 | 0.146 | 0.811 | 0.0124 | 0.0883 | 0.280 | 0.568 |
| 3 | dumpling | CMANet | code | 05/19 | 0.140 | 0.818 | 0.0208 | 0.1090 | 0.342 | 0.610 |
| 1* | GD-VTC | DepthFormer | code | 05/20 | 0.135 | 0.835 | 0.021 | 0.120 | 0.294 | 0.576 |
| 4* | chameleon | NBTR-Net | code | 05/19 | 0.146 | 0.810 | 0.0226 | 0.1175 | 0.355 | 0.596 |
| 5 | Baseline | BTS | code | 02/28 | 0.242 | 0.587 | 0.0222 | 0.0632 | 0.220 | 0.220 |
| 4* | chameleon | DPT | code | 05/19 | 0.152 | 0.790 | 0.0286 | 0.1574 | 0.364 | 0.637 |
| 6 | Always ahead | BTS** | code | 05/20 | 0.195 | 0.690 | 0.0394 | 0.165 | 0.319 | 0.409 |


Self-supervised Learning Track

| Rank (by Name)* | Name | Method | URL | Submission Date | Mean AbsRel ↓ | Mean a1 ↑ | Variance AbsRel (10⁻²) ↓ | Variance a1 (10⁻²) ↓ | Relative Range AbsRel ↓ | Relative Range a1 ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | brandley zhou | season_depth | code | 05/15 | 0.095 | 0.920 | 0.008 | 0.015 | 0.398 | 0.668 |
| 2* | xiangjie | van_depth | code | 05/17 | 0.131 | 0.852 | 0.006 | 0.024 | 0.247 | 0.397 |
| 3* | jaehyuck | many_dataset ss_v1 | code | 05/20 | 0.122 | 0.872 | 0.007 | 0.032 | 0.285 | 0.525 |
| 3* | jaehyuck | many_dataset ss_v2 | code | 05/20 | 0.128 | 0.861 | 0.007 | 0.031 | 0.231 | 0.424 |
| 2* | xiangjie | vadepth_sc 5scales 512x384 | code | 05/19 | 0.135 | 0.844 | 0.007 | 0.026 | 0.249 | 0.372 |
| 4 | Wangki Shinran | monodepth2** | code | 05/20 | 0.144 | 0.824 | 0.011 | 0.046 | 0.305 | 0.502 |
| 2* | xiangjie | vadepth sc_5scales | code | 05/18 | 0.145 | 0.823 | 0.016 | 0.059 | 0.375 | 0.522 |
| 4 | lxc | DEIP | code N/A | 05/14 | 0.206 | 0.682 | 0.037 | 0.155 | 0.355 | 0.465 |
| 5 | Baseline | SfMLearner | code | 02/28 | 0.325 | 0.482 | 0.107 | 0.155 | 0.298 | 0.236 |
| 6 | manydepth | manydepth | code | 05/20 | 0.227 | 0.649 | 0.080 | 0.262 | 0.486 | 0.549 |
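The six metric columns in both tables are statistics of AbsRel and a1 (commonly the δ < 1.25 accuracy) computed across the different environments (seasons and illuminations) of the test set. Below is a rough sketch of how such per-method statistics could be aggregated; the use of the population variance and the (max − min) / mean definition of Relative Range are assumptions for illustration, not the official SeasonDepth formulas.

```python
import statistics

def aggregate(per_env_values):
    """Aggregate a per-environment metric (e.g. AbsRel or a1) into the
    mean / variance / relative-range statistics reported on the leaderboard.
    Assumes one averaged value per environment (season / illumination)."""
    mean = statistics.mean(per_env_values)
    variance = statistics.pvariance(per_env_values)  # assumption: population variance
    rel_range = (max(per_env_values) - min(per_env_values)) / mean  # assumption: (max - min) / mean
    return {"mean": mean, "variance": variance, "relative_range": rel_range}

# Illustrative example (not leaderboard data); note that the variance columns
# in the tables above are reported in units of 10^-2.
print(aggregate([0.12, 0.15, 0.14, 0.11]))
```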

* Multiple submissions from the same team: per our policy, a team may submit results several times, and we evaluate and list all of them on the leaderboard. For award consideration, however, only each team's best-performing submission counts, and it receives an award only if it outperforms the baseline and ranks in the top 3 of its track.

** Submissions based on existing algorithms are allowed on the leaderboard, but the GitHub repo must clearly indicate this with a proper reference. If the algorithm is not significantly modified and improved, the submission may not be eligible for awards, as we want to encourage original and novel methods; such submissions are nevertheless always welcome on the leaderboard.