My submissions are always rated 0.0 “Fraction of done-agents” and 0.0 “Mean Reward” on the leaderboard, although the values should clearly be higher…
See the following submission as an example:
- Submission: https://www.aicrowd.com/challenges/flatland-challenge/submissions/13417
- Issue: https://gitlab.aicrowd.com/wwwjon/flatland-challenge/issues/7
When I run the local evaluation with the provided test environments, I get the following results:
```
EVALUATION COMPLETE !!
# Mean Reward : -891.15
# Mean Normalized Reward : -4.19
# Mean Percentage Complete : 1.0
```
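For reference, this is roughly how I run the local evaluation. The sketch below follows the standard starter-kit workflow (redis-server plus `flatland-evaluator`, then a `run.py` built around `FlatlandRemoteClient`); `my_controller` is only a placeholder for my actual agent, and the exact return signatures may differ between flatland-rl versions:

```python
# Assumed setup from the starter kit:
#   Terminal 1: redis-server
#   Terminal 2: flatland-evaluator --tests ./scratch/test-envs
#   Terminal 3: python run.py  (roughly the loop below)

from flatland.evaluators.client import FlatlandRemoteClient
from flatland.envs.observations import TreeObsForRailEnv

remote_client = FlatlandRemoteClient()

def my_controller(observation):
    # Placeholder: always MOVE_FORWARD (action 2); the real agent logic goes here.
    return {handle: 2 for handle in observation}

while True:
    # Request the next test environment; a falsy observation means
    # all environments have been evaluated.
    observation, info = remote_client.env_create(
        obs_builder_object=TreeObsForRailEnv(max_depth=2))
    if not observation:
        break

    while True:
        action = my_controller(observation)
        observation, all_rewards, done, info = remote_client.env_step(action)
        if done['__all__']:
            break

# Prints the "EVALUATION COMPLETE" summary shown above.
print(remote_client.submit())
```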
The GitLab issue for the online evaluation shows the following results:
```
Simulations Complete : 20/20
Percentage Evaluation Complete : 100.0%
Mean percentage of done-Agents : 100.00%
Mean Reward : -891.15
Mean Normalized Reward : -4.19
```
Nevertheless, the submission is rated 0.0 “Fraction of done-agents” and 0.0 “Mean Reward” on the leaderboard. In addition, the submission video is only 1 second long and apparently shows only the first test environment. I have already seen this problem on submissions from other participants.
Does anyone have any advice on what might be going wrong?
Thanks in advance!