According to the competition page, we are evaluated on the following metrics:
- Number of laps
- Lap Time
- Number of resets to the start line
- Number of objects avoided
But I have found that, amongst the agents I have trained, the faster agents (those following the racing line) are scored much lower than the agents that drive as slowly as possible along the centerline. This observation has been consistent across all my evaluations.
Also, the numbers I see on the scoreboard are very close to the mean rewards my agents accumulate across multiple runs. Is the scoreboard currently reflecting the mean reward our agents accumulate across multiple runs? Could the exact formula for calculating the score be revealed?
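To make the hypothesis concrete, here is a minimal sketch of what I mean by "mean reward across multiple runs". The reward values and run count are made-up placeholders, not actual evaluation data, and I am of course only guessing that the leaderboard does something like this:

```python
# Hypothetical check: does the leaderboard score match the mean episode reward?
# The numbers below are invented placeholders for one agent's evaluation runs.
rewards_per_run = [512.4, 498.7, 505.1]  # total reward accumulated in each run

# Mean reward across runs -- the quantity my scoreboard numbers seem to track.
mean_reward = sum(rewards_per_run) / len(rewards_per_run)
print(round(mean_reward, 2))  # -> 505.4
```

If the score were instead a function of laps, lap time, resets, and objects avoided (the published metrics), I would expect the faster racing-line agents to rank higher, which is not what I observe.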
Is this a bug or am I missing something here?
Submission ID #165110
No. My teammate had one submission remaining; at around 11:45 that submission "disappeared", leaving the message "Submissions will be possible as of 2020-08-01 17:40:22 UTC." Anyway, the competition appears to have completed, so it doesn't matter now.
It's 11:45 pm UTC at the time of posting this, and the competition is supposed to last until 12:01. All submissions seem to have been stopped as of now.