@crlandsc : The submission quotas have not changed. The submission quotas are checked against the number of submissions made by you (or any of your team members) in the last 24-hour window.
So for example, if you make 3 submissions at 23:55 on Day 1, then you can make only 2 more submissions until 23:55 on Day 2 (assuming the submission quota is 5 submissions / day).
The daily submission quotas are not automatically reset at midnight each day.
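To make the rolling-window behaviour concrete, here is a minimal sketch of how such a quota check might work. The function name, quota constant, and timestamps are illustrative assumptions for this example, not the actual platform code:

```python
from datetime import datetime, timedelta

QUOTA = 5  # hypothetical: submissions allowed per rolling 24-hour window
WINDOW = timedelta(hours=24)

def remaining_quota(submission_times, now, quota=QUOTA, window=WINDOW):
    """Count submissions within the trailing window and return how many remain."""
    recent = [t for t in submission_times if now - t <= window]
    return max(quota - len(recent), 0)

# The example above: 3 submissions at 23:55 on Day 1.
day1 = datetime(2023, 1, 1, 23, 55)
times = [day1, day1, day1]

# At noon on Day 2, those 3 still fall inside the window, so 2 remain.
print(remaining_quota(times, datetime(2023, 1, 2, 12, 0)))   # 2

# Just after 23:55 on Day 2, the window has rolled past them, so all 5 remain.
print(remaining_quota(times, datetime(2023, 1, 2, 23, 56)))  # 5
```

Note that the quota is not reset at midnight: each submission only stops counting 24 hours after it was made.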
In the last few days, these are the only submissions we see from your account:
- April 28th 04:13 UTC
- April 28th 04:33 UTC
- April 28th 04:45 UTC
- April 28th 15:34 UTC
- April 29th 19:34 UTC
All of these submissions failed to evaluate for unrelated reasons (for more details, please refer to the relevant debug logs in the issues associated with the failed submissions), and not because the submission quota was exceeded.
Additionally, we are updating the quotas for the competition to allow up to 5 failed submissions per day, which will not count towards your daily quota of 5 submissions.
@kimberley_jensen : There is a spike in the number of submissions because of the approaching deadline. All submissions are queued on the evaluation servers and are eventually evaluated. Some submissions are timing out after waiting in the queue for too long, and we are manually re-queuing them when that happens.
We will keep a close eye on the submission queues, and intervene as required.
Thank you for your patience.
This issue has been resolved now. Please do let us know if you continue to face this issue.
Apologies for the relative radio silence due to the holiday season.
We are still waiting for a response from the organizing team about the provision of GPUs, so we will have to hold off on answering the question until we hear back from the organizing team.
We are investigating the issue of differing throughput across evaluations. A new instance is instantiated for every evaluation on our cloud provider, and the instance type, and hence the resources available, is exactly the same in every case. We have confirmed that the exact same instance type is being made available to all submissions. We will get back to you with more details on this soon as well.
The current timeout is 1 hour (60 minutes).
Apologies for the slow response times due to the holiday season. We will be providing support at full capacity again starting 2nd of January, 2023.
We are discussing this with the organizing committee, and will get back to you soon on this.
@lyghter : Unfortunately, residents of Russia are not eligible for the prizes in the Challenge. If a leaderboard position is held by a team that is not eligible for the prizes, the prizes will indeed roll over to the next position (and at that point, it will not be logistically possible for us to allow ineligible teams to pass their prizes on to a charity). The rules have been updated to reflect this categorically.
We are constantly trying to make this challenge better for you and would appreciate any feedback you might have.
Please reply to this thread with your suggestions and feedback on making the challenge better for you!
- What have been your major pain points so far?
- What would you like to see improved?
SDX Challenge Team
Competing is more fun with a team!
Introduce yourself here, and find others who are looking to team up!
- A short introduction about you and your background.
- What brings you to this challenge?
- Some ideas you wish to explore as a part of this challenge?
SDX Challenge Team
All the prizes here have been processed except for one participant, who is a Russian national with a bank that our banking partners do not support any transactions to.
We are working closely with this participant and our financial partners to reach a resolution soon. In the meantime, the rest of the winners have confirmed receipt of their prizes.
@ricardodeazambuja : Yes, the observation is correct.
As described here:
The dataset contains 422 flights and 2056 total frames (5 frames per flight at different AGLs), with full semantic segmentation annotations and depth estimations for all frames. The dataset has been split into training and (public) test datasets. While the challenge will be scored using a private test dataset, we considered it useful to have this split to allow teams to share their results even after the challenge ends.
Of the total frames available, a subset is used for the (public) test set that you are currently being scored on. There will be an additional (private) test set that the final leaderboards will be based on. Submissions are currently limited to 5 per day.
@victorkras2008 : Thanks for pointing it out. The links are correct; we realized that there are a few more things that need to be updated in the repo before it can be made public. Sorry for the confusion, we will try to make the repositories public as soon as we can.