@curtis_irvine : We passed on your request to the CityLearn team, and they have agreed to increase the maximum team size to 15 members. However, this will be the final limit, as they believe any team larger than this would be rather large for effective teamwork in the context of a competition like this.
@andy8 : Are you referring to this : CityLearn/rendering.py at citylearn_2022 · intelligent-environments-lab/CityLearn · GitHub ?
We are happy to announce that the list above is the final set of winners for the Amazon KDD Cup 2022 - ESCI Challenge For Improving Product Search.
You can make an independent post for more visibility. Or a reply on the thread is also valid.
@amiruddin_nagri : There is no separate registration required for the community contribution prizes. If you submit any resource (a blog post, a forum post, a shared code repository, a short tutorial video, etc.), it will automatically be counted towards the community contribution prizes, as long as it is supported by end-to-end working code for the task(s).
In the last four months, the Amazon KDD Cup 2022 - ESCI Challenge For Improving Product Search saw more than 1,600 participants making 9,400+ submissions.
Thank you for being an active participant in ESCI Challenge For Improving Product Search.
While the code due-diligence phase is still underway, we do not want to keep you all waiting,
and we are happy to announce the list of tentative winners for all three tasks.
The tentative winners for the three tasks, based on their scores on the private test set, are:
Task 1:

|Rank|Team Name|Private Test Set Score (NDCG)|Prizes|
|---|---|---|---|
|#4|GraphMIRAcles|0.9028|$500 (in AWS credits)|
|#5|ZhichunRoad|0.9025|$500 (in AWS credits)|
|#6|ETS-Lab|0.9014|$500 (in AWS credits)|
|#7|ALONG|0.9014|$500 (in AWS credits)|
|#8|ljr333|0.9008|$500 (in AWS credits)|
|#9|NeuralMind|0.9007|$500 (in AWS credits)|
|#10|zackchen|0.8998|$500 (in AWS credits)|
Task 2:

|Rank|Team Name|Private Test Set Score (F1)|Prizes|
|---|---|---|---|
|#4|hahaha|0.8251|$500 (in AWS credits)|
|#5|MetaSoul|0.8207|$500 (in AWS credits)|
|#6|www|0.8204|$500 (in AWS credits)|
|#7|ZhichunRoad|0.8194|$500 (in AWS credits)|
|#8|qinpersevere|0.8191|$500 (in AWS credits)|
|#9|zackchen|0.8189|$500 (in AWS credits)|
|#10|LYZD-fintech|0.8183|$500 (in AWS credits)|
Task 3:

|Rank|Team Name|Private Test Set Score (F1)|Prizes|
|---|---|---|---|
|#4|hahaha|0.8734|$500 (in AWS credits)|
|#5|LYZD-fintech|0.8708|$500 (in AWS credits)|
|#6|qinpersevere|0.8701|$500 (in AWS credits)|
|#7|wookiebort|0.8687|$500 (in AWS credits)|
|#8|ZhichunRoad|0.8686|$500 (in AWS credits)|
|#9|NTT-DOCOMO-LABS-GREEN|0.8677|$500 (in AWS credits)|
|#10|rein20|0.8668|$500 (in AWS credits)|
We will reach out to some of the top teams if we need any inputs or clarifications during the diligence process, and announce the final set of winners by August 1st, 2022.
In the meantime, we will take this opportunity to add a reminder about the Community Contribution Prizes.
We will be accepting submissions for the Community Contribution Prizes until August 5th, 2022.
While your solutions are still fresh in your mind, document them well for the rest of the community and stand a chance to win an Oculus Quest 2 or a Mavic Mini 2 from among the Community Contribution Prizes.
More announcements will follow soon. One important heads-up: we will be organizing a Town Hall for the participants of this competition.
If you wish to present your solution in the Amazon KDD Cup 2022 ESCI Challenge Town Hall, please send out an email to email@example.com expressing your intent to speak.
Presentations at the Town Hall are also eligible for the Community Contribution Prizes (as long as you share some code along with your presentation).
Best of Luck,
No, the deadline is not extended. Submissions made after the deadline will not be included in the leaderboard; they should also fail automatically once the queue calms down.
Best of Luck,
@dami: Across all the submissions you mention, the image builder consistently runs out of memory because of the large number of models included in your submission. The Docker image alone reaches ~50 GB halfway through the build process, at which point the image builders run out of memory and the evaluation unfortunately stops!
We can discuss with the Amazon team whether these submissions should be evaluated separately after giving the image builders more storage, but I believe they would not be inclined towards the idea, as it might be unfair to other participants.
@rodrigo_nogueira : The leaderboard will be computed using the best nDCG on the private test set among all the models submitted by that team.
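In other words, a team's leaderboard score is the maximum over all of its submissions. A minimal sketch of that rule (the function name and the (team, score) input shape are hypothetical, not part of the actual evaluation pipeline):

```python
from collections import defaultdict

def best_scores(submissions):
    """Given (team, ndcg) pairs, return teams ranked by their best score.

    Illustrates the leaderboard rule: each team is scored by the
    maximum private-test nDCG across all of its submitted models.
    """
    best = defaultdict(float)  # nDCG is non-negative, so 0.0 is a safe floor
    for team, ndcg in submissions:
        best[team] = max(best[team], ndcg)
    # Rank teams by best score, descending
    return sorted(best.items(), key=lambda kv: -kv[1])

# Example with made-up scores:
subs = [("A", 0.89), ("A", 0.91), ("B", 0.90)]
print(best_scores(subs))  # [('A', 0.91), ('B', 0.90)]
```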
@Ive_s : That is correct. The evaluators will not accept your submissions after the deadline.
@DCMXT_O : Those evaluations have succeeded indeed. The labels on the gitlab issue also have been corrected.
If the issue occurs again, please do not worry about it: if the status for both your public_test_phase and private_test_phase is marked as
Success 🎉, then your submission has been correctly evaluated and is being counted towards the final leaderboard.
Best of luck,
:rotating_light: Emergency!!! We found that our git submission caused the automation to fail, and no errors were displayed in the log.
@TransiEnt : The organizers of this competition reviewed the points you raised and still hold the stance that it is okay to use External Datasets in preparing your submissions (during the training phase), and also during inference. Your submissions are still expected not to fail in case of unseen data.
Additionally, the organizers do not believe that the use of External Datasets provides any unfair advantage to any specific team.
@madara : Appreciate the feedback. The community contributions are not supposed to be “submissions” to begin with; they are meant to be open-ended resources created for the community, by the community. We choose to reward the ones that we feel are beneficial to the community as a whole.
In fact, some community contributions that won in the past were completely outside of the AIcrowd ecosystem (YouTube videos created by participants with their walkthroughs). We would love to retain the open-endedness here and avoid adding an excessively bureaucratic pipeline for submission and consideration.
We are constantly trying to make this challenge better for you and would really appreciate any feedback you might have.
Please reply to this thread with your suggestions and feedback on making the challenge better for you!
- What have been your major pain points so far?
- What would you like to see improved?