Activity
Ratings Progression
Challenge Categories
Challenges Entered
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions
Status | Submission ID
---|---
graded | 214611
failed | 214573
failed | 214570
Audio Source Separation using AI
Latest submissions
Status | Submission ID
---|---
failed | 219223
graded | 213055
graded | 213054
Identify user photos in the marketplace
Latest submissions
Status | Submission ID
---|---
graded | 209775
failed | 209210
graded | 208987
A benchmark for image-based food recognition
Latest submissions
Using AI For Building's Energy Management
Latest submissions
Status | Submission ID
---|---
failed | 205233
graded | 199053
graded | 199034
Learning From Human-Feedback
Latest submissions
What data should you label to get the most value for your money?
Latest submissions
Interactive embodied agents for Human-AI collaboration
Latest submissions
Status | Submission ID
---|---
graded | 199453
graded | 199452
graded | 198521
Specialize and Bargain in Brave New Worlds
Latest submissions
Amazon KDD Cup 2022
Latest submissions
Behavioral Representation Learning from Animal Poses.
Latest submissions
Status | Submission ID
---|---
graded | 198630
graded | 197504
graded | 197503
Airborne Object Tracking Challenge
Latest submissions
ASCII-rendered single-player dungeon crawl game
Latest submissions
Status | Submission ID
---|---
graded | 158823
failed | 158209
failed | 158208
Latest submissions
Training sample-efficient agents in Minecraft
Latest submissions
Machine Learning for detection of early onset of Alzheimer's
Latest submissions
Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments
Latest submissions
Self-driving RL on DeepRacer cars - From simulation to real world
Latest submissions
Status | Submission ID
---|---
graded | 165209
failed | 165208
failed | 165206
Robustness and teamwork in a massively multiagent environment
Latest submissions
Latest submissions
5 Puzzles 21 Days. Can you solve it all?
Latest submissions
Multi-Agent Reinforcement Learning on Trains
Latest submissions
Latest submissions
Status | Submission ID
---|---
graded | 143804
graded | 125756
graded | 125751
5 Problems 15 Days. Can you solve it all?
Latest submissions
Learn to Recognise New Behaviors from limited training examples.
Latest submissions
Status | Submission ID
---|---
graded | 125756
graded | 125589
Reinforcement Learning, IIT-M, assignment 1
Latest submissions
Status | Submission ID
---|---
graded | 125767
submitted | 125747
graded | 125006
IIT-M, Reinforcement Learning, DP, Taxi Problem
Latest submissions
Status | Submission ID
---|---
graded | 125767
graded | 125006
graded | 124921
Latest submissions
Status | Submission ID
---|---
graded | 128400
submitted | 128365
Latest submissions
Status | Submission ID
---|---
failed | 131869
graded | 130090
graded | 128401
Latest submissions
Status | Submission ID
---|---
graded | 135842
graded | 130545
Round 1 - Completed
Latest submissions
Round 1 - Completed
Latest submissions
Identify Words from silent video inputs.
Latest submissions
Round 2 - Active | Claim AWS Credits by beating the baseline
Latest submissions
Status | Submission ID
---|---
graded | 198630
graded | 182252
graded | 178951
Round 2 - Active | Claim AWS Credits by beating the baseline
Latest submissions
Status | Submission ID
---|---
graded | 197504
graded | 197503
graded | 182254
Use an RL agent to build a structure with natural language inputs
Latest submissions
Status | Submission ID
---|---
graded | 199453
graded | 199452
graded | 198521
Language-assisted Human-AI Collaboration
Latest submissions
Status | Submission ID
---|---
graded | 196399
graded | 196379
failed | 196363
Latest submissions
Status | Submission ID
---|---
graded | 200962
failed | 200887
submitted | 200885
Estimate depth in aerial images from monocular downward-facing drone
Latest submissions
Status | Submission ID
---|---
graded | 214522
graded | 214521
failed | 214517
Perform semantic segmentation on aerial images from monocular downward-facing drone
Latest submissions
Status | Submission ID
---|---
graded | 214611
failed | 214573
failed | 214570
Music source separation of an audio signal into separate tracks for vocals, bass, drums, and other
Latest submissions
Status | Submission ID
---|---
failed | 219223
graded | 213055
graded | 213032
Source separation of a cinematic audio track into dialogue, sound-effects and misc.
Latest submissions
Status | Submission ID
---|---
graded | 213054
failed | 213053
submitted | 213031
Latest submissions
Status | Submission ID
---|---
graded | 206453
submitted | 206452
submitted | 206337
Latest submissions
Status | Submission ID
---|---
graded | 223574
graded | 223573
graded | 223572
Latest submissions
- Random-walk | Airborne Object Tracking Challenge
- R2D2 | NeurIPS 2022: CityLearn Challenge
- dipam_chakraborty | Testing Net
- Amazon KDD Cup '23: Multilingual Recommendation Ch
Feedback & Suggestions
2 days ago
On Kaggle, a merge is possible only if the sum of submission counts does not exceed 5 times the number of days spent on the competition so far.
Apologies, I wasn't aware of this.
I'll mention that we generally do not run competitions without private test sets, which is where this becomes a much bigger issue. This was an oversight on our part.
I checked the leaderboards and see that there is one team that has managed to make a lot of submissions this way. After the team merge deadline we'll check for any more unfair merges and let the Amazon team decide if they want to provide everyone else with additional submissions to keep things fair.
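For reference, the rule above boils down to a simple check. The sketch below is illustrative only, with hypothetical names; it is not Kaggle's or AIcrowd's actual code.

```python
# Illustrative sketch of the Kaggle-style merge rule described above:
# a merge is allowed only if the merged team's combined submission count
# does not exceed 5 submissions per day elapsed so far.
# Hypothetical helper, not Kaggle's or AIcrowd's actual code.

def merge_allowed(team_submission_counts, days_elapsed, daily_limit=5):
    """team_submission_counts: one submission count per merging team."""
    return sum(team_submission_counts) <= daily_limit * days_elapsed

# Example: teams with 12 and 9 submissions merging on day 5 -> 21 <= 25.
print(merge_allowed([12, 9], days_elapsed=5))  # True
```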
Feedback & Suggestions
2 days ago
Agreed, but this is the system we have, and it applies to everyone. If I understand correctly, it is also the system on some other platforms.
I understand it's an outsized advantage for this challenge, as it's not a code-based challenge and doesn't have a private test set.
We're open to alternative ideas for future challenges; please share your feedback here or in a DM.
Feedback & Suggestions
2 days ago
Yes, every team is limited to 5 submissions per day. After merging, the submission limit applies to the whole team.
Feedback & Suggestions
6 days ago
@ECNU_Wei_Zhang Please ask all your team members to DM me and confirm that they want to leave the team.
Clarification and More Information for Phase 2
9 days ago
@gaozhanfire Thanks for letting me know. Leaderboards should be displayed correctly now.
Clarification and More Information for Phase 2
9 days ago
@tereka Yes, you can use the Phase 1 test set as you like.
Clarification and More Information for Phase 2
9 days ago
@tereka The competition will be ranked based on the results of Phase 2. The deadline may be extended; that is up to Amazon, and they haven't notified us yet.
Clarification and More Information for Phase 2
9 days ago
@BenediktSchifferer The only difference between Phase 1 and Phase 2 is the test sets. All new submissions are scored on the Phase 2 test sets only.
Clarification and More Information for Phase 2
9 days ago
The Phase 2 datasets are available on the Resources page.
They have been available since the Phase 2 submissions opened, about 9 hours ago.
Sound Demixing Challenge 2023
Submission Times
9 days ago
@quickpepper947 We do not check for identical commits when counting submissions. I missed that the date you transitioned from debug false to true was not the date you got 6 submissions. In that case, I'll need to dig further into why you got 6 submissions.
I suspect it may be related to the tags having identical timestamps, as you mentioned; further investigation is required.
Submission Times
9 days ago
@XavierJ, yes, all teams had the possibility of using 6 submissions due to this issue. However, to the best of my knowledge, no one used it intentionally. I apologize again that your team ended up having fewer submissions because of this.
I also checked @quickpepper947's extra submissions, and 2 out of the 6 made on 8th May are actually identical commits.
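If you want to audit your own submissions in the same way, here is a minimal sketch that groups submission tags by the commit they point to. The tag names are hypothetical, and this is not our internal tooling.

```python
import subprocess
from collections import defaultdict

# Group submission tags by the commit they point to, so tags that are
# identical commits show up together. Tag names below are hypothetical;
# this is not AIcrowd's internal tooling.

def tags_by_commit(tags):
    by_commit = defaultdict(list)
    for tag in tags:
        sha = subprocess.check_output(
            ["git", "rev-parse", f"{tag}^{{commit}}"], text=True
        ).strip()
        by_commit[sha].append(tag)
    return by_commit

duplicates = {
    sha: tags
    for sha, tags in tags_by_commit(["submission-v1", "submission-v2"]).items()
    if len(tags) > 1
}
print(duplicates)  # non-empty if any tags point at the same commit
```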
Submission Times
10 days ago
@XavierJ, thanks for the rebuttal post. Indeed, some deeper investigation was required on my part, and I believe I've now found the root cause of the issue.
Along with the submission-counting-window setting difference I mentioned in the previous post, there is another issue that silently affected your team, due to an unfortunate mistake on our end.
The issue lies in the key `debug: true` in `aicrowd.json`, which your team seems to have in all your submissions. This key has long been deprecated, and we do not include it in our starter kits. For legacy reasons it is still present in our evaluations codebase, but it allows only 1 submission per day when the key is set to true. The submission otherwise runs normally, on the same dataset and with the same settings.
However, it seems the mention of this key has not been removed from our autogenerated documentation for the Create Submission page. I presume this is where you got the key and added it to `aicrowd.json`? Normally people do not add this key, so this issue went undetected.
In your case, since your team has 4 members, this led to the misunderstanding that every team member gets 1 submission each. In reality, the submissions were failing because the debug key was set.
This is also the case with the submissions from @quickpepper947: he was not using the debug key throughout the competition, but added it as `true` for one of the submissions and then set it back to `false` in the next submission. These two settings created separate counters for the submissions: 1 for debug set to true and 5 for debug set to false. This led to 6 submissions being scored in one day.
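To make the behaviour concrete, here is a minimal sketch (hypothetical names, not our actual evaluator code) of how keeping a separate daily counter per debug flag can let 1 + 5 = 6 submissions through on the same day:

```python
from collections import defaultdict

# Hypothetical sketch of the behaviour described above, not the actual
# AIcrowd evaluator code: the daily quota is tracked per (team, debug-flag)
# pair, so flipping `debug` in aicrowd.json starts a fresh counter.

DAILY_LIMITS = {True: 1, False: 5}   # debug submissions: 1/day, normal: 5/day
counters = defaultdict(int)          # (team, debug_flag) -> submissions today

def accept_submission(team, debug_flag):
    key = (team, debug_flag)
    if counters[key] >= DAILY_LIMITS[debug_flag]:
        return False                 # over quota for this particular counter
    counters[key] += 1
    return True

# One debug submission plus five normal ones all get accepted the same day.
results = [accept_submission("team_a", True)]
results += [accept_submission("team_a", False) for _ in range(5)]
print(sum(results))  # 6
```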
We sincerely apologize for the confusion caused by the debug key not being properly deprecated and removed from all communication channels. We'll fix this issue as soon as possible.
Submission Times
17 days ago
Dear @JusperLee,
Thank you for bringing your concerns to our attention. We have thoroughly investigated the issue and have found that there was a misunderstanding, for which we apologize.
Regarding the submission quota, we would like to clarify the following points. Prior to the extension of the deadline, the submission limit was initially set to 10 submissions per day. However, due to a high submission load that caused many submissions to become stuck, we made the decision to extend the deadline by one week. During this extension period, the submission limit was 5 per day.
Regardless of the size of the team, every team is granted 5 submissions per day (or 10 when the limit was 10). This allocation is not based on the number of team members. Therefore, if only one member of your team submits, they can utilize all 5 submissions for that day. If multiple people submit, it counts towards the entire team's quota.
It is important to note that our platform does not have a mechanism in place to allow teams or participants to have a different number of submissions. The submission limit remains consistent for all teams throughout the challenge.
Lastly, there was a miscommunication regarding the submission window. Our platform supports two options: a Rolling 24-hour window or a Fixed window between UTC 00:00:00 and UTC 23:59:59. While the intention was to utilize the Rolling 24-hour window, an unintended bug caused the Fixed window to be used for the CDX challenge. We acknowledge that this was not accurately reflected in the forum post. Upon reviewing all the submissions made in the challenge, we can confirm that no team exceeded the limit of 5 submissions in a day within the fixed window of quota reset (UTC 00:00:00 to UTC 23:59:59). Your screenshots will also confirm the same.
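For clarity, the difference between the two options can be sketched as follows; this is a hypothetical helper, not our platform's implementation:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the two quota-window options described above,
# not the platform's actual implementation. `previous` holds the UTC
# timestamps of a team's earlier submissions.

def within_quota(previous, new_time, limit=5, window="fixed"):
    if window == "fixed":
        # Fixed window: count submissions on the same UTC calendar day.
        recent = [t for t in previous if t.date() == new_time.date()]
    else:
        # Rolling window: count submissions in the preceding 24 hours.
        recent = [t for t in previous if new_time - t < timedelta(hours=24)]
    return len(recent) < limit

now = datetime(2023, 5, 8, 1, 0, tzinfo=timezone.utc)
past = [now - timedelta(hours=h) for h in (2, 3, 4, 5, 6)]  # five earlier submissions
print(within_quota(past, now, window="rolling"))  # False: all five fall in the last 24 hours
print(within_quota(past, now, window="fixed"))    # True: they all fall on the previous UTC day
```

The same set of earlier submissions can therefore pass the fixed-window check while failing the rolling-window check, which is the difference that mattered here.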
Once again, we apologize for any confusion and inconvenience caused. We value your feedback and will take it into consideration as we strive to enhance the clarity and accuracy of our communications. Thank you for your understanding.
MDX Final Leaderboard A Error
17 days ago
@kin_wai_cheuk You're right, it had wrongly used the Round 2 dataset's SDR instead of the full SDR. This is fixed now.
Something wrong with Final Leaderboard B in CDX23 track
21 days ago
It was a mistake on my part; I used the same selection key. It's fixed now.
Are these wrong leaderboard submissions? Will they be removed?
21 days ago
@subatomicseer, I inspected the inference code, and indeed the submissions you mentioned were baseline models, which are not allowed for leaderboards B and C. Thank you for mentioning these.
Are these wrong leaderboard submissions? Will they be removed?
24 days ago
Thanks for pointing this out; I'll follow up with the participants.
Of course, in general we cannot prevent wrongly marked submissions from showing up on the leaderboard if the users don't tell us to remove them.
However, for the data-constrained leaderboards of both CDX (LB A) and MDX (LB A and B), we will collect the training code from the winners and reproduce the results, so any wrongly marked submissions will certainly be removed from the final leaderboards.
Aicrowd_gym ModuleNotFoundError
About 1 month ago
Hi @crlandsc,
Apologies for the confusion. We are making some changes to our evaluation setup, hence the difference from the documentation provided. However, I understand that since the challenge is ending, using environment.yml might be important, so we've added back support for it.
I've made some minor changes to your environment.yml and made a submission. I believe there is now an error when the code runs; please check the logs.
Hope this helps.
Notebooks
- [Getting Started] ETH PSC Summer School Hackathon: A baseline code to get you started with the challenge. (dipam · 9 months ago)
- Baseline - BERT Classifier - BM25 Ranker: Official baseline that uses a BERT-based classifier and a BM25 ranker. (dipam · 10 months ago)
- Unsupervised model - SimCLR - Ant-Beetles Video Data: Unsupervised model training using contrastive learning with a modified SimCLR; see the contrastive-loss sketch after this list. (dipam · about 1 year ago)
- Unsupervised model - SimCLR - Mouse Video Data: Unsupervised model training using contrastive learning with a modified SimCLR. (dipam · about 1 year ago)
- Getting Started - Mouse-Triplets Video Data: Initial data exploration and a basic embedding using a vision model. (dipam · about 1 year ago)
- Getting Started - Ant-Beetles Video Data: Initial data exploration and a basic embedding using a vision model. (dipam · about 1 year ago)
- BSuite Challenge Starter Kit IITM RL Final Project: BSuite starter kit with a random baseline. (dipam · about 2 years ago)
- Solution for submission 128367: A detailed solution for submission 128367, submitted for the challenge IIT-M RL-ASSIGNMENT-2-GRIDWORLD. (dipam · about 2 years ago)
- Solution for submission 130090: A detailed solution for submission 130090, submitted for the challenge IIT-M RL-ASSIGNMENT-2-GRIDWORLD. (dipam · about 2 years ago)
- Solution for submission 128401: A detailed solution for submission 128401, submitted for the challenge IIT-M RL-ASSIGNMENT-2-GRIDWORLD. (dipam · about 2 years ago)
- Solution for submission 128400: A detailed solution for submission 128400, submitted for the challenge IIT-M RL-ASSIGNMENT-2-TAXI. (dipam · about 2 years ago)
- Taxi Notebook IITM RL Assignment 2: Notebook to be filled in for IITM RL Assignment 2, Taxi. (dipam · about 2 years ago)
- Gridworld Notebook IITM RL Assignment 2: Notebook to be filled in for IITM RL Assignment 2, Gridworld. (dipam · about 2 years ago)
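As a companion to the SimCLR notebooks above, here is a minimal NumPy sketch of the NT-Xent contrastive loss that SimCLR-style training optimizes. It is illustrative only and is not the code from those notebooks; the temperature and array sizes are arbitrary.

```python
import numpy as np

# Minimal NumPy sketch of the NT-Xent (normalized temperature-scaled
# cross-entropy) loss used in SimCLR-style contrastive training.
# Illustrative only; not the code from the notebooks listed above.

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    z = np.concatenate([z1, z2], axis=0)                  # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # L2-normalize rows
    sim = z @ z.T / temperature                           # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                        # exclude self-similarity
    n = z1.shape[0]
    # Each row's positive is the other view of the same sample.
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), positives].mean()

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(float(nt_xent_loss(z1, z2)))
```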
Feedback & Suggestions
Yesterday
@CPMP 5 submissions per team should be enforced by the UI on a per-day basis (00:00 to 00:00 UTC fixed window). We have two settings that organizers can choose from: a 24-hour rolling window and a 24-hour fixed window. For this challenge it's set to the fixed window.
Is your team able to make more than 5 submissions in the fixed window?