4 Followers
0 Following
dipam
Dipam Chakraborty

Organization

ML Engineer at AIcrowd

Location

Kolkata, IN

Badges

7
5
3


Activity


Ratings Progression


Challenge Categories


Challenges Entered

Latest submissions

graded 227169
graded 227166
graded 227164

Latest submissions

graded 220243
failed 220238
failed 220005

Understand semantic segmentation and monocular depth estimation from downward-facing drone images

Latest submissions

graded 214611
failed 214573
failed 214570

Latest submissions

failed 219223
graded 213055
graded 213054

Latest submissions

graded 209775
failed 209210
graded 208987

A benchmark for image-based food recognition

Latest submissions

No submissions made in this challenge.

Using AI For Building’s Energy Management

Latest submissions

failed 205233
graded 199053
graded 199034

Latest submissions

No submissions made in this challenge.

What data should you label to get the most value for your money?

Latest submissions

No submissions made in this challenge.

Interactive embodied agents for Human-AI collaboration

Latest submissions

graded 199453
graded 199452
graded 198521

Latest submissions

No submissions made in this challenge.

Latest submissions

No submissions made in this challenge.

Behavioral Representation Learning from Animal Poses.

Latest submissions

graded 198630
graded 197504
graded 197503

Airborne Object Tracking Challenge

Latest submissions

No submissions made in this challenge.

ASCII-rendered single-player dungeon crawl game

Latest submissions

graded 158823
failed 158209
failed 158208

Latest submissions

No submissions made in this challenge.

Latest submissions

graded 152892
graded 152891
failed 152884

Machine Learning for detection of early onset of Alzheimers

Latest submissions

No submissions made in this challenge.

Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments

Latest submissions

No submissions made in this challenge.

Self-driving RL on DeepRacer cars - From simulation to real world

Latest submissions

graded 165209
failed 165208
failed 165206

Robustness and teamwork in a massively multiagent environment

Latest submissions

No submissions made in this challenge.

Latest submissions

No submissions made in this challenge.

5 Puzzles 21 Days. Can you solve it all?

Latest submissions

No submissions made in this challenge.

Multi-Agent Reinforcement Learning on Trains

Latest submissions

No submissions made in this challenge.

Latest submissions

graded 143804
graded 125756
graded 125751

5 Problems 15 Days. Can you solve it all?

Latest submissions

No submissions made in this challenge.

Learn to Recognise New Behaviors from limited training examples.

Latest submissions

graded 125756
graded 125589

Reinforcement Learning, IIT-M, assignment 1

Latest submissions

graded 125767
submitted 125747
graded 125006

IIT-M, Reinforcement Learning, DP, Taxi Problem

Latest submissions

graded 125767
graded 125006
graded 124921

Latest submissions

graded 128400
submitted 128365

Latest submissions

failed 131869
graded 130090
graded 128401

Latest submissions

failed 131869
graded 130090
graded 128401

Latest submissions

graded 135842
graded 130545

Latest submissions

No submissions made in this challenge.

Round 1 - Completed

Latest submissions

No submissions made in this challenge.

Identify Words from silent video inputs.

Latest submissions

No submissions made in this challenge.

Round 2 - Active | Claim AWS Credits by beating the baseline

Latest submissions

graded 198630
graded 182252
graded 178951

Round 2 - Active | Claim AWS Credits by beating the baseline

Latest submissions

graded 197504
graded 197503
graded 182254

Use an RL agent to build a structure with natural language inputs

Latest submissions

graded 199453
graded 199452
graded 198521

Language assisted Human - AI Collaboration

Latest submissions

graded 196399
graded 196379
failed 196363

Latest submissions

graded 200962
failed 200887
submitted 200885

Estimate depth in aerial images from monocular downward-facing drone

Latest submissions

graded 214522
graded 214521
failed 214517

Perform semantic segmentation on aerial images from monocular downward-facing drone

Latest submissions

graded 214611
failed 214573
failed 214570

Music source separation of an audio signal into separate tracks for vocals, bass, drums, and other

Latest submissions

failed 219223
graded 213055
graded 213032

Source separation of a cinematic audio track into dialogue, sound-effects and misc.

Latest submissions

graded 213054
failed 213053
submitted 213031

Latest submissions

graded 206453
submitted 206452
submitted 206337

Latest submissions

graded 227169
graded 211920
failed 211916

Latest submissions

graded 227164
graded 211921

Latest submissions

graded 227166
graded 212284

Latest submissions

graded 223574
graded 223573
graded 223572

Latest submissions

No submissions made in this challenge.
Participant Rating
nachiket_dev_me18b017 0
cadabullos 0
alina_porechina 0
ryan811 0

Amazon KDD Cup '23: Multilingual Recommendation Ch

πŸ’¬ Feedback & Suggestions

Yesterday

@CPMP 5 submissions per team should be enforced by the UI on a per-day basis (00:00 to 00:00 UTC fixed window). We have two settings that organizers can choose from: a 24-hour rolling window and a 24-hour fixed window. For this challenge it's set to the fixed window.

Is your team able to make more than 5 submissions in the fixed window?
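The difference between the two window settings can be sketched as follows (illustrative Python, not AIcrowd's actual code; the function names and limit are made up for the example):

```python
from datetime import datetime, timedelta, timezone

DAILY_LIMIT = 5  # submissions allowed per team per window

def fixed_window_count(times, now):
    # Fixed window: count submissions since 00:00 UTC of the current day.
    day_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return sum(1 for t in times if t >= day_start)

def rolling_window_count(times, now):
    # Rolling window: count submissions in the trailing 24 hours.
    return sum(1 for t in times if t >= now - timedelta(hours=24))

def can_submit(times, now, fixed=True):
    count = fixed_window_count(times, now) if fixed else rolling_window_count(times, now)
    return count < DAILY_LIMIT

# Five submissions made late yesterday (UTC); it is now 01:00 UTC.
now = datetime(2023, 5, 20, 1, 0, tzinfo=timezone.utc)
times = [now - timedelta(hours=h) for h in (2, 3, 4, 5, 6)]

print(can_submit(times, now, fixed=True))   # True: fixed window reset at midnight
print(can_submit(times, now, fixed=False))  # False: rolling window still counts them
```

With the fixed window, yesterday's submissions stop counting at 00:00 UTC; with the rolling window, they keep counting until 24 hours have elapsed.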

πŸ’¬ Feedback & Suggestions

2 days ago

On Kaggle a merge is possible only if the sum of submission counts does not exceed 5 times the number of days spent on the competition so far.

Apologies, I wasn’t aware of this.

I’ll mention that we generally do not run competitions without private test sets; the lack of one is where this becomes a much bigger issue. This was an oversight on our part.

I checked the leaderboards and see that there is one team that has managed to make a lot of submissions doing this. After the team merge deadline we’ll check for any more unfair merges, and let the Amazon team decide if they want to provide everyone else with additional submissions to keep things fair.

πŸ’¬ Feedback & Suggestions

2 days ago

Agreed, but this is the system we have, and it applies to everyone. If I understand correctly, it is also the system on some other platforms.

I understand it’s an outsized advantage for this challenge, as it’s not a code-based challenge and doesn’t have a private test set.

We’re open to alternative ideas for future challenges; please share your feedback here or via DM.

πŸ’¬ Feedback & Suggestions

2 days ago

Hi @BenediktSchifferer

Yes, every team is limited to 5 submissions per day. After a team merge, the submission limit applies to the whole team.

πŸ’¬ Feedback & Suggestions

6 days ago

@ECNU_Wei_Zhang Please ask all your team members to DM me and confirm that they want to leave the team.

πŸ’¬ Feedback & Suggestions

7 days ago

@ECNU_Wei_Zhang Okay, your team has been removed.

Clarification and More Information for Phase 2

9 days ago

@gaozhanfire Thanks for letting me know. Leaderboards should be displayed correctly now.

Clarification and More Information for Phase 2

9 days ago

@tereka Yes, you can use the Phase 1 test set as you like.

Clarification and More Information for Phase 2

9 days ago

@tereka The competition will be ranked based on the results of Phase 2. The deadline may be extended; that is up to Amazon, and they haven’t notified us yet.

Clarification and More Information for Phase 2

9 days ago

@BenediktSchifferer The only difference between Phase 1 and Phase 2 is the test sets. All new submissions are scored on the Phase 2 test sets only.

Clarification and More Information for Phase 2

9 days ago

The Phase 2 datasets are available on the Resources page.

They have been available since Phase 2 submissions opened, about 9 hours ago.

Sound Demixing Challenge 2023

Submission Times

9 days ago

@quickpepper947 We do not check for identical commits when counting submissions. I missed that the date you transitioned from debug false to true was not the date you got 6 submissions. In that case I’ll need to dig further into why you got 6 submissions.

I suspect it may be related to the tags having identical timestamps, as you mentioned; further investigation is required.

Submission Times

9 days ago

@XavierJ , yes, all teams had the possibility to use 6 submissions due to this issue. However, to the best of my knowledge no one used it intentionally. I apologize again that your team ended up having fewer submissions because of this.

I also checked @quickpepper947’s extra submission, and 2 out of the 6 made on 8 May are actually identical commits.

Submission Times

10 days ago

@XavierJ , thanks for the rebuttal post. Indeed, some deeper investigation was required on my part, and I believe I’ve now found the root cause of the issue.

Along with the difference in the submission-counting window setting mentioned in my previous post, there is another issue that silently affected your team, due to an unfortunate mistake on our end.

The issue lies in the key debug: true in aicrowd.json, which your team seems to have in all your submissions. This key has long been deprecated, and we do not include it in our starter kits. For legacy reasons it is still present in our evaluations codebase, and it allows only 1 submission per day when set to true. The submission otherwise runs normally, on the same dataset with the same settings.

However, it seems the mention of this key has not been removed from our autogenerated documentation for the Create Submission page. I presume this is where you got the key and added it to aicrowd.json? Normally, people do not add this key, and this issue went undetected.

In your case, since your team has 4 members, this led to the misunderstanding that every team member gets 1 submission each. In fact, submissions were failing because the debug key was set.

This was also the case with submissions from @quickpepper947: he was not using the debug key for most of the competition, but added it as true for one submission and then set it to false in the next. These two settings created separate counters for the submissions: 1 for debug set to true and 5 for debug set to false. This led to 6 submissions being scored in one day.


We sincerely apologize for the confusion caused by the debug key not being properly deprecated and removed from all communication channels. We’ll fix this issue as soon as possible.
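The counter behaviour described above can be sketched in a few lines (hypothetical Python; the class, limits, and team names are illustrative, not taken from the actual evaluations codebase):

```python
from collections import defaultdict

# Hypothetical per-day limits: debug=True allows 1, debug=False allows 5.
DAILY_LIMIT = {True: 1, False: 5}

class BuggyQuota:
    """Daily counter keyed on (team, debug) instead of team alone.

    A team that mixes debug values gets two independent counters,
    so 1 (debug) + 5 (normal) = 6 submissions are scored in one day.
    """

    def __init__(self):
        self.counts = defaultdict(int)

    def try_submit(self, team, debug):
        key = (team, debug)  # bug: the flag should not be part of the key
        if self.counts[key] >= DAILY_LIMIT[debug]:
            return False
        self.counts[key] += 1
        return True

quota = BuggyQuota()
flags = [True] + [False] * 6  # one debug submission, then six normal attempts
accepted = sum(quota.try_submit("team_a", debug=d) for d in flags)
print(accepted)  # 6: the seventh attempt is rejected, but 6 were scored
```

Keying the counter on the team alone, regardless of the flag, would cap the day at 5 as intended.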

Submission Times

17 days ago

Dear @JusperLee ,

Thank you for bringing your concerns to our attention. We have thoroughly investigated the issue and have found that there was a misunderstanding, for which we apologize.

Regarding the submission quota, we would like to clarify the following points. Prior to the extension of the deadline, the submission limit was initially set to 10 submissions per day. However, due to a high submission load that caused many submissions to become stuck, we made the decision to extend the deadline by one week. During this extension period, the submission limit was 5 per day.

Regardless of the size of the team, every team is granted 5 submissions per day (or 10 when the limit was 10). This allocation is not based on the number of team members. Therefore, if only one member of your team submits, they can utilize all 5 submissions for that day. If multiple people submit, it counts towards the entire team’s quota.

It is important to note that our platform does not have a mechanism in place to allow teams or participants to have a different number of submissions. The submission limit remains consistent for all teams throughout the challenge.

Lastly, there was a miscommunication regarding the submission window. Our platform supports two options: a rolling 24-hour window or a fixed window between UTC 00:00:00 and UTC 23:59:59. While the intention was to use the rolling 24-hour window, an unintended bug caused the fixed window to be used for the CDX challenge. We acknowledge that this was not accurately reflected in the forum post. Upon reviewing all the submissions made in the challenge, we can confirm that no team exceeded the limit of 5 submissions in a day within the fixed quota-reset window (UTC 00:00:00 to UTC 23:59:59). Your screenshots also confirm this.

Once again, we apologize for any confusion and inconvenience caused. We value your feedback and will take it into consideration as we strive to enhance the clarity and accuracy of our communications. Thank you for your understanding.

MDX Final Leaderboard A Error

17 days ago

@kin_wai_cheuk You’re right, it had wrongly used the Round 2 dataset’s SDR instead of the full SDR. This is fixed now.

Something wrong with Final Leaderboard B in CDX23 track

21 days ago

It was a mistake on my part; I used the same selection key. It’s fixed now.

Are these wrong leaderboard submissions? Will they be removed?

21 days ago

@subatomicseer , I inspected the inference code, and indeed the submissions you mentioned were baseline models, which are not allowed for leaderboards B and C. Thank you for mentioning these.

Are these wrong leaderboard submissions? Will they be removed?

24 days ago

Thanks for pointing this out, I’ll follow up with the participants.

Of course, in general we cannot prevent wrongly marked submissions from showing up on the leaderboard if the users don’t tell us to remove them.

However, for the data-constrained leaderboards of both CDX (LB A) and MDX (LB A and B), we will collect the training code from the winners and reproduce the results, so any wrongly marked submissions will certainly be removed from the final leaderboards.

Aicrowd_gym ModuleNotFoundError

About 1 month ago

Hi @crlandsc

Apologies for the confusion. We are making some changes to our evaluation setup, hence the difference from the documentation provided. However, I understand that with the challenge ending, using environment.yml might be important, so we’ve added back support for it.

I’ve made some minor changes to your environment.yml and made a submission. I believe there is now an error when the code runs; please check the logs.

Hope this helps.
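For reference, a minimal conda-style environment.yml of the kind such submissions typically use (illustrative only; the environment name and package list are made up, not taken from the actual submission):

```yaml
name: submission-env
channels:
  - conda-forge
dependencies:
  - python=3.9
  - numpy
  - pip
  - pip:
      - soundfile  # example of a pip-only dependency
```

The evaluator typically recreates the environment from this file before running the submitted code, so an import error in the logs usually points at a package missing from this list.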

