Location
Badges
Activity
Ratings Progression
Challenge Categories
Challenges Entered
Revolutionising Interior Design with AI
Latest submissions
Evaluate Natural Conversations
Latest submissions
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions
Audio Source Separation using AI
Latest submissions
A benchmark for image-based food recognition
Latest submissions
| Status | Submission ID |
|---|---|
| graded | 177111 |
| failed | 177108 |
| graded | 176842 |
What data should you label to get the most value for your money?
Latest submissions
| Status | Submission ID |
|---|---|
| graded | 179189 |
| graded | 179151 |
| graded | 179149 |
Perform semantic segmentation on aerial images from a monocular downward-facing drone
Latest submissions
Generative Interior Design Challenge 2024
Can't open baseline starter kit
8 months ago
I got a 404 when I tried to open the baseline starter kit.
@snehananavati Could you please check it out?
Data Purchasing Challenge 2022
[Announcement] Leaderboard Winners
Over 2 years ago
Thanks! It should be the same as on other platforms like Kaggle: you can just create a discussion thread to share your approach! Of course it would be most helpful if you kindly shared the code as well, but this competition was very structured, so just sharing the approach may be enough to understand what led you to win :)
[Announcement] Leaderboard Winners
Over 2 years ago
Big congrats to the winners, especially @xiaozhou_wang, it seems you won the competition by a large margin! Really curious about your solution, it would be great if you could share it with the community :)
:rotating_light: Select submissions for final evaluation
Over 2 years ago
Hi @shivam @dipam, do you have any timeline for the leaderboard update?
:rotating_light: Select submissions for final evaluation
Over 2 years ago
Hi @shivam, is there any progress?
:rotating_light: Select submissions for final evaluation
Over 2 years ago
Hi @dipam, thanks for hosting this interesting competition!
It seems the competition has finished; when will the leaderboard be finalized?
IMPORTANT: Details about end of competition evaluations
Over 2 years ago
@dipam Voted, thanks for considering our opinions seriously.
I'm convinced that it will lead to good results for everyone!
By the way, there are only 2 weeks left; are you planning to extend the deadline or keep it as it is?
:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!
Over 2 years ago
The question is how you define the best or most useful images. If it's the best for improving a 10-epoch effnet-b4 (which I suspect is underfitting), the current scheme makes sense.
But in practice, I guess people would decide to add data after trying to improve the model with the current data and finding that the performance still doesn't reach the expected level.
So my definition of "useful" here is "useful for improving the performance of a well-enough finetuned model". And I suspect the current post-training pipeline doesn't reach that level, IMHO.
:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!
Over 2 years ago
Hi @dipam, is there any update about this?
Or, please let me know if you have already decided to stick with the current training pipeline. I'll try to optimize my purchase strategy for it.
:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!
Over 2 years ago
Thanks, I totally understand the situation. I can imagine it's much harder to host a competition than just to join one as a competitor :)
Anyway, whether the modification is made or not, I'll try to do my best.
:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!
Over 2 years ago
@dipam Thanks for the comment!
I understand that Round 2 tries to make us focus more on the purchase strategies.
My concern is not about how good the final F1 score is, but about the meaning of the best additional data.
In general, increasing dataset size when your model is underfitting is commonly a bad strategy.
The same applies here: a strategy for choosing "good additional data for an underfitted model" is less practically meaningful than one for an overfitted model.
The easiest way to fix this issue is just to change the training pipeline so that the trained model overfits the 1,000-sample training dataset.
I believe that would make the competition more useful, and everyone could learn more interesting strategies.
Thanks,
:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!
Over 2 years ago
Hi @dipam, have you already considered changing the post-training code as mentioned in the comment?
In particular, the small number of epochs seems problematic to me.
You can easily check that the trained model is still underfitting the dataset by changing the number of epochs from 10 to 20 and seeing how your score improves.
That means the model has almost no need for more data, as it is still "learning" from the given data, so it might not be a good model for evaluating purchased data quality.
In a real situation, I guess the host would never use such an underfitted model to evaluate purchased data; that's why I think it's better to change the post-training code, or allow participants to change it, too.
Thanks, I hope this competition becomes an even more interesting and useful one!
Any comments are welcome ;)
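The underfitting check described above (train longer, watch the loss keep falling) can be illustrated with a toy example. This is not the competition's actual pipeline (that uses a 10-epoch effnet-b4); it is a minimal sketch using plain gradient-descent logistic regression on synthetic data, where doubling the epoch budget still lowers the training loss:

```python
import numpy as np

def train_logreg(X, y, epochs, lr=0.1):
    """Plain gradient-descent logistic regression; returns final training log loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Synthetic binary classification data
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

loss_10 = train_logreg(X, y, epochs=10)
loss_20 = train_logreg(X, y, epochs=20)
# If the loss keeps dropping when the epoch budget is doubled,
# the 10-epoch model was still underfitting the data it already had.
print(f"loss@10={loss_10:.4f}  loss@20={loss_20:.4f}")
```

When the shorter run's loss is clearly higher, adding purchased data tells you little, since the model has not yet extracted what the existing data offers.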
Post purchasing training code should be jointly optimized
Over 2 years ago
Hi, first of all, thanks for launching Round 2 of this exciting and interesting challenge!
I've just read through the Round 2 updates, and was a little surprised by the change to the post-purchase training part.
I know it's meant to make us focus more on the purchasing strategy, but in my humble opinion it should be jointly optimized with the post-purchase training part. For example, we may want to change the model size when the computing budget is small. Or, sometimes we may not want to use the ImageNet-pretrained model as-is without any extra finetuning.
I understand what makes this competition unique is the purchasing phase, but I guess what the host wants is a strong classifier for each computing and labeling budget, isn't it?
To maximize the chance of achieving that, I'd like to suggest allowing participants to modify the post-training code.
Thanks, any opinions welcome!
Food Recognition Benchmark 2022
:rocket: Round 2 Launched
Over 2 years ago
Also, could you please make sure the dataset files untar correctly?
There are some weird points: the extension is somehow .tar (not .tar.gz as the description says), and PaxHeader files are included in the image directory.
If you have a reliable way to untar them, please let me know!
Thanks,
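One workaround for the PaxHeader issue is to extract with Python's `tarfile` and skip those entries explicitly. This is a sketch under my own assumptions (the function name and layout are mine, not from the starter kit); `mode="r:*"` auto-detects the compression, so it works whether the file is a plain tar or a gzipped tar mislabelled with a .tar extension:

```python
import tarfile

def extract_images(tar_path, dest_dir):
    """Extract regular files from a tar archive, skipping PaxHeader entries.

    Returns the list of extracted member names.
    """
    extracted = []
    with tarfile.open(tar_path, mode="r:*") as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue  # skip directories, links, etc.
            if "PaxHeader" in member.name:
                continue  # skip stray PaxHeader metadata entries
            tar.extract(member, path=dest_dir)
            extracted.append(member.name)
    return extracted
```

For example, `extract_images("food_dataset.tar", "data/")` would leave only the real image files under `data/`.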
:rocket: Round 2 Launched
Over 2 years ago
Hi @shivam, thanks for the update!
Let me confirm whether all of the images in the 2.0 dataset are included in 2.1.
β° Round 1 Extended to 28th Feb
Over 2 years ago
I see, thanks :)
But you don't have to consider giving prizes; it would make me uneasy, as my submission scored 0.33, which would be worth 2nd place.
Anyway, I can't wait to try the new dataset!
β° Round 1 Extended to 28th Feb
Over 2 years ago
Thanks, I got it. I was stuck on a submission error last night…
So I intended to submit it to Round 2, as the timeline says Round 2 will start on March 1st, but it hasn't?
And let me confirm that there is no prize for 1st-3rd place if the score is below 0.47, right?
β° Round 1 Extended to 28th Feb
Over 2 years ago
Hi @vrv, is submission already closed? My submission doesn't seem to be reflected on the leaderboard…
Can't open baseline starter kit
8 months ago
Now I can access it! Thanks for the quick fix.