
Location
Badges
Activity
Ratings Progression
Challenge Categories
Challenges Entered
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions
failed | 212961
Identify user photos in the marketplace
Latest submissions
graded | 212960
graded | 212890
graded | 212884
A benchmark for image-based food recognition
Latest submissions
graded | 181483
graded | 181374
failed | 181373
Visual Product Recognition Challenge 2023

Product Matching: Inference failed
3 days ago
@dipam, could you please check
#212567
#212566
It failed at the “Build Packages And Env” step, although I have changed only the NN params.
To me it is very strange…

Is everybody using same Products 10k dataset
24 days ago
I don't know about the other competitors (I can only assume), but we use only this dataset, and it is sufficient.
So working on the model, the training pipeline, etc. is what made it possible to reach LB=0.62114.
Food Recognition Benchmark 2022

End of Round 2 ⏱️
11 months ago
Thanks to AIcrowd for hosting this yearly competition and benchmark. It was a lot of fun working on it, exploring and learning instance segmentation models to solve the food recognition task.
Thanks to my teammates for this work.
Thanks @shivam for all the help with the AIcrowd infrastructure, and @gaurav_singhal for the paper on your best approach from the previous year and for the unbelievable score race with the other participants for a good part of this competition!
Also, congratulations to @gaurav_singhal and @Camaro!!!

Extension of the round?
11 months ago
I see, so the problem was with the countdown. Thank you for the clarification.

Extension of the round?
11 months ago
Hi, @shivam,
Has the competition been extended?
In the morning the timer showed 3 days left; now it is 5 days left, and I could not find any announcement about it.

Getting less mAP on MMdetection
11 months ago
If you need more info about the difference between one-stage and two-stage models, you could read, for example, about the RPN (Region Proposal Network) and how it works.
And also, for example, about SOLO (a one-stage model).
You could try SOLO in the mmdet library as is and see if it helps you boost your score.
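If you want to try SOLO quickly, a minimal MMDetection config sketch could look like the one below. This is only an illustration, not the competition's actual config: the `_base_` path follows mmdet 2.x conventions but may differ in your version (check your checkout's `configs/solo/` directory), and the class count and data path are placeholders.

```python
# Hypothetical minimal config for trying SOLO in MMDetection 2.x.
# The _base_ path is an assumption; look it up in your mmdet checkout.
_base_ = 'configs/solo/solo_r50_fpn_1x_coco.py'

# Placeholder: replace with the number of categories in your annotation file.
NUM_CLASSES = 40

# Override only the class count; everything else is inherited from _base_.
model = dict(mask_head=dict(num_classes=NUM_CLASSES))

# Hypothetical path to the competition's COCO-style data.
data_root = 'data/food_recognition/'
```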

Getting less mAP on MMdetection
11 months ago
Hi, @saidinesh_pola,
In my view, the big secret here is in the data processing before training; the difference between detectron2 and mmdetection is not that big.
I assume it is not a big secret that
the #1 team uses mmdet,
the #2 team uses mmdet,
the #3 team uses mmdet,
the #4 team uses detectron2
(you can infer this from the errors in some of their submissions).
So, in my opinion, mmdet is better due to its large variety of models, but that is not the main factor here.
For a long time I could not find the data-processing idea and tried to train on the data as is. I found that only one-stage models could get a good score, while all models with two or more stages did poorly here (like Mask R-CNN, HTC, DetectoRS, and so on).
Once I understood the difference between one-stage and two-stage models, I quickly found the data-processing idea.
I think the difference between your score and the scores of the other top teams is the data-processing step, not the model, the parameters, or sophisticated augmentation.
I'd recommend you try out a one-stage model and see the results…

Local Run Produces Different AP
12 months ago
Yes, I get the same results. I assume the evaluation runs on several machines, or at least several GPUs, so it does not wait until all the images are processed before computing mAP. The organizers probably decided to evaluate the score during inference, and I assume that when, say, 10% of the images have been processed, the other 90% of the answers are set to 0.
Thus, your score keeps improving during the evaluation as more and more photos are processed (fewer zeros remain).
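A toy sketch of that scoring behaviour (this is my own illustration, not the organizers' actual metric; real mAP is not exactly linear in the number of processed images, but the monotone effect is the same when missing predictions count as 0):

```python
def partial_score(per_image_scores, n_processed):
    """Mean per-image score when only the first n_processed images have
    real predictions and the rest are counted as empty (score 0)."""
    return sum(per_image_scores[:n_processed]) / len(per_image_scores)

# Hypothetical per-image AP values for a 4-image test set.
scores = [0.5, 0.25, 0.75, 0.5]
running = [partial_score(scores, k) for k in range(1, len(scores) + 1)]

# The running score can only go up as more images are evaluated.
assert all(a <= b for a, b in zip(running, running[1:]))
```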

MMdetection submission
12 months ago
Yes, in this round the val dataset has been copied into the train dataset. So you should just remove the images that are in the val dataset from the train dataset yourself, and you will get the same workflow as in round 1.
Hope this helps.
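A minimal sketch of that clean-up, assuming the standard COCO annotation layout (`images` with `id`/`file_name`, `annotations` with `image_id`); the function name is mine, not part of any library:

```python
def remove_val_images(train, val):
    """Drop images (and their annotations) from a COCO-style train dict
    when the same file_name also appears in the val dict."""
    val_files = {img["file_name"] for img in val["images"]}
    kept_images = [img for img in train["images"]
                   if img["file_name"] not in val_files]
    kept_ids = {img["id"] for img in kept_images}
    kept_anns = [ann for ann in train["annotations"]
                 if ann["image_id"] in kept_ids]
    return {**train, "images": kept_images, "annotations": kept_anns}
```

Load both JSON files, run the train dict through this, and save the result as your new training annotation file.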


:rocket: Round 2 Launched
About 1 year ago
Hi, @shivam. We have managed to build a pipeline that submits normally for the 2nd round, and it worked on several weights for a few days. However, today it worked for the 1st submission, while for the next two it looks like something went wrong with the system (after 5 hours of waiting we got a timeout error on one submission, and the other one is still running). I have seen a similar case today in your submission as well…
Could you please check it and, if possible, rerun our submissions
176980 and 176979


:rocket: Round 2 Launched
About 1 year ago
@shivam, we have rerun the same submissions, and one is OK from the second run, the other one from the 3rd run. So we suspect maybe something is wrong with the submission system?
But we still could not submit 176725; could you please check this one?
Thank you in advance!


MMDetection Submission
About 1 year ago
Thank you very much for the quick response.
We had not realized that there are restrictions on the sizes of the downloaded files.
Thus, we tried to upload not just the best weights for the model, but other weights as well, plus a lot of redundant stuff that is not necessary for the submission…)

MMDetection Submission
About 1 year ago
Hi, @shivam. We still have an issue with an active submission using the MMDetection colab.
Could you please check submission #174325?
Logs from GitLab:

Issue with active submission
About 1 year ago
Hi, @shivam. We still could not find the test images in the folders, and we also could not find the AICROWD_TEST_IMAGES_PATH and AICROWD_PREDICTIONS_OUTPUT_PATH env variables.
Could you please check our submissions and help us with this?
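For local debugging it can help to read those variables with a fallback; the fallback paths below are hypothetical, local-only defaults, and on the evaluator the two variables named above are expected to be set:

```python
import os

# Fallback paths are hypothetical, for local runs only; the AIcrowd
# evaluator is expected to set these variables itself.
test_images_path = os.getenv("AICROWD_TEST_IMAGES_PATH", "./data/test_images")
predictions_path = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "./predictions.json")
```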

Example submission template
About 1 year ago
Thank you very much for the prompt response.
Update: yes, now it is working OK!

Example submission template
About 1 year ago
Hi, @shivam,
We are facing a problem with active submission due to the limit on the number of submissions. However, we made only 1 submission in the last day, and another colleague from my team has not submitted anything in the last 24 hours but could not submit due to the same error.
Could you help us with this issue, please?
Is everybody using same Products 10k dataset
3 days ago
Yes, they are strongly correlated.