Thanks to AIcrowd for hosting this yearly competition and benchmark. It was a lot of fun working on it, exploring and learning instance segmentation models to solve the food recognition task.
Thanks to my teammates for their work.
Thanks @shivam for all your help with the AIcrowd infrastructure, and @gaurav_singhal for the paper on your best approach from the previous year, and for the unbelievable score race with the other participants, which made for a good time in this competition!
Also, congratulations to @gaurav_singhal and @Camaro!!!
Understood, so the problem was with the countdown. Thank you for the clarification.
If you need more info about the difference between one-stage and two-stage models, you could read, for example, about how an RPN (region proposal network) works.
And also about SOLO, a one-stage model, for example.
You could try SOLO in the mmdet library as-is and see if it helps boost your score.
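For reference, here is a minimal sketch of running SOLO inference with mmdetection's high-level Python API. The config and checkpoint paths are assumptions based on the mmdetection 2.x model zoo layout, and the image path is hypothetical, so adjust them to your setup.

```python
from mmdet.apis import init_detector, inference_detector

# Assumed paths following the mmdetection 2.x repo layout; replace with
# the config/checkpoint you actually use.
config_file = 'configs/solo/solo_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/solo_r50_fpn_1x_coco.pth'

# Build the model and load the trained weights on GPU 0.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run instance segmentation on a single (hypothetical) image; the result
# holds per-class boxes/masks with confidence scores.
result = inference_detector(model, 'demo/food_image.jpg')

# Draw the predictions and save them for a quick visual check.
model.show_result('demo/food_image.jpg', result, out_file='result.jpg')
```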
As for me, the big secret here is in the data processing before training, and the difference between detectron2 and mmdetection is not that big.
I assume it is not a big secret that
#1 team uses mmdet,
#2 team uses mmdet,
#3 team uses mmdet,
#4 team uses detectron2,
(you can tell from the errors in some of their submissions).
So, as for me, mmdet is better due to its large variety of models, but that is not the main reason here.
For a long time I could not find the data processing idea and trained on the data as-is. I found that only one-stage models could get a good score, while all models with two or more stages did poorly here (like Mask R-CNN, HTC, DetectoRS, and so on).
Once I understood the difference between one-stage and two-stage models, I quickly found the data processing idea.
I think the difference between your score and the top teams' is the data processing step, not the model, the parameters, or sophisticated augmentation.
I'd recommend you try out a one-stage model and see the results…
Yes, I get the same results. I assume the evaluation runs on several machines, or at least several GPUs, so they do not have to wait until all the images are processed to calculate the mAP afterwards. The organizers probably decided to evaluate the score during inference, and I assume that when, for example, 10% of the images have been processed, the other 90% of the answers are set to 0.
Thus, your score keeps improving as more and more photos are evaluated (fewer zeros remain).
Yes, in this round the val dataset has been copied into the train dataset. So you should just remove the images that are in the val dataset from the train dataset yourself, and you will get the same workflow as in round 1.
Hope it helps.
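A minimal sketch of such a cleanup, assuming both splits are COCO-format JSON files and matching images by file name (the paths here are hypothetical):

```python
import json

# Hypothetical paths; point these at your actual annotation files.
with open('train/annotations.json') as f:
    train = json.load(f)
with open('val/annotations.json') as f:
    val = json.load(f)

# Identify val images by file name (image ids may differ between the files).
val_names = {img['file_name'] for img in val['images']}

# Keep only the train images (and their annotations) that are not in val.
kept_images = [img for img in train['images'] if img['file_name'] not in val_names]
kept_ids = {img['id'] for img in kept_images}
train['images'] = kept_images
train['annotations'] = [a for a in train['annotations'] if a['image_id'] in kept_ids]

with open('train/annotations_clean.json', 'w') as f:
    json.dump(train, f)
```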
Hi @shivam. We have managed to set up a pipeline that submits normally for the 2nd round, and it worked on several weights for a few days. However, today the 1st submission worked, but for the next two it looks like something went wrong with the system (after 5 hours of waiting, one submission returned a timeout error and the other one is still running). I have seen a similar case today in your submission as well…
Could you please check it and, if possible, rerun our submissions
176980 and 176979
Thank you very much for the very fast and clear response!
@shivam, we have rerun the same submissions: one was OK on the second run, the other on the 3rd run. So we suspect something may be wrong with the submission system?
But we still could not submit 176725, could you please check this one?
Thank you in advance!
Thank you very much for the quick response.
We had not realized that there are restrictions on the sizes of the files being downloaded.
Because of that, we were trying to upload not just the best weights for the model, but other weights as well, plus a lot of redundant stuff that is not necessary for the submission…
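In case it helps others hitting the same limit, here is a minimal sketch of slimming a PyTorch checkpoint before submission. It assumes the usual mmdet-style checkpoint layout where the optimizer state is stored next to the weights; the paths are hypothetical.

```python
import torch

# Hypothetical paths; adjust to your checkpoint and submission layout.
ckpt = torch.load('work_dirs/solo/epoch_12.pth', map_location='cpu')

# Training checkpoints usually carry optimizer state and other metadata that
# roughly double the file size; only 'state_dict' is needed for inference.
slim = {'state_dict': ckpt['state_dict']}
if 'meta' in ckpt:
    slim['meta'] = ckpt['meta']  # keep classes/config info if present

torch.save(slim, 'submission/epoch_12_slim.pth')
```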
Thank you very much for the prompt response.
Update: yes, now it is working OK!
Hi @shivam,
we are facing a problem with an active submission due to the limit on the number of submissions. However, we made only 1 submission in the last day, and another colleague from my team has not submitted anything in the last 24 hours and still could not submit because of the same error.
Could you please help us with this issue?
Yes, I have seen a similar problem with boxes rotated by 90 degrees.
Unfortunately, the dataset is far from ideal. Another example: in the ground truth labels we could observe only some minor class, while two major objects were left unlabeled (even though their classes are present in the class list).
Nevertheless, such errors are not very frequent in the dataset and thus not so important, so we could use it as-is…
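If you want to flag such suspicious annotations (for example, the 90-degree-rotated boxes), here is a minimal sketch using pycocotools that compares each annotation's stored bbox against the bounding rectangle of its own mask; the annotation file path and the 10-pixel tolerance are assumptions.

```python
import numpy as np
from pycocotools.coco import COCO

# Hypothetical path; point it at the competition's annotation file.
coco = COCO('train/annotations.json')

for ann in coco.loadAnns(coco.getAnnIds()):
    mask = coco.annToMask(ann)
    ys, xs = np.where(mask)
    if len(xs) == 0:
        continue  # empty mask, nothing to compare
    # Bounding rectangle derived from the mask itself.
    mx, my = xs.min(), ys.min()
    mw, mh = xs.max() - mx + 1, ys.max() - my + 1
    x, y, w, h = ann['bbox']
    # A bbox whose width/height strongly disagree with the mask extents
    # (e.g., swapped, as after a 90-degree rotation) deserves a manual look.
    if abs(w - mw) > 10 or abs(h - mh) > 10:
        print(f"check ann {ann['id']} on image {ann['image_id']}: "
              f"bbox {ann['bbox']} vs mask extent {[mx, my, mw, mh]}")
```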