Mykola Lavreniuk (Mykola_Lavreniuk)
0 Followers · 0 Following

Location: Kyiv, UA

Badges: 1 · 1 · 0

Activity
[contribution heatmap (Mar–Mar, by weekday) not reproduced]

Challenges Entered

Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions: failed 212961
Latest submissions: graded 212960, graded 212890, graded 212884

A benchmark for image-based food recognition
Latest submissions: graded 181483, graded 181374, failed 181373

A benchmark for image-based food recognition
Latest submissions: no submissions made in this challenge.

Visual Product Recognition Challenge 2023

Is everybody using same Products 10k dataset

3 days ago

Yes, they are strongly correlated.

Product Matching: Inference failed

3 days ago

@dipam, could you please check
#212567
#212566
It failed at the “Build Packages And Env” step, even though I only changed NN params.
This seems very strange to me…

Is everybody using same Products 10k dataset

24 days ago

I don’t know about other competitors (I can only guess), but we use only this dataset, and it is sufficient.
So working on the model, training pipeline, etc. makes it possible to get LB=0.62114.

Food Recognition Benchmark 2022

End of Round 2⏱️

11 months ago

Thanks to AIcrowd for hosting this yearly competition and benchmark. It was a lot of fun working on it, exploring and learning instance segmentation models to solve the food recognition task.
Thanks to my teammates for this work.
Thanks @shivam for all your help with the AIcrowd infrastructure, @gaurav_singhal for your paper on last year's best approach and the unbelievable race on score :slight_smile:, and the other participants for a good time in this competition!
Also, congratulations to @gaurav_singhal and @Camaro!!!

Extension of the round?

11 months ago

I see, so the problem was with the countdown. Thank you for the clarification :slight_smile:

Extension of the round?

11 months ago

Hi, @shivam,
Has the competition been extended?
In the morning the timer showed 3 days left; now it shows 5 days left, and I could not find any announcement about it.

Getting less mAP on MMdetection

11 months ago

If you need more info about the difference between one-stage and two-stage models, you could read about how the RPN (region proposal network) works, for example, and also about SOLO (a one-stage model).
You could try SOLO in the mmdet library as is and see if it helps you boost your score.
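For illustration, here is a minimal sketch of running SOLO through mmdetection's Python API (mmdet 2.x); the checkpoint path and test image below are placeholders, not files from this challenge, and for real training you would point the config at the food dataset instead:

```python
from mmdet.apis import init_detector, inference_detector

# Standard SOLO config shipped with mmdetection; the checkpoint and image
# paths are placeholders for illustration only.
config_file = "configs/solo/solo_r50_fpn_1x_coco.py"
checkpoint_file = "checkpoints/solo_r50_fpn_1x_coco.pth"

model = init_detector(config_file, checkpoint_file, device="cuda:0")

# Run instance segmentation on one image and save the visualization.
result = inference_detector(model, "demo/food.jpg")
model.show_result("demo/food.jpg", result, out_file="solo_result.jpg")
```

Training then goes through the usual `python tools/train.py <config>` after overriding the dataset paths in the config.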

Getting less mAP on MMdetection

11 months ago

Hi, @saidinesh_pola,
The big secret here is in the data processing before training (in my opinion), and the difference between detectron2 and mmdetection is not that big.
I assume it is not a big secret that
the #1 team uses mmdet,
the #2 team uses mmdet,
the #3 team uses mmdet,
the #4 team uses detectron2
(you can observe this from the errors in some of their submissions).
So, in my opinion mmdet is better due to its larger variety of models, but that is not the main reason here.
For a long time I could not find the data processing idea and tried to train on the data as is. I found that only one-stage models could get a good score, while all models with two or more stages did poorly here (like Mask R-CNN, HTC, DetectoRS and so on).
Once I understood the difference between one-stage and two-stage models, I quickly found the data processing idea.
I think the difference between your score and the top teams' scores is the data processing step, not the model, parameters, or sophisticated augmentation.
I'd recommend you try a one-stage model and see the results…

Local Run Produces Different AP

12 months ago

Yes, I see the same results. I assume the evaluation runs on several machines, or at least several GPUs, so they do not wait until all the images are processed to compute mAP afterwards. The organizers probably decided to evaluate the score during inference, and I assume that when, for example, 10% of the images have been processed, the other 90% of answers are set to 0.
Thus, your score keeps improving as more and more photos are evaluated (fewer zeros remain).

MMdetection submission

12 months ago

Yes, in this round the val dataset has been copied into the train dataset. So you should just remove the images that are in the val dataset from the train dataset yourself, and you will get the same workflow as in round 1.
Hope it helps.
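As a rough illustration of that de-duplication, assuming COCO-format annotation files (the paths below are placeholders, not the challenge's actual file names):

```python
import json

# Load the val and train annotations (COCO format); paths are illustrative.
with open("data/val/annotations.json") as f:
    val = json.load(f)
with open("data/train/annotations.json") as f:
    train = json.load(f)

# File names of every image that belongs to the val split.
val_file_names = {img["file_name"] for img in val["images"]}

# Keep only train images that are not duplicated from val,
# then keep only the annotations attached to those images.
train["images"] = [img for img in train["images"] if img["file_name"] not in val_file_names]
kept_ids = {img["id"] for img in train["images"]}
train["annotations"] = [a for a in train["annotations"] if a["image_id"] in kept_ids]

with open("data/train/annotations_no_val.json", "w") as f:
    json.dump(train, f)
```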

:rocket: Round 2 Launched

About 1 year ago

@shivam, thank you for the quick response!

:rocket: Round 2 Launched

About 1 year ago

Hi, @shivam. We have managed to build a pipeline to submit normally for the 2nd round, and it worked with several sets of weights for a few days. However, today the 1st submission worked, but for the next two it looks like something went wrong with the system (after 5 hours of waiting we got a timeout error for one submission, and the other one is still running). I have seen a similar case in one of your submissions today as well…
Could you please check it and, if possible, rerun our submissions
176980 and 176979?

:rocket: Round 2 Launched

About 1 year ago

Thank you very much for the very fast and clear response!

:rocket: Round 2 Launched

About 1 year ago

@shivam, we have rerun the same submissions: one was OK on the second run, the other on the 3rd run. So we suspect something may be wrong with the submission system?
But we still could not submit 176725, could you please check this one?
Thank you in advance!

:rocket: Round 2 Launched

About 1 year ago

(post deleted by author)

MMDetection Submission

About 1 year ago

Thank you very much for the quick response.
We had not realized that there are restrictions on the sizes of the downloaded files.
Thus, we tried to upload not just the best weights for the model, but other weights as well, along with a lot of redundant stuff that is not necessary for the submission…

MMDetection Submission

About 1 year ago

Hi, @shivam. We still have an issue with the active submission using the MMDetection colab.
Could you please check submission #174325?

Logs from GitLab:

Issue with active submission

About 1 year ago

Hi, @shivam. We still could not find the test images in the folders, and we also could not find the AICROWD_TEST_IMAGES_PATH and AICROWD_PREDICTIONS_OUTPUT_PATH env variables.
Could you please check our submissions and help us with this?
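For reference, a minimal sketch of how a predict script can read those env variables with local fallbacks (the fallback paths here are illustrative defaults, not values documented by the challenge):

```python
import os

# Paths the evaluator is expected to provide; fall back to illustrative
# local defaults when the variables are not set.
test_images_path = os.environ.get("AICROWD_TEST_IMAGES_PATH", "./data/test_images")
predictions_path = os.environ.get("AICROWD_PREDICTIONS_OUTPUT_PATH", "./predictions")

print(f"Reading test images from: {test_images_path}")
print(f"Writing predictions to:   {predictions_path}")
```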

Example submission template

About 1 year ago

Thank you very much for the prompt response.
Update: yes, now it is working OK!

Example submission template

About 1 year ago

Hi, @shivam,
We are facing a problem with the active submission due to the limit on the number of submissions. However, we made only 1 submission in the last day, and another colleague from my team has not submitted anything in the last 24 hours and could not submit due to the same error.
Could you please help us with this issue?
