Mykola_Lavreniuk (Mykola Lavreniuk)

0 Followers · 0 Following
Location: Kyiv, UA
Badges: 1 · 1 · 0

Activity

[Contribution heatmap, Oct–Oct, omitted]


Challenges Entered

Small Object Detection and Classification
Latest submissions: no submissions made in this challenge

Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions: graded 218909, submitted 218901, graded 218897
Latest submissions: graded 216437, graded 216435, failed 216348

A benchmark for image-based food recognition
Latest submissions: graded 181483, graded 181374, failed 181373

A benchmark for image-based food recognition
Latest submissions: no submissions made in this challenge

Semantic Segmentation

Same submissions with different weights failing

5 months ago

@dipam, could you please look into this? Earlier this problem was solved just by resubmitting, but now I am facing a similar situation: my submissions (without any changes) fail in some cases, and in other cases a submission just gets stuck for 10 hours or even more than 24 hours.

Visual Product Recognition Challenge 2023

My solution for the challenge

5 months ago

We tried TRT before inference on our server. We also run re-ranking on the GPU, but maybe that took longer…
Even with ViT-H + re-ranking our solution took almost 10 minutes; in some cases it failed and in some cases it ran successfully, depending on the hardware.
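The first step of such a TensorRT conversion, sketched here with a placeholder model rather than our actual ViT-H pipeline, looks roughly like this (the engine is then built from the ONNX file):

```python
import torch

# Minimal sketch: export a PyTorch model to ONNX as the first step of a
# TensorRT conversion. The model and input shape are placeholders, not
# the actual ViT-H retrieval pipeline.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 128),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    opset_version=13,
    input_names=["image"], output_names=["embedding"],
    dynamic_axes={"image": {0: "batch"}},
)

# Then, on the inference machine:
#   trtexec --onnx=model.onnx --fp16 --saveEngine=model.plan
```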

My solution for the challenge

5 months ago

@bartosz_ludwiczuk, congratulations on achieving second place! I'm looking forward to reading about your winning solution.

My solution for the challenge

5 months ago

We attempted to incorporate multiple external datasets into our experiments, spending considerable time trying to train our ViT-H model jointly on Product10k and other datasets, as well as training on other datasets and fine-tuning on Product10k. Surprisingly, despite our efforts, our current leaderboard score was achieved using only the Product10k dataset; all other datasets resulted in a decrease in our score.

To improve our results, we used re-ranking as post-processing, which gave us a marginal improvement of approximately 0.01%. Additionally, we experimented with ConvNeXt and ViT-G models, which boosted our local score by about 0.03%. However, even with TensorRT, these models were unable to complete inference within 10 minutes.
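For reference, the retrieval-plus-re-ranking flow looks roughly like the sketch below. It uses simple query expansion as the re-ranking step, which is only an illustration of the idea, not our exact method, and the embeddings are random placeholders:

```python
import numpy as np

# Illustrative sketch of retrieval + a simple re-ranking step (query
# expansion). Embeddings are random placeholders; in practice they come
# from the image backbone (e.g. ViT-H).
rng = np.random.default_rng(0)
queries = rng.normal(size=(8, 512)).astype(np.float32)
gallery = rng.normal(size=(100, 512)).astype(np.float32)

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

queries, gallery = l2norm(queries), l2norm(gallery)

# First-pass retrieval: cosine similarity, top-5 gallery ids per query.
sims = queries @ gallery.T
topk = np.argsort(-sims, axis=1)[:, :5]

# Re-rank by expanding each query with its top neighbours and searching again.
expanded = l2norm(queries + gallery[topk].mean(axis=1))
reranked = np.argsort(-(expanded @ gallery.T), axis=1)[:, :5]
print(reranked[0])  # final top-5 gallery ids for the first query
```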

📥 Guidelines For Using External Dataset

6 months ago

We have tried:
rp2k
JD_Products_10K
Shopee
Aliproducts
DeepFashion_CTS
DeepFashion2
Fashion_200K
Stanford_Products

Right now we are using only Products_10K, and models from OpenCLIP.
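Loading one of these OpenCLIP backbones and extracting normalised image embeddings looks roughly like this (a sketch; the checkpoint tag is one of the public LAION releases, given as an example, and the image path is a placeholder):

```python
import torch
import open_clip
from PIL import Image

# Sketch: embed one image with an OpenCLIP ViT-H backbone.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"  # example public checkpoint
)
model.eval()

image = preprocess(Image.open("product.jpg")).unsqueeze(0)  # placeholder path
with torch.no_grad():
    emb = model.encode_image(image)
    emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalise for cosine search
print(emb.shape)  # (1, 1024) for ViT-H-14
```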

Previous successful submissions failing

6 months ago

Still waiting on the same issue, more than 23 hours now :slight_smile:

Product Matching: Inference failed

6 months ago

@dipam, thank you for the quick response

Product Matching: Inference failed

6 months ago

@dipam, have you changed some settings on the inference server?
Previously I hit roughly one failed submission per day, and just rerunning helped.
Today, however, I changed only the NN weight files and nothing else: one submission succeeded, while four others with weights of the same size, the same model, everything identical except the epoch, failed.
This seems very strange to me…
Could you please check it?

#214502

#214483

#214469

If it is a timeout, how can it be that other weights are fine, or that simply resubmitting sometimes helps?

Is everybody using same Products 10k dataset

6 months ago

Yes, they are strongly correlated.

Product Matching: Inference failed

6 months ago

@dipam, could you please check
#212567
#212566
They failed at the "Build Packages And Env" step, even though I changed only the NN parameters.
This seems very strange to me…

Is everybody using same Products 10k dataset

7 months ago

I don't know about the other competitors (I can only guess), but we use only this dataset, and it is sufficient.
So working on the model, the training pipeline, etc. makes it possible to reach LB=0.62114.

Mono Depth Perception

Round1 and Round2

6 months ago

@dipam, thank you for the clarification; it makes sense to me now!

Round1 and Round2

6 months ago

@dipam, could you please provide further details about the current situation? It appears that there is some confusion regarding the status of rounds 1 and 2. Additionally, updating the dataset at this stage of the competition seems unusual.

To clarify, it seems that round 1 has been completed, but there was no mention of it in the initial description. It is unclear what this means for the competition as a whole.

Furthermore, updating the dataset at this point in the competition seems questionable, especially if the previous dataset was used for calculating the leaderboard score. It is uncertain whether a new dataset will be used for the leaderboard calculation, and if so, how this will affect the competition.

Food Recognition Benchmark 2022

End of Round 2⏱️

Over 1 year ago

Thanks to AIcrowd for hosting this yearly competition and benchmark. It was a lot of fun working on it, exploring and learning instance segmentation models to solve the food recognition task.
Thanks to my teammates for their work.
Thanks to @shivam for helping us so much with the AIcrowd infrastructure, to @gaurav_singhal for the paper on your previous year's best approach and an unbelievable race on the leaderboard :slight_smile: and to the other participants for a good time in this competition!
Also, congratulations to @gaurav_singhal and @Camaro!!!

Extension of the round?

Over 1 year ago

I see, so the problem was with the countdown. Thank you for the clarification :slight_smile:

Extension of the round?

Over 1 year ago

Hi @shivam,
Has the competition been extended?
This morning the timer showed 3 days left; now it shows 5 days left, and I could not find any announcement about it.

Getting less mAP on MMdetection

Over 1 year ago

If you need more info about the difference between one-stage and two-stage models, you could read, for example, about the RPN (region proposal network) and how it works.
Also look at SOLO, for example, a one-stage model.
You could try SOLO in the mmdet library as-is and see if it helps you boost your score.
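Something like the following is enough to try it (a sketch assuming the mmdet 2.x API; the config and checkpoint paths follow the stock mmdetection repo layout and should be swapped for your own food-recognition config):

```python
from mmdet.apis import init_detector, inference_detector

# Sketch (mmdet 2.x): run a pretrained one-stage SOLO model on one image.
# Paths are placeholders following the mmdetection repo layout.
config = "configs/solo/solo_r50_fpn_3x_coco.py"
checkpoint = "checkpoints/solo_r50_fpn_3x_coco.pth"

model = init_detector(config, checkpoint, device="cuda:0")
result = inference_detector(model, "demo/food.jpg")
model.show_result("demo/food.jpg", result, out_file="result.jpg")
```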

Getting less mAP on MMdetection

Over 1 year ago

Hi @saidinesh_pola,
The big secret here (in my opinion) is in the data processing before training; the difference between detectron2 and mmdetection is not that big.
I assume it is no big secret that
the #1 team uses mmdet,
the #2 team uses mmdet,
the #3 team uses mmdet,
the #4 team uses detectron2
(you can observe this from the errors in some of their submissions).
So, in my opinion mmdet is better thanks to its large variety of models, but that is not the main reason here.
For a long time I could not find the data-processing idea and tried to train on the data as-is. I found that only one-stage models could get a good score, while all models with two or more stages do poorly here (like Mask R-CNN, HTC, DetectoRS, and so on).
Once I understood the difference between one-stage and two-stage models, I quickly found the data-processing idea.
I think the difference between your score and the top teams' is the data-processing step, not the model, the parameters, or sophisticated augmentation.
I'd recommend trying a one-stage model and seeing the results…

Local Run Produces Different AP

Over 1 year ago

Yes, I get the same results. I assume the evaluation runs on several machines, or at least several GPUs, so it does not wait until all the images have been processed before calculating mAP. The organizers probably decided to evaluate the score during inference, and I assume that when, for example, 10% of the images have been processed, the other 90% of the answers are set to 0.
Thus, your score keeps improving as more photos are evaluated (fewer zeros remain).
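A toy illustration of that assumption (made-up per-image AP values, not the evaluator's real code): the running score climbs as fewer unscored images count as zero.

```python
# Toy illustration: unscored images count as 0, so the running mAP climbs
# as evaluation proceeds and reaches the true score only at 100%.
per_image_ap = [0.8, 0.6, 0.9, 0.7, 0.5]  # made-up per-image AP values

for n_done in range(1, len(per_image_ap) + 1):
    running = sum(per_image_ap[:n_done]) / len(per_image_ap)
    print(f"{n_done}/{len(per_image_ap)} scored -> running mAP = {running:.2f}")
```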

MMdetection submission

Over 1 year ago

Yes, in this round the val dataset has been copied into the train dataset. So you should just remove the images that appear in the val dataset from the train dataset yourself, and you will get the same workflow as in round 1.
Hope this helps.
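A minimal sketch of that filtering, assuming COCO-style annotation JSONs (all file paths are placeholders):

```python
import json

# Drop from the train split every image that also appears in the val
# split, assuming COCO-style annotations. Paths are placeholders.
with open("val/annotations.json") as f:
    val = json.load(f)
with open("train/annotations.json") as f:
    train = json.load(f)

val_names = {img["file_name"] for img in val["images"]}
kept_images = [img for img in train["images"] if img["file_name"] not in val_names]
kept_ids = {img["id"] for img in kept_images}

train["images"] = kept_images
train["annotations"] = [a for a in train["annotations"] if a["image_id"] in kept_ids]

with open("train/annotations_dedup.json", "w") as f:
    json.dump(train, f)
```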
