Activity
Challenges Entered
Small Object Detection and Classification
Latest submissions
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions
| Status | Submission |
| --- | --- |
| graded | 216814 |
| failed | 216802 |
| failed | 216792 |
Identify user photos in the marketplace
Latest submissions
| Status | Submission |
| --- | --- |
| failed | 218195 |
| graded | 216523 |
| graded | 216520 |
A benchmark for image-based food recognition
Latest submissions
Visual Product Recognition Challenge 2023

2nd place solution for the challenge
5 months ago
I decided to write a blog post: 2nd place solution for AI-Crowd Visual Product Recognition Challenge 2023 | by Bartosz Ludwiczuk | Apr, 2023 | Medium
If you have any questions, just let me know!

My solution for the challenge
5 months ago
Forgot to mention: I was using PyTorch 2.0, and it was a game changer for both training and inference time.

My solution for the challenge
5 months ago
About ViT-H: based on your code you were using ViT-H during training, so I understand that you also used it for inference, am I right?
About using ViT-H for inference, it was not a big deal for me; it just worked without any issue. The code for ViT-H looks like this (I used jit trace, as it is faster than script: TorchScript: Tracing vs. Scripting - Yuxin's Blog):
```python
import torch
from torch.utils.data import ConcatDataset
from tqdm import tqdm

# Load the jit-traced ViT-H and move it to the inference device
self.model_scripted = torch.jit.load(model_path).eval().to(device=device_type)

gallery_dataset = SubmissionDataset(
    root=self.dataset_path, annotation_file=self.gallery_csv_path,
    transforms=get_val_aug_gallery(self.input_size)
)
query_dataset = SubmissionDataset(
    root=self.dataset_path, annotation_file=self.queries_csv_path,
    transforms=get_val_aug_query(self.input_size), with_bbox=True
)

# Embed gallery and query images in a single pass over one loader
datasets = ConcatDataset([gallery_dataset, query_dataset])
combine_loader = torch.utils.data.DataLoader(
    datasets, batch_size=self.batch_size,
    shuffle=False, pin_memory=True, num_workers=self.inference_cfg.num_workers
)

logger.info('Calculating embeddings')
embeddings = []
# Mixed precision + no_grad for faster inference
with torch.cuda.amp.autocast():
    with torch.no_grad():
        for i, images in tqdm(enumerate(combine_loader), total=len(combine_loader)):
            images = images.to(self.device)
            outputs = self.model_scripted(images).cpu().numpy()
            embeddings.append(outputs)
```
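The tracing itself happens once at export time; here is a minimal sketch of that step (hypothetical, with a torchvision ResNet standing in for the actual ViT-H, which is not shown in my snippet):

```python
import torch
import torchvision

# Hypothetical export step: a ResNet stands in for the actual ViT-H.
model = torchvision.models.resnet50(weights=None).eval()
example = torch.randn(1, 3, 224, 224)         # example input fixes the traced shapes
with torch.no_grad():
    traced = torch.jit.trace(model, example)  # record the executed ops, no Python left
traced.save("model_traced.pt")                # later loaded via torch.jit.load
```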

My solution for the challenge
5 months ago
I would say that your solution is quite similar to what I had at some point during this competition:
- I was using the same repo as a baseline, meaning the 4th place solution from Universal Embeddings
- also ViT-H (not sure if you checked the weights from this repo, but just by using them I could get ~0.56 in Round 1, without any training)
Also, what about post-processing: did you use any technique like database-side augmentation? These post-processing techniques boosted my score in Round 1 from ~0.64 to 0.67.
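To be concrete about what I mean by database-side augmentation, here is a minimal sketch, assuming an L2-normalized (N, D) gallery embedding matrix; the neighbor count k and the weighting scheme are tunable and not necessarily what I used:

```python
import numpy as np

def database_side_augmentation(gallery: np.ndarray, k: int = 10) -> np.ndarray:
    """Replace each L2-normalized gallery embedding with a weighted average
    of itself and its k nearest gallery neighbors."""
    sims = gallery @ gallery.T                            # cosine similarities
    idx = np.argsort(-sims, axis=1)[:, :k + 1]            # self + k nearest neighbors
    weights = np.arange(k + 1, 0, -1, dtype=np.float32)   # linearly decaying weights
    aug = (gallery[idx] * weights[None, :, None]).sum(axis=1)
    return aug / np.linalg.norm(aug, axis=1, keepdims=True)
```

The same trick applied to the query side is usually called query expansion.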
I'm also going to describe my solution in a blog post (to describe my whole journey), then we can compare our solutions :)

Fail Running Private Test
5 months ago
The same thing from my side: 216520.
Also, as I understand it, the current scores are still from Round 1, as they are exactly the same. The standings are a little different, as some runs were rejected because of failures (presumably on the private set).

🔥 Guidelines For Using External Dataset
6 months ago
I'm using the following dataset:
Pretrained models:

Previous successful submissions failing
6 months ago
I don't know the reason for the failures, but I can advise you not to include the .git directory in your Docker image (if you built your own). In my case, removing the .git directory made the whole pipeline run smoothly.
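For example, a minimal .dockerignore next to the Dockerfile keeps the repository history out of the build context (a sketch):

```
# .dockerignore -- keep the (often huge) git history out of the image
.git
```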

Product Matching: Inference failed
6 months ago
@dipam Could you check submission 213161?
The diagram shows that everything worked fine, but the status is failed.

Product Matching: Inference failed
6 months ago
@dipam
Could you share the errors for the following failed inferences:
- #211884
- #211754

Product Matching: Inference failed
6 months ago
Here is the list of submissions with weird errors:
- 211521 (env failed)
- 211522 (env failed)
- 211387 (Product Matching validation time-out, but I don't see logs for loading the model)
Also, could you check what happened here?
- 211640 (I think the error is on my side)

Product Matching: Inference failed
6 months ago
Hi @dipam. Could you send me the errors for this submission?
211540
Also, I can mention that of my last 4 failed submissions, 3 failed because of the AIcrowd platform:
- 2x Build-Env failed, when nothing was changed in the dependencies (a retry the next day succeeded)
- 1x Product Matching validation time-out, where according to the logs it had not even started
Not sure if AIcrowd has changed something or if these were just errors coming from the cloud provider.

Is everybody using same Products 10k dataset
7 months ago
I can just say that it is too early to reveal details about submissions. Just work hard, try many ideas, and eventually you will get to the top of the leaderboard!

Mixed and half precision
7 months ago
I think there is no restriction: the model running in a different environment is still the same model, and you can do whatever you want with your model during inference. This is my opinion, as that was the rule in Kaggle competitions.
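As an illustration (a sketch, not anyone's actual pipeline; a torchvision ResNet stands in for the trained model, and it needs a GPU): the checkpoint stays fp32, only the inference arithmetic changes:

```python
import torch
import torchvision

# Hypothetical example: train/checkpoint in fp32, infer in mixed precision.
model = torchvision.models.resnet50(weights=None).eval().cuda()
images = torch.randn(8, 3, 224, 224, device="cuda")

with torch.no_grad(), torch.cuda.amp.autocast():
    outputs = model(images)  # convs/linears run in fp16 where safe
print(outputs.dtype)         # torch.float16
```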

Training Dataset
8 months ago
I was thinking about the same, but my current investigation does not show a positive impact of making Product10k more similar to the test set.
What I have done:
- selected from Product10k the categories which are [shoes, eyewear], as these categories represent >80% of the data (see the sketch below)
- trained the model on such a dataset:
  - the model trains much faster, as the train dataset is ~5x smaller
  - scores on the validation set also rise much faster, gaining ~2% mAP
- the submitted model is worse than the model trained on the whole Product10k dataset
So it looks like, even though the description says "Example products include sandals and sunglasses", they are not the only products. Or the ratio between fashion images and the others is different.
This is my current state of knowledge.
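The filtering step was essentially this (a sketch; the `category` column is hypothetical, since the real Product10k train.csv only has numeric class/group ids, so mapping ids to names like "shoes" is a manual step):

```python
import pandas as pd

# Hypothetical annotation file with a human-readable "category" column;
# the real Product10k train.csv only carries numeric class/group ids.
train = pd.read_csv("train_with_categories.csv")
fashion = train[train["category"].isin(["shoes", "eyewear"])]
fashion.to_csv("train_fashion_only.csv", index=False)
print(f"kept {len(fashion)} of {len(train)} rows")
```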

Why my submission is failed?
8 months ago
On GPU I was getting NaNs in the inputs and outputs. I just replaced the NaNs with 0; for now I didn't investigate it further.
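In code it was essentially torch.nan_to_num (a minimal sketch):

```python
import torch

x = torch.tensor([1.0, float("nan"), 2.0])
x = torch.nan_to_num(x, nan=0.0)  # -> tensor([1., 0., 2.])
```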

Why my submission is failed?
8 months agoCould you give me some logs of my issue with submission?
Locally works, validation check in submission works but ranking complete test set returns an error:
Product Matching: Inference failed
Update:
I was able to reproduce the error on the T4, no need of help

Our solution for Visual Product Recognition Challenge
5 months ago
Could you explain the re-ranking part of the code? I mean the code starting here: my_submission/mcs_baseline_ranker.py · main · strekalov / 1st place solution Visual Product Recognition Challenge 2023 · GitLab