Bartosz Ludwiczuk (bartosz_ludwiczuk)
Poznań, PL

Challenges Entered

Small Object Detection and Classification
Latest submissions: no submissions made in this challenge.

Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions: graded 216814, failed 216802, failed 216792

Latest submissions: failed 218195, graded 216523, graded 216520

A benchmark for image-based food recognition
Latest submissions: no submissions made in this challenge.

Visual Product Recognition Challenge 2023

My solution for the challenge

5 months ago

Forgot to mention: I was using PyTorch 2.0, and it was a game changer for both training and inference time.
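
For anyone curious, the main PyTorch 2.0 speed-up feature is torch.compile; a minimal sketch with a placeholder backbone (not the competition model, and not necessarily how I wired it in):

import torch
import torch.nn as nn

# placeholder backbone standing in for the actual embedding model
model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 128),
)

compiled = torch.compile(model)  # available since PyTorch 2.0

with torch.no_grad():
    x = torch.randn(8, 3, 224, 224)
    emb = compiled(x)  # first call compiles the graph, later calls run the optimized version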

My solution for the challenge

5 months ago

About ViT-H: based on your code, you were using ViT-H for training, so I understand that you also used it for inference, am I right?

About using ViT-H for inference, it was not a big deal for me; it just worked without any issues. The code for ViT-H looks like this (I used jit trace as it is faster than script; see TorchScript: Tracing vs. Scripting - Yuxin's Blog):

# Load the traced ViT-H model and move it to the target device
self.model_scripted = torch.jit.load(model_path).eval().to(device=device_type)

# Gallery and query images go through one ConcatDataset/DataLoader,
# so all embeddings are computed in a single pass
gallery_dataset = SubmissionDataset(
    root=self.dataset_path, annotation_file=self.gallery_csv_path,
    transforms=get_val_aug_gallery(self.input_size)
)
query_dataset = SubmissionDataset(
    root=self.dataset_path, annotation_file=self.queries_csv_path,
    transforms=get_val_aug_query(self.input_size), with_bbox=True
)
datasets = ConcatDataset([gallery_dataset, query_dataset])
combine_loader = torch.utils.data.DataLoader(
    datasets, batch_size=self.batch_size,
    shuffle=False, pin_memory=True, num_workers=self.inference_cfg.num_workers
)

# Mixed-precision, no-grad inference over all images
logger.info('Calculating embeddings')
embeddings = []
with torch.cuda.amp.autocast():
    with torch.no_grad():
        for i, images in tqdm(enumerate(combine_loader), total=len(combine_loader)):
            images = images.to(self.device)
            outputs = self.model_scripted(images).cpu().numpy()
            embeddings.append(outputs)
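
For completeness, the traced file loaded above is produced beforehand with torch.jit.trace, roughly like this (the tiny placeholder module stands in for the actual ViT-H, it is not my training code):

import torch
import torch.nn as nn

# placeholder module standing in for the trained ViT-H embedding model
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1024)).eval()

example = torch.randn(1, 3, 224, 224)       # example input for tracing
traced = torch.jit.trace(model, example)    # trace instead of script, as mentioned above
traced.save("model_traced.pt")              # later loaded via torch.jit.load(model_path)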

My solution for the challenge

5 months ago

I would say that your solution is quite similar to what I had at some point during this competition.

  • I was using the same repo as a baseline, I mean the 4th-place solution from Universal Embeddings
  • also ViT-H (not sure if you checked the weights from this repo, but just using them I could get ~0.56 in Round 1, without any training)

Also, what about post-processing: did you use any technique like database-side augmentation?
These post-processing techniques boosted my score in Round 1 from ~0.64 to 0.67.
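
For reference, the idea of database-side augmentation is roughly this: each gallery embedding is replaced by the average of itself and its nearest gallery neighbours before retrieval. A minimal sketch, not my exact code:

import numpy as np

def database_side_augmentation(gallery_emb: np.ndarray, k: int = 3) -> np.ndarray:
    # gallery_emb is assumed L2-normalised, shape (num_gallery, dim)
    sim = gallery_emb @ gallery_emb.T             # cosine similarity between gallery items
    topk = np.argsort(-sim, axis=1)[:, : k + 1]   # each item plus its k nearest neighbours
    augmented = gallery_emb[topk].mean(axis=1)    # average the neighbourhood
    return augmented / np.linalg.norm(augmented, axis=1, keepdims=True)  # re-normalise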

I’m also going to describe my solution in a blog post (to cover my whole journey); then we can compare our solutions :)

Fail Running Private Test

5 months ago

The same thing from my side: 216520.

Also, as I understand it, the current scores are still from Round 1, as they are exactly the same. The standings are a little different because some runs were rejected due to failures (presumably on the private set).

📥 Guidelines For Using External Dataset

6 months ago

I’m using the following dataset:

Pretrained models:

Previous successful submissions failing

6 months ago

I don’t know the reason for the failures, but I can advise you not to include the .git directory in your Docker image (if you created your own). In my case, removing .git made the whole pipeline run smoothly.

Product Matching: Inference failed

6 months ago

@dipam Could you check submission 213161?

The diagram shows that everything worked fine, but the status is failed.

Product Matching: Inference failed

6 months ago

@dipam
Could you share the errors for the following failed inference submissions:

  • #211884
  • #211754

Product Matching: Inference failed

6 months ago

Here is the list of submissions with weird errors:

  • 211521 (env failed)
  • 211522 (env failed)
  • 211387 (Product Matching validation time-out, but I don’t see logs for loading the model)

Also, could you check what happened here?

  • 211640 (I think the error is on my side)

Product Matching: Inference failed

6 months ago

Hi @dipam. Could you send me the errors for this submission?

211540

Also, I can mention that of my last 4 failed submissions, 3 were because of the AIcrowd platform.

  • 2x Build-Env failed when nothing was changed in the dependencies (retrying the next day succeeded)
  • 1x Product Matching validation time-out where, according to the logs, it had not even started

Not sure if AIcrowd has changed something or if these were just errors coming from the cloud provider.

Is everybody using same Products 10k dataset

7 months ago

I can just say that it is too early to reveal details about submissions. Just work hard, try many ideas, and eventually you will get to the top of the leaderboard!

Mixed and half precision

7 months ago

I think there is no restriction: the model running in a different environment is still the same model, and you can do whatever you want with your model during inference. This is my opinion, as it worked like that in Kaggle competitions.
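
For reference, the two options would look roughly like this at inference time (a sketch with a placeholder model, not challenge code):

import copy
import torch
import torch.nn as nn

# placeholder standing in for the trained FP32 model
model = nn.Linear(512, 128).eval().cuda()
x = torch.randn(4, 512, device="cuda")

# half precision: cast weights and inputs to FP16
model_fp16 = copy.deepcopy(model).half()
with torch.no_grad():
    emb_half = model_fp16(x.half())

# mixed precision: keep FP32 weights, let autocast pick per-op dtypes
with torch.no_grad(), torch.cuda.amp.autocast():
    emb_mixed = model(x)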

Training Dataset

8 months ago

I was thinking about the same, but my current investigation does not show a positive impact of making Product10k more similar to the test set.
What I have done:

  1. Select from Product10k the categories [shoes, eyewear], as these categories represent >80% of the data (a rough sketch of this filtering is at the end of this post)
  2. Train a model on such a dataset:
  • the model trains much faster, as the train dataset is ~5x smaller
  • scores on the validation set also rise much faster, gaining ~2% mAP
  3. The submitted model is worse than the model trained on the whole Product10k dataset

So it looks like, even though the description says Example products include sandals and sunglasses, they are not the only products. Or the ratio between fashion-based images and the others is different.
This is my current state of knowledge.
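
The filtering in step 1 was just a dataframe subset; a rough sketch (the column name and class ids below are placeholders, not the real Products-10K schema):

import pandas as pd

# hypothetical ids of the shoe / eyewear classes
FASHION_CLASS_IDS = {101, 102, 103}

train = pd.read_csv("train.csv")                      # assumed columns: image name + "class" label
subset = train[train["class"].isin(FASHION_CLASS_IDS)]
subset.to_csv("train_fashion_only.csv", index=False)  # ~5x smaller train set, as mentioned above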

Why my submission is failed?

8 months ago

On GPU I was getting NaNs in the inputs and outputs. I just replaced the NaNs with 0; for now I haven't investigated it further.
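
The workaround is essentially a one-liner with torch.nan_to_num; a minimal sketch (names are mine, not the submission code):

import torch

def sanitize(t: torch.Tensor) -> torch.Tensor:
    # replace NaNs with zeros, as described above
    return torch.nan_to_num(t, nan=0.0)

x = sanitize(torch.tensor([1.0, float("nan"), 2.0]))  # -> tensor([1., 0., 2.])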

Why my submission is failed?

8 months ago

Could you give me the logs for my issue with this submission?
Locally it works, and the validation check in the submission works, but ranking the complete test set returns an error:
Product Matching: Inference failed

Update:
I was able to reproduce the error on the T4, no need for help 🙂
