0 Followers
0 Following
Camaro
Daisuke Yamamoto

Location

JP

Badges

0
0
0

Activity

[Activity heatmap, Apr through the following Apr]

Ratings Progression


Challenge Categories


Challenges Entered

Latest submissions

No submissions made in this challenge.


Understand semantic segmentation and monocular depth estimation from downward-facing drone images

Latest submissions

No submissions made in this challenge.


A benchmark for image-based food recognition

Latest submissions

See All
graded 177111
failed 177108
graded 176842

What data should you label to get the most value for your money?

Latest submissions

See All
graded 179189
graded 179151
graded 179149

Perform semantic segmentation on aerial images from monocular downward-facing drone

Latest submissions

No submissions made in this challenge.
Participant Rating
Camaro has not joined any teams yet...

Generative Interior Design Challenge 2024

Can't open baseline starter kit

3 months ago

Now I can access it! Thanks for the quick fix.

Can't open baseline starter kit

3 months ago

I got a 404 when I tried to open the baseline starter kit.
@snehananavati Could you please check it out?

Data Purchasing Challenge 2022

[Announcement] Leaderboard Winners

Almost 2 years ago

Thanks! It should be the same as on other platforms like Kaggle: you can just create a discussion thread to share your approach! Of course it would be most helpful if you kindly shared the code as well, but this competition was very structured, so just sharing the approach may be enough to understand what led you to win :)

[Announcement] Leaderboard Winners

Almost 2 years ago

Big congrats to the winners, especially @xiaozhou_wang; it seems you won the competition by a large margin! Really curious about your solution, it would be great if you could share it with the community :)

:rotating_light: Select submissions for final evaluation

Almost 2 years ago

Hi @shivam @dipam, do you have any timeline for the leaderboard update?

:rotating_light: Select submissions for final evaluation

About 2 years ago

Hi @shivam, is there any progress?

:rotating_light: Select submissions for final evaluation

About 2 years ago

Hi @dipam, thanks for hosting this interesting competition!
It seems the competition has finished; when will the leaderboard be finalized?

IMPORTANT: Details about end of competition evaluations 🎯

About 2 years ago

@dipam Voted, thanks for considering our opinions seriously.
I’m convinced that it will lead to good results for everyone!
By the way, there are only 2 weeks left, are you planning to extend the deadline or keep it as it is?

:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!

About 2 years ago

The question is how you define the best or most useful images. If it's the best for improving a 10-epoch effnet-b4 (which I suspect is underfitting), the current scheme makes sense.
But in practice, I guess people would decide to add data after trying to improve the model with the current data and finding the performance still doesn't reach the expected level.
So my definition of "useful" here is "useful for improving the performance of a well enough finetuned model". And I suspect the current post-training pipeline doesn't reach that level, IMHO.

:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!

About 2 years ago

Hi @dipam, is there any update on this?
Or, please let me know if you have already decided to stick with the current training pipeline. I'll try to optimize my purchase strategy for it.

:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!

About 2 years ago

Thanks, I totally understand the situation. I can imagine it's much harder to host a competition than to just join one as a competitor :)
Anyway, whether the modification is made or not, I'll try to do my best.

:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!

About 2 years ago

@dipam Thanks for the comment!
I understand that round 2 tries to make us focus more on the purchase strategies.

My concern is not about how good the final F1 score is, but about the meaning of the best additional data.

In general, increasing the dataset size when your model is underfitting is a well-known bad strategy.

The same applies here: a strategy for choosing "good additional data for an underfitted model" is less practically meaningful than one for an overfitted model.

The easiest way to fix this issue is simply to change the training pipeline so that the trained model overfits the 1,000-sample training dataset.

I believe that would make the competition more useful and let everyone learn more interesting strategies.

Thanks,

:aicrowd: [Update] Round 2 of Data Purchasing Challenge is now live!

About 2 years ago

Hi @dipam, have you already considered changing the post-training code as mentioned in the comment?

In particular, the small number of epochs seems problematic to me.
You can easily check that the trained model is still underfitting the dataset by changing the number of epochs from 10 to 20 and seeing how the score improves.
That means the model has almost no need for more data, as it is still "learning" from the data it already has, so it might not be a good model for evaluating purchased-data quality.

In a real situation, I guess the host would never use such an underfitted model to evaluate purchased data; that's why I think it's better to change the post-training code, or to allow participants to change it.
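To illustrate, here is a minimal sketch of that check using scikit-learn on synthetic data (the actual pipeline trains an EfficientNet in PyTorch; the model and data here are only stand-ins for the idea):

```python
# Sketch of the underfitting check: train the same model with a larger
# epoch budget and see whether the validation score keeps improving.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

scores = {}
for epochs in (10, 20):
    clf = MLPClassifier(max_iter=epochs, random_state=0)
    clf.fit(X_tr, y_tr)  # a ConvergenceWarning at low max_iter is expected
    scores[epochs] = clf.score(X_val, y_val)

print(scores)
```

If the 20-epoch score is clearly higher than the 10-epoch one, the shorter run was still underfitting, and buying more data mostly masks a training-budget problem.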

Thanks, I hope this competition becomes even more interesting and useful!
Any comments are welcome ;)

Post purchasing training code should be jointly optimized

About 2 years ago

Hi, first of all, thanks for launching round 2 of this exciting and interesting challenge!
I've just read through the round 2 updates and was a little surprised by the change to the post-purchase training part.
I know it's meant to make us focus more on the purchasing strategy, but in my humble opinion it should be jointly optimized with the post-purchase training part. For example, we may want to change the model size when the computing budget is small. Or sometimes we may not want to use the ImageNet-pretrained model as is, without any extra finetuning.
I understand that what makes this competition unique is the purchasing phase, but I guess what the host wants is a strong classifier for each computing and labeling budget, isn't it?
To maximize the chance of achieving that, I'd like to suggest allowing participants to modify the post-training code.

Thanks, any opinions are welcome!

Food Recognition Benchmark 2022

:rocket: Round 2 Launched

About 2 years ago

Also, could you please make sure the dataset files untar correctly?
There are some weird points: the extension is tar (not tar.gz, as the description says), and PaxHeader files are included in the image directory.
If you have a reliable way to untar it, please let me know!
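For anyone hitting the same issue, here is a sketch of skipping the stray PaxHeader entries with Python's `tarfile` module (the archive path in the example call is a placeholder, not the real dataset filename):

```python
# Sketch: extract a .tar archive while skipping stray "PaxHeader"
# metadata entries that ended up inside the image directory.
import tarfile

def extract_without_paxheaders(archive_path, dest="."):
    """Extract all members except PaxHeader metadata files.

    Returns the names of the members that were actually extracted.
    """
    with tarfile.open(archive_path) as tar:  # compression is auto-detected
        members = [m for m in tar.getmembers() if "PaxHeader" not in m.name]
        tar.extractall(dest, members=members)
        return [m.name for m in members]

# extract_without_paxheaders("dataset.tar", "data/")  # placeholder path
```

This keeps the real images and drops only the metadata entries, so the extracted directory tree is clean.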

Thanks,

:rocket: Round 2 Launched

About 2 years ago

Hi @shivam, thanks for the update!
Let me confirm whether all of the images in the 2.0 dataset are included in 2.1.

⏰ Round 1 Extended to 28th Feb

About 2 years ago

I see, thanks:)

But you don't have to consider giving prizes; it would make me uneasy, as my sub scored 0.33, which is worth 2nd place :rofl:

Anyway, can’t wait to try new dataset!

⏰ Round 1 Extended to 28th Feb

About 2 years ago

Thanks, I got it. I was stuck on a submission error last night… :sob:
So I intended to submit it to round 2, since the timeline says round 2 will start on March 1st, but it hasn't started?

And let me confirm that there is no prize for 1st-3rd place if the score is below 0.47, right?

⏰ Round 1 Extended to 28th Feb

About 2 years ago

Hi @vrv, is submission already closed? My submission doesn't seem to be reflected on the leaderboard…

Camaro has not provided any information yet.