## leocd (Leo Cahya Dinendra)

#### Challenges Entered

##### Scene Understanding for Autonomous Drone Delivery (SUADD'23)
By AIcrowd, Amazon Prime Air

Understand semantic segmentation and monocular depth estimation from downward-facing drone images

#### Latest submissions

No submissions made in this challenge.

##### Audio Source Separation using AI

#### Latest submissions

No submissions made in this challenge.
##### Food Recognition Benchmark 2022
By Seerave Foundation

A benchmark for image-based food recognition

#### Latest submissions

No submissions made in this challenge.
##### NeurIPS 2022: CityLearn Challenge
By AIcrowd, Intelligent Environments Lab

Using AI For Building’s Energy Management

#### Latest submissions

No submissions made in this challenge.
By Leibniz Centre for European Economic Research

What data should you label to get the most value for your money?

#### Latest submissions

- graded 179064 (Thu, 7 Apr 2022 06:44:02)
- graded 179053 (Thu, 7 Apr 2022 04:53:30)
- graded 179052 (Thu, 7 Apr 2022 04:45:48)
##### ESCI Challenge for Improving Product Search
By Amazon Search

Amazon KDD Cup 2022

#### Latest submissions

- graded 190340 (Fri, 24 Jun 2022 05:32:12)
- graded 190316 (Fri, 24 Jun 2022 02:24:58)
- graded 189982 (Wed, 22 Jun 2022 06:40:41)
##### NeurIPS 2021 - The NetHack Challenge
By AIcrowd

ASCII-rendered single-player dungeon crawl game

#### Latest submissions

No submissions made in this challenge.

##### Machine Learning for Detection of Early Onset of Alzheimer's

#### Latest submissions

- graded 140851 (Tue, 25 May 2021 10:45:25)
##### Seismic Facies Identification Challenge
By SEAM AI

3D Seismic Image Interpretation by Machine Learning

#### Latest submissions

- graded 157061 (Sun, 19 Sep 2021 10:00:44)
- graded 156573 (Thu, 16 Sep 2021 06:59:08)
- graded 156572 (Thu, 16 Sep 2021 06:58:46)
##### Music Demixing Challenge ISMIR 2021
By Sony Group Corporation

#### Latest submissions

No submissions made in this challenge.
##### Insurance pricing game
By Imperial CPG

Play in a realistic insurance market, compete for profit!

#### Latest submissions

- graded 110896 (Fri, 25 Dec 2020 18:20:30)
- graded 110895 (Fri, 25 Dec 2020 18:17:50)
- graded 110894 (Fri, 25 Dec 2020 18:17:13)
##### AI Blitz X
By AIcrowd

5 Puzzles 21 Days. Can you solve it all?

#### Latest submissions

No submissions made in this challenge.
##### AI Blitz #9
By AIcrowd

5 Puzzles 21 Days. Can you solve it all?

#### Latest submissions

No submissions made in this challenge.

##### AI Blitz #8
By AIcrowd

5 Puzzles, 3 Weeks. Can you solve them all? 😉

#### Latest submissions

No submissions made in this challenge.
##### Learning to Smell
By Firmenich

Predicting smell of molecular compounds

#### Latest submissions

No submissions made in this challenge.
##### CYD Campus Aircraft Localization Competition
By OpenSky Network Cyber-Defence Campus, armasuisse

Find all the aircraft!

#### Latest submissions

No submissions made in this challenge.
##### AI Blitz #4
By AIcrowd

5 PROBLEMS 3 WEEKS. CAN YOU SOLVE THEM ALL?

#### Latest submissions

- graded 157061 (Sun, 19 Sep 2021 10:00:44)
- graded 156573 (Thu, 16 Sep 2021 06:59:08)
- graded 156572 (Thu, 16 Sep 2021 06:58:46)
| Participant | Rating |
| --- | --- |
| saeful_ghofar_zamianie_putra | 0 |
| shivam | 136 |
| vrv | 0 |

| Participant | Rating |
| --- | --- |
| shivam | 136 |

- UDMA_oye (Seismic Facies Identification Challenge)

### I need to say this

11 months ago

Wow, what a clickbait-y title. But it got your attention.

I haven't properly said it before, but thank you ZEW and AIcrowd for organizing this competition. Thank you, fellow participants. I learned lots of new stuff from this, especially from the top LB solutions and other participants' notebooks. I think I now have in mind the best practices for facing this kind of problem in my work in the near future (sooner or later I'll be facing it too, and labeling will be more expensive because engineer/scientist-level labelers are needed for the data).

Hope you guys are always in good health.

Cheers

### 🚨 Select submissions for final evaluation

12 months ago

Hi @dipam, just need a little clarification about your post:

The detailed steps are given below:

1. Eligible teams will select two of their submissions to evaluate - Eligibility criteria to be announced soon, it will be based on Round 2 leaderboard.
2. Each submission will run through the pre-train and the purchase phase on the end of competition dataset.
3. The same purchased labels will be put through 5 training pipelines - Details to be released soon.
4. Each training pipeline will be run for 2 seeds and scores averaged, to address any stochasticity in scores.
5. To avoid issues due to difference of average scores from different training pipelines, a Borda ranking system will be used.

While the results from the 5 training pipelines are scored using the Borda ranking system, how about the two submissions? Is the highest score of the two used, or an average of both submissions' results?
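For context, step 5's Borda-style aggregation across pipelines could look roughly like this (a minimal sketch of the classic Borda count; the function and variable names are mine, not from the organizers' code):

```python
def borda_totals(scores_by_pipeline):
    # scores_by_pipeline: one {submission_id: score} dict per training
    # pipeline. In each pipeline a submission earns points equal to the
    # number of submissions it beats; points are summed across pipelines,
    # so only ranks matter and scale differences between pipelines cancel.
    totals = {}
    for scores in scores_by_pipeline:
        ordered = sorted(scores, key=scores.get)  # worst first
        for points, sub in enumerate(ordered):
            totals[sub] = totals.get(sub, 0) + points
    return totals

# Example: submission A wins pipeline 1 but comes last in pipeline 2,
# so B (consistently good) ends up ranked first overall.
pipelines = [
    {"A": 0.90, "B": 0.80, "C": 0.70},
    {"A": 0.60, "B": 0.90, "C": 0.70},
]
print(borda_totals(pipelines))  # A: 2, B: 3, C: 1
```

This is why ranking beats averaging raw scores here: a pipeline whose scores are systematically higher can't dominate the total.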

### Simple Way to know any defect on image, finding noisy label, etc using OpenCV

12 months ago

Hi guys, I made a notebook about a simple method to detect defects in images using OpenCV.
It really helps me detect noisy labels and add extra strategies for selecting which data to buy or skip.

you can read it here: AIcrowd | Simple Way to Detect Noisy Label with opencv | Posts

Also, please leave some likes if you don't mind!
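The notebook itself uses OpenCV; the core idea, flagging images with unexpectedly bright regions, can be sketched with NumPy alone (the threshold value and function name here are illustrative only):

```python
import numpy as np

def defect_score(gray_img, bright_thresh=200):
    # Fraction of pixels brighter than the threshold. On a mostly dark
    # part image, a high fraction flags a likely scratch/dent highlight,
    # which is also a cheap way to spot images whose labels look noisy.
    return float((gray_img > bright_thresh).mean())

clean = np.zeros((8, 8), dtype=np.uint8)
defective = clean.copy()
defective[2:4, 2:4] = 255           # simulate a bright defect blob
print(defect_score(clean))          # 0.0
print(defect_score(defective))      # 0.0625 (4 of 64 pixels)
```

A score near zero suggests a clean image; a labeled-defective image scoring zero is a candidate noisy label.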

### 📹 Town Hall Recording & Resources from top participants

I tried this locally too!
But it's still beaten by buying based on a naive prediction for the dent label.

### Need Clarification for Round 2

Hi AIcrowd team, just want to clarify something:

1. In the post-purchase training phase:

```python
# Create a runtime instance of the purchased dataset with the right labels
purchased_dataset = instantiate_purchased_dataset(unlabelled_dataset, purchased_labels)
aggregated_dataset = torch.utils.data.ConcatDataset(
    [training_dataset, purchased_dataset]
)
print("Training Dataset Size : ", len(training_dataset))
print("Purchased Dataset Size : ", len(purchased_dataset))
print("Aggregated Dataset Size : ", len(aggregated_dataset))

DEBUG_MODE = os.getenv("AICROWD_DEBUG_MODE", False)
if DEBUG_MODE:
    TRAINER_CLASS = ZEWDPCDebugTrainer
else:
    TRAINER_CLASS = ZEWDPCTrainer

trainer = ZEWDPCTrainer(num_classes=6, use_pretrained=True)
trainer.train(
    training_dataset, num_epochs=10, validation_percentage=0.1, batch_size=5
)

y_pred = trainer.predict(val_dataset)
y_true = val_dataset_gt._get_all_labels()
```

shouldn't it be something like this?

```python
trainer.train(
    aggregated_dataset, num_epochs=10, validation_percentage=0.1, batch_size=5
)
```

2. Because of the combined but different time budgets, shouldn't it be something like this?

or did I assume it wrong?

Thanks.

### Brainstorming On Augmentations

1. I just want to make it more versatile for any augmentation pipeline I want to use. Or maybe that's the incorrect way? Does anyone else mess with the dataset classes, or is it only me? (asking the others)

2. I deleted it to show the result of "my way" of training the randomly picked one from scratch. My main pipeline consists of pretraining, using the model to select purchases, then resetting the weights and training from scratch. I don't think pretraining would do anything helpful if I want to do that.

3. I think reproducing is supposed to mean doing the same thing with the same setup, so probably just like you guessed, or maybe the seed. Thanks for the indirect suggestion; I'll try to add every method from here: Reproducibility — PyTorch 1.10 documentation

4. Sorry for that, I guess?

Hi @shivam, sorry to drag you in; just to make sure, are there any specific rules about only using a certain way of building the solution (like classes, code style, ML pipelines, frameworks, save paths, etc.)?
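The reproducibility setup from point 3 boils down to seeding every RNG the pipeline touches before training. A stdlib-only sketch of the idea (the PyTorch-specific calls, noted in the comment, are the ones that docs page covers):

```python
import random

def seed_everything(seed):
    # Seed every RNG source the pipeline uses. With PyTorch you would
    # also call numpy.random.seed(seed) and torch.manual_seed(seed),
    # and set torch.backends.cudnn.deterministic = True, as described
    # in the PyTorch reproducibility notes.
    random.seed(seed)

def run_once(seed):
    # Stand-in for a "training run" that consumes random numbers.
    seed_everything(seed)
    return [random.randint(0, 99) for _ in range(5)]

# Same seed, same draws: two runs are bitwise identical.
assert run_once(123) == run_once(123)
```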

### Brainstorming On Augmentations

Yep.
At first, I tried feeding both the raw and pre-processed ones, but that gave a really bad score, probably because a convnet learns differently from those two types of images.
Now I go with either the pre-processed only or the raw only.

In the seismic challenge a while back, the RMS attribute did help scale the amplitude, while the raw didn't really help me. Apparently it's quite different now: the raw can perform well too, and the pre-trained weights also help significantly.

### Brainstorming On Augmentations

I'm only using:

- RandomHorizontalFlip
- RandomVerticalFlip
- RandomRotation

You can see it in my notebook here:

I don't use any color augmentation at all, because some of my current high submissions came from using no raw image input (though I still run some experiments on the raw input in case the pre-processed one hits the ceiling; same experience from the seismic competition before with @santiactis).
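For reference, a NumPy-only sketch of those three augmentations (in torchvision they are `RandomHorizontalFlip`, `RandomVerticalFlip`, and `RandomRotation`; rotation is limited to 90-degree steps here to keep the sketch dependency-free):

```python
import random
import numpy as np

def augment(img, rng=random):
    # Geometric augmentations only, matching the post: no color jitter.
    if rng.random() < 0.5:
        img = img[:, ::-1]                   # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                   # vertical flip
    img = np.rot90(img, k=rng.randrange(4))  # rotate by 0/90/180/270 deg
    return img

img = np.arange(16).reshape(4, 4)
out = augment(img)
# Geometric transforms permute pixels but never change their values.
assert sorted(out.ravel().tolist()) == list(range(16))
```

Because only pixel positions change, label semantics are preserved, which is why these are safe defaults when color information matters to the task.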

### Experiments with “unlabelled” data

Yes, it's very significant.

From my experiment notebook, it's something like this:

| exp no. | augmentation | pretrained | purchase_method | score_pretraining_phase | score_purchase_phase | score_validation_phase | LB_score |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | NO | NO | NO | 0.773 | 0.773 | 0.760 | |
| 2 | NO | NO | RANDOM 3000 | 0.773 | 0.804 | 0.760 | |
| 3 | NO | NO | ALL 10000 | 0.773 | 0.841 | 0.835 | |
| 4 | NO | YES | NO | 0.857 | 0.857 | 0.850 | |
| 5 | NO | YES | RANDOM 3000 | 0.857 | 0.864 | 0.845 | 0.851 |
| 6 | NO | YES | ALL 10000 | 0.857 | 0.892 | 0.875 | |
| 7 | YES | YES | NO | 0.868 | 0.868 | 0.865 | |
| 8 | YES | YES | RANDOM 3000 | 0.868 | 0.886 | 0.869 | 0.880 |
| 9 | YES | YES | ALL 10000 | 0.868 | 0.902 | 0.893 | |

the notebook :
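Assuming the third score in each row is the validation-phase one, the table's takeaway can be checked with quick arithmetic:

```python
# Validation-phase scores for the ALL-10000 purchase setting,
# keyed by (augmentation, pretrained), taken from the table above.
val_score = {
    ("NO", "NO"): 0.835,
    ("NO", "YES"): 0.875,
    ("YES", "YES"): 0.893,
}
pretrained_gain = val_score[("NO", "YES")] - val_score[("NO", "NO")]
augment_gain = val_score[("YES", "YES")] - val_score[("NO", "YES")]
print(round(pretrained_gain, 3))  # 0.04
print(round(augment_gain, 3))     # 0.018
```

Pre-trained weights give the bigger jump; augmentation adds a smaller but consistent gain on top.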

### My Multiple Experiments Results ( the random one got 0.88 on LB)

I'm using these same parameters for each experiment:

- model: efficientnet-b1
- input: raw image
- epochs: 20

tl;dr: use augmentation and pre-trained weights.

I hope it helps you guys, especially those who just joined.

### Size of Datasets

I think it's 5000 training images, 3000 to purchase, and 3000 to test, right? Just as in the overview.

### Full list of available pretrained weights

Wow, which PyTorch version has ViT? The nightly version?

### Submit failed with no error log

Yes, it should be like that, but mine didn't show up.

### What is this validation submission phase error log means?

```
==================================
Deleting unsupported pre-trained model: ./.cache/pip/wheels/76/ee/9c/36bfe3e079df99acf5ae57f4e3464ff2771b34447d6d2f2148/gym-0.21.0-py3-none-any.whl
Deleting unsupported pre-trained model: ./.cache/pip/http/1/3/0/c/a/130ca645ced2b235e6f69505044bb4923f610dbb4bc6c8e1d76a50bb
Deleting unsupported pre-trained model: ./.git/objects/pack/pack-c148ae0f71d82068775278a3044e1a3c25b5f4a3.pack
Time left: 10800
timeout: the monitored command dumped core
/home/aicrowd/run.sh: line 38:    61 Segmentation fault      timeout -s 9 $AICROWD_TIMEOUT_INFO python aicrowd_client/launcher.py
```


### Submit failed with no error log

I got an error in the "Validate Submission" phase, but with no log either.

### 🚀 Discussion on Starter Kit

Hi @vrv,

I tried using this submission method instead: AIcrowd
The push works (I checked it on GitLab), but somehow it's not showing in the submissions. The `submission-` tag prefix is right too. Any idea why?

### Can we access the labels.csv files from the training data folder?

Is it possible to access and read the CSV file of training data labels?
What's the filepath to access it? Is it just ./data/training/images/labels.csv, like in the notebook example?

### Allowance of Pre-trained Model

In my past experience doing challenges on AIcrowd, the committee has been really fair. I spotted some accounts that I suspected of cheating, and before I even reported them (with evidence, of course), they had already been taken care of.

### What did you get so far?

I just turned on my PC, you guys are so fast…

### Which average method is used in the calculation of f1-score?

Over 2 years ago

From the Discord, @mohanty said it's macro, but they might change it to weighted in the 2nd round.
cmiiw
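For anyone unsure of the difference: macro-F1 averages the per-class F1 scores with equal weight, while weighted-F1 weights each class by its support, so the two diverge on imbalanced labels. A dependency-free sketch:

```python
from collections import Counter

def f1(y_true, y_pred, cls):
    # Per-class F1 = 2*TP / (2*TP + FP + FN); 0.0 when there are no TPs.
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_f1(y_true, y_pred):
    # Every class counts equally, however rare it is.
    classes = set(y_true) | set(y_pred)
    return sum(f1(y_true, y_pred, c) for c in classes) / len(classes)

def weighted_f1(y_true, y_pred):
    # Each class weighted by its share of the true labels.
    support, n = Counter(y_true), len(y_true)
    return sum(f1(y_true, y_pred, c) * support[c] / n for c in support)

# Predicting the majority class everywhere: macro punishes the missed
# minority class much harder than weighted does.
y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]
print(round(macro_f1(y_true, y_pred), 3))     # 0.429
print(round(weighted_f1(y_true, y_pred), 3))  # 0.643
```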
