
_lyghter


Challenges Entered

Latest submissions: failed 147881, graded 147850, failed 147849

Latest submissions: no submissions made in this challenge

Predicting smell of molecular compounds

Latest submissions: graded 121686, graded 121680, failed 121660

Badges: Gold 0, Silver 0, Bronze 0

  • May 16, 2020: Participant Rating

Music Demixing Challenge @ISMIR 2021

Can I use the test part of MUSDB18 to tune hyperparameters (Leaderboard A)?

Yesterday

Thanks

If I were an MDX organizer, I would set these rules:

Leaderboard A
Winners must provide the organizers with training scripts that use only the training part of MUSDB18(HQ) and early stopping. The winning submission must contain only models trained with these scripts. Training must be reproducible.

Leaderboard B
Winners owe nothing to anyone. :)

Can I use the test part of MUSDB18 to tune hyperparameters (Leaderboard A)?

Yesterday

(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)

Can I use the test part of MUSDB18 to tune hyperparameters (Leaderboard A)?

7 days ago

Is a participant obligated to use early stopping? Can a participant arbitrarily choose the number of epochs? If so, does a participant have to justify their choice?

I don’t have much experience with machine learning. Using the test part seems like a tempting idea to me. Compare these two approaches:

  1. Training on 86 tracks with validation on the remaining 14 and early stopping, then re-training on all 100 with the optimal number of epochs (or fine-tuning on the 14?)

  2. Training on all 100 tracks with validation on the 50 test tracks and early stopping

The second way is faster and can probably lead to a slightly better model.
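For context, approach 1 could be sketched roughly like this (the `train_step` and `val_loss` callables are placeholders for illustration, not part of any starter kit):

```python
# Minimal early-stopping loop: train on the 86-track split, validate on the
# 14 held-out tracks, and keep the epoch count that minimized validation loss.
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=5):
    best_loss = float("inf")
    best_epoch = 0
    for epoch in range(1, max_epochs + 1):
        train_step(epoch)
        loss = val_loss(epoch)
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop
    return best_epoch  # then re-train on all 100 tracks for this many epochs
```

The returned epoch count is what would be reused for the final re-training run on the full training set.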

Cheating scenario: a participant trains a model with validation on the test part, then removes the validation step from the training script, submits the model to Leaderboard A, and says the number of epochs was chosen based on intuition / experience / the leaderboard.

This is probably not a very important issue; I’m just sharing my thoughts.

Can I use the test part of MUSDB18 to tune hyperparameters (Leaderboard A)?

8 days ago

I train a model on the training part of MUSDB18, but evaluate it on the test part after each epoch to select the optimal number of epochs. So the training script uses both parts of MUSDB18. Can I submit a model trained this way to Leaderboard A?

Submission failed : Unable to find a valid `aicrowd.json` file at the root of the repository

8 days ago

My submission failed with the message:

Submission failed : Unable to find a valid `aicrowd.json` file at the root of the repository.

But the file itself looks fine.

Vulnerability

16 days ago

What happens if a participant does something like this in the submission code?

import os
os.environ['INFERENCE_PER_MUSIC_TIMEOUT_SECONDS'] = '250'  # original value: 240
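Such an override only matters if the harness reads the variable lazily. A hypothetical mitigation on the evaluator's side (a sketch, not how AIcrowd's harness actually works) is to snapshot the limit before any participant code runs:

```python
import os

# Capture the timeout once at startup; later changes made by participant
# code via os.environ have no effect on the enforced value.
INFERENCE_TIMEOUT = int(os.environ.get("INFERENCE_PER_MUSIC_TIMEOUT_SECONDS", "240"))

def enforced_timeout():
    """Return the limit captured before any participant code ran."""
    return INFERENCE_TIMEOUT
```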

Skipping first round submission

20 days ago

Participants make dozens of submissions during the competition. Which ones will be re-evaluated on the new songs? Only those that are visible on the leaderboards?

:genie: Requesting feedback and suggestions

20 days ago

Will this be the longest song in the entire test dataset (28 songs)?

:genie: Requesting feedback and suggestions

20 days ago

It would be great if the organizers reproduced the training of the winning models from Leaderboard A at the end of the competition. Otherwise, participants can hide their use of extra data.

Tracks length

25 days ago

What is the maximum song duration in the full test dataset (28 songs)?

Question about MUSDB18

About 1 month ago

Dear organizers, could you tell us why MUSDB18 includes all tracks from DSD100, but not all tracks from MedleyDB and only 2 tracks from Native Instruments? Is there anything wrong with the rest of the tracks from these datasets?

External datasets

About 1 month ago

Can I train a model on a non-free dataset, or use a model pretrained on one?

Hardware specifications

About 1 month ago

Should the winners provide the training code for their models at the end of the competition? Will the organizers reproduce the training? If so, with what hardware and time resources?

Hardware specifications

About 2 months ago

What resources (CPU, GPU, memory, internet access) are available inside the container during evaluation?

Learning to Smell

Can I train a model locally and unpickle it in the container?

4 months ago

Can I train a model locally and unpickle it in the container?

Can I use external datasets?

4 months ago

Can I use external datasets?

Problem with some SMILES

5 months ago

My submission was evaluated successfully in debug mode. Then I set “debug: false” and got “submission failed”, because the function rdkit.Chem.MolFromSmiles returned None for some input strings. My rdkit version is 2020.09.2.

Could you please check the SMILES in the test data?
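On the participant side, a defensive pattern (a generic sketch; `parse` stands in for rdkit.Chem.MolFromSmiles, which returns None on unparseable input) is to guard against None instead of letting a later call crash:

```python
def parse_all(smiles_list, parse):
    """Parse each SMILES string, collecting failures instead of crashing.

    `parse` mimics rdkit.Chem.MolFromSmiles: it returns None on bad input.
    """
    molecules, failures = [], []
    for s in smiles_list:
        mol = parse(s)
        if mol is None:
            failures.append(s)  # record and skip instead of raising later
        else:
            molecules.append(mol)
    return molecules, failures
```

Logging the failing strings also makes it easy to report the exact offending SMILES back to the organizers.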

Submission queue

5 months ago

Can I make a new submission before the container of my last submission stops?

List of lists & vocabulary

5 months ago

I tried to predict [['rose'],[],[],[],[]] for each molecule in debug mode and got the error:

Submission Vocabulary contains Unknown smell words : .Are you sure you are using the correct vocabulary for this round ?

This forces me to predict at least one word in each sentence. Do all input molecules have smells from the round-3 vocabulary?

Supplementary Training Data Release for Round-3

5 months ago

Could you make this data available from the container?

_lyghter has not provided any information yet.
