
shivam

Name

Shivam Khandelwal

Organization

AIcrowd

Location

Gurgaon, IN

Badges

Gold 0, Silver 0, Bronze 2

Activity

(Contribution heatmap, Aug to Aug)

Ratings Progression


Challenge Categories


Challenges Entered

Sample-efficient reinforcement learning in Minecraft

Latest submissions

failed 69916
graded 69491

Multi Agent Reinforcement Learning on Trains

Latest submissions

failed 68254
failed 68253
failed 68252

Classify images of snake species from around the world

Latest submissions

graded 60335
graded 60334

Latest submissions

failed 68839
failed 68599

A benchmark for image-based food recognition

Latest submissions

graded 59791
failed 59371
failed 31084

Recognise Handwritten Digits

Latest submissions

graded 63126
graded 62025
graded 61850

Robots that learn to interact with the environment autonomously

Latest submissions

failed 75956
failed 75925
graded 75923

5 Problems 15 Days. Can you solve it all?

Latest submissions

graded 67402
graded 66492

Help improve humanitarian crisis response through better NLP modeling

Latest submissions

failed 32245

Predict Labor Class

Latest submissions

graded 68597


Recognizing bird sounds in monophone soundscapes

Latest submissions

No submissions made in this challenge.

Newtonian (May 16, 2020)
Trustable (May 16, 2020)

Badges

  • Kudos! You've won a bronze badge in this challenge. Keep up the great work!
    Challenge: droneRL
    May 16, 2020

  • Has filled their profile page
    May 16, 2020

Participant Rating
ashivani
vrv
shubhankar.sb 0
kartik.gupta0204 0
shubham_sharma 0
rohitmidha23 269
nickinack 0
sanjaypokkali 187
aditya_morolia 0
shraddhaa_mohan 276
hagrid67 107
yoogottamk
piotrekpasciak 0
akhilesh
akshatcx
pulkit_gera
sauravkar 0
nikhil_rayaprolu 194
jason_reynolds
marcel
mohanty
MasterScrat
anssi 225
aicrowd-bot

NeurIPS 2020: Flatland Challenge

Issue not created when tag is pushed with changes

4 days ago

Hi @seungjaeryanlee,

Only tags with names matching submission-* are considered for submission/issue creation.

Submit both RL and OR method

6 days ago

Hi @junjie_li,

Currently, the best of those submissions is shown on the leaderboard.

But I agree with your point: it is possible that someone's OR submission has a better score, and their RL submission an even better one. I guess we assumed (probably wrongly) that one team would not be working on both OR and RL submissions.

cc: @MasterScrat for your views

Team merging deadline

7 days ago

Hi @junjie_li,

Sorry for the trouble.
I have extended the team creation configuration and you should be able to create teams now.

Team formation will remain open until the final deadline is announced with the Round 2 start.

How can i use Pytorch in submitting?

8 days ago

Hi @pf1,

You won't have internet access during the evaluation. This is how you can specify the runtime for your submission:

The options above give you several ways to do the same thing.
I suggest adding PyTorch via conda (it takes care of dependencies better) here: https://gitlab.aicrowd.com/flatland/neurips2020-flatland-starter-kit/blob/master/environment.yml#L7

Looking for team member?

About 1 month ago

Hey @mtrazzi,

Sorry I missed replying here.

Yes, you are free to create a team even after the warm-up round.

Your (and your teammates') submissions will be grouped as team submissions as soon as you are part of a team.

Flatland challenge website "My Team" button leading to wrong link

About 2 months ago

Hi @compscifan2019,

You are correct, it is a bug on our side. I have raised a fix for it, which will be deployed soon.

It won't cause any problem for your participation.

Meanwhile, you can use the "Teams" tab in your profile to start using your team immediately (it uses the correct links).
https://www.aicrowd.com/participants/compscifan2019

Working on the examples given (flatland-examples)

About 2 months ago

(I confused this question with the Procgen competition instead of the Flatland competition, sorry for the wrong answer earlier.)

The experiment file is the one you want to submit as your submission, i.e. here:

https://github.com/AIcrowd/neurips2020-procgen-starter-kit/blob/master/run.sh#L8

Edit: I see, the variable name choice probably isn’t the best one here, and could have caused confusion.

Working on the examples given (flatland-examples)

About 2 months ago

Hi @AntiSquid,

You need to point run.sh to the correct file; the entrypoint for all submissions remains /home/aicrowd/run.sh.

We will make this clearer in the starter kit if it isn't clear right now.

Looking for team member?

About 2 months ago

Hi everyone,

There are many participants who are looking, or would be willing, to team up for the NeurIPS 2020: Flatland Competition. This thread is to facilitate that.

Please post an introduction, the skills you can help with or need help with, etc., so other participants can connect accordingly!

“Alone we can do so little, together we can do so much.”
– Helen Keller

Feel free to spam the thread. :wink:
Hoping to see amazing teams built up from scratch, and hopefully winning the competition!!

Conda env creation errors...UPDATED: later EOF error when running evaluator

About 2 months ago

Hey @MemoAI, please check your internet connection or retry (it seems to be stuck on a download from PyPI); this shouldn't normally happen.

Conda env creation errors...UPDATED: later EOF error when running evaluator

About 2 months ago

Hi @MemoAI,

Thanks for pointing it out. The packages you mentioned are not available for Windows (e.g. libuuid).

Let us check and update the environment.yml to be compatible with Windows. Meanwhile, do try to create the conda environment after removing those unresolved packages; it should ideally work.

NeurIPS 2020: Procgen Competition

Change in daily submission limits for round 1

6 days ago

Hi @tim_whitaker,

Thanks for the suggestion.

We agree with your observation and have shared the concern with the organizing team, and we are awaiting a decision on it. Based on initial discussion, the limit will be increased; we are just collecting a few more data points from incoming submissions.

Change in daily submission limits for round 1

6 days ago

Hi @victor_le,

We are working on restoring the limit as a top priority; in the worst case, you can expect a higher limit in ~1-2 weeks. We will keep you all informed about the situation as soon as possible.

Change in daily submission limits for round 1

6 days ago

Hi @ielmorla,

Can you please share where you saw the 8th Aug 8 AM message?

The limit is a running window, so your next submission time should be 24 hours after your second-to-last submission. (I verified the configuration too, just to be sure.)

How to install external libraries?

13 days ago

If you want to run your submissions in a customized environment, first head to aicrowd.json and set docker_build to true. This flag tells the evaluator that you need a custom environment.

You can read more about it here:

AWS instance setup

13 days ago

Hi @mtrazzi,

  1. Please update run.sh with the RAM/CPU you want to use for your run. I assume you are getting 2 CPUs instead of 8 because of the default value there. In hindsight, I think we shouldn't keep a default value in the starter kit and should instead let rllib detect the available resources automatically.
  2. RAY_MEMORY_LIMIT and RAY_STORE_MEMORY are "total" memory reserves, not per-timestep values, so you probably want to reserve something on the order of GBs rather than MBs and try again (ideally start with 32 GB+ given your system configuration and reduce/increase based on your use case).
    In case this doesn't solve your issue:
    (a) My initial hunch is that it may be related to lru_evict, in case you have enabled it in your script: https://github.com/ray-project/ray/issues/8558
    (b) If not, please let us know about any major changes you made to memory-related configuration, so we can help debug accordingly: https://docs.ray.io/en/latest/memory-management.html

Are entries on the leaderboard in round 1or warmup round?

About 1 month ago

Hi @kaixin,

The warm-up round end time was extended and the round is still ongoing, which is why evaluations are happening only on the coinrun env.

Insufficient cluster resources to launch trial

About 1 month ago

Hi @ava6969,

Please modify your run.sh based on your system's configuration.

Could you confirm that distribution_mode is hard?

About 1 month ago

Hi @denys88,

I can confirm that we are using "distribution_mode": "easy" during the rollout/evaluation phase as mentioned.

Assumption: in case you are not setting distribution_mode in the training phase and are judging based on the training-stage value, it may be running as hard (i.e. the default value).

Can you share the observations or logs that make you think it is configured as hard?
Happy to debug and verify it again on our side! :smiley:

Some questions about team members

About 1 month ago

Hi @RDL_lms,

  1. A person cannot join more than one team.
  2. Yes, you can add new members to your team till the deadline.
  3. The deadline for adding team members is 2 weeks before the end of the final round.

2 hours training time limit

About 1 month ago

Hi @xiaocheng_tang,

Right now the submission is considered failed after the 2-hour timeout.

But I think it is a fair request to be able to use the last checkpoint in many scenarios. Let us check with the team and get back to you with a decision.

Looking for team member?

About 2 months ago

Hi everyone,

There are many participants who are looking, or would be willing, to team up for the NeurIPS 2020: Procgen Competition. This thread is to facilitate that.

Please post an introduction, the skills you can help with or need help with, etc., so other participants can connect accordingly!

“Alone we can do so little, together we can do so much.”
– Helen Keller

Feel free to spam the thread. :wink:
Hoping to see amazing teams built up from scratch, and hopefully winning the competition!!

Submission Compute Time Limits

About 2 months ago

Hi @tim_whitaker,

The total execution timeout is 2 hours and is enforced in the warm-up round too.
Please let us know the submission ID in which you noticed execution for more than 2 hours, and we can have a look into it.

Meanwhile, the only reason it could appear to exceed 2 hours is that we use preemptible resources and don't count the re-provisioning time (time from stop to resume) towards the 2-hour timeout, which may be displayed incorrectly on the GitLab issue.

Will the submission limit reset at the end of each round?

2 months ago

Yes, the submission limits are different for each round and will reset/change accordingly.

TypeError: cannot pickle 'property' object

2 months ago

Thanks, I thought you might be doing something else. We have done most of the testing on macOS, so this ideally shouldn't happen.

In that case, can you please share the complete traceback? It will help us debug the issue.

TypeError: cannot pickle 'property' object

2 months ago

Hi, can you share the command you executed which showed TypeError?

FAQ: Regarding rllib based approach for submissions

2 months ago

We have done quite a few challenges in the past where there were no restrictions on which framework to use.

In the context of the Procgen challenge, we are enforcing the framework as an experiment, as it of course helps us orchestrate the evaluations in a more stable way. At the same time, it also ensures that all the code from all the participants at the end of the competition can hypothetically be merged into the starter kit via a simple pull request, hence increasing the overall impact of all the activity that happened in this challenge.

REAL 2020 - Robot open-Ended Autonomous Learning

NeurIPS 2020: MineRL Competition

Why this user can submit 42 times?

11 days ago

Hi @youkaichao,

Sorry for any confusion caused.
This account belongs to the MineRL organising/baseline team, not a participant, so there is nothing to worry about. :smile:

UberSerb

16 days ago

Hi Martin,

Let me escalate it within the team and get it checked.
I was only following the discussion initially, and right now it does look like it was not in good faith.

Regards,
Shivam


Regarding your failed submissions

8 months ago

Hi @bzhousd,

Your new submissions are failing due to the downcasting applied to the "row_id" column. I have added an automated patch for your submissions which converts it back, but it will be important to get the fix included in your codebase so it is fixed properly.

The changes needed in your codebase are as follows:

replace("EDA_simple.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"]]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c!="row_id"]')
replace("EDA_v3.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"]]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c!="row_id"]')
replace("EDA.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"]]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c!="row_id"]')
replace("EDA_v4.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c not in [\'drugkey\',\'indicationkey\']]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c not in [\'drugkey\',\'indicationkey\', \'row_id\']]') 

MASKD

Getting INTERNAL SERVER ERROR

18 days ago

Hi all,

In case this error caused you to upload the same solution multiple times and you want to remove the duplicate submissions, please reply with the submission IDs and we can delete them so your submission limit isn't affected.

CHESS

How do I submit solutions?

About 1 month ago

Hey, we seem to be receiving submissions on the problem right now.
Can you try once more and let us know if it is working fine?

I assume it could have been due to some glitch, i.e. the challenge had just started, but I am checking the logs regardless.

CYD Campus Aircraft Localization Competition

Evaluation metric

About 1 month ago

cc: @masorx for clarification

Coverage on Leaderboard

About 1 month ago

Please refer to this discussion: Evaluation metrics

Baseline Python Solution

About 1 month ago

In case anyone else is following this question: it has been answered by the organisers here:

What ID should be considered

About 1 month ago

No worries, the category name may not have been added correctly for some reason; I will look into it.

As for your query: you basically need to use round1_competition.csv.zip (download it from the Resources section) and predict the null values in it, i.e. 109474 rows (+1 for the header).

The 7 files you mentioned are part of the training dataset.

~/Downloads❯ cat round1_competition.csv | grep -E "NaN,NaN" | wc -l
  109474
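
A rough pandas equivalent of the check above (a sketch only; the exact column names aren't shown here, so this simply counts rows that contain any NaN value):

import pandas as pd

# Count rows with at least one missing value in the competition file.
df = pd.read_csv("round1_competition.csv")
print(df.isna().any(axis=1).sum())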

I hope it helps.

What ID should be considered

About 1 month ago

Hi @thanish, can you please share which challenge are you referring to?

Metric Formula Question

About 1 month ago

cc: @masorx for clarification

Incorrect submission procedure

About 2 months ago

Hi @komsomolsk.ai,

Should incorrect submission be counted in overall submission number?

It is a configurable option per challenge. I can check with the organisers whether they want failed submissions counted towards the submission limit and update it accordingly.

I even did not download anything - but platform counted empty submit without file.

Let me check this bug; you are correct that when nothing is uploaded, it shouldn't be counted as a submission. I am reverting submissions with empty/missing file uploads and adding validation so it doesn't happen in the future. Thanks for reporting it.

Scoring takes long

About 2 months ago

Hi @johnnybrixton,

Sorry for the delay in evaluation.
There was a bug which caused the submission to fail without the status being updated.

The bug has been fixed now and you should get feedback on submissions within seconds from now on. :smiley:

Evaluation metrics

About 2 months ago

Hi @RomanChernenko,

The score is RMSE and the secondary score is coverage.

cc: @masorx to confirm; I will change the header on the leaderboard once confirmed.


Announcements

Issue in making submission via website upload for last few hours

2 months ago

Hi everyone,

For the last few hours we faced an issue in our submission creation flow for submissions made via the website (i.e. CSV/JSON uploads), and we were recently notified about it on the forum.

The issue has been identified and fixed. In case you faced any issue making a submission, please try again now.

Sorry for the inconvenience caused. Wishing you the best of luck in your respective challenges! :smiley:

Image build failed errors for code based submissions

3 months ago

Hi participants,

We became aware of an elevated rate of image build failures over the last couple of days, which caused a few image build failures in Food Recognition Challenge and Snake Species Identification Challenge submissions.

The error looked like:

Thin Pool has [...] free data blocks which is less than minimum required [...] free data blocks. 
Create more free space in thin pool or use dm.min_free_space option to change behavior

The issue crept in because our Docker space cleanup wasn't working as expected, which reduced the available disk space. This has been fixed now, but in case you continue to face this issue, please let us know.

AIcrowd Blitz ⚡- May 2020

3 months ago

AIcrowd is excited to announce the launch of AIcrowd Blitz :zap:- our fortnight-long marathon of interesting AI puzzles :tada:.

Whether you are an AI veteran or someone who is just finding their feet in the world of ML and AI, there is something for each one of you. And did we mention there are some cash prizes up for grabs too!? :moneybag:

Our problems have always been intriguing and this time will be no exception. So put on that puzzle hat :tophat: and join us in this marathon.

What :zap:: AIcrowd Blitz

When :spiral_calendar:: 2nd May’20 17:00 CEST - 16th May’20 17:00 CEST

:muscle: Challenge Page: https://aicrowd.com/challenges/aicrowd-blitz-may-2020

Sneak Peek :face_with_monocle:: We have taken some of the classic ML problems and given them a flavor of our own.


Unity Obstacle Tower Challenge

Snake Species Identification Challenge

Submission failed : No participant could be found for this username

About 1 month ago

Hi @gloria_macia_munoz,

Sorry for the error; we are working on syncing username changes from AIcrowd to GitLab and Discourse.

This happened because you renamed your username to gloria_macia_munoz, while the older one remained on GitLab (as we aren't syncing them properly right now; it is being fixed).

I have renamed your username on GitLab now and you should be able to make a new submission.

NOTE: Your repository path will have changed accordingly too, though GitLab redirects properly, so you shouldn't face any issue due to it.

Let me know in case you still face issues making a submission.

Tried pushing a non-debug mode version but was detected to have debug mode active

About 1 month ago

Hi @spil3141,

The debug parameter expects a boolean, but you specified a string, i.e. "false" (instead of false), so it fell back to the default, i.e. debug enabled.

I just checked, and it seems to have happened because the starter kit's README used it as a string in its example as well. To save people from falling into this behaviour accidentally, I have added checks for the strings "true" and "false" from now on.
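
As a quick local sanity check before tagging a submission (a sketch; it assumes aicrowd.json sits at the repository root), you can verify the type of the flag yourself:

import json

# Warn if "debug" was written as the string "false"/"true" instead of a JSON boolean.
with open("aicrowd.json") as f:
    config = json.load(f)

if isinstance(config.get("debug"), str):
    print('Warning: "debug" should be a JSON boolean (true/false), not a string.')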

Meanwhile, congratulations on your complete evaluation, all the best! :smiley:

Can not pushing large file to private repository

About 1 month ago

Hi @spil3141,

Please refer to this FAQ. You need to use git-lfs for large files.

System confirmation for submissions

2 months ago

Hi @gokuleloop,

Yes, it is 10.0. Sorry, I mentioned the version in the linked topic but not in this post; I have updated it so it is visible here too now.

SnakeCLEF how to submit, when and how many?

2 months ago

Hi everyone,

Lukas has already posted the time for the Snakes challenge, but I would like to add my thanks for the feedback; we will make a change to start displaying the exact time plus timezone on the website, so this doesn't create confusion for other challenges/participants.

SnakeCLEF how to submit, when and how many?

2 months ago

Hi @christophmf,

Failed ones are being counted toward the submission limit in this challenge as of now (based on challenge configuration).

We can check with the organizers in case they want X failed submissions not to count towards the daily submission limit. (cc: @picekl in case you can confirm it with Rafael)

Submission not showing up

3 months ago

Hi @yankeesong,

It is because matplotlib is not installed in your current runtime environment.

You can install it by adding it to your environment.yml file. In case you are an advanced Linux user and want to see all the ways you can configure the runtime environment, you can read this FAQ post.

You can view the logs for debug submissions (i.e. debug: true in aicrowd.json) yourself by clicking on the links present in your GitLab issue. Something like below:

:bulb: Logs for Admin Reference : agent-logs | pod_spec-logs | error_trace-logs |

Let me know in case you have any follow-up questions.

Submission not showing up

3 months ago

Hi @yankeesong, the submission is valid only when:

  1. the tag starts with the prefix submission-
  2. the tag's commit hash is new and not the same as any previous submission, i.e. a second submission with the same commit is ignored

I think your tag push wasn't considered a submission due to the first point above.

System confirmation for submissions

3 months ago

Hi participants,

I noticed a question from a participant regarding the system configuration for Snakes Challenge submissions.

Here is the configuration we use:

CPU: 3.92 cores
RAM: 12.3 GB
GPU (if requested via aicrowd.json, cuda: 10.0): K80

These are n1-standard-4 machines. https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#eviction_threshold

I hope this helps!
Wishing you the best of luck for the challenge, and excited to see awesome submissions. :smiley:

Read state_dict in my submission

3 months ago

Hi @yankeesong,

As @picekl mentioned, git-lfs is the way to go.
You can also refer to the link below for a trimmed-down example of uploading models.

In case you have any follow-up questions, do let us know.

Submission is taking really long

3 months ago

Hi @eric,

I have shared a response on the GitLab issue.

It seems there is some problem when you are saving your predictions to file: they are filled with NaN instead of floats.

Submission is taking really long

3 months ago

Hi @eric,

Yes, I am stopping the running submission.

Submission is taking really long

3 months ago

Hi @eric,

I am looking into your submission #67390 now and will update you asap.

Hash id wrong format

3 months ago

Hi @yankeesong,

Please treat the column as "text" and not "number" in whichever software you are using to view the train_labels.csv file.

The hash IDs are random "text" fields; the ID above is stored in the dataset as "9990646e65" (text representation) and not "9.990646E+71" (numeric representation).

~❯ cat ~/Downloads/train_labels.csv | grep 99064
natrix-tessellata,Italy,9990646e65,Europe
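
If you are loading the file with pandas, a minimal sketch is to force the column to string (the column name used below is an assumption; adjust it to the actual header in train_labels.csv):

import pandas as pd

# Read the hash-ID column as text so values like "9990646e65" are not parsed
# as scientific-notation numbers.
labels = pd.read_csv("train_labels.csv", dtype={"hashed_id": str})
print(labels.head())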

I hope this helps; let us know in case you have any further queries.

Wish you luck with the competition! :smiley:

New Submission does not appear in Leaderboard

3 months ago

Hi @eric,

#65930 and #65938 failed due to a SyntaxError, which @picekl has shared in the GitLab issue comments.

But #65941 and #65950 failed due to an issue on our end. I have requeued them now, and made an announcement about it here.

Is it still possible to submit for the snake competition?

7 months ago

Yes, you can submit. The submissions won't count toward the leaderboard ranking.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi @ValAn, participants,

Congratulations to all for your participation.

There is no update right now. The organisers will be reaching out to participants shortly with details about travel grants, etc., and post-challenge follow-up.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi @amapic, I started the forced cudatoolkit=10.0 installation at the same time the announcement above was made, i.e. 14 hours ago.

Edit: I remember the conda environment issue you were facing, and it isn’t related to it.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi @ignasimg,

Thanks for the suggestions.
I completely agree that we need to improve our communication and how we organise information to provide a seamless experience for participants.

We would be glad to hear back from you after the competition and are looking forward to your input.


I checked all the submissions and unfortunately multiple participants are facing the same issue, i.e. a GPU is being allocated but not used by the submission, due to a CUDA version mismatch.

To make the GPU work out of the box, we have introduced a forced installation as below in our Snakes challenge evaluation process:

conda install cudatoolkit=10.0

This should fix the timing issues and we will continue monitoring all the submissions closely.


@ignasimg I have verified the disk performance and it was good. Unfortunately, on debugging, I found your submission faced the same issue, i.e. cudatoolkit=10.1, which may have given the impression that disk was the bottleneck (when it was actually the GPU that wasn't being utilised). The current submission should finish much sooner after the cudatoolkit version pinning.
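
As a quick way to catch this early in your own logs (a sketch, assuming a PyTorch-based submission), you can print GPU visibility at the start of your run:

import torch

# If this prints False while a GPU was requested, the installed cudatoolkit
# likely does not match the driver (e.g. 10.1 instead of 10.0 here).
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))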

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

@ValAn No, I can confirm the timeouts haven't been changed between your previous and current runs. The only issue is that the timeout wasn't enforced properly in the past, which may be why your previous (1-week-old) submission escaped it.

We can absolutely check why it is taking more than 8 hours instead of ~10 minutes locally. Can you help me with the following:

  • Is the local run with a GPU? I can check whether your code is utilising the GPU (when allocated) or running only on the CPU for whatever reason.
  • How many images are you using locally? The server/test dataset has 32428 images to be exact, which may be causing the longer runtime.

The specs for the online environment may also help in case there is a significant difference from your local environment: 4 vCPUs, 16 GB memory, K80 GPU (when enabled).

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi @amapic, let me get back on this after confirming with organisers.

Meanwhile, let's create new questions instead of following up on this thread; it will make searching the Q&A simpler in the future. :sweat_smile:

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi @ValAn,

Submissions should ideally take a few hours to run, but we have put a hard timeout at 8 hours. In case your solution crosses 8 hours, it is marked failed.

Roughly how long should your code take, according to you? Is the local run time way off compared to the evaluation phase?

Otherwise, you can enable the GPU (if you aren't already) to speed up computation and finish the evaluation under 8 hours.

Please let us know in case you require more help debugging your submission. We can try to see which step/part of the code is taking the most time if required.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

@amapic This is happening because these packages are only available for Linux distributions, so installing them on Windows (I assume you are using Windows) fails. This is unfortunately a current limitation of conda.

Example:
https://anaconda.org/anaconda/ncurses has only osx and linux builds, but not windows.

In such a scenario, I recommend removing the above packages from environment.yaml and continuing with your conda env creation. These packages are often included as dependencies of the "main" dependencies, and conda should resolve a similar package for your system automatically.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi participants, @ValAn,

Yes, GPUs are available for Snakes challenge submissions when gpu: true is set in aicrowd.json.

It needs to be 10.0 because the nodes your code runs on currently have GKE version 1.12.x -> Nvidia driver 410.79 -> CUDA 10.0.

We look forward to running future challenges on a higher CUDA version (GKE version). But to keep results, timings, etc. consistent, we do not want to change versions midway through the contest.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi @gokuleloop,

Thanks for pointing it out. We have updated the last date to Jan 17, 2020 on the website as well.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

Hi, git lfs migrate is for rewriting older commits to start using LFS. This is useful in case you have lots of older commits (intended or unintended) and want those files migrated to LFS going forward.

Can I have an example of a code which is working to make a submission on gitlab?

7 months ago

@amapic in case your files are larger than 30-50 MB, you will need to use git-lfs to upload them. Please read about it here: How to upload large files (size) to your submission

ImageCLEF 2020 Coral - Pixel-wise parsing

"Create Submission" not working

2 months ago

Hi @picekl,

We have identified the issue on our side and the submission creation should be working now.

Please let us know in case you face any issue.

"Create Submission" not working

2 months ago

Hi @picekl,

Thanks for the report, looking into it.

LifeCLEF 2020 Bird - Monophone

Submission file failed

2 months ago

Hi @NPU_BAI,

I will leave it to the organisers of the challenge to confirm.

But as far as I can see, you predicted whcsp in some cases, which is not a valid code. The valid names are whcspa or whcspa1, which is why the submission failed.

Final countdown

2 months ago

Hi @fenway,

I looked into it; your profile isn't complete yet, which is why the files are not available for you to download.

Please fill in the required information on your profile page here.


Each time I filled out the profile info

Can you link me to the page where you filled in the profile information? I can then verify that everything is working as expected. Thanks.

Have the dataset files been released?

6 months ago

Hi @houkal,

We are working with @kahst to make the dataset available soon.
It is ready but hit an upload issue, which is being resolved.

Regards,
Shivam

ImageCLEF 2020 Tuberculosis - CT report

Organizing submissions

3 months ago

Hi @SergeKo,

I am sorry, I wasn't aware that this challenge has a private leaderboard; the screenshot is now removed.

Organizing submissions

3 months ago

Hi, do you mean the submission selected for the leaderboard here?

This is picked based on the ranking method set by the challenge organiser, which is descending for both mean_auc and min_auc in this challenge.

Sorry in case I didn’t understand the question properly.
Please let us know in case you meant something else.

ImageCLEF 2020 DrawnUI

Error Message not present in the evaluation script

3 months ago

Hi,

The full traceback is as follows; you can try out your CSV file with ic2020_drawn_ui_evaluator.py:

  File "ic2020_drawn_ui_evaluator.py", line 526, in <module>
    result = evaluator._evaluate(_client_payload)
  File "ic2020_drawn_ui_evaluator.py", line 37, in _evaluate
    predictions = self.load_predictions(submission_file_path)
  File "ic2020_drawn_ui_evaluator.py", line 118, in load_predictions
    for row in reader:
_csv.Error: field larger than field limit (131072)
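
To reproduce the check locally before uploading (a sketch; the submission file name is an assumption), you can scan your CSV for fields that exceed the evaluator's default limit of 131072 characters:

import csv

# Raise the parser limit so oversized fields can be read and reported instead
# of raising the same _csv.Error as above.
csv.field_size_limit(10**9)

LIMIT = 131072
with open("submission.csv", newline="") as f:
    for line_no, row in enumerate(csv.reader(f), start=1):
        for field in row:
            if len(field) > LIMIT:
                print(f"line {line_no}: field of {len(field)} characters exceeds {LIMIT}")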

Getting incomplete error message after submission

3 months ago

Hi @OG_SouL,

Thanks for notifying us about this. We didn't realise the "View" button next to the error message doesn't display the full error message. We will start displaying it properly there.

Meanwhile, for your case, the full error message is as follows:

Error : Incorrect localisation format (Line nbr 1). The format should be …<widget_ID><localisations_delimited_by_comma>…

How long for EUA approval?

7 months ago

cc: @Ivan_Eggel for looking into it

CRDSM

Baseline for CRDSM

3 months ago

Getting Started Code for CRDSM Educational Challenge

Author - Pulkit Gera

Open In Colab

In [0]:
!pip install numpy
!pip install pandas
!pip install sklearn

Download data

The first step is to download our train and test data. We will train a classifier on the train data and make predictions on the test data, then submit our predictions.

In [0]:
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv
--2020-05-16 21:33:33--  https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/test.csv
Resolving s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)... 130.117.252.12, 130.117.252.10, 130.117.252.13, ...
Connecting to s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)|130.117.252.12|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72142 (70K) [text/csv]
Saving to: ‘test.csv’

test.csv            100%[===================>]  70.45K   150KB/s    in 0.5s    

2020-05-16 21:33:34 (150 KB/s) - ‘test.csv’ saved [72142/72142]

--2020-05-16 21:33:36--  https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_crdsm/data/public/train.csv
Resolving s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)... 130.117.252.12, 130.117.252.10, 130.117.252.13, ...
Connecting to s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)|130.117.252.12|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2543764 (2.4M) [text/csv]
Saving to: ‘train.csv’

train.csv           100%[===================>]   2.43M  1.47MB/s    in 1.6s    

2020-05-16 21:33:39 (1.47 MB/s) - ‘train.csv’ saved [2543764/2543764]

Import packages

In [0]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes and makes it easy to analyse the data.
  • Learn more about it here 🤓
In [0]:
all_data = pd.read_csv('data/train.csv')

Analyse Data

In [0]:
all_data.head()
Out[0]:
max_ndvi 20150720_N 20150602_N 20150517_N 20150501_N 20150415_N 20150330_N 20150314_N 20150226_N 20150210_N 20150125_N 20150109_N 20141117_N 20141101_N 20141016_N 20140930_N 20140813_N 20140626_N 20140610_N 20140525_N 20140509_N 20140423_N 20140407_N 20140322_N 20140218_N 20140202_N 20140117_N 20140101_N class
0 997.904 637.5950 658.668 -1882.030 -1924.36 997.904 -1739.990 630.087 -1628.240 -1325.64 -944.084 277.107 -206.7990 536.441 749.348 -482.993 492.001 655.770 -921.193 -1043.160 -1942.490 267.138 366.608 452.238 211.328 -2203.02 -1180.190 433.906 4
1 914.198 634.2400 593.705 -1625.790 -1672.32 914.198 -692.386 707.626 -1670.590 -1408.64 -989.285 214.200 -75.5979 893.439 401.281 -389.933 394.053 666.603 -954.719 -933.934 -625.385 120.059 364.858 476.972 220.878 -2250.00 -1360.560 524.075 4
2 3800.810 1671.3400 1206.880 449.735 1071.21 546.371 1077.840 214.564 849.599 1283.63 1304.910 542.100 922.6190 889.774 836.292 1824.160 1670.270 2307.220 1562.210 1566.160 2208.440 1056.600 385.203 300.560 293.730 2762.57 150.931 3800.810 4
3 952.178 58.0174 -1599.160 210.714 -1052.63 578.807 -1564.630 -858.390 729.790 -3162.14 -1521.680 433.396 228.1530 555.359 530.936 952.178 -1074.760 545.761 -1025.880 368.622 -1786.950 -1227.800 304.621 291.336 369.214 -2202.12 600.359 -1343.550 4
4 1232.120 72.5180 -1220.880 380.436 -1256.93 515.805 -1413.180 -802.942 683.254 -2829.40 -1267.540 461.025 317.5210 404.898 563.716 1232.120 -117.779 682.559 -1813.950 155.624 -1189.710 -924.073 432.150 282.833 298.320 -2197.36 626.379 -826.727 4

Here we use the describe function to get an understanding of the data. It shows us the distribution of all the columns. You can use other functions like info() to get more useful information.

In [0]:
all_data.describe()
#all_data.info()
Out[0]:
max_ndvi 20150720_N 20150602_N 20150517_N 20150501_N 20150415_N 20150330_N 20150314_N 20150226_N 20150210_N 20150125_N 20150109_N 20141117_N 20141101_N 20141016_N 20140930_N 20140813_N 20140626_N 20140610_N 20140525_N 20140509_N 20140423_N 20140407_N 20140322_N 20140218_N 20140202_N 20140117_N 20140101_N class
count 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000 10545.000000
mean 7282.721268 5713.832981 4777.434284 4352.914883 5077.372030 2871.423540 4898.348680 3338.303406 4902.600296 4249.307925 5094.772928 2141.881486 3255.355465 2628.115168 2780.793602 2397.228981 1548.151856 3015.626776 4787.492858 3640.367446 3027.313647 3022.054677 2041.609136 2691.604363 2058.300423 6109.309315 2563.511596 2558.926018 0.550213
std 1603.782784 2283.945491 2735.244614 2870.619613 2512.162084 2675.074079 2578.318759 2421.309390 2691.397266 2777.809493 2777.504638 2149.931518 2596.151532 2256.234526 2446.439258 2387.652138 1034.798320 1670.965823 2745.333581 2298.281052 2054.223951 2176.307289 2020.499263 2408.279935 2212.018257 1944.613487 2336.052498 2413.851082 1.009424
min 563.444000 -433.735000 -1781.790000 -2939.740000 -3536.540000 -1815.630000 -5992.080000 -1677.600000 -2624.640000 -3403.050000 -3024.250000 -4505.720000 -1570.780000 -3305.070000 -1633.980000 -482.993000 -1137.170000 372.067000 -3765.860000 -1043.160000 -4869.010000 -1505.780000 -1445.370000 -4354.630000 -232.292000 -6807.550000 -2139.860000 -4145.250000 0.000000
25% 7285.310000 4027.570000 2060.600000 1446.940000 2984.370000 526.911000 2456.310000 1017.710000 2321.550000 1379.210000 2392.480000 559.867000 1068.940000 616.822000 947.793000 513.204000 718.068000 1582.530000 2003.930000 1392.390000 1405.020000 1010.180000 429.881000 766.451000 494.858000 5646.670000 689.922000 685.680000 0.000000
50% 7886.260000 6737.730000 5270.020000 4394.340000 5584.070000 1584.970000 5638.400000 2872.980000 5672.730000 4278.880000 6261.950000 1157.170000 2277.560000 1770.350000 1600.950000 1210.230000 1260.280000 2779.570000 5266.930000 3596.680000 2671.400000 2619.180000 1245.900000 1511.180000 931.713000 6862.060000 1506.570000 1458.870000 0.000000
75% 8121.780000 7589.020000 7484.110000 7317.950000 7440.210000 5460.080000 7245.040000 5516.610000 7395.610000 7144.480000 7545.880000 3006.960000 5290.800000 4513.960000 4066.930000 3963.590000 1994.910000 4255.580000 7549.430000 5817.750000 4174.010000 4837.610000 3016.520000 4508.510000 2950.880000 7378.020000 4208.730000 4112.550000 1.000000
max 8650.500000 8377.720000 8566.420000 8650.500000 8516.100000 8267.120000 8499.330000 8001.700000 8452.380000 8422.060000 8401.100000 8477.560000 8624.780000 7932.690000 8630.420000 8210.230000 5915.740000 7492.230000 8489.970000 7981.820000 8445.410000 7919.070000 8206.780000 8235.400000 8247.630000 8410.330000 8418.230000 8502.020000 5.000000

Split Data into Train and Validation 🔪

  • The next step is to think of a way to test how well our model is performing. We cannot use the given test data as it does not contain the labels for us to verify against.
  • The workaround for this is to split the given training data into training and validation sets. Validation sets give us an idea of how our model will perform on unseen data: it is like holding back a chunk of data while training our model and then using it for testing. It is a standard way to fine-tune hyperparameters in a model.
  • There are multiple ways to split a dataset into validation and training sets. Two popular ways to go about it are k-fold and leave-one-out. 🧐
  • Validation sets are also used to keep your model from overfitting on the train dataset.
In [0]:
X = all_data.drop('class', axis=1)
y = all_data['class']
# Validation testing
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
  • We have decided to split the data with 20 % as validation and 80 % as training.
  • To learn more about the train_test_split function click here. 🧐
  • This is of course the simplest way to validate your model: simply take a random chunk of the train set and set it aside solely for the purpose of testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with and choose more sophisticated techniques to make your model better.
  • Now, since we have our data split into train and validation sets, we need to separate the corresponding labels from the data.
  • With this step we are all set to move on with a prepared dataset.

TRAINING PHASE 🏋️

Define the Model

  • We have fixed our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, some being Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard and fast rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insights from here. 🧐

In [0]:
# classifier = LogisticRegression()

classifier = SVC(gamma='auto')

# from sklearn import tree
# classifier = tree.DecisionTreeClassifier()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • But you can tune its parameters to increase the performance. To see the list of parameters visit here.
  • Do keep in mind that there exist sophisticated techniques for everything; the key, as noted earlier, is to look them up and experiment to fit your implementation.

To read more about other sklearn classifiers visit here 🧐. Try and use other classifiers to see how the performance of your model changes. Try using Logistic Regression or MLP and compare how the performance changes.

Train the Model

In [0]:
classifier.fit(X_train, y_train)
Out[0]:
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)

Got a warning? Don't worry, it is just because the number of iterations is low (defined in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model learned? Let's check.

Predict on Validation

Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data.

In [0]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist more metrics and measures; you should learn to use them correctly with respect to the solution, dataset and other factors.
  • F1 score is the metric for this challenge.
In [0]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [0]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.7140825035561877
Recall of the model is : 0.7140825035561877
Precision of the model is : 0.7140825035561877
F1 score of the model is : 0.138865836791148

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it's time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [0]:
test_data = pd.read_csv('data/test.csv')

Predict Test Set

Time for the moment of truth! Predict on the test set and make the submission.

In [0]:
y_test = classifier.predict(test_data)

Save the prediction to csv

In [0]:
df = pd.DataFrame(y_test,columns=['class'])
df.to_csv('submission.csv',index=False)

🚧 Note :

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

To download the generated CSV in Colab, run the cell below.

In [0]:
try:
  from google.colab import files
  files.download('submission.csv')
except ImportError as e:
  print("Only for Collab")

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.

ImageCLEF 2020 VQA-Med - VQA

About the submission creation

3 months ago

Hi,

You have marked the challenge as ImageCLEF 2020 VQA-Med in this question, but it doesn't seem to have any retrieval type and run type fields?

Meanwhile, for this challenge, please check the submission instructions given on the challenge page: https://www.aicrowd.com/challenges/imageclef-2020-vqa-med-vqa#submission-instructions

You can also download the train/validation files to get an idea of how the "Answer" field should look.

Let us know in case you have any follow up questions.

Food Recognition Challenge

Submissions taking too long

3 months ago

Hi @naveen_narayanan,

I see that all the submissions you made have failed, either due to an image build failure caused by an improper Dockerfile or due to exceptions in your code.

I can exclude the failed submissions from your daily count so you can make a submission right now (given there are only a few hours left), but whether to consider those for the final leaderboard will be a decision made by the challenge organisers later.

Please go ahead and make a submission!

Submissions taking too long

3 months ago

Hi @simon_mezgec,

Your submission 67274 went through without any problem as far as I can see, while 67214 took longer because the existing VMs were already busy evaluating other submissions. We hadn't accounted for the surge in submissions just before the round end, and I have increased the number of submissions evaluated in parallel (from 4 to 8), which should keep the queue clear.

I hope it helps.

Round 2 End Time

3 months ago

Hi, the configured end time as of now is 17/05/2020, 00:00 UTC.

Submissions taking too long

3 months ago

Hi @simon_mezgec,

The issue is fixed now and you should be able to make a submission. Please remember to pull the latest commit from the mmdetection starter kit.

Explanation:

This basically happened because mmcv had a new release, 0.5.2, about 7 hours ago.

And mmdetection requires/is pinned to the latest release of mmcv.

Due to this, the mmdetection installation started failing. I have pinned the mmcv version to 0.5.1 in the starter kit now. https://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline/commit/84eadc1ca353b5741423e0e1ea9f8db5d4bfd49f

With this change, submissions using the starter kit will go through as usual.
Thanks for notifying us of the issue!

Submissions taking too long

3 months ago

Hi,

No worries. You can ping me at either place.

It isn't happening due to the server side this time.

The issue occurs when the Dockerfile tries to install the mmdetection package. I think it is due to a new release of a package it depends on (or something similar). I am trying to debug it on my side and will let you know as soon as I find a fix for your Dockerfile.

https://gitlab.aicrowd.com/simon_mezgec/food-recognition-challenge-starter-kit/snippets/20588#L1854

Submissions taking too long

3 months ago

Hi @simon_mezgec, your submission has been processed properly now, and I have made a post about the error here.

Submissions taking too long

3 months ago

Hi @simon_mezgec,

Sorry for the trouble. Submission 65790 is on its way to evaluation too now. :smiley:

I will keep a close eye on the new submissions to make sure this doesn't happen again.

Submissions taking too long

3 months ago

Hi @simon_mezgec,

We had an issue in the submissions queue due to which submissions got stuck.

We have manually cleaned up the ongoing submissions which got stuck and re-queued them now (to be exact: 65632, 65262, 65404, 65411).

Please let us know in case any other submission ID is stuck for you.

Submission limit

3 months ago

Hi @aimk,

The submission limit varies from competition to competition.
You can check the submission limit for any challenge on its "new submission" page.

It is shown as something like:

:information_source: You have 100 submissions remaining.

Similarly, for challenges which have a daily limit, the message will be shown along with the time at which the limit will reset.

Let me know if you have any further queries.

New tag does not create an issue or evaluation

4 months ago

Hi @frgfm,

I have the same hypothesis as Mohanty shared above.

Can you share the exact output/error you get when you do git push? I can help based on the error.


Hypotheses in advance, depending on the exact error:

  1. In case it is throwing Fatal: Maximum size, etc., then the reason would be that the file is already committed and you need to migrate it from non-LFS to LFS (this happens most of the time). Reference: How to upload large files (size) to your submission
  2. If the error is Failed to push LFS, or it is stuck uploading, etc., it can be due to an unstable or very slow internet connection on your side causing the upload to stop or time out midway (rare, but it happens). Reference: Cannot upload my model's weights to GitLab - filesize too large

New tag does not create an issue or evaluation

4 months ago

Hi @frgfm,

Welcome to the Food Recognition Challenge. :wave:

:white_check_mark: Solution
To start immediately and make a submission, please create a new commit (editing any file) and submit again using a submission- git tag.

:nerd_face: Description
I went through your git history; this happened because you pushed v0.1.0 followed by submission-v1. We only accept submissions from git tags having the prefix submission-, due to which v0.1.0 failed to create a submission.

When you then retried using submission-v1, it looked into the history, found the same commit hash (from v0.1.0) sent previously, and didn't trigger a submission. Ideally, I believe it should cache/check history only for submission- prefixed tags, which didn't happen here; we will improve this on our side.

Sorry for the inconvenience caused.
Hoping to get exciting submissions from you in the challenge! :smiley:

Editing Docker file

4 months ago

Glad to know that we could help.

:crossed_fingers: for getting a good score on your first run, and all the best in improving your scores over time too. Wishing you luck!

Editing Docker file

4 months ago

Regarding this, I guess the starter kit/baseline you followed didn't respect requirements.txt (because a custom Dockerfile was used, which has the highest precedence). :frowning:

We will get it fixed in whichever starter kit you used for your submission (let us know the link). Sorry for the confusion it caused.

Editing Docker file

4 months ago

Understood. So those lines are actually fine in the Dockerfile.

I debugged further to look into the issue your code (#59996) is facing, and this is what I found:

Traceback (most recent call last):
  File "run.py", line 332, in <module>
    run()
  [.... removed ....]
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py", line 1364, in __init__
    name=self.name)
  File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 504, in placeholder
    x = tf.placeholder(dtype, shape=shape, name=name)
AttributeError: module 'tensorflow' has no attribute 'placeholder'

This is happening due to a different (wrong?) version of TensorFlow being used in your submission versus the one you may be using on your system. This can be mitigated by using tensorflow v1.4 or by disabling the v2 behaviour, etc. More: https://github.com/theislab/scgen/issues/14, https://stackoverflow.com/questions/37383812/tensorflow-module-object-has-no-attribute-placeholder
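
For instance, one common way to keep TF1-style code running under TensorFlow 2.x (a sketch of the "disable v2 behaviour" route mentioned above, not necessarily the fix used in this submission):

# Use the TF1 compatibility API so tf.placeholder resolves again.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=(None, 28))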

When running it manually, the next issue I came across is that you need to add import skimage:

Traceback (most recent call last):
  File "run.py", line 332, in <module>
    run()
  File "run.py", line 304, in run
    predictions=evaluate_coco(model, image_ids=image_ids)
  File "run.py", line 220, in evaluate_coco
    image = load_image(image_id)
  File "run.py", line 162, in load_image
    image = skimage.io.imread(path)
NameError: name 'skimage' is not defined

After that your submission can start running immediately.

Running COCO evaluation on 1959 images.
0
1
2
3
4
5
6
7
8
[....] (I didn't run further)

You can fix your code based on the above remarks and start submitting your solution.

In case you want to debug properly on your desktop and are comfortable with Docker, you can use aicrowd-repo2docker to generate the image & execute ./run.sh (More: Which docker image is used for my submissions?).

Let us know in case we can provide any other feedback. Also, it would be good to know which starter kit/initial repository you referred to for making the submission, so we can add some more debug/testing scripts to it.

All the best with the competition! :smiley:

Editing Docker file

4 months ago

Hi @hannan4252,

Can you tell us why you are trying to edit those lines and what you want to achieve with the above edit?
By default, you shouldn't need to edit them and things should work out of the box.

Not able to ssh to gitlab

4 months ago

Great, I believe you followed the steps in the starter kit or baseline, which make you push the codebase to gitlab.aicrowd.com (by changing the remote), so you ended up correctly on gitlab.aicrowd.com. :smiley:

Happy that things are working on your end now, excited to see your submissions in the challenge and on the leaderboard! :muscle:

Can I submit code in PyTorch?

4 months ago

@nofreewill42, in case you are not comfortable with a Dockerfile, you can still submit and specify your runtime using requirements.txt, environment.yml, etc., based on your preference for pip/conda/others. (Delete the Dockerfile in case you want to use any of these methods.)

Read more here: How to specify runtime environment for your submission

Not able to ssh to gitlab

4 months ago

Hi @nofreewill42,

For making submissions to the challenge you need to use gitlab.aicrowd.com and not gitlab.com. I guess there has been some confusion about this above.

The steps will be as follows:

  1. Add your SSH key to your account at https://gitlab.aicrowd.com/profile/keys
  2. Log in and start using the git repository via the gitlab.aicrowd.com domain.

Unable to orchestrate submission, please contact Administrators

5 months ago

Hi, sorry for the wrong error message in this case. Your submission timed out (i.e. ran for more than 8 hours), due to which it was terminated.

It can happen due to multiple reasons:

  1. Code is too slow
  2. Code needs GPU while GPU wasn’t requested in aicrowd.json
  3. GPU was requested and provided, but your code isn't able to utilise it, either due to a code issue or a package issue.

In case you can identify one of these reasons for your case, you can submit your code again with the fix. Otherwise, you can share the submission ID you would like us to look into; we can help you debug and share what went wrong.

Not able to Download Data

5 months ago

Hi @himanshu ,

Sorry to keep you waiting, the issue is now resolved and the datasets are available again on the website.
Thanks again for letting us know about the issue proactively.

Regards,
Shivam

Not able to Download Data

5 months ago

Thanks for informing us, we are looking into it and will fix it ASAP.

Not able to Download Data

5 months ago

Hi, can you share the error you are getting and for which file?

Ideally the link shared here should work directly: https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files

Submission struck

5 months ago

Hi @hannan4252, I see your submission #59955 is still ongoing/running and not stuck.

Side note: you have run your submission without a GPU. In case you want your submission to run with a GPU (and the slow run is due to that), please enable GPU using this guide.

Local Testing for submission error

5 months ago

Sure, is it Linux or Ubuntu?

Ideally you should be able to test it using the docker ps command.
In case you want to install Docker locally, you can use this help article: https://docs.docker.com/install/

Local Testing for submission error

5 months ago

Thanks, the command looks good, can you share the full traceback in that case?
And is "docker" running on your local machine? I suspect that to be the reason so far.

Local Testing for submission error

5 months ago

Hi, what command did you use to run it locally?

Evaluation Criteria

6 months ago

Yes @gloria_macia_munoz, you are correct about the image_id & score fields. We will also work toward adding this information to the starter kit so it is easier for newer participants.

cc: @nikhil_rayaprolu

Evaluation Criteria

6 months ago

Hi @gloria_macia_munoz,

Yes, the structure you shared is correct. You can ignore the iscrowd field.

An example of the final required structure is as follows:

[
  {
    "image_id": 28902,
    "category_id": 2738,
    "score": 0.18888643674121008,
    "segmentation": [
      [
        270,
        195,
        381,
        823,
        56,
        819,
        527,
        [....]
      ]
    ],
    "bbox": [
      56,
      165,
      678,
      658
    ]
  }
  [....]
]

Please let us know in case there is any followup question. All the best with the challenge! :smiley:

Instructions, EDA and baseline for Food Recognition Challenge

8 months ago

Hi @joao_schapke,

You will get an environment variable AICROWD_PREDICTIONS_OUTPUT_PATH containing the absolute path of the location at which the JSON file needs to be written.

Example from starter kit here.
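For illustration, a minimal sketch of how it is typically used (the predictions list below is a hypothetical placeholder for your model's output):

import json
import os

# Hypothetical predictions produced by your model.
predictions = [{"image_id": 28902, "category_id": 2738, "score": 0.18}]

# The evaluator provides the absolute output location via this variable.
output_path = os.environ["AICROWD_PREDICTIONS_OUTPUT_PATH"]
with open(output_path, "w") as fp:
    json.dump(predictions, fp)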

Cannot upload my model's weights to GitLab - filesize too large

8 months ago

Thanks for the inputs, I have added Git for Windows in the FAQ above.

We had cases where people wanted to upload files of several GBs, due to which the timeout was increased/removed. I will go through the current value and set it to a better one.

Instructions, EDA and baseline for Food Recognition Challenge

8 months ago

Hi @joao_schapke, please use the git lfs clone <repo> / git lfs pull command in your above repository, as Nikhil also mentioned. Do let us know how it goes and if the problem continues.

Cannot upload my model's weights to GitLab - filesize too large

8 months ago

Hi @leandro_a_bugnon,

Are you facing the error file size too large, or is git-lfs getting stuck during upload?

In case of file size too large, please go through How to upload large files (size) to your submission.

Instructions, EDA and baseline for Food Recognition Challenge

8 months ago

Hi @shraddhaamohan,

Thanks for notifying us about it. The Dockerfile for the baseline was dependent on the https://github.com/open-mmlab/mmdetection repository's master branch, which is broken right now. We have updated the baseline repository to point to a stable release version now.

Submission confusion. Am I dumb?

8 months ago

@shraddhaamohan Sorry for the confusion above, it looks like you were submitting the baseline solution as-is, and this is a bug in the baseline itself rather than something you committed. We are updating the baseline with the above fix.

Submission confusion. Am I dumb?

8 months ago

I can confirm that GPUs are available for evaluations if you have used gpu: true in your aicrowd.json, and they were not removed at any point. In case someone is facing issues getting a GPU in their submission, please share your submission ID with us so it can be investigated.


@shraddhaamohan, in your submission above, i.e. #27829, your assert was assert torch.cuda.is_available()==True,"NO GPU AVAILABLE", which wasn't showing the full issue.

I tried to debug it on your submitted code, and this was happening:

>>> import torch
>>> torch.backends.cudnn.enabled
True
>>> torch.cuda.is_available()
False
aicrowd@aicrowd-food-recognition-challenge-27829-38f8:~$ nvidia-smi -L
GPU 0: Tesla K80 (UUID: GPU-cd5d75c4-a9c5-13c5-bd7a-267d82ae4002)
aicrowd@aicrowd-food-recognition-challenge-27829-38f8:~$ nvidia-smi
Tue Dec 17 14:19:22 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   47C    P8    30W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

We further found this is happening because the underlying CUDA version we provide to submissions is 10.0 and submissions are evaluated with the docker image "nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04", while your submission has a custom Dockerfile which was trying to run with pytorch/pytorch:1.3-cuda10.1-cudnn7-devel, leading to the "no GPU found" assert above.

Finally, the diff between your existing Dockerfile and a working one is as follows:

--- a/Dockerfile
+++ b/Dockerfile
@@ -1,5 +1,5 @@
-ARG PYTORCH="1.3"
-ARG CUDA="10.1"
+ARG PYTORCH="1.2"
+ARG CUDA="10.0"
 ARG CUDNN="7"

 FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
@@ -17,6 +17,7 @@ RUN conda install cython -y && conda clean --all

 RUN git clone [removed-name] /[removed-name]
 WORKDIR /[removed-name]
+RUN git reset --hard c68890db5910eed4fc8ec2acf4cdf1426cb038e9
 RUN pip install --no-cache-dir -e .
 RUN cd /

The repository you were cloning above was working the last time your docker image was built, i.e. Dec 10, and some of the commits currently in the master branch have broken pip install. We suggest pinning versions in your future submissions so an inconsistent state doesn't occur on re-build/re-run.

I have shared the new error traceback in your submission's GitLab issue above (after the GPU assert passed).

tl;dr: I tried running your exact codebase with the pytorch/pytorch:1.2-cuda10.0-cudnn7-devel base image & the above Dockerfile diff, and it seems to be working fine. Let us know in case there is any follow-up doubt.

ImageCLEF 2020 Lifelog - LMRT

10 Test Topics link

3 months ago

Hi, it works now! :smiley:

10 Test Topics link

3 months ago

cc: @Ivan_Eggel for looking into it.

The page referred above is: https://www.imageclef.org/system/files/ImageCLEF2020-test-topics.pdf

And the link is present in overview section here: https://www.aicrowd.com/challenges/imageclef-2020-lifelog-lmrt#topics-and%20ground%20truth%20release

10 test topics for LMRT Tasks are released under this link.

Problem: Registering for LMRT

5 months ago

@BIDAL-HCMUS, we add a challenge to your AIcrowd profile page after the 1st submission is made for that challenge. I hope this clarifies your doubt.

FOODC

Internal Server Error

3 months ago

Hi everyone,

The issue is now fixed and all the pending/failed submissions have been evaluated.

Need to download datasets is necessary? or else any other way

3 months ago

Yes, you will need to download the dataset to train your model, which can then predict values for the test dataset.

BUT you can use our starter kit present here: Baseline - FOODC and click "Open In Colab" to run it completely online. By using Colab you won't need to download/install/run anything on your system; you can do everything directly on an online server (available as a Python notebook).

Let me know if I understood the question wrongly or you need any further clarification.

Need to download datasets is necessary? or else any other way

3 months ago

Hi @mouli14, which challenge are you asking this question about?

ORIENTME

Where is the leaderboard? Submission confusion

3 months ago

Hi @jakub_bartczuk,

Thanks for notifying us.
Let me investigate where we went wrong and displayed the wrong link as you mentioned. Meanwhile, I will add a quick redirection so it doesn't cause confusion for any other participant.

Yes, my bad, updated the link in my previous comment now.

Regards,
Shivam

Update: Redirection is now active for all the problems.

Where is the leaderboard? Submission confusion

3 months ago

We have a concept of problems & challenges, and a problem can be used as part of multiple challenges.

The link you are referring to above has its own leaderboard and submission queue, independent of the AIcrowd Blitz :zap: submission queue & leaderboard. This is the reason why your scores weren't reflected.

I believe you ended up on the above link due to our recent email notification?

Where is the leaderboard? Submission confusion

3 months ago

Hi @jakub_bartczuk,

You probably made submissions directly to the problem instead of the ongoing AIcrowd Blitz :zap: competition, due to which your name was missing from the leaderboard, etc.

I have assigned your submissions manually to AIcrowd Blitz :zap: challenge.

Link to problem:

Submission Link

Sorry for the confusion caused.

Regards,
Shivam

PKHND

LeaderBoard Positions

3 months ago

Hi @dills,

The leaderboards are now updated to reflect rankings properly.

Sorry for the inconvenience and wishing you best of luck with the challenge! :smiley:

LeaderBoard Positions

3 months ago

Hi @dills,

Thanks for pointing it out. We are working on a fix in our leaderboard computation for the same-scores scenario.

It will get changed to rank "1" for everyone having 1.0 (N users), rank "N+1" for the next score, and so on.

ImageCLEF 2020 Caption - Concept Detection

Approval of EUA

5 months ago

It should not take more than 1 or 2 days (sharing based on a similar question we had on the forum in the past).

Possibility of mixed teams

6 months ago

cc: @Ivan_Eggel for clarification.

LifeCLEF 2020 Plant

Dataset fails to download

6 months ago

Hi @herve.goeau, @Ivan_Eggel,

I see some queries around the dataset for this CLEF challenge.
Please let us know in case AIcrowd should host the dataset on our side; we can coordinate it over email quickly.

Novartis DSAI Challenge

Evaluation Error

7 months ago

Hi @maruthi0506, I have shared the error logs on your submission.

Randomly failing image builds - what is going on?

7 months ago

Hi @bjoern.holzhauer, looking into it. The error seems to occur while apt packages are being installed; I will keep you updated here.

Unable to push file that "exceeds maximum limit"

7 months ago

Hi @ngewkokyew, @bjoern.holzhauer,

I assume the problem being mentioned here is: large files have already been committed, and now no matter what you do, git push rejects with the above error?

If this is the case, please use the git lfs migrate command, which amends your git tree and fixes this problem. You may need to force push once this is done. https://github.com/git-lfs/git-lfs/wiki/Tutorial#migrating-existing-repository-data-to-lfs

Unable to push file that "exceeds maximum limit"

7 months ago

Hi @ngewkokyew,

I remember the workspace has an older git version (I assume) which doesn't come with LFS, please install it using:

sudo apt-get update
sudo apt-get install git-lfs

Submissions get killed without any error message

7 months ago

We are using the Kubernetes cluster from the organisers, which has 8G base machines, and AKS has quite a hard eviction policy, due to which it kills code as soon as it reaches ~5.5G.

The best option might be to see if your RAM usage can be reduced by downcasting variables.
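A minimal sketch of what downcasting could look like with pandas (the function and column handling here are illustrative, not from your code):

import numpy as np
import pandas as pd

def downcast_numeric(df: pd.DataFrame) -> pd.DataFrame:
    """Shrink numeric columns to the smallest dtype that still holds their values."""
    for col in df.select_dtypes(include=[np.number]).columns:
        if pd.api.types.is_integer_dtype(df[col]):
            df[col] = pd.to_numeric(df[col], downcast="integer")
        else:
            df[col] = pd.to_numeric(df[col], downcast="float")
    return df

# df = downcast_numeric(df)
# df.info(memory_usage="deep")  # compare memory usage before/after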

Meanwhile, @kelleni2, @laurashishodia, is it possible to change the underlying nodes in the AKS cluster from Standard_F4s_v2 to some higher-RAM nodes? I am seeing OOM issues for multiple teams (3-4+ at least).

Submissions get killed without any error message

7 months ago

Hi @bjoern.holzhauer,

Sorry for missing your query earlier.

Yes, the Killed message refers to an OOMKill of your submission. It happens when your codebase breaches the memory limit, i.e. ~5.5G, during evaluation.

The training data on the server is the same as in the workspace except for the row_id part, which was changed; I announced this on the day of the change.

Can you share your latest submission ID which you think is getting stuck only due to the OOM issue? I can debug it for you and share the part of the code which is causing high memory usage.

Different submission tags but same commit tag

7 months ago

Hi @ngewkokyew,

Are you committing your code with the changes? The submission.sh script just creates a submission from your current git commit.

You will need to do:

[... make your changes ...]
git add <files/you/want/to/add>
git commit -m "Your commit message"
./submission.sh <solution tag>

Please let us know in case this solves your problem.

Challenge timelines updates/clarifications

7 months ago

Hi @ngewkokyew,

Please e-mail this issue to the Aridhia team (servicedesk@aridhia.com) with a description of the issue and your team name.

UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

8 months ago

Hi @carlos.cortes, @all,

The scores are now updated for all the submissions and new ranks are available on the leaderboard.

We were following the approach of re-evaluating one submission at a time, providing feedback/a fix if it failed, and so on, which turned out to be quite slow. Right now, we have re-evaluated all the submissions, and the submissions which failed are being given feedback or automatic patches asynchronously.

All the best and, Merry Christmas! :smiley: :christmas_tree:

Evaluation Error

8 months ago

Sorry for the sed issue, we were trying to provide an automated patch to user code for row_id, which went wrong. I have undone this and requeued all the submissions affected by it now.

Evaluation Error

8 months ago

Hi @maruthi0506,

I can confirm the recent submissions failed due to an OOM kill when they reached ~5.5G of memory usage.

Upon debugging #31962, I found it is happening due to Series.str.get_dummies being used in the code, which is not a memory-optimised function.
Point at which the OOM is happening: https://gitlab.aicrowd.com/maruthi0506/dsai-challenge-solution/blob/master/predict.py#L279

The following demonstrates what is happening in your submission, along with alternatives you can use (variable names changed to avoid making public any information on the features used):

(suggested way #1, decently memory efficient)
>>> something_stack_2 = pd.get_dummies(something_stack)
>>> something_stack_2.info()
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 38981 entries, (0, 0) to (8690, 2)
Columns: 4589 entries,  to <removed>
dtypes: uint8(4589)
memory usage: 170.7 MB

(suggested way #2, most memory efficient, slower than #1)
>>> something_stack_2 = pd.get_dummies(something_stack, sparse=True)
>>> something_stack_2.info()
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 38981 entries, (0, 0) to (8690, 2)
Columns: 4589 entries,  to <removed>
dtypes: Sparse[uint8, 0](4589)
memory usage: 304.8 KB

(what your submission is doing -- ~5G was available at this time)
>>> something_stack_2 = something_stack.str.get_dummies()
Killed

NOTE: The only difference between the two approaches is that Series.str.get_dummies uses "|" as the separator by default. In case you were relying on it, you can do something like below:

>>> pd.get_dummies(pd.Series(np.concatenate(something_stack.str.split('|'))))

Let us know in case the problem continues after changing this (here and anywhere else it is used in your codebase); we will be happy to debug further accordingly.

References:
[1]: https://github.com/pandas-dev/pandas/issues/19618
[2]: https://stackoverflow.com/a/31324037

Evalution error - row id mismatch

8 months ago

Hi @rachaas1,

Yes, the output file format generated by your code is wrong: prob_approval needs to be a float instead of arr[float]. I have shared the current output as a comment in the above link.
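For illustration, a rough sketch of the difference (all names and values below are hypothetical, not taken from your code):

import numpy as np
import pandas as pd

# Hypothetical stand-ins for your test row ids and predict_proba-style output.
row_ids = np.array([5, 6, 7])
proba = np.array([[0.8, 0.2], [0.4, 0.6], [0.9, 0.1]])

# Wrong: storing the whole probability array per row -> prob_approval becomes arr[float]
# submission["prob_approval"] = list(proba)

# Right: one float per row, e.g. the probability of the positive class.
submission = pd.DataFrame({"row_id": row_ids, "prob_approval": proba[:, 1]})
submission.to_csv("predictions.csv", index=False)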

Evaluation Error

8 months ago

The solution has 8GB RAM available.

Edit: out of the 8GB, ~5.5GB is available for the evaluation code.

Evaluation Error

8 months ago

Hi, it is getting killed while running, without a traceback.

Does it have any high RAM/CPU needs?

Evaluation Error

8 months ago

Hi, it looks like git-lfs isn't installed on your system.

Can you try sudo apt-get install git-lfs? (more)

Evaluation Error

8 months ago

Yes, just the nltk_data folder needs to be present.

Yes, the content remains the same.

[Announcement] row_id can be dynamic and different to workspace file

8 months ago

Hi everyone,

Please make sure that your submissions are creating the prediction file with the correct row_id.

The row_id was not being matched strictly until the previous evaluator version, and we have now added an assert for it. Because of this, some submissions have failed with the row_ids in the generated prediction file do not match that of the ground_truth.

Your solution needs to output the row_id from the testing data read during evaluation, not hardcoded / sequential values (0, 1, 2, …). Also note that the row_id can be different & shuffled in the data present on the evaluation servers v/s the workspace, to make sure people who just submit a predictions csv (instead of code) fail automatically. A minimal sketch is shown below.

We are trying to apply an automatic patch wherever possible, but it ultimately needs to be fixed in the submitted solutions. An example patch is present here.
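A minimal sketch of the idea (the file paths and the constant prediction below are placeholders; use whatever your submission already reads and writes):

import pandas as pd

# Read the test data provided at evaluation time and carry its row_id through;
# never hardcode row ids or regenerate them as 0, 1, 2, ...
test_df = pd.read_csv("data/test.csv")

predictions = pd.DataFrame({
    "row_id": test_df["row_id"],
    "prob_approval": 0.5,  # placeholder for your model's actual output
})
predictions.to_csv("predictions.csv", index=False)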

Evaluation Error

8 months ago

Hi @maruthi0506,

Yes, the row_id values, e.g. 5, 6, 7 in the test data provided to you on the workspace, can be anything, say 1000123, 1001010, 100001 (and in random order), in the test data present on the server going forward, so we know predictions are being carried out during evaluation.


To use nltk for the evaluation, you need to provide the nltk_data folder in your repository root, which can be done as follows (current working directory: your repository root):

python -c "import nltk; nltk.download('stopwords', download_dir='./nltk_data')"
python -c "import nltk; nltk.download('wordnet', download_dir='./nltk_data')"

OR (assuming you already have it downloaded in workspace)

cp -r ~/nltk_data nltk_data

Followed by uploading it to git as:

#> git lfs install   (if not already using git lfs)
#> git lfs track "nltk_data/**"
#> git add nltk_data
#> git commit [...]

Please let us know in case you still face any issue.

Submission Evaluation - Queued since 2 hours

8 months ago

Hi Shravan,

It looks like you have uploaded your prediction file, i.e. lgbm.csv, and are directly dumping it to the output path. We want your prediction model to run on the server itself, not prediction files to be submitted as the solution. This is why your submission failed.

The row_id values, e.g. 5, 6, 7 in the test data provided to you, can be anything, say 1000123, 1001010, 100001 (and in random order), in the test data present on the server going forward, so we know predictions are being carried out during evaluation.

Evaluation Error

8 months ago

Hi everyone, please make sure that your submissions are creating the prediction file with the correct row_id. The row_id was not being matched strictly until the previous evaluator version, and we have now added an assert for it. Because of this, some submissions have failed with the row_ids in the generated prediction file do not match that of the ground_truth.

Your solution needs to output the row_id as shared in the test data, not hardcoded / sequential values (0, 1, 2, …). Also note that the row_id can be different in the data present on the evaluation servers v/s the workspace, to make sure people aren't hardcoding it from that file.

We are trying to apply an automatic patch wherever possible, but it ultimately needs to be fixed in the submitted solutions.

Submission Evaluation - Queued since 2 hours

8 months ago

Hi everyone, please make sure that your submissions are creating the prediction file with the correct row_id. The row_id was not being matched strictly until the previous evaluator version, and we have now added an assert for it. Because of this, some submissions have failed with the row_ids in the generated prediction file do not match that of the ground_truth.

Your solution needs to output the row_id as shared in the test data, not hardcoded / sequential values (0, 1, 2, …). Also note that the row_id can be different in the data present on the evaluation servers v/s the workspace, to make sure people aren't hardcoding it from that file.

We are trying to apply an automatic patch wherever possible, but it ultimately needs to be fixed in the submitted solutions.

UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

8 months ago

Hi @carlos.cortes,

The submissions are being re-evaluated right now. Given we have a large number of submissions, i.e. 1000+ successful submissions, it will take a few more hours before all of them are re-evaluated with the new dataset.

Submission Evaluation - Queued since 2 hours

8 months ago

Hi @shravankoninti,

This issue is resolved now, and your above submission has the latest feedback, i.e. it was evaluated on the newer dataset. Meanwhile, other submissions by you and other participants are still in the queue and being re-evaluated right now.

Submission Evaluation - Queued since 2 hours

8 months ago

Yes, I think workspaces will be available to you. Please go through the announcement made by Nick here. UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

Submission Evaluation - Queued since 2 hours

8 months ago

Hi @shravankoninti,

Yes, this condition was added in the newer version of the evaluator that uses the updated splitting announced here. I am looking into this and will keep you updated here.

UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

8 months ago

Hi @bzhousd,

Yes, the splitting approach is being changed.

Different results from debug vs non-debug mode

8 months ago

Yes, the debug mode uses a small subset as described above.

No, the logs are not visible for successful submissions in our current design, but we will be glad to help fetch the logs for a successful submission (in debug mode, I assume) if this is blocking you.

Final submission selection

8 months ago

Hi, linking the existing unanswered thread for this question.

Is the scoring function F1 or logloss?

8 months ago

cc: @kelleni2, @satyakantipudi

Please confirm the policy for final scoring, i.e. will all submissions be considered, or only the one having the best score on the partial dataset?

Log Loss and F1 on Leaderboard different from "PUBLIC_F1" and "PUBLIC_LOGLOSS"

8 months ago

Hi, you are looking at a debug submission score. http://gitlab.aicrowd.com/wangbot/dsai-challenge-solution/issues/23#note_34783

  1. The submission is reflected back on AIcrowd.com / the leaderboard with the lowest possible score for the given competition.

Read more about scores for debug submissions here.

Test file changed

8 months ago

Hi @carlos.cortes,

Can you tell us the submission ID and where you noticed the above file? Do you mean in the workspace?

Trajnet++ (A Trajectory Forecasting Challenge)

Due Date and Conference related information

7 months ago

Hi @student,

The deadline for AMLD participants in particular was December 31.


cc: @parth_kothari, he would be able to share more information about future deadlines.

Frequently Asked Questions

How to add SSH key to Gitlab?

7 months ago

Create and add your SSH public key

It is best practice to use Git over SSH instead of Git over HTTP. In order to use SSH, you will need to:

  1. Create an SSH key pair on your local computer.
  2. Add the key to GitLab

Creating your SSH key pair

ssh-keygen -t rsa -b 4096 -C "name@example.com"


Adding your SSH public key to GitLab

Once you have the key on your system at a location of your choice, you must manually copy the public key and add it at https://gitlab.aicrowd.com/profile/keys.

Which docker image is used for my submissions?

7 months ago

We use a custom fork of repo2docker for our image build processes. Its functionality is exactly the same as upstream repo2docker, except for the base image. Our Dockerfile uses the following base image:

FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

How to use the above?
To install the forked version of repo2docker, install it via PyPI using:

pip install aicrowd-repo2docker

Example to build your repository:

git clone git@gitlab.aicrowd.com:<your-username>/<your-repository>.git
cd <your-repository>
pip install -U aicrowd-repo2docker
aicrowd-repo2docker \
        --no-run \
        --user-id 1001 \
        --user-name aicrowd \
        --image-name sample_aicrowd_build_45f36 \
        --debug .

How to specify custom runtime?

Please read about it in our FAQ question: How to specify runtime environment for your submission.

Why fork repo2docker?
It is currently not possible to use a custom base image in vanilla repo2docker; this is being tracked here.

Read more about repo2docker here.

How to enable GPU for your submission?

7 months ago

GPUs are allotted to your submission on a need basis. In case your submission uses a GPU and you want one allocated, please set gpu: true in your aicrowd.json.
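For illustration, the relevant part could look like this (the other keys are generic placeholders from a typical starter kit and may differ in your repository; the gpu field is the one that matters here):

{
  "challenge_id": "your-challenge-id",
  "authors": ["your-aicrowd-username"],
  "description": "sample submission",
  "gpu": true
}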

Versions we support right now:

nvidia driver version: 410.79
cuda: 10.0

NOTE: The majority of the challenges have GPU enabled; this information should be available directly on the challenge page or in the starter kits. In case you are not sure whether the challenge you are participating in has GPUs, please reach out to us on Discourse.

AMLD 2020 - Transfer Learning for International...

Submission Error AMLD 2020

7 months ago

Hi @student,

Are the submissions you are referring to #57743 and #57745 respectively? Both of these submissions are part of the leaderboard calculation. Is it possible that you tried to view the leaderboard immediately, while we refresh it only every ~30 seconds?

Your submissions:

Flatland Challenge

Can not download test data

8 months ago

Hi @a2821952,

The download link is working fine.
The generated download links expire, so if you are re-using an older link it may fail. Please always download by opening the link directly from the resources page here: https://www.aicrowd.com/challenges/flatland-challenge/dataset_files

Let us know in case it doesn’t work out for you.

Evaluation process

8 months ago

Hi @RomanChernenko,

  1. Yes, the solutions are evaluated on the same test samples, but they are shuffled. https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/evaluators/service.py#L89
  2. Video generation is done on a subset of all environments and remains the same for all evaluations. It is possible that when you opened the leaderboard, all videos didn't start playing at the same time, leading to this perception?
  3. This is the place where the Flatland library generates the score, and the N+1 thing might not be the reason. I will let @mlerik investigate & comment on it.

Evaluation time

8 months ago

@mugurelionut Please use 8 hours as the time limit; we have updated our evaluation phase to strictly enforce it going forward.

NOTE: Your submission with 28845 seconds of total execution time is safe within the 8-hour time limit as well. The non-evaluation phase takes roughly 5-10 minutes, which is included in the timing visible above.

Total Time = Docker Image Building + Orchestration + Execution (8 hours enforced now)

AMLD 2020 - D'Avatar Challenge

About the submission format

8 months ago

Hi @borisov,

The submission format you shared above is valid JSON.
You can also download a sample output JSON from the resources section on the contest page here.
