shivam

Name

Shivam Khandelwal

Organization

AIcrowd

Location

Gurgaon, IN

Ratings Progression


Challenge Categories


Challenges Entered

A benchmark for image-based food recognition

1 Travel Grants
1 Authorship/Co-Authorship
Misc Prizes : Various Prizes

Latest submissions

graded 59791
failed 59371
failed 31084

Help improve humanitarian crisis response through better NLP modeling

AMLD Conference Ticket Prize Money
Misc Prizes : Presentation of your findings at the Humanitarian Partnership Week at Palais des Nations, Geneva

Latest submissions

failed 32245

Recognizing bird sounds in monophone soundscapes

USD 5K as part of Microsoft's AI for earth program Prize Money
1 Authorship/Co-Authorship

Latest submissions

No submissions made in this challenge.


Food Recognition Challenge

Unable to orchestrate submission, please contact Administrators

4 days ago

Hi, sorry for the wrong error message in this case. Your submission timed out (it ran for more than 8 hours), due to which it was terminated.

It can happen due to multiple reasons:

  1. Your code is too slow
  2. Your code needs a GPU but a GPU wasn't requested in aicrowd.json
  3. A GPU was requested and provided, but your code isn't able to utilise it, due to either a code issue or a package issue.

If you can identify one of these reasons in your case, you can submit your code again with a fix. Otherwise, share the submission ID you would like us to look into; we can help you debug and share what went wrong.
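For cases 2 and 3, a quick sanity check you can run at the start of your code (a minimal sketch, assuming a PyTorch-based submission like the ones debugged elsewhere in this thread):

import torch

# True only when a GPU was both requested in aicrowd.json and is usable from your code
print(torch.backends.cudnn.enabled)
print(torch.cuda.is_available())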

Not able to Download Data

8 days ago

Hi @himanshu ,

Sorry to keep you waiting; the issue is now resolved and the datasets are available again on the website.
Thanks again for letting us know about the issue proactively.

Regards,
Shivam

Not able to Download Data

9 days ago

Thanks for informing us; we are looking into it and will fix it ASAP.

Not able to Download Data

9 days ago

Hi, can you share the error you are getting and for which file?

Ideally, the link shared here should work directly: https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files

Submission struck

10 days ago

Hi @hannan4252, I see your submission #59955 is still ongoing/running and not stuck.

Side note: you ran your submission without a GPU. In case you want your submission to run with a GPU, and the slow run is due to that, please enable GPU using this guide.

Local Testing for submission error

10 days ago

Sure, is it Linux or Ubuntu?

Ideally you should be able to check it using the docker ps command.
In case you want to install Docker locally, you can use this help article: https://docs.docker.com/install/

Local Testing for submission error

10 days ago

Thanks, the command looks good; can you share the full traceback in that case?
Also, is Docker running on your local machine? I suspect that to be the reason so far.

Local Testing for submission error

10 days ago

Hi, what command did you use to run it locally?

Evaluation Criteria

About 1 month ago

Yes @gloria_macia_munoz, you are correct about the image_id & score fields. We will also work toward adding this information to the starter kit so it is easier for newer participants.

cc: @nikhil_rayaprolu

Evaluation Criteria

About 1 month ago

Hi @gloria_macia_munoz,

Yes, the structure shared by you is correct. You can ignore the iscrowd field.

An example of the final structure required is as follows:

[
  {
    "image_id": 28902,
    "category_id": 2738,
    "score": 0.18888643674121008,
    "segmentation": [
      [
        270,
        195,
        381,
        823,
        56,
        819,
        527,
        [....]
      ]
    ],
    "bbox": [
      56,
      165,
      678,
      658
    ]
  }
  [....]
]

Please let us know in case there is any followup question. All the best with the challenge! :smiley:

Instructions, EDA and baseline for Food Recognition Challenge

3 months ago

Hi @joao_schapke,

You will get an environment variable AICROWD_PREDICTIONS_OUTPUT_PATH containing the absolute path of the location at which the JSON file needs to be written.

Example from starter kit here.
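A minimal sketch of using it (the predictions list is a placeholder; the field names follow the structure shared in the Evaluation Criteria thread above, and the fallback path is only for local runs):

import json
import os

predictions = [
    {
        "image_id": 28902,
        "category_id": 2738,
        "score": 0.19,
        "segmentation": [[270, 195, 381, 823, 56, 819]],
        "bbox": [56, 165, 678, 658],
    },
]

# AICROWD_PREDICTIONS_OUTPUT_PATH is set during evaluation
output_path = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "predictions.json")
with open(output_path, "w") as f:
    json.dump(predictions, f)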

Cannot upload my model's weights to GitLab - filesize too large

3 months ago

Thanks for the inputs; I have added Git for Windows to the FAQ above.

We had cases where people wanted to upload files in the GBs, due to which the timeout was increased/removed. I will go through the current value and set it to something better.

Instructions, EDA and baseline for Food Recognition Challenge

3 months ago

Hi @joao_schapke, please use the git lfs clone <repo> / git lfs pull commands in your above repository, as Nikhil also mentioned. Do let us know how it goes and whether the problem continues.

Cannot upload my model's weights to GitLab - filesize too large

3 months ago

Hi @leandro_a_bugnon,

Are you facing the "file size too large" error, or is git-lfs getting stuck during upload?

In case of file size too large, please go through How to upload large files (size) to your submission.

Instructions, EDA and baseline for Food Recognition Challenge

3 months ago

Hi @shraddhaamohan,

Thanks for notifying us about it. The Dockerfile for the baseline depended on the master branch of the https://github.com/open-mmlab/mmdetection repository, which is broken right now. We have updated the baseline repository to point to a stable release version now.

Submission confusion. Am I dumb?

3 months ago

@shraddhaamohan Sorry for the confusion above; it looks like you were submitting the baseline solution as-is, and this is a bug in the baseline itself rather than something you committed. We are updating the baseline with the above fix.

Submission confusion. Am I dumb?

3 months ago

I can confirm that GPU is available for evaluations if you have used gpu: true in your aicrowd.json, and GPUs were not removed at any point. In case someone is facing issues getting a GPU in their submission, please share your submission ID with us so it can be investigated.


@shraddhaamohan, in your submission above, i.e. #27829, your assertion was assert torch.cuda.is_available()==True,"NO GPU AVAILABLE", which wasn't showing the full issue.

I tried to debug it on your submitted code, and this was happening:

>>> import torch
>>> torch.backends.cudnn.enabled
True
>>> torch.cuda.is_available()
False
aicrowd@aicrowd-food-recognition-challenge-27829-38f8:~$ nvidia-smi -L
GPU 0: Tesla K80 (UUID: GPU-cd5d75c4-a9c5-13c5-bd7a-267d82ae4002)
aicrowd@aicrowd-food-recognition-challenge-27829-38f8:~$ nvidia-smi
Tue Dec 17 14:19:22 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   47C    P8    30W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

We further found this is happening because the underlying CUDA version we provide to submissions is 10.0, and submissions are evaluated with the docker image "nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04". Meanwhile, your submission has a custom Dockerfile which was trying to run with pytorch/pytorch:1.3-cuda10.1-cudnn7-devel, leading to the "no GPU found" assert above.

Finally, the diff for your existing v/s working Dockerfile is as follows:

--- a/Dockerfile
+++ b/Dockerfile
@@ -1,5 +1,5 @@
-ARG PYTORCH="1.3"
-ARG CUDA="10.1"
+ARG PYTORCH="1.2"
+ARG CUDA="10.0"
 ARG CUDNN="7"

 FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
@@ -17,6 +17,7 @@ RUN conda install cython -y && conda clean --all

 RUN git clone [removed-name] /[removed-name]
 WORKDIR /[removed-name]
+RUN git reset --hard c68890db5910eed4fc8ec2acf4cdf1426cb038e9
 RUN pip install --no-cache-dir -e .
 RUN cd /

The repository you were cloning above was working the last time your docker image was built, i.e. Dec 10, and one of the commits currently in the master branch has broken pip install. We suggest pinning versions in your future submissions so an inconsistent state doesn't occur on re-build/re-run.

I have shared the new error traceback in your submission’s gitlab issue above (after GPU assert went fine).

tl;dr I tried running your exact codebase with the pytorch/pytorch:1.2-cuda10.0-cudnn7-devel base image & the above Dockerfile diff. It seems to be working fine after that. Let us know in case there is any follow-up doubt.

Issues with submitting

4 months ago

Hi @shraddhaamohan, you are correct. A couple of user-submitted codes ran into errors but kept running forever (didn't exit), due to which the pipeline was blocked. We will be adding a sensible overall timeout for the challenge so this blockage is taken care of automatically going forward.

Issues with submitting

4 months ago

Yes, the test set provided to you in the resources section has the following description:

Set of test images for local debuggin (Note : These are the same ones that are provided in the validation set)

It is basically the validation set. The server runs your code against the [hidden] test set in a protected environment.

Issues with submitting

4 months ago

Hi @shraddhaamohan,

We debugged your submission. Your output.json contains predictions as follows (one of them, for example):

  {
    "image_id": 10752,
    "category_id": 1040,
    "bbox": [
      5.0,
      29.0,
      466.0,
      427.0
    ],
    "score": 0.8330578207969666,
    "area": 176875,
    "segmentation": [
      [
        195.0,
        455.5,
        194.0,
        455.5,
        193.0,
        455.5,
        192.0,
        455.5,
        [.....]

The COCO loader is not handling the generated output properly; the issue is due to the bbox of size 4. Please try generating the bbox with a different dimension. Related issue on Gitlab.

Is GPU available?

4 months ago

Hi @kay,

This challenge now has GPUs enabled. I have requeued your submission above, which ran with a GPU. You can toggle GPU usage via aicrowd.json. All the best with the competition! :smile:

Issues with submitting

4 months ago

Hi @rohitmidha23,

Yes, gpu=False was working as expected in the meantime.

The GPU issue is resolved now and submissions with GPU are no longer in a pending state.

Issues with submitting

4 months ago

Hi @rohitmidha23,

It is stuck right now due to GPU node provisioning.
We ran out of quota for the Food challenge, and higher limits have been requested from GCP. It will start evaluating shortly after this is resolved.

Is GPU available?

4 months ago

I have raised the request with our team and we will update you on the decision.

Is GPU available?

4 months ago

No, this contest doesn't have GPUs enabled as of now. We are willing to add GPUs in case they are required here. Please let us know if your model requires them.

Submissions taking too long

4 months ago

Hi @shraddhaamohan,

You are correct, the submissions were stuck in the Food Challenge. They are going through now and you should get feedback on your submissions shortly.

I don't know how to submit

4 months ago

Hi,

You need to create a repository on Gitlab at https://gitlab.aicrowd.com/ by forking the food challenge starter kit. A new tag in your repository starting with the "submission-" prefix counts as a submission.

Please go through the complete README present in the starter kit repository, especially the "submitting" section, for more details. Let us know if you still face issues with the submission flow.
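For reference, a minimal sketch of the tag-based flow from your local clone (the tag name after the submission- prefix is your choice):

git add <files/you/changed>
git commit -m "Describe your change"
git tag submission-v0.1
git push origin master
git push origin submission-v0.1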

ImageCLEF 2020 Lifelog - LMRT

Problem: Registering for LMRT

18 days ago

@BIDAL-HCMUS, we add a challenge to your AIcrowd profile page after the first submission is made for that challenge. I hope this clarifies your doubt.

ImageCLEF 2020 Caption - Concept Detection

Approval of EUA

18 days ago

It should not take more than 1 or 2 days (sharing this based on a similar question we had on the forum in the past).

Possibility of mixed teams

About 1 month ago

cc: @Ivan_Eggel for clarification.

LifeCLEF 2020 Plant

Dataset fails to download

30 days ago

Hi @herve.goeau, @Ivan_Eggel,

I see some queries around the dataset for this CLEF challenge.
Please let us know in case AIcrowd should host the dataset on our side; we can coordinate it over email quickly.

Novartis DSAI Challenge

Evaluation Error

3 months ago

Hi @maruthi0506, I have shared the error logs on your submission.

Randomly failing image builds - what is going on?

3 months ago

Hi @bjoern.holzhauer, looking into it. The error seems to occur while apt packages are being installed; I will keep you updated here.

Unable to push file that "exceeds maximum limit"

3 months ago

Hi @ngewkokyew, @bjoern.holzhauer,

I assume the problem being mentioned here is that large files have already been committed, and now no matter what you do git push rejects them with the above error?

If this is the case, please use the git lfs migrate command, which amends your git tree and fixes this problem. You may need to force push once this is done. https://github.com/git-lfs/git-lfs/wiki/Tutorial#migrating-existing-repository-data-to-lfs

Unable to push file that "exceeds maximum limit"

3 months ago

Hi @ngewkokyew,

I remember the workspace has an older git version (I assume) which doesn't come with lfs; please install it using:

sudo apt-get update
sudo apt-get install git-lfs

Submissions get killed without any error message

3 months ago

We are using the Kubernetes cluster from the organisers, which has 8G base machines, and AKS has quite a hard eviction policy, due to which it kills code as soon as it reaches ~5.5G.

The best option might be to see if your RAM usage can be reduced by down-casting variables (a sketch follows below).
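A minimal down-casting sketch with pandas (illustrative only; keep row_id at its original dtype, as noted in the row_id threads for this challenge):

import pandas as pd

def downcast(df: pd.DataFrame) -> pd.DataFrame:
    # Shrink wide numeric dtypes to the smallest ones that fit, leaving row_id untouched
    for col in df.select_dtypes(include=["int64", "int32"]).columns:
        if col != "row_id":
            df[col] = pd.to_numeric(df[col], downcast="integer")
    for col in df.select_dtypes(include=["float64"]).columns:
        df[col] = pd.to_numeric(df[col], downcast="float")
    return df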

Meanwhile, @kelleni2, @laurashishodia, is it possible to change the underlying nodes in the AKS cluster from Standard_F4s_v2 to higher-RAM nodes? I am seeing OOM issues for multiple teams (at least 3-4).

Submissions get killed without any error message

3 months ago

Hi @bjoern.holzhauer,

Sorry for missing your query earlier.

Yes, the "Killed" refers to an OOMKill of your submission. It happens when your codebase breaches the memory limit, i.e. ~5.5G, during evaluation.

The training data on the server is the same as in the workspace, except for the row_id part which was changed, as I announced on the day of the change.

Can you share your latest submission ID which you think is failing only due to the OOM issue? I can debug it for you and share the part of the code which is causing the high memory usage.

Different submission tags but same commit tag

3 months ago

Hi @ngewkokyew,

Are you committing your code changes? The submission.sh script just creates a submission from your current git commit.

You will need to do:

[... make your changes ...]
git add <files/you/want/to/add>
git commit -m "Your commit message"
./submission.sh <solution tag>

Please let us know in case this solves your problem.

Challenge timelines updates/clarifications

3 months ago

Hi @ngewkokyew,

Please e-mail this issue to the Aridhia team (servicedesk@aridhia.com) with a description of the issue and your team name.

UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

3 months ago

Hi @carlos.cortes, @all,

The scores are now updated for all the submissions and new ranks are available on the leaderboard.

We were initially following the approach of re-evaluating one submission, providing feedback/fixes if it failed, and so on, which turned out to be quite slow. Right now, we have re-evaluated all the submissions, and the submissions which failed are being given feedback or automatic patches asynchronously.

All the best and, Merry Christmas! :smiley: :christmas_tree:

Evaluation Error

3 months ago

Sorry for the sed issue; we were trying to provide an automated patch to user codes for row_id, which went wrong. I have undone this and requeued all the submissions affected by it.

Evaluation Error

3 months ago

Hi @maruthi0506,

I can confirm the recent submissions failed due to an OOM kill, when they touched ~5.5G memory usage.

Upon debugging #31962, I found it is happening due to Series.str.get_dummies being used in the code, which is not a memory-optimised function.
Point at which the OOM is happening: https://gitlab.aicrowd.com/maruthi0506/dsai-challenge-solution/blob/master/predict.py#L279

The following demonstrates what is happening in your submission, along with alternatives you can use (variable names changed to avoid exposing any information about the features used):

(suggested way #1, decently memory efficient)
>>> something_stack_2 = pd.get_dummies(something_stack)
>>> something_stack_2.info()
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 38981 entries, (0, 0) to (8690, 2)
Columns: 4589 entries,  to <removed>
dtypes: uint8(4589)
memory usage: 170.7 MB

(suggested way #2, most memory efficient, slower than #1)
>>> something_stack_2 = pd.get_dummies(something_stack, sparse=True)
>>> something_stack_2.info()
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 38981 entries, (0, 0) to (8690, 2)
Columns: 4589 entries,  to <removed>
dtypes: Sparse[uint8, 0](4589)
memory usage: 304.8 KB

(what your submission is doing -- ~5G was available at this time)
>>> something_stack_2 = something_stack.str.get_dummies()
Killed

NOTE: The only difference between the two approaches is that Series.str.get_dummies uses "|" as the separator by default. In case you were relying on that, you can do something like below:

>>> pd.get_dummies(pd.Series(np.concatenate(something_stack.str.split('|'))))

Let us know in case the problem continues after changing this (here and its usage anywhere else in your codebase); we will be happy to debug further.

References:
[1]: https://github.com/pandas-dev/pandas/issues/19618
[2]: https://stackoverflow.com/a/31324037

Evalution error - row id mismatch

3 months ago

Hi @rachaas1,

Yes, the output file format generated by your code is wrong. prob_approval needs to be a float instead of an array of floats. I have shared the current output in the above link as a comment.

Evaluation Error

3 months ago

The solution has 8GB RAM available.

Edit: out of the 8GB, ~5.5GB is available for the evaluation code.

Evaluation Error

3 months ago

Hi, it is getting killed when run, without a traceback.

Does it have any high RAM/CPU needs?

Evaluation Error

3 months ago

Hi, it looks like git-lfs isn't installed on your system.

Can you try sudo apt-get install git-lfs? (more)

Evaluation Error

3 months ago

Yes, just the nltk_data folder needs to be present.

Yes, the content remains the same.

[Announcement] row_id can be dynamic and different to workspace file

3 months ago

Hi everyone,

Please make sure that your submissions are creating the prediction file with the correct row_id.

The row_id was not being matched strictly until the previous evaluator version, and we have now added an assert for it. Due to this, submissions have failed with "the row_ids in the generated prediction file do not match that of the ground_truth".

Your solution needs to output the row_id from the testing data during evaluation, not hardcoded / sequential values (0,1,2…). Also note that the row_id can be different & shuffled in the data present during evaluation v/s the workspace, to make sure people who just submit a predictions csv (instead of code) fail automatically.

We are trying to apply automatic patches wherever possible, but it ultimately needs to be fixed in the submitted solutions. An example patch is present here.
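A minimal sketch of carrying row_id through from the test file (the prob_approval column name is taken from another thread here and may differ for your model; the fallback paths match the workspace layout described elsewhere in this forum):

import os
import pandas as pd

test_df = pd.read_csv(os.getenv("AICROWD_TEST_DATA_PATH", "/shared_data/data/testing_data/to_be_added_in_workspace.csv"))
predictions = pd.DataFrame({
    "row_id": test_df["row_id"],   # taken from the test file, never hardcoded or regenerated
    "prob_approval": 0.5,          # placeholder prediction
})
predictions.to_csv(os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "random_prediction.csv"), index=False)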

Evaluation Error

3 months ago

Hi @maruthi0506,

Yes, the row_id values, e.g. 5, 6, 7, in the test data provided to you in the workspace can be anything, say 1000123, 1001010, 100001 (and in random order), in the test data present on the server going forward, so we know predictions are being carried out during evaluation.


To use nltk during the evaluation, you need to provide the nltk_data folder in your repository root, which can be done as follows (current working directory: your repository root):

python -c "import nltk; nltk.download('stopwords', download_dir='./nltk_data')"
python -c "import nltk; nltk.download('wordnet', download_dir='./nltk_data')"

OR (assuming you already have it downloaded in workspace)

cp -r ~/nltk_data nltk_data

Followed by uploading it to git as:

#> git lfs install   (if not already using git lfs)
#> git lfs track "nltk_data/**"
#> git add nltk_data
#> git commit [...]

Please let us know in case you still face any issue.

Submission Evaluation - Queued since 2 hours

3 months ago

Hi Shravan,

It looks like you have uploaded your prediction file, i.e. lgbm.csv, and are directly dumping it to the output path. We want your prediction model to run on the server itself, not prediction files to be submitted as the solution. Due to this, your submission has failed.

The row_id values, e.g. 5, 6, 7, in the test data provided to you can be anything, say 1000123, 1001010, 100001 (and in random order), in the test data present on the server going forward, so we know predictions are being carried out during evaluation.

Evaluation Error

3 months ago

Hi everyone, please make sure that your submissions are creating the prediction file with the correct row_id. The row_id was not being matched strictly until the previous evaluator version, and we have now added an assert for it. Due to this, submissions have failed with "the row_ids in the generated prediction file do not match that of the ground_truth".

Your solution needs to output the row_id as shared in the test data, not hardcoded / sequential values (0,1,2…). Also note that the row_id can be different in the data present during evaluation v/s the workspace, to make sure people aren't hardcoding from that file.

We are trying to apply automatic patches wherever possible, but it ultimately needs to be fixed in the submitted solutions.

Submission Evaluation - Queued since 2 hours

3 months ago

Hi everyone, please make sure that your submissions are creating the prediction file with the correct row_id. The row_id was not being matched strictly until the previous evaluator version, and we have now added an assert for it. Due to this, submissions have failed with "the row_ids in the generated prediction file do not match that of the ground_truth".

Your solution needs to output the row_id as shared in the test data, not hardcoded / sequential values (0,1,2…). Also note that the row_id can be different in the data present during evaluation v/s the workspace, to make sure people aren't hardcoding from that file.

We are trying to apply automatic patches wherever possible, but it ultimately needs to be fixed in the submitted solutions.

UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

3 months ago

Hi @carlos.cortes,

The submissions are being re-evaluated right now. Given we have a large number of submissions, i.e. 1000+ successful ones, it will take a few more hours before all of them are re-evaluated with the new dataset.

Submission Evaluation - Queued since 2 hours

3 months ago

Hi @shravankoninti,

This issue is resolved now, and your above submission has the latest feedback, i.e. against the newer dataset. Meanwhile, other submissions by you and other participants are still in the queue and being re-evaluated right now.

Submission Evaluation - Queued since 2 hours

3 months ago

Yes, I think workspaces will be available to you. Please go through the announcement made by Nick here. UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

Submission Evaluation - Queued since 2 hours

3 months ago

Hi @shravankoninti,

Yes, this condition was added with the newer version of the evaluator that is using the updated splitting announced here. I am looking into this and will keep you updated.

UPDATE / EXTENSION: DSAI Challenge: Leaderboard & Presentation deadlines

3 months ago

Hi @bzhousd,

Yes, the splitting approach is being changed.

Different results from debug vs non-debug mode

3 months ago

Yes, the debug mode uses a small subset, as described above.

No, the logs are not visible for successful submissions in our current design, but we will be glad to help fetch the logs of a successful submission (in debug mode, I assume) if this is blocking you.

Final submission selection

3 months ago

Hi, linking the existing unanswered thread for this question.

Is the scoring function F1 or logloss?

3 months ago

cc: @kelleni2, @satyakantipudi

Please confirm the policy for final scoring, i.e. will all submissions be considered, or only the one having the best score on the partial dataset?

Log Loss and F1 on Leaderboard different from "PUBLIC_F1" and "PUBLIC_LOGLOSS"

3 months ago

Hi, you are looking at a debug submission's score. http://gitlab.aicrowd.com/wangbot/dsai-challenge-solution/issues/23#note_34783

  1. The submission is reflected back on AIcrowd.com / Leaderboard with lowest possible score for given competition.

Read more about scores for debug submission here.

Test file changed

4 months ago

Hi @carlos.cortes,

Can you tell us the submission ID and where you noticed the above file? Do you mean in the workspace?

Unable to find a valid `aicrowd.json` file at the root of the repository

4 months ago

Hi @TayHaoZhe,

This is resolved now, and your shared commit ID is running as submission ID 28656.
We had a limit on the maximum number of files expected at the repository root, which was an incorrect assumption.

Example configuration to use CRAN packages in submission

4 months ago

Yes, all the dependencies need to be present in your repository.

Example configuration to use CRAN packages in submission

4 months ago

Hi, I have shared the logs in your submission now; I guess you are missing the C50 package, due to which it is failing.

Example configuration to use CRAN packages in submission

4 months ago

Can you tell us the submission ID in which you tried using it? We can look into it.

Example configuration to use CRAN packages in submission

4 months ago

Can you tell us the submission ID in which you tried using it? We can look into it.

Update of "LogLoss" Score on Leader board

4 months ago

Hi @sweenke4,

Can you share which submission ID has, in your view, performed better than the leaderboard one?

Example configuration to use CRAN packages in submission

4 months ago

Hi @TayHaoZhe,

There was a discussion about the tidyverse package here, for installing tidyverse 1.3.0. There is an issue with the r-stringi package on conda-forge, in case you are trying that one.

Example configuration to use CRAN packages in submission

4 months ago

Hi @ngewkokyew,

You can use the r-mice package from conda-forge, given it isn't present in the r channel.

To do so, in your environment.yml, make sure you have conda-forge under "channels", and add r-mice under "dependencies".

For installing it locally on your system you can use: conda install -c conda-forge r-mice
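The relevant environment.yml fragment would look roughly like this (a sketch; keep your other channels and dependencies as they are):

channels:
  - conda-forge
  [... your existing channels ...]
dependencies:
  - r-mice
  [... your existing dependencies ...]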

Submission Evaluation - Queued since 2 hours

4 months ago

There are ~12+ submissions in the queue (6 running) due to the stuck pipeline, and they are being cleared right now. Your submission will be evaluated shortly. https://www.aicrowd.com/challenges/novartis-dsai-challenge/submissions

Submission Evaluation - Queued since 2 hours

4 months ago

Hi @shravankoninti,

The pipeline was blocked due to failed submissions which didn't terminate with a non-zero exit code. We have cleared the pipeline and are adding a fix now so it doesn't happen again.

Accessing the train file and test file in the same predict.py?

4 months ago

Yes, the evaluations run on separate servers from your workspaces.

Is the scoring function F1 or logloss?

4 months ago

Hi, I will let @kelleni2 confirm this from the organisers' point of view, given it is just a configurable setting on our side.

Accessing the train file and test file in the same predict.py?

4 months ago

Hi,

The default path can be anything of your preference, e.g. your workspace-based path for testing.

During evaluation this environment variable will always be set and the default value won't be used.

Test data matrix available

4 months ago

Hi, the process is extremely useful in the longer run for multiple reasons. It guarantees the reproducibility of the results and the transparency needed. We also preserve your submissions as docker images, which guarantees the code will keep running on the current or a future dataset even if any of its dependencies is lost from the public internet.

Accessing the train file and test file in the same predict.py?

4 months ago

Hi @maruthi0506,

Your codebase needs to read this environment variable (it is an absolute path) and simply write the final predictions at that location. The example is in the starter kit already, as well as in this comment above.

Submission of only final predictions file

4 months ago

Hi, the process is extremely useful in the longer run for multiple reasons. It guarantees the reproducibility of the results and the transparency needed. We also preserve your submissions as docker images, which guarantees the code will keep running even if any of its dependencies is lost from the public internet.

Meanwhile, if you are facing any issues in setting things up, it would be good to share them with us, so they can be taken care of for your smoother participation.

cc: @mohanty @kelleni2 if you have any additional points

Submission limit per day

4 months ago

The limit is per team.

Is the scoring function F1 or logloss?

4 months ago

It is your submission having the best score on half of the test dataset.

We already have (hidden) scores against the full dataset for all of your submissions, so all submissions will be used.

How to identify if my program running successfuly?

4 months ago

Hi, please copy-paste the exact command from the error message. You have to give permission for the .conda folder, not anaconda3.

How to identify if my program running successfuly?

4 months ago

Hi, I just remembered that participants have sudo in their workspace. Can you instead try running the sudo chown... command yourself in the workspace? I believe it will fix the permission issue for you, followed by the pip=10 installation.

How to identify if my program running successfuly?

4 months ago

Hi, as I mentioned earlier, you will need to get in touch with Aridhia first on Microsoft Teams to get your conda working, i.e. to fix the permission issue above.

After this, conda install pip=10 should resolve the issue. If not, we can debug further.

How to identify if my program running successfuly?

4 months ago

Hi @shravankoninti,

Thanks for sharing the output. The pip package installation looks correct on your side.

I suspect the pip version is 18.X or similar on your side, which isn't working out well with conda. (Github Issue)

Can you share the output of pip -V and, at the same time, try installing pip version 10.X via conda install pip=10? The exports so generated should contain all of your pip packages as well.

Let me know if this resolves the issue you are facing.

How to identify if my program running successfuly?

4 months ago

Hi, it looks like your conda installation has permission issues. Can you get in touch with the Aridhia team for a permission fix, along with the above message?

Also, you haven't shared the output of the above two commands, by which I could check whether your pip install [...] worked properly or not.

How to identify if my program running successfuly?

4 months ago

Hi,

I don't see pip packages in your environment.yml. Please make sure you had activated your conda environment when you ran pip install.

The output from the commands below will be useful to learn more about the issue you are facing.

  • which pip
  • pip freeze

How to identify if my program running successfuly?

4 months ago

Hi,

Logs have been shared for both #25991 and #25990.

Please check out this FAQ on debug mode, which will speed up your debugging. Meanwhile, also try running ./run.sh on your local system (workspace) before submitting, to catch bugs without even making a submission (as submissions/day are limited).

How to identify if my program running successfuly?

4 months ago

Hi @shravankoninti,

We provide feedback for all the submissions via Gitlab issues.

To clarify, the exact flow is as follows:

  1. You make changes to your code, followed by git commit
  2. ./submission.sh <xyz> to create a new git tag and push it to the repository
  3. We create a new Gitlab issue in your repository when your submission tag is correct, i.e. the Gitlab tag has the submission- prefix (the prefix is added automatically by submission.sh for the Novartis challenge)

Gitlab Issues Page: https://gitlab.aicrowd.com/shravankoninti/dsai-challenge-solution/issues

To run locally, you can simply call ./run.sh on your local system; it mimics what will happen on the online server (except for the runtime environment). When run in the online environment, i.e. as a submission, we provide feedback by sharing logs; you can read more about it here.

Leaderboard not updated

4 months ago

Hi @shravankoninti,

This is happening because all of the above submissions you have shared have the same commit ID, i.e. 6b832bec, and the evaluation for this commit ID was already done in #25958. The subsequent tags are being considered duplicates (cached for some time, not permanently).

Please make some change in your repository, followed by git commit & submission.sh, to trigger a new submission. Let us know in case any doubt remains.

Leaderboard not updated

4 months ago

Hi @shravankoninti,

Can you share the AIcrowd submission ID (something like #2XXXX) or a link to the Gitlab issue? I can look into it and update you.

Meanwhile, please create a new post on the forum for unrelated issues.

Accessing the train file and test file in the same predict.py?

4 months ago

Sure. Can you point us to the file/link where you found the wrong path?

Is training data available during evaluation?

4 months ago

Hi all,

We are sorry that the announcement for this change didn't go through. The training data is available during evaluation, and the starter kit has been updated accordingly with a demonstrating example.

It can be accessed via the environment variable AICROWD_TRAIN_DATA_PATH, which refers to the same directory structure as /shared_data/data/training_data/, i.e. the directory in which all training-related files are present.

Example to use it:

AICROWD_TRAIN_DATA_PATH = os.getenv("AICROWD_TRAIN_DATA_PATH", "/shared_data/data/training_data/")
[...]
train_df = pd.read_csv(AICROWD_TRAIN_DATA_PATH + 'training_data_2015_split_on_outcome.csv')

Please let us know in case there is any follow up question.

Accessing the train file and test file in the same predict.py?

4 months ago

Hi @shravankoninti,

Yes, you can access all the files at the same time during evaluation.

The starter kit has all the information about the environment variables, but let me clarify the environment variables available during evaluation here as well.

  • AICROWD_TEST_DATA_PATH: Refers to the testing_phase2_release.csv file, which is used by the evaluator to judge your models in the testing phase (soon to be made public)
  • AICROWD_TRAIN_DATA_PATH: Refers to /shared_data/data/training_data/, in which all training-related files are present.
  • AICROWD_PREDICTIONS_OUTPUT_PATH: Refers to the path at which your code is expected to output the final predictions

Now in your codebase, you can simply do something as follows to load both the files:

AICROWD_TRAIN_DATA_PATH = os.getenv("AICROWD_TRAIN_DATA_PATH", "/shared_data/data/training_data/")
AICROWD_TEST_DATA_PATH = os.getenv("AICROWD_TEST_DATA_PATH", "/shared_data/data/testing_data/to_be_added_in_workspace.csv")
AICROWD_PREDICTIONS_OUTPUT_PATH = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "random_prediction.csv")


train_df = pd.read_csv(AICROWD_TRAIN_DATA_PATH + 'training_data_2015_split_on_outcome.csv')
# Do pre-processing, etc
[...]
test_df = pd.read_csv(AICROWD_TEST_DATA_PATH, index_col=0)
# Make predictions
[...]
# Submit your answer
prediction_df.to_csv(AICROWD_PREDICTIONS_OUTPUT_PATH, index=False)

I hope the example clarifies your doubt.

Original Datasets for Train and Test

4 months ago

Hi,

Consider the files present in /shared_data/data/ on the workspace as the latest version and the records as correct. The README in the starter kit contains numbers from the previous dataset version and can be wrong.

I am not sure about random_number_join.csv; @kelleni2 might be aware of it?

Leaderboard not updated

4 months ago

Hi @lcb,

I am sorry for the confusion. I see your submission ran in debug mode, under which we assign the lowest possible score on the leaderboard.

Debug Mode FAQ

How to access error logs?

4 months ago

Hi @michal-pikusa,

We share error logs on a best-effort basis directly in your failed Gitlab issues as comments, and it looks like you did get responses from our team later on in each Gitlab issue.

The submission you made in debug mode actually had open "agent-logs", i.e. the logs of your submitted code. http://gitlab.aicrowd.com/michal-pikusa/dsai-challenge/issues/5#note_30139

Unfortunately, the message said "Logs for Admin Reference" instead of "Logs for Participant Reference", which would have caused the confusion that the logs aren't available to you directly. I have updated the Gitlab issue comment content for this challenge now, so this doesn't cause any confusion going forward.


FAQ Section: Why Debug Mode

Original Datasets for Train and Test

4 months ago

Hi @shravankoninti,

Are you referring to the workspace or the evaluator?

In the workspace those files are present in /shared_data/data/, while in the evaluator you can access them using environment variables such as AICROWD_TEST_DATA_PATH.

How to use conda-forge or CRAN for packages in evaluation?

4 months ago

Hi @bjoern.holzhauer,

The issue you are facing was multi-fold, due to which it took some time on our side as well to figure out the best solution for you.

  1. The procedure you were following was correct, except for the apt.txt file. This file expects just the names of the packages you want to install (although it isn't the fix/cause of the error). So it should be something like:
~❯ cat apt.txt
libicu-dev

But the error still continued as:

> library('tidyverse')
Error: package or namespace load failed for ‘tidyverse’ in dyn.load(file, DLLpath = DLLpath, ...):
 unable to load shared object '/srv/conda/envs/notebook/lib/R/library/stringi/libs/stringi.so':
  libicui18n.so.64: cannot open shared object file: No such file or directory

We found the issue is due to the dependency chain r-tidyverse -> r-stringr -> r-stringi, and the r-stringi package in the conda-forge channel is broken.

  2. We checked with the recommended channel, i.e. r, and it had working packages by default, so this should have worked in the first place.
    - r::r-tidyverse
    - r::r-stringi
  3. [Solution; tl;dr] But I remember from our call that you needed version 1.3.0 specifically. So this is the environment.yml entry you need for 1.3.0: basically getting r-stringr from the r channel instead of the conda-forge one.
  - conda-forge::r-tidyverse==1.3.0
  - r::r-stringr
  - r::r-stringi

Sorry that you went through a long debug cycle.


[Update] I found an issue on Github in the conda-forge repository as well, and have added a comment about this issue there. https://github.com/conda-forge/r-stringi-feedstock/issues/20

Leaderboard not updated

4 months ago

Hi @lcb,

The leaderboard gets updated in real time as soon as your submission fails/succeeds, and contains your best score. The leaderboard is currently up to date as well.

Your submission #25650 has log loss 1000.0 and F1 score 0.0. This score is worse than your submission #24558, which has log loss 0.973 and F1 score 0.380, due to which the leaderboard didn't change after your submission.

Let us know in case you have any further doubt on this.

What is being evaluated during submission?

4 months ago

Hi @wangbot,

Welcome to the challenge!

As described here in the starter kit README, we use run.sh as the code entry point. You can modify it based on your requirements. https://gitlab.aicrowd.com/novartis/novartis-dsai-challenge-starter-kit#code-entrypoint

We have a debug mode which you can activate using debug: true in aicrowd.json. With this, you get complete access to the logs and can debug without needing our help for them. NOTE: the submission runs on an extremely small/partial dataset in this mode and your scores aren't reflected back to the leaderboard. https://gitlab.aicrowd.com/novartis/novartis-dsai-challenge-starter-kit#aicrowdjson

Nevertheless, the AIcrowd team [and organisers] have access to all the logs, and we do share error tracebacks and relevant logs with you as comments in the Gitlab issue on a best-effort basis, which can take from a few minutes to a few hours.

I hope this clarifies any doubt you had. All the best with the competition!

Is training data available during evaluation?

4 months ago

As an update on this request: the organising team is working on it, and training data will be available during the evaluation soon. We will make an announcement once it is available.

Team name in leaderboard

4 months ago

Hi @carlos.cortes,

Please visit the contest page and click the "Create Team" button. Once you have created a team and added your team members, your (and any team member's) existing solutions will automatically be shown under the team's name.

Let us know in case you face any issues.

Is training data available during evaluation?

4 months ago

Training data is not available during evaluation.
We only have the test dataset available as of now. It might be best to add your model, pre-computed from the /shared_data training data, to the git repository for the time being.

cc: @kelleni2, @mohanty, should we provide it on the evaluation side, or do pre-computed models work?

Is F1 score on the leaderboard bug free?

4 months ago

Hi, the bug in the leaderboard is addressed now. The change from F1 as the primary score to log loss as the primary score had created an inconsistent leaderboard.

Columns missing in the test dataset?

4 months ago

Hi, you are correct. The dataset available during evaluation was one version older and didn't contain columns like intsponsorid_p2_intclinicaltrialids/DrugKey.

This has been fixed now, and the scores of the currently successful submissions have been recalculated.

Is the scoring function F1 or logloss?

4 months ago

@yzhounvs The miscommunication has been sorted out and you were correct. The log loss is the primary score and the F1 score is secondary. The leaderboard has been fixed and the new rankings are listed accordingly.

Is the scoring function F1 or logloss?

4 months ago

@yzhounvs,

We will get the challenge page updated after communicating with the organisers, and will update here when it's done. Until then, please consider "F1 as primary and logloss as secondary score".

Is the scoring function F1 or logloss?

4 months ago

Hi @yzhounvs,

The leaderboard is based on F1 as the primary and log loss as the secondary score.

Can you point us to the communication you are referring to above, so we can fix/discuss it there?

LifeCLEF 2020 Bird - Monophone

Have the dataset files been released?

About 1 month ago

Hi @houkal,

We are working with @kahst to make the dataset available soon.
It is ready but faced an upload issue, which is being resolved.

Regards,
Shivam


Regarding your failed submissions

3 months ago

Hi @bzhousd,

Your new submissions are facing an issue due to downcasting done on the "row_id" column. I have added an automated patch for your submissions which converts it back, but it will be important to include the fix in your codebase, so it is fixed properly.

The changes needed in your codebase are as follows:

replace("EDA_simple.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"]]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c!="row_id"]')
replace("EDA_v3.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"]]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c!="row_id"]')
replace("EDA.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"]]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c!="row_id"]')
replace("EDA_v4.py", 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c not in [\'drugkey\',\'indicationkey\']]', 'int_cols = [c for c in df if df[c].dtype in ["int64", "int32"] and c not in [\'drugkey\',\'indicationkey\', \'row_id\']]') 

REAL Competition - Submission (and VMs?) stuck

4 months ago

Hi Emilio,

Sorry, the stuck submission was running into an underlying node issue multiple times and finally went through yesterday.

All the nodes in the Kubernetes cluster are stopped now and the submissions have gone through.

Cheers,
Shivam

Disentanglement challenge evaluation

8 months ago

Hi @rauf_kurbanov,

Thanks for raising this concern.
We have identified the issue on our side. The image build process got overloaded near the deadline, causing intermittent issues with the given submission (and 3 more). These have been queued again, given they were submitted before the deadline, and are currently being evaluated.

Submission Error Log for Snake Identification Competition

9 months ago

The debug mode isn't available in the Snakes challenge, due to which you can't see your own agent logs via debug: true. Meanwhile, I have shared the error log for the above submission in its GitLab issue.

Trajnet++ (A Trajectory Forecasting Challenge)

Due Date and Conference related information

About 2 months ago

Hi @student,

The deadline specifically for AMLD participants was December 31.


cc: @parth_kothari, who would be able to share more information about future deadlines.

Snake Species Identification Challenge

Is it still possible to submit for the snake competition?

2 months ago

Yes, you can submit. The submissions won't count toward the leaderboard ranking.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi @ValAn, participants,

Congratulations to all of you on your participation.

There is no update right now. The organisers will be reaching out to participants shortly with details about their travel grants, etc., and the post-challenge follow-up.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi @amapic, I started forcing the cudatoolkit=10.0 installation at the same time the above announcement was made, i.e. 14 hours ago.

Edit: I remember the conda environment issue you were facing; it isn't related to this.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi @ignasimg,

Thanks for the suggestions.
I completely agree that we need to improve our communication & the organisation of information to provide a seamless experience to participants.

We would be glad to hear back from you after the competition and look forward to your inputs.


I checked all the submissions and unfortunately multiple participants are facing the same issue, i.e. a GPU is being allocated but not used by the submission, due to a CUDA version mismatch.

To make the GPU work out of the box, we have introduced a forced installation as below in our snakes challenge evaluation process:

conda install cudatoolkit=10.0

This should fix the timing issues and we will continue monitoring all the submissions closely.


@ignasimg I have verified the disk performance and it was good. Unfortunately, on debugging, I found your submission faced the same issue, i.e. cudatoolkit=10.1, due to which it may have given the impression that the disk was the bottleneck (while it was the GPU that wasn't being utilised). The current submission should finish much sooner after the cudatoolkit version pinning.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

@ValAn No, I can confirm the timeouts haven't been changed between your previous and current runs. The only issue is that the timeout wasn't implemented properly in the past, which may be why your previous (1 week old) submission escaped the timeout.

We can absolutely check why it is taking >8 hours instead of ~10 minutes locally. Can you help me with the following:

  • Is the local run with a GPU? I can check whether your code is utilising the GPU (when allocated) or running only on the CPU for whatever reason.
  • How many images are you using locally? The server/test dataset has 32428 images, to be exact, which may be causing the longer runtime.

I think the specs for the online environment would help a bit, in case there is a significant difference from your local environment: 4 vCPUs, 16 GB memory, K80 GPU (when enabled).

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi @amapic, let me get back to you on this after confirming with the organisers.

Meanwhile, we can create new questions instead of following up on this thread; it will make QnA search simpler in the future. :sweat_smile:

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi @ValAn,

The submissions ideally should take a few hours to run, but we have set a hard timeout of 8 hours. In case your solution crosses 8 hours, it is marked as failed.

Roughly how long should your code run, according to you? Is it way off locally v/s during the evaluation phase?

Otherwise, you can enable the GPU (if you aren't doing so right now) to speed up computation and finish the evaluation under 8 hours.

Please let us know in case you require more help with debugging your submission. We can try to see which step/part of the code is taking more time if required.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

@amapic This is happening because these packages are only available for Linux distributions, due to which installing them on Windows (I assume you are using Windows) fails. This is unfortunately a current limitation of conda.

Example:
https://anaconda.org/anaconda/ncurses has only osx & linux builds, but not windows.

In such a scenario, I recommend removing the above packages from environment.yaml and continuing your conda env creation. These packages are often included as dependencies of the "main" dependencies, and conda should resolve a similar package for your system automatically.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi participants, @ValAn,

Yes, GPUs are available for snakes challenge submissions when gpu: true is set in aicrowd.json.

It needs to be 10.0 because the nodes on which your code runs currently have GKE version 1.12.x -> Nvidia driver 410.79 (based on that) -> CUDA 10.0 (based on that).

We are looking forward to having future challenges on a higher CUDA version (GKE version). But to keep consistency in results, timings, etc., we do not want to change versions mid-way through the contest.

Can I have an example of a code which is working to make a submission on gitlab?

2 months ago

Hi @gokuleloop,

Thanks for pointing it out. We have updated the last date to Jan 17, 2020 on the website as well.

Can I have an example of a code which is working to make a submission on gitlab?

3 months ago

Hi, git lfs migrate is for converting older commits to start using LFS. This is useful in case you have lots of older commits (intended/unintended) and want those files migrated to LFS going forward.

Can I have an example of a code which is working to make a submission on gitlab?

3 months ago

@amapic in case your files are larger than 30-50 MB, you will need to use git-lfs for uploading those files. Please read about it here: How to upload large files (size) to your submission

ImageCLEF 2020 DrawnUI

How long for EUA approval?

2 months ago

cc: @Ivan_Eggel for looking into it

Frequently Asked Questions

How to add SSH key to Gitlab?

2 months ago

Create and add your SSH public key

It is best practice to use Git over SSH instead of Git over HTTP. In order to use SSH, you will need to:

  1. Create an SSH key pair on your local computer.
  2. Add the key to GitLab

Creating your SSH key pair

ssh-keygen -t rsa -b 4096 -C "name@example.com"


Adding your SSH public key to GitLab

Once you have the key on your system at a location of your choice, you must manually copy the public key and add it at https://gitlab.aicrowd.com/profile/keys.

Which docker image is used for my submissions?

2 months ago

We use a custom fork of repo2docker for our image build process. Its functionality is exactly the same as upstream repo2docker, except for the base image. Our Dockerfile uses the following base image:

FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

How to use the above?
To install the forked version of repo2docker, install it via PyPI using:

pip install aicrowd-repo2docker

Example to build your repository:

git clone git@gitlab.aicrowd.com:<your-username>/<your-repository>.git
cd <your-repository>
pip install -U aicrowd-repo2docker
aicrowd-repo2docker \
        --no-run \
        --user-id 1001 \
        --user-name aicrowd \
        --image-name sample_aicrowd_build_45f36 \
        --debug .

How to specify custom runtime?

Please read about it in our FAQ question: How to specify runtime environment for your submission.

Why fork repo2docker?
It is currently not possible to use a custom base image in vanilla repo2docker; this is being tracked here.

Read more about repo2docker here.

How to enable GPU for your submission?

2 months ago

GPUs are allotted to your submission on a need basis. In case your submission uses a GPU and you want one allocated, please set gpu: true in your aicrowd.json.

Versions we support right now:

nvidia driver version: 410.79
cuda: 10.0

NOTE: The majority of challenges have GPUs enabled; this information should be available directly on the challenge page or in the starter kit. In case you are not sure whether the challenge you are participating in has GPUs, please reach out to us on Discourse.
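For reference, the flag lives alongside the other fields your starter kit already ships in aicrowd.json (a sketch; keep the rest of the file as provided by the starter kit):

{
    [... other fields from your starter kit's aicrowd.json ...]
    "gpu": true
}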

How to view logs for your submission?

4 months ago

Background

When we write code and run it, many times we make errors. The best friend in such cases is the logs, using which we can debug and fix our code quickly.

While we would like to share logs by default, it is tricky because arbitrary code is executed as part of submissions in our competitions. It may unintentionally leak part or all of the datasets. As we all understand, these datasets are confidential, and at the same time knowing the testing dataset can be used for undue advantage in a running competition.

Due to this, our default policy is to hide the logs. BUT we do keep a close eye on submissions which fail, manually verify them, and share the relevant traceback with the participants on a best-effort basis, which can take a few minutes to multiple hours. This is surely an issue and the process can't be scaled. This is the reason which led to an integration testing phase in our competitions known as "debug" mode. I will explain how to enable "debug" mode and what it does below.


Debug Mode

The debug mode, when enabled, runs your code against extremely small datasets (different from testing, a subpart of testing, a subpart of training, etc. depending on the competition), a different seed, or so on. In a nutshell, the data holds no value even when visible to participants.

Each competition has a different policy for which logs are visible and whether debug mode should exist. But in the majority of cases, we enable debug mode by default and show the logs of the user's submitted code (not the infrastructure/evaluator logs).

How to use this?

When you submit a solution, the metadata for the competition, along with other information, is present in aicrowd.json. You can specify debug: true there to enable debug mode.
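
As a rough sketch (assuming aicrowd.json sits at your repository root and is valid JSON), you can set the flag and commit it like this:

# Set "debug": true in aicrowd.json (requires python3 locally)
python3 -c "import json; c = json.load(open('aicrowd.json')); c['debug'] = True; json.dump(c, open('aicrowd.json', 'w'), indent=2)"
# Commit the change so it is part of your next submission tag
git add aicrowd.json
git commit -m "Enable debug mode"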

When enabled, the following happens with your submission:

  1. It runs against a different, smaller dataset for quicker runs.
  2. Logs are visible to you by default when the submission fails, under the heading "Logs for participants reference".
  3. The submission is reflected back on AIcrowd.com / Leaderboard with the lowest possible score for the given competition.

Still facing issues?

We keep the environment for debug mode and the actual submission exactly the same. But it is still possible that your code runs well in debug mode while it doesn't in the actual submission. In this case, we will need to revert to the traditional support method. The escalation path is: we will automatically post your logs -> you can tag the competition organisers in the Gitlab issue -> let us know in the Discourse Forum.

We wish you the best of luck in the competition!

How to upload large files (size) to your submission

4 months ago

We limit individual file size to 50M in Gitlab repositories. In case you are trying to upload a larger file, such as a trained model, the git push will fail with the following error:

remote: fatal: pack exceeds maximum allowed size
error: remote unpack failed: index-pack abnormal exit

If this is the case, you will have to use git-lfs for larger files. We recommend using git-lfs-migrate to migrate existing larger files into git-lfs.

Links:

  1. Tutorial on why and how to use git-lfs
  2. How to migrate existing large files to git-lfs using git-lfs-migrate

Quick Commands:

❯ git lfs install
Updated git hooks.
Git LFS initialized.
❯ git lfs track "*.mymodel"
Tracking "*.mymodel"
❯ git add .gitattributes
❯ git add some.mymodel
❯ git commit [...]
❯ git push origin master

Other notes:

  • In case you are a Windows user and want to avoid using the terminal, you can use the "git for windows" application instead. (contributed by @HarryWalters)
  • In case you are getting the git: 'lfs' is not a git command. See 'git --help'. error, it is possible your git version is old and doesn't come with lfs bundled. Install it using apt-get install git-lfs / brew install git-lfs, based on your OS distribution.
  • Large files have already been committed, and now no matter what you do git push rejects with the above error? If this is the case, please use the git lfs migrate command, which amends your git tree and fixes the problem. You may need to force push once this is done (see the sketch below). https://github.com/git-lfs/git-lfs/wiki/Tutorial#migrating-existing-repository-data-to-lfs
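
A rough sketch of that migration, assuming your large files match the *.mymodel pattern used in the example above:

# Rewrite existing commits so that matching files are stored in git-lfs
git lfs migrate import --include="*.mymodel"
# History has been rewritten, so a one-time force push is needed
git push --force origin master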

How to use Conda environment for your submission

4 months ago

Conda

Conda is an open source package management system and environment management system that runs on Windows, macOS and Linux.

Conda’s benefits include:

  • Providing prebuilt packages which avoid the need to deal with compilers or figuring out how to set up a specific tool.
  • Managing one-step installation of tools that are more challenging to install (such as TensorFlow or IRAF).
  • Allowing you to provide your environment to other people across different platforms, which supports the reproducibility of research workflows.
  • Allowing the use of other package management tools, such as pip, inside conda environments where a library or tools are not already packaged for conda.
  • Providing commonly used data science libraries and tools, such as R, NumPy, SciPy, and TensorFlow. These are built using optimized, hardware-specific libraries (such as Intel’s MKL or NVIDIA’s CUDA) which speed up performance without code changes.

Syncing environment with remote

Create environment on your machine

The starter kits generally contain an environment.yml file with which the sample submission has been tested. You can use the same environment as a starting point for your submission by running:

conda env create -f environment.yml --name <environment-name>

But if you are feeling adventurous, you can start with a clean state by:

conda create --name <environment-name>

Export your environment to your submission

Once you have run your code and are comfortable with your submission, you can simply export your conda environment into your repository root. You can do so via:

conda env export --no-builds | grep -v "prefix" > environment.yml

NOTES:

  1. If you are using conda on a different platform like Mac or Windows, it is possible that your environment.yml export contains packages which are not required and not available for linux, e.g. clangxx_osx, gfortran_osx and so on. This can cause your submission to fail, BUT you will have access to the build logs and can get rid of such packages manually from the above file (see the sketch after these notes).
  2. Pinning your packages is a good idea for more reproducible results.
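
For note 1, a rough sketch of stripping such packages from the exported file before committing it (the package names below are only examples; adjust the pattern to whatever your build logs complain about):

# Drop macOS-only build packages that are not available on linux
grep -vE "clangxx_osx|clang_osx|gfortran_osx" environment.yml > environment.linux.yml
mv environment.linux.yml environment.yml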

How to specify runtime environment for your submission

4 months ago

Every codebase has its own package and dependency requirements. To simplify the process for our participants, we use a forked version of repo2docker. As the name suggests, repo2docker fetches a git repository and builds a container image based on the configuration files found in the repository.

The configuration files need to be present in your repository root. The supported configuration files are the ones repo2docker understands, such as environment.yml, requirements.txt, apt.txt, postBuild and runtime.txt.

We highly recommend using conda environments (environment.yml) so you can replicate roughly the same runtime on your local machine easily. You can read this FAQ to learn how to use Conda for your submission.

Locally build docker image with above configuration files (optional)

Incorrect configuration files can cause your submission to fail, BUT you will have access to the build logs and can fix your configuration using them. You can test your configuration files locally for a faster debug cycle instead of making a submission on AIcrowd.

  1. Install the aicrowd-repo2docker package from PyPI
pip install aicrowd-repo2docker
  2. Change the current directory to your repository root and build the docker image. If this command runs successfully, your image is good to go for submission.
aicrowd-repo2docker \
        --no-run \
        --user-id 1001 \
        --user-name aicrowd \
        --image-name sample_aicrowd_image_build \
        --debug \
        .
  3. (Optional) Verify the runtime environment locally. It is possible that you end up with an environment which isn't the desired one, due to a mistake in writing the above configuration files and/or a package being overwritten by a different configuration file. The best way to check is by running the docker image built above.
docker run -it sample_aicrowd_image_build /bin/bash
#> Inside the newly opened bash session
#> pip freeze
#> [any command you would like to verify environment]

How to create Gitlab based submissions

4 months ago

  1. To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com/ either from scratch OR by forking/copying the starter kit provided for the competition (recommended).

  2. You will have to add your SSH Keys to your GitLab account by following the instructions here. If you do not have SSH Keys, you will first need to generate one.

  3. Edit aicrowd.json in the repository with your username and set the desired parameters. These parameters will be different for every challenge, so refer to the aicrowd.json file provided in your challenge starter kit.

  4. You can then create a submission by making a tag push to your repository on https://gitlab.aicrowd.com/. Any tag push (where the tag name begins with "submission-") to your private repository is considered a submission.
    Add the correct git remote, and finally submit by doing:

# Add AIcrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/<YOUR_REPOSITORY>.git
git push aicrowd master

# Create a tag for your submission and push
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

# Note : If the contents of your repository (latest commit hash) does not change,
# then pushing a new tag will **not** trigger a new evaluation.
  5. You should now be able to see the details of your submission as a Gitlab issue in your repository at: https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/<YOUR_REPOSITORY>/issues

About the Frequently Asked Questions category

4 months ago

This category contains FAQs (and tutorials) which will be helpful to users for submitting to AIcrowd and interacting with website.

AMLD 2020 - Transfer Learning for International...

Submission Error AMLD 2020

3 months ago

Hi @student,

Are the submissions you are referring to #57743 and #57745 respectively? Both of these submissions are part of the leaderboard calculation. Is it possible that you tried to view the leaderboard immediately, while we refresh the leaderboard only every ~30 seconds?


Flatland Challenge

Can not download test data

3 months ago

Hi @a2821952,

The download link is working fine.
The download links we generate have an expiry, in case you are re-using an older link. So please always download by opening the link directly from the resources page here: https://www.aicrowd.com/challenges/flatland-challenge/dataset_files

Let us know in case it doesn’t work out for you.

Evaluation process

3 months ago

Hi @RomanChernenko,

  1. Yes, the solutions are evaluated on the same test samples, but they are shuffled. https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/evaluators/service.py#L89
  2. Video generation is done on a subset of all environments and remains the same for all evaluations. Is it possible that when you opened the leaderboard, all videos didn't start playing at the same time, leading to this perception?
  3. This is the place where the Flatland library generates the score, and the N+1 thing might not be the reason. I will let @mlerik investigate & comment on it.

Evaluation time

3 months ago

@mugurelionut Please use 8 hours as the time limit; we have updated our evaluation phase to strictly enforce it going forward.

NOTE: Your submission showing 28845 seconds as the total time is still safe under the 8-hour limit, because the non-evaluation phase (roughly 5-10 minutes) is included in the timing visible above.

Total Time = Docker Image Building + Orchestration + Execution (8 hours enforced now)

Evaluation time

4 months ago

HI @vitaly_bondar,

Thanks for sharing one example submission. Yes, it will be checked for all the submissions.

Evaluation time

4 months ago

Can you share the ID/link of the submission you are referring to above?

The exact timeout from the participants' perspective is 8 hours, while we keep a margin of 2 hours (making a total of 10 hours) as overhead for docker image build, node provisioning, scheduling, etc. If any submission has run longer than this, we can look further into why the timeout wasn't respected.

Evaluation time

4 months ago

We have per step timeout of 15 minutes while the overall submission timeout is 8 hours.

Mean Reward and Mean Normalized Reward

4 months ago

You can check out the information in the flatland-rl documentation here. https://flatlandrl-docs.aicrowd.com/09_faq.html#how-is-the-score-of-a-submission-computed

The scores of your submission are computed as follows:

  1. Mean number of agents done, in other words how many agents reached their target in time.
  2. Mean reward is just the mean of the cumulative reward.
  3. If multiple participants have the same number of done agents, we compute a "normalized" reward as follows:

normalized_reward = cumulative_reward / (self.env._max_episode_steps + self.env.get_num_agents())

The mean number of agents done is the primary score value; only when it is tied do we use the "normalized" reward to determine the position on the leaderboard.

Evaluation server hardware specifications

4 months ago

Hi @RomanChernenko,

Every submission for the Flatland competition uses an AWS c4.xlarge instance. It has the following hardware configuration:

vCPU: 4 (2.9 GHz Intel Xeon E5-2666 v3 Processor)
RAM: 7.5 GB

Computation budget

4 months ago

Hi @plemian,

We were lenient with the 8-hour limit earlier due to performance issues in the flatland-rl library. The limit has been enforced again since 14 Nov (yesterday).

Submission failed: No participant could be found for this username

5 months ago

Hi @devid_farinelli,

You are correct, the first issue started happening because of the username change on the AIcrowd website. Unfortunately, the sync of AIcrowd usernames with Gitlab isn't perfect right now and your username remained the older one, causing the weird issue earlier. After the fix you did by deleting your account, things are working as expected.

Your newer submission failed because you are using library version 2.1.4 while the server is expecting 2.1.6. Please feel free to update your version.

We do post logs frequently to all failed submissions, but there can be delays. Hence, let me introduce the debug=True flag, which will be helpful to you in cases like this. You will be able to view all of your logs without @aicrowd-bot involvement and remain unblocked. https://github.com/AIcrowd/flatland-challenge-starter-kit#repository-structure

Wish you luck with the competition. :smiley:

AMLD 2020 - D'Avatar Challenge

About the submission format

3 months ago

Hi @borisov,

The submission format you shared above is valid json.
You can also download sample output json from resources section on the contest page here.

Uncategorized

Can I access the train file and test file in the same predict.py? As I see, the test file path refers to the production environment but the train path refers to a local path?

4 months ago

Hi @shravankoninti,

Yes, you can access all the files at the same time during evaluation.

The starter kit has all the information about the environment variables, but let me clarify the environment variables available during evaluation here as well.

  • AICROWD_TEST_DATA_PATH: Refers to testing_phase2_release.csv file which is used by evaluator to judge your models in testing phase (soon to be made public)
  • AICROWD_TRAIN_DATA_PATH: Refers to /shared_data/data/training_data/, in which all training-related files are present.
  • AICROWD_PREDICTIONS_OUTPUT_PATH: Refers to the path at which your code is expected to output final predictions

Now in your codebase, you can simply do something like the following to load both files:

import os
import pandas as pd

AICROWD_TRAIN_DATA_PATH = os.getenv("AICROWD_TRAIN_DATA_PATH", "/shared_data/data/training_data/")
AICROWD_TEST_DATA_PATH = os.getenv("AICROWD_TEST_DATA_PATH", "/shared_data/data/testing_data/to_be_added_in_workspace.csv")
AICROWD_PREDICTIONS_OUTPUT_PATH = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "random_prediction.csv")


train_df = pd.read_csv(AICROWD_TRAIN_DATA_PATH + 'training_data_2015_split_on_outcome.csv')
# Do pre-processing, etc
[...]
test_df = pd.read_csv(AICROWD_TEST_DATA_PATH, index_col=0)
# Make predictions
[...]
# Submit your answer
prediction_df.to_csv(AICROWD_PREDICTIONS_OUTPUT_PATH, index=False)

I hope the example clarifies your doubt.

Frequently Asked Questions

4 months ago

This topic contains FAQs (and tutorials) which will be helpful to users for submitting to AIcrowd and interacting with website.

NeurIPS 2019 - Robot open-Ended Autonomous Lear...

Evaluation failed

4 months ago

Hi @CIIRC-Incognite,

I went through the submission #25101. It failed because the underlying Kubernetes node faced an issue, causing your submission to terminate while it was in the extrinsic phase. I have requeued the submission now.

Have you ever successfully run 10M steps without resetting env?

4 months ago

Hi, we pushed multiple fixes ~12 hours ago to get rid of the network dependency in real-robot evaluations, so it doesn't affect any running submission.

We have requeued the last submission from every username and scores should be available soon.

Addition: Memory isn't the problem I see for evaluations; for context, after 2-3M steps it is 4.64G for @tky's solution and 1G for @ec_ai (sample submission).

Intrinsic phase timeout

5 months ago

Hi @kim135797531,

Please update your real_robots package to the latest one (0.1.16) and create a new submission. We have made some stability fixes which will be important to include in your Round 2 submission.

Thanks,
Shivam

How should I add parameters in round one?

5 months ago

Hi,

You will have to use git-lfs (https://dzone.com/articles/git-lfs-why-and-how-to-use ) for larger files. We recommend using git-lfs-migrate (https://manpages.debian.org/unstable/git-lfs/git-lfs-migrate.1.en.html ) to migrate the larger files into git-lfs.

NeurIPS 2019 : Disentanglement Challenge

Thanking the organizers

4 months ago

Hi @amirabdi, @imheyman.
Updates from the organisers are as follows:

Winners of stage 1 and stage 2 are the same.

  • 1st entry: DMIRLAB
  • 2nd entry: Maximilian Seitzer
  • 3rd entry: Amir Abdi

There will be no best paper awards: due to the lack of submitted reports and their lacking quality, the jury could not decide to award a brilliancy prize.

All submissions are rejected with no slots available

6 months ago

Hi,

Thanks for raising this issue.
We found a misconfiguration on our end for the number of available slots, due to which your submissions weren't getting scheduled.

This has been fixed and you should now be able to submit.

Always same evaluation results

8 months ago

Hi @komattasan,

I see the latest commit (80edc707c0a6b821cbd346ce0ec25301cc989e2f) on your repository hasn't changed in 2 weeks. All the submissions/tags have been made using the same hash as well.

Are you sure you are adding your updated train_numpy.py as a commit before resubmitting the solution, i.e. before the new tag push? If not, please do it like:

git add <file-changed>  # train_numpy.py in this example
git commit -m "Your commit message"
git push origin master
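
Once the commit is pushed, a fresh tag push is what triggers re-evaluation, for example (the tag name below is only an illustration):

git tag -am "submission-v0.2" submission-v0.2
git push origin submission-v0.2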

I have submitted, but I can not find it in the submission pages?

9 months ago

Hi, the issue has been resolved now. We are monitoring it closely and getting all the pending submissions cleared asap. Sorry for the inconvenience.

NeurIPS 2019 : MineRL Competition

Evaluation result says file too large?

4 months ago

Hi @rolanchen ,

It seems the error message wasn't good. I re-ran your submission with a minor change to print the traceback. I think this should help you debug further; the error seems to be coming from your server.init()

Traceback (most recent call last):
  File "/home/aicrowd/run.py", line 14, in <module>
    train.main()
  File "/home/aicrowd/train.py", line 159, in main
    server.init()
  File "/home/aicrowd/aiserver.py", line 52, in init
    self.kick_cbuf = SharedCircBuf(self.instance_num, {'NAN':np.zeros([2,2])}, ['NAN'])
  File "/home/aicrowd/lock_free_queue.py", line 86, in __init__
    self.read_queue = SafeQueue(queue_size)
  File "/home/aicrowd/lock_free_queue.py", line 25, in __init__
    sary = multiprocessing.sharedctypes.RawArray('b', 8 * size)
  File "/srv/conda/envs/notebook/lib/python3.7/multiprocessing/sharedctypes.py", line 61, in RawArray
    obj = _new_value(type_)
  File "/srv/conda/envs/notebook/lib/python3.7/multiprocessing/sharedctypes.py", line 41, in _new_value
    wrapper = heap.BufferWrapper(size)
  File "/srv/conda/envs/notebook/lib/python3.7/multiprocessing/heap.py", line 263, in __init__
    block = BufferWrapper._heap.malloc(size)
  File "/srv/conda/envs/notebook/lib/python3.7/multiprocessing/heap.py", line 242, in malloc
    (arena, start, stop) = self._malloc(size)
  File "/srv/conda/envs/notebook/lib/python3.7/multiprocessing/heap.py", line 134, in _malloc
    arena = Arena(length)
  File "/srv/conda/envs/notebook/lib/python3.7/multiprocessing/heap.py", line 77, in __init__
    os.ftruncate(self.fd, size)
OSError: [Errno 27] File too large

Evaluation result says file too large?

4 months ago

Also the error doesn’t seem to be size related:

On attempting to open files with sufficiently long file names, python throws IOError: [Errno 27] File too large.  This is misleading, and perhaps should be relabeled as 'File name too long.'

[1] https://bugs.python.org/issue9271

Evaluation result says file too large?

4 months ago

Hi, it is acceptable. You can keep training models in the train/ folder, up to 1000Gi in size.

Can you share submission id?

Announcement: Agent logs are now visible to participants

5 months ago

Hi everyone,

To ease up submitting code for remote evaluation, we have opened up agent-logs when a submission fails, for submissions from now on.

You can click on the agent-logs link on Gitlab issue page and view the tracebacks yourself.

Happy submitting and all the best! :tada:

Problems about the competition startkit

7 months ago

Hi,

It looks like pyro4-ns is misbehaving on your side.

  1. Can you run the ./utility/evaluation_locally.sh along with --verbose flag and share the output logs?
  2. ps aux before & after running ./utility/evaluation_locally.sh

Also, the script name is evaluation_locally.sh and not evaluate_locally.sh right now; can you git pull once as well?
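
For reference, a rough sketch of those two steps (the paths are assumed from the starter kit layout):

# 1. Run the local evaluation with verbose output and keep the logs
./utility/evaluation_locally.sh --verbose 2>&1 | tee evaluation.log
# 2. Check for the pyro4-ns process before and after the run
ps aux | grep -i pyro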

[Announcement] Submissions for Round 1 now open!

7 months ago

Hi @abiro,

The documentation link I shared above was wrong. It was pointing to an earlier version of repo2docker (0.6), which has Python 3.6 by default. We are using version 0.9.0, which has Python 3.7 by default (unless you specify otherwise in your runtime.txt).

Sorry for any confusion caused.

AIcrowd Submission Failed: 404 Tree Not Found (submission_hash: None)

7 months ago

Hi,

Sorry for the late update on this issue.
We looked on our side and couldn't replicate it as of now. Can you share the steps which led to it?

  1. You created the repository and added it as a remote, OR you pushed directly to create the repository automatically.
  2. A master push was done followed by a tag push, OR a tag push directly.

Meanwhile, as I see, a subsequent submission attempt from your side worked. Therefore, we are considering it a one-off issue as of now.

How is the "reward" on leaderboard page computed?

7 months ago

Hi,

  1. We are using “ObtainDiamond” environment.
  2. The reward currently displayed is the sum of rewards across all episodes, which is wrong. It will be updated to the average reward shortly.

[Announcement] Submissions for Round 1 now open!

7 months ago

Hi @seungjaeryanlee,

Thanks for raising the issue. We had a misconfiguration in our cluster auto-scaling. The issue has been fixed now and submissions should go through in parallel (instead of 1 at a time as was happening earlier).

[Announcement] Submissions for Round 1 now open!

7 months ago

Hi, we use repo2docker in running submissions.

By default, repo2docker will assume you are using Python 3.6 unless you include the version of Python in your configuration files. repo2docker support is best with Python 2.7, 3.5, and 3.6.

You can learn more about specifying different packages and their versions from the above link.
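
For example, a minimal sketch following repo2docker's runtime.txt convention to pin the Python version explicitly:

# Pin the Python version repo2docker should use for the image
echo "python-3.6" > runtime.txt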

EPFL ML Road Segmentation 2019

Where is the dataset?

5 months ago

Check the “resources” tab on the contest page for the available downloads.

NeurIPS 2019: Learn to Move - Walk Around

Maximal repository weight for Docker-commit

5 months ago

Hi,

You will have to use git-lfs (https://dzone.com/articles/git-lfs-why-and-how-to-use ) for larger files. We recommend using git-lfs-migrate (https://manpages.debian.org/unstable/git-lfs/git-lfs-migrate.1.en.html ) to migrate the larger files into git-lfs.

ICCV 2019: Learning-to-Drive Challenge

Unity Obstacle Tower Challenge

Submissions are stuck

9 months ago

Hi,
You can view the commit ID shown on the issue page and match it with the list of your tags.

For example:
https://gitlab.aicrowd.com/your-name/your-repo/tags
https://gitlab.aicrowd.com/your-name/your-repo/issues/1

At the same time, it sounds like a good idea to provide this information directly in the issue page content. We have added it as an enhancement and will make it available in the next release.


Submissions are stuck

9 months ago

Hi, our system has been healthy since the last update but unfortunately was not picking up newer submissions.
The issue has been addressed now, but a queue of pending submissions has built up; it will be cleared shortly and you will continue to receive updates under the Gitlab issues list.

[Solved] No module named numpy (or similar)?

9 months ago

It might happen that even after putting numpy in your requirements you get a ModuleNotFoundError or similar.

This happens because the python packages are installed via requirements.txt into the global python, while run.sh activates the base conda environment. As a fix you need to do either of these:

  1. Comment out the source activate base line in run.sh to stop using Conda's python and use the container/global python instead.
  2. Use environment.yml to install numpy with conda (see the sketch below).
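
For option 2, a minimal sketch of such an environment.yml (the Python version here is only an example; pin whatever your code needs):

cat > environment.yml <<'EOF'
channels:
  - defaults
dependencies:
  - python=3.6
  - numpy
EOF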

libXrender.so.1 error with docker

12 months ago

We use repo2docker for building the submission images. You can check how to add your requirements and packages here: https://repo2docker.readthedocs.io/en/latest/config_files.html

Git push over 50mb

12 months ago

@Petero @kenshi_abe
Thanks for reporting. We noticed a problem in our inter-container communication, due to which multiple submissions were stuck in the pending state (including yours).

The issue has been fully resolved now and re-evaluation is triggered on all the affected submissions. Please let us know if you still face any issue.

Error while pushing to Gitlab

About 1 year ago

Hi,
It seems you were able to push your submission after moving to LFS.
Please let us know if you face any other issue.

Announcement: Debug your submissions

About 1 year ago

Thanks for flagging; the bug has been fixed and the value will be checked from now on, instead of just the key.

@tky Sorry for the inconvenience caused to you.

Evalutation error : Unity environment took too long to respond

About 1 year ago

Can you share your submission link?

Evalutation error : Unity environment took too long to respond

About 1 year ago

We have rolled out a stability fix now and hope this resolves the problem completely.
Please let us know if it still pops up so we can investigate further accordingly.


@kwea123 It looks like your latest submission is hitting a different issue than the mlagents_envs.exception.UnityTimeOutException. But if it pops up again, please let us know by replying to this thread.

@banjtheman I re-evaluated the submission you shared, and it went well w.r.t. mlagents_envs.exception.UnityTimeOutException after the fix, although it failed on some other issue; you can view the details at the link now.

Announcement: Debug your submissions

About 1 year ago

@ChenKuanSun It looks like a good suggestion which we can try to incorporate into AIcrowd. /cc @mohanty

Announcement: Debug your submissions

About 1 year ago

@ChenKuanSun Yes, the debug submissions will continue to count toward the overall quota of submissions.
Given that these are treated exactly the same way as actual submissions internally, i.e. they utilise production resources, we would like to prevent any possible misuse. As a participant, you will have to choose wisely when and how many submissions you make as actual vs. debug.

Submission Failed: Evaluation Error

About 1 year ago

An announcement has been made about the debug feature now. You can read about it here.

Announcement: Debug your submissions

About 1 year ago

Hi everyone,

We noticed that multiple participants in the challenge were facing problems debugging their submissions. Debugging was even trickier because logs aren't accessible to participants directly, and they had to wait quite some time for an admin to share them.

To make this whole process simpler we are introducing a debug flag for your submissions. The debug mode has the following features:

  • The environment seeds used in debug mode are different from those used for "actual" submissions.
  • The scores for debug mode will be 0.01/0.01 and actual scores will not be reflected back.
  • The debug submission will be counted towards your submission limit/quota, to prevent misuse.
  • Your agent/code's logs will be available directly to you by default.

How to submit in debug mode?
You need to add debug: true to your aicrowd.json file (the default value is false if not present). The final file will look something like:

{
    "challenge_id" : "unity-obstacle-tower-challenge-2019",
    "grader_id": "unity-obstacle-tower-challenge-2019",
    "authors" : ["aicrowd-user"],
    "description" : "Random Obstacle Tower agent",
    "gpu": false,
    "debug": true
}

What is the difference?

On the submission status page, notice the new field "Logs for participant reference", which will be visible for debug submissions.

We hope this is a helpful addition for all participants. All the best! :aicrowd:

Submission Failed: Evaluation Error

About 1 year ago

Hi @zhenghognzhi,

Shared in your issue, although I guess you already figured out the error in latest submission. :smiley:

Submission Failed: Evaluation Error

About 1 year ago

Hi @immik @ChenKuanSun,

We have updated the submissions status page with relevant logs for your submissions. You can browse to your Gitlab repo’s issues to view them.

Is there a max repo size?

About 1 year ago

The issue has been fixed, and you can retry the submissions now.
We had a docker image size limitation of 10G (at intermediate stages of the docker build as well), which has now been increased to 20G.
