0 Followers
0 Following
rolanchen

Location

CN

Badges: 2 · 1 · 1

Activity

[Contribution heatmap: months May through May on one axis, weekday rows Mon/Wed/Fri on the other; per-day data not recoverable]


Challenges Entered

Sample-efficient reinforcement learning in Minecraft

Latest submissions

failed 25557
failed 25556
failed 25407
Participant Rating

NeurIPS 2019 : MineRL Competition

Any way to completely terminate a submission?

Over 4 years ago

We have submitted several incorrect versions, but it seems that closing the issue won't stop the processing of a submission. Since the maximum number of parallel submissions is 3, is there any way to terminate them? Otherwise we have no time to try the correct ones.

Evaluation result says file too large?

Over 4 years ago

Thanks for the log! It is really strange; we didn't encounter any problem with this part locally. We will keep debugging it anyway. Thanks again.


Evaluation result says file too large?

Over 4 years ago

We did consider this, yet the only "long file name" we touch in the code is the record directory name of the human data, and that works well on our local machine too.

Evaluation result says file too large?

Over 4 years ago

AIcrowd Submission Received #25266

Please find it above. Thanks.

Evaluation result says file too large?

Over 4 years ago

2019-11-25T14:04:59.69570723Z [Errno 27] File too large
2019-11-25T14:05:00.438503169Z Ending traning phase

This is the evaluation response, but our uploaded package is no more than 4 MB in total. The checkpoint model file will be larger than 30 MB; is that not acceptable?
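
One way to narrow down an error like `[Errno 27] File too large` is to audit the submission tree for oversized files before packaging. A minimal sketch (the helper name and the 15 MB default cutoff are my own, chosen only for illustration, not part of the evaluator):

```python
import os

def find_large_files(root, limit_bytes=15 * 1024 * 1024):
    """Walk `root` and return (path, size) pairs for files over `limit_bytes`,
    largest first, so the offending file is easy to spot."""
    large = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit_bytes:
                large.append((path, size))
    return sorted(large, key=lambda item: -item[1])
```

Running this on both the upload directory and any directory the training code writes checkpoints to would show whether the 30 MB model file is what trips the limit.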

About the rule on pre-trained model

Over 4 years ago

Hi all,

According to the rules, any file larger than 15 MB will be removed to prevent the use of pretrained models. What if my pretrained model is much smaller, say only 1.5 MB? Is that acceptable?

The evaluation result does not match my local testing

Over 4 years ago

Too bad.
Check my reply above and see if it helps.

The evaluation result does not match my local testing

Over 4 years ago

As far as I can tell, the 4-steps-1-episode result seems to happen whenever something goes wrong inside the main script, while the system won't report any of it (which is frustrating). So all I can do is delete parts of my code one by one and resubmit again and again until I find the critical part…

The evaluation result does not match my local testing

Over 4 years ago

Hey, I just solved mine. The issue was the TensorFlow version: by default the system installs TF 2.0, while my code runs on TF 1.13, and unexpectedly the system did not report any error about it. HTH.
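
One cheap guard against this kind of silent version mismatch is to assert the framework's major version at the top of the entry script, so the evaluation log fails loudly instead of running on the wrong stack. A minimal sketch (the helper name is mine; you would pass it e.g. `tf.__version__`):

```python
def assert_major_version(version, expected_major):
    """Raise early if an installed library's major version differs from the
    one the code was written against, e.g.
    assert_major_version(tf.__version__, 1)."""
    major = int(version.split(".")[0])
    if major != expected_major:
        raise RuntimeError(
            f"Expected major version {expected_major}, found {version}"
        )
```

Called as the first line of the training script, this turns a silent 4-step run into an explicit error in the submission log.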

The evaluation result does not match my local testing

Over 4 years ago

Hi all,

I have just submitted a pretrained version for evaluation, yet the result indicates that it ran only 1 episode with 4 steps; however, when I ran evaluate_local.sh on my own machine, the output was 5 episodes, each with at least 5k steps. The code files are completely identical. Does anyone have any ideas about this situation? Thanks.

About the pip install format

Over 4 years ago

Thanks for the clarification!

About the pip install format

Over 4 years ago

Just want to make sure: if I want to install a specific version of some package, say tensorflow 1.12, how should I format it in my requirements.txt?
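
For reference, pip's requirements-file syntax pins an exact version with `==`, so a line like the following (using the tensorflow 1.12 example from the question; the exact patch version is illustrative) should do it:

```
tensorflow==1.12.0
```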

About the submission content

Over 4 years ago

Hi,

There are two things I want to make clear about the submission rule, both of which are about Round 1:

  1. In the training phase, we are required to submit the trained model for evaluation, so do we also need to submit a test.py for inference? I didn't find any document describing the submitted model's input/output interface.
  2. In the evaluation phase, we are required to submit the code for retraining. Should our code structure be EXACTLY like the one in the StartKit? Is it OK to include more scripts, in case my training logic is too long to fit in train.py alone?

Thanks.

email: cjwfy8871@126.com discourse name: rolanchen