Hey. There is no API to get this information in MineRL environments right now, so no. That info is just for aicrowd evaluation.
Hey. Unfortunately the tight schedule requested by the NeurIPS competition board makes it difficult to extend deadlines further. We will likely only extend the deadline if more technical issues crop up that require extra days to handle.
Hey! I assume this is the number of participants invited to the workshop to present their submissions etc.? Ideally we would like to invite all Round 2 finalists + some from the Intro track, depending on how the NeurIPS workshop is structured! We will let you know when we have the exact numbers.
Prizes are to be determined (but they are coming!)
The deadline was initially set to 15th October, but the rules indeed make it sound different.
I will let Nicholay (our rules guy) answer the question for final confirmation
Hey! Sorry for the delay (we are more active on the Discord server).
In the Intro track, you are free to use the data in any way you like. In the Research track, however, you are not allowed to do such fine-grained selection (unless it is done by a trained neural network).
Ah! I assume you refer to the figure here? https://github.com/minerllabs/competition_submission_template (Edit: I have removed said figure from the GitHub page. It had the text "max. 20 submissions".)
That information is wrong; for now there is an unlimited number of submissions (but with a daily cap). Thanks for pointing this out tho! I will fix it.
Failed submissions are counted as submissions as well, as long as they started the build stage (this, too, takes quite a bit of resources to run).
Hey. The submission quota refreshes daily, but in a pinch @Shivam or someone else from AICrowd can requeue your submission if it fails in the Docker build phase (sorry, I did not notice your message on GitLab until now).
Nope, no per-step limitations. I would still recommend checking that the MineRL environment works when you have very long delays between step calls (it should not matter, but it is easier to verify locally than on the evaluation server).
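Something along these lines should be enough to verify it locally (the env name is just an example; any MineRL env works):

```python
import time

import gym
import minerl  # importing minerl registers the environments with Gym

env = gym.make("MineRLObtainDiamond-v0")  # example env name
env.reset()
for _ in range(5):
    time.sleep(120)  # simulate a long pause between decisions
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()
```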
Hey. The Round 2 systems are Azure NC6 instances. There is no per-step limit. The only limits are the number of samples you can read from the live environments (8M) and the wall-clock time (4 days).
Hey. I was not able to reproduce this error. Make sure you import `minerl` as well. Also try upgrading your MineRL installation with
`pip install --upgrade minerl`
If things still fail, please share your system details/setup so I can look into this further.
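For reference, the import order I mean (env name just an example):

```python
import minerl  # importing minerl is what registers the MineRL envs with Gym
import gym

env = gym.make("MineRLObtainDiamond-v0")  # fails with an unknown-env error if minerl was not imported
```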
Hey! Yes, the dataset is the same (+ the survival dataset). While the dataset is not as large as something used by, say, StarCraft II, it is still quite adequate for kickstarting RL algorithms. See the solutions from previous years, which use offline-RL and imitation-learning techniques. You can get an average score of roughly 10 with IL alone (and probably better!), and with the right combination of RL you can get closer to 100 (and hopefully higher this year!).
This baseline for the research track uses behavioural cloning and the IronPickaxe dataset, and can be trained in 30-60 min on a Titan Xp machine (your 2070 is probably faster). Of course, this is a baseline solution and you can tune the parameters for longer training, but you should be able to train and evaluate your agents on an RTX 2070 machine within a day or two, depending on what kind of setup you use. For comparison, I used a single GTX 1080 machine last year, where I tested behavioural cloning and was able to test a single setup within 24h.
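If it helps, the rough shape of the data-loading side is sketched below. This is only an illustration; the dataset/env name and the exact `minerl.data` signatures should be checked against the docs of the MineRL version you have installed.

```python
import minerl

# Assumes the demonstration data has already been downloaded
# (e.g. via minerl.data.download) into ./data; env name is illustrative.
data = minerl.data.make("MineRLObtainIronPickaxe-v0", data_dir="data")

for obs, action, reward, next_obs, done in data.batch_iter(
        batch_size=32, seq_len=1, num_epochs=1):
    # obs["pov"] holds the image frames; a behavioural-cloning step
    # would predict `action` from `obs` here and minimise the loss.
    pass
```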
There are no further interfaces for that per se; however, the MineRL code is all open on GitHub, so with enough digging you might be able to modify it into a shape that works for you.
Pinging @BrandonHoughton who has better knowledge of the current situation of MineRL.
Hey! Sorry for the delay!
The baseline code was tested to work at release, but sadly it seems like some of the dependencies have been updated since and that breaks this code. My wild guess is that the `imitation` repository has changed. Your best bet is to check out an older version of `imitation` and install that (a manual process…). An easier first step would be to double-check that you have the library versions specified by `environment.yml`, but I assume this is already the case.
Thanks for the reply!
Edit: Sorry, misread the last part. We will try to get read access to all the env attributes into our future releases.
But for clarity, let me lay out the issue I was having.
My code has the following check for environments:
`is_multiagent = hasattr(env, "num_agents") and env.num_agents > 1`
The environment does not have a `num_agents` attribute. On normal Gym this check correctly results in `is_multiagent = False`. However, on aicrowd-gym it raises `TypeError: '>' not supported between instances of 'NoneType' and 'int'`.
This is because `hasattr` returns `True` when it should return `False`: aicrowd-gym seems to be setting the missing attribute to `None`, while in reality it should be unset.
I managed to work around these issues for now.
Here are some comments the aicrowd-gym devs might want to look at (overall it works quite well! I bet it makes handling your eval-server secrets much easier):
- Allow access to observation/action space variables that exist in normal Gym (e.g. `spaces` for Dict obs).
- If possible, wrap your underlying obs/action spaces to use the original Gym's obs/action space classes (e.g. if you have a Discrete action space, it should appear as an instance of `gym.spaces.Discrete` instead of something else). Otherwise checks like `isinstance(env.action_space, gym.spaces.Discrete)` fail.
- This is perhaps a result of bad programming on my end, but make sure variables that do not exist in the environment raise appropriate exceptions. For example, I did `hasattr(env, "something")` where `something` was not a variable in the environment, but with aicrowd-gym it was set to `None`. This caused `hasattr` to return True and subsequently things failing (see the sketch after this list).
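For reference, the kind of check I mean; this is only a sketch of one way to sidestep the problem, treating a missing attribute and an attribute forced to `None` the same way:

```python
def is_multiagent(env) -> bool:
    # Works both on plain Gym (attribute missing entirely) and on
    # aicrowd-gym (attribute present but set to None).
    num_agents = getattr(env, "num_agents", None)
    return isinstance(num_agents, int) and num_agents > 1
```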
I am trying to iterate over the spaces the Dict space has (see submission #152436), but aicrowd-gym does not seem to support iteration, whereas the original Gym Dict space supports it. Would it be possible to have this feature in aicrowd-gym as well?
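To be concrete, this is the kind of iteration I mean; on a plain Gym Dict space it works fine (the keys below are just illustrative, not the actual MineRL ones):

```python
import gym

obs_space = gym.spaces.Dict({
    "pov": gym.spaces.Box(low=0, high=255, shape=(64, 64, 3)),
    "vector": gym.spaces.Box(low=-1.0, high=1.0, shape=(64,)),
})

# Iterate over the sub-spaces of the Dict space.
for name, subspace in obs_space.spaces.items():
    print(name, subspace)
```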
On a similar note, is there public code available for the aicrowd-gym version used on the servers, so one can debug these errors locally? The PyPI version seems to be just a wrapper around normal Gym.
Our submission uses Pillow, but it seems like the version on the evaluation server does not match what we have set in `requirements.txt` (see e.g. submission #152385). The build log states that 8.2.0 is installed, but the eval runs throw an exception that suggests an older version is used (e.g. `AttributeError: 'FreeTypeFont' object has no attribute 'getbbox'`).
A local build and run of the Docker image works as expected. Is there something very sneaky happening, or does the evaluation server touch the Pillow library in some way?
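For debugging, we can add something like this at the start of the run script to log which Pillow actually gets imported at runtime:

```python
import PIL

# Print the Pillow version and location the evaluation run actually picks up.
print("Pillow", PIL.__version__, "from", PIL.__file__)
```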
(PS: the code does not follow the normal submission template, but that should not be the reason for this error.)