@gaurav_singhal We updated the dataset file names and the extracted directory names; the zip files should now extract to the correct validation directories. We also updated the Colab notebook with the new dataset paths.
Thanks for your help!
Thanks for pointing these out.
We will update the file names to match the names shown in the listing.
Regarding the Colab notebook, are you using the notebook that was released for round 1? We have an updated notebook for round 2. The updated notebook should have all these fixes along with the additional code needed to run the post_purchase_training_phase. Can you try the new notebook if you haven't already?
aicrowd/learn-to-race:base is not available publicly and only works when you submit your code to AIcrowd. The base image is based on Ubuntu and includes the l2r repo along with the simulator. It is not released publicly because it contains the data needed for the VegasNorthRoad circuit (the private track to be used in the next round).
You can specify any packages that need to be installed using apt-get in the Dockerfile. The image has conda pre-installed, so you can also specify any conda commands that you want to run. Finally, you can specify the pip packages in the requirements.txt.
You can update the Dockerfile in your repo as you see fit, except for the base image part (the first line of the Dockerfile); a rough sketch is shown below. In case you face any issues setting up the runtime dependencies during the evaluation, please feel free to post on the forums or reach out to us.
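As a rough sketch (the package names and the conda/pip steps below are hypothetical examples, not part of the starter kit), the customizations could look like this:

```dockerfile
# Base image: this first line must stay unchanged
FROM aicrowd/learn-to-race:base

# System packages via apt-get (example packages; adjust as needed)
RUN apt-get update && apt-get install -y --no-install-recommends \
        ffmpeg \
    && rm -rf /var/lib/apt/lists/*

# The image ships with conda pre-installed, so conda commands work too
# (assumes conda is on PATH in the base image)
RUN conda install -y numpy

# Pip packages listed in your requirements.txt
COPY requirements.txt .
RUN pip install -r requirements.txt
```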
Is there some default CUDA installed? Or do we have to modify this file to get CUDA?
We do not have CUDA installed by default. You can install the version that you need by uncommenting the lines in the Dockerfile as mentioned in the above reply. Please feel free to reach out to us or the organizers if you need any further help with setting this up.
I guess the torch version in requirements.txt should also be changed to a CUDA build.
Yes, you can update the requirements.txt based on your needs.
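For illustration (the version and CUDA tag below are examples, not a recommendation; pick the build matching the CUDA version you install in the Dockerfile), a CUDA-enabled torch can be pinned in requirements.txt like so:

```
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.9.0+cu111
```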
Also, is there some info on the server hardware (GPU, number of cores)?
The evaluations run on AWS g4dn.xlarge nodes. They have 4 vCPUs, 16 GB RAM, and 1x NVIDIA Tesla T4 GPU.
You can choose any framework. The easiest way to do this is to specify the framework/library in your requirements.txt. If you need a specific CUDA version, you can uncomment one of the CUDA lines in the Dockerfile based on your requirement. You can also edit the Dockerfile as you see fit. However, the base image needs to remain aicrowd/learn-to-race:base.
The way we register lap-wise metrics had a bug that caused the evaluator to skip registering a few metrics if the lap was completed by the end of the first episode. This issue is fixed and all the affected submissions were re-evaluated. Please let us know if you are still facing this issue or if any of your submissions did not get re-evaluated.
The base Docker image mentioned in the Dockerfile is a protected image that contains a few evaluation tracks. Unfortunately, you can't use it to build the image locally; it only works on the evaluation servers.
The simulator needs an NVIDIA graphics card to run, and the simulator binaries are built for Linux. So running it on a Mac is not feasible at this point. However, as @siddha_ganju mentioned, you can start an EC2 instance on AWS (a g4dn.xlarge instance) and train your agents there.
The issue should be fixed now. We are re-evaluating the affected submissions.
Is that without considering time spent in the agent?
Yes, it is without considering the time spent in the agent.
How much of that is network latency? I.e., if we have an agent capable of performing faster than that, does all of the agent compute time get hidden under the latency?
This includes the network latency, processing delay and everything that is needed to send a request and get the response. Any latency/compute time on the agent will be added to this.
For example, if you have something like
```python
import aicrowd_gym
from tqdm import trange

env = aicrowd_gym.make("NetHackChallenge-v0")
env.reset()
for _ in trange(1000000):
    _, _, done, _ = env.step(1)
    if done:
        env.reset()
```
This should give you a throughput of 1500-2000 iterations per second during the evaluation.
The evaluations run on AWS EC2 instances. The resources available are as follows:

|GPU enabled flag in `aicrowd.json`|vCPUs|Memory|GPU|AWS instance type|
|---|---|---|---|---|
|true|4|16 GB|NVIDIA T4|g4dn.xlarge|
During the evaluations, you will receive a proxy NetHack env object instead of the actual environment. This proxy object talks to the actual NetHack env over the network and returns the values as needed. We do this to prevent participants from tampering with the env. This also adds an overhead. Based on our benchmarks, a single env should roughly give a throughput of 1500-2000 steps/second. Using something like a batched env increases the throughput.
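As a rough sketch (this thread-pool loop is an illustration, not the official batched-env API; it assumes the proxy env releases the GIL while waiting on the network), stepping several envs concurrently can overlap the per-step round trips:

```python
from concurrent.futures import ThreadPoolExecutor

import aicrowd_gym

NUM_ENVS = 4  # hypothetical batch size
envs = [aicrowd_gym.make("NetHackChallenge-v0") for _ in range(NUM_ENVS)]
for env in envs:
    env.reset()

def step_env(env):
    # Placeholder policy: always send action 1, reset on episode end
    _, _, done, _ = env.step(1)
    if done:
        env.reset()

with ThreadPoolExecutor(max_workers=NUM_ENVS) as pool:
    for _ in range(1000):
        # One batched "tick": step every env concurrently so the
        # network latencies overlap instead of adding up
        list(pool.map(step_env, envs))
```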
Hope this helps, and please feel free to reach out if you need anything else.
Thanks for sharing this with us.
- Allow access to observation/action space variables that exist in normal Gym (e.g. `spaces` for Dict obs)
We will soon update the evaluator to allow access to a few more attributes. At the moment the following attributes/methods can be accessed.
On the action space:
("contains", "dtype", "sample", "shape", "n")

On the observation space:
("bounded_above", "bounded_below", "contains", "dtype", "high", "is_bounded", "low", "sample", "shape", "n")
- This is perhaps a result of bad programming on my end, but make sure variables that do not exist in the environment raise appropriate exceptions. For example, I checked for an attribute `something` that was not a variable in the environment, but with aicrowd-gym it was set to `None`. This caused `hasattr` to return True and subsequently things to fail.
I believe this happens with gym.Env objects as well. For example, if I do

```python
import gym

env = gym.make("CartPole-v0")
setattr(env, "something", "some value")
```
it doesn't return any error. If possible, can you share a simple example of the expected behaviour?
Note: During the evaluation, the env object you have access to is a proxy object and not the actual gym.Env instance. So setting attributes from your code will not set any attributes on the actual env object. If you have a specific use case for doing this, please feel free to reach out to us and we can try to accommodate it.

Edit: Sorry, misread the last part. We will try to add read access to all the env attributes in our future releases.
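To illustrate the point above (a hypothetical snippet; `env` stands for the evaluation-time proxy):

```python
# `env` is the evaluation-time proxy, not the real gym.Env.
# This stores the attribute on the local proxy object only;
# the actual NetHack env on the evaluator is unaffected.
env.something = "some value"
```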
You probably want to post this in https://discourse.aicrowd.com/c/neurips-2021-nethack-challenge/