The issue should be fixed now. We are re-evaluating the affected submissions.
Is that without considering time spent in the agent?
Yes, it is without considering the time spent in the agent.
How much of that is network latency? I.e., if we have an agent capable of performing faster than that, does all of the agent compute time get hidden under the latency?
This includes the network latency, processing delay, and everything needed to send a request and receive the response. Any latency/compute time in the agent is added on top of this.
For example, if you have something like
```python
import aicrowd_gym
from tqdm import trange

env = aicrowd_gym.make("NetHackChallenge-v0")
env.reset()
for _ in trange(1000000):
    _, _, done, _ = env.step(1)
    if done:
        env.reset()
```
This should give you a throughput of 1500-2000 iterations per second during the evaluation.
The evaluations run on AWS EC2 instances. The resources available are as follows:
| GPU enabled flag | vCPUs | Memory | GPU | AWS instance type |
|---|---|---|---|---|
| | 4 | 16 GB | NVIDIA T4 | |
During the evaluations, you will receive a proxy NetHack env object instead of the actual environment. This proxy object talks to the actual NetHack env over the network and returns the values as needed. We do this to prevent participants from tampering with the env. This also adds an overhead. Based on our benchmarks, a single env should roughly give a throughput of 1500-2000 steps/second. Using something like a batched env increases the throughput.
Hope this helps and please feel free to reach out to us for any help.
Thanks for sharing this with us.
- Allow access to observation/action space variables that exist in normal Gym (e.g. `spaces` for Dict obs)
We will soon update the evaluator to allow access to a few more attributes. At the moment the following attributes/methods can be accessed.
On the action space:

```python
("contains", "dtype", "sample", "shape", "n")
```
On the observation space:

```python
(
    "bounded_above",
    "bounded_below",
    "contains",
    "dtype",
    "high",
    "is_bounded",
    "low",
    "sample",
    "shape",
    "n",
)
```
- This is perhaps a result of bad programming on my end, but make sure variables that do not exist in the environment raise appropriate exceptions. For example, I checked for `something`; it was not a variable in the environment, but with aicrowd-gym it was set to `None`. This caused `hasattr` to return `True` and subsequently things to fail.
I believe this happens with `gym.Env` objects as well. For example, if I do

```python
import gym

env = gym.make("CartPole-v0")
setattr(env, "something", "some value")
```
it doesn’t return any error. If possible, can you share a simple example of the expected behaviour?
Note: During the evaluation, the env object you have access to is a proxy object and not the actual `gym.Env` instance. So setting attributes from your code will not set any attributes on the actual env object. If you have a specific use case for doing this, please feel free to reach out to us and we can try to accommodate it.
Edit: Sorry, misread the last part. We will try to add read access to all the env attributes in our future releases.
It looks like one of your apt repositories is giving the error. Kitware also made `cmake` installable using `pip`. You can run the following to install the latest version of `cmake` from Kitware:

```shell
pip install -U cmake
```
The evaluation ran out of memory. Can you check how much RAM is being used when you run this locally?
Here is a screenshot of the RAM usage for reference.
You probably want to post this in https://discourse.aicrowd.com/c/neurips-2021-nethack-challenge/
You need to use Git LFS to push the large files in your repository (your saved models, etc.). You can find more information on how to use LFS here
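As a rough sketch of a typical Git LFS workflow (the `*.pt` pattern, `models/my_model.pt` path, and `aicrowd`/`master` remote and branch names below are examples only, not the required layout):

```shell
git lfs install                   # one-time setup per machine
git lfs track "*.pt"              # track model checkpoints via LFS (example pattern)
git add .gitattributes            # commit the LFS tracking config
git add models/my_model.pt        # hypothetical large model file
git commit -m "Add model via Git LFS"
git push aicrowd master           # example remote/branch names
```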
It looks like the `# Prediction phase` heading is missing in your notebook. Please add this heading after your training code block has ended. We use these headings to figure out which part of the notebook needs to be executed during evaluation.
For more information, please refer to this discussion.
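For illustration, a notebook laid out with such headings might look like the sketch below (the `# Training` heading and the cell contents are placeholder assumptions; only `# Prediction phase` is named above):

```
# Training
<cells that train and save your model>

# Prediction phase
<cells that load the saved model and generate predictions>
```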
Can you share any output that you might have got after running the command? Do you see something like

```
[NbConvertApp] Executing notebook with kernel: python
```

when you run the command?
Can you try this?