MasterScrat

Name: Florian Laurent
Location: CH



Challenges Entered

Sample-efficient reinforcement learning in Minecraft

Latest submissions

graded 25410
graded 25409
failed 25399

Multi Agent Reinforcement Learning on Trains.

Latest submissions

failed 67801
graded 67787
failed 67786
Gold: 1 (gold-challenge-end, May 16, 2020)
Silver: 1 (silver-challenge-end, May 16, 2020)
Bronze: 0


NeurIPS 2020 : Flatland Challenge

Start of the competition

3 days ago

Hello @RomanChernenko, you didn’t waste any time :smiley:

The competition will start in the next few days, so stay tuned!

Cheers,
Florian

Flatland Challenge

Publishing the Solutions

18 days ago

Hello @fabianpieroth,

A recording of the presentations from top participants at the AMLD conference has recently been released: https://www.youtube.com/watch?v=rGzXsOC7qXg

The winning submissions as well as exciting news about the future of this competition will be released this month!

Cheers,
Florian

NeurIPS 2019 : MineRL Competition

Problems running in docker

8 months ago

I want to run my training code on AWS so I can make sure everything runs fine from start to finish on a machine slower than the official one. I am using a p2.xlarge instance with the “Deep Learning AMI (Ubuntu 16.04)”.

I am trying to run the code from the repo competition_submission_starter_template, without adding my own code for now. When I run ./utility/docker_train_locally.sh, I am faced with this error:

2019-10-22 02:01:29 ip-172-30-0-174 minerl.env.malmo.instance.868e96[39] INFO Minecraft process ready
2019-10-22 02:01:29 ip-172-30-0-174 minerl.env.malmo[39] INFO Logging output of Minecraft to ./logs/mc_1.log
2019-10-22 02:01:29 ip-172-30-0-174 root[62] INFO Progress : 1
2019-10-22 02:01:29 ip-172-30-0-174 crowdai_api.events[62] DEBUG Registering crowdAI API Event : CROWDAI_EVENT_INFO register_progress {'event_type': 'minerl_challenge:register_progress', 'training_progress': 1} # with_oracle? : False
Traceback (most recent call last):
  File "run.py", line 13, in <module>
    train.main()
  File "/home/aicrowd/train.py", line 75, in main
    env.close()
  File "/srv/conda/envs/notebook/lib/python3.7/site-packages/gym/core.py", line 236, in close
    return self.env.close()
  File "/srv/conda/envs/notebook/lib/python3.7/site-packages/minerl/env/core.py", line 627, in close
    if self.instance and self.instance.running:
  File "/srv/conda/envs/notebook/lib/python3.7/site-packages/Pyro4/core.py", line 280, in __getattr__
    raise AttributeError("remote object '%s' has no exposed attribute or method '%s'" % (self._pyroUri, name))
AttributeError: remote object 'PYRO:obj_3ec8abe8c48c4b4e9dd7f7b1ac4706b1@localhost:33872' has no exposed attribute or method 'running'
Exception ignored in: <function Proxy.__del__ at 0x7f4585d4f158>
Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.7/site-packages/Pyro4/core.py", line 266, in __del__
  File "/srv/conda/envs/notebook/lib/python3.7/site-packages/Pyro4/core.py", line 400, in _pyroRelease
  File "/srv/conda/envs/notebook/lib/python3.7/logging/__init__.py", line 1370, in debug
  File "/srv/conda/envs/notebook/lib/python3.7/logging/__init__.py", line 1626, in isEnabledFor
TypeError: 'NoneType' object is not callable
2019-10-22 02:01:30 ip-172-30-0-174 minerl.env.malmo.instance.868e96[39] DEBUG [02:01:30] [EnvServerSocketHandler/INFO]: Java has been asked to exit (code 0) by net.minecraftforge.fml.common.FMLCommonHandler.exitJava(FMLCommonHandler.java:659).

Where can I find more details? If I run ./utility/docker_run.sh --no-build to check inside the container, I see no trace of the logs.

Also, how would the trained model be saved in this situation? Is the train folder mounted as a volume so that the model would be persisted outside of the container?

Finally, the expression $(PWD) in the bash files throws an error for me.
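On the $(PWD) point: $( … ) is command substitution, so $(PWD) tries to execute a program named PWD, which normally does not exist (hence an error like "PWD: command not found"). The shell variable is $PWD (or ${PWD}), and the equivalent command is the lowercase pwd builtin. A quick illustration, assuming a POSIX shell (variable names here are my own, not from the starter template):

```shell
#!/bin/sh
# $(PWD) would try to *run* a command named PWD -> "PWD: command not found".
# Correct alternatives:
dir_from_var="$PWD"      # expand the shell variable
dir_from_cmd="$(pwd)"    # command substitution on the pwd builtin
echo "$dir_from_var"
echo "$dir_from_cmd"
```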

Partially rendered env in MineRLObtainDiamondDense-v0

8 months ago

It just happened again; it seems to be related to large bodies of water.

Partially rendered env in MineRLObtainDiamondDense-v0

8 months ago

I’ve just witnessed my agent interacting in an environment that looked partially rendered, i.e. large pieces of the terrain appeared transparent:

This is in MineRLObtainDiamondDense-v0. I am using minerl==0.2.7.

mc_1.log output around these times:

[10:51:00] [Client thread/INFO]: [CHAT] §l804...
[10:51:00] [Client thread/INFO]: [CHAT] §l803...
[10:51:00] [Client thread/ERROR]: Null returned as 'hitResult', this shouldn't happen!
[10:51:00] [Client thread/INFO]: [CHAT] §l802...
[10:51:01] [Client thread/INFO]: [CHAT] §l801...
[10:51:01] [Client thread/INFO]: [CHAT] §l800...

I don’t see anything else suspicious in this log file. The following episodes seem to be running correctly.

Can't train in MineRLObtainIronPickaxeDense-v0 since 0.2.7

8 months ago

Great, thanks for the swift fix! :+1:

Can't train in MineRLObtainIronPickaxeDense-v0 since 0.2.7

8 months ago

I just updated to 0.2.7. When trying to train in MineRLObtainIronPickaxeDense-v0, I now get the following errors:

ERROR    - 2019-10-18 04:52:00,768 - [minerl.env.malmo.instance.2edcf5 log_to_file 535] [04:52:00] [EnvServerSocketHandler/INFO]: [STDOUT]: REPLYING WITH: MALMOERRORcvc-complex-type.3.2.2: Attribute 'avoidLoops' is not allowed to appear in element 'RewardForPossessingItem'.
ERROR    - 2019-10-18 04:52:01,867 - [minerl.env.malmo.instance.2edcf5 log_to_file 535] [04:52:01] [EnvServerSocketHandler/INFO]: [STDOUT]: REPLYING WITH: MALMOERRORcvc-complex-type.3.2.2: Attribute 'avoidLoops' is not allowed to appear in element 'RewardForPossessingItem'.
ERROR    - 2019-10-18 04:52:02,950 - [minerl.env.malmo.instance.2edcf5 log_to_file 535] [04:52:02] [EnvServerSocketHandler/INFO]: [STDOUT]: REPLYING WITH: MALMOERRORcvc-complex-type.3.2.2: Attribute 'avoidLoops' is not allowed to appear in element 'RewardForPossessingItem'.
...

This environment was working fine before, but I was using the package version from before the reward loop was fixed, so this problem may have been present since 0.2.5.

Unity Obstacle Tower Challenge

Tutorial Deep Reinforcement Learning to try with PyTorch

Over 1 year ago

Incremental PyTorch implementations of the main algorithms:
RL-Adventure: DQN / DDQN / prioritized replay / noisy networks / distributional values / Rainbow / hierarchical RL
RL-Adventure-2: actor-critic / proximal policy optimization / ACER / DDPG / twin dueling DDPG / soft actor-critic / generative adversarial imitation learning / HER

Good implementations of A2C/PPO/ACKTR: https://github.com/ikostrikov/pytorch-a2c-ppo-acktr

BTW, the repo for the Udacity course is open source: https://github.com/udacity/deep-reinforcement-learning
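As a flavour of what the DQN-family implementations above build on, their shared exploration rule is epsilon-greedy action selection. A dependency-free sketch (function and parameter names are my own, not taken from either repo):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    # argmax over the action-value estimates
    return max(range(len(q_values)), key=q_values.__getitem__)

print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0))  # greedy -> prints 1
```

Annealing epsilon from 1.0 down to a small floor over training is the usual schedule in these tutorials.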
