0 Followers · 0 Following
kyunghyunlee

Location

KR

Badges

1 · 1 · 0

Activity

[Activity heatmap (Apr–Apr, by weekday) omitted]

Ratings Progression: [chart not loaded]

Challenge Categories: [chart not loaded]

Challenges Entered

Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments

Latest submissions

No submissions made in this challenge.

A new benchmark for Artificial Intelligence (AI) research in Reinforcement Learning

Latest submissions

graded 4081
graded 4069
graded 4059
kyunghyunlee has not joined any teams yet...

Unity Obstacle Tower Challenge

Submissions Q&A

About 5 years ago

It seems the evaluation server is dead.
Mine has been stuck for 7 hours.

Evaluation error: Unity environment took too long to respond

About 5 years ago

In my local Docker setup it always tested fine.
I am getting the same error for every submission:

The Unity environment took too long to respond.

Problem on agent

About 5 years ago

It runs correctly on my local machine.
Setting realtime_mode=True doesn't help either.
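
For reference, this is roughly how I construct the environment when testing locally (a minimal sketch; the binary path is illustrative, and I am assuming realtime_mode is accepted as a constructor keyword, as in the starter kit):

from obstacle_tower_env import ObstacleTowerEnv

# Assumption: realtime_mode=True runs the simulation at real-time speed
# instead of as fast as possible; the path below is illustrative.
env = ObstacleTowerEnv('./ObstacleTower/obstacletower.x86_64',
                       realtime_mode=True)
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()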

Testing the agent locally with Docker

About 5 years ago

I followed the instructions in README.md.
I successfully built the Docker image.

I ran the Docker image with two terminals as described in the Run Docker image section.
The agent starts up fine and waits for the environment.
When I run the environment, nothing happens.
Below are my console messages for both.

Agent

root
INFO:mlagents_envs:Start training by pressing the Play button in the Unity Editor.
Traceback (most recent call last):
  File "run.py", line 27, in <module>
    env = ObstacleTowerEnv(args.environment_filename, docker_training=args.docker_training)
  File "/srv/conda/lib/python3.6/site-packages/obstacle_tower_env.py", line 45, in __init__
    timeout_wait=timeout_wait)
  File "/srv/conda/lib/python3.6/site-packages/mlagents_envs/environment.py", line 69, in __init__
    aca_params = self.send_academy_parameters(rl_init_parameters_in)
  File "/srv/conda/lib/python3.6/site-packages/mlagents_envs/environment.py", line 491, in send_academy_parameters
    return self.communicator.initialize(inputs).rl_initialization_output
  File "/srv/conda/lib/python3.6/site-packages/mlagents_envs/rpc_communicator.py", line 80, in initialize
    "The Unity environment took too long to respond. Make sure that :\n"
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
	 The environment does not need user interaction to launch
	 The Academy and the External Brain(s) are attached to objects in the Scene
	 The environment and the Python interface have compatible versions.

Environment

+ ENV_PORT=
+ ENV_FILENAME=
+ '[' -z '' ']'
+ ENV_PORT=5005
+ '[' -z '' ']'
+ ENV_FILENAME=/home/otc/ObstacleTower/obstacletower.x86_64
+ touch otc_out.json
+ APP_PID=7
+ xvfb-run --auto-servernum '--server-args=-screen 0 640x480x24' /home/otc/ObstacleTower/obstacletower.x86_64 --port 5005 2
+ TAIL_PID=8
+ wait 7
+ tail -f otc_out.json
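
On the agent side, this is the kind of change I also tried to rule out a slow-starting environment: a longer handshake timeout (a sketch only, mirroring line 27 of run.py; I am assuming ObstacleTowerEnv accepts a timeout_wait keyword and forwards it to the underlying UnityEnvironment, as the traceback above suggests):

import argparse
from obstacle_tower_env import ObstacleTowerEnv

parser = argparse.ArgumentParser()
parser.add_argument('environment_filename', nargs='?', default=None)
parser.add_argument('--docker_training', action='store_true')
args = parser.parse_args()

# Assumption: timeout_wait (in seconds) is forwarded to mlagents_envs'
# UnityEnvironment, which raises UnityTimeOutException when it expires.
env = ObstacleTowerEnv(args.environment_filename,
                       docker_training=args.docker_training,
                       timeout_wait=600)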

Problem on agent

About 5 years ago

I tested the Obstacle Tower environment on my local machine.
I confirmed that the action space consists of a list of 4 numbers, like [0, 0, 0, 1].
I submitted the starter-kit agent as a test, and it was evaluated successfully.

Then I tested my agent for submission, which is slightly modified from the starter kit.
The modification forces the jump action to 0 in the sample from env.action_space.sample().
The actual source code is below; it is part of the run_episode(env) function in run.py.

while not done:
    action = env.action_space.sample()  # random MultiDiscrete action, e.g. [0, 0, 0, 1]
    action[2] = 0                       # force the jump component to 0 (never jump)
    obs, reward, done, info = env.step(action)

According to the evaluation log, it was stuck at step 0.

This is my first time participating in this kind of challenge, so I am not familiar with the environment.
What is the problem with my code?
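
For completeness, here is a self-contained version of the loop I am describing (the binary path and the surrounding episode handling are illustrative, not the exact starter-kit run.py):

from obstacle_tower_env import ObstacleTowerEnv

def run_episode(env):
    done = False
    episode_reward = 0.0
    obs = env.reset()
    while not done:
        action = env.action_space.sample()  # random MultiDiscrete action, e.g. [0, 0, 0, 1]
        action[2] = 0                       # force the jump component to 0 (never jump)
        obs, reward, done, info = env.step(action)
        episode_reward += reward
    return episode_reward

if __name__ == '__main__':
    env = ObstacleTowerEnv('./ObstacleTower/obstacletower.x86_64')  # illustrative path
    print(run_episode(env))
    env.close()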

kyunghyunlee has not provided any information yet.