Ah, you're right. It was definitely because I was storing observations in my local code. (I should have noticed that before asking.)
But I'm still wondering why my evaluation is stuck. My code in the GitLab repository at this point doesn't store anything (it is RandomPolicy as-is). It seems no submission has finished successfully yet (I see no entries for Round 2).
I've been running the intrinsic phase locally without resetting the environment, since env resets are not allowed on the evaluation server. However, I found that my script dies without any error message.
I observed memory usage increasing even when running RandomPolicy, so I assume the environment has a memory leak that grows with the number of steps in an episode.
I also suspect that is what makes the evaluation process stall or hit a timeout. (My RandomPolicy submission has been stuck around 2M steps for a few days now.)
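One way to confirm the leak locally is to log allocated memory every N steps and check whether it grows monotonically. A minimal sketch using Python's standard `tracemalloc` module; `LeakyEnv` here is a hypothetical stand-in for the real environment, not the actual `real_robots` API:

```python
import tracemalloc

class LeakyEnv:
    """Hypothetical stand-in environment that keeps every observation
    alive forever, simulating a per-step memory leak."""
    def __init__(self):
        self._history = []

    def step(self, action):
        obs = [0.0] * 1000          # dummy observation
        self._history.append(obs)   # the "leak": nothing is ever freed
        return obs

tracemalloc.start()
env = LeakyEnv()
samples = []
for step in range(1, 5001):
    env.step(action=None)
    if step % 1000 == 0:
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
        print(f"step {step}: {current / 1024:.0f} KiB currently allocated")

# Steadily increasing samples point at per-step retention rather than noise.
grows = all(b > a for a, b in zip(samples, samples[1:]))
print("memory grows monotonically:", grows)
```

If the same pattern shows up while stepping the real environment with RandomPolicy, that would support the leak hypothesis (though `tracemalloc` only sees Python-level allocations; a leak in native code would need an OS-level check such as RSS instead).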
Is anyone else facing a similar situation, or is this just a problem on my end?
Hi, I am using the latest real_robots package (0.1.16) and found that my evaluation has been stuck around 2M steps for a few hours. I made no changes to the policy in the starter kit (RandomPolicy), just to see how long it takes to submit a non-learning agent. I assume something in the environment is still slowing down the evaluation.
Could you investigate a little more?
Hi, I used debug mode to test my submission and then tried to turn it off to get an actual result. But it seems the submission still runs in debug mode even after I updated aicrowd.json, pushed to the repo, and created a tag.
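For reference, the flag in question is usually a top-level boolean in aicrowd.json. A hedged example based on common AIcrowd starter kits (the `challenge_id` value is a placeholder, and field names may differ in your kit):

```json
{
  "challenge_id": "your-challenge-id",
  "debug": false
}
```

It's worth double-checking that the tag you created actually points at the commit containing the updated aicrowd.json, since the grader evaluates the tagged commit, not the branch head.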
Does it take time for the debug flag change to take effect?
What should I do?