giadefa 175

Name: Gianni De Fabritiis
Organization: Computational Science Laboratory
Location: Barcelona, ES

Badges: 2 Gold · 0 Silver · 1 Bronze

Challenges Entered

  • Unity Obstacle Tower Challenge: A new benchmark for Artificial Intelligence (AI) research in Reinforcement Learning
    Latest submissions (See All): graded #8963, graded #8954, graded #8868
    Gold (2): EulerLearner (May 16, 2020), EulerLearner (May 16, 2020)
    Silver (0)
    Bronze (1): Trustable (May 16, 2020)

Badges

  • Has filled their profile page
    May 16, 2020
  • Great work! You're one of the top participants in this challenge. Here's a gold badge to celebrate the achievement.
    Challenge: Unity Obstacle Tower Challenge
    May 16, 2020
  • Great work! You're one of the top participants in this challenge. Here's a gold badge to celebrate the achievement.
    Challenge: Unity Obstacle Tower Challenge
    May 16, 2020
giadefa has not joined any teams yet...

Unity Obstacle Tower Challenge

Release of the evaluation seeds?

Over 1 year ago

I have really loved the environment and the progression of tasks, some looking impossible at first but doable in the end.
Great game design.

g
PS: yes, it would be nice to have the test seeds, because testing was really slow.

Submissions are stuck

Over 1 year ago

Is there any way to tell, from the issue tab, which tag corresponds to a given submission?
Several submissions are queued now, and it is hard to know which commit each of them corresponds to.

g

Submissions are stuck

Over 1 year ago

It does not seem to work for us.

Different results between training, debug and evaluation

Over 1 year ago

Thanks. We now think that overfitting is the problem as well.
I did not expect to overfit such a complex environment, but we probably did.
On the training seeds we average above 20 over 5 runs, but the evaluation seeds seem to be a totally different story.
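
To make the check concrete, here is a rough sketch of what I mean by comparing training seeds against held-out seeds. It is only a sketch: it assumes the gym-style ObstacleTowerEnv wrapper from the starter kit, and the seed lists and the policy below are placeholders, not our actual agent.

```python
# Sketch: average episode return on training seeds vs. held-out seeds.
# Assumes the gym-style ObstacleTowerEnv wrapper from the starter kit;
# the seed lists and the random policy are placeholders, not our agent.
from obstacle_tower_env import ObstacleTowerEnv

def average_return(env, policy, seeds, runs_per_seed=5):
    """Average episode return over the given seeds (reward is roughly +1 per floor)."""
    totals = []
    for seed in seeds:
        for _ in range(runs_per_seed):
            env.seed(seed)                 # fix the tower layout for the next reset
            obs = env.reset()
            done, ep_return = False, 0.0
            while not done:
                obs, reward, done, info = env.step(policy(obs))
                ep_return += reward
            totals.append(ep_return)
    return sum(totals) / len(totals)

env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True,
                       realtime_mode=False)
policy = lambda obs: env.action_space.sample()   # stand-in for the trained agent
train_seeds = list(range(25))                    # illustrative: seeds seen in training
heldout_seeds = list(range(100, 105))            # illustrative: seeds never trained on
print("train   :", average_return(env, policy, train_seeds))
print("held-out:", average_return(env, policy, heldout_seeds))
env.close()
```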

g

Submissions are stuck

Over 1 year ago

Now the bot is not even picking up new submissions. Anyone else experiencing the same issues?

Different results between training, debug and evaluation

Over 1 year ago

Hi,
we are seeing very different results in evaluation compared to what we get locally and in debug mode.

Locally and in debug mode we get the expected score, while in evaluation performance is unexpectedly low, a lot lower. For example, in episode 1 we always stop at level 5 in evaluation. According to our stats we have a 99% success rate at level 5 across the training seeds, and indeed we never fail at level 5 locally or in debug mode.

Now I understand that the evaluation seeds are different, but we cannot understand how there can be such a difference. We tried changing the model at level 5, but the behavior is the same: fine locally and in debug mode.

Any idea?

For the admins, this is one debug test, for instance:
https://gitlab.aicrowd.com/giadefa/obstacle-tower-challenge/issues/16
This is one evaluation:
https://gitlab.aicrowd.com/giadefa/obstacle-tower-challenge/issues/31

It would be interesting to have some info on episode 1 (a video?) to understand how it dies.
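
For reference, this is roughly how we gather the per-floor stats mentioned above: count floor completions from the +1.0 floor-completion reward and record where each episode ends. It is only a sketch; the starter-kit ObstacleTowerEnv wrapper and the policy stand-in are assumptions, not our exact code.

```python
# Sketch: histogram the floor at which each episode ends, across a set of seeds.
# Assumes the starter-kit ObstacleTowerEnv wrapper and the +1.0 reward that
# Obstacle Tower gives for completing a floor; the policy is a placeholder.
from collections import Counter
from obstacle_tower_env import ObstacleTowerEnv

def final_floor(env, policy, seed):
    """Run one episode on `seed` and return the number of floors completed."""
    env.seed(seed)
    obs = env.reset()
    done, floors = False, 0
    while not done:
        obs, reward, done, info = env.step(policy(obs))
        if reward >= 0.95:               # floor-completion reward is +1.0
            floors += 1
    return floors

env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True,
                       realtime_mode=False)
policy = lambda obs: env.action_space.sample()   # stand-in for the trained agent
endings = Counter(final_floor(env, policy, seed) for seed in range(20))
print(endings)   # e.g. how many episodes die at floor 5
env.close()
```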

Episode in evaluation ends prematurely

Over 1 year ago

Maybe it's reporting steps as action steps. Although there are 3000 time units at the start, each env step costs 5 time units.

Nevertheless, we also find the evaluation results a bit weird.
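
A quick back-of-the-envelope of what I mean, assuming the 3000-unit starting budget and 5 time units per env step are right:

```python
# Back-of-the-envelope: 3000 starting time units / 5 time units per env step
# = 600 action steps before the timer runs out (ignoring time orbs and the
# extra time granted for clearing floors). Numbers are as I understand them.
START_TIME_UNITS = 3000
TIME_UNITS_PER_STEP = 5

def action_steps_left(time_units_left: int) -> int:
    """Convert a remaining time-unit budget into remaining action steps."""
    return time_units_left // TIME_UNITS_PER_STEP

print(action_steps_left(START_TIME_UNITS))   # 600
```

So a step count reported in time units would look about five times larger than one counted in action steps, which could explain part of the discrepancy.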

https://www.linkedin.com/in/gdefabritiis/