Thank you. In my case it fails at the ClariQ ranker and does not provide logs from any step. I have run very similar code with weights of the same size before successfully, so, all packages being equal, it should not be OOM.
It has failed again. @dipam note that it does not even show logs for the validation parts; it does not show any log at all. Although it would be nice to know what is going on with the ranker.
Here is my new submission: AIcrowd
And thank you very much.
Thank you Dipam. Sorry for the confusion. I am resubmitting and I will let you know, but in any case it is weird, because the new submission is essentially an old one (that ran successfully and, as far as I can tell, does not hardcode anything) with different model weights.
Hi, I am still seeing failures as before. It fails at the ranker evaluation and then no logs are shown. Is this happening to anyone else? Maybe it is an error that is only occurring now for certain evaluations?
For example, see #204968, which I believe was resubmitted from the host side after the fix.
Excellent, thank you. Luckily I had corrected the yml. When running jupyter-repo2docker on my computer I do not need to add gcc manually for the Docker image to build. Maybe I have a different version of repo2docker.
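In case it helps anyone else hitting the gcc issue: repo2docker installs any Debian packages listed in an `apt.txt` file at the repository root, so a portable fix (independent of which repo2docker version you run locally) is to list the compiler there:

```
gcc
```

Each line of `apt.txt` is one apt package name.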
I just ran the baseline notebook. It does not fail, but it does not show any logs either.
Hi, I have tried more than 5 different submissions in the past two days and I am getting failures without any log message to debug.
Hi, the competition description says "More information to follow on how the world state can be parsed/visualized." I need some clarification on what the actions and the observables mean.
In particular, looking at the official baseline, it states:

gridworld_state - Internal state from the iglu-gridworld simulator corresponding to the instruction. NOTE: The state will only contain the "avatarInfo" and "worldEndingState"

So why is the tape included in the steps files?
I also noticed the same thing is happening with observation_spaces.
It was never added there.
After running the test I can confirm that the online evaluation is not passing building_info to the agent.
I think building_info is not being passed in the online evaluation.
I am of the opinion that we should be able to receive it, for compatibility with existing agents in the citylearn repo.
Hi @kingsley_nweye, thank you for adding building_info to the local_evaluation script.
I am trying to run Marlisa, which uses building_info, but I think that in production your evaluation is not passing the building info along.
This is the log I am getting:
I am running this code to double-check as soon as my submission count is refreshed. I will put it in the OrderEnforcingWrapper to catch the bug in the online evaluation:
```python
def register_reset(self, observation):
    """Get the first observation after env.reset, return action"""
    action_space = observation["action_space"]
    self.action_space = [dict_to_action_space(asd) for asd in action_space]
    obs = observation["observation"]
    self.num_buildings = len(obs)

    # CHECK MISSING INFO
    # I check that the dictionary contains the building_info.
    building_info = observation['building_info']

    for agent_id in range(self.num_buildings):
        action_space = self.action_space[agent_id]
        # self.agent.set_action_space(agent_id, action_space)
        self.agent.set_action_space(observation)
    return self.compute_action(obs)
```
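For what it's worth, a defensive version of that check (`get_building_info` is a hypothetical helper of mine, not part of the starter kit) fails fast with an explicit message, which makes the missing key obvious in the online logs:

```python
def get_building_info(observation):
    """Return building_info from the reset payload, failing loudly if absent.

    Hypothetical helper: 'building_info' is the key used by the
    local_evaluation script; the online evaluator may omit it.
    """
    if "building_info" not in observation:
        raise KeyError(
            "building_info missing from observation; the online evaluation "
            "may not be passing it through"
        )
    return observation["building_info"]

# Example: a reset payload without building_info triggers the error.
try:
    get_building_info({"action_space": [], "observation": []})
except KeyError as err:
    print("caught:", err)
```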
Thank you very much @dipam. I have another question: are we guaranteed that the observation spaces are the same for all buildings?
Because if not, I think the evaluation script should give us the observation_spaces (as well as the action spaces) and the building information, as in the citylearn repo's main examples:
```python
# Contains the lower and upper bounds of the states and actions, to be
# provided to the agent to normalize the variables between 0 and 1.
# Can be obtained using observations_spaces[i].low or .high
env = CityLearn(**params)
observations_spaces, actions_spaces = env.get_state_action_spaces()

# Provides information on Building type, Climate Zone, Annual DHW demand,
# Annual Cooling Demand, Annual Electricity Demand, Solar Capacity,
# and correlations among buildings
building_info = env.get_building_information()
```
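Since those bounds exist precisely so the agent can normalize variables into [0, 1], here is a minimal min-max scaling sketch (the `normalize` helper is my own, assuming Box-like spaces exposing `.low`/`.high` arrays as in the repo example above):

```python
import numpy as np

def normalize(obs, low, high):
    """Min-max scale an observation into [0, 1] using the space bounds."""
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    span = np.where(high > low, high - low, 1.0)  # guard against zero-width bounds
    return (np.asarray(obs, dtype=float) - low) / span

# Example with made-up bounds:
print(normalize([5.0, 0.5], low=[0.0, 0.0], high=[10.0, 1.0]))  # → [0.5 0.5]
```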
Hi @kingsley_nweye. Sorry, I am referring to a single coordinator, or, as you put it, a 'multi-agent coordinator'. It should receive the information on the observation/action spaces for all the buildings.
Hi, after looking at the local_evaluation.py code, it looks like one would have to modify the OrderEnforcingWrapper so as to pass all the information through to a multi-agent coordinator.
On the other hand, OrderEnforcingWrapper has this in its docstring:
TRY NOT TO CHANGE THIS
So can we change it? Or should we find another way to pass all the information to a coordinator agent?
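Until that is clarified, one workaround that leaves OrderEnforcingWrapper untouched is to adapt the coordinator behind the per-agent interface the wrapper already expects. A sketch (the `CoordinatorAdapter` class and the `setup`/`act` method names are my own assumptions, not part of the starter kit):

```python
class CoordinatorAdapter:
    """Adapts one multi-agent coordinator to the per-agent interface.

    Sketch only: it stores whatever the evaluation passes at reset time
    (action spaces, building_info if present) and forwards the full joint
    observation to a single coordinator, so OrderEnforcingWrapper can stay
    unchanged.
    """

    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.reset_info = None

    def register_reset(self, observation):
        # Keep everything the evaluator gives us at reset time.
        self.reset_info = observation
        self.coordinator.setup(
            action_space=observation.get("action_space"),
            building_info=observation.get("building_info"),  # may be absent online
        )
        return self.compute_action(observation["observation"])

    def compute_action(self, joint_observation):
        # The coordinator sees all buildings at once and returns all actions.
        return self.coordinator.act(joint_observation)
```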