ermekaitygulov

1 Follower · 0 Following





Challenges Entered

Robots that learn to interact with the environment autonomously

Latest submissions

  • 87061 (graded)
  • 87047 (graded)

Sample-efficient reinforcement learning in Minecraft

Latest submissions

No submissions made in this challenge.
Participant           Rating
nguyen_thanh_tin      0

  • CDS (NeurIPS 2019 : MineRL Competition)

REAL 2020 - Robot open-Ended Autonomous Learning

Cartesian space question

About 1 year ago

Hello. We’ve found that the ‘cartesian’ action space slows down the simulation. For example, on my PC the environment runs at around 1000 steps per second with the ‘macro_action’ and ‘joints’ action spaces, but drops to about 100 steps per second with ‘cartesian’.
The reason is the inverse kinematics calculation. Every environment step is a simulation step, so to change the arm pose in the ‘joints’ or ‘cartesian’ spaces you have to send the same action for 100-500 steps, and the same inverse kinematics calculation is then repeated 100-500 times. To speed up actions in the ‘cartesian’ space, the IK result could be cached (as is done in the ‘macro’ space). Also, ‘gripper_command’ is ignored in the ‘cartesian’ space.
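The caching idea could be sketched roughly like this (a minimal toy, not the real_robots implementation; `solve_ik` is a hypothetical placeholder for the expensive IK routine): the joint solution is recomputed only when the Cartesian target actually changes, so repeating the same action for 100-500 simulation steps pays the IK cost once.

```python
def solve_ik(target_pose):
    # Placeholder for an expensive inverse-kinematics computation;
    # the real one would return joint angles for the target pose.
    return tuple(x * 0.5 for x in target_pose)

class CachedIK:
    """Recompute IK only when the requested target pose changes."""

    def __init__(self):
        self._last_target = None
        self._last_joints = None

    def __call__(self, target_pose):
        key = tuple(target_pose)
        if key != self._last_target:      # cache miss: target changed
            self._last_joints = solve_ik(key)
            self._last_target = key
        return self._last_joints          # cache hit: reuse joint solution

ik = CachedIK()
a = ik((0.1, 0.2, 0.3))
b = ik((0.1, 0.2, 0.3))  # same action repeated: no IK recomputation
```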

Baseline question

About 1 year ago

Hello! Sorry, I’m already in a team.

Wrappers using / observation space access

About 1 year ago

About wrappers: it was just a suggestion, no problem :slight_smile:

About the observation space: thank you!

About ‘object_position’: I mean ‘object_position’ space.shape vs ‘object_position’ observation.shape.
The environment’s observation space is taken from its ‘robot’ attribute, an instance of the Kuka class. Kuka’s observation space is a Dict space with a key ‘object_position’, which corresponds to another Dict space with keys [‘tomato’, …]. These spaces (‘tomato’ and so on) are Box spaces with shape (7,) (real_robots/envs/robot.py, line 75). But the environment’s ‘step’ and ‘reset’ methods return an observation where observation[‘object_position’][‘tomato’].shape is (3,), because get_position() is called instead of get_pose() (real_robots/envs/env.py, line 234).
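The mismatch can be reproduced with a toy class (hypothetical, not the real Kuka/real_robots API): the declared space shape comes from a get_pose()-style method (position plus quaternion orientation, 7 values), while the returned observation comes from get_position() (coordinates only, 3 values).

```python
class ToyObject:
    """Toy stand-in for a simulated object with pose accessors."""

    def __init__(self, position, orientation):
        self._position = list(position)        # (x, y, z)
        self._orientation = list(orientation)  # quaternion (x, y, z, w)

    def get_position(self):
        # Coordinates only: 3 values.
        return list(self._position)

    def get_pose(self):
        # Coordinates plus orientation: 7 values.
        return self._position + self._orientation

tomato = ToyObject((0.1, 0.2, 0.3), (0.0, 0.0, 0.0, 1.0))
declared_shape = (len(tomato.get_pose()),)      # what the space advertises
observed_shape = (len(tomato.get_position()),)  # what step()/reset() return
```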

Wrappers using / observation space access

About 1 year ago

Also, the environment’s ‘object_position’ space shape differs from the corresponding shape in the observation: (7,) vs (3,). I guess the problem is that get_position() (which returns only coordinates) is called instead of get_pose() (which returns coordinates and orientation).

Wrappers using / observation space access

About 1 year ago

Hello!
Is there any way to use wrappers? There are None values (for the ‘goal_mask’ and ‘goal_positions’ keys) in the observation dict in the R1 environment. It could be solved by adding zero values for these keys at line 93 of real_robots/env.py:

self.goal = Goal(retina=self.observation_space.spaces[
    self.robot.ObsSpaces.GOAL].sample() * 0)

or by using wrappers.
It would also be useful if the observation_space were provided to the controller (e.g. for defining a neural network model). In my code I get the observation_space from the Kuka class, but that is not the most elegant way :)
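A wrapper-style fix could be sketched like this (a plain-Python stand-in, not the real gym.ObservationWrapper API; the key names and zero-default shapes below are illustrative): None entries in the observation dict are replaced with zero-filled defaults so downstream code never receives None.

```python
def fill_none_keys(observation, defaults):
    """Return a copy of `observation` where None values are replaced
    by the matching entry from `defaults` (left as None if no default)."""
    return {
        key: defaults.get(key) if value is None else value
        for key, value in observation.items()
    }

# Illustrative observation; real key names/shapes come from the environment.
obs = {"joint_positions": [0.1, 0.2], "goal": None, "goal_positions": None}
zero_defaults = {"goal": [0.0] * 4, "goal_positions": [0.0] * 3}
clean = fill_none_keys(obs, zero_defaults)
```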

Baseline question

Over 1 year ago

Hello! A question about the ‘percentage_of_actions_ignored_at_the_extremes’ parameter.
As I understand it, this parameter lets us drop the least relevant distances. Should it be np.linspace(actions_to_remove, len(self.actions) - 1, …) or np.linspace(0, len(self.actions) - 1 - actions_to_remove, …) instead of np.linspace(actions_to_remove, len(self.actions) - 1 - actions_to_remove, …) in abstractor.py:

for i in range(condition_dimension):
    sup = ordered_differences_queues[i].get_queue_values()
    for j in np.linspace(actions_to_remove, len(self.actions) - 1 - actions_to_remove, config.abst['total_abstraction']).round(0):
        self.lists_significative_differences[i] += [sup[int(j)]]

? :slight_smile:
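The three index schedules can be compared with a pure-Python stand-in for np.linspace (the action count and parameter values below are illustrative, not taken from abstractor.py): the current code trims candidates from both ends of the sorted list, while each suggested alternative trims only one end.

```python
def linspace(start, stop, num):
    """Evenly spaced values from start to stop inclusive (like np.linspace)."""
    if num == 1:
        return [float(start)]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

# Illustrative numbers: 10 sorted actions, drop 2 at the extremes, keep 4.
n_actions, actions_to_remove, total = 10, 2, 4

# Current code: skips `actions_to_remove` indices at BOTH ends.
both_ends = [round(x) for x in
             linspace(actions_to_remove, n_actions - 1 - actions_to_remove, total)]

# Alternative 1: skip only the low end.
low_end = [round(x) for x in linspace(actions_to_remove, n_actions - 1, total)]

# Alternative 2: skip only the high end.
high_end = [round(x) for x in linspace(0, n_actions - 1 - actions_to_remove, total)]
```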

NeurIPS 2019 : MineRL Competition

New obtaindiamond

About 2 years ago

The latest updates have normal rewards (once per item, except logs). But the Docker image hasn’t been updated, so submissions are still evaluated with the ‘reward bugs’.

[Announcement] Submissions for Round 1 now open!

Over 2 years ago

A question about the Round 1 deadline: https://www.aicrowd.com/challenges/neurips-2019-minerl-competition says Round 1 finishes in 48 days, but that differs from the date in “Important Dates” (22 September). When does the first round finish?

How is the "reward" on leaderboard page computed?

Over 2 years ago

Also, it looks like it is the “Dense” environment, because using the evaluate_locally.sh script we got a reward for every crafted item, and after replacing “ObtainDiamond” with “ObtainDiamondDense” we got the reward only once per item.
