dipam_chakraborty

Name

Dipam Chakraborty

Location

IN




Challenges Entered

Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments

Latest submissions

graded 94599
graded 94551
graded 93732

ASCII-rendered single-player dungeon crawl game

Latest submissions

No submissions made in this challenge.

3D Seismic Image Interpretation by Machine Learning

Latest submissions

No submissions made in this challenge.

Multi-agent RL in game environment. Train your Derklings, creatures with a neural network brain, to fight for you!

Latest submissions

No submissions made in this challenge.
Badges

Gold 0
Silver 0
Bronze 1

  • Trustable
    May 16, 2020
  • Has filled their profile page
    May 16, 2020
Participant Rating
nachiket_dev_me18b017 0
priteshgohil 0

  • Gamma NeurIPS 2020: Procgen Competition

NeurIPS 2021 AWS DeepRacer Challenge

Updates for starter kit

3 days ago

Hi @asche_thor

This is an older issue that has since been fixed. Are you on the latest version of the env docker release?

Can you please pull the latest docker image and check once?

NeurIPS 2021 - The NetHack Challenge

Example of using non-neural model

3 months ago

Hi @pajansen

The env returns observations that are already separated and encoded into a dictionary for convenience; this has nothing to do with neural nets. Is there a particular format you need?
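For illustration, here is a minimal sketch of inspecting that dictionary and driving a non-neural agent from it. The env ID and observation keys follow standard NLE conventions and may differ in your setup:

```python
import gym
import nle  # noqa: F401  # importing nle registers the NetHack environments

# Assumed env ID for the challenge; swap in whatever the starter kit uses.
env = gym.make("NetHackChallenge-v0")

obs = env.reset()
# The observation is a plain dict of numpy arrays, keyed by name
# (e.g. 'glyphs', 'blstats', 'message' in standard NLE).
for key, value in obs.items():
    print(key, getattr(value, "shape", type(value)))

# A rule-based agent simply maps these arrays to an action; here we just
# take a random action to show the step interface.
obs, reward, done, info = env.step(env.action_space.sample())
```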

Colab Notebook including submission and working ttyrec recording

3 months ago

Hi @charel_van_hoof, could you try once with Git Bash on Windows? I'll try to check, but I think nle is currently not supported on Windows. The ttyplay2 script alone might work, though.

IITM RL Final Project-b5d2e6

Submission Error : Inference Failed

5 months ago

Hi @shreesha_g_bhat_cs18

Seems like the error is due to the RAM usage of your code. The evaluations run with a limit of 4 GB RAM, so please make sure to stay within this limit.

Some garbage collection code in Python might help.
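As an illustrative sketch (the episode function and array sizes below are placeholders, not part of the evaluator):

```python
import gc

import numpy as np


def run_episode():
    # Placeholder for one evaluation episode that allocates large buffers.
    buffer = np.zeros((10_000, 1_000), dtype=np.float32)
    score = float(buffer.sum())
    # Drop the reference explicitly so the memory can be reclaimed.
    del buffer
    return score


for _ in range(5):
    run_episode()
    # Force a collection pass between episodes to keep peak RAM
    # under the 4 GB evaluation limit.
    gc.collect()
```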

Submission limit

5 months ago

Hi @arnav_anil_mhaske_cs

You should be able to test everything locally and then submit.

Can you please share why the extra submissions are needed?

Episode Count in Runner

6 months ago

Hi @s_tarun_prasad_me17b

BSuite gives the limit of 10000 as a fair number of episodes needed to converge.

Generally speaking, to keep a level playing field among all competitors while also not making it too easy, some constraint has to be applied. In this case, the number of episodes serves that role.

I know the number 10000 may be somewhat arbitrary, so if enough students feel it should be increased, we'll do it. For now, please try to improve your algorithm to get the highest score within 10000 episodes.

Best Regards,
Dipam

IIT-M RL-ASSIGNMENT-2-GRIDWORLD

Welcome to your 2nd Assignment!

6 months ago

Hi @s_tarun_prasad_me17b

Unfortunately the choice of giving extra test cases is not up to me; please talk to the TAs.

In my opinion the test cases provided are sufficient, so please review your code carefully.

Welcome to your 2nd Assignment!

6 months ago

Hi @richa_verma_cs20d020

Can you check whether your local scoring cell runs without any error?

If your local scoring cell gives an error, it's probably because the output format is wrong; check the targets file for the expected format.

If your local scoring cell gives no error, please let me know.

Welcome to your 2nd Assignment!

6 months ago

Hi @narra_jeevan_reddy_e,

This sounds like a formatting issue with the code on your end. Since it's not a general issue that affects all students, I encourage you to find the bug on your own. With the correct format you should get decimal values for all algorithms in the local scoring code. Look at the targets file for an example of the format.

Do let me know in case the problem still persists after you've checked it thoroughly.

Welcome to your 2nd Assignment!

6 months ago

Hi @jaswanthi_mandalapu

It looks like the error says you need to accept the challenge rules. Can you please accept the rules and then try submitting again?

IIT-M RL-ASSIGNMENT-2-TAXI

Unable to submit after 20th April

6 months ago

Hi @sreenadhuni_kishan_r

We recently fixed this. Can you close the notebook and run it again from the start? That should fix it.

Multi-Agent Behavior: Representation, Model-17508f

Can we use training data from task 1 or 2 for task 3?

6 months ago

Hi @sungbinchoi

Yes, you can use the training data of Task 1 for Tasks 2 and 3. Feel free to use all the data at your disposal.

You can even use some unsupervised learning on the test sequences.

What does the shape of a frame stand for?

7 months ago

Hi @Lin_Yan_Liang

Each mouse's keypoint is a 2D (x, y) coordinate.

2 Mice
2 Coordinates
7 keypoints
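A tiny numpy sketch of that layout; the axis order here (mice, coordinates, keypoints) is an assumption based on the list above, so please verify it against the starter kit:

```python
import numpy as np

# Hypothetical single frame: 2 mice x 2 coordinates (x, y) x 7 keypoints.
frame = np.zeros((2, 2, 7), dtype=np.float32)

x = frame[0, 0, 3]  # x coordinate of mouse 0's keypoint 3
y = frame[1, 1, 6]  # y coordinate of mouse 1's keypoint 6
print(frame.shape, x, y)
```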

Does that clear your doubt?

RL-Taxi

RL notebook

7 months ago

Hi nischith_shadagopan,

It's not important for the submission.

RLIITM-1

Rewards for out of grid states

7 months ago

Hi,

No, out-of-grid states do not have to be considered.

Can you please provide a link to where it is mentioned?

RL-VI

Any questions about the assignment? Ask them here!

7 months ago

I think your TAs must have communicated that it's supposed to be individual states and not the matrix norm.

If you think about it, the matrix norm isn't even a valid check on its own: two arrays, [0,0,1] and [1,0,0], would have the same norm.
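A small illustrative sketch of the difference (the threshold and arrays are placeholders, not from the assignment):

```python
import numpy as np

# Hypothetical value arrays from two consecutive value-iteration sweeps.
V_old = np.array([0.0, 0.0, 1.0])
V_new = np.array([1.0, 0.0, 0.0])

theta = 1e-6  # per-state convergence threshold

# Per-state check: every individual state's value must have stopped changing.
converged = bool(np.all(np.abs(V_new - V_old) < theta))

# The norm of each array alone cannot tell these value functions apart.
print(np.linalg.norm(V_old), np.linalg.norm(V_new))  # both 1.0
print(converged)  # False
```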

Any questions about the assignment? Ask them here!

7 months ago

Hi mizhaan_prajiy_maniy,

It's an effect of how numpy arrays are printed versus how the image is shown in the diagram. Please rotate/transpose as needed for your visualization.

Best Regards,
Dipam

NeurIPS 2020: Procgen Competition

How to find subtle implementation details

11 months ago

Hi @lars12llt

Our full code is in PyTorch. However, I wrote entirely custom code in PyTorch for this competition, as I was completely unfamiliar with rllib and wanted fine-grained control over the entire codebase. My implementation works by subclassing TorchPolicy in rllib and writing the full training code in the learn_on_batch function. This admittedly removes rllib's distributed learning benefits, but it allowed me to get comparable speed and score with PyTorch. Sorry I haven't released the code yet; I will be doing that soon.
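Purely as a conceptual sketch of that structure (not the released code): the class below overrides learn_on_batch on rllib's TorchPolicy and runs a placeholder training step inside it. The attribute and key names (self.model, self.device, SampleBatch.OBS, etc.) follow rllib 1.x conventions; the loss shown is a stand-in, not the competition algorithm.

```python
import torch
from ray.rllib.policy.torch_policy import TorchPolicy
from ray.rllib.policy.sample_batch import SampleBatch


class CustomTrainingPolicy(TorchPolicy):
    """Puts the entire training step inside learn_on_batch."""

    def learn_on_batch(self, samples: SampleBatch):
        # Lazily create an optimizer over the policy model's parameters.
        if not hasattr(self, "_custom_optimizer"):
            self._custom_optimizer = torch.optim.Adam(
                self.model.parameters(), lr=self.config["lr"])

        obs = torch.as_tensor(samples[SampleBatch.OBS], device=self.device)
        actions = torch.as_tensor(samples[SampleBatch.ACTIONS], device=self.device)
        rewards = torch.as_tensor(samples[SampleBatch.REWARDS], device=self.device)

        # The full custom logic (advantage estimation, PPO epochs,
        # minibatching, etc.) would go here; a toy policy-gradient-style
        # loss is used as a placeholder.
        logits, _ = self.model({"obs": obs}, [], None)
        log_probs = logits.log_softmax(-1)
        chosen = log_probs.gather(1, actions.long().unsqueeze(1)).squeeze(1)
        loss = -(chosen * rewards).mean()

        self._custom_optimizer.zero_grad()
        loss.backward()
        self._custom_optimizer.step()

        return {"custom_loss": loss.item()}
```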

Is it known in advance that we should select one of our existing submissions for the final evaluation?

11 months ago

Hi @the_raven_chaser

I’m just curious as to why you think none of your submissions was prepared for the final evaluation? … Full disclosure: we did not try to tune our submissions to the rest of the 10 environments either (though we knew that the final evaluation would be done on 20 envs).

Questions about the competition timeline

12 months ago

Hello @jyotish

Thanks for the clarification; however, this raises further questions. I think sample efficiency and generalization become a trade-off towards the end of training, which means that with one submission we can score high on sample efficiency but poorly on generalization, or we can improve generalization but reduce sample efficiency.

So the scenarios are:

  • There are two tracks, two env configs while training, and two separate scoring metrics.
  • There is one track, one env config while training, and one joint scoring metric.

Please clarify which of the above is the case.

If there are two tracks (and two env configs while training), the selected submission can either be near the top of sample efficiency but low on generalization, or sit in the middle of both leaderboards. Else, if there is a joint metric, we’d like to test that locally. We’ll plan our submissions accordingly.
