1 Follower
0 Following
RomanChernenko

Location: UA



Challenges Entered

Multi-Agent Reinforcement Learning on Trains
Latest submissions: graded 119585, graded 119573, graded 119571

Multi Agent Reinforcement Learning on Trains
Latest submissions: graded 32763, graded 32762, graded 32761

Help improve humanitarian crisis response through better NLP modeling
Latest submissions: no submissions made in this challenge

Participant Rating: student 257

Teams:
  • CkUa (Flatland Challenge)
  • ck.ua (CYD Campus Aircraft Localization Competition)
CYD Campus Aircraft Localization Competition

Round 2 is coming!

About 1 year ago

Hello @masorx

When will the data for round 2 be available for download?

Data from the future

Over 1 year ago

Hello @masorx

Just adding a rule that prohibits the use of future data is not enough in general. It is always possible to implement a method that solves the offline version of the problem using all the future data, and then fine-tune the official "online" method with the predictions of that hidden offline solution as ground truth.
If you really want an online tracking problem, you need some mechanism that strictly hides future data, as in the Flatland challenge. But that would require something like Kaggle's kernel competitions.
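To make the loophole concrete, here is a toy sketch (hypothetical data and filters, nothing from the actual competition): a non-causal estimator that peeks at future samples generally beats any causal one, so its output could be laundered as pseudo-ground-truth for tuning the "online" method.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(size=1000))          # hidden 1-D trajectory
obs = truth + rng.normal(scale=2.0, size=1000)    # noisy measurements

# Causal "online" estimate: only past and current samples are available.
online = np.empty_like(obs)
acc = obs[0]
for i, x in enumerate(obs):
    acc = 0.8 * acc + 0.2 * x                     # simple exponential filter
    online[i] = acc

# Non-causal "offline" estimate: a centred moving average peeks at the future.
k = 7
offline = np.convolve(obs, np.ones(k) / k, mode="same")

inner = slice(20, -20)  # skip filter warm-up and convolution edge effects

def rmse(est):
    return float(np.sqrt(np.mean((est[inner] - truth[inner]) ** 2)))

# The future-peeking estimate is more accurate, so it could serve as
# pseudo-labels for fine-tuning the causal model.
print(rmse(online), rmse(offline))
```

The same trick scales to any setup where future data is merely forbidden by rule rather than physically withheld.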

Evaluation metrics

Over 1 year ago

Hello @masorx

What distance units (m or km) do you use for the RMSE calculation?
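The unit only rescales the reported number, since RMSE is linear in the distance unit. A minimal illustration with made-up values:

```python
import math

def rmse(pred, true):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

pred_m = [1000.0, 2500.0]  # hypothetical predictions in metres
true_m = [1100.0, 2400.0]

print(rmse(pred_m, true_m))                 # 100.0 (metres)
print(rmse([p / 1000 for p in pred_m],
           [t / 1000 for t in true_m]))     # 0.1 (kilometres)
```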

Evaluation metrics

Over 1 year ago

Hello @shivam and @masorx

Thank you for your answers. I have a few additional questions about the metrics.
Why do you use the 2D distance? We need to predict the aircraft's altitude too, don't we?
Do you check the full submission file after each submission? So the score on the leaderboard is final, right?

Evaluation metrics

Over 1 year ago

Hello,

Can you please describe the evaluation metrics of the competition? What do the score and the secondary score mean?

Flatland

Start of the competition

Over 1 year ago

Hello,

The link to the competition appeared on the NeurIPS page, but it looks hidden now. When will the competition start?

Flatland Challenge

Publishing the Solutions

Over 1 year ago

Here is our (ck.ua team) 2nd place solution:
https://bitbucket.org/roman_chernenko/flatland-final/src/master/

Problems with submissions

Almost 2 years ago

(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

Solution Codes and Approaches

Almost 2 years ago

Hello @nilabha,

What results did you get with these approaches?

Evaluation process

Almost 2 years ago

Hello,

I have a few questions about the evaluation process.

  1. Can you please confirm that our solutions are always evaluated on the same test samples? Right now it looks like the test sequence is at least shuffled.
  2. How are the maps chosen for the visualization video? I can see that different teams have different maps in their videos.
  3. I have seen something strange in the score progression during evaluation. The score always starts from a low value and then quickly increases, and at the very end there is always a big jump. For example, I had a score of 91.6% after 248 simulations, but the final score was 92%. Such a significant jump at the end should not be possible. It looks like the scoring algorithm divides the done-agents sum by N+1, where N is the number of finished simulations.
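The suspected off-by-one in point 3 can be reproduced with a toy running-mean loop (purely illustrative; I don't know the actual evaluator code):

```python
def buggy_running_mean(scores):
    """Running mean that divides by n + 1 instead of n (the suspected bug)."""
    total = 0.0
    for n, s in enumerate(scores, start=1):
        total += s
        mean = total / (n + 1)  # off by one: should be total / n
    return mean

def final_mean(scores):
    """Correct mean, as if computed once at the very end of evaluation."""
    return sum(scores) / len(scores)

scores = [0.92] * 249  # 249 simulations, each with a 92% done-agents rate
print(round(buggy_running_mean(scores), 4))  # 0.9163, shown during evaluation
print(round(final_mean(scores), 4))          # 0.92, the "jump" at the end
```

With a uniform per-simulation score, the running value sits about one part in N too low throughout and snaps up only when the final result is computed correctly, which matches the observed 91.6% to 92% jump.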

Evaluation server hardware specifications

About 2 years ago

Hello,

What hardware is used for submission evaluation? How much RAM and how many CPU cores can we use?

Computation budget

About 2 years ago

Hello @mlerik ,

What hardware are you using for submission evaluation (CPU cores, maximum allowed RAM, etc.)?

[ANNOUNCEMENT] Round 2 Update

About 2 years ago

Hello @mlerik

Do you have any updates on when round 2 will start?

Further questions regarding submissions

Over 2 years ago

@mlerik, thank you for the clarification. I'm still confused about how the final score will be calculated. Here is a quote from the challenge overview:

Further questions regarding submissions

Over 2 years ago

I also have a lot of questions about submissions for the challenge.

  1. What hardware is used for submission evaluation?
  2. How is the leaderboard score calculated?
  3. How will the scores from Round 1 and Round 2 be combined?
  4. What are the maximum possible world size and train count?

And it's time to update the overview text, because right now a lot of useful information can only be found in the discussions.

The problem with the round duration

Over 2 years ago

Currently I can see the message "Round 0: Exploration Round: 3 months" at the top of the challenge description, but round 0 finished yesterday.

Submission format

Over 2 years ago

As I understand from the challenge rules, our models will be evaluated on a set of worlds with unknown random seeds.
But how will we submit our models? Just as an archive of Python files, or in a Docker container?

Mutli Agent Setup

Over 2 years ago

So we should expect a 10000*10000 world size in the next round, right? :open_mouth:
