At this point, the number of classes is scheduled to remain the same. The number of training images and associated annotations will nevertheless increase.
Dear students and faculty of KIIT,
We are very excited to launch the KIIT AI (mini)Blitz Challenge for you today!
You can drop all your queries about the challenge and all the puzzles in the forum here.
All the best!
@davidadsp: Here is something HanClinto shared on Discord: https://gist.github.com/HanClinto/310bc189dcb34b9628d5151b168a34b0
If you’re new and feeling a bit lost, don’t fret! We recommend teaming up with fellow AIcrowd users for a more enriching experience!
Reply to this thread with a brief intro about you and what brings you to this challenge, and see the magic happen!
Welcome to the NeurIPS 2021 NetHack Challenge!
We are excited to have you onboard.
This is an exceptional challenge and we look forward to seeing how the AIcrowd Community takes it!
With the Starter Kit, you can make your first submission with ease.
If you have any questions on the challenge, the starter kit, or anything else, please do not hesitate to post them in the forums here!
(also, join the party on Discord!)
Find teammates for the challenge over here.
For any other questions or queries, drop a comment and we’ll get back to you!
@thanish: this is a multiclass classification problem, so the sum of the probabilities should be equal to 1.
The description specifies that the sum is less than or equal to 1, to stay true to the implementation details of the validation strategies in place, and to communicate that this is, after all, a probability distribution and cannot sum to more than 1.
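As a concrete illustration (a minimal sketch; the function name and input format are assumptions, not the actual submission format), raw class scores can be rescaled so that each prediction forms a valid probability distribution:

```python
def normalize_probs(scores):
    """Rescale non-negative class scores so they sum to exactly 1,
    forming a valid multiclass probability distribution.
    Illustrative only; the real submission format may differ."""
    total = sum(scores)
    if total == 0:
        # fall back to a uniform distribution if all scores are zero
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in scores]

row = normalize_probs([0.2, 0.5, 0.1])  # sums to 1 after rescaling
```

Submitting rows normalized this way guarantees the sum-to-1 constraint is satisfied up to floating-point rounding.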
Hope this clarifies your question!
Looks like there are some issues on the AIcrowd side.
I think that misrepresents the situation. It is hard for us to deprioritize everything else just because there is urgency here now that the test data has finally become available two weeks before the submission deadline.
Deprioritizing everything else to address this would be unfair to our internal roadmaps and to the other challenges that have been planned months in advance.
And, separately, launching the competition two weeks before the submission deadline is also unfair to the participants, who may or may not have enough time to put together their submissions.
@vamsi_krishna_vallur: we apologise for the trouble, and also for the fact that the local debugging information was not laid out as clearly as it could have been.
We are happy to confirm that we have increased the max number of submissions to 50 per team per day.
Best of luck!
The lulz come in when everyone realises that the @alfarzan account has been powered by a GPT-3 bot all along!
Not giving any hints
@victorkras2008: Thanks for pointing these out. We are on it. And indeed, some of the labels in the test set are likely wrong as well. We are working on correcting the affected data points, and will then re-evaluate all the submissions.
@ayushivani is on it!
Thanks @Shubhamaicrowd !
It’s a bit weird, though; I wouldn’t expect python-chess to have such a silly bug, nor do I understand why they wouldn’t consider the side on pawn promotion.
In any case, if anyone has an understanding of why this is happening, please do let us know.
Otherwise, I suggest including a rule-based correction for this case and updating the dataset for consistency.
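A rule-based correction along those lines could look like the sketch below. Everything here is an assumption on my part, not the dataset's actual format: I assume moves are stored as UCI strings (e.g. "e7e8q") and default the missing promotion piece to a queen.

```python
def fix_missing_promotion(uci_move, moved_piece):
    """Append a default queen-promotion suffix ('q') when a pawn move
    ends on the first or eighth rank without one.

    Hypothetical fix: the UCI move format, the moved_piece argument
    (piece letter, e.g. 'P' or 'p'), and the queen default are all
    assumptions for illustration.
    """
    is_pawn = moved_piece.lower() == "p"
    if is_pawn and len(uci_move) == 4 and uci_move[3] in ("1", "8"):
        return uci_move + "q"
    return uci_move

fixed = fix_missing_promotion("e7e8", "P")  # -> "e7e8q"
```

Non-pawn moves ending on the back rank (e.g. "e1g1" by the king) are left untouched, which is why the piece type has to be passed in alongside the move string.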
We have released some Supplementary Training data for Round-3.
This data contains smell words for 388 molecules.
Please note that, for consistency, the smell words belong to the global vocabulary used in Round-1 and Round-2 of the competition; hence the smell sentences may contain smell words that are not in the Round-3 vocabulary.
The data can be accessed in the Resources Section of the challenge.
Best of Luck,