Yes, we used the same technique but got a slightly lower score. I only saw this post after the competition ended, where they say that the winners were generated by human players and not by an engine. That explains why your approach scored better: you didn’t use a high search depth, whereas we used a depth-20 lookahead.
Oh wait, I think I got it mixed up. When it’s Black’s turn, the board is given from Black’s perspective, right? I thought it was always given from White’s perspective, which is why I assumed it was a draw (since a white pawn can only move up the board and cannot check the king), but if the board is given from Black’s perspective, then it is indeed a checkmate.
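To make the perspective point concrete, here is a minimal sketch (my own illustration, not the competition’s actual board format): a board given “from Black’s perspective” is the White-perspective board rotated 180 degrees, so un-flipping it means reversing both ranks and files.

```python
def flip_perspective(board):
    """Rotate an 8x8 board (list of rank strings) by 180 degrees,
    converting between White's and Black's point of view."""
    return [rank[::-1] for rank in reversed(board)]

# A white pawn ('P') that seems to be moving "down" in a Black-perspective
# board is moving "up" again once the board is un-flipped.
black_view = [
    "........",
    "........",
    "........",
    "........",
    "........",
    "........",
    "P.......",
    "........",
]
white_view = flip_perspective(black_view)
```

Flipping twice is the identity, which is an easy sanity check that the transform is a true 180-degree rotation.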
After further examination, there seems to be only one draw (9008.jpg), but there are many boards that I think are illegal (around 3,000).
Have a good day!
I noticed that some boards in the test set are actually draws (e.g. 9008.jpg); in that case, what are we supposed to output as the label?
P.S.: I also noticed that many boards are simply not legal (for example, the black king is already in check but it’s White’s turn).
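For what it’s worth, this kind of illegality is easy to detect mechanically: a position is illegal if the side *not* to move is already in check. Below is a toy sketch on a plain 8x8 grid (my own illustration, handling only white queens, not the full rule set; a real check would use a chess library):

```python
def squares_attacked_by_queen(board, r, c):
    """Squares a queen at (r, c) attacks on an 8x8 grid ('.' = empty);
    each ray stops at the first occupied square (which it does attack)."""
    attacked = set()
    for dr, dc in [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]:
        nr, nc = r + dr, c + dc
        while 0 <= nr < 8 and 0 <= nc < 8:
            attacked.add((nr, nc))
            if board[nr][nc] != ".":
                break
            nr, nc = nr + dr, nc + dc
    return attacked

def black_king_in_check(board):
    """True if any white queen ('Q') attacks the black king ('k')."""
    king = next((r, c) for r in range(8) for c in range(8)
                if board[r][c] == "k")
    return any(king in squares_attacked_by_queen(board, r, c)
               for r in range(8) for c in range(8) if board[r][c] == "Q")

# Black king on a8, white queen on a1, and it is White's turn:
# the side not to move is in check, so the position is illegal.
board = [
    "k.......",
    "........",
    "........",
    "........",
    "........",
    "........",
    "........",
    "Q.......",
]
side_to_move = "white"
illegal = side_to_move == "white" and black_king_in_check(board)
```

A library like python-chess exposes the same idea directly via `Board.is_valid()`, which flags positions where the opponent is in check.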
Thanks in advance and have a good day!
Perfect! I couldn’t find a satisfying term, so I stuck with that one. I’ll edit it right away!
It would be interesting if you posted the solutions that led you to reduce the MSE to 333333 rather than the 0-MSE one, because the latter doesn’t provide much value for beginners. I understand that the goal of this solution is to pinpoint weaknesses of some competitions that can be “hacked” (several Kaggle competitions have had this), which is good for beginners, but I think it would be even better to include an ML solution too.
Anyway, I was looking forward to seeing your solutions since you slayed everything, and thank you for providing them; I am sure they are full of insights for aspiring AIcrew.
Have a nice day
Can we get more information about exactly which F1 score is computed? Is it the weighted F1, the macro F1 (which I doubt, given the results), or the micro one, as in sklearn?
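For reference, the three averages can give quite different numbers on the same predictions. Here is a small self-contained sketch of the micro/macro/weighted definitions (the same ones sklearn’s `f1_score` exposes via its `average` parameter); the labels below are made up purely for illustration:

```python
from collections import Counter

def f1(tp, fp, fn):
    """F1 from true-positive, false-positive, false-negative counts."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def f1_scores(y_true, y_pred):
    """Return (micro, macro, weighted) multi-class F1."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = {c: f1(tp[c], fp[c], fn[c]) for c in labels}
    support = Counter(y_true)
    # micro: pool all counts, then compute F1 once
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    # macro: unweighted mean of per-class F1
    macro = sum(per_class.values()) / len(labels)
    # weighted: per-class F1 weighted by class support
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    return micro, macro, weighted

y_true = ["a", "a", "a", "b", "c"]
y_pred = ["a", "a", "b", "b", "b"]
micro, macro, weighted = f1_scores(y_true, y_pred)
```

On this toy example the three values differ (0.6, ~0.43, 0.58), which is why knowing which one the leaderboard computes matters.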
Have a good day
Dear ec_ai team,
The submission part now works without errors; however, the score on the leaderboard and the one displayed in the “submission” tab is 0 for all configurations, which doesn’t match the score we get for the last submission (the v0.9 tag). Is this also something we missed in the config?
We also only sent the transitions file for 15e5 iterations, because the one for 15e6 simply would not push (it is a 3.5 GB file); the push fails with:
fatal: Out of memory, malloc failed (tried to allocate x bytes)
Is this problem coming from our submission, or is the host not handling large files well?
Also, I think I might have missed it somewhere, but the number of extrinsic trials is only 5 for the submission; isn’t it supposed to run for 50 trials?
Thank you in advance for answering our questions.
Anass’s teammate here. We no longer get the agent error that apparently came from the environment.yml, but now it’s ‘error’ instead. It’s strange, since it works perfectly in my Ubuntu environment even after clearing Anaconda and only executing the environment.yml.
Is it possible to have a look at the logs again?
Thanks in advance and have a good day