First of all, I would like to congratulate the winners and to thank the organisers for such an interesting challenge! In my opinion, it went quite smoothly, and 5 submissions per day were just enough. I'm looking forward to seeing the private leaderboard scores, even though Martin mentioned that positions seem to remain the same. Congratulations to the ck.ua team, who managed to achieve an excellent result below 100m! The nwpu.i4Sky team and ZAViators also showed great scores and were very close.
Personally, I was surprised by how many stations can potentially be used after synchronization, though it requires data "from the future". Before starting to work on the round 2 solution, I estimated that 70% of test tracks are achievable with accuracy below 100m. It was fun to get about 71% coverage after synchronizing 240 stations, the maximum that the method I developed allowed. In my opinion, 90% should still be reachable, though with much lower accuracy.
Regarding the third round, I assume the main challenge would be to predict station calibration into the future. A training dataset may contain tracks, for example, only for the first half hour or hour, while predictions should be made for the remaining (half hour?) part. In order to make sure that participants don't use data from the future, the organisers could provide a function which generates points for a given aircraft one by one. Use of such a function could be made mandatory for participants and enforced during solution verification at the end of the competition. There are only a couple of questions here:
- Would it be allowed to update previously predicted points for a given aircraft, or not?
- How do we avoid using future points from other aircraft when making predictions for a given one?
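The generator idea above could be sketched roughly as follows. This is a minimal illustration only, assuming a hypothetical `PointFeed` class and `next_point` method; it is not an actual competition API, just one way a verifier could enforce that points are released strictly in timestamp order so no solution can peek at the future.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Point:
    # Illustrative fields; a real feed would carry station measurements etc.
    timestamp: float
    aircraft_id: str
    measurements: dict

class PointFeed:
    """Releases points strictly in timestamp order, so a solution can never
    see data 'from the future' relative to the point it must predict next."""

    def __init__(self, points):
        # Sort once by timestamp; the feed then replays points in order.
        self._points = sorted(points, key=lambda p: p.timestamp)
        self._cursor = 0

    def next_point(self) -> Optional[Point]:
        if self._cursor >= len(self._points):
            return None  # feed exhausted
        p = self._points[self._cursor]
        self._cursor += 1
        return p

# Usage: the verifier replays the feed and checks that each prediction was
# produced before the next point was requested.
feed = PointFeed([
    Point(2.0, "A1", {}),
    Point(1.0, "B2", {}),
    Point(3.0, "A1", {}),
])
order = []
while (p := feed.next_point()) is not None:
    order.append(p.timestamp)
print(order)  # [1.0, 2.0, 3.0]
```

One open design choice, matching the second question above, is whether the feed should interleave all aircraft (as here) or expose a separate feed per aircraft, which would make cross-aircraft leakage harder but also hide legitimately usable concurrent data.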
First of all, I would like to say thank you to all the organisers who prepared the dataset and the competition. I had never worked with ADS-B data before, so it was a unique opportunity to see real data and to test different approaches. In addition, it was very exciting each time to achieve a new accuracy record, improving with different models from 11 kilometers down to 33 meters!
As feedback, I agree with Richard that it might be better for everyone if the test set were hidden. In that case, the final results on a hidden test set would better reflect the predictive power of the models. Also, I think that 3-5 submissions per day should be enough, so that participants focus on developing new models and features rather than on fine-tuning existing, suboptimal ones.
Overall, it was very exciting to participate. I'm looking forward to seeing the other winners' solutions!
Since you confirmed that you have already worked with very similar OpenSky Network data, I'll still state my position. The fact that you have been working with very similar data from OpenSky for 2 years makes the competition unfair and pointless for the other participants, who don't have experience with this data or pre-existing models from before the competition. None of the other participants had access to the data, or as much time to practice with it, before the competition. I hope that the organisers will follow fair principles, and I will wait for their decision.
I would like to ask a question about the affiliation of richardalligier, who achieved a score of 35.5m on his first attempt. It is easy to find his GitHub profile, where it's clear that he has been working with OpenSky Network data since at least 2018 and has several research papers related to this competition. It doesn't look like a fair competition now, as he seems to be connected with OpenSky Network. They most likely know him, as they have provided the data for his research for a couple of years now. I would encourage you to consider his eligibility to participate in this competition. Otherwise, the whole idea of the competition, searching for fresh ideas, will be compromised.