Food Recognition Challenge - Round-2 Wrapup Summary
“We are what we eat.”
While some may find that quotation debatable, it is undeniable that our food habits go a long way toward defining our overall health. Given our busy and often stressful lifestyles, monitoring daily food and calorie consumption is as important as using fitness gadgets to track our daily activity levels.
One convenient way of doing this is by analyzing images of food to identify their individual components. Such an application can be useful for a variety of use cases, ranging from personal interest to medical relevance. Medical studies often involve monitoring the food intake of participants, and usually rely on food-related questionnaires that are known to be imprecise. An automated system is desired to help address such problems.
The goal of this challenge is to train a Machine Learning model which can look at images of food and detect the individual food items present in them. The task is posed as a semantic segmentation and classification problem: the model must identify and delineate the individual food items in a given image of a meal.
The competition has been set up in the format of a benchmark challenge, with multiple rounds having individual winners and prizes.
For the purpose of this challenge, a novel dataset of food images capturing 61 different classes was collected through the MyFoodRepo app, where numerous volunteers provide images of their daily food intake as part of a digital cohort called Food & You.
The dataset is an evolving one, and more data shall be released over time.
The biggest challenge in compiling a good dataset is that most food images found on the web are stock photography, captured in exceptionally good quality for promotional purposes. However, the models being developed need to work on real-world images, and must be robust to issues such as poor resolution, variations in lighting, and angle of view.
The challenge was launched in Nov’19. As of May’20, two rounds have been completed, and the next round will be announced soon.
Round-1: Nov’19 - 31 Dec’19
Round-2: 28 Jan’20 - 15 May’20
Each round has individual prizes for the winning teams, comprising:
✈️ Travel Grant - the winner of each round will be invited to the Applied Machine Learning Days at EPFL in Switzerland. A travel grant of up to $2500 will cover the costs.
📖 Authorship/Co-Authorship - top contributors will be invited to coauthor a paper on the advances made in all rounds.
It has been a very exciting competition so far, with more than 400 teams battling for the top spot and more than 700 submissions made.
Round-1:
- team rssfete, comprising Shraddhaa and Rohit, emerged the proud winners with Average Precision 0.573 and Average Recall 0.831
Round-2:
- team rssfete once again emerged as the proud winners with Average Precision 0.634 and Average Recall 0.886
- simon_mezgec grabbed second place with Average Precision 0.592 and Average Recall 0.821
WINNING STRATEGY 🏆
It is interesting to learn the thought process that goes into creating a winning solution. We asked the winning team what they tried and what finally worked best.
“We started by training Matterport's Mask R-CNN model, but I don't think it gave us anything past 0.45 mAP on the test set. After that, we started working with the 2018 state of the art on COCO, the Hybrid Task Cascade (HTC). We tried out a lot of things in Round-1 and did a lot of hyperparameter tuning. We also ensembled the models themselves at test time. We tried different image scales along with the hyperparameter tuning, which helped us out a lot and gave us a massive boost.
Eventually, we added the complete 450 images of the validation set to the training set and trained for around 4 more epochs. That gave us the final model, which also used 2 image scales and the best hyperparameters from our previous best solution.”
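The test-time ensembling across image scales that the team describes can be sketched roughly as follows. This is a minimal illustration, not their actual HTC pipeline: it assumes detections from each scale arrive as plain dicts and merges them with simple class-aware non-maximum suppression. All function and data names here are hypothetical.

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_predictions(pred_sets, iou_thresh=0.5):
    # Pool detections from all scales, then greedily keep the
    # highest-scoring box and drop overlapping same-class duplicates.
    pooled = sorted((p for preds in pred_sets for p in preds),
                    key=lambda p: p["score"], reverse=True)
    kept = []
    for p in pooled:
        if all(iou(p["box"], k["box"]) < iou_thresh for k in kept
               if k["category"] == p["category"]):
            kept.append(p)
    return kept

# Hypothetical detections for one image, run at two inference scales.
scale_a = [{"box": [10, 10, 50, 50], "score": 0.9, "category": "bread"}]
scale_b = [{"box": [12, 11, 51, 49], "score": 0.8, "category": "bread"},
           {"box": [60, 60, 90, 90], "score": 0.7, "category": "salad"}]
merged = merge_predictions([scale_a, scale_b])
```

The two overlapping "bread" boxes collapse into the higher-scoring one, while the "salad" detection survives; real pipelines do the equivalent on masks and with tuned thresholds.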
It is a great feeling to hear directly from participants about how they enjoyed the challenge and the things they liked the most.
“Since Round-2 of the competition ended, I just wanted to say a big thanks to the AIcrowd team, and @shivam in particular, for all the help throughout the competition - especially in the last couple of days. Every error and stuck submission was resolved in time, most of them very quickly, so thanks for that!
The competition itself is loads of fun. As someone who has worked in the food image recognition field for a good couple of years now, it’s fantastic to see not just a benchmark for this field, but also a competition associated with it as well. This should help attract more interest, researchers and data scientists to the problem, which should speed up progress.
Eager to see where this dataset and competition go in the future! 😊”
And some useful inputs from team rssfete:
“If there was some way, by which the evaluation could be made faster, in terms of predicting the time-out or doing a sample submission or even like keeping some sort of development leaderboard, where you release like 10 images or something, and then you let us submit our JSONs, to see that they're in the right format.”
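A local sanity check along the lines the team suggests could look something like the sketch below. It assumes a COCO-style results file (a JSON array of detection objects); the challenge's real submission schema may differ, and the key set here is an assumption to adjust.

```python
import json

# Assumed COCO-style result keys; replace with the challenge's real schema.
REQUIRED_KEYS = {"image_id", "category_id", "segmentation", "score"}

def validate_submission(json_text):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    try:
        preds = json.loads(json_text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    if not isinstance(preds, list):
        return ["top-level value must be a JSON array"]
    for i, p in enumerate(preds):
        if not isinstance(p, dict):
            problems.append(f"entry {i}: not an object")
            continue
        missing = REQUIRED_KEYS - p.keys()
        if missing:
            problems.append(f"entry {i}: missing keys {sorted(missing)}")
        elif not 0.0 <= p["score"] <= 1.0:
            problems.append(f"entry {i}: score out of [0, 1]")
    return problems

# Hypothetical examples: one well-formed entry, one with missing keys.
good = json.dumps([{"image_id": 1, "category_id": 7,
                    "segmentation": [[0, 0, 10, 0, 10, 10]], "score": 0.9}])
bad = json.dumps([{"image_id": 1}])
```

Running such a check before submitting would catch malformed files without consuming an evaluation slot, which is essentially what the development-leaderboard suggestion is after.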
RELATED EVENTS 🎪
Singularity - this competition was opened to all Felicity participants as part of the “Singularity” event, with prizes worth 40k INR up for grabs.
AIcrowd Blitz - a multi-class classification variation of the food challenge was included among the 5 challenges in the AIcrowd Blitz event (02-16 May’20).
FUTURE ROUNDS 🔮
The challenge gets hotter every round, and we expect even more exciting rounds in the future as more and more images are added to the ever-growing dataset.
Additionally, with advancements in machine learning algorithms and availability of high performance models, we expect the top scores to be pushed even further.
Stay tuned for more.