Round 1: Completed Round 2: Completed Round 3: 40 days left #supervised_learning #instance_segmentation

Food Recognition Challenge

A benchmark for image-based food recognition


πŸ“’ Updates

Round 3 for this challenge has started! πŸ’ͺ

 Food recognition baseline is live πŸš€

 

The starter kit for this challenge is available here.

You can also chat with us on Discord.

πŸ•΅οΈ Overview

Recognizing food from images is an extremely useful tool for a variety of use cases. In particular, it would allow people to track their food intake by simply taking a picture of what they consume. Food tracking can be of personal interest, and is often of medical relevance as well. Medical studies have long been interested in the food intake of study participants, but have had to rely on food frequency questionnaires that are known to be imprecise.

Image-based food recognition has made substantial progress in the past few years thanks to advances in deep learning. But food recognition remains a difficult problem for a variety of reasons.

Problem Statement

The goal of this challenge is to train models that can look at images of food and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteer Swiss users provide images of their daily food intake in the context of a digital cohort called Food & You. This growing dataset has been annotated - or automatic annotations have been verified - with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight / volume estimation.

This is an evolving dataset, where we will release more data as the dataset grows over time.


πŸ’Ύ Datasets

Finding annotated food images is difficult. There are some databases with some annotations, but they tend to be limited in important ways.

To put it bluntly: most food images on the internet are a lie. Search for any dish, and you’ll find beautiful stock photography of that particular dish. Same on social media: we share photos of dishes with our friends when the image is exceptionally beautiful. But algorithms need to work on real world images. In addition, annotations are generally missing - ideally, food images would be annotated with proper segmentation, classification, and volume / weight estimates.

The dataset for the AIcrowd Food Recognition Challenge is available at https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files

This dataset contains:

  • train-v0.4.tar.gz: the training set of 24,120 RGB food images, along with their corresponding 39,328 annotations in MS-COCO format
  • val-v0.4.tar.gz: the suggested validation set of 1,269 RGB food images, along with their corresponding 2,053 annotations in MS-COCO format
  • test_images-v0.4.tar.gz: the debug test set for Round 3, which contains the same images as the validation set

To get started, we advise you to download all the files and untar them inside the data/ folder of this repository, so that you have a directory structure like this:

β”œβ”€β”€β”€data
β”‚   β”œβ”€β”€β”€test_images/          (same images as the val set)
β”‚   β”œβ”€β”€β”€train/
β”‚   β”‚   β”œβ”€β”€β”€annotations.json  (training annotations in MS-COCO format)
β”‚   β”‚   └───images/           (all training images)
β”‚   └───val/
β”‚       β”œβ”€β”€β”€annotations.json  (validation annotations in MS-COCO format)
β”‚       └───images/           (all validation images)
    

πŸ’ͺ An open benchmark

For all the reasons mentioned above, food recognition is a difficult but important problem. Algorithms that can tackle it would be extremely useful for everyone. That is why we are establishing this open benchmark for food recognition. The goal is simple: provide high-quality data, and get developers around the world excited about addressing this problem in an open way.

Because of the complexity of the problem, a one-shot approach won’t work. This is a benchmark for the long run.

If you are interested in providing more annotated data, please contact us.

πŸš€ Submission

To submit to the challenge, you'll need to set up an appropriate repository structure, create a private git repository at https://gitlab.aicrowd.com with the contents of your submission, and push a git tag corresponding to the version of your repository you'd like to submit.

 

aicrowd.json

Each repository should have an aicrowd.json file with the following fields:

{
    "challenge_id": "aicrowd-food-recognition-challenge",
    "grader_id": "aicrowd-food-recognition-challenge",
    "authors": ["aicrowd-user"],
    "description": "Food Recognition Challenge Submission",
    "gpu": false
}

 

This file is used to identify your submission as part of the Food Recognition Challenge. You must use the challenge_id and grader_id specified above in your submission. The gpu key in aicrowd.json lets you specify whether your submission requires a GPU. If it is set to true, an NVIDIA K80 will be made available to your submission during evaluation.

 

Code Entry Point

The evaluator will use /home/aicrowd/run.sh as the entry point. Please remember to have a run.sh at the root which can instantiate any necessary environment variables and execute your code.
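A minimal run.sh might look like the sketch below; predict.py and the environment variable are placeholders for your own code, not part of the challenge specification:

```shell
#!/bin/bash
# Entry point executed by the evaluator as /home/aicrowd/run.sh.
# Set any environment variables your code needs, then launch it.
export PYTHONUNBUFFERED=1   # example variable: stream logs immediately
python predict.py           # placeholder for your own inference script
```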

 

Submitting using SSH

To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com.

You will have to add your SSH keys to your GitLab account by following the instructions here. If you do not have SSH keys, you will first need to generate a pair.
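If you need to generate a key pair, the standard OpenSSH workflow looks like this (the email address is a placeholder):

```shell
# Generate a new SSH key pair (accept the default file location when prompted)
ssh-keygen -t ed25519 -C "you@example.com"
# Print the public key, then paste it into GitLab under Settings > SSH Keys
cat ~/.ssh/id_ed25519.pub
```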

Then you can create a submission by pushing a tag to your repository, after adding the correct git remote:

git clone https://github.com/AIcrowd/food-recognition-challenge-starter-kit
cd food-recognition-challenge-starter-kit

# Add the AIcrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/food-recognition-challenge-starter-kit.git
git push aicrowd master

# Create a tag for your submission and push it
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd submission-v0.1

 

Note: If the contents of your repository (latest commit hash) do not change, then pushing a new tag will not trigger a new evaluation.

 

Submitting using HTTP

To use HTTP to clone repositories and submit on GitLab:

a) Create a personal access token

1.  Log in to GitLab.

2.  In the upper-right corner, click your avatar and select Settings.

3.  On the User Settings menu, select Access Tokens.

4.  Choose a name and optional expiry date for the token.

5.  Choose the desired scopes.

6.  Click the Create personal access token button.

7.  Save the personal access token somewhere safe; let's call it XXX for now.

NOTE: Once you leave or refresh the page, you won’t be able to access it again.

b) To clone a repository, use the following command:

git clone https://github.com/AIcrowd/food-recognition-challenge-starter-kit

cd food-recognition-challenge-starter-kit

 

c) Submit a solution:

# Move into your submission repository
cd (repo_name)

# Add the AIcrowd git remote endpoint
git remote add aicrowd https://oauth2:XXX@gitlab.aicrowd.com/(username)/(repo_name).git
git push aicrowd master

# Create a tag for your submission and push it
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd submission-v0.1

Note: If the contents of your repository (latest commit hash) do not change, then pushing a new tag will not trigger a new evaluation.

 

Git Large File Storage

For uploading models to GitLab, normal commits will not work due to the size of the model files. A solution to this is to use Git Large File Storage (Git LFS).

For a primer on how to use Git LFS, please refer here and here.
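A typical Git LFS workflow for tracking model weights might look like the sketch below; the *.pth pattern and the file name model.pth are only examples:

```shell
# One-time setup: install the Git LFS hooks
git lfs install
# Track large files by pattern; this writes a rule into .gitattributes
git lfs track "*.pth"
# Commit the tracking rule together with the model file
git add .gitattributes model.pth
git commit -m "Add model weights via Git LFS"
# Push as usual; LFS uploads the large objects separately
git push aicrowd master
```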

Feel free to ask us any other questions on the Discussion Forums.

πŸ“š Resources

πŸ–Š Evaluation Criteria

For a known ground-truth mask A and a proposed mask B, we first compute the IoU (Intersection over Union): IoU = |A ∩ B| / |A βˆͺ B|.

IoU measures the overall overlap between the true region and the proposed region. We consider a proposal a true detection when the regions overlap by at least half, i.e. when IoU > 0.5.

Then we can define the following parameters :

Precision (IoU > 0.5)

Recall (IoU > 0.5)

The final scoring parameters AP_{IoU > 0.5} and AR_{IoU > 0.5} are computed by averaging over all the precision and recall values for all known annotations in the ground truth.
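The criterion above can be sketched in a few lines of Python. Masks are represented here as sets of pixel coordinates for simplicity, and predictions are matched greedily; the official evaluator works on COCO-encoded masks, so this is only an illustration of the IoU > 0.5 rule:

```python
def iou(a, b):
    """IoU of two masks given as sets of (row, col) pixel coordinates."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def precision_recall(gt_masks, pred_masks, thresh=0.5):
    """Greedily match predictions to ground truth at the given IoU threshold."""
    matched, tp = set(), 0
    for pred in pred_masks:
        for i, gt in enumerate(gt_masks):
            if i not in matched and iou(pred, gt) > thresh:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(pred_masks) if pred_masks else 0.0
    recall = tp / len(gt_masks) if gt_masks else 0.0
    return precision, recall

# Toy example: one good detection (IoU = 0.75) and one spurious one.
gt = [{(0, 0), (0, 1), (1, 0), (1, 1)}, {(5, 5)}]
pred = [{(0, 0), (0, 1), (1, 0)}, {(9, 9)}]
print(iou(pred[0], gt[0]))         # 0.75
print(precision_recall(gt, pred))  # (0.5, 0.5)
```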

A further discussion about the evaluation metric can be found here.

🏁 Challenge Rounds

This is an ongoing, multi-round benchmark. At each round, the specific tasks and / or datasets will be updated, and each round will have its own prizes. You can participate in multiple rounds, or in single rounds.

πŸ† Prizes

Round 1

The winner of Round 1 will be invited to the Applied Machine Learning Days in Switzerland at EPFL in January 2020. A travel grant of up to $2500 will cover the costs.

Round 2

The winner of Round 2 will be invited to the Applied Machine Learning Days in Switzerland at EPFL in January 2021. A travel grant of up to $2500 will cover the costs.

Top contributors will also be invited to coauthor a paper on the advances made in all rounds.

(more details to be announced soon)

Round 3

The winner of Round 3 will get an all-expense paid trip to Applied Machine Learning Days 2021, Switzerland!

πŸ™‹ Frequently Asked Questions

Who can participate in this challenge?

Anyone. This challenge is open to everyone.

Do I need to take part in all the rounds?

No. Each round has separate prizes. You can take part in any one of the rounds or all of them.

I am a beginner. Where do I start?

There is a starter kit available here explaining how to make a submission. You can also use the notebooks in the Starter Notebooks section, which give details on using Mask R-CNN and MMDetection.

What is the maximum team size?

Each team can have a maximum of 5 members.

Do I have to pay to participate?

No. Participation is free and open to all.

What are the changes in the Round 3 dataset?

The Round 3 dataset includes the data from Round 1 and Round 2 and newly annotated data. New food categories are introduced as well.

What are the changes in the Round 2 dataset?

The Round 2 dataset includes data from Round 1 and newly annotated data. New food categories are introduced as well.

Is there a private test set?

Yes. The test set given in the Resources section is only for local evaluation. You are required to submit a repository that is run against a private test set. Please read the starter kit for more information.

How do I upload my model to gitlab?

To upload your models, please use Git Large File Storage.

Other questions?

Head over to the Discussions Forum and feel free to ask! 
