Rounds 1-4: Completed | #supervised_learning #instance_segmentation

πŸ“’ Updates

πŸ† The challenge has come to an end. Find the winners' solutions here.

πŸš€ Food recognition baseline

πŸ’» Starter kit

✨ The challenge has relaunched with an updated dataset. Go to Food Recognition Benchmark 2022!

πŸ•΅οΈ Overview

Recognizing food from images is extremely useful for a variety of use cases. In particular, it would allow people to track their food intake simply by taking a picture of what they consume. Food tracking can be of personal interest, and it is often of medical relevance as well. Medical studies have long been interested in the food intake of study participants, but have had to rely on food frequency questionnaires that are known to be imprecise.

Image-based food recognition has made substantial progress in the past few years thanks to advances in deep learning, but it remains a difficult problem for a variety of reasons.

PROBLEM STATEMENT

The goal of this challenge is to train models that can look at images of food and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteers in Switzerland provide images of their daily food intake as part of a digital cohort called Food & You. This growing dataset has been annotated - or its automatic annotations have been verified - with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation.

This is an evolving dataset, where we will release more data as the dataset grows over time.


πŸ’Ύ Datasets

Finding annotated food images is difficult. There are a few databases with annotations, but they tend to be limited in important ways.

To put it bluntly: most food images on the internet are a lie. Search for any dish, and you’ll find beautiful stock photography of that particular dish. Same on social media: we share photos of dishes with our friends when the image is exceptionally beautiful. But algorithms need to work on real-world images. In addition, annotations are generally missing - ideally, food images would be annotated with proper segmentation, classification, and volume/weight estimates.

The dataset for the AIcrowd Food Recognition Challenge is available at https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files

The dataset contains:

  • train-v0.4.tar.gz: the training set of 24,120 food images (RGB), along with their corresponding 39,328 annotations in MS-COCO format
  • val-v0.4.tar.gz: the suggested validation set of 1,269 food images (RGB), along with their corresponding 2,053 annotations in MS-COCO format
  • test_images-v0.4.tar.gz: the debug test set for Round 3, which contains the same images as the validation set

To get started, we advise you to download all the files and untar them inside the data/ folder of this repository.
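Once extracted, a quick way to sanity-check the data is to load the annotations with pycocotools. The following is a minimal sketch, assuming the training archive was extracted to data/train/ with an annotation file named annotations.json; adjust the paths to match your local layout.

from pycocotools.coco import COCO

# Assumed path - adjust to wherever you extracted train-v0.4.tar.gz
coco = COCO("data/train/annotations.json")

print(len(coco.getImgIds()), "images,", len(coco.getAnnIds()), "annotations")

# Look at the first image and its food-item annotations
img = coco.loadImgs(coco.getImgIds()[0])[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img["id"]))
labels = [coco.loadCats(a["category_id"])[0]["name"] for a in anns]
print(img["file_name"], labels)

# Each annotation can be rendered as a binary segmentation mask
mask = coco.annToMask(anns[0])  # numpy array of shape (height, width)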

πŸ’ͺ An open benchmark

For all the reasons mentioned above, food recognition is a difficult but important problem. Algorithms that could tackle it would be extremely useful for everyone. That is why we are establishing this open benchmark for food recognition. The goal is simple: provide high-quality data, and get developers around the world excited about addressing this problem in an open way.

Because of the complexity of the problem, a one-shot approach won’t work. This is a benchmark for the long run.

If you are interested in providing more annotated data, please contact us.

πŸ† Prizes

We are very excited about this challenge and we have a bunch of prizes for the top winners and for the community!

For Round 4, the first participant who achieves a score greater than 0.70 will get a cash prize of 10,000 CHF! πŸ’°

For Round 4, the first participant who achieves a score greater than 0.62 will get a cash prize of 5,000 CHF! πŸ’°

ON TOP OF THAT, WE HAVE MORE PRIZES FOR THE TOP 4 WINNERS OF ROUND 4

These winners are eligible for the prizes only if they achieve a minimum average precision of 0.520 on the leaderboard.

πŸš€ Submission

To submit to the challenge, you'll need to set up an appropriate repository structure, create a private git repository at https://gitlab.aicrowd.com with the contents of your submission, and push a git tag corresponding to the version of the repository you'd like to submit.

AICROWD.JSON

Each repository should have an aicrowd.json file with the following fields:

{ "challengeid" : "aicrowd-food-recognition-challenge", "graderid": "aicrowd-food-recognition-challenge", "authors" : ["aicrowd-user"], "description" : "Food Recognition Challenge Submission", "gpu": false }

This file is used to identify your submission as part of the Food Recognition Challenge. You must use the challenge_id and grader_id specified above in your submission. The gpu key in aicrowd.json lets you specify whether your submission requires a GPU; if it does, an NVIDIA K80 will be made available when your submission is evaluated.

Code Entry point

The evaluator will use /home/aicrowd/run.sh as the entry point. Please remember to have a run.sh at the root which can instantiate any necessary environment variables and execute your code.

SUBMITTING using SSH

To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com.

You will have to add your SSH key to your GitLab account by following the instructions here. If you do not have an SSH key, you will first need to generate one.

Then you can create a submission by adding the AIcrowd git remote to your repository and pushing a tag:

git clone https://github.com/AIcrowd/food-recognition-challenge-starter-kit
cd food-recognition-challenge-starter-kit

# Add the AIcrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:(username)/food-recognition-challenge-starter-kit.git
git push aicrowd master

# Create a tag for your submission and push it
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

Note: If the contents of your repository (i.e. the latest commit hash) have not changed, pushing a new tag will not trigger a new evaluation.

SUBMITTING using HTTP

In order to use HTTP to clone repositories and submit on GitLab:

a) Create a personal access token

  1. Log in to GitLab.
  2. In the upper-right corner, click your avatar and select Settings.
  3. On the User Settings menu, select Access Tokens.
  4. Choose a name and optional expiry date for the token.
  5. Choose the desired scopes.
  6. Click the Create personal access token button.
  7. Save the personal access token somewhere safe; let's call it XXX for now.

NOTE: Once you leave or refresh the page, you won’t be able to access it again.

b) Clone the starter kit repository:

git clone https://github.com/AIcrowd/food-recognition-challenge-starter-kit
cd food-recognition-challenge-starter-kit

c) Submit a solution:

# cd into your submission repository on GitLab
cd (repo_name)

# Add the AIcrowd git remote endpoint
git remote add aicrowd https://oauth2:XXX@gitlab.aicrowd.com/(username)/(repo_name).git
git push aicrowd master

# Create a tag for your submission and push it
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

Note: If the contents of your repository (i.e. the latest commit hash) have not changed, pushing a new tag will not trigger a new evaluation.

GIT LARGE FILE STORAGE

Normal git commits will not work for uploading models to GitLab because of their size. The solution is to use Git Large File Storage (LFS).

For a primer on how to use Git LFS please refer here and here.

Feel free to ask us any other questions on the Discussions Forum.

πŸ“š Resources

πŸ–Š Evaluation Criteria

For a known ground-truth mask A and a proposed mask B, we first compute the IoU (Intersection over Union):

IoU(A, B) = |A ∩ B| / |A βˆͺ B|

IoU measures the overall overlap between the true region and the proposed region. We consider a proposal a true detection when it overlaps the ground truth by more than half, i.e. when IoU > 0.5.

We can then define the following parameters:

  • Precision (IoU > 0.5): the fraction of proposed masks that are true detections
  • Recall (IoU > 0.5): the fraction of ground-truth masks that are successfully detected

The final scoring parameters, AP_{IoU > 0.5} and AR_{IoU > 0.5}, are computed by averaging the precision and recall values over all known annotations in the ground truth.
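To make the IoU check concrete, here is a minimal sketch, assuming the two masks are given as boolean numpy arrays of the same shape:

import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two boolean masks of the same shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union > 0 else 0.0

# A proposal counts as a true detection when IoU > 0.5
truth = np.zeros((4, 4), dtype=bool); truth[:2, :] = True     # ground truth covers 8 pixels
proposal = np.zeros((4, 4), dtype=bool); proposal[:3, :] = True  # proposal covers 12 pixels
print(iou(truth, proposal))  # 8 / 12 = 0.67 -> true detection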

A further discussion about the evaluation metric can be found here.
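For evaluating your model locally against the validation set, pycocotools ships a COCOeval helper that reports AP and AR directly. Below is a minimal sketch; the annotation path and the results file name predictions.json are assumptions, and your predictions must be stored in COCO results format.

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations from the validation set; adjust the path as needed
coco_gt = COCO("data/val/annotations.json")

# Your model's segmentation results in COCO results format (assumed file name)
coco_dt = coco_gt.loadRes("predictions.json")

evaluation = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluation.evaluate()
evaluation.accumulate()
evaluation.summarize()  # prints AP/AR, including AP at IoU=0.50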

🏁 Challenge Rounds

This is an ongoing, multi-round benchmark. In each round, the specific tasks and/or datasets are updated, and each round has its own prizes. You can participate in a single round or in multiple rounds.

πŸ™‹ Frequently Asked Questions

Who can participate in this challenge?

Anyone. This challenge is open to everyone.

Do I need to take part in all the rounds?

No. Each round has separate prizes. You can take part in any one of the rounds or all of them.

I am a beginner. Where do I start?

There is a starter kit available here explaining how to make a submission. You can also use notebooks in the Starter Notebooks section which give details on using MaskRCNN and MMDetection.

What is the maximum team size?

Each team can have a maximum of 5 members.

Do I have to pay to participate?

No. Participation is free and open to all.

What are the changes in the Round 4 dataset?

The Round 4 dataset includes the data from Rounds 1, 2, and 3, plus newly annotated data. New food categories are introduced as well.

Is there a private test set?

Yes. The test set given in the Resources section is only for local evaluation. You are required to submit a repository that is run against a private test set. Please read the starter kit for more information.

How do I upload my model to GitLab?

To upload your models, please use Git Large File Storage.

Other questions?

Head over to the Discussions Forum and feel free to ask!

πŸ“± Contact

  • Sharada Mohanty