
📢 Updates

🚀 Round 2 Launched!

💻 Starter Kit | 💪 Quick Submission (detectron2) | 🚀 Food Recognition Baseline (detectron2)

โฎ๏ธ Previous Editions (2019, 2020, 2021)

👥 Find Teammates Here

Chat on Discord

🕵️ Overview

For almost all of human history, the main concern about food centered around one goal: getting enough of it. Only in the past few decades has food ceased to be a limited resource for many. Today, food is abundant for most - but not all - inhabitants of high- and middle-income countries, and its role has changed correspondingly. Whereas the primary goal of food used to be to provide sufficient energy, today the main public health challenges are the avoidance of excessive calories and the nutritional composition of diets.

Recognizing food from images is extremely useful for various use cases. In particular, it would allow people to track their food intake simply by taking a picture of what they consume. Food tracking can be of personal interest, and it is often of medical relevance as well. Medical studies have long been interested in the food intake of study participants but have had to rely on food frequency questionnaires that are known to be imprecise.

Image-based food recognition has made substantial progress in the past few years thanks to advances in deep learning, but food recognition remains a difficult problem for a variety of reasons. This is the fourth consecutive year we are hosting this benchmark on AIcrowd, building upon the success of the 2019, 2020, and 2021 Food Recognition Challenges.

PROBLEM STATEMENT

The goal of this benchmark is to train models that can look at images of food and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteer Swiss users provide images of their daily food intake in the context of a digital cohort called Food & You.

This growing dataset has been annotated - or automatic annotations have been verified - with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation. This is an evolving dataset, and we will continue to release more data as it grows over time.


💾 Datasets

Finding annotated food images is difficult. A few databases with annotations exist, but they tend to be limited in important ways.

To put it bluntly: most food images on the internet are a lie. Search for any dish, and you'll find beautiful stock photography of that particular dish. The same goes for social media: we share photos of dishes with our friends when the image is exceptionally beautiful. But algorithms need to work on real-world images. In addition, annotations are generally missing - ideally, food images would be annotated with proper segmentation, classification, and volume/weight estimates.

With this 2022 iteration of the Food Recognition Benchmark, we release the following versions of the dataset:

  • v2.0, containing a training set of 39,962 food images, with 76,491 annotations spread over 498 food classes.
  • v2.1, containing a training set of 54,392 food images, with 100,256 annotations spread over 323 food classes.

The datasets for the AIcrowd Food Recognition Benchmark are available at https://www.aicrowd.com/challenges/food-recognition-benchmark-2022/dataset_files

Round 2 of the competition focuses on v2.1 of the MyFoodRepo dataset and contains:

  • public_training_set_release_2.1.tar.gz: the Training Set of 54,392 food images (as RGB images), along with their corresponding 100,256 annotations from 323 food classes in MS-COCO format (see the loading sketch below this list)
  • public_validation_set_2.1.tar.gz: the suggested Validation Set of 946 food images (as RGB images), along with their corresponding 1,708 annotations from 323 food classes in MS-COCO format
  • public_test_release_2.1.tar.gz: the Public Test Set for Round 2 of the Food Recognition Benchmark 2022
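
Since the annotations follow the MS-COCO format, they can be inspected with pycocotools. Below is a minimal loading sketch; the annotation path is an assumption based on the data/ layout suggested further down, so adjust it to wherever you untar the archives:

```python
from pycocotools.coco import COCO

# Illustrative path: adjust to wherever you untarred the training archive.
coco = COCO("data/train/annotations.json")
print(len(coco.getImgIds()), "images,",
      len(coco.getAnnIds()), "annotations,",
      len(coco.getCatIds()), "food classes")

# Inspect the annotations attached to the first image.
img_id = coco.getImgIds()[0]
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
    cat = coco.loadCats(ann["category_id"])[0]
    print(cat["name"], ann["bbox"])  # class name and [x, y, w, h] box
```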

To get started, we would advise you to download all the files and untar them inside the data/ folder of this repository so that you have a directory structure like this:
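
A plausible layout after extracting the three archives might be the following (the starter kit defines the authoritative structure; the directory names here are assumptions):

```
data/
├── train/
│   ├── images/
│   └── annotations.json
├── val/
│   ├── images/
│   └── annotations.json
└── test/
    └── images/
```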

💪 An open benchmark

For all the reasons mentioned above, food recognition is a difficult but important problem. Algorithms that could tackle it would be extremely useful for everyone. That is why we are establishing this open benchmark for food recognition. The goal is simple: provide high-quality data, and get developers around the world excited about addressing this problem in an open way. Because of the complexity of the problem, a one-shot approach won't work. This is a benchmark for the long run. If you are interested in providing more annotated data, please contact us.

📅 Timeline

This is an ongoing, multi-round benchmark. The specific tasks and/or datasets will be updated at each round, and each round will have its own prizes. You can participate in a single round or in multiple rounds.

  • Round 1: December 20th, 2021 - February 28th, 2022 (extended from February 20th, 2022)
  • Round 2: March 3rd, 2022 - May 3rd, 2022 (Ongoing)

👥 Participation Routes

There are two routes for participating in the challenge: you can make a quick submission with just your prediction files, or you can go the classic code-based route.

  • Quick Participation 🏃
    • You upload prediction JSON files (see the sketch after this list)
    • Scores are computed on 40% of the publicly released test set
    • You are not eligible for the final leaderboard (or prizes)
  • Active Participation 👨‍💻
    • You submit code (and AIcrowd evaluators run it to generate predictions)
    • Scores are computed on 100% of the publicly released test set plus 40% of the (unreleased) extended test set
    • You are eligible for the final leaderboard and prizes
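
The authoritative prediction-file format is specified in the starter kit; the sketch below only illustrates the standard MS-COCO results format that instance-segmentation predictions commonly follow. The image id, category id, and mask here are hypothetical:

```python
import json

import numpy as np
from pycocotools import mask as mask_utils

def to_coco_result(image_id, category_id, binary_mask, score):
    """Pack one predicted instance into a COCO-style result dict."""
    # COCO stores masks as run-length encodings of Fortran-ordered uint8 arrays.
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("utf-8")  # make the RLE JSON-serializable
    return {
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": rle,
        "score": float(score),
    }

# Hypothetical prediction: one mask in image 42, class id 1000, confidence 0.9.
dummy_mask = np.zeros((480, 640), dtype=np.uint8)
dummy_mask[100:200, 150:300] = 1
results = [to_coco_result(42, 1000, dummy_mask, 0.9)]

with open("predictions.json", "w") as f:
    json.dump(results, f)
```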

The flow for Active Participation: you submit a code repository, the AIcrowd evaluators run it against the test set, and the resulting predictions are scored for the leaderboard.

๐Ÿ† Prizes

๐Ÿ first-to-cross prize

The first participant or team to reach an AP of 0.44 on the leaderboard will receive a DJI Mavic Mini 2 as a prize! 🎉
(This prize is awarded to the first such participant/team on the active participation track, and is valid across both rounds of the challenge.)

💪 Round 2 prizes (Active participation track)

The prizes will be awarded based on the final leaderboard for Round 2.

Note: Round 2 prizes are separate from the first-to-cross prize, and there is no minimum score threshold for Round 2 prizes.

๐Ÿ“ Paper Authorships

Top participants from Round 1 and Round 2 of the Benchmark will be invited to be co-authors of the dataset release paper and the challenge solution paper. If you have any questions, please let us know on the challenge forum.

🚀 Submission

You can find more details on making a submission to the benchmark in the official starter kit here.
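
The linked baselines build on detectron2. As a flavor of what inference with a detectron2 instance-segmentation model looks like, here is a minimal sketch using a COCO-pretrained Mask R-CNN from the detectron2 model zoo. This is not the challenge's exact baseline (the model would still need fine-tuning on the food classes), and the image path is illustrative:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Start from a COCO-pretrained Mask R-CNN config in the detectron2 model zoo.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # keep detections above 50% confidence

predictor = DefaultPredictor(cfg)
image = cv2.imread("data/test/images/000001.jpg")  # illustrative path
outputs = predictor(image)

instances = outputs["instances"].to("cpu")
print(instances.pred_classes)      # predicted class ids
print(instances.scores)            # per-instance confidences
print(instances.pred_masks.shape)  # one binary mask per detected instance
```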

🖊 Evaluation Criteria

The benchmark uses the official detection evaluation metrics used by COCO. The primary evaluation metric is AP @ IoU=0.50:0.05:0.95. The secondary evaluation metric is AR @ IoU=0.50:0.05:0.95. A further discussion about the evaluation metric can be found here.
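
These metrics can be reproduced locally with pycocotools, the reference COCO evaluation code. A minimal sketch, assuming ground-truth annotations and a predictions file in the formats shown above (file names are illustrative):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("data/val/annotations.json")    # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # your predictions

# "segm" scores the segmentation masks; use "bbox" to score boxes instead.
evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP and AR averaged over IoU=0.50:0.05:0.95
```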

✨ Inspiration

The AIcrowd team talked to previous winners of the benchmark about their experiences participating, with brief notes on their approaches and their tips for fellow participants. There are many interesting snippets in their stories that you may want to check out!

🙋 Frequently Asked Questions

  • Who can participate in this benchmark?
    • Anyone. This benchmark is open to everyone.
  • Do I need to take part in all the rounds?
    • No. Each round has separate prizes. You can participate in any one of the rounds or all of them.
  • I am a beginner. Where do I start?
    • There is a starter kit available here explaining how to make a submission. You can also use the notebooks in the Starter Notebooks section, which give details on using Mask R-CNN and MMDetection.
  • What is the maximum team size?
    • Each team can have a maximum of 5 members.
  • Do I have to pay to participate?
    • No. Participation is free and open to all.
  • Is there a private test set?
    • Yes. The test set given in the Resources section is only for local evaluation. You are required to submit a repository that is run against a private test set. Please read the starter kit for more information.
  • How do I upload my model to GitLab?
    • The starter kit walks you through setting up your submission repository; see the Submission section above.
  • How are the timeouts and resources available in Active (GitLab) Submissions?
    • AWS g4dn.xlarge instances are used for inference, with a timeout of 1.5 seconds/image.
  • Other questions?
    • Please ask on the challenge forum.

📱 Contact

For any questions, reach out on the challenge forum or the Discord chat.

Notebooks

  • MMDetection training and submissions (Quick, Active), by jerome_patel
  • Detectron2 training and submissions (Quick, Active), by jerome_patel