
Food Recognition Challenge

A benchmark for image-based food recognition


The Starter Kit for this challenge is available at: https://github.com/AIcrowd/food-recognition-challenge-starter-kit

Overview

Recognizing food from images is extremely useful for a variety of use cases. In particular, it would allow people to track their food intake by simply taking a picture of what they consume. Food tracking can be of personal interest, and it is often of medical relevance as well. Medical studies have long been interested in the food intake of study participants, but have had to rely on food frequency questionnaires that are known to be imprecise.

Image-based food recognition has made substantial progress in the past few years thanks to advances in deep learning, but it remains a difficult problem for a variety of reasons.

Problem Statement

The goal of this challenge is to train models that can look at images of food and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteer users in Switzerland provide images of their daily food intake in the context of a digital cohort called Food & You. This growing dataset has been annotated (or automatic annotations have been verified) with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation.

This is an evolving dataset, where we will release more data as the dataset grows over time.


Datasets

Finding annotated food images is difficult. There are some databases with some annotations, but they tend to be limited in important ways.

To put it bluntly: most food images on the internet are a lie. Search for any dish, and you’ll find beautiful stock photography of that particular dish. Same on social media: we share photos of dishes with our friends when the image is exceptionally beautiful. But algorithms need to work on real world images. In addition, annotations are generally missing - ideally, food images would be annotated with proper segmentation, classification, and volume / weight estimates.

The dataset for the AIcrowd Food Recognition Challenge is available at https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files

This dataset contains:

  • train.tar.gz: The Training Set of 5545 food images (as RGB images), along with their corresponding annotations in MS-COCO format

  • val.tar.gz: The suggested Validation Set of 291 food images (as RGB images), along with their corresponding annotations in MS-COCO format

  • test_images.tar.gz: The debug Test Set for Round 1, which contains the same images as the Validation Set

To get started, we would advise you to download all the files and untar them inside the data/ folder of this repository, so that you have a directory structure like this:

|-- data/
|   |-- test_images/ (has all images for prediction) (**NOTE**: they are the same as the validation set images)
|   |-- train/
|   |   |-- images/ (has all the images for training)
|   |   |__ annotation.json : Annotations of the training data in MS-COCO format
|   |   |__ annotation-small.json : Smaller subset of the training annotations
|   |-- val/
|   |   |-- images/ (has all the images for validation)
|   |   |__ annotation.json : Annotations of the validation data in MS-COCO format
|   |   |__ annotation-small.json : Smaller subset of the validation annotations
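
Once the archives are extracted into this layout, the MS-COCO style annotations can be inspected with the pycocotools library. The snippet below is a minimal sketch, not part of the official starter kit: it assumes pycocotools is installed, that the paths match the structure above, and that the annotation fields follow the standard COCO schema.

```python
from pycocotools.coco import COCO

# Load the training annotations (MS-COCO format), assuming the
# directory layout shown above relative to the current directory.
coco = COCO("data/train/annotation.json")

# List the food categories defined in the ontology.
categories = coco.loadCats(coco.getCatIds())
print(len(categories), "categories, e.g.:", [c["name"] for c in categories[:5]])

# Pick one image and look at its annotations
# (segmentation polygons plus category ids).
image_ids = coco.getImgIds()
image_info = coco.loadImgs(image_ids[0])[0]
annotation_ids = coco.getAnnIds(imgIds=image_info["id"])
annotations = coco.loadAnns(annotation_ids)

print(image_info["file_name"], "->", len(annotations), "annotated food items")
for ann in annotations:
    print("  category:", coco.loadCats(ann["category_id"])[0]["name"])
```

The same code works for the validation split by pointing it at data/val/annotation.json, or at the annotation-small.json files for quicker experimentation.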

An open benchmark

For all the reasons mentioned above, food recognition is a difficult but important problem. Algorithms that could tackle this problem would be extremely useful for everyone. That is why we are establishing this open benchmark for food recognition. The goal is simple: provide high-quality data, and get developers around the world excited about addressing this problem in an open way.

Because of the complexity of the problem, a one-shot approach won't work. This is a benchmark for the long run.

If you are interested in providing more annotated data, please contact us.


Evaluation Criteria

For a known ground truth mask $A$ and a proposed mask $B$, we first compute $IoU$ (Intersection Over Union):

$$IoU(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

$IoU$ measures the overall overlap between the true region and the proposed region. We then consider it a true detection when there is at least half an overlap, i.e. when $IoU > 0.5$.

Then we can define the following parameters:

  • Precision ($IoU > 0.5$): $\frac{TP}{TP + FP}$

  • Recall ($IoU > 0.5$): $\frac{TP}{TP + FN}$

The final scoring parameters, the average precision $AP_{IoU > 0.5}$ and the average recall $AR_{IoU > 0.5}$, are computed by averaging over all the precision and recall values for all known annotations in the ground truth.
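
To make these quantities concrete, the sketch below computes IoU between two binary masks and derives precision and recall at the 0.5 threshold using a simple greedy matching of predictions to ground truth. It is an illustrative simplification (it ignores category labels, which the evaluation also takes into account), not the official evaluation code; the function names are our own.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union between two boolean masks of the same shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

def precision_recall(gt_masks, pred_masks, threshold=0.5):
    """Precision and recall at a fixed IoU threshold.

    A prediction counts as a true positive if it overlaps an as-yet
    unmatched ground-truth mask with IoU > threshold; unmatched predictions
    are false positives, unmatched ground-truth masks are false negatives.
    """
    matched_gt = set()
    tp = 0
    for pred in pred_masks:
        best_iou, best_idx = 0.0, None
        for idx, gt in enumerate(gt_masks):
            if idx in matched_gt:
                continue
            score = iou(gt, pred)
            if score > best_iou:
                best_iou, best_idx = score, idx
        if best_iou > threshold:
            matched_gt.add(best_idx)
            tp += 1
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - len(matched_gt)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```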

Challenge Rounds

This is an ongoing, multi-round benchmark. At each round, the specific tasks and/or datasets will be updated, and each round will have its own prizes. You can participate in multiple rounds or in a single round.

Prizes

The winner of the first round will be invited to the Applied Machine Learning Days in Switzerland at EPFL in January 2020. A travel grant of up to $2500 will cover the costs.

Top contributors will also be invited to coauthor a paper on the advances made in this round.

Contact