herve.goeau (Hervé Goëau)
Organization: CIRAD
Location: Montpellier cedex 5, FR


Challenges Entered

  • A benchmark for image-based food recognition — no submissions made.
  • PlantVillage ("built on the premise that all knowledge that helps people grow food should be openly accessible to anyone on the planet") — no submissions made.
  • Unnamed challenge — latest submissions: graded 136804, graded 136802, graded 136800.
  • Image-based plant identification at global scale — latest submission: failed 211802.

herve.goeau has not joined any teams yet.

LifeCLEF 2022-23 Plant

Results and working notes - Official Round PlantCLEF 2023

10 months ago

Dear participants,

We have reported the results for the second official round of PlantCLEF2023 in the form of a table and a graph here, as well as the instructions for writing the working notes:
https://www.imageclef.org/PlantCLEF2023

Congratulations to MingleXu, who remains in the lead on this new edition! And a big thank you to challengers Neuon_ai and BioMachina.

We remind you that all participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper as part of CLEF, the Conference and Labs of the Evaluation Forum: https://clef2023.clef-initiative.eu/

Here are the instructions again:

  • Submission of reports is done through EasyChair – please make absolutely sure that the author (names and order), title, and affiliation information you provide in EasyChair match the submitted PDF exactly
  • Strict deadline for Working Notes Papers: 7 June 2023
  • Strict deadline for CEUR-WS Camera Ready Working Notes Papers: 7 July 2023

Thank you very much for your contributions, and thank you for your tenacity in spite of the very large volume of classes and data, which leads to long model training times.

Same training and testing dataset with last year?

About 1 year ago

The submission system is back up and running; you can try again if you want. I have put up a new fake_run with random predictions. Thanks for warning me about that too.

Same training and testing dataset with last year?

About 1 year ago

Hi Mingle, we have encountered some technical difficulties, so we are closing the submission system until it is fixed.

Same training and testing dataset with last year?

About 1 year ago

Hi Mingle, sorry for the late reply; we are entering the active round of run submissions, so I will be more present on the forums now. Yes, this year we will run the same challenge on the same training and test data, so there is no need to download them again if you retrieved them last year. Thank you for considering participating in the challenge again!

Results and working notes

Almost 2 years ago

Hello, I would say that normally these conclusions should be drawn by yourself as a preliminary study before your runs. Typically, participants take a small part of the observations from the training set to create a validation set. They are then free to report in the working notes any conclusions they find relevant: e.g. the impact of different types of data augmentation, the classification capacity of different architectures, techniques for reducing the final classification layer, or, as you ask, different techniques for combining images from the same observation. I understand that in the context of the challenge, re-training such a large model on a subset of the training set is expensive. If you are short of time, you can, for example, run your preliminary studies on a subset of the classes, limiting the task to 1,000 or 10,000 classes; it will be faster, and you will still be able to present relevant analyses of the image-combination aspects, which should, a priori, remain valid and generalize to the case with more classes, it seems to me.
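The observation-level split suggested above can be sketched as follows. This is a minimal illustration, not part of any official challenge toolkit; the function name and the record format are hypothetical.

```python
# Hypothetical sketch: hold out whole observations (not single images)
# when building a validation set, so images of the same plant never
# appear on both sides of the split.
import random
from collections import defaultdict

def split_by_observation(records, val_fraction=0.1, seed=0):
    """records: list of (observation_id, image_path, species) tuples.
    Returns (train_records, val_records) split at the observation level."""
    by_obs = defaultdict(list)
    for obs_id, image, species in records:
        by_obs[obs_id].append((obs_id, image, species))
    obs_ids = sorted(by_obs)
    random.Random(seed).shuffle(obs_ids)
    n_val = max(1, int(len(obs_ids) * val_fraction))
    val_ids = set(obs_ids[:n_val])
    train = [r for o in obs_ids if o not in val_ids for r in by_obs[o]]
    val = [r for o in val_ids for r in by_obs[o]]
    return train, val
```

The same idea applies when restricting the task to a subset of classes: filter `records` by species first, then split by observation.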

Results and working notes

Almost 2 years ago

Hi Mingle Xu, you did a great job in such a short time; it can't have been easy to get your models to converge so quickly, and I'm very curious to see what approach you used in your future working notes. For your additional run, we normally have to limit the reported results to the runs submitted during the challenge, and we prefer to keep the leaderboard tab as it is, with you in first position (congratulations, by the way!). I imagine, however, that it is very frustrating to have runs ready and not know their performance. I suggest you send me your new runs by email with a downloadable link; I can then compute and communicate your scores outside the AIcrowd platform. You could then mention this last result in your working notes, at the end of the document in a separate subsection, saying explicitly that it is a post-challenge, out-of-competition result. Would this be convenient for you?

Does 3rd Place need to submit working notes?

Almost 2 years ago

Hello,
all participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper. As organizers of the challenge, we also have to write an overview presenting the challenge, the dataset, and the different methods that were explored during this event. The challenge was very difficult this year, and yet some of the methods, like yours, allowed us to obtain promising results. For the moment, however, we have very few elements with which to describe your approach. We thank you in advance for writing these working notes to help us highlight the techniques that worked and those that did not. The working notes can be relatively short, but they should contain enough information to allow the reproduction of your results. Instructions for writing can be found at the bottom of the page here: PlantCLEF2022 | ImageCLEF / LifeCLEF - Multimedia Retrieval in CLEF

Results and working notes

Almost 2 years ago

Dear participants,
We have reported the results in the form of a table and a graph here, as well as the instructions for writing the working notes:
https://www.imageclef.org/PlantCLEF2022
We remind you that all participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper as part of CLEF, the Conference and Labs of the Evaluation Forum: https://clef2022.clef-initiative.eu/
Thank you very much for your contributions, and thank you for your tenacity in spite of the very large volume of classes and data, which leads to long model training times.

Deadline extension for run submission (until Sunday May 15 23:55 UTC)

Almost 2 years ago

I think I won’t have time to fix this with the AIcrowd developers, so I have just increased the submission limit to 15 runs; can you try again?

Deadline extension for run submission (until Sunday May 15 23:55 UTC)

Almost 2 years ago

Badly formed runs are not counted. You are allowed to submit 10 “Graded successfully!” runs; you have submitted 6, so you can submit 4 more. Does that seem enough to you?

Deadline extension for run submission (until Sunday May 15 23:55 UTC)

Almost 2 years ago

Yes, I think we could open the system for a second round after the CLEF event; we will discuss it with my co-organizers to decide on the right time to do it.

Deadline extension for run submission (until Sunday May 15 23:55 UTC)

Almost 2 years ago

We will leave the submission system open for an additional 2 days over the weekend for the very last submissions (firm deadline: Sunday May 15, 23:55 UTC).

Deadline extension for run submission (until Sunday May 15 23:55 UTC)

Almost 2 years ago

Dear participants,

It’s rush hour now; the GPUs must be warming up… Thanks to the participants who submitted the first runs: the results are encouraging despite the great difficulty of the challenge! Thanks again for all your efforts and your investment in this problem, which is of great importance for a better knowledge of plant biodiversity. However, given the size of the data and the difficulty of accessing high-end GPU servers, we have received requests to extend the deadline. We therefore propose to keep the submission system open for 1 additional week, until Sunday May 15, 23:55 UTC. In return, this will reduce the time to write the working notes to less than 2 weeks (deadline 27 May 2022), so please start writing your notes before the final results are released next week. Good luck in the final stretch!

The organizers

What does plant observation mean?

Almost 2 years ago

Yes indeed, this is a notable difference from classical image classification problems, where we usually compute the performance of a model by averaging its prediction scores over test images. Here the test samples are observations, not images. If an observation is associated with several images, the predictions must be combined. This also reflects the practice of botanists: to be sure of the species identification of a single plant, a botanist must often observe several organs (a flower, a fruit, a leaf), or even the same organ from different angles (a flower in front view, the same flower from below, from the side, etc.). A single image can possibly help to identify the genus of a plant, but rarely the species.
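A common baseline for this combination step is simply to average the per-image score vectors of each observation. The sketch below is only an illustration under that assumption; the function name is hypothetical, and the challenge leaves the actual combination strategy up to participants.

```python
import numpy as np

def combine_observation_scores(image_scores):
    """Average per-species prediction scores over all images of one
    plant observation (image_scores: list of 1-D score arrays, one
    array per image, all of the same length)."""
    stacked = np.stack(image_scores)  # shape (n_images, n_species)
    return stacked.mean(axis=0)       # one combined score vector

# Usage sketch: rank species for an observation photographed three times.
# obs_scores = combine_observation_scores([flower_scores, leaf_scores, fruit_scores])
# ranking = np.argsort(obs_scores)[::-1]
```

Other choices (max pooling, rank fusion, learned attention over images) are equally valid and were explored by participants; averaging is just the simplest starting point.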

LifeCLEF 2022 Plant Evaluation

Almost 2 years ago

Thank you for your comment and request for clarification:

  1. Yes, the Macro-Average version of the MRR is the average of the MRRs for each species.
  2. No, not all species are represented in the test.

In other words, the MA-MRR is calculated only on the species actually present in the test set. Also, there are several observations per species, so there are fewer than 26,868 species in the test set.
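Concretely, the metric described above can be sketched as follows: compute the mean reciprocal rank per species first, then average over the species present in the test set. This is a hypothetical illustration, not the official evaluation script.

```python
from collections import defaultdict

def macro_average_mrr(results):
    """Macro-Averaged MRR. results: list of (true_species, rank) pairs,
    one per test observation, where rank is the 1-based position of the
    true species in the prediction list (None if it was not predicted)."""
    per_species = defaultdict(list)
    for species, rank in results:
        per_species[species].append(1.0 / rank if rank else 0.0)
    # Mean reciprocal rank per species, then macro-average over species.
    species_mrrs = [sum(rrs) / len(rrs) for rrs in per_species.values()]
    return sum(species_mrrs) / len(species_mrrs)
```

Because the average is taken per species before averaging over species, rare species in the test set weigh as much as frequent ones.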

We would have liked to be able to evaluate all species in the training set, but it was difficult to collect so much sufficiently expert, botanist-verified data at such a scale.
We could have retrieved complementary data published on GBIF for the test set, but because we allow participants to use external complementary training data, there would have been a risk of test images being included in external training data, which would have biased the results of the challenge.

Finally, we can add that having fewer species in the test set corresponds to a realistic scenario faced by automatic identification systems such as Pl@ntNet or iNaturalist: these systems must be able to recognize as many species as possible without knowing in advance which species will be requested most frequently and which will never be requested.

LifeCLEF 2021 Plant

Deadline extension?

Almost 3 years ago

Thank you, Holmes. An important piece of information to take into account: working notes must be submitted by May 28 (http://clef2021.clef-initiative.eu/index.php?page=Pages/schedule.html)

Deadline extension?

Almost 3 years ago

Dear all,

First of all, thank you for your interest in the “LifeCLEF 2021 Plant” challenge. Many thanks to the first 4 participants who took up the challenge and have already submitted 25 runs. Like last year’s edition, the challenge is really difficult, but it is of great interest to both botanists and CV/ML researchers around the topic of domain adaptation.

It’s the home stretch, and you probably still have models in training. We propose to leave the challenge open a few more days, until next Wednesday, May 12, 23:59. This would allow you to train your models a bit longer, especially if you train them with external data such as ExpertCLEF19 and the GBIF data mentioned in the “External training data” section on the main page.

However, this will delay the publication of the final results and shorten the time to write the working notes for the CLEF conference. Would you still agree to this last extension until Wednesday, May 12?

Thanks for your feedback and suggestions.

Traits label in test set

Almost 3 years ago

Hi Holmes, many thanks for your question. The traits are defined at the species level for the training set only. Only images will be provided in the test set, without trait labels.

LifeCLEF 2020 Plant

Labels for test datasets

Almost 4 years ago

Hi,
the PlantCLEF event is not yet over; we are entering the phase of publishing and presenting the results, up to the CLEF conference (https://clef2020.clef-initiative.eu/index.php), which will take place online September 22-25. Although the task was very difficult, the results showed that it was still possible to achieve honourable performance, even on difficult species with few training field photos. This opens up interesting research perspectives. We hope to be able to organize this challenge again after the CLEF conference, during the next edition of LifeCLEF if accepted, or, if not, as a new round on AIcrowd. It’s still a bit early to know, but for now we prefer not to publish the ground truth, so as not to kill the challenge and to leave ourselves several possibilities. Thank you for your understanding.

Deadline extension? Confirmed June 14th

Almost 4 years ago

Hi Aymane, thank you for your intention to participate in this challenge despite the difficulty of the task. We extended the deadline last Friday until June 14th, but this time it will unfortunately be a firm deadline: the LifeCLEF lab is part of the Conference and Labs of the Evaluation Forum CLEF 2020, so we must do our best to fit the schedule of this conference: https://clef2020.clef-initiative.eu/index.php?page=Pages/schedule.html. We have to set aside a minimum amount of time to publish the results, check with the participants that everything is OK, and leave time for writing the working notes, the overview, etc. I hope we’ll be lucky enough to get some submissions from you. Good luck!
