
ImageCLEF 2020 Lifelog - LMRT


CLEF 2020 Note: ImageCLEF Lifelog 2020 is divided into 2 subtasks (challenges). This challenge is about Lifelog Moment Retrieval (LMRT). For information on the Sport Performance Lifelog (SPLL) challenge, click here. Both challenges share the same dataset, so registering for one of them automatically gives you access to the other.

Note: Do not forget to read the Rules section on this page. Pressing the red Participate button leads you to a page where you have to agree with those rules. You will not be able to submit any results before agreeing with the rules.

Note: Before trying to submit results, read the Submission instructions section on this page.

Lifelog Schedule

The schedule was last updated on 30.04.2020.

  • 10.02.2020: registration opens
  • 17.02.2020: development data released
  • 04.03.2020: visual concepts data updated
  • 14.04.2020: test data release starts
  • 11.05.2020: test data release
  • 05.06.2020: deadline for submitting the participants' runs
  • 08.06.2020: extended deadline for submitting the participants' runs
  • 12.06.2020: release of the processed results by the task organizers
  • 10.07.2020: deadline for submission of working notes papers by the participants
  • 07.08.2020: notification of acceptance of the working notes papers
  • 21.08.2020: camera ready working notes papers
  • 22-25.09.2020: CLEF 2020, Thessaloniki, Greece

Challenge description

Lifelog Core Task: lifelog moment retrieval (LMRT, 4th edition). Participants are required to retrieve a number of specific predefined activities in a lifelogger’s life. For example, they are asked to return the relevant moments for the query “Find the moment(s) when the lifelogger was having an ice cream on the beach”. Particular attention should be paid to the diversification of the selected moments with respect to the target scenario.

Data: A new rich multimodal dataset will be used (e.g., about 4.5 months in total of data from three lifeloggers, 1,500-2,500 images per day, visual concepts, semantic content, biometrics information, music listening history, computer usage). …

Data

The 4th edition of this task comes with new, enriched data focused on daily living activities and the chronological order of moments, together with a completely new task for assessing sport performance.

Note: The data is available under the “Resources” tab.

Topics and Ground Truth Release

There are 10 development topics for the LMRT task. The clusters and ground truth for these 10 topics are available via the provided links.

The 10 test topics for the LMRT task are released under this link.

Notice: the ground truth file uses the format [topic id, image id, cluster id], which differs from the submission format [topic id, image id, confidence score]. The cluster id is used to measure the diversity of the retrieved results for each topic. Participants should follow the submission instructions to generate a correctly formatted submission file.
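
For reference, a minimal sketch of loading such a ground truth file into per-topic lookup tables (the file name and the absence of a header row are assumptions):

    import csv
    from collections import defaultdict

    # Sketch: load the LMRT ground truth into {topic id: {image id: cluster id}}.
    # The file name "lmrt_gt.csv" and the lack of a header row are assumptions.
    ground_truth = defaultdict(dict)
    with open("lmrt_gt.csv", newline="") as f:
        for topic_id, image_id, cluster_id in csv.reader(f, skipinitialspace=True):
            ground_truth[topic_id][image_id] = cluster_id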

Submission instructions

The submissions will be received through the ImageCLEF 2020 system. Go to "Runs", then "Submit run", and then select the track.

Participants will be permitted to submit up to 10 runs.

Each system run for the LMRT sub-task must be submitted as a single ASCII plain text file, with the results given on separate lines in the following format:

[topic id, image id, confidence score]

Where:

  • topic id: Number of the queried topic, e.g., from 1 to 10 for the development set.
  • image id: The image ID that answers the topic. Each image ID is mapped into moments. If more than one sequential image answers the topic (i.e., the moment spans more than one image), then any image from within that moment is acceptable.
  • confidence score: from 0 to 1.

Sample:

1, u1_2015-02-26_095916_1, 1.00
1, u1_2015-02-26_095950_2, 1.00
1, u1_2015-02-26_100028_1, 1.00
...
10, u3_2015-08-01_144854_1, 1.00
10, u3_2015-08-01_145314_1, 1.00
10, u3_2015-08-01_145345_2, 1.00
10, u3_2015-08-01_145531_1, 0.80
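
Before uploading, it can help to sanity-check a run file against this format. The following is a minimal sketch assuming comma-separated lines as in the sample above; the official system may apply stricter checks:

    import csv
    import sys

    def validate_run(path, n_topics=10):
        # Sketch: check each line against [topic id, image id, confidence score].
        with open(path, newline="") as f:
            for lineno, row in enumerate(csv.reader(f, skipinitialspace=True), 1):
                if not row:
                    continue  # tolerate blank lines
                topic_id, image_id, score = row
                assert 1 <= int(topic_id) <= n_topics, f"line {lineno}: bad topic id"
                assert image_id, f"line {lineno}: empty image id"
                assert 0.0 <= float(score) <= 1.0, f"line {lineno}: bad confidence score"

    if __name__ == "__main__":
        validate_run(sys.argv[1])
        print("run file looks well-formed")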
 

Submission files

The file name must follow the pattern <task abbreviation>_<team name without spaces>_<run name without spaces>.csv

Examples:

- LMRT_DCU_run1.csv
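
A quick check of this naming rule could look like the following sketch; treating underscores inside the team and run names as forbidden is an assumption, since the rule itself only forbids spaces:

    import re

    # Sketch of the <task>_<team>_<run>.csv naming rule for LMRT submissions.
    # Disallowing underscores inside team/run names is an assumption.
    NAME_RE = re.compile(r"^LMRT_[^\s_]+_[^\s_]+\.csv$")

    assert NAME_RE.match("LMRT_DCU_run1.csv")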

As soon as the submission is open, you will find a “Create Submission” button on this page (next to the tabs).

Before being allowed to submit your results, you first have to press the red Participate button, which leads you to a page where you have to accept the challenge's rules.

Evaluation criteria

For assessing performance, classic metrics will be deployed:

  • Cluster Recall at X (CR@X): assesses how many different clusters from the ground truth are represented among the top X results.
  • Precision at X (P@X): measures the proportion of relevant photos among the top X results.
  • F1-measure at X (F1@X): the harmonic mean of the previous two.

Various cut-off points will be considered, e.g., X = 5, 10, 20, 30, 40, 50. The official ranking metric this year will be F1-measure@10, which gives equal importance to diversity (via CR@10) and relevance (via P@10).
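
As an illustration, here is a minimal per-topic sketch of these metrics, assuming the ground truth maps each relevant image id to its cluster id as described above; the official evaluation script may handle ties and edge cases differently:

    def scores_at_x(ranked_images, gt_clusters, x=10):
        # ranked_images: image ids for one topic, sorted by decreasing confidence
        # gt_clusters:   {image id: cluster id} ground truth for the same topic
        top = ranked_images[:x]
        relevant = [img for img in top if img in gt_clusters]
        p = len(relevant) / x                                       # P@X
        found_clusters = {gt_clusters[img] for img in relevant}
        cr = len(found_clusters) / len(set(gt_clusters.values()))   # CR@X
        f1 = 2 * p * cr / (p + cr) if p + cr else 0.0               # F1@X
        return p, cr, f1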

Participants are allowed to undertake the sub-tasks in an interactive or automatic manner. For interactive submissions, a maximum of five minutes of search time is allowed per topic. In particular, the organizers would like to emphasize methods that allow interaction with real users (via Relevance Feedback (RF), for example): besides the best performance, the manner of interaction (such as the number of iterations using RF) and the level of innovation of the method (for example, a new way to interact with real users) are also encouraged.

Rules

Note: In order to participate in this challenge you have to sign an End User Agreement (EUA). You will find more information on the ‘Resources’ tab.

The ImageCLEF lab is part of the Conference and Labs of the Evaluation Forum: CLEF 2020. CLEF 2020 consists of independent peer-reviewed workshops on a broad range of challenges in the fields of multilingual and multimodal information access evaluation, and a set of benchmarking activities carried out in various labs, designed to test different aspects of mono- and cross-language information retrieval systems. More details about the conference can be found here.

Submitting a working note with a full description of the methods used in each run is mandatory. Any run that cannot be reproduced from its description in the working notes may be removed from the official publication of the results. Working notes are published in the CEUR-WS proceedings, which assigns each paper an individual DOI (URN) and provides indexing by many bibliography systems, including DBLP. In accordance with CEUR-WS policies, a light review of the working notes will be conducted by the ImageCLEF organizing committee to ensure quality. As an illustration, the ImageCLEF 2019 working notes (task overviews and participant working notes) can be found within the CLEF 2019 CEUR-WS proceedings.

Important

Participants of this challenge will automatically be registered at CLEF 2020. In order to be compliant with the CLEF registration requirements, please edit your profile by providing the following additional information:

  • First name
  • Last name
  • Affiliation
  • Address
  • City
  • Country

Regarding the username, please choose a name that represents your team.

This information will not be publicly visible and will be used exclusively to contact you and to send the registration data to CLEF, which is the main organizer of all CLEF labs.

Participating as an individual (non affiliated) researcher

We welcome individual researchers, i.e., those not affiliated with any institution, to participate. We kindly ask you to provide us with a motivation letter containing the following information:

  • a presentation of your most relevant research activities related to the task(s)
  • your motivation for participating in the task(s) and how you want to exploit the results
  • a list of your 5 most relevant publications (if applicable)
  • a link to your personal webpage

The motivation letter should be attached directly to the End User Agreement document or sent as a PDF file to bionescu at imag dot pub dot ro. The request will be reviewed by the ImageCLEF organizing committee. We reserve the right to refuse applicants whose experience in the field is too narrow and would therefore most likely prevent them from being able to finish the task(s).

Citations

Information will be posted after the challenge ends.

Prizes

Publication

ImageCLEF 2020 is an evaluation campaign organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org). Selected contributions among the participants will be invited for publication the following year in the Springer Lecture Notes in Computer Science (LNCS), together with the annual lab overviews.

Contact us

Discussion Forum: you can ask questions related to this challenge on the Discussion Forum. Before asking a new question, please make sure it has not been asked before. Click on the Discussion tab above, or use the direct link: https://discourse.aicrowd.com/c/imageclef-2020-lifelog-lmrt

Alternative channels

We strongly encourage you to use the public channels mentioned above for communication between participants and organizers. In extreme cases, if there are any queries or comments that you would like to make through a private communication channel, you can send us an email at:

  • ductien.dangnguyen[at]uib[dot]no
  • zhou.liting2[at]mail[dot]dcu[dot]ie
  • tu.ninhvan[at]adaptcentre[dot]ie
  • tukhiem.le4[at]mail[dot]dcu[dot]ie
  • luca.piras[at]diee[dot]unica[dot]it
  • michael[at]simula[dot]no
  • tmtriet[at]hcmus[dot]edu[dot]vn
  • mlux[at]itec[dot]aau[dot]at
  • cgurrin[at]computing[dot]dcu[dot]ie

More information

You can find additional information on the challenge here: https://www.imageclef.org/2020/lifelog

Leaderboard

Rank  Team         Score
01    HCMUS        0.811
02    BIDAL-HCMUS  0.693
03    RRibeiro     0.517
04    duydedai     0.484
05    nvtu         0.321
