MEDIQA 2019 - Question Answering (QA)

ACL-BioNLP Shared Task

The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).

Question Answering Task (QA)

The objective of this task is to filter and improve the ranking of automatically retrieved answers. The input ranks are generated by the medical QA system CHiQA.

We highly recommend reusing RQE and/or NLI systems (the first two tasks) in the QA task.
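For example, the entailment score produced by an RQE or NLI model can serve directly as a re-ranking signal over CHiQA's candidate answers. The sketch below is illustrative only; `entailment_score` is a hypothetical placeholder for whatever model you train on the first two tasks:

```python
# Illustrative re-ranking sketch. entailment_score() is a hypothetical
# placeholder for a trained RQE/NLI model; it is not part of the task code.

def entailment_score(question, answer):
    """Return a score in [0, 1]; higher means the answer better fits the question."""
    raise NotImplementedError("plug in your RQE/NLI model here")

def rerank(question, candidate_answers):
    """Reorder CHiQA's candidate answers by descending entailment score."""
    return sorted(candidate_answers,
                  key=lambda answer: entailment_score(question, answer),
                  reverse=True)
```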

Datasets

Training, validation and test sets are available here: https://github.com/abachaa/MEDIQA2019/tree/master/MEDIQA_Task3_QA

In addition, the MedQuAD dataset can be used to retrieve already-answered questions that are entailed by the original questions [1].

[1] A. Ben Abacha & D. Demner-Fushman. “A Question-Entailment Approach to Question Answering”. arXiv:1901.08079 [cs.CL], January 2019.
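One simple way to exploit MedQuAD is to retrieve its questions that are closest to an original test question and reuse their answers. The sketch below uses TF-IDF cosine similarity as a crude stand-in for a real RQE model; the list-of-strings input is an assumption, since MedQuAD itself is distributed as XML files that you would need to parse first:

```python
# Illustrative retrieval baseline. TF-IDF cosine similarity stands in for a
# true question-entailment model; medquad_questions is assumed to be a list
# of question strings extracted from the MedQuAD XML files.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_similar(original_question, medquad_questions, top_k=5):
    """Return the top_k MedQuAD questions most similar to the original question."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([original_question] + medquad_questions)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(medquad_questions[i], float(scores[i])) for i in ranked]

# Example:
# hits = retrieve_similar("What are the treatments for glaucoma?", medquad_questions)
```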

Timeline

  • March 19, 2019: Release of the validation set for the QA task.
  • April 15, 2019: Release of the test sets for the 3 tasks.
  • April 30, 2019: Run submission deadline. Participants’ results will be available on AIcrowd.
  • May 15, 2019: Paper submission deadline.
  • August 1, 2019: BioNLP workshop, ACL 2019, Florence, Italy.

You can download the datasets in the Resources Section.

Evaluation Criteria

The evaluation of the QA task will be based on Accuracy, Mean Reciprocal Rank (MRR), Precision, and Spearman’s Rank Correlation Coefficient.
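For orientation, the sketch below mirrors the standard definitions of these measures on a single question’s ranked labels (1 = correct, 0 = incorrect). The official evaluation script remains authoritative; per-question computation and averaging over questions are assumptions here:

```python
# Sketch of the four measures on one question's ranked labels.
# The official evaluation script is authoritative; this only mirrors
# the standard definitions.
from scipy.stats import spearmanr

def accuracy(labels):
    """Fraction of returned answers that are correct (one plausible reading)."""
    return sum(labels) / len(labels)

def reciprocal_rank(labels):
    """1 / rank of the first correct answer, or 0 if there is none."""
    for rank, label in enumerate(labels, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0

def precision_at_k(labels, k):
    """Fraction of the top-k answers that are correct."""
    return sum(labels[:k]) / k

# Spearman's rank correlation between a system ranking and a reference ranking:
system_ranks = [1, 2, 3, 4]
reference_ranks = [2, 1, 3, 4]
rho, _ = spearmanr(system_ranks, reference_ranks)  # rho == 0.8 here
```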

Submission format

1) Each line should have the following format: QuestionID,AnswerID,Label.

  • Label = 0: incorrect answer
  • Label = 1: correct answer

2) The line number should correspond to the rank of the answer. Incorrect answers (label value 0) are still taken into account when computing Accuracy; for the rank-based measures, they will be filtered out automatically by our evaluation script.

3) The submission file must not include a header line.

Example

Test question Q1 with 5 answers, in system-rank order: A11, A12, A13, A14, and A15.

A submission file with 3 correct answers, ranked A13, A11, A15, and 2 incorrect answers, A12 and A14, should look like:

  • Q1,A13,1
  • Q1,A11,1
  • Q1,A15,1
  • Q1,A12,0
  • Q1,A14,0
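A minimal sketch that writes the example above in the required comma-separated format (the file name is an assumption; only the ordering and the absence of a header matter):

```python
# Write the example submission from above (file name is an assumption).
# Each tuple: (question_id, answer_id, label), already ordered by system rank.
rows = [
    ("Q1", "A13", 1),
    ("Q1", "A11", 1),
    ("Q1", "A15", 1),
    ("Q1", "A12", 0),
    ("Q1", "A14", 0),
]

with open("submission.csv", "w") as f:  # no header line, per the rules above
    for qid, aid, label in rows:
        f.write(f"{qid},{aid},{label}\n")
```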

Rules

1) Each team is allowed to submit a maximum of 5 runs.

2) Please choose a username that represents your team, and update your profile with the following information: First name, Last name, Affiliation, Address, City, Country.

3) For each run submission, it is mandatory to fill in the submission description field of the submission form with a short description of the methods, tools and resources used for that run.

4) The final results will not be considered official until a working notes paper with the full description of the methods is submitted.
