
MICCAI 2021: HECKTOR

 

MICCAI 2021 challenge: HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR)

MICCAI 2021 website: https://miccai2021.org/en/

📱Contact

vincent[dot]andrearczyk[at]gmail[dot]com

🕵 Introduction

Following the success of the first HECKTOR challenge in 2020, this challenge will be presented at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, September 27 - October 1, 2021. Three tasks are proposed this year:

  • Task 1: the automatic segmentation of Head and Neck (H&N) primary tumors in FDG-PET and CT images;
  • Task 2: the prediction of patient outcomes, namely Progression Free Survival (PFS), from the FDG-PET and CT images;
  • Task 3: the prediction of PFS (same as Task 2) from the FDG-PET and CT images, except that the ground truth annotations of the primary tumors will also be provided.

Motivation: Head and Neck (H&N) cancers are among the most common cancers worldwide (5th leading cancer by incidence) (Parkin et al. 2005). Radiotherapy combined with cetuximab has been established as standard treatment (Bonner et al. 2010). However, locoregional failures remain a major challenge and occur in up to 40% of patients in the first two years after treatment (Chajon et al. 2013). Recently, several radiomics studies based on Positron Emission Tomography (PET) and Computed Tomography (CT) imaging were proposed to better identify patients with a worse prognosis in a non-invasive fashion, exploiting already available images such as those acquired for diagnosis and treatment planning (Vallières et al. 2017), (Bogowicz et al. 2017), (Castelli et al. 2017). Although highly promising, these methods were validated on 100-400 patients. Further validation on larger cohorts (e.g. 300-3000 patients) is required to ensure an adequate ratio between the number of variables and observations and to avoid an overestimation of the generalization performance. Achieving such a validation requires the manual delineation of primary tumors and nodal metastases for every patient and in three dimensions, which is intractable and error-prone. Methods for automated lesion segmentation in medical images have been proposed in various contexts, often achieving expert-level performance (Heimann and Meinzer 2009), (Menze et al. 2015). Surprisingly few studies have evaluated the performance of computerized automated segmentation of tumor lesions in PET and CT images (Song et al. 2013), (Blanc-Durand et al. 2018), (Moe et al. 2019).

In 2020, we organized the first HECKTOR challenge to offer an opportunity for participants working on 3D segmentation algorithms to develop automatic bi-modal approaches for the segmentation of H&N tumors in PET/CT scans, focusing on oropharyngeal cancers. Following good participation and promising results in the 2020 challenge, we will increase the dataset size with 81 new cases provided by additional partner organizations, from another clinical center with a different PET/CT scanner model and associated reconstruction settings (CHU Milétrie, Poitiers, France). In addition, we expand the scope of the challenge by considering an additional task dedicated to outcome prediction based on the PET/CT images. A clinically relevant endpoint that can be leveraged for personalizing patient management at diagnosis will be considered: the prediction of progression-free survival from diagnostic PET/CT images.

By focusing on metabolic and morphological tissue properties respectively, the PET and CT modalities include complementary and synergistic information for cancerous lesion segmentation as well as tumor characteristics relevant for patient outcome prediction, in addition to the usual clinical variables (e.g., clinical stage, age, gender, treatment modality). Modern image analysis methods must be developed to best extract and leverage this information. The data used in this challenge are multi-centric, including four centers in Canada (Vallières et al. 2017), one center in Switzerland (Castelli et al. 2017), and one center in France (Hatt et al. 2019; Legot et al. 2018), for a total of 335 patients with annotated primary tumors.

(Blanc-Durand et al. 2018) Blanc-Durand, Paul, et al. "Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study." PLoS One 13.4 (2018): e0195798.

(Bogowicz et al. 2017) Bogowicz, Marta, et al. "Comparison of PET and CT radiomics for prediction of local tumor control in head and neck squamous cell carcinoma." Acta oncologica 56.11 (2017): 1531-1536.

(Castelli et al. 2017) Castelli, Joël, et al. "A PET-based nomogram for oropharyngeal cancers." European journal of cancer 75 (2017): 222-230.

(Chajon et al. 2013) Chajon, Enrique, et al. "Salivary gland-sparing other than parotid-sparing in definitive head-and-neck intensity-modulated radiotherapy does not seem to jeopardize local control." Radiation oncology 8.1 (2013): 1-9.

(Hatt et al. 2009) Hatt, Mathieu, et al. "A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET." IEEE transactions on medical imaging 28.6 (2009): 881-893.

(Heimann and Meinzer 2009) Heimann, Tobias, and Hans-Peter Meinzer. "Statistical shape models for 3D medical image segmentation: a review." Medical image analysis 13.4 (2009): 543-563.

(Legot et al. 2018) Legot, Floriane, et al. "Use of baseline 18F-FDG PET scan to identify initial sub-volumes with local failure after concomitant radio-chemotherapy in head and neck cancer." Oncotarget 9.31 (2018): 21811.

(Menze et al. 2015) Menze, Bjoern H., et al. "The multimodal brain tumor image segmentation benchmark (BRATS)." IEEE transactions on medical imaging 34.10 (2015): 1993-2024.

(Moe et al. 2019) Moe, Yngve Mardal, et al. "Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers." Medical Imaging with Deep Learning (2019).

(Parkin et al. 2005) Parkin, D. Max, et al. "Global cancer statistics, 2002." CA: a cancer journal for clinicians 55.2 (2005): 74-108.

(Song et al. 2013) Song, Qi, et al. "Optimal co-segmentation of tumor in PET-CT images with context information." IEEE transactions on medical imaging 32.9 (2013): 1685-1697.

(Vallières et al. 2017) Vallières, Martin, et al. "Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer." Scientific reports 7.1 (2017): 10117.

📅 Timeline

  • the release date of the training cases: June 01 2021
  • the release date of the test cases: Aug. 01 2021
  • the submission date(s): opens Sept. 01 2021 closes Sept. 10 2021 (23:59 UTC-10)
  • paper submission deadline: Sept. 15 2021 (23:59 UTC-10)
  • the release date of the results: Sept. 15 2021
  • associated workshop days: Sept. 27 2021 or Oct. 01 2021

✍Paper submission

In order to be eligible for the official ranking, participants must submit a paper describing their methods by Sept. 15 2021. We will review the papers (independently from the MICCAI conference reviews) and publish them in a Lecture Notes in Computer Science (LNCS) volume in the challenges subline.

Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer’s proceedings LaTeX templates are also available in Overleaf. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made. Please send the form by email, specifying the title of the paper, to vincent[dot]andrearczyk[at]gmail[dot]com.

The following papers must be cited:

TBD

We encourage participants to release their code and add the GitHub link to their papers.

The top ranked teams with a paper submission will be contacted in September to prepare an oral presentation for the half-day event at MICCAI 2021.

🏆Prize

TBD

💾 Data description

  • Task 1: Each training and test case comprises one 3D FDG-PET volume registered with a 3D CT volume of the head and neck region, as well as a binary mask with the annotated ground truth tumor (available to the participating teams only for the training cases). The labels represent the primary Gross Tumor Volume (GTVt). Patient information including gender and age is also included with each case.
  • Task 2: Same as Task 1. In addition, the cases also include the patient outcome information (available to the participating teams only for the training cases), namely progression-free survival (time-to-event in days and censoring).
  • Task 3: Same as Task 2. In addition, the ground-truth contours are provided to the algorithms, but only through a Docker framework to ensure that the participants do not have direct access to them.

The total number of training cases is 224. No specific validation cases are provided and the training set can be split in any manner for cross-validation. The total number of test cases is 106. A total of 76 cases were added to the previous year's dataset (23 and 53 to the training and test sets, respectively).
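The exact file format and naming will be specified with the data release. Purely as a non-official sketch of how one case could be read in Python with SimpleITK, assuming the volumes are distributed as NIfTI files and assuming a hypothetical case identifier (CHUM001) and hypothetical file suffixes (_ct, _pt, _gtvt):

```python
import SimpleITK as sitk

case_id = "CHUM001"  # hypothetical patient identifier

ct = sitk.ReadImage(f"{case_id}_ct.nii.gz")      # CT volume (Hounsfield units)
pet = sitk.ReadImage(f"{case_id}_pt.nii.gz")     # FDG-PET volume
gtvt = sitk.ReadImage(f"{case_id}_gtvt.nii.gz")  # binary GTVt mask (training cases only)

# PET and CT are registered but typically sampled on different grids (the CT is finer),
# so resample the CT and the mask onto the PET grid to obtain arrays of the same shape.
ct_on_pet = sitk.Resample(ct, pet, sitk.Transform(), sitk.sitkLinear, -1000.0)
gtvt_on_pet = sitk.Resample(gtvt, pet, sitk.Transform(), sitk.sitkNearestNeighbor, 0)

pet_arr = sitk.GetArrayFromImage(pet)                        # numpy array, (z, y, x)
ct_arr = sitk.GetArrayFromImage(ct_on_pet)
mask_arr = sitk.GetArrayFromImage(gtvt_on_pet).astype(bool)

print(pet_arr.shape, ct_arr.shape, int(mask_arr.sum()), "tumor voxels")
```

Nearest-neighbor interpolation is used for the mask so that resampling keeps it binary; the choice of reference grid (PET or CT) is up to the participants.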

📨 Results submission format

TBD 

 

⚖ Evaluation Criteria

  • Task 1: The Dice Similarity Coefficient (DSC) and the Hausdorff distance at the 95th percentile (HD95) will be computed on the 3D volumes to assess the segmentation algorithms, comparing the automatic segmentations with the annotated ground truth within the provided bounding boxes. The final ranking will be based on the mean rank of the two metrics across the test cases. Precision and recall will also be computed to assess over- and under-segmentation, as well as the arithmetic mean of sensitivity and positive predictive value (an illustrative sketch of these metrics is given after this list).
  • Task 2: The ranking will be based on the concordance index (C-index) computed on the test data. The C-index quantifies the model's ability to correctly rank the survival times based on the computed individual risk scores, generalizing the area under the ROC curve (AUC). It accounts for censored data and provides a global assessment of the model's discriminative power.
  • Task 3: Same as Task 2.
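The official evaluation code will be provided by the organizers. Purely as an illustration of the metrics above, the sketch below computes the DSC and a symmetric HD95 on binary numpy masks, and a C-index from risk scores using the lifelines package (one common implementation); the function names, the voxel-spacing default, and the risk-score orientation are illustrative assumptions, not the official implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt
from lifelines.utils import concordance_index  # one common C-index implementation


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())


def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance, in the units of `spacing`."""
    def surface(mask):
        # surface voxels = mask minus its erosion
        return mask & ~binary_erosion(mask)

    ps, gs = surface(pred.astype(bool)), surface(gt.astype(bool))
    # distance of each predicted surface voxel to the ground-truth surface, and vice versa
    d_pred_to_gt = distance_transform_edt(~gs, sampling=spacing)[ps]
    d_gt_to_pred = distance_transform_edt(~ps, sampling=spacing)[gs]
    return float(np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95))


def c_index(time_days, event_observed, risk_score) -> float:
    """Concordance index for Tasks 2/3 (higher risk should mean shorter survival)."""
    # lifelines' concordance_index expects scores oriented like survival times,
    # so the risk scores are negated here.
    return concordance_index(time_days, -np.asarray(risk_score), event_observed)
```

The HD95 sketch takes the 95th percentile over the pooled surface-to-surface distances of both masks and uses the physical voxel spacing, which is one common convention; other definitions (e.g. the maximum of the two directed 95th percentiles) exist and the organizers' definition should be followed once released.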

👥 Organiser Info

  • Vincent Andrearczyk: Vincent Andrearczyk completed his PhD degree on deep learning for texture and dynamic texture analysis at Dublin City University in 2017. He is currently a senior researcher at the University of Applied Sciences and Arts Western Switzerland with a research focus on deep learning for texture analysis and medical imaging. Vincent co-organized the ImageCLEF 2018 caption detection and prediction challenge, and his team at HES-SO Valais has extensive experience in organizing challenges (various tasks in ImageCLEF every year since 2012).
  • Valentin Oreiller: Valentin Oreiller received his M.Sc. degree in bioengineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland with a specialization in bioimaging. He is currently a PhD candidate at the University of Applied Sciences and Arts Western Switzerland with a research focus on radiomics.
  • Martin Vallières: Martin Vallières is a newly appointed Assistant Professor in the Department of Computer Science of Université de Sherbrooke (April 2020). He received a PhD in Medical Physics from McGill University in 2017, and completed post-doctoral training in France and the USA in 2018 and 2019. The overarching goal of Martin Vallières' research is the development of clinically-actionable models to better personalize cancer treatments and care ("precision oncology"). He is an expert in the field of radiomics (i.e. the high-throughput and quantitative analysis of medical images) and machine learning in oncology. Over the course of his career, he has developed multiple prediction models for different types of cancers. His main research interest is now focused on the graph-based integration of heterogeneous medical data types for improved precision oncology. He has shared various datasets on The Cancer Imaging Archive (TCIA), including Soft-tissue sarcoma: FDG-PET/CT and MR imaging data of 51 patients, with tumor contours (RTstruct) and clinical data; Low-grade gliomas: tumor contours for MR images of 108 patients of the TCGA-LGG dataset in MATLAB format; and Head-and-neck: FDG-PET/CT imaging data of 300 patients, with RT plans (RTstruct, RTdose, RTplan) and clinical data. Moreover, he has co-organized the PET radiomics challenge: A MICCAI 2018 CPM Grand Challenge, and participated in organizing the challenge data online. He also contributed to the challenge data pool via the Head-and-neck TCIA collection.
  • Catherine Cheze Le Rest: Nuclear medicine department, CHU Poitiers, Poitiers, France and LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
  • Hesham Elhalawani: Hesham Elhalawani, MD, MSc is a radiation oncology clinical fellow at Cleveland Clinic. He completed a 3-year quantitative imaging biomarker research fellowship at MD Anderson Cancer Center. His deep-rooted research focus is leveraging artificial intelligence, radiomics, and imaging informatics to personalize cancer patient care. He has published more than 50 peer-reviewed articles and served as a reviewer for journals and conferences, including Radiotherapy & Oncology, the Red Journal, European Radiology, and AMIA conferences. He is on the editorial board of Radiology: Artificial Intelligence, an RSNA publication. He has been an advocate for the FAIR principles of data management by contributing to the mission and goals of the NCI Cancer Imaging Program. In collaboration with The Cancer Imaging Archive (TCIA), he publicly shared two large curated head and neck cancer datasets that include matched clinical and multi-modal imaging data. Moreover, he served on the organizing committee for the 2016 and 2018 MICCAI radiomics challenges that were hosted on Kaggle InClass to fuel the growing trend in mass crowdsource innovation.
  • Sarah Boughdad: Dr. Boughdad is currently a Fellow at the Service of Nuclear Medicine and Molecular Imaging at Lausanne University Hospital, Switzerland. In 2014, she graduated from the Medical Faculty of Paris-Sud, Paris-Saclay. She obtained her PhD in medical physics in 2018 from EOBE, Orsay University. She is an active researcher in the field of Radiomics.
  • Mario Jreige: Mario Jreige, MD, is a nuclear medicine resident at Lausanne University Hospital, Switzerland. He has previously completed a specialization in radiology at the Saint-Joseph University, Beirut. He is a junior member of the Swiss Society of Nuclear Medicine.
  • John O. Prior: John O. Prior, PhD MD, FEBNM has been Professor and Head of Nuclear Medicine and Molecular Imaging at Lausanne University Hospital, Switzerland since 2010. After graduating with an MSEE degree from ETH Zurich, he received a PhD in Biomedical Engineering from The University of Texas Southwestern Medical Center at Dallas and an MD from the University of Lausanne. He thereafter underwent specialization training in nuclear medicine in Lausanne and held a visiting associate professorship at the University of California at Los Angeles (UCLA). Prof. Prior is currently President of the Swiss Society of Nuclear Medicine, a member of the European Association of Nuclear Medicine and the Society of Nuclear Medicine and Molecular Imaging, and an IEEE Senior Member.
  • Dimitris Visvikis: Dimitris Visvikis is a director of research with the National Institute of Health and Medical Research (INSERM) in France and the Director of the Medical Information Processing Lab in Brest (LaTIM, UMR 1101). Dimitris has been involved in nuclear medicine research for more than 25 years. He obtained his PhD from the University of London in 1996 working in PET detector development within the Joint Department of Physics in the Royal Marsden Hospital and the Institute of Cancer Research. After working as a Senior Research Fellow in the Wolfson Brain Imaging Centre of the University of Cambridge he joined the Institute of Nuclear Medicine as Principal Medical Physicist in University College London where he introduced and worked for five years with one of the first clinical PET/CT systems in the world. He has spent the majority of his scientific activity in the field of PET/CT imaging, including developments in both hardware and software domains. His current research interests focus on improvement in PET/CT image quantitation for specific oncology applications, such as response to therapy and radiotherapy treatment planning, through the development of methodologies for detection and correction of respiratory motion, 4D PET image reconstruction, partial volume correction and denoising, tumour volume automatic segmentation and machine learning for radiomics, as well as the development and validation of Monte Carlo simulations for emission tomography and radiotherapy treatment dosimetry applications.

    He is a member of numerous professional societies such as IPEM (Fellow, Past Vice-President International), IEEE (Senior Member, Past NPSS NMISC chair), AAPM, SNMMI (CaIC board of directors 2007-2012) and EANM (physics committee chair). He is also the first Editor-in-Chief of the IEEE Transactions on Radiation and Plasma Medical Sciences.

  • Mathieu Hatt: Mathieu Hatt is a computer scientist. He received his PhD in 2008 and his habilitation to supervise research in 2012. His main skills and expertise lie in radiomics, from automated image segmentation to features extraction, as well as machine (deep) learning methods, for PET/CT, MRI and CT modalities. He is in charge of a research group "radiomics modeling" in the team ACTION (therapeutic action guided by multimodal images in oncology) of the LaTIM (Laboratory of Medical Information Processing, INSERM UMR 1101, University of Brest, France). He is an elected member of the EANM physics committee, the SNMMI physics, data science and instrumentation council board of directors, and the IEEE nuclear medical and imaging sciences council.
  • Adrien Depeursinge: Adrien Depeursinge received the M.Sc. degree in electrical engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, with a specialization in signal processing. From 2006 to 2010, he performed his Ph.D. thesis on medical image analysis at the University Hospitals of Geneva (HUG). He then spent two years as a Postdoctoral Fellow at the Department of Radiology of the School of Medicine at Stanford University. He currently holds a joint position as an Associate Professor at the Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), and as a Senior Research Scientist at the Lausanne University Hospital (CHUV). His group, jointly led with Prof. Müller (MedGIFT), has extensive experience in challenge organization (e.g. ImageCLEF, VISCERAL). He also prepared an open-access dataset of Interstitial Lung Diseases (ILDs) for the comparison of algorithms. The library contains 128 patients affected with ILDs and 108 image series with more than 41 liters of annotated lung tissue patterns, as well as a comprehensive set of 99 clinical parameters related to ILDs. This dataset has become a reference for research on ILDs and the associated paper has >100 citations.
