MICCAI 2020: HECKTOR

######################################################

Don't miss the 2021 edition of the HECKTOR challenge: 

https://www.aicrowd.com/challenges/miccai-2021-hecktor

######################################################

 

Sponsored by Siemens Healthineers Switzerland

                                                                       

MICCAI 2020 challenge: HEad and neCK TumOR segmentation challenge (HECKTOR)

MICCAI 2020 website: https://www.miccai2020.org/en/ 

Contact

vincent[dot]andrearczyk[at]gmail[dot]com

Introduction

This challenge will be presented at the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), October 4th to 8th, 2020 (conference and satellite events fully virtual). The task is the automatic segmentation of Head and Neck (H&N) primary tumors in FDG-PET and CT images. It offers participants working on 3D segmentation algorithms an opportunity to develop automatic bi-modal approaches for the segmentation of H&N tumors in PET-CT scans, with a focus on oropharyngeal cancers. Various approaches can be explored and compared to extract and merge information from the two modalities, including early or late fusion, full-volume or patch-based processing, and 2D, 2.5D or 3D architectures; a minimal illustration of the fusion idea is sketched below.
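For instance, early fusion can be as simple as stacking the co-registered modalities as input channels of a single network, whereas late fusion merges per-modality predictions. Below is a minimal NumPy sketch of both ideas; the arrays are random placeholders, not challenge data:

    import numpy as np

    # Placeholder volumes standing in for co-registered, resampled CT and PET.
    ct = np.random.rand(144, 144, 144)
    pet = np.random.rand(144, 144, 144)

    # Early fusion: stack the modalities as input channels of one network.
    x_early = np.stack([ct, pet], axis=0)  # shape (2, 144, 144, 144)

    # Late fusion: combine per-modality probability maps predicted by two
    # separate networks (here simulated by random placeholders).
    prob_ct = np.random.rand(144, 144, 144)
    prob_pet = np.random.rand(144, 144, 144)
    prob_fused = 0.5 * (prob_ct + prob_pet)  # simple averaging late fusion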

Timeline

  • release date of the training cases: June 10 2020 (postponed from June 01 2020)
  • release date of the test cases: Aug. 01 2020
  • submission dates: opens Sept. 01 2020, closes Sept. 10 2020 (23:59 UTC-10)
  • paper submission deadline: Sept. 18 2020 (extended from Sept. 15 2020) (23:59 UTC-10)
  • release date of the results: Sept. 15 2020
  • associated workshop: Oct. 04 2020, 9:00-13:00 UTC https://miccai2020.pathable.co/meetings/virtual/o76WmuXENK3vJfbRb (Zoom link available in "Introduction and Main Meeting Event" a few minutes before the start of the event)
  • the leaderboard remains open to post-challenge submissions (enable "Show post-challenge submissions" in the Leaderboard filters)

Scientific Program

9:00-13:00 UTC, half-day on Sunday, Oct. 4, virtual (zoom meeting)

9:00 - 9:30 (30 min) Introduction talk by organizers (data, results, prize, etc.)

9:30 - 10:15 (45 min) Keynote 1: Matthias Guckenberger, Prof. Dr. med., Professor for Radiation Oncology, University Hospital Zurich, University of Zurich “Radiomics for precision radiotherapy of head and neck cancer”

10:15 - 10:30 (15 min) Participant presentation: Jun Ma “Combining CNN and Hybrid Active Contours for Head and Neck Tumor Segmentation in CT and PET images”

10:30 - 10:45 (15 min) Participant presentation: Andrei Iantsen “Squeeze-and-Excitation Normalization for Automated Delineation of Head and Neck Primary Tumors in Combined PET and CT Images”

10:45 - 11:00 (15 min) Participant presentation: Ying Peng and Juanying Xie “The Segmentation of Head and Neck Tumors Using nnU-Net with Spatial and Channel ‘Squeeze & Excitation’ Blocks”

11:00 - 11:15 (15 min) Participant presentation: Yading Yuan “Automatic Head and Neck Tumor Segmentation in PET/CT with Scale Attention Network”

11:15 - 11:30 (15 min) Participant presentation: Huai Chen “Iteratively Refine the Segmentation of Head and Neck Tumor in FDG-PET and CT images”

11:30 - 11:45 (15 min) Participant presentation: Kanchan Ghimire “Patch-based 3D UNet for Head and Neck Tumor Segmentation with an Ensemble of Conventional and Dilated Convolutions”

11:45 - 12:00 (15 min) Break

12:00 - 12:45 (45 min) Keynote 2: Martin Vallières, PhD, Assistant Professor in the Department of Computer Science of Université de Sherbrooke “Radiomics: The Image Biomarker Standardization Initiative (IBSI)”

12:45 - 12:55 (10 min) Feedback from participants / What next

12:55 - 13:00 (5 min) Closing remarks

 

Paper submission

In order to be eligible for the official ranking, participants must submit a paper describing their methods (short paper: 6-8 pages or full paper: 12-15 pages), due Sept. 18 2020 (see timeline above). We will review the papers (independently from the MICCAI conference reviews) and publish a Lecture Notes in Computer Science (LNCS) volume in the challenges subline.

The submission platform (EasyChair) can be found here: https://easychair.org/conferences/?conf=hecktor2020

Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer’s proceedings LaTeX templates are also available in Overleaf. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made. Please send the form by email, specifying the title of the paper, to vincent[dot]andrearczyk[at]gmail[dot]com.

The following two papers must be cited:

Overview of the HECKTOR challenge at MICCAI 2020: Automatic Head and Neck Tumor Segmentation in PET/CT. Vincent Andrearczyk, Valentin Oreiller, Mario Jreige, Martin Vallières, Joel Castelli, Hesham Elhalawani, Sarah Boughdad, John O. Prior, Adrien Depeursinge. 2021

Automatic Segmentation of Head and Neck Tumors and Nodal Metastases in PET-CT scans. Vincent Andrearczyk, Valentin Oreiller, Martin Vallières, Joel Castelli, Hesham Elhalawani, Mario Jreige, Sarah Boughdad, John O. Prior, Adrien Depeursinge. In: Medical Imaging with Deep Learning. MIDL 2020. 

We encourage the participants to release their code and add the GitHub link to their papers.

Oral presentation

The top ranked teams will be contacted in September to prepare an oral presentation for the half-day event at MICCAI 2020 (Oct. 04 2020).

Prize

We will offer a 500 € prize to the winner, kindly sponsored by Siemens Healthineers Switzerland (under the condition that the winning team submits a paper describing the method).

Data description

The training data comprise 201 cases from four centers (CHGJ, CHMR, CHUM and CHUS). The test data comprise 53 cases from another center (CHUV).
Each case comprises a CT image, a PET image and the GTVt (primary Gross Tumor Volume) annotation in NIfTI format, as well as the bounding box location (in the bbox.csv file) and patient information (in the hecktor_patient_info_training.csv file).
bbox.csv contains one row per patient specifying a 144x144x144 mm bounding box (in absolute mm coordinates) following the ITK convention, i.e., in the patient reference frame, x goes from right to left, y from anterior to posterior and z from inferior to superior. These bounding boxes can be used for training the models, e.g. as proposed in the baseline provided in the GitHub repository; a cropping sketch is also given below. Similar bounding boxes will be provided for the test set. The evaluation (DSC scores) will be computed only within these bounding boxes, at the original CT resolution.
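To make the cropping concrete, here is a minimal sketch using SimpleITK and pandas. The bbox.csv column names (PatientID, x1, y1, z1, x2, y2, z2) and the file name CHGJ001_ct.nii.gz are assumptions for illustration; check the actual headers and see the GitHub repository for the reference implementation.

    import pandas as pd
    import SimpleITK as sitk

    # Assumed bbox.csv layout: PatientID, x1, y1, z1, x2, y2, z2
    # (absolute mm, ITK/LPS convention); verify against the real file.
    bboxes = pd.read_csv("bbox.csv").set_index("PatientID")

    def crop_to_bbox(image_path, patient_id):
        """Crop a NIfTI volume to its 144x144x144 mm bounding box."""
        img = sitk.ReadImage(image_path)
        bb = bboxes.loc[patient_id]
        # Convert the physical-space corners to voxel indices.
        start = img.TransformPhysicalPointToIndex(
            (float(bb["x1"]), float(bb["y1"]), float(bb["z1"])))
        end = img.TransformPhysicalPointToIndex(
            (float(bb["x2"]), float(bb["y2"]), float(bb["z2"])))
        size = [e - s for s, e in zip(start, end)]
        return sitk.RegionOfInterest(img, size=size, index=start)

    ct_crop = crop_to_bbox("CHGJ001_ct.nii.gz", "CHGJ001")

Since the box is defined in physical space, the same call can be applied to the corresponding PET volume, whatever its voxel size.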

The training data originate from (Vallières et al. 2017). They were used in (Andrearczyk et al. 2020), then curated (re-annotated by an expert) for the purpose of the challenge. The test data were annotated in the same way by the expert.

We also provide various functions to load, crop and resample the data, train a baseline CNN (NiftyNet) and evaluate the results in our GitHub repository: https://github.com/voreille/hecktor. This code is provided as a suggestion to help the participants; as long as the results are submitted at the original resolution and cropped to the correct bounding boxes, any other processing can be used.

In order to provide a fair comparison, participants who want to use additional data for training should also report results using only the HECKTOR data and discuss differences in the results.

Results submission format

Results should be provided as a single binary mask (1 in the predicted GTVt) in one .nii.gz file per patient, at the original CT resolution and cropped using the provided bounding boxes. The participants should pay attention to saving the NIfTI volumes with the correct pixel spacing and origin with respect to the original reference frame. The .nii.gz files should be named [PatientID].nii.gz, matching the patient names, e.g. CHUV001.nii.gz, and placed in a folder. This folder should be zipped before submission. If results are submitted without cropping and/or resampling, we will apply nearest-neighbor interpolation, provided that the coordinate system is specified in the files. Participants are allowed five valid submissions; the best result will be reported for each team. A sketch of writing a correctly georeferenced mask follows.
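As an illustration of the expected geometry handling, here is a minimal sketch with SimpleITK that copies spacing, origin and direction from the cropped CT (the ct_crop volume from the cropping sketch above); the helper name save_prediction is hypothetical.

    import numpy as np
    import SimpleITK as sitk

    def save_prediction(mask_array, reference_img, patient_id):
        """Write a binary mask as [PatientID].nii.gz, copying spacing,
        origin and direction from the reference image so the volume
        stays aligned with the original frame. mask_array must have
        the same (z, y, x) shape as the reference image."""
        mask = sitk.GetImageFromArray(mask_array.astype(np.uint8))
        mask.CopyInformation(reference_img)  # spacing, origin, direction
        sitk.WriteImage(mask, patient_id + ".nii.gz")

    # e.g. save_prediction(pred, ct_crop, "CHUV001")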

Motivation

Radiomics, the prediction of disease characteristics using quantitative image biomarkers extracted from medical images, has shown tremendous potential to optimize patient care, particularly in the context of H&N tumors (Vallières et al. 2017). However, it relies on an expensive and error-prone manual annotation process of Regions of Interest (ROI) to focus the analysis. The automatic segmentation of H&N tumors and nodal metastases from FDG-PET and CT images will enable the validation of radiomics models on very large cohorts and with optimal reproducibility. By focusing on metabolic and morphological tissue properties respectively, the FDG-PET and CT modalities provide complementary and synergistic information for cancerous lesion segmentation. This challenge will help identify the best methods to leverage the rich bi-modal information in the context of H&N primary tumor segmentation. This knowledge will be transferable to other cancer types and radiomics studies. In previous work, automated PET-CT analysis has been proposed for different tasks, including lung cancer segmentation (Kumar et al. 2019, Li et al. 2019, Zhao et al. 2018, Zhong et al. 2018) and bone lesion detection (Xu et al. 2018). In (Moe et al. 2019), a PET-CT segmentation method was proposed for a task similar to the one presented in this challenge, i.e. H&N Gross Tumour Volume (GTV) delineation of the primary tumor as well as metastatic lymph nodes, using a 2D U-Net architecture. An interesting two-stream chained fusion of PET-CT images was proposed in (Jin et al. 2019) for esophageal GTV segmentation. This challenge builds upon these works by comparing, on a publicly available dataset, recent 2D and 3D segmentation architectures (e.g. V-Net) as well as the complementarity of the two modalities, with quantitative and qualitative analyses (Andrearczyk et al. 2020). Finally, it will evaluate the generalization of the trained algorithms to new centers in distinct geographic locations.

(Andrearczyk et al. 2020) Vincent Andrearczyk et al. "Automatic segmentation of head and neck tumors and nodal metastases in PET-CT scans", in: Medical Imaging with Deep Learning (MIDL), 2020.

(Vallières et al. 2017) Vallières, Martin et al. “Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer.” Scientific reports, 7(1):10117, 2017

(Kumar et al. 2019) Kumar, Ashnil, et al. “Co-learning feature fusion maps from PET-CT images of lung cancer.” IEEE Transactions on Medical Imaging 39.1 (2019): 204-217.

(Li et al. 2019) Li, Laquan, et al. “Deep learning for variational multimodality tumor segmentation in PET/CT.” Neurocomputing (2019).

(Zhao et al. 2018) Zhao, Xiangming, et al. “Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network.” Physics in Medicine & Biology 64.1 (2018): 015011.

(Zhong et al. 2018) Zhong, Zisha, et al. “3D fully convolutional networks for co-segmentation of tumors on PET-CT images.” 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE, 2018.

(Xu et al. 2018) Xu, Lina, et al. “Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods.” Contrast media & molecular imaging (2018).

(Jin et al. 2019) Jin, Dakai, et al. "Accurate esophageal gross tumor volume segmentation in PET/CT using two-stream chained 3D deep network fusion." International Conference on Medical Image Computing and Computer-Assisted Intervention, 2019.

(Moe et al. 2019) Moe, Yngve Mardal, et al. “Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers.” Medical Imaging with Deep Learning (2019).

Evaluation Criteria

The Dice Similarity Coefficient (DSC) will be computed from the 3D volumes to assess the performance of the segmentation algorithms by comparing the automatic segmentation to the annotated ground truth. Participants' runs will be ranked based on the average DSC across all test cases, and the method with the highest average DSC will be ranked first (in the event of a tie, the variance will be considered). DSC measures the volumetric overlap between segmentation results and annotations. It is an appropriate measure for imbalanced segmentation problems, i.e. when the region to segment is small compared to the image size, and it is commonly used in the evaluation of segmentation algorithms, particularly for tumor segmentation tasks (Gudi et al. 2017), (Song et al. 2013), (Blanc-Durand et al. 2018), (Moe et al. 2019), (Menze et al. 2015). One aim of the developed algorithms is to enable radiomics studies that predict clinical outcomes. DSC mostly evaluates the segmentation inside the ground truth volume (similar to intersection over union) and is less sensitive to the segmentation precision at the boundary. It is therefore particularly relevant for radiomics, where first- and second-order statistics are the most relevant features and are little affected by small changes of the contour boundaries (Depeursinge et al. 2015). Compared to e.g. lung cancer, shape features are less useful in H&N because oropharyngeal tumors are not spiculated and are constrained by the anatomy of the throat. A reference sketch of the DSC computation is given below.
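For two binary masks A (prediction) and B (ground truth), DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch, for illustration only (the official evaluation computes the score within the provided bounding boxes at the original CT resolution):

    import numpy as np

    def dice(pred, gt):
        """Dice Similarity Coefficient between two binary masks:
        DSC = 2 * |A intersection B| / (|A| + |B|)."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        intersection = np.logical_and(pred, gt).sum()
        denom = pred.sum() + gt.sum()
        # Convention: two empty masks count as a perfect match.
        return 2.0 * intersection / denom if denom > 0 else 1.0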

(Gudi et al. 2017) Gudi, Shivakumar, et al. “Interobserver variability in the delineation of gross tumour volume and specified organs-at-risk during IMRT for head and neck cancers and the impact of FDG-PET/CT on such variability at the primary site.” Journal of medical imaging and radiation sciences 48.2 (2017): 184-192.

(Song et al. 2013) Song, Qi, et al. “Optimal co-segmentation of tumor in PET-CT images with context information.” IEEE transactions on medical imaging 32.9 (2013): 1685-1697.

(Blanc-Durand et al. 2018) Blanc-Durand, Paul, et al. “Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study.” PLoS One 13.4 (2018).

(Moe et al. 2019) Moe, Yngve Mardal, et al. “Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers.” arXiv preprint arXiv:1908.00841 (2019).

(Menze et al. 2015) Menze, Bjoern H., et al. “The multimodal brain tumor image segmentation benchmark (BRATS).” IEEE transactions on medical imaging 34.10 (2015): 1993-2024.

(Depeursinge et al. 2015) Depeursinge, Adrien, et al. “Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT.” Medical physics 42.4 (2015): 2054-2063.

Organiser Info

  • Vincent Andrearczyk: Vincent Andrearczyk completed his PhD on deep learning for texture and dynamic texture analysis at Dublin City University in 2017. He is currently a senior researcher at the University of Applied Sciences and Arts Western Switzerland, with a research focus on deep learning for texture analysis and medical imaging. Vincent co-organized the ImageCLEF 2018 caption detection and prediction challenge, and his team at HES-SO Valais has extensive experience in organizing challenges (various tasks in ImageCLEF every year since 2012).
  • Valentin Oreiller: Valentin Oreiller received his M.Sc. degree in bioengineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland with a specialization in bioimaging. He is currently a PhD candidate at the University of Applied Sciences and Arts Western Switzerland with a research focus on radiomics.
  • Martin Vallières: Martin Vallières is a newly appointed Assistant Professor in the Department of Computer Science of Université de Sherbrooke (April 2020). He received a PhD in Medical Physics from McGill University in 2017, and completed post-doctoral training in France and the USA in 2018 and 2019. The overarching goal of Martin Vallières’ research is the development of clinically-actionable models to better personalize cancer treatments and care (“precision oncology”). He is an expert in the field of radiomics (i.e. the high-throughput and quantitative analysis of medical images) and machine learning in oncology. Over the course of his career, he has developed multiple prediction models for different types of cancers. His main research interest is now the graph-based integration of heterogeneous medical data types for improved precision oncology. He has shared various datasets on The Cancer Imaging Archive (TCIA), including Soft-tissue sarcoma (FDG-PET/CT and MR imaging data of 51 patients, with tumor contours (RTstruct) and clinical data), Low-grade gliomas (tumor contours for MR images of 108 patients of the TCGA-LGG dataset in MATLAB format) and Head-and-neck (FDG-PET/CT imaging data of 300 patients, with RT plans (RTstruct, RTdose, RTplan) and clinical data). Moreover, he co-organized the PET radiomics challenge: A MICCAI 2018 CPM Grand Challenge, participated in organizing its data online, and contributed to the challenge data pool via the Head-and-neck TCIA collection.
  • Joel Castelli: Dr Joël Castelli is a radiation oncologist in the radiation department at Centre Eugène Marquis, Rennes, France. He completed his PhD at the University of Rennes 1, France in 2017, on adaptive radiotherapy of head and neck cancers.
  • Hesham Elhalawani: Hesham Elhalawani, MD, MSc, is a radiation oncology clinical fellow at Cleveland Clinic. He completed a 3-year quantitative imaging biomarker research fellowship at MD Anderson Cancer Center. His deep-rooted research focus is leveraging artificial intelligence, radiomics and imaging informatics to personalize cancer patient care. He has published more than 50 peer-reviewed articles and served as a reviewer for journals and conferences, including Radiotherapy & Oncology, the Red Journal, European Radiology and AMIA conferences. He is on the editorial board of Radiology: Artificial Intelligence, an RSNA publication. He has been an advocate for the FAIR principles of data management, contributing to the mission and goals of the NCI Cancer Imaging Program. In collaboration with The Cancer Imaging Archive (TCIA), he publicly shared two large curated head and neck cancer datasets that include matched clinical and multi-modal imaging data. Moreover, he served on the organizing committee of the 2016 and 2018 MICCAI radiomics challenges, hosted on Kaggle InClass to fuel the growing trend of mass crowdsourced innovation.
  • Sarah Boughdad: Dr. Boughdad is currently a Fellow at the Service of Nuclear Medicine and Molecular Imaging at Lausanne University Hospital, Switzerland. In 2014, she graduated from the Medical Faculty of Paris-Sud, Paris-Saclay. She obtained her PhD in medical physics in 2018 from EOBE, Orsay University. She is an active researcher in the field of Radiomics.
  • Mario Jreige: Mario Jreige, MD, is a nuclear medicine resident at Lausanne University Hospital, Switzerland. He previously completed a specialization in radiology at Saint-Joseph University, Beirut. He is a junior member of the Swiss Society of Nuclear Medicine.
  • John O. Prior: John O. Prior, PhD, MD, FEBNM, has been Professor and Head of Nuclear Medicine and Molecular Imaging at Lausanne University Hospital, Switzerland, since 2010. After graduating with an MSEE degree from ETH Zurich, he received a PhD in Biomedical Engineering from The University of Texas Southwestern Medical Center at Dallas and an MD from the University of Lausanne. He thereafter underwent specialization training in nuclear medicine in Lausanne and held a visiting associate professorship at the University of California, Los Angeles (UCLA). Prof. Prior is currently President of the Swiss Society of Nuclear Medicine, a member of the European Association of Nuclear Medicine and the Society of Nuclear Medicine and Molecular Imaging, as well as an IEEE Senior Member.
  • Adrien Depeursinge: Adrien Depeursinge received the M.Sc. degree in electrical engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, with a specialization in signal processing. From 2006 to 2010, he carried out his Ph.D. thesis on medical image analysis at the University Hospitals of Geneva (HUG). He then spent two years as a Postdoctoral Fellow at the Department of Radiology of the School of Medicine at Stanford University. He currently holds a joint position as an Associate Professor at the Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), and as a Senior Research Scientist at the Lausanne University Hospital (CHUV). His group, jointly led with Prof. Müller (MedGIFT), has extensive experience in challenge organization (e.g. ImageCLEF, VISCERAL). He also prepared an open-access dataset of Interstitial Lung Diseases (ILD) for the comparison of algorithms. The library contains 128 patients affected with ILDs, 108 image series with more than 41 liters of annotated lung tissue patterns, as well as a comprehensive set of 99 clinical parameters related to ILDs. This dataset has become a reference for research on ILDs and the associated paper has >100 citations.
