Automated Lesion Segmentation in Whole-Body PET/CT and Longitudinal CT - The human frontier

🎬 Introduction

We invite you to participate in the fourth autoPET Challenge. The focus of this year's challenge is to explore an interactive human-in-the-loop scenario for lesion segmentation in two tasks: 1) whole-body PET/CT and 2) longitudinal CT.

Positron Emission Tomography / Computed Tomography (PET/CT) and CT are an integral part of the diagnostic workup for various malignant solid tumour entities. Currently, response assessment for cancer treatment is performed by radiologists (i.e. human observers) on consecutive PET/CT or CT scans through the detection of changes in tumour size and distribution using standardised criteria. Despite the highly time-consuming nature of this manual task, only unidimensional (diameter) evaluations of a subset of tumour lesions are used to assess tumour dynamics. Additional quantitative evaluation of PET information would potentially allow for more precise and individualised diagnostic decisions. Besides the risk of inter-observer variability, the manual approach extracts only a small fraction of the morphologic tumour data contained in the images, thereby neglecting valuable prognostic information.

Automation of tumour detection and segmentation as well as longitudinal evaluation may enable faster and more comprehensive information and data extraction. However, automated solutions for this task are lacking. AI-based approaches using deep-learning models may be an appropriate way to address lesion detection and segmentation in whole-body hybrid imaging (PET/CT and CT) and to compensate for workload and time pressure during radiological readings. So far, most AI solutions analyse isolated scans at single time points and/or from a single imaging modality, thus excluding information from prior or additional examinations. Moreover, methods are often prone to specialise to specific imaging conditions (imaging scanner, lesion phenotype, PET tracer, and so on), making it challenging to generalise across different imaging scenarios. In addition, the necessity or potential benefit of integrating human experts into the training and/or inference loop has not yet been explored in this setting.

Join us in autoPET/CT IV to explore the role of human annotations in segmenting and tracking lesions in PET/CT and CT imaging. Algorithms are provided with varying levels of annotations with the aim of investigating model conditioning on label information. To this end, we provide a third, large longitudinal CT training dataset of melanoma patients under therapy. We allow the submission of data-centric solutions (using the provided baselines), integration and interaction with foundation models, using/extending pre-trained algorithms, or the development of novel algorithms.

The autoPET IV challenge is hosted at MICCAI 2025 and supported by the European Society for hybrid, molecular and translational imaging (ESHI). The challenge is part of the autoPET series and the successor of autoPET, autoPET II, and autoPET III.