Semi- and self-supervised learning for lung segmentation

Acute lung damage leads to changes in the thoracic CT that can be quantified, and this quantification both allows an accurate diagnosis and provides important information for an appropriate, lung-protective therapy. To make it possible, the lung must first be segmented from the surrounding structures in each CT image slice. Only then, with the help of suitable image analysis algorithms, can the type and extent of lung damage be precisely characterized and quantified, and corresponding therapeutic consequences derived.

For healthy lung tissue, automatic segmentation is already possible with conventional image processing algorithms (e.g., edge detection filters). However, because damaged, edematous lung tissue has the same Hounsfield density as the surrounding body tissue, these algorithms fail in the border regions: they cannot reliably distinguish damaged lung parenchyma from surrounding tissue. In this case, segmentation is only possible manually, i.e., an experienced, trained physician must trace and mark the lung in each individual CT image slice using suitable software. This step is time-consuming and prevents immediate clinical use; it also carries a small risk of interobserver bias, even in experienced hands.

Recently, convolutional U-Nets have been shown to produce lung segmentations of good quality. However, they still need to be trained on images labeled by a trained physician.
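
For orientation, the following is a minimal sketch of such a U-Net for binary lung segmentation, assuming a PyTorch implementation; the architecture, channel widths, and input size shown here are illustrative choices, not the specific model to be used in the thesis.

```python
# Minimal 2D U-Net sketch for binary lung segmentation (illustrative only).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class UNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2
        e3 = self.enc3(self.pool(e2))       # 1/4
        b = self.bottleneck(self.pool(e3))  # 1/8
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel lung logits


# Usage: a batch of single-channel CT slices, e.g. 256x256.
logits = UNet()(torch.randn(2, 1, 256, 256))  # shape (2, 1, 256, 256)
```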

The goal of this master's thesis is to explore the use of unlabeled data together with contrastive and other self-supervised training schemes to build a representation that yields good segmentation results from fewer labeled instances.
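
As one illustration of such a scheme, a SimCLR-style contrastive objective (NT-Xent) could be used to pretrain an encoder on unlabeled CT slices before fine-tuning the segmentation network on the few labeled ones. The sketch below shows only the contrastive loss and a hypothetical pretraining loop; `encoder`, `projector`, and `augment` are placeholders, and the thesis is free to use a different self-supervised method.

```python
# Sketch of SimCLR-style contrastive pretraining on unlabeled CT slices
# (illustrative; not the prescribed method of the thesis).
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent loss: two augmented views of the same slice are positives,
    all other slices in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, d)
    sim = z @ z.t() / temperature                  # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))          # exclude self-similarity
    # The positive for sample i is its other view at index i+n (or i-n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Hypothetical pretraining loop: `encoder` could be the U-Net encoder path
# followed by global pooling, `projector` a small MLP head, and `augment`
# random crops / intensity jitter on unlabeled slices -- all placeholders.
#
# for batch in unlabeled_loader:
#     v1, v2 = augment(batch), augment(batch)
#     loss = nt_xent_loss(projector(encoder(v1)), projector(encoder(v2)))
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```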

The master's thesis will be co-supervised by Dr. Peter Hermann and Prof. Dr. med. Michael Quintel from the UMG.