Primary Supervisor: Dr Bartek Papiez

Second Supervisor: Prof Katherine Vallis 

Project Overview:

Contouring tumour volumes is an important early step in the radiotherapy planning process. Several imaging modalities (CT, PET and MRI) are used to guide contouring, but inter-observer variation in tumour volume contouring can be significant. Deep Learning (DL) methods have been developed for the delineation of a limited range of tumour types and have shown good agreement with manual contouring, with a reduction in observer variability. However, combining complementary imaging modalities (e.g. metabolic information from PET and anatomical information from CT) raises questions about the contribution of each modality to the auto-contouring process. Furthermore, medical images have limited resolution and are often degraded by noise, reducing the information that can be reliably extracted from them. Advances both in the incorporation of physics information (e.g. from image acquisition or image reconstruction) into DL algorithms and in the use of hybrid DL methods have been shown to greatly enhance the information that can be derived from medical images. In addition, the inclusion of clinical metadata in algorithm development can further improve precision.
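To make the multimodal question above concrete, the following is a minimal, hypothetical PyTorch sketch (none of these class or function names come from the project) of a common late-fusion design: each modality gets its own encoder, and the extracted features are concatenated before a shared decoder predicts the tumour mask. Replacing the concatenation with attention or gating is one way to probe each modality's contribution.

```python
# Minimal sketch of late-fusion multimodal segmentation (hypothetical names).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3D convolutions with ReLU, as in U-Net-style encoders."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class DualEncoderSegNet(nn.Module):
    """One encoder per modality; features fused by channel concatenation."""
    def __init__(self, features=16):
        super().__init__()
        self.pet_encoder = conv_block(1, features)  # metabolic signal (PET)
        self.ct_encoder = conv_block(1, features)   # anatomical signal (CT)
        self.decoder = nn.Sequential(
            conv_block(2 * features, features),
            nn.Conv3d(features, 1, kernel_size=1),  # tumour/background logit
        )

    def forward(self, pet, ct):
        fused = torch.cat([self.pet_encoder(pet), self.ct_encoder(ct)], dim=1)
        return self.decoder(fused)

# Example: one 64x64x64 patch per modality.
model = DualEncoderSegNet()
pet = torch.randn(1, 1, 64, 64, 64)
ct = torch.randn(1, 1, 64, 64, 64)
logits = model(pet, ct)  # shape: (1, 1, 64, 64, 64)
```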

The aims of this project are:

(1) to understand the contribution of different complementary imaging modalities to the accuracy of tumour segmentation, and to develop algorithms that utilise information from all modalities; and

(2) to develop new approaches that can integrate complementary information from clinical imaging by using physics-informed or biologically inspired DL (a schematic example of the physics-informed pattern follows below).
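As a schematic illustration of the physics-informed pattern described by Karniadakis et al. (2021), the sketch below augments a standard supervised segmentation loss with a penalty that encourages consistency with a known acquisition model. All names are hypothetical, and the Gaussian point-spread function is only a stand-in for whatever physics (e.g. PET acquisition or reconstruction) the project would actually model.

```python
# Schematic physics-informed training loss (all names hypothetical).
import torch
import torch.nn.functional as F

def gaussian_psf_blur(x, sigma=1.0, size=5):
    """Toy differentiable forward model: a 3D Gaussian point-spread function.

    A stand-in for a real acquisition model; expects x of shape (N, 1, D, H, W).
    """
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = g[:, None, None] * g[None, :, None] * g[None, None, :]
    kernel = (kernel / kernel.sum()).view(1, 1, size, size, size)
    return F.conv3d(x, kernel, padding=size // 2)

def physics_informed_loss(seg_logits, target_mask, pred_activity, measured_pet,
                          weight=0.1):
    """Supervised task loss plus a physics-consistency penalty."""
    # Standard supervised segmentation term.
    task_loss = F.binary_cross_entropy_with_logits(seg_logits, target_mask)
    # Physics term: the predicted underlying activity map, pushed through the
    # acquisition model, should reproduce the measured image.
    physics_loss = F.mse_loss(gaussian_psf_blur(pred_activity), measured_pet)
    return task_loss + weight * physics_loss
```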

The initial focus of the project will be on head and neck and lung cancers. Archived PET-CT and MRI scans of patients who underwent radiotherapy, and in which the tumour volume has been defined by experts, will be used for neural network development. Questions to be addressed include how to minimise the number of false-positive predictions (one candidate loss function for this is sketched below) and how to strengthen the generalisability of the developed models.
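One established way to bias a segmentation network against false positives, offered here only as an illustrative candidate rather than the project's chosen method, is the Tversky loss, which generalises the Dice loss with separate weights for false positives and false negatives.

```python
# Tversky loss (Salehi et al., 2017): a Dice generalisation with separate
# penalties for false positives and false negatives.
import torch

def tversky_loss(probs, target, alpha=0.7, beta=0.3, eps=1e-6):
    """alpha weights false positives, beta weights false negatives.

    probs and target are tensors in [0, 1] with the same shape; setting
    alpha > beta penalises false-positive voxels more heavily.
    """
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```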

Training Opportunities

The project will suit students with a background or interest in computer science, computer vision, physics, mathematics, engineering or radiation medicine. The student will contribute to the development and implementation of segmentation algorithms using computer vision and machine learning techniques. They will acquire strong software skills, e.g. in C++/Python, with experience in implementing algorithms for medical imaging. The student will be based at the Big Data Institute (BDI), a world-renowned research centre housed in a new state-of-the-art research facility. Full training will be provided in a range of Health Data Science topics. Skills training will include a grounding in clinical research methods, including clinical data management and analysis. There will be clinical observation opportunities in radiation oncology as appropriate, and the project will involve direct collaboration with clinical colleagues in oncology. The student will have the opportunity to attend research seminars offered at the BDI, the Department of Oncology and the Institute of Biomedical Engineering (IBME), as Dr Papiez is a member of the Imaging Hub at the IBME (https://eng.ox.ac.uk/biomedical-image-analysis/). Subject-specific training will be provided through weekly supervision meetings with both supervisors and through research group meetings.

Relevant Publications:

Trimpl, M.J., Boukerroui, D., Stride, E.P., Vallis, K.A. and Gooding, M.J., 2021. Interactive contouring through contextual deep learning. Medical Physics, 48(6), pp. 2951-2959.

Bourigault, E., McGowan, D.R., Mehranian, A. and Papież, B.W., 2021. Multimodal PET/CT tumour segmentation and prediction of progression-free survival using a full-scale UNet with attention. In 3D Head and Neck Tumor Segmentation in PET/CT Challenge (pp. 189-201). Cham: Springer International Publishing.

Karniadakis, G.E., Kevrekidis, I.G., Lu, L., Perdikaris, P., Wang, S. and Yang, L., 2021. Physics-informed machine learning. Nature Reviews Physics, 3(6), pp. 422-440.

Maier, A., Köstler, H., Heisig, M., Krauss, P. and Yang, S.H., 2022. Known operator learning and hybrid machine learning in medical imaging—a review of the past, the present, and the future. Progress in Biomedical Engineering, 4(2), p. 022002.

Trimpl, M.J., Primakov, S., Lambin, P., Stride, E.P., Vallis, K.A. and Gooding, M.J., 2022. Beyond automatic medical image segmentation—the spectrum between fully manual and fully automatic delineation. Physics in Medicine & Biology, 67(12), p. 12TR01.