Multi-modal Image Fusion in Lung Cancer Application
Adam Szmul, University of Oxford
Date & Time:
Wednesday, 7 March 2018, 15:00
Adam Szmul*
*Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford
In lung cancer, extracting information from imaging data and fusing data originating from different imaging modalities has the potential to provide more accurate diagnoses and to guide more effective radiotherapy treatment. An assessment of regional lung function could help spare well-functioning parts of the lungs during treatment and could be used for follow-up.
To achieve this, accurate image registration between CT images acquired during dynamic imaging has to be performed. We propose combining a supervoxel-based image representation with graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Because the graph is conventionally constructed voxel-wise, the use of graph cuts in this context has been largely limited to two-dimensional (2-D) applications. Our work overcomes some of these limitations by posing the problem on a graph of adjacent supervoxels, reducing the number of nodes from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that a relaxed graph representation of the image, followed by guided image filtering of the estimated deformation field, allows us to model "sliding motion". Applied to lung images, this method yields highly accurate registration (an average target registration error of 1.16 mm per landmark) and anatomically plausible deformation estimates.
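The key idea of posing the problem on a graph of adjacent supervoxels can be illustrated with a minimal sketch: given a 3-D supervoxel label volume, collect the pairs of labels that share a face; these pairs become the edges of the (much smaller) registration graph. This is an illustrative reconstruction, not the authors' implementation, and the label volume here is a toy example.

```python
import numpy as np

def supervoxel_adjacency(labels):
    """Collect pairs of supervoxel labels that share a face in a 3-D label volume.

    Each node of the registration graph is a supervoxel rather than a voxel,
    so the graph size drops from the number of voxels to the number of
    supervoxels (illustrative sketch, not the talk's actual implementation).
    """
    edges = set()
    for axis in range(3):
        a = np.moveaxis(labels, axis, 0)
        lo, hi = a[:-1], a[1:]           # faces between consecutive slices
        diff = lo != hi                  # face crosses a supervoxel boundary
        for u, v in zip(lo[diff], hi[diff]):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)

# toy volume of 4x4x4 voxels split into two supervoxel slabs along z
labels = np.zeros((4, 4, 4), dtype=int)
labels[2:] = 1
print(supervoxel_adjacency(labels))  # [(0, 1)]
```

A graph-cut solver (e.g. a max-flow library) would then assign a displacement label to each supervoxel node, with the edges above carrying the smoothness terms.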
The resulting deformation fields can be further used to estimate lung ventilation maps. We present a novel approach for regional lung ventilation estimation from dynamic lung CT imaging. Our method combines a supervoxel-based image representation with deformable image registration performed between peak breathing phases, tracking the change in intensity of previously extracted supervoxels. Such a region-oriented approach is expected to be more physiologically consistent with lung anatomy than previous methods that rely on voxel-wise relationships. We compare our results with static ventilation images acquired with hyperpolarized xenon-129 MRI (XeMRI). The CT-based ventilation maps and XeMRI images are brought into alignment using a dedicated image registration framework that combines a number of affine and deformable registration steps. Our study uses three patient datasets, each consisting of 4DCT and XeMRI. Based on global correlation coefficients, we achieve a higher average correlation (0.487) than the commonly used voxel-wise ventilation estimation method (0.423). Our method also achieves higher correlation values when ventilated and non-ventilated lung regions are investigated separately. Increasing the number of supervoxel layers further improves the results: a single layer achieves a correlation of 0.393, compared with 0.487 for 15 layers.
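The region-oriented ventilation estimate can be sketched as follows, under two stated assumptions: the fractional air content of a voxel is approximated as -HU/1000 (pure air is -1000 HU, soft tissue roughly 0 HU), and the inhale image has already been warped onto the exhale image by the registration step. The function name and the exact ventilation model are illustrative, not necessarily those used in the talk.

```python
import numpy as np

def supervoxel_ventilation(hu_ex, hu_in_warped, labels):
    """Per-supervoxel ventilation surrogate from peak exhale/inhale CT.

    Approximates air content as -HU/1000 and assigns every voxel of a
    supervoxel the mean change in air content of that region, so the
    estimate follows supervoxel boundaries rather than single voxels.
    Illustrative sketch only (hypothetical model, not the talk's method).
    """
    air_ex = -np.asarray(hu_ex, dtype=float) / 1000.0
    air_in = -np.asarray(hu_in_warped, dtype=float) / 1000.0
    vent = np.zeros(air_ex.shape, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        vent[mask] = air_in[mask].mean() - air_ex[mask].mean()
    return vent

# toy example: two supervoxels, only the first gains air at inhale
labels = np.array([0, 0, 1, 1])
hu_ex = np.full(4, -800.0)
hu_in = np.array([-900.0, -900.0, -800.0, -800.0])
vent = supervoxel_ventilation(hu_ex, hu_in, labels)
```

Averaging within supervoxels is what makes the estimate region-oriented: noise in individual voxels is suppressed, and the resulting map respects the supervoxel boundaries that were fitted to the lung anatomy.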
In this talk we present a novel approach for deformable lung CT image registration that preserves anatomically plausible deformations. We also propose a supervoxel-based method for estimating ventilation maps from previously aligned dynamic CT images. To evaluate the estimated maps, we applied a dedicated registration framework and compared them with static XeMRI ventilation images.