PURPOSE: A robust and accurate method for the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction for interfraction or intrafraction motion.

METHODS: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images, and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by minimizing a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming, so that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground truth and on clinical data sets of 16 patients using manual marker annotations as ground truth.

RESULTS: The entire setup was confirmed to be accurate to approximately 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4 +/- 1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images takes slightly less than a second, with most of the time spent on image preprocessing.

CONCLUSIONS: The authors have developed a method to automatically detect multiple markers in a pair of projection images that is robust, accurate, and sufficiently fast for clinical use. It can be used for kV, MV, or mixed image pairs and can cope with limited motion between the projection images.
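The cost-minimization and centroid steps lend themselves to a compact illustration. The sketch below is not the authors' implementation; it assumes the configuration cost decomposes into a per-candidate image-intensity term and a pairwise geometric term between consecutive markers, so that one 3D candidate per marker can be selected by chain dynamic programming. All function names, the cost callables, and the synthetic data in the usage example are illustrative assumptions.

```python
# Illustrative sketch only (not the published method's code). Assumes a chain-structured
# cost: per-candidate intensity term + pairwise geometric term between consecutive markers.
import numpy as np


def select_marker_configuration(candidates, intensity_cost, pair_cost):
    """Pick one 3D candidate per marker by dynamic programming.

    candidates      : list of length M; candidates[m] is an (N_m, 3) array of candidate positions.
    intensity_cost  : callable (m, i) -> cost of candidate i for marker m (from image intensities).
    pair_cost       : callable (m, i, j) -> geometric cost of choosing candidate i for marker m
                      and candidate j for marker m+1 (e.g. deviation from a reference distance).
    Returns the index of the chosen candidate for each marker (globally optimal for this chain cost).
    """
    M = len(candidates)
    # best[m][i]: minimal cost of markers 0..m given candidate i is chosen for marker m
    best = [np.array([intensity_cost(0, i) for i in range(len(candidates[0]))])]
    back = []
    for m in range(1, M):
        n_prev, n_cur = len(candidates[m - 1]), len(candidates[m])
        cur = np.empty(n_cur)
        arg = np.empty(n_cur, dtype=int)
        for j in range(n_cur):
            totals = [best[m - 1][i] + pair_cost(m - 1, i, j) for i in range(n_prev)]
            arg[j] = int(np.argmin(totals))
            cur[j] = totals[arg[j]] + intensity_cost(m, j)
        best.append(cur)
        back.append(arg)
    # Backtrack the optimal configuration
    choice = [int(np.argmin(best[-1]))]
    for m in range(M - 1, 0, -1):
        choice.append(int(back[m - 1][choice[-1]]))
    return choice[::-1]


def centroid_shift(detected_3d, reference_3d):
    """Translational correction: centroid of detected markers minus centroid of reference markers."""
    return np.mean(detected_3d, axis=0) - np.mean(reference_3d, axis=0)


# Usage with synthetic candidates around a hypothetical reference geometry (mm):
rng = np.random.default_rng(0)
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 10.0, 0.0], [0.0, 10.0, 0.0]])
cands = [ref[m] + rng.normal(0, 1, size=(5, 3)) for m in range(4)]
sel = select_marker_configuration(
    cands,
    intensity_cost=lambda m, i: 0.0,  # placeholder: a real term would come from the filtered images
    pair_cost=lambda m, i, j: abs(
        np.linalg.norm(cands[m + 1][j] - cands[m][i]) - np.linalg.norm(ref[m + 1] - ref[m])
    ),
)
detected = np.array([cands[m][sel[m]] for m in range(4)])
print(centroid_shift(detected, ref))  # translational repositioning estimate in mm
```

The chain decomposition keeps the search tractable (cost linear in the number of markers and quadratic in candidates per marker) while still returning the globally optimal assignment for that cost, which is the spirit of the dynamic programming step described in the abstract.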

Original publication

DOI

10.1118/1.3355871

Type

Journal article

Journal

Med Phys

Publication Date

04/2010

Volume

37

Pages

1554 - 1564

Keywords

Algorithms, Automation, Humans, Image Processing, Computer-Assisted, Imaging, Three-Dimensional, Male, Models, Statistical, Motion, Particle Accelerators, Phantoms, Imaging, Prostatic Neoplasms, Radiotherapy Planning, Computer-Assisted, Reproducibility of Results