
OBJECTIVES: Besides clinical examination, cranial CT plays a critical role in neurosurgical diagnostics. In trauma cases or perioperatively, low-barrier access to CT-like imaging would be highly beneficial. This feasibility study therefore examines, at an early stage, whether and how well synthetic cranial CT imaging can be generated from biplanar radiographs of adult neurosurgical patients using deep learning.

MATERIALS AND METHODS: Two 2D-to-3D generative adversarial networks (GANs) were trained to generate synthetic cranial CTs from radiographs taken in two planes. Model 1 uses digitally reconstructed radiographs (DRRs) as input, while model 2 was trained on real X-rays. In total, model 1 was trained and validated using 235 images from three separate centers; model 2 was trained and tested using 1323 images from a single center.

RESULTS: The model using DRRs as input reached a peak signal-to-noise ratio (PSNR) of 15.61 and a structural similarity index measure (SSIM) of 0.782 during external validation. The second model, using real X-rays as input, attained a PSNR of 14.69 and an SSIM of 0.717 upon internal validation.

CONCLUSIONS: At the present stage, the synthetic cranial tomography scans generated in this study show promise but do not seamlessly correspond to ground-truth CTs. However, this proof-of-concept study is the first to derive such artificial cranial images using deep learning and can serve as a starting point for further investigation.

KEY POINTS: Question Cranial computed tomography involves radiation and logistical challenges, and access is limited in rural areas. Generating synthetic CT images with deep learning could address these challenges. Findings Two deep-learning models were trained to produce CT images from radiographs. Reconstruction from DRRs is promising, but using real X-rays remains more challenging. Clinical relevance As a proof of concept, the models' exact clinical relevance remains to be defined. The proposed approach may broaden access to tomographic neuroimaging, reduce radiation, and enhance intraoperative and possibly even diagnostic support, potentially improving outcomes in neurosurgery and neuro-critical care.
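The study reports model quality using PSNR and SSIM. As a minimal illustration of how these metrics are computed, the sketch below implements PSNR and a simplified single-window SSIM with NumPy; it is not the authors' evaluation code (published pipelines typically use a sliding Gaussian window for SSIM, as in scikit-image), and the function names and the global-window simplification are assumptions for clarity.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

def ssim_global(ref, test, data_range=1.0):
    """Simplified SSIM over one global window (no sliding Gaussian window)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Toy usage on a synthetic "slice": a constant offset of 0.1 on a unit range
# gives an MSE of 0.01 and hence a PSNR of 10 * log10(1 / 0.01) = 20 dB.
ref_slice = np.zeros((4, 4))
off_slice = np.full((4, 4), 0.1)
print(round(psnr(ref_slice, off_slice), 1))  # → 20.0
```

In practice these metrics are averaged over all slices (or computed volumetrically) between each synthetic CT and its ground-truth scan, which is how summary values such as PSNR 15.61 / SSIM 0.782 arise.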

Original publication

DOI

10.1007/s00330-025-12253-1

Type

Journal article

Publication Date

2026-01-15

Keywords

Deep learning, Head, Neuroimaging, Neurosurgery, Tomography (X-ray computed)