INTRODUCTION: The advent of the operating microscope revolutionized neurosurgery by introducing stereoscopic visualization and enabling microneurosurgery. Endoscopy, in turn, brought a compact, wide-angle view through minimal access corridors but sacrificed depth perception. Although 3D endoscopes partially restore stereopsis, they remain costly and hardware-dependent. Synthetic or virtual 3D endoscopy aims to recreate depth perception algorithmically from standard 2D endoscopic footage using computer vision and deep learning.

AREAS COVERED: This expert review delineates the three main methodological pillars of depth estimation in endoscopy (motion-based, shading-based, and learning-based approaches) and outlines how recent advances such as MASt3R (Matching and Stereo 3D Reconstruction) and DINOv2/3 foundation models have transformed dense depth prediction. The authors highlight datasets, algorithmic trade-offs, and challenges specific to skull base surgery, including reflective surfaces, deformation, and limited texture. The review synthesizes evidence showing that software-based virtual 3D reconstruction can achieve relatively low-latency performance and generate geometrically consistent, visually realistic views without dedicated hardware.

EXPERT OPINION: Virtual 3D endoscopy may enhance hand-eye coordination, depth judgment, and training for junior surgeons. While measurable clinical gains may remain modest, software-based, hardware-agnostic 3D visualization offers a pragmatic, scalable alternative to expensive optical 3D systems, potentially democratizing stereoscopic vision in skull base and endoscopic neurosurgery.
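To make the motion-based pillar concrete: depth can be recovered from standard 2D footage by triangulating a correspondence seen from two endoscope poses. The sketch below is a minimal, self-contained illustration using linear (DLT) triangulation with synthetic pinhole cameras; the intrinsics, baseline, and point coordinates are invented for the example and are not drawn from the review.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2D-2D correspondence.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coordinates.
    Solves A*X = 0 for the homogeneous 3D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                # null-space vector = homogeneous 3D point
    return X[:3] / X[3]       # dehomogenize

# Two synthetic pinhole views: same intrinsics, 5 mm lateral baseline
# (stand-ins for two endoscope poses along a surgical corridor).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])

X_true = np.array([10.0, -4.0, 60.0])        # point 60 mm in front of the scope
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]              # projection in view 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]              # projection in view 2

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 3))                    # recovers the point, hence its depth (Z)
```

With noise-free projections the estimate matches the true point; in practice, correspondences from real endoscopic frames (e.g. MASt3R-style dense matches) are noisy, which is where learning-based refinement enters.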
Journal article
2026-04-01T00:00:00+00:00
26
371-380
9
Skull base, depth prediction, endoscopy, microsurgery, neurosurgery, synthetic imaging, virtual 3D, Humans, Imaging, Three-Dimensional, Skull Base, Neurosurgical Procedures, Neuroendoscopy, Depth Perception, Endoscopy, Surgery, Computer-Assisted