The spine, as the axis of the human body, plays a crucial role in bearing loads, enabling movement, and protecting the spinal cord and cauda equina. The incidence of spinal diseases caused by trauma, degeneration, and tumors has been rising year by year. Spinal surgery is a challenging procedure due to its technical complexity and its proximity to vital structures such as the spinal cord, nerve roots, and aorta [1]. In recent years, demand for spinal surgery has grown [2-4], with an increasing preference for minimally invasive spine surgery (MISS) over open surgery. MISS reduces damage to surrounding tissues, blood loss, postoperative pain, and length of hospital stay [5], resulting in improved efficacy and lower overall costs [6,7]. During MISS, however, surgeons rely heavily on image guidance, typically intraoperative 2D fluoroscopy or navigation, to plan and place surgical hardware such as pedicle screws [8,9].

Image-guided surgery (IGS), also known as computer-assisted surgery, is a widely used navigation technology in minimally invasive orthopedic procedures [10]. Advanced surgical guidance systems developed worldwide locate lesions, track surgical instruments, and guide the surgical process by processing medical imaging data. These systems have been shown to improve the accuracy of surgical procedures [11,12], simplify surgical steps, shorten operative time, and effectively reduce radiation exposure and surgical complications [13]. Conventional image-guided systems, however, depend on the surgeon's visuospatial skills: the surgeon must mentally reconstruct 3D anatomy from intraoperative 2D images while constantly switching attention between the screen and the surgical site, which makes it difficult to judge the correct direction and position of tools, surgical targets, and anatomical structures, reducing surgical efficiency and accuracy. Intraoperative 3D fluoroscopy and computed tomography (CT) imaging also expose patients and staff to potentially harmful ionizing radiation [8,14] and lengthen patient registration time [15,16].

In recent years, augmented reality (AR) technology has advanced significantly and is now being applied in MISS. Through dedicated display equipment, AR lets surgeons view the surgical field with virtual images, including the real-time positions of surgical instruments, overlaid on the real environment. By superimposing a reconstructed 3D model of the patient's spine onto the actual surgical site, AR gives surgeons a clear understanding of the patient's anatomy, supporting better surgical judgment and execution; this in turn reduces patient pain, simplifies surgery, and improves procedural success rates [17]. AR thus overcomes the inherent limitations of conventional surgical navigation systems described above, effectively giving surgeons 'see-through' vision. Growing evidence suggests that AR-assisted pedicle screw placement improves surgical accuracy and clinical outcomes while reducing the radiation exposure required for navigation [8,17-19]; the integration of AR in spinal surgery has been reported to reduce radiation exposure by over 70% [8,20].

Spatial augmented reality combines computer graphics with spatial tracking to enable navigation. In a spine surgery navigation system based on spatial augmented reality, the key step is registering the reconstructed medical images to the patient's world coordinate system. At present, registration based on physical anatomical features is the predominant method in surgical navigation systems [21], but it requires invasive exposure of the target anatomy and manual segmentation of the corresponding features in preoperative images. Another common spatial registration method attaches markers to the patient's skin to establish the relationship between the patient and the medical image [22]; however, displacement of the skin relative to the underlying bone during surgery introduces registration errors.
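
At its core, marker-based spatial registration reduces to a paired-point rigid alignment: given corresponding marker positions in the medical image and on the patient, recover the rotation and translation relating the two coordinate systems. The Python sketch below illustrates this with the standard SVD-based (Arun/Kabsch) solution; the marker coordinates and the `rigid_register` helper are invented for illustration and do not come from any system cited here.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t such that R @ src + t ~= dst."""
    src_c = src - src.mean(axis=0)             # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # reflection-safe rotation
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical marker positions (mm) in the medical image coordinate system
markers_image = np.array([[0., 0., 0.], [40., 0., 0.], [0., 40., 0.], [0., 0., 40.]])

# The same markers in the patient (world) coordinate system, simulated here
# by a known 30-degree rotation about z plus a translation
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.],
                   [np.sin(a),  np.cos(a), 0.],
                   [0.,         0.,        1.]])
markers_patient = markers_image @ R_true.T + np.array([10., -5., 2.])

R, t = rigid_register(markers_image, markers_patient)
residuals = np.linalg.norm(markers_image @ R.T + t - markers_patient, axis=1)
print(residuals)  # per-marker fiducial registration error; ~0 for noise-free data
```

The skin-displacement problem noted above shows up here directly: any motion of the markers relative to bone between imaging and tracking corrupts `markers_patient` and propagates into the estimated transform.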

A novel image-guided system, FLASH Navigation (SeaSpine, California, USA), uses machine vision to build a 3D image of the patient's surface bone anatomy: it projects visible light onto the exposed vertebrae during surgery to construct a digital terrain map, which is then correlated with the spinal anatomy in preoperative CT data [23,24]. Because this registration requires the bone to be exposed, however, the system's applicability to minimally invasive surgery is limited.
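
The cited reports do not specify the correlation algorithm, but aligning an intraoperative surface map to a bone surface extracted from CT is classically formulated as an iterative closest point (ICP) problem. The sketch below illustrates that generic idea only, with synthetic point clouds; `best_rigid` and `icp` are assumed helper names, not the product's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """SVD solution for the rigid transform mapping src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def icp(surface, ct_surface, iters=50):
    """Repeatedly match each intraoperative point to its nearest CT point,
    then re-estimate the rigid transform from those correspondences."""
    tree = cKDTree(ct_surface)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(surface @ R.T + t)   # current nearest neighbors
        R, t = best_rigid(surface, ct_surface[idx])
    return R, t

# Synthetic stand-ins: a wavy "CT bone surface" and an intraoperative sample
# of part of it, displaced by a small known rotation and translation
rng = np.random.default_rng(0)
ct_surface = rng.uniform(-30, 30, size=(2000, 3))
ct_surface[:, 2] = np.sin(ct_surface[:, 0] / 10.0) * 5.0
a = np.deg2rad(2.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.],
               [np.sin(a),  np.cos(a), 0.],
               [0.,         0.,        1.]])
surface = ct_surface[:500] @ Rz.T + np.array([1.5, -1.0, 0.5])

R, t = icp(surface, ct_surface)
print(np.round(R, 3), np.round(t, 2))  # should roughly undo the simulated displacement
```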

Therefore, this study proposes a novel spatial registration method based on 2D/3D registration to accurately align the medical image coordinate system with the patient coordinate system, providing accurate and reliable image guidance for augmented reality spine surgery navigation. First, a personalized marker that can be recognized by both X-ray imaging and a monocular camera is designed to register the optical navigation space to the augmented reality space. Then, grayscale-based 2D/3D registration is employed to align the preoperative CT image with the intraoperative X-ray image, and the resulting transformation matrix establishes the spatial registration between the medical image coordinate system and the patient coordinate system. Finally, the spatial registration results are applied to the augmented reality navigation system for spinal surgery, enabling fusion of the virtual model and the real patient image on the display screen. To validate the surface registration accuracy of this 2D/3D-registration-based spatial registration method in augmented reality spine surgery, a phantom experiment was conducted. The design and implementation of the system are detailed in the following sections.
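
The grayscale-based 2D/3D registration step can be pictured as follows: a digitally reconstructed radiograph (DRR) is rendered from the preoperative CT at a candidate pose, compared with the intraoperative X-ray under an intensity similarity metric, and the pose is optimized until the two images agree. The Python sketch below is a deliberately simplified stand-in, assuming a parallel projection instead of a calibrated cone-beam geometry, normalized cross-correlation as the metric, and synthetic data; it is not the implementation evaluated in this work.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def drr(volume, pose):
    """Parallel-projection DRR at a pose of 3 Euler angles (degrees) plus
    2 in-plane translations (voxels). Translation along the ray axis is
    unobservable in a parallel projection, so it is omitted here."""
    R = Rotation.from_euler("xyz", pose[:3], degrees=True).as_matrix()
    center = (np.array(volume.shape) - 1) / 2.0
    shift = np.array([0.0, pose[3], pose[4]])
    offset = center - R @ center + shift       # rotate about the volume center
    moved = affine_transform(volume, R, offset=offset, order=1)
    return moved.sum(axis=0)                   # integrate along the "ray" axis

def ncc(a, b):
    """Normalized cross-correlation; higher means more similar."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

# Synthetic CT volume standing in for the preoperative scan
vol = np.zeros((48, 48, 48))
vol[14:34, 20:30, 18:30] = 1.0

# Simulated "intraoperative X-ray": a DRR rendered at a known ground-truth pose
true_pose = np.array([4.0, -3.0, 2.0, 3.0, -2.0])
xray = drr(vol, true_pose)

# Optimize the pose so the rendered DRR matches the X-ray (maximize NCC)
res = minimize(lambda p: -ncc(drr(vol, p), xray), x0=np.zeros(5), method="Powell")
print(np.round(res.x, 2))  # should land near true_pose for this easy synthetic case
```

Because depth along the viewing ray is poorly constrained by a single projection, clinical 2D/3D registration typically relies on a calibrated perspective geometry and often more than one X-ray view.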
