2.4 Virtual and real registration
We developed software for a spinal surgery navigation system that fuses 3D virtual spinal models with the real patient through recognition of ArUco markers. The surgical scene is captured with a camera, and during registration the relative pose between the camera and the personalized marker is computed in real time. Computer vision techniques then position the virtual spinal model accurately, fusing the virtual model with the surgical scene.

Virtual-real image fusion is achieved through ICP registration of the personalized markers, which relate the patient's position in the real world to the medical images. The patient registration process is illustrated in Fig. 8.

Fig. 8 illustrates the patient registration process, in which the medical images and the patient are registered; once registered, the virtual organ is fused with the patient to guide the surgery.

By capturing the position of the personalized marker in real space with the camera, we compute the transformation matrix using the least-squares (LS) method and the singular value decomposition (SVD) algorithm. The objective function of the algorithm is given in equation (7), where the rotation matrix R and the translation matrix T together form the transformation matrix, and Pw and Pm denote the coordinates of the four corner points of the marker in the world and model coordinate systems, respectively. The LS/SVD registration aligns the medical images and models with the actual patient, completing the intraoperative registration.

In the augmented reality spine surgery navigation system presented in this paper, the virtual model is dynamically registered in the world coordinate system using the personalized marker to obtain the transformation matrix.
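The LS/SVD solution for R and T in equation (7) can be sketched with a standard Kabsch-style implementation. The code below is an illustrative reconstruction (the function name and point layout are our own), not the system's actual source:

```python
import numpy as np

def rigid_transform_svd(P_m: np.ndarray, P_w: np.ndarray):
    """Least-squares rigid transform (R, T) mapping model points P_m onto
    world points P_w, solved via SVD (Kabsch algorithm).
    P_m, P_w: (N, 3) arrays of corresponding points, N >= 3."""
    c_m = P_m.mean(axis=0)                 # centroid of model points
    c_w = P_w.mean(axis=0)                 # centroid of world points
    H = (P_m - c_m).T @ (P_w - c_w)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = c_w - R @ c_m
    return R, T
```

With the four detected marker corners as Pm (model frame) and Pw (world frame), `rigid_transform_svd(Pm, Pw)` returns the rotation and translation that best map the model onto the world measurements in the least-squares sense; the reflection guard keeps the result a proper rotation even for coplanar corner points.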
The resulting transformation allows the spine model to be registered and fused with the reconstructed 3D virtual image. The relationship between the personalized marker and the medical images in virtual space is determined by the preceding registration process. This method eliminates the need for a preoperative rigid connection, reducing trauma and achieving better alignment between the marker and the patient.

2.5 Experimental design
To validate the accuracy of the space registration method, two types of experiments were conducted: 2D/3D image registration experiments and spinal model experiments. The spinal model used in the experiments was prepared based on a human spine. Medical images were obtained from the Department of Radiation Therapy of Tianjin Medical University Cancer Institute and Hospital, Tianjin. CT scanning was used to obtain the CT medical sequence images of the spine model, while X-ray images were taken to simulate intraoperative X-ray images. The parameters of the CT sequence images were: slice thickness 3 mm, pitch 0.8027, physical size 411×411 mm, and image size 512×512 pixels. The parameters of the X-ray images were: pixel spacing 0.388 mm and image size 1024×768 pixels.

In the 2D/3D registration experiment, CT medical images and X-ray images of two patients and of the spinal model were used. For the spinal model, the X-ray image was rotated clockwise about the z-axis by ten degrees each time, and the experiment was repeated ten times. The registration experiment was also conducted with the medical images of the two patients. This experiment used the "gold standard" data for 2D/3D registration [28], and the registration success rate was evaluated with the mean target registration error (mTRE) defined in equation (8), where T and T' denote the transformation matrix obtained after registration and the gold-standard transformation matrix, respectively, and P is a set of target points randomly selected from the CT volume data.
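The mTRE of equation (8) can be computed as in the following sketch, with the symbols as in the text: T and T' taken as 4×4 homogeneous transforms and P a set of target points. This is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def mtre(T_reg: np.ndarray, T_gold: np.ndarray, P: np.ndarray) -> float:
    """Mean target registration error between a registration result T_reg
    and the gold-standard transform T_gold (both 4x4 homogeneous matrices),
    averaged over target points P of shape (N, 3)."""
    P_h = np.hstack([P, np.ones((len(P), 1))])       # homogeneous coordinates
    diff = (P_h @ T_reg.T - P_h @ T_gold.T)[:, :3]   # per-point displacement
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```

A registration trial is counted as successful when the mTRE of its result falls below a chosen threshold, so the success rate follows directly from this quantity.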
The initial transformation parameters were obtained by perturbing the gold-standard parameters, yielding an initial mTRE distribution between 0 and 10 mm. The pattern intensity similarity measure was used, with parameters α = 10 and β = 3.

A surface registration experiment was designed to evaluate the accuracy and practicality of the space registration method. The experimental process is depicted in Fig. 9 and involves the following steps:

Step 1: The patient's CT medical sequence images were reconstructed in 3D using the marching cubes algorithm, producing a 3D virtual model of the spine.
Step 2: The personalized marker image was captured in real time using the ArUco marker-based matching algorithm, spatially registering the patient to the world coordinate system.
Step 3: The transformation matrix from the marker coordinate system to the X-ray image coordinate system and the transformation matrix from the intraoperative X-ray image coordinate system to the CT image coordinate system were calculated, completing the spatial registration of the medical image coordinate system to the world coordinate system.
Step 4: Finally, the augmented reality spine surgery navigation system was used to overlay and fuse the virtual spine model with the patient image, achieving external visualization of the spine image.

Fig. 9 illustrates the experimental process for measuring the surface registration error of the model: the processed medical images are registered, and the results are applied in the augmented reality spinal surgery navigation system.

The surface registration error of the balls on the spine model was used as the evaluation metric, defined as the root-mean-square (RMS) error between the positions of the real balls and the corresponding virtual balls in the world coordinate system; its expression is given in equation (10). The virtual model and the real model are shown in Fig. 10(a) and Fig. 10(b), respectively.
The real positions of the balls were measured using the surgical tracker developed by our research group [29], as shown in Fig. 10(c). After registration, the virtual image is fused with the spine model, as shown in Fig. 10(d), and the software outputs the positions of the five virtual balls along with a set of error values. The experiment was then repeated with the spatial position of the model changed for each trial; a total of 10 experiments were conducted.

Fig. 10 depicts the ball registration error, showing the virtual 3D model (a), the real spine model (b), the tracker used to measure the real ball positions (c), and the virtual-real fusion effect (d).
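The RMS surface registration error of equation (10) reduces to the following computation, assuming five corresponding real/virtual ball positions expressed in the world coordinate system (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def rms_error(real_pts: np.ndarray, virtual_pts: np.ndarray) -> float:
    """Root-mean-square registration error between measured real ball
    positions and the corresponding virtual ball positions, both given
    as (N, 3) arrays in the world coordinate system."""
    d2 = np.sum((real_pts - virtual_pts) ** 2, axis=1)  # squared distances
    return float(np.sqrt(np.mean(d2)))
```

One RMS value per trial, over ten trials with the model repositioned each time, yields the error statistics reported for the spinal model experiment.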

