Accurate Spinal Surgery Navigation Using Personalized Markers and 2D/3D Registration
Since the shape and size of the personalized marker are precisely known in 3D, the 3D-to-2D point-pair relationship can be calculated with the Perspective-n-Point (PnP) method once the marker is detected in the 2D image. This relationship is represented by Equation (2):

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} x_m \\ y_m \\ z_m \\ 1 \end{bmatrix} \tag{2} $$

In this equation, $(u, v)$ represents the coordinate of the point in the pixel coordinate system, $(x_c, y_c, z_c)$ represents the coordinate of the point in the camera coordinate system, $(x_m, y_m, z_m)$ represents the coordinate of the point in the marker coordinate system, $z_c$ represents the depth of the point, $K$ represents the camera's intrinsic matrix, and $[R \mid t]$ represents the pose transformation from the marker coordinate system to the camera coordinate system.

The ArUco coordinate system is used as the marker coordinate system, and the marker is initially placed on the $z = 0$ plane, as shown in Fig.4. Since the true physical size of the marker is known, the three-dimensional spatial coordinates of the marker's four corner points ($A$, $B$, $C$, $D$) can be obtained. The corresponding coordinates of these points in the pixel coordinate system ($a$, $b$, $c$, $d$) are obtained through monocular camera detection and recognition, and the camera's intrinsic matrix $K$ was obtained in previous camera calibration work [26].

Fig.4 illustrates the coordinate-system transformation in ArUco. Taking point $A$ as an example, the relationship is represented by Equation (3):

$$ z_c^A \begin{bmatrix} u_a \\ v_a \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} x_A \\ y_A \\ 0 \\ 1 \end{bmatrix} \tag{3} $$

In this equation, $R$ and $t$ are the rotation matrix and translation vector to be solved, which represent the extrinsic parameters of the camera.

According to the principle of C-arm X-ray image acquisition, the C-arm X-ray radiation source is treated as a monocular camera to obtain the pixel coordinates of the personalized marker's four vertices.
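The forward projection model of Equations (2)–(3) can be sketched as follows. This is a minimal sketch: the intrinsic matrix, marker size, and pose below are illustrative placeholder values, not the calibration results of this work.

```python
import numpy as np

# Illustrative intrinsic matrix K (fx, fy, cx, cy are placeholders,
# not the calibration values from [26]).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Marker corners A, B, C, D on the z = 0 plane of the marker
# coordinate system (a 40 mm square marker, for illustration).
s = 40.0
corners_3d = np.array([[0.0, 0.0, 0.0],
                       [s,   0.0, 0.0],
                       [s,   s,   0.0],
                       [0.0, s,   0.0]])

# An example pose [R | t] from marker to camera coordinates.
R = np.eye(3)
t = np.array([10.0, -5.0, 500.0])

def project(p_marker, K, R, t):
    """Apply Equation (2): z_c [u, v, 1]^T = K (R p + t)."""
    p_cam = R @ p_marker + t   # marker -> camera coordinates
    uvw = K @ p_cam            # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]    # divide out the depth z_c

pixels = np.array([project(p, K, R, t) for p in corners_3d])
print(pixels)
```

In practice the inverse problem is solved: given the four 3D–2D corner pairs, a PnP solver (for example OpenCV's `cv2.solvePnP`; the `SOLVEPNP_IPPE_SQUARE` variant targets planar square markers) recovers $R$ and $t$.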
After obtaining the 2D and 3D point pairs, the transformation between the marker coordinate system and the camera coordinate system can be calculated, yielding the transformation matrix.

2.3 X-ray image to CT registration

To avoid the trauma caused to patients by traditional registration methods, we used a personalized marker that appears only in intraoperative X-ray images and does not exist in preoperative CT images. Through 2D/3D registration, we obtained the relationship between the marker and the CT medical images.

The process of 2D/3D registration is as follows:
1) Perform 3D modeling of the preoperative CT images to obtain a virtual 3D model of the spine;
2) Take intraoperative lateral X-ray images of the spine;
3) Set the initial transformation parameters to determine the initial guess of the relative position between the X-ray image and the CT data for optimization;
4) Project the virtual 3D spine model from step 1 with a point light source using ray-tracing technology, obtaining the digitally reconstructed radiograph (DRR) image in the lateral plane according to the current transformation parameters;
5) Calculate the similarity between the DRR image and the intraoperative lateral X-ray image;
6) If the similarity result meets the criterion, the 3D-2D registration is complete, yielding the position of the point light source relative to the 3D model in virtual space; otherwise, update the point-light-source parameters and repeat steps 3 to 6.

Fig.5 illustrates the 2D/3D registration process.

This study used the ray-casting algorithm to operate on the CT volume data, which simulates the formation process of X-ray imaging, as shown in Fig.6.

Fig.6 illustrates the principle of DRR generation.
Given initialization parameters p, a ray is emitted from the virtual point source, and the intersection of the ray with the projection plane determines the corresponding pixel in the DRR. The absorption coefficient at each intersection of this ray with the CT volume data is calculated, and the CT values accumulated along the entire path give the pixel value of the DRR image at the corresponding detector point. Repeating these steps for every ray, the complete DRR image is obtained once all ray projections are finished. The DRR image of the spinal model is shown in Fig.7.

The DRR image is compared with the X-ray image to be registered, and a similarity measure based on pattern intensity (PI) is calculated. This measure judges whether registration has succeeded by checking whether the pattern in the difference image (the difference in gray values between the two images) has been minimized. The similarity measurement function is represented by Equation (6):

$$ P_{r,\sigma} = \sum_{i,j} \; \sum_{d \le r} \frac{\sigma^2}{\sigma^2 + \left( I_{\mathrm{dif}}(i,j) - I_{\mathrm{dif}}(v,w) \right)^2}, \qquad d = \sqrt{(i-v)^2 + (j-w)^2} \tag{6} $$

In this equation, $i$, $j$, $v$, and $w$ represent pixel coordinates in the image; $I_{\mathrm{dif}}(i,j)$ represents the pixel value at coordinate $(i,j)$ of the difference image of the two images to be registered; $I_{\mathrm{dif}}(v,w)$ represents the pixel value at a coordinate $(v,w)$ in the neighborhood of pixel $(i,j)$ of the difference image; $r$ represents the radius of the effective calculation area of the pattern intensity for each pixel; $d$ represents the distance between the two pixels; the constant $\sigma$ is the weighting of the function, used to suppress the interference of noise; and $P_{r,\sigma}$ represents the final pattern-intensity value.

In the process of searching for the optimal solution, the Powell algorithm is used to speed up the search. The Powell algorithm is a conjugate-direction search method that constructs conjugate search directions directly from function values. It requires no gradients, Hessian matrices, or other complex calculations, making it fast and effective.
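The pattern-intensity measure of Equation (6) can be sketched in NumPy as follows; the radius, weighting constant, and test images are illustrative, and the loop-based form favors clarity over speed.

```python
import numpy as np

def pattern_intensity(i_dif, r=3, sigma=10.0):
    """Equation (6): sum sigma^2 / (sigma^2 + (I(i,j) - I(v,w))^2)
    over all pixel pairs whose distance d is at most r."""
    h, w = i_dif.shape
    # Neighborhood offsets within radius r (excluding the pixel itself,
    # whose term would be a constant).
    offs = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if 0 < dy * dy + dx * dx <= r * r]
    total = 0.0
    for y in range(h):
        for x in range(w):
            for dy, dx in offs:
                v, u = y + dy, x + dx
                if 0 <= v < h and 0 <= u < w:
                    diff = i_dif[y, x] - i_dif[v, u]
                    total += sigma ** 2 / (sigma ** 2 + diff ** 2)
    return total

# A flat difference image has no residual pattern (good registration),
# so it scores higher than a noisy difference image.
flat = np.zeros((8, 8))
noisy = np.random.default_rng(0).normal(0.0, 50.0, (8, 8))
print(pattern_intensity(flat), pattern_intensity(noisy))
```

Because only gray-value differences between neighboring pixels enter the sum, the measure is insensitive to a constant intensity offset between the DRR and the X-ray image, which is one reason pattern intensity is robust for this comparison.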
By using the Powell algorithm for optimization, the transformation parameters are output once the measure reaches its optimum, and the transformation matrix from the X-ray image coordinate system to the CT image coordinate system is solved.

2.4 Virtual and real registration

We developed spinal surgery navigation software that realizes the virtual-real fusion of the 3D virtual spinal model and the real patient by recognizing ArUco markers. During registration, the relative position between the camera and the personalized marker is calculated in real time. Computer vision techniques place the virtual spinal model in the correct position, realizing the fusion of the virtual spinal model with the surgical scene.
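The overall navigation chain combines the marker-to-C-arm pose from PnP with the X-ray-to-CT transform from 2D/3D registration and the real-time marker-to-camera pose from ArUco detection. How these frames chain together depends on conventions the text does not spell out, so the sketch below only illustrates the homogeneous-matrix bookkeeping, with placeholder poses.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder poses (identity rotations for readability):
# marker -> C-arm (PnP, with the X-ray source treated as a camera)
T_xray_marker = make_T(np.eye(3), np.array([0.0, 0.0, 400.0]))
# X-ray/C-arm -> CT (result of the 2D/3D registration)
T_ct_xray = make_T(np.eye(3), np.array([5.0, 0.0, -50.0]))
# marker -> optical camera (real-time ArUco detection)
T_cam_marker = make_T(np.eye(3), np.array([10.0, -5.0, 500.0]))

# marker -> CT: links the personalized marker to the preoperative model.
T_ct_marker = T_ct_xray @ T_xray_marker
# CT -> optical camera: places the virtual spine model in the live view.
T_cam_ct = T_cam_marker @ np.linalg.inv(T_ct_marker)

p_ct = np.array([0.0, 0.0, 0.0, 1.0])   # a point on the virtual model
p_cam = T_cam_ct @ p_ct
print(p_cam[:3])
```

Because `T_ct_marker` is fixed once the intraoperative 2D/3D registration is done, only `T_cam_marker` needs to be re-estimated per frame, which is what makes the real-time overlay feasible.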