Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences

Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3
1 Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg
2 Healthcare Sector, Siemens AG, Forchheim
3 Erlangen Graduate School in Advanced Optical Technologies (SAOT)
jian.wang@cs.fau.de

In: H.-P. Meinzer et al. (Hrsg.), Bildverarbeitung für die Medizin 2013, Informatik aktuell, Springer-Verlag Berlin Heidelberg 2013. DOI: 10.1007/978-3-642-36480-8_24

Abstract. A novel depth-layer-based patient motion compensation approach for 2D/3D overlay applications is introduced. Depth-aware tracking enables the automatic detection and correction of patient motion without the iterative, frame-by-frame computation of digitally reconstructed radiographs (DRRs). Depth layer images are computed to match 2D features and reconstruct them into 2D+ space. Using standard 2D tracking and the additional depth information, we directly estimate the 3D rigid motion. The experimental results show that with about 30 depth layers a 3D motion can be recovered with a projection error below 2 mm.

1 Introduction

In interventional radiology, a typical scenario is that real-time information (e.g. the projected position of a catheter tip) is provided by 2D X-ray sequences acquired with a C-arm system. A pre-interventional 3D volume is overlaid to augment the 2D images with additional spatial information, referred to as 2D/3D overlay. Over the past decades, various 2D/3D registration methods have been developed [1]. In today's clinical practice, a 2D/3D overlay can achieve high accuracy at the beginning of a procedure, as well as after the correction of misalignments caused by patient motion during the procedure. However, in most state-of-the-art applications the patient motion needs to be detected by the physician, and the correction is triggered manually. Our goal is to achieve automatic patient motion detection and real-time compensation.

State-of-the-art 2D/3D registration methods [1] commonly depend on the expensive iterative computation of DRR images from the 3D volume. Meanwhile, standard tracking-based motion detection and compensation methods are not optimal for X-ray sequences, because structures from different depths overlap each other in the projection images.

In this paper, a novel 2D/3D registration approach for rigid patient motion compensation is introduced. Instead of iteratively computing DRRs frame by frame, we introduce the concept of depth layer images for recovering the depths of 2D features. A standard tracking method is then employed to find 2D correspondences between neighboring frames, and the 3D rigid motion is directly estimated from the depth-aware correspondences.

2 Materials and methods

The proposed approach takes advantage of the depth information available from the initially registered 3D volume and estimates the rigid 3D motion from depth-aware tracking. As illustrated in Fig. 1, it starts with an initially registered 2D/3D overlay, based on which depth layer images are generated (Sec. 2.1) and compared to the initial frame for the 2D+ reconstruction of 2D patches (Sec. 2.2). A standard tracker is then employed to track the patches over time. The rigid 3D motion is estimated taking the depth information into account (Sec. 2.3) and is then applied to the 3D volume for motion compensation.

2.1 Depth layer generation

Depth information is lost in X-ray projections, which is one of the reasons for using a 2D/3D overlay in the first place. Given a registered 2D/3D overlay, the depth of the overlaid 3D volume can be encoded in color to improve depth perception [2]. This leads to the idea of recovering the depth information of 2D X-ray features from the initially registered 3D volume. We divide the 3D volume into a stack of depth layer volumes V_{d_i}, i = 1, ..., n (n is the number of depth layers), such that all subvolumes are uniformly divided along the principal ray direction (Fig. 2); each subvolume can be rendered independently to generate a depth layer image D_i. One advantage of this concept is that the depth layers decompose overlapping structures into distinct depth intervals. Another advantage is that the windowing and volume rendering technique can be selected specifically according to the registration content. For example, in our work bone structures are rendered as the structures of interest, since bones are almost rigid during the intervention and can represent the rigid patient motion. Contour-enhanced volume rendering is used for the purpose of 2D/3D matching. Since the patient motion during the procedure is relatively small, the depth layer generation is done only once, as an initialization for a specific working position.

Fig. 1. Flow chart of depth-layer-based 2D/3D registration for motion compensation.
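To make the layer generation concrete, the following minimal sketch (our illustration, not the authors' implementation) splits a volume into n uniform depth-layer subvolumes. It assumes the volume has already been resampled so that its first axis is aligned with the principal ray direction of the current C-arm pose, and it works in voxel units; in practice the center depths would be scaled by the voxel spacing.

```python
import numpy as np

def split_into_depth_layers(volume, n_layers):
    """Split a volume into n uniform depth-layer subvolumes V_{d_i}.

    Assumes `volume` (a z-y-x numpy array) is resampled so that its
    first axis is aligned with the principal ray direction. Returns
    the subvolumes together with their center depths d_i.
    """
    bounds = np.linspace(0, volume.shape[0], n_layers + 1).astype(int)
    layers, center_depths = [], []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        layers.append(volume[lo:hi])           # subvolume V_{d_i}
        center_depths.append(0.5 * (lo + hi))  # center depth d_i
    return layers, center_depths

# Each V_{d_i} would then be rendered independently (e.g. with
# contour-enhanced volume rendering) to obtain a depth layer image D_i.
```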

2.2 2D/3D matching and 2D+ reconstruction

Given the depth layers generated from the initially registered 3D volume, the 2D/3D matching procedure is performed between the X-ray image I_0 and the depth layer images D_i (i = 1, ..., n). As distinctive boundaries of structures in the 3D volume correspond to strong intensity gradients in the 2D image, our matching strategy is gradient-based: a volumetric contour rendering technique [3] is employed to generate the depth layers, and the 2D gradient magnitude map |∇I_0| is used. The matching is done by patchwise similarity measurement. 2D grids are applied to I_0 and D_i to generate the 2D patches p^k_{I_0} and p^k_{D_i} (k = 1, ..., K, where K is the number of patches per image). In our case, I_0 and D_i have a size of 800 × 800 pixels and the patches a size of 8 × 8 pixels. Normalized cross correlation (NCC) [4] is employed as the similarity measure, and the similarity between p^k_{I_0} and p^k_{D_i} is expressed by the weight

$$ w_i^k = \frac{\mathrm{NCC}(p_{D_i}^k,\, p_{I_0}^k)}{\sum_{j=1}^{n} \mathrm{NCC}(p_{D_j}^k,\, p_{I_0}^k)} \qquad (1) $$

The weight w_i^k indicates the matching probability of p^k_{I_0} (the k-th patch of the X-ray image I_0) with a certain depth d_i; it is normalized over all depths d_i (i = 1, ..., n) for the same patch position k. The 3D positions of the 2D feature patches {p^k_{I_0}, k = 1, ..., K} with high matching weight (e.g. w_i^k ≥ 0.5) are then estimated by 2D+ reconstruction: since the depth values of all 2D patches associated with D_i are estimated as the center depth d_i of V_{d_i}, we call this procedure 2D+ reconstruction.

Fig. 2. Matching & reconstruction. Left: depth layer volumes in C-arm geometry; middle: matching patches with high weights; right: 2D+ reconstructed matching patches.
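As an illustration of the patchwise matching, the following sketch computes the weights w_i^k of Eq. (1) with numpy. It is a simplified stand-in for the authors' implementation: negative NCC scores are clipped to zero before normalization (an assumption of ours, so that anti-correlated patches do not yield negative weights), and the image size is assumed to be a multiple of the patch size.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return (a * b).sum() / denom

def matching_weights(grad_I0, depth_layers, patch=8):
    """Compute the weights w_i^k of Eq. (1).

    grad_I0:      2D gradient magnitude map of the X-ray frame I_0.
    depth_layers: list of n rendered depth layer images D_i (same size).
    Returns an (n, K) array: one weight per layer i and patch position k,
    normalized over the depths for each k.
    """
    h, w = grad_I0.shape
    rows, cols = h // patch, w // patch
    scores = np.zeros((len(depth_layers), rows * cols))
    for i, D in enumerate(depth_layers):
        k = 0
        for r in range(rows):
            for c in range(cols):
                sl = np.s_[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
                scores[i, k] = ncc(grad_I0[sl], D[sl])
                k += 1
    scores = np.clip(scores, 0.0, None)          # drop anti-correlation
    return scores / (scores.sum(axis=0) + 1e-8)  # normalize over depths
```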

The reconstruction procedure is based on the back-projection of points to rays [5]. In homogeneous coordinates, a 2D point x_2D ∈ R^3 can be back-projected to a ray r(λ) = P^+ x_2D + λ c that passes through the X-ray source c and the point x_r = P^+ x_2D, where P^+ is the pseudo-inverse of the projection matrix P ∈ R^{3×4}. The 2D+ reconstruction x_{2D+} of x_2D at depth d_i is then determined by

$$ \mathbf{x}_{2D+} = \mathbf{x}_r + \lambda(d_i)\,\mathbf{c} \qquad (2) $$

In the eye coordinate system (Fig. 2), the origin is at the X-ray source c and the z-axis is aligned with the principal ray direction. Therefore, c = (0, 0, 0, 1)^T and z_{2D+}/ω_{2D+} = d_i, where x_{2D+} = (x_{2D+}, y_{2D+}, z_{2D+}, ω_{2D+})^T. Together with (2), this yields λ(d_i) = (z_{x_r} − d_i ω_{x_r})/d_i, which determines the entries of x_{2D+}. The results can be transformed to the world coordinate system using the projection parameters of the C-arm system. The center points of the 2D patches with high weights (Fig. 2, middle) are reconstructed in 2D+ space (Fig. 2, right).

2.3 Tracking and motion estimation

After the reconstruction procedure, the motion in 3D is estimated by depth-aware tracking. The Kanade-Lucas-Tomasi (KLT) feature tracker [6] is employed to find 2D correspondences x_2D ↔ x'_2D between neighboring frames. Our goal is to estimate the motion of the patient between two frames. This motion can be expressed as the inverse of the relative motion of the C-arm, which is modeled as a perspective camera. The projection matrix can be written as P = K[R | t], where K ∈ R^{3×3} contains the intrinsic parameters, and the rotation R ∈ R^{3×3} and translation t ∈ R^3 give the rigid camera motion T = [R | t] ∈ R^{3×4}. Non-linear optimization is applied to recover the rotation and translation parameters by minimizing the following error function between two frames

$$ \arg\min_{T} \sum_{k} w^k \,\mathrm{dist}(\hat{\mathbf{x}}_{2D}^k,\, \mathbf{x}'^{k}_{2D}) \qquad (3) $$

where dist(·, ·) is the Euclidean distance and x̂_2D = K T x_{2D+}. The projection errors are weighted by the matching weights w (calculated in (1)), so that points with a higher w contribute more to the result and vice versa.
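The computational steps behind Eqs. (2) and (3) can be sketched as follows, working directly in the eye coordinate system, where the X-ray source is the origin and the ray through pixel (u, v) is simply K^{-1}(u, v, 1)^T, so scaling the ray point to depth d_i reproduces Eq. (2). This is a hypothetical mini-implementation, not the authors' code: the rotation parameterization (rotation vector) and the optimizer (scipy's least_squares, which minimizes the sum of squared weighted distances, i.e. a least-squares variant of Eq. (3)) are our own choices.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reconstruct_2d_plus(x2d, K, d):
    """Eq. (2) in the eye frame: back-project pixel (u, v) along the ray
    r = K^{-1} (u, v, 1)^T and scale it so that its z component equals
    the layer center depth d."""
    r = np.linalg.inv(K) @ np.array([x2d[0], x2d[1], 1.0])
    return d * r / r[2]

def estimate_motion(X3d, x2d_new, w, K):
    """Eq. (3): recover T = [R | t] from the 2D+ points X3d (N x 3),
    their tracked positions x2d_new (N x 2) in the next frame, and the
    matching weights w (N,)."""
    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        Xc = X3d @ R.T + t                 # apply candidate motion [R | t]
        proj = Xc @ K.T                    # project with intrinsics K
        proj = proj[:, :2] / proj[:, 2:3]  # dehomogenize to pixels
        # weighted reprojection distances; least_squares minimizes
        # their sum of squares, i.e. sum_k w^k * dist^2
        return np.sqrt(w) * np.linalg.norm(proj - x2d_new, axis=1)
    res = least_squares(residuals, np.zeros(6))
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]
```

Since the patient motion is the inverse of the estimated camera motion, the recovered [R | t] would be inverted before being applied to the overlaid 3D volume.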

3 Results

3.1 Point-based simulation experiment (quantitative)

The aim of this experiment is to answer the following questions: (i) how accurate can the algorithm be, (ii) how many depth layers are needed, and (iii) how robust is the method with respect to noise. The projection parameters of a real C-arm system (detector pixel size 0.308 mm) were used. 3D point sets (20 sets, 50 points per set) were randomly generated in the region of 3D space where the patient usually lies. Two projections of the 3D points (without and with rigid motion) were generated as two neighboring frames. The results of five example cases are shown in Tab. 1. Four motions (pure translation (motion 1), in-plane motion (motion 2), pure rotation (motion 3) and general motion (motion 4)) were tested without 2D correspondence noise; 2D correspondence noise (±2 pixels, or ±0.62 mm) was added in the general motion case. The mean 2D offsets (ε_ref) caused by the motions vary from 2.78 mm to 6.45 mm.

Table 1. Experiment results of projection errors. The general case refers to motion 4 with 2D noise.

                   motion 1     motion 2     motion 3      motion 4      general
translation (mm)   (6, 4, 4)    (6, 0, 4)    (0, 0, 0)     (6, 4, 4)     (6, 4, 4)
rotation (°)       (0, 0, 0)    (0, 9, 0)    (-9, 9, 4.5)  (-9, 9, 4.5)  (-9, 9, 4.5)
ε_ref (mm)         2.78         4.74         6.15          6.45          6.45

projection error after registration ε_reg (mm), by number of depth layers
15                 0.12 ± 0.05  0.11 ± 0.05  3.39 ± 1.58   3.20 ± 1.48   3.22 ± 2.01
25                 0.08 ± 0.03  0.06 ± 0.03  1.47 ± 1.46   1.90 ± 0.93   2.70 ± 0.79
35                 0.06 ± 0.03  0.05 ± 0.02  1.69 ± 0.92   1.56 ± 0.85   2.08 ± 1.23

The projection errors after registration (ε_reg) with 15, 25 and 35 depth layers are shown in Tab. 1. In the noise-free cases, the projection error ε_reg decreased below 2 mm using 25 depth layers; the pure translation and in-plane motions were corrected even more accurately. Using 35 depth layers, ε_reg was around 2 mm in the general motion case with noise. In Fig. 3, the estimation errors of the individual motion components are plotted for all tested depth resolutions (motion 4 with noise). The rotation errors were bounded within ±1° from 10 depth layers onwards, with the in-plane rotation (R_y) estimated more accurately than the off-plane rotations (R_x and R_z). The translation errors also stabilized after 20 to 30 depth layers. These results show that our approach is capable of recovering 3D motion from depth-aware 2D correspondences, even for the stronger motion (motion 4) under 2D correspondence noise.

Fig. 3. Simulation experiment: general motion case (motion 4, with 2D noise). Left: rotation errors (°) of R_x, R_y, R_z; right: translation errors (mm) of t_x, t_y, t_z; both plotted over the number of depth layers (depth resolution, 10-50).

3.2 Preliminary tracking-based experiment (qualitative)

In the tracking-based experiment, the X-ray sequence was simulated by DRR computation from a clinical CT volume with a rigid motion sequence. The KLT tracking method was employed for the 2D tracking. An example of the results is shown in Fig. 4. The initially registered 2D/3D overlay (Fig. 4, left) is the starting point of our approach. Using our depth-layer-based motion compensation, the overlay was corrected with the motion estimated from frame to frame.

Fig. 4. Tracking-based experiment. Left: initially registered 2D/3D overlay (red); middle: depth-layer-based motion compensation (green); right: overlay with (green) and without (red) motion compensation.

This preliminary result shows the potential of our depth-aware, tracking-based approach towards real-time rigid motion compensation.

4 Discussion

We have presented a novel depth-layer-based 2D/3D registration approach. The experimental results show that our approach is capable of estimating rigid 3D motion by 2D tracking and depth-aware motion estimation. With n ∈ [25, 35] depth layers, the 3D motion can be recovered with a projection error below 2 mm. The method was also tested for its robustness against noise in the 2D tracking. Furthermore, the method is computationally very efficient, because it does not rely on frame-by-frame iterative DRR computation. This shows the high potential of our approach for robust, real-time compensation of patient motion in 2D/3D overlay.

References

1. Markelj P, Tomaževič D, Likar B, et al. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal. 2012;16(3):642-61.
2. Wang J, Fallavollita P, Wang L, et al. Augmented reality during angiography: integration of a virtual mirror for improved 2D/3D visualization. Proc IEEE Int Symp Mixed Augment Real. 2012. p. 257-64.
3. Csébfalvi B, Mroz L, Hauser H, et al. Fast visualization of object contours by non-photorealistic volume rendering. Comput Graph Forum. 2002;20(3):452-60.
4. Penney G, Weese J, Little J, et al. A comparison of similarity measures for use in 2D-3D medical image registration. IEEE Trans Med Imaging. 1998;17(4):586-95.
5. Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. 2nd ed. Cambridge University Press; 2003.
6. Tomasi C, Kanade T. Detection and tracking of point features. Tech Report CMU-CS-91-132, Carnegie Mellon University; 1991.