3D Guide Wire Navigation from Single Plane Fluoroscopic Images in Abdominal Catheterizations


Martin Groher 2, Frederik Bender 1, Ali Khamene 3, Wolfgang Wein 3, Tim Hauke Heibel 2, Nassir Navab 2

1 Siemens AG Healthcare Sector AX, Forchheim, Germany
2 Computer Aided Medical Procedures (CAMP), TUM, Munich, Germany
3 Siemens Corporate Research (SCR), Princeton, NJ, USA
groher@cs.tum.edu

Abstract. Nowadays, abdominal catheterizations are mainly guided by 2D fluoroscopic imaging from a single view, which complicates the task due to missing depth information, vessel invisibility, and patient motion. We propose a new technique for 3D navigation of guide wires based on single-plane 2D fluoroscopic views and a previously extracted 3D model of the vasculature. In order to relate the guide wire, which is only visible in the fluoroscopic view, to the 3D vasculature, we combine methods for 2D-3D registration, apparent breathing motion compensation, guide wire detection, and robust backprojection. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence.

1 Introduction

Minimally invasive catheter-guided interventions are common practice in modern hospitals and are usually guided by fluoroscopic imaging from one view. The guide wire can thus be visualized, moving in the image sequence due to respiratory patient motion and the physician's push/pull actions. In order to visualize blood vessels, which cannot be seen in fluoroscopic X-ray views, most hospitals acquire a high-resolution 2D Digital Subtraction Angiogram (DSA) to avoid blind navigation. Even with a 2D DSA, navigation remains difficult at vessel overlaps produced by the projection, or at vessel segments that run perpendicular to the image plane. Thus, 3D angiographic data were introduced into the interventional workflow.
However, these data must be related to the current fluoroscopic frame, which is currently done via user interaction, hampering the navigation process. In order to relate the 3D high-resolution data to the current 2D fluoroscopic frame, systems have been proposed based on 2D-3D registration [1] or C-arm calibration [2]. Atasoy et al. [3] also take breathing motion into account, which is common in abdominal fluoroscopic sequences, by compensating for the apparent motion of the guide wire. While clearly beneficial in terms of vessel visibility, depth perception cannot be improved with these approaches.

We propose a method for 3D guide wire navigation that transfers the 2D navigational procedure to 3D by taking information about the 2D guide wire tip location, apparent breathing motion, viewing geometry, and 3D vessel geometry into account (Fig. 1). From a vascular model extracted from a 3D scan, and a DSA, we compute the projection matrix of the C-arm device. We relate the DSA to the current fluoroscopic frame by tracking the apparent catheter motion in 2D. From possible guide wire tip locations in 2D, we determine candidate 3D locations via backprojection onto the 3D vessel model. With the estimates of view geometry and motion, we derive uncertainties for all these locations and determine the most probable of all 3D candidates using a variable-bandwidth kernel estimator for information fusion. With our proposed system, which was recently presented in [4], we successfully bring abdominal guide wire navigation to 3D, thus solving the problems of blind navigation and lack of depth perception.

Fig. 1. 3D guide wire navigation.

2 Method

2.1 Data and preprocessing

The data sets that form the basis of our method are a fluoroscopic image sequence S, a high-resolution DSA with the same projection parameters as the fluoroscopic sequence, and a 3D angiographic scan from which we extract a 3D vessel model V as described in [5]. To determine the 3D position X of the guide wire tip, we need to know its 2D position x within the current fluoroscopic image frame S_i. To this end we employ a line detector proposed in [6]. We further need a projection matrix P, which aligns the 3D scan with the DSA image; a method that is robust to deformation changes and differing vessel contrast, as proposed in [1], is used here. To counteract apparent respiratory motion, we compute the 2D motion between frames using the template-based approach proposed in [3]. This method yields a global 2D translation, which approximates the 2D apparent breathing motion and relates each fluoroscopic frame to the initial DSA.

2.2 3D guide wire tip localization

Our method fuses all gathered information to compute the 3D correspondence of the guide wire tip location for every image frame, assuming a given uncertainty connected to each input parameter. In a first step, we subtract the respiratory motion displacement of the current image frame from the guide wire tip location to obtain the tip location x_0 with respect to the DSA image. This is advantageous, since we can then use the pseudo-inverse of P to backproject the guide wire tip into 3D space. However, a simple backprojection of x_0 will not always yield a satisfactory solution. First, ambiguities can arise if the backprojection ray intersects the vessel model multiple times. Second, the estimate of the guide wire tip location in 2D is prone to errors caused by detection and motion compensation. We solve these problems with a statistical approach based on the creation of possible 2D guide wire tip candidates and the evaluation of their registration accuracy. We take errors from detection and apparent motion estimation into account by randomly modifying the computed displacements in a range of [-10, +10] pixels in each image axis to generate n 2D tip candidates around x_0, denoted by x_k, k = 1..n. We backproject each x_k using the pseudo-inverse of P, compute the closest point on the vessel model V to the resulting ray, and define it to be a 3D correspondence candidate X_k.
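The backprojection step just described can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `vessel_points` stands in for a point sampling of the centerline of V, the function name is ours, and the ray handling assumes a generic finite 3x4 camera matrix P.

```python
import numpy as np

def backproject_to_vessel(P, x2d_list, vessel_points):
    """Map 2D tip candidates x_k to 3D vessel-model points: lift each x_k
    to a ray via the pseudo-inverse of P, then snap to the nearest model point."""
    P_pinv = np.linalg.pinv(P)                  # 4x3 pseudo-inverse of P
    _, _, Vt = np.linalg.svd(P)                 # camera center: right null space of P
    Ch = Vt[-1]
    C = Ch[:3] / Ch[3]
    out = []
    for x in x2d_list:
        Xh = P_pinv @ np.array([x[0], x[1], 1.0])   # a point on the ray (homogeneous)
        if abs(Xh[3]) > 1e-12:                      # finite point: direction toward it
            d = Xh[:3] / Xh[3] - C
        else:                                       # point at infinity: use as direction
            d = Xh[:3]
        d /= np.linalg.norm(d)
        # Perpendicular distance of every vessel point to the ray C + t*d.
        v = vessel_points - C
        t = v @ d
        dist = np.linalg.norm(v - np.outer(t, d), axis=1)
        out.append(vessel_points[np.argmin(dist)])
    return np.array(out)
```

In practice V is a vessel tree with radius information; for the nearest-ray query a sufficiently dense centerline point cloud is enough.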
Next, we compute the uncertainty of each X_k that is introduced by the uncertainty of the pose parameters Θ of P. We randomly modify the registered pose Θ m times within an interval of [-5, +5] mm for the translational and [-5, +5]° for the rotational parameters. This results in m different poses Θ_j, which yield n sets of m 3D points: {X_kj}, k = 1..n, j = 1..m. We assign a weight r_j to each X_kj, where r_j is computed as the reciprocal of the cost function used for 2D-3D registration [1], evaluated for pose Θ_j, and normalized such that r_j ∈ [0, 1]. The uncertainty of each X_k can be represented by its weighted covariance matrix (visualized by green spheres in Fig. 1)

C_k = \frac{1}{\sum_{j=1}^{m} r_j} \sum_{j=1}^{m} r_j (X_{kj} - \bar{X}_k)(X_{kj} - \bar{X}_k)^T,  (1)

where \bar{X}_k = \frac{1}{\sum_{j=1}^{m} r_j} \sum_{j=1}^{m} r_j X_{kj} is the weighted mean.

In a final step, we fuse this uncertainty information to obtain an estimate of the 3D guide wire tip location. An approach to fuse such information is the Variable-Bandwidth Density-Based Fusion algorithm (VBDF), proposed by Comaniciu in [7]. It is robust to outliers, which occur in our scenario as 3D correspondence candidates on a wrong vessel branch, and it incorporates the uncertainties of the measurements. In this approach, we estimate a density function from all measurement means X̄_k and define its most significant mode to be the estimated 3D guide wire tip location X. We also incorporate information about the 3D location computed from the previous fluoroscopic image frame: the idea is to weight the measurements X̄_k depending on their distance to the previous frame's estimate. This is justified by the fact that the guide wire will probably not move dramatically between successive image frames. To ensure that the 3D guide wire tip lies inside a vessel, we pick the closest point on V to the computed X and define this point to be the final 3D guide wire tip position.

3 Experiments and results

To test our method, we ran a study on a realistic breathing phantom with a known ground truth. To this end we use a phantom containing a vessel system formed by a set of connected plastic tubes fixed in liquid wax (Fig. 2(a)). The breathing motion is simulated by an electronic pump attached to the phantom, yielding apparent in-plane displacements of up to 14 pixels (in 1024 x 1024 images). The image data is acquired with a Siemens AXIOM Artis dBA, a calibrated biplane C-arm device. It allows us to grab two fluoroscopic sequences (Fig. 2(c)) from which we can compute the guide wire tip's 3D ground truth position.

Table 1. Target errors ε and standard deviations σ in mm for the phantom study (Fig. 2(c))

                            ε_x    ε_y    ε_z    ε      σ_x    σ_y    σ_z    σ
Case 1 (light gray)  New    1.78   2.51   1.96   3.98   1.55   1.00   1.10   1.44
                     Naïve  2.70   4.97  24.39  26.59   1.38   2.94  43.41  42.58
Case 2 (dark gray)   New    0.89   1.90   1.13   2.66   0.71   1.74   0.67   1.58
                     Naïve  1.35   2.71  19.38  20.75   1.59   3.82  37.27  36.87
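The weighted covariance of Eq. (1) and the subsequent mode selection of Sec. 2.2 might be sketched as below. This is a deliberately simplified, illustrative stand-in for the VBDF estimator of [7]: the Gaussian candidate scoring and the unit-bandwidth temporal prior are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_mean_cov(points, weights):
    """Eq. (1): weighted mean X̄_k and covariance C_k of the m perturbed
    backprojections X_kj of one candidate (weights r_j normalized to sum 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean = w @ points
    diff = points - mean
    cov = (w[:, None] * diff).T @ diff
    return mean, cov

def fuse_candidates(means, covs, prev_estimate=None):
    """Pick the most probable 3D tip among the candidate means: score each
    candidate by the density of all measurements at its location, optionally
    down-weighted by distance to the previous frame's estimate."""
    scores = []
    for x in means:
        s = 0.0
        for mu, S in zip(means, covs):
            Sr = S + 1e-6 * np.eye(3)          # regularize near-singular covariances
            d = x - mu
            s += np.exp(-0.5 * d @ np.linalg.solve(Sr, d)) / np.sqrt(np.linalg.det(Sr))
        if prev_estimate is not None:          # temporal prior: tip moves little
            s *= np.exp(-0.5 * np.sum((x - prev_estimate) ** 2))
        scores.append(s)
    return means[int(np.argmax(scores))]
```

Scoring each candidate against all measurement densities is what makes a lone backprojection onto a wrong vessel branch (an outlier with little support from the other candidates) lose against the dominant cluster.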
The vessel model V is extracted from a rotational run under contrast agent. For testing our algorithm, we used only one of the projection sequences, in which two particular overlap cases are evaluated (Fig. 2(b)). The DSA for registration was created from a second, contrasted sequence. 3D Euclidean errors and associated standard deviations are shown in Tab. 1. A comparison is made with a naïve approach, which does not use uncertainty information for backprojection. Note that especially the error in ray direction (ε_z) is reduced significantly by our method.

4 Conclusion

In this work we propose an algorithm which robustly facilitates 3D navigation of guide wires in abdominal catheterizations using fluoroscopic images from a

monoplane C-arm system. Encouraged by the good results from synthetic and phantom tests, we will proceed with an extensive patient study. Further ideas aim at a feedback system for physicians in cases where the C-arm configuration is not optimal with regard to the desired operation situs, or where the system's prediction is too ambiguous. The concepts and information presented in this paper are based on research and are not commercially available.

Fig. 2. (a) Breathing phantom. (b) Rendered 3D volume of the phantom with arrows marking the path of case 1 (light gray) and case 2 (dark gray). (c) Phantom fluoroscopy showing tubes and guide wire. The circles mark the same paths as the arrows in (b).

References

1. Groher M, Bender F, Hoffmann RT, et al. Segmentation-driven 2D-3D registration for abdominal catheter interventions. Procs MICCAI. 2007; p. 527-535.
2. Gorges S, Kerrien E, Berger MO, et al. Model of a vascular C-arm for 3D augmented fluoroscopy in interventional radiology. Procs MICCAI. 2005; p. 214-222.
3. Atasoy S, Groher M, Zikic D, et al. Real-time respiratory motion tracking: roadmap correction for hepatic artery catheterizations. Procs SPIE. 2008.
4. Bender F, Groher M, Khamene A, et al. 3D dynamic roadmapping for abdominal catheterizations. Procs MICCAI. 2008.
5. Selle D, Preim B, Schenk A, et al. Analysis of vasculature for liver surgery planning. IEEE Trans Med Imaging. 2002;21(8):1344-1357.
6. Steger C. An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell. 1998;20(2):113-125.
7. Comaniciu D. Nonparametric information fusion for motion estimation. Procs IEEE CVPR. 2003; p. I-59-I-66.