Hybrid Navigation Interface for Orthopedic and Trauma Surgery


Joerg Traub 1, Philipp Stefan 1, Sandro Michael Heining 2, Tobias Sielhorst 1, Christian Riquarts 2, Ekkehard Euler 2, and Nassir Navab 1

1 Chair for Computer Aided Medical Procedures (CAMP), TU Munich, Germany
{traub, stefanp, sielhors, navab}@cs.tum.edu
2 Chirurgische Klinik und Poliklinik - Innenstadt, LMU Munich, Germany
{Sandro-Michael.Heining, Christian.Riquarts, Ekkehard.Euler}@med.uni-muenchen.de

R. Larsen, M. Nielsen, and J. Sporring (Eds.): MICCAI 2006, LNCS 4190, pp. 373-380, 2006. © Springer-Verlag Berlin Heidelberg 2006

Abstract. Several visualization methods for intraoperative navigation systems have been proposed in the past. In standard slice based navigation, three dimensional imaging data is visualized on a two dimensional user interface in the operating room. Another technology is in-situ visualization, i.e. the superimposition of imaging data directly into the view of the surgeon, spatially registered with the patient. Thus, three dimensional information is presented on a three dimensional interface. We created a hybrid navigation interface that combines an augmented reality visualization system, based on a stereoscopic head mounted display, with a standard two dimensional navigation interface. In an experimental setup, trauma surgeons performed a drilling task using the standard slice based navigation system, different visualization modes of an augmented reality system, and the combination of both. The integration of a standard slice based navigation interface into an augmented reality visualization overcomes the shortcomings of both systems.

1 Introduction

Standard slice based navigation systems are commonly used and commercially available for orthopedic and trauma surgery. In general they consist of a position and orientation tracking system and a two dimensional user interface. These systems visualize the navigation information, based on three dimensional medical imaging data, on an external monitor. The three major drawbacks of state of the art navigation systems are that a) every imaging and navigation device comes with its own user interface, b) guidance information based on three dimensional data is visualized on two dimensional user interfaces, and c) the navigation information is not visualized directly on the operation situs, forcing the surgeon to observe it at a location different from where the action is performed.

Augmented reality visualization was introduced as an alternative user interface for navigated surgery. The navigation information is superimposed onto the surgeon's view of the real world. In the past decade numerous applications and hardware setups using augmented reality visualization in medical navigation were proposed.

King, Edwards et al. [1] developed a stereoscopic system called MAGI that is used for microscope based neurosurgery. Birkfellner et al. [2] designed and developed the VarioscopeAR, an augmented reality head mounted operating microscope, for maxillofacial surgery. Sauer et al. [3] use a stereoscopic video see-through head mounted display to visualize three dimensional imaging data for various domains in interventional radiology and surgery [4]. In the past, standard navigation systems and augmented reality were presented as competing approaches, and evaluations dealt with the comparison of two dimensional user interfaces versus three dimensional in-situ visualization techniques [5].

Based on a head mounted display augmented reality system (section 2), we implemented a hybrid navigation interface (section 3.3) as a combination of standard slice based navigation (section 3.1) and augmented reality visualizations (section 3.2). We propose to fuse the two separate interfaces into one single three dimensional user interface for orthopedic and trauma surgery applications. Exemplary applications are pedicle screw placement in spinal surgery, orthopedic implant positioning, and the treatment of osteochondritis dissecans. In order to evaluate the possible benefits of fusing these isolated systems and its usefulness for surgery, we designed a phantom experiment in which a drill must be navigated to a given target location. Three surgeons evaluated the different visualization systems through a set of detailed experiments (section 4).

2 System Setup

Our system is a combination of two existing components, a three dimensional user interface and an optical tracking system. We developed a fully automatic registration procedure that is based on a CT scan with attached fiducials that are visible in both CT and tracking space. This allows the proposed visualization to be used on any object once a CT scan with attached fiducials has been acquired.

2.1 Hardware Setup

The augmented reality system is based on a stereoscopic video see-through head mounted display (HMD) developed by Sauer et al. (SCR, Princeton, USA) [3]. The head mounted display is equipped with two color cameras to obtain images of the observed scene. Additionally, a tracking camera is attached to the system for head pose estimation [6]. This technique, often referred to as inside-out tracking, has been proven to minimize the visualization error in augmented reality [7]. There are two reasons for preferring a video see-through display over an optical see-through device. First, these systems achieve a perfect synchronization of video and head pose data, since the cameras are genlocked; this eliminates any time lag between the images of the cameras, which could lead to perceivable jitter or swimming [8]. Second, we have more options for visualization, since we have full control over the displayed image, whereas in optical systems only a brightening of the augmentation is possible.

The drawback of using a single camera for instrument tracking is that large marker configurations are needed for a precise localization of targets [9].

As large tracking targets attached to instruments are not desired in the operating theatre, we use the ARTtrack1 external optical tracking system (A.R.T. GmbH, Weilheim, Germany) to track instruments and surgical objects. This device is capable of tracking targets within our working volume with an accuracy of < 0.2 [mm] RMS. A marker frame (Fig. 1(D)) is used to establish a common coordinate frame between the inside-out and the external optical tracking system. The entire hardware setup is shown in Fig. 1.

Fig. 1. Illustration of the setup for the hybrid navigation interface. (A) The HMD with two cameras for the video images and a single camera tracker for determining the pose relative to the marker frame (D). An external optical infrared tracking device (B) is used for tracking surgical instruments (G) and CT detectable, infrared retro-reflective markers (E) attached to the phantom (F). The hybrid navigation view (C) is displayed on two miniature LCD monitors. In this augmentation all coordinate systems involved in the transformations are visualized. The marker frame (D) is used as a common coordinate system for both single camera tracking (A) and external optical tracking (B).

The transformation from the coordinate system of the external tracking device to two dimensional coordinates in the overlay image is given by

$$ {}^{\mathrm{Overlay}}H_{\mathrm{Target}} = {}^{\mathrm{Overlay}}H_{\mathrm{Cam}} \; {}^{\mathrm{Cam}}H_{\mathrm{Frame}} \; \left({}^{\mathrm{Ext}}H_{\mathrm{Frame}}\right)^{-1} \; {}^{\mathrm{Ext}}H_{\mathrm{Target}} \tag{1} $$

where the transformations ${}^{\mathrm{Ext}}H_{\mathrm{Frame}}$ and ${}^{\mathrm{Ext}}H_{\mathrm{Target}}$ are provided by the external tracking system, and ${}^{\mathrm{Cam}}H_{\mathrm{Frame}}$ and ${}^{\mathrm{Overlay}}H_{\mathrm{Cam}}$ are derived using Tsai calibration.
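
To make the matrix chaining in Eq. (1) concrete, the following minimal sketch composes the involved transforms with NumPy. It is our illustration, not the authors' code: all function and variable names are assumptions, every transform is treated as a 4x4 homogeneous matrix, and the camera projection is assumed to be folded into ${}^{\mathrm{Overlay}}H_{\mathrm{Cam}}$.

```python
import numpy as np

def inv_rigid(H):
    """Invert a 4x4 rigid-body transform H = [R t; 0 1] analytically."""
    R, t = H[:3, :3], H[:3, 3]
    Hi = np.eye(4)
    Hi[:3, :3] = R.T
    Hi[:3, 3] = -R.T @ t
    return Hi

def overlay_H_target(H_overlay_cam, H_cam_frame, H_ext_frame, H_ext_target):
    """Compose Eq. (1): map external-tracker target coordinates to the overlay.
    H_ext_frame, H_ext_target are reported by the external tracking system;
    H_cam_frame and H_overlay_cam come from the (Tsai) calibration."""
    return H_overlay_cam @ H_cam_frame @ inv_rigid(H_ext_frame) @ H_ext_target
```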

2.2 Clinical Integration

The requirement for a navigation system applicable in trauma surgery is a seamless integration into the clinical workflow and an automatic configuration with no additional interactive calibration or registration procedures during surgery. Most methods described in the literature use tracked pointers to register markers in patient space with their corresponding centroids segmented from imaging data [1]. We designed markers that are automatically detectable both in the imaging data and in physical space. We use 4 [mm] CT-Spots (Beekley Corp., Bristol, CT, USA) coated with infrared retro-reflective material (Fig. 1(E)). Following the approach of Wang et al. [10], we use an automatic segmentation based on binary thresholding and region growing, followed by a classification of the segmented regions. The centroids of the segmented regions are computed intensity-weighted, using the voxel intensities of the imaging data. Finally, the correct point correspondences are established and the transformation ${}^{\mathrm{Target}}H_{\mathrm{CT}}$ from CT coordinates into tracking coordinates is computed. This is done by a distance-weighted graph matching approach [11] followed by a point based registration algorithm [12]. Thus, data in the CT coordinate system can be transformed into the overlay image coordinate system by

$$ {}^{\mathrm{Overlay}}H_{\mathrm{CT}} = {}^{\mathrm{Overlay}}H_{\mathrm{Target}} \; {}^{\mathrm{Target}}H_{\mathrm{CT}} $$

with ${}^{\mathrm{Overlay}}H_{\mathrm{Target}}$ from equation (1).
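
A minimal sketch of the last two steps of this pipeline: intensity-weighted centroid extraction and point-based rigid registration in the style of Umeyama [12] (without the scale term). It assumes correspondences have already been established by the graph matching step [11]; the names and the use of SciPy are ours, not taken from the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def weighted_centroids(volume, labels, n_markers):
    """Intensity-weighted centroids (in voxel coordinates) of the n_markers
    regions produced by thresholding and region growing; multiply by the
    voxel spacing afterwards to obtain millimetres."""
    return np.array(ndimage.center_of_mass(volume, labels,
                                           index=range(1, n_markers + 1)))

def rigid_register(P, Q):
    """Least-squares rigid transform H with Q_i ~ R P_i + t for corresponding
    (n, 3) point sets P (CT centroids) and Q (tracked marker positions)."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((Q - mq).T @ (P - mp))  # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    H = np.eye(4)
    H[:3, :3] = U @ D @ Vt
    H[:3, 3] = mq - H[:3, :3] @ mp
    return H
```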

3 Navigation Modes

3.1 Standard Slice Based Navigation Interface

In standard slice based navigation systems, information is presented on two dimensional monitors. The displayed slices are controlled by the pose of the instrument. This pose, as well as the virtual extension of the instrument, is drawn onto the slices. We implemented this mode in our visualization software so that all navigation modes can be presented in the head mounted display. For this purpose, we project the slices at a fixed location in space (Fig. 2(d)).

3.2 Augmented Reality Visualization Modes

We implemented various augmented reality visualization modes. The requirement for navigation is the guidance of surgical instruments to a specific target point based on three dimensional imaging data. The requirement for clinical integration is that no interaction is needed to prepare the data (e.g. interactive segmentation or planning). All three implemented augmented reality navigation modes work directly on the DICOM data, with the imaging data registered as described in section 2.2.

Volume Rendering. Multiple planes parallel to the image plane are clipped against the volume boundaries and rendered by interpolating within the volume and blending appropriately. Intensity values in the volume domain are mapped to the three dimensional color space using transfer functions in order to accentuate anatomically interesting structures. In addition to the volume, the extension of the instrument axis is visualized to provide navigational feedback (Fig. 2(a)).

Aligned Slice View. Any arbitrary slice can be visualized in-situ on the surgical object. We visualize the plane defined by the target point and the sagittal, frontal, or axial direction as the normal vector of the plane. The extension of the instrument axis is visualized, and its intersection with the plane is explicitly highlighted (Fig. 2(b)); a geometric sketch of this intersection follows at the end of this section.

Instrument Aligned Orthoslice View. An orthoslice view is rendered spatially aligned along the instrument axis. The slices are controlled by the pose of the instrument. The intersection of the two planes corresponds to the instrument axis, and along this line the user can see the extension of the instrument inside the anatomy (Fig. 2(c)).

Fig. 2. Different navigation modes that are rendered into the head mounted display: (a) volume rendering, (b) aligned slice view, (c) instrument aligned orthoslice view, (d) standard slice based navigation.

3.3 Hybrid Navigation Interface

The hybrid navigation interface combines the standard navigation interface and the above described augmented reality in-situ visualization modes into a single three dimensional interface (Fig. 3). The standard navigation view is mapped just beside the real surgical object. Visualizing it close to the surgical object makes it visible at the same time as the in-situ modes, which project their information directly onto the surgical object. The advantage of in-situ visualization, i.e. intuitive and accurate guidance in the lateral dimensions, complements the advantage of the standard slice based navigation system, i.e. high accuracy in all dimensions.

Fig. 3. Different hybrid navigation modes combining in-situ and standard slice based navigation: (a) navigation and volume rendering, (b) with aligned slice view, (c) with instrument aligned orthoslice view.
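
The highlighted point in the aligned slice view, where the extended instrument axis pierces the slice plane, reduces to a plain line-plane intersection. A short sketch under assumed names (drill tip and axis from the tracker, plane given by the target point and a sagittal, frontal, or axial normal); this is our illustration of the geometry, not the authors' rendering code.

```python
import numpy as np

def axis_plane_intersection(tip, axis, plane_point, plane_normal, eps=1e-9):
    """Point where the extended instrument axis meets the slice plane,
    or None if the axis is (nearly) parallel to the plane.
    All vectors are 3D and expressed in the same (tracker) coordinates."""
    axis = axis / np.linalg.norm(axis)
    denom = float(np.dot(plane_normal, axis))
    if abs(denom) < eps:
        return None
    s = float(np.dot(plane_normal, plane_point - tip)) / denom
    return tip + s * axis  # s > 0: intersection lies ahead of the drill tip
```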

4 Experiments

For the evaluation and comparison of the different combinations of navigation modes, we designed a phantom that mimics epidermal and osseous structures. We implanted 4 [mm] metal spheres in a block of wood at a depth of approximately 30 [mm]. The surface of the phantom was covered with a silicone rubber compound that has properties similar to human skin. Additionally, combined CT and infrared retro-reflective markers were attached to the phantom in order to allow image-to-physical registration. The fiducial registration error (FRE) using automatic marker segmentation and point based registration was 0.28 [mm]. We estimated a maximum target registration error (TRE) of 0.52 [mm] and a mean error of 0.43 [mm] ± 0.03 [mm] using the estimation approach of Fitzpatrick et al. [13].

Using the head mounted display, one of the metal spheres is highlighted in the in-situ view. The subject is then asked to navigate a tracked surgical drill to the target point using either one of the visualization modes (Fig. 2) or the proposed hybrid navigation user interface (Fig. 3). During the experiment, the time needed to reach the target point and the distance between the tip of the drill and the centroid of the marked metal sphere are measured. The experiment was conducted by three trauma surgeons with different levels of experience, who each marked 4 targets per visualization method, resulting in 28 measurements per subject.
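
The TRE values above were predicted with the approach of Fitzpatrick et al. [13]. For reference, a transcription of that prediction in our own notation (not quoted from this paper): for $N$ fiducials, with $f_k$ and $d_k$ the RMS distances of the fiducials and of the target $\mathbf{r}$ from the $k$-th principal axis of the fiducial configuration,

$$ \mathrm{TRE}^2(\mathbf{r}) \approx \frac{\mathrm{FLE}^2}{N}\left(1 + \frac{1}{3}\sum_{k=1}^{3}\frac{d_k^2}{f_k^2}\right), \qquad \mathrm{FLE}^2 \approx \frac{N}{N-2}\,\mathrm{FRE}^2, $$

where the fiducial localization error (FLE) is estimated from the measured FRE.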

5 Results

The mean accuracy of the three trauma surgeons for each representation is summarized in Table 1.

Table 1. Comparison of the different navigation modes in terms of accuracy and time required to complete the drilling task. The modes were standard slice based navigation (NAV), volume rendering (VOLREN), aligned slice view (ASV), instrument aligned orthoslice view (IOV), and combinations of the in-situ modes with standard navigation.

               Surgeon A             Surgeon B              Surgeon C
Navigation     error [mm]  time [s]  error [mm]  time [s]   error [mm]  time [s]
NAV            1.7 ± 1.1   128 ± 32  1.6 ± 0.7   140 ± 151  1.2 ± 0.7   35 ± 9
VOLREN         2.1 ± 1.1   134 ± 18  1.0 ± 0.4    91 ± 88   2.5 ± 1.6   24 ± 5
ASV            2.6 ± 0.8    76 ± 18  0.7 ± 0.3    50 ± 21   1.6 ± 0.9   49 ± 29
IOV            2.3 ± 0.9    98 ± 24  1.7 ± 0.6    57 ± 52   3.1 ± 2.9   47 ± 28
VOLREN + NAV   1.9 ± 1.5   106 ± 52  1.5 ± 1.3    52 ± 5    2.4 ± 0.5   26 ± 6
ASV + NAV      1.5 ± 0.2    83 ± 18  1.1 ± 0.6    50 ± 20   2.1 ± 1.5   24 ± 10
IOV + NAV      1.6 ± 0.6    95 ± 28  1.9 ± 0.5    84 ± 17   1.9 ± 0.2   26 ± 12

The results in terms of accuracy and speed of execution for each surgeon, as well as their distribution across the different visualization approaches, lead to the assumption that the overall performance depends on the level of experience.

Surgeon A is a relatively inexperienced trauma surgeon who does not use navigation systems in clinical routine. He performed the task with higher accuracy using the standard navigation interface than with the in-situ visualization modes. On the other hand, he required more time with standard navigation than with in-situ visualization. Using the hybrid system, he achieved the same accuracy as with the standard navigation system but was significantly faster. This shows the advantage of a hybrid mode over standard slice based navigation in terms of speed, and over the augmented reality modes in terms of accuracy. The gain in speed can be attributed to the intuitive usability of the in-situ component, especially for inexperienced surgeons.

Surgeon B is a very experienced trauma surgeon who uses standard navigation systems regularly in the OR. Table 1 shows no significant difference in accuracy across the visualization modes. However, more time was needed using the standard navigation mode. This results from difficulties in finding the right entry point and drill orientation; here the hybrid modes seem to compensate for this effect. In addition, he reported after the experiment that the hybrid modes gave him the same confidence as the standard navigation mode.

Surgeon C is an experienced surgeon who is familiar with both standard navigation systems and augmented reality visualization. He therefore performed the task consistently fast, with no significant differences between the modes. His relatively low accuracy compared to the other candidates was due to a miscalibration of the tool tip by 1 [mm] in the direction of the drill axis. Since he was used to augmented reality, he hardly used the standard slice based navigation in the hybrid mode.

6 Discussion

We presented different visualization modes for navigation in one single three dimensional user interface. The experiment showed that the hybrid mode improved the performance of a surgical action in terms of accuracy compared to the in-situ visualization modes, and in terms of speed compared to standard navigation. This holds only for performing the surgical action, not for the overall time of a navigated procedure. In addition to the measured results, the surgeons confirmed that the combination of standard slice based navigation with augmented reality visualization can be of great benefit for surgery. The standard navigation system is not the most intuitive navigation interface; problems arose especially with the position and orientation of the instrument in the lateral dimensions. In-situ visualization has its limitations in precise navigation, especially in depth. The optimal placement of the standard navigation interface in space and the most useful mode of in-situ visualization depend on the clinical application; they will be defined as we apply the system to real anatomy. With the hybrid navigation mode, we propose an alternative visualization interface that provides useful guidance for orthopedic and trauma surgery. It still needs, however, to be fully integrated into the overall clinical workflow.

Acknowledgment. Special thanks to Frank Sauer, Ali Khamene, and Sebastian Vogt from Siemens Corporate Research for the design and implementation of the in-situ visualization system RAMP.

References

1. King, A.P., Edwards, P.J., Maurer, C.R. Jr., de Cunha, D.A., Hawkes, D.J., Hill, D.L.G., Gaston, R.P., Fenlon, M.R., Strong, A.J., Chandler, C.L., Richards, A., Gleeson, M.J.: A system for microscope-assisted guided interventions. IEEE Trans. Med. Imag. 19(11) (2000) 1082-1093
2. Birkfellner, W., Figl, M., Huber, K., Watzinger, F., Wanschitz, F., Hummel, J., Hanel, R., Greimel, W., Homolka, P., Ewers, R., Bergmann, H.: A head-mounted operating binocular for augmented reality visualization in medicine - design and initial evaluation. IEEE Trans. Med. Imag. 21(8) (2002) 991-997
3. Sauer, F., Khamene, A., Bascle, B., Rubino, G.J.: A head-mounted display system for augmented reality image guidance: Towards clinical evaluation for iMRI-guided neurosurgery. In: Proc. of MICCAI, Springer-Verlag (2001) 707-716
4. Wacker, F.K., Vogt, S., Khamene, A., Jesberger, J.A., Nour, S.G., Elgort, D.R., Sauer, F., Duerk, J.L., Lewin, J.S.: An augmented reality system for MR image-guided needle biopsy: Initial results in a swine model. Radiology 238(2) (2006) 497-504
5. Azar, F.S., Perrin, N., Khamene, A., Vogt, S., Sauer, F.: User performance analysis of different image-based navigation systems for needle placement procedures. In: Proc. of SPIE, Vol. 5367 (2004) 110-121
6. Sauer, F., Wenzel, F., Vogt, S., Tao, Y., Genc, Y., Bani-Hashemi, A.: Augmented workspace: Designing an AR testbed. In: Proc. of IEEE and ACM ISAR (2000) 47-53
7. Hoff, W.A., Vincent, T.L.: Analysis of head pose accuracy in augmented reality. IEEE Trans. Visualization and Computer Graphics 6 (2000)
8. Sauer, F., Schoepf, U.J., Khamene, A., Vogt, S., Das, M., Silverman, S.G.: Augmented reality system for CT-guided interventions: System description and initial phantom trials. In: Medical Imaging: Visualization, Image-Guided Procedures, and Display (2003)
9. Vogt, S., Khamene, A., Sauer, F., Niemann, H.: Single camera tracking of marker clusters: Multiparameter cluster optimization and experimental verification. In: Proc. of IEEE and ACM ISMAR (2002) 127-136
10. Wang, M.Y., Maurer, C.R. Jr., Fitzpatrick, J.M., Maciunas, R.J.: An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head. IEEE Trans. Biomed. Eng. 43(6) (1996) 627-637
11. Gold, S., Rangarajan, A.: A graduated assignment algorithm for graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 18(4) (1996) 377-388
12. Umeyama, S.: Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Anal. Mach. Intell. 13(4) (1991) 376-380
13. Fitzpatrick, J.M., West, J.B., Maurer, C.R. Jr.: Predicting error in rigid-body point-based registration. IEEE Trans. Med. Imag. 17(5) (1998) 694-702