Endoscopic Navigation for Minimally Invasive Suturing

Christian Wengert (1), Lukas Bossard (1), Armin Häberling (1), Charles Baur (2), Gábor Székely (1), and Philippe C. Cattin (1)

(1) Computer Vision Laboratory, ETH Zurich, 8092 Zurich, Switzerland, wengert@vision.ee.ethz.ch
(2) Virtual Reality and Active Interfaces, EPFL Lausanne, 1015 Lausanne, Switzerland

Abstract. Manipulating small objects such as needles, screws or plates inside the human body during minimally invasive surgery can be very difficult for less experienced surgeons, due to the loss of 3D depth perception. This paper presents an approach for tracking a suturing needle using a standard endoscope. The resulting pose information of the needle is then used to generate artificial 3D cues on the 2D screen to optimally support surgeons during tissue suturing. Additionally, if an external tracking device is provided to report the endoscope's position, the suturing needle can be tracked in a hybrid fashion with sub-millimeter accuracy. Finally, a visual navigation aid can be incorporated if a 3D surface is intraoperatively reconstructed from video or registered from preoperative imaging.

1 Introduction

Surgical interventions are increasingly being performed in a minimally invasive fashion. The main advantages of causing less trauma to the patient, lower infection rates, and faster recovery are well documented. Minimally invasive surgeries (MIS), however, require a lot of experience and pronounced skills from the surgeon. One of the main reasons lies in the almost complete loss of depth perception, which makes the manipulation of small objects such as needles, screws, or plates very difficult.

This paper presents a framework for tracking small objects and estimating their pose using a standard, monocular endoscope. As one example application, the tracking and pose estimation of a suturing needle is presented. The resulting pose information is then used to augment the surgeon's view by generating artificial 3D cues on the 2D display, thereby providing additional support during suturing. Such augmentation techniques are especially helpful for trainees and less experienced surgeons. The presented approach is general and can be applied to any object with known geometry by performing minor adaptations of the pose estimation module.

Traditional setups usually cannot support the navigation process, as it is often impossible to track small objects due to the missing line of sight and/or because their dimensions or function do not allow marker attachment. While magnetic markers of small dimensions exist, the presence of metallic objects in their vicinity severely deteriorates their accuracy. Our method can offer a solution by relying on an external tracking device reporting the endoscope's pose. Based on this information the needle can be fully tracked in a hybrid fashion in the world coordinate system. Finally, if pre- or intra-operative data are registered to the system, the possible interaction between the tracked object and the registered data can also be visualized and used as a navigation aid.

The methods proposed in this paper rely on the combination of object tracking, pose estimation and view augmentation. Color-coded instrument tracking has been presented in [1] for positioning surgical instruments in a robotized laparoscopic surgical environment. Another color-coded approach for tracking laparoscopic instruments was proposed in [2]. Pose computation of objects in 2D images has been explored using 2D-to-3D point correspondences in [3], using textures in [4], and from parameterized geometry in [5]. Pose computation from different geometric entities such as ellipses, lines, points and their combinations was presented in [6]. An endoscopic hybrid tracking approach for spine surgery has been presented in [7]: passive fiducials are attached to the vertebrae and their pose is detected by a tracked endoscope. However, the registration of the fiducials with the target anatomy adds another step to the navigation procedure. In [8], fiducials are attached to the tools and their 3D pose is measured by a head-mounted monocular camera; the results are displayed using an HMD. Objects without a direct line of sight to the camera cannot be handled by this system. Other approaches for tracking and pose estimation of surgical instruments include structured-light endoscopes [9] and ultrasound probes [10]. An approach for facilitating depth perception was presented in [11], where the invisible shadows of the tools were artificially rendered into the scene.

We propose a system for estimating the pose of very small colored objects in order to augment the surgeon's view, relying only on the standard environment of endoscopic surgeries. No modifications of the involved instruments (such as marker attachment) are needed; only the surface color of the object to be identified has to be adjusted. This approach enables tracking of these objects in a hybrid fashion and enhances existing navigation systems.

2 Methods

2.1 Needle Tracking and Pose Estimation

For all experiments a 10 mm, radial-distortion-corrected endoscope (Panoview Plus, Richard Wolf GmbH, http://www.richard-wolf.com/) with an oblique viewing angle of 25° was used. In order to avoid interlacing artifacts, a progressive-frame color CCD camera with a resolution of 800 × 600 pixels at 30 fps was incorporated. The camera can be calibrated pre-operatively by the surgeon without requiring technical assistance [12]. As the endoscope/camera combination provides a depth of field in the range of 3–7 cm, the focal length of the camera can be kept constant during the entire procedure. This avoids having to recalibrate the system during surgery.
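The calibration procedure itself is described in [12]. As a minimal illustration only (not the authors' implementation), the sketch below shows how a fixed set of intrinsics and radial distortion coefficients, estimated once pre-operatively, could be applied to every incoming frame using OpenCV; the numeric values and the function name are assumptions.

```python
import cv2
import numpy as np

# Illustrative intrinsics for an 800x600 endoscopic camera (assumed values;
# in practice they come from the pre-operative calibration described in [12]).
K = np.array([[700.0,   0.0, 400.0],
              [  0.0, 700.0, 300.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])  # radial/tangential coefficients

def undistort_frame(frame_bgr):
    """Correct lens distortion so the projected needle ellipse is not biased."""
    return cv2.undistort(frame_bgr, K, dist)
```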

The surgical needle (Atraloc, Ethicon Ltd., http://www.ethicon.com) has a nearly half-circular shape with a radius r = 8.5 mm and a needle diameter d of 1 mm, see Fig. 4b. Tracking such a needle in a surgical environment is a challenging task due to the moving camera, the cluttered background, and the unconstrained motion. In addition, partial occlusions of the needle occur during the suturing process and while it is held by the gripper. Finally, the coaxial illumination yields strong specularities and an inhomogeneous light distribution. The needle was therefore painted with a light matt green finish, which prevents most of the specularities and thus increases the speed and robustness of the segmentation process.

On startup, the system enters an interactive phase in which the needle first needs to pass through the central search area in the image. Once the needle is found, tracking and pose estimation start. Because the solution for the needle orientation is inherently ambiguous, the surgeon needs to quickly verify whether the correct orientation has been chosen and manually switch to the correct solution if necessary. The whole algorithm is depicted in Fig. 1 and presented in more detail in the following paragraphs.

Fig. 1. Needle detection algorithm after interactive initialization (flowchart stages: image acquisition, search area selection, color segmentation, connected component labeling, area validation, ellipse fitting, 2D validation, pose estimation, 3D validation, model update and motion vector computation).

A color segmentation is applied to the search area, selecting all green pixels. The RGB image is converted to the HSI color space and all pixels within the hue range 65° ≤ H ≤ 160° are selected. Additional constraints on the saturation (S ≥ 0.15) and intensity (I ≥ 0.06) reduce the specularities and help to remove dark areas. All of these values were determined empirically but remained constant throughout all experiments. The segmented pixels are labeled using a connected component labeling algorithm, and small components of fewer than 50 pixels are discarded. An ellipse is fitted to the remaining components using the method proposed in [13]. This results in the center of the ellipse c_e = [x_0, y_0]^T, the major and minor axes a and b, and the tilt angle θ, defined as the angle between the major axis a and the x-axis, as shown in Fig. 2a. The resulting parameter set p_t = [x_0, y_0, a, b, θ]^T is compared to the previously computed set p_{t-1}. If the new ellipse center lies within the bounds defined by the predicted motion and the change of the parameters a, b and θ is below 30%, the new model is regarded as valid and a new motion vector is computed for adjusting the search area in the next image. Otherwise the found ellipse is discarded, the next image is acquired, and the ellipse parameters are estimated under the same conditions as before.
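The following Python/OpenCV sketch illustrates this segmentation and ellipse-fitting step. It is not the original implementation: OpenCV's HSV representation is used instead of HSI, so the thresholds are rescaled approximations of the values above, only the largest connected component is kept, and the temporal validation against p_{t-1} is omitted.

```python
import cv2
import numpy as np

def detect_needle_ellipse(frame_bgr, min_area=50):
    """Segment green needle pixels and fit an ellipse to the largest component.
    Returns ((x0, y0), (major, minor), theta) or None if nothing valid is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Paper thresholds (65 deg <= H <= 160 deg, S >= 0.15, I >= 0.06) rescaled to
    # OpenCV's HSV ranges (H: 0-179, S/V: 0-255); exact numbers are an assumption.
    mask = cv2.inRange(hsv, (32, 38, 15), (80, 255, 255))

    # Connected-component labeling; discard components smaller than min_area pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    candidates = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    if not candidates:
        return None
    largest = max(candidates, key=lambda i: stats[i, cv2.CC_STAT_AREA])

    pts = np.column_stack(np.where(labels == largest)[::-1]).astype(np.float32)
    if len(pts) < 5:            # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(pts)  # direct least-squares ellipse fit (cf. [13])
```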

From the found 2D ellipse parameters, the corresponding 3D circle is computed using the method from [14]. This yields the location of the circle center C = [X_c, Y_c, Z_c]^T and the normal n of the plane containing the needle, as depicted in Fig. 2b. The computation of the needle pose is, however, inherently ambiguous and results in two distinct solutions for the center (C, C') and for the normal (n, n'). The correct circle center can be determined by projecting both solutions into the image via c = P·C and choosing the solution with the smaller distance to the previously computed ellipse center c_e, where P is the projection matrix of the camera containing the intrinsic parameters. In order to choose the correct circle plane normal n_t, the current plane normals (n, n') are compared to the previously computed normal n_{t-1} and the solution with the smaller angular difference d = cos^-1(n_t · n_{t-1}) is used. To cope with situations where the needle passes through a degenerate pose, as shown in Fig. 3d, the current normal is selected so that it moves consistently with the prior motion.

Fig. 2. a) 2D ellipse parameters c_e = [x_0, y_0]^T, a, b, θ; b) projection of a 3D circle, defined by C = [X_c, Y_c, Z_c]^T and n, onto the image plane.

2.2 Augmented Visualization

The methods presented above can be used to compensate for the loss of 3D depth perception by rendering artificial orientation cues. This does not require significant adaptation from the surgeon, as his working environment remains unchanged. For example, a semi-transparent plane containing the needle can be projected onto the 2D image, indicating the relative pose of the needle with respect to the camera, as can be seen in Fig. 3a. The plane is represented by the square surrounding the circle containing the needle, whose vertices X_i are projected into the image as x_i = P·X_i. The square is displayed in a semi-transparent way in order to minimize the loss of information due to occlusion, and its projective distortion indicates which part of the needle is closer to the camera, see Fig. 3.

2.3 Hybrid Tracking

As mentioned above, the described system can also be extended to allow object tracking in the world coordinate system. If the endoscope is tracked externally and the needle pose is computed in the camera coordinate system, as indicated in Fig. 4a, hybrid tracking of the needle is possible:

H^needle_world = H^marker_world · H^cam_marker · H^needle_cam,

where H^a_b denotes the rigid transformation from coordinate frame a to coordinate frame b: H^marker_world is the pose of the marker in the world coordinate system, H^cam_marker is the transformation relating the camera to the marker coordinate system as computed during the calibration process, and H^needle_cam is the transformation resulting from the pose estimation process described in Section 2.1.
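As a minimal numpy sketch, this chain is just a composition of 4 × 4 homogeneous matrices. The variable name H_parent_child below maps points from the child frame into the parent frame (i.e. H_world_marker corresponds to H^marker_world in the text); the function name and the identity placeholders are illustrative.

```python
import numpy as np

def needle_pose_in_world(H_world_marker, H_marker_cam, H_cam_needle):
    """Hybrid tracking: chain the marker pose (optical tracker), the
    camera-to-marker calibration, and the image-based needle pose into a
    world-frame needle pose. All inputs are 4x4 homogeneous transforms."""
    return H_world_marker @ H_marker_cam @ H_cam_needle

# Example usage with identity placeholders (real values come from the tracker,
# the calibration of [12], and the pose estimation of Section 2.1):
H_world_needle = needle_pose_in_world(np.eye(4), np.eye(4), np.eye(4))
needle_center_world = H_world_needle @ np.array([0.0, 0.0, 0.0, 1.0])
```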

Fig. 3. a) Example visualization of the needle showing the detected ellipse and the plane; b) needle held by the gripper; c) example showing partial occlusion during the suturing process on a phantom mockup; d) degenerate case with the needle plane being almost perpendicular to the image plane.

For these experiments, a marker is attached to the endoscope and followed by an active optical tracker (EasyTrack500, Atracsys LLC., http://www.atracsys.com), providing accurate position (< 0.2 mm localization error) and orientation information in a working volume of 50 × 50 × 50 cm³. An external hardware triggering logic ensures the synchronized acquisition of the tracking data and the camera images during dynamic freehand manipulation.

Fig. 4. a) Overview of the different components of the presented system (optical tracking system, hardware synchronisation, marker, endoscope, CCD camera, light source, gripper, needle, PC and 2D screen); b) close-up of the needle held by a gripper.

2.4 Navigation Aid

Even more navigation cues can be integrated if registered 3D data such as CT or MRI are available and the needle is tracked. This makes it possible to visualize the potential interactions between the tracked object and the 3D data, resulting in a navigation aid. In this paper, such 3D models are created by the same endoscope using an intra-operative 3D reconstruction method, as described in [15]. As the endoscope is moved over the surgical scene, the images from this sequence are used to build a 3D model of the visible anatomy online, during the intervention, resulting in a 3D triangle mesh M_world. The plane π_world defined by (C, n) in the world coordinate system can then be cut with the 3D triangle mesh M_world, resulting in a line set l = π_world ∩ M_world. A visibility filter v_cam returns those lines from this set that are visible in the current view: l_vis = v_cam(l, M_world, [R, t]), with [R, t] being the camera pose in the world coordinate system.
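The sketch below illustrates the plane-mesh cut l = π_world ∩ M_world, assuming the mesh is given as vertex and face arrays. The brute-force per-triangle loop and the function name are illustrative simplifications, and the ray-casting visibility filter described next is omitted.

```python
import numpy as np

def cut_mesh_with_plane(vertices, faces, C, n, eps=1e-9):
    """Intersect the plane defined by point C and unit normal n with a triangle
    mesh (vertices: Vx3 array, faces: Fx3 index array). Returns a list of 3D
    line segments approximating the cutting line l = plane ^ mesh."""
    segments = []
    d = (vertices - C) @ n            # signed distance of every vertex to the plane
    for f in faces:
        s = d[f]
        pts = []
        for i in range(3):            # walk the three triangle edges
            j = (i + 1) % 3
            if (s[i] > eps) != (s[j] > eps):   # edge crosses the plane
                t = s[i] / (s[i] - s[j])
                pts.append(vertices[f[i]] + t * (vertices[f[j]] - vertices[f[i]]))
        if len(pts) == 2:
            segments.append((pts[0], pts[1]))
    return segments
```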

The filter v_cam casts rays from the camera position to both vertices of each line segment. If a face is encountered between the camera position and one of the line vertices, this segment is marked invisible. This pessimistic approach may lead to the complete loss of only partially visible line segments; however, this is a rare event considering the usual organ topography. In the rare cases where multiple cutting lines result from this procedure, the solution closer to the needle is selected. Please note that, while the reconstructed 3D model is essential for these operations, it remains hidden from the surgeon; the system only presents the final result (i.e. the cutting line), overlaid on the endoscopic view (see Fig. 5a).

Fig. 5. a) 2D augmented view with needle plane and cutting line; b) internal 3D representation of the same scene showing the intra-operatively reconstructed, texture-mapped 3D model, a disk representing the needle, and the 3D cutting line.

3 Results

The frame rate for the hybrid tracking is mainly determined by the needle detection and the ellipse fitting process. Both depend on the number of segmented pixels, so the processing time decreases with increasing distance between the needle and the camera. In the working range of 3–7 cm the system runs in real time, and the frame rate on a 2.8 GHz CPU is between 15 and 30 fps. The frame rate for the virtual interaction depends on the number of vertices and faces of the 3D model. For the 3D models in our experiments, with approximately 500–1000 faces, the frame rate dropped to 10–15 fps.

In order to assess the accuracy of the pose estimation, a 2-DOF positioning table with a resolution of 0.02 mm was used to move the needle to distinct positions covering the whole working volume, as depicted in Fig. 6a. As the errors in x and y are very similar (both depend mainly on the in-plane resolution of the camera), only the errors in x and z are presented here. Each distinct position defined the reference coordinate system (due to the lack of an absolute coordinate system); around it, eight small deviations of 0.1 mm were introduced and the pose was estimated. These perturbed measurements were then compared to the reference position. The errors in the z = 30 mm plane were 0.003 ± 0.065 mm along the optical z-axis and 0.01 ± 0.018 mm along the x-axis. At the lateral positions the errors were 0.02 ± 0.024 mm along the optical z-axis and 0.01 ± 0.015 mm along the x-axis. For the z = 70 mm plane, the errors along the optical axis were 0.053 ± 0.05 mm in the z-direction and 0.04 ± 0.03 mm in the x-direction.

The measurements at the lateral positions resulted in errors of 0.037 ± 0.024 mm and 0.026 ± 0.043 mm, respectively.

The angular accuracy was quantified in a similar way by introducing perturbations to the rotation of the needle with respect to the image plane. This was repeated with the needle plane parallel to the image plane and inclined by 30° and 60°, respectively. Perturbations of 5° were added to the current pose and compared to the initial angular measurement. For the needle plane parallel to the image plane, the angular errors were 2.3° ± 0.5° in the z = 30 mm plane and 2.5° ± 0.5° in the z = 70 mm plane. For the needle plane inclined by 30°, the errors were 1.1° ± 0.7° close to the camera and 1.3° ± 0.7° at the farthest position. Finally, for the 60° case, the errors were 0.7° ± 0.5° and 1.0° ± 0.5°, respectively. As expected from the projective nature of the image formation, the error decreases with increasing inclination of the needle plane.

Fig. 6. a) Setup for measuring the errors; b) needle plane parallel to the image plane and inclined by 60°.

4 Conclusions

In this paper a multi-purpose tracking system for small artificial objects was presented that can cope with partial occlusions, cluttered backgrounds and fast object movement. The proposed system allows real-time tracking of a suturing needle with sub-millimeter accuracy. The needle tracking is very robust and quickly recovers from occlusions; however, the influence of occlusions on the accuracy still needs to be carefully investigated. The augmented visualization helps the surgeon to perceive 3D cues from 2D images using the existing instrumentation. The hybrid tracking can improve existing navigation systems by adding the ability to handle small objects. The color dependence of the system requires the white balance to be set accurately in advance, which has to be integrated into the calibration procedure. The pose ambiguity cannot always be resolved correctly; therefore the system allows the surgeon to manually switch to the correct solution. It is planned to resolve this ambiguity by using other visual cues.

Future work includes tracking and pose estimation for other objects such as screws and plates, leading to greater flexibility of the system, as well as enabling the tracking of multiple objects. In addition, a test setup for proving the practical use of the augmentation has to be designed together with a clinician.

At the same time it needs to be investigated whether the augmentation can be improved using cues other than the semi-transparent plane. Finally, quality assessment will be incorporated by recording the whole intervention, thus allowing quantitative measurements of the surgeon's performance.

Acknowledgments

This work has been supported by the CO-ME/NCCR research network of the Swiss National Science Foundation (http://co-me.ch).

References

1. Krupa, A., et al.: Automatic 3-D positioning of surgical instruments during robotized laparoscopic surgery using automatic visual feedback. In: MICCAI (2002)
2. Wei, G.Q., et al.: Automatic tracking of laparoscopic instruments by color coding. In: First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery, London, UK (1997)
3. Haralick, R.M., et al.: Pose estimation from corresponding point data. IEEE Transactions on Systems, Man and Cybernetics (1989) 1426–1446
4. Rosenhahn, B., et al.: Texture driven pose estimation. In: Proc. of the Int. Conference on Computer Graphics, Imaging and Visualization (2005) 271–277
5. Lowe, D.G.: Fitting parameterized three-dimensional models to images. IEEE Transactions on Pattern Analysis and Machine Intelligence (1991) 441–450
6. Ji, Q., Haralick, R.: A statistically efficient method for ellipse detection. In: Int. Conference on Image Processing (1999) 730–734
7. Thoranaghatte, R.U., et al.: Endoscope-based hybrid-navigation system for minimally invasive ventral-spine surgeries. Computer Aided Surgery (2005) 351–356
8. Sauer, F., Khamene, A., Vogt, S.: An augmented reality navigation system with a single-camera tracker: system design and needle biopsy phantom trial. In: MICCAI (2002) 116–124
9. Fuchs, H., et al.: Augmented reality visualization for laparoscopic surgery. In: MICCAI (1998) 934–943
10. Novotny, P.M., et al.: GPU based real-time instrument tracking with three-dimensional ultrasound. In: MICCAI (2006) 58–65
11. Nicolaou, M., James, A., Lo, B., Darzi, A., Yang, G.-Z.: Invisible shadow for navigation and planning in minimal invasive surgery. In: MICCAI (2005) 25–32
12. Wengert, C., Reeff, M., Cattin, P., Székely, G.: Fully automatic endoscope calibration for intraoperative use. In: Bildverarbeitung für die Medizin (2006) 419–423
13. Fitzgibbon, A.W., Pilu, M., Fisher, R.B.: Direct least squares fitting of ellipses. IEEE Transactions on Pattern Analysis and Machine Intelligence (1999)
14. De Ipiña, D.L., Mendonça, P.R.S., Hopper, A.: TRIP: a low-cost vision-based location system for ubiquitous computing. Personal and Ubiquitous Computing (2002)
15. Wengert, C., Cattin, P.C., Duff, J.M., Székely, G.: Markerless endoscopic registration and referencing. In: MICCAI (2006) 806–814