Improved Navigated Spine Surgery Utilizing Augmented Reality Visualization

Zein Salah (1,2), Bernhard Preim (1), Erck Elolf (3), Jörg Franke (4), Georg Rose (2)

(1) Department of Simulation and Graphics, University of Magdeburg
(2) Department of Telematics and Biomedical Engineering, University of Magdeburg
(3) Department of Neuroradiology, University Hospital of Magdeburg
(4) Department of Orthopedic Surgery, University Hospital of Magdeburg

zein.salah@ovgu.de

Abstract. Image-guided surgical systems are increasingly becoming established tools for visual aid in several interventional procedures. In this paper, we introduce a prototypic add-on system for enhancing intraoperative visualization within navigated spine surgery utilizing an extended reality approach. In essence, operation-specific important anatomical structures are segmented from preoperative patient data and superimposed on the video stream of the operation field. In addition, slices of the anatomy data, as well as shape and depth information of targeted structures, like spinal nerves or herniated discs, can be blended in. This allows for better protection of risk anatomy and accurate identification of the structures under consideration, and thus raises the safety and accuracy of the intervention.

1 Introduction

One of the challenging tasks in a surgeon's life is to transfer the information displayed in 2D diagnostic images to the 3D situation in the real world of the patient's anatomy. This challenge is mastered in a continuous learning process, still with significant obstacles. In this regard, medical navigation systems help physicians to establish correspondences between locations in an acquired patient dataset and the patient's physical body during navigated surgeries. This is highly advantageous for surgeries in regions with a high density of critical and vital structures, like the brain, skull base, and spine. However, it requires the surgeon to switch between the operation field, i.e. the microscope or endoscope view, and wall-mounted or computer displays. Introducing augmented reality facilitates the transfer of the diagnostic imaging to the individual patient anatomy in a straightforward fashion. In this context, some research works provided enhanced endoscopic views that are paired with synthesized virtual renderings generated from the same view, e.g. [1]. Other systems modified the design of operating binoculars [2] and microscopes [3] to allow for data augmentation. Augmented reality has also been introduced as a training tool for surgical procedures [4, 5].

In this paper, we present a prototype for improving intraoperative visualization by augmenting the video stream of the operation field with relevant patient data from different diagnostic and intraoperative imaging modalities. This facilitates the identification of vital structures like blood vessels and nerves, or landmarks like bony structures, and would potentially make surgical procedures safer and easier. In addition, the presented approach may serve as a teaching tool for surgical procedures, since critical anatomical relations can be identified and discussed before the actual procedure starts.

2 Material and Methods

In this section, we adapt and extend our intraoperative visualization method [6] for use in navigated spine surgery. We first describe the new prototype setup and then present the visualization approach.

2.1 Prototype Setup: Calibration, Tracking, and Registration

Figure 1 (right) depicts our prototype, in which we use a phantom of the upper body with an integrated model of the lower part of the spine, including the lumbar vertebrae and the sacrum. We also adapt a tablet PC with a high-resolution built-in camera to simulate the surgical optical device (e.g. microscope or endoscope).

Fig. 1. Prototype of the intraoperative extended reality visualization system.

In a first step, we compute the camera's intrinsic parameters. To this end, we capture several images of a special calibration pattern from different locations and orientations and then use MATLAB's Camera Calibration Toolbox to compute the focal length, principal point, and skew and distortion coefficients of the camera; a sketch of an equivalent calibration step is given below.
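The following is a minimal sketch of this intrinsic calibration step. The paper uses MATLAB's Camera Calibration Toolbox; OpenCV's calibrateCamera is an assumed stand-in here that recovers the same quantities (focal length, principal point, skew, and distortion) from views of a planar checkerboard.

```cpp
// Sketch of intrinsic calibration from checkerboard images. OpenCV is an
// assumption; the authors used MATLAB's Camera Calibration Toolbox.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat calibrateIntrinsics(const std::vector<cv::Mat>& images,
                            cv::Size boardSize,   // inner corners, e.g. 9x6
                            float squareSize,     // square edge length in mm
                            cv::Mat& distCoeffs)  // out: distortion coefficients
{
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    // Planar model of the pattern: all corners lie in the z = 0 plane.
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            corners3d.emplace_back(x * squareSize, y * squareSize, 0.0f);

    for (const cv::Mat& img : images) {
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS +
                                          cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(corners3d);
    }

    cv::Mat cameraMatrix;              // focal length, principal point, skew
    std::vector<cv::Mat> rvecs, tvecs; // per-view extrinsics (not needed here)
    cv::calibrateCamera(objectPoints, imagePoints, images[0].size(),
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return cameraMatrix;
}
```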

For tracking, we implemented a marker-based optical tracking server, which continuously captures a video stream of the trackable region with a high-resolution camera (Logitech QuickCam Pro 9000; 1280x720 pixels at 30 fps). The tracking camera has to be calibrated once, in the same way as the tablet PC camera. At each frame, the system searches for predefined markers and, for each detected marker, computes its pose in the camera coordinate system. The pose information of all detected markers is transmitted to the tablet PC using the VRPN (Virtual Reality Peripheral Network) protocol over a WLAN connection; a client-side sketch is given at the end of this subsection. Technically, this design allows for connecting to several commercial tracking systems.

Prior to the operation, a one-time, fully automatic hand-eye calibration step is performed. In essence, the tablet PC camera is calibrated with respect to the tracker using a common reference marker that is visible (only during this calibration step) to both cameras. A second tracked marker is fixed to the tablet PC. Following the chain of transformations, the transformation from the tablet PC marker to its video camera is calculated as T_mc = T_cr · T_kr^(-1) · T_km, where T_cr, T_kr, and T_km denote the video-camera-to-reference, tracker-to-reference, and tracker-to-tablet-PC-marker transformations, respectively. The transformation T_mc remains valid as long as the marker does not move with respect to the video camera. At operation time, the patient is fixed with respect to the reference marker, which can then be removed from the view field of the camera. Thereafter, the transformation T_cr can be computed as T_cr = T_mc · T_km^(-1) · T_kr. Since T_mc and T_kr are fixed from the previous step, T_cr depends only on the tracking data of the tablet PC marker (see the transform-chain sketch below).

To register the patient/phantom with the scanned dataset (and hence with the anatomical models reconstructed from it), the reference marker is attached to a rectangular plastic board that is scanned with the patient. The corners of the marker are interactively selected from the scanned dataset, and their absolute positions are calculated, considering image spacing, and defined as registration points. The 3D positions of the corresponding points in the patient coordinate system are precisely determined using the tracking system. The two sets of points are finally registered using a paired-point rigid registration scheme with a least-squares fitting approach (see the registration sketch below).
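A minimal sketch of the client side of the pose transmission described above, using the VRPN tracker interface; the server address "Tracker0@tracking-server" is a placeholder, not from the paper.

```cpp
// Sketch: receiving marker poses on the tablet PC over VRPN.
#include <vrpn_Tracker.h>

// Called once per received pose; t.sensor identifies the marker, t.pos is
// its position and t.quat its orientation quaternion, both expressed in
// the tracking-camera coordinate system.
void VRPN_CALLBACK handleMarkerPose(void* /*userData*/, const vrpn_TRACKERCB t)
{
    // Hand the pose of marker t.sensor to the rendering module here.
}

int main()
{
    // Placeholder address of the tracking server reachable over the WLAN.
    vrpn_Tracker_Remote tracker("Tracker0@tracking-server");
    tracker.register_change_handler(nullptr, handleMarkerPose);

    for (;;)
        tracker.mainloop();  // poll the connection and dispatch callbacks
}
```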
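The transform chain of the hand-eye calibration, written out with 4x4 homogeneous matrices. Eigen is used purely for illustration; the variable names follow the text above.

```cpp
// Sketch: hand-eye transform chain from Sec. 2.1 (Eigen is an assumption).
#include <Eigen/Dense>

using Mat4 = Eigen::Matrix4d;

// One-time calibration, while both cameras see the reference marker.
// T_cr: video camera <- reference, T_kr: tracker <- reference,
// T_km: tracker <- tablet-PC marker.
Mat4 handEyeCalibration(const Mat4& T_cr, const Mat4& T_kr, const Mat4& T_km)
{
    // T_mc = T_cr * T_kr^(-1) * T_km  (tablet-PC marker -> video camera)
    return T_cr * T_kr.inverse() * T_km;
}

// At operation time the reference marker is no longer visible to the video
// camera; T_cr follows from the tracked tablet-PC marker pose T_km alone.
Mat4 cameraToReference(const Mat4& T_mc, const Mat4& T_km, const Mat4& T_kr)
{
    // T_cr = T_mc * T_km^(-1) * T_kr
    return T_mc * T_km.inverse() * T_kr;
}
```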
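For the paired-point rigid registration, the paper names a least-squares fitting approach but not the exact algorithm; the sketch below uses the standard SVD-based solution (Arun/Umeyama style) as an assumed concrete instance.

```cpp
// Sketch: paired-point rigid registration via least-squares fitting.
// Finds R, t minimizing sum_i || R*p_i + t - q_i ||^2 for corresponding
// registration points p (image space) and q (patient space).
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix4d registerPairedPoints(const std::vector<Eigen::Vector3d>& p,
                                     const std::vector<Eigen::Vector3d>& q)
{
    const size_t n = p.size();
    Eigen::Vector3d cp = Eigen::Vector3d::Zero(), cq = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < n; ++i) { cp += p[i]; cq += q[i]; }
    cp /= double(n); cq /= double(n);

    // Cross-covariance of the demeaned point sets.
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (size_t i = 0; i < n; ++i)
        H += (p[i] - cp) * (q[i] - cq).transpose();

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(
        H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {        // guard against a reflection solution
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    Eigen::Vector3d t = cq - R * cp;

    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    T.topLeftCorner<3,3>() = R;
    T.topRightCorner<3,1>() = t;
    return T;
}
```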

2.2 Extended Reality Visualization Module

The rendering module, running on the tablet PC, continuously captures a video stream and renders it as a background. At each frame, relevant virtual objects are rendered and overlaid using a two-pass rendering algorithm that highlights object silhouettes for better shape perception (one such two-pass technique is sketched below). Several objects can be overlaid according to the current operation conditions. These include 3D reconstructions of segmented structures from the anatomy dataset. Additionally, tomographical slices can be superimposed. For this purpose, we adapt an optimized slicing algorithm [7] to compute tomographical slices at the desired position and orientation. The generated slice image is then blended into the real scene with the correct pose. Here, the dimensions and spacing of the 3D dataset and the generated cross section are considered in order to correctly adjust the physical proportions with respect to the patient and environment.

For certain structures, e.g. tumors, an enhanced visualization of shape and depth information can also be provided. This is achieved by extracting the planar contours of the tumor at successive depths perpendicular to the viewing direction. Depth information is conveyed via depth cueing, by defining the transparency of a contour as a linear function of its depth; the sketches at the end of this section illustrate the slicing and depth-cueing steps.
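The paper does not spell out its two-pass silhouette algorithm, so the sketch below shows one classic fixed-function OpenGL variant as an assumed stand-in: back faces are drawn as a thick wireframe first, and the filled front faces then cover everything but a thin outline.

```cpp
// Sketch: a classic two-pass silhouette technique in legacy OpenGL;
// the authors' exact algorithm may differ.
#include <GL/gl.h>

void drawWithSilhouette(void (*drawObject)(), const float rgb[3])
{
    glEnable(GL_CULL_FACE);

    // Pass 1: back faces as thick wireframe. Only the lines that survive
    // the depth test of pass 2 remain visible, i.e. the outline.
    glCullFace(GL_FRONT);
    glPolygonMode(GL_BACK, GL_LINE);
    glLineWidth(4.0f);
    glColor3f(0.0f, 0.0f, 0.0f);   // silhouette color
    drawObject();

    // Pass 2: front faces filled in the object color.
    glCullFace(GL_BACK);
    glPolygonMode(GL_FRONT, GL_FILL);
    glColor3fv(rgb);
    drawObject();

    glLineWidth(1.0f);
    glDisable(GL_CULL_FACE);
}
```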
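For the slicing step, the implementation uses VTK; the authors' optimized slicing algorithm [7] is their own, so vtkImageReslice stands in here as a generic equivalent for extracting an oblique cross section at a given position and orientation.

```cpp
// Sketch: oblique tomographical slice with VTK's vtkImageReslice,
// an assumed substitute for the optimized slicing algorithm of [7].
#include <vtkSmartPointer.h>
#include <vtkImageData.h>
#include <vtkImageReslice.h>
#include <vtkMatrix4x4.h>

vtkSmartPointer<vtkImageData> obliqueSlice(vtkImageData* volume,
                                           vtkMatrix4x4* sliceAxes)
{
    // sliceAxes encodes the slice x/y axes, the slice normal, and the
    // slice origin in the (spacing-aware) volume coordinate system.
    auto reslice = vtkSmartPointer<vtkImageReslice>::New();
    reslice->SetInputData(volume);
    reslice->SetResliceAxes(sliceAxes);
    reslice->SetOutputDimensionality(2);      // a single 2D cross section
    reslice->SetInterpolationModeToLinear();
    reslice->Update();
    return reslice->GetOutput();
}
```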
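The depth cueing itself reduces to the stated linear mapping from contour depth to transparency; a small helper makes this concrete (the Contour struct and parameter defaults are hypothetical, not from the paper).

```cpp
// Sketch: depth-cued transparency for the stack of tumor contours.
struct Contour { float depth; /* plus 2D polygon data */ };

// Alpha falls off linearly with depth: near contours are opaque,
// deep contours are faint.
float contourAlpha(const Contour& c, float depthNear, float depthFar,
                   float alphaNear = 1.0f, float alphaFar = 0.1f)
{
    float s = (c.depth - depthNear) / (depthFar - depthNear); // 0 near, 1 far
    if (s < 0.0f) s = 0.0f;
    if (s > 1.0f) s = 1.0f;
    return alphaNear + s * (alphaFar - alphaNear);
}
```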

In minimally invasive endoscopic or microscope-based spine surgery, augmentation should be performed on the video stream of the endoscope or microscope camera. However, this requires tracking these devices and a more complicated calibration of their cameras.

3 Results

The software modules have been implemented with C++, OpenGL, VTK, and Qt. For our current implementation, we relied on the marker-based tracking provided by ARToolkit, which allows for multiple-marker tracking in real time. However, due to its inaccurate calibration, we calibrated the camera parameters using MATLAB, as stated in Section 2.1. As a result, the new calibration was significantly more accurate regarding marker detection and pose estimation.

For our simulated OR scenario, a phantom model of the upper body was scanned with the mounted reference marker. After extrinsic calibration of the video camera, the phantom model was registered to the scanned data, using the corners of the reference marker as the set of corresponding point pairs. From a co-registered patient dataset, three vertebrae, the inter-vertebral discs, and the spinal canal were segmented and 3D models reconstructed. Finally, the visualization module starts video stream augmentation. The upper-left part of Figure 1 depicts a snapshot of the GUI of the visualization module in a simulated spine surgery scenario. Figure 2 (left) shows a left posterior oblique (LPO) view, with augmented models of the lumbar vertebrae L2-L4 (cyan), inter-vertebral discs (green), and spinal canal (pink). Object silhouettes are slightly highlighted for enhanced shape perception. In Figure 2 (right), a side view with an additional transparent overlay of a tomographical slice from the patient data is shown. The slicing algorithm allows for on-the-fly computation and rendering of slices at a near real-time rate.

Fig. 2. Left: LPO view with augmented vertebrae (cyan), discs (green), and spinal canal (pink). Right: Side view with an additionally blended cross section from the MRI dataset.

4 Discussion

A prototypic tablet PC based add-on for enhancing intraoperative visualization has been introduced, with a focus on spine surgeries. The hardware setup and functional components of the prototype have been described. We aim at raising the safety and accuracy of the intervention by superimposing relevant operation-specific anatomical structures on the video stream of the operation field, which allows for the protection of risk structures like spinal nerves, blood vessels, and the spinal canal. In addition, targeted structures like tumors or herniated discs are better localized, and information about the shape and extent of such structures can be conveyed at different depth levels. From a surgical point of view, we have received positive feedback regarding the relevance and applicability of the presented approach. Our future work will focus on transferring the prototype to real operating suites and on evaluating the applicability of the concept to other surgery scenarios.

Acknowledgement. This work is funded by the German Ministry of Education and Science (BMBF) within the ViERforES project (No. 01IM08003C).

References

1. Lapeer R, Chen M, Gonzalez G, et al. Image-enhanced surgical navigation for endoscopic sinus surgery: evaluating calibration, registration and tracking. Int J Med Robot. 2008;4(1):32-45.
2. Birkfellner W, Figl M, Huber K, et al. A head-mounted operating binocular for augmented reality visualization in medicine - design and initial evaluation. IEEE Trans Med Imaging. 2002;21(8):991-7.
3. Edwards P, King A, Maurer C, et al. Design and evaluation of a system for microscope-assisted guided interventions. IEEE Trans Med Imaging. 2000;19(11):1082-93.
4. Ockert B, Bichlmeier C, Heining S, et al. Development of an augmented reality (AR) training environment for orthopedic surgery procedures. In: Proc CAOS; 2009.
5. Navab N, Feuerstein M, Bichlmeier C. Laparoscopic virtual mirror: new interaction paradigm for monitor based augmented reality. In: Proc IEEE Virtual Reality; 2007. p. 43-50.
6. Salah Z, Preim B, Samii A, et al. Enhanced intraoperative visualization for brain surgery: a prototypic simulation scenario. In: Proc CURAC; 2010. p. 125-30.
7. Salah Z, Preim B, Rose G. An approach for enhanced slice visualization utilizing augmented reality: algorithms and applications. In: Proc PICCIT; 2010. p. 1-6.