
Real-time self-calibration of a tracked augmented reality display

Zachary Baum, Andras Lasso, Tamas Ungi, Gabor Fichtinger
Laboratory for Percutaneous Surgery, Queen's University, Kingston, Canada

ABSTRACT

PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions, but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system.

METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real time. The software was built using the SlicerIGT extension of the open-source 3D Slicer application platform and the PLUS toolkit.

RESULTS: Accuracy evaluation of the image overlay for image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in the plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out of plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections.

CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.

KEYWORDS: Augmented Reality, Image-Guided Intervention, Image-Guided Therapy, Calibration, Visualization, 3D Slicer, SlicerIGT, PLUS Toolkit

1. PURPOSE

Inserting a needle percutaneously under Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) guidance is common practice for several medical procedures, such as biopsies and musculoskeletal injections. Given the current clinical standard for image-guided insertions, clinicians typically acquire multiple volumetric images while inserting the needle to find a suitable orientation, location, and trajectory and to assess progression. Multiple image acquisitions lead to longer treatment times, increased patient discomfort, and increased radiation exposure. Acquiring a single volumetric image and showing it in three dimensions at its actual position inside the patient would allow the clinician to see the target and insert the needle much as if working in a transparent material.

Early versions of such image overlay systems for image-guided needle interventions were mostly static [1, 2], and the clinician could not easily reposition the virtual display while the system was in use. These systems were fixed to the CT or MR imaging system or placed on a floor-mounted frame above the patient table. Mounting these systems required complex setup procedures, and the systems were prone to misalignment from unintentional contact with the hardware [3]. The static design limited access to the patient and did not allow a full range of clinically relevant tool trajectories. The first mobile image overlay systems consumed much of the useful operating room floor space, as a counterbalanced arm was necessary to support the display [3]. Anand et al. designed a mobile image overlay system with the goal of reducing the system's overall weight [3].

Use of a tablet computer to display the re-sliced image allowed the system to be handheld and completely wireless. The MicronTracker H3-60 (Claron Technology Inc., Toronto, Ontario, Canada) passive optical tracking camera was connected to a host computer to obtain position data. The tablet computer connected wirelessly to the host computer, received the re-sliced image, and displayed it. This led to low frame rates on the tablet screen, as large amounts of data had to be transferred between the host computer and the tablet itself. The hardware design of the system was fixed and could only be attached to one type of tablet display. This system also required the use of temporary markers for its calibration process. These markers were simply placed on the device and used to calculate the intermediate transforms for the viewbox to virtual calibration marker transform. Because this temporary marker was not fixed in place on the viewbox of the previous iteration, the calibration was variable.

The primary goal of our work was to create a handheld, tablet computer based mobile image overlay system with an automatic calibration process and a modular viewbox design, and to make it a self-contained system. Our mobile image overlay system allows self-calibration without the use of any temporary markers, achieving a consistent and precise calibration. An adaptable and modular design also allows different tablet computers to be used interchangeably. We have also made the system largely self-contained, requiring only the tracking information to be sent to the tablet computer via a wireless network.

2. METHODS

2.1 System description

A handheld viewbox, which attaches the mirror to the tablet and holds the optical markers, was designed to grip onto a tablet computer and a semi-transparent mirror. The viewbox is equipped with optical markers on the sides to determine the pose of the image overlay plane at all times during the procedure. Another optical marker is fixed onto the phantom or patient for registration of scanned images and the viewbox with respect to the phantom or patient's position. These markers are optically tracked using the MicronTracker optical tracker to give the position of the viewbox, and of the phantom or patient, with respect to an overlaid image plane. The tracking information from the MicronTracker is then sent from a laptop computer to the tablet computer via a wireless network. As only position measurement data are transferred, network bandwidth is not critical in our system.

Once the tracking data are available on the tablet computer, a CT scan of the phantom is registered to the patient marker using landmark registration. Next, an image slice corresponding to the position of the virtual overlaid image is computed in real time, based on the pose information provided by the tracker, and displayed on the tablet computer's screen.

The overall design required a system that can be handheld and could be set up by one user without any programming. Only the MicronTracker optical tracker should need floor space in our setup, allowing the overall footprint of the system to be reduced. The system should be compact enough to be hand-held by the physician and used for exploration of the image volume over the patient.
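To make the landmark registration step concrete, the following is a minimal sketch of point-based rigid registration (the Kabsch/Horn method) between matched landmark sets. The function and array names are our own illustration, not the paper's code; in practice, 3D Slicer's built-in fiducial registration tools can perform this step.

```python
import numpy as np

def register_landmarks(image_points, patient_points):
    """Rigid registration of matched 3D landmark sets (Kabsch/Horn method).
    image_points, patient_points: (N, 3) arrays of corresponding points.
    Returns a 4x4 transform mapping image coordinates to patient coordinates."""
    src_mean = image_points.mean(axis=0)
    tgt_mean = patient_points.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (image_points - src_mean).T @ (patient_points - tgt_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_mean - R @ src_mean
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Landmarks probed with the tracked stylus (expressed in the patient reference marker frame) and the same anatomical points selected in the CT volume would be passed as the two matched arrays.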
2.2 Design

The viewbox for the mobile image overlay system was designed using SolidWorks 2015 SP03 (Dassault Systèmes, Vélizy, France) and 3D printed in ABS-M30 production-grade thermoplastic on a Dimension 1200es SST (Stratasys Inc., Eden Prairie, Minnesota, U.S.A.) rapid prototyping system for use with a 12-inch Surface Pro 3 (Microsoft Corporation, Redmond, Washington, U.S.A.) tablet computer (Figure 1).

Figure 1: Viewbox mechanical design.

The viewbox design is modular and has reusable and replaceable parts for use with other tablet computers. The design includes six parts that are compatible with other tablet computers and three parts that are unique to the size of the device. Parts are fastened together and then attached to the tablet computer. A 17.7 cm by 12.5 cm glass beamsplitter (Edmund Optics Inc., Barrington, New Jersey, U.S.A.) is used as the semi-transparent mirror. An optional upper frame can also be attached to the tablet computer; it is compatible with Video Electronics Standards Association (VESA) MIS-B and MIS-C wall-mountable interfaces, should a wall or ceiling mount be required. For the handheld setup, the upper frame is not required. There are attachments for a handle interface on the side of the lower mount.

2.3 Calibration process

The system's calibration process has two components. The first is the calibration of the system itself (Figure 2); the second is the calibration of the system to the patient, which is a separate, one-time per-patient procedure.

Figure 2: Coordinate system diagram of the required transforms for the calibration process of the system.

2.3.1 System calibration

Three optically tracked markers are needed for the automatic calibration process. The viewbox is equipped with removable and interchangeable optically tracked markers on either side of the screen, and optically tracked markers are also displayed on the tablet computer's screen and in the virtual overlay plane.

During the calibration process, the system is moved through space to calculate and register the required transforms. These transforms allow us to determine where the markers are relative to the virtual image. We utilize the capability of the MicronTracker optical tracker to recognize black-and-white patterns. These markers are printed on the sides of the viewbox; additional screen-rendered markers are displayed on the tablet computer's screen and visible in the virtual image (Figure 3). These markers are captured and recognized using the template identification feature before beginning the calibration process. This ensures that the MicronTracker will track the correct markers at run time, should other tools be present in the workspace.

The transform ^{VCM}T_{VB} between the image overlay plane virtual calibration marker (VCM) and the side of the viewbox (VB) is calculated by composing two transforms. The first, ^{VCM}T_{CM}, is the transform between the image overlay plane virtual calibration marker and the rendered screen calibration marker (CM). The second, ^{CM}T_{VB}, is the transform between the rendered screen calibration marker and the side of the viewbox. The resulting transform is

^{VCM}T_{VB} = ^{VCM}T_{CM} · ^{CM}T_{VB}

Figure 3: Calculation of the Calibration Marker (CM) to Virtual Calibration Marker (VCM) transform and of the Viewbox (VB) to Calibration Marker (CM) transform.

As seen in Figure 2, the rendered markers are displayed at an angle. This prevents the MicronTracker from assuming that the two markers are orthogonal in any plane when it is tracking them during use. It was determined that the markers needed to be offset by seven degrees before the MicronTracker would no longer see them as orthogonal. The final transform required to operate the system corrects for this: because the image overlay plane is calibrated at a 7-degree angle, the image must be corrected so that it is displayed at the correct angle. This requires the calculation of ^{CV}T_{VCM}, which is computed by the module during calibration and applied in the overall transform hierarchy.

2.3.2 Patient registration

To complete the registration process, the location of the patient is registered to an attached reference marker with an optically tracked stylus. The scanned image volume is then registered to the patient by matching the probed points with the same points in the scanned image. The position of the image overlay plane with respect to the patient is determined by the real-time calculation of ^{VCM}T_{REF}. This transform is calculated by multiplying the transform resulting from the calibration process, ^{VCM}T_{VB}, by the transform between the side of the viewbox and the patient reference (REF), ^{VB}T_{REF}. This gives the transform between the image overlay plane virtual calibration marker and the patient marker as

^{VCM}T_{REF} = ^{VCM}T_{VB} · ^{VB}T_{REF}
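As a concrete illustration of the two compositions above, here is a minimal sketch using 4x4 homogeneous matrices in NumPy. The variable names mirror the transform notation; the identity placeholders stand in for values that would come from the tracker and the calibration (our illustration, not the paper's code).

```python
import numpy as np

# Convention: parent_T_child maps child-frame coordinates into the
# parent frame, so compositions chain left to right and inner frames
# cancel: VCM_T_CM @ CM_T_VB yields VCM_T_VB.
VCM_T_CM = np.eye(4)  # virtual calibration marker <- screen marker
CM_T_VB = np.eye(4)   # screen marker <- viewbox marker
VB_T_REF = np.eye(4)  # viewbox marker <- patient reference (per frame)

# Section 2.3.1: one-time system calibration.
VCM_T_VB = VCM_T_CM @ CM_T_VB

# Section 2.3.2: recomputed in real time as tracking data arrives.
VCM_T_REF = VCM_T_VB @ VB_T_REF
```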

2.4 3D Slicer module development

A custom 3D Slicer [5] module was developed for the system (Figure 4). It is contained in the image overlay system module of the overall software architecture, as seen in Figure 5. The Image Overlay module was implemented in the Python programming language as a scripted module within 3D Slicer. The module has two core functionalities: the first provides the self-calibration process, and the second provides the visualization of the re-sliced image.

2.4.1 Self-calibration

The calibration functionality of the module has two distinct features. The first is used for rendering the calibration marker, and the second is used for calculating the remaining required transforms after ^{VCM}T_{CM} and ^{CM}T_{VB} have been calculated.

Figure 4: User interface in 3D Slicer showing the developed module and a 3D volume reconstruction of the CT volume used for setup and testing.

Through a simple graphical interface, the module allows the user to input the size of the calibration marker to be drawn on screen for the self-calibration process. Once input is given, the module draws the marker at the specified size by rendering an image volume. This image volume is then shown on the tablet computer's screen for viewing by the MicronTracker optical tracker. After the ^{VCM}T_{CM} and ^{CM}T_{VB} transforms have been calculated through the process described in section 2.3.1, the user can define the ^{CV}T_{VCM} transform from the interface. This transform describes the translation, rotation, and scaling between the center of the virtual calibration marker and the actual image volume from the CT or MRI scan.

Once all of the static transforms have been defined through the calibration processes described in sections 2.3.1 and 2.3.2, the module allows the user to select the required transforms, ^{REF}T_{RAS}, ^{VB}T_{REF}, ^{CM}T_{VB}, ^{VCM}T_{CM} and ^{CV}T_{VCM}, from existing transforms within 3D Slicer. The user is then able to create the required transform hierarchy for use with the system.

2.4.2 Visualization

The visualization functionality of the module is simple. This part of the module allows the user to display a selected image volume on the screen using 3D Slicer. When the user clicks the Display to Screen button after completing the calibration process, the Volume Re-slice Driver, a module within SlicerIGT (www.slicerigt.org) for setting re-slicing planes of image volumes using linear transforms, is accessed. The transform hierarchy that was built by the user with the calibration functionality of the module is then used to re-slice the patient image volume. As the required transforms update in real time, the image displayed on the tablet screen updates in real time as well. When the viewbox is moved through space above the patient, the Volume Re-slice Driver continues to access the transform hierarchy and update the image.
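A minimal sketch of how such a transform hierarchy could be assembled in 3D Slicer's Python environment follows. The node names are hypothetical stand-ins for the transforms listed in section 2.4.1, and the nesting order shown is one plausible arrangement rather than the module's actual code.

```python
import slicer

# Hypothetical names for transform nodes already in the scene.
chain = ["REF_T_RAS", "VB_T_REF", "CM_T_VB", "VCM_T_CM", "CV_T_VCM"]
nodes = [slicer.util.getNode(name) for name in chain]

# Nest the transforms: each node observes the one above it, so a pose
# update anywhere in the chain propagates to everything below it.
for parent, child in zip(nodes[:-1], nodes[1:]):
    child.SetAndObserveTransformNodeID(parent.GetID())

# Place the patient image volume under the full hierarchy; the re-sliced
# view then follows the tracked viewbox in real time.
volume = slicer.util.getNode("PatientVolume")  # hypothetical node name
volume.SetAndObserveTransformNodeID(nodes[-1].GetID())
```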

2.5 Implementation

3D Slicer, running natively on the tablet, processes the scanned CT or MRI image volume. With all re-slicing and visualization running on the tablet, only real-time tracking data has to be sent over the wireless network. The MicronTracker is connected to a computer that runs the PLUS (www.plustoolkit.org) server application [6] to acquire tracking data and stream it in real time through the OpenIGTLink protocol [7]. The Volume Re-slice Driver module within SlicerIGT re-slices the image volume depending on the pose information obtained from the MicronTracker. The re-sliced image is then shown on the tablet display through the developed image overlay module.

The PLUS toolkit was developed to allow users to gather tracking and image data from the various hardware used in image-guided interventions [6], such as the MicronTracker used in our system. OpenIGTLink standardizes how data is transmitted between the components of an image-guided intervention system; it allows data and images for tool tracking to be transferred continuously [7].

Figure 5: Software architecture of the mobile image overlay system using 3D Slicer, PLUS and SlicerIGT. The MicronTracker optical tracker and image overlay system are hardware.
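To illustrate the streaming link between the PLUS server and 3D Slicer, a client connection can be opened from Slicer's Python console roughly as follows. This assumes the OpenIGTLinkIF module is available in the Slicer installation; the laptop's network address is a hypothetical placeholder, and 18944 is the default OpenIGTLink port used by PLUS.

```python
import slicer

# Create a client connector to the PlusServer instance on the laptop.
connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeClient("192.168.0.10", 18944)  # hypothetical address
connector.Start()

# Once connected, each streamed pose (e.g., the viewbox and reference
# marker transforms) appears in the scene as a transform node that
# updates in real time and can be placed into the hierarchy above.
```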

2.6 Testing

To validate the accuracy of the displayed image overlay, virtual target points were defined that covered the full field of the overlay. Each target point was marked with the tip of an optically tracked stylus, guided only by looking at the virtual image, without the phantom in place. When the operator was confident that the stylus tip was in the correct position, the position of the stylus tip was recorded using the optical markers attached directly to the stylus.

3. RESULTS

3.1 Visualization frame rates

The mobile image overlay system developed by Anand et al. transmitted the re-sliced images wirelessly [4], which yielded frame rates of 2 frames per second (FPS) or lower. These slow speeds meant that clinicians would have to pre-plan all insertions on a computer where they could review the image conveniently. By storing the volumetric image locally and performing the image re-slicing on the tablet, the real-time pose information is the only information transmitted wirelessly to the tablet. This allows visualization at approximately the capture rate of the tracking system used, 9 FPS. This frame rate may allow clinicians to browse the volume to find the optimal tool placement path in real time and start inserting the needle immediately.

3.2 System design

The overall design required a system that can be handheld and could be set up by one user without any need for programming. With only the MicronTracker optical tracker requiring floor space, and the laptop computer used for gathering the tracking data needing only to be in close proximity, the overall footprint of the system is reduced (Figure 6). The system can be hand-held by the physician and used for exploration of the image volume over the patient.

Figure 6: Setup of the system and MicronTracker optical tracking device.

3.3 Calibration accuracy

Distances were calculated between each predefined target point and the corresponding point that was placed with the stylus. From the 21 points used, the mean in-plane distance between each pair of points was 0.99 mm (75th percentile 1.45 mm, 95th percentile 1.93 mm). The mean out-of-plane distance between points was 0.61 mm (75th percentile 0.97 mm, 95th percentile 1.19 mm) (Figure 7). The difference between the in-plane and out-of-plane distances is attributed to tracking error from the MicronTracker: in our setup, the tracked pose was observed to jitter rapidly in the in-plane direction due to the lighting conditions in the room. Given this error, further testing in different workspaces and under different lighting will be required.

Figure 7: Box and whisker plot of in-plane and out-of-plane distances.
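As a sketch of how the in-plane and out-of-plane components reported above can be computed, assuming matched target and probed point arrays and the overlay plane normal are available (the function names and use of NumPy are our own illustration):

```python
import numpy as np

def overlay_error_components(targets, probed, plane_normal):
    """Split target-to-probed error vectors into components in the overlay
    plane and along its normal. targets, probed: (N, 3); plane_normal: (3,)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = probed - targets                 # error vectors
    along_n = d @ n                      # signed out-of-plane distances
    out_of_plane = np.abs(along_n)
    in_plane = np.linalg.norm(d - np.outer(along_n, n), axis=1)
    return in_plane, out_of_plane

def summarize(err):
    """Mean and the 75th/95th percentiles, as reported in section 3.3."""
    return err.mean(), np.percentile(err, 75), np.percentile(err, 95)
```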

4. CONCLUSION

The objective of the mobile image overlay system was to design and validate the accuracy of an automatic self-calibration process used to set up the system for image-guided percutaneous needle interventions. Testing the accuracy of the mobile image overlay system showed that it can guide a tool with accuracy suitable for facet joint injections and other musculoskeletal needle placements, for which clinicians must be able to place a needle within a few millimeters of the target point. We were also able to overcome the low frame rates of the mobile image overlay system developed by Anand et al. [4], improving them by at least 4.5 times.

ACKNOWLEDGMENTS

Gabor Fichtinger is supported as a Cancer Care Ontario Research Chair in Cancer Imaging.

REFERENCES

[1] Fichtinger et al., "Image Overlay Guidance for Needle Insertion in CT Scanner," IEEE Transactions on Biomedical Engineering, 52(8):1415-1424, 2005.
[2] Fischer et al., "MRI Image Overlay: Application to Arthrography Needle Insertion," Computer Aided Surgery, 12(1):2-14, 2007.
[3] Anand et al., "Workspace Analysis and Calibration Method for Mobile Image Overlay System used for Image-Guided Interventions," The Hamlyn Symposium on Medical Robotics, 2013.
[4] Anand et al., "Design and Development of a Mobile Image Overlay System for Image-Guided Needle Interventions," 36th Annual International Conference of the IEEE EMBS, 6159-6162, 2014.
[5] Fedorov, A., Beichel, R., Kalpathy-Cramer, J., et al., "3D Slicer as an image computing platform for the Quantitative Imaging Network," Magnetic Resonance Imaging, 30(9):1323-1341, 2012.
[6] Lasso et al., "PLUS: open-source toolkit for ultrasound-guided intervention systems," IEEE Transactions on Biomedical Engineering, 61(10):2527-2537, 2014.
[7] Tokuda et al., "OpenIGTLink: an open network protocol for image-guided therapy environment," International Journal of Medical Robotics and Computer Assisted Surgery, 5(4):423-434, 2009.