Use of Ultrasound and Computer Vision for 3D Reconstruction


Ruben Machucho-Cadena 1, Eduardo Moya-Sánchez 1, Sergio de la Cruz-Rodríguez 2, and Eduardo Bayro-Corrochano 1

1 CINVESTAV, Unidad Guadalajara, Departamento de Ingeniería Eléctrica y Ciencias de la Computación, Jalisco, México {rmachuch,emoya,edb}@gdl.cinvestav.mx
2 Instituto Superior Politécnico José A. Echeverría, Havana, Cuba sergio@electrica.cujae.edu.cu

Abstract. The main result of this work is an approach for reconstructing the 3D shape and pose of tumors from stereo endoscopic ultrasound images, for applications in laparoscopy, using Conformal Geometric Algebra. We record stereo endoscopic and ultrasonic images simultaneously and then calculate the 3D pose of the ultrasound probe using conformal geometric algebra. Once the 3D position of the ultrasound probe is known, we compound multiple 2D ultrasound images into a 3D volume. To segment the 2D ultrasound images we used morphological operators and compared their performance with that obtained using level set methods.

1 Introduction

Endoscopy has become an increasing part of daily work in many subspecialties of medicine, and the spectrum of applications and devices has grown exponentially [1]. The use of a stereo endoscope (i.e. an endoscope with two cameras instead of one) provides more information about the scene: two slightly different views of the same scene at the same time allow the calculation of spatial coordinates [2]. Ultrasound, on the other hand, is a rapid, effective, radiation-free, portable and safe imaging modality [3]. However, endoscopic images cannot see beyond opaque or occluded structures. The incorporation of ultrasound images into stereo endoscopic operative procedures generates more visibility in the occluded regions.
By superimposing Stereo Endoscopic Ultrasound (SEUS) images on the operative field (in the case of laparoscopic or robotic procedures), it would be possible to integrate them into a 3D model. This 3D model can help surgeons to better locate structures such as tumors during the course of the operation. Virtually all available methods use either a magnetic or an optic tracking system (or even a combination of both) to locate the tip of the US probe (USP) in 3D space [4]. These systems are sometimes difficult to implement in intraoperative scenarios, because the ferromagnetic properties of surgical instruments can affect magnetic tracking systems [5].

E. Bayro-Corrochano and J.-O. Eklundh (Eds.): CIARP 2009, LNCS 5856, pp. 782–789, 2009. © Springer-Verlag Berlin Heidelberg 2009

We extend a previously proposed method [4], which used only monocular endoscopic images, to calculate the pose (i.e. the position and orientation) of the USP. In this paper we use stereo endoscopic images and apply our approach to laparoscopy.

2 System Setup

Our experimental setup is illustrated in Figure 1. The equipment setup is as follows: the endoneurosonography (ENS) equipment provides an ultrasound probe (USP) that is connected to an Aloka ultrasound system. The USP is introduced through a channel in the stereo endoscope (Fig. 1a) and is observed by the endoscope. The USP is flexible and rotates around its longitudinal axis at about 60 rpm. It can also move back and forth, and since the channel is wider than the USP there is also a random movement around the channel. The US image is orthogonal to the USP axis. We know that in a small interval of time Δt the USP is fixed while the two endoscopic cameras undergo a movement; this is equivalent to the inverse motion, in which the endoscopic camera is fixed and the ultrasound probe moves.

Fig. 1. a) Experimental setup. b) Equipment, telescope and camera of the stereo endoscope.

3 Tracking the Ultrasound Probe

We have used multiple view geometry to process the stereo images; Figure 2a shows a pair of rectified stereo images and Fig. 2b its depth map. The cameras were calibrated using the method described in [6]. We track the USP throughout the endoscopic camera images using a particle filter, together with an auxiliary method based on thresholding in HSV space to improve the tracking, as follows:
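As an illustration of how a depth map is computed from a rectified pair, here is a minimal block-matching disparity sketch in plain NumPy. The SAD cost, window size and disparity range are illustrative assumptions; the paper does not specify its stereo-matching implementation.

```python
import numpy as np

def disparity_map(left, right, max_disp=16, win=5):
    """Naive block matching on a rectified grayscale pair (SAD cost).
    For each left-image window, search the same scanline in the right
    image over max_disp candidate shifts and keep the best match."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1].astype(np.int32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # winning disparity
    return disp
```

Because the images are rectified, the search is restricted to one horizontal scanline per pixel, which is what makes the depth map tractable.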

Fig. 2. a) Pair of rectified stereo images. b) Its depth map.

3.1 The Particle Filter Tracker

We use the particle filter to find the USP in the endoscopic images in a similar way to our previous paper [4], with the addition of some heuristics. The resulting process is: throw 35 particles on each image and test the likelihood of each of them with respect to a reference histogram. The likelihood is computed from the distance between the color histograms of the model and the current particle. The particle with the highest likelihood value is the winner; we take the orientation, position and scale of the winning particle as an estimate of the USP in the endoscopic camera image. If the likelihood value of the best particle is higher than a threshold, we use this information to update the reference model, that is, the reference histogram; we used a threshold of 0.17 to update the reference model. If the likelihood value of the best particle is lower than a second threshold, we set a flag to enable an additional tracking process in order to find the USP; this additional process is explained in Section 3.2. We enable the additional tracking process when the likelihood value is less than 0.08; otherwise the additional process is disabled. We select the 35 particles from a set of particles that we built for the possible positions, orientations and scales of the USP in the endoscopic image. This set of particles is built off-line and is saved in a text file using only a chain code for each particle; we also save the size (scale) and the orientation of each particle. At the beginning of the tracking process we read the text file just once and store this information in a data structure. The threshold values and the number of particles (35) were obtained experimentally. Figure 3a shows the model used to obtain the initial reference histogram. Figure 3b shows a mask used to build the set of particles.
Four points with different colors are used as reference in the construction of the set of particles. The positions of the four points are also saved in the aforementioned text file and are used to identify the top of the particle, in order to place it correctly on the frame of the endoscopic camera. Figure 3c shows the camera frame used; it is built just once at the beginning of the tracking process. Figure 4 shows some particles taken from the set of particles. The results of tracking the USP are shown in Figure 6, which gives an example of tracking the ultrasound probe in the stereo endoscopic camera using the particle filter.

3.2 Tracking Based on Thresholding in HSV Space

This tracking process is based on thresholding in HSV space and is used as an auxiliary method to find the USP in the endoscopic camera when the likelihood value of the best particle is lower than a threshold. It works as follows: make a copy of the original image (to preserve it). Convert the original image from RGB to HSV space. Make a binary image by selecting an interval of values from the saturation histogram; this interval should separate the USP from the rest of the image. We selected the interval [0, 50], where we observe the first peak of the histogram.

Fig. 3. a) Model used to obtain the initial reference histogram. b) Model mask used to build the set of particles. c) Camera frame, built just once at the beginning of the tracking process.

Fig. 4. Some particles selected from the set of particles.

Build a new image, called sectors, from the binary image and the Canny filter as follows: if the pixel (x, y) has an edge (from Canny) and is part of the mask (binary image), then (x, y) belongs to sectors; see Fig. 5e. Apply the closing morphological operator to the sectors image in order to fill small holes and to join broken edges; see Fig. 5f. Apply the chain code to compute the smallest areas of the sectors image and eliminate them; see Fig. 5g. Get the initial, middle and final points of the segmented USP (from the sectors image) on the camera frame, and use this information to throw nine particles (replacing the nine particles with the lowest likelihood values) in order to improve the tracking process. Figure 5a illustrates the saturation histogram for the endoscopic camera image shown in Fig. 5b; Figure 5c shows its saturation image, and Figure 5d is the result of applying the Canny filter to the original image. This tracking method is used only as a support method because it does not take temporal information into account, and because it is also sensitive to partial occlusions of the USP in the endoscopic camera images, as well as to background, illumination and contrast variations.

Fig. 5. Tracking process in HSV space.

Fig. 6. Example of tracking of the USP. The best particle is shown in red.
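The likelihood test and model update of Section 3.1 can be sketched as follows. The hue-histogram representation, the Bhattacharyya coefficient as the similarity score, and the blending factor alpha are illustrative assumptions; the paper specifies only the two thresholds (0.17 and 0.08) and a color-histogram distance.

```python
import numpy as np

def hsv_histogram(patch, bins=16):
    """Normalized hue histogram of an HSV patch (H channel in [0, 180))."""
    hist, _ = np.histogram(patch[..., 0], bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def likelihood(ref_hist, particle_hist):
    """Bhattacharyya coefficient: 1.0 for identical histograms."""
    return float(np.sum(np.sqrt(ref_hist * particle_hist)))

UPDATE_T, AUX_T = 0.17, 0.08   # thresholds from the paper

def step(ref_hist, particle_hists, alpha=0.1):
    """One tracker step: pick the best particle and maybe update the model."""
    scores = [likelihood(ref_hist, h) for h in particle_hists]
    best = int(np.argmax(scores))
    if scores[best] > UPDATE_T:          # confident: refresh the reference model
        ref_hist = (1 - alpha) * ref_hist + alpha * particle_hists[best]
    aux_needed = scores[best] < AUX_T    # fall back to HSV thresholding (Sec. 3.2)
    return best, ref_hist, aux_needed
```

The two thresholds split the tracker into three regimes: confident (update the model), uncertain (keep tracking without updating), and lost (enable the HSV-space fallback).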

4 Ultrasound Image Processing

We have used two methods to process the ultrasound images; the first is based on morphological operators [4] and the second is the level set method. The level set method uses an initial seed on the image. This seed evolves over time until zero velocity is reached, the curve collapses, or a maximum number of iterations is reached. To evolve the curve, the method uses two lists, called Lin and Lout [7]. We present the results of the processing and a comparison between both methods; they work independently of the tumor characteristics.

Figure 7 shows the results obtained using morphological operators. Figure 7a is an original ultrasound image. In Figure 7b the central part of the image is excluded, because it contains only noise, and the ROI is selected. The binary mask obtained by this method, which will be applied to the original image, is shown in Figure 7c, and Figure 7d shows the result of the segmentation. Figure 8 shows the results obtained using the level set method. Figure 8a is an original ultrasound image. Figure 8c shows the selected ROI. Figure 8d shows the initial seed applied to Figure 8c. Figure 8e shows the collapsed curve. Figure 8f is the binary mask obtained from Figure 8e; this mask is applied to the original image to obtain the result of the segmentation (Figure 8b). Both figures were obtained from the Aloka ultrasound equipment using a box-shaped rubber phantom.

4.1 Comparison between Both Methods

We obtained a processing time of 0.005305 seconds for the morphological operators method versus 0.009 seconds for the level set method, that is, 188 fps versus 111 fps. We recommend both methods for on-line implementation, because they are fast and reliable.

Fig. 7. Isolating the tumor. a) Original US image to be segmented. b) The central part of the image is excluded. c) ROI. d) Result of segmentation.
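The full two-list (Lin/Lout) curve evolution of [7] is beyond a short listing, but the seed-evolution idea can be illustrated with a much simpler intensity-based region grower: the region expands from the seed until it reaches pixels whose intensity differs too much. The tolerance parameter and 4-connectivity are illustrative assumptions, not the paper's method.

```python
from collections import deque
import numpy as np

def grow_from_seed(img, seed, tol=20, max_iter=100000):
    """Simplified stand-in for seed evolution: BFS region growing.
    A pixel joins the region if its intensity is within `tol` of the
    seed's intensity; growth stops when no frontier pixel qualifies."""
    h, w = img.shape
    ref = int(img[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    frontier = deque([seed])
    it = 0
    while frontier and it < max_iter:
        y, x = frontier.popleft()
        it += 1
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - ref) <= tol):
                region[ny, nx] = True
                frontier.append((ny, nx))
    return region  # binary mask, analogous to Fig. 8f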

Fig. 8. Segmentation using the level set method.

5 Calculating the 3D Pose of the Tumor

This work makes use of Conformal Geometric Algebra (CGA) to represent geometric entities (points, lines, planes, spheres, etc.) in a compact and powerful form [8]. CGA preserves the Euclidean metric and adds two basis vectors, e₊ and e₋ (where e₊² = 1 and e₋² = −1), which are used to define the point at the origin, e₀ = ½(e₋ − e₊), and the point at infinity, e∞ = e₋ + e₊. Points in CGA are related to the Euclidean space by

p = p + ½p²e∞ + e₀.

A sphere in dual form is represented as the wedge of four conformal points lying on the sphere, s = a ∧ b ∧ c ∧ d; its radius ρ and its center p in R³ can be obtained using

ρ² = s² / (s · e∞)²,   p = s / (s · e∞) + ½ρ²e∞.

A plane in dual form is defined like a sphere, but with the last point at infinity: π = a ∧ b ∧ c ∧ e∞. A line in dual form is represented as the wedge of two points and the point at infinity: L = a ∧ b ∧ e∞. A line can also be calculated as the intersection of two planes: L = π₁ ∧ π₂. This equation is used to calculate the 3D line that represents the ultrasound probe axis. To translate a point p₁ by a distance d₂ along the direction of a line and obtain p₂:

T = exp(½d₂L),   p₂ = T p₁ T̃.

The last equation is used to find the position of the ultrasound sensor in order to place the segmented ultrasound image in 3D space; here p₁ and p₂ represent the beginning and the end, respectively, of the best particle on the stereo endoscopic images, and d₂ is the retroprojected distance between them.

5.1 Results

Figure 9a shows a convex hull applied to a set of tumor slices in 3D space, and Figure 9b shows the part of the phantom used and the tumor.

6 Conclusions

We have addressed the problem of obtaining 3D information from joint stereo endoscopic and ultrasound images obtained with SEUS equipment. We have used

conformal geometric algebra to calculate the pose of the ultrasound sensor in order to place the segmented tumor in 3D space. To verify the segmentation of the tumor in the ultrasound images, we compared two different segmentation methods and obtained good results at a reasonable speed for both. Some preliminary results are presented.

Fig. 9. 3D reconstruction from a set of tumor slices.

References

1. Muller, U.: Possibilities and limitations of current stereo-endoscopy. Surg. Endosc. 18, 942–947 (2004)
2. Senemaud, J.: 3D-Tracking of minimal invasive instruments with a stereoendoscope. PhD thesis, Universität Karlsruhe, Karlsruhe, Germany (2008)
3. Nascimentoa, R.G., Jonathan, C., Solomon, S.B.: Current and future imaging for urologic interventions. Current Opinion in Urology 18, 116–121 (2008)
4. Machucho Cadena, R., de la Cruz Rodriguez, S., Bayro Corrochano, E.: Joint freehand ultrasound and endoscopic reconstruction of brain tumors, pp. 691–698 (2008)
5. Hummel, J.B., et al.: Design and application of an assessment protocol for electromagnetic tracking systems. Medical Physics 32, 2371–2379 (2005)
6. Bayro Corrochano, E., Daniilidis, K.: The dual quaternion approach to hand-eye calibration. In: International Conference on Pattern Recognition, vol. 1 (1996)
7. Shi, Y.: Object-based Dynamic Imaging with Level Set Methods. PhD thesis, Boston University, Boston, U.S.A. (2005)
8. Bayro Corrochano, E.: Geometric Computing for Perception Action Systems: Concepts, Algorithms, and Scientific Applications. Springer, Heidelberg (2001)
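For readers without a CGA library at hand, the sphere, line and translation constructions of Section 5 have direct linear-algebra counterparts. The following sketch illustrates those formulas in plain NumPy; it is an equivalent computation, not the paper's implementation.

```python
import numpy as np

def sphere_through(a, b, c, d):
    """Center and radius of the sphere through four points: the Euclidean
    counterpart of the dual sphere s = a ^ b ^ c ^ d."""
    P = np.array([a, b, c, d], dtype=float)
    # |p|^2 = 2 c.p - k with k = |c|^2 - rho^2: linear in (c, k)
    A = np.hstack([2 * P, -np.ones((4, 1))])
    x = np.linalg.solve(A, (P ** 2).sum(axis=1))
    center, k = x[:3], x[3]
    return center, np.sqrt(center @ center - k)

def line_from_planes(n1, d1, n2, d2):
    """Intersection line of the planes n.x = d: counterpart of
    L = pi1 ^ pi2, used for the ultrasound probe axis."""
    u = np.cross(n1, n2)                       # line direction
    A = np.array([n1, n2, u], dtype=float)     # add u.x = 0 to pin one point
    p0 = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return p0, u / np.linalg.norm(u)

def translate_along_line(p1, direction, d2):
    """Counterpart of p2 = T p1 T~ with T = exp(d2 L / 2): move p1 a
    distance d2 along the line direction."""
    u = np.asarray(direction, float)
    return np.asarray(p1, float) + d2 * u / np.linalg.norm(u)
```

The sphere solver uses the fact that |p − c|² = ρ² is linear in the center c and in k = |c|² − ρ², so four non-coplanar points determine the sphere through one 4×4 solve.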