15-862 Computational Photography - Final Project
Endoscope Exploration on the Knee Surface

Chenyu Wu
Robotics Institute, Nov. 2005

Abstract

Endoscopes are widely used in minimally invasive surgery. However, the small size and heavy distortion of endoscope images limit their use in surgical planning and navigation. In this project, we stitch endoscope video frames together to obtain a large view of the knee surface, which enlarges the surgeon's effective view and enables exploration of the knee surface during the operation. We first correct the radial distortion using a spherical camera projection model. We then calibrate the camera by estimating the intrinsic parameters of the endoscope camera. With two trackers installed on the bone and the endoscope, we can recover the transformation from the bone to the endoscope. For each vertex on the bone surface, we compute texture coordinates based on this bone-to-endoscope transformation. Finally, texture mapping is used to stitch the endoscope images onto the surface.

1 Introduction

The trend in computer-assisted orthopedic surgery is toward true minimally invasive surgery (MIS). As a diagnostic tool for MIS, the endoscope and related research are attracting growing attention. Because of the endoscope's small field of view (FOV) and large image distortion, it is very difficult to relate individual images to the global anatomical shape. This is especially true for the bone surface: with little texture information, all the images look similar, and one easily gets lost looking at a single image. Why not create a large view of the object of interest from these disconnected images? Is it possible to recover the 3D surface from them? These questions have been explored for years; some important works are briefly described below, after which our method is proposed and detailed.

1.1 Previous Work

Many people are interested in navigation across endoscope images [1], [4]. Dey et al. [2] map endoscope images onto 3D surfaces via a 2D-to-3D mapping algorithm. They first register the position and orientation of the endoscope with an optical tracking system, and do the same for the 3D surface model. An optical model is then used to identify and extract the surface patch that is visible to the endoscope. Finally, a 3D-to-2D transform is applied to generate a virtual endoscope image, and the real image is texture-mapped onto it. Other work focuses on 3D reconstruction. Quartucci Forster and Tozzi [3] use a single endoscope image to reconstruct the 3D shape of anatomical objects. They model the camera with a spherical projection model and, after camera calibration, use a shape-from-shading technique to recover the 3D shape. A dichromatic model is used to obtain a Lambertian surface.

1.2 Our Work

We propose two applications of endoscope images in this project, both related to generating a larger view of the bone surface. First, by fixing the center of projection of the endoscope, we can warp all the images onto a 2D plane to create a panorama. If we define the orientation of the endoscope as a ray passing through the head and the end of the endoscope, fixing the center of projection means that all such rays intersect at one point, the center of projection. Under this assumption, we can warp each image by computing a homography from the tracking information (a minimal sketch of this rotation-only warp is given below). However, lacking some necessary equipment, we had to postpone this experiment. Second, without the above assumption, we use an existing 3D surface to help stitch the images: after correcting the radial distortion, we register each image to its corresponding 3D anatomical position based on the tracking information, patch the image onto the local surface with texture mapping, and apply blending to generate a large 3D view.
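Although the panorama experiment was postponed, the warp it relies on is standard: two views sharing a center of projection are related by the homography H = K R K^{-1}, where R is the relative rotation between the views. The following Python sketch illustrates this; the intrinsic matrix, rotation vector, and file name are illustrative assumptions, not values from the project.

```python
import numpy as np
import cv2

def rotation_homography(K, R):
    """Homography between two views sharing a center of projection.

    For a pure camera rotation R (view 1 -> view 2), image points map
    as x2 ~ K R K^{-1} x1, with K the 3x3 intrinsic matrix.
    """
    return K @ R @ np.linalg.inv(K)

# Hypothetical values for illustration only.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Relative rotation between two endoscope poses, e.g. derived from the
# optical tracker; here a small synthetic pan about the y axis.
R, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))
frame1 = cv2.imread("frame1.png")  # hypothetical frame
warped = cv2.warpPerspective(frame1, rotation_homography(K, R), (640, 480))
```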
2 Methods

In our work, we use a spherical camera projection model to correct the radial distortion of the endoscope camera. We then calibrate the camera by estimating its intrinsic parameters from a set of control points. Finally, we compute texture coordinates for each vertex on the surface based on the tracking information and use texture mapping to stitch all the endoscope images into a large 3D image map.

2.1 Calibration

2.1.1 Image Calibration

There are many kinds of endoscope lenses, such as fisheye and wide-angle. The endoscope used in our experiment has a fisheye lens, so the resulting images are heavily distorted. This deformation cannot be recovered by computing a homography; instead we model it with the spherical camera projection model. In this model, a 3D point is first projected onto a spherical surface of radius f centered at the projection center, and the resulting point on the sphere is then projected onto the image plane. The model maps 3D space points onto a sphere of radius f and covers a 180° field of view [3]. Let (x, y, z) be the coordinates of a point in 3D space, with origin at the projection center and the z axis along the camera's optical axis, and let (u, v) be the (distorted) coordinates in the image plane. The warping transformations are

$$u = \frac{f x}{\sqrt{x^2 + y^2 + z^2}} \quad (1)$$

$$v = \frac{f y}{\sqrt{x^2 + y^2 + z^2}} \quad (2)$$

We printed checkerboard patterns of different sizes on paper, captured them with the endoscope, and applied the warping transformations above to the captured images while tuning the parameter f. Judging the correction visually, we obtained a value of f for each pattern size and used the average as the distortion parameter. During this calculation we assume every pixel in the image has the same z value, which we also tuned for best results. Figures 1 and 2 show the results of the radial distortion correction: the distorted checkerboard and grid images are in the top row, and the corrected images in the bottom row. A sketch of this correction step is given below.

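For concreteness, here is a minimal Python sketch of the correction as described: every corrected pixel is treated as lying on a plane of constant depth, pushed through Eqs. (1)-(2) to find its location in the distorted input, and sampled there. The function name and the parameters (cx, cy) for the image center and z0 for the common depth are our own notation, not from the original report.

```python
import numpy as np
import cv2

def undistort_spherical(img, f, z0, cx, cy):
    """Correct fisheye distortion under the spherical projection model.

    Each pixel of the corrected output image is treated as a 3D point
    (x, y, z0) on a plane of constant depth, mapped through Eqs. (1)-(2)
    to its position in the distorted input image, and sampled there.
    """
    h, w = img.shape[:2]
    # Output pixel grid, re-centered on the assumed image center.
    xs, ys = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    r = np.sqrt(xs**2 + ys**2 + z0**2)
    # Eqs. (1)-(2): where each output pixel lands in the distorted image.
    map_u = (f * xs / r + cx).astype(np.float32)
    map_v = (f * ys / r + cy).astype(np.float32)
    return cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR)
```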
2.1.2 Camera Calibration

Camera calibration is a classical problem in computer vision: the transformation from the 3D view coordinates to the 2D image plane must be computed. This 3D-to-2D transformation is described by

$$\begin{pmatrix} \omega u_v \\ \omega v_v \\ \omega \end{pmatrix} = K \begin{pmatrix} x_v \\ y_v \\ z_v \\ \omega_v \end{pmatrix} \quad (3)$$

where K is a 3×4 transformation matrix defined by the camera intrinsic parameters, (x_v, y_v, z_v, ω_v) are the 3D homogeneous view coordinates of the endoscope, and (ωu_v, ωv_v, ω) are the projected homogeneous 2D coordinates on the image plane. By labelling a set of control points with known (x_v, y_v, z_v, ω_v) and (ωu_v, ωv_v, ω), we can recover the matrix K by least squares.
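The report specifies the estimation only as "least squares"; a standard way to solve Eq. (3) from point correspondences is the direct linear transform (DLT), sketched below under that assumption.

```python
import numpy as np

def calibrate_projection(X_view, x_img):
    """Estimate the 3x4 matrix K of Eq. (3) from control points (DLT).

    X_view: (N, 3) control-point positions in view coordinates.
    x_img:  (N, 2) their measured image coordinates.
    Each correspondence gives two linear equations in the 12 entries of
    K; the least-squares solution (up to scale) is the right singular
    vector associated with the smallest singular value.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(X_view, x_img):
        P = [X, Y, Z, 1.0]
        rows.append(P + [0.0, 0.0, 0.0, 0.0] + [-u * p for p in P])
        rows.append([0.0, 0.0, 0.0, 0.0] + P + [-v * p for p in P])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```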
2.2 Texture Patching

2.2.1 Tracking System

We build a two-tracker optical tracking system, with one tracker on the bone and the other on the endoscope, to obtain the position and orientation of both. From this registration we can compute the transformation from the bone surface to the endoscope, that is, the transformation from world coordinates to endoscope view coordinates. Denoting this transformation by T, we have

$$\begin{pmatrix} x_v \\ y_v \\ z_v \\ \omega_v \end{pmatrix} = T \begin{pmatrix} x_w \\ y_w \\ z_w \\ \omega_w \end{pmatrix} \quad (4)$$

where (x_w, y_w, z_w, ω_w) are the 3D homogeneous world coordinates. Figure 3 illustrates the tracking system.

2.2.2 Texture Mapping

With the transformation T from world coordinates to view coordinates and the transformation K from view coordinates to the image plane, we can project the 3D scene onto the 2D image plane:

$$\omega p = K T P \quad (5)$$

where p = (u, v, 1) is a point on the 2D image plane and P = (x, y, z, 1) is a point in 3D space. (Since K is 3×4 and T is 4×4, the projection applies T first and then K.) Endoscope images are 2D projections of 3D scenes. For each small patch on the surface, we use the transformation above to compute texture coordinates for every vertex, and then use texture mapping to patch the images onto the surface, as sketched below.
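A minimal Python sketch of this texture-coordinate computation follows; normalizing by the image size to get (s, t) coordinates in [0, 1] is our convention, not specified in the report.

```python
import numpy as np

def texture_coords(vertices, K, T, width, height):
    """Project mesh vertices into an endoscope frame via Eq. (5).

    vertices: (N, 3) vertex positions in world coordinates.
    K: 3x4 view-to-image matrix (Sec. 2.1.2).
    T: 4x4 world-to-view transform from the trackers (Sec. 2.2.1).
    Returns (N, 2) texture coordinates normalized to [0, 1].
    """
    P = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous
    proj = (K @ T @ P.T).T           # rows are omega * (u, v, 1)
    uv = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return uv / np.array([width, height], dtype=float)
```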

Figure 1: Radial distortion correction. Top row: test checkerboard images. Bottom row: corrected images.

3 Experiments and Results

3.1 Experiments

We captured 743 endoscope image frames of a knee model; each frame is a 640×480 black-and-white image. We also built a 3D mesh for this knee model. For image calibration we printed checkerboard images in 3 different sizes and grid images in 3 different sizes, and used

$$f = \frac{1}{6} \sum_i f_i$$

as the distortion parameter, where f_i is the distortion parameter estimated from test image i. We collected 100 control points (illustrated in Figure 4) for camera calibration.

3.2 Results

Figure 5 shows the 3D bone surface. Figure 6 shows the first 20 endoscope frames texture-mapped onto the surface. (Only the texture-mapped triangles are shown, since we still have trouble displaying both the mesh and the texture map at the same time.)

4 Summary

In this project, we developed a framework to stitch a set of endoscope images onto a bone surface so that a large view of the surface can be obtained. We use a spherical camera projection model to correct the radial distortion. We build a two-tracker system to track the position and orientation of the endoscope and the bone surface, from which we compute the transformation from world coordinates to endoscope coordinates. After camera calibration, we also have the transformation from 3D space to the 2D image plane. By computing texture coordinates for each vertex on the surface, each endoscope image is mapped onto the surface directly.

References

[1] L. M. Auer and D. P. Auer, "Virtual endoscopy for planning and simulation of minimally invasive neurosurgery," Neurosurgery, vol. 43, pp. 529-548, 1998.
[2] D. Dey, D. G. Gobbi, P. J. Slomka, K. J. Surry, and T. M. Peters, "Automatic fusion of freehand endoscopic brain images to three-dimensional surfaces: creating stereoscopic panoramas," IEEE Transactions on Medical Imaging, vol. 21(1), pp. 23-30, 2002.
[3] C. H. Quartucci Forster and C. L. Tozzi, "Towards 3D reconstruction of endoscope images using shape from shading," XIII Brazilian Symposium on Computer Graphics and Image Processing, pp. 90-96, 2000.
[4] A. J. Sherbondy, A. P. Kiraly, A. L. Austin, J. P. Helferty, S. Y. Wan, J. Z. Turlington, T. Yang, C. Zhang, E. A. Hoffman, G. McLennan, and W. E. Higgins, "Virtual bronchoscopic approach for combining 3D CT and endoscopic video," SPIE Conference on Image Display, vol. 3978, pp. 104-116, 2000.

Figure 2: Radial distortion correction. Top row: test grid images. Bottom row: corrected images.

Figure 3: Tracking system.

Figure 4: Control points for camera calibration.

Figure 5: 3D bone surface.

Figure 6: Texture-mapped surface using the first 20 endoscope frames.