Fluoroscopy-Based X-Ray Navigation in Orthopedics Using Electromagnetic Tracking: A Practical Study


A thesis submitted in fulfillment of the requirements for the degree of Master of Science

by Yair Yarom

supervised by Prof. Leo Joskowicz

The Rachel and Selim Benin School of Computer Science and Engineering
The Hebrew University of Jerusalem
Jerusalem, Israel

November 27, 2008


Acknowledgments

I would like to thank my supervisor Prof. Leo Joskowicz for his guidance and support throughout the research and the writing of this thesis. I thank my colleagues in the Computer Assisted Surgery and Medical Image Processing laboratory: Aviv Hurvitz, Moti Freiman and Ruby Shamir, for their help and support, and for making the lab a pleasant place to work in. I thank Eran Peleg from Hadassah University Medical Center for his assistance in the field experiments and for providing access to hospital facilities. I thank Dr. Neil Glossop and his team from Traxtal Inc. for the development and manufacture of the C-arm calibration ring, and for helping with the distortion correction code. I would also like to thank Harel Livyatan and Ziv Yaniv, whose master's theses [1, 2] were used as guidance but were not cited directly.


Abstract

Computer-assisted intra-operative navigation systems aim to improve the surgeon's hand/eye coordination and spatial perception. In orthopedics, where the main intra-operative imaging modality is X-ray fluoroscopy, a navigation system can minimize the number of fluoroscopic images needed by continuously tracking the patient and the tools, thus reducing the X-ray exposure of both the patient and the surgeon. The most common navigation systems today are based on optical trackers, which require a line of sight between the sensors and the cameras and use relatively large sensors. This thesis evaluates the feasibility of electromagnetic tracking in an orthopedic surgical environment, and quantifies its expected accuracy. Electromagnetic trackers do not have the line of sight problem and they have small sensors, but they are less accurate than optical trackers, as they are more sensitive to metallic and electromagnetic interference. To evaluate the accuracy of the electromagnetic tracker, we took fluoroscopic images of tools while tracking both the tools and the fluoroscope. The fluoroscopic images were then dewarped, and the location of the tools, as perceived by the tracker, was marked on the images. The error was defined as the distance between the marked location and the true location of the tools on the images. In our experiments we achieved an average accuracy of 2.41 mm (range 0.54–3.91 mm) for a factory calibrated tool, and an average accuracy of 1.85 mm (range 1.02–4.33 mm) for a lab calibrated tool.


Contents

List of Figures
List of Tables

1 Introduction
  1.1 Computer assisted surgery
  1.2 System overview
  1.3 Basic concepts and terminology
  1.4 Thesis organization

2 Literature Review
  2.1 Tracking and navigation
  2.2 Fluoroscopic image processing
    2.2.1 Fluoroscopic image distortion
    2.2.2 Camera calibration

3 Materials and Methods
  3.1 Problem statement
  3.2 Materials and devices
    3.2.1 C-arm fluoroscope X-ray unit
    3.2.2 C-arm calibration ring
    3.2.3 Electromagnetic tracker
    3.2.4 Tracking tools
  3.3 Methods
    3.3.1 Rigid transformations
    3.3.2 Pivot calibration
    3.3.3 Rigid body point registration
    3.3.4 Fluoroscopic image dewarping
    3.3.5 Camera calibration
    3.3.6 Frames averaging

4 Setup and Protocols
  4.1 Calibration chain
    4.1.1 Tool calibration
    4.1.2 C-arm ring calibration
    4.1.3 Fluoroscopic image calibration
  4.2 Accuracy measurements and noise handling
    4.2.1 Accuracy and noise measurement
    4.2.2 Noise handling
  4.3 Visualization

5 Experimental Results
  5.1 Static and quasi-static tracking
    5.1.1 Measurement volume
    5.1.2 Working surface
    5.1.3 Interference sources
  5.2 Tools and fluoroscope ring calibration
    5.2.1 Pivot calibration
    5.2.2 C-arm ring calibration
  5.3 Fluoroscopic image registration
  5.4 Summary and discussion

6 Conclusions
  6.1 Summary and conclusion
  6.2 Future research

Bibliography

List of Figures

1.1 Tracking tools
3.1 C-arm fluoroscope X-ray unit
3.2 C-arm calibration ring
3.3 Aurora electromagnetic tracker
3.4 Tracking tools (catheter, pointer, and reference)
3.5 Transformations between coordinate systems
3.6 Reference attached to pointer
3.7 Pivot calibration
3.8 Dewarping overview
3.9 Empty fluoroscope image
3.10 Edge detection
3.11 Dewarped image
4.1 Calibration chain
4.2 C-arm noise
4.3 Visualization
4.4 Fluoroscopic image intersection
4.5 Fluoroscopic image location
5.1 Measurement volume
5.2 Stationary location distribution
5.3 Stationary error distribution
5.4 Error along the z axis
5.5 Working surfaces
5.6 Distance changes per surface
5.7 Metals used to check interference
5.8 Metal interference
5.9 Fluoroscopic image registration setup
5.10 Image registration results
5.11 Correlations between error measurements

List of Tables

5.1 Measurement volume test results
5.2 Accuracy along the z axis
5.3 Working surfaces test results
5.4 Metal interference tests results
5.5 Pivot calibration results
5.6 Pivot calibration results
5.7 Image registration results


Chapter 1

Introduction

This thesis measures and evaluates the accuracy of an electromagnetic tracker used with X-ray fluoroscopy for orthopedic surgery. The goal is to quantify the accuracy of the electromagnetic tracker in a realistic environment, and to evaluate whether it can replace the commonly used optical trackers. This replacement can simplify the use of the navigation system during an operation, giving the surgeon more freedom of movement and more options for positioning the tools.

This chapter introduces the research topics and presents an overview of our system. Section 1.1 presents a background overview of the relevant technologies. Section 1.2 briefly describes the system and methods. Section 1.3 describes the basic concepts used throughout this thesis. Section 1.4 presents the organization of this thesis.

1.1 Computer assisted surgery

Computer-Assisted Surgery (CAS) is a collection of methods and procedures designed to simplify and improve surgical procedures in the operating room. It uses computers for image analysis, clinical diagnosis, pre-operative planning, and intra-operative control, enabling more precise and less invasive surgical procedures than conventional methods. The goal of CAS systems is to enhance the dexterity, visual feedback, and information available to the surgeon, while leaving him in control. To achieve this goal, CAS uses virtual reality, image guided surgery, navigation, and surgical robotics, among others.

Computer-Assisted Orthopedic Surgery (CAOS) is a branch of CAS that focuses on applications in orthopedics. CAOS systems use the same methods and tools as CAS: imaging modalities, image processing, simulation, and surgical planning. The imaging modalities used in CAOS are mostly Computed Tomography (CT) and X-ray fluoroscopy.

Segmentation and registration techniques are then used to extract and process the bone models.

Intra-operative CAS, also called image guided surgery, uses navigation systems to track the location of both the patient's anatomy and the surgical instruments. The navigation and pre-operative data are processed in real-time and presented to the surgeon to give him a real-time understanding of the surgical procedure. Depending on the procedure, the computer can also monitor for errors or badly placed implants, or give advice on their proper placement. If the computer controls a robot, this data is used to guide the robot to the correct place.

The main component in the navigation system is the tracker. A tracker uses sensors whose position and orientation in 3D space are located in real-time. These sensors are attached to the surgical tools and to the patient. The computer locates the sensors' attachment positions, either automatically or manually by calibration, and using the pre-operative data it recognizes the location of the patient and the tools.

Today there are two main types of trackers: optical and electromagnetic. Optical trackers use two or more infrared cameras to view the sensors and use the stereo view to locate the sensors in 3D space. The sensors are either active infrared LEDs or passive infrared reflective spheres. Each tool has several sensors spread in a known geometry so that the orientation can be computed (Figure 1.1). Electromagnetic trackers work by creating an electromagnetic field detected by the sensors. The sensors themselves are small coils that produce an electric current when placed in the field. By measuring the current, the tracker can locate the sensor in the electromagnetic field. Only partial orientation can be retrieved with a single sensor, so if a complete orientation is desired, the tool must have two sensors embedded (Figure 1.1). Other trackers exist, such as mechanical digitizers or stereotactic frames; an overview of the different trackers is given in [3].

Optical trackers are the more common trackers, and are already widely used in many surgical procedures due to their very high accuracy. Their disadvantages are the line of sight required between the cameras and the sensors, and the size of the tracked tools, which can reach several centimeters due to the spread of the infrared sensors. Electromagnetic trackers have smaller sensors and tools, and do not require a line of sight, but they are susceptible to interference, specifically by metallic objects. They are generally less accurate than the optical trackers.

The common way to obtain new intra-operative data during an orthopedic procedure is by using an X-ray fluoroscope. An X-ray fluoroscope is a medical imaging device used in a wide range of diagnostic and operative medical procedures. Like a conventional X-ray device, it is used to view the patient's internals, but instead of capturing the image on film, it projects the image onto a fluorescent screen that can be viewed on a monitor.

Figure 1.1: Tracking tools. The optical tools are larger and need a line of sight to the tracker. The electromagnetic tools are smaller, but there are no passive electromagnetic tools.

A fluoroscope can be mobile and used intra-operatively, allowing the surgeon to view the patient's status in real-time.

Fluoroscopic images can be used as input to the navigation system. However, there are two main difficulties: distortion and calibration. Due to the physical nature of the fluoroscope, the images come out distorted. This distortion needs to be corrected before any further processing is done, as otherwise the accuracy will decrease. After fixing the distortions, the images need to be properly located relative to the tracked sensors. This can be done either by applying image processing techniques to locate objects with known locations in the image, or by actively tracking the fluoroscope.

1.2 System overview

Our goal was to evaluate and quantify the accuracy of an electromagnetic tracker in orthopedic procedures that use X-ray fluoroscopy. Today, the common navigation systems used for such procedures are optical systems. With an electromagnetic system, the line of sight problem is solved, letting the surgeon operate more freely, as he does not have to worry about obscuring the sensors; he also has more options for attaching the sensors, as they are smaller and can be covered by other tissues.

The accuracy of the tracker is determined by the distance between the tools' location

in the real world and the tools' location as seen by the tracker. To measure the accuracy in a fluoroscopic environment, we acquired fluoroscopic images while using the tracker to track both the fluoroscope and the imaged tools. After aligning the images to the tracker's coordinate system, we measured the distance between the tracked tools and their locations on the images.

The system we assembled is composed of five main elements: 1) electromagnetic tracker and tracking tools; 2) C-arm fluoroscope X-ray unit; 3) C-arm calibration ring; 4) frame grabber to capture fluoroscope images; and 5) PC with custom software to capture and visualize tracking data.

The overall system accuracy was measured in two stages:

1. Measurements in a controlled lab environment, to assess the overall accuracy of the tracker. These tests include measurements of the static accuracy of the tracker, and of the calibration accuracy. The fluoroscope was not used in these tests.

2. Measurements with the fluoroscope. These tests were held at the Hadassah University Medical Center. After calibrating the fluoroscope, we captured fluoroscope images of various tools while tracking them. We measured the distance between the tools as they appear on the image and the tools as they are seen by the tracker.

1.3 Basic concepts and terminology

Following are the meanings of the basic concepts used in this thesis.

In 3D space, each object has six degrees of freedom (dof). Three dof are for the position, one for each axis x, y and z. The other three are for the orientation, which are rotations around each individual axis.

Each module in the system has its own independent local coordinate system. In the calibration processes, we obtain the transformations between those different systems, enabling us to place all the modules in one global coordinate system.

When tracking, a frame represents a single rigid transformation of a tracked tool as reported by the tracking device. This includes position and orientation in 3D space. Depending on the tracker and the tracked tools, the frame can have three, five or six dof.

Each frame describes the location of a single point in 3D space. We call this the tracked location. Since the tools are 3D rigid objects, we also need to know which point on the tool the frame refers to. We call this the tracked position. Usually the tracked location is given in the global coordinate system, while the tracked position is given in the local coordinate system of each object.

1.4 Thesis organization

This thesis is composed of six chapters. Chapter 2 surveys previous work. Chapter 3 describes the tools and algorithms used in this research. Chapter 4 describes the system setup for the experiments. Chapter 5 presents the experiments and their results. Chapter 6 concludes with a summary and suggests future research to expand this work.


Chapter 2

Literature Review

This chapter reviews previous work on electromagnetic tracking, from its initial applications to the present, with an emphasis on its use with fluoroscopy. Section 2.1 describes the development of electromagnetic tracking systems, their usages, and their advantages and disadvantages. Section 2.2 reviews research on fluoroscopic image processing and fluoroscopic camera calibration.

2.1 Tracking and navigation

The first CAS systems [4, 5] were introduced in neurosurgery in the late 1980s, where intra-operative correlation between the patient's skull and a 3D model generated from CT or MR enabled real-time positioning of surgical instruments [6]. Optical navigation was first introduced in 1992 [7]. It was based on the detection of light-emitting diodes (LEDs) by stereo infrared cameras. Passive tracking tools were later developed, reducing the number of cables and resolving sterilization problems [8].

In the mid 1990s, virtual X-ray fluoroscopy [9] was introduced. This method addresses the lack of real-time 3D information and the high radiation exposure of both the patient and the surgeon. The patient, the tools and the fluoroscope are tracked by the navigation system, and after the first images have been processed, the computer can track those tools, eliminating the need for more fluoroscopic images.

In the late 1990s, electromagnetic tracking systems were developed for tracking the movement of tumors caused by respiration and circulation [10, 11, 12] and for bronchoscopy navigation [13]. These tracking systems use small sensors and, unlike the optical trackers, do not require a line of sight. This makes them ideal for tracking objects inside the body in minimally invasive surgery.

Several comparative studies between optical and electromagnetic trackers have been published. Wilson et al. [14] compare two electromagnetic trackers in different environments: interventional radiology, CT, and pulmonology. They show that different environments affect the accuracy in different ways, and that there are other factors to consider besides accuracy. They note that there is no optimal system across environments and procedures. Ricci et al. [15] compare two optical and one electromagnetic tracking systems for freehand targeting under fluoroscopic guidance. They measure the accuracy of guidewire placement through a 130 mm block with the three systems and with no navigation system. The resulting accuracies are 2.1 mm and 1.9 mm with the optical systems, 2.4 mm with the electromagnetic system, and 7.1 mm with no navigation system. The conclusion is that all three systems can significantly improve the results of the freehand technique, while the accuracy of the different systems is similar.

The main drawback of the electromagnetic trackers is the interference of metallic objects with the tracking data [16]. Wilson et al. [17, 18] present a method for measuring the accuracy of the electromagnetic tracker by using a calibration phantom in various environments. In [17] the phantom is a motion platform on which the electromagnetic sensors are mounted and moved. In [18] the phantom is a perforated calibration cube, where the electromagnetic sensor is inserted into each of the different holes. In both cases, the conclusion was that while the environment affects the tracker to some degree, it is hard to predict in advance which environment is best suited for electromagnetic tracking.

Methods to improve the accuracy of the electromagnetic tracker by calibrating it with an optical tracker have been reported in [19, 20, 21, 22]. These studies use the optical tracker to measure the ground truth locations of the electromagnetic sensors. By sampling several locations pre-operatively with both trackers, a correction map is built to enhance the electromagnetic accuracy. Since the electromagnetic distortion is directly affected by metallic objects, calibration is necessary before each operation, and the metallic objects must be in the same locations during the operation as during the calibration. Within these constraints, the position accuracy of the electromagnetic tracker can be increased by 30% on average.

There have also been studies measuring the accuracy of the electromagnetic tracker in the presence of a C-arm X-ray fluoroscope. Hummel et al. [23] measured the tracker distortions caused by a C-arm fluoroscopy unit and obtained a distortion error of 18.6 ± 24.9 mm. Yaniv and Cleary [24] investigate a method for tracking respiratory induced organ movement by using electromagnetically tracked fiducials. To assess the accuracy of the tracker, they use a bi-plane fluoroscopic C-arm. Their results show that the proximity to the C-arm frame has negligible effects on the accuracy of the tracker, with errors below 1.5 mm.

[15] reports no significant advantage of the optical trackers over the electromagnetic tracker, even in the presence of a fluoroscope, though it was not specified whether the fluoroscope was used while tracking in real-time. In [14] the conclusion was that the electromagnetic trackers are not yet accurate enough in the presence of the C-arm, though this conclusion applies to the rotation of the C-arm for a cone-beam CT data acquisition; the standard deviation for a stationary C-arm was between 0.06 mm and 0.61 mm. In none of [14, 15, 24] was the fluoroscope itself directly tracked using the electromagnetic tracker.

Today, electromagnetic trackers are used in bronchoscopy [25, 26, 27], respiratory movement tracking [24, 28], spine surgeries [29], and knee surgeries [30], among others. Examples of clinically available systems based on electromagnetic tracking are the AxiEM (Medtronic), used for total knee replacement and cranial neurosurgery procedures, and the InstaTrak 3500 (GE Healthcare), used for cranial and spine procedures.

2.2 Fluoroscopic image processing

The two key technical issues in fluoroscopic X-ray image processing are distortion correction and calibration. Image distortions require correction to accurately locate objects on the image. Camera calibration is necessary to map between the world coordinate system and the image coordinate system. These two processes can be performed together [31, 32] or separately [33, 34].

2.2.1 Fluoroscopic image distortion

Fluoroscopic X-ray imaging has geometric distortions that must be corrected. The distortions are caused by three main factors [2, 35, 36, 37]:

1. The X-ray beams are projected on the image intensifier, which has a slightly curved surface. Thus, the assumption of a flat image plane, which is part of the camera model, is not accurate.

2. Surrounding magnetic fields, caused by nearby instruments and the earth's magnetic field, deflect the electron beams inside the image intensifier. This skews the resulting fluoroscopic image.

3. The image intensifier's weight causes a shift between the image intensifier and the radiation source of the fluoroscope. This shift changes the position of the image plane at different rotations of the C-arm.

To correct these distortions, we must compute a function that maps each pixel in the corrected image to a location on the distorted image. This mapping function is based on a distortion model. The correction can be defined globally, over the entire image, or locally, over portions of the image. Zhang et al. [37] combine both methods to overcome their respective disadvantages.

To compute the mapping, a phantom with known geometry is imaged using the X-ray fluoroscope. This phantom has a grid of metallic fiducials, either spheres [32, 36], grooves [38], or wires [37], and it is attached to the fluoroscope's image intensifier. By locating the fiducials on the image, the correction mapping is computed. Other methods, such as Chintalapani and Taylor [39], use landmarks in the patient's CT as fiducials.

The local correction method divides the image into small patches, usually rectangular or triangular, and calculates the local distortion on each patch [37, 40]. The patches are aligned with the phantom's grid, so their coordinates are known on both the distorted and the undistorted image. The assumption is that each patch is small enough that the distortion is uniform within it. A non-linear polynomial model can be used, such as:

x' = a_0 x + a_1 y + a_2 xy + a_3
y' = b_0 x + b_1 y + b_2 xy + b_3

where (x, y) are the coordinates of an undistorted grid point, and (x', y') are the coordinates of the corresponding distorted grid point. There is a pair of such equations for each of the patch's vertices. The advantages of this method are its simplicity and that it makes no assumptions regarding the nature of the distortion. The disadvantage is the possible discontinuity between different patches.

In the global correction method, a single function defines the distortion over the entire image [37, 38]. As with the local method, the phantom's grid is first detected to learn the mapping of the grid itself. The global function is usually a pair of high-order polynomial equations, such as [37]:

x' = \sum_{i=0}^{n-1} \sum_{r+s=i} a_{irs} x^r y^s
y' = \sum_{i=0}^{n-1} \sum_{r+s=i} b_{irs} x^r y^s

where (x, y) are the coordinates of an undistorted grid point, and (x', y') are the coordinates of the corresponding distorted grid point. The advantages of this method are that it produces smoother and more continuous results, and that it handles missing data and outliers. The disadvantages are the inability to correct local errors, and the assumption that the distortion is continuous.
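As an illustration of the local method, here is a minimal sketch (in Python with NumPy; the helper names and the least-squares formulation are ours, not taken from the thesis software) that fits the bilinear patch model above from one patch's grid-point correspondences:

    import numpy as np

    def fit_patch_model(undistorted, distorted):
        # Fit x' = a0*x + a1*y + a2*x*y + a3 (and likewise for y') by least squares.
        # undistorted, distorted: (k, 2) arrays of corresponding grid points, k >= 4.
        x, y = undistorted[:, 0], undistorted[:, 1]
        A = np.column_stack([x, y, x * y, np.ones_like(x)])
        a, *_ = np.linalg.lstsq(A, distorted[:, 0], rcond=None)
        b, *_ = np.linalg.lstsq(A, distorted[:, 1], rcond=None)
        return a, b

    def apply_patch_model(a, b, points):
        # Map undistorted points into the distorted image.
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, x * y, np.ones_like(x)])
        return np.column_stack([A @ a, A @ b])

With exactly four vertices per rectangular patch the system is determined; with more points the same call returns the least-squares fit.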

2.2.2 Camera calibration

Camera calibration is required to map points from the world coordinate system to points on the image and vice-versa. The calibration process includes imaging a phantom with known geometry and identifying it on the resulting image [32, 38, 41]. The same phantom used for the distortion correction is usually used here as well. When the calibration is performed separately from the distortion correction, the pin-hole camera model is used [33, 38, 42]. When performed together with the distortion correction, the distortions are modeled inside the camera model [31]. Due to the nature of the distortions, the calibration process is performed for different positions of the fluoroscope.

Two calibration protocols are used [32, 38]: off-line [2, 34] and on-line [32, 43]. In off-line calibration, the calibration parameters are computed for a fixed set of C-arm orientations before the surgery starts, by working on empty images that show only the calibration phantom. This approach allows larger and more complex calibration phantoms, and produces images that are simpler to analyze, as only the phantom is visible and it is not obscured by other objects. However, it requires pre-operative calibration of the fluoroscope. In on-line calibration, the calibration parameters are computed for each image during surgery. This approach produces more accurate results, but requires more sophisticated image processing, as the phantom's size and complexity are limited by the surgical environment, and the phantom itself might be partially obscured. This is the preferred method today, and the method we use in this thesis.


Chapter 3

Materials and Methods

This chapter describes the tools and algorithms that we developed for the electromagnetic tracking. Section 3.1 presents a general overview of the tracking process. Section 3.2 describes in detail the materials and tools we used. Section 3.3 presents the methods and algorithms used to solve the problems.

3.1 Problem statement

Orthopedic navigation using a fluoroscope and an electromagnetic tracker consists of calibration, registration, and image dewarping. Each of the tracking tools has to be individually calibrated and registered, and the fluoroscope images must be dewarped and registered. In addition, the entire tracking environment must be stable enough that it does not interfere with the electromagnetic tracking.

The main requirement when tracking is to place all the system components in the same coordinate system, the global coordinate system. This registration process is composed of several sub-processes; as each component has its own unique structure and qualities, a different method of registration needs to be used for each.

Because of the physical nature of the fluoroscope, the output images come out distorted. When viewing those images with the naked eye, the physician can overcome these distortions, but each image needs to be dewarped for the computer to recognize the image features accurately, and to later register the image to the global coordinate system.

Figure 3.1: C-arm fluoroscope X-ray unit. (a) The C-arm mobile frame, with the radiation source and the image intensifier. (b) An example fluoroscope image.

3.2 Materials and devices

In this section, we present the specific tools and devices used in this research. We describe the C-arm fluoroscope X-ray unit, the calibration ring for fluoroscopic image dewarping, the electromagnetic tracker, and the specific tracking tools.

3.2.1 C-arm fluoroscope X-ray unit

A C-arm fluoroscope X-ray unit is a mobile X-ray device. The fluoroscope is used to take X-ray images during an operation and display them in real-time. In orthopedics, the physician uses fluoroscope images to learn the location of the patient's internals relative to each other and relative to the operation tools.

A fluoroscope is comprised of an X-ray device mounted on a mobile C-shaped frame called a C-arm (Figure 3.1). The X-ray device includes a radiation source and an image intensifier, which are located on opposite sides of the C-arm frame. The frame can be positioned around the patient at various viewing angles. This allows the patient to be located between the radiation source and the image intensifier, producing the required image of the patient.

Due to the harmful nature of X-rays, the fluoroscopic images are taken individually, one at a time (the fluoroscope can also take images continuously, but this increases the radiation exposure). Several images are taken, the physician learns what he can from them, and continues the operation. Later, more images are taken as required. The goal of real-time tracking is to simulate continuous fluoroscopy without the continuous X-ray exposure.

In all our experiments, we used a Philips BV Endura unit with a 9" field of view, available at the Hadassah University Medical Center. The images were downloaded to the computer using a ViewCast Osprey-210 frame grabber, as grey-scale images with 8 bits per pixel.

Figure 3.2: C-arm calibration ring. (a) The front of the calibration ring, with the asymmetric pattern and the calibration divots. (b) The back of the calibration ring, with the symmetric pattern (the pattern is embedded and hard to see). (c) The ring, with its sensors, connected to the fluoroscope's image intensifier.

3.2.2 C-arm calibration ring

The C-arm calibration ring (Figure 3.2) is a custom designed ring that is attached to the image intensifier of the C-arm fluoroscope (Traxtal Inc., Toronto, Canada). The ring has three purposes: 1) it is a means of connecting a tracker sensor to the C-arm; 2) it is used to correct distortions in the fluoroscope image; and 3) it provides a known reference for registering the image to the global coordinate system.

The ring is made of a plastic polymer, so it does not interfere with the electromagnetic tracker. It consists of two plates: the first has a symmetric grid of 69 metallic spheres (called BBs) of 2 mm diameter; the second has an asymmetric pattern of 20 BBs of 4 mm diameter. The two grids are used to dewarp the fluoroscopic image and to find the fluoroscope camera parameters. The ring has a mounting spot for sensors, allowing the tracker to follow the C-arm.

Because of the way the image intensifier works, the raw fluoroscopic images come out distorted. To dewarp an image, we locate the symmetric grid's BBs on the image and compare them to their known positions on the plate. By locating the asymmetric grid, we can also find the position and orientation of the image relative to the ring.

Figure 3.3: The Aurora electromagnetic tracker, including the control unit, the field generator on its mounting arm, and some of the tracking tools.

3.2.3 Electromagnetic tracker

A tracking system is used to find the position and orientation of objects in 3D space, and to continuously monitor their location in real time. The tracker consists of a tracking device, tracking sensors and tools, and a control unit. The tracking device locates the sensors, and the control unit calculates the positions of the tools. The commonly used trackers today are optical trackers. An optical tracker uses infrared cameras as the tracking device, and infrared emitting diodes or infrared reflecting spheres as sensors. An electromagnetic tracker, on the other hand, uses an electromagnetic field generator as the tracking device and electric coils as sensors.

The electromagnetic tracker used in this research is the Aurora tracker (Northern Digital Inc., Ontario, Canada; Figure 3.3). From each sensor coil the Aurora measures a five degrees of freedom (dof) frame. These five dof are the sensor's position (three dof) and the orientation of its z axis (two dof); the rotation around the z axis is missing. Each tool has one or two sensor coils. The tool's tracked position, relative to the coil(s), is embedded in its memory. Tools with two coils also store the coils' positions relative to each other. By using the relative position of the two coils, the system produces a six dof frame and an error indicator based on how far the sensors are from each other compared to the embedded data.

For each tool, the Aurora sends a frame, an error indicator, and some basic data regarding the tool and the measurement environment. If the tool has only five dof, the error is zero, and the rotation around the z axis should be ignored. The frames are sent to the computer using a standard RS-232 serial connection.
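For illustration only, a frame as described above might be held in a small record like the following sketch (Python; this is our hypothetical layout, not the NDI API or wire format):

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class TrackedFrame:
        # One Aurora report for one tool (illustrative layout only).
        position: np.ndarray    # (3,) translation in tracker coordinates, in mm
        quaternion: np.ndarray  # (4,) unit quaternion (q0, qx, qy, qz)
        error: float            # tool error indicator; 0 for 5-dof tools
        dof: int                # 5 or 6

        def roll_is_valid(self) -> bool:
            # For 5-dof tools, the rotation around the local z axis is
            # undefined and must be ignored by downstream algorithms.
            return self.dof == 6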

Figure 3.4: Electromagnetic tracking tools with embedded sensors: (a) catheter, (b) pointer, and (c) reference.

3.2.4 Tracking tools

In all the experiments, we used four tracking tools with the Aurora electromagnetic tracker:

1. The catheter (NDI Inc.; Figure 3.4a) consists of a single sensor coil, with the tracked position at the center of the coil.

2. The pointer (Traxtal Inc.; Figure 3.4b) is a six dof pen-shaped pointer tool with a cone tip. The shape of the tip is important, as it affects the accuracy of some of the measurements and calibration algorithms (see Section 3.3.2). The tool is factory calibrated so that the tracked position is at the edge of the tip.

3. The reference tool (Traxtal Inc.; Figure 3.4c) is a small six dof tool. It is usually attached to a reference object, which is assumed to be stationary, with all other tools located relative to it. As we did not use a reference object, this tool was attached to a spherical tipped rod and was used as an additional pointer (Figure 3.6).

4. The calibration ring sensor (Traxtal Inc.; Figure 3.2) is a tool that includes two six dof tools, for a total of four sensor coils. To track the fluoroscope, only one six dof tool is needed; the second is used for error detection. This tool was designed to be attached to the C-arm ring, to track the fluoroscope's location as the images were taken.

Figure 3.5: Transformations between coordinate systems. Point P in coordinate systems A and B, and coordinate system A relative to B. Note that ^B P = ^B_A T ^A P.

3.3 Methods

This section describes the methods and algorithms we used for the tracking process. Section 3.3.1 describes the different representations of rigid transformations. Section 3.3.2 explains the pivot calibration method used to calibrate tipped tools. Section 3.3.3 describes an algorithm for finding the rigid transformation between two coordinate systems. Section 3.3.4 describes the dewarping algorithm for correcting fluoroscopic image distortions. Section 3.3.5 describes the algorithm used to find the fluoroscopic camera parameters. Section 3.3.6 describes a method for averaging frames in order to reduce noise.

3.3.1 Rigid transformations

In 3D Euclidean space, a coordinate system defines the origin's location and orientation. Each component in the system has its own coordinate system, usually centered at the component's location and oriented according to the component's main axis. A location inside a coordinate system is defined by a rigid transformation, i.e. by a translation and a rotation from the origin (Figure 3.5). A transformation between coordinate systems is defined the same way. Let P be a point whose location in coordinate system A is denoted by ^A P, and let the location of coordinate system A relative to coordinate system B be denoted by ^B_A T. Using these definitions, the location of P in coordinate system B is ^B P = ^B_A T ^A P.
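As a concrete example, a minimal sketch (Python with NumPy; helper names are ours) that builds a homogeneous transform ^B_A T and maps a point from coordinate system A to B:

    import numpy as np

    def make_transform(R, t):
        # Build a 4x4 homogeneous transform from a 3x3 rotation and a translation.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def transform_point(T, p):
        # Apply a homogeneous transform to a 3D point.
        ph = np.append(p, 1.0)           # homogeneous coordinates, w = 1
        return (T @ ph)[:3]

    # Example: A is rotated 90 degrees around z and shifted along x relative to B.
    Rz = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
    T_B_A = make_transform(Rz, np.array([10.0, 0.0, 0.0]))
    p_B = transform_point(T_B_A, np.array([1.0, 2.0, 3.0]))  # the point expressed in B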

Transformation matrices

A common representation of a rigid transformation is a 3D linear transformation matrix in homogeneous coordinates. Homogeneous coordinates represent each point in 3D space as a line in 4D space: each point p = (x, y, z) \in R^3 is written in homogeneous coordinates as \dot{p} = (x', y', z', w), where x = x'/w, y = y'/w and z = z'/w. A rigid transformation in homogeneous coordinates is defined as:

T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}

where the rotation matrix R is a 3 \times 3 orthonormal matrix whose determinant is 1, and t is the translation vector. Homogeneous coordinates represent a rigid transformation as a single matrix operation instead of a separate rotation and translation (i.e. T\dot{p} instead of Rp + t).

Quaternions

Quaternions are an alternative method for representing rotations in 3D space. Any rotation can be defined by an axis vector and a rotation angle around it, and quaternions provide a convenient way of computing with this representation. Quaternions are a non-commutative extension of the complex numbers, of the form:

H = \{ q_0 + q_x i + q_y j + q_z k \mid q_0, q_x, q_y, q_z \in R \}

Each quaternion is a four element vector \dot{q} = (q_0, q_x, q_y, q_z), where q_0 is called the real part and q_x, q_y and q_z are called the imaginary parts. i, j and k are imaginary units with the following multiplication table (row times column):

          i    j    k
    i    -1    k   -j
    j    -k   -1    i
    k     j   -i   -1

Addition and multiplication are defined as in the complex numbers, using the above multiplication table for the imaginary units.

The inverse, conjugate and norm are also similar to those of the complex numbers:

\dot{q} + \dot{w} = (q_0 + w_0,\; q_x + w_x,\; q_y + w_y,\; q_z + w_z)

\dot{q} \cdot \dot{w} = \begin{pmatrix} q_0 w_0 - q_x w_x - q_y w_y - q_z w_z \\ q_0 w_x + q_x w_0 + q_y w_z - q_z w_y \\ q_0 w_y - q_x w_z + q_y w_0 + q_z w_x \\ q_0 w_z + q_x w_y - q_y w_x + q_z w_0 \end{pmatrix}^T

\bar{\dot{q}} = (q_0, -q_x, -q_y, -q_z), \qquad \|\dot{q}\| = \sqrt{q_0^2 + q_x^2 + q_y^2 + q_z^2}, \qquad \dot{q}^{-1} = \frac{\bar{\dot{q}}}{\|\dot{q}\|^2}

The multiplication of two quaternions can be represented by a 4 \times 4 orthogonal matrix:

\dot{q}\dot{w} = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z \\ q_x & q_0 & -q_z & q_y \\ q_y & q_z & q_0 & -q_x \\ q_z & -q_y & q_x & q_0 \end{bmatrix} \dot{w} = Q\dot{w}

and

\dot{w}\dot{q} = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z \\ q_x & q_0 & q_z & -q_y \\ q_y & -q_z & q_0 & q_x \\ q_z & q_y & -q_x & q_0 \end{bmatrix} \dot{w} = \bar{Q}\dot{w}

Notice the non-commutative nature of the quaternions: the difference between Q and \bar{Q} is that the lower right 3 \times 3 sub-matrix is transposed.

Unit quaternions (i.e. \|\dot{q}\| = 1, so \dot{q}^{-1} = \bar{\dot{q}}) are used to represent rotations in R^3. Let v = (v_x, v_y, v_z) be a vector we wish to rotate around the unit axis l = (l_x, l_y, l_z) by \theta degrees. We represent v as a pure imaginary quaternion \dot{v} = (0, v_x, v_y, v_z), and the rotation as \dot{q} = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}(i l_x + j l_y + k l_z). With these representations, the quaternion \dot{u} = \dot{q}\dot{v}\bar{\dot{q}} is a pure imaginary quaternion that represents the rotated vector (see [44] for details).

Using quaternions for rotations has several advantages. They use fewer arithmetic operations than rotation matrices, and they are more compact. Also, solving equations with unknown quaternions is simpler than solving equations with unknown matrices. In this thesis we use both quaternions and homogeneous matrices for representing frames. The Aurora's output frame is given as a quaternion and a translation vector, and some of the algorithms use quaternions in their calculations. Most computer graphics and computer vision libraries use homogeneous matrices to describe frames.
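A short sketch of the Hamilton product and the rotation \dot{q}\dot{v}\bar{\dot{q}} described above (Python with NumPy; function names are ours):

    import numpy as np

    def qmul(q, w):
        # Hamilton product of quaternions given as (q0, qx, qy, qz).
        q0, qx, qy, qz = q
        w0, wx, wy, wz = w
        return np.array([
            q0*w0 - qx*wx - qy*wy - qz*wz,
            q0*wx + qx*w0 + qy*wz - qz*wy,
            q0*wy - qx*wz + qy*w0 + qz*wx,
            q0*wz + qx*wy - qy*wx + qz*w0,
        ])

    def qconj(q):
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def rotate(v, axis, theta):
        # Rotate vector v around a unit axis by theta radians using q v q*.
        q = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * np.asarray(axis)])
        v_quat = np.concatenate([[0.0], v])
        return qmul(qmul(q, v_quat), qconj(q))[1:]

    # Rotating (1, 0, 0) by 90 degrees around z yields approximately (0, 1, 0).
    print(rotate(np.array([1.0, 0.0, 0.0]), [0.0, 0.0, 1.0], np.pi / 2))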

Figure 3.6: Reference attached to pointer. The tracked position is on the reference, but we need it positioned at the pointer's tip.

3.3.2 Pivot calibration

Since the tracking tools are 3D rigid objects, and since a single frame describes only a single point in 3D space, the frame can be located anywhere on the tool (the tracked position), including outside the physical tool. While on most tools the tracked position is well defined and positioned correctly, there are cases where we want a different position. For that we need to calibrate the tool, finding the transformation that relocates the tracked position from its original location to the desired location (Figure 3.6).

Pivot calibration [45, 46, 47] is a method for finding the position of the tip of a pointer tool relative to a position on another part of the tool that is known in the global coordinate system. The input is the location of the tool as given by the tracker; the output is a rigid transformation from the tracked position to the tool's tip position. One of the advantages of this method is that the exact tracked position on the tool is not required.

In pivot calibration, the tip of the tool is placed in a divot, which is a small cone-shaped hole. The divot is stationary relative to the global coordinate system, or to any sensor that is set as the reference. Then, the tool is rotated (pivoted) around the divot, keeping the tip inside the divot at all times (Figure 3.7). The tool is tracked and frames are sampled, either in real-time or at discrete locations. The frame locations should fall on the surface of a sphere whose radius is the distance between the tracked position and the divot/tip.

Figure 3.7: Pivot calibration. Pivoting a tool around a divot when the tracked position is not at the tip.

The center of the sphere, together with the transformation from each of the samples, can then be calculated. A pivot calibration transform can be obtained from three, five or six dof tools. For three dof, only the distance is obtained. For five dof, the distance along the z axis can be obtained. For six dof, the result is the full translation from the tracked location to the tip location. The orientation of the tip is usually not required, as only its position is relevant.

Once the pivot transformation has been calculated, we can multiply it with the tracker's output transformation and use the result as the tool's frame:

P_{tool's\,tip} = T_{pivot} P_{tool}

where T_{pivot} is the pivot transformation, P_{tool} is the tool's tracked location in the tracker coordinate system, and P_{tool's\,tip} is the tool's tip location in the tracker coordinate system.

Let T be the translation vector of the tool's tracked location, R the rotation matrix, offset the pivot translation, and (x_0, y_0, z_0) the divot/tip position. We can write the equation as:

\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{pmatrix} offset_x \\ offset_y \\ offset_z \\ 1 \end{pmatrix} = \begin{bmatrix} r_{00} & r_{01} & r_{02} & t_x \\ r_{10} & r_{11} & r_{12} & t_y \\ r_{20} & r_{21} & r_{22} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} offset_x \\ offset_y \\ offset_z \\ 1 \end{pmatrix} = \begin{pmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{pmatrix} \qquad (3.1)

Since the unknown variables are offset and (x_0, y_0, z_0), Equation 3.1 can be rewritten as:

\begin{aligned}
r_{00}\,offset_x + r_{01}\,offset_y + r_{02}\,offset_z - x_0 &= -t_x \\
r_{10}\,offset_x + r_{11}\,offset_y + r_{12}\,offset_z - y_0 &= -t_y \\
r_{20}\,offset_x + r_{21}\,offset_y + r_{22}\,offset_z - z_0 &= -t_z
\end{aligned}

or

M \begin{pmatrix} offset_x \\ offset_y \\ offset_z \\ x_0 \\ y_0 \\ z_0 \end{pmatrix} = N \qquad (3.2)

Now, since we have more than one sample, we can stack all the samples into M and N:

M = \begin{bmatrix} r_{00}^1 & r_{01}^1 & r_{02}^1 & -1 & 0 & 0 \\ r_{10}^1 & r_{11}^1 & r_{12}^1 & 0 & -1 & 0 \\ r_{20}^1 & r_{21}^1 & r_{22}^1 & 0 & 0 & -1 \\ r_{00}^2 & r_{01}^2 & r_{02}^2 & -1 & 0 & 0 \\ \vdots & & & & & \end{bmatrix}, \qquad N = \begin{pmatrix} -t_x^1 \\ -t_y^1 \\ -t_z^1 \\ -t_x^2 \\ \vdots \end{pmatrix}

Equation 3.2 can now be solved using Singular Value Decomposition (SVD) or the Moore-Penrose pseudo-inverse:

\begin{pmatrix} offset_x \\ offset_y \\ offset_z \\ x_0 \\ y_0 \\ z_0 \end{pmatrix} = (M^T M)^{-1} M^T N

For each calibration (involving several samples), the root mean square error can be calculated to determine the quality of the calibration:

RMS = \sqrt{ \frac{ \left\| M \, (offset_x, offset_y, offset_z, x_0, y_0, z_0)^T - N \right\|^2 }{ \text{number of samples} } }
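A minimal sketch of this least-squares pivot calibration (Python with NumPy; we assume the samples arrive as (R, t) pairs from the tracker):

    import numpy as np

    def pivot_calibration(frames):
        # Solve Equation 3.2 for the tip offset and the divot position.
        # frames: list of (R, t) samples, 3x3 rotation and 3-vector translation.
        M_rows, N_rows = [], []
        for R, t in frames:
            M_rows.append(np.hstack([R, -np.eye(3)]))  # one 3x6 block per sample
            N_rows.append(-t)
        M = np.vstack(M_rows)
        N = np.concatenate(N_rows)
        x, *_ = np.linalg.lstsq(M, N, rcond=None)      # least-squares solution
        offset, divot = x[:3], x[3:]
        # RMS over the samples, as in the formula above.
        rms = np.sqrt(np.sum((M @ x - N) ** 2) / len(frames))
        return offset, divot, rms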

3.3.3 Rigid body point registration

When tracking an object with a known geometry that does not come with a built-in pre-calibrated sensor, we need to manually attach a sensor to track it. After attaching the sensor, we need to locate where the sensor is on the object's geometry (the tracked position), so we can position the object correctly in the global coordinate system.

Contact-based point registration, also known as Horn's closed-form solution [44], is an algorithm for finding the rigid transformation between two coordinate systems. The input is two corresponding sets of points in the different coordinate systems, and the output is the rigid transformation between them. The first set is given with respect to the object's model coordinate system and is taken from the object's model. The second set is given with respect to the global coordinate system, and its points are usually sampled using another tracked tool (or, if the points are on the tips of the object, pivot calibration can be used).

The input to the contact-based point registration is two sets of n corresponding points in 3D space: A = {a_1,..., a_n} are the source points (the model) and B = {b_1,..., b_n} are the target points (in the global coordinate system). The pairing between the two sets is known, i.e. for every i, a_i matches b_i. A minimum of three points is required for a registration in 3D space: an axis system computed from the ordered points is used as the local coordinate system. The rotation matrix for each local coordinate system is the matrix with the axes as its columns. Specifically, if the axes for A are \hat{x}_A, \hat{y}_A and \hat{z}_A, the rotation matrix is R_A = [\hat{x}_A, \hat{y}_A, \hat{z}_A]. The rotation from A to B is then R = R_B R_A^T, and the translation is T = b_i - R a_i.

Since the data is not perfect, there is no exact solution. Instead we define an error for each point:

e_i = b_i - R a_i - T

and try to minimize \sum_{i=1}^{n} \|e_i\|^2. To minimize the error and take advantage of additional information, we sample more than three points. In this case, we first find the centroid of each point set and work with the centered sets, which leaves only the rotation to find. Then, we write the error of the translation using the unknown rotation matrix, and minimize it.

Let the centroids of the two sets be:

c_A = \frac{1}{n} \sum_{i=1}^{n} a_i, \qquad c_B = \frac{1}{n} \sum_{i=1}^{n} b_i

Let the centered sets be A' = {a'_1,..., a'_n} and B' = {b'_1,..., b'_n}, where a'_i = a_i - c_A and b'_i = b_i - c_B, so that \sum_{i=1}^{n} a'_i = \sum_{i=1}^{n} b'_i = 0.

We now define the error for each point as:

e_i = b'_i - R a'_i - T' \qquad (3.3)

where T' = T - (c_B - R c_A), and R and T are the rotation and translation we are looking for. The sum of squared errors is:

\sum_{i=1}^{n} \|b'_i - R a'_i - T'\|^2

or

\sum_{i=1}^{n} \|b'_i - R a'_i\|^2 - 2 T' \cdot \sum_{i=1}^{n} (b'_i - R a'_i) + n \|T'\|^2 \qquad (3.4)

As A' and B' are centered and R is a rotation matrix, we get \sum_{i=1}^{n} a'_i = \sum_{i=1}^{n} b'_i = \sum_{i=1}^{n} R a'_i = 0, so the middle expression in Equation 3.4 is zero. As the first expression does not depend on T', and the third expression is non-negative, the error is minimized when n\|T'\|^2 = 0, i.e. when:

T' = T - c_B + R c_A = 0

or

T = c_B - R c_A \qquad (3.5)

that is, when the translation from A to B is the same as the translation from RA to B.

To find the best rotation when T' = 0, we write the error from Equation 3.3 as e_i = b'_i - R a'_i. The goal is to minimize

\sum_{i=1}^{n} \|b'_i - R a'_i\|^2

and since \|a'_i\|^2 = \|R a'_i\|^2, we can expand this to

\sum_{i=1}^{n} \|b'_i\|^2 - 2 \sum_{i=1}^{n} b'_i \cdot R a'_i + \sum_{i=1}^{n} \|a'_i\|^2

Since only the middle term is affected by R, we seek the rotation that maximizes

\sum_{i=1}^{n} b'_i \cdot R a'_i

In quaternion notation, we seek the unit quaternion \dot{q} (corresponding to R) that maximizes:

\sum_{i=1}^{n} (\dot{q}\dot{a}'_i\bar{\dot{q}}) \cdot \dot{b}'_i
= \sum_{i=1}^{n} (\dot{q}\dot{a}'_i) \cdot (\dot{b}'_i\dot{q})
= \sum_{i=1}^{n} (\bar{Q}_{a_i}\dot{q}) \cdot (Q_{b_i}\dot{q})
= \sum_{i=1}^{n} \dot{q}^T \bar{Q}_{a_i}^T Q_{b_i} \dot{q}
= \dot{q}^T \left( \sum_{i=1}^{n} \bar{Q}_{a_i}^T Q_{b_i} \right) \dot{q}
= \dot{q}^T N \dot{q} \qquad (3.6)

where \bar{Q}_{a_i} and Q_{b_i} are the matrix representations of quaternion multiplication by \dot{a}'_i and \dot{b}'_i (see Section 3.3.1), and N = \sum_{i=1}^{n} \bar{Q}_{a_i}^T Q_{b_i}:

N = \begin{bmatrix}
S_{xx} + S_{yy} + S_{zz} & S_{yz} - S_{zy} & S_{zx} - S_{xz} & S_{xy} - S_{yx} \\
S_{yz} - S_{zy} & S_{xx} - S_{yy} - S_{zz} & S_{xy} + S_{yx} & S_{zx} + S_{xz} \\
S_{zx} - S_{xz} & S_{xy} + S_{yx} & -S_{xx} + S_{yy} - S_{zz} & S_{yz} + S_{zy} \\
S_{xy} - S_{yx} & S_{zx} + S_{xz} & S_{yz} + S_{zy} & -S_{xx} - S_{yy} + S_{zz}
\end{bmatrix}

where

S_{xx} = \sum_{i=1}^{n} a'_{i,x} b'_{i,x}, \qquad S_{xy} = \sum_{i=1}^{n} a'_{i,x} b'_{i,y}, \qquad \ldots

and a'_{i,x} is the x coordinate of the sample a'_i.

The quaternion that maximizes Equation 3.6 is the eigenvector of the matrix N with the maximum eigenvalue [44]. As N is a real symmetric matrix, we used the Jacobi method to find its eigenvectors and eigenvalues. After finding the best rotation, we substitute R in Equation 3.5 and find the translation that minimizes the error.

3.3.4 Fluoroscopic image dewarping

Before the fluoroscopic images can be used, they need to be processed. As the images come out distorted, they first need to be dewarped before we can align them in the global coordinate system. The algorithm described here identifies the C-arm calibration ring's features on the image, and outputs a dewarped image together with a global transformation (including translation, rotation and scaling) from the C-arm model to the dewarped image.

The dewarping algorithm consists of two main steps. In the initialization step, we acquire an empty image that shows only the metallic spheres (BBs) of the calibration ring. This image is used as an initial guess, both for locating the BBs on the actual image and for finding the basic transformation. Then, for each actual image, the BBs are re-detected, the basic transformation (scaling and rotation) is calculated, the dewarp map is built, and the image is dewarped. Figure 3.8 shows an overview of this algorithm.

The input to the initialization step includes an empty image and the properties of the calibration ring. These properties include the calibration ring's grid layouts and an estimate of the size of each BB on the image (in pixels). This data is required in order to correctly distinguish the different BBs (the asymmetric BBs are bigger than the symmetric ones). After the initialization, the algorithm receives actual images and outputs dewarped versions of them.

Figure 3.8: Dewarping overview. Boxes represent algorithmic steps, and ellipses represent intermediate results; the inputs and outputs of the algorithm are highlighted. The left side describes the empty image processing, computed only once. The right side describes the actual image processing, computed once per image.

Figure 3.9: Empty fluoroscope image. (a) The empty image as captured from the fluoroscope; the small circles compose the symmetric grid, and the large circles the asymmetric grid. (b) The image after dewarping (i.e. treating the empty image as an actual image).

When processing a fluoroscopic image, we always refer to the effective fluoroscopic data, i.e. the big centered circle of the image (e.g. Figures 3.1b, 3.9a and 3.11a). Since this circle is centered, its radius is detected by going over the image's center lines until we reach the white pixels. This effective image area is fluoroscope dependent and does not change throughout the experiment; thus, the same area is used for all images.

The initialization consists of three steps. First, the effective image area is detected. Second, the BBs are detected. Third, the correspondences between the image's BBs and the model's BBs are assigned. As the empty image consists mostly of black circles on a white background (Figure 3.9a), the BB detection is done by iterating over the image's pixels and locating the black ones. These black pixels are checked to see if they belong to a circle of the appropriate radius. Once we have found all the BBs, we need to match each BB in the image with the correct BB in the model. This is done using the geometric hashing algorithm [48], which provides the correspondence of the asymmetric grid and the global transformation from the model grid to the image grid. This global transformation is the best rigid transformation between the model and the image BBs. To find the symmetric grid correspondences, we apply the transformation to the grid, and take the closest points from the image's BBs to the transformed model's ones.

Once the global transformation and the BB locations on an empty image have been computed, we can start working on the actual fluoroscopic image. First, we detect the asymmetric BBs. As there are other elements in the image, we detect the BBs by searching the image edges (subtracting the image from its blur, Figure 3.10). The BB detection is similar to the empty image BB detection, except that now we iterate over the edge map pixels, and the BBs have positive values surrounded by negative ones. Then, by using geometric hashing, we find the correspondences of the asymmetric grid and the global transformation.
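The edge map used here is simply the difference between a blurred copy of the image and the image itself; a sketch (Python with SciPy; the smoothing width is an illustrative choice):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def edge_map(image, sigma=3.0):
        # Edge detection by subtracting the image from its blurred copy.
        # Dark BBs on a bright background become positive blobs surrounded
        # by negative values, which is what the BB detector looks for.
        img = image.astype(np.float64)
        return gaussian_filter(img, sigma) - img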

Figure 3.10: Edge detection. (a) The original, unblurred image. (b) The blurred image. (c) The image subtracted from the blurred image (normalized).

Not all asymmetric BBs are necessarily detected, but the geometric hashing algorithm does not need all of them.

Now we have the global transformation based on the asymmetric grid. We use this transformation as a starting point for finding the global transformation of the symmetric grid. After transforming the model using the global transformation, we find the exact location of each symmetric BB using the generalized Hough transform [49] in the local area surrounding the BB's approximate location on the image edges. If a BB is occluded in the image, its location is taken from the empty image. After this step, we have the location of each symmetric BB on the image.

To find the global transformation of the symmetric grid, which differs from the asymmetric global transformation mostly in scaling, we seek a translation, rotation and scaling. First, we find the center of the grid on both the image and the model; this handles the translation. Next, we take each of the BBs surrounding the center, and warp the model accordingly, thus adding the rotation and scaling. We keep the best result, where the error is measured by summing the distances between each warped point and its corresponding image point. For better accuracy, we then run the Levenberg-Marquardt algorithm [50] with the best result as the initial parameters.

As the image is distorted, we need a more precise correction of the image than the global transformation can give us. For that we build a correction map, which maps each pixel in the dewarped image to a pixel in the transformed image. We first transform the image using the global transformation in order to minimize the distorting effects of the dewarping map. Then, for each triangle in the symmetric grid, we find the affine transformation between the warped triangle and the model triangle (the grid's triangulation is part of the ring's properties, but it can also be calculated using Delaunay triangulation).

Figure 3.11: Dewarped image. (a) The original image from the fluoroscope. (b) The image after dewarping.

This affine transformation is calculated by solving:

\begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \end{bmatrix} \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}

where (x_1, y_1) and (x_2, y_2) are a corresponding pair of vertices between the warped triangle and the model triangle, and the p_{ij} are the six unknowns of the affine transformation. As each triangle has three vertices, we have six equations with six unknowns. When none of the triangles are degenerate (formed by three collinear vertices), we get a single solution.

The final result is a correction map, where each pixel is assigned an affine transform which gives us the pixel's value from the transformed image. After we have the correction map, we can apply it to each pixel in the dewarped image (sub-pixel results are computed with linear interpolation), resulting in a dewarped image (Figures 3.9b and 3.11b).

3.3.5 Camera calibration

After dewarping the fluoroscopic images, we know the location of the C-arm calibration ring on the image. To accurately locate other objects on the image, or to position the image at locations in the global coordinate system other than on the calibration ring, we need to find the fluoroscope camera parameters. After dewarping, the image is consistent with the pin-hole camera model, which is a projective transformation of the 3D space onto the 2D space of the image.

The pin-hole camera model is represented by a projective 3 \times 4 homogeneous matrix with 11 dof, which maps each point in the world (x, y, z) \in R^3 to a point (u, v) \in R^2 on the image.

In homogeneous coordinates:

\begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{bmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} uw \\ vw \\ w \end{pmatrix} \qquad (3.7)

Since the elements of P are the unknowns, we can rewrite this as:

\begin{bmatrix} x & y & z & 1 & 0 & 0 & 0 & 0 & -ux & -uy & -uz & -u \\ 0 & 0 & 0 & 0 & x & y & z & 1 & -vx & -vy & -vz & -v \end{bmatrix} \begin{pmatrix} p_{11} \\ p_{12} \\ \vdots \\ p_{34} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

Each such pair of points yields two equations (the third equation is eliminated together with w). With 11 dof we need at least 6 pairs. For n pairs we get:

\begin{bmatrix} x_1 & y_1 & z_1 & 1 & 0 & 0 & 0 & 0 & -u_1 x_1 & -u_1 y_1 & -u_1 z_1 & -u_1 \\ 0 & 0 & 0 & 0 & x_1 & y_1 & z_1 & 1 & -v_1 x_1 & -v_1 y_1 & -v_1 z_1 & -v_1 \\ \vdots & & & & & & & & & & & \\ x_n & y_n & z_n & 1 & 0 & 0 & 0 & 0 & -u_n x_n & -u_n y_n & -u_n z_n & -u_n \\ 0 & 0 & 0 & 0 & x_n & y_n & z_n & 1 & -v_n x_n & -v_n y_n & -v_n z_n & -v_n \end{bmatrix} \begin{pmatrix} p_{11} \\ p_{12} \\ \vdots \\ p_{34} \end{pmatrix} = 0 \qquad (3.8)

We used the grids' BBs as the input pairs, since their locations had already been calculated both in the global coordinate system and on the image.

Equation 3.8 has the form AP = 0. Due to noise, and because there are more than the minimum required number of point pairs, there is no exact solution. Instead we minimize \|AP\| subject to \|P\| = 1 (we can add this constraint since there are 11 dof and 12 unknowns). This minimization is achieved by finding the eigenvector of A^T A with the smallest eigenvalue (see [51]).

After finding the camera parameters, we can position each point in the world on the image, or locate the ray in the world that each pixel on the image represents. Finding the location on the image of a point in 3D space is performed by applying the camera transformation to the desired point (Equation 3.7). Finding the ray in 3D space corresponding to a pixel on the image requires further computation [51]. A ray is described by interpolating two points on it. The points we use are the camera's center, through which all rays pass, and the intersection of the ray with the plane at infinity (the horizon).
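A minimal sketch of this DLT estimation (Python with NumPy; `world` and `image` stand in for the BB correspondences and are placeholders):

    import numpy as np

    def calibrate_camera(world, image):
        # Estimate the 3x4 pin-hole camera matrix P from n >= 6 point pairs
        # by minimizing ||A P|| subject to ||P|| = 1 (Equation 3.8).
        # world: (n, 3) points in the global frame; image: (n, 2) pixels.
        rows = []
        for (x, y, z), (u, v) in zip(world, image):
            rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
            rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
        A = np.asarray(rows, dtype=float)
        # The right singular vector with the smallest singular value of A is the
        # eigenvector of A^T A with the smallest eigenvalue.
        _, _, Vt = np.linalg.svd(A)
        P = Vt[-1].reshape(3, 4)
        M, d = P[:, :3], P[:, 3]
        center = -np.linalg.solve(M, d)   # camera center C = -M^{-1} d, used below
        return P, center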

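The center and ray computations reduce to two small linear solves. A sketch, assuming NumPy and our own function name:

```python
import numpy as np

def pixel_ray(P, pixel):
    """Back-project a pixel to its 3D ray X(mu) = (C, 1) + mu (x, 0),
    with camera center C = -M^{-1} d and direction x = M^{-1} p,
    for P = [M | d]."""
    M, d = P[:, :3], P[:, 3]
    M_inv = np.linalg.inv(M)
    center = -M_inv @ d                         # all rays pass through C
    u, v = pixel
    direction = M_inv @ np.array([u, v, 1.0])   # point on the horizon
    return center, direction                    # X(mu) = center + mu * direction
```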
3.3.6 Frames averaging

Frames averaging is a technique for lowering the variance of the measurements. It also helps reduce the effect of outlier frames. We describe the averaging methods next.

Averaging translations

The position of two frames is averaged by the arithmetic mean in Euclidean coordinates. Given two position vectors t1 and t2, the average is t = (1/2)(t1 + t2). For n frames, the average is:

$$\bar t = \frac{1}{n}\sum_{i=1}^{n} t_i$$

and the weighted average is:

$$\bar t = \frac{\sum_{i=1}^{n} w_i t_i}{\sum_{i=1}^{n} w_i}$$

where the w_i are the weights (for all i, w_i >= 0, and sum w_i > 0).
Averaging orientations

Orientations cannot be averaged directly, since an arithmetic average of the rotation matrices, of each rotation axis separately, or of the quaternions results in a non-smooth interpolation at best, or an invalid rotation matrix at worst. The solution is to use Spherical Linear Interpolation (Slerp). This interpolation takes an arc in an n-dimensional space, defined by its end-points, and returns a point on the arc:

$$\mathrm{Slerp}(p_1, p_2, t) = \frac{\sin((1-t)\Omega)}{\sin\Omega}\, p_1 + \frac{\sin(t\Omega)}{\sin\Omega}\, p_2, \qquad \cos\Omega = p_1 \cdot p_2 \qquad (3.9)$$

where 0 <= t <= 1, and p1 and p2 are the first and last points of the arc. Slerp produces a point on the arc between p1 and p2 according to t. As unit quaternions also describe a unit sphere in quaternion space, we can apply Slerp to unit quaternions:

$$\mathrm{Slerp}(q_1, q_2, t) = \frac{\sin((1-t)\Omega)}{\sin\Omega}\, q_1 + \frac{\sin(t\Omega)}{\sin\Omega}\, q_2, \qquad \cos\Omega = q_1 \cdot q_2 \qquad (3.10)$$

We obtain a path in the 3D rotation space around a fixed rotation axis, with uniform angular velocity (as t moves from 0 to 1). A weighted average of two rotations is obtained by normalizing the weights such that w1 = t and w2 = 1 - t for 0 <= t <= 1.

This method yields two solutions: one for Omega, and one for 2*pi - Omega. We seek the solution that goes through the shortest path. Here, however, the shortest path is not equivalent to the smallest angle in quaternion space, since q and -q represent the same rotation. To solve this we choose -q1 over q1 if ||q1 + q2|| < ||(-q1) + q2|| (or equivalently, if cos(Omega) < 0).
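A sketch of Equation 3.10 with the shortest-path choice, assuming NumPy (the function name is ours):

```python
import numpy as np

def slerp(q1, q2, t):
    """Spherical linear interpolation of unit quaternions (Equation
    3.10), taking the shorter arc: if cos(Omega) < 0, q1 is negated,
    since q and -q represent the same rotation."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    cos_omega = float(np.dot(q1, q2))
    if cos_omega < 0.0:                    # choose the shortest path
        q1, cos_omega = -q1, -cos_omega
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < 1e-8:                       # nearly identical quaternions
        return (1 - t) * q1 + t * q2
    return (np.sin((1 - t) * omega) * q1 + np.sin(t * omega) * q2) / np.sin(omega)
```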
Exponential moving average

Since the frames come from real-time tracking, they are sequenced. As such, we used a moving average to average several frames. In a moving average, the current frame (at stage N) is defined as the average of the last n frames:

$$\bar f_N = \frac{1}{n}\sum_{i=0}^{n-1} f_{N-i} \qquad (3.11)$$

A weighted moving average is defined similarly:

$$\bar f_N = \frac{\sum_{i=0}^{n-1} w_i f_{N-i}}{\sum_{i=0}^{n-1} w_i} \qquad (3.12)$$

Note that each frame is assigned a different weight each time, as the weights are applied to the i-th frame from the end, instead of from the beginning. By choosing w_i = n - i (for 0 <= i <= n - 1) we obtain an arithmetically decreasing average, i.e. the last frame has the highest weight, and the n-th last frame has the lowest.

The exponential moving average uses the same basic idea, but the weights decrease exponentially. For a given smoothing factor alpha, the current frame is defined as:

$$\bar f_N = \alpha f_N + (1-\alpha)\bar f_{N-1} \qquad (3.13)$$

In this method, all frames are accounted for, though the first frame has a very low weight of (1 - alpha)^N. In each step of the exponential moving average, we interpolate only two frames (f_N and the previous average); this sidesteps the more complex problem of interpolating more than two quaternions.
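One EMA step for a tracked pose might then look as follows; a sketch reusing the slerp() helper above (argument names are illustrative):

```python
import numpy as np

def ema_pose(prev_t, prev_q, new_t, new_q, alpha):
    """One exponential-moving-average step (Equation 3.13): blend the
    new frame into the running average, averaging the translation
    arithmetically and the orientation with Slerp, so only two
    quaternions are ever interpolated at a time."""
    t = alpha * np.asarray(new_t) + (1 - alpha) * np.asarray(prev_t)
    q = slerp(prev_q, new_q, alpha)    # slerp() from the sketch above
    return t, q / np.linalg.norm(q)    # renormalize to a unit quaternion
```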

Chapter 4
Setup and Protocols

This chapter describes the overall system setup for the experiments. Section 4.1 describes the entire calibration process for each of the system's elements. Section 4.2 describes how we measure accuracy and handle noise. Section 4.3 describes the software developed to visualize and interpret the results.

4.1 Calibration chain

We define the field generator's coordinate system as the global coordinate system. To determine where each element is in this coordinate system, we apply the registration transformations (or their inverses) to the element's attached sensor:

1. The sensors themselves are already in the global coordinate system and do not need to be calibrated. This includes the pointer, which is factory calibrated.
2. The reference, which is attached to a pointer with a spherical tip, is pivot-calibrated.
3. The C-arm calibration ring is calibrated to the ring's sensors with the aid of a pointer.
4. Each fluoroscopic image is dewarped and registered to the ring, and through it, to the ring's attached sensor.

A detailed description of each calibration process follows. Figure 4.1 shows the entire calibration chain; a sketch of how the chain composes appears below.
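As an illustration of how the chain composes, the following sketch (assuming NumPy; the transform names merely echo Figure 4.1 and are not from a real API) carries a point from the image coordinate system to the global one:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def image_to_global(F_T_S, S_T_C, C_T_I, p_image):
    """Carry a 3D point expressed in the fluoroscopic image coordinate
    system to the global (field generator) system by composing the
    chain: image -> calibration ring -> ring sensor -> field generator."""
    p = np.append(p_image, 1.0)               # homogeneous point
    return (F_T_S @ S_T_C @ C_T_I @ p)[:3]

# F_T_S would come from a tracker frame, e.g.:
#   F_T_S = to_homogeneous(R_sensor, t_sensor)
# S_T_C from the divot registration, C_T_I from the image calibration.
```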
Figure 4.1: Calibration chain. Each element (field generator, sensors, C-arm with calibration ring and image intensifier, radiation source, fluoroscopic image, tool) has its own coordinate system, marked with three arrows and a capital letter. The transformations between coordinate systems are marked with dashed arrows and $_A^B T$ for the appropriate coordinate systems A and B.

4.1.1 Tool calibration

Tool calibration is required to determine where the tracked position is on the tool. Some tools are calibrated in the factory; for example, the pointer (Figure 3.4b) is calibrated so that the tracked position is on the pointer's tip. In our experiments, we used the reference tool (Figure 3.4c) as a six dof pointer with a spherical tip (Figure 3.6). For that, a small rod with a metal spherical tip was attached to the reference tool, and pivot calibration was used to find the position of the sphere relative to the reference's tracked position.

For the calibration, we performed both real-time tracking and discrete sampling. In real-time tracking, the operator pivots the tool while the tracker continuously reads the location. In discrete sampling, the operator positions the tool in a divot and records a single frame; he then pivots the tool, stops, and records another single frame, continuing until sufficient frames are captured.

We observed that, on average, neither method has a significant advantage. With real-time tracking the number of samples is much higher (hundreds), but some tracking errors cannot be anticipated, and every accidental movement of the pivoted tool can increase the error. With discrete sampling, the tracker error can be checked before the sampling and we can ensure that the tool is steady beforehand; in this case the number of samples is low (usually below 20) and the process is more tedious and time-consuming. A least-squares sketch of the pivot calibration appears below.
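Pivot calibration itself (Section 3.3.2) can be posed as a linear least-squares problem. A sketch under that standard formulation, which may differ in detail from our implementation:

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration: while the tip sits fixed in a
    divot, every tracked pose (R_i, t_i) satisfies
        R_i @ p_tip + t_i = p_pivot,
    so stacking all poses gives [R_i  -I] [p_tip; p_pivot] = -t_i."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)
        b[3 * i:3 * i + 3] = -t
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    p_tip, p_pivot = x[:3], x[3:]
    rms = np.sqrt(np.mean((A @ x - b) ** 2))   # one convention for the RMS
    return p_tip, p_pivot, rms
```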
4.1.2 C-arm ring calibration

The C-arm calibration ring must be calibrated because the ring does not have embedded sensors. For this calibration, there are eight calibration divots on the front plate of the ring (Figure 3.2a). The positions of these divots relative to each other and to the ring's metallic spheres are known. Once the transformation between the sensors and the divots is known, the ring can be positioned in the global coordinate system.

To locate the divots in the global coordinate system (and relative to the ring's sensors), we took a calibrated tipped tool, with a tracked position on the tip, and pivoted it inside the divots. We tracked the tip's location and averaged the results. As with the pivot calibration, there was no significant difference between real-time pivoting and discrete sampling.

Once the divots' locations are known, the ring calibration is performed using rigid body point registration (sketched below). Three divots are sufficient, as they lie on a circle and no three divots are collinear. In practice, all eight divots were calibrated, and only the best (lowest RMS) four or five calibrations were used for the final registration. Usually these were the divots closest to the field generator, as their accuracy was the highest.
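A common closed-form solution for rigid body point registration is the SVD (Kabsch) method; the following sketch shows it, though our implementation (Section 3.3.3) is not necessarily identical:

```python
import numpy as np

def rigid_point_registration(src, dst):
    """Closed-form rigid registration (Kabsch/SVD): find R, t that
    minimize sum ||R @ src_i + t - dst_i||^2 for n x 3 arrays of
    corresponding points (n >= 3, not all collinear)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # guard against reflection
    t = dst_c - R @ src_c
    rms = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, rms
```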
4.1.3 Fluoroscopic image calibration

Each fluoroscopic image is treated as a new object in the global coordinate system. The images are first dewarped, and the fluoroscope camera parameters are computed, as described in Sections 3.3.4 and 3.3.5. Using these parameters, the image can be correctly positioned anywhere between the fluoroscope's radiation source and the image intensifier.
4.2 Accuracy measurements and noise handling

This section elaborates on the effects of noise and calibration errors. When gathering data from the electromagnetic tracker, some noise and errors are unavoidable. Noise can come from any metallic, electronic or magnetic object near the tracker, and sometimes the tracker itself outputs an outlier frame. In addition, any human error during calibration increases the total error. These errors cause the objects to be misplaced in the global coordinate system.

Figure 4.2: C-arm noise. The green parts mark the real location. The dashed red parts mark the location as the tracker sees it, as if the calibration ring were rotated by delta degrees. The error in the location of the fluoroscopic image is marked with $d = \sqrt{2x^2(1-\cos\delta)}$.

During tracking, we can tolerate some noise and expect a tracked object to be up to sigma mm and delta degrees off from its real location. The accuracy of the fluoroscope and the calibration ring is the most important, since any error is amplified by the large distance between the ring and the image. The fluoroscope focal length, which equals the distance between the image intensifier and the radiation source, is about 1000 mm. The tools, together with the fluoroscopic images, are about 300 mm from the image intensifier. When the calibration ring is off by one degree, using the cosine rule we get a shift of $\sqrt{2x^2(1-\cos\delta)} = \sqrt{2\cdot 300^2\,(1-\cos 1°)} = 5.24$ mm in the image's location (see Figure 4.2). This calculation also applies to any pivot calibration and to the ring calibration process, but it has the greatest effect between the ring and the image, as they are farthest apart. The numeric check below reproduces this value.
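A quick numeric check of this figure (Python, assuming NumPy):

```python
import numpy as np

# Lever-arm check for Figure 4.2: rotating the calibration ring by delta
# displaces a point x mm away by d = sqrt(2 x^2 (1 - cos(delta))).
x, delta = 300.0, np.deg2rad(1.0)
d = np.sqrt(2 * x**2 * (1 - np.cos(delta)))
print(f"{d:.2f} mm")   # prints 5.24 mm, matching the text
```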
4.2.1 Accuracy and noise measurement

We used several ways to measure the accuracy of the system. There is no single formula that yields one accuracy value; however, the different parameters were correlated, and not all parameters were available in all the experiments.

The first method for accuracy measurement is the error indicator provided by the Aurora. This indicator is a unit-less number between 0 and 10. It is only available for six dof tools, as it is based on the distance between two sensor coils, which five dof tools do not have.

The second method relies on the Aurora's output: the percentage of missing frames out of the total number of frames collected. When the Aurora is unable to get any signal from a sensor (either from high interference or because the sensor is outside the measurement volume), it marks the sensor as missing; in this case, the tracker does not report any location for the sensor.

A third method is the variance in the location of a stationary sensor: the higher the noise, the higher the variance. This method is applicable only to stationary sensors. For non-stationary tools, we can rely on the variance of a known distance between two tools. This method was used with the C-arm calibration ring sensor, as it is composed of two six dof tools with a fixed distance between them.

The final measurement we used was the calibration error from each calibration process, usually the RMS of the calibration.

4.2.2 Noise handling

We use several ways to handle the noise in the system, depending on the source of the noise and its measurement method. Not all noise and errors could be reduced, and some were used as indicators of the system's stability.

Frame averaging is the method of choice to reduce the overall noise in the system. It is a good way to improve accuracy in static scenarios, to remove outlier frames, and to smooth any sudden movement that would usually result in large errors.

Aside from averaging, we discard any measurements with high error. When a calibration has a high error, we discard the results and re-calibrate. For example, as the ring calibration process can use different calibrated tools (see Section 4.1.2), each tool calibration can affect the final error of the calibration ring; we perform the calibration several times and take the best one.

To handle missing frames, we use the tool's last known location. Afterwards, when analyzing the results, these frames are discarded, i.e. the locations of the other tools at the same time are also ignored. This is the logical course of action, as most experiments require all the tools to be present.

For the variance measurement we apply a threshold, treating the frames as missing if the variance is above the threshold. This is applied both to the variance of the stationary tool and to the distance between two tools. A sketch of this handling appears below.
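A hypothetical sketch of these two rules (the function name and data layout are ours):

```python
import numpy as np

def handle_window(frames, max_var):
    """Sketch of the missing-frame and variance rules: during tracking,
    a missing frame reuses the tool's last known location; during
    analysis, a window whose variance exceeds the threshold is treated
    as missing and discarded entirely."""
    filled, last = [], None
    for f in frames:                  # f is an (x, y, z) tuple or None
        if f is None:                 # tracker marked the sensor missing
            f = last                  # fall back to the last known location
        filled.append(f)
        last = f
    pts = np.array([f for f in filled if f is not None])
    if len(pts) == 0 or pts.var(axis=0).max() > max_var:
        return None                   # treat the whole window as missing
    return filled
```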
Figure 4.3: Visualization. (a) The real world. (b) The virtual world as the tracker/computer sees it; the outlined black box marks the measurement volume boundaries.

4.3 Visualization

We developed visualization software to see the virtual world as the computer sees it. This gave us a clear view of where each element is located in the global coordinate system. The goal is to show all the system's elements in a virtual 3D world in real time.

Most other visualization software that incorporates fluoroscopic images projects the tools onto the fluoroscopic image. Here we add the fluoroscopic image itself as a 3D object in the virtual world. This method allows us to see the world as the tracker and the computer see it (Figure 4.3); to see the location of elements that are partially or fully occluded in the real world; and to display two or more fluoroscopic images with the correct angle between them (Figure 4.4).

We can place the image anywhere along the camera's z axis (Figure 4.5), since we have the fluoroscope camera parameters from the image calibration; that is, anywhere between the radiation source and the image intensifier. Usually we placed the image so that it intersects the imaged object. After the image is taken, both the C-arm and the imaged object might move in different directions. For that, we can attach the image to the relevant object in the virtual world and let the image move along with the object. This is usually the desired behavior, as in orthopedics the images are mainly of rigid objects.

Figure 4.4: Fluoroscopic image intersection. X-ray images of the pointer, taken from two different orientations. In (a) the images are positioned to intersect at the tracked location of the pointer. (b) is the same frame, but with the images located so they do not intersect.

Figure 4.5: Fluoroscopic image location. Eight different locations for placing the same image between the radiation source and the image intensifier. The image shrinks as it gets closer to the radiation source due to the perspective nature of the fluoroscopic camera.


Chapter 5
Experimental Results

This chapter describes the experiments and their results. The goal of the experiments is to evaluate the overall accuracy of the system and the usability of the Aurora system in a fluoroscopy-based orthopedic surgical environment. We conducted three experiments:

1. Static and quasi-static tracking: simple measurements of stationary sensors without any calculations (Section 5.1).
2. Calibration: calibration of tools and of the calibration ring (Section 5.2).
3. Fluoroscopic image registration: fluoroscopic image capture and registration in a surgical environment (Section 5.3).

5.1 Static and quasi-static tracking

In this group of experiments we measured the accuracy and stability of the Aurora system. No calibration or registration was performed, and intentional sensor movements were not measured; only the raw sensor frames as reported by the Aurora were used.

5.1.1 Measurement volume

Tools were placed in various locations inside the measurement volume, and their positions were measured over time. The tools were not moved during the measurements, and the error was checked using several criteria: the percentage of missing frames, the error indicator from the Aurora, and the variance of the sensor's location and orientation as reported by the Aurora. The goal of these tests is to locate where the error is smallest and largest inside the Aurora's measurement volume, so as to determine where it is best to position the tools within the volume.
Figure 5.1: Measurement volume. The Aurora measurement volume relative to the field generator. The origin (0, 0, 0) is located in the middle of the field generator's front plate.

According to the Aurora specifications, the measurement volume is an axis-parallel box spreading from (-250, -250, -50) to (250, 250, -550) in the field generator's coordinate system (Figure 5.1). In practice, we could measure from (-300, -300, -30) to (300, 300, -600), but these measurements were reported by the Aurora as Out of Volume, and except in special circumstances we treated them as Missing.

The results of these tests are summarized in Tables 5.1 and 5.2. The conclusions are as follows:

- The sensor's location and orientation have a Gaussian distribution (Figure 5.2). This applies when measuring each axis individually or the Euclidean distance from the origin.
- The error distribution is Gaussian for accurate measurements, but has a longer tail as the precision decreases (Figure 5.3).
- The x and y axes are symmetric, with the highest precision at the center; the precision decreases as the distance from the center increases.
- The z axis has the highest precision close to the field generator, and the precision decreases as the distance from the field generator increases (Figure 5.4). The precision decreases faster along the z axis than along the other axes, and this has the most impact on the overall accuracy of the measurement.
- The most accurate location is about 75 mm in front of the field generator, at (0, 0, -75). The least accurate location is farther away along the z axis, near the edges of the measurement volume.
Table 5.1: Measurement volume test results for a five dof and a six dof tool, placed at center (0, 0, -300), x (±240, 0, -300), y (0, ±240, -300), and z (0, 0, -540) and (0, 0, -60). Frames is the number of frames for the experiment. Error is the error indicator from the Aurora; as the error is not Gaussian, the mean, minimum and maximum values are presented. Distance is the Euclidean distance (range = maximum - minimum). Angle is the angle of the z axis.

5.1.2 Working surface

The goal of this test is to quantify the effects of different working surfaces on the measurements. We tested four different surfaces (Figure 5.5): 1) a wooden table with a metallic frame and legs; 2) a custom stand with aluminum legs; 3) the floor; and 4) the air. We took the C-arm calibration ring sensor and moved it about the center of the measurement volume while measuring the distance between its two sensors. The sensors were kept stationary for about five seconds, moved to a different position, and kept stationary again; this was repeated 15 times. In the air test, as the field generator and sensors were hand-held, the test was shorter. In each test we measured the range of the sensors' distance between each five-second interval.
Figure 5.2: Stationary location distribution. (a) The Euclidean distance of the tool as measured over time (in mm). (b) The histogram of (a). (c) and (d): the changes in the angle over time (in degrees) and their histogram. This data is from the center test with the six dof tool (Row 1 in Table 5.1).

Table 5.2: Accuracy along the z axis. A six dof tool was used, placed at (0, 0, z) for -550 <= z <= -50. All the -50 and -550 measurements were reported as Out of Volume.

Figure 5.3: Stationary error distribution. (a) and (b) are the histograms of the error measured at (0, 0, -100) and (0, 0, -500), respectively (Table 5.2). Note that the histogram in (b) is shifted to the left, as there are more values in the right tail. Also note the different orders of magnitude between (a) and (b).
Figure 5.4: Error along the z axis. (a) The distance's range (blue) and std (dashed red) at different locations along the z axis. (b) The error range of the same tests. The data for these figures is from Table 5.2.

Table 5.3: Working surfaces test results (table, stand, floor, air). The distance and the angle are between the two sensors of the calibration ring.

This range was measured after smoothing the data to reduce the effect of the local noise inside each interval. The results of these tests are shown in Table 5.3 and in Figure 5.6. The conclusion is that on the table, although very steady, the measurements can have an error of above 2 mm and 2°. The stand, the floor, and the air have the same location error, which does not exceed 0.9 mm. Most of the tests were conducted on the floor, or on the stand when the C-arm was in use.

5.1.3 Interference sources

In these tests we quantified the effect of various kinds of interference on the measurements. We considered the interference of different kinds of metals, of multiple sensors, and of a cell phone. We tested four different metals (Figure 5.7): the small rod we later used as a pointer tool, a small screwdriver, a large metal plate, and one of the legs of the stand.
Figure 5.5: Working surfaces: the table (metallic frame), the stand (aluminum legs), and the floor.

In the tests we took the reference tool and kept it stationary while moving the metals around the tool, and between the tool and the field generator. The results are summarized in Table 5.4. The conclusions are that metals can cause large interference, depending on their size, shape, type and location relative to the field generator and the sensors. Big metals cause high interference, as do magnetic metals (the plate and the screwdriver are made of ferromagnetic metals, i.e. metals that can be magnetized). Small and non-magnetic metals cause little or no interference. Also, each metal causes the highest interference when it is placed between the sensor and the field generator (Figure 5.8).

In the multiple-sensor tests we connected four sensors (pointer, reference and two catheters) and placed them in the center of the measurement volume, about 20 cm apart. The stationary noise was checked in the same way as in Section 5.1.1. We then moved the reference tool near the other tools and measured the noise. The conclusion is that multiple sensors, stationary or moving, have little or no effect on the measurements.

The cell phone test was conducted in a similar way: the reference was kept stationary while measuring the noise and talking on the cell phone nearby. The result is that the cell phone (Samsung SGH-X510, connected through Orange Inc.) has little or no effect on the measurements.
Figure 5.6: Distance changes per surface: (a) table, (b) stand, (c) floor, (d) air. The distances between the two sensors while moving them in the measurement volume on different surfaces. Each plateau is a five-second interval when the sensors were stationary. The range measurements were taken between these plateaus.

Figure 5.7: Metals used to check interference: rod, plate, screwdriver, and stand leg.

Table 5.4: Metal interference test results for the rod, screwdriver, stand leg and plate. The first four rows are for moving the metal around the sensors; the last four rows are for moving it between the sensors and the field generator.
Figure 5.8: Metal interference, for the rod, screwdriver, stand leg and plate. The left column shows the changes in the tool's location while moving the metal around it. The right column shows the changes while moving the metal between the tool and the field generator. (h) has a different scale than the others; the common scale of 4 mm range is marked with dashed lines.

Table 5.5: Pivot calibration results for the pointer and the attached reference, each with discrete, real-time, and smoothed real-time sampling. Each test was conducted several times. Frames is the average number of frames per test. RMS shows the minimum and maximum RMS of each type of test.

5.2 Tools and fluoroscope ring calibration

The goal of this group of tests is to validate the calibration processes. We tested two calibration processes: 1) pivot calibration, used to calibrate an attached pointer tool to a sensor, and 2) rigid body point registration, used to calibrate the C-arm calibration ring to its sensors.

5.2.1 Pivot calibration

In this test, we pivot-calibrated both the pointer tool and the reference tool while the latter was attached to a spherical-tipped pointer (Figure 3.6). The calibrations were performed in a 90° divot in the center of the measurement volume, closer to the field generator. We tested both discrete sampling and real-time tracking, and with real-time tracking we also smoothed the frames to see if it improves accuracy.

The results (Table 5.5) are that the pivot calibration has an average RMS of 0.97 mm for the pointer and 0.4 mm for the reference. For the reference, discrete calibration gave better results than real-time tracking. For the pointer, discrete calibration has the same average RMS as real-time tracking, but with a lower std. The averaging improved the accuracy of the pointer calibration, but decreased the accuracy of the reference.

5.2.2 C-arm ring calibration

In this test, we calibrated the C-arm calibration ring to its sensors as described in Section 4.1.2. After locating the divots, we calibrated the ring once with all eight divots, and again with only the best-calibrated divots. The ring was positioned relative to the field generator so that all the divots were in the measurement volume and the ring's sensors were as stable as possible, and it was kept in the same position for the entire test.
Table 5.6: Ring calibration results for four calibrations (pointer, real time; reference, real time; reference, discrete, twice). For each calibration, each divot's RMS is displayed, and the RMS of the calibration is given both for all eight divots and for the best-calibrated divots (highlighted).

The results (Table 5.6) are that each divot's RMS can range from 0.6 mm to above 5 mm, depending on its location relative to the field generator. The RMS of the calibration itself ranges from 0.23 mm to 2.02 mm.

5.3 Fluoroscopic image registration

The goal of this experiment is to acquire fluoroscopic images of the tracked tools and to measure the error between their location on the image and their tracked location. These experiments were conducted at the Hadassah University Medical Center. The attached reference was first pivot-calibrated and the C-arm calibration ring was calibrated (Figure 5.9a). The reference was calibrated with an RMS of 0.92 mm, and the ring was calibrated with an RMS of 0.81 mm using 5 divots (Row 3 in Table 5.6).

In the experiment, we acquired several images, from different angles, of the pointer and of the attached reference (Figure 5.9b), which were placed on the stand. The images were dewarped and the tracked position was marked on them. Then, we found the tip of the pointer and of the attached reference on the image, and measured the distance between the marked location and the real location. The tip of the pointer was marked manually on each image, and the tip of the attached reference was detected using the generalized Hough transform [49]. The distance was measured in pixels. To convert pixels to millimeters, we measured the radii of the ring's metallic spheres and of the attached reference's spherical tip (again, using the generalized Hough transform). The result is: 1 pixel = 0.24 mm.

The results are summarized in Table 5.7. For the attached reference, the distance between the measured location and the image location ranges from 1.02 mm to 4.33 mm, with an average of 1.85 mm. For the pointer, the range is from 0.54 mm to 3.91 mm, but with an average of 2.41 mm (Figure 5.10).
Figure 5.9: Fluoroscopic image registration setup. (a) The calibration process of the C-arm ring sensors. (b) The system setup when taking the fluoroscopic images.

5.4 Summary and discussion

Using the Aurora electromagnetic tracker, we obtained an accuracy of 0.1 mm or better in a stable environment with no interference; in most cases, however, the accuracy was 0.5 mm. The working surfaces tests showed an accuracy of 0.85 mm. The measurements are most precise near the field generator, and the precision decreases rapidly when moving along the z axis.

Metallic objects in the measurement volume can cause large interference, depending on their type, size and location relative to the sensors. Large magnetic metals located between the sensor and the field generator yield errors of 40 mm or more, making the measurement useless. Small non-magnetic metals have little or no effect on the measurement.

In pivot calibration we measured an accuracy of 0.31 mm in a stable environment; in practice we got 0.92 mm. For the C-arm ring calibration we measured 0.23 mm, and in practice 0.81 mm.

In the fluoroscopic image registration, the error ranges from 0.54 mm to 4.33 mm. The attached reference, which was pivot-calibrated, had the highest single error, but most of its results were 1.5 mm or better. The pointer, which was calibrated at the factory, had the lowest single error, but most of its results were 2 mm and above. This indicates that the pivot calibration error does not have the dominant effect, since the pointer was not pivot calibrated.
Figure 5.10: Image registration results: (a) reference best (1.02 mm); (b) reference worst (4.33 mm); (c) pointer best (0.54 mm); (d) pointer worst (3.91 mm). These figures show the best and worst results of the image registration. The cyan crosses represent the tracked locations of the tools. The green crosses are the tracked locations of the attached reference (before the calibration transform is applied).

More information

Research Subject. Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group)

Research Subject. Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group) Research Subject Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group) (1) Goal and summary Introduction Humanoid has less actuators than its movable degrees of freedom (DOF) which

More information

Online Detection of Straight Lines in 3-D Ultrasound Image Volumes for Image-Guided Needle Navigation

Online Detection of Straight Lines in 3-D Ultrasound Image Volumes for Image-Guided Needle Navigation Online Detection of Straight Lines in 3-D Ultrasound Image Volumes for Image-Guided Needle Navigation Heinrich Martin Overhoff, Stefan Bußmann University of Applied Sciences Gelsenkirchen, Gelsenkirchen,

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

And. Modal Analysis. Using. VIC-3D-HS, High Speed 3D Digital Image Correlation System. Indian Institute of Technology New Delhi

And. Modal Analysis. Using. VIC-3D-HS, High Speed 3D Digital Image Correlation System. Indian Institute of Technology New Delhi Full Field Displacement And Strain Measurement And Modal Analysis Using VIC-3D-HS, High Speed 3D Digital Image Correlation System At Indian Institute of Technology New Delhi VIC-3D, 3D Digital Image Correlation

More information

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

Two-view geometry Computer Vision Spring 2018, Lecture 10

Two-view geometry Computer Vision Spring 2018, Lecture 10 Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of

More information

On the Design and Experiments of a Fluoro-Robotic Navigation System for Closed Intramedullary Nailing of Femur

On the Design and Experiments of a Fluoro-Robotic Navigation System for Closed Intramedullary Nailing of Femur On the Design and Experiments of a Fluoro-Robotic Navigation System for Closed Intramedullary Nailing of Femur Sakol Nakdhamabhorn and Jackrit Suthakorn 1* Abstract. Closed Intramedullary Nailing of Femur

More information

Design and performance characteristics of a Cone Beam CT system for Leksell Gamma Knife Icon

Design and performance characteristics of a Cone Beam CT system for Leksell Gamma Knife Icon Design and performance characteristics of a Cone Beam CT system for Leksell Gamma Knife Icon WHITE PAPER Introduction Introducing an image guidance system based on Cone Beam CT (CBCT) and a mask immobilization

More information

Real-time self-calibration of a tracked augmented reality display

Real-time self-calibration of a tracked augmented reality display Real-time self-calibration of a tracked augmented reality display Zachary Baum, Andras Lasso, Tamas Ungi, Gabor Fichtinger Laboratory for Percutaneous Surgery, Queen s University, Kingston, Canada ABSTRACT

More information

White Pixel Artifact. Caused by a noise spike during acquisition Spike in K-space <--> sinusoid in image space

White Pixel Artifact. Caused by a noise spike during acquisition Spike in K-space <--> sinusoid in image space White Pixel Artifact Caused by a noise spike during acquisition Spike in K-space sinusoid in image space Susceptibility Artifacts Off-resonance artifacts caused by adjacent regions with different

More information

GEOMETRIC TOOLS FOR COMPUTER GRAPHICS

GEOMETRIC TOOLS FOR COMPUTER GRAPHICS GEOMETRIC TOOLS FOR COMPUTER GRAPHICS PHILIP J. SCHNEIDER DAVID H. EBERLY MORGAN KAUFMANN PUBLISHERS A N I M P R I N T O F E L S E V I E R S C I E N C E A M S T E R D A M B O S T O N L O N D O N N E W

More information

Humanoid Robotics. Least Squares. Maren Bennewitz

Humanoid Robotics. Least Squares. Maren Bennewitz Humanoid Robotics Least Squares Maren Bennewitz Goal of This Lecture Introduction into least squares Use it yourself for odometry calibration, later in the lecture: camera and whole-body self-calibration

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253 Index 3D reconstruction, 123 5+1-point algorithm, 274 5-point algorithm, 260 7-point algorithm, 255 8-point algorithm, 253 affine point, 43 affine transformation, 55 affine transformation group, 55 affine

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important. Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is

More information

Optical Guidance. Sanford L. Meeks. July 22, 2010

Optical Guidance. Sanford L. Meeks. July 22, 2010 Optical Guidance Sanford L. Meeks July 22, 2010 Optical Tracking Optical tracking is a means of determining in real-time the position of a patient relative to the treatment unit. Markerbased systems track

More information

Industrial Robots : Manipulators, Kinematics, Dynamics

Industrial Robots : Manipulators, Kinematics, Dynamics Industrial Robots : Manipulators, Kinematics, Dynamics z z y x z y x z y y x x In Industrial terms Robot Manipulators The study of robot manipulators involves dealing with the positions and orientations

More information

Towards Projector-based Visualization for Computer-assisted CABG at the Open Heart

Towards Projector-based Visualization for Computer-assisted CABG at the Open Heart Towards Projector-based Visualization for Computer-assisted CABG at the Open Heart Christine Hartung 1, Claudia Gnahm 1, Stefan Sailer 1, Marcel Schenderlein 1, Reinhard Friedl 2, Martin Hoffmann 3, Klaus

More information

Registration concepts for the just-in-time artefact correction by means of virtual computed tomography

Registration concepts for the just-in-time artefact correction by means of virtual computed tomography DIR 2007 - International Symposium on Digital industrial Radiology and Computed Tomography, June 25-27, 2007, Lyon, France Registration concepts for the just-in-time artefact correction by means of virtual

More information

CHAPTER 2: THREE DIMENSIONAL TOPOGRAPHICAL MAPPING SYSTEM. Target Object

CHAPTER 2: THREE DIMENSIONAL TOPOGRAPHICAL MAPPING SYSTEM. Target Object CHAPTER 2: THREE DIMENSIONAL TOPOGRAPHICAL MAPPING SYSTEM 2.1 Theory and Construction Target Object Laser Projector CCD Camera Host Computer / Image Processor Figure 2.1 Block Diagram of 3D Areal Mapper

More information

Medicale Image Analysis

Medicale Image Analysis Medicale Image Analysis Registration Validation Prof. Dr. Philippe Cattin MIAC, University of Basel Prof. Dr. Philippe Cattin: Registration Validation Contents 1 Validation 1.1 Validation of Registration

More information

EEE 187: Robotics Summary 2

EEE 187: Robotics Summary 2 1 EEE 187: Robotics Summary 2 09/05/2017 Robotic system components A robotic system has three major components: Actuators: the muscles of the robot Sensors: provide information about the environment and

More information